
NVIDIA RTX 5090 Geekbench Leak: OpenCL and Vulkan Tests Reveal True Performance Uplifts

GGforever

Staff member
Joined
Oct 22, 2024
Messages
98 (0.98/day)
The RTX 50-series fever rages on, with independent reviews for the RTX 5080 and RTX 5090 dropping towards the end of this month. That does not stop benchmarks from leaking out, unsurprisingly, and a recent lineup of Geekbench listings has revealed the raw performance uplifts that can be expected from NVIDIA's next-generation GeForce flagship. A sizeable chunk of the tech community was certainly rather disappointed with NVIDIA's reliance on AI-powered frame generation for much of its claimed gaming improvements. Now, it appears we can finally gauge how much raw improvement NVIDIA was able to squeeze out of consumer Blackwell, and the numbers, for the most part, appear decent enough.

Starting off with the OpenCL tests, the highest score we have seen so far from the RTX 5090 puts it at around 367,000 points, a roughly 16% jump over the RTX 4090, which manages around 317,000 points according to Geekbench's official average data. Of course, there are a plethora of individual cards that easily exceed the average scores, which must be kept in mind. That said, we do not know the details of the RTX 5090 that was tested, so pitting it against average scores does not seem entirely fair. Moving to Vulkan, the performance uplift is much more satisfying, with the RTX 5090 managing a minimum of 331,000 points and a maximum of around 360,000 points, compared to the RTX 4090's 262,000 - a sizeable 37% improvement at the highest end. Once again, we are comparing the best results posted so far against last year's averages, so expect slightly more modest gains in the real world. Once more reviews start appearing after the embargo lifts, the improvement figures should become much more reliable.
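The percentages above can be sanity-checked in a couple of lines. The scores are the figures quoted in the post, so real-world gains against a comparable sample will likely be more modest:

```python
# Quick sanity check of the uplift percentages quoted above.
def uplift(new, old):
    """Percentage improvement of `new` over `old`."""
    return (new / old - 1) * 100

# Scores as cited in the post (Geekbench points):
opencl      = uplift(367_000, 317_000)  # best leaked 5090 vs. 4090 average
vulkan_low  = uplift(331_000, 262_000)
vulkan_high = uplift(360_000, 262_000)

print(f"OpenCL: +{opencl:.1f}%")                              # ~16%
print(f"Vulkan: +{vulkan_low:.1f}% to +{vulkan_high:.1f}%")   # ~26% to ~37%
```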



View at TechPowerUp Main Site | Source
 
Joined
Dec 12, 2016
Messages
2,152 (0.72/day)
I'm still guessing that the 5090 will be 20-30% faster in pure rasterization benchmarks over the 4090 (these leaked benchmarks seem to back that up somewhat). That's reasonable for a 25% increase in price and a 30% increase in power on the same process node. Of course, it's way out of my budget range but I'm sure some wealthy game enthusiasts will enjoy the best possible performance money can buy.
 
Joined
Dec 16, 2017
Messages
2,986 (1.15/day)
System Name System V
Processor AMD Ryzen 5 3600
Motherboard Asus Prime X570-P
Cooling Cooler Master Hyper 212 // a bunch of 120 mm Xigmatek 1500 RPM fans (2 ins, 3 outs)
Memory 2x8GB Ballistix Sport LT 3200 MHz (BLS8G4D32AESCK.M8FE) (CL16-18-18-36)
Video Card(s) Gigabyte AORUS Radeon RX 580 8 GB
Storage SHFS37A240G / DT01ACA200 / ST10000VN0008 / ST8000VN004 / SA400S37960G / SNV21000G / NM620 2TB
Display(s) LG 22MP55 IPS Display
Case NZXT Source 210
Audio Device(s) Logitech G430 Headset
Power Supply Corsair CX650M
Software Whatever build of Windows 11 is being served in Canary channel at the time.
Benchmark Scores Corona 1.3: 3120620 r/s Cinebench R20: 3355 FireStrike: 12490 TimeSpy: 4624
a sizeable 37% improvement at the highest end.
Not sizable in terms of actual IPC improvement if the 5090 comes with 21760 CUDA cores. The 4090 had 16384, so the 5090 would have roughly 33% more CUDA cores, so that'd be like 5% faster IPC, roughly (would have to account for clock differences and such so that's a rough number). That, and the higher TDP, 575W vs 450W, roughly 28% higher.

So, with regards to Vulkan, the card is a bit more efficient, has a bit more IPC, but the overwhelming majority of the improvement comes from increased CUDA core counts and power consumption.
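For what it's worth, the per-core estimate above can be reproduced from the leaked numbers. Clock differences are ignored here, so treat the result as a rough bound rather than a measured IPC figure:

```python
# Rough per-core gain implied by the leaked Vulkan numbers,
# ignoring clock-speed differences (so this is only a ballpark).
cores_4090, cores_5090 = 16384, 21760
core_ratio = cores_5090 / cores_4090          # ~1.33x cores

perf_ratio = 360_000 / 262_000                # best 5090 Vulkan vs. 4090 average

per_core_gain = (perf_ratio / core_ratio - 1) * 100
print(f"~{per_core_gain:.1f}% more performance per core")   # ~3.5%
```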
 
Last edited:
Joined
Sep 14, 2020
Messages
625 (0.39/day)
Location
Greece
System Name Office / HP Prodesk 490 G3 MT (ex-office)
Processor Intel 13700 (90° limit) / Intel i7-6700
Motherboard Asus TUF Gaming H770 Pro / HP 805F H170
Cooling Noctua NH-U14S / Stock
Memory G. Skill Trident XMP 2x16gb DDR5 6400MHz cl32 / Samsung 2x8gb 2133MHz DDR4
Video Card(s) Asus RTX 3060 Ti Dual OC GDDR6X / Zotac GTX 1650 GDDR6 OC
Storage Samsung 2tb 980 PRO MZ / Samsung SSD 1TB 860 EVO + WD blue HDD 1TB (WD10EZEX)
Display(s) Eizo FlexScan EV2455 - 1920x1200 / Panasonic TX-32LS490E 32'' LED 1920x1080
Case Nanoxia Deep Silence 8 Pro / HP microtower
Audio Device(s) On board
Power Supply Seasonic Prime PX750 / OEM 300W bronze
Mouse MS cheap wired / Logitech cheap wired m90
Keyboard MS cheap wired / HP cheap wired
Software W11 / W7 Pro ->10 Pro
It has ~20% more transistors, so a 20% performance increase in cases where the faster VRAM doesn't matter would mean zero architectural improvement. Where the faster RAM comes into play, a 25-30% (and sometimes higher) uplift is expected.
 
Joined
Nov 26, 2021
Messages
1,762 (1.52/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
Not sizable in terms of actual IPC improvement if the 5090 comes with 21760 CUDA cores. The 4090 had 16384, so the 5090 would have roughly 33% more CUDA cores, so that'd be like 5% faster IPC, roughly (would have to account for clock differences and such so that's a rough number). That, and the higher TDP, 575W vs 450W, roughly 28% higher.

So, the card is a bit more efficient, has a bit more IPC, but the overwhelming majority of the improvement comes from increased CUDA core counts and power consumption.
I'm not sure about higher IPC; even the best OpenCL score is barely 16% more than the 4090 despite the 5090 having 33% more SMXs. In any case, the IPC comparison will have to wait for the 5070 which has almost the same number of SMXs as the 4070.
 
Joined
Nov 6, 2016
Messages
1,821 (0.61/day)
Location
NH, USA
System Name Lightbringer
Processor Ryzen 7 2700X
Motherboard Asus ROG Strix X470-F Gaming
Cooling Enermax Liqmax Iii 360mm AIO
Memory G.Skill Trident Z RGB 32GB (8GBx4) 3200Mhz CL 14
Video Card(s) Sapphire RX 5700XT Nitro+
Storage Hp EX950 2TB NVMe M.2, HP EX950 1TB NVMe M.2, Samsung 860 EVO 2TB
Display(s) LG 34BK95U-W 34" 5120 x 2160
Case Lian Li PC-O11 Dynamic (White)
Power Supply BeQuiet Straight Power 11 850w Gold Rated PSU
Mouse Glorious Model O (Matte White)
Keyboard Royal Kludge RK71
Software Windows 10
I'm still guessing that the 5090 will be 20-30% faster in pure rasterization benchmarks over the 4090 (these leaked benchmarks seem to back that up somewhat). That's reasonable for a 25% increase in price and a 30% increase in power on the same process node. Of course, it's way out of my budget range but I'm sure some wealthy game enthusiasts will enjoy the best possible performance money can buy.
Is it "objectively" reasonable? Or subjectively reasonable now that we've been conditioned to expect so much less with each release?
 
Joined
Dec 16, 2017
Messages
2,986 (1.15/day)
I'm not sure about higher IPC;
Made a small edit in my post to clarify it's about Vulkan. But yeah, OpenCL is actually worse. Not sure if it's just driver deficiencies or just Nvidia didn't care about OpenCL. I understand CUDA itself is far more popular? So maybe from Nvidia's POV OpenCL isn't super relevant so they just don't prioritize optimizing for it?
 
Joined
Jun 29, 2023
Messages
614 (1.06/day)
System Name Gungnir
Processor Ryzen 5 7600X
Motherboard ASUS TUF B650M-PLUS WIFI
Cooling Thermalright Peerless Assasin 120 SE Black
Memory 2x16GB DDR5 CL36 5600MHz
Video Card(s) XFX RX 6800XT Merc 319
Storage 1TB WD SN770 | 2TB WD Blue SATA III SSD
Display(s) 1440p 165Hz VA
Case Lian Li Lancool 215
Audio Device(s) Beyerdynamic DT 770 PRO 80Ohm
Power Supply EVGA SuperNOVA 750W 80 Plus Gold
Mouse Logitech G Pro Wireless
Keyboard Keychron V6
VR HMD The bane of my existence (Oculus Quest 2)
So my theory that the higher performance comes from a bigger chip was spot on, and thus the price on the 90-class card is higher because the chip is that much bigger, with no real architectural improvements.
It's a throwaway generation, really; we are nearing stagnation.
 
Joined
Sep 13, 2020
Messages
173 (0.11/day)
I'm still guessing that the 5090 will be 20-30% faster in pure rasterization benchmarks over the 4090 (these leaked benchmarks seem to back that up somewhat). That's reasonable for a 25% increase in price and a 30% increase in power on the same process node. Of course, it's way out of my budget range but I'm sure some wealthy game enthusiasts will enjoy the best possible performance money can buy.
Not in my book.
The price will keep increasing while there are people paying. 1:1 perf% x price% may be the norm, but just until people stop swallowing ( ͡° ͜ʖ ͡°)
 
Joined
Nov 26, 2021
Messages
1,762 (1.52/day)
Made a small edit in my post to clarify it's about Vulkan. But yeah, OpenCL is actually worse. Not sure if it's just driver deficiencies or just Nvidia didn't care about OpenCL. I understand CUDA itself is far more popular? So maybe from Nvidia's POV OpenCL isn't super relevant so they just don't prioritize optimizing for it?
CUDA is more popular by far, but Nvidia has invested in OpenCL support as well. Vulkan might be a better point of comparison, but even there, gains range from 26% to 37%. The latter figure is probably an overclocked SKU so I suspect it's closer to 26% which is less than the 33% increase in SMX count.
 
Joined
Jun 7, 2024
Messages
15 (0.06/day)
Processor 7800x3d
Motherboard N7 B650E
Cooling Kraken Elite 240/ APNX FP2 120mm x 9
Memory T-FORCE XTREEM ARGB DDR5 2x24 7600 CL36
Video Card(s) ProArt GeForce RTX 4080 Super OC
Storage 990 PRO SSD 4TB PCIe 4.0 M.2 2280
Display(s) LG C2 42 Inch 4K OLED evo
Case O11 Air Mini Tempered Glass (Microcenter version)
Audio Device(s) KEF LSX II
Power Supply NZXT E850
I never gave it a second thought until now but does anybody know if geekbench 'points' are actually tied to a more official unit of measure?
 
Joined
Dec 16, 2017
Messages
2,986 (1.15/day)
I never gave it a second thought until now but does anybody know if geekbench 'points' are actually tied to a more official unit of measure?
No. Each benchmark is only comparable to the same exact benchmark, preferably with setup discrepancies minimized. You can't grab Geekbench and say something like "ten points in Geekbench is the same as 20 points in 3DMark" or whatever.
 
Joined
Jul 13, 2016
Messages
3,444 (1.10/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage P5800X 1.6TB 4x 15.36TB Micron 9300 Pro 4x WD Black 8TB M.2
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) JDS Element IV, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse PMM P-305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
Not sizable in terms of actual IPC improvement if the 5090 comes with 21760 CUDA cores. The 4090 had 16384, so the 5090 would have roughly 33% more CUDA cores, so that'd be like 5% faster IPC, roughly (would have to account for clock differences and such so that's a rough number). That, and the higher TDP, 575W vs 450W, roughly 28% higher.

So, with regards to Vulkan, the card is a bit more efficient, has a bit more IPC, but the overwhelming majority of the improvement comes from increased CUDA core counts and power consumption.

I'm pretty sure it's a 0% IPC improvement. There are 32.8% more cores on the 5090 than the 4090 and the boost clock of the 5090 is 7.69% higher.

Combine the core count and frequency increases and you get a number higher than the actual performance increase.

This is definitely a tock generation and one of the most lackluster ones at that. It's Nvidia's equivalent of the R9 300 series: no IPC gains, no efficiency gains, no new marquee features (only updates to existing ones).
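The scaling math in the post can be checked directly. The clock delta used here is the figure stated above, so treat it as an assumption:

```python
# Theoretical scaling from core count and clock alone.
core_scaling  = 21760 / 16384   # +32.8% cores (5090 vs. 4090)
clock_scaling = 1.0769          # +7.69% boost clock, as stated above

theoretical = core_scaling * clock_scaling
print(f"theoretical uplift: +{(theoretical - 1) * 100:.1f}%")   # ~+43%
# The best observed Vulkan uplift was ~37%, below the theoretical
# scaling - consistent with a ~0% per-core (IPC) improvement.
```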
 
Joined
Mar 11, 2024
Messages
122 (0.38/day)
I'm still guessing that the 5090 will be 20-30% faster in pure rasterization benchmarks over the 4090 (these leaked benchmarks seem to back that up somewhat). That's reasonable for a 25% increase in price and a 30% increase in power on the same process node. Of course, it's way out of my budget range but I'm sure some wealthy game enthusiasts will enjoy the best possible performance money can buy.
With 33% more cores and 75% more memory bandwidth, there is no way the improvement is less than 30%.
 
Joined
Mar 16, 2017
Messages
252 (0.09/day)
Location
behind you
Processor Threadripper 1950X
Motherboard ASRock X399 Professional Gaming
Cooling IceGiant ProSiphon Elite
Memory 48GB DDR4 2934MHz
Video Card(s) MSI GTX 1080
Storage 4TB Crucial P3 Plus NVMe, 1TB Samsung 980 NVMe, 1TB Inland NVMe, 2TB Western Digital HDD
Display(s) 2x 4K60
Power Supply Cooler Master Silent Pro M (1000W)
Mouse Corsair Ironclaw Wireless
Keyboard Corsair K70 MK.2
VR HMD HTC Vive Pro
Software Windows 10, QubesOS
CUDA is more popular by far, but Nvidia has invested in OpenCL support as well. Vulkan might be a better point of comparison, but even there, gains range from 26% to 37%. The latter figure is probably an overclocked SKU so I suspect it's closer to 26% which is less than the 33% increase in SMX count.
Last I checked, Nvidia's investment in OpenCL is the absolute bare minimum. You know how DirectX 12 has "feature levels"? Well, OpenCL is worse. On paper, Nvidia's 4000 series supports OpenCL 3.0, which came out in 2020. However, OpenCL 3.0 only requires the complete OpenCL 1.2 functionality, which came out in 2011! Everything more recent is optional, and AFAIK Nvidia's latest cards only implement the 1.2 feature set.
 
Joined
Nov 15, 2020
Messages
954 (0.62/day)
System Name 1. Glasshouse 2. Odin OneEye
Processor 1. Ryzen 9 5900X (manual PBO) 2. Ryzen 9 7900X
Motherboard 1. MSI x570 Tomahawk wifi 2. Gigabyte Aorus Extreme 670E
Cooling 1. Noctua NH D15 Chromax Black 2. Custom Loop 3x360mm (60mm) rads & T30 fans/Aquacomputer NEXT w/b
Memory 1. G Skill Neo 16GBx4 (3600MHz 16/16/16/36) 2. Kingston Fury 16GBx2 DDR5 CL36
Video Card(s) 1. Asus Strix Vega 64 2. Powercolor Liquid Devil 7900XTX
Storage 1. Corsair Force MP600 (1TB) & Sabrent Rocket 4 (2TB) 2. Kingston 3000 (1TB) and Hynix p41 (2TB)
Display(s) 1. Samsung U28E590 10bit 4K@60Hz 2. LG C2 42 inch 10bit 4K@120Hz
Case 1. Corsair Crystal 570X White 2. Cooler Master HAF 700 EVO
Audio Device(s) 1. Creative Speakers 2. Built in LG monitor speakers
Power Supply 1. Corsair RM850x 2. Superflower Titanium 1600W
Mouse 1. Microsoft IntelliMouse Pro (grey) 2. Microsoft IntelliMouse Pro (black)
Keyboard Leopold High End Mechanical
Software Windows 11
No surprise. I expect reviews when they emerge will essentially say the same thing they said about the 4090: Stupid price to performance (don't buy it) but it is the strongest available card.
 
Joined
Jun 14, 2020
Messages
4,200 (2.48/day)
System Name Mean machine
Processor AMD 6900HS
Memory 2x16 GB 4800C40
Video Card(s) AMD Radeon 6700S
Isn't the 5090 a prime example of why we need AI / DLSS / MFG etc.? The 5090 has a lot more of everything (power, cores, bandwidth, vram) and yet the gains are average to bad. Realistically a hypothetical 5090 that is twice the size of the 4090 - at similar power - would be like what, 50% faster? Makes no sense for nvidia to pursue that, it's completely unsustainable
 
Joined
Feb 20, 2019
Messages
8,639 (3.98/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
The 5090 has 33% more resources than the 4090, so a ~37% increase in performance is yet more proof that nearly all of the improvements for the 50-series are software gimmicks to improve the end result, rather than more actual raw performance.

Isn't the 5090 a prime example of why we need AI / DLSS / MFG etc.? The 5090 has a lot more of everything (power, cores, bandwidth, vram) and yet the gains are average to bad. Realistically a hypothetical 5090 that is twice the size of the 4090 - at similar power - would be like what, 50% faster? Makes no sense for nvidia to pursue that, it's completely unsustainable
I interpret it differently.
The 5090 is an example of 33% more physical hardware delivering 37% more performance, so your hypothetical double 4090 would be more than twice as fast.

I'm not quite sure how you interpret 33% more hardware delivering 37% more performance as "average to bad".
 
Joined
Jun 14, 2020
Messages
4,200 (2.48/day)
System Name Mean machine
Processor AMD 6900HS
Memory 2x16 GB 4800C40
Video Card(s) AMD Radeon 6700S
The 5090 has 33% more resources than the 4090, so a ~37% increase in performance is yet more proof that nearly all of the improvements for the 50-series are software gimmicks to improve the end result, rather than more actual raw performance.


I interpret it differently.
The 5090 is an example of 33% more physical hardware delivering 37% more performance, so your hypothetical double 4090 would be more than twice as fast.

I'm not quite sure how you interpret 33% more hardware delivering 37% more performance as "average to bad".
More hardware, more power, more bandwidth and more vram.
 
Joined
Feb 20, 2019
Messages
8,639 (3.98/day)
It has ~20% more transistors, so a 20% performance increase in cases where the faster VRAM doesn't matter would mean zero architectural improvement. Where the faster RAM comes into play, a 25-30% (and sometimes higher) uplift is expected.
Don't use transistor count; it includes tens of billions of transistors that have nothing to do with compute performance - things like the video engine, fixed-function hardware for features, display output, PCIe connectivity, communications controllers, etc. These do not scale linearly with the power of the card. The 4060 has 19% of the core count of a 4090 but 26% of the transistor budget, and that's despite cutbacks to fixed-function hardware like the PCIe interface in both generation and lane count, reductions to the number of NVENC encoders, and the removal of mGPU support logic altogether.

More hardware, more power, more bandwidth and more vram.
More hardware scales linearly - more pipelines/cores means more operations per clock.

Power doesn't; it's just a side effect of more hardware drawing current at once.
Bandwidth doesn't; it's just a side effect of needing to feed more hardware without starving it.
VRAM doesn't; it adds zero performance and is simply required to hold the data. Not having enough means you simply cannot run those settings or that dataset for simulation/LLM/compute.
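A toy throughput model makes the point: only core count and clock enter peak performance, while power, bandwidth and VRAM are provisioned to support it. The figures below are hypothetical, purely for illustration:

```python
# Toy model: peak throughput depends on core count and clock only.
def peak_ops_per_second(cores, ops_per_core_per_clock, clock_hz):
    return cores * ops_per_core_per_clock * clock_hz

# Hypothetical figures purely for illustration:
base   = peak_ops_per_second(16384, 2, 2.5e9)
bigger = peak_ops_per_second(21760, 2, 2.5e9)   # same clock, +33% cores

print(f"{bigger / base:.3f}x peak throughput")  # 1.328x - scales with cores
```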
 
Joined
Feb 1, 2019
Messages
3,763 (1.72/day)
Location
UK, Midlands
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 4080 RTX SUPER FE 16G
Storage 1TB 980 PRO, 2TB SN850X, 2TB DC P4600, 1TB 860 EVO, 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Soundblaster AE-9
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
So my theory of the higher performance coming in because of the bigger chip was spot on, and thus the price on the 90 class card is higher because the chip is that much bigger, and there are no real improvements.
It's a throwaway generation really, we really are nearing stagnation.
Their focus is on the AI stuff, which we can see has been boosted on all the cards. On the consumer side, to give us something sellable, we of course get this circus of multi frame gen.

I have always had the opinion that if there isn't anything meaningful to release, then don't release it. It's so annoying that we instead have this "release to a schedule, so there is always something new out there" mentality.

By far the best thing about the 5000 series is the slimmer coolers and angled connectors. Does that warrant a new generation by itself? Probably not - a rev 2 model with the shroud improvements would have done. But I expect they wanted the new chips for the AI market, which has really driven the change on the consumer side as well, along with marketing reasons, of course.

If software hadn't become so inefficient, we could all be using 1080 Tis getting 300 FPS in every game - or rather, playing at 60 FPS while consuming 50 W.
 
Joined
Sep 15, 2011
Messages
6,849 (1.40/day)
Processor Intel® Core™ i7-13700K
Motherboard Gigabyte Z790 Aorus Elite AX
Cooling Noctua NH-D15
Memory 32GB(2x16) DDR5@6600MHz G-Skill Trident Z5
Video Card(s) ZOTAC GAMING GeForce RTX 3080 AMP Holo
Storage 2TB SK Platinum P41 SSD + 4TB SanDisk Ultra SSD + 500GB Samsung 840 EVO SSD
Display(s) Acer Predator X34 3440x1440@100Hz G-Sync
Case NZXT PHANTOM410-BK
Audio Device(s) Creative X-Fi Titanium PCIe
Power Supply Corsair 850W
Mouse Logitech Hero G502 SE
Software Windows 11 Pro - 64bit
Benchmark Scores 30FPS in NFS:Rivals
I'm still guessing that the 5090 will be 20-30% faster in pure rasterization benchmarks over the 4090 (these leaked benchmarks seem to back that up somewhat). That's reasonable for a 25% increase in price and a 30% increase in power on the same process node. Of course, it's way out of my budget range but I'm sure some wealthy game enthusiasts will enjoy the best possible performance money can buy.
There is absolutely NOTHING reasonable about this card, from any point of view, including price, performance versus the previous gen, or power consumption. Stop being so gullible and easily brainwashed by nVidia marketing and paid reviewers/influencers.
 
Joined
Feb 20, 2019
Messages
8,639 (3.98/day)
There is absolutely NOTHING reasonable about this card, from any point of view, including price, performance versus the previous gen, or power consumption. Stop being so gullible and easily brainwashed by nVidia marketing and paid reviewers/influencers.
On top of that, distributors and AIB insiders are claiming that stock availability is so low for the 5090 that it might as well just be vaporware for gamers.

According to MLID, an Nvidia employee said that even Nvidia employees have limited stock in their own employee store.

I don't know if that's poor yields on such a gargantuan chip, or a result of the 5090 being too valuable to sell to gamers at "$2000" when the same silicon can be sold for $10000+ as an RTX 6000B workstation or server card instead. Remember, every 4090 that hit the market was because Nvidia had temporarily satisfied all the RTX 6000A customers willing to pay $8000+ for the exact same silicon.
 
Last edited: