
NVIDIA AD103 and AD104 Chips Powering RTX 4080 Series Detailed

Joined
Oct 8, 2014
Messages
123 (0.03/day)
Which means a 4080 Ti (when it arrives) will probably be only ~10% faster than the 4080 16 GB (76/80 SMs enabled), using a maxed-out AD103 die (with 80/80 SMs enabled).

Which will still leave a huge gap to the 4090...

I guess they could cut down the AD102 die a ton to create something truly in the middle of the 4090 and 4080 16 GB. But I think it's just going to be a maxed-out AD103 die. :/
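For what it's worth, a quick back-of-the-envelope sketch of where that ~10% ceiling comes from, assuming performance scales roughly linearly with enabled SMs at similar clocks (real scaling is usually sub-linear, so treat these as optimistic upper bounds):

```python
# Rough upper-bound estimate: assume performance scales ~linearly with
# enabled SMs at equal clocks (real-world scaling is usually worse).
sm_4080_16gb = 76    # AD103 SMs enabled on the 4080 16 GB
sm_ad103_full = 80   # a fully enabled AD103 (the hypothetical 4080 Ti here)
sm_ad102_full = 144  # full AD102 (the 4090 ships with 128 of these enabled)

uplift = sm_ad103_full / sm_4080_16gb - 1
print(f"full AD103 vs 4080 16 GB: +{uplift:.1%}")  # ~+5.3%; ~10% only with a clock bump

gap = sm_ad102_full / sm_ad103_full - 1
print(f"full AD102 vs full AD103: +{gap:.1%}")     # ~+80%, hence the huge gap to the 4090
```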
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.61/day)
Location
Ex-usa | slava the trolls
It is closest to TSMC's 10 nm; TSMC's 12 nm is 16 nm with different standard cells.

No.

It is closest to the 12 nm ground rules:

[attached image: process-node ground-rules comparison table]

10 nm process - Wikipedia
 
Joined
Nov 26, 2021
Messages
1,705 (1.52/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
Joined
Jul 13, 2016
Messages
3,337 (1.08/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage P5800X 1.6TB 4x 15.36TB Micron 9300 Pro 4x WD Black 8TB M.2
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) JDS Element IV, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse PMM P-305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
...are the gaming tech review sites going to persist in playing along with Nvidia's sham naming of this card, or will they have the integrity to call it out for what it actually is?

Yep, I cannot believe they are charging $900 for a 295.4 mm² die. That's a massive reduction in size compared to the 3070, and less than half the size of the 3080. Mind you, Nvidia is charging $200 more than a 3080. It's got to be a joke.
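To put rough numbers on that, here's launch MSRP divided by the commonly cited die areas. This ignores the (large) wafer-cost difference between Samsung 8N and TSMC 4N, so treat it as illustrative only:

```python
# Launch MSRP divided by die area, as a crude value metric.
# Caveat: TSMC 4N wafers cost far more than Samsung 8N wafers, so
# $/mm^2 alone overstates the margin difference.
cards = {
    "RTX 3070 (GA104, Samsung 8N)":   (499, 392.5),
    "RTX 3080 (GA102, Samsung 8N)":   (699, 628.4),
    "RTX 4080 12GB (AD104, TSMC 4N)": (899, 294.5),
}
for name, (msrp_usd, die_mm2) in cards.items():
    print(f"{name}: ${msrp_usd / die_mm2:.2f} per mm^2")
# -> roughly $1.27, $1.11 and $3.05 per mm^2 respectively
```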
 

the54thvoid

Super Intoxicated Moderator
Staff member
Joined
Dec 14, 2009
Messages
13,121 (2.39/day)
Location
Glasgow - home of formal profanity
Processor Ryzen 7800X3D
Motherboard MSI MAG Mortar B650 (wifi)
Cooling be quiet! Dark Rock Pro 4
Memory 32GB Kingston Fury
Video Card(s) Gainward RTX4070ti
Storage Seagate FireCuda 530 M.2 1TB / Samsung 960 Pro M.2 512GB
Display(s) LG 32" 165Hz 1440p GSYNC
Case Asus Prime AP201
Audio Device(s) On Board
Power Supply be quiet! Pure Power M12 850W Gold (ATX 3.0)
Software W10
If you consider the 3080 was superior to the previous-gen 2080 Ti by around 25-30%, it'll be telling where the 4080 12 GB comes in. Notably, it'd have to be apples to apples, without artificially constrained software implementations (i.e. DLSS 3).

But if it's not similarly superior to the 3080 Ti, then I'll consider the 4080 12 GB to be a ludicrous proposition.
 
Joined
Jul 13, 2016
Messages
3,337 (1.08/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage P5800X 1.6TB 4x 15.36TB Micron 9300 Pro 4x WD Black 8TB M.2
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) JDS Element IV, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse PMM P-305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
In the end, as always, what matters is performance per $.

The name is the least important part, followed by the memory bus.

I'd say that die size is pretty important as well. If you know the die size of a product, you can see how much value you are getting relative to what it costs Nvidia and relative to the rest of the stack. This "4080" has an extremely small die compared to past generations and to the 4090, so it stands to reason that at its current price it's terrible value, and that Nvidia could have provided far more value for the customer, even compared to just last generation.
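One way to make that concrete is the standard dies-per-wafer approximation: the die count grows slightly faster than the area shrinks, and smaller dies also yield better. The wafer prices below are assumed round numbers purely for illustration, not actual TSMC or Samsung pricing:

```python
import math

def dies_per_wafer(die_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """Classic dies-per-wafer approximation (ignores defects and scribe lines)."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_mm2)
    return int(wafer_area / die_mm2 - edge_loss)

# Wafer costs here are ASSUMED round numbers, for illustration only.
for name, die_mm2, wafer_cost in [("AD102 (TSMC 4N)", 608.5, 17_000),
                                  ("AD104 (TSMC 4N)", 294.5, 17_000),
                                  ("GA102 (Samsung 8N)", 628.4, 6_000)]:
    n = dies_per_wafer(die_mm2)
    print(f"{name}: ~{n} dies/wafer, ~${wafer_cost / n:.0f} per (perfect-yield) die")
```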
 
Joined
Nov 26, 2021
Messages
1,705 (1.52/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
Joined
Sep 17, 2014
Messages
22,684 (6.05/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
No they don't; horse-shit naming schemes need pointing out to noobs, and only an Nvidia shareholder or fan would disagree.

Nvidia makes the confusion, not the press.

If the press causes more confusion, good. It might make buyers think more before purchasing.

A 104 die has never been an x80-class GPU over Nvidia's entire lifespan, until today.

Wuuuut? GTX 1080 begs to differ... or GTX 980... or the 680, or...

The new kid on the block here is the fact there is a 103 SKU in between the 102 and 104. And even the 102 is a spin-off from Titan's 110.

The fact is we're looking at the exact opposite situation: they can't place a 104 that high in the stack anymore. They need TWO bigger SKUs to cater to the top end of the stack, and the 104 is upper midrange at best; it no longer 'stands in as early high end'. It used to only get succeeded by a 102 later in the gen; now they have to place it at the front of the gen to make even a tiny bit of impact compared to the last. Ada's current positioning is the best example: they're not even removing the 3xxx cards below it, and we're looking at ONLY 102 dies populating half their new stack from gen to gen for a while. These are all signs that Nvidia is running out of wiggle room, and we're paying for that wiggle room now on the GeForce stack: the 102/103 SKUs are new tiers in terms of price as well, and they need every single piece of it to survive against the competition.

Back in the HD days, they could make do with a 104 and then destroy AMD later on with a 102, either cut down or full. Which is what they had been doing up until they started pushing RT. Ever since, the changes happened: prices soared and VRAM magically went poof. The price of RT... again... ;) Are we still laughing about this fantastic tech?
 
Last edited:
Joined
Aug 13, 2010
Messages
5,482 (1.04/day)
No they don't; horse-shit naming schemes need pointing out to noobs, and only an Nvidia shareholder or fan would disagree.

Nvidia makes the confusion, not the press.

If the press causes more confusion, good. It might make buyers think more before purchasing.

A 104 die has never been an x80-class GPU over Nvidia's entire lifespan, until today.
There's quite a justifiable riot around the not-RTX 4070, and that's correct, but calling it a name it's not called by its maker might confuse people who are completely out of the loop. We're talking about an official capacity here, not the deep depths of tech forums.

This will become worse when an actual RTX 4070 (or not-RTX 4060) comes out.

The press will share their opinions on the naming in their own content pieces.
 
Joined
Jul 10, 2015
Messages
754 (0.22/day)
Location
Sokovia
System Name Alienation from family
Processor i7 7700k
Motherboard Hero VIII
Cooling Macho revB
Memory 16gb Hyperx
Video Card(s) Asus 1080ti Strix OC
Storage 960evo 500gb
Display(s) AOC 4k
Case Define R2 XL
Power Supply Be f*ing Quiet 600W M Gold
Mouse NoName
Keyboard NoNameless HP
Software You have nothing on me
Benchmark Scores Personal record 100m sprint: 60m
Let's see how this plays out... "Insert Popcorn meme here"
We saw this in 2012 when Kepler debuted: x80 naming for an xy104 GPU. Move along, nothing new to see here.
 
Joined
Sep 17, 2014
Messages
22,684 (6.05/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
Joined
Apr 10, 2020
Messages
504 (0.29/day)
RTX 4080 12 GB = GTX 1060 3 GB déjà vu all over again, and we all know how poorly the 1060 3 GB aged over time. Nvidia's greed just hit new highs. I really hope AMD shoots them down, but I'm not holding my breath, as industry rumors suggest AMD has been allocating its wafers from the Radeon division to Zen 4 and Epyc CPUs lately. Navi 3 might be awesome, but it looks like there won't be enough of it around to eat into Ngreedia's market share, and Huang knows it; that's why he's confidently showing the middle finger to value buyers :banghead:
 
Joined
Aug 21, 2013
Messages
1,940 (0.47/day)
Exactly. All chips and chiplets should be added together for total die area.
The point is that they are manufactured separately, on different nodes, and thus both have better yields and lower cost than one monolithic chip manufactured on a more expensive process with worse yields. In this sense it is disingenuous to count them all together as one big die.
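A rough sketch of the yield half of that argument, using a common negative-binomial yield model. The defect density, clustering parameter, and die areas below are illustrative assumptions, not figures for any actual product:

```python
def yield_rate(area_mm2: float, d0_per_cm2: float = 0.1, alpha: float = 3.0) -> float:
    """Negative-binomial yield model; D0 and alpha are assumed values."""
    return (1 + d0_per_cm2 * (area_mm2 / 100) / alpha) ** -alpha

for name, area in [("600 mm^2 monolithic die", 600),
                   ("300 mm^2 compute die", 300),
                   ("40 mm^2 cache chiplet", 40)]:
    print(f"{name}: ~{yield_rate(area):.0%} yield")
# ~58%, ~75% and ~96%: small dies yield far better, and since chiplets are
# tested before packaging, a defect scraps a 40 mm^2 part instead of a whole
# 600 mm^2 die. The small chiplets can also stay on an older, cheaper node.
```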
 
Joined
Jun 21, 2021
Messages
3,121 (2.43/day)
System Name daily driver Mac mini M2 Pro
Processor Apple proprietary M2 Pro (6 p-cores, 4 e-cores)
Motherboard Apple proprietary
Cooling Apple proprietary
Memory Apple proprietary 16GB LPDDR5 unified memory
Video Card(s) Apple proprietary M2 Pro (16-core GPU)
Storage Apple proprietary onboard 512GB SSD + various external HDDs
Display(s) LG UltraFine 27UL850W (4K@60Hz IPS)
Case Apple proprietary
Audio Device(s) Apple proprietary
Power Supply Apple proprietary
Mouse Apple Magic Trackpad 2
Keyboard Keychron K1 tenkeyless (Gateron Reds)
VR HMD Oculus Rift S (hosted on a different PC)
Software macOS Sonoma 14.7
Benchmark Scores (My Windows daily driver is a Beelink Mini S12 Pro. I'm not interested in benchmarking.)
Which means a 4080 Ti (when it arrives) will probably be only ~10% faster than the 4080 16 GB (76/80 SMs enabled), using a maxed-out AD103 die (with 80/80 SMs enabled).

Unless NVIDIA decides to use the 4090's AD102 die instead for the 4080 Ti.

They actually did this with the previous Ampere generation. The 3080 Ti uses the same GPU die as the 3090 and 3090 Ti.

Most PC hardware reviewers erroneously compared the 3080 Ti to the 3080. The 3080 Ti was essentially a slightly binned 3090 with half the VRAM.
 
Joined
Aug 4, 2020
Messages
1,624 (1.01/day)
Location
::1
[ ... ]
A 104 die has never been an x80-class GPU over Nvidia's entire lifespan, until today.
untrue.

historically, the x80 (which had always been an anemic idiot choice tbh - not to be confused w/ the x80ti) had always been the 104 die. maxwell, pascal, turing - you name it.
but those were also the days of the x80ti being like twice as powerful, and of the gap between the x80 and the x80ti being larger than between the x60 and the x80

'twas not until ampere that nv decided to change their tiering and abolish the x80ti as the halo card.
 
Joined
Nov 26, 2021
Messages
1,705 (1.52/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
Wuuuut? GTX 1080 begs to differ... or GTX 980... or the 680, or...

The new kid on the block here is the fact there is a 103 SKU in between the 102 and 104. And even the 102 is a spin-off from Titan's 110.

The fact is we're looking at the exact opposite situation: they can't place a 104 that high in the stack anymore. They need TWO bigger SKUs to cater to the top end of the stack, and the 104 is upper midrange at best; it no longer 'stands in as early high end'. It used to only get succeeded by a 102 later in the gen; now they have to place it at the front of the gen to make even a tiny bit of impact compared to the last. Ada's current positioning is the best example: they're not even removing the 3xxx cards below it, and we're looking at ONLY 102 dies populating half their new stack from gen to gen for a while. These are all signs that Nvidia is running out of wiggle room, and we're paying for that wiggle room now on the GeForce stack: the 102/103 SKUs are new tiers in terms of price as well, and they need every single piece of it to survive against the competition.

Back in the HD days, they could make do with a 104 and then destroy AMD later on with a 102, either cut down or full. Which is what they had been doing up until they started pushing RT. Ever since, the changes happened: prices soared and VRAM magically went poof. The price of RT... again... ;) Are we still laughing about this fantastic tech?
Most of the criticism is due to the price. Ampere created the discontinuity: from Maxwell to Turing, the 2nd-largest die was used for the x80 GPU while the largest was used for the x80 Ti. Ampere used the largest die for both the x80 and what used to be the x80 Ti tier (the x90 now). When the smaller die was used for the x80 tier, the gap in prices was also greater than is the case now.

Moreover, AD102 has 80% more SMs than AD103; this hasn't been the case in a long time. The last time this happened, we had the 770 and the 780/780 Ti. There the gap was 87.5% in favour of the larger die, but the gap in price was also much greater than now: the 770 was selling for $330 by the time the 780 Ti was being sold for $700. The 1080 was sold for $500 when the 1080 Ti was $700, and the 2080 was sold for $699-$799 compared to $999-$1199 for the 2080 Ti.
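A quick check on those ratios, using the full-die SM/SMX counts from the public spec sheets:

```python
# Full-die SM (or Kepler SMX) counts: largest die vs the next one down.
pairs = {
    "AD102 vs AD103 (Ada)":    (144, 80),
    "GK110 vs GK104 (Kepler)": (15, 8),
    "GP102 vs GP104 (Pascal)": (30, 20),
    "TU102 vs TU104 (Turing)": (72, 48),
}
for name, (big, small) in pairs.items():
    print(f"{name}: +{big / small - 1:.1%} more SMs on the larger die")
# -> +80.0% and +87.5% for Ada/Kepler vs only +50.0% in the Pascal/Turing era
```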
 
Joined
Jun 21, 2021
Messages
3,121 (2.43/day)
System Name daily driver Mac mini M2 Pro
Processor Apple proprietary M2 Pro (6 p-cores, 4 e-cores)
Motherboard Apple proprietary
Cooling Apple proprietary
Memory Apple proprietary 16GB LPDDR5 unified memory
Video Card(s) Apple proprietary M2 Pro (16-core GPU)
Storage Apple proprietary onboard 512GB SSD + various external HDDs
Display(s) LG UltraFine 27UL850W (4K@60Hz IPS)
Case Apple proprietary
Audio Device(s) Apple proprietary
Power Supply Apple proprietary
Mouse Apple Magic Trackpad 2
Keyboard Keychron K1 tenkeyless (Gateron Reds)
VR HMD Oculus Rift S (hosted on a different PC)
Software macOS Sonoma 14.7
Benchmark Scores (My Windows daily driver is a Beelink Mini S12 Pro. I'm not interested in benchmarking.)
'twas not until ampere that nv decided to change their tiering and abolish the x80ti as the halo card.

The x90 cards are really Titans in all but name. Whether they are called Titan or 4090 is a marketing decision that now has a habit of changing without notice.

It's not wise to compare NVIDIA's graphics card generations by comparing model numbers since they aren't consistent with what each model number represents. It's really just a numerical slot relative to a given card's placement within that generation's product stack.

Clearly NVIDIA's current strategy is to use binning to maximize gross margin. They're not going to sell Joe Gamer an $800 graphics card with a GPU that can be overclocked +30%. Those days are over. They're going to keep those binned chips, relabel them, and sell them at a higher price like $1200.
 
Last edited:
Joined
Nov 26, 2021
Messages
1,705 (1.52/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
Unless NVIDIA decides to use the 4090's AD102 die instead for the 4080 Ti.

They actually did this with the previous Ampere generation. The 3080 Ti uses the same GPU die as the 3090 and 3090 Ti.

Most PC hardware reviewers erroneously compared the 3080 Ti to the 3080. The 3080 Ti was essentially a slightly binned 3090 with half the VRAM.
The 3080 is also derived from the same die; they used one die for a ridiculous number of SKUs:
  • 3080 10 GB
  • 3080 12 GB
  • 3080 Ti
  • 3090
  • 3090 Ti
 
Joined
Mar 10, 2010
Messages
11,878 (2.20/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
There's quite a justifiable riot around the not-RTX 4070, and that's correct, but calling it a name it's not called by its maker might confuse people who are completely out of the loop. We're talking about an official capacity here, not the deep depths of tech forums.

This will become worse when an actual RTX 4070 (or not-RTX 4060) comes out.

The press will share their opinions on the naming in their own content pieces.
This could all have been avoided, but... Nvidia.

untrue.

historically, the x80 (which had always been an anemic idiot choice tbh - not to be confused w/ the x80ti) had always been the 104 die. maxwell, pascal, turing - you name it.
but those were also the days of the x80ti being like twice as powerful, and of the gap between the x80 and the x80ti being larger than between the x60 and the x80

'twas not until ampere that nv decided to change their tiering and abolish the x80ti as the halo card.
Show me a TPU review of an x80-class card, any of them, with a 104 chip then.
 
Joined
Jun 21, 2021
Messages
3,121 (2.43/day)
System Name daily driver Mac mini M2 Pro
Processor Apple proprietary M2 Pro (6 p-cores, 4 e-cores)
Motherboard Apple proprietary
Cooling Apple proprietary
Memory Apple proprietary 16GB LPDDR5 unified memory
Video Card(s) Apple proprietary M2 Pro (16-core GPU)
Storage Apple proprietary onboard 512GB SSD + various external HDDs
Display(s) LG UltraFine 27UL850W (4K@60Hz IPS)
Case Apple proprietary
Audio Device(s) Apple proprietary
Power Supply Apple proprietary
Mouse Apple Magic Trackpad 2
Keyboard Keychron K1 tenkeyless (Gateron Reds)
VR HMD Oculus Rift S (hosted on a different PC)
Software macOS Sonoma 14.7
Benchmark Scores (My Windows daily driver is a Beelink Mini S12 Pro. I'm not interested in benchmarking.)
The 3080 is also derived from the same die; they used one die for a ridiculous number of SKUs:
  • 3080 10 GB
  • 3080 12 GB
  • 3080 Ti
  • 3090
  • 3090 Ti
Ack, you're right. I must have been thinking about something else.

Anyhow, the way NVIDIA binned their GA102 GPUs, the 3080 Ti still ended up much closer to the 3090 (CUDA, RT, Tensor) than to the 3080.

The main point is that, regardless of the given die, NVIDIA is going to bin the foundry's output and differentiate GPUs to maximize gross margin, whatever final model number they slap on the chip.

One thing that is becoming increasingly evident is that their best silicon is destined for datacenter systems, not consumer gaming PCs.
 
Last edited:
Joined
Jun 14, 2020
Messages
3,538 (2.14/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
No competition? What do you mean by that? RDNA2 matched or beat the 30 series in raster, FSR 2.0 has great reviews, and RDNA3 will most certainly compete; and because AMD's chiplet approach should be cheaper to manufacture, RDNA3 should offer better performance per dollar... but despite all of that, everyone will buy Nvidia, reward their behavior, and perpetuate Nvidia's constant price increases.

Let's be honest, everyone: AMD could release a GPU that matched Nvidia in every way including ray tracing, with FSR equal to DLSS in every way, and charge less than Nvidia for it, and everyone would STILL buy Nvidia (which only proves consumer choices are quite irrational and NOT decided by simply comparing specs, as the existence of fanboys should testify)... and as long as that's true, the GPU market will ALWAYS be hostile to consumers. The ONLY way things are going to improve for consumers is if AMD starts capturing market share and Nvidia is punished by consumers... but based on historical precedent, I have no hope for that...

And I don't believe Intel's presence would have improved the situation much, not as much as a wholly new company in the GPU space would have. Intel would have leveraged success in the GPU market (which would probably have been carved away from AMD's limited market share instead of Nvidia's, leaving Nvidia at 80% and AMD's 20% divided between AMD and Intel) to further marginalize AMD in the x86 space (for example, by using influence with OEMs to pair an Intel CPU with an Intel GPU and further diminish AMD's position among OEMs, which is how Intel devastated AMD in the 2000s, BTW). It would have been trading a marginally better GPU market for a much worse CPU market, imo. Although it'd never happen, what would really improve the market would be if Nvidia got broken up like AT&T was in the 80s...
Nope, AMD did not match Nvidia in raster. Not in 4K, where it matters. Also, there was no FSR back in 2020, and RT performance is mediocre. WHEN and IF AMD has a competitive product, people will buy AMD cards. Pretending RDNA2 was comparable to Ampere doesn't cut it. It just wasn't.
 

Phayzon

New Member
Joined
Sep 23, 2022
Messages
2 (0.00/day)
Show me a TPU review of an x80-class card, any of them, with a 104 chip then.
The only post-Fermi x80s NOT based on a 104 were the 780 and 3080.
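For reference, the widely documented die behind each post-Fermi x80 at launch:
  • GTX 680: GK104
  • GTX 780: GK110 (exception)
  • GTX 980: GM204
  • GTX 1080: GP104
  • RTX 2080: TU104
  • RTX 3080: GA102 (exception)
  • RTX 4080 16 GB: AD103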
 
Joined
Dec 4, 2021
Messages
53 (0.05/day)
Processor Ryzen R7 5700X
Motherboard Gigabyte X570 I Pro WiFi
Cooling Noctua NH-L12
Memory 32 GB LPX DDR4-3200-RAM
Video Card(s) Nvidia Geforce RTX 4080 FE 16 GB
Storage 2 TB Gigabyte Aorus NVMe SSD
Display(s) EIZO ColorEdge CG2700X
Case FormD T1 V2 SW
Audio Device(s) Naim Mu-so QB2
Power Supply Corsair SF750
Mouse Logitech MX Vertical
Keyboard Mode 65, Mode Sonnet, ZSA Moonlander
Software Windows 10
SM count, CUDA cores, Tensor cores, RT cores, memory bus width, L2 cache size, etc.: all those specs are secondary when it comes to actual performance per watt and the real-world smackeroonies necessary to buy the product. To make an informed judgement on the current gen of Nvidia graphics cards, we need reliable tests. Then we can have a fruitful discussion about Ada.
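Agreed; once reliable reviews land, the whole comparison collapses into two ratios. The numbers below are placeholders purely for illustration, not real results:

```python
# Placeholder figures only; real values must come from independent reviews.
cards = {
    # name: (avg 4K fps, board power in W, street price in USD)
    "card A": (100, 320, 1199),
    "card B": (60, 220, 699),
}
for name, (fps, watts, price) in cards.items():
    print(f"{name}: {fps / watts:.2f} fps/W, {fps / price * 100:.1f} fps per $100")
```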
 
Joined
Jul 15, 2020
Messages
1,021 (0.63/day)
System Name Dirt Sheep | Silent Sheep
Processor i5-2400 | 13900K (-0.02mV offset)
Motherboard Asus P8H67-M LE | Gigabyte AERO Z690-G, bios F29e Intel baseline
Cooling Scythe Katana Type 1 | Noctua NH-U12A chromax.black
Memory G-skill 2*8GB DDR3 | Corsair Vengeance 4*32GB DDR5 5200Mhz C40 @4000MHz
Video Card(s) Gigabyte 970GTX Mini | NV 1080TI FE (cap at 50%, 800mV)
Storage 2*SN850 1TB, 230S 4TB, 840EVO 128GB, WD green 2TB HDD, IronWolf 6TB, 2*HC550 18TB in RAID1
Display(s) LG 21` FHD W2261VP | Lenovo 27` 4K Qreator 27
Case Thermaltake V3 Black|Define 7 Solid, stock 3*14 fans+ 2*12 front&buttom+ out 1*8 (on expansion slot)
Audio Device(s) Beyerdynamic DT 990 (or the screen speakers when I'm too lazy)
Power Supply Enermax Pro82+ 525W | Corsair RM650x (2021)
Mouse Logitech Master 3
Keyboard Roccat Isku FX
VR HMD Nop.
Software WIN 10 | WIN 11
Benchmark Scores CB23 SC: i5-2400=641 | i9-13900k=2325-2281 MC: i5-2400=i9 13900k SC | i9-13900k=37240-35500
I'd say that die size is pretty important as well. If you know the die size of a product, you can see how much value you are getting relative to what it costs Nvidia and relative to the rest of the stack. This "4080" has an extremely small die compared to past generations and to the 4090, so it stands to reason that at its current price it's terrible value, and that Nvidia could have provided far more value for the customer, even compared to just last generation.
Why the flying f* do you care about the manufacturer's profit? How can it be a factor at all?
If the product suits your needs, fits your budget frame, and is the best price/performance in its segment, then get it.
Simple as that.
 