
NVIDIA GeForce RTX 4070 isn't a Rebadged RTX 4080 12GB, To Be Cut Down

Joined
Oct 27, 2020
Messages
791 (0.53/day)
The amount of RAM, or rather the number of 32-bit memory controllers, hasn't been coupled to the number of active GPCs and ROPs since Ampere, if I remember correctly. Furthermore, there is a lot of freedom in how many SM in a GPC can be deactivated without losing the whole GPC and its ROPs. The RTX 3070 Laptop has the same 96 ROPs from 6 GPCs even though it has only 40 SM active, which would fit perfectly into 5 fully enabled GPCs with 8 SM each, while the RTX 3080 Ti Laptop based on GA103 still has 96 ROPs from the same 6 GPCs, just with 58 SM at either 10 (like GA106) or 12 SM (like GA102) per GPC.

However, on Ada, Nvidia seems to have abandoned the concept of many different configurations of SM per GPC. Either way, the amount of VRAM and the width of the memory controller gives us no clue as to how many GPCs are active.
I know ROPs are inside the GPC and decoupled from memory controller, that's basic stuff!
What I'm saying is that if Nvidia keeps the bandwidth nearly the same as the RTX 4080 12GB, the logical thing is to keep all 5 GPCs active. That's my speculation.
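To put rough numbers on the bandwidth argument, here's a minimal sketch (Python); the 192-bit/21 Gbps figures are the announced 4080 12GB spec, while the 20.5 Gbps and 160-bit lines are purely hypothetical comparison points.

```python
# Minimal sketch: peak memory bandwidth from bus width and data rate.
# 192-bit @ 21 Gbps is the announced RTX 4080 12GB spec; the other two
# lines are hypothetical comparison points, not leaked specs.

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(192, 21.0))   # 504.0 GB/s  (4080 12GB as announced)
print(bandwidth_gbs(192, 20.5))   # 492.0 GB/s  (speculated slightly slower G6X)
print(bandwidth_gbs(160, 21.0))   # 420.0 GB/s  (a hypothetical 10GB/160-bit cut)
```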
 
Joined
Sep 15, 2013
Messages
54 (0.01/day)
Processor i5-4670k @ 4.2 GHz
Motherboard ASUS Z87 Pro
Cooling Corsair H105
Memory G.SKILL RipjawsX 16GB @ 2133 MHz
Video Card(s) Gigabyte GTX 780 GHz Edition
Storage Samsung 840 Evo 500GB
Case Thermaltake MK-1
Power Supply Seasonic X 750w
Mouse Razer DeathAdder
This is a murky situation; if the details aren't revealed before the 3rd of November, it will confuse AMD.
 
Joined
Nov 20, 2012
Messages
162 (0.04/day)
I know ROPs are inside the GPC and decoupled from memory controller, that's basic stuff!
Well, the text I quoted definitely sounded like you did not. It sounded like the new card had to have at least 5 GPCs active, and thus more than 48 SM, because of the 12GB of RAM, which isn't the case. Yes, the card will probably still have 12GB and only a few SM deactivated (via BIOS?) because the AIBs have to reuse all those 4080 12GB boards, but you can't relate the 12GB to the number of GPCs or SM at all.


What I got from comparing the 2080 Ti to the 3070 (Ti) and the 3080 10/12GB is that the amount of VRAM and ROPs means nothing in most games; it is only fillrate and processing power. The distance between all those cards remains about the same in all games and resolutions, with RT on and off. Only in very few cases (Far Cry 6 in UHD with RT) will the 2080 Ti perform much better than the 3070 and Ti, because 8GB isn't enough anymore, but in some other similar cases the framerate will be unplayable either way.
 
Last edited:
Joined
Sep 1, 2022
Messages
487 (0.60/day)
System Name Firestarter
Processor 7950X
Motherboard X670E Steel Legend
Cooling LF 2 420
Memory 4x16 G.Skill X5 6000@CL36
Video Card(s) RTX Gigabutt 4090 Gaming OC
Storage SSDS: OS: 2TB P41 Plat, 4TB SN850X, 1TB SN770. Raid 5 HDDS: 4x4TB WD Red Nas 2.0 HDDs, 1TB ext HDD.
Display(s) 42C3PUA, some dinky TN 10.1 inch display.
Case Fractal Torrent
Audio Device(s) PC38X
Power Supply GF3 TT Premium 850W
Mouse Razer Basilisk V3 Pro
Keyboard Steel Series Apex Pro
VR HMD Pimax Crystal with Index controllers
Yuck, oh it's from MLID. Rumors are rumors and nothing more.
 
Joined
Sep 26, 2022
Messages
214 (0.27/day)
That's my idea too.
Just rebranding the "old" 4080 12 GB as the 4070 and lowering the price to $700 would have been a smart move, but that's not today's Nvidia.
They have been playing dirty games in the distribution channel for a while now in order to keep prices as high as they can. They are just milking customers at this point.
See what's happening with DLSS 3.0, artificially limited to the 40 series in order to keep a good distance between the 4080 16GB and the 3080/3090 Ti in benchmarks and somehow justify the insanely high launch price.



most probably 4080 12 GB = 4070 Ti > 4070


480EUR more is not a small amount of money, and even if you are right about diminishing returns, not everyone wants a card like a 4090 in their case.
Of course, what I'm saying is: if you ARE purchasing a 4080, you have 1500EUR to put towards a GPU, so you can most likely add 480EUR to your new configuration (not everyone, but in general). That's all I'm saying. Nvidia scaled the price/performance in a very vile way.
 
Joined
Oct 27, 2020
Messages
791 (0.53/day)
Well, the text I quoted definitely sounded like you did not. It sounded like the new card had to have at least 5 GPCs active, and thus more than 48 SM, because of the 12GB of RAM, which isn't the case. Yes, the card will probably still have 12GB and only a few SM deactivated (via BIOS?) because the AIBs have to reuse all those 4080 12GB boards, but you can't relate the 12GB to the number of GPCs or SM at all.


What I got from comparing the 2080 Ti to the 3070 (Ti) and the 3080 10/12GB is that the amount of VRAM and ROPs means nothing in most games; it is only fillrate and processing power. The distance between all those cards remains about the same in all games and resolutions, with RT on and off. Only in very few cases (Far Cry 6 in UHD with RT) will the 2080 Ti perform much better than the 3070 and Ti, because 8GB isn't enough anymore, but in some other similar cases the framerate will be unplayable either way.
If you check the previous post that I made, you'll understand whether or not I have knowledge regarding the GPC & ROP correlation; it's very basic stuff, really.
You may not agree with the speculation I made, but just stating that, since ROPs/GPCs are decoupled from the memory controller, it doesn't have to have all 5 GPCs active says nothing essential; you're just repeating basic Ada architecture data.
If an upcoming part has unknown specs and you need to speculate, you go with what makes the most sense. So if you dispute the 5 GPCs (with a 192-bit bus and GDDR6X), then either you think 4 GPCs is more probable, or you're just saying that 5 GPCs are not 100% guaranteed, which is true but not a very meaningful assessment if you want to speculate, right?
Everything plays its role: ROPs have a theoretical pixel fillrate and a realised pixel fillrate based on bandwidth and the architecture's ROP efficiency and compression (delta etc.), and they are very important for raster even today, with today's shading power.
Based on my preliminary analysis of Ada's ROP efficiency I made this particular speculation.
Edit: regarding the 3070 Ti and 2080 Ti, the 3070 Ti has 6144 CUDA cores and the 2080 Ti 4352. If by processing power you mean FP32 throughput, on its own it means nothing; the same thing will happen again with Navi 31 vs Navi 21 (the performance difference will be a lot smaller than the FP32 ratings suggest).
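To make the FP32 point concrete, here is a small sketch of paper throughput (cores × 2 FLOPs/clock × boost clock), assuming reference boost clocks; the takeaway is only that the paper ratio is far larger than the real-world gap between these cards.

```python
# Paper FP32 throughput = CUDA cores * 2 FLOPs/clock * boost clock.
# Reference boost clocks; Ampere's dual-issue FP32 inflates the paper
# number versus Turing, which is exactly why it means little on its own.

def fp32_tflops(cuda_cores: int, boost_mhz: int) -> float:
    return cuda_cores * 2 * boost_mhz / 1e6

tf_3070ti = fp32_tflops(6144, 1770)   # ~21.7 TFLOPS
tf_2080ti = fp32_tflops(4352, 1545)   # ~13.4 TFLOPS
print(tf_3070ti, tf_2080ti, tf_3070ti / tf_2080ti)
# Paper ratio ~1.6x, far larger than the actual gap between the two in games.
```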
 
Last edited:
Joined
May 31, 2016
Messages
4,437 (1.43/day)
Location
Currently Norway
System Name Bro2
Processor Ryzen 5800X
Motherboard Gigabyte X570 Aorus Elite
Cooling Corsair h115i pro rgb
Memory 32GB G.Skill Flare X 3200 CL14 @3800Mhz CL16
Video Card(s) Powercolor 6900 XT Red Devil 1.1v@2400Mhz
Storage M.2 Samsung 970 Evo Plus 500MB/ Samsung 860 Evo 1TB
Display(s) LG 27UD69 UHD / LG 27GN950
Case Fractal Design G
Audio Device(s) Realtec 5.1
Power Supply Seasonic 750W GOLD
Mouse Logitech G402
Keyboard Logitech slim
Software Windows 10 64 bit
Sure, when both products existed, and not taking into account the 2 years and 3 months or so the 1080Ti was out with no 5700XT in existence yet.
Obviously. Still, 5700xt was always a better buy in my eyes.
 
Joined
Nov 20, 2012
Messages
162 (0.04/day)
Based on my preliminary analysis of Ada's ROP efficiency I made this particular speculation.
Which is absolutely irrelevant in this case. They have 4080 12GB cards produced and lying around, which they still want to sell. They can only flash the BIOS to deactivate SM and/or memory, but the latter would be a waste of money. They can't laser-cut the SM or rip off VRAM ICs, and they won't deactivate a huge amount of perfectly working SM. BTW, I smell reactivation by BIOS flashing, like with the RX 5700.

So either we will get a 4070 (Ti) with the specs of the old 4080 12GB, or minus 2-4 SM, or they shelve it for later and we first get a 4070 with 10-11GB of memory (SKUs like that exist) and much fewer SM. But from all we have seen on Ampere, Nvidia will not deactivate an entire GPC if not absolutely necessary. The 3070 Laptop could have had only 5 GPCs but still has 6, while the 3060 Ti only has 5. So a 4070 with 48 SM could have 4 GPCs, but will most likely still have 5. But with that amount of SM, it wouldn't be much faster than a 3070 Ti, which isn't plausible.
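A quick sketch of the 4-GPC vs 5-GPC question, assuming the commonly reported AD104 layout of 5 GPCs with 12 SM each (illustrative only; treat the layout as an assumption):

```python
# Sketch: ways to reach a given SM count on AD104, assuming the commonly
# reported layout of 5 GPCs x 12 SM (an assumption for illustration).

FULL_GPCS, SM_PER_GPC = 5, 12

def configs(target_sm: int):
    """Yield (active GPCs, SM fused off) combos that can hit target_sm."""
    for active_gpcs in range(1, FULL_GPCS + 1):
        max_sm = active_gpcs * SM_PER_GPC
        if max_sm >= target_sm:
            yield active_gpcs, max_sm - target_sm

print(list(configs(48)))  # [(4, 0), (5, 12)] -> 4 full GPCs, or 5 GPCs minus 12 SM
print(list(configs(60)))  # [(5, 0)]          -> only a fully enabled die
```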
 
Joined
May 10, 2020
Messages
738 (0.45/day)
Processor Intel i7 13900K
Motherboard Asus ROG Strix Z690-E Gaming
Cooling Arctic Freezer II 360
Memory 32 Gb Kingston Fury Renegade 6400 C32
Video Card(s) PNY RTX 4080 XLR8 OC
Storage 1 TB Samsung 970 EVO + 1 TB Samsung 970 EVO Plus + 2 TB Samsung 870
Display(s) Asus TUF Gaming VG27AQL1A + Samsung C24RG50
Case Corsair 5000D Airflow
Power Supply EVGA G6 850W
Mouse Razer Basilisk
Keyboard Razer Huntsman Elite
Benchmark Scores 3dMark TimeSpy - 26698 Cinebench R23 2258/40751
Didn't Nvidia mess up this launch with the $1,600 flagship that is on average 60% faster than the $6-700-something 6900 XT? 160% performance for 240% price. Um... awesome? :wtf:

The point is that people will buy Nvidia even if it's unjustifiably expensive, but they won't buy AMD if it's ever so slightly out of its ideal price/performance range.
Well, it's not officially launched yet, so we don't really know how the market will react...
The 4090 is widely available at an insane price, so it wasn't really a great launch, I would say (but that was quite expected, since those are niche cards).
The 4080's value is even lower, so we'll see how well they sell.
The world is in an economic recession, and gaming PCs aren't at the top of customers' lists anymore, I'm afraid...
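To spell out the arithmetic in the quote above (the $670 figure is just an assumed mid-point of the "$6-700-something" street price):

```python
# Quick check of the "160% performance for 240% price" claim above.
# The $670 street price is an assumed mid-point of "$6-700-something".

rtx4090_price, rx6900xt_price = 1600, 670
perf_ratio = 1.60                      # 4090 ~60% faster on average (as claimed)
price_ratio = rtx4090_price / rx6900xt_price

print(f"price ratio:     {price_ratio:.2f}x")               # ~2.39x
print(f"perf per dollar: {perf_ratio / price_ratio:.2f}x")  # ~0.67x of the 6900 XT
```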
 
Joined
Oct 27, 2020
Messages
791 (0.53/day)
Which is absolutely irrelevant in this case. They have 4080 12GB cards produced and lying around, which they still want to sell. They can only flash the BIOS to deactivate SM and/or memory, but the latter would be a waste of money. They can't laser-cut the SM or rip off VRAM ICs, and they won't deactivate a huge amount of perfectly working SM. BTW, I smell reactivation by BIOS flashing, like with the RX 5700.
Why are you replying to me regarding this? Did I say anything about laser-cutting the SM or ripping off VRAM? The figures I gave are 12GB GDDR6X, 5 GPCs and at least 52 SM active (I essentially speculated that, in the worst case, Nvidia will deactivate at most 8 SM from the full AD104 die if 12GB of GDDR6X is present).
12GB GDDR6X means a 192-bit bus and bandwidth equivalent, or nearly equivalent, to the 4080 12GB, and based on ROP efficiency (one of the many factors I used) there is little chance that any future desktop AD104 design with 12GB GDDR6X will be 4 GPC/64 ROPs, so 5 GPCs (60 SM) in all likelihood, just like I said.
So essentially you're agreeing with my speculated specs?

So either we will get a 4070 (Ti) with the specs of the old 4080 12GB, or minus 2-4 SM, or they shelve it for later and we first get a 4070 with 10-11GB of memory (SKUs like that exist) and much fewer SM. But from all we have seen on Ampere, Nvidia will not deactivate an entire GPC if not absolutely necessary. The 3070 Laptop could have had only 5 GPCs but still has 6, while the 3060 Ti only has 5. So a 4070 with 48 SM could have 4 GPCs, but will most likely still have 5. But with that amount of SM, it wouldn't be much faster than a 3070 Ti, which isn't plausible.
We are certainly getting an even further cut-down AD104; whether it comes first I don't know, although the most probable scenario IMO is that the higher-end AD104 (52 or 56 SM) comes first.
Before starting the speculation I said "if it's 12GB GDDR6X" and then continued. I didn't correlate it to a model number, nor did I say it will be the first AD104 model that comes to market (but again, the most probable scenario is the higher-end AD104 coming first in order to put less pressure on the pricing of Ampere's lower models). Like I mentioned, Nvidia will see what RDNA3 has and respond accordingly.
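For anyone following along, here is a minimal sketch of why the VRAM capacity pins down the bus width, assuming the 2GB (16Gb) GDDR6X packages with a 32-bit interface each that Ada cards use, and no clamshell mode:

```python
# Sketch: deriving the bus width from the VRAM capacity, assuming 2GB
# (16Gb) GDDR6X devices with a 32-bit interface each and no clamshell mode.

DEVICE_GB, DEVICE_BUS_BITS = 2, 32

def bus_width_bits(vram_gb: int) -> int:
    devices = vram_gb // DEVICE_GB
    return devices * DEVICE_BUS_BITS

print(bus_width_bits(12))  # 192 -> 6 devices, matches the cancelled 4080 12GB
print(bus_width_bits(10))  # 160 -> 5 devices, the rumored lower AD104 SKU
print(bus_width_bits(16))  # 256 -> 8 devices (AD103 / 4080 16GB)
```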
 
Joined
Nov 20, 2012
Messages
162 (0.04/day)
I understand now that you tried to base your speculation on what amount of ROPs, ergo active GPCs, would be feasible for a 192-bit memory bus with 12GB, but I'm still not certain whether you are speculating about how much the old 4080 12GB will be cut down, or just how much AD104 can be cut down before 12GB/192-bit isn't useful anymore.

In both cases, your speculation is just theory with very little practical relevance. As I said, the 4080 12GB cards are there and have to be used somehow; it was already stated Nvidia will compensate the AIBs for changing box designs, flashing cards etc. So it is rather certain the already produced cards will be sold, and there is a limit to how far these can be cut down in that state. It is not possible to rip already installed memory ICs off the PCBs, and it is not economically feasible to deactivate them via BIOS.
As for any lower AD104-based product, there already was a 10GB SKU which was rumored to become the 4070, so I expect that to be a plausible option for the next lower product. Although the 3080 showed that in certain games 12GB is far superior to 10GB, because in UHD with RT 10GB is not enough, there are two popular cards with 11GB (1080 Ti and 2080 Ti) and one very popular card with 10GB (3080) in use, so games will not only be optimized for 8GB, 12GB and 16GB. Then there is saving the cost of one additional 2GB GDDR6X IC by going with 10GB. All that will be much more of a concern for NV than the question of whether 160 or 192 bit works better with 80 or 96 ROPs.
 
Joined
Oct 27, 2020
Messages
791 (0.53/day)
ROP efficiency was just one of the many factors, like I said, and I only mentioned it after you stated that ROPs mean nothing for most games, which is completely wrong.
Just like you said, if there are 4080 12GB boards out there already, my specification assumptions make even more sense, so where is the disagreement with what I said in my original post that you needed to reply to?

"If it's still 12GB GDDR6X, this means 5GPC active, so 80ROPs and at least 208TC and 6656 cuda cores.
Being a 5GPC design, it will have more similar scaling with GA104 than GA102 in lower resolutions, meaning even if it goes from 2610MHz and 21Gbps GDDR6X to 2510MHz and 20.5Gbps GDDR6X it will probably match RTX 3090 in QHD, forcing RTX 3080Ti and 6950X to drop below it's SRP at least 10% or more, that's why it seems difficult (if the launch is this year) to be less than $799 when there is so Ampere stock according to reports.
Probably what Nvidia will do is wait AMD's announcement in 2 weeks from now and then based on RDNA3 pricing to respond accordingly"
 
Joined
Feb 14, 2012
Messages
2,355 (0.51/day)
System Name msdos
Processor 8086
Motherboard mainboard
Cooling passive
Memory 640KB + 384KB extended
Video Card(s) EGA
Storage 5.25"
Display(s) 80x25
Case plastic
Audio Device(s) modchip
Power Supply 45 watts
Mouse serial
Keyboard yes
Software disk commander
Benchmark Scores still running
Okay, rebadged and re-firmwared. You know it's the same card.
 
Joined
Nov 20, 2012
Messages
162 (0.04/day)
and I only mentioned it after you stated that ROPs mean nothing for most games, which is completely wrong.
Yes, I was wrong there; I confused the correlation of ROPs and pixel fillrate with TMUs. In fact, it seems ROPs and pixel fillrate are the only aspect that could explain why the 2080 Ti is glued between the 3060 Ti and 3070 in nearly all game scenarios despite having significantly more memory bandwidth and texture fillrate, but less processing power.
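A rough pixel-fillrate comparison backs that up (ROPs × reference boost clock; real cards boost higher, so treat these as relative numbers):

```python
# Rough pixel fillrate = ROPs * reference boost clock. Real cards boost
# higher than the reference spec, so read these as relative numbers.

cards = {
    "RTX 3060 Ti": (80, 1665),   # (ROPs, boost MHz)
    "RTX 2080 Ti": (88, 1545),
    "RTX 3070":    (96, 1725),
}

for name, (rops, mhz) in cards.items():
    print(f"{name}: {rops * mhz / 1000:.1f} Gpixel/s")
# ~133, ~136, ~166 -> the 2080 Ti lands just above the 3060 Ti and well
# below the 3070, roughly where it sits in game benchmarks.
```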
 
Joined
Nov 30, 2021
Messages
135 (0.12/day)
Location
USA
System Name Star Killer
Processor Intel 13700K
Motherboard ASUS RO STRIX Z790-H
Cooling Corsair 360mm H150 LCD Radiator
Memory 64GB Corsair Vengence DDR5 5600mhz
Video Card(s) MSI RTX 3080 12GB Gaming Trio
Storage 1TB Samsung 980 x 1 | 1TB Crucial Gen 4 SSD x 1 | 2TB Samsung 990 Pro x 1
Display(s) 32inch ASUS ROG STRIX 1440p 170hz WQHD x 1, 24inch ASUS 165hz 1080p x 1
Case Lian Li O11D White
Audio Device(s) Creative T100 Speakers , Razer Blackshark V2 Pro wireless
Power Supply EVGA 1000watt G6 Gold
Mouse Razer Viper V2 Wireless with dock
Keyboard ASUS ROG AZOTH
Software Windows 11 pro
Do not believe anything this guy says. MLID just blatantly makes up leaks. A month ago he made a video saying his "sources" told him Arc was canceled and that they would be shutting down. He also "leaks" the most obvious information, like how many cores a CPU will have or the cooler design of the 4070. It doesn't take a genius to figure this stuff out. He also never shuts up about how he leaked stuff when it turns out to be right. Every episode he will say, "I actually was the first to leak this months ago, I knew this months ago, I was right about everything once again." He should not be taken seriously after his Arc video.
 
Joined
Jul 13, 2016
Messages
3,271 (1.07/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage Too much
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) Topping DX5, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse G305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
It's not pointless semantics. AD104 is not a cut down version of the GA102. It is a much smaller chip to begin with, and as such, a lot more of it can be manufactured per wafer. The defective parts of each fully manufactured chip are cut down to make lesser performing ones. Either you don't understand how chip design and manufacturing works and really believe that every single GPU is a cut down version of another, or it's you who decided to pick a pointless fight about semantics when you knew very well what I meant. If "cutting down" means what you seem to think it means (using the same architecture to make smaller chips), then what is the 1030 compared to the 1080 Ti or the 710 compared to the 780 Ti? C'mon...

If only products based on chips with defective parts form the basis of the initial launch, then I want to know where fully working chips go - probably to storage to be sold for an even higher price later. This is why I was scratching my head during the Ampere launch, and this is why I'm scratching my head now.

I never said AD104 was a cut-down version of GA102 (or AD102, seeing as you had a typo, but I'm a human that can use context clues to infer intent, unlike someone ;) ). I never specifically referenced the dies in my original comment. That's an important distinction, because I would not have used the words "cut down" when referring to the dies specifically; "cut down" holds a meaning specific to the dies, but it does not for the model numbers.

I was saying that, in comparison to the 4090, the 4080 has far fewer cores. In that sense it is cut down. "Cut down" can mean "a die that has had cores disabled", or "reduce the size, amount, or quantity of something", or "cause something to fall by cutting it through at the base", or even refer to murdering someone in the streets.

It turns out that in fact there is more than one way to use the word cut down.
 
Joined
Jun 21, 2021
Messages
3,121 (2.50/day)
System Name daily driver Mac mini M2 Pro
Processor Apple proprietary M2 Pro (6 p-cores, 4 e-cores)
Motherboard Apple proprietary
Cooling Apple proprietary
Memory Apple proprietary 16GB LPDDR5 unified memory
Video Card(s) Apple proprietary M2 Pro (16-core GPU)
Storage Apple proprietary onboard 512GB SSD + various external HDDs
Display(s) LG UltraFine 27UL850W (4K@60Hz IPS)
Case Apple proprietary
Audio Device(s) Apple proprietary
Power Supply Apple proprietary
Mouse Apple Magic Trackpad 2
Keyboard Keychron K1 tenkeyless (Gateron Reds)
VR HMD Oculus Rift S (hosted on a different PC)
Software macOS Sonoma 14.7
Benchmark Scores (My Windows daily driver is a Beelink Mini S12 Pro. I'm not interested in benchmarking.)
It turns out that in fact there is more than one way to use the word cut down.

Lots of words have multiple definitions that might have some degree of overlap. It's important to read for context, something some people don't do for reasons unexplained.

It's like the word fruit. Botanists use the word differently than grocers.

Relative to this discussion about the former 4080 12GB, it's a different GPU from a different die. The differentiation is not coming from fusing off cores or using binning to create product segmentation. You can't actually slice a piece off an AD102-300 (the GPU at the center of a 4090 card) to make an AD103-300 (the GPU at the heart of a 4080 16GB card).

The AD104-400 GPU that was slated for the now "unlaunched" 4080 12GB card was also from a different die. If using "cut down" to mean a smaller part with fewer transistors, yes. If it means some sort of larger GPU that has cores fused off, using "cut down" would be incorrect.

Clearly both AMD and NVIDIA are using binning to create differentiated GPU variants to increase profitability. Some samples that emerge from the foundry are better. Those are being reserved for a GPU SKU that AMD and NVIDIA charge more for.
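For concreteness, a small sketch of the announced Ada launch lineup using the published SM counts; it shows each card comes from its own die, and only the unlaunched 4080 12GB would have used a fully enabled one.

```python
# Announced Ada launch lineup: each card uses its own die, and only the
# unlaunched 4080 12GB would have shipped with a fully enabled chip.
# SM counts are the published specs.

ada_lineup = {
    # card                        (die,     SM enabled, SM on full die)
    "RTX 4090":                   ("AD102", 128, 144),
    "RTX 4080 16GB":              ("AD103",  76,  80),
    "RTX 4080 12GB (unlaunched)": ("AD104",  60,  60),
}

for card, (die, enabled, full) in ada_lineup.items():
    print(f"{card}: {die}, {enabled}/{full} SM ({enabled / full:.0%} of the die)")
```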
 
Joined
Jun 20, 2022
Messages
302 (0.34/day)
Location
Germany
System Name Galaxy Tab S8+
Processor Snapdragon 8 gen 1 SOC
Cooling passive
Memory 8 GB
Storage 256 GB + 512 GB SD
Display(s) 2.800 x 1.752 Super AMOLED
Power Supply 10.090 mAh
Software Android 12
I can smell a "cheap" 4070 coming with a severely cut-down chip for $700, then a 4070 Ti with the same config as the 4080 12 GB would have been for $850. And then, Jensen won't understand why isn't everybody happy.

It really puzzles me how Nvidia doesn't have a well-thought out plan for the whole product stack before the launch of the flagship the way AMD and Intel do.

They had: Sell everything as expensive as possible based on what some people paid during the pandemic...

What puzzles me: Why would someone buy an RTX 4080 16GB (not yet available) for 1600€ if you can get an RTX 4090 for 1950€ in Europe? US pricing is a bit different, but the European pricing of the lower-stack cards does not make sense at all (the 4080 12GB was 1100€). In this price bracket it does not really matter if you spend 350 bucks more.
 
Joined
Jun 21, 2021
Messages
3,121 (2.50/day)
System Name daily driver Mac mini M2 Pro
Processor Apple proprietary M2 Pro (6 p-cores, 4 e-cores)
Motherboard Apple proprietary
Cooling Apple proprietary
Memory Apple proprietary 16GB LPDDR5 unified memory
Video Card(s) Apple proprietary M2 Pro (16-core GPU)
Storage Apple proprietary onboard 512GB SSD + various external HDDs
Display(s) LG UltraFine 27UL850W (4K@60Hz IPS)
Case Apple proprietary
Audio Device(s) Apple proprietary
Power Supply Apple proprietary
Mouse Apple Magic Trackpad 2
Keyboard Keychron K1 tenkeyless (Gateron Reds)
VR HMD Oculus Rift S (hosted on a different PC)
Software macOS Sonoma 14.7
Benchmark Scores (My Windows daily driver is a Beelink Mini S12 Pro. I'm not interested in benchmarking.)
They had: Sell everything as expensive as possible based on what some people paid during the pandemic...

What puzzles me: Why would someone buy an RTX 4080 16GB (not yet available) for 1600€ if you can get an RTX 4090 for 1950€ in Europe? US pricing is a bit different, but the European pricing of the lower-stack cards does not make sense at all (the 4080 12GB was 1100€). In this price bracket it does not really matter if you spend 350 bucks more.

Some people just want to own the halo product, even if their usage case would make a lower caliber product a better value.

I have a 3080 Ti. I play games with it. Sure, I could have paid more for the 3090 (or later 3090 Ti) but my usage case doesn't benefit from 24GB of VRAM. So essentially I have a near 3090 with half the VRAM and saved myself a few hundred bucks.

Sure, I paid more per GB of VRAM, but that's not the only consideration to make when contemplating a graphics card purchase. I'll point out that 300 dollars or euros buys quite a few gaming titles.

Some people have so much discretionary income that they can simply buy the halo card without blinking. For most people, an expensive GPU is a serious chunk of change.
 
Joined
Sep 17, 2014
Messages
22,422 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
Sure, after both products launched, and we totally ignore the 2 years and 3 months or so the 1080Ti was out with no 5700XT in existence yet and it was impossible to make that particular 'better buy' choice.


More raster perf, ever so slightly better than RDNA2 RT perf (relative hit to performance) and the same playing catch up with other features as usual, that's my bet.

What might actually be exciting is price and availability.

EDIT: typo
Exactly. You have to consider that by the time the 5700 XT was out, there were already cheaper second-hand 1080s, and they also got rebadged into even cheaper 1070 Tis, plus the advantage of tried-and-tested versus a new (and, at the time we thought, redone) RDNA. Anyway, I didn't link that video for that little appearance we made in it :D It's more to show that idiots will always find clicky items to produce. Back then he was looking for outrage and he found it in a TPU forum advice topic. I mean, lol. What the hell, get a life, etc. It's the same thing in a different package today: zero substance. And ironically wrong on all counts.

That said, I concede that it compares more favorably to a 1080 Ti, or is at least somewhere in between the two. And I'll drop it now, for being grossly off-topic too :p
 
Joined
Jun 21, 2021
Messages
3,121 (2.50/day)
System Name daily driver Mac mini M2 Pro
Processor Apple proprietary M2 Pro (6 p-cores, 4 e-cores)
Motherboard Apple proprietary
Cooling Apple proprietary
Memory Apple proprietary 16GB LPDDR5 unified memory
Video Card(s) Apple proprietary M2 Pro (16-core GPU)
Storage Apple proprietary onboard 512GB SSD + various external HDDs
Display(s) LG UltraFine 27UL850W (4K@60Hz IPS)
Case Apple proprietary
Audio Device(s) Apple proprietary
Power Supply Apple proprietary
Mouse Apple Magic Trackpad 2
Keyboard Keychron K1 tenkeyless (Gateron Reds)
VR HMD Oculus Rift S (hosted on a different PC)
Software macOS Sonoma 14.7
Benchmark Scores (My Windows daily driver is a Beelink Mini S12 Pro. I'm not interested in benchmarking.)
Am I the only one that thinks the high end should be no more than $800?

There are many who would echo your sentiment. However based on the prices that some shell out to scalpers, clearly not everyone agrees with you.

Certainly AMD and NVIDIA do not. They know what COGS are and they have their own idea of what gross margin should be.

Remember that the price people want to pay and what people are willing to pay are often two very different numbers.

The same applies to the concepts of "want" and "need." There are lots of people who want a 4090. There are very few people who need a 4090.
 
Last edited:
Joined
Jan 14, 2019
Messages
12,337 (5.77/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
I never said AD104 was a cut-down version of GA102 (or AD102, seeing as you had a typo, but I'm a human that can use context clues to infer intent, unlike someone ;) ). I never specifically referenced the dies in my original comment. That's an important distinction, because I would not have used the words "cut down" when referring to the dies specifically; "cut down" holds a meaning specific to the dies, but it does not for the model numbers.

I was saying that, in comparison to the 4090, the 4080 has far fewer cores. In that sense it is cut down. "Cut down" can mean "a die that has had cores disabled", or "reduce the size, amount, or quantity of something", or "cause something to fall by cutting it through at the base", or even refer to murdering someone in the streets.

It turns out that in fact there is more than one way to use the word cut down.
Yes. I used it in one way, and you used it in another way when you replied to my post. Anyway, you know what I mean, and I know what you mean, so that's that. :)

They had: Sell everything as expensive as possible based on what some people paid during the pandemic...

What puzzles me: Why would someone buy an RTX 4080 16GB (not yet available) for 1600€ if you can get an RTX 4090 for 1950€ in Europe? US pricing is a bit different, but the European pricing of the lower-stack cards does not make sense at all (the 4080 12GB was 1100€). In this price bracket it does not really matter if you spend 350 bucks more.
What puzzles me is why would anyone buy either when you can buy a 6900XT for much less.
 
Joined
Jun 10, 2014
Messages
2,985 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
It really puzzles me how Nvidia doesn't have a well-thought out plan for the whole product stack before the launch of the flagship the way AMD and Intel do.
It depends what you mean by "plan". The chip designs are completed >1 year prior to launch, before tape-out, and they have a fairly good idea of the chips' characteristics ~6 months before launch with mature engineering samples. The final "cuts" and clocks are usually picked based on the QS samples, but keep in mind that the chips in the family aren't ready simultaneously, so when e.g. the RTX 4090's specs were finalized, they only had a rough idea of the specs of the RTX 4060, etc.

One thing to be aware of is that these decided specs can actually change in the final months prior to launch if they discover issues or incorrectly estimate yields. This has happened before, like with the original "GTX 680", which was cancelled weeks ahead of release due to quality issues with the GK100 chip, forcing Nvidia to do a last-minute rebrand of the planned "GTX 670 Ti" into the GTX 680 slot (there were even pictures of pre-production models). I remember when I bought my GTX 680, it had stickers over every model number on the box, so apparently those boxes were produced before the renaming.

As for the "RTX 4080 12GB", we will have to see. If we quickly see a rebranded full AD104, then it was just the result of media backlash, but if we don't see a full AD104, then we can fairly safely assume there was a yield issue.

3. Launch the Ti line with fully enabled chips a year later so that people can praise your goodness and pay for the same shit again.
Firstly, the same people don't buy the slightly refreshed products of the same lineup.
Secondly, yields tend to improve over time, so it's very common to have a surplus of slightly better chips towards the end of a product lifecycle.
Thirdly, the mid-cycle refreshes usually are very good value products.
 
Joined
Jan 14, 2019
Messages
12,337 (5.77/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
It depends what you mean by "plan". The chip designs are completed >1 year prior to launch, before tape-out, and they have a fairly good idea of the chips' characteristics ~6 months before launch with mature engineering samples. The final "cuts" and clocks are usually picked based on the QS samples, but keep in mind that the chips in the family aren't ready simultaneously, so when e.g. the RTX 4090's specs were finalized, they only had a rough idea of the specs of the RTX 4060, etc.

One thing to be aware of is that these decided specs can actually change in the final months prior to launch if they discover issues or incorrectly estimate yields. This has happened before, like with the original "GTX 680", which was cancelled weeks ahead of release due to quality issues with the GK100 chip, forcing Nvidia to do a last-minute rebrand of the planned "GTX 670 Ti" into the GTX 680 slot (there were even pictures of pre-production models). I remember when I bought my GTX 680, it had stickers over every model number on the box, so apparently those boxes were produced before the renaming.

As for the "RTX 4080 12GB", we will have to see. If we quickly see a rebranded full AD104, then it was just the result of media backlash, but if we don't see a full AD104, then we can fairly safely assume there was a yield issue.


Firstly, the same people don't buy the slightly refreshed products of the same lineup.
Secondly, yields tend to improve over time, so it's very common to have a surplus of slightly better chips towards the end of a product lifecycle.
Thirdly, the mid-cycle refreshes usually are very good value products.
I understand that there can be yield issues. What I don't understand is why it only affects Nvidia so much that they can't (or just don't?) release a single product based on a fully enabled die during the launch of a series. My theory is that they reserve the best chips, so that they can price the defective ones against the market/competition, then sell the good ones later for even higher prices.
 