
Why does everyone hate the 4080?

In other words, no games out now can support these new RT core improvements? I mean, that's fine for future titles, but the PR mentioned RT improvements, while the current performance increases in RT for Ada on current-gen titles come from the sheer rasterisation uplift (as I mentioned somewhere previously when looking at the % hit from turning RT on when comparing Ada and Ampere).
That's my understanding for the time being too: RT in existing games effectively has the same performance hit as before, relative to raster perf, unless driver/game updates can be and are made to leverage the new capabilities. My only point in this respect is that there are differences in Ada vs Ampere.
 
Of course it's what they're telling us; they designed the GPU. RT improvements almost certainly need to be accounted for in game code; the RT cores themselves definitely do now have increased capability relative to Ampere.
So if I'm telling my boss that I've worked 2x harder today compared to yesterday, he has to believe me because I said it?

Here's the Ada vs Ampere block diagram; note the capabilities in the RT cores and the L0 i-cache.

[Attachment 271083: Ada vs Ampere SM block diagram]
OK, they changed the cache size. And? It's still just Ampere 2.0. RT cores are not detailed in the diagram, so whether anything has changed or not, I wouldn't know. All I know is, RT performance in games is exactly the same as it has been since Turing.

You can choose to see no improvement if you want, but this is factually not a die-shrunk Ampere, despite how the swing of reactions goes. It may share the vast majority of the architectural design, but there are indisputable additions and differences; ergo, in an absolute sense, it is not a die-shrunk Ampere.
Or rather, you can choose to see differences that aren't showing on the block diagram instead of seeing that the only improvement we have in game performance comes from more and higher-clocked cores. From this point of view, the whitepaper doesn't add much value, the same way AMD's RDNA 3 improvements (MCM, decoupled shader clock, etc) won't add much value either if they don't translate to observable performance and/or price (edit: but at least they're something you can see).
 
Lots of 4080s in stock in the UK. Very different times. It cheers me to see that people aren't swiping these £1300-1500 second-tier GPUs from the shelves.

I believe people are getting desperately exhausted and tired of the whole surrounding reality, and the most normal reaction is to skip this series.
 
In other words, no games out now can support these new RT core improvements? I mean, that's fine for future titles, but the PR mentioned RT improvements, while the current performance increases in RT for Ada on current-gen titles come from the sheer rasterisation uplift (as I mentioned somewhere previously when looking at the % hit from turning RT on when comparing Ada and Ampere).
I don't recall it being mentioned anywhere that you need special RT code in already RT-enabled games in order to use the so-called "new RT core improvements".
It's like saying you need special new code to utilize the extra IPC lift in new CPUs.
No new RT features were introduced by Microsoft, and Ada supports all existing RT features.

Please correct me if I'm wrong or misinformed.
 
I don't recall it being mentioned anywhere that you need special RT code in already RT-enabled games in order to use the so-called "new RT core improvements".
It's like saying you need special new code to utilize the extra IPC lift in new CPUs.
No new RT features were introduced by Microsoft, and Ada supports all existing RT features.

Please correct me if I'm wrong or misinformed.

I think a newer driver can help.
 
The existence of this thread shows how privileged some are to not see a problem with blowing tons of money on entertainment that isn't even a halo product.

The 4080 literally costs more than the monthly paycheck I get (not a US citizen, though).
 
The existence of this thread shows how privileged some are to not see a problem with blowing tons of money on entertainment that isn't even a halo product.

The 4080 literally costs more than the monthly paycheck I get (not a US citizen, though).
To offer some reasoning: many people game as a hobby in the little time they have (see: children), and for a serious hobby, $1,000-2,000 isn't that big or exceptional. Simple as that. They could save it, of course, but why? At its core, every hobby is a 'bad investment' from an economic point of view.
 
I don't recall it being mentioned anywhere that you need special RT code in already RT-enabled games in order to use the so-called "new RT core improvements".
It's like saying you need special new code to utilize the extra IPC lift in new CPUs.
No new RT features were introduced by Microsoft, and Ada supports all existing RT features.

Please correct me if I'm wrong or misinformed.
My reply was a comment to another member. Regardless, if a driver update is all that is needed, why did Nvidia not provide updated drivers for reviews?

Also, there are hardware elements in Ada that enable DLSS 3, which cannot be done by Ampere, so there may well be coding-specific routes needed to enable the full functionality of the physical changes.
 
In other words, no games out now can support these new RT core improvements? I mean, that's fine for future titles, but the PR mentioned RT improvements, while the current performance increases in RT for Ada on current-gen titles come from the sheer rasterisation uplift (as I mentioned somewhere previously when looking at the % hit from turning RT on when comparing Ada and Ampere).

Yeah no, current RT games are hybrid Raster + RT where RT effects are added on top of a rasterized frame.

The 4090, with >2x the RT capability of the 3090 Ti, can halve the RT rendering time, but the final rendering time is still affected by rasterization performance.

For example, 4090 in CP2077 @ 4K (4090 TPU review)
Native 71.2FPS = 14ms
RT ON 41.8FPS = 24ms
Cost of RT = +10ms

Compare to 3090Ti
Native 47.3FPS = 21.1ms
RT ON 24FPS = 41.6ms
Cost of RT = +20.5ms
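
If anyone wants to double-check the arithmetic: frame time is just 1000 / FPS, and the RT cost is the difference between the RT-on and native frame times. A quick Python sketch using the same TPU numbers:

```python
# Frame time in ms is 1000 / FPS; the RT cost is the difference between
# the RT-on and native frame times. Numbers are the TPU review figures above.
def frame_times(native_fps, rt_fps):
    native_ms = 1000.0 / native_fps
    rt_ms = 1000.0 / rt_fps
    return native_ms, rt_ms, rt_ms - native_ms

for card, native, rt in [("4090", 71.2, 41.8), ("3090 Ti", 47.3, 24.0)]:
    native_ms, rt_ms, cost = frame_times(native, rt)
    print(f"{card}: native {native_ms:.1f} ms, RT on {rt_ms:.1f} ms, RT cost +{cost:.1f} ms")
# 4090: native 14.0 ms, RT on 23.9 ms, RT cost +9.9 ms
# 3090 Ti: native 21.1 ms, RT on 41.7 ms, RT cost +20.5 ms
```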

If we compare the 4090 to the 3090 Ti in path-traced games (or a pure RT workload), the 4090 will easily beat the 3090 Ti by more than 2x.

Ada has Shader Execution Reordering (Intel also has something similar), which can cut the RT rendering time by another 30-40% and can be easily integrated into existing games; devs just need to update the game engine with a few lines of code.
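
For what it's worth, here's a toy sketch of the idea behind SER (purely conceptual Python, not NVIDIA's actual API): you bucket pending hits by the shader they'll invoke before shading them, so the hardware runs coherent batches instead of divergent neighbours.

```python
from collections import defaultdict

# Toy illustration only, not NVIDIA's API: group pending ray hits by the shader
# (material) they will invoke, then shade each bucket as one coherent batch
# instead of letting neighbouring threads run different shaders.
hits = [
    {"ray": 0, "shader": "glass"}, {"ray": 1, "shader": "metal"},
    {"ray": 2, "shader": "glass"}, {"ray": 3, "shader": "skin"},
    {"ray": 4, "shader": "metal"}, {"ray": 5, "shader": "glass"},
]

buckets = defaultdict(list)
for hit in hits:
    buckets[hit["shader"]].append(hit)  # the "reorder" step

for shader, batch in buckets.items():
    # every hit in the batch wants the same shader, so there is no divergence here
    print(f"dispatch {shader}: rays {[h['ray'] for h in batch]}")
```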

Now regarding the 7900 XTX: AMD didn't improve their RT capability (or only very slightly), which is why the 7900 XTX can beat the 3090 Ti by 50% in raster, yet with RT on they're tied in CP2077, meaning in a pure RT workload the 3090 Ti is still superior to the 7900 XTX. However, the 7900 XTX can still come out on top when games have very limited RT effects (FC6, RE Village, Watch Dogs Legion). IMO the 7900 XTX is a disappointment; AMD put everything on raster and still loses to the 4090 in raster.
 
So if I'm telling my boss that I've worked 2x harder today compared to yesterday, he has to believe me because I said it?
Strawman bad.

I'll leave it at this: Ada is factually a different architecture from Ampere, perhaps not by much, but that's not a disputable fact; facts don't need you to agree with them. You can feel free to downplay the differences or even dismiss them, but they exist.
 


Prices in my country :shadedshu: and in stock:
[Attachment 270673: local prices and stock]

The man in the video mentions Micro Center multiple times. Micro Center is walk-in only, and fewer than 12 US states have a Micro Center store.

Here in the US, only the budget-brand Zotac 4080s are anywhere near MSRP.

 
Of course it's what they're telling us; they designed the GPU. RT improvements almost certainly need to be accounted for in game code; the RT cores themselves definitely do now have increased capability relative to Ampere.



Also, a node shrink ≠ the improved power circuitry for spikes, the massive extra cache, the 4x improvement in tensor FLOPS (which outstrips the increase from tensor core count x clock speed), or the 3x faster OFA...

Here's the Ada vs Ampere block diagram; note the capabilities in the RT cores and the L0 i-cache.

[Attachment 271083: Ada vs Ampere SM block diagram]

You can choose to see no improvement if you want, but this is factually not a die-shrunk Ampere, despite how the swing of reactions goes. It may share the vast majority of the architectural design, but there are indisputable additions and differences; ergo, in an absolute sense, it is not a die-shrunk Ampere.

You can check out the NVIDIA ADA GPU ARCHITECTURE whitepaper for more info.
None of these changes translate to in-game performance differences. The core is 95% the same, and the other 5% is of questionable significance. Not even you know what's really going to change because of the mentioned changes, but it looks complicated, so surely it must be tremendous. After all, they and you say 'vastly improved'...

Marketing, man. Come on. It's clear that the block diagrams are almost identical, so hardware-wise this is not much more than a shrink and some cache changes; a larger cache is not really a change to the die, it's just using free space to add more of a thing it already had. We have seen this before: every arch iterates on the last, and lots of iterations are built on an identical floor plan. Ada is one of those, and the fact that it's identical results in end performance that is only elevated by simply adding more of it all. The 'new things' only come into play with specific dev implementations; for all we know it's nothing more than a featureset/programmable upgrade with cache to hold it. There is no new hardware here, nor is its configuration in the SM new.
 
Nvidia Ada, on the other hand, is nothing more than Ampere on a die shrink
So we started with this
The core is 95% the same
And ended up with this. This I can accept, as they're not identical. Happy to keep an eye out for real world examples of game performance that prove it.
 
Yeah no, current RT games are hybrid Raster + RT where RT effects are added on top of a rasterized frame.

The 4090, with >2x the RT capability of the 3090 Ti, can halve the RT rendering time, but the final rendering time is still affected by rasterization performance.

For example, 4090 in CP2077 @ 4K (4090 TPU review)
Native 71.2FPS = 14ms
RT ON 41.8FPS = 24ms
Cost of RT = +10ms

Compare to 3090Ti
Native 47.3FPS = 21.1ms
RT ON 24FPS = 41.6ms
Cost of RT = +20.5ms

If we compare the 4090 to the 3090 Ti in path-traced games (or a pure RT workload), the 4090 will easily beat the 3090 Ti by more than 2x.

Ada has Shader Execution Reordering (Intel also has something similar), which can cut the RT rendering time by another 30-40% and can be easily integrated into existing games; devs just need to update the game engine with a few lines of code.

Now regarding the 7900 XTX: AMD didn't improve their RT capability (or only very slightly), which is why the 7900 XTX can beat the 3090 Ti by 50% in raster, yet with RT on they're tied in CP2077, meaning in a pure RT workload the 3090 Ti is still superior to the 7900 XTX. However, the 7900 XTX can still come out on top when games have very limited RT effects (FC6, RE Village, Watch Dogs Legion). IMO the 7900 XTX is a disappointment; AMD put everything on raster and still loses to the 4090 in raster.

The relative cost of RT (the RT frame-time increase as a share of the native frame time) is 71.4% in the first case vs. 97.1% in the second. An improvement, but not that large.
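
Those percentages fall straight out of the frame times quoted above, the RT cost divided by the native frame time; a quick sketch:

```python
# RT cost as a share of the native frame time (same numbers as above).
cases = {"4090": (14.0, 10.0), "3090 Ti": (21.1, 20.5)}  # (native ms, RT cost ms)
for card, (native_ms, cost_ms) in cases.items():
    print(f"{card}: RT adds {cost_ms / native_ms:.1%} on top of the native frame time")
# ~71.4% for the 4090 and ~97.2% for the 3090 Ti (97.1% is the same figure with different rounding)
```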

I still think that we don't need ray tracing at this stage, with only "4 nm"-class and worse process technologies available.

 
Yeah no, current RT games are hybrid Raster + RT where RT effects are added on top of a rasterized frame.

The 4090, with >2x the RT capability of the 3090 Ti, can halve the RT rendering time, but the final rendering time is still affected by rasterization performance.

For example, 4090 in CP2077 @ 4K (4090 TPU review)
Native 71.2FPS = 14ms
RT ON 41.8FPS = 24ms
Cost of RT = +10ms

Compare to 3090Ti
Native 47.3FPS = 21.1ms
RT ON 24FPS = 41.6ms
Cost of RT = +20.5ms
I'm not seeing half the render time because of 2x RT capability; you need to look at the relative difference, not the absolute one, since the cards also have improved raster/overall perf.

3090 Ti: 47.3 / 24 = 1.97 is the factor/cost of RT compared to raster only
4090: 71.2 / 41.8 = 1.70 is the factor/cost of RT compared to raster only

Still, it's clear this is a move forward, I do agree. But it's very minor.
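
If anyone wants to reproduce those factors, they're simply native FPS divided by RT-on FPS (i.e. how many times longer a frame takes with RT enabled):

```python
# Native FPS divided by RT-on FPS = how many times longer a frame takes with RT.
fps = {"3090 Ti": (47.3, 24.0), "4090": (71.2, 41.8)}
for card, (native, rt_on) in fps.items():
    print(f"{card}: an RT-on frame takes {native / rt_on:.2f}x as long as a native one")
# 3090 Ti: 1.97x, 4090: 1.70x
```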
 
Strawman bad.

I'll leave it at this: Ada is factually a different architecture from Ampere, perhaps not by much, but that's not a disputable fact; facts don't need you to agree with them. You can feel free to downplay the differences or even dismiss them, but they exist.
All I'm saying is, Nvidia saying that there are improvements, and me seeing them with my own eyes are two different things.

None of these changes translate to in-game performance differences. The core is 95% the same, and the other 5% is of questionable significance. Not even you know what's really going to change because of the mentioned changes, but it looks complicated, so surely it must be tremendous. After all, they and you say 'vastly improved'...

Marketing, man. Come on. It's clear that the block diagrams are almost identical, so hardware-wise this is not much more than a shrink and some cache changes; a larger cache is not really a change to the die, it's just using free space to add more of a thing it already had. We have seen this before: every arch iterates on the last, and lots of iterations are built on an identical floor plan. Ada is one of those, and the fact that it's identical results in end performance that is only elevated by simply adding more of it all. The 'new things' only come into play with specific dev implementations; for all we know it's nothing more than a featureset/programmable upgrade with cache to hold it. There is no new hardware here, nor is its configuration in the SM new.
This!
 
The relative cost of RT (the RT frame-time increase as a share of the native frame time) is 71.4% in the first case vs. 97.1% in the second. An improvement, but not that large.
3090 Ti: 47.3 / 24 = 1.97 is the factor/cost of RT compared to raster only
4090: 71.2 / 41.8 = 1.70 is the factor/cost of RT compared to raster only

Still, it's clear this is a move forward, I do agree. But it's very minor.
But the difference between 71.4% and 97.1% is around 37%; I'd say that's not too shabby at all for something so early on, if Ada can render the RT portion 35-40% faster. Definitely not a die-shrunk Ampere.
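
To be explicit (and assuming the ~37% is meant as the ratio of the two relative costs rather than their arithmetic difference), the math works out like this:

```python
# Ratio of the two relative RT costs quoted above (97.1% vs 71.4%).
cost_3090ti, cost_4090 = 0.971, 0.714
print(f"Ampere pays {cost_3090ti / cost_4090 - 1:.0%} more for RT, relative to its own raster frame")
# ~36%, i.e. roughly the "around 37%" figure
```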
 
But the difference between 71.4% and 97.1% is around 37%, I'd say that's not too shabby at all for something so early on if Ada can render RT 35-40% faster. Definitely not a die shrunk Ampere.

Lol, don't get too hung up on the die-shrunk part; the 1080 Ti was literally a die-shrunk Maxwell, and it took AMD 3.5 years to make something better.

I sure hope Nvidia has a hidden uarch upgrade up their sleeves, just so they can remain monolithic and still get a huge perf jump next gen; going chiplets is kind of a waste of silicon IMO.
 
The man in the video mentions Micro Center multiple times. Micro Center is walk-in only, and fewer than 12 US states have a Micro Center store.

Here in the US, only the budget-brand Zotac 4080s are anywhere near MSRP.


That's the point though - the 4080s just aren't selling like Nvidia had hoped.

Even though Micro Center is limited in the number of states they're in, there are still 25 total stores. It doesn't matter that Micro Center does in-store-only sales, because during the GPU scarcity with Ampere, cards flew off the shelves at Micro Center. People were lining up outside on the delivery days (Tuesdays and Thursdays, if I remember correctly) for a chance to get any GPU, for many months. I tried my luck a couple of times and never got one, but the store workers told me every delivery day was like this: 60+ people show up in the morning and get on the list to be randomly picked for the maybe two dozen cards they get in. This went on for months. Cards sold before they were ever put on the shelves as stock.

With all that in mind, people gobbled up the GPUs that Micro Center got in, especially for Nvidia cards. If the 4080s were as hot as they appear to be simply due to the fact that you can't find them at online retailers, the 4080s at Micro Center stores would have all sold by now and you'd be able to find zero anywhere in the US.

The 4080s just aren't moving. Just because you can't find one online, or for MSRP online, doesn't mean the cards are highly sought after. It just means that scalpers gobbled them up (along with some actual consumers who do in fact want one) and the online listings you see now are way overpriced. If the 4080s were popular and hot, hard to get, then why are the Micro Center stores still sitting on hundreds of them across all their store locations?
 
They can't get them sold here

[Attachment: Screenshot_20221121_155444.png]
 
For example, 4090 in CP2077 @ 4K (4090 TPU review)
Native 71.2FPS = 14ms
RT ON 41.8FPS = 24ms
Cost of RT = +10ms

Compare to 3090Ti
Native 47.3FPS = 21.1ms
RT ON 24FPS = 41.6ms
Cost of RT = +20.5ms
There's no doubt that Nvidia is continuing to invest in ray-tracing performance, but your example shows no more than the theoretical RT performance increase. The clock speed ratio (2730 MHz vs 1980 MHz) multiplied by the ray-tracing unit ratio is 2.1, which is about what CyberPunk shows. Shader Execution Reordering (link to PDF), like Intel's Thread Sorting Unit, will be a good boost when it's implemented by game developers. It looks like Intel will be the only one to challenge Nvidia in ray tracing until AMD gets its act together.

Edit: After reading Nvidia's whitepaper on SER, I realized that their claims of it being like out-of-order execution for CPUs are overblown. OoO execution is an inherent property of the CPU and is not switched on or off by an API call.
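
To spell the theoretical ratio out: taking the clocks quoted above and the commonly listed RT core counts (128 for the 4090 vs 84 for the 3090 Ti, one per SM; those counts are from the published specs, not the review itself), the product lands right around 2.1:

```python
# Theoretical RT throughput ratio = clock ratio x RT core count ratio.
# Clocks are the ones quoted above; the RT core counts (128 vs 84, one per SM)
# are the commonly listed specs for the 4090 and 3090 Ti.
clock_ratio = 2730 / 1980      # ~1.38
rt_core_ratio = 128 / 84       # ~1.52
print(f"theoretical uplift: {clock_ratio * rt_core_ratio:.2f}x")  # ~2.10x
```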
 
I think it's important to accept the 4080 is a powerful card; that efficiency is nuts - I'd love one but I'm not willing to pay over £1k for it. I used a small windfall when I bought my 2080ti (even then, I got it for near MSRP). But I said back then, I'd never again spend that money on a gfx card. IMO Nvidia has misjudged the market. Sure the colossal 4090 at MSRP is acceptable if it remains THE halo product. But the 4080 is ripping the piss.

I bought into G-Sync a few years back and have an older, hardware-limited monitor, and it's the only reason I never jumped to an RX 6000-series card. But now, with the RX 7000 series knocking on the door at a possible $900 mark and being way faster than a 6950 XT, it's my target card. Sure, if the 4080 was 900 bucks, I'd have probably bought one, to be honest. I'd have begrudged it, but it would have got my cash. But to pay $400-600 more just for better ray tracing is ludicrous. And I think the gamers who aren't buying the 4080s know that.

You can talk about other NV tech but when it comes down to it, I want a card that smacks my current one in the face, is better than it at RT, and isn't over 1K. Simply put - Nvidia, IMO, totally misjudged the market with the 4080.
 
There's no doubt that Nvidia is continuing to invest in ray-tracing performance, but your example shows no more than the theoretical RT performance increase. The clock speed ratio (2730 MHz vs 1980 MHz) multiplied by the ray-tracing unit ratio is 2.1, which is about what CyberPunk shows. Shader Execution Reordering (link to PDF), like Intel's Thread Sorting Unit, will be a good boost when it's implemented by game developers. It looks like Intel will be the only one to challenge Nvidia in ray tracing until AMD gets its act together.

You can always disable the ray-tracing in the game settings menu ;)
 