
Prediction about RTX 5000 Series

Hello all, I just made some calculations to predict the performance of the RTX 50 series. I started by comparing different GPUs, followed by an analysis of their performance.

Comparison of NVIDIA Graphics Cards

  • GTX 1080 Ti: 3584 CUDA cores (7168 CUDA threads), 11,800 million transistors, GDDR5X memory, 11 GB, 11 Gbps
  • RTX 2080 Ti: 4352 CUDA cores (8704 CUDA threads), 18,600 million transistors, GDDR6 memory, 11 GB, 14 Gbps
  • RTX 3090: 10496 CUDA cores, 28,300 million transistors, GDDR6X memory, 24 GB, 19.5 Gbps
  • RTX 4090: 16384 CUDA cores, 76,300 million transistors, GDDR6X memory, 24 GB, 21 Gbps
  • RTX 5090 (speculated): 21760 CUDA cores, GDDR7 memory, 32 GB, 28 Gbps

Performance Analysis

  • GTX 1080 vs. 1080 Ti: A 1.4x increase in core count resulted in only a 28% performance boost, despite improvements in memory bandwidth and VRAM.
  • RTX 2080 vs. 2080 Ti: A 1.48x increase in core count led to a 21% performance gain, showing diminishing returns even with better bandwidth and VRAM.
  • RTX 3080 vs. 3090: A 1.2x increase in core count improved performance by just 10%, with limited impact from memory and bandwidth upgrades.
  • RTX 4070 vs. 4070 SUPER: A 1.21x increase in core count led to a 15% performance boost, with no gains from other features.
  • RTX 4080 vs. 4090: Despite a 1.68x increase in core count, performance only improved by 28%.

Core Count to Performance Ratio

The data suggests that the performance increase is roughly half of the core count difference. This highlights the importance of architectural differences in determining actual performance.
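This half-the-core-delta rule can be sketched in a few lines of Python, using the core-count and performance ratios quoted in the comparisons above. The rule is an empirical observation from this data, not an official figure:

```python
# Rule of thumb from the comparisons above: the performance gain is
# roughly half of the core-count increase.

def predicted_gain(core_ratio: float) -> float:
    """Predict the performance ratio as half of the core-count delta."""
    return 1.0 + (core_ratio - 1.0) / 2.0

# (comparison, core-count ratio, observed performance ratio) from this post
observations = [
    ("GTX 1080 -> 1080 Ti", 1.40, 1.28),
    ("RTX 2080 -> 2080 Ti", 1.48, 1.21),
    ("RTX 3080 -> 3090",    1.20, 1.10),
    ("RTX 4080 -> 4090",    1.68, 1.28),
]

for name, cores, observed in observations:
    print(f"{name}: predicted {predicted_gain(cores):.2f}x, observed {observed:.2f}x")
```

The rule is rough: the 4080-to-4090 jump falls short of it, which is consistent with the diminishing returns noted above.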

Bandwidth's Impact on Performance

  • RTX 4070 Ti vs. RTX 4070 Ti SUPER: Moving from a 192-bit to a 256-bit memory bus (alongside a roughly 10% core-count bump) yielded only about a 10% performance gain, indicating that bandwidth has a limited effect.
  • GTX 1660 vs. GTX 1660 SUPER: A 75% bandwidth increase from the switch from GDDR5 to GDDR6 resulted in only a 12.6% performance boost, further suggesting that bandwidth alone does not drive major improvements.

Performance Scaling

  • RTX 4090: Performs 64% better than the RTX 3090, despite only a 1.56x increase in core count and a mere 10% improvement in bandwidth. Most gains likely come from architectural advancements.
  • RTX 3090: Shows a 45% performance increase over the RTX 2080 Ti, with a 1.2x core count increase (measured against the 2080 Ti's 8704 CUDA threads) and 1.4x the memory bandwidth. Architecture likely contributes around 20% of this boost.
  • RTX 2080 Ti: Performs 35% better than the GTX 1080 Ti, based on a 1.21x core increase and a 27% improvement in bandwidth. The Turing architecture adds roughly 15% performance improvement.

Transistor and Core Ratios

  • RTX 3090: Has 1.52x more transistors than the RTX 2080 Ti, with 1.2x more cores.
  • RTX 4090: Has 2.7x more transistors than the RTX 3090, with 1.56x more cores.

What Will the RTX 5090 Be?

I believe the RTX 4090 could have been even more powerful with a 512-bit memory bus, which would potentially have increased performance by 16.6%. For the RTX 5090, we can assume a similar 16.6% improvement, plus a 10% gain from the increased memory bandwidth (512-bit vs. the 4090's 384-bit) and another 6% from the core-count difference, for a total performance improvement of about 36%. Adding in the benefits of GDDR7 memory, which could contribute another 10%, the RTX 5090 may offer up to a 50% performance increase over the RTX 4090.

All told, I expect the RTX 5090 (32GB, 512-bit) to outperform the RTX 4090 by 60-70%. However, if the RTX 5090 only gets 24GB on a 384-bit bus, the increase may be around 40-45%.
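As a sanity check, the stacking above can be written out explicitly. The individual factors are speculation, and the model assumes they simply multiply:

```python
# Speculative RTX 5090 uplift over the RTX 4090, built from the factors
# above: +16.6% (512-bit bus), +10% (bandwidth), +6% (cores), +10% (GDDR7).
from math import prod

without_gddr7 = prod([1.166, 1.10, 1.06])  # bus + bandwidth + cores
with_gddr7 = without_gddr7 * 1.10          # add the GDDR7 contribution

print(f"without GDDR7: ~{(without_gddr7 - 1) * 100:.0f}%")
print(f"with GDDR7:    ~{(with_gddr7 - 1) * 100:.0f}%")
```

The products land at roughly 36% and 50%, matching the figures in the paragraph above.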

Predictions for the RTX 50 Series

  • RTX 5080: This may be the weakest x80-tier GPU in generations due to the speculated core-count cuts. Leaks suggest 10752 cores, while the full flagship die could have up to 24768. The performance could increase by 1.05x (from core count) x 1.07x (from GDDR7) x 1.15x (from architecture), equaling about 30% more than the RTX 4080. We could see 16GB with 2GB modules and 24GB with 3GB modules.
  • RTX 5070: Could be a great card if it has a core count between 6144 and 7424. I believe it will have 7168 cores. The performance could be 1x (from core count) x 1.08x (from GDDR7) x 1.2x (from architecture), or equal to a 30% gain over the 4070 Super, potentially matching the 4080. We may see 15GB/18GB variants, along with a 12GB model.
  • RTX 5060 Ti: Likely based on the GB206 chip with either 4864 or 5120 cores. The performance could be 1.085x (from core count) x 1.09x (from GDDR7) x 1.1x (with a 192-bit bus) x 1.15x (from architectural improvements), resulting in roughly a 50% performance increase over the RTX 4060 Ti. With a 192-bit bus it would be more powerful than the RTX 4070; without one, it may perform on par with the RTX 4070, or about 5% better. It could come with 12GB on a 192-bit bus, or 8GB on a 128-bit bus.
  • RTX 5060: Likely based on the GB207 chip with 3584 cores. Performance could be 1.0833x (from core count) x 1.11x (from GDDR7) x 1.2x (from architectural improvements and higher clock speeds) = a 44% performance uplift over the RTX 4060, putting it on par with or slightly better than the RTX 3070 Ti. It could have 8GB and 12GB variants with a 128-bit bus.
  • RTX 5050: Based on the GB207 chip with 2560 cores, it may have 8GB and a 128-bit bus. Performance could be 1.07x (from the bus upgrade from 96-bit to 128-bit) x 1.1x (from GDDR7) x 1.15x (from architectural improvements); after accounting for its lower core count relative to the RTX 4060, that works out to a boost of around 14% over the RTX 4060, putting it on par with the 6700 XT or RTX 3060 Ti.
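The per-card estimates above all follow the same multiplicative pattern, which can be tabulated in a short sketch. The factor values are my speculation, and note that the rounded percentages in the bullets can differ slightly from the exact products:

```python
# Multiplicative uplift model used in the bullets above:
# uplift = (core factor) x (memory factor) x [optional bus factor] x (arch factor)
from math import prod

predictions = {
    # card: (speculative factors, baseline card)
    "RTX 5080":    ([1.05, 1.07, 1.15], "RTX 4080"),
    "RTX 5070":    ([1.00, 1.08, 1.20], "RTX 4070 SUPER"),
    "RTX 5060 Ti": ([1.085, 1.09, 1.10, 1.15], "RTX 4060 Ti"),
    "RTX 5060":    ([1.0833, 1.11, 1.20], "RTX 4060"),
}

for card, (factors, baseline) in predictions.items():
    uplift = prod(factors)
    print(f"{card}: ~{(uplift - 1) * 100:.0f}% over the {baseline}")
```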

Overall Predictions

  • RTX 5090 32GB = RTX 4090 + 60-70%
  • RTX 5080 16/24GB = RTX 4080 + 30%, or RTX 4090 + 5-10%
  • RTX 5070 12GB = Performance on par with RTX 4080
  • RTX 5060 Ti 12/16GB = Performance equal to RTX 4070 or RTX 3080
  • RTX 5060 8/12GB = Performance on par with RTX 3070 Ti or RTX 4060 Ti + 10-15%
  • RTX 5050 8GB = Performance similar to RTX 3060 Ti or RTX 4060 + 14%
Thank you for reading!
 
No one is talking about cache, is that not one of the things that helped the 40 series over the 30 series?
 
No one is talking about the energy required.
Enough to power a small town?
Double 12VHPWR connectors?
 
Why waste your time speculating?
 
Basically you can just say the 5080 is half a 5090 and the 5080 is pretty much the 4080S with a heavily inflated TDP.

So they're gonna have to get their boost from architecture/optimization but mostly from DLSS. Be ready for the next killer software feature with a hardware requirement. And part of the boost also from higher clocks, ergo lower efficiency.
It's either that, or a single shader will do much more work (given the higher TDP), but then it leaves us to wonder why they didn't cut the 5090 down more.

I don't think there are any signs the shader is going to be that different.

The only plausible path I see for Blackwell is a big pile of Nvidia smoke and mirrors, manipulated DLSS results in the presentation slides and one major clusterfck of commercial upsell because 'what will you do if you have Ada now' - if you have a 4090 you will upgrade like a sheep as you always did and 'omg this is REALLY fast' and if you don't you're basically stuck with subpar choices to make at a high cost of entry.

Nice.
 
I am not even sure why you are posting here? You hate Nvidia with a passion, and you think anyone who uses their hardware is a clown.

You should be kicking back in the comfort of knowing you have a monster in your rig.
 
No one is talking about cache, is that not one of the things that helped the 40 series over the 30 series?
Yes, you're right; cache really did help the performance difference. For simplicity, though, I folded it into the generational/architectural improvement factor.
Why waste your time speculating?
I've actually wanted to write this up for a while, and today was the day. I like predicting hardware with math.
No one is talking about the energy required.
Enough to power a small town?
Double 12VHPWR connectors?
I don't know either, but I can't make sense of the RTX 5080's leaked 400W power figure, which is pretty high considering the RTX 4080 draws only 320W, even allowing for GDDR7 and the new process node.
Basically you can just say the 5080 is half a 5090 and the 5080 is pretty much the 4080S with a heavily inflated TDP.

So they're gonna have to get their boost from architecture/optimization but mostly from DLSS. Be ready for the next killer software feature with a hardware requirement. And part of the boost also from higher clocks, ergo lower efficiency.
It's either that, or a single shader will do much more work (given the higher TDP), but then it leaves us to wonder why they didn't cut the 5090 down more.

I don't think there are any signs the shader is going to be that different.

The only plausible path I see for Blackwell is a big pile of Nvidia smoke and mirrors, manipulated DLSS results in the presentation slides and one major clusterfck of commercial upsell because 'what will you do if you have Ada now' - if you have a 4090 you will upgrade like a sheep as you always did and 'omg this is REALLY fast' and if you don't you're basically stuck with subpar choices to make at a high cost of entry.

Nice.
You know why Nvidia operates this way: if they can't sell these overpriced GPUs, their revenue will fall and their stock price will decline. So we need some competition. You did your part by buying an RX 7900 XT. I really hope Intel/AMD can release high-end GPUs at low prices. Personally, I don't buy high-end GPUs because new ones arrive every two years. I'm using an RTX 3060, and it's sufficient for 1080p.
 
Well, predictions/suppositions aside, the one (big) thing not mentioned yet is how many arms, legs, left testicles/mammaries and first-born children these new cards are gonna cost us...

That my friends is NOT gonna be pretty, and therefore I am predicting a corresponding uptick in the sales of AED machines and/or ER visits....IF nGreediya's past patterns hold true that is, which at this point, there is no reason to think that would change :(

p.s.a.... I've already planned to buy up a buttload of shares in the AED mfgr's, so maybe I can make some $$ from this too, maybe not on the same level as nGreediya, but still....:D
 
Well, predictions/suppositions aside, the one (big) thing not mentioned yet is how many arms, legs, left testicles/mammaries and first-born children these new cards are gonna cost us...

That my friends is NOT gonna be pretty, and therefore I am predicting a corresponding uptick in the sales of AED machines and/or ER visits....IF nGreediya's past patterns hold true that is, which at this point, there is no reason to think that would change :(

p.s.a.... I've already planned to buy up a buttload of shares in the AED mfgr's, so maybe I can make some $$ from this too, maybe not on the same level as nGreediya, but still....:D
I guess the cost of the RTX 5090 will be around $1,699 to $1,999.

RTX 5090 $1699-1999
RTX 5080 $999 for 16GB version, $1199-1399 for 24GB version
RTX 5070 $549-649 for 12GB version, $699 for 18GB version
RTX 5060 Ti $399 for 12GB version; if it doesn't come with a 192-bit bus, about $349 for 8GB
RTX 5060 $299 for 8GB version, $349 for 12GB version
RTX 5050 $249 (I think Nvidia does not care about low-budget customers; the RTX 3050 6GB is an example of how they price low-end products.)
RTX 4050 6GB for $199 (they set the precedent for a 6GB card by releasing the RTX 3050 6GB)
RTX 3050 6GB for $149 (it was recently discounted by $20 to $179; I expect it to drop further once the RTX 5050 and 4050 are out)
 
I am not even sure why you are posting here? You hate Nvidia with a passion, and you think anyone who uses their hardware is a clown.

You should be kicking back in the comfort of knowing you have a monster in your rig.
I'm sure moderators will jump at such hostility with utmost ferocity. Oh wait... :(

Seriously, though, the world isn't so black and white. If you disregard the lashing out at 4090 owners (which isn't entirely unwarranted, but that's beside the point), Vayra's post does have some merit.
 
Well, raw raster improvement is one thing, but I'm more interested in the efficiency uplift and the new features coming with DLSS4.

If the 5090 can improve efficiency by 50% while being 50% faster, that would be a worthy upgrade from the 4090.

As for the 5080, if it's the chart topper in terms of efficiency, it should find its way into more SFF builds; I've built a couple of SFFs with the 4080/4080S already.
 
No one is talking about the energy required.
Enough to power a small town?
Double 12VHPWR connectors?
A miniature nuclear reactor comes bundled :rolleyes:

But seriously, the power draw of GPUs is getting totally out of hand. What happened to the reasonable monsters like the 980 Ti and 1080 Ti?
 
The desktop 24GB 5080 rumors feel like hopium to me. My money is on those 3GB modules being reserved for mobile chips like the 5090M.

I mean, I'd love for it to be real, but that sounds like the least Nvidia move ever to me. Especially when AMD has no high end offerings to compete.

If anything, I think we might see a 24GB 5080 Ti cut out of GB202 when the refresh hits (as big as that die is, they've got to have a ton of defects), but Nvidia has a long history of starving the mid to high end on VRAM. Only halo products and gimmicky low end cards get the good stuff. :laugh:
 
Pretty certain the 5060 is going to be mega-gimped and barely beat the previous gen again, and the less we talk about the scam monsters that are the xx50 cards, the better. I also wouldn't get my hopes up that Nvidia would be forced to do anything if RDNA4 comes out great in the low-to-mid range; 1050 Tis were selling pretty well while being a lot slower than the RX 470/570.
 

Predictions for the RTX 50 Series

  • RTX 5080: x 1.15x (from architecture)
  • RTX 5070: x 1.2x (from architecture)
  • RTX 5060 Ti: x 1.15x (from architectural improvements)
  • RTX 5060: x 1.2x (from architectural improvements and higher clock speeds)
  • RTX 5050: x 1.15x (from architectural improvements)
Why do you use an x1.15 value for some and a x1.2 value for others? They're all the same architecture.
 
Why do you use an x1.15 value for some and a x1.2 value for others? They're all the same architecture.
Because of the clock speed, cache, and memory speed. I believe Nvidia could unleash the full potential of the GB207 in the 5060. If not, it could be another disaster of a GPU, potentially dead on arrival. I’m feeling a bit optimistic about the x60 tier GPUs in this generation.
Pretty certain the 5060 is going to be mega-gimped and barely beat the previous gen again, and the less we talk about the scam monsters that are the xx50 cards, the better. I also wouldn't get my hopes up that Nvidia would be forced to do anything if RDNA4 comes out great in the low-to-mid range; 1050 Tis were selling pretty well while being a lot slower than the RX 470/570.
It probably will turn out that way if it uses GB207.
 
I believe Nvidia could unleash the full potential of the GB207 in the 5060. If not, it could be another disaster of a GPU, potentially dead on arrival. I’m feeling a bit optimistic about the x60 tier GPUs in this generation.
They could, but leaks indicate that GB207 is smaller than AD107, with only 2560 CUDA cores and 32 ROPs. Even if Nvidia do unleash the full potential of GB207 in the 5060, it won't be much faster than the 4060.
If the 5060 does actually have 3584 CUDA cores (either if GB207 is bigger than leaks indicate, or if it's based on a cut-down GB206) it could be a decent bit faster than the 4060, and would likely be about as fast as your prediction indicates, assuming Blackwell has at least modest architectural and/or clock-frequency improvements over Lovelace.
GB206 supposedly has 4608 CUDA cores and a 128-bit bus, the same as AD106. It could possibly match the 4070 if it uses 3GB GDDR7 chips, but would otherwise be limited by VRAM capacity. I don't think it would actually benefit much from a 192-bit bus if 4x3GB is cheaper than 6x2GB, as GDDR7 (at 32Gbps) on a 128-bit bus will have slightly more total bandwidth than the GDDR6X (at 21Gbps) on an RTX 4070. A 128-bit bus could cause problems if they use slower 28Gbps GDDR7 though, or if 3GB chips aren't available at reasonable prices when Nvidia starts manufacturing RTX 5060 Tis.
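To check the bandwidth comparison above: total bandwidth is the bus width in bytes times the per-pin data rate. A quick sketch, where the 128-bit 32Gbps GDDR7 configuration is the hypothetical RTX 5060 Ti being discussed:

```python
# Memory bandwidth in GB/s = bus width (bits) / 8 * per-pin data rate (Gbps)
def bandwidth_gbps(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

gddr7_128  = bandwidth_gbps(128, 32.0)   # hypothetical 128-bit GDDR7 card
gddr6x_192 = bandwidth_gbps(192, 21.0)   # RTX 4070 (192-bit GDDR6X)

print(f"128-bit GDDR7 @ 32 Gbps:  {gddr7_128:.0f} GB/s")
print(f"192-bit GDDR6X @ 21 Gbps: {gddr6x_192:.0f} GB/s")
```

That works out to 512 GB/s versus the 4070's 504 GB/s, which is why a 128-bit GDDR7 card wouldn't necessarily be bandwidth-starved relative to a 192-bit GDDR6X one.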

I expect that the 5060 will be based on a cut-down GB206 and that the much higher bandwidth of GDDR7 compared to the 4060 Ti's 18Gbps GDDR6 will allow it to outperform the 4060 Ti, which is severely bandwidth-limited. I'm a lot more pessimistic than you in my prediction of the 5060 Ti though: if it only has 8GB VRAM, it's DOA except for competitive esports; any 16GB version would require clamshelling and be too expensive, like the 4060 Ti 16GB; and if it has 12GB it's likely to either be too expensive or to come out too late to matter. Nvidia could surprise me though.
 
My take is 50% for the flagship and between 0 and 20% for the rest, thanks to GDDR7.
 
RTX 5090 $1699-1999
RTX 5080 $999 for 16GB version, $1199-1399 for 24GB version
RTX 5070 $549-649 for 12GB version, $699 for 18GB version
RTX 5060 Ti $399 for 12GB version; if it doesn't come with a 192-bit bus, about $349 for 8GB
RTX 5060 $299 for 8GB version, $349 for 12GB version
RTX 5050 $249 (I think Nvidia does not care about low-budget customers; the RTX 3050 6GB is an example of how they price low-end products.)
RTX 4050 6GB for $199 (they set the precedent for a 6GB card by releasing the RTX 3050 6GB)
RTX 3050 6GB for $149 (it was recently discounted by $20 to $179; I expect it to drop further once the RTX 5050 and 4050 are out)
Multiply it by like 1.7 and it'll be somewhat realistic. Your predictions only make sense if AMD compete. They don't.
 
I do not care. If it's a good upgrade for 4070S I'll buy it, if it's not I won't and will look for a deal on a used 4080S instead.
 
I speculate each card will be more expensive than the previous generation and every card will get 8GB of RAM.
 
I guess the cost of the RTX 5090 will be around $1,699 to $1,999.

RTX 5090 $1699-1999
RTX 5080 $999 for 16GB version, $1199-1399 for 24GB version
RTX 5070 $549-649 for 12GB version, $699 for 18GB version
RTX 5060 Ti $399 for 12GB version; if it doesn't come with a 192-bit bus, about $349 for 8GB
RTX 5060 $299 for 8GB version, $349 for 12GB version
RTX 5050 $249 (I think Nvidia does not care about low-budget customers; the RTX 3050 6GB is an example of how they price low-end products.)
RTX 4050 6GB for $199 (they set the precedent for a 6GB card by releasing the RTX 3050 6GB)
RTX 3050 6GB for $149 (it was recently discounted by $20 to $179; I expect it to drop further once the RTX 5050 and 4050 are out)

Add at least 20%
 