
Prediction about RTX 5000 Series

Joined
Feb 10, 2007
Messages
2,700 (0.41/day)
Location
Oulu, Finland
System Name Enslaver :)
Processor Ryzen 7 7800X3D
Motherboard ASUS TUF Gaming B650-Plus
Cooling CPU: Noctua D15 G2, Case: 2 front in, 1 rear out
Memory 2x16GB Kingston Fury Beast RGB 6000MHz
Video Card(s) ASUS TUF RTX 4070Ti OC
Storage Samsung Evo Plus 1TB NVMe , internal WD Red 4TB for storage, WD Book 8TB
Display(s) LG CX OLED 65"
Case Lian Li LANCOOL II Mesh C Performance
Audio Device(s) HDMI audio powering Dolby Digital audio on 5.1 Z960 speaker system
Power Supply Corsair RM850x
Mouse Logitech G700
Keyboard ASUS Strix Tactic Pro
Software Windows 11 Pro x64
Joined
Oct 10, 2018
Messages
155 (0.07/day)
They could, but leaks indicate that GB207 is smaller than AD107, with only 2560 CUDA cores and 32 ROPs. Even if Nvidia do unleash the full potential of GB207 in the 5060, it won't be much faster than the 4060.
If the 5060 does actually have 3584 CUDA cores (either if GB207 is bigger than leaks indicate, or if it's based on a cut-down GB206) it could be a decent bit faster than the 4060, and would likely be about as fast as your prediction indicates, assuming Blackwell has at least modest architectural and/or clock-frequency improvements over Lovelace.
GB206 supposedly has 4608 CUDA cores and a 128-bit bus, the same as AD106. It could possibly match the 4070 if it uses 3GB GDDR7 chips, but would otherwise be limited by VRAM capacity. I don't think it would actually benefit much from a 192-bit bus if 4x3GB is cheaper than 6x2GB, as GDDR7 (at 32Gbps) on a 128-bit bus will have slightly more total bandwidth than the GDDR6X (at 21Gbps) on an RTX 4070. A 128-bit bus could cause problems if they use slower 28Gbps GDDR7 though, or if 3GB chips aren't available at reasonable prices when Nvidia starts manufacturing RTX 5060 Tis.
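As a sanity check on those numbers, here's the standard peak-bandwidth formula in a short Python sketch (the 32 Gbps and 28 Gbps GDDR7 rates are the leaked speeds discussed above, not confirmed specs):
Code:
# Peak memory bandwidth: bus width (bits) / 8 * per-pin data rate (Gbps).
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

print(bandwidth_gbs(128, 32))  # 512.0 GB/s - 128-bit GDDR7 at the leaked 32 Gbps
print(bandwidth_gbs(192, 21))  # 504.0 GB/s - RTX 4070's 192-bit GDDR6X at 21 Gbps
print(bandwidth_gbs(128, 28))  # 448.0 GB/s - the slower 28 Gbps GDDR7 case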

I expect that the 5060 will be based on a cut-down GB206 and that the much higher bandwidth of GDDR7 compared to the 4060 Ti's 18Gbps GDDR6 will allow it to outperform the 4060 Ti, which is severely bandwidth-limited. I'm a lot more pessimistic than you in my prediction of the 5060 Ti though: if it only has 8GB VRAM, it's DOA except for competitive esports; any 16GB version would require clamshelling and be too expensive, like the 4060 Ti 16GB; and if it has 12GB it's likely to either be too expensive or to come out too late to matter. Nvidia could surprise me though.
I didn't know that. I had just assumed the naming trend of the x07 die would continue. So, the RTX 5060 should be based on GB206 instead. They could increase the 5060's core count to 4096, or even to the full 4608 of the GB206 die. They won't use GB205 or GB204 for the 5060 Ti. Its performance could be on the same level as the 4070, but at 1080p. I saw recent leaks about the RTX 5070, and they are frustrating. How could it merely match the 4070 Ti? We already have the RTX 4070S for $599, so why would we buy this DOA disaster? It's nonsense. Then again, it's Nvidia; maybe they could use GB207 for the 5060. They could give it 12GB of VRAM on a 192-bit bus along with GDDR7, which could lift performance to RTX 4060 Ti levels: RTX 5060 12GB (2560 cores) = RTX 4050 (140W Laptop) x 1.08 (96 bit to 128 bit) x 1.15 (128 bit to 192 bit) x 1.2 (architectural boost) x 1.08 (GDDR7) = 1.60. To compare against the RTX 4060, we divide by 1.23, which gives 1.30, so performance would be on par with the RTX 4060 Ti 16GB or RTX 3070. An RTX 5060 on GB207 with 12GB is okay-ish for $299. I had been expecting the RTX 5060 to replace my 3060, but if this happens, I don't know whether I'd buy it even at $299. On the other hand, the RTX 5060 Ti would only have 8GB. If it had 12GB or 16GB, I would want to buy it. But at what cost?
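For reference, that chain of factors multiplied out as a minimal Python sketch; every factor is a guess from the post above, not a measurement:
Code:
# The poster's multiplicative guess for a GB207-based RTX 5060 12GB,
# relative to the RTX 4050 (140 W laptop). All factors are assumptions.
factors = [
    1.08,  # 96-bit -> 128-bit bus
    1.15,  # 128-bit -> 192-bit bus
    1.20,  # architectural boost
    1.08,  # GDDR7
]
vs_4050_laptop = 1.0
for f in factors:
    vs_4050_laptop *= f

print(f"vs RTX 4050 (laptop): {vs_4050_laptop:.2f}x")         # ~1.61x
print(f"vs RTX 4060:          {vs_4050_laptop / 1.23:.2f}x")  # ~1.31x, i.e. ~4060 Ti level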
 
Joined
Feb 24, 2021
Messages
177 (0.13/day)
System Name Upgraded CyberpowerPC Ultra 5 Elite Gaming PC
Processor AMD Ryzen 7 5800X3D
Motherboard MSI B450M Pro-VDH Plus
Cooling Thermalright Peerless Assassin 120 SE
Memory CM4X8GD3000C16K4D (OC to CL14)
Video Card(s) XFX Speedster MERC RX 7800 XT
Storage TCSunbow X3 1TB, ADATA SU630 240GB, Seagate BarraCuda ST2000DM008 2TB
Display(s) AOC Agon AG241QX 1440p 144Hz
Case Cooler Master MasterBox MB520 (CyberpowerPC variant)
Power Supply 600W Cooler Master
They won't use GB205 or GB204 for 5060 Ti.

Why not? Apparently there is no GB204, but GB205 is supposed to be smaller than AD104. They probably wouldn't use the full die, but I don't think a cut-down GB205 die being used for the RTX 5060 Ti is unrealistic. The RTX 4060 Ti didn't sell that well, so I think Nvidia will want to give its successor at least a modest upgrade. By selling the full GB205 die as the RTX 5070 12GB for ~$550, they can still make a ton of profit on the cut-down die as an RTX 5060 Ti with 10GB or 12GB at ~$450. They'll make better margins than on the current RTX 4070 (because the die is smaller) or RTX 4060 Ti 16GB (because clamshelling RAM is expensive, even though the GPU die is cheap), and Nvidia fans will buy it.

It is Nvidia, maybe they could use GB207 for 5060. They could give 12GB VRAM on 192 bit bus along with GDDR7.
No. There's no chance whatsoever that GB207 will have a 192-bit bus. It would make the die significantly larger (the bus width isn't just a result of how many RAM chips are connected - the physical interfaces and memory controllers need to be built into the GPU, and they take up a significant proportion of the die area), and it just isn't necessary on a GPU as weak as GB207 is expected to be, especially if it uses GDDR7 (to be fair, according to leaks, GB207 might be limited to GDDR6; but it would still be slower than an RTX 4060 Ti, which uses 128-bit GDDR6, so GB207 doesn't even need GDDR7 in order to avoid being severely limited by bandwidth). Giving such a small die a 192-bit bus would be a waste of money (assuming the leaks are accurate and GB207 has 2560 CUDA cores, a version with a 192-bit bus would be about 15%-20% larger than one with a 128-bit bus, therefore about 15%-20% more expensive; plus the VRAM itself would cost 50% more). Nvidia wasting money and giving us more VRAM than they think we need, in a generation where they adopt a much faster and more expensive type of VRAM, is absolutely the last thing they would do.

Even though the RTX 5060 being limited to 8GB VRAM would suck, I'm certain that Nvidia would rather wait for 3GB GDDR7 chips to come out so they can give it 12GB while minimising their own costs, and use the limited VRAM capacity as an excuse to upsell us on more expensive GPUs with wider buses in the meantime. If Nvidia were really worried about the RTX 5060 being outcompeted by cheaper AMD GPUs, maybe (if the leaks aren't up-to-date) they would give GB206 a 192-bit bus and base the RTX 5060 on that die, but GB207 having a 192-bit bus is unrealistic. It's questionable whether GB206 would benefit from a 192-bit bus; GB207 definitely won't have one. If the RTX 5060 is 12GB, it's either using 3GB chips or using a larger die.
 
Joined
Oct 10, 2018
Messages
155 (0.07/day)
By selling the full GB205 die as the RTX 5070 12GB for ~$550, they can still make a ton of profit on the cut-down die as an RTX 5060 Ti with 10GB or 12GB at ~$450.
$450 is a little harsh, but it should be faster than the 4070S to justify that cost.
They'll make better margins than on the current RTX 4070 (because the die is smaller) or RTX 4060 Ti 16GB (because clamshelling RAM is expensive, even though the GPU die is cheap), and Nvidia fans will buy it.
Which affects cost more: the die size or the VRAM modules? For example, the graph below states that one module costs $2.274; multiplying by 8 gives the 8GB cost, about $18.19. That is quite a low price. Looking back over 3 years, the cost of GDDR6 modules has dropped to 1/6 of its former price, or even less. So why couldn't they deliver larger VRAM options for the 4060? They could have offered a 16GB variant for $349 while only adding about $18 in cost. I think they just want to sell high-end GPUs.

[Attachment: GDDR6 spot-price chart]
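As a sanity check, the module arithmetic above in a short Python sketch, assuming $2.274 really is the spot price of one 8Gb (1GB) GDDR6 module:
Code:
# Memory cost from the quoted spot price: $2.274 per 8Gb (1GB) module.
module_price_usd = 2.274

cost_8gb  = 8  * module_price_usd   # 8 modules for an 8GB card
cost_16gb = 16 * module_price_usd   # 16 modules for a 16GB card
print(f"8GB:  ${cost_8gb:.2f}")                          # ~$18.19
print(f"16GB: ${cost_16gb:.2f}")                         # ~$36.38
print(f"8GB -> 16GB adds: ${cost_16gb - cost_8gb:.2f}")  # ~$18.19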

Even though the RTX 5060 being limited to 8GB VRAM would suck, I'm certain that Nvidia would rather wait for 3GB GDDR7 chips to come out so they can give it 12GB while minimising their own costs, and use the limited VRAM capacity as an excuse to upsell us on more expensive GPUs with wider buses in the meantime.
Actually, yes. Yesterday's leaks showed Nvidia could release the RTX 5060 in March 2025, but they may postpone it to June or July to wait for 3GB modules, or perhaps to offer two variants. I don't want to believe the GB207 option. If GDDR6 is used on GB207, we could see an RTX 5060 16GB GDDR6, and its performance could be a nightmare. You're right that it couldn't compete with the 4060 Ti; it would likely match the 6700 XT or 3060 Ti and be 10-15% slower than the 4060 Ti. On the other hand, if they use GB206, it would be on par with the 3070. I would pick an RTX 5060 8GB GB206 GDDR7 over a GB207 16GB GDDR6.
 
Joined
Feb 24, 2021
Messages
177 (0.13/day)
System Name Upgraded CyberpowerPC Ultra 5 Elite Gaming PC
Processor AMD Ryzen 7 5800X3D
Motherboard MSI B450M Pro-VDH Plus
Cooling Thermalright Peerless Assassin 120 SE
Memory CM4X8GD3000C16K4D (OC to CL14)
Video Card(s) XFX Speedster MERC RX 7800 XT
Storage TCSunbow X3 1TB, ADATA SU630 240GB, Seagate BarraCuda ST2000DM008 2TB
Display(s) AOC Agon AG241QX 1440p 144Hz
Case Cooler Master MasterBox MB520 (CyberpowerPC variant)
Power Supply 600W Cooler Master
Which affects cost more: the die size or the VRAM modules? For example, the graph below states that one module costs $2.274; multiplying by 8 gives the 8GB cost, about $18.19. That is quite a low price. Looking back over 3 years, the cost of GDDR6 modules has dropped to 1/6 of its former price, or even less. So why couldn't they deliver larger VRAM options for the 4060? They could have offered a 16GB variant for $349 while only adding about $18 in cost. I think they just want to sell high-end GPUs.

This is true, but 8Gb VRAM chips are generally obsolete; no GPU that's still being manufactured in significant volume uses them (unless Nvidia is still making GTX 1650s/MX550s for low-end laptops, which is possible but unlikely at this point). I can't find spot prices for 16Gb GDDR6 (except at Mouser, which only has them for £15 or more, and Nvidia clearly isn't paying that much), but I expect they cost more than $2.274 per GB.

Much of the VRAM cost of the RTX 4060 Ti 16GB (and also the RX 7600 XT) comes from "clamshelling", rather than just the VRAM itself.
In order to fit 16GB of RAM on a 128-bit bus using 2GB (16Gb) chips, they need to put chips on both sides of the circuit board. This means that both the circuit board and the assembly process to install the VRAM onto it need to be more complex. It's not possible to fit 16GB on a 128-bit bus using 8Gb chips at all, as each chip is 32 bits wide and at most 2 chips fit per memory controller by clamshelling.
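A minimal sketch of that capacity limit, assuming each GDDR chip occupies one 32-bit channel and clamshelling allows at most two chips per channel:
Code:
# Max VRAM on a given bus: one 32-bit channel per chip, and clamshelling
# allows up to two chips per channel (one on each side of the PCB).
def max_vram_gb(bus_bits: int, chip_gb: int, clamshell: bool = False) -> int:
    channels = bus_bits // 32
    chips = channels * (2 if clamshell else 1)
    return chips * chip_gb

print(max_vram_gb(128, 2))                  # 8 GB  - single-sided, 16Gb chips
print(max_vram_gb(128, 2, clamshell=True))  # 16 GB - the 4060 Ti 16GB layout
print(max_vram_gb(128, 1, clamshell=True))  # 8 GB  - why 8Gb chips can't reach 16GB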

I didn't mean to imply that the RTX 5060 can't have 16GB VRAM; it can, it would just require clamshelling. It might be available in both an 8GB and a 16GB version, like the RTX 4060 Ti. But if there is a 16GB version of the RTX 5060, I expect it will be overpriced relative to its performance advantage over the 8GB version, as the RTX 4060 Ti 16GB was.
 

oliv_r

New Member
Joined
Dec 5, 2024
Messages
10 (0.28/day)
Hello all, I just made some calculations to predict the performance of the RTX 50 series. I started by comparing different GPUs, followed by an analysis of their performance.

Comparison of NVIDIA Graphics Cards

  • GTX 1080 Ti: 3584 CUDA cores (7168 CUDA threads), 11,800 million transistors, GDDR5X memory, 11 GB, 11 Gbps
  • RTX 2080 Ti: 4352 CUDA cores (8704 CUDA threads), 18,600 million transistors, GDDR6 memory, 11 GB, 14 Gbps
  • RTX 3090: 10496 CUDA cores, 28,300 million transistors, GDDR6X memory, 24 GB, 19.5 Gbps
  • RTX 4090: 16384 CUDA cores, 76,300 million transistors, GDDR6X memory, 24 GB, 21 Gbps
  • RTX 5090 (speculated): 21760 CUDA cores, GDDR7 memory, 32 GB, 28 Gbps

Performance Analysis

  • GTX 1080 vs. 1080 Ti: A 1.4x increase in core count resulted in only a 28% performance boost, despite improvements in memory bandwidth and VRAM.
  • RTX 2080 vs. 2080 Ti: A 1.48x increase in core count led to a 21% performance gain, showing diminishing returns even with better bandwidth and VRAM.
  • RTX 3080 vs. 3090: A 1.2x increase in core count improved performance by just 10%, with limited impact from memory and bandwidth upgrades.
  • RTX 4070 vs. 4070 SUPER: A 1.21x increase in core count led to a 15% performance boost, with no gains from other features.
  • RTX 4080 vs. 4090: Despite a 1.68x increase in core count, performance only improved by 28%.

Core Count to Performance Ratio

The data suggests that the performance increase is roughly half of the core count difference. This highlights the importance of architectural differences in determining actual performance.
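That rule of thumb as a short sketch, checked against the comparisons above; it's a rough empirical heuristic, not a law:
Code:
# Heuristic from the data above: realized speedup is roughly half of the
# core-count increase. Observed values are the percentages quoted above.
def speedup_from_cores(core_ratio: float) -> float:
    return 1 + (core_ratio - 1) / 2

comparisons = [
    ("GTX 1080 -> 1080 Ti", 1.40, 1.28),
    ("RTX 3080 -> 3090",    1.20, 1.10),
    ("RTX 4080 -> 4090",    1.68, 1.28),
]
for name, ratio, observed in comparisons:
    print(f"{name}: rule {speedup_from_cores(ratio):.2f}x, observed {observed:.2f}x")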

Bandwidth's Impact on Performance

  • RTX 4070 Ti vs. RTX 4070 Ti SUPER: Comparing the 192-bit and 256-bit memory bus, the 10% performance gain indicates that bandwidth has a limited effect.
  • GTX 1660 vs. GTX 1660 SUPER: A 75% bandwidth increase from moving GDDR5 to GDDR6 resulted in only a 12.6% performance boost, further suggesting that bandwidth alone does not drive major improvements.

Performance Scaling

  • RTX 4090: Performs 64% better than the RTX 3090, despite only a 1.56x increase in core count and a mere 10% improvement in bandwidth. Most gains likely come from architectural advancements.
  • RTX 3090: Shows a 45% performance increase over the RTX 2080 Ti, with a 1.2x core count increase and 1.4x memory bandwidth. Architecture likely contributes around 20% of this boost.
  • RTX 2080 Ti: Performs 35% better than the GTX 1080 Ti, based on a 1.21x core increase and a 27% improvement in bandwidth. The Turing architecture adds roughly 15% performance improvement.

Transistor and Core Ratios

  • RTX 3090: Has 1.52x more transistors than the RTX 2080 Ti, with 1.2x more cores.
  • RTX 4090: Has 2.7x more transistors than the RTX 3090, with 1.56x more cores.

What Will the RTX 5090 Be?

I believe the RTX 4090 could have been even more powerful if it had a 512-bit memory bus, which would potentially increase performance by 16.6%. For the RTX 5090, we can assume a similar improvement, with a 10% gain from the increased memory bandwidth (384-bit vs. 512-bit) and another 6% from the core count difference. This would yield a total performance improvement of 36%. Adding in the benefits of GDDR7 memory, which could contribute an additional 10%, the RTX 5090 may offer up to a 50% performance increase over the RTX 4090.

Thus, the RTX 5090 (32GB) is expected to outperform the RTX 4090 by 60-70%. However, if the RTX 5090 only has 24GB and a 384-bit bus, the increase may be around 40-45%.
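Multiplying out the factors from the paragraph above as a quick sketch; each factor is the author's assumption, not a measurement:
Code:
# The RTX 5090 estimate above as explicit multiplication.
bus_512_uplift = 1.166  # what a 512-bit bus would have added to a 4090
bandwidth_gain = 1.10   # 384-bit -> 512-bit on the 5090
core_gain      = 1.06   # core count difference
gddr7_gain     = 1.10   # GDDR7's contribution

base = bus_512_uplift * bandwidth_gain * core_gain
print(f"before GDDR7: +{base - 1:.0%}")               # ~+36%
print(f"with GDDR7:   +{base * gddr7_gain - 1:.0%}")  # ~+50%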

Predictions for the RTX 50 Series

  • RTX 5080: This may be the weakest X80 generation GPU due to speculated core count cuts. Speculations suggest 10752 cores, while the full die could have up to 24768. The performance could increase by 1.05x (from core count) x 1.07x (from GDDR7) x 1.15x (from architecture), equaling about 30% more than the RTX 4080. We could see 16GB with 2GB modules and 24GB with 3GB modules.
  • RTX 5070: Could be a great card if it has a core count between 6144 and 7424. I believe it will have 7168 cores. The performance could be 1x (from core count) x 1.08x (from GDDR7) x 1.2x (from architecture), or equal to a 30% gain over the 4070 Super, potentially matching the 4080. We may see 15GB/18GB variants, along with a 12GB model.
  • RTX 5060 Ti: Likely based on the GB206 chip with either 4864 or 5120 cores. The performance could be 1.085x (from core count) x 1.09x (from GDDR7) x 1.1x (with a 192-bit bus) x 1.15x (from architectural improvements), resulting in a roughly 50% performance increase over the RTX 4060 Ti. If it has a 192-bit bus, it will be more powerful than the RTX 4070; if not, it may perform on par with the RTX 4070, or about 5% better. It could come with 12GB on a 192-bit bus, or 8GB with a 128-bit bus.
  • RTX 5060: Likely based on the GB207 chip with 3584 cores. Performance could be 1.0833x (from core count) x 1.11x (from GDDR7) x 1.2x (from architectural improvements and higher clock speeds) = a 44% performance uplift over the RTX 4060, putting it on par with or slightly better than the RTX 3070 Ti. It could have 8GB and 12GB variants with a 128-bit bus.
  • RTX 5050: Based on the GB207 chip with 2560 cores, it may have 8GB and a 128-bit bus. Performance could be 1.07x (from bus upgrade from 96-bit to 128-bit) x 1.1x (from GDDR7) x 1.15x (from architectural improvements), leading to a performance boost of around 14% over the RTX 4060, putting it on par with the 6700 XT or RTX 3060 Ti.
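The per-SKU factor chains above, multiplied out in a short sketch (all factors are the author's assumptions; the RTX 5050 chain is omitted because it's quoted against a different baseline than the 4060):
Code:
# Products of the per-SKU factor chains listed above.
from math import prod

chains = {
    "RTX 5080 vs 4080":       [1.05, 1.07, 1.15],         # cores, GDDR7, arch
    "RTX 5070 vs 4070 SUPER": [1.00, 1.08, 1.20],         # cores, GDDR7, arch
    "RTX 5060 Ti vs 4060 Ti": [1.085, 1.09, 1.10, 1.15],  # cores, GDDR7, bus, arch
    "RTX 5060 vs 4060":       [1.0833, 1.11, 1.20],       # cores, GDDR7, arch+clocks
}
for name, factors in chains.items():
    print(f"{name}: +{prod(factors) - 1:.0%}")
# -> +29%, +30%, +50%, +44%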

Overall Predictions

  • RTX 5090 32GB = RTX 4090 + 60-70%
  • RTX 5080 16/24GB = RTX 4080 + 30%, or RTX 4090 + 5-10%
  • RTX 5070 12GB = Performance on par with RTX 4080
  • RTX 5060 Ti 12/16GB = Performance equal to RTX 4070 or RTX 3080
  • RTX 5060 8/12GB = Performance on par with RTX 3070 Ti or RTX 4060 Ti + 10-15%
  • RTX 5050 8GB = Performance similar to RTX 3060 Ti or RTX 4060 + 14%
Thank you for reading!
Thoughts after CES?
 
Joined
Apr 14, 2018
Messages
743 (0.30/day)
Thoughts after CES?

Without FG and DLSS, probably closer to 15% gen over gen with the exception of the 5090 being 25-30% faster.

Lack of shader increases across the board, the same manufacturing node, likely little to no increase in clock speeds, and small bandwidth increases with some small IPC improvements.

OP’s post was horrendously optimistic.
 

Shuttlepro83

New Member
Joined
Dec 30, 2024
Messages
15 (1.50/day)
Half-Life 3 has been in the news... it would be THE killer app for it. But outside of that game, I'm not too worried about it. The mid-range models look good.
 
Joined
Feb 24, 2021
Messages
177 (0.13/day)
System Name Upgraded CyberpowerPC Ultra 5 Elite Gaming PC
Processor AMD Ryzen 7 5800X3D
Motherboard MSI B450M Pro-VDH Plus
Cooling Thermalright Peerless Assassin 120 SE
Memory CM4X8GD3000C16K4D (OC to CL14)
Video Card(s) XFX Speedster MERC RX 7800 XT
Storage TCSunbow X3 1TB, ADATA SU630 240GB, Seagate BarraCuda ST2000DM008 2TB
Display(s) AOC Agon AG241QX 1440p 144Hz
Case Cooler Master MasterBox MB520 (CyberpowerPC variant)
Power Supply 600W Cooler Master

oliv_r

New Member
Joined
Dec 5, 2024
Messages
10 (0.28/day)
I snagged a 4070 Ti Super for $700 on Newegg 2 mo. ago. I can afford the 5080, but is it worth it?
 
Joined
Aug 23, 2024
Messages
39 (0.28/day)
The more high-end options look quite promising. The problem is the 5060 range. If we look back, the 4060 was a very minor upgrade over the 3060...
5060 with 3070 Ti-like performance? :) And 5050 > 4060? :) :)
I don't think Nvidia will be that generous...
 
Joined
Nov 22, 2023
Messages
271 (0.65/day)
Without FG and DLSS, probably closer to 15% gen over gen with the exception of the 5090 being 25-30% faster.

Lack of shader increases across the board, the same manufacturing node, likely little to no increase in clock speeds, and small bandwidth increases with some small IPC improvements.

OP’s post was horrendously optimistic.

- Yeah, NV has completely leaned into their software ecosystem to sell their hardware this Gen.

With any luck I might be able to find a killer used deal on something in the 4070Ti - 4080S range so I can move my 6800XT down to my steambox and finally release my 980Ti to the stud farm.

Hopefully the unwashed NV masses have bought the performance increase promises sight unseen.
 
Joined
Feb 24, 2021
Messages
177 (0.13/day)
System Name Upgraded CyberpowerPC Ultra 5 Elite Gaming PC
Processor AMD Ryzen 7 5800X3D
Motherboard MSI B450M Pro-VDH Plus
Cooling Thermalright Peerless Assassin 120 SE
Memory CM4X8GD3000C16K4D (OC to CL14)
Video Card(s) XFX Speedster MERC RX 7800 XT
Storage TCSunbow X3 1TB, ADATA SU630 240GB, Seagate BarraCuda ST2000DM008 2TB
Display(s) AOC Agon AG241QX 1440p 144Hz
Case Cooler Master MasterBox MB520 (CyberpowerPC variant)
Power Supply 600W Cooler Master
The more high-end options look quite promising. The problem is the 5060 range. If we look back, the 4060 was a very minor upgrade over the 3060...
5060 with 3070 Ti-like performance? :) And 5050 > 4060? :) :)
I don't think Nvidia will be that generous...
The 5060 could be about as fast as a 3070 Ti, or even slightly faster, if it uses the full GB206 die. The 5050 will almost certainly be slower than the 4060 though. I don't see them using GB206 for a 50-class GPU, and GB207 is a really tiny die (smaller than AD107).

I see 2 main possibilities:

Option 1 (optimistic)
5060 Ti = cut-down GB205, 12GB or 10GB. Slightly faster than 4070 for about $450
5060 = full (or close to full) GB206, 12GB (using 3GB chips), or separate 8GB (using 2GB chips) and 16GB (using clamshelled 2GB chips) versions. Slightly faster than 3070 Ti for $300-$400 depending on VRAM config.
5050 Ti = cut-down GB206, 8GB. As fast as RTX 4060 Ti for about $250. (Possibly doesn't exist. Yields will be good, so not many GB206 dies will need to be cut down; and those that are can be used in laptops, rather than in low-margin entry-level desktop GPUs)
5050 = full GB207, 8GB. Between RTX 3060 and RTX 4060 performance for <$200, or about 3060 performance if <75W. If it's >75W, there could be a slightly cut-down <$150 <75W "RTX 5050 LP" or "RTX 4040", hopefully still with 8GB but maybe only 6GB.

Option 2 (probably more likely)
5060 Ti = full GB206, 12GB (using 3GB chips) or 16GB (using clamshelled 2GB chips). Slightly faster than 3070 Ti for about $400-$450.
5060 = cut-down GB206, 12GB (using 3GB chips), or 8GB (using 2GB chips) with a possible 16GB version. As fast as 4060 Ti (slightly slower than 3070 Ti) for $300-$400 depending on VRAM config.
5050 Ti = full GB207, 8GB. Between RTX 3060 and RTX 4060 performance for about $200.
5050 = cut-down GB207, 6GB or 8GB. <75W. About $150-180. About as fast as RTX 2060. If it's 6GB, the "5050 Ti" might be called "5050 8GB".

Whoever's in charge of TPU's database seems to have a similar idea. They have placeholders for both the 5060 and 5060 Ti with the same full GB206 spec, and we don't know which will be used yet.
 
Joined
Feb 3, 2017
Messages
3,852 (1.33/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
Code:
                              GB/s             TFLOPS
5070 vs 4070 (4070S)          +33.3% (+33.3%)  +5.8% (-13.1%)
5070Ti vs 4070Ti (4070Ti S)   +77.8% (+33.3%)  +9.5% (-0.4%)
5080 vs 4080 (4080 S)         +33.9% (+30.4%)  +15.6% (+7.9%)
5090 vs 4090                  +77.8%           +27.0%
Nvidia went hard for more memory bandwidth. Pure TFLOP numbers are not that great, especially against 4000 series Super models.
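For reference, a sketch of how both columns can be derived, using the published 4090/5090 specs (boost clocks of 2520 MHz and 2407 MHz) and counting 2 FP32 FLOPs per shader per clock:
Code:
# Derivations behind the table: peak bandwidth and FP32 TFLOPS.
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps            # GB/s

def fp32_tflops(shaders: int, boost_mhz: int) -> float:
    return 2 * shaders * boost_mhz / 1e6  # 2 FLOPs per shader per clock

bw_4090, bw_5090 = bandwidth_gbs(384, 21), bandwidth_gbs(512, 28)
tf_4090, tf_5090 = fp32_tflops(16384, 2520), fp32_tflops(21760, 2407)
print(f"5090 vs 4090 bandwidth: {bw_5090 / bw_4090 - 1:+.1%}")  # +77.8%
print(f"5090 vs 4090 TFLOPS:    {tf_5090 / tf_4090 - 1:+.1%}")  # ~+26.9%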
 
Joined
Oct 10, 2018
Messages
155 (0.07/day)
Thoughts after CES?
Actually, I didn't expect Nvidia to release GPUs this competitive in terms of price, especially the 5070 and 5070 Ti. On the other hand, the 9070 non-XT would compete with the 4080 in COD: BO6, according to a recent IGN leak.
[Attachment: leaked COD: BO6 benchmark chart]

As you can see, the RTX 5070 will supposedly perform 30-40% better than the RTX 4070.
[Attachment: RTX 5070 vs RTX 4070 performance chart]
 
Joined
Aug 30, 2020
Messages
411 (0.26/day)
Location
Texass
System Name EXTREME-FLIGHT SIM
Processor AMD RYZEN 7 9800X3D 4.7GHZ 8-core 120W
Motherboard ASUS ROG X670E Crosshair EXTREME BIOS V.2506
Cooling be quiet! Silent Loop 2 360MM, Light Wings 120 & 140MM
Memory G. SKILL Trident Z5 RGB 32GBx2 DDR5-6000 CL32/EXPOⅡ
Video Card(s) ASUS ROG Strix RTX4090 O24
Storage 2TB CRUCIAL T705 M.2, 4TB Seagate FireCuda 3.5"x7200rpm
Display(s) Samsung Odyssey Neo G9 57" 5120x1440 120Hz DP2.1 #2.Ulrhzar 8" Touchscreen(HUD)
Case be quiet! Dark Base Pro 900 Rev.2 Silver
Audio Device(s) ROG SupremeFX ALC4082, Creative SoundBlaster Katana V2
Power Supply be quiet! Dark Power Pro 12 1500W via APC Back-UPS 1500
Mouse LOGITECH Pro Superlight2 and POWERPLAY Mouse Pad
Keyboard CORSAIR K100 AIR
Software WINDOWS 11 x64 PRO 23H2, MSFS2020-2024 Aviator Edition, DCS
Benchmark Scores fast and stable AIDA64
Basically, you can just say the 5080 is half a 5090, and pretty much the 4080S with a heavily inflated TDP.

So they're gonna have to get their boost from architecture/optimization but mostly from DLSS. Be ready for the next killer software feature with a hardware requirement. And part of the boost also from higher clocks, ergo lower efficiency.
It's either that, or a single shader will do much more work (given the higher TDP), but then it leaves us wondering why they didn't cut the 5090 down more.

I don't think there are any signs the shader is going to be that different.

The only plausible path I see for Blackwell is a big pile of Nvidia smoke and mirrors, manipulated DLSS results in the presentation slides and one major clusterfck of commercial upsell because 'what will you do if you have Ada now' - if you have a 4090 you will upgrade like a sheep as you always did and 'omg this is REALLY fast' and if you don't you're basically stuck with subpar choices to make at a high cost of entry.

Nice.
Sheep? Is that derogatory?
 
Joined
Sep 17, 2014
Messages
22,840 (6.06/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
Sheep? Is that derogatory?
I'll leave it to each individual to decide for themselves whether that term applies to them. The fact that you responded to that out of an entire post says a lot though... and it's a good match for the blind early-buyer mentality you've displayed so far. But you're happy, so all is well!

With regards to the more important bits of the post you quoted... so far it's a 100% correct prediction. DLSS4, shaders confirmed, no word of a raw performance increase. Do we need more writing on the wall :)
 
Joined
May 17, 2021
Messages
3,162 (2.37/day)
Processor Ryzen 5 5700x
Motherboard B550 Elite
Cooling Thermalright Perless Assassin 120 SE
Memory 32GB Fury Beast DDR4 3200Mhz
Video Card(s) Gigabyte 3060 ti gaming oc pro
Storage Samsung 970 Evo 1TB, WD SN850x 1TB, plus some random HDDs
Display(s) LG 27gp850 1440p 165Hz 27''
Case Lian Li Lancool II performance
Power Supply MSI 750w
Mouse G502
To be fair, even if Nvidia could deliver 100% more performance, why would they? They have no competition, and it's better to hold your cards for the future; this way they already have the next generation assured, with no risk. People won't buy more or less because of it. Where else are you going to shop: Intel? AMD? lmao.
That's what happens in a market where one company is dominant. We saw it with Intel years ago.
And they don't have to scale back on R&D for investor payoffs because they have the AI boom; otherwise that would be the likely scenario.

It's just business. If it were my company, that's what I would do.
Someday, if (when) the Chinese catch up, you'll see the fake frames go away and larger performance increases return.
 
Joined
May 10, 2023
Messages
485 (0.79/day)
Location
Brazil
Processor 5950x
Motherboard B550 ProArt
Cooling Fuma 2
Memory 4x32GB 3200MHz Corsair LPX
Video Card(s) 2x RTX 3090
Display(s) LG 42" C2 4k OLED
Power Supply XPG Core Reactor 850W
Software I use Arch btw
To be fair, even if Nvidia could deliver 100% more performance, why would they? They have no competition, and it's better to hold your cards for the future; this way they already have the next generation assured, with no risk. People won't buy more or less because of it. Where else are you going to shop: Intel? AMD? lmao.
That's what happens in a market where one company is dominant. We saw it with Intel years ago.
And they don't have to scale back on R&D for investor payoffs because they have the AI boom; otherwise that would be the likely scenario.

It's just business. If it were my company, that's what I would do.
Someday, if (when) the Chinese catch up, you'll see the fake frames go away and larger performance increases return.
They will deliver a 100% perf increase... in AI workloads, especially LLMs.
GDDR6X was only used in the consumer products, while the enterprise ones went with regular GDDR6, likely because of available density, cost, and power consumption. This meant that the L40 (almost the full AD102 of the 4090) actually ended up with less memory bandwidth than the 4090 (and the 3090, FWIW), at ~864GB/s vs 1TB/s, making it slower for LLM inference than the 4090, with the benefit of being able to run bigger models thanks to its 48GB of VRAM.

Now, with GDDR7 and the 512-bit bus found on the 5090, they'll be able to offer 48, 64, and 96GB accelerator versions using a mix of clamshell designs and 3GB modules, with 1.8TB/s of bandwidth: 80% more than the 4090, and almost double that of the previous generation.
So yeah, companies and shareholders will be plenty happy with those new products that will cost well over 10 grand a pop and be backordered for years.
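A quick sketch of where those capacity and bandwidth figures come from, assuming sixteen 32-bit channels on a 512-bit bus, optional clamshelling, and 2GB or 3GB GDDR7 modules at 28 Gbps:
Code:
# Capacity options for a 512-bit GDDR7 accelerator, plus its bandwidth.
def capacity_gb(bus_bits: int, module_gb: int, clamshell: bool = False) -> int:
    channels = bus_bits // 32
    return channels * (2 if clamshell else 1) * module_gb

print(capacity_gb(512, 3))                  # 48 GB - 3GB modules, single-sided
print(capacity_gb(512, 2, clamshell=True))  # 64 GB - 2GB modules, clamshell
print(capacity_gb(512, 3, clamshell=True))  # 96 GB - 3GB modules, clamshell
print(512 / 8 * 28)                         # 1792.0 GB/s -> ~1.8 TB/s at 28 Gbps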
 
Joined
Oct 10, 2018
Messages
155 (0.07/day)
How much performance difference is there between the RTX 40 and RTX 50 series?

[Attachments: RTX 50 vs RTX 40 series performance comparison charts]


If you swap the 3070 for the 3070 Ti in the last chart, the performance difference between the 4070 Super and the 3070 Ti would equal that of the 5070 vs. the 4070. So, the performance difference could be translated to about 1.35x.

[Attachment: RTX 5070 Ti performance chart]
 
Joined
Apr 14, 2018
Messages
743 (0.30/day)
How much performance difference is there between the RTX 40 and RTX 50 series?


If you swap the 3070 for the 3070 Ti in the last chart, the performance difference between the 4070 Super and the 3070 Ti would equal that of the 5070 vs. the 4070. So, the performance difference could be translated to about 1.35x.


Do not base performance off marketing slides that only take RT and DLSS benchmarks into account, with ZERO FPS labels or hard data.

Sans software trickery (DLSS, AI, FG), the performance gaps are going to be smaller than that, with the exception of the 5090. The sheer fact that Nvidia is slotting the 5000 series in at those MSRPs is proof the rasterization/hardware improvements are minimal. They're not a charity.
 
Joined
Oct 10, 2018
Messages
155 (0.07/day)
Do not base performance off marketing slides that only take RT and DLSS benchmarks into account, with ZERO FPS labels or hard data.

Sans software trickery (DLSS, AI, FG), the performance gaps are going to be smaller than that, with the exception of the 5090. The sheer fact that Nvidia is slotting the 5000 series in at those MSRPs is proof the rasterization/hardware improvements are minimal. They're not a charity.
They are now on the same page. I am just comparing these slides to real data, assuming Nvidia is still as reliable as it was 2 years ago. You are definitely right about the little-to-no improvements: they didn't improve clock speeds on the 50 series, so they are just using N4P with GDDR7 and larger dies to compete. It's similar to the 20 series vs. the 10 series. Also, the 40 series never achieved its full potential because of low bandwidth and low core counts. You can assume this series is better than the 20 series was, in terms of the performance difference over the previous generation.
 