
AMD Announces RDNA 3 GPU Launch Livestream

Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
For RDNA1 they claimed a 50% perf/watt gain over Vega. This was done by comparing the V64 to the 5700XT with both parts at stock.
For RDNA2 they claimed a 50% perf/watt gain in early released slides, but at the reveal event they claimed 54% and 64%. The 54% was 5700XT vs 6800XT at 4K in a variety of games (listed in the footnotes of their slide); the 64% was 5700XT vs 6900XT at 4K in the same games. This was broadly confirmed in reviews, but it depended heavily on how perf/watt was tested. Sites that use power and performance data from a single game saw very different results: TPU saw about a 50% gain, whereas Techspot/HUB saw a 70%+ gain, because HUB used Doom Eternal (where the 5700XT underperformed) and TPU used CP2077 (where the 6900XT underperformed). If you look at HUB's average uplift for the 6800XT and 6900XT, it actually matches AMD's claimed improvements really well.

So the AMD method seems to be: compare SKU to SKU at stock settings, measure the average frame rate difference across a suite of titles, and then work out the perf/watt delta.

With the >50% claim I do agree with using 50% as a baseline, but I feel confident they are not doing a best-vs-worst comparison, because that is not something AMD has done before under current leadership.
I don't disagree with any of that, but I still never assume anything above what is promised. AMD under current leadership (ex-Koduri, that is) has been pretty trustworthy in their marketing for the most part. Still, I can't trust that to continue: corporations are opportunistic and almost exclusively focused on short-term profits, and fundamentally don't care about sustained ethics or even consistent behaviour as long as there's some sales/marketing gain, so one can never really trust history to indicate much - the best you can do is hope they choose not to be completely exploitative. I'm absolutely hopeful that the previous couple of generations will indeed be a solid indication of how their promised numbers should be interpreted - but hope and trust are not the same thing. Hence, I'm sticking with what has been explicitly promised - though as I said, I'll be happy to be proven wrong. (And, of course, unhappy to be proven wrong if they don't deliver 50%.)
What it does do though is give us some numbers to play with. If the Enermax numbers are correct and top N31 is using 420W then you can get the following numbers.

Baseline | TBP (W) | Power Delta | Perf/Watt Multi | Performance Multi | Estimate vs 4090 in Raster
6900XT | 300 | 1.4x | 1.5x | 2.1x | +10%
6900XT | 300 | 1.4x | 1.64x (to match 6900XT delta - extreme upper bound!) | 2.3x | +23%
Ref 6950XT | 335 | 1.25x | 1.5x | 1.88x | +15%
Ref 6950XT | 335 | 1.25x | 1.64x (again, extreme upper bound!) | 2.05x | +25%

The assumption I am making here is pretty obvious: that the design goal of N31 was 420W to begin with, which would mean it is wide enough to use that power in the saner part of the v/f curve. If it was not designed for 420W and has been pushed there by increasing clocks, then obviously perf/watt will drop off and the numbers above will be incorrect.

The other assumption is that the Enermax numbers are correct. It is entirely possible that the reference TBP for N31 will be closer to 375W, which with these numbers would put it about on par with the 4090.

My view is the TBP will be closer to 375-400W rather than 420W, in which case anywhere from about equal to 5% ahead of the 4090 seems to be the ballpark I expect top N31 to land in. There is room for a positive surprise should AMD's >50% claim turn out like their >5GHz claim or the >15% single-thread claim in the Zen 4 teaser slide and be a rather large underselling of what was actually achieved. Still, I await actual numbers on that front, and until then I am assuming something in the region of +50%.
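As a rough sketch of that napkin math in code form - the 420W TBP, the 1.5x perf/watt figure and the ~1.9x 4090-vs-6900XT ratio at 4K are all assumptions lifted from the table above, not confirmed specs:

```python
# Rough sketch of the napkin math above. All inputs are assumptions:
# 420W comes from the Enermax leak, 1.5x from AMD's ">50% perf/watt" claim,
# and ~1.9x is the implied 4090-vs-6900XT ratio at 4K used in the table.
def projected_multiplier(baseline_tbp_w, new_tbp_w, perf_per_watt_gain):
    """Performance multiplier vs the baseline SKU, assuming the part was
    designed for the higher TBP so perf/watt holds at that power level."""
    power_delta = new_tbp_w / baseline_tbp_w
    return power_delta * perf_per_watt_gain

multi = projected_multiplier(300, 420, 1.5)   # 6900XT (300W) -> top N31 at a rumoured 420W
vs_4090 = multi / 1.9                         # assumed 4090 = ~1.9x a 6900XT at 4K raster
print(f"{multi:.2f}x the 6900XT, {vs_4090:.0%} of a 4090")  # 2.10x, ~111%
```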
I'm not familiar with those Enermax numbers you mention, but there's also the variable of resolution scaling that needs consideration here. It looks like you're calculating only at 2160p? That obviously makes sense for a flagship SKU, but it also means that (unless RDNA3 scales much better with resolution than RDNA2) these cards would absolutely trounce the 4090 at 1440p - a 2.1x performance multiplier over the 6900XT at 1440p would take it from 73% of the 4090's performance to 153% - and that just sounds (way) too good to be true. It would definitely be (very!) interesting to see how customers would react to a card like that if it were to happen (and AMD didn't price it stupidly), but I'm too skeptical to believe that to be particularly likely.

However, there always seems to be more interest/hype around AMD leaks than Nvidia ones, from what I've seen.
AMD always gets the "Will they be able to take them down this time?" underdog hype, which to some extent disadvantages Nvidia - it's much harder for them to garner the kind of excitement that follows a potential upset. But on the other hand, Nvidia has massive reach, tons of media contacts, and is covered and included steadily everywhere across the internet. Not to mention that the tone of that coverage already expects them to be superior - which isn't as exciting as an underdog story, but it still gets people reading, as "how fast will my next GPU be?" (with the default expectation of this being an Nvidia GPU) is just as interesting to people as "will AMD be able to match/beat Nvidia this time?"

Of course in terms of leaks there's also the question of sheer scale: Nvidia outsells AMD's GPU division by ~4x, meaning they have 4x the production volume, 4x the shipping volume, and thus far more products passing through far more hands before launch, with control of this being far more difficult due to this scale.
 
Last edited:
Joined
Feb 20, 2019
Messages
8,339 (3.91/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
They may be a month behind Nvidia's 4090 but to completely clean-sweep the market they only need to release a sub-250W, sub-$350 card that doesn't suck donkey dick.

The $280 6600XT would be a good card if its raytracing performance wasn't basically unusable, and it's RTX/DXR titles that are really driving GPU upgrades. If you're not using raytracing then even an old GTX 1070 is still fine for 1080p60, 5 years later.

For RDNA1 they claimed a 50% perf/watt gain over Vega. This was done by comparing the V64 to the 5700XT with both parts at stock.
For RDNA2 they claimed a 50% perf/watt gain in early released slides, but at the reveal event they claimed 54% and 64%. The 54% was 5700XT vs 6800XT at 4K in a variety of games (listed in the footnotes of their slide); the 64% was 5700XT vs 6900XT at 4K in the same games. This was broadly confirmed in reviews, but it depended heavily on how perf/watt was tested. Sites that use power and performance data from a single game saw very different results: TPU saw about a 50% gain, whereas Techspot/HUB saw a 70%+ gain, because HUB used Doom Eternal (where the 5700XT underperformed) and TPU used CP2077 (where the 6900XT underperformed). If you look at HUB's average uplift for the 6800XT and 6900XT, it actually matches AMD's claimed improvements really well.

So the AMD method seems to be: compare SKU to SKU at stock settings, measure the average frame rate difference across a suite of titles, and then work out the perf/watt delta.

With the >50% claim I do agree with using 50% as a baseline, but I feel confident they are not doing a best-vs-worst comparison, because that is not something AMD has done before under current leadership.

What it does do though is give us some numbers to play with. If the Enermax numbers are correct and top N31 is using 420W then you can get the following numbers.

Baseline | TBP (W) | Power Delta | Perf/Watt Multi | Performance Multi | Estimate vs 4090 in Raster
6900XT | 300 | 1.4x | 1.5x | 2.1x | +10%
6900XT | 300 | 1.4x | 1.64x (to match 6900XT delta - extreme upper bound!) | 2.3x | +23%
Ref 6950XT | 335 | 1.25x | 1.5x | 1.88x | +15%
Ref 6950XT | 335 | 1.25x | 1.64x (again, extreme upper bound!) | 2.05x | +25%

The assumption I am making here is pretty obvious: that the design goal of N31 was 420W to begin with, which would mean it is wide enough to use that power in the saner part of the v/f curve. If it was not designed for 420W and has been pushed there by increasing clocks, then obviously perf/watt will drop off and the numbers above will be incorrect.

The other assumption is that the Enermax numbers are correct. It is entirely possible that the reference TBP for N31 will be closer to 375W, which with these numbers would put it about on par with the 4090.

My view is the TBP will be closer to 375-400W rather than 420W, in which case anywhere from about equal to 5% ahead of the 4090 seems to be the ballpark I expect top N31 to land in. There is room for a positive surprise should AMD's >50% claim turn out like their >5GHz claim or the >15% single-thread claim in the Zen 4 teaser slide and be a rather large underselling of what was actually achieved. Still, I await actual numbers on that front, and until then I am assuming something in the region of +50%.
Solid analysis. A lot of assumptions in there but I agree with them, given the complete lack of any concrete info at this point.
 
Joined
Oct 27, 2020
Messages
797 (0.53/day)
It won't have a +50% performance/W increase at 420W (the 7950XT according to the Enermax leak), only at 330W (the 7900XT according to the Enermax leak); as you go up in power consumption, the efficiency loss is significant.
So comparing the 7900XT vs the 6900XT, that gives +65% or a little above in relation to the 6900XT (so around 7%-10% slower than an RTX 4090 at 4K depending on the testbed's CPU), which would be a very good result for a 330W TBP card (and at QHD only 2%-5% slower, so some OC 7900XT models should at least match the 4090 at 1440p).
To reach 2X vs a 6900XT, full Navi31 will need to hit near 3.3GHz I would imagine (actual in-game average clocks), and around 3.5GHz for 2.1X (maybe some liquid designs at near 500W TBP, not unlike the Sapphire Toxic Radeon RX 6900 XT Extreme Edition, which went from a 300W TBP to 430W and from a 2250MHz boost to 2730MHz (Toxic Boost)).
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
So comparing the 7900XT vs the 6900XT, that gives +65% or a little above in relation to the 6900XT (so around 7%-10% slower than an RTX 4090 at 4K depending on the testbed's CPU), which would be a very good result for a 330W TBP card (and at QHD only 2%-5% slower, so some OC 7900XT models should at least match the 4090 at 1440p).
I don't know where you're getting your math from here, but your 1440p numbers don't make sense. A 65% performance increase over the 6900 XT at 1440p would make it 73% * 1.65 = 120.5% of the 4090's 1440p performance.

Other than that though, I generally agree that not expecting too much is (always) the sensible approach. The 4090 is fast, but also very expensive, so even coming close at 2160p will be great as long as pricing is also good.
 
Joined
Oct 27, 2020
Messages
797 (0.53/day)
I don't know where you're getting your math from here, but your 1440p numbers don't make sense. A 65% performance increase over the 6900 XT at 1440p would make it 73% * 1.65 = 120.5% of the 4090's 1440p performance.

Other than that though, I generally agree that not expecting too much is (always) the sensible approach. The 4090 is fast, but also very expensive, so even coming close at 2160p will be great as long as pricing is also good.
65% at 4K, not at QHD; as you go down in resolution the gap gets smaller, and the efficiency claims are made at 4K...
For example, the 6950XT is near +70% vs the 3060Ti at 4K, but at QHD the difference is near +60%.
And more importantly, with this kind of power the designs at that level are much more CPU/engine limited at QHD than the 6950XT, so the 7900XT will be hitting fps walls in many games just like the 4090 does (though less pronounced than on the 4090, since it's 6 shader engines vs 11 GPCs).
 
Joined
Apr 21, 2005
Messages
185 (0.03/day)
I'm not familiar with those Enermax numbers you mention, but there's also the variable of resolution scaling that needs consideration here. It looks like you're calculating only at 2160p? That obviously makes sense for a flagship SKU, but it also means that (unless RDNA3 scales much better with resolution than RDNA2) these cards would absolutely trounce the 4090 at 1440p - a 2.1x performance multiplier over the 6900XT at 1440p would take it from 73% of the 4090's performance to 153% - and that just sounds (way) too good to be true. It would definitely be (very!) interesting to see how customers would react to a card like that if it were to happen (and AMD didn't price it stupidly), but I'm too skeptical to believe that to be particularly likely.

Enermax had PSU recommendations for unreleased GPUs with TBP figures that could be calculated. It could just be placeholder stuff, but it worked out that top N31 was based on a 420W TBP.

Yes, I am talking about 4K; I should have been clear about that. I very much doubt these numbers will hold below 4K for the flagship parts, simply due to CPU bottlenecking capping the maximum fps some games reach.

My personal estimate is that performance is going to be in the 4090 ballpark, with TBP in the 375W region and AIBs offering OC models up to 420W or so, but those will hit diminishing returns due to pushing clock speed rather than having a wider die.

It won't have a +50% performance/W increase at 420W (the 7950XT according to the Enermax leak), only at 330W (the 7900XT according to the Enermax leak); as you go up in power consumption, the efficiency loss is significant.
So comparing the 7900XT vs the 6900XT, that gives +65% or a little above in relation to the 6900XT (so around 7%-10% slower than an RTX 4090 at 4K depending on the testbed's CPU), which would be a very good result for a 330W TBP card (and at QHD only 2%-5% slower, so some OC 7900XT models should at least match the 4090 at 1440p).
To reach 2X vs a 6900XT, it will need to hit near 3.3GHz I would imagine (actual in-game average clocks), and around 3.5GHz for 2.1X (maybe some liquid designs at near 500W TBP, not unlike the Sapphire Toxic Radeon RX 6900 XT Extreme Edition, which went from a 300W TBP to 430W and from a 2250MHz boost to 2730MHz (Toxic Boost)).

It depends entirely on what TBP N31 was designed around. N23, for example, is designed around a 160W TBP and N21 was designed around a 300W TBP. The performance delta between the 6600XT and the 6900XT almost perfectly matches the power delta because the parts were designed for their respective TBPs, so they have the right balance of functional units, clock speed and voltage.

A 50%+ perf/watt improvement is possible at 420W if N31 was designed to hit 420W in the sane part of the v/f curve, because the number of functional units would be balanced around that.
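As a toy illustration of that point (the frequency/voltage pairs below are invented, not RDNA data): dynamic power scales roughly with f·V², and voltage has to climb as clocks are pushed, so perf/watt collapses once a part is driven past the region of the curve it was designed for.

```python
# Made-up v/f points to illustrate why perf/watt drops off past the design point.
points = [  # (GHz, Volts)
    (2.0, 0.80),
    (2.4, 0.90),
    (2.8, 1.05),
    (3.2, 1.25),  # pushed well beyond the "sane" part of the curve
]

base_f, base_v = points[0]
base_power = base_f * base_v ** 2            # dynamic power ~ f * V^2 (rough model)
for f, v in points:
    rel_perf = f / base_f                    # assume performance ~ clock speed
    rel_power = (f * v ** 2) / base_power
    print(f"{f:.1f} GHz @ {v:.2f} V -> perf/watt {rel_perf / rel_power:.2f}x of baseline")
```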
 
Joined
Oct 27, 2020
Messages
797 (0.53/day)
It depends entirely on what TBP N31 was designed around. N23, for example, is designed around a 160W TBP and N21 was designed around a 300W TBP. The performance delta between the 6600XT and the 6900XT almost perfectly matches the power delta because the parts were designed for their respective TBPs, so they have the right balance of functional units, clock speed and voltage.

A 50%+ perf/watt improvement is possible at 420W if N31 was designed to hit 420W in the sane part of the v/f curve, because the number of functional units would be balanced around that.
Based on statements from AMD personnel regarding power efficiency and TBP targets for RDNA3 vs what the competition is targeting, that doesn't seem to be the case. Also, if it were true, why would the lower-end part drop from 420W to 330W (according to Enermax) if AMD didn't need it to hit the 50% performance/W claim?
Unless the Enermax leak is without merit, in which case all the above assumptions are invalid. (My proposed Navi31 frequencies to hit 2X and 2.1X vs the 6900XT have absolutely nothing to do with the Enermax leak, btw.)
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
65% at 4K, not at QHD; as you go down in resolution the gap gets smaller, and the efficiency claims are made at 4K...
For example, the 6950XT is near +70% vs the 3060Ti at 4K, but at QHD the difference is near +60%.
And more importantly, with this kind of power the designs at that level are much more CPU/engine limited at QHD than the 6950XT, so the 7900XT will be hitting fps walls in many games just like the 4090 does (though less pronounced than on the 4090, since it's 6 shader engines vs 11 GPCs).
You're mixing up your baselines for comparison here. AMD scales worse when moving to higher resolutions relative to Nvidia, not relative to itself. On the other hand AMD's perf/W claims are only relative to itself. Relating that to the relative resolution scaling between chipmakers is a fundamental logical error. We can use these efficiency claims to make speculative extrapolations of performance which can then be compared to the performance of Nvidia's competing cards, but what you seem to be doing here is lumping both moves together in a way that conflates AMD's claimed gen-over-gen efficiency improvement with the relative resolution scaling v. Nvidia.

If, say, a 7900XT is 65% faster than a 6900XT at 2160p, it will most likely be very close to 65% faster at 1440p as well, barring some external bottleneck (CPU or otherwise). There can of course also be on-board non-GPU bottlenecks (RAM amount and bandwidth in particular), but those tend to show up at higher resolutions, not lower ones - and if 65% were the increase at 2160p with such a bottleneck in play, that would suggest more than 65% at sub-2160p resolutions.

It is of course possible that RDNA3 has some form of architectural improvement to alleviate that poor high resolution scaling that we've seen in RDNA2, which would then lead it to scale better at higher resolutions relative to RDNA2, and thus also deliver non-linearly improved perf/W at 2160p in particular - but that's a level of speculation well beyond the basic napkin math we've been engaging in here, as that requires quite fundamental, low-level architectural changes, not just "more cores, better process node, higher clocks, same or more power".
 
Joined
Oct 27, 2020
Messages
797 (0.53/day)
You're mixing up your baselines for comparison here. AMD scales worse when moving to higher resolutions relative to Nvidia, not relative to itself. On the other hand AMD's perf/W claims are only relative to itself. Relating that to the relative resolution scaling between chipmakers is a fundamental logical error. We can use these efficiency claims to make speculative extrapolations of performance which can then be compared to the performance of Nvidia's competing cards, but what you seem to be doing here is lumping both moves together in a way that conflates AMD's claimed gen-over-gen efficiency improvement with the relative resolution scaling v. Nvidia.

If, say, a 7900XT is 65% faster than a 6900XT at 2160p, it will most likely be very close to 65% faster at 1440p as well, barring some external bottleneck (CPU or otherwise). There can of course also be on-board non-GPU bottlenecks (RAM amount and bandwidth in particular), but those tend to show up at higher resolutions, not lower ones - and if 65% were the increase at 2160p with such a bottleneck in play, that would suggest more than 65% at sub-2160p resolutions.

It is of course possible that RDNA3 has some form of architectural improvement to alleviate that poor high resolution scaling that we've seen in RDNA2, which would then lead it to scale better at higher resolutions relative to RDNA2, and thus also deliver non-linearly improved perf/W at 2160p in particular - but that's a level of speculation well beyond the basic napkin math we've been engaging in here, as that requires quite fundamental, low-level architectural changes, not just "more cores, better process node, higher clocks, same or more power".
You are right regarding mixing vendors' results (or even different generations where possible). I tried to change the 3060Ti to the 6700XT, but btk2k2 had already posted and I didn't want to edit it afterwards, since it didn't alter what I was pointing out anyway.
It doesn't change much; on the contrary to what you say, the difference is more pronounced now, comparing AMD to AMD.
Check the latest VGA review (ASUS Strix):
QHD:
6700XT = 50%
6950XT = 75%
1.5x
4K:
6700XT = 35%
6950XT = 59%
1.69x
So the difference goes from +69% at 4K to +50% at QHD...
It's very basic stuff and it has happened since forever (the 4K difference between two cards is higher than at QHD in 99% of cases; there are exceptions, but they are easily explainable, like the RX6600 vs the RTX 3050 - castrated Infinity Cache at 32MB, etc.).
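Spelling out that arithmetic (the percentages are the chart values quoted above, relative to the top card in TPU's review, not re-checked here):

```python
# Relative performance (% of the fastest card in the chart) at each resolution.
relative_perf = {
    "QHD": {"6700XT": 50, "6950XT": 75},
    "4K":  {"6700XT": 35, "6950XT": 59},
}

for res, cards in relative_perf.items():
    uplift = cards["6950XT"] / cards["6700XT"]
    print(f"{res}: 6950XT = {uplift:.2f}x the 6700XT (+{uplift - 1:.0%})")
# QHD: 6950XT = 1.50x the 6700XT (+50%)
# 4K:  6950XT = 1.69x the 6700XT (+69%)
```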
 
Joined
Nov 6, 2016
Messages
1,773 (0.60/day)
Location
NH, USA
System Name Lightbringer
Processor Ryzen 7 2700X
Motherboard Asus ROG Strix X470-F Gaming
Cooling Enermax Liqmax Iii 360mm AIO
Memory G.Skill Trident Z RGB 32GB (8GBx4) 3200Mhz CL 14
Video Card(s) Sapphire RX 5700XT Nitro+
Storage Hp EX950 2TB NVMe M.2, HP EX950 1TB NVMe M.2, Samsung 860 EVO 2TB
Display(s) LG 34BK95U-W 34" 5120 x 2160
Case Lian Li PC-O11 Dynamic (White)
Power Supply BeQuiet Straight Power 11 850w Gold Rated PSU
Mouse Glorious Model O (Matte White)
Keyboard Royal Kludge RK71
Software Windows 10
People are still using the "mindshare" excuse for AMD's inability to straighten anything out of their own accord for over a decade?


Gonna guess almost no availability. The last few GPU launches from AMD have been paper launches.


In America the only store like that left is Micro Center, and most Americans live a minimum of 3+ hours away. The cost of gas will eat up whatever savings you'd get. Anything local disappeared years ago; the only things left are fly-by-night computer repair shops that I wouldn't trust with a Raspberry Pi, let alone anything expensive.
For all intents and purposes, AMD GPUs DO have their stuff straightened out... I can only speak from my own experience, but I've yet to have a single problem or issue with my 5700XT, which is why I'll be upgrading to RDNA3 this time around... plus, regardless of my feelings on AMD, I just can't morally bring myself to give money to Nvidia and reward their behavior. Also, any market share AMD captures at Nvidia's expense will necessarily improve the situation for consumers... the ideal situation being a 50/50 market share between the two and a balance of power.

I could be wrong, but it sounds like you're implying that AMD should be able to perform at the same level as Nvidia, when in reality that's just not possible. AMD's 2021 R&D budget was $2 billion, which has to be divided between x86 and graphics, and given that x86 is a bigger revenue source for AMD and has a much larger TAM, we can safely assume x86 gets 60% of that budget. This means AMD has to compete against Nvidia with less than a $1 billion R&D budget, while Nvidia had a $5.27 billion R&D budget for 2021... they're nowhere near competing on a level playing field. It actually goes to show how impressive AMD is, especially considering RDNA2 matched or even beat the 30 series in raster, all while AMD had a fifth of the financial resources to spend on R&D. It's even more impressive what AMD has been able to do against Intel, considering Intel had a $15 billion R&D budget for 2021!
 
Last edited:

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
17,766 (2.42/day)
Location
Sweden
System Name Overlord Mk MLI
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte X670E Aorus Master
Cooling Noctua NH-D15 SE with offsets
Memory 32GB Team T-Create Expert DDR5 6000 MHz @ CL30-34-34-68
Video Card(s) Gainward GeForce RTX 4080 Phantom GS
Storage 1TB Solidigm P44 Pro, 2 TB Corsair MP600 Pro, 2TB Kingston KC3000
Display(s) Acer XV272K LVbmiipruzx 4K@160Hz
Case Fractal Design Torrent Compact
Audio Device(s) Corsair Virtuoso SE
Power Supply be quiet! Pure Power 12 M 850 W
Mouse Logitech G502 Lightspeed
Keyboard Corsair K70 Max
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/yfsd9w
A lot of those local mom-and-pop PC stores have steadily shuttered in recent years as their Baby Boomer owners who started their businesses in the Eighties have reached retirement age with no one to pick up the reins.

I am grateful that there are still a few great mom-and-pop PC stores in my area.

Computers are commodities now; people buy and throw them away (er, recycle them) every few years. Hell, even Microsoft belatedly accepted the fact that most people don't upgrade Windows, which is why you can get an OEM key for Windows 10 Pro for $7 these days.

The number of people who open up a PC case to remove a component and install a new one is a very, very small portion of the consumer userbase. Joe Consumer is just going to buy a Dell, HP, or Alienware box and when it starts running "too slow" they'll just buy a new one and put the old computer in a kid's room.
Still plenty of computer shops both in Taiwan and Sweden.
Where I live now, I have less than a 10 minute walk to one.
 
Joined
Dec 14, 2011
Messages
1,084 (0.23/day)
Location
South-Africa
Processor AMD Ryzen 9 5900X
Motherboard ASUS ROG STRIX B550-F GAMING (WI-FI)
Cooling Noctua NH-D15 G2
Memory 32GB G.Skill DDR4 3600Mhz CL18
Video Card(s) ASUS GTX 1650 TUF
Storage SAMSUNG 990 PRO 2TB
Display(s) Dell S3220DGF
Case Corsair iCUE 4000X
Audio Device(s) ASUS Xonar D2X
Power Supply Corsair AX760 Platinum
Mouse Razer DeathAdder V2 - Wireless
Keyboard Corsair K70 PRO - OPX Linear Switches
Software Microsoft Windows 11 - Enterprise (64-bit)
I really doubt RDNA3 will beat a 4090 at 4K gaming, but I think it may match it at 1440p. Plus you have DLSS3, which, let's face it, AMD just won't be able to pull off. Doubling the frames at 4K with AI? I just don't see AMD having that technical capability.

But I still plan to buy RDNA3, because I only game at 1440p.

I think everyone is going to be in for a surprise, even nVidia. :):)
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
You are right regarding mixing vendors' results (or even different generations where possible). I tried to change the 3060Ti to the 6700XT, but btk2k2 had already posted and I didn't want to edit it afterwards, since it didn't alter what I was pointing out anyway.
It doesn't change much; on the contrary to what you say, the difference is more pronounced now, comparing AMD to AMD.
Check the latest VGA review (ASUS Strix):
QHD:
6700XT = 50%
6950XT = 75%
1.5x
4K:
6700XT = 35%
6950XT = 59%
1.69x
So the difference goes from +69% at 4K to +50% at QHD...
It's very basic stuff and it has happened since forever (the 4K difference between two cards is higher than at QHD in 99% of cases; there are exceptions, but they are easily explainable, like the RX6600 vs the RTX 3050 - castrated Infinity Cache at 32MB, etc.).
Again, I don't see this as necessarily illustrative of future implementations - all it shows is that different implementations of the same architecture will have different limitations/bottlenecks, whether compute, memory amount/bandwidth, or something else. What it does show is the limitation of a narrow-and-fast die vs. a wide-and-slow(ish) one: the latter has more left over to process higher resolutions, while the former struggles a lot more as the load grows heavier (or conversely, at lower resolutions the wider die is more held back by non-compute factors). Exactly where the limitation lies - clock speeds, memory amount, memory bandwidth, Infinity Cache, or something else - is almost impossible to tell from an end-user POV, at least in real-world workloads, but as higher resolutions are more costly in pretty much every way, it makes sense for AMD not to optimize lower-tier SKUs for higher resolutions if that also increases costs (through more VRAM or a wider bus, for example).

Of course this also applies to RDNA3 - the success of any given SKU is entirely dependent on AMD configuring it in a sensible way. I'm just assuming that they'll do a similarly decent job at this as with RDNA2, and beyond that I feel that this is too fine-grained a level of speculation for me to really engage in. I'm sure that with a sufficient dataset it would be possible to come up with an equation that (roughly) predicted real-world relative performance within an architecture dependent on core counts, RAM, bus width, clocks, power, etc., but until someone does I'll assume those models only exist in AMD's labs.

All of this is also dependent on what you choose as your baseline for comparison, which is doubly troublesome when what is being compared is a speculation on future products - at this point there are so many variables in play that I've long since given up :p


Edit: wide-and-slow, not fast-and-slow. Sometimes my brain and my typing are way out of sync.
 
Last edited:
Joined
Jun 21, 2021
Messages
3,121 (2.44/day)
System Name daily driver Mac mini M2 Pro
Processor Apple proprietary M2 Pro (6 p-cores, 4 e-cores)
Motherboard Apple proprietary
Cooling Apple proprietary
Memory Apple proprietary 16GB LPDDR5 unified memory
Video Card(s) Apple proprietary M2 Pro (16-core GPU)
Storage Apple proprietary onboard 512GB SSD + various external HDDs
Display(s) LG UltraFine 27UL850W (4K@60Hz IPS)
Case Apple proprietary
Audio Device(s) Apple proprietary
Power Supply Apple proprietary
Mouse Apple Magic Trackpad 2
Keyboard Keychron K1 tenkeyless (Gateron Reds)
VR HMD Oculus Rift S (hosted on a different PC)
Software macOS Sonoma 14.7
Benchmark Scores (My Windows daily driver is a Beelink Mini S12 Pro. I'm not interested in benchmarking.)
Of course this also applies to RDNA3 - the success of any given SKU is entirely dependent on AMD configuring it in a sensible way. I'm just assuming that they'll do a similarly decent job at this as with RDNA2, and beyond that I feel that this is too fine-grained a level of speculation for me to really engage in. I'm sure that with a sufficient dataset it would be possible to come up with an equation that (roughly) predicted real-world relative performance within an architecture dependent on core counts, RAM, bus width, clocks, power, etc., but until someone does I'll assume those models only exist in AMD's labs.

They can probably do some sort of predictive modeling of expected performance. After all, semiconductors these days are designed with computers. It's not like the old days, when computers were designed with a slide rule, a pad of paper and a pencil.

That said, AMD, Intel, NVIDIA, Apple and others test a wide variety of prototype samples in their labs. A graphics card isn't just the GPU, so different combinations of components will yield different performance results, with different power draws, COGS, whatever.

Indeed consumer grade graphics card designs are optimized for specific target resolutions (1080p, 1440p, 2160p). I have a 3060 Ti in a build for 1440p gaming; sure the 4090 will beat it, but is it worth it? After all, the price difference between a 4090 and my 3060 Ti is likely $1200-1500. That buys a lot of games and other stuff.

For sure, AMD will test all those different prototypes in their labs but only release one or two products to market. It's not like they can't put a 384-bit memory bus on an entry-level GPU and hang 24GB of VRAM off it. The problem is that it makes little sense from a business standpoint. Yes, someone would buy it, including some TPU forum participant, probably.

I know you understand this but some other people online don't understand what follows here.

AMD isn't making a graphics card for YOU. They are making graphics cards for a larger graphics card audience. AMD is not your mom cooking your favorite breakfast so when you emerge from her basement, it's waiting for you hot on the table.
 
Joined
Jan 17, 2018
Messages
440 (0.17/day)
Processor Ryzen 7 5800X3D
Motherboard MSI B550 Tomahawk
Cooling Noctua U12S
Memory 32GB @ 3600 CL18
Video Card(s) AMD 6800XT
Storage WD Black SN850(1TB), WD Black NVMe 2018(500GB), WD Blue SATA(2TB)
Display(s) Samsung Odyssey G9
Case Be Quiet! Silent Base 802
Power Supply Seasonic PRIME-GX-1000
Define "a lot more resources", uhm... 65%... or 104%... or just ~8.5% because any shift in increased value could VALIDATE such a claim and if memory serves me, AMD definitely refused to attach any solid value to that expected ++ PR statement ++. :shadedshu:

All the "should & woulds" used above is not an AMD or NVIDIA stable business model forecast, especially for consumers.
RDNA2 is something like ~5-20% behind Nvidia in terms of raytracing efficiency right now, depending on the game. I don't think it's a stretch to assume they can reduce that gap significantly because of the larger focus on Raytracing for RDNA3.
 
Joined
Jun 21, 2021
Messages
3,121 (2.44/day)
System Name daily driver Mac mini M2 Pro
Processor Apple proprietary M2 Pro (6 p-cores, 4 e-cores)
Motherboard Apple proprietary
Cooling Apple proprietary
Memory Apple proprietary 16GB LPDDR5 unified memory
Video Card(s) Apple proprietary M2 Pro (16-core GPU)
Storage Apple proprietary onboard 512GB SSD + various external HDDs
Display(s) LG UltraFine 27UL850W (4K@60Hz IPS)
Case Apple proprietary
Audio Device(s) Apple proprietary
Power Supply Apple proprietary
Mouse Apple Magic Trackpad 2
Keyboard Keychron K1 tenkeyless (Gateron Reds)
VR HMD Oculus Rift S (hosted on a different PC)
Software macOS Sonoma 14.7
Benchmark Scores (My Windows daily driver is a Beelink Mini S12 Pro. I'm not interested in benchmarking.)
RDNA2 is something like ~5-20% behind Nvidia in terms of raytracing efficiency right now, depending on the game. I don't think it's a stretch to assume they can reduce that gap significantly because of the larger focus on Raytracing for RDNA3.

Don't forget that Ampere was made on a Samsung process that many consider inferior to what TSMC was offering.

Now both AMD and NVIDIA are using TSMC.

That said, NVIDIA may be putting more effort into improving their Tensor cores especially since ML is more important for their Datacenter business.

From a consumer gaming perspective, almost everyone who turns on ray tracing will enable some sort of image upscaling option. Generally speaking the frame rates for ray tracing without some sort of image upscaling help are too low for satisfying gameplay with current technology.

Besides, Tensor cores have other usage cases beyond DLSS for consumers like image replacement.
 
Joined
Apr 22, 2021
Messages
249 (0.19/day)
RDNA2 is something like ~5-20% behind Nvidia in terms of raytracing efficiency right now, depending on the game. I don't think it's a stretch to assume they can reduce that gap significantly because of the larger focus on Raytracing for RDNA3.
What? "~5-20% behind NVIDIA"?! Yeah, but... heck no! :laugh:

Only a deliberately nerfed, AMD-sponsored title can come close to NVIDIA's RTX GPUs' much superior RT performance. It is not just about how many RT cores a GPU has, or what it processes/executes, vs the competitor. :shadedshu:

The RTX 3070 also more than doubles the performance of the RX 6800. Heck, even the RTX 3060 12GB beats the 6900 XT by 16% at 1080p and 23% at 1440p.

 
Joined
Oct 27, 2020
Messages
797 (0.53/day)
Again, I don't see this as necessarily illustrative of future implementations - all it shows is that different implementations of the same architecture will have different limitations/bottlenecks, whether compute, memory amount/bandwidth, or something else. What it does show is the limitation of a narrow-and-fast die vs. a wide-and-slow(ish) one: the latter has more left over to process higher resolutions, while the former struggles a lot more as the load grows heavier (or conversely, at lower resolutions the wider die is more held back by non-compute factors). Exactly where the limitation lies - clock speeds, memory amount, memory bandwidth, Infinity Cache, or something else - is almost impossible to tell from an end-user POV, at least in real-world workloads, but as higher resolutions are more costly in pretty much every way, it makes sense for AMD not to optimize lower-tier SKUs for higher resolutions if that also increases costs (through more VRAM or a wider bus, for example).

Of course this also applies to RDNA3 - the success of any given SKU is entirely dependent on AMD configuring it in a sensible way. I'm just assuming that they'll do a similarly decent job at this as with RDNA2, and beyond that I feel that this is too fine-grained a level of speculation for me to really engage in. I'm sure that with a sufficient dataset it would be possible to come up with an equation that (roughly) predicted real-world relative performance within an architecture dependent on core counts, RAM, bus width, clocks, power, etc., but until someone does I'll assume those models only exist in AMD's labs.

All of this is also dependent on what you choose as your baseline for comparison, which is doubly troublesome when what is being compared is a speculation on future products - at this point there are so many variables in play that I've long since given up :p
You're focusing only on the GPU and trying to work out which GPU characteristic could possibly affect this behaviour (the 4K difference being greater than the QHD difference), when the problem is multifaceted and (mostly) not exclusively GPU related.
You will see that going from 4K to QHD, many games are 2X or so faster.
This means that each frame is rendered in half the time.
Let's take two cards, A and B, where the faster one (A) has double the speed at 4K (double the fps).
Depending on the engine and its demands on resources other than the GPU (the CPU mainly, system RAM, the storage system, etc. - essentially every aspect that plays a role in the fps outcome), even if GPU A is capable on paper of producing double the frames again at QHD, in order to do that it needs the other parts of the PC (CPU etc.) that are involved in the fps outcome to be able to support that doubling as well (and of course the game engine must also be able to scale, which is another problem).
That's the main factor affecting this behaviour.
But let's agree to disagree.

Edit: this is the main reason Nvidia is pursuing other avenues like frame generation, in order to try to maintain a meaningful generational (Ampere->Ada, etc.) performance gap as an incentive to upgrade.
With each new GPU/CPU generation it becomes harder and harder to sustain those performance gaps as the resolution goes down, given how much GPU advancements have outpaced CPU & memory advancements over the years (especially if you consider, on the CPU side, how many cores are actually utilised by the vast majority of games - and this keeps adding up from gen to gen).
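As a toy illustration of that ceiling (the frame rates below are invented, purely to show the mechanism): the delivered fps is capped by whichever of the GPU or the rest of the system takes longer per frame, so a 2x GPU advantage at 4K can shrink at QHD once the faster card runs into the CPU/engine wall.

```python
# Toy model: delivered fps is capped by whichever of the GPU or the rest of the
# system (CPU/engine/memory) takes longer per frame. Numbers are invented.
def delivered_fps(gpu_fps: float, system_fps_ceiling: float) -> float:
    return min(gpu_fps, system_fps_ceiling)

system_ceiling = 200   # hypothetical CPU/engine limit in fps, roughly resolution-independent
scenarios = {"4K": (120, 60), "QHD": (240, 120)}   # (card A GPU fps, card B GPU fps)

for res, (a_gpu, b_gpu) in scenarios.items():
    a = delivered_fps(a_gpu, system_ceiling)
    b = delivered_fps(b_gpu, system_ceiling)
    print(f"{res}: card A {a:.0f} fps, card B {b:.0f} fps -> A is {a / b:.2f}x faster")
# 4K:  card A 120 fps, card B 60 fps  -> A is 2.00x faster (purely GPU-bound)
# QHD: card A 200 fps, card B 120 fps -> A is 1.67x faster (A hits the CPU/engine wall)
```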
 
Last edited:
Joined
Jun 21, 2021
Messages
3,121 (2.44/day)
System Name daily driver Mac mini M2 Pro
Processor Apple proprietary M2 Pro (6 p-cores, 4 e-cores)
Motherboard Apple proprietary
Cooling Apple proprietary
Memory Apple proprietary 16GB LPDDR5 unified memory
Video Card(s) Apple proprietary M2 Pro (16-core GPU)
Storage Apple proprietary onboard 512GB SSD + various external HDDs
Display(s) LG UltraFine 27UL850W (4K@60Hz IPS)
Case Apple proprietary
Audio Device(s) Apple proprietary
Power Supply Apple proprietary
Mouse Apple Magic Trackpad 2
Keyboard Keychron K1 tenkeyless (Gateron Reds)
VR HMD Oculus Rift S (hosted on a different PC)
Software macOS Sonoma 14.7
Benchmark Scores (My Windows daily driver is a Beelink Mini S12 Pro. I'm not interested in benchmarking.)
You can only go so far cramming more raster cores on a die. At some point, it might be more beneficial to differentiate silicon and put specialized cores for certain tasks.

In the same way, at home you might be the one to buy groceries, make dinner, and wash the dirty pots and dishes. In a very, very small restaurant, you might be able to pull this off. But let's say you have fifty seats. Would you hire someone else to do all the same stuff that you do? Should a restaurant have ten people that all do the same stuff?

Yes, you can ray trace with traditional raster cores. It can be done, but they aren't optimized for that workload. So NVIDIA carves out a chunk of die space and puts in specialized transistors. Same with Tensor cores. In a restaurant kitchen, the pantry cook washes lettuce in a prep sink, not in the potwasher's sink. Different tools/systems for different workloads and tasks isn't a new concept.

I know some people in these PC forums swear that they only care about 3D raster performance. That's not going to scale infinitely just like you can't have 50 people buying groceries, cooking food, and washing their own pots and pans in a hospital catering kitchen.

AMD started including RT cores with their RDNA2 products. At some point I expect them to put ML cores on their GPU dies. We already see media encoders too.

AMD needs good ML cores anyhow if they want to stay competitive in Datacenter. In the end, a lot of success will be determined by the quality of the development environment and software, not just the number of transistors you can put on a die.
 
Joined
Dec 26, 2006
Messages
3,862 (0.59/day)
Location
Northern Ontario Canada
Processor Ryzen 5700x
Motherboard Gigabyte X570S Aero G R1.1 BiosF5g
Cooling Noctua NH-C12P SE14 w/ NF-A15 HS-PWM Fan 1500rpm
Memory Micron DDR4-3200 2x32GB D.S. D.R. (CT2K32G4DFD832A)
Video Card(s) AMD RX 6800 - Asus Tuf
Storage Kingston KC3000 1TB & 2TB & 4TB Corsair MP600 Pro LPX
Display(s) LG 27UL550-W (27" 4k)
Case Be Quiet Pure Base 600 (no window)
Audio Device(s) Realtek ALC1220-VB
Power Supply SuperFlower Leadex V Gold Pro 850W ATX Ver2.52
Mouse Mionix Naos Pro
Keyboard Corsair Strafe with browns
Software W10 22H2 Pro x64
"AMD will announce its first RDNA 3 based GPUs on the 3rd of November"

Announcement or hard launch or both??
 

Space Lynx

Astronaut
Joined
Oct 17, 2014
Messages
17,421 (4.69/day)
Location
Kepler-186f
Processor 7800X3D -25 all core
Motherboard B650 Steel Legend
Cooling Frost Commander 140
Video Card(s) Merc 310 7900 XT @3100 core -.75v
Display(s) Agon 27" QD-OLED Glossy 240hz 1440p
Case NZXT H710 (Red/Black)
Audio Device(s) Asgard 2, Modi 3, HD58X
Power Supply Corsair RM850x Gold
"AMD will announce its first RDNA 3 based GPUs on the 3rd of November"

Announcement or hard launch or both??

no one knows yet. would be neat if it is both, i will def have best buy and amazon up and refreshing the entire time during the live stream. lol
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
"AMD will announce its first RDNA 3 based GPUs on the 3rd of November"

Announcement or hard launch or both??
What you're quoting is all that's been said, so who knows? If they want to make use of the holiday shopping season they'd need to get cards out the door ASAP though.
 
Joined
Jun 2, 2017
Messages
9,362 (3.39/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP 700, Seagate 530 2Tb, Adata SX8200 2TBx2, Kingston 2 TBx2, Micron 8 TB, WD AN 1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitch Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech g7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64 Steam. GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
No sarcasm required; they're going to be fast and efficient AMD fanboys' wet dreams, and they will be gobbled up by the scalpers too.
Not if you can go to your local brick-and-mortar store and have a friend get you one before they are stocked on the shelves. I still feel AMD is going to do a mic drop on Nov 3. The 6800XT is just as fast as two Vega 64s in Crossfire at the same power draw as one card. They are saying the same thing about 7000 vs 6000, so it could literally mean you are getting up to 80% more performance. With how powerful the 6800XT already is, I have already put my name down on one. I asked for the cheapest 7800XT (hopefully reference), as I will be putting a water block on the card.

no one knows yet. would be neat if it is both, i will def have best buy and amazon up and refreshing the entire time during the live stream. lol
I can see them being available like 2 weeks before Xmas.
 
Joined
Jan 17, 2018
Messages
440 (0.17/day)
Processor Ryzen 7 5800X3D
Motherboard MSI B550 Tomahawk
Cooling Noctua U12S
Memory 32GB @ 3600 CL18
Video Card(s) AMD 6800XT
Storage WD Black SN850(1TB), WD Black NVMe 2018(500GB), WD Blue SATA(2TB)
Display(s) Samsung Odyssey G9
Case Be Quiet! Silent Base 802
Power Supply Seasonic PRIME-GX-1000
What? "~5-20% behind NVIDIA"?! Yeah, but... heck no! :laugh:

Only a deliberately nerfed, AMD-sponsored title can come close to NVIDIA's RTX GPUs' much superior RT performance. It is not just about how many RT cores a GPU has, or what it processes/executes, vs the competitor. :shadedshu:



TPU's own benchmarks compare the efficiency of AMD's 6xxx series to the Nvidia 3xxx series (and now the 4090). Here's the biggest outlier:
cyberpunk-2077-rt-3840-2160.png (500×570) (tpucdn.com)

Cyberpunk shows AMD GPUs at roughly -69% and Nvidia 3xxx series GPUs at around -50%. This is the game with the largest difference, and probably the most intensive ray tracing, that TPU has tested in their most recent benchmarks. So overall we have a ~20% difference, and around ~30% if you include the 4090.

In Far Cry 6 the AMD GPU is actually ~5% more efficient than the Nvidia one, and in F1 Nvidia is only ~8% more efficient.
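For what it's worth, those figures can also be expressed as retained performance - a small sketch using the rough Cyberpunk numbers quoted above (not re-measured from TPU's charts):

```python
# Express the "performance lost with RT on" figures as retained performance.
# The 69% / 50% inputs are the rough Cyberpunk numbers quoted above, not re-measured.
def retained(loss_pct: float) -> float:
    return 1.0 - loss_pct / 100.0

amd = retained(69)       # keeps ~31% of its raster performance with RT on
nvidia = retained(50)    # keeps ~50% of its raster performance with RT on
print(f"AMD keeps {amd:.0%}, Nvidia keeps {nvidia:.0%} of raster performance with RT on")
```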
 