Thursday, October 20th 2022

AMD Announces RDNA 3 GPU Launch Livestream

It's hardly a secret that AMD will announce its first RDNA 3 based GPUs on the 3rd of November, and the company has now officially announced that it'll hold a livestream starting at 1:00 pm (13:00) Pacific Daylight Time. The event goes under the name "together we advance_gaming". AMD didn't share much in terms of details about the event; all we know is that "AMD executives will provide details on the new high-performance, energy-efficient AMD RDNA 3 architecture that will deliver new levels of performance, efficiency and functionality to gamers and content creators."
Source: AMD

104 Comments on AMD Announces RDNA 3 GPU Launch Livestream

#51
Chrispy_
They may be a month behind Nvidia's 4090 but to completely clean-sweep the market they only need to release a sub-250W, sub-$350 card that doesn't suck donkey dick.

The $280 6600XT would be a good card if its raytracing performance wasn't basically unusable, and it's RTX/DXR titles that are really driving GPU upgrades. If you're not using raytracing then even an old GTX 1070 is still fine for 1080p60, 5 years later.
btk2k2For RDNA1 they claimed a 50% perf/watt gain over Vega. This was done by comparing the V64 to the 5700XT with both parts at stock.
For RDNA2 they claimed a 50% perf/watt gain in early released slides, but at the reveal event they claimed 54% and 64%. The 54% was the 5700XT vs the 6800XT at 4K in a variety of games (listed in the footnotes of their slide). The 64% was the 5700XT vs the 6900XT at 4K in the same games. This was further confirmed in some reviews, but it depended heavily on how they tested perf/watt. Those sites that use the power and performance data from one game saw very different results: TPU saw about a 50% gain whereas Techspot / HUB saw a 70%+ gain, because HUB used Doom Eternal, where the 5700XT underperformed, and TPU used CP2077, where the 6900XT underperformed. If you look at the HUB average uplift of the 6800XT and 6900XT, it actually matched up really well with AMD's claimed improvements.

So the AMD method seems to be: compare SKU to SKU at stock settings, measure the average frame rate difference across a suite of titles, and then work out the perf/watt delta.
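As a rough illustration of that method, below is a minimal Python sketch of the SKU-vs-SKU calculation. The fps and board-power figures are placeholders picked for the example, not AMD's actual test data.

```python
# Illustrative sketch of the SKU-vs-SKU perf/watt method described above.
# All fps and board-power numbers are placeholders, not AMD's data.

def perf_per_watt(avg_fps_by_game, board_power_w):
    """Average fps across a test suite divided by typical board power."""
    suite_avg = sum(avg_fps_by_game.values()) / len(avg_fps_by_game)
    return suite_avg / board_power_w

old_sku = {"Game A": 60, "Game B": 75, "Game C": 90}     # e.g. a 5700 XT-class card at 225 W
new_sku = {"Game A": 124, "Game B": 155, "Game C": 186}  # e.g. a 6800 XT-class card at 300 W

ppw_old = perf_per_watt(old_sku, board_power_w=225)
ppw_new = perf_per_watt(new_sku, board_power_w=300)

print(f"perf/watt uplift: {ppw_new / ppw_old - 1:+.0%}")  # +55% with these placeholder numbers
```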

With the >50% claim I do agree with using 50% as a baseline, but I feel confident that they are not doing a best-vs-worst comparison, because that is not something AMD has done before under the current leadership.

What it does do, though, is give us some numbers to play with. If the Enermax numbers are correct and top N31 is using 420W, then you get the following numbers.

Baseline | TBP (W) | Power Delta | Perf/Watt Multi | Performance Multi | Estimate vs 4090 in Raster
6900XT | 300 | 1.4x | 1.5x | 2.1x | +10%
6900XT | 300 | 1.4x | 1.64x (to match 6900XT delta; extreme upper bound!) | 2.3x | +23%
Ref 6950XT | 335 | 1.25x | 1.5x | 1.88x | +15%
Ref 6950XT | 335 | 1.25x | 1.64x (again, extreme upper bound!) | 2.05x | +25%
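For anyone who wants to play with the figures, here is a minimal Python sketch of the arithmetic behind the table above (performance multiplier = power delta x claimed perf/watt multiplier). The 4K "share of the 4090" values are rough placeholder assumptions used to approximately reproduce the last column; they are not numbers from the post.

```python
# Napkin math from the table above: perf multiplier = power delta x perf/watt multiplier.
# The 4K share-of-4090 baselines below are rough placeholder assumptions, not data from the post.

TOP_N31_TBP = 420  # W, per the Enermax leak discussed above

baselines = [
    # name, TBP (W), assumed 4K performance as a fraction of the RTX 4090
    ("6900XT",     300, 0.53),
    ("Ref 6950XT", 335, 0.61),
]

for name, tbp, share_of_4090 in baselines:
    power_delta = TOP_N31_TBP / tbp
    for ppw_multi in (1.5, 1.64):  # the ">50%" claim vs. an RDNA2-like upper bound
        perf_multi = power_delta * ppw_multi
        vs_4090 = perf_multi * share_of_4090 - 1
        print(f"{name}: {power_delta:.2f}x power, {ppw_multi}x perf/W "
              f"-> {perf_multi:.2f}x perf, {vs_4090:+.0%} vs 4090 at 4K")
```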


Now, the assumption I am making here is pretty obvious: that the design goal of N31 was 420W to begin with, which would mean it is wide enough to use that power in the saner part of the f/V curve. If it was not 420W to begin with and has been pushed there by increasing clocks, then obviously the perf/watt will drop off and the numbers above will be incorrect.

The other assumption is that the Enermax numbers are correct. It is entirely possible that the reference TBP for N31 will be closer to 375W, which with these numbers would put it about on par with the 4090.

My view is that the TBP will be closer to 375-400W rather than 420W, in which case anywhere from roughly equal to 5% ahead of the 4090 seems to be the ballpark I expect top N31 to land in. There is room for a positive surprise should AMD's >50% claim be like their >5 GHz claim, or the >15% single-thread claim in the Zen 4 teaser slide, and turn out to be a rather large underselling of what was actually achieved. Still, I await actual numbers on that front, and until then I am assuming something in the region of +50%.
Solid analysis. A lot of assumptions in there but I agree with them, given the complete lack of any concrete info at this point.
Posted on Reply
#52
ModEl4
It won't have a +50% performance/W increase at 420W (7950XT according to the Enermax leak), only at 330W (7900XT according to the Enermax leak); as you go up in power consumption, the efficiency loss grows.
So comparing the 7900XT vs the 6900XT, that gives +65% or a little above that relative to the 6900XT (so around 7%-10% slower than an RTX 4090 at 4K, depending on the testbed's CPU), which would be a very good result for a 330W TBP card (and at QHD only 2%-5% slower, so some OC 7900XT models should at least match the 4090 at 1440p).
To reach 2X vs a 6900XT, full Navi31 will need to hit near 3.3GHz I would imagine (actual in-game average clocks), and around 3.5GHz for 2.1X (maybe some liquid-cooled designs at near 500W TBP, not unlike the Sapphire Toxic Radeon RX 6900 XT Extreme Edition, which went from 300W TBP to 430W and from a 2250MHz boost to 2730MHz (Toxic boost)).
Posted on Reply
#53
Valantar
ModEl4So comparing the 7900XT vs the 6900XT, that gives +65% or a little above that relative to the 6900XT (so around 7%-10% slower than an RTX 4090 at 4K, depending on the testbed's CPU), which would be a very good result for a 330W TBP card (and at QHD only 2%-5% slower, so some OC 7900XT models should at least match the 4090 at 1440p).
I don't know where you're getting your math from here, but your 1440p numbers don't make sense. A 65% performance increase over the 6900 XT at 1440p would make it 73% × 1.65 = 120.5% of the 4090's 1440p performance.

Other than that, though, I generally agree that not expecting too much is (always) the sensible approach. The 4090 is fast, but also very expensive, so even coming close at 2160p will be great as long as pricing is also good.
Posted on Reply
#54
ModEl4
ValantarI don't know where you're getting your math from here, but your 1440p numbers don't make sense. A 65% performance increase over the 6900 XT at 1440p would make it 73% × 1.65 = 120.5% of the 4090's 1440p performance.

Other than that, though, I generally agree that not expecting too much is (always) the sensible approach. The 4090 is fast, but also very expensive, so even coming close at 2160p will be great as long as pricing is also good.
65% at 4K, not at QHD; as you go down in resolution the gap gets smaller, and the efficiency claims are made at 4K...
For example, the 6950XT is nearly +70% vs the 3060Ti at 4K, but at QHD the difference is closer to +60%.
And more importantly, with this kind of power, designs at that level are much more CPU/engine limited at QHD than a 6950XT is, so the 7900XT will be hitting fps walls in many games just like the 4090 does (though less pronounced than on the 4090, since it's 6 shader engines vs 11 GPCs).
Posted on Reply
#55
btk2k2
ValantarI'm not familiar with those Enermax numbers you mention, but there's also the variable of resolution scaling that needs consideration here. It looks like you're calculating only at 2160p? That obviously makes sense for a flagship SKU, but it also means that (unless RDNA3 scales much better with resolution than RDNA2), these cards will absolutely trounce the 4090 at 1440p - a 2.1x performance increase from the 6900XT at 1440p would go from 73% performance v. the 4090 to 153% performance - and that just sounds (way) too good to be true. It would definitely be (very!) interesting to see how customers would react to a card like that if that were to happen (and AMD didn't price it stupidly), but I'm too skeptical to believe that to be particularly likely.
Enermax had PSU recommendations for unreleased GPUs with TBP figures that could be calculated. It could just be placeholder stuff, but it worked out that top N31 was based on a 420W TBP.

Yes, I am talking about 4K; I should have been clear about that. I very much doubt these numbers will hold below 4K for the flagship parts, simply due to CPU bottlenecking capping the maximum fps some games reach.

My personal estimate is that performance is going to be in the 4090 ballpark, with a TBP in the 375W region and AIBs offering OC models up to 420W or so TBPs, but those will hit diminishing returns because they are pushing clock speed rather than having a wider die.
ModEl4It won't have a +50% performance/W increase at 420W (7950XT according to the Enermax leak), only at 330W (7900XT according to the Enermax leak); as you go up in power consumption, the efficiency loss grows.
So comparing the 7900XT vs the 6900XT, that gives +65% or a little above that relative to the 6900XT (so around 7%-10% slower than an RTX 4090 at 4K, depending on the testbed's CPU), which would be a very good result for a 330W TBP card (and at QHD only 2%-5% slower, so some OC 7900XT models should at least match the 4090 at 1440p).
To reach 2X vs a 6900XT, it will need to hit near 3.3GHz I would imagine (actual in-game average clocks), and around 3.5GHz for 2.1X (maybe some liquid-cooled designs at near 500W TBP, not unlike the Sapphire Toxic Radeon RX 6900 XT Extreme Edition, which went from 300W TBP to 430W and from a 2250MHz boost to 2730MHz (Toxic boost)).
It depends entirely on what TBP N31 was designed around. N23, for example, is designed around a 160W TBP, and N21 was designed around a 300W TBP. The performance delta between the 6600XT and the 6900XT almost perfectly matches the power delta, because the parts were designed for their respective TBPs and so have the correct balance of functional units to clock speed to voltage.

A 50%+ perf/watt improvement is possible at 420W if N31 was designed to hit 420W in the sane part of the V/f curve, because the number of functional units would be balanced around that.
Posted on Reply
#56
ModEl4
btk2k2It depends entirely on what TBP N31 was designed around. N23, for example, is designed around a 160W TBP, and N21 was designed around a 300W TBP. The performance delta between the 6600XT and the 6900XT almost perfectly matches the power delta, because the parts were designed for their respective TBPs and so have the correct balance of functional units to clock speed to voltage.

A 50%+ perf/watt improvement is possible at 420W if N31 was designed to hit 420W in the sane part of the V/f curve, because the number of functional units would be balanced around that.
Based on statements from AMD personnel regarding power efficiency and the TBP targets for RDNA3 vs what the competition is targeting, that doesn't seem to be the case. Also, if it were true, why would the lower-end part drop from 420W to 330W (according to Enermax) if AMD didn't need that to hit the 50% performance/W claim?
Unless the Enermax leak is without merit, in which case all the above assumptions are invalid. (My proposed Navi31 frequencies needed to hit 2X and 2.1X vs the 6900XT have absolutely nothing to do with the Enermax leak, btw.)
Posted on Reply
#57
Valantar
ModEl465% at 4K, not at QHD; as you go down in resolution the gap gets smaller, and the efficiency claims are made at 4K...
For example, the 6950XT is nearly +70% vs the 3060Ti at 4K, but at QHD the difference is closer to +60%.
And more importantly, with this kind of power, designs at that level are much more CPU/engine limited at QHD than a 6950XT is, so the 7900XT will be hitting fps walls in many games just like the 4090 does (though less pronounced than on the 4090, since it's 6 shader engines vs 11 GPCs).
You're mixing up your baselines for comparison here. AMD scales worse when moving to higher resolutions relative to Nvidia, not relative to itself. On the other hand AMD's perf/W claims are only relative to itself. Relating that to the relative resolution scaling between chipmakers is a fundamental logical error. We can use these efficiency claims to make speculative extrapolations of performance which can then be compared to the performance of Nvidia's competing cards, but what you seem to be doing here is lumping both moves together in a way that conflates AMD's claimed gen-over-gen efficiency improvement with the relative resolution scaling v. Nvidia.

If, say, a 7900XT is 65% faster than a 6900XT at 2160p, it will most likely be very close to 65% faster at 1440p as well, barring some external bottleneck (CPU or otherwise). There can of course also be on-board, non-GPU bottlenecks (RAM amount and bandwidth in particular), but those tend to show up at higher resolutions, not lower ones, and would then suggest more than a 65% gain at sub-2160p resolutions if 65% were the baseline increase at 2160p with the bottleneck at that resolution.

It is of course possible that RDNA3 has some form of architectural improvement to alleviate that poor high resolution scaling that we've seen in RDNA2, which would then lead it to scale better at higher resolutions relative to RDNA2, and thus also deliver non-linearly improved perf/W at 2160p in particular - but that's a level of speculation well beyond the basic napkin math we've been engaging in here, as that requires quite fundamental, low-level architectural changes, not just "more cores, better process node, higher clocks, same or more power".
Posted on Reply
#58
ModEl4
ValantarYou're mixing up your baselines for comparison here. AMD scales worse when moving to higher resolutions relative to Nvidia, not relative to itself. On the other hand AMD's perf/W claims are only relative to itself. Relating that to the relative resolution scaling between chipmakers is a fundamental logical error. We can use these efficiency claims to make speculative extrapolations of performance which can then be compared to the performance of Nvidia's competing cards, but what you seem to be doing here is lumping both moves together in a way that conflates AMD's claimed gen-over-gen efficiency improvement with the relative resolution scaling v. Nvidia. If, say, a 7900XT is 65% faster than a 6900XT at 2160p, it will most likely be very close to 65% faster at 1440p as well, barring some external bottleneck (CPU or otherwise). There can of course also be on-board, non-GPU bottlenecks (RAM amount and bandwidth in particular), but those tend to show up at higher resolutions, not lower ones, and would then suggest more than a 65% gain at sub-2160p resolutions if 65% were the baseline increase at 2160p with the bottleneck at that resolution.
ValantarIt is of course possible that RDNA3 has some form of architectural improvement to alleviate that poor high resolution scaling that we've seen in RDNA2, which would then lead it to scale better at higher resolutions relative to RDNA2, and thus also deliver non-linearly improved perf/W at 2160p in particular - but that's a level of speculation well beyond the basic napkin math we've been engaging in here, as that requires quite fundamental, low-level architectural changes, not just "more cores, better process node, higher clocks, same or more power".
You are right regarding mixing vendors' results (or even different generations, where possible); I tried to change the 3060Ti to the 6700XT, but btk2k2 had already posted and I didn't want to edit it afterwards, since it didn't alter what I was pointing out anyway.
It doesn't change much; contrary to what you say, the difference is actually more pronounced now, comparing AMD to AMD.
Check the latest VGA review (ASUS Strix):
QHD:
6700XT = 50%
6950XT = 75%
1.5X
4K:
6700XT = 35%
6950XT = 59%
1.69X
So the difference went from +69% at 4K to +50% at QHD...
It's very basic stuff and has happened forever (the 4K difference between two cards is higher than the QHD difference in 99% of cases; there are exceptions, but they are easily explainable, like the RX 6600 vs RTX 3050 with its cut-down 32MB Infinity Cache, etc.)
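The ratio arithmetic here can be sanity-checked with a couple of lines of Python; the percentages below are the relative-performance figures quoted from the review summary above (where the reviewed card is 100%).

```python
# Quick check of the ratio arithmetic above, using TPU-style relative-performance
# percentages (the card under review = 100%).

def lead(slower_pct, faster_pct):
    """How much faster the second card is than the first, as a fraction."""
    return faster_pct / slower_pct - 1

qhd = lead(50, 75)  # 6700XT = 50%, 6950XT = 75% at QHD
uhd = lead(35, 59)  # 6700XT = 35%, 6950XT = 59% at 4K

print(f"6950XT vs 6700XT: +{qhd:.0%} at QHD, +{uhd:.0%} at 4K")  # +50% vs ~+69%
```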
Posted on Reply
#59
AnarchoPrimitiv
TheinsanegamerNPeople are still using the "mindshare" excuse for AMD's inability to straighten anything out of their own accord for over a decade?


Gonna guess almost no availability. The last few GPU launches from AMD have been paper launches.


In America the only store like that left is Micro Center, and most Americans live a minimum of 3+ hours away. The cost of gas will eat up whatever savings you'd get. Anything local disappeared years ago; the only things left are fly-by-night computer repair shops that I wouldn't trust with a Raspberry Pi, let alone anything expensive.
For all intents and purposes, AMD GPUs DO have their stuff straightened out...I can only speak for my own experience but I've yet to have a single problem or issue with my 5700xt, which is why I'll be upgrading to RDNA3 this time around....plus, regardless of my feelings on AMD, I just can't morally bring myself to give money to Nvidia and reward their behavior. Also, any marketshare AMD captures at Nvidia's expense will necessarily improve the situation for consumers...the most ideal situation being a 50/50 marketshare between the two and a balance of power.

I could be wrong, but it sounds like you're implying that AMD should be able to perform at the same level as Nvidia, when in reality that's just not possible. AMD's 2021 R&D budget was $2 billion, which has to be divided between x86 and graphics, and based on the fact that x86 is a bigger revenue source for AMD and x86 has a much larger T.A.M., we can safely assume that x86 is getting 60% of that budget. This means that AMD has to compete against Nvidia with less than $1 billion R&D budget while Nvidia had a $5.27 billion R&D budget for 2021.....they're nowhere near competing on a level playing field. It actually goes to show how impressive AMD is, especially considering RDNA2 matched or even beat the 30 series in Raster and all while AMD has a fifth of the financial resources to spend on R&D. It's even more impressive what AMD has been able to do against Intel considering Intel has a $15 billion R&D budget for 2021!
Posted on Reply
#61
TheLostSwede
News Editor
cvaldesA lot of those local mom-and-pop PC stores have steadily shuttered in recent years as their Baby Boomer owners who started their businesses in the Eighties have reached retirement age with no one to pick up the reins.

I am grateful that there are still a few great mom-and-pop PC stores in my area.

Computers are commodities now; people buy and throw them away (er, recycle) every few years. Hell, even Microsoft belatedly accepted the fact that most people don't upgrade Windows, which is why you can get an OEM key for Windows 10 Pro for $7 these days.

The number of people who open up a PC case to remove a component and install a new one is a very, very small portion of the consumer userbase. Joe Consumer is just going to buy a Dell, HP, or Alienware box and when it starts running "too slow" they'll just buy a new one and put the old computer in a kid's room.
Still plenty of computer shops both in Taiwan and Sweden.
Where I live now, I have less than a 10 minute walk to one.
Posted on Reply
#62
Legacy-ZA
CallandorWoTI really doubt RDNA3 will beat a 4090 at 4K gaming, but I think it may match it at 1440p gaming. Plus you have DLSS 3, which, let's face it, AMD just won't be able to pull something like that off. Doubling the frames at 4K with AI? I just don't see AMD having that technical capability.

but, I still plan to buy RDNA3 because I only game at 1440p.
I think everyone is going to be in for a surprise, even nVidia. :):)
Posted on Reply
#63
Valantar
ModEl4You are right regarding mixing vendor's results (or even different gen if possible), I tried to change the 3060Ti to 6700XT but btk2k2 already posted and i didn't want to edit it after since it didn't altered anyway what i was pointing out.
It doesn't change much, on the contrary to what you say it's more pronounced difference now comparing AMD to AMD.
Check the latest vga review (ASUS strix):
QHD:
6700XT = 50%
6950XT = 75%
1.5X
4K:
6700XT = 35%
6950XT = 59%
1.69%
So the difference from +69% at 4K, it went to +50% at QHD...
It's very basic stuff, it happened since forever (4K difference between 2 cards is higher than in QHD in 99% of cases, there are exceptions but are easily explainable like RX6600 vs RTX 3050- castrated Infinity cache at 32MB etc)
Again I don't see this as necessarily illustrative of future implementations - all that shows, after all, is that different implementations of the same architecture will have different limitations/bottlenecks, whether they be compute, memory amount/bandwidth, etc. What that does show is the limitations of a narrow-and-fast die vs. a wide-and-slow(ish) one: the latter has more left over to process higher resolutions, while the former struggles a lot more as the load grows heavier (or conversely, at lower resolutions the wider die is more held back by non-compute factors). Exactly where the limitation lies - clock speeds, memory amount, memory bandwidth, Infinity Cache, or something else - is almost impossible to tell from an end-user POV, at least in real-world workloads, but as higher resolutions are more costly in pretty much every way, it makes sense for AMD not to optimize lower tier SKUs for higher resolutions if that also increases costs (through more VRAM or a wider bus, for example).

Of course this also applies to RDNA3 - the success of any given SKU is entirely dependent on AMD configuring it in a sensible way. I'm just assuming that they'll do a similarly decent job at this as with RDNA2, and beyond that I feel that this is too fine-grained a level of speculation for me to really engage in. I'm sure that with a sufficient dataset it would be possible to come up with an equation that (roughly) predicted real-world relative performance within an architecture dependent on core counts, RAM, bus width, clocks, power, etc., but until someone does I'll assume those models only exist in AMD's labs.
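As a toy illustration of the kind of model being described, here is a short Python sketch that fits a least-squares equation to a made-up spec table. Every number in it is a placeholder invented for the example; a usable model would need many real SKUs and more variables (bandwidth, cache, power, and so on).

```python
# Toy illustration of the kind of model described above: a least-squares fit of
# relative 4K performance from a couple of spec columns. The "dataset" is made up
# for the example, not real review data.
import numpy as np

# columns: compute units, game clock (GHz) -- hypothetical RDNA2-like entries
specs = np.array([
    [32, 2.35],
    [40, 2.42],
    [72, 2.02],
    [80, 2.02],
])
rel_perf = np.array([0.35, 0.45, 0.85, 1.00])  # made-up relative performance, flagship = 1.0

# Fit rel_perf ~= specs @ w + intercept via least squares.
X = np.column_stack([specs, np.ones(len(specs))])
coeffs, *_ = np.linalg.lstsq(X, rel_perf, rcond=None)

# Naive extrapolation to a hypothetical wider, faster part -- illustrative only.
print("predicted relative performance:", np.array([96, 2.6, 1.0]) @ coeffs)
```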

All of this is also dependent on what you choose as your baseline for comparison, which is doubly troublesome when what is being compared is a speculation on future products - at this point there are so many variables in play that I've long since given up :p


Edit: wide-and-slow, not fast-and-slow. Sometimes my brain and my typing are way out of sync.
Posted on Reply
#64
cvaldes
ValantarOf course this also applies to RDNA3 - the success of any given SKU is entirely dependent on AMD configuring it in a sensible way. I'm just assuming that they'll do a similarly decent job at this as with RDNA2, and beyond that I feel that this is too fine-grained a level of speculation for me to really engage in. I'm sure that with a sufficient dataset it would be possible to come up with an equation that (roughly) predicted real-world relative performance within an architecture dependent on core counts, RAM, bus width, clocks, power, etc., but until someone does I'll assume those models only exist in AMD's labs.
They can probably do some sort of predictive modeling of expected performance. After all, semiconductors these days are designed with computers; it's not like the old days when computers were designed with a slide rule, a pad of paper, and a pencil.

That said, AMD, Intel, NVIDIA, Apple and others test a wide variety of prototype samples in their labs. A graphics card isn't just the GPU, so different combinations of components will yield different performance results, with different power draws, COGS, whatever.

Indeed consumer grade graphics card designs are optimized for specific target resolutions (1080p, 1440p, 2160p). I have a 3060 Ti in a build for 1440p gaming; sure the 4090 will beat it, but is it worth it? After all, the price difference between a 4090 and my 3060 Ti is likely $1200-1500. That buys a lot of games and other stuff.

For sure, AMD will test all those different prototypes in their labs but only release one or two products to market. It's not that they can't put a 384-bit memory bus on an entry-level GPU and hang 24GB of VRAM off it. The problem is that it makes little sense from a business standpoint. Yes, someone would buy it, including some TPU forum participant, probably.

I know you understand this but some other people online don't understand what follows here.

AMD isn't making a graphics card for YOU. They are making graphics cards for a larger graphics card audience. AMD is not your mom cooking your favorite breakfast so when you emerge from her basement, it's waiting for you hot on the table.
Posted on Reply
#65
EatingDirt
GunShotDefine "a lot more resources", uhm... 65%... or 104%... or just ~8.5% because any shift in increased value could VALIDATE such a claim and if memory serves me, AMD definitely refused to attach any solid value to that expected ++ PR statement ++. :shadedshu:

All the "should & woulds" used above is not an AMD or NVIDIA stable business model forecast, especially for consumers.
RDNA2 is something like ~5-20% behind Nvidia in terms of raytracing efficiency right now, depending on the game. I don't think it's a stretch to assume they can reduce that gap significantly because of the larger focus on Raytracing for RDNA3.
Posted on Reply
#66
cvaldes
EatingDirtRDNA2 is something like ~5-20% behind Nvidia in terms of raytracing efficiency right now, depending on the game. I don't think it's a stretch to assume they can reduce that gap significantly because of the larger focus on Raytracing for RDNA3.
Don't forget that Ampere was made on a Samsung process that many consider inferior to what TSMC was offering.

Now both AMD and NVIDIA are using TSMC.

That said, NVIDIA may be putting more effort into improving their Tensor cores especially since ML is more important for their Datacenter business.

From a consumer gaming perspective, almost everyone who turns on ray tracing will enable some sort of image upscaling option. Generally speaking the frame rates for ray tracing without some sort of image upscaling help are too low for satisfying gameplay with current technology.

Besides, Tensor cores have other use cases for consumers beyond DLSS, like image replacement.
Posted on Reply
#67
GunShot
EatingDirtRDNA2 is something like ~5-20% behind Nvidia in terms of raytracing efficiency right now, depending on the game. I don't think it's a stretch to assume they can reduce that gap significantly because of the larger focus on Raytracing for RDNA3.
What? "~5-20% behind NVIDIA"?! Yeah, but... heck no! :laugh:

Only a deliberately nerfed, AMD-sponsored title can come close to NVIDIA's RTX GPUs' much superior RT performance. It is not just about how many RT cores/processes/executions a GPU has vs the competitor. :shadedshu:
The RTX 3070 also more than doubles the performance of the RX 6800. Heck, even the RTX 3060 12GB beats the 6900 XT by 16% at 1080p and 23% at 1440p.
www.tomshardware.com/features/amd-vs-nvidia-best-gpu-for-ray-tracing
Posted on Reply
#68
ModEl4
ValantarAgain I don't see this as necessarily illustrative of future implementations - all that shows, after all, is that different implementations of the same architecture will have different limitations/bottlenecks, whether they be compute, memory amount/bandwidth, etc. What that does show is the limitations of a narrow-and-fast die vs. a wide-and-slow(ish) one: the latter has more left over to process higher resolutions, while the former struggles a lot more as the load grows heavier (or conversely, at lower resolutions the wider die is more held back by non-compute factors). Exactly where the limitation lies - clock speeds, memory amount, memory bandwidth, Infinity Cache, or something else - is almost impossible to tell from an end-user POV, at least in real-world workloads, but as higher resolutions are more costly in pretty much every way, it makes sense for AMD not to optimize lower tier SKUs for higher resolutions if that also increases costs (through more VRAM or a wider bus, for example).

Of course this also applies to RDNA3 - the success of any given SKU is entirely dependent on AMD configuring it in a sensible way. I'm just assuming that they'll do a similarly decent job at this as with RDNA2, and beyond that I feel that this is too fine-grained a level of speculation for me to really engage in. I'm sure that with a sufficient dataset it would be possible to come up with an equation that (roughly) predicted real-world relative performance within an architecture dependent on core counts, RAM, bus width, clocks, power, etc., but until someone does I'll assume those models only exist in AMD's labs.

All of this is also dependent on what you choose as your baseline for comparison, which is doubly troublesome when what is being compared is a speculation on future products - at this point there are so many variables in play that I've long since given up :p
You're focusing only on the GPU and trying to work out which GPU characteristic could possibly cause this behaviour (the 4K difference being greater than the QHD difference), when the problem is multifaceted and (mostly) not exclusively GPU-related.
You will see that, going from 4K to QHD, many games run 2X or so faster.
This means that each frame is rendered in half the time.
Let's take two cards, A and B, where the higher-end one (A) has double the speed at 4K (double the fps).
Depending on the engine and its demands on resources other than the GPU (mainly the CPU, but also system RAM, storage, etc. - essentially every aspect that plays a role in the fps outcome), even if GPU A is, on specs, perfectly capable of again producing double the frames at QHD, it needs the other parts of the PC (CPU etc.) that are involved in the fps outcome to be able to support that doubling too (and of course the game engine must be able to scale as well, which is another problem).
That's the main factor affecting this behaviour.
But let's agree to disagree.

Edit: this is the main reason Nvidia is pursuing other avenues like frame generation, to try to maintain a meaningful generational performance gap (Ampere -> Ada etc.) as an incentive to upgrade.
With each new GPU/CPU generation it gradually becomes harder and harder to sustain those performance gaps as the resolution goes down, because GPU advancements have outpaced CPU and memory advancements through the years (especially if you consider, on the CPU side, how many cores are actually utilised by the vast majority of games, and this keeps adding up from gen to gen).
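A tiny Python sketch of that bottleneck argument, with purely illustrative frame-time numbers: the frame rate is capped by whichever of the CPU or GPU takes longer per frame, so a 2x faster GPU only shows its full lead where it is GPU-bound.

```python
def fps(cpu_ms, gpu_ms):
    # Frame rate is limited by the slower of the CPU and GPU per-frame cost.
    return 1000 / max(cpu_ms, gpu_ms)

cpu_ms = 10.0  # per-frame CPU/engine cost, largely resolution-independent (illustrative)

# GPU render time per frame (ms): card A vs. a card B that is twice as fast
gpu_a = {"4K": 32.0, "QHD": 16.0}
gpu_b = {"4K": 16.0, "QHD": 8.0}

for res in ("4K", "QHD"):
    a, b = fps(cpu_ms, gpu_a[res]), fps(cpu_ms, gpu_b[res])
    print(f"{res}: A = {a:.0f} fps, B = {b:.0f} fps, B leads by +{b / a - 1:.0%}")

# 4K:  A = 31 fps, B = 62 fps  -> +100% (both cards GPU-bound)
# QHD: A = 62 fps, B = 100 fps -> +60%  (card B hits the 100 fps CPU ceiling)
```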
Posted on Reply
#69
cvaldes
You can only go so far cramming more raster cores on a die. At some point, it might be more beneficial to differentiate silicon and put specialized cores for certain tasks.

In the same way, at home you might be the one to buy groceries, make dinner, and wash the dirty pots and dishes. In a very, very small restaurant, you might be able to pull this off. But let's say you have fifty seats. Would you hire someone else to do all the same stuff that you do? Should a restaurant have ten people that all do the same stuff?

Yes, you can ray trace with traditional raster cores. It can be done, but they aren't optimized for that workload. So NVIDIA carves out a chunk of die space and puts in specialized transistors. Same with Tensor cores. In a restaurant kitchen, the pantry cook washes lettuce in a prep sink, not the potwasher's sink. Using different tools and systems for different workloads and tasks isn't a new concept.

I know some people in these PC forums swear that they only care about 3D raster performance. That's not going to scale infinitely just like you can't have 50 people buying groceries, cooking food, and washing their own pots and pans in a hospital catering kitchen.

AMD started including RT cores with their RDNA2 products. At some point I expect them to put ML cores on their GPU dies. We already see media encoders too.

AMD needs good ML cores anyhow if they want to stay competitive in Datacenter. In the end, a lot of success will be determined by the quality of the development environment and software, not just the number of transistors you can put on a die.
Posted on Reply
#70
mechtech
"AMD will announce its first RDNA 3 based GPUs on the 3rd of November"

Announcement or hard launch or both??
Posted on Reply
#71
Space Lynx
Astronaut
mechtech"AMD will announce its first RDNA 3 based GPUs on the 3rd of November"

Announcement or hard launch or both??
no one knows yet. would be neat if it is both, i will def have best buy and amazon up and refreshing the entire time during the live stream. lol
Posted on Reply
#72
Valantar
mechtech"AMD will announce its first RDNA 3 based GPUs on the 3rd of November"

Announcement or hard launch or both??
What you're quoting is all that's been said, so who knows? If they want to make use of the holiday shopping season they'd need to get cards out the door ASAP though.
Posted on Reply
#73
kapone32
FluffmeisterNo sarcasm required, they're going to be fast and efficient AMD fanboys' wet dreams, and will be gobbled up by the scalpers too.
Not if you can go to your local brick-and-mortar store and have a friend get you one before they are stocked in the store. I still feel AMD is going to do a mic drop on Nov 3. The 6800XT is just as fast as 2 Vega 64s in Crossfire at the same power draw as one card. They are saying the same thing about 7000 vs 6000, so it could literally mean that you are getting up to 80% more performance. With how powerful the 6800XT already is, I have already put my name down on one. I asked for the cheapest 7800XT (hopefully reference) as I will be putting a water block on the card.
CallandorWoTno one knows yet. would be neat if it is both, i will def have best buy and amazon up and refreshing the entire time during the live stream. lol
I can see them being available like 2 weeks before Xmas.
Posted on Reply
#74
EatingDirt
GunShotWhat? "~5-20% behind NVIDIA"?! Yeah, but... heck no! :laugh:

Only a deliberately nerfed, AMD-sponsored title can come close to NVIDIA's RTX GPUs' much superior RT performance. It is not just about how many RT cores/processes/executions a GPU has vs the competitor. :shadedshu:



www.tomshardware.com/features/amd-vs-nvidia-best-gpu-for-ray-tracing
TPU's own benchmarks compare the efficiency of the 6xxx AMD series to the Nvidia 3xxx series (and now the 4090). Here's the biggest outlier:
cyberpunk-2077-rt-3840-2160.png (500×570) (tpucdn.com)

Cyberpunk shows AMD GPUs at roughly -69% and Nvidia 3xxx series GPUs at around -50%. This is the game with the largest difference, and probably the most intensive raytracing, that TPU has tested in their most recent benchmarks. So overall we have a ~20% difference, and around ~30% if you include the 4090.
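For clarity, here is the simple arithmetic behind that comparison in Python, using the rounded percentages quoted above (read as performance lost when RT is enabled).

```python
# Arithmetic behind the comparison above: the "RT hit" is the share of performance
# lost with ray tracing enabled; the gap is the difference between those hits.

amd_rt_hit = 0.69     # RDNA2 loses ~69% in Cyberpunk 2077 RT at 4K (per the chart above)
nvidia_rt_hit = 0.50  # Ampere loses ~50% in the same test

print(f"gap in RT hit: {amd_rt_hit - nvidia_rt_hit:.0%}")  # ~19%, i.e. the "~20%" above
print(f"performance kept with RT on: AMD {1 - amd_rt_hit:.0%}, Nvidia {1 - nvidia_rt_hit:.0%}")
```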

In Far Cry 6 the AMD GPU is actually ~5% more efficient than the Nvidia one, and in F1 Nvidia is only ~8% more efficient.
Posted on Reply
#75
cvaldes
mechtech"AMD will announce its first RDNA 3 based GPUs on the 3rd of November"

Announcement or hard launch or both??
It depends on what they say on November 3.

They could say "available now" or "available [insert future date]". About the only thing they won't say is "We started selling these yesterday."

Wait until after their event and you'll know, just like the rest of us. It's not like anyone here is privy to AMD's confidential marketing event plans.
Posted on Reply