Thursday, October 20th 2022
AMD Announces RDNA 3 GPU Launch Livestream
It's hardly a secret that AMD will announce its first RDNA 3 based GPUs on the 3rd of November, and the company has now officially announced that it'll hold a livestream starting at 1:00 pm (13:00) Pacific Daylight Time. The event goes under the name "together we advance_gaming". AMD didn't share much in terms of details about the event; all we know is that "AMD executives will provide details on the new high-performance, energy-efficient AMD RDNA 3 architecture that will deliver new levels of performance, efficiency and functionality to gamers and content creators."
Source:
AMD
104 Comments on AMD Announces RDNA 3 GPU Launch Livestream
The $280 6600XT would be a good card if its raytracing performance wasn't basically unusable, and it's RTX/DXR titles that are really driving GPU upgrades. If you're not using raytracing then even an old GTX 1070 is still fine for 1080p60, 5 years later.
Solid analysis. A lot of assumptions in there but I agree with them, given the complete lack of any concrete info at this point.
So comparing the 7900XT vs the 6900XT, that gives +65% or a little above relative to the 6900XT (so around 7%-10% slower than an RTX 4090 at 4K, depending on the testbed's CPU), which would be a very good result for a 330W TBP card (and at QHD only 2%-5% slower, so some OC 7900XT models will be at least matching the 4090 at 1440p).
To reach 2X vs a 6900XT, full Navi 31 will need to hit near 3.3GHz I would imagine (actual in-game average clocks), and around 3.5GHz for 2.1X (maybe some liquid-cooled designs at near 500W TBP, not unlike the Sapphire Toxic Radeon RX 6900 XT Extreme Edition, which went from 300W TBP to 430W and from a 2250MHz boost to 2730MHz (Toxic boost)).
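Just to make the assumptions behind that sort of napkin math explicit, here's a minimal sketch; the 2.25GHz baseline clock and the 1.35x "wider chip at equal clocks" factor are my own placeholders (they happen to reproduce the 3.3/3.5GHz figures above), not anything AMD has confirmed:

```python
# Pure napkin math: assumes fps scales linearly with average in-game clock,
# which it never quite does. Every input here is a guess, not a confirmed spec.

BASELINE_CLOCK_GHZ = 2.25   # rough 6900XT in-game average clock (assumption)
PER_CLOCK_UPLIFT   = 1.35   # assumed gain from the wider chip at equal clocks (made up)

def clock_needed(target_uplift):
    """Average clock (GHz) a hypothetical full Navi 31 would need for a given uplift."""
    return BASELINE_CLOCK_GHZ * target_uplift / PER_CLOCK_UPLIFT

for target in (2.0, 2.1):
    print(f"{target:.1f}x vs 6900XT -> ~{clock_needed(target):.2f} GHz")
```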
Other than that, though, I generally agree that not expecting too much is (always) the sensible approach. The 4090 is fast, but also very expensive, so even coming close at 2160p will be great as long as pricing is also good.
For example, the 6950XT is near +70% vs the 3060Ti at 4K, but when you compare them at QHD the difference is near +60%.
And more importantly, with this kind of power the designs at that level are much more CPU/engine limited at QHD than the 6950XT, so the 7900XT will be hitting fps walls in many games just like the 4090 does (though less pronounced than on the 4090, since it's 6 shader engines vs 11 GPCs).
Yes, I am talking about 4K; I should have been clear about that. I very much doubt these numbers will hold below 4K for the flagship parts, simply due to CPU bottlenecking capping the maximum fps some games reach.
My personal estimate is that performance is going to be in the 4090 ballpark, with TBP in the 375W region and AIBs offering OC models up to 420W or so TBP, but those will hit diminishing returns because they push clock speed rather than having a wider die. It depends entirely on what TBP N31 was designed around. N23, for example, is designed around a 160W TBP and N21 was designed around a 300W TBP. The performance delta between the 6600XT and the 6900XT almost perfectly matches the power delta because the parts were designed for their respective TBPs, so they are the correct balance of functional units to clock speed to voltage.
A 50%+ perf/watt improvement is possible at 420W if N31 was designed to hit 420W in the sane part of the V/F curve, because the number of functional units would be balanced around that.
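As a quick sanity check of that claim, using only the numbers already in this thread (the 300W N21 design point and the 50% perf/watt figure are taken at face value here, not confirmed specs):

```python
# Quick check: performance scales as (perf per watt) x (watts), taking the thread's numbers at face value.
baseline_tbp_w = 300    # N21 / 6900XT design point (from the post above)
target_tbp_w   = 420    # hypothetical OC N31 design point
perf_per_watt  = 1.5    # the "50%+ perf/watt" claim taken literally

relative_perf = perf_per_watt * (target_tbp_w / baseline_tbp_w)
print(f"~{relative_perf:.2f}x a 6900XT")   # ~2.10x, which lines up with the 2.1X scenario above
```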
Unless the Enermax leak is without merit, in which case all of the above assumptions are invalid. (My proposed Navi 31 frequencies for hitting 2X and 2.1X vs the 6900XT have absolutely nothing to do with the Enermax leak, btw.)
If, say, a 7900XT is 65% faster than a 6900XT at 2160p, it will most likely be very close to 65% faster at 1440p as well, barring some external bottleneck (CPU or otherwise). There can of course also be on-board non-GPU bottlenecks (RAM amount and bandwidth in particular), but those tend to show up at higher resolutions, not lower ones, and would then suggest more than 65% faster at sub-2160p resolutions if that were the baseline increase at 2160p with the bottleneck present at that resolution.
It is of course possible that RDNA3 has some form of architectural improvement to alleviate that poor high resolution scaling that we've seen in RDNA2, which would then lead it to scale better at higher resolutions relative to RDNA2, and thus also deliver non-linearly improved perf/W at 2160p in particular - but that's a level of speculation well beyond the basic napkin math we've been engaging in here, as that requires quite fundamental, low-level architectural changes, not just "more cores, better process node, higher clocks, same or more power".
It doesn't change much; contrary to what you say, the difference is more pronounced now when comparing AMD to AMD.
Check the latest VGA review (ASUS Strix):
QHD:
6700XT = 50%
6950XT = 75%
1.5X
4K:
6700XT = 35%
6950XT = 59%
1.69X
So the difference went from +69% at 4K to +50% at QHD...
It's very basic stuff, and it has happened since forever (the 4K difference between two cards is higher than the QHD one in 99% of cases; there are exceptions, but they are easily explainable, like the RX 6600 vs RTX 3050 with its cut-down 32MB Infinity Cache, etc.)
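Spelling that arithmetic out with the relative-performance percentages quoted above (they're read off TPU's summary chart, so treat them as approximate):

```python
# Ratios recomputed from the relative-performance numbers quoted above.
qhd = {"6700XT": 50, "6950XT": 75}
uhd = {"6700XT": 35, "6950XT": 59}

for label, data in (("QHD", qhd), ("4K", uhd)):
    ratio = data["6950XT"] / data["6700XT"]
    print(f"{label}: 6950XT = {ratio:.2f}x the 6700XT (+{(ratio - 1) * 100:.0f}%)")

# QHD: 1.50x (+50%), 4K: 1.69x (+69%) -- the gap widens as the resolution goes up.
```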
I could be wrong, but it sounds like you're implying that AMD should be able to perform at the same level as Nvidia, when in reality that's just not possible. AMD's 2021 R&D budget was $2 billion, which has to be divided between x86 and graphics, and based on the fact that x86 is a bigger revenue source for AMD and has a much larger T.A.M., we can safely assume that x86 is getting 60% of that budget. This means that AMD has to compete against Nvidia with less than a $1 billion R&D budget, while Nvidia had a $5.27 billion R&D budget for 2021... they're nowhere near competing on a level playing field. It actually goes to show how impressive AMD is, especially considering RDNA2 matched or even beat the 30 series in raster, and all while AMD has a fifth of the financial resources to spend on R&D. It's even more impressive what AMD has been able to do against Intel, considering Intel had a $15 billion R&D budget for 2021!
Where I live now, I have less than a 10 minute walk to one.
Of course this also applies to RDNA3 - the success of any given SKU is entirely dependent on AMD configuring it in a sensible way. I'm just assuming that they'll do a similarly decent job at this as with RDNA2, and beyond that I feel that this is too fine-grained a level of speculation for me to really engage in. I'm sure that with a sufficient dataset it would be possible to come up with an equation that (roughly) predicted real-world relative performance within an architecture dependent on core counts, RAM, bus width, clocks, power, etc., but until someone does I'll assume those models only exist in AMD's labs.
All of this is also dependent on what you choose as your baseline for comparison, which is doubly troublesome when what is being compared is a speculation on future products - at this point there are so many variables in play that I've long since given up :p
Edit: wide-and-slow, not fast-and-slow. Sometimes my brain and my typing are way out of sync.
That said, AMD, Intel, NVIDIA, Apple and others test a wide variety of prototype samples in their labs. A graphics card isn't just the GPU, so different combinations of components will yield different performance results, with different power draws, COGS, whatever.
Indeed, consumer-grade graphics card designs are optimized for specific target resolutions (1080p, 1440p, 2160p). I have a 3060 Ti in a build for 1440p gaming; sure, the 4090 will beat it, but is it worth it? After all, the price difference between a 4090 and my 3060 Ti is likely $1200-1500. That buys a lot of games and other stuff.
For sure, AMD will test all those different prototypes in their labs but only release one or two products to market. It's not like they can't put a 384-bit memory bus on an entry-level GPU and hang 24GB of VRAM off it. The problem is that it makes little sense from a business standpoint. Yes, someone would buy it, including some TPU forum participant, probably.
I know you understand this but some other people online don't understand what follows here.
AMD isn't making a graphics card for YOU. They are making graphics cards for a larger graphics card audience. AMD is not your mom cooking your favorite breakfast so when you emerge from her basement, it's waiting for you hot on the table.
Now both AMD and NVIDIA are using TSMC.
That said, NVIDIA may be putting more effort into improving their Tensor cores especially since ML is more important for their Datacenter business.
From a consumer gaming perspective, almost everyone who turns on ray tracing will enable some sort of image upscaling option. Generally speaking the frame rates for ray tracing without some sort of image upscaling help are too low for satisfying gameplay with current technology.
Besides, Tensor cores have other usage cases beyond DLSS for consumers like image replacement.
Only a deliberately nerfed, AMD-sponsored title can come close to NVIDIA's RTX GPUs' much superior RT performance. It is not just about how many RT cores/processes/executions a GPU has vs the competitor. :shadedshu: www.tomshardware.com/features/amd-vs-nvidia-best-gpu-for-ray-tracing
You will see that going from 4K to QHD many games are 2X or so faster.
This means that each frame is rendered in half time.
Let's take two VGAs, A and B, where the faster one (A) has double the speed at 4K (double the fps).
Depending on the engine and on the demands placed on resources other than the GPU (mainly the CPU, but also system RAM, the storage system, etc., essentially every aspect that plays a role in the fps outcome), even if GPU A is, on paper, capable of producing double the frames again at QHD, in order to do that it needs the other parts of the PC (CPU etc.) involved in the fps outcome to be able to support that doubling as well (and of course the game engine must be able to scale too, which is another problem).
That's the main factor affecting this behaviour.
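A toy model of that interaction, with fps numbers invented purely for illustration:

```python
# Toy model: delivered fps is capped by whichever is slower, the GPU or the rest of
# the system (CPU, engine, RAM...). All numbers below are made up for illustration.

def delivered_fps(gpu_fps, system_cap_fps):
    return min(gpu_fps, system_cap_fps)

SYSTEM_CAP = 160                   # what the CPU/engine can feed, roughly resolution-independent
gpu_a = {"4K": 120, "QHD": 240}    # faster card: double GPU B on paper at both resolutions
gpu_b = {"4K": 60,  "QHD": 120}    # slower card: GPU-bound everywhere

for res in ("4K", "QHD"):
    a = delivered_fps(gpu_a[res], SYSTEM_CAP)
    b = delivered_fps(gpu_b[res], SYSTEM_CAP)
    print(f"{res}: A = {a} fps, B = {b} fps, A is {a / b:.2f}x B")

# 4K:  A is 2.00x B (both cards GPU-bound)
# QHD: A is 1.33x B (A hits the 160 fps system wall, so the gap shrinks)
```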
But let's agree to disagree.
Edit: this is the main reason Nvidia is pursuing other avenues like frame generation, in order to try to maintain a meaningful generational (Ampere->Ada etc.) performance gap as an incentive to upgrade, for example.
Gradually, with each new GPU/CPU generation, it becomes harder and harder to sustain those performance gaps as the resolution goes down, because of the difference between the GPU advancements and the CPU & memory advancements we have had through the years (GPU advancements are much greater than CPU/memory advancements, especially if you consider, on the CPU side, how many cores are actually utilised by the vast majority of games, and this keeps adding up from gen to gen).
In the same way, at home you might be the one to buy groceries, make dinner, and wash the dirty pots and dishes. In a very, very small restaurant, you might be able to pull this off. But let's say you have fifty seats. Would you hire someone else to do all the same stuff that you do? Should a restaurant have ten people that all do the same stuff?
Yes, you can ray trace with traditional raster cores. It can be done, but they aren't optimized for that workload. So NVIDIA carves out a chunk of die space and puts in specialized transistors. Same with Tensor cores. In a restaurant kitchen, the pantry cook washes lettuce in a prep sink, not in the potwasher's sink. Different tools/systems for different workloads and tasks isn't a new concept.
I know some people in these PC forums swear that they only care about 3D raster performance. That's not going to scale infinitely just like you can't have 50 people buying groceries, cooking food, and washing their own pots and pans in a hospital catering kitchen.
AMD started including RT cores with their RDNA2 products. At some point I expect them to put ML cores on their GPU dies. We already see media encoders too.
AMD needs good ML cores anyhow if they want to stay competitive in Datacenter. In the end, a lot of success will be determined by the quality of the development environment and software, not just the number of transistors you can put on a die.
Announcement or hard launch or both??
(TPU chart: cyberpunk-2077-rt-3840-2160.png, tpucdn.com)
Cyberpunk shows AMD GPUs at roughly -69% and Nvidia 3xxx series GPUs at around -50%. This is the game with the largest difference, and probably the most intensive raytracing, that TPU has tested in their most recent benchmarks. So overall we have a ~20% difference, and around ~30% if you include the 4090.
In Far Cry 6 the AMD GPU is actually ~5% more efficient than the Nvidia one, and in F1 Nvidia is only ~8% more efficient.
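Reading those figures as the performance lost when RT is switched on (my reading of the chart linked above), the Cyberpunk numbers translate like this; the 100 fps raster baseline is arbitrary and only the ratios matter:

```python
# Turning the quoted RT hits into retained frame rates; the baseline is arbitrary.
raster_fps = 100.0
amd_hit, nvidia_hit = 0.69, 0.50   # Cyberpunk 4K RT cost quoted above (RDNA2 vs Ampere)

amd_rt    = raster_fps * (1 - amd_hit)      # 31 fps left
nvidia_rt = raster_fps * (1 - nvidia_hit)   # 50 fps left

print(f"AMD keeps {amd_rt:.0f}%, Nvidia keeps {nvidia_rt:.0f}% of raster performance")
print(f"So at equal raster speed, Ampere would be ~{nvidia_rt / amd_rt:.2f}x faster with RT on")
```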
They could say "available now" or "available [insert future date]". About the only thing they won't say is "We started selling these yesterday."
Wait until after their event and you'll know, just like the rest of us. It's not like anyone here is privy to AMD's confidential marketing event plans.