Thursday, October 20th 2022
AMD Announces RDNA 3 GPU Launch Livestream
It's hardly a secret that AMD will announce its first RDNA 3 based GPUs on the 3rd of November, and the company has now officially confirmed that it'll hold a livestream starting at 1:00 pm (13:00) Pacific Daylight Time. The event goes under the name "together we advance_gaming". AMD didn't share much in terms of details about the event; all we know is that "AMD executives will provide details on the new high-performance, energy-efficient AMD RDNA 3 architecture that will deliver new levels of performance, efficiency and functionality to gamers and content creators."
Source:
AMD
104 Comments on AMD Announces RDNA 3 GPU Launch Livestream
No need to compete with the ridiculous RTX 4090, just make something even better than the RX 6800 (XT) while consuming barely any extra power and I'll be very happy... well, if the price is right, of course.
Imagine if future RT is optimized more for the consoles, which makes sense, and AMD cards can run it just fine with only a modest performance hit (instead of 100 fps, you get 80) while Nvidia cards run it with no hit at all (100 fps). Is the difference then that much of a factor?
We don't know what the future holds or how it will develop. Even if Nvidia holds a large RT performance advantage, who is to say it really matters?
It's all speculation; we will see. I really like what RT is doing for the world of gaming, so I hope it keeps improving significantly, but the consoles run what they run, and games will be made for them, which will have an effect on RT development.
And clearly know nothing about me ;).
It's been a long wait, I just wish they'd get it over and done with already, ha.
Choices yay.
I am grateful that there are still a few great mom-and-pop PC stores in my area.
Computers are commodities now; people buy and throw them away (er, recycle) every few years. Hell, even Microsoft belatedly accepted the fact that most people don't upgrade Windows, which is why you can get an OEM key for Windows 10 Pro for $7 these days.
The number of people who open up a PC case to remove a component and install a new one is a very, very small portion of the consumer userbase. Joe Consumer is just going to buy a Dell, HP, or Alienware box and when it starts running "too slow" they'll just buy a new one and put the old computer in a kid's room.
- Not quite 4090 performance, but noticeably lower power?
- Noticeably slower than the 4090, but also much lower power?
- Matching or beating the 4090, at lower power?
That's pretty much in the order I consider most likely - they've promised +50% perf/W over RDNA2 after all, which would place a 330W RDNA3 GPU at beating the 4090 in 1440p and trailing slightly at 2160p (using the 6950XT in TPU's reviews as a baseline). If they stick to 300W like the 6900XT that would fit pretty well with that first suggested scenario. Definitely going to be interesting!
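A minimal sketch of that arithmetic in Python (the 335 W TBP for the 6950XT and its normalised performance of 1.0 are illustrative assumptions, not TPU's review data):

# Back-of-the-envelope projection from a perf/W claim.
def projected_perf(base_perf: float, base_power: float,
                   ppw_gain: float, new_power: float) -> float:
    """Scale a baseline card's performance by a claimed perf/W gain and a new power budget."""
    base_ppw = base_perf / base_power       # baseline performance per watt
    new_ppw = base_ppw * (1.0 + ppw_gain)   # apply the claimed uplift
    return new_ppw * new_power              # performance at the new power budget

# 6950XT as baseline: performance normalised to 1.0 at an assumed 335 W TBP.
rdna3 = projected_perf(base_perf=1.0, base_power=335, ppw_gain=0.50, new_power=330)
print(f"Projected 330 W RDNA3 vs 6950XT: {rdna3:.2f}x")  # ~1.48x

Whether ~1.48x the 6950XT beats the 4090 then depends entirely on which review average you plug in for the 4090 at each resolution.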
Well, the slide says ">50%", not just 50%; the question is how much more than 50%?
Something like a 60% improvement in efficiency would put the 7900XT in a favorable position against the competing 4090, perhaps forcing the launch of a marginally better 4090 Ti consuming twice as much power.
Of course, it will depend on benchmark/software title. When new hardware comes out, the performance uplift is never the same across the board over all metrics.
They could cherry-pick benchmarks to come up with a maximum number. Would that impress some people? For sure. However, there would be some disappointed people who run a different -- possibly more pertinent -- benchmark that shows less improvement.
My guess is that there's a little "under promise, over deliver" going on here. It's better for AMD to say +54% and have average people get +57% rather than +51%.
Remember that there's also some variance in individual samples. Saying 53.96% might sound more accurate but it might not be meaningful in the context of describing a general performance increase, so saying >50% is probably less likely to get them into trouble.
I am ~97.999%+ sure that IF AMD had any solution to compete with, match, or even surpass the 4090, a so-called leak (yeah, right) would have reached the MSM waaaAAAaaay before now, showing how the new RDNA3 outperforms the 4090 in the latest benchmarks. (I'm also sure AMD's team has had a 4090 in their labs for testing purposes since days after its release date. And I'm still waiting for this "Dr." to show up at AMD; I guess Jensen should be labeled PROFESSOR Jensen, right? :laugh:)
That has not happened. It won't happen, because AMD's Druhm... crew is still not in. :roll:

Of course, there's tons of leeway here. Are they being complete assholes and doing a worst-vs-best comparison, or are they comparing at the same wattage, the same product tier, or some average/geomean of the whole range? And regardless of which, in which workloads?
Hence me starting from an assumption of 50%. That's what they've promised; the "more than" sign is a vague promise with too many ways out of it to matter.

So... you understand how academic titles work, right? You go through a doctorate, do whatever research project you're working on, write a dissertation, have it approved, and you are awarded the title of Dr., which you then have for life (unless you do something really egregious and have your home institution strip you of your degree). You somehow finding that funny is... well, it just makes you look dumb, whatever the reason. It's pretty hard not to just assume sexism from the get-go, especially given the complete nonsense the rest of that post consists of, but that's irrelevant really. Dr. Su has been AMD's best CEO in quite some time, and her tenure has been massively successful in many ways, including bringing the GPU branch back from a rapid decline towards irrelevance in the mid-2010s to surpassing Nvidia's efficiency and matching them in absolute performance for the past generation (outside of massively power-hungry top SKUs at 2160p).
Also, did you miss the fact that this launch date was already announced quite a while ago? Or the fact that AMD launched RDNA2 two years ago, and have been working on RDNA3 since a while before that launch? Is it really surprising that they're launching a new generation close to Nvidia? For anyone following the PC hardware space even slightly, it really shouldn't be surprising at all. This is how this business operates.
The lack of leaks is somewhat unusual, but then AMD tends to have far fewer leaks than Nvidia, no doubt because they're a smaller operation overall and there's more interest in Nvidia leaks to begin with.
I hope the 7700 XT comes in at or under 250W and can match a 3080. If it can, they should get such a card out quick and snatch up the midrange from Nvidia.
You've already spent almost $2000 on a GPU, and by the time the AMD GPUs are released you probably won't be eligible for a return. I wouldn't really expect AMD to be well past Nvidia in performance, though; I'd expect ±5-10%. Would you really return a 4090 for way less than you paid for it to get an extra 5-10% performance? Seems like a massive waste of money.
/s
I then realized I can't get pre-orders or do reserves.
And can't fight the bots. Can't get an overpriced 4090. Won't be able to get one of these.
Regarding raytracing performance potential: if the die-size leaks are correct, the dies are small, the area allocated to dedicated RT performance improvements seems to be nowhere near Ada's level, and it probably won't have L2 dedicated to raytracing. But doubling the L2 cache in each Shader Engine, for example, will certainly help if utilised properly.
Ampere was better than Turing in RT, but not by much. I would be happy if Navi31 can match Turing's 2080Ti in terms of the % performance hit when RT is enabled, because for me that would be enough for the titles we already have, especially if FSR 2.0 (3.0?) is utilised (though not for some future titles that will be RT showcases).
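For clarity, the metric being compared here is simple enough to sketch (the fps pairs below are made-up examples, not benchmark results):

# "% performance hit" when RT is enabled: 1 - (fps with RT on / fps with RT off).
def rt_hit(fps_off: float, fps_on: float) -> float:
    return 1.0 - fps_on / fps_off

print(f"{rt_hit(100, 55):.0%}")  # a hypothetical Turing-like 45% hit
print(f"{rt_hit(100, 80):.0%}")  # the 20% hit imagined earlier in the thread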
I'll be gaming at 4K ray traced, so that's what's important to me. Rasterization matters less when you can already reach 120 Hz.
All the "should & woulds" used above is not an AMD or NVIDIA stable business model forecast, especially for consumers.
For RDNA2 they claimed a 50% perf/watt gain in early slides, but at the reveal event they claimed 54% and 64%. The 54% was 5700XT vs 6800XT at 4K in a variety of games (listed in the footnotes of their slide); the 64% was 5700XT vs 6900XT at 4K in the same games. This was broadly confirmed in reviews, but it depended heavily on how perf/watt was tested. Sites that use power and performance data from a single game saw very different results: TPU saw about a 50% gain whereas Techspot/HUB saw a 70%+ gain, because HUB used Doom Eternal (where the 5700XT underperformed) and TPU used CP2077 (where the 6900XT underperformed). If you look at HUB's average uplift for the 6800XT and 6900XT, it actually matches AMD's claimed improvements really well.
So the AMD method seems to be: compare SKU to SKU at stock settings, measure the average frame-rate difference across a suite of titles, and then work out the perf/watt delta.
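If that reading is right, the method is simple enough to sketch; the fps figures and board powers below are hypothetical stand-ins, not AMD's footnoted data:

# SKU-to-SKU perf/W comparison: average the per-game fps uplift at stock
# settings, then divide by the board power (TBP) ratio.
from statistics import geometric_mean

old_fps = {"Game A": 60, "Game B": 80, "Game C": 45}    # hypothetical 5700XT, 4K
new_fps = {"Game A": 120, "Game B": 160, "Game C": 90}  # hypothetical 6800XT, 4K
old_tbp, new_tbp = 225, 300                             # stock board power (W)

uplift = geometric_mean(new_fps[g] / old_fps[g] for g in old_fps)
ppw_delta = uplift / (new_tbp / old_tbp) - 1.0
print(f"perf/W gain: {ppw_delta:+.0%}")  # +50% with these stand-in numbers

Averaging across a whole suite rather than a single title is exactly why TPU and HUB landed on such different single-game figures from the same hardware.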
On the >50%, I do agree with using 50% as a baseline, but I feel confident they are not doing a best-vs-worst comparison, because that is not something AMD has done under its current leadership.
What it does do, though, is give us some numbers to play with. If the Enermax numbers are correct and top N31 is using 420W, then you can get the following numbers.
The assumption I am making here is pretty obvious: that the design goal of N31 was 420W to begin with, which would mean it is wide enough to use that power in the saner part of the f/V curve. If it was not designed for 420W and has been pushed there by increasing clocks, then obviously the perf/watt will drop off and the numbers above will be incorrect.
The other assumption is that the Enermax numbers are correct. It is entirely possible that the reference TBP for N31 will be closer to 375W, which with these numbers would put it about on par with the 4090.
My view is that the TBP will be closer to 375-400W rather than 420W, in which case anywhere from roughly equal to 5% ahead of the 4090 is the ballpark I expect top N31 to land in. There is room for a positive surprise, though, should AMD's >50% claim turn out like their >5GHz claim or the >15% single-thread claim in the Zen 4 teaser slide, i.e. a rather large underselling of what was actually achieved. Still, I await actual numbers, and until then I am assuming something in the region of +50%.
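For what it's worth, here's that assumption played out across a few candidate TBPs (a 6950XT normalised to 1.0 at an assumed 335 W, and the ">50%" claim taken at exactly +50%):

# Project top N31 against a 6950XT baseline for a few candidate board powers.
BASE_PPW = 1.0 / 335             # 6950XT: performance 1.0 at an assumed 335 W
for tbp in (375, 400, 420):
    perf = BASE_PPW * 1.5 * tbp  # +50% perf/W at the candidate TBP
    print(f"{tbp} W -> {perf:.2f}x 6950XT")
# 375 W -> 1.68x, 400 W -> 1.79x, 420 W -> 1.88x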
Of course, in terms of leaks there's also the question of sheer scale: Nvidia outsells AMD's GPU division by ~4x, meaning 4x the production volume and 4x the shipping volume, and thus far more products passing through far more hands before launch, making leaks far harder to control at that scale.