Thursday, October 20th 2022

AMD Announces RDNA 3 GPU Launch Livestream

It's hardly a secret that AMD will announce its first RDNA 3 based GPUs on the 3rd of November, and the company has now officially announced that it will hold a livestream starting at 1:00 pm (13:00) Pacific Daylight Time. The event goes under the name "together we advance_gaming". AMD didn't share much in terms of details about the event; all we know is that "AMD executives will provide details on the new high-performance, energy-efficient AMD RDNA 3 architecture that will deliver new levels of performance, efficiency and functionality to gamers and content creators."
Source: AMD

104 Comments on AMD Announces RDNA 3 GPU Launch Livestream

#26
ZoneDymo
I hope they just deliver solid GPUs with an eye (feeling of responsibility) on power consumption.

No need to compete with the ridiculous RTX 4090, just make something even better than the RX 6800 (XT) while consuming barely any extra power and I'll be very happy... well, if the price is right of course.
Posted on Reply
#27
Blueberries
As excited as I am for all tech launches I'm very doubtful that AMD can compete with NVIDIA in Ray Tracing this year. If I'm wrong I'll return my 4090, but I'm not expecting much.
Posted on Reply
#28
ZoneDymo
BlueberriesAs excited as I am for all tech launches I'm very doubtful that AMD can compete with NVIDIA in Ray Tracing this year. If I'm wrong I'll return my 4090, but I'm not expecting much.
The problem is, it's all in development; it's hard to say if the extra RT performance of Nvidia would even be worth it, really, and even that depends on the person.
Imagine if future RT is optimized more for the consoles, which makes sense, and the AMD cards can run that just fine without too much of a performance hit (instead of 100 fps, you get 80) while Nvidia cards can run it without a performance hit (100 fps) - is that then that much of a factor?

We don't know what the future holds or how it will develop; even if Nvidia holds a large RT performance advantage, who is to say if it really matters?

It's all speculation, we will see. I do really like what RT is doing for the world of gaming, so I also hope it's improved significantly, but the consoles run what they run and games will be made for those, which will have an effect on RT development.
Posted on Reply
#29
AsRock
TPU addict
cvaldesHey man, this is the Internet. TPU worships at the Altar of the Almighty Pageview.

You clicked the link, just like everyone else who read this thread.

Clearly you don't spend much time online.

Returning to the topic, I did not know the stream would start at 1pm PDT. This article confirmed that. A more normal start time for Pacific time zone events is 10am PT. This goes back to the printed periodical era (pre-2000s) when journalists had PM deadlines for East Coast-based media companies.
I read AMD and that's all it had to say for me to click it; after all, I have been waiting over 2 years for a new video card. But I guess I should have known better than to somewhat blindly click haha. And you never know, with the rumors of a delay too, it might have changed.

And you clearly know nothing about me ;).

It's been a long wait, I just wish they'd get it over and done with already ha.
Posted on Reply
#30
TheinsanegamerN
JAB CreationsThey got hacked and their closed source Linux drivers were leaked. In their true anti-capitalist crony style they failed in their attempt to look benevolent by waiting until everyone forgot about it two months later. They also capitalized on their mindshare, which was only possible because Intel had been suppressing AMD for most of its existence.
People are still using the "mindshare" excuse for AMD's inability to straighten anything out of their own accord for over a decade?
ChaitanyaWould like to find out if we get mid range(x700XT) series of GPUs right at launch and what would be their availability date.
Gonna guess almost no availability. The last few launches from AMD have been paper launches for GPUs.
TheLostSwedeWouldn't it be safer and easier to go down to your local brick and mortar store and talk to them and see if they can reserve a card for you?
In America the only store like that left is Microcenter, and most Americans live a minimum of 3+ hours away from one. The cost of gas will eat up whatever savings you'd get. Anything local disappeared years ago; the only thing left are fly-by-night computer repair shops that I wouldn't trust with a Raspberry Pi, let alone anything expensive.
Posted on Reply
#31
TheoneandonlyMrK
I can't wait tbh, I need some extreme OT at work or a bit of lottery or bingo luck though :D


Choices yay.
Posted on Reply
#32
cvaldes
TheLostSwedeWouldn't it be safer and easier to go down to your local brick and mortar store and talk to them and see if they can reserve a card for you?
A lot of those local mom-and-pop PC stores have steadily shuttered in recent years as their Baby Boomer owners who started their businesses in the Eighties have reached retirement age with no one to pick up the reins.

I am grateful that there are still a few great mom-and-pop PC stores in my area.

Computers are commodities now, people buy and throw them away (er, recycle) every few years. Hell, even Microsoft belatedly accepted the fact that most people don't upgrade Windows, which is why you can get an OEM key for Windows 10 Pro for $7 these days.

The number of people who open up a PC case to remove a component and install a new one is a very, very small portion of the consumer userbase. Joe Consumer is just going to buy a Dell, HP, or Alienware box and when it starts running "too slow" they'll just buy a new one and put the old computer in a kid's room.
Posted on Reply
#33
Valantar
I like the highlighting of efficiency - which has of course been a part of their promises for RDNA3 since its first concrete mention, but still - but it also makes me wonder what we can expect here.

- Not quite 4090 performance, but noticeably lower power?
- Noticeably slower than the 4090, but also much lower power?
- Matching or beating the 4090, at lower power?

That's pretty much in the order I consider most likely - they've promised +50% perf/W over RDNA2 after all, which would place a 330W RDNA3 GPU at beating the 4090 in 1440p and trailing slightly at 2160p (using the 6950XT in TPU's reviews as a baseline). If they stick to 300W like the 6900XT that would fit pretty well with that first suggested scenario. Definitely going to be interesting!
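The back-of-the-envelope version of that estimate looks roughly like this (just a sketch; the 335W 6950XT board power and the 330W RDNA3 figure are assumptions for illustration, not anything AMD has stated):

```python
# Rough projection from AMD's ">=1.5x perf/W" claim for RDNA3.
# All baseline figures here are assumptions for illustration, not official numbers.
baseline_power_w = 335        # assumed reference 6950 XT board power
perf_per_watt_gain = 1.50     # AMD's claimed minimum perf/W uplift
rdna3_power_w = 330           # hypothetical RDNA3 flagship board power

# Relative performance scales with (perf/W uplift) x (power ratio)
perf_vs_6950xt = perf_per_watt_gain * (rdna3_power_w / baseline_power_w)
print(f"Projected performance vs 6950 XT: {perf_vs_6950xt:.2f}x")  # ~1.48x

# Whether ~1.48x the 6950 XT beats a 4090 then depends entirely on the
# 4090's own lead over the 6950 XT at the resolution in question.
```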
Posted on Reply
#34
Denver
ValantarI like the highlighting of efficiency - which has of course been a part of their promises for RDNA3 since its first concrete mention, but still - but it also makes me wonder what we can expect here.

- Not quite 4090 performance, but noticeably lower power?
- Noticeably slower than the 4090, but also much lower power?
- Matching or beating the 4090, at lower power?

That's pretty much in the order I consider most likely - they've promised +50% perf/W over RDNA2 after all, which would place a 330W RDNA3 GPU at beating the 4090 in 1440p and trailing slightly at 2160p (using the 6950XT in TPU's reviews as a baseline). If they stick to 300W like the 6900XT that would fit pretty well with that first suggested scenario. Definitely going to be interesting!

Well, the slide says ">50%", not just 50%; the question is how much more than 50%?

Something like a 60% improvement in efficiency would put the 7900XT in a favorable situation against the competing 4090, perhaps forcing the launch of a marginally better 4090 ti consuming twice as much power.
Posted on Reply
#35
cvaldes
DenverWell, the slide says ">50%", not just 50%; the question is how much more than 50%?
It will likely be heavily dependent on how much they can optimize the video driver software.

Of course, it will depend on benchmark/software title. When new hardware comes out, the performance uplift is never the same across the board over all metrics.

They could cherry pick through benchmarks to come up with a maximum number. Would that impress some people? For sure. However there would be some disappointed people who run a different -- possibly more pertinent -- benchmark that shows less improvement.

My guess is that there's a little "under promise, over deliver" going on here. It's better for AMD to say +54% and have average people get +57% rather than +51%.

Remember that there's also some variance in individual samples. Saying 53.96% might sound more accurate but it might not be meaningful in the context of describing a general performance increase, so saying >50% is probably less likely to get them into trouble.
Posted on Reply
#36
GunShot
Interesting timing, huh? /s

I am ~97.999%+ sure that ~ IF ~ AMD had any solution (yeah, right... I'm still waiting for this "Dr." to show up at AMD, I guess Jensen should be labeled - PROFESSOR Jensen, right :laugh:) ~ to compete/match or even surpasses the 4090 (days later after the 4090 release date and I'm also sure that AMD's team have a 4090 in their labs for testing purposes, etc.), I am confident in saying that by now that IF AMD's 7000 series had a performance lead over the 4090, that a so-called leak (yeah, right) would have reached the MSM, etc. waaaAAAaaay by now showing how the new RDNA3 will outperform the 4090 in the latest benchmarks.

That has not happened. It won't happen because AMD's Dr uhm... crew is still not in. :roll:

Posted on Reply
#37
Valantar
Denver
Well, the slide says ">50%", not just 50%; the question is how much more than 50%?

Something like a 60% improvement in efficiency would put the 7900XT in a favorable situation against the competing 4090, perhaps forcing the launch of a marginally better 4090 ti consuming twice as much power.
My assumption with corporate marketing is always to assume the worst with whatever wording they use. "Launching in H1"? That's June, May at best, no earlier. "Sub-$1000"? $999. So when AMD says >50%, I assume 50.1%, while being happy to be proven wrong. But I never assume more than what they're stating explicitly.

Of course, there are tons of leeway here. Are they being complete assholes and doing a worst-v-best comparison, or are they comparing at the same wattage, the same product tier, or some average/geomean of the whole range, and regardless of either of these, during which workloads?

Hence me starting from an assumption of 50%. That's what they've promised - the "more than" sign is a vague promise with too many ways out of it to matter.
GunShotInteresting timing, huh? /s

I am ~97.999%+ sure that ~ IF ~ AMD had any solution (yeah, right... I'm still waiting for this "Dr." to show up at AMD, I guess Jensen should be labeled - PROFESSOR Jensen, right :laugh:) ~ to compete/match or even surpasses the 4090 (days later after the 4090 release date and I'm also sure that AMD's team have a 4090 in their labs for testing purposes, etc.), I am confident in saying that by now that IF AMD's 7000 series had a performance lead over the 4090, that a so-called leak (yeah, right) would have reached the MSM, etc. waaaAAAaaay by now showing how the new RDNA3 will outperform the 4090 in the latest benchmarks.

That has not happened. It won't happen because AMD's Dr uhm... crew is still not in. :roll:
So ... you understand how academic titles work, right? You go through a doctorate, do whatever research project you're working on, write a dissertation, have it approved, and you are awarded the title of Dr., which you then have for life (unless you do something really egregious and have your home institution strip you of your degree). You somehow finding that funny is ... well, it just makes you look dumb, whatever the reason. It's pretty hard not to just assume sexism from the get-go - especially given the complete nonsense the rest of that post consists of - but that's irrelevant really. Dr. Su has been AMD's best CEO for quite some time, and her tenure has been massively successful in many ways, including bringing the GPU branch back from a rapid decline towards irrelevance in the mid-2010s to surpassing Nvidia's efficiency and matching them in absolute performance for the past generation (outside of massively power hungry top SKUs at 2160p).

Also, did you miss the fact that this launch date was already announced quite a while ago? Or the fact that AMD launched RDNA2 two years ago, and have been working on RDNA3 since a while before that launch? Is it really surprising that they're launching a new generation close to Nvidia? For anyone following the PC hardware space even slightly, it really shouldn't be surprising at all. This is how this business operates.

The lack of leaks is somewhat unusual, but then AMD tends to have a lot fewer leaks than Nvidia - no doubt because of them being an overall smaller operation, and there being more interest in Nvidia leaks to begin with.
Posted on Reply
#38
RandallFlagg
Time to root for AMD.

I hope the 7700 XT comes in at or under 250W and can match a 3080. If it can, they should get such a card out quick and snatch up the midrange from Nvidia.
Posted on Reply
#39
EatingDirt
BlueberriesAs excited as I am for all tech launches I'm very doubtful that AMD can compete with NVIDIA in Ray Tracing this year. If I'm wrong I'll return my 4090, but I'm not expecting much.
I mean, in most games the 4090 is only ~5% better at raytracing efficiency than the 3090 Ti. That being Original Framerate versus Raytracing Framerate, so it wasn't much of an improvement on the actual efficiency of Raytracing. AMD on the other hand have said they will be dedicating a lot more resources to raytracing on their RDNA 3 architecture, so the actual efficiency of RDNA3 should be better this generation, supposedly ~2x better, which would put it just about on par with Nvidia in terms of raytracing efficiency.

You've already spent almost $2000 on a GPU, and by the time the AMD GPUs are released you probably won't be eligible for a return. However, I wouldn't really expect AMD to be well past Nvidia in performance; I'd expect ±5-10%. Would you really return a 4090 for way less than you paid for it to get an extra 5-10% performance? Seems like a massive waste of money.
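For what it's worth, the "raytracing efficiency" being compared there is just the share of the rasterized framerate a card keeps once RT is enabled; a minimal sketch with placeholder framerates (not benchmark data):

```python
# "Raytracing efficiency" as used above: the fraction of raster framerate
# that survives once RT is enabled. The framerates below are placeholders.
def rt_efficiency(raster_fps: float, rt_fps: float) -> float:
    return rt_fps / raster_fps

print(rt_efficiency(150, 90))   # 0.60 -> the card keeps 60% of its framerate
print(rt_efficiency(120, 84))   # 0.70 -> a smaller relative RT hit
```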
Posted on Reply
#40
ODOGG26
ValantarMy assumption with corporate marketing is always to assume the worst with whatever wording they use. "Launching in H1"? That's June, May at best, no earlier. "Sub-$1000"? $999. So when AMD says >50%, I assume 50.1%, while being happy to be proven wrong. But I never assume more than what they're stating explicitly.

Of course, there are tons of leeway here. Are they being complete assholes and doing a worst-v-best comparison, or are they comparing at the same wattage, the same product tier, or some average/geomean of the whole range, and regardless of either of these, during which workloads?

Hence me starting from an assumption of 50%. That's what they've promised - the "more than" sign is a vague promise with too many ways out of it to matter.


So ... you understand how academic titles work, right? You go through a doctorate, do whatever research project you're working on, write a dissertation, have it approved, and you are awarded the title of Dr., which you then have for life (unless you do something really egregious and have your home institution strip you of your degree). You somehow finding that funny is ... well, it just makes you look dumb, whatever the reason. It's pretty hard not to just assume sexism from the get-go - especially given the complete nonsense the rest of that post consists of - but that's irrelevant really. Dr. Su has been AMD's best CEO for quite some time, and her tenure has been massively successful in many ways, including bringing the GPU branch back from a rapid decline towards irrelevance in the mid-2010s to surpassing Nvidia's efficiency and matching them in absolute performance for the past generation (outside of massively power hungry top SKUs at 2160p).

Also, did you miss the fact that this launch date was already announced quite a while ago? Or the fact that AMD launched RDNA2 two years ago, and have been working on RDNA3 since a while before that launch? Is it really surprising that they're launching a new generation close to Nvidia? For anyone following the PC hardware space even slightly, it really shouldn't be surprising at all. This is how this business operates.

The lack of leaks is somewhat unusual, but then AMD tends to have a lot fewer leaks than Nvidia - no doubt because of them being an overall smaller operation, and there being more interest in Nvidia leaks to begin with.
Excellent post. Not sure what that guy was talking about. Seems like he has some serious issues lol. However, there always seems to be more interest/hype in AMD leaks than Nvidia from what I've seen.
Posted on Reply
#41
bonehead123
Livestream, my stream, your stream....it won't matta much, 'cause I've already bought every friggin RDNA3 card on the planet, and the 126.91 that were available off-world too, so this livestream will mainly be to "announce" that yes, the cards are officially launching, but no, there won't be any available to actually buy until late next year, at the earliest :)

/s
Posted on Reply
#42
Fluffmeister
bonehead123Livestream, my stream, your stream....it won't matta much, 'cause I've already bought every friggin RDNA3 card on the planet, and the 126.91 that were available off-world too, so this livestream will mainly be to "announce" that yes, the cards are officially launching, but no, there won't be any available to actually buy until late next year, at the earliest :)

/s
No sarcasm required, they're going to be fast and efficient AMD fanboys' wet dreams and will be gobbled up by the scalpers too.
Posted on Reply
#43
mechtech
ChaitanyaWould like to find out if we get mid range(x700XT) series of GPUs right at launch and what would be their availability date.
What about the 7400?? ;)
Posted on Reply
#44
sepheronx
TheLostSwedeWouldn't it be safer and easier to go down to your local brick and mortar store and talk to them and see if they can reserve a card for you?
I tried that with Memory Express and the guy laughed, called me names, then hung up. Maybe he called me back to also say he banged my mom.

I then realized I can't get pre-orders or do reserves.

And I can't fight the bots. Can't get an overpriced 4090. Won't be able to get one of these.
Posted on Reply
#45
ModEl4
The logical conclusion from these specs (12288 SP etc.) for Navi31 is that, given the right TBP, full Navi31 should be faster in 4K raster than the RTX 4090 in a Zen4 X3D/13900KS testbed (especially the V-cache enabled model, whenever and if it launches), but by how much is anyone's guess - certainly not with a ≤350W TBP, though.
Regarding raytracing performance potential, if die size leaks are correct the dies are small; the die area allocated to specific RT performance improvements seems to be nowhere near Ada's level, and it probably won't have dedicated L2 for raytracing, but doubling the L2 cache in each Shader Engine, for example, will certainly help if utilised properly.
Ampere was better than Turing in RT but not by much. I would be happy if Navi31 can match Turing's 2080 Ti regarding the % performance hit when RT is enabled, because for me that will be enough with the titles that we already have, especially if FSR 2.0 (3.0?) is utilised (but not with some future titles that will be RT showcases).

Posted on Reply
#46
Blueberries
EatingDirtI mean, in most games the 4090 is only ~5% better at raytracing efficiency than the 3090 Ti. That being Original Framerate versus Raytracing Framerate, so it wasn't much of an improvement on the actual efficiency of Raytracing. AMD on the other hand have said they will be dedicating a lot more resources to raytracing on their RDNA 3 architecture, so the actual efficiency of RDNA3 should be better this generation, supposedly ~2x better, which would put it just about on par with Nvidia in terms of raytracing efficiency.

You've already spent almost $2000 on a GPU, and by the time the AMD GPUs are released you probably won't be eligible for a return. However, I wouldn't really expect AMD to be well past Nvidia in performance; I'd expect ±5-10%. Would you really return a 4090 for way less than you paid for it to get an extra 5-10% performance? Seems like a massive waste of money.
I have until Jan 30th for full return.

I'll be gaming at 4K ray traced, so that's what's important to me. Rasterization is less of an impact when you can already reach 120 Hz.
Posted on Reply
#47
GunShot
EatingDirtAMD on the other hand have said they will be dedicating a lot more resources to raytracing on their RDNA 3 architecture, so the actual efficiency of RDNA3 should be better this generation, supposedly ~2x better, which would put it just about on par with Nvidia in terms of raytracing efficiency.
Define "a lot more resources", uhm... 65%... or 104%... or just ~8.5% because any shift in increased value could VALIDATE such a claim and if memory serves me, AMD definitely refused to attach any solid value to that expected ++ PR statement ++. :shadedshu:

All the "shoulds & woulds" used above are not a stable AMD or NVIDIA business model forecast, especially for consumers.
Posted on Reply
#48
HenrySomeone
CallandorWoTI really doubt RDNA3 will beat a 4090 at 4K gaming, but I think it may match it at 1440p gaming. Plus you have DLSS3, which, let's face it, AMD just won't be able to pull something like that off. Doubling the frames at 4K with AI? I just don't see AMD having that technical capability.

But I still plan to buy RDNA3 because I only game at 1440p.
Even 1440p seems unlikely; 1080p though, probably.
Posted on Reply
#49
btk2k2
ValantarMy assumption with corporate marketing is always to assume the worst with whatever wording they use. "Launching in H1"? That's June, May at best, no earlier. "Sub-$1000"? $999. So when AMD says >50%, I assume 50.1%, while being happy to be proven wrong. But I never assume more than what they're stating explicitly.

Of course, there are tons of leeway here. Are they being complete assholes and doing a worst-v-best comparison, or are they comparing at the same wattage, the same product tier, or some average/geomean of the whole range, and regardless of either of these, during which workloads?

Hence me starting from an assumption of 50%. That's what they've promised - the "more than" sign is a vague promise with too many ways out of it to matter.
For RDNA1 they claimed a 50% perf/watt gain over Vega. This was done by them comparing V64 to the 5700XT with both parts at stock.
For RDNA2 they claimed a 50% perf/watt gain in early released slides but in the reveal event they claimed 54% and 64%. 54% was 5700XT vs 6800XT at 4K in a variety of games (listed in the footnotes of their slide). The 64% was 5700XT vs 6900XT at 4K in the same games. This was further confirmed in some reviews but it heavily depended on how they tested perf/watt. Those sites that use the power and performance data from 1 game saw very different results. TPU saw about a 50% gain whereas Techspot / HUB saw a 70%+ gain because HUB used Doom Eternal and the 5700XT underperformed and TPU used CP2077 and the 6900XT underperformed. If you look at the HUB average uplift of the 6800XT and 6900XT then it actually matched up really well with AMD's claimed improvements.

So the AMD method seems to be compare SKU to SKU at stock settings, measure the average frame rate difference in a suite of titles and then work out the perf/watt delta.

With the >50% I do agree with using the 50% as a baseline but I feel confident that they are not doing a best vs worst comparison because that is not something AMD have done prior under current leadership.

What it does do though is give us some numbers to play with. If the Enermax numbers are correct and top N31 is using 420W then you can get the following numbers.

Baseline | TBP | Power Delta | Perf/Watt multi | Performance Multi | Estimate vs 4090 in Raster
6900XT | 300W | 1.4x | 1.5x | 2.1x | +10%
6900XT | 300W | 1.4x | 1.64x (to match 6900XT delta) extreme upper bound! | 2.3x | +23%
Ref 6950XT | 335W | 1.25x | 1.5x | 1.88x | +15%
Ref 6950XT | 335W | 1.25x | 1.64x Again extreme upper bound! | 2.05x | +25%


Now, the assumption I am making here is pretty obvious: that the design goal of N31 was 420W to begin with, which would mean it was wide enough to use that power in the saner part of the f/V curve. If it was not 420W to begin with and has been pushed there by increasing clocks, then it is obvious the perf/watt will drop off and the numbers above will be incorrect.

The other assumption is the Enermax numbers are correct. It is entirely possible that the reference TBP for N31 will be closer to 375W which with these numbers would put it about on par with the 4090.

My view is the TBP will be closer to 375-400W rather than 420W, in which case anywhere from about equal to 5% ahead of the 4090 seems to be the ballpark I expect top N31 to land in. There is room for a positive surprise should AMD's >50% claim be like their >5GHz claim or the >15% single-thread claim in the Zen 4 teaser slide and turn out to be a rather large underselling of what was actually achieved. Still, I await actual numbers on that front and until then I am assuming something in the region of +50%.
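The arithmetic behind the table is simple enough to reproduce (a sketch only; the 420W TBP is the unconfirmed Enermax number and the baselines are the same assumptions as above):

```python
# Reproduce the table above: performance multiplier = power delta x perf/W multiplier.
# The 420W TBP comes from the unconfirmed Enermax leak; all outputs are estimates.
n31_tbp_w = 420
baselines = {"6900XT": 300, "Ref 6950XT": 335}
perf_per_watt = [1.50, 1.64]   # claimed minimum, and the RDNA2-style upper bound

for name, tbp in baselines.items():
    power_delta = n31_tbp_w / tbp
    for ppw in perf_per_watt:
        perf_multi = power_delta * ppw
        print(f"{name}: {power_delta:.2f}x power, {ppw:.2f}x perf/W -> {perf_multi:.2f}x performance")
```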
Posted on Reply
#50
Valantar
btk2k2For RDNA1 they claimed a 50% perf/watt gain over Vega. This was done by them comparing V64 to the 5700XT with both parts at stock.
For RDNA2 they claimed a 50% perf/watt gain in early released slides but in the reveal event they claimed 54% and 64%. 54% was 5700XT vs 6800XT at 4K in a variety of games (listed in the footnotes of their slide). The 64% was 5700XT vs 6900XT at 4K in the same games. This was further confirmed in some reviews but it heavily depended on how they tested perf/watt. Those sites that use the power and performance data from 1 game saw very different results. TPU saw about a 50% gain whereas Techspot / HUB saw a 70%+ gain because HUB used Doom Eternal and the 5700XT underperformed and TPU used CP2077 and the 6900XT underperformed. If you look at the HUB average uplift of the 6800XT and 6900XT then it actually matched up really well with AMD's claimed improvements.

So the AMD method seems to be compare SKU to SKU at stock settings, measure the average frame rate difference in a suite of titles and then work out the perf/watt delta.

With the >50% I do agree with using the 50% as a baseline but I feel confident that they are not doing a best vs worst comparison because that is not something AMD have done prior under current leadership.
I don't disagree with any of that, but I still never assume anything above what is promised. AMD under current leadership (ex-Koduri, that is) has been pretty trustworthy in their marketing for the most part. Still, I can't trust that to continue - corporations are opportunistic and almost exclusively focused on short term profits, and fundamentally do not care whatsoever about any kind of sustained ethics or even just acting consistently as long as it has some sort of sales/marketing gain, so one can never really trust history to indicate anything much - the best that's possible is to hope that they're choosing to not be complete exploitative assholes. I'm absolutely hopeful that the previous couple of generations will indeed be a solid indication of how their promised numbers should be interpreted - but hope and trust are not the same thing. Hence, I'm sticking with what has been explicitly promised - but as I said I'll be happy to be proven wrong. (And, of course, unhappy to be proven wrong if they don't deliver 50% as well.)
btk2k2What it does do though is give us some numbers to play with. If the Enermax numbers are correct and top N31 is using 420W then you can get the following numbers.

Baseline | TBP | Power Delta | Perf/Watt multi | Performance Multi | Estimate vs 4090 in Raster
6900XT | 300W | 1.4x | 1.5x | 2.1x | +10%
6900XT | 300W | 1.4x | 1.64x (to match 6900XT delta) extreme upper bound! | 2.3x | +23%
Ref 6950XT | 335W | 1.25x | 1.5x | 1.88x | +15%
Ref 6950XT | 335W | 1.25x | 1.64x Again extreme upper bound! | 2.05x | +25%


Now, the assumption I am making here is pretty obvious: that the design goal of N31 was 420W to begin with, which would mean it was wide enough to use that power in the saner part of the f/V curve. If it was not 420W to begin with and has been pushed there by increasing clocks, then it is obvious the perf/watt will drop off and the numbers above will be incorrect.

The other assumption is the Enermax numbers are correct. It is entirely possible that the reference TBP for N31 will be closer to 375W which with these numbers would put it about on par with the 4090.

My view is the TBP will be closer to 375-400W rather than 420W, in which case anywhere from about equal to 5% ahead of the 4090 seems to be the ballpark I expect top N31 to land in. There is room for a positive surprise should AMD's >50% claim be like their >5GHz claim or the >15% single-thread claim in the Zen 4 teaser slide and turn out to be a rather large underselling of what was actually achieved. Still, I await actual numbers on that front and until then I am assuming something in the region of +50%.
I'm not familiar with those Enermax numbers you mention, but there's also the variable of resolution scaling that needs consideration here. It looks like you're calculating only at 2160p? That obviously makes sense for a flagship SKU, but it also means that (unless RDNA3 scales much better with resolution than RDNA2), these cards will absolutely trounce the 4090 at 1440p - a 2.1x performance increase from the 6900XT at 1440p would go from 73% performance v. the 4090 to 153% performance - and that just sounds (way) too good to be true. It would definitely be (very!) interesting to see how customers would react to a card like that if that were to happen (and AMD didn't price it stupidly), but I'm too skeptical to believe that to be particularly likely.
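Put as numbers (the ~73% figure is the relative standing assumed above, not a quoted benchmark):

```python
# Resolution-scaling concern above: a 2.1x uplift on a 6900 XT that sits at
# ~73% of a 4090 at 1440p would land at ~153% of the 4090. Inputs are assumptions.
relative_6900xt_vs_4090_1440p = 0.73
uplift = 2.1
print(f"{relative_6900xt_vs_4090_1440p * uplift:.0%} of the 4090 at 1440p")  # ~153%
```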
ODOGG26However, there always seems to be more interest/hype in AMD leaks than Nvidia from what I've seen.
AMD always gets the "Will they be able to take them down this time?" underdog hype, which to some extent disadvantages Nvidia - it's much harder for them to garner the type of excitement that follows with a potential upset of some kind. But on the other hand, Nvidia has massive reach, tons of media contacts, and are covered and included steadily everywhere across the internet. Not to mention that the tone of that coverage always already expects them to be superior - which isn't as exciting as an underdog, but it still gets people reading, as "how fast will my next GPU be?" (with the default expectation of this being an Nvidia GPU) is just as interesting to people as "will AMD be able to match/beat Nvidia this time?"

Of course in terms of leaks there's also the question of sheer scale: Nvidia outsells AMD's GPU division by ~4x, meaning they have 4x the production volume, 4x the shipping volume, and thus far more products passing through far more hands before launch, with control of this being far more difficult due to this scale.
Posted on Reply