Tuesday, August 31st 2021
AMD Reportedly Readying RX 6900 XTX, Bringing the Battle to NVIDIA RTX 3090
Graphics cards may be on their way to becoming unicorns that you can only pay for after finding the proverbial pot of gold at the end of a rainbow, but that doesn't mean AMD and NVIDIA will slow down their competition any time soon - especially since, in this market, there's a huge profit to be made. And AMD may finally be readying their true halo product - a graphics card that aims to beat NVIDIA's RTX 3090 across the board. Twitter user CyberPunkCat shared an alleged AMD slide showcasing a new, overpowered RX 6900 XTX graphics card. AMD's naming scheme for the RX 6900 series can be slightly confusing nowadays: the original RX 6900 XT carries the Navi 21 XTX die, and AMD has recently released a higher-performance version of that Navi 21 chip in the form of the Navi 21 XTXH - which powers the liquid-cooled versions of the RX 6900 XT, with higher overall clocks than the original release. So far, however, the RX 6900 XT nomenclature hasn't changed - but this new slide suggests that's about to.
If the leaked slide is real (keep your NaCl ready, as always), it appears that the RX 6900 XTX might pair the higher-performance Navi 21 XTXH chip with higher memory speeds. While both Navi 21 XT and Navi 21 XTXH make use of 16 Gbps GDDR6 memory, the slide indicates that the RX 6900 XTX will feature 18 Gbps memory, exploring another avenue for increased performance. This would raise maximum theoretical memory bandwidth from the RX 6900 XT's 512 GB/s up to 576 GB/s - a 13% increase, though one that would not translate into a proportional increase in final performance. However, considering that our own reviews show AMD's RX 6900 XT with the Navi 21 XTXH silicon already running between one and three percent faster than NVIDIA's RTX 3090, even a slight 5% performance increase over that card means that AMD might be able to claim the performance crown for the mainstream market. It's been a while since that happened, hasn't it?
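For reference, those bandwidth figures fall out of simple arithmetic: Navi 21 uses a 256-bit memory bus, and peak theoretical bandwidth is bus width times per-pin data rate, divided by eight bits per byte. A quick sketch of that calculation (Python, using the numbers above):

# Peak theoretical GDDR6 bandwidth in GB/s = bus width (bits) x per-pin data rate (Gbps) / 8
def gddr6_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8

rx_6900_xt  = gddr6_bandwidth_gbs(256, 16)   # 512.0 GB/s (current RX 6900 XT)
rx_6900_xtx = gddr6_bandwidth_gbs(256, 18)   # 576.0 GB/s (rumored RX 6900 XTX)
print(rx_6900_xtx / rx_6900_xt - 1)          # 0.125 - the ~13% uplift quoted above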
Sources:
CyberPunkCat @ Twitter, via Tom's Hardware
107 Comments on AMD Reportedly Readying RX 6900 XTX, Bringing the Battle to NVIDIA RTX 3090
So you would belong to the minority (28%) who don't care about RT, yet speak the loudest?
It feels like one step forward, two steps back. I'll happily jump in on it when the market deals with it in a healthy way, but it never has and is not looking to anytime soon. And in that, RT(X) is different from previous big changes in GPU hardware and capabilities. Today it's a tech for the happy few - a handful, even - because GPUs in general are unobtanium. Ahem, selective reading much? A VAST majority actually cares about raster BEFORE RT. And that is exactly the choice you have in the market right now, too. 51 + 28%. Not 28%.
I also believe that aligns well with what most topics on the subject contain: a few happy campers who are all over RT, and a vast majority just waiting it out while using raster only, but looking forward to seeing where it goes next.
The clear minority is those who advocate RT progress ahead of, or even equal to, raster progress: 13 + 5 + 3%. This poll simply contains five degrees of 'how much you can care' about one compared to the other, weighing them all.
You know what would kick RT adoption up in a major way? Nvidia making sure an RT capable GPU lands in the hands of every gamer. Now, look at the market again ;)
You seem to know the market so well that you'd deny any market study made by professionals, so I won't bother sourcing any :roll:.
Btw Intel is also dead set on Ray Tracing and XeSS, which will make Ray Tracing more accessible in the future
wccftech.com/intel-xess-interview-karthik-vaidyanathan/
Jon Peddie Research? Nvidia sure sold a hell of a lot of GPUs in Q1/Q2 compared to AMD, and the Steam Hardware Survey counts those Ampere users nicely, but I guess someone here doesn't trust the HWS all that much.
My take would be that ~20% put a fair bit of stock in it for a current purchase, and another 50% of people still regard it as a meaningful feature. Given that it is the future, this notion is only going to increase.
Interesting, but that's about it.
He's completely disregarded the fact that the hardware is barely capable of RT and you need piles of money to make it happen in a respectable way. The opinion of people on whether to invest in RT now or not is clearly leaning towards rasterization anyway. RT is great and it will be the future, but it won't happen tomorrow considering the pace of hardware advancement, keeping in mind what ray tracing can really do. Sure, it may be fascinating, but it is still a long way down the road for me. Is it really possible to get a card for MSRP? I know the 2000-series cards cost more than they did at launch. Actually, you don't have to. If it doesn't bring much to the game and causes a huge impact on performance, you can simply wait, because the performance you get for the money you have to put in is not that good. Which source have you provided yourself to support your point of view? Our colleague's post is pointless in this argument.
Then there are the 13% who think it's 50-50. Even if we add those to the 'RT is a MUST' camp, you get 21% of the poll.
79% do not CARE about RT!
This is what I call a majority. I have seen similar percentages, give or take, in other techtuber polls.
If we can play RT games without any performance loss (60 fps minimum with everything maxed out) and without the need for FSR, DLSS, XeSS etc., then I can say it is not a gimmick.
Right now, every time I play and try to enjoy a game, there is weird shimmering when using DLSS. I noticed it in Metro EE, CP2077 and in Call of Duty Warzone. No bueno.
Unless DLSS improves on this, I will not be using it, and that automatically means I cannot use RT at resolutions higher than 1440p without DLSS.
At 1080p with RT turned on, it's fine with most RT-capable cards.
In addition to optimizing the ray-tracing-specific side of things, RT advancements in games and software are very much about all the supporting areas that are not hardware accelerated - notably, building and updating the BVH and other data structures.
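For anyone wondering what 'building the BVH' actually involves: it's the bounding-volume hierarchy the engine or driver assembles over the scene geometry before any rays are cast - per the point above, the part that isn't hardware accelerated. A purely illustrative median-split build over axis-aligned bounding boxes might look something like this (Python, not any real engine's code):

# Illustrative BVH build: recursively split boxes at the median centroid along the widest axis.
# Each AABB is ((min_x, min_y, min_z), (max_x, max_y, max_z)).
def merge(a, b):
    return (tuple(map(min, a[0], b[0])), tuple(map(max, a[1], b[1])))

def centroid(box):
    return tuple((lo + hi) / 2 for lo, hi in zip(box[0], box[1]))

def build_bvh(boxes):
    bounds = boxes[0]
    for b in boxes[1:]:
        bounds = merge(bounds, b)
    if len(boxes) <= 2:                                   # small enough: make a leaf
        return {"bounds": bounds, "leaves": boxes}
    axis = max(range(3), key=lambda i: bounds[1][i] - bounds[0][i])
    boxes = sorted(boxes, key=lambda b: centroid(b)[axis])
    mid = len(boxes) // 2
    return {"bounds": bounds,
            "left": build_bvh(boxes[:mid]),
            "right": build_bvh(boxes[mid:])}

Every time geometry moves, parts of a structure like this have to be rebuilt or refitted, which is why it keeps showing up as a CPU/compute cost in RT titles.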
Game development is a multi-year process, and it's largely only this year that the major game engines have implemented RT effects in a real (and non-beta) way. The APIs have somewhat matured in their first incarnation.
Similarly, with AMD having hardware support for RT and Intel also adding some, this is already becoming industry-wide.
Clamoring for full path tracing is one possible viewpoint, but dismissing RT entirely until then is very short-sighted IMO. RT has obvious benefits for the use cases it is already being used for - basically everything to do with lighting, but in practice mostly GI, shadows, and some aspects of transparency. Reflections too, but that probably has more to do with us not having found better ways to do proper reflections than with the current strengths of RT.
Idk about you but I would turn on RT Reflection in CP2077 and tweak other settings for ~60FPS (including DLSS)
Also I'm seeing less shimmering in CP2077 with DLSS as opposed to Native rendering, DLSS is just black magic really :D
As for RT, it can make a game look like a different, more advanced version of the same game - The Ascent, for example.
I'll be the first to admit that if RT gains traction, it gains traction. Really. But it hasn't and doesn't - and no, the games don't look like different games either. Most of them look like somebody was too busy cleaning everything meticulously. Reflections are overdone, lighting passes generally lack balance, but yes, they all move all the time, fully dynamic - that is true. Is it a different experience in terms of gaming? Absolutely not, it just looks like an extra post effect. In Metro Exodus, some scenes even detract from the experience. You linked The Ascent; the first thing I noticed was the stuttering in the RT half of that video.
What we have now is 16-20% of the GPU floorplan reserved to boost RT performance (the extra cache also serves it). Those cores are working hard to produce somewhat flashier reflective surfaces and somewhat more dynamic lighting and shadows. That's all she wrote so far, and often games pick just one of them - never all at once. Imagine how much more you need hardware-wise to make it all tick.
Also... it's good that Intel is stepping into this race too. But all these generations really are is attempts at finding a good equilibrium between marketable RT and not-too-shitty raster. The balancing act is ongoing, and every company is going to try to carve out a unique selling point. AMD really does it too, even by not offering dedicated hardware for it - if they find ways to accelerate RT performance on their regular GPU floorplan, they have a winner that Nvidia can never get close to again. Alternatively, if Nvidia proves they can do fantastic RT (they haven't, yet) at good performance levels on marketable dies without per-title support being required, they also have a winner. So far, though, all Nvidia has shown us is price hikes for a minimal RT perf advantage, and exploding TDPs.
So far, not one single color/camp has the best solution, and it is anyone's guess where this might go. The economy around GPUs plays a major role in it too, and it's not looking very good currently. That's a huge part of what colours that poll you linked, too. If it's affordable, people will happily adopt it.
As for those better ways - as you probably know, there are demos that don't use RT. Reflections, lighting and shadows are there, and you would not be able to tell the difference. RT is great, no doubt, but for now it is mostly a marketing point.
I wasn't referring to AMD's 16-20% - I was referring to Nvidia's, where shader efficiency hasn't improved since Pascal; instead it was chopped up into smaller bits, and the cache is likely a byproduct of that too. And they needed that change to cater for their additional RT/Tensor patchwork on top. Not to improve raster.
Meanwhile:
6900XT = 520 mm²
3090 = 628 mm²
That's about 20% bigger right there. Despite the changes since Turing, they still have a net die size increase of 20% for similar rasterized perf. Albeit on a slightly bigger node. True actually.
Almost 100 mm² is a lot of a difference.
There is a reddit post where detailed Turing die shots were analyzed. What he came up with seems to be correct enough; Tensor cores and FP16 capabilities may be more nuanced, but RT Cores are distinguishable and straightforward. RT Cores make up about 6% of a TPC and about 3% of total die size. The increase for Tensor cores and/or FP16 capability concurrent with FP32 has more/most uses outside RT, same for cache. The implementation for AMD and Intel should not be too different in terms of transistor and area cost, possibly less.
I wish there were good/readable enough die shots for RDNA2 and Ampere, but apparently not so far. We would also need comparisons without RT, and in the case of RDNA, where RT capability is part of some other block (the TMU?), it is probably impossible to read.
3090 is on Samsung 8N, 6900XT is on TSMC N7:
- 3090 die is 28.3B transistors on 628 mm² - 45 MTr/mm²
- 6900XT die is 26.8B transistors on 520 mm² - 51 MTr/mm²
This highlights the differences in manufacturing processes more than anything.
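Those per-die densities follow directly from the quoted figures - a quick sketch of the arithmetic (Python):

# Transistor density in millions of transistors per mm² = count (in billions) * 1000 / die area
def density_mtr_per_mm2(transistors_billions, die_area_mm2):
    return transistors_billions * 1000 / die_area_mm2

print(round(density_mtr_per_mm2(28.3, 628), 1))   # ~45.1 MTr/mm² (RTX 3090, Samsung 8N)
print(round(density_mtr_per_mm2(26.8, 520), 1))   # ~51.5 MTr/mm² (RX 6900 XT, TSMC N7)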
In terms of the transistor/area cost of the latest improvements, RDNA2 has a huge number of transistors in Infinity Cache (at least 6.4B, plus some control logic - about 24% of the total transistor count), while Ampere no doubt has a lot of transistors in the doubled ALUs in its shaders.
More cache has been the go-to improvement for a few generations before RDNA and Turing; more likely than not, adding more and more cache (at different levels) would happen with or without RT. Assuming a similar transistor density to the 6900XT, a 3090 die on N7 would come out about 5.5% larger than the 6900XT's - roughly 30 mm² more.
That assumption is obviously suspect, though. Without Infinity Cache, the 6900XT die would be noticeably less dense. On the other hand, there is A100 on TSMC's N7, with 54.2B transistors on 826 mm², which works out to a density of 65.6 MTr/mm².
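Spelling out that back-of-the-envelope step (with the same caveat as above - it assumes the 3090's transistors would pack at the 6900XT's density on N7, which Infinity Cache skews):

# Hypothetical RTX 3090 die area at the RX 6900 XT's transistor density
density_6900xt = 26.8 * 1000 / 520                  # ~51.5 MTr/mm²
hypothetical_3090_n7 = 28.3 * 1000 / density_6900xt
print(round(hypothetical_3090_n7))                  # ~549 mm² - roughly 30 mm² (5-6%) above 520 mm²

# A100 on the same N7 node, for contrast
print(round(54.2 * 1000 / 826, 1))                  # ~65.6 MTr/mm²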