Wednesday, October 28th 2020

AMD Announces the Radeon RX 6000 Series: Performance that Restores Competitiveness

AMD (NASDAQ: AMD) today unveiled the AMD Radeon RX 6000 Series graphics cards, delivering powerhouse performance, incredibly life-like visuals, and must-have features that set a new standard for enthusiast-class PC gaming experiences. Representing the forefront of extreme engineering and design, the highly anticipated AMD Radeon RX 6000 Series includes the AMD Radeon RX 6800 and Radeon RX 6800 XT graphics cards, as well as the new flagship Radeon RX 6900 XT - the fastest AMD gaming graphics card ever developed.

AMD Radeon RX 6000 Series graphics cards are built upon groundbreaking AMD RDNA 2 gaming architecture, a new foundation for next-generation consoles, PCs, laptops and mobile devices, designed to deliver the optimal combination of performance and power efficiency. AMD RDNA 2 gaming architecture provides up to 2X higher performance in select titles with the AMD Radeon RX 6900 XT graphics card compared to the AMD Radeon RX 5700 XT graphics card built on AMD RDNA architecture, and up to 54 percent more performance-per-watt when comparing the AMD Radeon RX 6800 XT graphics card to the AMD Radeon RX 5700 XT graphics card using the same 7 nm process technology.
AMD RDNA 2 offers a number of innovations, including applying advanced power saving techniques to high-performance compute units to improve energy efficiency by up to 30 percent per cycle per compute unit, and leveraging high-speed design methodologies to provide up to a 30 percent frequency boost at the same power level. It also includes new AMD Infinity Cache technology that offers up to 2.4X greater bandwidth-per-watt compared to GDDR6-only AMD RDNA-based architectural designs.
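The bandwidth-per-watt claim is easier to parse with the standard weighted-average model in mind: accesses served by the on-die cache cost far less time and power than trips to GDDR6. A minimal C++ sketch of that model; the hit rate and bandwidth figures below are illustrative assumptions, not AMD's published numbers.

// Effective-bandwidth model for a large on-die cache in front of GDDR6.
// All figures are illustrative assumptions.
#include <cstdio>

int main()
{
    const double dramBw  = 512.0;  // GB/s: 256-bit GDDR6 at 16 Gbps per pin
    const double cacheBw = 2048.0; // GB/s: on-die SRAM (assumed)
    const double hitRate = 0.58;   // fraction of accesses served on-die (assumed)

    // Hits are served at SRAM speed; only misses pay the DRAM cost.
    const double effectiveBw = hitRate * cacheBw + (1.0 - hitRate) * dramBw;
    printf("effective bandwidth: %.0f GB/s (%.2fx DRAM alone)\n",
           effectiveBw, effectiveBw / dramBw);
    return 0;
}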

"Today's announcement is the culmination of years of R&D focused on bringing the best of AMD Radeon graphics to the enthusiast and ultra-enthusiast gaming markets, and represents a major evolution in PC gaming," said Scott Herkelman, corporate vice president and general manager, Graphics Business Unit at AMD. "The new AMD Radeon RX 6800, RX 6800 XT and RX 6900 XT graphics cards deliver world class 4K and 1440p performance in major AAA titles, new levels of immersion with breathtaking life-like visuals, and must-have features that provide the ultimate gaming experiences. I can't wait for gamers to get these incredible new graphics cards in their hands."

Powerhouse Performance, Vivid Visuals & Incredible Gaming Experiences
AMD Radeon RX 6000 Series graphics cards support high-bandwidth PCIe 4.0 technology and feature 16 GB of GDDR6 memory to power the most demanding 4K workloads today and in the future. Key features and capabilities include:

Powerhouse Performance
  • AMD Infinity Cache - A high-performance, last-level data cache suitable for 4K and 1440p gaming with the highest level of detail enabled. 128 MB of on-die cache dramatically reduces latency and power consumption, delivering higher overall gaming performance than traditional architectural designs.
  • AMD Smart Access Memory - An exclusive feature of systems with AMD Ryzen 5000 Series processors, AMD B550 and X570 motherboards and Radeon RX 6000 Series graphics cards. It gives AMD Ryzen processors greater access to the high-speed GDDR6 graphics memory, accelerating CPU processing and providing up to a 13-percent performance increase on an AMD Radeon RX 6800 XT graphics card in Forza Horizon 4 at 4K when combined with the new Rage Mode one-click overclocking setting.9,10 (A sketch of how such CPU-visible memory can appear to software follows this list.)
  • Built for Standard Chassis - With a length of 267 mm and two standard 8-pin power connectors, and designed to operate with existing enthusiast-class 650 W-750 W power supplies, gamers can easily upgrade their existing large to small form factor PCs without additional cost.
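A note on how the CPU-visible VRAM behind Smart Access Memory can surface to software: a common interpretation, not stated in the release, is that SAM builds on PCIe resizable BAR, under which VRAM appears as a memory heap that is both device-local and host-visible. A minimal Vulkan sketch of probing for such a heap; physicalDevice is a placeholder handle.

// Probe for CPU-mappable VRAM (DEVICE_LOCAL + HOST_VISIBLE memory types).
// Assumes a valid VkPhysicalDevice; the resizable-BAR reading of SAM is assumed.
#include <vulkan/vulkan.h>
#include <cstdio>

void PrintCpuVisibleVram(VkPhysicalDevice physicalDevice)
{
    VkPhysicalDeviceMemoryProperties props;
    vkGetPhysicalDeviceMemoryProperties(physicalDevice, &props);

    for (uint32_t i = 0; i < props.memoryTypeCount; ++i) {
        const VkMemoryPropertyFlags flags = props.memoryTypes[i].propertyFlags;
        // Device-local AND host-visible: VRAM the CPU can map directly.
        // Without a resized BAR this window is typically capped at 256 MB.
        if ((flags & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) &&
            (flags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)) {
            const VkMemoryHeap heap =
                props.memoryHeaps[props.memoryTypes[i].heapIndex];
            printf("CPU-mappable VRAM heap: %llu MB\n",
                   (unsigned long long)(heap.size >> 20));
        }
    }
}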
True to Life, High-Fidelity Visuals
  • DirectX 12 Ultimate Support - Provides a powerful blend of raytracing, compute, and rasterized effects, such as DirectX Raytracing (DXR) and Variable Rate Shading, to elevate games to a new level of realism.
  • DirectX Raytracing (DXR) - Adding a high performance, fixed-function Ray Accelerator engine to each compute unit, AMD RDNA 2-based graphics cards are optimized to deliver real-time lighting, shadow and reflection realism with DXR. When paired with AMD FidelityFX, which enables hybrid rendering, developers can combine rasterized and ray-traced effects to ensure an optimal combination of image quality and performance.
  • AMD FidelityFX - An open-source toolkit for game developers available on AMD GPUOpen. It features a collection of lighting, shadow and reflection effects that make it easier for developers to add high-quality post-process effects that make games look beautiful while offering the optimal balance of visual fidelity and performance.
  • Variable Rate Shading (VRS) - Dynamically reduces the shading rate for different areas of a frame that do not require a high level of visual detail, delivering higher levels of overall performance with little to no perceptible change in image quality.
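To make the VRS bullet concrete, here is a minimal DirectX 12 sketch that checks for VRS support and requests a coarse 2x2 shading rate for subsequent draws. This is the generic D3D12 path rather than anything AMD-specific; device and command-list creation and error handling are omitted.

// Query VRS support, then shade once per 2x2 pixel block for later draws.
#include <d3d12.h>

void EnableCoarseShading(ID3D12Device* device,
                         ID3D12GraphicsCommandList5* cmdList)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 options6 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS6, &options6, sizeof(options6))) &&
        options6.VariableShadingRateTier !=
            D3D12_VARIABLE_SHADING_RATE_TIER_NOT_SUPPORTED)
    {
        // Low-detail regions keep most of their visual quality while
        // paying roughly a quarter of the pixel-shading cost.
        cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    }
}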
Elevated Gaming Experience
  • Microsoft DirectStorage Support - Future support for the DirectStorage API enables lightning-fast load times and high-quality textures by eliminating storage API-related bottlenecks and limiting CPU involvement.
  • Radeon Software Performance Tuning Presets - Simple one-click presets in Radeon Software help gamers easily extract the most from their graphics card. The presets include the new Rage Mode stable overclocking setting that takes advantage of extra available headroom to deliver higher gaming performance.
  • Radeon Anti-Lag - Significantly decreases input-to-display response times and offers a competitive edge in gameplay.
AMD Radeon RX 6000 Series Product Family
Robust Gaming Ecosystem and Partnerships
In the coming weeks, AMD will release a series of videos from its ISV partners showcasing the incredible gaming experiences enabled by AMD Radeon RX 6000 Series graphics cards in some of this year's most anticipated games. These videos can be viewed on the AMD website.
  • DIRT 5 - October 29
  • Godfall - November 2
  • World of Warcraft: Shadowlands - November 10
  • The Riftbreaker - November 12
  • Far Cry 6 - November 17
Pricing and Availability
  • AMD Radeon RX 6800 and Radeon RX 6800 XT graphics cards are expected to be available from global etailers/retailers and on AMD.com beginning November 18, 2020, for $579 USD SEP and $649 USD SEP, respectively. The AMD Radeon RX 6900 XT is expected to be available December 8, 2020, for $999 USD SEP.
  • AMD Radeon RX 6800 and RX 6800 XT graphics cards are also expected to be available from AMD board partners, including ASRock, ASUS, Gigabyte, MSI, PowerColor, SAPPHIRE and XFX, beginning in November 2020.

394 Comments on AMD Announces the Radeon RX 6000 Series: Performance that Restores Competitiveness

#301
tfdsaf
SLK said: Marketing rule number 1: Always show your best.

Yesterday's Radeon presentation clearly indicates they have matched Ampere's raster performance. They did not show RT numbers, and Super Resolution is something they are still working on. If the RT were as good as Ampere's, they would have shown numbers. Simple deduction.
ALL of the ray traced games so far use Nvidia's proprietary ray tracing methods. They are based on DXR of course, but completely optimized for Nvidia, so obviously AMD hardware will not be able to run that ray tracing, or it's going to have worse performance. This doesn't matter much though, as only a handful of games support ray tracing, and literally one or two have implementations decent enough to look reasonably better than established techniques.

AMD is literally going to have the entire console catalog of games, which will be built and optimized for AMD's RDNA 2 implementation of ray tracing.

Personally I think Nvidia was pushing ray tracing way too hard; they just needed a "new" feature for the marketing, without it actually being ready for practical use. In fact even the Ampere series is crap at rendering rays, and guess what, their next gen will be as well, same with AMD. We are realistically at least 5 years away from being able to properly trace rays in real time in games without making it extremely specific and cutting 9/10 corners. Right now ray tracing is literally just a marketing gimmick; it's extremely specific and limited.

If you actually did a fully ray-traced game, with all the caveats of actually tracing rays, and you had trillions of rays in the scene, it would literally blow up existing GPUs; it's not possible to do. It would render at like 0.1 frames per second.

This is why they have to use cheats to make it work, and why it's only used on one specific thing, either shadows, or reflections, or global illumination, etc., never all of them, and even then it's very limited. They limit the rays that get processed, so only the barebones rays get traced.

Again, in 5 years we could have a 100x better ray tracing implementation, with full ray tracing capability that doesn't cut as many corners and comes somewhat close to offline rendering, instead of what is essentially a gimmick that tanks your performance by 50% for very specific and very limited tracing. If you actually did better ray tracing today it would tank performance completely; you'd be running at less than 1 frame per second.
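For scale, here is a back-of-the-envelope version of that claim in C++; the sample and bounce counts are illustrative assumptions, not measurements.

// Rays needed for naive path tracing of a 4K frame, order-of-magnitude only.
#include <cstdio>

int main()
{
    const double pixels  = 3840.0 * 2160.0; // 4K frame
    const double samples = 1000.0;          // paths per pixel for low noise (assumed)
    const double bounces = 4.0;             // ray segments per path (assumed)
    const double fps     = 60.0;

    const double raysPerFrame  = pixels * samples * bounces; // ~3.3e10
    const double raysPerSecond = raysPerFrame * fps;         // ~2.0e12

    printf("rays per frame:  %.2e\n", raysPerFrame);
    printf("rays per second: %.2e\n", raysPerSecond);
    // GPUs of this era trace on the order of 1e9-1e10 rays/s, which is
    // why games trace one effect at a few rays per pixel and denoise.
    return 0;
}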
#302
Zach_01
Max(IT) said: I was honestly expecting more, especially from the 6800, which was my target. I mean, 16 GB of VRAM is highly unnecessary (10 would have been perfect), and the price, probably because of that amount of VRAM, is $50-60 higher than the sweet spot, and definitely too close to the 6800 XT. We know nothing about RT performance, so we should wait for the reviews before drawing any conclusions.


Max(IT) also asked: When did they speak about X570???
They said it alright...

Copied from another thread:

"I can see there is a lot of confusion about the new feature AMD is calling "Smart Access Memory" and how it works. My 0.02 on the subject.
According to the presentation the SAM feature can be enabled only in 500series boards with a ZEN3 CPU installed. My assumption is that they use PCI-E 4.0 capabilities for this, but I'll get back to that.
The SAM feature has nothing to do with InfinityCache. IC is used to compensate the 256bit bandwithd between the GPU and VRAM. Thats it, end of story. And according to AMD this is equivalent of a 833bit bus. Again, this has nothing to do with SAM. IC is in the GPU and works for all systems the same way. They didnt say you have to do anything to "get it" to work. If it works with the same effectiveness with all games we will have to see.

Smart Access Memory
They use SAM to give the CPU access to VRAM and probably speed things up a little on the CPU side. That's it. They said it in the presentation, and they showed it too...
And they can probably get this done because of PCI-E 4.0's speed capability. If true, that's why there's no 400-series support.
They also said that this feature may get better in the future than it is today, once game developers optimize their games for it.
I think AMD just made PCI-E 4.0 (on their own platform) more relevant than it was until now!"

[AMD presentation slide: "Full CPU access to GPU memory"]


So who knows better than AMD if the 16GB is necessary or not?
#304
utilizedamplitude
nikoya said: So now I have to hate LG for not implementing FreeSync on the C9 and B9 OLEDs.
I have a C9 with a Vega 64 and FreeSync works just fine. All that is required is to use CRU and configure the TV to report as FreeSync compatible.
#305
lexluthermiester
I know I'm late to the party, so I'm repeating what's already been said, oh well (TL;DR), but HOT DAMN! If those numbers are real, AMD has got the goods to make life interesting (perhaps even difficult) for Nvidia. AMD is not gimping on the VRAM either. This looks like it's going to be AMD's round of GPU king-of-the-hill!

This is a very good time for the consumer in the PC industry!!

What I find most interesting is that the 6900 XT might be even better than the 3090 at 8K gaming. Really looking forward to more tests with 8K panels, like the one LTT did, but more fleshed out and expansive.
#306
InVasMani
mechtech said: I wonder when/if 30, 36, or 40 CU cards and other cards will be released?
52 CU and 44 CU are the next logical steps; based on what's already been released, AMD seems to disable 8 CUs at a time. I can see them doing a 10 GB or 14 GB capacity device. It would be interesting if they utilized GDDR6 and GDDR6X together, alongside variable rate shading: use the GDDR6 when the scene image quality is scaled back further and the GDDR6X at the higher quality, giving mixed performance at a better price. I would think they'd consider reducing the memory bus width to 128-bit or 192-bit for SKUs with those CU counts, though, if paired with Infinity Cache.

It's interesting to think about how Infinity Cache impacts latency in a CF setup; I'd expect less micro-stutter. The 99th percentiles will be interesting to look at for RDNA 2, with all the added bandwidth and I/O.

I suppose 36 CUs is possible as well by extension, but I don't know how the profit margins would look; the low end of the market is eroding further each generation, not to mention Intel entering the dGPU market will compound that situation. I don't think a 30 CU part is likely for RDNA 2; it would end up being 28 CU if anything, and that's doubtful unless they wanted it for an APU/mobile part.
#308
ador250
10K CUDA cores, 384-bit, 35 TFLOPS, 350 W, all just to match the 256-bit, 23 TFLOPS, 300 W 6900 XT, LawL. Ampere is a failure of an architecture. Let's hope Hopper changes something for Nvidia. Until then, RIP Ampere.
#309
SLK
tfdsaf said (see #301 above): ALL of the ray traced games so far use Nvidia's proprietary ray tracing methods. [...] If you actually did better ray tracing today it would tank performance completely; you'd be running at less than 1 frame per second.
True, full ray tracing, aka path tracing, is too expensive now, hence the hybrid rendering and tools like DLSS to make it feasible. However, even in its current form it looks so good. I have played Control, Metro Exodus, and Minecraft, and it just looks beautiful. In slow-paced games you can really experience the glory of ray tracing, and it's hard to go back to the normal version after that. In fast-paced titles though, like Battlefield or Fortnite, I don't really notice it.
#310
Zach_01
ador250 said: 10K CUDA cores, 384-bit, 35 TFLOPS, 350 W, all just to match the 256-bit, 23 TFLOPS, 300 W 6900 XT, LawL. Ampere is a failure of an architecture. Let's hope Hopper changes something for Nvidia. Until then, RIP Ampere.
To be honest, those AMD charts for the 6900 XT vs the 3090 are with the AMD GPU overclocked and Smart Access Memory on, so that's not 300 W for starters.
I guess it wouldn't be 350+ W, but still not 300 W.

I'm not saying that what AMD has accomplished isn't impressive. It is more than just impressive. And with that SAM feature, with a 5000-series CPU + 500-series board, it might change the game.

And to clarify something: SAM will be available on all 500-series boards, not only X570. They use the PCI-E 4.0 interconnect between the CPU and GPU for the former to have VRAM access. All 500-series boards run GPUs at PCI-E 4.0 speed.
#311
Metroid
Max(IT) said: I would have preferred 10 or 12 GB of VRAM for $50 less. 16 GB for the intended target (mostly 1440p) is totally useless.
Yeah, but the answer to that 16 GB will likely be a 3080 Ti, and that is targeted at 4K; the 16 or 20 GB 3080 was cancelled.
#312
Zach_01
They need more VRAM for Smart Access Memory.
#313
Metroid
ador250 said: 10K CUDA cores, 384-bit, 35 TFLOPS, 350 W, all just to match the 256-bit, 23 TFLOPS, 300 W 6900 XT, LawL. Ampere is a failure of an architecture. Let's hope Hopper changes something for Nvidia. Until then, RIP Ampere.
It's a failure in that they have not used cache like AMD did here. As Lisa said, they used cache the way they did on Zen 3. If Nvidia uses cache, they will likely get the upper hand, but that takes time and effort to develop, and if it happens it will be in 2 years or so. So AMD has 2 years to figure out how to get closer to Nvidia on ray tracing, and Nvidia has 2 years to figure out how to implement cache on their GPUs like AMD did.
#314
Zach_01
Metroid said (see #313 above): It's a failure in that they have not used cache like AMD did here. [...] Nvidia has 2 years to figure out how to implement cache on their GPUs like AMD did.
That would require a complete GPU redesign. They already occupy a large portion of the die with tensor and RT cores. Also, a very different memory controller would be needed.
The path Nvidia has chosen does not allow them to implement such a cache, and I really doubt they will abandon tensor and RT cores in the future.
#315
Metroid
Zach_01 said: That would require a complete GPU redesign. They already occupy a large portion of the die with tensor and RT cores. Also, a very different memory controller would be needed. The path Nvidia has chosen does not allow them to implement such a cache, and I really doubt they will abandon tensor and RT cores in the future.
So if that is true, then they have to find a way to outdo AMD's cache. Like I said before, if AMD pushes to 384-bit GDDR6 like Nvidia, then Nvidia is doomed; 256-bit is already beating Nvidia's best.
#316
Zach_01
If we believe AMD's numbers, that structure (256-bit + 128 MB) is giving them the equivalent (effective) of an 833-bit GDDR6 bus.
The thing is, we don't know if increasing the actual bus width to 320- or 384-bit is going to scale well. You have to have stronger cores to utilize the extra (real or effective) bandwidth.

EDIT/PS: They would also have to redesign the memory controller for a wider bus, which means an expensive and larger die. It's a no-go...
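For reference, the 833-bit figure lines up with AMD's slide quoting roughly a 2.17x effective-bandwidth uplift over a 384-bit GDDR6 design; a trivial check, taking that 2.17x factor as reported rather than independently verified.

// Sanity-check the "833-bit equivalent" claim.
#include <cstdio>

int main()
{
    const double gbpsPerPin = 16.0;  // GDDR6 per-pin data rate (illustrative)
    const double refBusBits = 384.0; // comparison bus from AMD's slide
    const double uplift     = 2.17;  // AMD's quoted effective uplift

    const double refBwGBs = refBusBits / 8.0 * gbpsPerPin; // 768 GB/s
    printf("effective bus width: %.0f-bit\n", refBusBits * uplift); // ~833-bit
    printf("effective bandwidth: %.0f GB/s\n", refBwGBs * uplift);  // ~1667 GB/s
    return 0;
}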
#317
hero1
ShurikN said: As far as I remember from all the leaks and rumors, there is not going to be an AIB 6900 XT.
For reals? That'll be insane if they don't have AIBs involved. Imagine the amount of money they'll make when (if) the reviews back up their performance claims.
#318
Zach_01
AIBs on the 6900 XT would mean $1100-1200 prices, if not more. Maybe AMD doesn't want that.
But then again... 6800 XT AIBs would mean matching the 6900 XT for less ($700-800+).

It’s complicated...
#319
BoboOOZ
Metroid said: So if that is true, then they have to find a way to outdo AMD's cache. Like I said before, if AMD pushes to 384-bit GDDR6 like Nvidia, then Nvidia is doomed; 256-bit is already beating Nvidia's best.
I think it's time to rein in the hype train a bit. First off, we have no idea how this Infinity Cache scales up or down; if you just assume linear scaling, you're probably wrong.

Second, remember Fermi: Nvidia put out a first generation of cards that were horrible, then iterated on the same node just 6 months later and fixed most of the problems.

The theoretical bandwidth shown on AMD's slides is just as theoretical as Ampere's TFLOPS.

Let's not get ahead of ourselves; AMD did well in this skirmish, but the war is far from over.
#320
Metroid
BoboOOZ said (see #319 above): I think it's time to rein in the hype train a bit. [...] AMD did well in this skirmish, but the war is far from over.
In my view, AMD could have beaten Nvidia; they did not want to. I guess they are doing what they did to Intel: Zen 2 = competitive with Intel, Zen 3 = beat Intel. I guess this time will be similar: 6xxx = competitive, 7xxx = beat Nvidia.

Before this I said do not underestimate AMD; AMD has new management. But to tell you the truth, I myself did not think AMD could be competitive with Nvidia this time; I guessed 30% less performance than the 3080. I was wrong.
#321
BoboOOZ
Metroid said: In my view, AMD could have beaten Nvidia; they did not want to. I guess they are doing what they did to Intel: Zen 2 = competitive with Intel, Zen 3 = beat Intel. I guess this time will be similar: 6xxx = competitive, 7xxx = beat Nvidia.
Only Nvidia is not Intel. Nvidia is a fast-responding company, capable of making decisions in days and implementing them in weeks or months. They have tons of cash (not cache :p ), good management, loads of good engineers, and excellent marketing and mindshare. Intel only had tons of cash.
Edit: ...and mindshare, tbh, which they haven't completely eroded yet, at least in some markets.
#322
Zach_01
Nvidia is not exactly in the position Intel is in. Sure, they made some apparently dumb decisions, but they have the resources to come back soon, and probably sooner than RDNA 3.
The fact that RDNA 3 is two years out gives Nvidia room to respond.
#323
Metroid
Well, we have to wait for reviews to confirm what AMD showed. It's hard to believe even when you see it's real; AMD was so far behind that if it's all true, we have to start believing in miracles too, if you don't already.
#324
InVasMani
I'm a bit perplexed at how Smart Access Memory works compared to how it has always worked; what's the key difference between the two, with like a flow chart? What's being done differently? Doesn't the CPU always have access to VRAM anyway!? I imagine it's bypassing some step in the chain for quicker access than how it's been handled in the past, but that's the part I'm curious about. I mean, I can access a GPU's VRAM now, and the CPU and system memory obviously play some role in the process.

The mere fact that VRAM performance slows down around the point where the L2 cache is saturated on my CPU seems to indicate the CPU design plays a role, though it seems to be bottlenecked by system memory performance along with the CPU's L2 cache and core count (not thread count), which adds to the overall combined L2 cache. You see a huge regression in performance beyond the theoretical limits of the L2 cache: it seems to peak at that point (it's 4-way on my CPU), slows a bit up to 2 MB file sizes, then drops off quite rapidly after that. If you disable a physical core, the bandwidth regresses as well, so the combined L2 cache impacts it from what I've seen.
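One way to picture the difference being asked about, under the common resizable-BAR interpretation of SAM (an assumption; AMD has not detailed the mechanism): without it, the CPU sees only a small window, typically 256 MB, into VRAM, so bulk uploads go through a staging buffer in system RAM plus a GPU-side copy; with it, VRAM can be mapped and written directly. A minimal Vulkan sketch; vramHostVisible is a placeholder for memory allocated from a DEVICE_LOCAL | HOST_VISIBLE type.

// With CPU-mappable VRAM, an upload is a single map + memcpy over PCIe,
// instead of a staging-buffer write followed by a GPU copy command.
#include <vulkan/vulkan.h>
#include <cstring>

void UploadDirect(VkDevice device, VkDeviceMemory vramHostVisible,
                  const void* src, VkDeviceSize bytes)
{
    void* dst = nullptr;
    // Valid only because this memory is DEVICE_LOCAL | HOST_VISIBLE,
    // i.e. the CPU-mappable VRAM a large BAR exposes.
    vkMapMemory(device, vramHostVisible, 0, bytes, 0, &dst);
    memcpy(dst, src, bytes); // CPU writes travel over PCIe straight into VRAM
    vkUnmapMemory(device, vramHostVisible);
}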
ador250 said: 10K CUDA cores, 384-bit, 35 TFLOPS, 350 W, all just to match the 256-bit, 23 TFLOPS, 300 W 6900 XT, LawL. Ampere is a failure of an architecture. Let's hope Hopper changes something for Nvidia. Until then, RIP Ampere.
Yeah, I'd definitely want the Radeon in mGPU over the Nvidia in the same scenario, 600 W versus 700 W, not to mention the Infinity Cache could have a real beneficial latency impact there as well. I'm curious whether the lower-end models will have CF support or not; I didn't see any real mention of CF tech for RDNA 2, but they had a lot of other things to cover.

I think a 128-bit card with fewer CUs (44/52) and the same Infinity Cache could potentially be even better: lower overall TDP, perhaps the same VRAM capacity, but maybe quicker overall than the 6800 XT at a similar price, which would be hugely popular and widely successful. I think a 44 CU part of that nature would probably be enough to beat the 6800 XT slightly and could probably cost less. It might not win strictly on TDP; then again, maybe it's close if AMD is pushing the clock frequency steeply and efficiency is going out the window as a byproduct.

Now I wonder if the Infinity Cache in CrossFire could be split 50/50, with 64 MB on each GPU that the CPU can access and the leftover 64 MB on each shared between the GPUs, reducing the inter-GPU latency and the bandwidth to and from the CPU. The other interesting part: maybe it can only expose 128 MB now, but once a newer compatible CPU launches it could expose 256 MB of smart cache to the CPU, with 128 MB from each GPU in CrossFire!? Really interesting stuff to explore.
#325
lexluthermiester
Metroid said: Well, we have to wait for reviews to confirm what AMD showed.
While true, benchmarks have pretty much confirmed their stated claims for Ryzen. I see no reason they would over-exaggerate these stats.