Wednesday, October 28th 2020

AMD Announces the Radeon RX 6000 Series: Performance that Restores Competitiveness

AMD (NASDAQ: AMD) today unveiled the AMD Radeon RX 6000 Series graphics cards, delivering powerhouse performance, incredibly life-like visuals, and must-have features that set a new standard for enthusiast-class PC gaming experiences. Representing the forefront of extreme engineering and design, the highly anticipated AMD Radeon RX 6000 Series includes the AMD Radeon RX 6800 and Radeon RX 6800 XT graphics cards, as well as the new flagship Radeon RX 6900 XT - the fastest AMD gaming graphics card ever developed.

AMD Radeon RX 6000 Series graphics cards are built upon groundbreaking AMD RDNA 2 gaming architecture, a new foundation for next-generation consoles, PCs, laptops and mobile devices, designed to deliver the optimal combination of performance and power efficiency. AMD RDNA 2 gaming architecture provides up to 2X higher performance in select titles with the AMD Radeon RX 6900 XT graphics card compared to the AMD Radeon RX 5700 XT graphics card built on AMD RDNA architecture, and up to 54 percent more performance-per-watt when comparing the AMD Radeon RX 6800 XT graphics card to the AMD Radeon RX 5700 XT graphics card using the same 7 nm process technology.
AMD RDNA 2 offers a number of innovations, including applying advanced power saving techniques to high-performance compute units to improve energy efficiency by up to 30 percent per cycle per compute unit, and leveraging high-speed design methodologies to provide up to a 30 percent frequency boost at the same power level. It also includes new AMD Infinity Cache technology that offers up to 2.4X greater bandwidth-per-watt compared to GDDR6-only AMD RDNA-based architectural designs.
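As a rough back-of-the-envelope illustration of why a large on-die cache raises effective bandwidth, the sketch below models it as a hit-rate-weighted blend of cache and VRAM bandwidth. The hit rates and the cache figure are illustrative assumptions, not AMD specifications; only the idea is taken from the announcement.

```cpp
#include <cstdio>

int main() {
    // Illustrative assumptions, not official AMD figures.
    const double vram_bw  = 512.0;   // GB/s: 256-bit GDDR6 at 16 Gbps/pin
    const double cache_bw = 1600.0;  // GB/s: hypothetical on-die cache bandwidth

    // Effective bandwidth = hit_rate * cache_bw + (1 - hit_rate) * vram_bw
    const double rates[] = {0.0, 0.25, 0.50, 0.75};
    for (double hit_rate : rates) {
        double effective = hit_rate * cache_bw + (1.0 - hit_rate) * vram_bw;
        std::printf("hit rate %3.0f%% -> ~%4.0f GB/s effective (%.2fx VRAM alone)\n",
                    hit_rate * 100.0, effective, effective / vram_bw);
    }
    return 0;
}
```

The model is crude (real workloads mix reads and writes, and hit rates vary per render pass), but it shows how even moderate hit rates lift effective bandwidth well past what the external bus alone provides, and on-die traffic costs far less energy per bit than a trip out to GDDR6.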

"Today's announcement is the culmination of years of R&D focused on bringing the best of AMD Radeon graphics to the enthusiast and ultra-enthusiast gaming markets, and represents a major evolution in PC gaming," said Scott Herkelman, corporate vice president and general manager, Graphics Business Unit at AMD. "The new AMD Radeon RX 6800, RX 6800 XT and RX 6900 XT graphics cards deliver world class 4K and 1440p performance in major AAA titles, new levels of immersion with breathtaking life-like visuals, and must-have features that provide the ultimate gaming experiences. I can't wait for gamers to get these incredible new graphics cards in their hands."

Powerhouse Performance, Vivid Visuals & Incredible Gaming Experiences
AMD Radeon RX 6000 Series graphics cards support high-bandwidth PCIe 4.0 technology and feature 16 GB of GDDR6 memory to power the most demanding 4K workloads today and in the future. Key features and capabilities include:

Powerhouse Performance
  • AMD Infinity Cache - A high-performance, last-level data cache suitable for 4K and 1440p gaming with the highest level of detail enabled. 128 MB of on-die cache dramatically reduces latency and power consumption, delivering higher overall gaming performance than traditional architectural designs.
  • AMD Smart Access Memory - An exclusive feature of systems with AMD Ryzen 5000 Series processors, AMD B550 and X570 motherboards and Radeon RX 6000 Series graphics cards. It gives AMD Ryzen processors greater access to the high-speed GDDR6 graphics memory, accelerating CPU processing and providing up to a 13-percent performance increase on an AMD Radeon RX 6800 XT graphics card in Forza Horizon 4 at 4K when combined with the new Rage Mode one-click overclocking setting.9,10 (A rough way to inspect the PCI BAR sizes thought to underlie this kind of feature is sketched after this list.)
  • Built for Standard Chassis - With a length of 267 mm, two standard 8-pin power connectors, and a design that works with existing enthusiast-class 650 W-750 W power supplies, gamers can easily upgrade existing PCs, from large towers to small form factor builds, without additional cost.
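Smart Access Memory is widely understood to build on the PCIe Resizable BAR capability, which lets the CPU map the GPU's entire VRAM rather than the traditional 256 MB window; that characterization is an inference on our part, not something the press release states. For the curious, here is a minimal sketch of how you might inspect a GPU's BAR sizes on Linux (the PCI address is a placeholder; substitute your own device's):

```cpp
// Parses /sys/bus/pci/devices/<addr>/resource, where each line holds
// "start end flags" in hex for one PCI resource (BAR). Unused BARs read 0 0 0.
#include <cstdint>
#include <cstdio>
#include <fstream>
#include <sstream>
#include <string>

int main() {
    // Placeholder PCI address -- replace with your GPU's.
    const std::string path = "/sys/bus/pci/devices/0000:03:00.0/resource";
    std::ifstream f(path);
    if (!f) { std::fprintf(stderr, "cannot open %s\n", path.c_str()); return 1; }

    std::string line;
    for (int bar = 0; std::getline(f, line); ++bar) {
        std::istringstream ss(line);
        std::uint64_t start = 0, end = 0, flags = 0;
        ss >> std::hex >> start >> end >> flags;
        if (end > start)  // skip unused BARs
            std::printf("BAR %d: %llu MiB\n", bar,
                        static_cast<unsigned long long>((end - start + 1) >> 20));
    }
    return 0;
}
```

On a conventional setup the largest VRAM aperture is typically 256 MiB; with Resizable BAR active it can span the card's full 16 GB.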
True to Life, High-Fidelity Visuals
  • DirectX 12 Ultimate Support - Provides a powerful blend of raytracing, compute, and rasterized effects, such as DirectX Raytracing (DXR) and Variable Rate Shading, to elevate games to a new level of realism.
  • DirectX Raytracing (DXR) - Adding a high-performance, fixed-function Ray Accelerator engine to each compute unit, AMD RDNA 2-based graphics cards are optimized to deliver real-time lighting, shadow and reflection realism with DXR (a feature-support sketch follows this list). When paired with AMD FidelityFX, which enables hybrid rendering, developers can combine rasterized and ray-traced effects to ensure an optimal combination of image quality and performance.
  • AMD FidelityFX - An open-source toolkit for game developers available on AMD GPUOpen. It features a collection of lighting, shadow and reflection effects that make it easier for developers to add high-quality post-process effects that make games look beautiful while offering the optimal balance of visual fidelity and performance.
  • Variable Rate Shading (VRS) - Dynamically reduces the shading rate for different areas of a frame that do not require a high level of visual detail, delivering higher levels of overall performance with little to no perceptible change in image quality.
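For readers curious what DXR and VRS look like from the application side, below is a minimal, hedged D3D12 sketch: it queries the device for raytracing and variable-rate-shading support, then requests a coarser 2x2 shading rate for subsequent draws. It assumes an already-created device and a command list recent enough to expose ID3D12GraphicsCommandList5.

```cpp
#include <windows.h>
#include <d3d12.h>

// Returns true if both DXR and VRS are available; enables 2x2 VRS if present.
bool CheckDxrAndEnableVrs(ID3D12Device* device, ID3D12GraphicsCommandList5* cmdList) {
    // DirectX Raytracing support is reported through OPTIONS5.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5));
    const bool hasDxr = opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;

    // Variable Rate Shading support is reported through OPTIONS6.
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opts6 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &opts6, sizeof(opts6));
    const bool hasVrs = opts6.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_1;

    if (hasVrs) {
        // One pixel-shader invocation per 2x2 block for subsequent draws:
        // a quarter of the shading work in regions that can tolerate it.
        cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    }
    return hasDxr && hasVrs;
}
```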
Elevated Gaming Experience
  • Microsoft DirectStorage Support - Future support for the DirectStorage API enables lightning-fast load times and high-quality textures by eliminating storage API-related bottlenecks and limiting CPU involvement.
  • Radeon Software Performance Tuning Presets - Simple one-click presets in Radeon Software help gamers easily extract the most from their graphics card. The presets include the new Rage Mode stable overclocking setting that takes advantage of extra available headroom to deliver higher gaming performance.
  • Radeon Anti-Lag - Significantly decreases input-to-display response times and offers a competitive edge in gameplay.
AMD Radeon RX 6000 Series Product Family
Robust Gaming Ecosystem and Partnerships
In the coming weeks, AMD will release a series of videos from its ISV partners showcasing the incredible gaming experiences enabled by AMD Radeon RX 6000 Series graphics cards in some of this year's most anticipated games. These videos can be viewed on the AMD website.
  • DIRT 5 - October 29
  • Godfall - November 2
  • World of Warcraft: Shadowlands - November 10
  • RiftBreaker - November 12
  • Far Cry 6 - November 17
Pricing and Availability
  • AMD Radeon RX 6800 and Radeon RX 6800 XT graphics cards are expected to be available from global etailers/retailers and on AMD.com beginning November 18, 2020, for $579 USD SEP and $649 USD SEP, respectively. The AMD Radeon RX 6900 XT is expected to be available December 8, 2020, for $999 USD SEP.
  • AMD Radeon RX 6800 and RX 6800 XT graphics cards are also expected to be available from AMD board partners, including ASRock, ASUS, Gigabyte, MSI, PowerColor, SAPPHIRE and XFX, beginning in November 2020.

394 Comments on AMD Announces the Radeon RX 6000 Series: Performance that Restores Competitiveness

#376
mtcn77
InVasMani: If I'm not mistaken, RDNA transitioned to a twin-CU design with task-scheduling work groups that allows for a kind of serial and/or parallel performance flexibility within them. I could be wrong in my interpretation, but I think it lets the pair double down on a single task, or split up and handle two smaller tasks within the same twin-CU grouping. Basically a work-smarter-not-harder hardware design technique. Granular is where it's at: more neurons.
We can get deep into this subject. It holds so much water.
#377
InVasMani
Camm: Okay, people tend to think of bandwidth as a constant thing ("I'm always pushing 18 Gbps or whatever the hell it is") at all times, and that if the GPU isn't pushing the maximum amount of data at all times it is going to stall.

The reality is that only a small subset of data is all that necessary to keep the GPU fed so it doesn't stall. The majority of the data (in a gaming context, anyway) isn't anywhere near as latency sensitive and can be much more flexible about when it comes across the bus. IC helps by doing two things:
A: It stops writes and subsequent retrievals of the majority of that data from going back out to general memory (letting it live in cache, where a shader is likely to retrieve it again), and
B: It acts as a buffer for further deprioritising data retrieval, letting likely-needed data be fetched early, held momentarily in cache, then ingested into the shader pipeline, rather than being written back out to VRAM.

As for Nvidia, yep, they would have, but the amount of die space chewed up by even 128 MB of cache is ludicrously large. AMD has balls chasing such a strategy, tbh (which is probably why we saw 384-bit engineering-sample cards earlier in the year: if IC didn't perform, they could fall back to a wider bus).
Agreed on granular chunks: if you can push more of them, quicker and more efficiently, data flow and congestion are handled better and you encounter less stutter. CF/SLI isn't dead because it doesn't work; it's been regressing for other reasons: developer support, relative power draw versus the same performance from a single-card solution, and user sentiment toward both of those issues. Done right, it does offer more performance that scales well, with fewer problematic negatives than in the past. A lot of it hinges on developers supporting it well, and that's the big problem no matter how good the tech is: if they do a poor job implementing it, you have a real problem if you're reliant on it. Same with tech like DLSS; it's great, or useful anyway, until it's not, or not implemented. TXAA was the same deal: wonderful to a point, but selectively available with mixed results. If AMD/Nvidia manage to get away from the developer/power-efficiency/latency quirks with CF/SLI, they'll be great; that's always been what's held them back, unfortunately. It's what caused Lucid Hydra to be an overall failure of sorts. I suppose it had its influence just the same; what was learned from it could be applied to avoid the same pitfalls in things like the Mantle/DX12/Vulkan APIs, which are more flexible, and even things like variable rate shading. Someone had to break things down into smaller tasks between two separate pieces of hardware and try to make it more efficient, or learn how it could be made better. Eventually we may get close to Lucid Hydra working the way it was actually envisioned, but with more steps involved than they had hoped for.
Zach_01: Rumors say that RDNA3 will be closer to the Zen 2/3 approach: chunks of cores/dies tied together with large pools of cache.
That's why I believe it will not come soon. It will be way more than a year.
I would think RDNA3 and Zen 4 will arrive in roughly the same time frame, be 5 nm based, and bring improvements to caches, cores, frequency, IPC, and power gating on both, along with other possible refinements and introductions. I think big.LITTLE is something to think about, and perhaps some FPGA tech being applied to designs. I wonder if the motherboard chipset might be turned into an FPGA, or incorporate some of that tech, and the same for the CPU/GPU: re-route and/or re-configure parts of the design a bit depending on need. FPGAs are wonderfully flexible; not perfect, but they'll certainly improve and become even more useful. Unused USB/PCIe/M.2 connectivity? Cool, I'll reuse that for X or Y. I think it could eventually get to that point, and if it can be done efficiently, that's cool as hell.
#378
Camm
InVasMani: CF/SLI isn't dead because it doesn't work; it's been regressing for other reasons: developer support, relative power draw versus the same performance from a single-card solution, and user sentiment toward both of those issues.
Probably missing the biggest issue: many post-processing techniques are essentially impossible to do on a Crossfire/SLI solution that uses scene-division or alternate-frame techniques (the most common ways these work).

mGPU tries to deal with this by setting bitmasks to keep certain tasks on a single GPU, plus an abstracted copy engine to reduce coherency requirements, but at the moment it comes down to the developer explicitly managing that (see the sketch below).
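To make the bitmask point concrete, here is a minimal sketch of what explicit multi-adapter looks like in D3D12's linked-node mode: every queue and resource is created with a node mask picking a physical GPU, and cross-GPU visibility has to be requested explicitly, which is exactly the management burden described above. It assumes a device whose adapter reports at least two nodes; error handling is omitted.

```cpp
#include <windows.h>
#include <d3d12.h>

// Sketch only. Assumes device->GetNodeCount() >= 2.
void SetUpTwoGpus(ID3D12Device* device) {
    // One direct queue per physical GPU: bit i of NodeMask selects node i.
    ID3D12CommandQueue* queue[2] = {};
    for (UINT gpu = 0; gpu < 2; ++gpu) {
        D3D12_COMMAND_QUEUE_DESC qd = {};
        qd.Type     = D3D12_COMMAND_LIST_TYPE_DIRECT;
        qd.NodeMask = 1u << gpu;
        device->CreateCommandQueue(&qd, IID_PPV_ARGS(&queue[gpu]));
    }

    // A buffer allocated in GPU 0's memory but addressable from both GPUs.
    // Keeping coherency correct across nodes is left to the developer.
    D3D12_HEAP_PROPERTIES hp = {};
    hp.Type             = D3D12_HEAP_TYPE_DEFAULT;
    hp.CreationNodeMask = 0x1;  // lives on node 0
    hp.VisibleNodeMask  = 0x3;  // visible to nodes 0 and 1

    D3D12_RESOURCE_DESC rd = {};
    rd.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    rd.Width            = 64 * 1024;
    rd.Height           = 1;
    rd.DepthOrArraySize = 1;
    rd.MipLevels        = 1;
    rd.SampleDesc.Count = 1;
    rd.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    ID3D12Resource* sharedBuffer = nullptr;
    device->CreateCommittedResource(&hp, D3D12_HEAP_FLAG_NONE, &rd,
                                    D3D12_RESOURCE_STATE_COMMON, nullptr,
                                    IID_PPV_ARGS(&sharedBuffer));
}
```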
#379
mtcn77
Camm: Probably missing the biggest issue: many post-processing techniques are essentially impossible to do on a Crossfire/SLI solution that uses scene-division or alternate-frame techniques (the most common ways these work).
You have 2 frontends, though. 2 frontends give 2 times faster CU wavefront initiation and SIMD wave instruction issue. While I admit it might split a solid single pipeline into two and create needless time seams during which the pipeline runs idle, let's be careful to notice there are no pipeline stalls in RDNA 2 whatsoever. The SIMDs used to be 16 lanes wide, executing a 64-wide wavefront over 4 cycles with the issue latency that implies; now they are 32 lanes executing a 32-wide wavefront, enough to cover every lane every clock cycle.
It is also not the same pipeline state object between GCN and RDNA 2 either; RDNA 2 can prioritise compute and can stop the graphics pipeline entirely. Since GPUs are large latency-hiding devices, I think this would give us the time needed to seam the images back into one before the timestamp is missed, but I'm rambling.
www.hardwaretimes.com/amd-navi-vs-vega-a-look-at-the-rdna-graphics-architecture/
#380
InVasMani
Post-processing is an interesting point on the Crossfire/SLI matter. That said, there are workaround solutions to that issue, such as the mCable. I don't see why AMD/Nvidia couldn't build a GPU into the display itself that does post-processing at that end, in a timely manner and more advanced. I also find it a bit odd that interlaced mGPU techniques like 3dfx used haven't made a comeback; the bandwidth savings are huge. Use a somewhat higher resolution and downscale for something akin to a higher DPI. Look at PCIe 3.0 versus PCIe 4.0: you've got double the bandwidth. Interlacing is the same story on bandwidth, and in turn latency; combine both and that's roughly 4x the effective bandwidth. Throw in Infinity Cache and you get close to the same thing again, pushing toward 8x the effective bandwidth with correspondingly lower latency. Yes, interlacing perceptibly looks a bit worse, which I think is largely down to image sharpness; it's a bit like DLSS, where you have fewer pixels to work with, so of course it appears more blurry and less sharp by contrast. On the plus side, you could combine that with a device like the mClassic, I would think, and work a little magic to upscale the quality. Then you've got compression as well; you can use LZX compression on VRAM contents perfectly fine, for example, though doing it quickly would be challenging depending on the sizes involved. Doing it on the fly is certainly an option to consider in the future, as it too increases effective bandwidth and reduces latency through higher I/O.
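A rough sketch of the raw numbers behind that bandwidth argument; the figures are illustrative assumptions (uncompressed RGBA8 frames, nominal x16 link rates), not how a real GPU actually moves frame data:

```cpp
#include <cstdio>

int main() {
    // Illustrative assumptions, not measurements.
    const double w = 3840, h = 2160, bytes_per_px = 4;   // 4K RGBA8
    const double frame_mb = w * h * bytes_per_px / 1e6;  // ~33 MB per frame
    const double pcie3 = 16000.0, pcie4 = 32000.0;       // x16 nominal, MB/s

    // 3dfx-style interlacing renders every other scanline per GPU,
    // halving the pixel traffic each one has to move for a given frame.
    const double field_mb = frame_mb / 2.0;

    std::printf("full frame %.1f MB, interlaced field %.1f MB\n", frame_mb, field_mb);
    std::printf("PCIe 3.0 x16: ~%.0f full frames/s of raw traffic\n", pcie3 / frame_mb);
    std::printf("PCIe 4.0 x16 + interlacing: ~%.0f fields/s (~4x the headroom)\n",
                pcie4 / field_mb);
    return 0;
}
```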
mtcn77: You have 2 frontends, though. 2 frontends give 2 times faster CU wavefront initiation and SIMD wave instruction issue. [...]
I'd like to add that the perks of mGPU for path tracing are enormous as well; think how much more quickly denoising could be done in that scenario! The prospect of 4 discrete GPUs, each with a chunk of Infinity Cache, feeding a CPU with a larger chunk of Infinity Cache it can split amongst them, is a very real future, and vastly better than 4-way GTX 980/980 Ti setups on those old, slower, less-multicore Intel workstation chips and motherboards. That kind of setup is archaic compared to what we've got now; it may as well be a 486, it just looks so dated next to current tech in so many areas.
#381
Zvijer
AMD all the way... finally better FPS than the "green shelter"... :rockout:
#382
InVasMani
A Ryzen 5600X and 6800 setup looks like it's going to be quite tempting. I wonder if AMD will do any package deals on that combination; that'll be great for 1080p/1440p in particular.
#383
Camm
InVasMani: A Ryzen 5600X and 6800 setup looks like it's going to be quite tempting. I wonder if AMD will do any package deals on that combination; that'll be great for 1080p/1440p in particular.
I would be *very* surprised if AMD doesn't offer package deals with 5600X+6700XT, 5800X+6800, 5900X+6800XT & 5950X+6900XT combos, or some sort of rebate system where, if you show you bought both in a single transaction, you can apply for $50 back or something.
#384
Zach_01
InVasMani: A Ryzen 5600X and 6800 setup looks like it's going to be quite tempting. I wonder if AMD will do any package deals on that combination; that'll be great for 1080p/1440p in particular.
This combo can easily do 4K, unless you're after high competitive framerates.
#385
Vya Domus
Zach_01: unless you're after high competitive framerates
Because then what? You get 350 FPS instead of 320 or something? That system will get you high performance in anything.
#386
Zach_01
Vya Domus: Because then what? You get 350 FPS instead of 320 or something? That system will get you high performance in anything.
I meant 4K.
He said this system would be great for 1080p/1440p, and I said it could do 4K, unless he wants to stay at lower res for high (100+) framerates.
All 3 current 6000-series GPUs are meant for 4K, not 1080p/1440p. That was the point...

I didn't speak numbers, but that's what I meant.
#387
InVasMani
Zach_01: I meant 4K.
He said this system would be great for 1080p/1440p, and I said it could do 4K, unless he wants to stay at lower res for high (100+) framerates.
All 3 current 6000-series GPUs are meant for 4K, not 1080p/1440p. That was the point...

I didn't speak numbers, but that's what I meant.
I get what you're saying, and I agree it'll handle 4K quite well in addition to 1080p/1440p. I'm leaning towards 120 Hz+ at 1080p/1440p, taking into account newer games that are more demanding. At 4K I think that combination won't always deliver 60 FPS fluidly, especially once real-time ray tracing gets involved, and even otherwise at times, at least not without some subtle compromises on a few settings. You're right, though, that it's plenty capable of 60 FPS+ at 4K in quite a few scenarios, and hell, even upwards of 120 FPS at 4K in some cases with intelligent settings compromises. That said, I don't plan on getting a 4K 120 Hz display at current price premiums regardless. The price sweet spot for 100 Hz+ displays is definitely 1080p and 1440p.
#389
jcchg
renz496: See what has happened over the last 10 years. Did a price war really help AMD gain more market share?
Have you heard about AMD Polaris or AMD Ryzen?
#390
renz496
jcchg: Have you heard about AMD Polaris or AMD Ryzen?
Did it help AMD gain more discrete GPU market share? For the past 10 years we have seen AMD competing on price, and yet their market share never exceeded the 40% mark; the last time AMD had over 40% was back in 2010. Despite all the undercutting AMD has done over the past 10 years, they have been pretty much suppressed by Nvidia to below 40%, and until recently about 30% was the best they could hold. The latest report from JPR shows AMD's discrete GPU market share is already down to 20%.

A price war is only effective if you can keep gaining market share from the competitor. With Ryzen it worked, but in the GPU world the past 10 years show that price wars are ineffective against Nvidia. And I have seen that when Nvidia starts retaliating with a price war, the one that ends up giving up first is AMD.
#391
Valantar
renz496: Did it help AMD gain more discrete GPU market share? For the past 10 years we have seen AMD competing on price, and yet their market share never exceeded the 40% mark; the last time AMD had over 40% was back in 2010. Despite all the undercutting AMD has done over the past 10 years, they have been pretty much suppressed by Nvidia to below 40%, and until recently about 30% was the best they could hold. The latest report from JPR shows AMD's discrete GPU market share is already down to 20%.

A price war is only effective if you can keep gaining market share from the competitor. With Ryzen it worked, but in the GPU world the past 10 years show that price wars are ineffective against Nvidia. And I have seen that when Nvidia starts retaliating with a price war, the one that ends up giving up first is AMD.
That's way too simplistic a view. This drop can't simply be attributed to "AMD is only competing on price"; you also have to factor in everything else that affects this. In other words: the lack of a competitive flagship/high-end solution since the Fury X (2015), the (mostly well deserved) reputation for running hot and being inefficient (not that that matters for most users, but most people at least want a quiet GPU), terrible marketing efforts (remember "Poor Volta"?), overpromising about new architectures, and not least resorting to selling expensive GPUs cheaply due to the inability to scale the core design in a competitive way, eating away at profits and thus R&D budgets, deepening the issues. And that's just scratching the surface. RDNA hasn't made anything worse, but due to the PR disaster that was the state of the drivers (which, while overblown, had some truth to it), it didn't help either.

RDNA 2 rectifies pretty much every single point here. No, the 6900 XT isn't likely to be directly competitive with the 3090 out of the box, but it's close enough, and the 6800 XT and 6800 seem eminently competitive. The XT is $50 cheaper than the 3080, but the non-XT is $80 more than the 3070, so they're not selling these as a budget option. And it's obvious that RDNA 2 can scale down to smaller chips with great performance and efficiency in the higher-volume price ranges. Does that mean AMD will magically jump to 50% market share? Obviously not. Mindshare gains take a lot of time, and require consistency over time to materialize at all. But it would be extremely surprising if these GPUs don't at least start AMD on that road.
#392
dinmaster
Hoping the reviews show performance with 3000-series CPUs, Intel ones, and the new 5000 series, because that new feature (Smart Access Memory) exclusive to the 5000 series doesn't reflect what the majority of the community runs in their systems, and the performance gained from it will skew the benchmarks. I personally run a 3800X and would like to see the differences between the different setups. I know W1zzard will have that in the review, or in a subsequent review of CPU performance scaling with the 6000 GPUs.
#393
InVasMani
The big thing with RDNA2 is that it's going to force Nvidia to react and be more competitive, just like you're seeing with Intel on the CPU side.
#394
Valantar
dinmaster: Hoping the reviews show performance with 3000-series CPUs, Intel ones, and the new 5000 series, because that new feature (Smart Access Memory) exclusive to the 5000 series doesn't reflect what the majority of the community runs in their systems, and the performance gained from it will skew the benchmarks. I personally run a 3800X and would like to see the differences between the different setups. I know W1zzard will have that in the review, or in a subsequent review of CPU performance scaling with the 6000 GPUs.
All serious review sites use a fixed test bench configuration for GPU reviews, and don't replace that when reviewing a new product. Moving to a new test setup thus requires re-testing every GPU in the comparison, and is something that is done periodically, but in periods with little review activity. As such, day 1 reviews will obviously keep using the same test bench. This obviously applies to TPU, which uses a 9900K-based test bench.

There will in all likelihood be later articles diving into SAM and similar features, and SAM articles are likely to include comparisons to both Ryzen 3000 and Intel setups, but those will necessarily be separate from the base review. Not least as testing like that would mean a massive increase in the work required: TPU's testing covers 23 games at three resolutions, so 69 data points (plus power and thermal measurements). Expand that to three platforms and you have 207 data points, though ideally you'd want to test Ryzen 5000 with SAM both enabled and disabled to single out its effect, making it 276 data points. Then there's the fact that there are three GPUs to test, and that one would want at least one RTX comparison GPU for each test. Given that reviewers typically get about a week to ready their reviews, there is no way that this could be done in time for a launch review.
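For what it's worth, the arithmetic checks out; a trivial sketch of that test matrix (counts taken from the paragraph above):

```cpp
#include <cstdio>

int main() {
    const int games = 23, resolutions = 3;
    const int per_platform    = games * resolutions;  // 69 data points per platform
    const int three_platforms = per_platform * 3;     // Intel, Ryzen 3000, Ryzen 5000
    const int with_sam_toggle = per_platform * 4;     // Ryzen 5000 run twice: SAM on and off

    std::printf("per platform: %d, three platforms: %d, with SAM on/off: %d\n",
                per_platform, three_platforms, with_sam_toggle);  // 69, 207, 276
    return 0;
}
```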

That being said, I'm very much looking forward to w1zzard's SAM deep dive.