Wednesday, October 28th 2020

AMD Announces the Radeon RX 6000 Series: Performance that Restores Competitiveness

AMD (NASDAQ: AMD) today unveiled the AMD Radeon RX 6000 Series graphics cards, delivering powerhouse performance, incredibly life-like visuals, and must-have features that set a new standard for enthusiast-class PC gaming experiences. Representing the forefront of extreme engineering and design, the highly anticipated AMD Radeon RX 6000 Series includes the AMD Radeon RX 6800 and Radeon RX 6800 XT graphics cards, as well as the new flagship Radeon RX 6900 XT - the fastest AMD gaming graphics card ever developed.

AMD Radeon RX 6000 Series graphics cards are built upon groundbreaking AMD RDNA 2 gaming architecture, a new foundation for next-generation consoles, PCs, laptops and mobile devices, designed to deliver the optimal combination of performance and power efficiency. AMD RDNA 2 gaming architecture provides up to 2X higher performance in select titles with the AMD Radeon RX 6900 XT graphics card compared to the AMD Radeon RX 5700 XT graphics card built on AMD RDNA architecture, and up to 54 percent more performance-per-watt when comparing the AMD Radeon RX 6800 XT graphics card to the AMD Radeon RX 5700 XT graphics card using the same 7 nm process technology.
AMD RDNA 2 offers a number of innovations, including applying advanced power saving techniques to high-performance compute units to improve energy efficiency by up to 30 percent per cycle per compute unit, and leveraging high-speed design methodologies to provide up to a 30 percent frequency boost at the same power level. It also includes new AMD Infinity Cache technology that offers up to 2.4X greater bandwidth-per-watt compared to GDDR6-only AMD RDNA-based architectural designs.
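As a rough back-of-the-envelope check on how those two claims relate, here is a minimal sketch; the board-power figures are the cards' published specifications, while the normalized baseline and the strict proportionality are assumptions made purely for illustration:

```python
# Illustrative arithmetic only: the uplift figure is AMD's claim, and the
# board-power numbers are the published specs, used here as assumptions.
rx5700xt_perf = 100.0         # normalized baseline performance (assumed)
rx5700xt_power_w = 225.0      # RX 5700 XT total board power (spec)
baseline_ppw = rx5700xt_perf / rx5700xt_power_w

claimed_ppw_uplift = 1.54     # "up to 54 percent more performance-per-watt"
rx6800xt_power_w = 300.0      # RX 6800 XT total board power (spec)

# Performance implied by the perf/W claim at the higher board power:
implied_perf = baseline_ppw * claimed_ppw_uplift * rx6800xt_power_w
print(f"Implied relative performance: {implied_perf:.0f}")  # ~205, i.e. ~2X
```

Under those assumptions, a 54 percent perf/W uplift at 300 W works out to roughly twice the RX 5700 XT's performance, the same ballpark as the separate "up to 2X" claim for the RX 6900 XT (also a 300 W card); a sanity check rather than a measurement.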

"Today's announcement is the culmination of years of R&D focused on bringing the best of AMD Radeon graphics to the enthusiast and ultra-enthusiast gaming markets, and represents a major evolution in PC gaming," said Scott Herkelman, corporate vice president and general manager, Graphics Business Unit at AMD. "The new AMD Radeon RX 6800, RX 6800 XT and RX 6900 XT graphics cards deliver world class 4K and 1440p performance in major AAA titles, new levels of immersion with breathtaking life-like visuals, and must-have features that provide the ultimate gaming experiences. I can't wait for gamers to get these incredible new graphics cards in their hands."

Powerhouse Performance, Vivid Visuals & Incredible Gaming Experiences
AMD Radeon RX 6000 Series graphics cards support high-bandwidth PCIe 4.0 technology and feature 16 GB of GDDR6 memory to power the most demanding 4K workloads today and in the future. Key features and capabilities include:

Powerhouse Performance
  • AMD Infinity Cache - A high-performance, last-level data cache suitable for 4K and 1440p gaming with the highest level of detail enabled. 128 MB of on-die cache dramatically reduces latency and power consumption, delivering higher overall gaming performance than traditional architectural designs (see the bandwidth sketch after this list).
  • AMD Smart Access Memory - An exclusive feature of systems with AMD Ryzen 5000 Series processors, AMD B550 and X570 motherboards and Radeon RX 6000 Series graphics cards. It gives AMD Ryzen processors greater access to the high-speed GDDR6 graphics memory, accelerating CPU processing and providing up to a 13-percent performance increase on an AMD Radeon RX 6800 XT graphics card in Forza Horizon 4 at 4K when combined with the new Rage Mode one-click overclocking setting.
  • Built for Standard Chassis - With a length of 267 mm, two standard 8-pin power connectors, and a design that works with existing enthusiast-class 650 W-750 W power supplies, gamers can easily upgrade existing PCs, from large towers to small-form-factor builds, without additional cost.
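A minimal sketch of the idea behind the Infinity Cache bandwidth claim: a large on-die cache serves some fraction of memory requests at far higher bandwidth (and lower power) than GDDR6, raising the effective bandwidth seen by the GPU. The hit rate and cache bandwidth below are assumed values for illustration, not AMD-published figures:

```python
# Effective bandwidth of a cache + GDDR6 hierarchy; all inputs assumed.
gddr6_bw = 512.0     # GB/s: 256-bit bus * 16 Gbps GDDR6 / 8 bits per byte
cache_bw = 1600.0    # GB/s: assumed on-die Infinity Cache bandwidth
hit_rate = 0.6       # assumed fraction of requests served from the cache

effective_bw = hit_rate * cache_bw + (1 - hit_rate) * gddr6_bw
print(f"Effective bandwidth: {effective_bw:.0f} GB/s")  # ~1165 GB/s
```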
True to Life, High-Fidelity Visuals
  • DirectX 12 Ultimate Support - Provides a powerful blend of raytracing, compute, and rasterized effects, such as DirectX Raytracing (DXR) and Variable Rate Shading, to elevate games to a new level of realism.
  • DirectX Raytracing (DXR) - Adding a high-performance, fixed-function Ray Accelerator engine to each compute unit, AMD RDNA 2-based graphics cards are optimized to deliver real-time lighting, shadow and reflection realism with DXR. When paired with AMD FidelityFX, which enables hybrid rendering, developers can combine rasterized and ray-traced effects to ensure an optimal combination of image quality and performance.
  • AMD FidelityFX - An open-source toolkit for game developers available on AMD GPUOpen. It features a collection of lighting, shadow and reflection effects that make it easier for developers to add high-quality post-process effects that make games look beautiful while offering the optimal balance of visual fidelity and performance.
  • Variable Rate Shading (VRS) - Dynamically reduces the shading rate for different areas of a frame that do not require a high level of visual detail, delivering higher levels of overall performance with little to no perceptible change in image quality.
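As a rough sense of the savings VRS can offer, consider that pixels shaded at a 2x2 coarse rate need one shader invocation per four pixels; the fraction of the frame shaded coarsely below is an assumed number, purely illustrative:

```python
# Back-of-envelope VRS cost model; the coarse fraction is an assumption.
width, height = 3840, 2160     # 4K frame
coarse_fraction = 0.4          # assumed share of the frame shaded at 2x2

full_rate_pixels = width * height
invocations = (full_rate_pixels * (1 - coarse_fraction)
               + full_rate_pixels * coarse_fraction / 4)
print(f"Shader work vs. full rate: {invocations / full_rate_pixels:.0%}")  # 70%
```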
Elevated Gaming Experience
  • Microsoft DirectStorage Support - Future support for the DirectStorage API enables lightning-fast load times and high-quality textures by eliminating storage API-related bottlenecks and limiting CPU involvement.
  • Radeon Software Performance Tuning Presets - Simple one-click presets in Radeon Software help gamers easily extract the most from their graphics card. The presets include the new Rage Mode stable overclocking setting that takes advantage of extra available headroom to deliver higher gaming performance.
  • Radeon Anti-Lag - Significantly decreases input-to-display response times and offers a competitive edge in gameplay.
AMD Radeon RX 6000 Series Product Family
Robust Gaming Ecosystem and Partnerships
In the coming weeks, AMD will release a series of videos from its ISV partners showcasing the incredible gaming experiences enabled by AMD Radeon RX 6000 Series graphics cards in some of this year's most anticipated games. These videos can be viewed on the AMD website.
  • DIRT 5 - October 29
  • Godfall - November 2
  • World of Warcraft: Shadowlands - November 10
  • The Riftbreaker - November 12
  • Far Cry 6 - November 17
Pricing and Availability
  • AMD Radeon RX 6800 and Radeon RX 6800 XT graphics cards are expected to be available from global etailers/retailers and on AMD.com beginning November 18, 2020, for $579 USD SEP and $649 USD SEP, respectively. The AMD Radeon RX 6900 XT is expected to be available December 8, 2020, for $999 USD SEP.
  • AMD Radeon RX 6800 and RX 6800 XT graphics cards are also expected to be available from AMD board partners, including ASRock, ASUS, Gigabyte, MSI, PowerColor, SAPPHIRE and XFX, beginning in November 2020.

394 Comments on AMD Announces the Radeon RX 6000 Series: Performance that Restores Competitiveness

#326
Valantar
InVasMani: I'm a bit perplexed at how Smart Access Memory works compared to how it's always worked; what's the key difference between the two, flow-chart style? What's being done differently? Doesn't the CPU always have access to VRAM anyway? I imagine it's bypassing some step in the chain for quicker access than how it's been handled in the past, but that's the part I'm curious about. I can access a GPU's VRAM now, and the CPU and system memory obviously play some role in the process. The fact that VRAM performance slows down around the point where my CPU's L2 cache is saturated suggests the CPU design plays a role, though it seems bottlenecked by system memory performance along with the L2 cache and core count (not thread count), which adds to the overall combined L2 capacity. You see a huge regression in performance beyond the theoretical limits of the L2 cache: it seems to peak at that point, slows a bit up to 2 MB file sizes, then drops off quite rapidly. If you disable physical cores the bandwidth regresses as well, so the combined L2 cache impacts it from what I've seen.
CPUs only have access to RAM on PCIe devices in 256 MB chunks at a time (the standard PCIe BAR window). SAM gives the CPU direct access to the entire VRAM at any time.
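For anyone who wants to see this on their own machine, here is a hedged, Linux-only sketch: the kernel exposes each PCI device's BAR regions through sysfs, and without Resizable BAR (the mechanism SAM builds on) the CPU-visible VRAM aperture is typically a single 256 MB BAR. The PCI address below is a hypothetical example; substitute your GPU's own address:

```python
# Print the sizes of a PCI device's BAR regions from sysfs (Linux only).
# Each line of the `resource` file holds start, end and flags in hex.
RESOURCE = "/sys/bus/pci/devices/0000:03:00.0/resource"  # example address

with open(RESOURCE) as f:
    for i, line in enumerate(f):
        start, end, _flags = (int(field, 16) for field in line.split())
        if end > start:  # unused entries read as all zeros
            print(f"region {i}: {(end - start + 1) / 2**20:.0f} MiB")
```

With SAM/Resizable BAR enabled on a 16 GB card, one of those regions should report the full 16384 MiB rather than 256 MiB.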
InVasMani: 52 CU and 44 CU are the next logical steps based on what's already released; AMD seems to disable 8 CUs at a time. I can see them doing a 10 GB or 14 GB capacity device. It would be interesting if they utilized GDDR6 and GDDR6X together alongside variable rate shading: use the GDDR6 when the scene image quality is scaled back and the GDDR6X at higher quality, giving mixed performance at a better price. I would think they'd consider reducing the memory bus width to 128-bit or 192-bit for SKUs with those CU counts if paired with Infinity Cache. It's interesting to think about Infinity Cache in a CrossFire setup and how it impacts latency; I'd expect less micro-stutter. The 99th percentiles will be interesting to look at for RDNA2 with all the added bandwidth and I/O. I suppose 36 CUs is possible as well by extension, but I don't know how the profit margins would be; the low end of the market is eroding further each generation, and Intel entering the dGPU market will compound that. I don't think a 30 CU part is likely for RDNA2; it would end up being 28 CU if anything, and even that is doubtful unless they wanted it for an APU/mobile.
52 and 44 CUs would be very small steps. Also, AMD likes to do 8 CU cuts? Yet the 6800 has 12 fewer CUs than the 6800 XT? Yeah, sorry, that doesn't quite add up. I'm very much hoping Navi 22 has more than 40 CUs, and I'd be very happy if it has 48. Any more than that is quite unlikely IMO. RDNA (1) scaled down to 24 CUs with Navi 14, so I would frankly be surprised if we didn't see RDNA 2 scale down just as far - though hopefully they'll increase the CU count a bit at the low end. There'd be a lot of sales volume in a low-end, low-CU-count, high-clocking GPU, and margins could be good if they can get by with a 128-bit bus for that part. I would very much welcome a new 75 W RDNA2 GPU for slot-powered applications!

Combining two different memory technologies like you are suggesting would be a complete and utter nightmare. Either you'd need to spend a lot of time and compute shuffling data back and forth between the two VRAM pools, or you'd need to double the size of each (i.e. instead of a 16GB GPU you'd need a 16+16GB GPU), driving up prices massively. Not to mention the board space requirements - those boards would be massive, expensive, and very power hungry. And then there's all the issues getting this to work - if you're using VRS as a differentiator then parts of the GPU need to be rendering a scene from one VRAM pool with the rest of the GPU rendering the same scene from a different VRAM pool, which would either mean waiting massive amounts of time for data to copy over, tanking performance, or keeping data duplicated in two VRAM pools simultaneously, which is both expensive in terms of power and would cause all kinds of issues with two different render passes and VRAM pools each informing new data being loaded to both pools at the same time. As I said: a complete and utter nightmare. Not to mention that one of the main points of Infinity Cache is to lower VRAM bandwidth needs. Adding something like this on top makes no sense.

I would expect narrower buses for lower-end GPUs, though the IC will likely also shrink due to the sheer die area requirements of 128 MB of SRAM. I'm hoping for 96 MB of IC and a 256-bit or 192-bit bus for the next cards down. A 128-bit bus won't be doable unless they keep the full-size cache, and even then that sounds anemic for a GPU at that performance level (RTX 2080-2070-ish).

From AMD's utter lack of mentioning it, I'm guessing CrossFire is just as dead now as it was for RDNA1, with the only support being in major benchmarks.
#327
Zach_01
Metroid: Well, we have to wait for reviews to confirm what AMD showed. It's hard to believe even when you see it's real; AMD was so far behind that, if this is all true, we have to start believing in miracles too, if you don't already.
It's spending money on R&D and doing a lot of engineering...
Just a lot of people didn't have faith in AMD because it seemed it was too far behind in the GPU market. But over the last 3-4 years AMD has shown some seriousness about its products, and seems more organized and focused.
#328
Metroid
Zach_01: It's spending money on R&D and doing a lot of engineering...
Just a lot of people didn't have faith in AMD because it seemed it was too far behind in the GPU market. But over the last 3-4 years AMD has shown some seriousness about its products, and seems more organized and focused.
Like I said before: new management, new ideas, new employees, new objectives, new products and so on.
#329
InVasMani
lexluthermiester: While true, benchmarks have been pretty much on the mark with their stated claims for Ryzen. I see no reason they would exaggerate these stats.
You know, something I noticed with the VRAM RAMDISK software when I played around with it: read performance seems to follow system memory constraints, while write performance follows the PCIe bus constraints. But you can use NTFS compression when formatting the partition and speed up the bandwidth a lot, and you can go a step further and compress the contents with CompactGUI 2 using LZX compression at a high level. I can't benchmark that as easily, but the fact that it can be done is interesting to say the least, and could increase bandwidth and capacity further. To the OS it's basically a glorified RAMDISK that matches system memory read performance, with write speeds matching PCIe bandwidth. The other neat part: when Tom's Hardware tested Crysis running from VRAM, it performed a bit better on minimum frame rates at 4K than NVMe/SSD/RAMDISK, and the system RAMDISK was worst. I think that's actually expected, because the system RAMDISK eats into system memory bandwidth, pulling double duty, while the VRAM is often sitting around waiting to be populated in the first place; it essentially works like a PCIe 4.0 x16 RAMDISK, which is technically faster than NVMe and less complex than a quad-M.2 PCIe 4.0 x16 setup would be. The other caveat is that Tom's Hardware tested it with PCIe 3.0 NVMe and GPUs. I can't tell if that was margin of error or not, but if repeatable it was a good perk, and PCIe 4.0 might yield even better results.
Zach_01: It's spending money on R&D and doing a lot of engineering...
Just a lot of people didn't have faith in AMD because it seemed it was too far behind in the GPU market. But over the last 3-4 years AMD has shown some seriousness about its products, and seems more organized and focused.
This I've said for a long while now: as AMD's financials and budget improve, I anticipate Radeon will follow suit and make a stronger push in the GPU market against Nvidia. They're pleasantly a little earlier than I figured they'd be at this stage; I thought this kind of rebound would happen next generation, not this one. It just shows how hard AMD has worked to address the performance and efficiency of the Radeon brand and its IP. It's hard not to get complacent when you've been at the top a while (look at Intel, and Nvidia for that matter), so it's really about time, but it wouldn't have happened without good management on the part of Lisa Su and the engineering talent she's put to work. It's not a stroke of luck; it's continued effort and progress with efficient management. To be fair, her personal experience is also very well suited to her position; she's the right leader for that company, 100%. Similar scenario with Jensen Huang, even if you don't like leather jackets.
Valantar: CPUs only have access to RAM on PCIe devices in 256 MB chunks at a time. SAM gives the CPU direct access to the entire VRAM at any time.
That's exactly the kind of insight I was interested in. An absolute game changer, I'd say.
Valantar: 52 and 44 CUs would be very small steps. Also, AMD likes to do 8 CU cuts?
Could've sworn I saw 80 CU/72 CU/64 CU listed... it appears I got the lowest-end model's CU count wrong; it's 60. So it seems they can cut 8 or 12 CUs at a time, possibly more granular than that, though I'd expect similar-sized cuts for other SKUs. That said, I still don't really expect cuts below 44 CUs for desktop anyway. I guess they could do 56 CU/52 CU/44 CU, and maybe they stretch it to 40 CU as well, who knows, but I doubt it if they retain the Infinity Cache without scaling its size down too. I do see 192-bit and 128-bit being plausible, depending mostly on CU count, which makes the most sense.

I'd like to see what could be done with CrossFire plus Infinity Cache: better bandwidth and I/O, even split amongst the GPUs, should translate to less micro-stutter. Fewer bus bottleneck complications are always good. It would be interesting if some CPU cores were introduced on the GPU side and a bit of on-the-fly compression, in the form of LZX or XPRESS 4K/8K/16K, were applied before the Infinity Cache sends data along to the CPU. Even if it could only compress files up to a certain size quickly on the fly, it would be quite useful, and you can use those compression methods with VRAM as well.
#330
mechtech
InVasMani: ...I suppose 36 CUs is possible as well by extension, but I don't know how the profit margins would be; the low end of the market is eroding further each generation, and Intel entering the dGPU market will compound that.
Well, that depends on the definition of low end. Usually midrange, as far as price is concerned, is about $250 US. The RX 480, when I bought it about 4 years ago, had 2304 shaders (36 CUs), 8 GB of RAM and a 256-bit bus for $330 CAD, or roughly $250 US. Maybe since card prices now start at $150 US, midrange is closer to $300 US?

That's typically my budget. I am hoping something gets released along those lines; it could double my current performance and put me in 5700 XT territory.

If not, oh well; I have many other ways to put $350 CAD to better use.
#331
Zach_01
InVasMani: Could've sworn I saw 80 CU/72 CU/64 CU listed... So it seems they can cut 8 or 12 CUs at a time, possibly more granular than that, though I'd expect similar-sized cuts for other SKUs.
I think they are able to cut/disable CUs in groups of 2. If you look at the full RDNA1/RDNA2 dies you will see 20 and 40 identical rectangles respectively. Each one of these rectangles is 2 CUs.

(RDNA1 and RDNA2 die shots)
————————

And I'm pretty convinced that CrossFire died long ago.
#332
nikoya
ratirt: Actually, VRR is supported by these screens, but it is not FreeSync-equivalent. Maybe it will work, but it may not necessarily work the way a FreeSync monitor or TV would.
Yes, the 6800 XT supports HDMI 2.1 VRR (just like the consoles).

RTINGS rates the C9/B9's 4K VRR range at 40-60 Hz (maybe because they didn't have an HDMI 2.1 source to test higher rates?).

Looks like there is a modding solution to activate FreeSync up to 4K 120 Hz:

Amd/comments/g65mw7
Some guys are experiencing short black screens when launching/exiting some games, but no big issue.

I think I'm gonna go red. A sweet 16 GB and frequencies reaching heaven, 2300 MHz+. Yummy, I want it; give that to me.

Green is maxing out at 1980 MHz on all their products; OC headroom basically doesn't exist there.

DXR on AMD with 1 RA (Ray Accelerator) per CU seems not so bad. I mean, with the consoles willing to implement a bit of raytracing, at least we will be able to activate DXR and see what it looks like. Anyway, it doesn't look like the wow thing right now; just some puddle reflections here and there.

DLSS... well, AMD is working on a solution, and given that the consoles are asking for one too (as they have less power to reach 4K 60+ FPS), this could get somewhere in the end.

Driver bugs: many 5700 XT users haven't reported issues in what seems like many months, and again, given that most games are now developed for both PC and consoles, I'm pretty confident AMD is gonna be robust.

Also, I'm fed up with all my interfaces being green (GeForce Experience). I want to discover the AMD UI, just for the fun of looking into every possible menu and option.

I would have gone for the 6900 XT if they had priced it at $750, on an 80/72 CU ratio basis; that would have been reasonable. Even $800, just for the "elite" product feeling. But at $999 they went berserk; not gonna give them credit.

In my opinion they should have done a bigger die and crushed Nvidia once and for all, just for the fun of it.

All in all... November seems exciting.
Johnny05: I have a C9 with a Vega 64 and FreeSync works just fine. All that is required is to use CRU and configure the TV to report as FreeSync-compatible.
Ah yeah, thanks, just saw your answer now. Good to see that it works for you as well.
#333
InVasMani
I actually would've guessed 2 CUs.
Zach_01: I think they are able to cut/disable CUs in groups of 2. If you look at the full RDNA1/RDNA2 dies you will see 20 and 40 identical rectangles respectively. Each one of these rectangles is 2 CUs... And I'm pretty convinced that CrossFire died long ago.
Still, AMD has to differentiate SKUs, so it's a matter of how they go about it and how many SKUs they try to offer in total. I'm sure AMD wants fairly good segmentation across the board, along with price considerations. If they added 3 more SKUs and did what they did for the high-end SKUs in reverse, meeting most closely in the middle, I think they'd probably go with 56 CU/44 CU/36 CU SKUs to pair with the current 80 CU/72 CU/60 CU offerings. The 60 CU and 56 CU would naturally be most closely matched in price and performance. Now, if AMD has to create new dies to reduce the Infinity Cache, and if they reduce the memory bus width, I think 128-bit with 64 MB of Infinity Cache makes a lot of sense, especially if they were to swap the GDDR6 for GDDR6X. I really see that as a pretty good possibility. The 56 CU part would be closely matched to the 60 CU version, but perhaps at a better price, or better efficiency relative to price; either way it seems flexible and scalable. They can also bump the original 3 SKUs up to GDDR6X down the road. I think AMD kind of nailed it this time around on the GPU side: really great progress, and a sort of return to normalcy on the GPU front between AMD/Nvidia (or ATI/Nvidia). Either way it's good for consumers, or at least better than it has been.
#334
Zach_01
InVasMani: I actually would've guessed 2 CUs. Still, AMD has to differentiate SKUs... I think 128-bit with 64 MB of Infinity Cache makes a lot of sense, especially if they were to swap the GDDR6 for GDDR6X.
My estimation, based purely on (my own) logic, is that AMD will stay away from GDDR6X. First, because they can get away without it thanks to the new IC implementation, and second because of all kinds of expenses: GDDR6X is more expensive, draws almost 3X the power of "simple" GDDR6, and the memory controller needs to be more complex too (= more die area and fab cost).

This I've "heard" partially...
The three 6000 cards we've seen so far are based on Navi 21, right? An 80 CU full die. They may have one more N21 SKU with even fewer CUs, I don't know how many, probably 56 or fewer active, with 8 GB(?) and probably the same 256-bit bus. But I don't think this is coming soon, because they may have to build inventory first (given the currently good fab yields) and also see how things go with Nvidia.

Further down they have Navi 22. Probably a (?)40 CU full die with a 192-bit bus, (?)12 GB, clocks up to 2.5 GHz, 160~200 W, and who knows how much IC. That would be better than the 5700 XT.
And also cut-down versions of N22 with 32~36 CUs, 8/10/12 GB, 160/192-bit (for 5600/5700 replacements) and so on, but at this point it's all pure speculation and things may change.

There are also rumors of Navi 23 with 24~32 CUs, but... it's way too soon.

Navi21: 4K
Navi22: 1440p and ultrawide
Navi23: 1080p only
#335
Valantar
Zach_01: I think they are able to cut/disable CUs in groups of 2...
InVasMani: I actually would've guessed 2 CUs. Still, AMD has to differentiate SKUs, so it's a matter of how they go about it and how many SKUs they try to offer in total...
Yep, CUs are grouped two by two in ... gah, I can't remember what they call the groups. Anyhow, AMD can disable however many they like as long as it's a multiple of 2.

That being said, it makes no sense for them to launch further disabled Navi 21 SKUs. Navi 21 is a big and expensive die, made on a mature process with a low error rate. They've already launched a SKU with 25% of CUs disabled. Going below that would only be warranted if there were lots of defective dice that didn't even have 60 working CUs. That's highly unlikely, and so they would then be giving up chips they could sell in higher power, more expensive SKUs just to make cut-down ones - again, why would they do that? And besides, AMD has promised that RDNA will be the basis for their full product stack going forward, so we can expect at the very least two more die designs going forward - they had two below 60 CUs for RDNA 1 after all, and reducing that number makes no sense at all. I would expect the rumors of a mid-size Navi 22 and a small Navi 23 to be relatively accurate, though I'm doubtful about Navi 22 having only 40 CUs - that's too big a jump IMO. 44, 48? Sure. And again, 52 would place it too close to the 6800. 80-72-60-(new die)-48-40-32-(new die)-28-24-20 sounds like a likely lineup to me, which gives us everything down to a 5500 non-XT, with the possibility of 5400/5300 SKUs with disabled memory, lower clocks, etc.

As for memory, I agree with @Zach_01 that AMD will likely stay away from GDDR6X entirely. It just doesn't make sense for them. With IC working to the degree that they only need a relatively cheap 256-bit GDDR6 bus on their top end SKU, going for a more expensive, more power hungry RAM standard on a lower end SKU would just be plain weird. What would they gain from it? I wouldn't be surprised if Navi 22 still had a 256-bit bus, but it might only get fully enabled on top bins (6700 XT, possibly 6700) - a 256-bit bus doesn't take much board space and isn't very expensive (the RX 570 had one, after all). My guess: fully enabled Navi 22 will have something like a 256-bit G6 bus with 96MB of IC. Though it could of course be any number of configurations, and no doubt AMD has simulated the crap out of this to decide which to go for - it could also be 192-bit G6+128MB IC, or even 192-bit+96MB if that delivers sufficient performance for a 6700 XT SKU.
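Since the bus-width speculation above leans on it, raw GDDR6 bandwidth is simply bus width divided by 8, times the per-pin data rate; a quick sketch over the speculated (not announced) configurations:

```python
# Raw GDDR6 bandwidth for a few hypothetical bus widths; 16 Gbps assumed.
def gddr6_bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Return raw memory bandwidth in GB/s."""
    return bus_bits / 8 * gbps_per_pin

for bus in (256, 192, 128):
    print(f"{bus}-bit @ 16 Gbps: {gddr6_bandwidth_gbs(bus, 16):.0f} GB/s")
# 256-bit: 512 GB/s, 192-bit: 384 GB/s, 128-bit: 256 GB/s
```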
#336
Zach_01
The battle will continue, and I think it will be fiercest in the low-to-mid range where most cards are sold. Not that the top end is over...
It's really nice and exciting to see them both fight for "seats" and market share all over again... not only for the new and more advanced products (from both), but for the competition too!
I'm all set for a GPU for the next couple of years, but all I want is to see them fight!!
#337
R0H1T
Valantar: So no, this doesn't seem like a "best v. worst" comparison.
I didn't say that, hence the word could. AMD can get the numbers they desire by comparing the less efficient cards, that's it. Different cards can have vastly different perf/W figures; the efficiency jump in and of itself says nothing. What it does tell us, however, is that AMD has removed some bottlenecks from their RDNA uarch that improved efficiency by a lot. There could be more efficient cards in the 6xxx lineup which might well be more than 70% more efficient than the worst RDNA card out there. The bottom line being there's more than one way to skin the cat, and while the jump is tremendous indeed, I can't say it's that surprising, not to me at least. In case you forgot, AMD has led Nvidia in perf/W & overall performance in the last decade; I'm frankly more impressed by the Zen team's achievements.
#338
Valantar
R0H1T: I didn't say that, hence the word could.
But your wording was vague. You said they could, yet failed to point out that in this case it's quite clear that they didn't. Which makes all the difference.
R0H1T: AMD can get the numbers they desire by comparing the less efficient cards, that's it. Different cards can have vastly different perf/W figures; the efficiency jump in and of itself says nothing. What it does tell us, however, is that AMD has removed some bottlenecks from their RDNA uarch that improved efficiency by a lot. There could be more efficient cards in the 6xxx lineup which might well be more than 70% more efficient than the worst RDNA card out there.
That's likely true. If they have a low-and-(relatively-)wide RDNA 2 SKU like the 5600 XT, that would no doubt be more than 70% better than the 5700 XT in perf/W. And of course if they, say, clock the living bejeezus out of some SKU it might not significantly beat the 5600 XT in perf/W. At that point though it's more interesting to look at overall/average/geomean perf/W for the two lineups and compare that, in which case there's little doubt RDNA 2 will be a lot more efficient.
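For what it's worth, a toy sketch of such a lineup-wide geomean comparison; every number in it is made up purely to illustrate the metric, not taken from any benchmark:

```python
# Geometric-mean perf/W comparison across two hypothetical lineups.
from math import prod

rdna1_ppw = [0.44, 0.50, 0.47]   # assumed perf/W for three RDNA 1 SKUs
rdna2_ppw = [0.68, 0.72, 0.70]   # assumed perf/W for three RDNA 2 SKUs

def geomean(values):
    return prod(values) ** (1 / len(values))

uplift = geomean(rdna2_ppw) / geomean(rdna1_ppw)
print(f"Lineup-wide perf/W uplift: {uplift:.2f}x")  # ~1.49x
```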
R0H1T: The bottom line being there's more than one way to skin the cat, and while the jump is tremendous indeed, I can't say it's that surprising, not to me at least. In case you forgot, AMD has led Nvidia in perf/W & overall performance in the last decade...
Sorry, what? Did you mean to say the opposite? AMD has been behind Nvidia in perf/W and overall performance since the 780 Ti. That's not quite a decade, but seven years is not nothing, and the closest AMD has come in that time has been the Fury X (near performance parity at slightly higher power) and the 5600 XT (near outright efficiency superiority, but at relatively low absolute performance).
R0H1T: I'm frankly more impressed by the Zen team's achievements.
I'd say both are about equally impressive, though it remains to be seen if the RDNA team can keep up with the extremely impressive follow-through of the Zen team. RDNA 2 over RDNA 1 is (at least according to AMD's numbers) a change very similar to Zen over Excavator, but since then we've seen significant generational growth for two more generations (with a minor revision in between). On the other hand, RDNA 1 over GCN was also a relatively big jump, but one that had more issues than Zen (even accounting for Zen's early RAM and BIOS issues). So the comparison is a bit difficult at this point in time, but it's certainly promising for the RDNA team.
#339
Zach_01
Don't forget that RDNA3 is ~24 months away.
That is a lot longer than the 15-month period from RDNA1 to RDNA2.
The impressive stuff may continue on their all-new platform, in early 2022 for Zen 5 and late 2022 for RDNA3, and it could be bigger than what we've seen so far.
#340
BoboOOZ
Zach_01: Don't forget that RDNA3 is ~24 months away. That is a lot longer than the 15-month period from RDNA1 to RDNA2. The impressive stuff may continue on their all-new platform, in early 2022 for Zen 5 and late 2022 for RDNA3, and it could be bigger than what we've seen so far.
The promise for RDNA is short incremental cycles, just like with Zen, so RDNA3 is due at the end of next year, beginning of 2022 at the latest. That's what everybody is saying, and Lisa just said that development of RDNA3 is well under way.

www.tweaktown.com/images/news/7/4/74274_06_amds-next-gen-rdna-3-revolutionary-chiplet-design-could-crush-nvidia_full.png
#341
R0H1T
Yeah, I can't imagine AMD not going 5 nm/RDNA 3 in much less than 2 years, especially since they probably have full access to TSMC's top nodes now. Nvidia is certainly gonna release something better much sooner; AMD can't let the Turing saga play out for another year!
#342
Zach_01
Yet they showed on an RDNA2 slide that RDNA3 is for the end of 2022. ;) I'm not making this up...
#343
BoboOOZ
Zach_01: Yet they showed on an RDNA2 slide that RDNA3 is for the end of 2022. ;) I'm not making this up...
Show that slide, I showed you mine :p
#345
Valantar

(AMD Radeon roadmap slide. Source.) It's AMD's typical vague roadmap; it can mean any part of 2022, though 2021 is very unlikely.
#346
Shatun_Bear
I think RDNA2 is the equivalent of Zen 2 in the PC space: extremely competitive with their rival, allowing massive market share gains (they can only go up from 20%).

RDNA3 is said to be another huge leap, and on TSMC's 5 nm, whilst Nvidia will be trundling along on 7 nm or flirting with Samsung's el-cheapo 8nm+ (an Nvidia mistake) with their 4000 series.
#347
springs113
Valantar: (AMD Radeon roadmap slide. Source.) It's AMD's typical vague roadmap; it can mean any part of 2022, though 2021 is very unlikely.
How I read that map... it's the end of 2021, before 2022. Also, every leaker out there says 2021.
#348
Valantar
springs113: How I read that map... it's the end of 2021, before 2022. Also, every leaker out there says 2021.
Guess that depends if you're reading the "2022" point as "start of 2022" or "end of 2022". I prefer pessimism with the possibility of being surprised, so I'm firmly in the latter camp.
#349
BoboOOZ
Valantar: Guess that depends if you're reading the "2022" point as "start of 2022" or "end of 2022". I prefer pessimism with the possibility of being surprised, so I'm firmly in the latter camp.
Yes, but leakers :) ...
Plus, while AMD might feel encouraged to slow things down a bit on the CPU side, since they are starting to compete with themselves a bit, in the GPU market they need to keep the fast pace for quite a while, before even hoping to get to a similar position.
#350
lexluthermiester
BoboOOZ: Plus, while AMD might feel encouraged to slow things down a bit on the CPU side, since they are starting to compete with themselves a bit
Plus, they are focused on a new socket. The Ryzen 5000 series CPUs are the last for socket AM4; the next will likely be AM5.