
AMD Announces the Radeon RX 6000 Series: Performance that Restores Competitiveness

Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I'm a bit perplexed at how Smart Access Memory works compared to how it's always worked. What's the key difference between the two, say as a flow chart? What's being done differently? Doesn't the CPU always have access to VRAM anyway!? I imagine it's bypassing some step in the chain for quicker access than how it's been handled in the past, but that's the part I'm curious about. I mean, I can access a GPU's VRAM now, and the CPU and system memory obviously play some role in the process. The mere fact that VRAM performance slows down around the point where the L2 cache is saturated on my CPU seems to indicate the CPU design plays a role, though it seems to be bottlenecked by system memory performance along with the CPU's L2 cache and core count (not thread count), which adds to the overall combined L2 cache structure. You see a huge regression in performance beyond the theoretical limits of the L2 cache: it seems to peak at that point (it's 4-way on my CPU), slows a bit up to 2MB file sizes, then drops off quite rapidly after that. If you disable a physical core the bandwidth regresses as well, so the combined L2 cache impacts it from what I've seen.
CPUs only have access to RAM on PCIe devices in 256MB chunks at a time. SAM gives the CPU direct access to the entire VRAM at any time.
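For the curious: that 256MB window is the GPU's traditional prefetchable PCI BAR, and SAM (AMD's branding of PCIe Resizable BAR) simply grows that BAR to span all of VRAM. Here's a minimal sketch of how you could check this yourself on Linux by reading the sysfs resource table; the device address is a placeholder, so substitute your own GPU's address from lspci:

```python
# Minimal sketch (Linux-only, illustrative): list a GPU's PCI BAR sizes.
# Without Resizable BAR the largest prefetchable BAR is typically 256 MiB;
# with SAM/ReBAR enabled, one BAR spans the whole VRAM.

GPU_BDF = "0000:03:00.0"  # hypothetical bus:device.function, check lspci

def bar_sizes(bdf: str) -> list[tuple[int, int]]:
    """Return (BAR index, size in bytes) for every populated BAR."""
    sizes = []
    with open(f"/sys/bus/pci/devices/{bdf}/resource") as f:
        for index, line in enumerate(f):
            start, end, _flags = (int(field, 16) for field in line.split())
            if end > start:  # unused BARs read as all zeros
                sizes.append((index, end - start + 1))
    return sizes

if __name__ == "__main__":
    for index, size in bar_sizes(GPU_BDF):
        print(f"BAR{index}: {size / 2**20:,.0f} MiB")
```

With SAM enabled you'd expect one BAR to report the card's full VRAM (e.g. 16,384 MiB on a 16GB card) instead of 256 MiB.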
52CU and 44CU is the next logical step based on what's already released; AMD seems to disable 8 CUs at a time. I can see them doing a 10GB or 14GB capacity device. It would be interesting if they utilized GDDR6 and GDDR6X together alongside variable rate shading: use the GDDR6 when you scale the scene image quality back further and the GDDR6X at the higher quality, giving mixed performance at a better price. I would think they'd consider reducing the memory bus width to 128-bit or 192-bit for SKUs with those CU counts, though, if paired with Infinity Cache. It's interesting to think about the Infinity Cache in a CF setup and how it impacts latency; I'd expect less micro stutter. The 99th percentiles will be interesting to look at for RDNA2 with all the added bandwidth and I/O. I suppose 36 CUs is possible as well by extension, but I don't know how the profit margins would be; the low end of the market is eroding further each generation, not to mention Intel entering the dGPU market will compound that situation. I don't think a 30CU part is likely for RDNA2; it would end up being 28CU if anything, and that's doubtful unless they wanted it for an APU/mobile, then perhaps.
52 and 44 CUs would be very small steps. Also, AMD likes to do 8CU cuts? Yet the 6800 has 12 fewer CUs than the 6800 XT? Yeah, sorry, that doesn't quite add up. I'm very much hoping Navi 22 has more than 40 CUs, and I'd be very happy if it has 48. Any more than that is quite unlikely IMO. RDNA (1) scaled down to 24 CUs with Navi 14, so I would frankly be surprised if we didn't see RDNA 2 scale down just as far - though hopefully they'll increase the CU count a bit at the low end. There'd be a lot of sales volume in a low-end, low CU count, high-clocking GPU, and margins could be good if they can get by with a 128-bit bus for that part. I would very much welcome a new 75W RDNA2 GPU for slot powered applications!

Combining two different memory technologies like you are suggesting would be a complete and utter nightmare. Either you'd need to spend a lot of time and compute shuffling data back and forth between the two VRAM pools, or you'd need to double the size of each (i.e. instead of a 16GB GPU you'd need a 16+16GB GPU), driving up prices massively. Not to mention the board space requirements - those boards would be massive, expensive, and very power hungry. And then there's all the issues getting this to work - if you're using VRS as a differentiator then parts of the GPU need to be rendering a scene from one VRAM pool with the rest of the GPU rendering the same scene from a different VRAM pool, which would either mean waiting massive amounts of time for data to copy over, tanking performance, or keeping data duplicated in two VRAM pools simultaneously, which is both expensive in terms of power and would cause all kinds of issues with two different render passes and VRAM pools each informing new data being loaded to both pools at the same time. As I said: a complete and utter nightmare. Not to mention that one of the main points of Infinity Cache is to lower VRAM bandwidth needs. Adding something like this on top makes no sense.

I would expect narrower buses for lower end GPUs, though the IC will likely also shrink due to the sheer die area requirements of 128MB of SRAM. I'm hoping for 96MB of IC and a 256-bit or 192-bit bus for the next cards down. A 128-bit bus won't be doable unless they keep the full-size cache, and even then that sounds anemic for a GPU at that performance level (RTX 2080/2070-ish).

From AMD's utter lack of mentioning it, I'm guessing CrossFire is just as dead now as it was for RDNA1, with the only support being in major benchmarks.
 
Joined
Sep 3, 2019
Messages
3,512 (1.84/day)
Location
Thessaloniki, Greece
System Name PC on since Aug 2019, 1st CPU R5 3600 + ASUS ROG RX580 8GB >> MSI Gaming X RX5700XT (Jan 2020)
Processor Ryzen 9 5900X (July 2022), 220W PPT limit, 80C temp limit, CO -6-14, +50MHz (up to 5.0GHz)
Motherboard Gigabyte X570 Aorus Pro (Rev1.0), BIOS F39b, AGESA V2 1.2.0.C
Cooling Arctic Liquid Freezer II 420mm Rev7 (Jan 2024) with off-center mount for Ryzen, TIM: Kryonaut
Memory 2x16GB G.Skill Trident Z Neo GTZN (July 2022) 3667MT/s 1.42V CL16-16-16-16-32-48 1T, tRFC:280, B-die
Video Card(s) Sapphire Nitro+ RX 7900XTX (Dec 2023) 314~467W (375W current) PowerLimit, 1060mV, Adrenalin v24.10.1
Storage Samsung NVMe: 980Pro 1TB(OS 2022), 970Pro 512GB(2019) / SATA-III: 850Pro 1TB(2015) 860Evo 1TB(2020)
Display(s) Dell Alienware AW3423DW 34" QD-OLED curved (1800R), 3440x1440 144Hz (max 175Hz) HDR400/1000, VRR on
Case None... naked on desk
Audio Device(s) Astro A50 headset
Power Supply Corsair HX750i, ATX v2.4, 80+ Platinum, 93% (250~700W), modular, single/dual rail (switch)
Mouse Logitech MX Master (Gen1)
Keyboard Logitech G15 (Gen2) w/ LCDSirReal applet
Software Windows 11 Home 64bit (v24H2, OSBuild 26100.2161), upgraded from Win10 to Win11 on Jan 2024
Well, we have to wait for reviews to confirm what AMD showed. It's hard to believe even when you see it's real; AMD was so far behind that, if it's all true, we'll have to start believing in miracles too, if you don't already.
It's spending money on R&D and doing a lot of engineering...
Just a lot of people didn't have faith in AMD because it seemed it was too far behind in the GPU market. But over the last 3-4 years AMD has shown some seriousness about their products, and seems more organized and focused.
 
Joined
May 8, 2018
Messages
1,568 (0.65/day)
Location
London, UK
It's spending money on R&D and doing a lot of engineering...
Just a lot of people didn't have faith in AMD because it seemed it was too far behind in the GPU market. But over the last 3-4 years AMD has shown some seriousness about their products, and seems more organized and focused.

Like I said before: new management, new ideas, new employees, new objectives, new products and so on.
 
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
While true, benchmarks have been pretty much on the mark with their stated claims for Ryzen. I see no reason they would exaggerate these stats.
You know, something I noticed with the VRAM RAMDISK software when I played around with it: read performance seems to follow system memory constraints, while write performance follows the PCIe bus constraints. You can also use NTFS compression when formatting the partition and speed up the bandwidth a lot, and you can go a step further than that: compress the contents with CompactGUI-2 using LZX compression at a high level. While I can't benchmark that as easily, the fact that it can be done is interesting to say the least, and could speed up bandwidth and capacity further yet. To the system it's basically a glorified RAMDISK that matches system memory read performance, with write speeds matching PCIe bandwidth. The other neat part: when I saw Tom's Hardware test Crysis running from VRAM, it performed a bit better on the minimum frame rates at 4K than NVMe/SSD/RAMDISK; the system RAMDISK was worst. I think that's actually expected, because the system RAMDISK eats into system memory bandwidth, pulling double duty, while the VRAM is often sitting around waiting for contents to populate it in the first place. It essentially works like a PCIe 4.0 x16 RAMDISK sort of device, which is technically faster than NVMe and less complex than a quad M.2 PCIe 4.0 x16 setup would be. The other aspect: Tom's Hardware tested it with PCIe 3.0 NVMe and GPUs. I can't tell if that was margin of error or not, but if repeatable it looked like a good perk, and one might yield even better results than what was tested.
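For anyone wanting to reproduce that read/write split, a crude sequential probe along these lines would do it. The drive letter is a placeholder for wherever the VRAM disk mounts, and note the read figure will include OS page-cache effects unless the test file is much larger than system RAM:

```python
# Rough sequential bandwidth probe (illustrative) against a mounted ramdisk.
import os
import time

TARGET = r"R:\bench.bin"  # hypothetical VRAM-ramdisk mount point
CHUNK = 64 * 2**20        # 64 MiB per I/O
TOTAL = 2 * 2**30         # 2 GiB test file

def run() -> None:
    buf = os.urandom(CHUNK)  # incompressible data, so NTFS compression can't cheat
    t0 = time.perf_counter()
    with open(TARGET, "wb", buffering=0) as f:
        for _ in range(TOTAL // CHUNK):
            f.write(buf)
    t1 = time.perf_counter()
    with open(TARGET, "rb", buffering=0) as f:
        while f.read(CHUNK):
            pass
    t2 = time.perf_counter()
    gib = TOTAL / 2**30
    print(f"write: {gib / (t1 - t0):.2f} GiB/s, read: {gib / (t2 - t1):.2f} GiB/s")
    os.remove(TARGET)

if __name__ == "__main__":
    run()
```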

It’s spending money for R&D and do a lot of engineering...
Just a lot of people didn’t have faith in AMD because it seemed that it was too far behind in the GPU market. But the last 3-4 years AMD has shown some seriousness about their products, and seem more organized and focused.
This I've said for a long while now, quite often: as AMD's financials and budget improve, I anticipate Radeon following suit and making a stronger push in the GPU market against Nvidia. I think they are, pleasantly, a little earlier than I figured they'd be at this stage; I thought this kind of rebound would happen next generation, not this one. It just shows how hard AMD has worked to address the performance and efficiency of the Radeon brand and its IP. It's impressive, and this is exactly where the company wants to be headed. It's hard not to get complacent when you've been at the top a while (look at Intel, and Nvidia for that matter), so it's really about time, but it wouldn't have happened without good management on the part of Lisa Su and the engineering talent she's put to work. It's not a stroke of luck; it's continued effort and progress with efficient management. To be fair, her personal experience is also very relevant to her position; she's the right leader for that company, 100%. Similar scenario with Jensen Huang, even if you don't like leather jackets.

CPUs only have access to RAM on PCIe devices in 256MB chunks at a time. SAM gives the CPU direct access to the entire VRAM at any time.
That's exactly the kind of inside info I was interested in. Absolute game changer, I'd say.

52 and 44 CUs would be very small steps. Also, AMD likes to do 8CU cuts?
Could've sworn it was 80CU/72CU/64CU listed... appears I got the lowest-end model's CU count wrong, and it's 60. So it seems they can cut 8 or 12 CUs at a time, possibly more granular than that, though I'd expect similar-sized cuts for other SKUs. That said, I still don't really expect they'd cut below 44 CUs for the desktop anyway. I guess they could possibly do 56CU/52CU/44CU, and maybe they stretch it to 40CU as well, who knows, but I doubt it if they retain the Infinity Cache without scaling its cache size as well. I do see 192-bit and 128-bit being plausible, and it depends mostly on CU count, which makes the most sense.

I'd like to see what could be done with CF with the Infinity Cache: better bandwidth and I/O, even if it gets split amongst the GPUs, should translate to less micro stutter. Fewer bus bottleneck complications are always good. It would be interesting if some CPU cores got introduced on the GPU side and a bit of on-the-fly compression in the form of LZX or XPRESS 4K/8K/16K were used before the Infinity Cache sends that data along to the CPU. Even if it could only compress files up to a certain size quickly on the fly, it would be quite useful, and you can use those types of compression methods with VRAM as well.
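On the LZX/XPRESS side: those are the transparent NTFS compression algorithms exposed by the stock Windows compact.exe tool (CompactGUI-2 is a wrapper around it), so that part can be scripted against a VRAM-backed volume today. A minimal sketch, with a placeholder path:

```python
# Minimal sketch (Windows-only): apply LZX "compact" compression to a folder.
# compact.exe ships with Windows; /exe:lzx picks the strongest of its four
# algorithms (XPRESS4K/8K/16K, LZX). Files stay transparently readable.
import subprocess

TARGET = r"R:\game_assets"  # hypothetical folder on the ramdisk volume

def lzx_compress(folder: str) -> None:
    # /c = compress, /s = recurse into subfolders, /i = ignore errors, /q = quiet
    subprocess.run(
        ["compact.exe", "/c", f"/s:{folder}", "/exe:lzx", "/i", "/q"],
        check=True,
    )

if __name__ == "__main__":
    lzx_compress(TARGET)
```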
 
Joined
Dec 26, 2006
Messages
3,833 (0.59/day)
Location
Northern Ontario Canada
Processor Ryzen 5700x
Motherboard Gigabyte X570S Aero G R1.1 BiosF5g
Cooling Noctua NH-C12P SE14 w/ NF-A15 HS-PWM Fan 1500rpm
Memory Micron DDR4-3200 2x32GB D.S. D.R. (CT2K32G4DFD832A)
Video Card(s) AMD RX 6800 - Asus Tuf
Storage Kingston KC3000 1TB & 2TB & 4TB Corsair MP600 Pro LPX
Display(s) LG 27UL550-W (27" 4k)
Case Be Quiet Pure Base 600 (no window)
Audio Device(s) Realtek ALC1220-VB
Power Supply SuperFlower Leadex V Gold Pro 850W ATX Ver2.52
Mouse Mionix Naos Pro
Keyboard Corsair Strafe with browns
Software W10 22H2 Pro x64
52CU and 44CU is the next logical step based on what's already released; AMD seems to disable 8 CUs at a time. I can see them doing a 10GB or 14GB capacity device. It would be interesting if they utilized GDDR6 and GDDR6X together alongside variable rate shading: use the GDDR6 when you scale the scene image quality back further and the GDDR6X at the higher quality, giving mixed performance at a better price. I would think they'd consider reducing the memory bus width to 128-bit or 192-bit for SKUs with those CU counts, though, if paired with Infinity Cache. It's interesting to think about the Infinity Cache in a CF setup and how it impacts latency; I'd expect less micro stutter. The 99th percentiles will be interesting to look at for RDNA2 with all the added bandwidth and I/O. I suppose 36 CUs is possible as well by extension, but I don't know how the profit margins would be; the low end of the market is eroding further each generation, not to mention Intel entering the dGPU market will compound that situation. I don't think a 30CU part is likely for RDNA2; it would end up being 28CU if anything, and that's doubtful unless they wanted it for an APU/mobile, then perhaps.

Well, it depends on the definition of low end? Usually midrange, as far as price is concerned, is about $250-ish US. The RX 480, when I bought it about 4 years ago, was 2304 shaders (36 CUs), 8GB RAM, 256-bit bus for $330 CAD, or roughly $250 US. Maybe since card prices now start at $150 US, the midrange is closer to $300 US???

That's typically my budget. I am hoping something gets released along those lines; it could double my current performance and put it in 5700 XT performance territory.

If not, oh well, I have many other ways to put $350 CAD to better use.
 
Joined
Sep 3, 2019
Messages
3,512 (1.84/day)
Location
Thessaloniki, Greece
System Name PC on since Aug 2019, 1st CPU R5 3600 + ASUS ROG RX580 8GB >> MSI Gaming X RX5700XT (Jan 2020)
Processor Ryzen 9 5900X (July 2022), 220W PPT limit, 80C temp limit, CO -6-14, +50MHz (up to 5.0GHz)
Motherboard Gigabyte X570 Aorus Pro (Rev1.0), BIOS F39b, AGESA V2 1.2.0.C
Cooling Arctic Liquid Freezer II 420mm Rev7 (Jan 2024) with off-center mount for Ryzen, TIM: Kryonaut
Memory 2x16GB G.Skill Trident Z Neo GTZN (July 2022) 3667MT/s 1.42V CL16-16-16-16-32-48 1T, tRFC:280, B-die
Video Card(s) Sapphire Nitro+ RX 7900XTX (Dec 2023) 314~467W (375W current) PowerLimit, 1060mV, Adrenalin v24.10.1
Storage Samsung NVMe: 980Pro 1TB(OS 2022), 970Pro 512GB(2019) / SATA-III: 850Pro 1TB(2015) 860Evo 1TB(2020)
Display(s) Dell Alienware AW3423DW 34" QD-OLED curved (1800R), 3440x1440 144Hz (max 175Hz) HDR400/1000, VRR on
Case None... naked on desk
Audio Device(s) Astro A50 headset
Power Supply Corsair HX750i, ATX v2.4, 80+ Platinum, 93% (250~700W), modular, single/dual rail (switch)
Mouse Logitech MX Master (Gen1)
Keyboard Logitech G15 (Gen2) w/ LCDSirReal applet
Software Windows 11 Home 64bit (v24H2, OSBuild 26100.2161), upgraded from Win10 to Win11 on Jan 2024
You know, something I noticed with the VRAM RAMDISK software when I played around with it: read performance seems to follow system memory constraints, while write performance follows the PCIe bus constraints. You can also use NTFS compression when formatting the partition and speed up the bandwidth a lot, and you can go a step further than that: compress the contents with CompactGUI-2 using LZX compression at a high level. While I can't benchmark that as easily, the fact that it can be done is interesting to say the least, and could speed up bandwidth and capacity further yet. To the system it's basically a glorified RAMDISK that matches system memory read performance, with write speeds matching PCIe bandwidth. The other neat part: when I saw Tom's Hardware test Crysis running from VRAM, it performed a bit better on the minimum frame rates at 4K than NVMe/SSD/RAMDISK; the system RAMDISK was worst. I think that's actually expected, because the system RAMDISK eats into system memory bandwidth, pulling double duty, while the VRAM is often sitting around waiting for contents to populate it in the first place. It essentially works like a PCIe 4.0 x16 RAMDISK sort of device, which is technically faster than NVMe and less complex than a quad M.2 PCIe 4.0 x16 setup would be. The other aspect: Tom's Hardware tested it with PCIe 3.0 NVMe and GPUs. I can't tell if that was margin of error or not, but if repeatable it looked like a good perk, and one might yield even better results than what was tested.

This I've said for a long while now, quite often: as AMD's financials and budget improve, I anticipate Radeon following suit and making a stronger push in the GPU market against Nvidia. I think they are, pleasantly, a little earlier than I figured they'd be at this stage; I thought this kind of rebound would happen next generation, not this one. It just shows how hard AMD has worked to address the performance and efficiency of the Radeon brand and its IP. It's impressive, and this is exactly where the company wants to be headed. It's hard not to get complacent when you've been at the top a while (look at Intel, and Nvidia for that matter), so it's really about time, but it wouldn't have happened without good management on the part of Lisa Su and the engineering talent she's put to work. It's not a stroke of luck; it's continued effort and progress with efficient management. To be fair, her personal experience is also very relevant to her position; she's the right leader for that company, 100%. Similar scenario with Jensen Huang, even if you don't like leather jackets.

That's exactly the kind of inside info I was interested in. Absolute game changer, I'd say.

Could've sworn it was 80CU/72CU/64CU listed... appears I got the lowest-end model's CU count wrong, and it's 60. So it seems they can cut 8 or 12 CUs at a time, possibly more granular than that, though I'd expect similar-sized cuts for other SKUs. That said, I still don't really expect they'd cut below 44 CUs for the desktop anyway. I guess they could possibly do 56CU/52CU/44CU, and maybe they stretch it to 40CU as well, who knows, but I doubt it if they retain the Infinity Cache without scaling its cache size as well. I do see 192-bit and 128-bit being plausible, and it depends mostly on CU count, which makes the most sense.

I'd like to see what could be done with CF with the Infinity Cache: better bandwidth and I/O, even if it gets split amongst the GPUs, should translate to less micro stutter. Fewer bus bottleneck complications are always good. It would be interesting if some CPU cores got introduced on the GPU side and a bit of on-the-fly compression in the form of LZX or XPRESS 4K/8K/16K were used before the Infinity Cache sends that data along to the CPU. Even if it could only compress files up to a certain size quickly on the fly, it would be quite useful, and you can use those types of compression methods with VRAM as well.
I think they are able to cut/disable CUs in pairs. If you look at the RDNA1/2 full dies you will see 20 and 40 identical rectangles respectively. Each of these rectangles is 2 CUs.

RDNA1
[die shot image]


RDNA2
[die shot image]


————————

And I'm pretty convinced that CrossFire died long ago.
 
Joined
Jul 8, 2019
Messages
76 (0.04/day)
Actually, VRR is supported by these screens, but it is not FreeSync-equivalent. Maybe it will work, but it may not necessarily work the way a FreeSync monitor or TV would.
Yes, the 6800 XT supports HDMI 2.1 VRR (just like the consoles).

RTINGS rates the C9/B9 VRR at 4K 40-60Hz (maybe because they didn't have an HDMI 2.1 source to test higher rates?).

Looks like there is a modding solution to activate FreeSync up to 4K 120Hz:

https://www.reddit.com/r/Amd/comments/g65mw7
Some guys are experiencing short black screens when launching/exiting some games, but no big issue.

I think I'm gonna go red. A sweet 16GB and frequencies reaching heaven, 2300MHz+. Yummy, I want it, give that to me.

Green is maxing out at 1980MHz on all their products; OC headroom basically doesn't exist there.

DXR on AMD with 1 RA (Ray Accelerator) per CU seems not so bad. I mean, with the consoles willing to implement a bit of ray tracing, at least we will be able to activate DXR and see what it looks like. Anyway, it doesn't look like the wow thing right now, just some puddle reflections here and there.

DLSS... well, AMD is working on a solution, and given that the consoles are asking for such a solution, as they have less power to reach 4K60+ FPS, this could get somewhere in the end.

Driver bugs: many 5700 XT users haven't reported issues in what seems like many months, and again, given that most games are now developed for both PC and consoles, I'm pretty confident AMD's drivers are gonna be robust.

Also, I'm fed up with all my interfaces being green (GeForce Experience). I want to discover the AMD UI, just for the fun of looking through every possible menu and option.

I would have gone for the 6900 XT if they had priced it at $750 on an 80/72 CU ratio basis;
that would have been reasonable. Even $800, just for the "Elite" product feeling. But at $999 they went berserk here; not gonna give them credit.

In my opinion they should have done a bigger die and crushed Nvidia once and for all, just for the fun of it.

All in all... November seems exciting.

I have a C9 with a Vega 64 and FreeSync works just fine. All that is required is to use CRU and configure the TV to report as FreeSync compatible.

Ah yeah, thanks, just saw your answer now. Good to see that it works for you as well.
 
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
I actually would've guessed 2 CUs.
I think they are able to cut/disable CUs in pairs. If you look at the RDNA1/2 full dies you will see 20 and 40 identical rectangles respectively. Each of these rectangles is 2 CUs.

RDNA1
View attachment 173792

RDNA2
View attachment 173793

————————

And I'm pretty convinced that CrossFire died long ago.
Still, AMD has to differentiate SKUs, so it's a matter of how they go about it and how many SKUs they try to offer in total. AMD, I'm sure, wants fairly good segmentation across the board along with price considerations. If they added 3 more SKUs and did what they did for the high-end SKUs in reverse, meeting most closely at the end, I think they'd probably go with 56CU/44CU/36CU SKUs to pair with the current 80CU/72CU/60CU offerings. The 60CU/56CU would be most closely matched in price and performance, naturally. Now, if AMD has to create new dies to reduce the Infinity Cache, and if they reduce the memory bus width, I think 128-bit with 64MB of Infinity Cache makes a lot of sense, especially were they to swap out the GDDR6 for GDDR6X. I really see that as a pretty good possibility. It actually seems to make a fair bit of sense, and the 56CU would be closely matched to the 60CU version, but perhaps at a better price, or better efficiency relative to price; either way it seems flexible and scalable. They can also bump up the original 3 SKUs with GDDR6X down the road. I think AMD kind of nailed it this time around on the GPU side: really great progress and a sort of return to normalcy on the GPU front between AMD/Nvidia (or ATI/Nvidia). Either way it's good for consumers, hopefully, or at least better than it had been.
 
Joined
Sep 3, 2019
Messages
3,512 (1.84/day)
Location
Thessaloniki, Greece
System Name PC on since Aug 2019, 1st CPU R5 3600 + ASUS ROG RX580 8GB >> MSI Gaming X RX5700XT (Jan 2020)
Processor Ryzen 9 5900X (July 2022), 220W PPT limit, 80C temp limit, CO -6-14, +50MHz (up to 5.0GHz)
Motherboard Gigabyte X570 Aorus Pro (Rev1.0), BIOS F39b, AGESA V2 1.2.0.C
Cooling Arctic Liquid Freezer II 420mm Rev7 (Jan 2024) with off-center mount for Ryzen, TIM: Kryonaut
Memory 2x16GB G.Skill Trident Z Neo GTZN (July 2022) 3667MT/s 1.42V CL16-16-16-16-32-48 1T, tRFC:280, B-die
Video Card(s) Sapphire Nitro+ RX 7900XTX (Dec 2023) 314~467W (375W current) PowerLimit, 1060mV, Adrenalin v24.10.1
Storage Samsung NVMe: 980Pro 1TB(OS 2022), 970Pro 512GB(2019) / SATA-III: 850Pro 1TB(2015) 860Evo 1TB(2020)
Display(s) Dell Alienware AW3423DW 34" QD-OLED curved (1800R), 3440x1440 144Hz (max 175Hz) HDR400/1000, VRR on
Case None... naked on desk
Audio Device(s) Astro A50 headset
Power Supply Corsair HX750i, ATX v2.4, 80+ Platinum, 93% (250~700W), modular, single/dual rail (switch)
Mouse Logitech MX Master (Gen1)
Keyboard Logitech G15 (Gen2) w/ LCDSirReal applet
Software Windows 11 Home 64bit (v24H2, OSBuild 26100.2161), upgraded from Win10 to Win11 on Jan 2024
I actually would've guessed 2 CUs.
Still, AMD has to differentiate SKUs, so it's a matter of how they go about it and how many SKUs they try to offer in total. AMD, I'm sure, wants fairly good segmentation across the board along with price considerations. If they added 3 more SKUs and did what they did for the high-end SKUs in reverse, meeting most closely at the end, I think they'd probably go with 56CU/44CU/36CU SKUs to pair with the current 80CU/72CU/60CU offerings. The 60CU/56CU would be most closely matched in price and performance, naturally. Now, if AMD has to create new dies to reduce the Infinity Cache, and if they reduce the memory bus width, I think 128-bit with 64MB of Infinity Cache makes a lot of sense, especially were they to swap out the GDDR6 for GDDR6X. I really see that as a pretty good possibility. It actually seems to make a fair bit of sense, and the 56CU would be closely matched to the 60CU version, but perhaps at a better price, or better efficiency relative to price; either way it seems flexible and scalable. They can also bump up the original 3 SKUs with GDDR6X down the road. I think AMD kind of nailed it this time around on the GPU side: really great progress and a sort of return to normalcy on the GPU front between AMD/Nvidia (or ATI/Nvidia). Either way it's good for consumers, hopefully, or at least better than it had been.
My estimation, based absolutely on (my) logic, is that AMD will stay away from GDDR6X. First, because they can get away with the new IC implementation, and second, because of all kinds of expenses: GDDR6X is more expensive, draws almost 3x the power of "simple" GDDR6, and the memory controller needs to be more complex too (= more expense in die area and fab cost).

This I "heard" partially...
The three 6000s we've seen so far are based on Navi 21, right? The 80CU full die. They may have one more N21 SKU with even fewer CUs, I don't know how many, probably 56 or even fewer active, with 8GB(?) and probably the same 256-bit bus. But this isn't coming soon, I think, because they may have to build inventory first (because of presently good fab yields) and also see how things go with Nvidia.

Further down they have Navi 22. Probably a (?)40CU full die with a 192-bit bus, (?)12GB, clocks up to 2.5GHz, 160~200W, and who knows how much IC it will have. That will be better than the 5700 XT.
There will also be cut-down versions of N22 with 32~36 CUs, 8/10/12GB, 160/192-bit (for 5600/5700 replacements) and so on, but at this point it's all pure speculation and things may change.

There are also rumors of a Navi 23 with 24~32 CUs, but... it's way too soon.

Navi21: 4K
Navi22: 1440p and ultrawide
Navi23: 1080p only
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I think they are able to cut/disable CUs in pairs. If you look at the RDNA1/2 full dies you will see 20 and 40 identical rectangles respectively. Each of these rectangles is 2 CUs.
I actually would've guessed 2 CUs.
Still, AMD has to differentiate SKUs, so it's a matter of how they go about it and how many SKUs they try to offer in total. AMD, I'm sure, wants fairly good segmentation across the board along with price considerations. If they added 3 more SKUs and did what they did for the high-end SKUs in reverse, meeting most closely at the end, I think they'd probably go with 56CU/44CU/36CU SKUs to pair with the current 80CU/72CU/60CU offerings. The 60CU/56CU would be most closely matched in price and performance, naturally. Now, if AMD has to create new dies to reduce the Infinity Cache, and if they reduce the memory bus width, I think 128-bit with 64MB of Infinity Cache makes a lot of sense, especially were they to swap out the GDDR6 for GDDR6X. I really see that as a pretty good possibility. It actually seems to make a fair bit of sense, and the 56CU would be closely matched to the 60CU version, but perhaps at a better price, or better efficiency relative to price; either way it seems flexible and scalable. They can also bump up the original 3 SKUs with GDDR6X down the road. I think AMD kind of nailed it this time around on the GPU side: really great progress and a sort of return to normalcy on the GPU front between AMD/Nvidia (or ATI/Nvidia). Either way it's good for consumers, hopefully, or at least better than it had been.
Yep, CUs are grouped two by two in ... gah, I can't remember what they call the groups. Anyhow, AMD can disable however many they like as long as it's a multiple of 2.

That being said, it makes no sense for them to launch further disabled Navi 21 SKUs. Navi 21 is a big and expensive die, made on a mature process with a low error rate. They've already launched a SKU with 25% of CUs disabled. Going below that would only be warranted if there were lots of defective dice that didn't even have 60 working CUs. That's highly unlikely, and so they would then be giving up chips they could sell in higher-power, more expensive SKUs just to make cut-down ones - again, why would they do that? And besides, AMD has promised that RDNA will be the basis for their full product stack, so we can expect at the very least two more die designs going forward - they had two below 60 CUs for RDNA 1, after all, and reducing that number makes no sense at all. I would expect the rumors of a mid-size Navi 22 and a small Navi 23 to be relatively accurate, though I'm doubtful about Navi 22 having only 40 CUs - that's too big a jump IMO. 44, 48? Sure. And again, 52 would place it too close to the 6800. 80-72-60-(new die)-48-40-32-(new die)-28-24-20 sounds like a likely lineup to me, which gives us everything down to a 5500 non-XT, with the possibility of 5400/5300 SKUs with disabled memory, lower clocks, etc.

As for memory, I agree with @Zach_01 that AMD will likely stay away from GDDR6X entirely. It just doesn't make sense for them. With IC working to the degree that they only need a relatively cheap 256-bit GDDR6 bus on their top end SKU, going for a more expensive, more power hungry RAM standard on a lower end SKU would just be plain weird. What would they gain from it? I wouldn't be surprised if Navi 22 still had a 256-bit bus, but it might only get fully enabled on top bins (6700 XT, possibly 6700) - a 256-bit bus doesn't take much board space and isn't very expensive (the RX 570 had one, after all). My guess: fully enabled Navi 22 will have something like a 256-bit G6 bus with 96MB of IC. Though it could of course be any number of configurations, and no doubt AMD has simulated the crap out of this to decide which to go for - it could also be 192-bit G6+128MB IC, or even 192-bit+96MB if that delivers sufficient performance for a 6700 XT SKU.
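For a rough sense of what those bus/IC combinations mean in numbers, here's a back-of-envelope model. The 16 Gbps data rate and the hit rates below are illustrative assumptions, not confirmed figures for any SKU:

```python
# Back-of-envelope: raw GDDR6 bandwidth per bus width, plus a crude
# "effective" figure assuming only Infinity Cache misses touch DRAM.
DATA_RATE_GBPS = 16  # per pin, assumed

def raw_bw(bus_bits: int) -> float:
    """Raw DRAM bandwidth in GB/s: pins * per-pin rate / 8 bits per byte."""
    return bus_bits * DATA_RATE_GBPS / 8

def effective_bw(bus_bits: int, hit_rate: float) -> float:
    # If a fraction hit_rate of traffic is served from cache, DRAM only
    # sees the misses, amplifying deliverable bandwidth.
    return raw_bw(bus_bits) / (1 - hit_rate)

for bits, hits in [(256, 0.58), (192, 0.50), (128, 0.45)]:
    print(f"{bits}-bit: {raw_bw(bits):.0f} GB/s raw, "
          f"~{effective_bw(bits, hits):.0f} GB/s effective at {hits:.0%} hits")
```

Under that (crude) model, even a narrower bus with a decent hit rate lands near the raw bandwidth of a much wider one, which is presumably the whole point of IC on the smaller dies.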
 
Joined
Sep 3, 2019
Messages
3,512 (1.84/day)
Location
Thessaloniki, Greece
System Name PC on since Aug 2019, 1st CPU R5 3600 + ASUS ROG RX580 8GB >> MSI Gaming X RX5700XT (Jan 2020)
Processor Ryzen 9 5900X (July 2022), 220W PPT limit, 80C temp limit, CO -6-14, +50MHz (up to 5.0GHz)
Motherboard Gigabyte X570 Aorus Pro (Rev1.0), BIOS F39b, AGESA V2 1.2.0.C
Cooling Arctic Liquid Freezer II 420mm Rev7 (Jan 2024) with off-center mount for Ryzen, TIM: Kryonaut
Memory 2x16GB G.Skill Trident Z Neo GTZN (July 2022) 3667MT/s 1.42V CL16-16-16-16-32-48 1T, tRFC:280, B-die
Video Card(s) Sapphire Nitro+ RX 7900XTX (Dec 2023) 314~467W (375W current) PowerLimit, 1060mV, Adrenalin v24.10.1
Storage Samsung NVMe: 980Pro 1TB(OS 2022), 970Pro 512GB(2019) / SATA-III: 850Pro 1TB(2015) 860Evo 1TB(2020)
Display(s) Dell Alienware AW3423DW 34" QD-OLED curved (1800R), 3440x1440 144Hz (max 175Hz) HDR400/1000, VRR on
Case None... naked on desk
Audio Device(s) Astro A50 headset
Power Supply Corsair HX750i, ATX v2.4, 80+ Platinum, 93% (250~700W), modular, single/dual rail (switch)
Mouse Logitech MX Master (Gen1)
Keyboard Logitech G15 (Gen2) w/ LCDSirReal applet
Software Windows 11 Home 64bit (v24H2, OSBuild 26100.2161), upgraded from Win10 to Win11 on Jan 2024
The battle will continue, and I think it will be fiercer at the low-to-mid range where most cards are sold. Not that the top end is over...
It's really nice and exciting to see them both fight for "seats" and market share all over again... not only for the new and more advanced products (from both), but for the competition also!
I'm all set for a GPU for the next couple of years, but all I want is to see them fight!!
 
Joined
Apr 12, 2013
Messages
7,532 (1.77/day)
So no, this doesn't seem like a "best v. worst" comparison.
I didn't say that, hence the word could. AMD can get the numbers they desire by comparing the less efficient cards, that's it. Different cards can have vastly different perf/W figures; the efficiency jump in and of itself says nothing. What it does tell us, however, is that AMD has removed some bottlenecks from their RDNA uarch that improved efficiency by a lot. There could be more efficient cards in the 6xxx lineup which might well be more than 70% more efficient than the worst RDNA card out there. The bottom line being there's more than one way to skin the cat, and while the jump is tremendous indeed, I can't say it's that surprising, not to me at least. In case you forgot, AMD has led Nvidia in perf/W and overall performance in the last decade. I'm frankly more impressed by the Zen team's achievements.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I didn't say that, hence the word could.
But your wording was vague. You said they could, yet failed to point out that in this case it's quite clear that they didn't. Which makes all the difference.
AMD can get the numbers they desire by comparing the less efficient cards, that's it. Different cards can have vastly different perf/W figures; the efficiency jump in and of itself says nothing. What it does tell us, however, is that AMD has removed some bottlenecks from their RDNA uarch that improved efficiency by a lot. There could be more efficient cards in the 6xxx lineup which might well be more than 70% more efficient than the worst RDNA card out there.
That's likely true. If they have a low-and-(relatively-)wide RDNA 2 SKU like the 5600 XT, that would no doubt be more than 70% better than the 5700 XT in perf/W. And of course if they, say, clock the living bejeezus out of some SKU it might not significantly beat the 5600 XT in perf/W. At that point though it's more interesting to look at overall/average/geomean perf/W for the two lineups and compare that, in which case there's little doubt RDNA 2 will be a lot more efficient.
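As an aside, that lineup-level comparison is simple enough to compute; a sketch with placeholder fps and wattage numbers, purely to show the arithmetic:

```python
# Geometric-mean perf/W gain across several titles for two cards at
# identical settings. All numbers below are placeholders, not measurements.
from math import prod

def geomean(values: list[float]) -> float:
    return prod(values) ** (1 / len(values))

fps_pairs = [(142, 97), (88, 61), (120, 74)]  # (new card, old card) per title
watts_new, watts_old = 300, 225               # board power under load

ratios = [(new / watts_new) / (old / watts_old) for new, old in fps_pairs]
print(f"geomean perf/W change: {geomean(ratios) - 1:+.0%}")
```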
The bottom line being there's more than one way to skin the cat, and while the jump is tremendous indeed, I can't say it's that surprising, not to me at least. In case you forgot, AMD has led Nvidia in perf/W and overall performance in the last decade.
Sorry, what? Did you mean to say the opposite? AMD has been behind Nvidia in perf/W and overall performance since the 780 Ti. That's not quite a decade, but seven years is not nothing, and the closest AMD has come in that time has been the Fury X (near performance parity at slightly higher power) and the 5600 XT (near-outright efficiency superiority, but at relatively low absolute performance).
I'm frankly more impressed by the Zen team's achievements.
I'd say both are about equally impressive, though it remains to be seen if the RDNA team can keep up with the extremely impressive follow-through of the Zen team. RDNA 2 over RDNA 1 is (at least according to AMD's numbers) a change very similar to Zen over Excavator, but since then we've seen significant generational growth for two more generations (with a minor revision in between). On the other hand, RDNA 1 over GCN was also a relatively big jump, but one that had more issues than Zen did (even accounting for Zen's early RAM and BIOS issues). So the comparison is a bit difficult at this point in time, but it's certainly promising for the RDNA team.
 
Joined
Sep 3, 2019
Messages
3,512 (1.84/day)
Location
Thessaloniki, Greece
System Name PC on since Aug 2019, 1st CPU R5 3600 + ASUS ROG RX580 8GB >> MSI Gaming X RX5700XT (Jan 2020)
Processor Ryzen 9 5900X (July 2022), 220W PPT limit, 80C temp limit, CO -6-14, +50MHz (up to 5.0GHz)
Motherboard Gigabyte X570 Aorus Pro (Rev1.0), BIOS F39b, AGESA V2 1.2.0.C
Cooling Arctic Liquid Freezer II 420mm Rev7 (Jan 2024) with off-center mount for Ryzen, TIM: Kryonaut
Memory 2x16GB G.Skill Trident Z Neo GTZN (July 2022) 3667MT/s 1.42V CL16-16-16-16-32-48 1T, tRFC:280, B-die
Video Card(s) Sapphire Nitro+ RX 7900XTX (Dec 2023) 314~467W (375W current) PowerLimit, 1060mV, Adrenalin v24.10.1
Storage Samsung NVMe: 980Pro 1TB(OS 2022), 970Pro 512GB(2019) / SATA-III: 850Pro 1TB(2015) 860Evo 1TB(2020)
Display(s) Dell Alienware AW3423DW 34" QD-OLED curved (1800R), 3440x1440 144Hz (max 175Hz) HDR400/1000, VRR on
Case None... naked on desk
Audio Device(s) Astro A50 headset
Power Supply Corsair HX750i, ATX v2.4, 80+ Platinum, 93% (250~700W), modular, single/dual rail (switch)
Mouse Logitech MX Master (Gen1)
Keyboard Logitech G15 (Gen2) w/ LCDSirReal applet
Software Windows 11 Home 64bit (v24H2, OSBuild 26100.2161), upgraded from Win10 to Win11 on Jan 2024
Don't forget that the next RDNA3 is ~24 months away.
That is a lot longer than the 15-month period from RDNA1 to RDNA2.
The impressive stuff may continue on their all-new platform, in early 2022 for Zen 5 and late 2022 for RDNA3, and it could be bigger than what we've seen so far.
 
Joined
May 15, 2020
Messages
697 (0.42/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W
Don't forget that the next RDNA3 is ~24 months away.
That is a lot longer than the 15-month period from RDNA1 to RDNA2.
The impressive stuff may continue on their all-new platform, in early 2022 for Zen 5 and late 2022 for RDNA3, and it could be bigger than what we've seen so far.
The promise for RDNA is short incremental cycles, just like with Zen, so RDNA3 is due by the end of next year, beginning of 2022 at the latest. That's what everybody is saying, and Lisa just said that development of RDNA3 is well under way.

 
Joined
Apr 12, 2013
Messages
7,532 (1.77/day)
Yeah, I can't imagine AMD taking anywhere near 2 years to move to 5nm/RDNA 3, especially since they probably have full access to TSMC's top nodes now. Nvidia is certainly going to release something better much sooner; AMD can't let the Turing saga play out for another year!
 
Joined
Sep 3, 2019
Messages
3,512 (1.84/day)
Location
Thessaloniki, Greece
System Name PC on since Aug 2019, 1st CPU R5 3600 + ASUS ROG RX580 8GB >> MSI Gaming X RX5700XT (Jan 2020)
Processor Ryzen 9 5900X (July 2022), 220W PPT limit, 80C temp limit, CO -6-14, +50MHz (up to 5.0GHz)
Motherboard Gigabyte X570 Aorus Pro (Rev1.0), BIOS F39b, AGESA V2 1.2.0.C
Cooling Arctic Liquid Freezer II 420mm Rev7 (Jan 2024) with off-center mount for Ryzen, TIM: Kryonaut
Memory 2x16GB G.Skill Trident Z Neo GTZN (July 2022) 3667MT/s 1.42V CL16-16-16-16-32-48 1T, tRFC:280, B-die
Video Card(s) Sapphire Nitro+ RX 7900XTX (Dec 2023) 314~467W (375W current) PowerLimit, 1060mV, Adrenalin v24.10.1
Storage Samsung NVMe: 980Pro 1TB(OS 2022), 970Pro 512GB(2019) / SATA-III: 850Pro 1TB(2015) 860Evo 1TB(2020)
Display(s) Dell Alienware AW3423DW 34" QD-OLED curved (1800R), 3440x1440 144Hz (max 175Hz) HDR400/1000, VRR on
Case None... naked on desk
Audio Device(s) Astro A50 headset
Power Supply Corsair HX750i, ATX v2.4, 80+ Platinum, 93% (250~700W), modular, single/dual rail (switch)
Mouse Logitech MX Master (Gen1)
Keyboard Logitech G15 (Gen2) w/ LCDSirReal applet
Software Windows 11 Home 64bit (v24H2, OSBuild 26100.2161), upgraded from Win10 to Win11 on Jan 2024
Yet they showed on an RDNA2 slide that RDNA3 is for the end of 2022. ;) I'm not making this up...
 
Joined
May 15, 2020
Messages
697 (0.42/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W
Yet they showed on an RDNA2 slide that RDNA3 is for the end of 2022. ;) I'm not making this up...
Show that slide, I showed you mine :p
 
Joined
Sep 3, 2019
Messages
3,512 (1.84/day)
Location
Thessaloniki, Greece
System Name PC on since Aug 2019, 1st CPU R5 3600 + ASUS ROG RX580 8GB >> MSI Gaming X RX5700XT (Jan 2020)
Processor Ryzen 9 5900X (July 2022), 220W PPT limit, 80C temp limit, CO -6-14, +50MHz (up to 5.0GHz)
Motherboard Gigabyte X570 Aorus Pro (Rev1.0), BIOS F39b, AGESA V2 1.2.0.C
Cooling Arctic Liquid Freezer II 420mm Rev7 (Jan 2024) with off-center mount for Ryzen, TIM: Kryonaut
Memory 2x16GB G.Skill Trident Z Neo GTZN (July 2022) 3667MT/s 1.42V CL16-16-16-16-32-48 1T, tRFC:280, B-die
Video Card(s) Sapphire Nitro+ RX 7900XTX (Dec 2023) 314~467W (375W current) PowerLimit, 1060mV, Adrenalin v24.10.1
Storage Samsung NVMe: 980Pro 1TB(OS 2022), 970Pro 512GB(2019) / SATA-III: 850Pro 1TB(2015) 860Evo 1TB(2020)
Display(s) Dell Alienware AW3423DW 34" QD-OLED curved (1800R), 3440x1440 144Hz (max 175Hz) HDR400/1000, VRR on
Case None... naked on desk
Audio Device(s) Astro A50 headset
Power Supply Corsair HX750i, ATX v2.4, 80+ Platinum, 93% (250~700W), modular, single/dual rail (switch)
Mouse Logitech MX Master (Gen1)
Keyboard Logitech G15 (Gen2) w/ LCDSirReal applet
Software Windows 11 Home 64bit (v24H2, OSBuild 26100.2161), upgraded from Win10 to Win11 on Jan 2024
No I'm not... :laugh:
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro

(Source.) It's AMD's typical vague roadmap; it can mean any part of 2022, though 2021 is very unlikely.
 
Joined
Jul 19, 2016
Messages
482 (0.16/day)
I think RDNA2 is the equivalent of Zen 2 in the PC space: extremely competitive with their rival, allowing massive market share gains (they can only go up from 20%).

RDNA3 is said to be another huge leap, and on TSMC's 5nm, whilst Nvidia will be trundling along on 7nm or flirting with Samsung's el cheapo 8nm+ or 7nm (an Nvidia mistake) with their 4000 series.
 
Joined
May 24, 2007
Messages
1,116 (0.17/day)
Location
Florida
System Name Blackwidow/
Processor Ryzen 5950x / Threadripper 3960x
Motherboard Asus x570 Crosshair viii impact/ Asus Zenith ii Extreme
Cooling Ek 240Aio/Custom watercooling
Memory 32gb ddr4 3600MHZ Crucial Ballistix / 32gb ddr4 3600MHZ G.Skill TridentZ Royal
Video Card(s) MSI RX 6900xt/ XFX 6800xt
Storage WD SN850 1TB boot / Samsung 970 evo+ 1tb boot, 6tb WD SN750
Display(s) Sony A80J / Dual LG 27gl850
Case Cooler Master NR200P/ 011 Dynamic XL
Audio Device(s) On board/ Soundblaster ZXR
Power Supply Corsair SF750w/ Seasonic Prime Titanium 1000w
Mouse Razer Viper Ultimate wireless/ Logitech G Pro X Superlight
Keyboard Logitech G915 TKL/ Logitech G915 Wireless
Software Win 10 Pro

(Source.) It's AMD's typical vague roadmap; it can mean any part of 2022, though 2021 is very unlikely.
How I read that map... it's the end of 2021, before 2022. Also, every leaker out there says 2021.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
How I read that map... it's the end of 2021, before 2022. Also, every leaker out there says 2021.
Guess that depends if you're reading the "2022" point as "start of 2022" or "end of 2022". I prefer pessimism with the possibility of being surprised, so I'm firmly in the latter camp.
 
Joined
May 15, 2020
Messages
697 (0.42/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W
Guess that depends if you're reading the "2022" point as "start of 2022" or "end of 2022". I prefer pessimism with the possibility of being surprised, so I'm firmly in the latter camp.
Yes, but leakers :) ...
Plus, while AMD might feel encouraged to slow things down a bit on the CPU side, since they are starting to compete with themselves a bit, in the GPU market they need to keep up the fast pace for quite a while before even hoping to get to a similar position.
 