
AMD Announces the Radeon RX 6000 Series: Performance that Restores Competitiveness

Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
Think about system memory: latency vs. bandwidth, i.e. the gains from tightening timings vs. scaling frequency. I think that's going to come into play here quite a bit with the Infinity Cache situation; it has to. I believe AMD tried to get the design well balanced and efficient, with minimal oddball compromises or imbalances. We can already glean a fair amount from what AMD has shown, but we'll know more for certain with further data, naturally. As I said, I'd like to see the 1080p results. What you're saying is fair, though: we need to know more about Ampere and RDNA2 before we can conclude exactly which parts of each design lead to which performance differences, and how they scale with resolution. It's safe to say there appear to be sweeping design differences between RDNA2 and Ampere when it comes to resolution scaling.

If PCIe 4.0 doubled the bandwidth and cut the I/O bottleneck in half, and Infinity Cache is doing something similar, that's a big deal for Crossfire. Mantle/Vulkan, DX12, VRS, the DirectStorage API, Infinity Fabric, Infinity Cache, PCIe 4.0 and other things all make mGPU easier; if anything, the only real barrier left is developers.


I feel like AMD should just do a quincunx socket setup. Sounds a bit crazy, but they could have four APUs and a central processor, with Infinity Fabric and Infinity Cache between the four APUs and the central chip. The central processor would get shared quad-channel memory, with shared dual-channel access to it from the surrounding APUs. Each APU would have two cores to communicate with its adjacent APUs, and the rest of the die could be GPU. The central processor would probably be a pure CPU design, high IPC and high frequency, perhaps big.LITTLE: one beastly single-core design as the heart of the unit, with eight smaller surrounding physical cores handling odds and ends. There could be a lot of on-the-fly compression/decompression involved as well to maximize bandwidth and increase I/O. The chipset would be gone entirely, integrated into the socketed chips themselves. Lots of bandwidth, processing, single-core and multi-core performance, load balancing, heat distribution, and quick, efficient data transfer between the different parts. It's a fortress of sorts, but it could probably fit within an ATX design reasonably well. You might start out with dual-channel/quad-channel and two socketed chips, the heart/brain plus one APU, and build it up down the road for scalable performance improvements. They could integrate FPGA tech into the equation, but that's another matter, and cyborg matter we probably shouldn't speak of right now, though the cyborg is coming.

I think this also encapsulates the gist of it somewhat.
Prior to this, AMD struggled with instruction pipeline functions. Successively, they streamlined the pipeline's operation flow, dropped instruction latency to 1, and started implementing dual-issued operations. That, or I don't know how they could increase shader speed 7.9-fold by making simple progressions to the same architecture.


And remember, this is only because they had previously experimented with it; otherwise there would be no chance that they'd know first-hand how much power budget it would cost them. SRAM has a narrow efficiency window.
There was a piece a while back which compared AMD's and Intel's cell-to-transistor ratios, with the summary being that AMD had integrated higher and more efficient transistor-count units, all because of available die space.
If I'm not mistaken, RDNA transitioned to some form of twin-CU design, task-scheduling work groups that allow for a kind of serial and/or parallel performance flexibility within them. I could be wrong in my interpretation, but I think it allows them to double down on a single task, or split up and each handle two smaller tasks within the same twin-CU grouping. Basically a working-smarter-not-harder hardware design technique. Granular is where it's at: more neurons.

I think ideally you want a brute-force single core that occupies the most die space, then scale downward by about 50% with twice the core count. So, with four chips of 1c/2c/4c/8c, the performance per core would scale downward as core count increases, but the efficiency per core would increase, and provided it can perform the task quickly enough, that's worth something: it saves power even if it doesn't perform the task as fast, and it doesn't always need to. The 4c/8c chips wouldn't be really ideal for gaming frame rates overall, but they would probably be good for handling and calculating different AI within a game, as opposed to pure rendering; AI animations and such don't have to be as quick and efficient as scene rendering, for example, it's just not as vital. I wonder if variable rate shading will help make better use of core assignments across more cores; in theory it should, if they are assignable.
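A toy sketch of that ganged-vs-split idea (purely illustrative; the lane widths and "steps" here are assumptions for the sake of the example, not AMD's actual scheduler behavior):

```cpp
// Toy model of the "twin CU" / work-group idea described above: a pair of
// CUs can either gang up on one large task or each take a smaller task.
// Numbers and names are illustrative assumptions only.
#include <cstdio>

struct Task { const char* name; int items; };

struct TwinCU {
    static constexpr int laneWidth = 32;

    // Ganged mode: both halves work on a single task, 64 items per step.
    int gangedSteps(const Task& t) const {
        return (t.items + 2 * laneWidth - 1) / (2 * laneWidth);
    }
    // Split mode: each half takes its own task, 32 items per step, in parallel.
    int splitSteps(const Task& a, const Task& b) const {
        int sa = (a.items + laneWidth - 1) / laneWidth;
        int sb = (b.items + laneWidth - 1) / laneWidth;
        return sa > sb ? sa : sb;   // they run side by side
    }
};

int main() {
    TwinCU wgp;
    Task big{"big shader", 4096};
    Task small1{"AI batch A", 512}, small2{"AI batch B", 640};

    printf("ganged on one big task : %d steps\n", wgp.gangedSteps(big));
    printf("split on two small ones: %d steps\n", wgp.splitSteps(small1, small2));
}
```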
 
Last edited:
Joined
Jun 3, 2010
Messages
2,540 (0.48/day)
If I'm not mistaken, RDNA transitioned to some form of twin-CU design, task-scheduling work groups that allow for a kind of serial and/or parallel performance flexibility within them. I could be wrong in my interpretation, but I think it allows them to double down on a single task, or split up and each handle two smaller tasks within the same twin-CU grouping. Basically a working-smarter-not-harder hardware design technique. Granular is where it's at: more neurons.
We can get deep into this subject. It holds so much water.
 
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
Okay, people tend to think of bandwidth as a constant thing ("I'm always pushing 18 Gbps or whatever the hell it is" at all times), and that if the GPU isn't being fed the maximum amount of data at all times, it's going to stall.

The reality is that only a small subset of the data is actually necessary to keep the GPU fed and not stalling. The majority of the data (in a gaming context anyway) isn't anywhere near as latency sensitive and can be much more flexible about when it comes across the bus. IC helps by doing two things. It:
A: Stops writes and subsequent retrievals from going back out to general memory for the majority of that data (letting it live in cache, where it's likely a shader is going to retrieve that information from again), and
B: Helps act as a buffer for further deprioritising data retrieval, letting likely-needed data be fetched earlier, momentarily held in cache, then ingested into the shader pipeline rather than written back out to VRAM. (A rough sketch of the effective-bandwidth effect follows below.)
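As a minimal back-of-the-envelope sketch of that effect: the hit rate and on-die cache bandwidth below are assumptions for illustration, and only the 512 GB/s figure follows from the publicly stated 256-bit GDDR6 at 16 Gbps.

```cpp
// Rough effective-bandwidth model for a large last-level GPU cache.
// Assumptions (illustrative, not measured): 512 GB/s from 256-bit GDDR6 at
// 16 Gbps, a much faster on-die cache, and a guessed hit rate at 4K.
#include <cstdio>

int main() {
    const double vram_bw_gbs  = 512.0;   // 256-bit bus * 16 Gbps / 8
    const double cache_bw_gbs = 2000.0;  // assumed on-die cache bandwidth
    const double hit_rate     = 0.58;    // assumed hit rate at 4K

    // Traffic that hits in cache never touches VRAM, so the average
    // bandwidth seen by the shaders is a blend of the two paths.
    const double effective = hit_rate * cache_bw_gbs + (1.0 - hit_rate) * vram_bw_gbs;

    printf("Effective bandwidth: %.0f GB/s (vs %.0f GB/s raw VRAM)\n",
           effective, vram_bw_gbs);
    // Equally important: only (1 - hit_rate) of the traffic still competes
    // for the external bus, roughly halving the pressure on VRAM here.
}
```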

As for Nvidia, yep, they would have, but the amount of die space being chewed up for even 128 MB of cache is pretty ludicrously large. AMD has balls chasing such a strategy, tbh (and it's probably why we saw 384-bit engineering sample cards earlier in the year: if IC didn't perform, they could fall back to a wider bus).
Agreed on granular chunks/blocks: if you can push more of them, quicker and more efficiently, data flow and congestion are handled better and less stutter is encountered. CF/SLI isn't dead because it doesn't work; it's been regressing for other reasons: developer support, relative power draw for the same performance versus a single-card solution, and user sentiment toward both of those issues. It's not that it doesn't work, it's just less ideal, but done right it offers more performance that scales well, with fewer of the problematic negatives of the past. A lot of it hinges on developers supporting it well, and that's the big problem: no matter how good the tech is, if they implement it poorly you have a real problem if you're reliant on it. The same goes for tech like DLSS, which is great (or useful, anyway) until it's not, or isn't implemented; TXAA was the same deal. It's wonderful to a point, but selectively available with mixed results.

If AMD/Nvidia manage to get away from the developer/power-efficiency/latency quirks with CF/SLI, they'll be great; that's always been what held them back, unfortunately. It's what caused Lucid Hydra to be an overall failure of sorts. I suppose it had its influence just the same, from what was learned from it that could be applied to avoid those same pitfalls: things like the more flexible Mantle/DX12/Vulkan APIs, and even things like variable rate shading. Someone had to break things down into smaller tasks between two separate pieces of hardware and try to make it more efficient, or learn how it could be made better. Eventually we may get close to the Lucid Hydra idea working the way it was actually envisioned, but with more steps involved than they had hoped for.

Rumors say that the next RDNA3 will be closer to the Zen 2/3 approach: chunks of cores/dies tied together with large pools of cache.
That's why I believe it will not come soon. It will be way more than a year.
I would think RDNA3 and Zen 4 will arrive in about the same time frame and be 5nm based, with improvements to caches, cores, frequency, IPC, and power gating on both, plus other possible refinements and introductions. I think big.LITTLE is something to think about, and perhaps some FPGA tech being applied to designs. I wonder if the motherboard chipset will be turned into an FPGA, or incorporate some of that tech, and the same with the CPU/GPU: just re-route some new designs and/or re-configure them a bit depending on need. FPGAs are wonderfully flexible in a great way. Perfect? No, but they'll certainly improve and become even more useful. Unused USB/PCIe/M.2 slots? Cool, I'll reuse that for X or Y. I think it could eventually get to that point, hopefully, and if it can be done efficiently, that's cool as hell.
 
Last edited:
Joined
Sep 26, 2012
Messages
871 (0.20/day)
Location
Australia
System Name ATHENA
Processor AMD 7950X
Motherboard ASUS Crosshair X670E Extreme
Cooling ASUS ROG Ryujin III 360, 13 x Lian Li P28
Memory 2x32GB Trident Z RGB 6000Mhz CL30
Video Card(s) ASUS 4090 STRIX
Storage 3 x Kingston Fury 4TB, 4 x Samsung 870 QVO
Display(s) Acer X38S, Wacom Cintiq Pro 15
Case Lian Li O11 Dynamic EVO
Audio Device(s) Topping DX9, Fluid FPX7 Fader Pro, Beyerdynamic T1 G2, Beyerdynamic MMX300
Power Supply Seasonic PRIME TX-1600
Mouse Xtrfy MZ1 - Zy' Rail, Logitech MX Vertical, Logitech MX Master 3
Keyboard Logitech G915 TKL
VR HMD Oculus Quest 2
Software Windows 11 + Universal Blue
CF/SLI isn't dead because it doesn't work; it's been regressing for other reasons: developer support, relative power draw for the same performance versus a single-card solution, and user sentiment toward both of those issues.

Probably missing the biggest issue: many postprocessing techniques are essentially impossible to do on a Crossfire/SLI solution that uses scene-division or alternate-frame techniques (the most common ways these work).

mGPU tries to deal with this by setting bitmasks to keep certain tasks on a single GPU, plus an abstracted copy engine to reduce coherency requirements, but it comes down to the developer needing to manage that explicitly at the moment (see the sketch below).
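For the curious, here's a minimal sketch of what that explicit management looks like in D3D12's linked-node multi-GPU model. It only shows the node-mask plumbing (one queue per GPU plus a resource visible to both); error handling, synchronisation, and the actual copy/render work are omitted, and it's a sketch rather than production code.

```cpp
// Minimal D3D12 linked-node (explicit mGPU) plumbing sketch.
// Link with d3d12.lib. Shows the NodeMask bookkeeping referred to above.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0, IID_PPV_ARGS(&device));

    const UINT nodeCount = device->GetNodeCount();   // linked GPUs in this adapter

    // One direct (graphics) queue per physical GPU, selected via a NodeMask bit.
    ComPtr<ID3D12CommandQueue> queues[2];
    for (UINT node = 0; node < nodeCount && node < 2; ++node) {
        D3D12_COMMAND_QUEUE_DESC qd = {};
        qd.Type     = D3D12_COMMAND_LIST_TYPE_DIRECT;
        qd.NodeMask = 1u << node;                     // pin this queue to one GPU
        device->CreateCommandQueue(&qd, IID_PPV_ARGS(&queues[node]));
    }

    // A buffer that physically lives on GPU 0 but is visible to GPU 1 as well,
    // so copy work between the nodes can be scheduled explicitly by the app.
    D3D12_HEAP_PROPERTIES heap = {};
    heap.Type             = D3D12_HEAP_TYPE_DEFAULT;
    heap.CreationNodeMask = 0x1;                              // allocated on node 0
    heap.VisibleNodeMask  = (nodeCount > 1) ? 0x3u : 0x1u;    // visible on nodes 0 (and 1)

    D3D12_RESOURCE_DESC buf = {};
    buf.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    buf.Width            = 64 * 1024;
    buf.Height           = 1;
    buf.DepthOrArraySize = 1;
    buf.MipLevels        = 1;
    buf.SampleDesc.Count = 1;
    buf.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    ComPtr<ID3D12Resource> shared;
    device->CreateCommittedResource(&heap, D3D12_HEAP_FLAG_NONE, &buf,
                                    D3D12_RESOURCE_STATE_COMMON, nullptr,
                                    IID_PPV_ARGS(&shared));
    return 0;
}
```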
 
Joined
Jun 3, 2010
Messages
2,540 (0.48/day)
Probably missing the biggest issue, many postprocessing techniques are essentially impossible to do on a Crossfire\SLi solution that are using scene dividing and frame each techniques (the most common way these work).
You have 2 frontends, though. 2 frontends give 2 times faster CU wavefront initiation and SIMD wave instruction issue. While I admit it might split a solid single pipeline into two and create needless time seams during which the pipeline is running idle, let's be careful to notice there are no pipeline stalls in RDNA2 whatsoever. The SIMDs used to run 64-wide waves issued over a 4-cycle latency gap; now they're 32 lanes running 32-wide waves, enough to cover each lane each clock cycle.
It is also not the same pipeline state object between GCN and RDNA2 either; RDNA2 can prioritise compute and can stop the graphics pipeline entirely. Since GPUs are large latency-hiding devices, I think this would give us the necessary time to seam the images back into one before the timestamp is missed, but I'm rambling.
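Just to spell out the arithmetic behind that wave-width point, using the commonly published GCN (SIMD16 running wave64) and RDNA (SIMD32 running wave32) layouts:

```cpp
// Cycles a SIMD needs to issue one instruction for a full wave:
// cycles_per_wave = wave_width / simd_lanes
#include <cstdio>

int main() {
    printf("GCN : wave64 on a 16-lane SIMD -> %d cycles per wave instruction\n", 64 / 16);
    printf("RDNA: wave32 on a 32-lane SIMD -> %d cycle per wave instruction\n", 32 / 32);
}
```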
 
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
Post-processing is an interesting point on the Crossfire/SLI matter. That said, there are workaround solutions to that issue, such as the mCable. I don't see why AMD/Nvidia couldn't build a GPU into the display itself that does more advanced post-processing at that end, in a timely manner.

I also find it a bit odd that interlaced mGPU techniques like 3dfx used haven't made a comeback; the bandwidth savings are huge. Use a somewhat higher resolution and downscale for something akin to a higher DPI. I mean, look at PCIe 3.0 vs. PCIe 4.0: you've got double the bandwidth and, as I see it, half the latency. Interlacing is the same story on bandwidth, and I'd guess latency in turn; combine both and that's 4x the bandwidth at 1/4 the latency. Throw in Infinity Cache, which is very close to the same thing (slightly better, actually), and you're at roughly 8x the bandwidth and 1/8 the latency. Yes, interlacing perceptibly looks a bit worse, which I think is largely down to image sharpness; it's a bit like DLSS: you've got fewer pixels to work with, so of course it appears more blurry and less sharp by contrast. On the plus side, you could combine that with a device like the mClassic, I would think, and work a little magic to upscale the quality.

Then you've got compression as well: you can use LZX compression perfectly fine, for example, though obviously doing that quickly would be challenging depending on the file sizes involved. Limit the file sizes, and doing it on the fly is certainly an option to be considered in the future; that too increases bandwidth and reduces latency through higher I/O.
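Taking the post's own multipliers at face value (PCIe 4.0, interlacing, and Infinity Cache each treated as a rough 2x on bandwidth with a matching cut in latency; these are the post's assumptions, not measured figures), the compounding it describes is just:

```cpp
// Compounding the rough 2x factors used in the post above.
// All three multipliers are the post's own assumptions, not benchmarks.
#include <cstdio>

int main() {
    const double pcie4     = 2.0;  // assumed bandwidth gain over PCIe 3.0
    const double interlace = 2.0;  // assumed saving from interlaced rendering
    const double cache     = 2.0;  // assumed effect of a large on-die cache

    const double bandwidth = pcie4 * interlace * cache;  // ~8x
    const double latency   = 1.0 / bandwidth;            // ~1/8, by the same logic

    printf("combined bandwidth factor: %.0fx, latency factor: 1/%.0f\n",
           bandwidth, 1.0 / latency);
}
```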

You have 2 frontends, though. 2 frontends give 2 times faster CU wavefront initiation and SIMD wave instruction issue. While I admit it might split a solid single pipeline into two and create needless time seams during which the pipeline is running idle, let's be careful to notice there are no pipeline stalls in RDNA2 whatsoever. The SIMDs used to run 64-wide waves issued over a 4-cycle latency gap; now they're 32 lanes running 32-wide waves, enough to cover each lane each clock cycle.
It is also not the same pipeline state object between GCN and RDNA2 either; RDNA2 can prioritise compute and can stop the graphics pipeline entirely. Since GPUs are large latency-hiding devices, I think this would give us the necessary time to seam the images back into one before the timestamp is missed, but I'm rambling.
I'd like to add that the perks of mGPU for path tracing are enormous as well; think how much more quickly denoising could be done in that scenario. The prospect of four discrete GPUs, each with a chunk of Infinity Cache, connected to a CPU with a larger chunk of Infinity Cache that it can split amongst them, is a very real future, and vastly better than 4-way GTX 980/980 Ti setups were with those old, slower, less multicore Intel workstation chips and motherboards. That kind of setup is archaic compared to what we've got now; it may as well be a 486, it just looks so dated next to current tech in so many areas.
 
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
A Ryzen 5600X and 6800 setup looks like it's going to be quite tempting. I wonder if AMD will do any package deals on that combination or not. That'll be great for 1080p/1440p in particular.
 
Joined
Sep 26, 2012
Messages
871 (0.20/day)
Location
Australia
System Name ATHENA
Processor AMD 7950X
Motherboard ASUS Crosshair X670E Extreme
Cooling ASUS ROG Ryujin III 360, 13 x Lian Li P28
Memory 2x32GB Trident Z RGB 6000Mhz CL30
Video Card(s) ASUS 4090 STRIX
Storage 3 x Kingston Fury 4TB, 4 x Samsung 870 QVO
Display(s) Acer X38S, Wacom Cintiq Pro 15
Case Lian Li O11 Dynamic EVO
Audio Device(s) Topping DX9, Fluid FPX7 Fader Pro, Beyerdynamic T1 G2, Beyerdynamic MMX300
Power Supply Seasonic PRIME TX-1600
Mouse Xtrfy MZ1 - Zy' Rail, Logitech MX Vertical, Logitech MX Master 3
Keyboard Logitech G915 TKL
VR HMD Oculus Quest 2
Software Windows 11 + Universal Blue
A Ryzen 5600X and 6800 setup looks like it's going to be quite tempting. I wonder if AMD will do any package deals on that combination or not. That'll be great for 1080p/1440p in particular.

I would be *very* surprised if AMD doesn't offer package deals with 5600X+6700XT, 5800X+6800, 5900X+6800XT & 5950X+6900XT combos, or some sort of rebate system where, if you show you bought both in a single transaction, you can apply for $50 back or something.
 
Joined
Sep 3, 2019
Messages
3,518 (1.84/day)
Location
Thessaloniki, Greece
System Name PC on since Aug 2019, 1st CPU R5 3600 + ASUS ROG RX580 8GB >> MSI Gaming X RX5700XT (Jan 2020)
Processor Ryzen 9 5900X (July 2022), 220W PPT limit, 80C temp limit, CO -6-14, +50MHz (up to 5.0GHz)
Motherboard Gigabyte X570 Aorus Pro (Rev1.0), BIOS F39b, AGESA V2 1.2.0.C
Cooling Arctic Liquid Freezer II 420mm Rev7 (Jan 2024) with off-center mount for Ryzen, TIM: Kryonaut
Memory 2x16GB G.Skill Trident Z Neo GTZN (July 2022) 3667MT/s 1.42V CL16-16-16-16-32-48 1T, tRFC:280, B-die
Video Card(s) Sapphire Nitro+ RX 7900XTX (Dec 2023) 314~467W (375W current) PowerLimit, 1060mV, Adrenalin v24.10.1
Storage Samsung NVMe: 980Pro 1TB(OS 2022), 970Pro 512GB(2019) / SATA-III: 850Pro 1TB(2015) 860Evo 1TB(2020)
Display(s) Dell Alienware AW3423DW 34" QD-OLED curved (1800R), 3440x1440 144Hz (max 175Hz) HDR400/1000, VRR on
Case None... naked on desk
Audio Device(s) Astro A50 headset
Power Supply Corsair HX750i, ATX v2.4, 80+ Platinum, 93% (250~700W), modular, single/dual rail (switch)
Mouse Logitech MX Master (Gen1)
Keyboard Logitech G15 (Gen2) w/ LCDSirReal applet
Software Windows 11 Home 64bit (v24H2, OSBuild 26100.2161), upgraded from Win10 to Win11 on Jan 2024
A Ryzen 5600X and 6800 setup looks like it's going to be quite tempting. I wonder if AMD will do any package deals on that combination or not. That'll be great for 1080p/1440p in particular.
This combo can easily do 4K, unless you're after high competitive framerate
 
Joined
Jan 8, 2017
Messages
9,440 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
unless you're after high competitive framerate

Because then what? You get 350 FPS instead of 320 or something? That system will get you high performance in anything.
 
Joined
Sep 3, 2019
Messages
3,518 (1.84/day)
Location
Thessaloniki, Greece
System Name PC on since Aug 2019, 1st CPU R5 3600 + ASUS ROG RX580 8GB >> MSI Gaming X RX5700XT (Jan 2020)
Processor Ryzen 9 5900X (July 2022), 220W PPT limit, 80C temp limit, CO -6-14, +50MHz (up to 5.0GHz)
Motherboard Gigabyte X570 Aorus Pro (Rev1.0), BIOS F39b, AGESA V2 1.2.0.C
Cooling Arctic Liquid Freezer II 420mm Rev7 (Jan 2024) with off-center mount for Ryzen, TIM: Kryonaut
Memory 2x16GB G.Skill Trident Z Neo GTZN (July 2022) 3667MT/s 1.42V CL16-16-16-16-32-48 1T, tRFC:280, B-die
Video Card(s) Sapphire Nitro+ RX 7900XTX (Dec 2023) 314~467W (375W current) PowerLimit, 1060mV, Adrenalin v24.10.1
Storage Samsung NVMe: 980Pro 1TB(OS 2022), 970Pro 512GB(2019) / SATA-III: 850Pro 1TB(2015) 860Evo 1TB(2020)
Display(s) Dell Alienware AW3423DW 34" QD-OLED curved (1800R), 3440x1440 144Hz (max 175Hz) HDR400/1000, VRR on
Case None... naked on desk
Audio Device(s) Astro A50 headset
Power Supply Corsair HX750i, ATX v2.4, 80+ Platinum, 93% (250~700W), modular, single/dual rail (switch)
Mouse Logitech MX Master (Gen1)
Keyboard Logitech G15 (Gen2) w/ LCDSirReal applet
Software Windows 11 Home 64bit (v24H2, OSBuild 26100.2161), upgraded from Win10 to Win11 on Jan 2024
Because then what? You get 350 FPS instead of 320 or something? That system will get you high performance in anything.
I meant 4K.
He said this system would be great for 1080p/1440p, and I said it could do 4K, unless he wants to stay at a lower res for high (100+) framerates.
All 3 current 6000-series GPUs are meant for 4K, not 1080p/1440p. That was the point...

I didn't give numbers, but that's what I meant.
 
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
I meant 4K.
He said this system would be great for 1080p/1440p, and I said it could do 4K, unless he wants to stay at a lower res for high (100+) framerates.
All 3 current 6000-series GPUs are meant for 4K, not 1080p/1440p. That was the point...

I didn't give numbers, but that's what I meant.
I get what you're saying, and agree it'll handle 4K quite well in addition to 1080p/1440p. I'm leaning towards 120Hz+ at 1080p/1440p, taking into account newer games that are more demanding. I think at 4K that combination won't always deliver 60 FPS quite as fluidly, especially in scenarios where RTRT gets involved, and even otherwise at times, at least not without some subtle compromises to a few settings. You're right, though, that it's plenty capable of 60 FPS+ at 4K in quite a few scenarios, and hell, even upwards of 120 FPS at 4K in some cases with intelligent settings compromises. That said, I don't plan on getting a 4K 120Hz display regardless at current price premiums. The price sweet spot for 100Hz+ displays is definitely the 1080p and 1440p options.
 
Last edited:
Joined
Oct 26, 2019
Messages
117 (0.06/day)
 
Joined
Mar 24, 2012
Messages
533 (0.11/day)
Have you heard about AMD Polaris or AMD Ryzen?

Did it help AMD gain more discrete GPU market share? For the past 10 years we have seen AMD competing on price, and yet their market share has never exceeded the 40% mark; the last time AMD had over 40% was back in 2010. Despite all the undercutting AMD has done for the past 10 years, they have been pretty much suppressed by Nvidia to below 40%, and until recently 30% was about the best they could hold. The latest report from JPR shows that AMD's discrete GPU market share is already down to 20%.

A price war is only effective if you can keep gaining market share from the competitor. With Ryzen it works, but what has happened in the GPU world over the past 10 years shows us that price wars are ineffective against Nvidia, and from what I have seen, when Nvidia starts retaliating with a price war, the one that ends up giving up first is AMD.
 
Joined
May 2, 2017
Messages
7,762 (2.80/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Did it help AMD gain more discrete GPU market share? For the past 10 years we have seen AMD competing on price, and yet their market share has never exceeded the 40% mark; the last time AMD had over 40% was back in 2010. Despite all the undercutting AMD has done for the past 10 years, they have been pretty much suppressed by Nvidia to below 40%, and until recently 30% was about the best they could hold. The latest report from JPR shows that AMD's discrete GPU market share is already down to 20%.

A price war is only effective if you can keep gaining market share from the competitor. With Ryzen it works, but what has happened in the GPU world over the past 10 years shows us that price wars are ineffective against Nvidia, and from what I have seen, when Nvidia starts retaliating with a price war, the one that ends up giving up first is AMD.
That's way too simplistic a view. This drop can't simply be attributed to "AMD is only competing on price"; you also have to factor in everything else that affects this. In other words: the lack of a competitive flagship/high-end solution since the Fury X (2015), the (mostly well deserved) reputation for running hot and being inefficient (not that that matters for most users, but most people at least want a quiet GPU), terrible marketing efforts (remember "Poor Volta"?), overpromising about new architectures, and not least resorting to selling expensive GPUs cheaply due to the inability to scale the core design in a competitive way, eating away at profits and thus R&D budgets and deepening the issues. And that's just scratching the surface. RDNA hasn't made anything worse, but due to the PR disaster that was the state of the drivers (which, while overblown, had some truth to it) it didn't help either.

RDNA 2 rectifies pretty much every single point here. No, the 6900 XT isn't likely to be directly competitive with the 3090 out of the box, but it's close enough, and the 6800 XT and 6800 seem eminently competitive. The XT is $50 cheaper than the 3080, but the non-XT is $80 more than the 3070, so they're not selling these as a budget option. And it's obvious that RDNA 2 can scale down to smaller chips with great performance and efficiency in the higher-volume price ranges.

Does that mean AMD will magically jump to 50% market share? Obviously not. Mindshare gains take a lot of time, and require consistency over time to materialize at all. But it would be extremely surprising if these GPUs don't at least start AMD on that road.
 
Joined
Oct 8, 2006
Messages
173 (0.03/day)
Hoping that the reviews show performance with the 3000-series CPUs and Intel ones, not just the new 5000 series, because that new feature (Smart Access Memory) on the 5000 series won't reflect what the majority of the community runs in their systems, and the performance gained from it will skew the benchmarks. I personally run a 3800X and would like to see the differences between the different ones. I know w1zzard will have that in the review, or in a subsequent review of CPU performance scaling with the 6000 GPUs.
 
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
The big thing with RDNA2 is that it's going to cause Nvidia to react and be more competitive, just like you're seeing with Intel on the CPU side.
 
Joined
May 2, 2017
Messages
7,762 (2.80/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Hoping that the reviews show performance with the 3000-series CPUs and Intel ones, not just the new 5000 series, because that new feature (Smart Access Memory) on the 5000 series won't reflect what the majority of the community runs in their systems, and the performance gained from it will skew the benchmarks. I personally run a 3800X and would like to see the differences between the different ones. I know w1zzard will have that in the review, or in a subsequent review of CPU performance scaling with the 6000 GPUs.
All serious review sites use a fixed test bench configuration for GPU reviews, and don't replace that when reviewing a new product. Moving to a new test setup thus requires re-testing every GPU in the comparison, and is something that is done periodically, but in periods with little review activity. As such, day 1 reviews will obviously keep using the same test bench. This obviously applies to TPU, which uses a 9900K-based test bench.

There will in all likelihood be later articles diving into SAM and similar features, and SAM articles are likely to include comparisons to both Ryzen 3000 and Intel setups, but those will necessarily be separate from the base review. Not least as testing like that would mean a massive increase in the work required: TPU's testing covers 23 games at three resolutions, so 69 data points (plus power and thermal measurements). Expand that to three platforms and you have 207 data points, though ideally you'd want to test Ryzen 5000 with SAM both enabled and disabled to single out its effect, making it 276 data points. Then there's the fact that there are three GPUs to test, and that one would want at least one RTX comparison GPU for each test. Given that reviewers typically get about a week to ready their reviews, there is no way that this could be done in time for a launch review.
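As a quick sanity check of that tally (a trivial sketch per GPU tested, using the game, resolution, and platform counts stated above):

```cpp
// Quick tally of the per-GPU benchmark matrix described above.
#include <cstdio>

int main() {
    const int games       = 23;
    const int resolutions = 3;
    const int platforms   = 3;   // Ryzen 5000, Ryzen 3000, Intel

    const int perPlatform  = games * resolutions;        // 69
    const int allPlatforms = perPlatform * platforms;    // 207
    const int withSamOff   = allPlatforms + perPlatform; // extra SAM-off run on Ryzen 5000 -> 276

    printf("per platform: %d, three platforms: %d, with SAM on/off: %d\n",
           perPlatform, allPlatforms, withSamOff);
    // And all of that would repeat for each of the three new GPUs,
    // plus at least one RTX comparison card.
}
```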

That being said, I'm very much looking forward to w1zzard's SAM deep dive.
 