
AMD Announces the Radeon RX 6000 Series: Performance that Restores Competitiveness

Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I have an RX 480 and never had any issues. Then again, I usually don't update the driver and have left it alone for over a year. I somewhat agree about the Adrenalin suite; I preferred the older versions. Some of them had the option to install only the driver and not the suite, and I often did just that to avoid all the extras I never used or needed. They need to bring that back to Adrenalin: a custom install with a proper selection to pick from, such as driver-only. The only issue I have is the occasional Wattman crash, which doesn't really seem to do anything. I also found Windows 10 1709 and 1809 to be pretty good stability-wise, but they are EOL now.
Driver issues seem to be extremely variable in who gets them. I never had any (that I didn't cause myself with aggressive undervolting or the like) with my Fury X and RX 570, but some people keep having issues across many different cards. As GN said in their video covering the launch: they have been able to recreate some, but it took some effort, so they're not extremely common. Too common, yes, but not a deal-breaker unless you have one of those systems that just doesn't seem to like AMD's drivers.
 
Joined
Feb 8, 2017
Messages
225 (0.08/day)
Marketing rule number 1: always show your best.

Yesterday's Radeon presentation clearly indicates they have matched Ampere's raster performance. They did not show RT numbers, and Super Resolution is something they are still working on. If their RT were as good as Ampere's, they would have shown numbers. Simple deduction.

ALL of the ray-traced games so far use Nvidia's proprietary ray tracing methods. They are based on DXR of course, but completely optimized for Nvidia, so AMD hardware will either not be able to run that ray tracing at all or will run it with worse performance. This doesn't matter much though, as only a handful of games support ray tracing, and only one or two have implementations that are actually decent, as in looking noticeably better than established techniques.

AMD will effectively have the entire console catalogue of games, which will be built and optimized for AMD's RDNA2 implementation of ray tracing.

Personally I think Nvidia pushed ray tracing way too hard; they just needed a "new" feature for the marketing without it actually being ready for practical use. In fact even the Ampere series is poor at tracing rays, and their next generation will be as well, same with AMD. We are realistically at least five years away from being able to properly trace rays in real time in games without making it extremely specific and cutting nine corners out of ten. Right now ray tracing is essentially a marketing gimmick: extremely specific and very limited.

If you actually made a fully ray-traced game, with all the caveats of really tracing rays and trillions of rays in each scene, it would overwhelm existing GPUs; it simply isn't possible. It would render at something like 0.1 fps.

This is why they have to use shortcuts to make it work, and why it's only ever applied to one specific thing: either shadows, or reflections, or global illumination, and so on, never all of them, and even then it's very limited. They cap the rays that get processed, so only a bare minimum of rays are actually traced.

Again, in five years we could have a far better ray tracing implementation, with fuller capabilities that don't cut as many corners and come somewhat close to offline-rendered ray tracing, instead of what is essentially a gimmick that tanks your performance by 50% even with very specific and very limited tracing. And again, if you did more complete ray tracing today it would tank performance completely; you'd be running at less than one frame per second.
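To put rough numbers on that argument, here is a quick back-of-the-envelope sketch; the sample count, bounce count and rays-per-second figure are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope ray budget: "full" path tracing at 4K versus a single
# hybrid effect tracing a fraction of a ray per pixel. All figures are
# assumptions chosen only to show the orders of magnitude involved.
pixels = 3840 * 2160                     # one 4K frame
spp, bounces = 1024, 4                   # offline-style samples per pixel and bounces
rays_per_frame = pixels * spp * bounces

gpu_rays_per_sec = 10e9                  # assume a ~10 gigarays/s class GPU
print(f"rays per frame      : {rays_per_frame / 1e9:,.0f} Grays")
print(f"achievable framerate: {gpu_rays_per_sec / rays_per_frame:.2f} fps")

hybrid_rays = pixels * 0.5               # ~0.5 rays per pixel, e.g. reflections only
print(f"hybrid effect       : {hybrid_rays / 1e6:.0f} Mrays per frame")
```

With those assumptions you land at well under one frame per second for the "full" case, which is exactly why current games trace only a thin slice of the problem.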
 
Joined
Sep 3, 2019
Messages
3,507 (1.84/day)
Location
Thessaloniki, Greece
System Name PC on since Aug 2019, 1st CPU R5 3600 + ASUS ROG RX580 8GB >> MSI Gaming X RX5700XT (Jan 2020)
Processor Ryzen 9 5900X (July 2022), 220W PPT limit, 80C temp limit, CO -6-14, +50MHz (up to 5.0GHz)
Motherboard Gigabyte X570 Aorus Pro (Rev1.0), BIOS F39b, AGESA V2 1.2.0.C
Cooling Arctic Liquid Freezer II 420mm Rev7 (Jan 2024) with off-center mount for Ryzen, TIM: Kryonaut
Memory 2x16GB G.Skill Trident Z Neo GTZN (July 2022) 3667MT/s 1.42V CL16-16-16-16-32-48 1T, tRFC:280, B-die
Video Card(s) Sapphire Nitro+ RX 7900XTX (Dec 2023) 314~467W (375W current) PowerLimit, 1060mV, Adrenalin v24.10.1
Storage Samsung NVMe: 980Pro 1TB(OS 2022), 970Pro 512GB(2019) / SATA-III: 850Pro 1TB(2015) 860Evo 1TB(2020)
Display(s) Dell Alienware AW3423DW 34" QD-OLED curved (1800R), 3440x1440 144Hz (max 175Hz) HDR400/1000, VRR on
Case None... naked on desk
Audio Device(s) Astro A50 headset
Power Supply Corsair HX750i, ATX v2.4, 80+ Platinum, 93% (250~700W), modular, single/dual rail (switch)
Mouse Logitech MX Master (Gen1)
Keyboard Logitech G15 (Gen2) w/ LCDSirReal applet
Software Windows 11 Home 64bit (v24H2, OSBuild 26100.2161), upgraded from Win10 to Win11 on Jan 2024
I was honestly expecting more, especially from the 6800, which was my target. I mean, 16 GB of VRAM is highly unnecessary (10 would have been perfect), and the price, probably because of that amount of VRAM, is $50-60 higher than the sweet spot, and definitely too close to the 6800XT.
We know nothing about RT performance, so we should wait for the reviews before drawing any conclusions.


When did they speak about X570 ???
They said it alright...

Copy from another thread:

"I can see there is a lot of confusion about the new feature AMD is calling "Smart Access Memory" and how it works. My 0.02 on the subject.
According to the presentation the SAM feature can be enabled only in 500series boards with a ZEN3 CPU installed. My assumption is that they use PCI-E 4.0 capabilities for this, but I'll get back to that.
The SAM feature has nothing to do with InfinityCache. IC is used to compensate the 256bit bandwithd between the GPU and VRAM. Thats it, end of story. And according to AMD this is equivalent of a 833bit bus. Again, this has nothing to do with SAM. IC is in the GPU and works for all systems the same way. They didnt say you have to do anything to "get it" to work. If it works with the same effectiveness with all games we will have to see.

Smart Access Memory
They use SAM to have CPU access to VRAM and probably speed up things a little on the CPU side. Thats it. They said it in the presentation, and they showed it also...
And they probably can get this done because of PCI-E 4.0 speed capability. If true thats why no 400series support.
They also said that this feature may be better in future than it is today, once game developers optimize their games for it.
I think AMD just made PCI-E 4.0 (on their own platform) more relevant than it was until now!"

Full CPU access to GPU memory: [attachment 173701]
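For anyone curious what this looks like in practice: SAM is broadly understood to build on the PCIe Resizable BAR capability, which lets the GPU expose its whole VRAM as one CPU-mappable region instead of the usual 256 MB window. A minimal Linux sketch to inspect that (the PCI address is a placeholder; look yours up with lspci):

```python
# Hypothetical sketch: list the memory regions (BARs) a GPU exposes over PCIe
# on Linux. With Resizable BAR / Smart Access Memory active, one region should
# roughly match the card's full VRAM instead of the usual 256 MiB window.
from pathlib import Path

def bar_sizes_mib(pci_addr: str = "0000:0c:00.0") -> list[float]:
    """Return the size in MiB of each populated memory region of a PCI device."""
    sizes = []
    resource = Path(f"/sys/bus/pci/devices/{pci_addr}/resource")
    for line in resource.read_text().splitlines():
        start, end, _flags = (int(field, 16) for field in line.split())
        if end > start:                      # unused regions are listed as all zeros
            sizes.append((end - start + 1) / 2**20)
    return sizes

if __name__ == "__main__":
    for index, size in enumerate(bar_sizes_mib()):
        print(f"region {index}: {size:.0f} MiB")
```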

----------------------------------------------------------------------


So who knows better than AMD if the 16GB is necessary or not?
 
Joined
Aug 5, 2019
Messages
808 (0.42/day)
System Name Apex Raptor: Silverback
Processor Intel i9 13900KS Allcore @ 5.8
Motherboard z790 Apex
Cooling LT720 360mm + Phanteks T30
Memory 32GB @8000MT/s CL36
Video Card(s) RTX 4090
Storage 990 PRO 4TB
Display(s) Neo G8 / C1 65"
Case Antec Performance 1
Audio Device(s) DT 1990 Pro / Motu M2
Power Supply Prime Ultra Titanium 1000w
Mouse Scimitar Pro
Keyboard K95 Platinum
nope


DXR is M$'s
 
Joined
Jul 5, 2013
Messages
27,752 (6.67/day)
I know I'm late to the party so I'm repeating what's already been said (TL;DR), oh well, but HOT DAMN! If those numbers are real, AMD has got the goods to make life interesting (perhaps even difficult) for NVidia. AMD is also not skimping on the VRAM either. It looks like this round of GPU king-of-the-hill is going to AMD!

This is a very good time for the consumer in the PC industry!!

What I find most interesting is that the 6900XT might be even better than the 3090 at 8k gaming. Really looking forward to more tests with 8k panels like the one LTT did, but more fleshed out and expansive.
 
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
I wonder when/if 30, 36, 40CU cards and other cards will be released??
52 CU and 44 CU are the next logical steps based on what's already been released; AMD seems to disable 8 CUs at a time. I can see them doing a 10 GB or 14 GB capacity device. It would be interesting if they used GDDR6 and GDDR6X together alongside variable rate shading, say using the GDDR6 when the scene image quality is scaled back and the GDDR6X at the higher quality, giving mixed performance at a better price. I would think they'd consider reducing the memory bus width to 128-bit or 192-bit for SKUs with those CU counts, though, if paired with Infinity Cache.

It's also interesting to think about how Infinity Cache affects latency in a CrossFire setup; I'd expect less micro-stutter. The 99th percentiles will be interesting to look at for RDNA2 with all the added bandwidth and I/O. I suppose 36 CUs is possible as well by extension, but I don't know how the profit margins would look: the low end of the market is eroding further each generation, not to mention that Intel entering the dGPU market will compound that situation. I don't think a 30 CU part is likely for RDNA2; it would end up being 28 CU if anything, and that seems doubtful unless they wanted it for an APU or mobile.
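For reference, the raw GDDR6 bandwidth at those hypothetical bus widths, assuming the same 16 Gbps memory speed as the announced cards (my assumption for any cut-down SKU):

```python
# Raw GDDR6 bandwidth = (bus width / 8) bytes per transfer * data rate in Gbps.
# 16 Gbps modules are assumed; a cut-down SKU could ship slower memory.
data_rate_gbps = 16
for bus_bits in (128, 192, 256):
    gb_per_s = bus_bits / 8 * data_rate_gbps
    print(f"{bus_bits:3d}-bit bus -> {gb_per_s:.0f} GB/s before any Infinity Cache benefit")
```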
 
Joined
Apr 24, 2019
Messages
185 (0.09/day)
10K CUDA cores, 384-bit, 35 TFLOPS, 350 W, all just to match the 256-bit, 23 TFLOPS, 300 W 6900XT, lol. Ampere is a failed architecture. Let's hope Hopper changes something for Nvidia. Until then, RIP Ampere.
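For what it's worth, those TFLOPS figures do line up with the commonly quoted shader counts and boost clocks; a quick sanity check (the counts and clocks are my assumptions, not official spec sheets):

```python
# Peak FP32 throughput = shaders * 2 ops per clock (FMA) * boost clock.
def fp32_tflops(shaders: int, boost_ghz: float) -> float:
    return shaders * 2 * boost_ghz / 1000

print(f"RTX 3090 : {fp32_tflops(10496, 1.70):.1f} TFLOPS")   # ~35.7
print(f"RX 6900XT: {fp32_tflops(5120, 2.25):.1f} TFLOPS")    # ~23.0
```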
 

SLK

Joined
Sep 2, 2019
Messages
30 (0.02/day)
ALL of the ray-traced games so far use Nvidia's proprietary ray tracing methods. They are based on DXR of course, but completely optimized for Nvidia, so AMD hardware will either not be able to run that ray tracing at all or will run it with worse performance. This doesn't matter much though, as only a handful of games support ray tracing, and only one or two have implementations that are actually decent, as in looking noticeably better than established techniques.

AMD will effectively have the entire console catalogue of games, which will be built and optimized for AMD's RDNA2 implementation of ray tracing.

Personally I think Nvidia pushed ray tracing way too hard; they just needed a "new" feature for the marketing without it actually being ready for practical use. In fact even the Ampere series is poor at tracing rays, and their next generation will be as well, same with AMD. We are realistically at least five years away from being able to properly trace rays in real time in games without making it extremely specific and cutting nine corners out of ten. Right now ray tracing is essentially a marketing gimmick: extremely specific and very limited.

If you actually made a fully ray-traced game, with all the caveats of really tracing rays and trillions of rays in each scene, it would overwhelm existing GPUs; it simply isn't possible. It would render at something like 0.1 fps.

This is why they have to use shortcuts to make it work, and why it's only ever applied to one specific thing: either shadows, or reflections, or global illumination, and so on, never all of them, and even then it's very limited. They cap the rays that get processed, so only a bare minimum of rays are actually traced.

Again, in five years we could have a far better ray tracing implementation, with fuller capabilities that don't cut as many corners and come somewhat close to offline-rendered ray tracing, instead of what is essentially a gimmick that tanks your performance by 50% even with very specific and very limited tracing. And again, if you did more complete ray tracing today it would tank performance completely; you'd be running at less than one frame per second.

True, full ray tracing, aka path tracing, is too expensive right now, hence the hybrid rendering and tools like DLSS to make it feasible. However, even in its current form it looks very good. I have played Control, Metro Exodus and Minecraft, and it just looks beautiful. In slow-moving games you can really experience the glory of ray tracing, and it's hard to go back to the normal version after that. In fast-paced titles though, like Battlefield or Fortnite, I don't really notice it.
 
Joined
Sep 3, 2019
Messages
3,507 (1.84/day)
Location
Thessaloniki, Greece
10K CUDA cores, 384-bit, 35 TFLOPS, 350 W, all just to match the 256-bit, 23 TFLOPS, 300 W 6900XT, lol. Ampere is a failed architecture. Let's hope Hopper changes something for Nvidia. Until then, RIP Ampere.
To be honest, those AMD charts for the 6900XT vs the 3090 are with the AMD GPU overclocked and Smart Access Memory on. So that's not 300 W for starters.
I guess it wouldn't be 350+ W, but it's still not 300 W.

I'm not saying that what AMD has accomplished is not impressive. It is more than just impressive. And the SAM feature, with a 5000-series CPU and a 500-series board, might change the game.

And to clarify something: SAM will be available on all 500-series boards, not only X570. They use the PCI-E 4.0 interconnect between CPU and GPU so that the former can access VRAM, and all 500-series boards provide PCI-E 4.0 to the GPU.
 
Joined
May 8, 2018
Messages
1,568 (0.66/day)
Location
London, UK
I would have preferred 10 or 12 GB of VRAM for $50 less.
16 GB for the intended target (mostly 1440p) is totally useless.

Yeah, but that 16 GB card will likely be a 3080 Ti, and that is targeted at 4K; the 16 or 20 GB 3080 was cancelled.
 
Joined
Sep 3, 2019
Messages
3,507 (1.84/day)
Location
Thessaloniki, Greece
They need more VRAM for Smart Access Memory.
 
Joined
May 8, 2018
Messages
1,568 (0.66/day)
Location
London, UK
10K CUDA cores, 384-bit, 35 TFLOPS, 350 W, all just to match the 256-bit, 23 TFLOPS, 300 W 6900XT, lol. Ampere is a failed architecture. Let's hope Hopper changes something for Nvidia. Until then, RIP Ampere.

Failure? They just haven't used a cache like AMD did here. As Lisa said, they used cache the way they did with Zen 3. Imagine if Nvidia uses a cache: they would likely get the upper hand, but it takes time and effort to develop such a product, and if it happens it will be in two years or so. So AMD has two years to work out how to get closer to Nvidia on ray tracing, and Nvidia has two years to work out how to implement a cache on their GPUs like AMD did.
 
Joined
Sep 3, 2019
Messages
3,507 (1.84/day)
Location
Thessaloniki, Greece
Failure? They just haven't used a cache like AMD did here. As Lisa said, they used cache the way they did with Zen 3. Imagine if Nvidia uses a cache: they would likely get the upper hand, but it takes time and effort to develop such a product, and if it happens it will be in two years or so. So AMD has two years to work out how to get closer to Nvidia on ray tracing, and Nvidia has two years to work out how to implement a cache on their GPUs like AMD did.
That would require a complete GPU redesign. They already occupy a large portion of the die with Tensor and RT cores, and a very different memory controller would also be needed.
The path that nVidia has chosen does not allow them to implement such a cache, and I really doubt they will abandon Tensor and RT cores in the future.
 
Joined
May 8, 2018
Messages
1,568 (0.66/day)
Location
London, UK
That would require a complete GPU redesign. They already occupy a large portion of the die with Tensor and RT cores, and a very different memory controller would also be needed.
The path that nVidia has chosen does not allow them to implement such a cache, and I really doubt they will abandon Tensor and RT cores in the future.

So if that is true, then they have to find a way to outdo AMD's cache. Like I said before, if AMD pushes to 384-bit GDDR6 like Nvidia, then Nvidia is doomed; 256-bit is already beating Nvidia's best.
 
Joined
Sep 3, 2019
Messages
3,507 (1.84/day)
Location
Thessaloniki, Greece
If we believe AMD's numbers, that structure (256-bit + 128 MB) is giving them the equivalent (effective) of an 833-bit GDDR6 bus.
The thing is that we don't know whether increasing the actual bus width to 320 or 384 bits would scale well. You have to have stronger cores to utilize the extra (raw or effective) bandwidth.

EDIT PS:
They would also have to redesign the memory controller for a wider bus, which means a more expensive, larger die.
It's a no-go...
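For a crude sense of where an "effective bus width" figure like 833-bit can come from, here is a simple hit-rate model; the hit rate and cache bandwidth values are placeholders I picked, not AMD figures:

```python
# Rough model of "effective" bandwidth from a large on-die cache sitting in
# front of a narrow GDDR6 bus. Hit rate and cache bandwidth are placeholders,
# not AMD-published numbers; the point is that the multiplier depends on both.
def effective_bandwidth(dram_gbs: float, cache_gbs: float, hit_rate: float) -> float:
    return hit_rate * cache_gbs + (1 - hit_rate) * dram_gbs

dram = 512.0      # 256-bit bus with 16 Gbps GDDR6
cache = 2000.0    # assumed on-die cache bandwidth
for hit in (0.4, 0.6, 0.8):
    eff = effective_bandwidth(dram, cache, hit)
    print(f"hit rate {hit:.0%}: ~{eff:.0f} GB/s effective, like a {256 * eff / dram:.0f}-bit bus")
```

Note how strongly the result depends on the hit rate, which is exactly why a wider physical bus would not necessarily scale the way the marketing figure suggests.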
 
Joined
Aug 27, 2013
Messages
55 (0.01/day)
System Name Redemption(Fractal DD XL R2 mod)
Processor Intel Core i7 4770K
Motherboard Gigabyte Z97X-G1 Gaming BLK WIFI
Cooling Water Cooled
Memory G. Skill Ripjaws Z 16 GB 1866 MHz
Video Card(s) 2 x Gigabyte R9 290X with EK blocks
Storage 256 GB Samsung 830 SSD
Display(s) Dell U2713HM
Case Fractal Design Define XL R2 (Modified)
Audio Device(s) Creative SB Z
Power Supply Silverstone Stider Evolution 1000 Watt Gold with individual MDPC sleeves
Software Windows 7 Ultimate
As far as I remember all the leaks and rumors, there is not going to be an AIB 6900XT

For real? That would be insane if they don't have AIBs involved. Imagine the amount of money they'll make when (if) the reviews back up their performance claims.
 
Joined
Sep 3, 2019
Messages
3,507 (1.84/day)
Location
Thessaloniki, Greece
AIB 6900XTs would mean $1100-1200 prices, if not more. Maybe AMD doesn't want that.
But then again... 6800XT AIB cards would mean matching the 6900XT for less ($700-800+).

It's complicated...
 
Joined
May 15, 2020
Messages
697 (0.42/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W
So if that is true, then they have to find a way to outdo AMD's cache. Like I said before, if AMD pushes to 384-bit GDDR6 like Nvidia, then Nvidia is doomed; 256-bit is already beating Nvidia's best.
I think it's time to rein in the hype train a bit. First off, we have no idea how this Infinity Cache scales up or down; if you just assume linear scaling, you're probably wrong.

Second, remember Fermi: Nvidia put out a first generation of cards which were horrible, and then they iterated on the same node just 6 months later and fixed most of the problems.

The theoretical bandwidth shown on AMD's slides is just as theoretical as Ampere's TFLOPS.

Let's not get ahead of ourselves, AMD did well in this skirmish, the war is far from over.
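On the scaling point: here is a toy model of why a linear assumption is likely wrong, using the old rule of thumb that miss rate falls roughly with the square root of cache capacity (the numbers are purely illustrative):

```python
# Toy model of why cache benefit does not scale linearly with capacity:
# a classic rule of thumb has miss rate falling roughly with the square
# root of cache size. The 40% baseline miss rate is an arbitrary assumption.
base_size_mb, base_miss = 128, 0.40
for size_mb in (32, 64, 128, 256):
    miss = base_miss * (base_size_mb / size_mb) ** 0.5
    print(f"{size_mb:3d} MB cache -> ~{1 - miss:.0%} hit rate (toy model)")
```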
 
Joined
May 8, 2018
Messages
1,568 (0.66/day)
Location
London, UK
I think it's time to rein in the hype train a bit. First off, we have no idea how this Infinity Cache scales up or down; if you just assume linear scaling, you're probably wrong.

Second, remember Fermi: Nvidia put out a first generation of cards which were horrible, and then they iterated on the same node just 6 months later and fixed most of the problems.

The theoretical bandwidth shown on AMD's slides is just as theoretical as Ampere's TFLOPS.

Let's not get ahead of ourselves, AMD did well in this skirmish, the war is far from over.

In my view, AMD could have beaten Nvidia but didn't want to. I guess they are doing what they did to Intel: Zen 2 was competitive with Intel, Zen 3 beat Intel. I guess this time will be similar: the 6000 series is competitive, the 7000 series will beat Nvidia.

Before this I said don't underestimate AMD; AMD has new management. But to tell you the truth, I myself didn't think AMD could be competitive with Nvidia this time around. I guessed 30% less performance than the 3080, and I was wrong.
 
Joined
May 15, 2020
Messages
697 (0.42/day)
Location
France
In my view, AMD could have beaten Nvidia but didn't want to. I guess they are doing what they did to Intel: Zen 2 was competitive with Intel, Zen 3 beat Intel. I guess this time will be similar: the 6000 series is competitive, the 7000 series will beat Nvidia.
Only Nvidia is not Intel. Nvidia is a fast-responding company, capable of making decisions in days and implementing them in weeks or months. They have tons of cash (not cache :p ), good management, loads of good engineers, and excellent marketing and mindshare. Intel only had tons of cash.
Edit: ...and mindshare, to be honest, which they haven't completely eroded yet, at least in some markets.
 
Joined
Sep 3, 2019
Messages
3,507 (1.84/day)
Location
Thessaloniki, Greece
nVidia is not exactly in the position that Intel is in. Sure, they made some apparently dumb decisions, but they have the resources to come back soon, probably sooner than RDNA3.
The fact that RDNA3 is two years out gives nVidia room to respond.
 
Joined
May 8, 2018
Messages
1,568 (0.66/day)
Location
London, UK
Well, we have to wait for reviews to confirm what AMD showed. It's hard to believe even when you see that it's real; AMD was so far behind that, if it's all true, we have to start believing in miracles too, if you don't already.
 
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
I'm a bit perplexed about how Smart Access Memory works compared to how VRAM access has always worked. What's the key difference between the two, say as a flow chart? What's being done differently? Doesn't the CPU always have access to VRAM anyway? I imagine it's bypassing some step in the chain for quicker access than how it has been handled in the past, but that's the part I'm curious about. I mean, I can access a GPU's VRAM now, and the CPU and system memory obviously play some role in the process.

The mere fact that VRAM transfer performance slows down around the point where my CPU's L2 cache is saturated seems to indicate the CPU design plays a role, though it appears to be bottlenecked by system memory performance along with the CPU's L2 cache and core count (not thread count), which contributes to the overall combined L2 cache. You see a huge regression in performance beyond the theoretical limits of the L2 cache: it seems to peak at that point (it's 4-way on my CPU), slows a bit up to 2 MB transfer sizes, then drops off quite rapidly after that. If you disable physical cores, the bandwidth regresses as well, so the combined L2 cache affects it from what I've seen.

10K CUDA cores, 384-bit, 35 TFLOPS, 350 W, all just to match the 256-bit, 23 TFLOPS, 300 W 6900XT, lol. Ampere is a failed architecture. Let's hope Hopper changes something for Nvidia. Until then, RIP Ampere.
Yeah, I'd definitely want the Radeon in an mGPU setup over the Nvidia in the same scenario: 600 W versus 700 W, not to mention that Infinity Cache could have a real latency benefit there as well. I'm curious whether the lower-end models will have CrossFire support or not; I didn't see any real mention of CF tech for RDNA2, but they had a lot of other things to cover.

I think a 128-bit card with fewer CUs (44/52) and the same Infinity Cache could potentially be even better: a lower overall TDP, perhaps the same VRAM capacity, but overall maybe quicker than the 6800XT at a similar price, which would be hugely popular and widely successful. I think a 44 CU part of that nature would probably be enough to beat the 6800XT slightly and could probably cost less, plus you could upgrade to that level of performance later. It might not win strictly on TDP; then again, maybe it's close if AMD is pushing the clock frequency steeply and letting efficiency go out the window as a byproduct.

Now I wonder if the Infinity Cache in CrossFire could be split 50/50, with 64 MB on each GPU that the CPU can access and the other 64 MB on each shared between the GPUs, reducing the inter-GPU latency and the bandwidth to and from the CPU. The other interesting part: maybe it can only expose 128 MB now, but once a newer compatible CPU launches it could expose 256 MB of smart cache to the CPU, 128 MB from each GPU in CrossFire? Really interesting stuff to explore.
 