Monday, September 19th 2022

AMD Radeon RX 7000-series RDNA3 GPUs Approach 4 GHz GPU Clocks

AMD's upcoming Radeon RX 7000-series GPUs, based on the RDNA3 graphics architecture, are rumored to be capable of engine clocks (GPU clocks) close to 4 GHz. This is plausible, given that the current-gen RX 6000 series can already hit 3 GHz. AMD's play against the RTX 4090 would hence be a product with a 50%+ performance-per-Watt gain over the previous generation, a significantly increased shader count, an over-70% increase in memory bandwidth (384-bit memory running at 20 Gbps or more), faster/larger Infinity Cache, and to top it all off, engine clocks approaching 4 GHz.
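For perspective on that bandwidth figure, here is a quick back-of-the-envelope check; it assumes the RX 6900 XT's 256-bit bus at 16 Gbps as the baseline, which the rumor itself does not state:

```python
# Hedged sanity check of the "over 70% more memory bandwidth" claim.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bus width (bits) x per-pin data rate / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

current = bandwidth_gb_s(256, 16)  # assumed baseline: RX 6900 XT -> 512 GB/s
rumored = bandwidth_gb_s(384, 20)  # rumored config: 384-bit @ 20 Gbps -> 960 GB/s
print(f"{current:.0f} GB/s -> {rumored:.0f} GB/s (+{(rumored / current - 1) * 100:.1f}%)")
```

That works out to +87.5% over a 6900 XT baseline; measured against the 6950 XT's 18 Gbps memory (576 GB/s), the gain would be about +67%.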
Source: HXL (Twitter)

35 Comments on AMD Radeon RX 7000-series RDNA3 GPUs Approach 4 GHz GPU Clocks

#1
Minus Infinity
I'm almost certain RDNA3 will be my next card, probably the 7800 XT. A 7900X3D + 7800 XT combo with 64 GB of CL30-ish DDR5-6000 RAM is sounding better by the day.
Posted on Reply
#2
Denver
I thought something like that was impossible...
Posted on Reply
#3
Bruno_O
Minus InfinityI'm almost certain RDNA3 will be my next card, probably the 7800 XT. A 7900X3D + 7800 XT combo with 64 GB of CL30-ish DDR5-6000 RAM is sounding better by the day.
Hope that 64 GB isn't for gaming... or it will be underutilized by 70%+ during its lifetime.
Posted on Reply
#4
wolf
Better Than Native
Interesting if true! Ada vs RDNA3 is going to be a lot of fun it seems.
Posted on Reply
#5
beedoo
wolfInteresting if true! Ada vs RDNA3 is going to be a lot of fun it seems.
Interesting, as in the conversations will go... "AMD is 1% behind Nvidia in ray tracing, therefore AMD is crap!"
Posted on Reply
#6
wolf
Better Than Native
beedooInteresting, as in the conversations will go... "AMD is 1% behind Nvidia in ray tracing, therefore AMD is crap!"
More just a heated battle, competition benefits us all and I won't limit myself to either camp, ever.

Naturally there are haters and cherry-pickers on both sides, which is especially amusing considering the rich history of the two swapping leads in various areas like VRAM, efficiency, features, holding the performance crown, etc. Spending a lot of time in forums and subreddits, I've seen a lot of takes.

In Ampere vs RDNA2, I've seen this from the pro-RDNA2-over-Ampere folk, including but not limited to:
  • VRAM is king, and Ampere will certainly age poorly for this
  • Efficiency is everything
  • RDNA2 has more "raw power" than Ampere
  • RT is a gimmick
  • DLSS is a gimmick (admittedly, now that FSR is out, this has largely subsided)
  • NVENC/RTX Voice/CUDA isn't a selling point
  • Ngreedia/they're evil/shady tactics/closed ecosystem/holding the industry back/never a dime again, etc.
In Ampere vs RDNA2, I've seen this from the pro-Ampere-over-RDNA2 folk, including but not limited to:
  • Ampere is the more forward-looking architecture
  • The VRAM amount is lower than desired, but largely suitable for the power band the respective cards occupy
  • Very efficient when tweaked
  • GDDR6X is a major cause of the power consumption; the cores themselves aren't entirely inefficient
  • Equal "raw power" to RDNA2, but better RT
  • RT is the future, and Ampere already does decently well with respect to each product's targeted resolution/framerate
  • Image reconstruction (DLSS) is amazing, and without Nvidia pushing this new wave, we wouldn't have FSR/XeSS
  • AMD drivers are still meh, according to a vocal minority
Let's get the popcorn ready for what Ada vs RDNA3 will bring, hey? Some of these points will surely remain, but a lot of the rest could change or equalize.
Posted on Reply
#7
Unregistered
wolfMore just a heated battle, competition benefits us all and I won't limit myself to either camp, ever. [...]
It seems we turn everything into us vs them.
#8
ratirt
4 GHz is almost like a CPU. A very high frequency. I only hope it won't be toasty because of it, not to mention the power consumption. Anyway, considering the strides both companies are making in power consumption, this one will probably be sucking a lot of watts, just like the NV cards will. I hope I'm wrong.
Posted on Reply
#9
AusWolf
ratirt4 GHz is almost like a CPU. A very high frequency. I only hope it won't be toasty because of it, not to mention the power consumption. Anyway, considering the strides both companies are making in power consumption, this one will probably be sucking a lot of watts, just like the NV cards will. I hope I'm wrong.
Exactly my thoughts. For the first time ever, my next upgrade path will be decided by power consumption and heat, not performance or price.
Posted on Reply
#10
ZoneDymo
ratirt4 GHz is almost like a CPU. A very high frequency. I only hope it won't be toasty because of it, not to mention the power consumption. Anyway, considering the strides both companies are making in power consumption, this one will probably be sucking a lot of watts, just like the NV cards will. I hope I'm wrong.
I mean, go back to the 2600K and it's higher than that CPU, so yep, it is about that speed.
But I always find it interesting when stuff like this is mentioned (the title of the article, I mean), considering it in itself is rather meaningless.

A 5 GHz Pentium D is slower than a 3.6 GHz Core 2 Duo (yes, old example, I know). I guess we do have RDNA2 to compare it against somewhat, but still...
Posted on Reply
#11
ratirt
ZoneDymoI mean, go back to the 2600K and it's higher than that CPU, so yep, it is about that speed.
But I always find it interesting when stuff like this is mentioned (the title of the article, I mean), considering it in itself is rather meaningless.

A 5 GHz Pentium D is slower than a 3.6 GHz Core 2 Duo (yes, old example, I know). I guess we do have RDNA2 to compare it against somewhat, but still...
It has been apparent that frequency is not everything. That was shown in the Pentium D (if I remember correctly) and Athlon era: the Pentiums were clocked higher and yet were still slower. That is why Intel had to revise the architecture. Looking only at clocks will get you nowhere.
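A toy model of that point, with illustrative (made-up) IPC numbers rather than measured ones:

```python
# Performance scales roughly with IPC x clock, so the higher-clocked chip can
# still lose. The IPC values below are illustrative assumptions, not benchmarks.
def relative_perf(ipc: float, clock_ghz: float) -> float:
    return ipc * clock_ghz

netburst = relative_perf(ipc=1.0, clock_ghz=5.0)  # Pentium D-style: deep pipeline, high clock
conroe = relative_perf(ipc=1.6, clock_ghz=3.6)    # Core 2-style: wider core, lower clock
print(f"5.0 GHz part: {netburst:.1f}  vs  3.6 GHz part: {conroe:.2f}")  # the lower-clocked chip wins
```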
Posted on Reply
#12
AusWolf
ratirtIt has been apparent that frequency is not everything. That was shown in the Pentium D (if I remember correctly) and Athlon era: the Pentiums were clocked higher and yet were still slower. That is why Intel had to revise the architecture. Looking only at clocks will get you nowhere.
Soon enough, looking only at performance will get you nowhere, either, when Random Joe buys his GTRTX 797979500 XT Ti Super and realizes that the system with his no-name 500 W power supply runs slow / won't start / burns the house down. Or do we live in that era already?
Posted on Reply
#13
ModEl4
If this is true, I'm really happy for the engineering team at AMD.
If the die sizes are around the leaked levels, and taking into account the >50% performance/Watt claim, the design choices are really smart, with a focus on keeping die size and power consumption low; according to this rumor, the 5 nm designs can also hit extremely high clocks if pushed.
Regarding the feature set, it won't be competitive with Ada. My impression is that it will (finally) be at Turing level in rendering features (level of RT, AI-based techniques like DLSS, etc. included; I also mean the % hit you take in the frame rate when enabling forward-looking features like these) and maybe at Ampere level regarding the display and multimedia engines.
But this isn't bad if you consider that the consoles are the baseline, and those were introduced just two years ago.
The performance of the reference Navi 31 flagship relative to the 3090 Ti (100%) should be in the region below, imo, depending on the TBP that AMD targets; three examples:

TBP    | Best case | Worst case
450 W  | 192%      | 173.5%
400 W  | 181%      | 163.5%
350 W  | 168%      | 152%
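One way to read that table: if performance near the top of the voltage/frequency curve scales roughly with the square root of board power (an assumed exponent that happens to fit the rows, not anything from AMD or the leak), anchoring at the 450 W row reproduces the other rows closely:

```python
# Sketch: project performance at lower TBPs from the 450 W figures, assuming
# perf ~ power^0.5 near the top of the V/f curve. The exponent is a guess.
def projected_perf(tbp_w: float, perf_at_450w_pct: float, exponent: float = 0.5) -> float:
    return perf_at_450w_pct * (tbp_w / 450) ** exponent

for tbp in (450, 400, 350):
    best = projected_perf(tbp, 192)
    worst = projected_perf(tbp, 173.5)
    print(f"{tbp} W: best ~{best:.0f}%, worst ~{worst:.0f}%")
# 400 W -> ~181% / ~164%; 350 W -> ~169% / ~153% (vs 181/163.5 and 168/152 above)
```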
Posted on Reply
#14
wolf
Better Than Native
Xex360It seems we turn everything into us vs them.
It's just about intrinsically part of the human condition, isn't it?
Posted on Reply
#15
1d10t
4 gigahertz isn't enough, I need 4 gigawatts!
Posted on Reply
#16
big_glasses
AusWolfSoon enough, looking only at performance will get you nowhere, either, when Random Joe buys his GTRTX 797979500 XT Ti Super and realizes that the system with his no-name 500 W power supply runs slow / won't start / burns the house down. Or do we live in that era already?
There have already been multiple instability issues on Reddit that were solved by replacing a bad/weak PSU, so even more power-hungry cards will probably cause more of them.
Posted on Reply
#17
DeathtoGnomes
Congratz AMD, now let's have some real reviews, whaddya say?
Posted on Reply
#18
ratirt
AusWolfSoon enough, looking only at performance will get you nowhere, either, when Random Joe buys his GTRTX 797979500 XT Ti Super and realizes that the system with his no-name 500 W power supply runs slow / won't start / burns the house down. Or do we live in that era already?
To be fair, average Joe is still going to buy it anyway, since I've found that those folks are stubborn and "know better". I now find the EU's 220 V more useful than the US's 110 V; normally I would argue furiously against 220 V, but now I have to bite my tongue. Maybe this situation with the global power problem will open some eyes, although I doubt it.
Joe will still buy it, then buy a new CPU with a new mobo, and obviously a PSU, for obvious reasons, or burn his house down and argue about pricing or being tricked when the burning thing happens. Then maybe they will open their eyes.
Either way, the companies selling the products win and nothing changes. Prices go up, consumption goes up (even though everyone claims how efficient each technological advancement is).
Posted on Reply
#19
pavle
What you can't do with instructions per cycle, you do with clock speed; nothing new.
Posted on Reply
#20
Jism
DenverI thought something like that was impossible...
When you decouple chips in MCM-type designs, you have freedom in how fast or how hard you can run the chip while constraining power to a given condition or target, and without the disadvantages of a large monolithic die.

Look at the Xbox GPU vs the PS4's. The Xbox has a tad more shaders and runs them slower than the PS4's GPU, which has fewer shaders but runs them faster.

They perform about equally. The PS4 has more power budget available because of that.

I think AMD is pulling something similar here as well.
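The rough throughput math behind that wider-but-slower vs narrower-but-faster trade-off, using the current consoles' published figures (52 CUs at 1.825 GHz vs 36 CUs at 2.23 GHz) as stand-ins; whether any of this maps onto AMD's RDNA3 plans is pure speculation:

```python
# FP32 throughput ~ CUs x shaders-per-CU x 2 ops per clock (fused multiply-add) x clock.
def tflops(cus: int, clock_ghz: float, shaders_per_cu: int = 64) -> float:
    return cus * shaders_per_cu * 2 * clock_ghz / 1000

print(f"wider/slower:    {tflops(52, 1.825):.1f} TFLOPS")  # ~12.1, Series X-like
print(f"narrower/faster: {tflops(36, 2.230):.1f} TFLOPS")  # ~10.3, PS5-like
```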
Posted on Reply
#21
ARF
When is the reveal? We need benchmarks already!
Posted on Reply
#22
Jimmy_
It's starting to get really exciting and dramatic, and TEAM ARC is definitely not in the competition, it seems :D
Ada vs RDNA3
Posted on Reply
#23
vmarv
A lot of people look at the AMD cards from the gamer point of view, and they are missing something. The major rendering programs support only CUDA, or are better optimized for CUDA.
Blender, with the 3.0 build, ditched OpenCL support, and AMD was forced to introduce a new API, called HIP, that works only with their modern GPU series. And this API is slower than CUDA, just like OpenCL before it.
Some years ago AMD released a rendering engine called Radeon ProRender, but it never reached the popularity of the likes of V-Ray, Redshift, Renderman, etc.
Basically, AMD is cut out of an entire piece of the market, and if someone needs to make complex renderings for their job, AMD can't be taken into consideration.

Not just this. NVIDIA cards can be joined together with NVLink so that the rendering program sees one single card, meaning that two 24 GB cards are seen as one with 48 GB of memory. What was previously the limit of GPU rendering, the small amount of memory, is not a problem anymore.
And NVIDIA cards have Tensor cores, which can be used by games too.
In other words, AMD is years behind, and unless they pay some billions to get their GPUs fully supported by the major rendering programs, they will never keep up with NVIDIA in the workstation GPU market.
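For reference on the Blender point: since the 3.x series, Cycles selects a compute backend per vendor, HIP for modern AMD GPUs and CUDA/OptiX for NVIDIA. A minimal sketch of switching the backend from Blender's Python console, assuming a Blender 3.x install with a supported GPU:

```python
import bpy  # only available inside Blender's bundled Python

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "HIP"  # use "CUDA" or "OPTIX" on NVIDIA hardware
prefs.get_devices()                # refresh the detected device list
for dev in prefs.devices:
    dev.use = dev.type in {"HIP", "CPU"}  # enable the AMD GPU(s) plus CPU
bpy.context.scene.cycles.device = "GPU"
```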
Posted on Reply
#24
Denver
JismWhen you decouple chips in MCM-type designs, you have freedom in how fast or how hard you can run the chip while constraining power to a given condition or target, and without the disadvantages of a large monolithic die.

Look at the Xbox GPU vs the PS4's. The Xbox has a tad more shaders and runs them slower than the PS4's GPU, which has fewer shaders but runs them faster.

They perform about equally. The PS4 has more power budget available because of that.

I think AMD is pulling something similar here as well.
Yeah... but according to the leaks, the GPU chip is monolithic; only the 3D cache is separated into smaller chips.
Posted on Reply
#25
Space Lynx
Astronaut
An all-AMD system build again for me, it sounds like. Fuck yeah, can't wait!
Posted on Reply