Monday, September 19th 2022
AMD Radeon RX 7000-series RDNA3 GPUs Approach 4 GHz GPU Clocks
AMD's upcoming Radeon RX 7000-series GPUs, based on the RDNA3 graphics architecture, are rumored to be capable of engine clocks (GPU clocks) close to 4 GHz. This is plausible, given that the current-gen RX 6000-series can hit 3 GHz. AMD's play against the RTX 4090 would hence be a product with a +50% performance-per-Watt gain over the previous generation, a significantly increased shader count, an over-70% increase in memory bandwidth (384-bit memory running at 20 Gbps or more), a faster and larger Infinity Cache, and, to top it all off, engine clocks approaching 4 GHz.
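A quick back-of-the-envelope check on the bandwidth figure (a minimal sketch; the 384-bit/20 Gbps numbers come from the rumor, and the baselines are the published RX 6900 XT and RX 6950 XT memory configurations):

```python
# Peak memory bandwidth (GB/s) = bus width (bits) x data rate (Gbps) / 8
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits * gbps / 8

rx6900xt = bandwidth_gbs(256, 16)  # 512 GB/s: current-gen 16 Gbps GDDR6
rx6950xt = bandwidth_gbs(256, 18)  # 576 GB/s: refresh with 18 Gbps GDDR6
navi31   = bandwidth_gbs(384, 20)  # 960 GB/s: rumored 384-bit @ 20 Gbps

print(f"vs 6900 XT: {navi31 / rx6900xt - 1:+.0%}")  # +88%
print(f"vs 6950 XT: {navi31 / rx6950xt - 1:+.0%}")  # +67%
```

Against the RX 6950 XT's 18 Gbps memory the uplift would be closer to 67%, so the "over 70%" presumably measures against the 16 Gbps cards, or assumes a data rate above 20 Gbps.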
Source: HXL (Twitter)
35 Comments on AMD Radeon RX 7000-series RDNA3 GPUs Approach 4 GHz GPU Clocks
Naturally there are haters and cherry-pickers on both sides, which is especially amusing considering the rich history of the two vendors swapping leads in areas like VRAM, efficiency, and features, and trading the performance crown. Spending a lot of time in forums and subreddits, I've seen a lot of takes.
In Ampere vs RDNA2, I've seen the following from the pro-RDNA2-over-Ampere folk, including but not limited to:
- VRAM is king, and Ampere is certain to age poorly because of it
- Efficiency is everything
- RDNA2 has more "raw power" than Ampere
- RT is a gimmick
- DLSS is a gimmick (admittedly now that FSR is out, this has largely subsided)
- NVENC/RTX Voice/CUDA aren't selling points
- "Ngreedia"/they're evil/shady tactics/closed ecosystem/holding the industry back/never a dime again, etc.
In Ampere vs RDNA2, I've seen the following from the pro-Ampere-over-RDNA2 folk, including but not limited to:
- Ampere is more forward-looking as an architecture
- The VRAM amount is lower than desired, but largely suitable for the power band the respective cards occupy
- Very efficient when tweaked
- GDDR6X is a major cause of the power consumption; the cores themselves aren't all that inefficient
- Equal "raw power" to RDNA2 but better RT
- RT is the future, and Ampere already does decently well with respect to each product's targeted resolution/framerate
- Image reconstruction (DLSS) is amazing, and without NVIDIA pushing this new wave we wouldn't have FSR/XeSS
- AMD drivers are still meh, according to a vocal minority
Let's get the popcorn ready for what Ada vs RDNA3 will bring, eh? Some of these points will surely remain, but a lot of the rest could change or equalize.

But I always find it interesting when stuff like this is mentioned (the title of the article, I mean), considering it in itself is rather meaningless.
A 5 GHz Pentium D is slower than a 3.6 GHz Core 2 Duo (yes, old example, I know). I guess we do have RDNA2 as a rough point of comparison, but still...
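To put toy numbers on why clocks alone are meaningless: performance scales roughly with IPC times clock, and NetBurst's IPC deficit more than ate its clock advantage. The IPC figures below are purely illustrative assumptions, not measurements:

```python
# Toy model: relative performance ~ IPC x clock.
# The IPC values are made up to illustrate the point, not measured.
def relative_perf(ipc: float, ghz: float) -> float:
    return ipc * ghz

pentium_d = relative_perf(ipc=1.0, ghz=5.0)  # hypothetical NetBurst baseline
core2duo  = relative_perf(ipc=1.6, ghz=3.6)  # ~60% higher IPC (assumed)

print(core2duo > pentium_d)  # True: 5.76 vs 5.0, despite a 1.4 GHz deficit
```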
If the die sizes are around the leaked levels, and taking into account the >50% performance-per-Watt claim, the design choices are really smart, with a focus on keeping die size and power consumption low; and according to this rumor, the 5 nm designs can also hit extremely high clocks if pushed.
Regarding feature set, it won't be competitive with Ada. My impression is that it will (finally) be at Turing level in rendering features (level of RT, AI-based techniques like DLSS, etc., including the % hit you take in frame rate when enabling forward-looking features like these) and maybe at Ampere level regarding the display and multimedia engines.
But this isn't bad if you consider that consoles are the baseline, and those were introduced just two years earlier.
The performance of the reference Navi 31 flagship relative to the 3090 Ti (100%) should be in the region below, IMO, depending on the TBP that AMD targets; three examples below:
Joe will still buy it, then buy a new CPU with a new mobo and, for obvious reasons, a new PSU, or he'll burn his house down and argue about pricing or about having been tricked when the burning happens. Then maybe people will open their eyes.
Either way, the companies selling the products win and nothing changes. Prices go up, consumption goes up (even though everyone claims how efficient each technological advancement is).
Look at the Xbox Series X vs the PS5 GPU. The Xbox has more shaders and runs slower than the PS5's GPU, which has fewer shaders but runs faster.
They perform about equally. The PS5 has more power budget available because of that.
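Reading the comparison as Series X vs PS5, the published specs make the trade explicit; FP32 throughput is 2 ops per FMA times shader count times clock:

```python
# FP32 throughput (TFLOPS) = 2 ops (FMA) x shader count x clock (GHz) / 1000
def tflops(shaders: int, ghz: float) -> float:
    return 2 * shaders * ghz / 1000

series_x = tflops(3328, 1.825)  # 52 CUs, fixed clock  -> ~12.1 TFLOPS
ps5      = tflops(2304, 2.233)  # 36 CUs, boost clock  -> ~10.3 TFLOPS

print(f"{series_x:.1f} vs {ps5:.1f} TFLOPS")
```

On paper the Series X is roughly 18% ahead, yet real games land much closer, which is the point: wide-and-slow versus narrow-and-fast can largely wash out.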
I think something similar is being pulled by AMD here as well.
Ada vs RDNA
With the 3.0 build, Blender ditched OpenCL support, and AMD was forced to introduce a new API, called HIP, that works only with their modern GPU series. And this API is slower than CUDA, just like OpenCL before it.
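For reference, picking the Cycles backend is exposed through Blender's Python API; a minimal sketch against the Blender 3.x bpy API (HIP requires a supported RDNA card and current drivers):

```python
import bpy

# Select the Cycles compute backend: 'HIP' on AMD, 'CUDA'/'OPTIX' on NVIDIA
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "HIP"
prefs.get_devices()       # refresh the detected device list
for dev in prefs.devices:
    dev.use = True        # enable every detected device

bpy.context.scene.cycles.device = "GPU"
```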
Some years ago AMD released a rendering engine called Radeon ProRender, but it never reached the popularity of the likes of V-Ray, Redshift, RenderMan, etc.
Basically, AMD is cut out of an entire piece of the market, and if someone needs to do complex rendering for their job, AMD can't be taken into consideration.
And not just this: NVIDIA cards can be joined together with NVLink so that the rendering program sees one single card, meaning that two 24 GB cards are seen as one with 48 GB of memory. What used to be the limit of GPU rendering, the small amount of memory, is not a problem anymore.
And NVIDIA cards have Tensor cores, which can be used by games too.
In other words, AMD is years behind, and unless they spend billions to get their GPUs fully supported by the major rendering programs, they will never keep up with NVIDIA in the workstation GPU market.