
AMD Radeon RX 7000-series RDNA3 GPUs Approach 4 GHz GPU Clocks

Sounds like it'll be an all-AMD system build again for me. Fuck yeah, can't wait!
 
So will there be a small chip with 1.5x the performance of the 6600/6600 XT for $300?
 
A lot of people look at the AMD cards from a gamer's point of view, and they are missing something.
??? Why wouldn't they? That is who the cards are designed for.

AMD has a separate line of GPUs for workstations.
 
They are not designed just for that. Modern GPUs are also aimed at content creators and 3D artists, and when someone buys a GPU not just for gaming but also for video editing, rendering, streaming and so on, there are more aspects to consider when choosing hardware besides frequencies and power consumption. The same goes for CPUs.
The workstation lineup shares 90% of its specs with the regular cards, and for years many artists and professional studios have preferred the regular ones because of the higher frequencies and lower cost. That was the case with the GTX 1080 and with the top RTX cards. If you work with Blender or C4D, for example, you look at those, not at Quadro and Tesla. The latter are optimized for CAD and scientific environments, but nowadays the high-end consumer cards have workstation-class power and capabilities.
And still, even looking at AMD cards from the gamer-only point of view, NVIDIA beats them with the Tensor cores and the NVLink bridge, no matter what.
The upcoming AMD cards have to beat these features, or at least offer equivalent (and very close in performance) hardware solutions, otherwise they will fall behind in specific tasks and games.
So, in the end, and as for the current state of things, the NVIDIA cards offer better overall value for the money, because they deliver terrific performance in different scenarios, while the AMD consumer cards don't have the same capabilities and don't get the same kind of support that certain software houses offer to NVIDIA.
 
A lot of people look at the AMD cards from a gamer's point of view, and they are missing something. The major rendering programs support only CUDA, or are better optimized for CUDA.
Blender ditched OpenCL support with the 3.0 release, and AMD was forced to introduce a new API, called HIP, that works only with their modern GPU series. And this API is slower than CUDA, just like OpenCL was before.
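For what it's worth, here is a minimal, assumed sketch (not taken from Blender's Cycles or any real renderer) of what that porting situation looks like in practice: HIP deliberately mirrors the CUDA runtime API almost name-for-name, which is why a backend can be ported at all, even though matching CUDA's performance and ecosystem support is a separate problem.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Trivial CUDA kernel: scale a buffer of shading samples by a constant.
// A HIP port is nearly identical source code; hipcc accepts the same
// __global__ and <<<...>>> syntax, and the runtime calls below map
// one-to-one: cudaMalloc -> hipMalloc, cudaMemcpy -> hipMemcpy,
// cudaFree -> hipFree.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;
    float *host = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i)
        host[i] = 1.0f;

    float *dev = NULL;
    cudaMalloc(&dev, n * sizeof(float));                              // hipMalloc on AMD
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice); // hipMemcpy on AMD

    scale<<<(n + 255) / 256, 256>>>(dev, 0.5f, n);                    // same launch syntax under HIP

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("first sample after scaling: %f\n", host[0]);

    cudaFree(dev);
    free(host);
    return 0;
}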
Some years ago AMD released a rendering engine called Radeon ProRender, but it never reached the popularity of V-Ray, Redshift, Renderman and the like.
Basically, AMD is cut out of an entire segment of the market, and if someone needs to produce complex renderings for their job, AMD can't be taken into consideration.

Not just this. NVIDIA cards can be joined together with NVLink, and the rendering program sees a single card, meaning that two 24GB cards are seen as one with 48GB of memory. What used to be the limit of GPU rendering, the small amount of memory, is not a problem anymore.
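For the curious, the pooling that renderers expose over NVLink is built on CUDA's peer-to-peer access; the sketch below shows only that generic mechanism (it is not the actual implementation of V-Ray, Redshift, or any specific renderer). Once peer access is enabled, kernels running on one GPU can directly dereference buffers that physically live in the other GPU's memory, which is what lets two 24GB cards behave like one larger pool.

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount < 2) {
        printf("This sketch needs at least two GPUs.\n");
        return 0;
    }

    // Check whether a peer-to-peer path (NVLink or PCIe P2P) exists in both directions.
    int canAccess01 = 0, canAccess10 = 0;
    cudaDeviceCanAccessPeer(&canAccess01, 0, 1);
    cudaDeviceCanAccessPeer(&canAccess10, 1, 0);
    if (!canAccess01 || !canAccess10) {
        printf("No peer access between GPU 0 and GPU 1.\n");
        return 0;
    }

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);   // device 0 may now map device 1's memory
    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);   // and vice versa

    // Allocate a buffer on GPU 1; kernels launched on GPU 0 can use this
    // pointer directly, with NVLink carrying the remote accesses.
    cudaSetDevice(1);
    float *remote = NULL;
    cudaMalloc(&remote, 256u * 1024u * 1024u);   // 256 MB living on GPU 1

    printf("Peer access enabled; GPU 0 can address GPU 1's allocation.\n");

    cudaFree(remote);
    return 0;
}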
And NVIDIA cards have Tensor cores, which can be used by games too.
In other words, AMD is years behind, and unless they pay billions to get their GPUs fully supported by the major rendering programs, they will never keep up with NVIDIA in the workstation GPU market.
Personally, I look at the cards from a gamer's point of view, because I use them for gaming. I don't care about Blender or any other content creation software because I don't use it. Each to their own, as they say.
 
As stated above, your view of how desktop cards should be used doesn't apply to everyone.

And as for the upcoming cards needing to beat these features: my gut tells me you would be buying NVIDIA regardless, so why does it matter?
 
Exactly. Just because GPUs can also be used for stuff other than gaming, it doesn't mean that I have to care.
 

True, but gaming is constantly evolving and can adopt new features that weren't commonly used before.

Both ray tracing and machine learning got off to rocky starts on NVIDIA GeForce, and many dismissed RTX and DLSS in their early iterations.

And there's no one specific type of gamer. Some people still dismiss RT and DLSS-like technologies because whatever games they are currently enjoying don't harness them. Heck, even Minecraft turned on RTX, and a 15-year-old game like Portal is now getting RTX and DLSS 3.0.

For sure, game developers have an interest in using the hardware and the development tools that make these features available.

It's not like all of these new technologies only debut in brand new titles. A lot of older (especially popular) games can receive these improvements.
 
You can buy whatever you want, guys; I'm not a fanboy. My comment was just to point out that maybe the NVIDIA cards are priced so high (beyond the speculation we see at every new launch, and especially this time around) because they sport more features than their AMD counterparts and can be used for more tasks.
I moved to NVIDIA recently just for CUDA and will buy an RTX card in the future, after owning AMD GPUs for many years.

cvaldes mentioned the RTX mod for Portal. You can bet that the modding community will make total conversions of many popular and beloved games using NVIDIA's technology, and many players will pick an RTX card just to play them.
 
I learned not to give a damn about potential future features long ago.

AMD's GCN was a brilliant architecture for compute, and everybody was loud about compute gaining a foothold in games, which then never really happened. NVIDIA has CUDA for physics, but that's pretty much it.

Despite being a huge failure, AMD FX was (supposed to be) a very forward-looking project. It only took applications a decade to properly use the unusual architecture with 8 integer and 4 FP units, but by that time the individual cores were already too slow to do anything meaningful. I had an FX-8150, which gave me the biggest CPU bottleneck I've ever witnessed. I'm sure it would perform slightly better today if I still had it, but technology has moved forward so much that a little improvement won't save it from being the biggest disappointment of the 2010s.

I agree that RT may very well be the future of gaming. It's just that by the time it really goes mainstream, current-gen graphics cards will be long obsolete and too slow for proper RT.

You only buy PC hardware for the present. When the future comes, there will be newer, faster and more specialised hardware for it.
 