Monday, January 29th 2024

Top AMD RDNA4 Part Could Offer RX 7900 XTX Performance at Half its Price and Lower Power

We've known since August 2023 that AMD is rumored to be retreating from the enthusiast graphics segment with its next-generation RDNA 4 graphics architecture, which means we likely won't see successors to the RX 7900 series squaring off against the upper end of NVIDIA's fastest GeForce RTX "Blackwell" series. What we'll get instead is a product stack closely resembling that of the RDNA-based RX 5000 series, with its top part providing a highly competitive price-performance mix around the $400 mark. A more recent report by Moore's Law is Dead sheds more light on this part.

Apparently, the top Radeon RX SKU based on the next-gen RDNA 4 graphics architecture will offer performance comparable to that of the current RX 7900 XTX, but at less than half its price (around the $400 mark). It is also expected to achieve this performance target using smaller, simpler silicon with a significantly lower board cost, which feeds into its price. What's more, there could be energy-efficiency gains from the switch to a newer 4 nm-class foundry node and from the RDNA 4 architecture itself, which could hit its performance target with fewer compute units than the RX 7900 XTX's 96.
When it came out, the RX 5700 XT offered an interesting performance proposition, beating the RTX 2070 and forcing NVIDIA to refresh its product stack with the RTX 20-series SUPER and the resulting RTX 2070 SUPER. Things could go down slightly differently with RDNA 4. Back in 2019, ray tracing was a novelty, and AMD could surprise NVIDIA in the performance segment even without it. There is no such advantage now; ray tracing is relevant, so AMD could count on timing its launch before the Q4 2024 debut of the RTX 50-series "Blackwell."
Sources: Moore's Law is Dead (YouTube), TweakTown
Add your own comment

421 Comments on Top AMD RDNA4 Part Could Offer RX 7900 XTX Performance at Half its Price and Lower Power

#401
Chrispy_
3valatzy24 GB of VRAM is definitely an overkill. 20 GB is ok. More VRAM doesn't mean higher performance, except under those settings which eat lots and lots of VRAM and its buffer is overloaded.
Less memory throughput is also fine, it depends on how fast the shaders are, how much L3 cache it has, etc.
Getting higher performance with less resources / higher architectural efficiency has always been the case and the reason for generational progress.
It's all relative, isn't it?

Newer games with higher-resolution assets making use of more features are what's driving up VRAM. Even at 4K max settings, 10GB used to be enough only a few short years ago. People who bought 3080s probably skipped the 40-series, and they've been suffering with 10GB for a good year or more, insofar as "suffering" is still little more than the minor inconvenience of having to compromise on some graphics settings.

I think 16GB is the new sweet spot in that it will be enough for max or high settings for a decent GPU lifespan right now. 20 and 24GB sure do feel like overkill when the consoles are making do with somewhere between 9GB and 12.5GB depending on how much system RAM the game requires. Throw some headroom into that and a 12GB card is probably fine for lower resolutions, 16GB should handle 4K, and by the time games actually need 20 or 24GB, cards like the 7900-series, 3090/4090 will lack the raw compute to actually run at 4K max settings.

We're all speculating, and this is a thread based on speculation anyway, but as someone with friends working in multiple different game studios, there's a strong focus on developing for the majority, which means devs are targeting consoles and midrange GPUs at most. If you have more GPU power on tap, you can get higher framerates and/or resolution but don't expect anything else except in rare edge cases like CP2077 where Nvidia basically dumped millions of dollars of effort and cooperation with CDPR as a marketing stunt more than a practical example of the real-world future that all games will look like.
Posted on Reply
#402
3valatzy
TumbleGeorgeI agree, but progress has slowed significantly compared to the beginning of the century and, as I have specified, I do not believe that a serious difference is possible in another generation with the listed disadvantages.
Because a 2 nm TSMC wafer costs $30,000 apiece, and a 3 nm wafer costs $20,000.
Meanwhile, in 2004, a 90 nm wafer cost only $2,000.



2 nm and 3 nm are off-limits for AMD, which means no new graphics cards, and AMD going out of the GPU business.

www.techpowerup.com/301393/tsmc-3-nm-wafer-pricing-to-reach-usd-20-000-next-gen-cpus-gpus-to-be-more-expensive
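The wafer-price argument can be made concrete with some back-of-the-envelope arithmetic. A minimal sketch, assuming a hypothetical ~300 mm² GPU die on a standard 300 mm wafer, the rumored prices quoted above, and ignoring defect yield entirely:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """First-order estimate of candidate dies on a round wafer, with a
    classic edge-loss correction. Ignores defect yield and scribe lines."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

# Illustrative only: a hypothetical ~300 mm^2 die at the rumored wafer
# prices quoted above (not confirmed TSMC list prices).
for node, wafer_cost in [("3 nm", 20_000), ("2 nm", 30_000)]:
    n = dies_per_wafer(300)
    print(f"{node}: ~{n} dies per wafer, ~${wafer_cost / n:.0f} per die before yield")
```

Even at the rumored $30,000 figure for 2 nm, raw silicon would be on the order of $150 per 300 mm² die before yield losses; defect density and binning, not modeled here, are what really move the economics.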
Posted on Reply
#403
TumbleGeorge
3valatzyBecause a 2 nm TSMC wafer costs $30,000 per piece. 3 nm costs $20,000 per piece.
While in 2004, a 90 nm wafer cost only $2,000 per piece.



2 nm and 3 nm are forbidden for AMD, which means no new graphics cards, and AMD going out of the GPU business.

www.techpowerup.com/301393/tsmc-3-nm-wafer-pricing-to-reach-usd-20-000-next-gen-cpus-gpus-to-be-more-expensive
Let me question these prices. They seem too round, and I certainly don't know what's written in the contracts of the companies renting capacity from TSMC.
Posted on Reply
#404
3valatzy
TumbleGeorgeLet me question these prices.
Let me question something else: why was AMD's latest graphics card launched back in 2022? We are close to 2025, and there aren't even hints of anything better coming.
Posted on Reply
#405
kapone32
3valatzyLet me question something else: why was AMD's latest graphics card launched back in 2022? We are close to 2025, and there aren't even hints of anything better coming.
What has Nvidia offered?
Posted on Reply
#406
Chrispy_
3valatzyLet me question something else: why was AMD's latest graphics card launched back in 2022? We are close to 2025, and there aren't even hints of anything better coming.


By your reasoning, Nvidia's latest graphics card was also launched back in 2022, 2 months before the first Radeon 7000-series offering.

I guess Nvidia have an excuse though - they're poor and they can't afford to develop new graphics cards, nor is it economically viable for them to do that with their tiny marketshare.
Posted on Reply
#407
Makaveli
3valatzy24 GB of VRAM is definitely an overkill. 20 GB is ok. More VRAM doesn't mean higher performance, except under those settings which eat lots and lots of VRAM and its buffer is overloaded.
Less memory throughput is also fine, it depends on how fast the shaders are, how much L3 cache it has, etc.
Getting higher performance with less resources / higher architectural efficiency has always been the case and the reason for generational progress.
all depends on what you do on the machine.

For games maybe.

I run LLMs on my machine, and 24GB of VRAM isn't enough for some of the medium to larger models. So if I could get a card with 32 to 48GB of VRAM on the consumer side that didn't cost a kidney, I would do it.
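As a rough illustration of the numbers involved (assuming weights dominate VRAM use, with a ~20% allowance for KV cache and activations; both are simplifications, not a rule):

```python
# Back-of-the-envelope VRAM estimate for running an LLM locally:
# (parameter count) x (bytes per parameter), plus an assumed ~20%
# overhead for KV cache and activations.
def vram_gb(params_billion: float, bytes_per_param: float, overhead: float = 0.2) -> float:
    return params_billion * bytes_per_param * (1 + overhead)

for params in (13, 34, 70):
    fp16 = vram_gb(params, 2.0)   # 16-bit weights
    q4 = vram_gb(params, 0.5)     # 4-bit quantized weights
    print(f"{params}B model: ~{fp16:.0f} GB at fp16, ~{q4:.0f} GB at 4-bit")
```

Under these assumptions even a 13B model overflows 24 GB at fp16, while a 4-bit 70B model would just about fit in 48 GB, which lines up with the 32 to 48 GB wish above.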
Posted on Reply
#408
3valatzy
Makaveliall depends on what you do on the machine.

For games maybe.

I run LLM's on my machine and 24GB of VRAM isn't enough for some of the medium to larger models. So if I could get a card with 32 to 48GB of VRAM on the consumer side that didn't cost a kidney I would do it.
There is no free lunch for you. You want professional cards, you pay professional money.


www.amd.com/en/products/graphics/workstations.html

@Chrispy_ FYI:

Posted on Reply
#409
TumbleGeorge
Makaveliall depends on what you do on the machine.

For games maybe.

I run LLM's on my machine and 24GB of VRAM isn't enough for some of the medium to larger models. So if I could get a card with 32 to 48GB of VRAM on the consumer side that didn't cost a kidney I would do it.


The Quadro RTX 8000 is available at quite low prices. True, it is only from the first series of cards supporting DX 12.2 and its performance is not very high today, but it does have 48GB of VRAM. In fact, I came across a listing on a local classifieds site in my home country: a configuration with the parameters in the photo, for the equivalent of USD $4,057.
Posted on Reply
#410
Chrispy_
3valatzy@Chrispy_ FYI:
Yes, thanks - that's the page I copied the previous table from, and cited in my comment about Nvidia's Ada architecture launching two months earlier in October 2022.

You still seem to be oblivious to the reason I quoted you in the first place.
3valatzywhy was AMD's latest graphics card launched back in 2022?
The latest graphics card is 2024's 7600XT.

Stop citing and complaining about architecture launch dates; architectures have spanned multiple generations of graphics cards for decades now. It's historical fact that cannot be changed or argued with, and it's very common to see old architectures in new generations; sometimes entire new generations of graphics cards have remained on last-gen architecture.
Posted on Reply
#411
Makaveli
3valatzyThere is no free lunch for you. You want professional cards, you pay professional money.


www.amd.com/en/products/graphics/workstations.html

@Chrispy_ FYI:

Well, it would actually be cheaper to buy two 7900 XTXs, which would give you 48GB of VRAM, versus the 32GB W7800 at $3,700 CAD or the 48GB W7900 at $5,426 CAD.

But this is more of a hobby for me; if it were work-related, my employer would be footing the bill for the hardware :)
Posted on Reply
#412
SailorMan-PT
Chrispy_It's all relative, isn't it?

Newer games with higher-resolution assets making use of more features are what's driving up VRAM. Even at 4K max settings, 10GB used to be enough only a few short years ago. People who bought 3080s probably skipped the 40-series, and they've been suffering with 10GB for a good year or more, insofar as "suffering" is still little more than the minor inconvenience of having to compromise on some graphics settings.

I think 16GB is the new sweet spot in that it will be enough for max or high settings for a decent GPU lifespan right now. 20 and 24GB sure do feel like overkill when the consoles are making do with somewhere between 9GB and 12.5GB depending on how much system RAM the game requires. Throw some headroom into that and a 12GB card is probably fine for lower resolutions, 16GB should handle 4K, and by the time games actually need 20 or 24GB, cards like the 7900-series, 3090/4090 will lack the raw compute to actually run at 4K max settings.

We're all speculating, and this is a thread based on speculation anyway, but as someone with friends working in multiple different game studios, there's a strong focus on developing for the majority, which means devs are targeting consoles and midrange GPUs at most. If you have more GPU power on tap, you can get higher framerates and/or resolution but don't expect anything else except in rare edge cases like CP2077 where Nvidia basically dumped millions of dollars of effort and cooperation with CDPR as a marketing stunt more than a practical example of the real-world future that all games will look like.
Why is it overkill to have 24 GB of VRAM, or even more than that? NVIDIA is bringing the RTX 5090 with 32 GB of VRAM, and some games already need about 12 GB of VRAM.

At 1440p, too: Alan Wake 2, Metro Exodus Enhanced, or that space simulation game whose full name I've forgotten. So 16 GB of VRAM wouldn't be the sweet spot; it would now be the minimum size going forward.

But those green devils have a very strange policy: the absolute flagship gets huge VRAM, while the rest is equipped with peanuts, like 16 or 12 GB of VRAM.

AMD has always provided enough VRAM on its GPUs, as it did just right with RDNA 3, so we can assume the upcoming RDNA 4 will do the same.
Super XPI agree with 1 of your statements, Ray Tracing is nonsense and subjective. Personally I do not like how it looks. The performance hit is far too big, but maybe in the coming years it will even out and just be a regular option for gamers.
I have much the same view. Ray tracing is way too hyped by greedy NVIDIA, those green devils. The red AMD angels should keep their focus on raster, but to be and stay competitive with the green team, they need to improve their own RT performance. There is no way around it.
Posted on Reply
#413
Makaveli
SailorMan-PTAMD has always provided enough VRAM on its GPUs, as it did just right with RDNA 3, so we can assume the upcoming RDNA 4 will do the same.
RDNA 4 (8800 XT) is midrange only, so it will have 16GB of memory at most.

I'm waiting for the next high-end Radeon before I move off the 7900 XTX, since I do use the additional VRAM.
Posted on Reply
#414
3valatzy
Chrispy_The latest graphics card is 2024's 7600XT.

Stop citing and complaining about architecture launch dates; Architectures have spanned multiple generations of graphics cards for decades now. It's historical fact that cannot be changed or argued with and it's very common to see old architectures in new generations, sometimes even entire new generations of graphics cards have remained on last-gen architecture.
Err, if they relaunch the RX 580 in 2027, then by your definition it would be the latest. Of course, it is not. You count the date of the first graphics card with a new microarchitecture; all variants after it are late iterations and don't count.
Posted on Reply
#415
Zach_01
SailorMan-PTWhy is it overkill to have 24 GB of VRAM, or even more than that? NVIDIA is bringing the RTX 5090 with 32 GB of VRAM, and some games already need about 12 GB of VRAM.

At 1440p, too: Alan Wake 2, Metro Exodus Enhanced, or that space simulation game whose full name I've forgotten. So 16 GB of VRAM wouldn't be the sweet spot; it would now be the minimum size going forward.

But those green devils have a very strange policy: the absolute flagship gets huge VRAM, while the rest is equipped with peanuts, like 16 or 12 GB of VRAM.

AMD has always provided enough VRAM on its GPUs, as it did just right with RDNA 3, so we can assume the upcoming RDNA 4 will do the same.


I have much the same view. Ray tracing is way too hyped by greedy NVIDIA, those green devils. The red AMD angels should keep their focus on raster, but to be and stay competitive with the green team, they need to improve their own RT performance. There is no way around it.
I think I understand what you're saying (highlighted above) about VRAM, but the way you're saying it doesn't make sense.
Sweet spot (for amount, frequency, or whatever is in discussion) usually means the point after which the return is small, minuscule, or nonexistent.
I consider 12GB of VRAM the minimum and 16GB the sweet spot. 8GB is dying even at 1080p native with max settings, from what I've seen in the latest UE5+ games.
And by minimum I mean (and I believe most people do) that games don't glitch, stutter, or have selected textures downgraded.

--------------------------------------------

As for RT, personally I like it. It's small things (visually) like this that all add up to more realistic visuals. I like it when, passing by a pothole filled with water, everything is mirrored in it instead of a smudged image.
Does it take too much computational power? Yes, it does. Upscaling (quality) exists.
I consider "normal" medium RT settings the sweet spot. Max settings are past that spot; path tracing is even further away...
Posted on Reply
#416
Chrispy_
3valatzyYou count the date of the first graphics card with a new microarchitecture; all variants after it are late iterations and don't count.
If you want to count architectures, say "architectures", not "graphics cards".

A graphics card is not an architecture. Confusing these two terms highlights an absolute, non-debatable gap in your understanding.
Posted on Reply
#417
SailorMan-PT
Zach_01I think I understand what you're saying (highlighted above) about VRAM, but the way you're saying it doesn't make sense.
Sweet spot (for amount, frequency, or whatever is in discussion) usually means the point after which the return is small, minuscule, or nonexistent.
I consider 12GB of VRAM the minimum and 16GB the sweet spot. 8GB is dying even at 1080p native with max settings, from what I've seen in the latest UE5+ games.
And by minimum I mean (and I believe most people do) that games don't glitch, stutter, or have selected textures downgraded.

--------------------------------------------

As for RT, personally I like it. It's small things (visually) like this that all add up to more realistic visuals. I like it when, passing by a pothole filled with water, everything is mirrored in it instead of a smudged image.
Does it take too much computational power? Yes, it does. Upscaling (quality) exists.
I consider "normal" medium RT settings the sweet spot. Max settings are past that spot; path tracing is even further away...
Well, I used to play at 1440p and even 4K with details on high on 8 GB of VRAM; older games ran well. The graphics cards were (or are) a Radeon Vega 64, a Radeon VII, and an RTX 2080.

As for ray tracing, I agree too; I'm curious to see how it presents. I bought an RTX 4080 Super this summer, in September, but due to my poor mental health (depressive episodes) I haven't played much since. Now the year is almost over, and I want to upgrade the second of my three PCs.

The CPUs I had chosen were the Ryzen 7800X3D or 9800X3D. The annoying thing is the continuing rise in prices: from around 300 bucks this summer, the 7800X3D is now at €549. The successor 9800X3D, €529 when it hit the market, is now at €719 on eBay.

All this comes down to simple facts: a shortage of both processors. The old generation is no longer being produced, and the new generation hasn't saturated the market because of the huge demand. I'll wait until January or February next year, until prices fall again; currently they're too high.

Over €500 for one CPU isn't candy; it's sour cream. But if prices continue at these high levels, I have in mind to invest a little more than that: for €649, the mighty 16-core 7950 is available.
Posted on Reply
#418
3valatzy
It seems AMD has dramatically improved the transistor density using the 4nm node, which means Navi 48 will pack as many as 45 billion transistors, while the smaller Navi 44 will pack 23 billion transistors.

Navi 44: 153 mm², 22.98 BTr, 150.2 MTr/mm²
The database page wrongly lists the die as MCM, though; it is not.

For comparison:
Navi 48: 300 mm², 45.06 BTr, 150.2 MTr/mm²

Navi 31: 529 mm², 57.7 BTr
Navi 32: 346 mm², 28.1 BTr
Navi 33: 204 mm², 13.3 BTr, 65.2 MTr/mm²

Navi 10: 251 mm², 10.3 BTr

Navi 24: 107 mm², 5.4 BTr
Navi 23: 237 mm², 11.06 BTr
Navi 22: 335 mm², 17.2 BTr
Navi 21: 520 mm², 26.8 BTr

Performance estimate: Navi 44 ~ Radeon RX 7700 XT | RTX 4070
Performance estimate: Navi 48 ~ Radeon RX 7900 XT | RTX 4070 Ti S (if everything goes well)

www.techpowerup.com/gpu-specs/amd-navi-44.g1070
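A quick sanity check of the density figures above (using the rumored die sizes and transistor counts from this post, not confirmed AMD specifications):

```python
# Transistor density = transistor count / die area, in millions per mm^2.
dies = {
    "Navi 44": (153, 22.98),   # mm^2, billions of transistors (rumored)
    "Navi 48": (300, 45.06),   # rumored
    "Navi 33": (204, 13.30),   # last-gen 6 nm part, for comparison
}

for name, (area_mm2, billions) in dies.items():
    density = billions * 1000 / area_mm2  # MTr/mm^2
    print(f"{name}: {density:.1f} MTr/mm^2")
```

Both rumored RDNA 4 dies land at the same ~150 MTr/mm², roughly 2.3x the density of the monolithic 6 nm Navi 33, which is what makes the "dramatically improved density" claim internally consistent.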
Posted on Reply