
AMD Retreating from Enthusiast Graphics Segment with RDNA4?

Joined
May 31, 2023
Messages
65 (0.12/day)
Location
Texas, USA
Processor Ryzen 7 7800X3D
Motherboard Asus X670E-A ROG Strix Gaming Wifi
Cooling ID-Cooling 240X Frost Flow
Memory 2x16GB G.Skill Flare X 6000MT/s DDR5
Video Card(s) ASROCK Phantom Gaming 7900xtx
Storage Crucial P3 Plus 2TB, x2 WD SN750 1TB
Display(s) Alienware OLED Ultrawide, Asus ProArt 4K
Case White Fractal North
Audio Device(s) Corsair Virtuoso
Power Supply Corsair RM850x
Mouse Pulsar Wireless x2
Keyboard Keychron 75% with Gateron Box Blacks/Reds
VR HMD HTC Vive (Gen 1)
I'm pretty sure I've seen a perf summary somewhere in Adrenalin. Aren't a ton of tests done without any manufacturer software anyway?
There is a version built into Adrenalin. Hit "Alt+R" and it'll bring up a performance metrics overlay that you can toggle on and off. I can't remember if there is a logging function in it, though.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.65/day)
Location
Ex-usa | slava the trolls
That's why I'm saying they need to be more aggressive in securing an advanced node. It doesn't need to be the most advanced, but they cannot afford a disparity with Nvidia/Intel. It's an investment that pays off. Isn't that supposed to be an advantage of the chiplet approach, that you get better yields because the chip is not as big/complex?

Maybe, but Nvidia offsets the possibly higher costs by passing them on to the customers. It's simply another approach, and neither of them is right or wrong.

AMD has two problems:
1. It must build a more capable graphics architecture.
2. It must "cheat" in order to boost the frame rate. I see that no one really pays attention to image quality in games, so AMD can simply ship a DLSS-type upscaler enabled by default, lift the FPS by several dozen percent, and the performance crown will be theirs.
 
Joined
Nov 26, 2021
Messages
1,645 (1.51/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
There is a version built into Adrenalin. Hit "Alt+R" and it'll bring up a performance metrics overlay that you can toggle on and off. I can't remember if there is a logging function in it, though.
There's logging as well.
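For what it's worth, once the logging is enabled and you have the exported file, a few lines of Python can summarize a run. This is only a sketch: I'm assuming the log comes out as a CSV with a header row and an "FPS" column, so adjust LOG_PATH and FPS_COLUMN to whatever Adrenalin actually writes.

```python
import csv
import statistics

# Sketch for summarizing a frame-rate log exported as CSV.
# Assumptions: header row present, frame rate stored in an "FPS" column.
LOG_PATH = "performance_log.csv"   # hypothetical file name
FPS_COLUMN = "FPS"                 # hypothetical column header

with open(LOG_PATH, newline="") as f:
    fps = [float(row[FPS_COLUMN]) for row in csv.DictReader(f) if row.get(FPS_COLUMN)]

fps.sort()
worst_1pct = fps[: max(1, len(fps) // 100)]  # worst 1% of samples

print(f"samples:     {len(fps)}")
print(f"average FPS: {statistics.mean(fps):.1f}")
print(f"1% low FPS:  {statistics.mean(worst_1pct):.1f}")
```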
 
Joined
Apr 14, 2018
Messages
649 (0.27/day)
That's why I'm saying they need to be more aggressive in securing an advanced node. It doesn't need to be the most advanced, but they cannot afford a disparity with Nvidia/Intel. It's an investment that pays off. Isn't that supposed to be an advantage of the chiplet approach, that you get better yields because the chip is not as big/complex?

This conveniently ignores the fact that there are a limited number of fabs with bleeding-edge nodes available, not to mention that TSMC and the like have to uphold agreements and contracts made previously.

The ability to just "secure" advanced nodes depends entirely on which of the many baskets AMD has its eggs in it wants to focus on, and that's definitely not the GPU division. Not to mention they have to compete for these fab contracts against companies with more $$$.

Considering they're handing Intel their ass in the CPU space on efficiency, it makes sense that their server and consumer CPU product lines get whatever priority they can for node/fab tech.

If anything, the 7000 series/Navi 3x was a proof of concept for chiplets in the GPU space rather than an attempt to claw back the performance crown. The 8000 series/Navi 4x will be more of a turning point to judge, in my opinion.
 
Joined
Apr 12, 2013
Messages
7,525 (1.77/day)
Well, considering Nvidia & Intel are also looking to chiplets for GPUs & will eventually have to transition to them for flagship products, AMD has always been in a great position to exploit its lead in this space. Now, they could still blow it with a *Dozer equivalent for dGPUs, but that's unlikely at this point, & Nvidia/Intel could fumble equally hard on chiplets themselves! So hold on to your seats, this isn't even close to being over.
Chill Reaction GIF
 
Joined
Aug 10, 2023
Messages
341 (0.73/day)
The issue is this:

- They need wafers foremost for Zen chiplets, so Radeon is always an afterthought that has to share wafer allocation with the Zen chiplets.

- Because of that, they were forced to reduce the chip size and use MCM, so that the main chip on 5nm is only ~300 mm².

- This strategy isn't that good, since it means losing efficiency to worse latencies and the cost of GCD-to-MCD communication.

- This means that if Nvidia behaves like a full-fledged GPU company and AMD doesn't, since its production is mainly CPU-oriented, AMD will fall further behind due to the handicaps mentioned above.

- Navi 21 was competitive at the high end because it was a big monolithic chip. A monolithic Navi 31 would likely perform clearly better than the current MCM version.

- So I'm not too surprised if AMD decides to pull out, since it's too awkward for them to compete with Nvidia under these wafer limitations.

- The speculation is this: what if AMD had not bought ATI? Would ATI have been competitive, because it would have concentrated its wafers fully on producing GPUs, with no handicaps?
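To put a rough number on the chip-size point above: with a simple Poisson yield model (Y = exp(-D x A)) and an assumed, purely illustrative defect density, a ~300 mm² GCD yields noticeably better than a ~530 mm² monolithic die would on the same node. The defect density below is a guess, not TSMC data; the die areas are the commonly cited approximate figures.

```python
import math

# Toy yield comparison with a simple Poisson model: Y = exp(-D * A).
D = 0.10  # assumed defects per cm^2 on an N5-class node (illustrative guess)

def poisson_yield(area_mm2: float, defect_density: float = D) -> float:
    """Fraction of defect-free dies for a given die area."""
    return math.exp(-defect_density * area_mm2 / 100.0)  # convert mm^2 to cm^2

gcd_only   = 304.0  # ~Navi 31 GCD on N5; the MCDs sit on cheaper N6
monolithic = 530.0  # hypothetical monolithic Navi 31-class die, all on N5

print(f"~304 mm^2 GCD yield:        {poisson_yield(gcd_only):.0%}")   # ~74%
print(f"~530 mm^2 monolithic yield: {poisson_yield(monolithic):.0%}") # ~59%
```

So the split does buy yield and moves the cache/memory PHYs to a cheaper node; the latency and inter-die power costs listed above are the price paid for that.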
 
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
The issue is this:

- They need wafers foremost for Zen chiplets, so Radeon is always an afterthought that has to share wafer allocation with the Zen chiplets.

- Because of that, they were forced to reduce the chip size and use MCM, so that the main chip on 5nm is only ~300 mm².

- This strategy isn't that good, since it means losing efficiency to worse latencies and the cost of GCD-to-MCD communication.

- This means that if Nvidia behaves like a full-fledged GPU company and AMD doesn't, since its production is mainly CPU-oriented, AMD will fall further behind due to the handicaps mentioned above.

- Navi 21 was competitive at the high end because it was a big monolithic chip. A monolithic Navi 31 would likely perform clearly better than the current MCM version.

- So I'm not too surprised if AMD decides to pull out, since it's too awkward for them to compete with Nvidia under these wafer limitations.

- The speculation is this: what if AMD had not bought ATI? Would ATI have been competitive, because it would have concentrated its wafers fully on producing GPUs, with no handicaps?
That speculation is old and trite.

Nvidia would by now have bought ATI or forced it out of the market.

Are you not understanding Huang's game plan?!

AMD owns more than a CPU and GPU company; Xilinx is not inconsiderable.

Their wafer and design requirements are also quite significant, as will be the constant evolution of console SoCs.
 
Joined
Jun 11, 2020
Messages
573 (0.35/day)
Location
Florida
Processor 5800x3d
Motherboard MSI Tomahawk x570
Cooling Thermalright
Memory 32 gb 3200mhz E die
Video Card(s) 3080
Storage 2tb nvme
Display(s) 165hz 1440p
Case Fractal Define R5
Power Supply Toughpower 850 platium
Mouse HyperX Hyperfire Pulse
Keyboard EVGA Z15
This conveniently ignores the fact that there are a limited number of fabs with bleeding-edge nodes available, not to mention that TSMC and the like have to uphold agreements and contracts made previously.

The ability to just "secure" advanced nodes depends entirely on which of the many baskets AMD has its eggs in it wants to focus on, and that's definitely not the GPU division. Not to mention they have to compete for these fab contracts against companies with more $$$.

Considering they're handing Intel their ass in the CPU space on efficiency, it makes sense that their server and consumer CPU product lines get whatever priority they can for node/fab tech.

If anything, the 7000 series/Navi 3x was a proof of concept for chiplets in the GPU space rather than an attempt to claw back the performance crown. The 8000 series/Navi 4x will be more of a turning point to judge, in my opinion.

I get all that; the thing is, they were the first to get TSMC 7nm for (x86) CPUs and GPUs. And it did wonders for them: the stock price has more than doubled since 2020. They know full well the advantages of using more advanced nodes than your competition. The only disadvantage I can see is that it's more expensive. They tightened the purse strings at a time of increased competition, which allowed Nvidia and now Intel to get ahead of them. I know the Nvidias/Apples/Intels of the world have more money, but how were they able to be the first to 7nm? It seems like a mistake on Lisa Su's part not to secure a 4nm node like Nvidia did for Ada, and to depend on unproven GPU chiplets (in addition to a new architecture) to even the playing field.
 
Joined
Apr 14, 2018
Messages
649 (0.27/day)
I get all that; the thing is, they were the first to get TSMC 7nm for (x86) CPUs and GPUs. And it did wonders for them: the stock price has more than doubled since 2020. They know full well the advantages of using more advanced nodes than your competition. The only disadvantage I can see is that it's more expensive. They tightened the purse strings at a time of increased competition, which allowed Nvidia and now Intel to get ahead of them. I know the Nvidias/Apples/Intels of the world have more money, but how were they able to be the first to 7nm? It seems like a mistake on Lisa Su's part not to secure a 4nm node like Nvidia did for Ada, and to depend on unproven GPU chiplets (in addition to a new architecture) to even the playing field.

If you get all of that, why do you keep acting like they can snap their fingers and magically get priority on whatever fab tech they want?
 
Joined
Aug 10, 2023
Messages
341 (0.73/day)
If you get all of that, why do you keep acting like they can snap their fingers and magically get priority on whatever fab tech they want?
Priority isn't even the biggest problem they have; they simply don't get enough wafers. Otherwise this sub-efficient MCM Navi 31 design would've been entirely pointless. It loses performance through worse latencies, for example, and loses efficiency because of inter-die communication. It has multiple problems. Just look at how Nvidia, with a smaller sub-400 mm² design, can compete with the 7900 XTX, since it's monolithic.

A monolithic Navi 31 would be faster - clearly so. It would also only be a 480-500 mm² design, so there would still be roughly 100 mm² of room to catch up with the 4090. None of this is possible because AMD has to save wafers, being a multi-processor company.
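For rough scale, using commonly cited approximate die sizes: the shipped Navi 31 is ~304 mm² of N5 plus six ~37 mm² N6 MCDs, while AD102 is ~608 mm². Taking ~490 mm² as the midpoint of the monolithic estimate above (my assumption), the headroom works out to roughly 100 mm²:

```python
# Back-of-envelope die areas, using commonly cited approximate figures.
gcd_n5      = 304           # Navi 31 graphics die on N5, mm^2
mcds_n6     = 6 * 37        # six memory/cache dies on N6, mm^2
mono_navi31 = 490           # midpoint of the ~480-500 mm^2 monolithic estimate above
ad102       = 608           # RTX 4090's die, mm^2

print(f"Navi 31 as shipped:      ~{gcd_n5 + mcds_n6} mm^2 total, {gcd_n5} mm^2 of it on N5")
print(f"Hypothetical monolithic: ~{mono_navi31} mm^2, all on N5")
print(f"Room left before reaching AD102's size: ~{ad102 - mono_navi31} mm^2")
```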
 
Joined
Jun 11, 2020
Messages
573 (0.35/day)
Location
Florida
Processor 5800x3d
Motherboard MSI Tomahawk x570
Cooling Thermalright
Memory 32 gb 3200mhz E die
Video Card(s) 3080
Storage 2tb nvme
Display(s) 165hz 1440p
Case Fractal Define R5
Power Supply Toughpower 850 platium
Mouse HyperX Hyperfire Pulse
Keyboard EVGA Z15
They weren't the first to 7nm, it was Apple as usual, & they have 4nm products in the 7xxxHS chips, probably CDNA-based GPUs as well?
I put the (x86) qualifier on CPUs, but they were the first for dGPUs.

If you get all of that, why do you keep acting like they can snap their fingers and magically get priority on whatever fab tech they want?
I'm not saying it's magic; obviously there's calculus to it all, and AMD's calculus said it would be too expensive to pursue. I'm saying that's the wrong choice.
 
Joined
Apr 12, 2013
Messages
7,525 (1.77/day)
GPUs don't yield the same $/mm² for AMD as they do for Nvidia, so the CPUs will always have priority. As for securing the latest/most expensive nodes, you also need to see that chips released on them would generally be uber expensive; do you see anyone paying 2 grand for a hypothetical 7950 XTX even if it were 5% faster than the 4090 on average? Also remember that Nvidia recoups a lot of the cost (massive DC/HPC premium) through the AI craze; AMD can't do that as of now.
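The $/mm² point is easy to sketch. The per-die dollar figures below are purely hypothetical placeholders (actual ASPs and wafer costs aren't public); only the approximate die sizes are real, and edge loss plus yield are ignored. Any numbers in this ballpark give the same shape of result:

```python
import math

# Toy revenue-per-wafer comparison. Dollar values are HYPOTHETICAL placeholders;
# only the rough die sizes are real. Edge loss and yield are ignored for simplicity.
WAFER_AREA_MM2 = math.pi * (300 / 2) ** 2   # 300 mm wafer, ~70,700 mm^2

def dies_per_wafer(die_mm2: float) -> int:
    return int(WAFER_AREA_MM2 // die_mm2)   # crude: no edge loss, 100% yield

products = {
    "Zen 4 CCD (~70 mm^2)":    (70,  400),  # hypothetical $ earned per CCD in an EPYC
    "Navi 31 GCD (~304 mm^2)": (304, 600),  # hypothetical $ of silicon value per card
}

for name, (area, price) in products.items():
    n = dies_per_wafer(area)
    print(f"{name:24s}: ~{n:4d} dies/wafer x ${price} = ~${n * price:,} per wafer")
```

Under placeholders like these, a wafer of CCDs out-earns a wafer of GCDs by roughly 3x, which is the whole reason CPUs keep the allocation priority.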
 
Joined
Aug 10, 2023
Messages
341 (0.73/day)
GPUs don't yield the same $/mm² for AMD as they do for Nvidia, so the CPUs will always have priority. As for securing the latest/most expensive nodes, you also need to see that chips released on them would generally be uber expensive; do you see anyone paying 2 grand for a hypothetical 7950 XTX even if it were 5% faster than the 4090 on average? Also remember that Nvidia recoups a lot of the cost (massive premium) through the AI craze; AMD can't do that as of now.
Why 2 grand? A competitive card would cost $1,500 max, since they would price it to compete against a giant. What I suggested isn't fantasy at all, and AMD/ATI proved in the past that they CAN compete with Nvidia if they go far enough or have the better node.
 
Joined
Apr 12, 2013
Messages
7,525 (1.77/day)
I'm talking about a hypothetical high-end chip with poor 4nm yields, & why 2k and not less? Because halo products always cost a premium ~ always have & always will.

The point is it doesn't make financial sense for AMD to pursue the top-end halo product, because they still wouldn't be making the massive gobs of $$$ that Nvidia still commands, at times even with inferior performance.
 
Joined
Aug 10, 2023
Messages
341 (0.73/day)
I'm talking about a hypothetical high-end chip with poor 4nm yields, & why 2k and not less? Because halo products always cost a premium ~ always have & always will.
4nm isn't needed since it's barely better. A 5nm chip with proven yields for high-power designs, at around 600 mm², would be enough to compete; doing all of this monolithic has multiple upsides over their current high-end design.
 
Joined
Apr 12, 2013
Messages
7,525 (1.77/day)
And how many functional chips would they sell with it, and at what rate? Why not sell EPYC 77xx instead, for at least 2x the price & probably much better margins as well? Besides, 5nm is in great demand, so AMD doesn't really have an unlimited supply all to itself. Just talking business, of course; I wouldn't mind them selling me a 128-core Zen 5 chip for a grand or less.
 
Joined
Aug 10, 2023
Messages
341 (0.73/day)
The point is it doesn't make financial sense for AMD to pursue the top-end halo product, because they still wouldn't be making the massive gobs of $$$ that Nvidia still commands, at times even with inferior performance.
I see it, but it also doesn't matter, since they won't get the wafer allocation anyway - too many other companies want it as well and are ready to price them out. IF AMD could theoretically get these wafers for a good price, they WOULD be able to compete with Nvidia. They already took this risk with Big Navi and it paid off. Big Navi was about 530 mm² without MCM, so a clearly bigger design than the current chip.
 
Joined
Apr 12, 2013
Messages
7,525 (1.77/day)
That's the thing ~ they're not looking to compete with Nvidia at that price or performance level. IMO it simply isn't worth it, & the bean counters at AMD probably agree.
I see it, but it also doesn't matter, since they won't get the wafer allocation anyway - too many other companies want it as well and are ready to price them out.
There's a fixed capacity at TSMC for 5nm, and while that might increase over time as new fabs are built, AMD would still want to allocate more of its share to CPUs/Xilinx et al. because it gets more $$$ out of them, simple as that. So even if they got Apple's share or Nvidia's, why would they want to change the status quo?
 
Joined
Aug 10, 2023
Messages
341 (0.73/day)
That's the thing ~ they're not looking to compete with Nvidia at that price or performance level. IMO it simply isn't worth it, & the bean counters at AMD probably agree.
And again, it's not worth it to them because they're a multi-processor company that cares far less about GPUs than Nvidia does; this is why I questioned whether ATI would be better off today without AMD at the helm. They would use all their wafers for GPUs, obviously.

The dilemma is that there is no other tech company that can compete with TSMC; otherwise AMD could order the GPUs from Samsung, for example, maybe even for a lower price than they currently pay at TSMC.
 
Joined
Apr 12, 2013
Messages
7,525 (1.77/day)
That's way out of scope here, but AMD wouldn't exist today without ATI; the original GCN cards from over a decade back & then the consoles saved their hide from the worst of the *Dozer era!
 
Joined
Aug 10, 2023
Messages
341 (0.73/day)
That's way out of scope here, but AMD wouldn't exist today without ATI; the original GCN cards from over a decade back & then the consoles saved their hide from the worst of the *Dozer era!
Hmm, I don't think so. They used a lot of money to buy ATI, and that money could've saved AMD as well. Money = more opportunities to come back from the FX disaster.

Out of scope or not, it's valid speculation.
 
Joined
Apr 14, 2018
Messages
649 (0.27/day)
Priority isn't even the biggest problem they have; they simply don't get enough wafers. Otherwise this sub-efficient MCM Navi 31 design would've been entirely pointless. It loses performance through worse latencies, for example, and loses efficiency because of inter-die communication. It has multiple problems. Just look at how Nvidia, with a smaller sub-400 mm² design, can compete with the 7900 XTX, since it's monolithic.

A monolithic Navi 31 would be faster - clearly so. It would also only be a 480-500 mm² design, so there would still be roughly 100 mm² of room to catch up with the 4090. None of this is possible because AMD has to save wafers, being a multi-processor company.

I put the (x86) qualifier on CPUs, but they were the first for dGPUs.


I'm not saying it's magic; obviously there's calculus to it all, and AMD's calculus said it would be too expensive to pursue. I'm saying that's the wrong choice.

Clearly AMD needs to hire the both of you, as they have no clue what they're doing.

/s

In a perfectly ideal situation, sure. But our assumptions about a business's decisions have no relevance to what's actually going on, because we have no real information on why those decisions are made. The number of times I've had a client tell me to redesign something because it's "easy" or better, when they have no experience, education, or factual information as to why or how, keeps growing, and it's an ever-useful reminder that as much as we want to believe something is simple from the outside, it's not.
 
Joined
Aug 10, 2023
Messages
341 (0.73/day)
Clearly AMD needs to hire the both of you, as they have no clue what they're doing.

/s

In a perfectly ideal situation, sure. But our assumptions about a business's decisions have no relevance to what's actually going on, because we have no real information on why those decisions are made. The number of times I've had a client tell me to redesign something because it's "easy" or better, when they have no experience, education, or factual information as to why or how, keeps growing, and it's an ever-useful reminder that as much as we want to believe something is simple from the outside, it's not.
This is nonsense, as it's not about having a "clue". If you had properly understood what I was saying, it's about "possibilities", "money", and "opportunities".

In other words: AMD already knows everything I said, and more.
 
Joined
Apr 12, 2013
Messages
7,525 (1.77/day)
Hmm, I don't think so. They used a lot of money to buy ATI, and that money could've saved AMD as well. Money = more opportunities to come back from the FX disaster.

Out of scope or not, it's valid speculation.
Or they could have not spun off GF & not settled with Intel out of court for a puny billion back in 2008/09, huh? FX was a disaster in large part because AMD was lagging 1-2.5 nodes behind Intel. We can speculate all we want, but it is what it is ~
 