
AMD RDNA 5 a "Clean Sheet" Graphics Architecture, RDNA 4 Merely Corrects a Bug Over RDNA 3

Joined
Oct 6, 2021
Messages
1,605 (1.42/day)
After the bad experience with chiplets, is it really a good idea to move on with them? :confused:
GPUs are very sensitive to latency; they work best when latencies are extremely low, which means a monolithic design. Chiplets are good for CPUs, but extremely bad for GPUs.
That's why CrossFire is no longer supported.
Do they want to invent a new type of CrossFire?
With skyrocketing manufacturing costs accompanied by minimal improvements?

A multi-GCD design is the most important thing AMD could bring out to be more competitive. Instead of developing 5-6 chips, a single building block (the GCD) would serve all segments, simply by combining these chips. Billions would be saved in the process.

But it's obvious that such a design needs to drastically change the graphics processing model.



"The new patent (PDF) is dated November 23, 2023, so we'll likely not see this for a while. It describes a GPU design radically different from its existing chiplet layout, which has a host of memory cache dies (MCD) spread around the large main GPU die, which it calls the Graphics Compute Die (GCD). The new patent suggests AMD is exploring making the GCD out of chiplets instead of just one giant slab of silicon at some point in the future. It describes a system that distributes a geometry workload across several chiplets, all working in parallel. Additionally, no "central chiplet" is distributing work to its subordinates, as they will all function independently."
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.67/day)
Location
Ex-usa | slava the trolls
With skyrocketing manufacturing costs accompanied by minimal improvements?

Doesn't assembly of chiplets also cost quite a bit, and isn't it a more expensive production practice than simply putting a single die onto the interposer/PCB?

Also, can they compensate by using faster architectures on older/cheaper processes?
I still don't know why they haven't released a pipecleaner: a 150 mm^2 GPU built on the newer TSMC N4 or TSMC N3 processes.
 
Joined
Feb 20, 2019
Messages
8,209 (3.93/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
Doesn't assembly of chiplets also cost quite a bit, and isn't it a more expensive production practice than simply putting a single die onto the interposer/PCB?

Also, can they compensate by using faster architectures on older/cheaper processes?
I still don't know why they haven't released a pipecleaner: a 150 mm^2 GPU built on the newer TSMC N4 or TSMC N3 processes.
Not even close.
Bleeding-edge manufacturing nodes, and the price-bidding war to win allocation, mean that N4 and N3 are an order of magnitude more expensive than the interposer/assembly costs. Those assembly costs are rapidly becoming irrelevant, too, since AMD has been doing this for so long that it's a solved problem: plenty of experience and a relatively smooth, effortless process by now.
They won't release lower-end parts on N4 and N3 simply because the profit margins for those lower-end, smaller dies don't merit the high cost AMD pays TSMC for the flagship nodes.
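The arithmetic behind that margin argument is easy to sketch. A minimal back-of-the-envelope in Python, using the standard gross-die approximation; the wafer prices are purely illustrative assumptions (real TSMC contract pricing is confidential):

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Standard gross-die estimate: wafer area / die area, minus an edge-loss term
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Hypothetical wafer prices in USD -- illustrative assumptions only
wafer_price = {"N6": 10_000, "N4": 17_000}

die_area = 150  # mm^2, the "pipecleaner" GPU size mentioned above
for node, price in wafer_price.items():
    gross = dies_per_wafer(die_area)
    print(f"{node}: ~{gross} gross dies/wafer, ~${price / gross:.0f} per die")
```

On these made-up numbers the same 150 mm^2 die costs roughly $24 on N6 versus $41 on N4, before yield and packaging; that gap has to come out of a low-end card's already-thin margin, which is exactly the objection above.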
 
Joined
Apr 19, 2018
Messages
1,227 (0.51/day)
Processor AMD Ryzen 9 5950X
Motherboard Asus ROG Crosshair VIII Hero WiFi
Cooling Arctic Liquid Freezer II 420
Memory 32Gb G-Skill Trident Z Neo @3806MHz C14
Video Card(s) MSI GeForce RTX2070
Storage Seagate FireCuda 530 1TB
Display(s) Samsung G9 49" Curved Ultrawide
Case Cooler Master Cosmos
Audio Device(s) O2 USB Headphone AMP
Power Supply Corsair HX850i
Mouse Logitech G502
Keyboard Cherry MX
Software Windows 11
Joined
Nov 11, 2016
Messages
3,393 (1.16/day)
System Name The de-ploughminator Mk-II
Processor i7 13700KF
Motherboard MSI Z790 Carbon
Cooling ID-Cooling SE-226-XT + Phanteks T30
Memory 2x16GB G.Skill DDR5 7200Cas34
Video Card(s) Asus RTX4090 TUF
Storage Kingston KC3000 2TB NVME
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
Across the stack, this generation can be summed up by:
  • AMD: 5-10% faster raster at ~5% lower prices
  • Nvidia: 50-200% faster RT/AI/DLSS
  • All of AMD's buyers (at a historic low of ~5% of the discrete consumer GPU market): flooding all websites and forums with "WHY WON'T YOU BUY AMD IT'S 5-10% FASTER FOR 5% CHEAPER YOU MUST BE A BLIND NVIDIA FANBOY NOBODY USES RT/AI/DLSS THEY'RE SCAMS REEEEEEEEEEEEEEEEE".

Even worse, UE5, which is touted as the next generation of game engines, requires upscaling to be playable (a 50% boost to FPS is hard to ignore when all UE5 games run poorly).
 
Joined
Jan 8, 2017
Messages
9,402 (3.29/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
The MCM design works in compute; it's a different ball game in graphics. That's why there's not that much difference between the 6x00 and 7x00 generations.
This has nothing to do with MCM at the moment; the GCD is still monolithic, and Navi31 simply hasn't added all that many more CUs.
 
Joined
Jun 1, 2010
Messages
368 (0.07/day)
System Name Very old, but all I've got ®
Processor So old, you don't wanna know... Really!
After the bad experience with chiplets, is it really a good idea to move on with them?
Chiplets for graphics are not new. AMD has been using them in the enterprise space since a year before RDNA3's launch, and if the product were flawed, that market wouldn't tolerate it. So if AMD managed to sell these like hotcakes, the architecture and the execution must be sturdy and reliable.

You see, AMD took the approach of designing the top architecture and then deriving product ranges from it. This is the VAG of silicon: EPYC/Instinct is MAN/Bugatti, while Ryzen and Radeon are the A6, Passat, Golf and Polo, and Threadripper/Radeon Pro sit somewhere between Rolls-Royce and Crafter. And it's a brilliant strategy, to be honest, which is why AMD holds on to it so tightly: it brought them a fortune they never had before. That's why I think AMD isn't going to cut the MCM/MCD design for consumer-grade cards (with the possibility of the lower-tier chips joining the MCM design as well), but will improve it instead, akin to how Intel persists with Arc. It's much cheaper and easier to keep the same approach across all products and just rectify the issues than to dedicate budget to developing a separate architecture.

So I guess that although the CDNA and RDNA architectures are different, the ideas, technology, design and execution may have much in common, minus the video output.
Thus the problem might lie specifically in maintaining it for "multipurpose"/gaming use, where frequencies are higher and load is variable, so the strain on the hardware is not constant and can easily exceed the chip/link capabilities during load spikes. These are just a layman's speculations, but I hope you get the point.
 
Joined
May 13, 2016
Messages
87 (0.03/day)
Just a matter of years to have a bit of hope, then?
The GPU market has been in such a sad state for the last ~5 years...
 
Joined
Jul 13, 2016
Messages
3,258 (1.07/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage Too much
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) Topping DX5, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse G305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
Doesn't assembly of chiplets also cost quite a bit, and isn't it a more expensive production practice than simply putting a single die onto the interposer/PCB?

Also, can they compensate by using faster architectures on older/cheaper processes?
I still don't know why they haven't released a pipecleaner: a 150 mm^2 GPU built on the newer TSMC N4 or TSMC N3 processes.

Chiplets have an added assembly cost that depends on just how advanced the packaging is. The packaging used for AMD's CPUs, for example, where it's just a dumb interposer, is very cheap. A step up from that is the organic substrate used for AMD's 7000-series GPUs; it increases the bandwidth of the link between chiplets while keeping packaging costs in check, as the substrate itself is still dumb (no logic). Those two are pretty economical. A big step above them cost-wise is CoWoS, which is the most expensive option here but likewise provides the most bandwidth. You see this kind of packaging used by AMD and Nvidia in the enterprise space, where it connects the die and the HBM, or in AMD's case all the chiplets and the HBM.

The cost overhead of chiplets is vastly outweighed by the cost savings. By splitting a design into smaller chiplets you increase the number of good chips yielded per wafer. The exact increase depends on the defect density of the node, and the higher the defect density, the greater the benefit chiplets bring. Even at TSMC 5nm's defect density of ~0.1 per square cm, the number of additional chips yielded is significant, let alone on 3nm, where TSMC is currently having yield issues. This goes triple for GPUs, which are huge dies that disproportionately benefit from the disaggregation chiplets bring.

In essence you are weighing the cost of wasted silicon against the added cost of a silicon interposer. I managed to find an estimate from 2018 which places the cost between $30 (for a medium-sized chip) and $100 (for an interposer a multiple of the reticle size, stitched together): https://www.frost.com/frost-perspectives/overview-interposer-technology-packaging-applications/

AMD's desktop CPUs fall below that lower figure, and flagship GPUs (600 mm^2+) likely sit above the middle, at $70-80. I would not be surprised if those costs have gone down for dumb interposers since that was published (not CoWoS though, which is in high demand).

Also consider that chiplets let you use older processes for certain parts of the chip for additional savings, and that you only have to design the chiplets once for them to be reused in every SKU in your lineup; both influence total cost to manufacture in a positive way.
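To put rough numbers on the yield argument, here is a minimal sketch using the simple Poisson defect model (yield = e^(-area x D0)). The 600 mm^2 monolithic die versus four 150 mm^2 chiplets is a hypothetical illustration, not actual AMD die sizes:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Standard gross-die estimate: wafer area / die area, minus an edge-loss term
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, d0_per_cm2):
    # Poisson defect model: fraction of dies that catch zero defects
    return math.exp(-(die_area_mm2 / 100) * d0_per_cm2)

D0 = 0.1  # defects per cm^2, the 5nm figure quoted above

# Hypothetical split: one 600 mm^2 monolithic die vs. four 150 mm^2 chiplets
for label, area in [("monolithic 600 mm^2", 600), ("chiplet 150 mm^2", 150)]:
    gross = dies_per_wafer(area)
    y = poisson_yield(area, D0)
    print(f"{label}: {gross} gross dies, {y:.0%} yield, ~{gross * y:.0f} good dies")
```

On those assumptions the monolithic die yields ~55% (about 49 good dies per wafer), while each small chiplet yields ~86% (about 358 good dies, i.e. roughly 89 four-chiplet GPUs' worth): nearly double the sellable silicon per wafer, before the interposer cost is subtracted.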
 
Joined
Apr 18, 2019
Messages
2,328 (1.15/day)
Location
Olympia, WA
System Name Sleepy Painter
Processor AMD Ryzen 5 3600
Motherboard Asus TuF Gaming X570-PLUS/WIFI
Cooling FSP Windale 6 - Passive
Memory 2x16GB F4-3600C16-16GVKC @ 16-19-21-36-58-1T
Video Card(s) MSI RX580 8GB
Storage 2x Samsung PM963 960GB nVME RAID0, Crucial BX500 1TB SATA, WD Blue 3D 2TB SATA
Display(s) Microboard 32" Curved 1080P 144hz VA w/ Freesync
Case NZXT Gamma Classic Black
Audio Device(s) Asus Xonar D1
Power Supply Rosewill 1KW on 240V@60hz
Mouse Logitech MX518 Legend
Keyboard Red Dragon K552
Software Windows 10 Enterprise 2019 LTSC 1809 17763.1757
With skyrocketing manufacturing costs accompanied by minimal improvements ?

A Multi-GCD design is the most important thing AMD could bring out to be more competitive. Instead of developing 5-6 chips, a single block (GCD) would serve all segments, simply by putting these chips together. Billions would be saved in the process.

But it's obvious that such a design needs to drastically change the graphics processing model.


"The new patent (PDF) is dated November 23, 2023, so we'll likely not see this for a while. It describes a GPU design radically different from its existing chiplet layout, which has a host of memory cache dies (MCD) spread around the large main GPU die, which it calls the Graphics Compute Die (GCD). The new patent suggests AMD is exploring making the GCD out of chiplets instead of just one giant slab of silicon at some point in the future. It describes a system that distributes a geometry workload across several chiplets, all working in parallel. Additionally, no "central chiplet" is distributing work to its subordinates, as they will all function independently."
This is seriously starting to look like those decade-plus-old "far future AMD roadmap" leaks were true...
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.67/day)
Location
Ex-usa | slava the trolls
They won't release lower-end parts on N4 and N3 simply because the profit margins for those lower-end, smaller dies don't merit the high cost AMD pays TSMC for the flagship nodes.

Now or never? Now - "maybe". If never, it's game over for AMD.

Bleeding-edge manufacturing nodes, and the price-bidding war to win allocation, mean that N4 and N3 are an order of magnitude more expensive than the interposer/assembly costs.

The only things I see AMD bleeding are the performance left on the table because the chiplets are too slow, and the market share loss connected to it.
It's about making decisions.

Maybe AMD should put its profit margins on the table and instead start thinking about the gamers?
 
Joined
Feb 20, 2019
Messages
8,209 (3.93/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.

Chiplets for graphics are not new. AMD has been using them in the enterprise space since a year before RDNA3's launch, and if the product were flawed, that market wouldn't tolerate it. So if AMD managed to sell these like hotcakes, the architecture and the execution must be sturdy and reliable.
GPU compute for the datacenter and AI isn't particularly latency sensitive, so the latency penalty of a chiplet MCM approach is almost irrelevant and the workloads benefit hugely from the raw compute bandwidth.

GPU for high-fps gaming is extremely latency-sensitive, so the latency penalty of chiplet MCM is 100% a total dealbreaker.

AMD hasn't solved/evolved the inter-chiplet latency well enough for them to be suitable for a real-time graphics pipeline yet, but that doesn't mean they won't.
 
Joined
Dec 25, 2020
Messages
6,597 (4.67/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG MAXIMUS Z790 APEX ENCORE
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard Generic PS/2
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
nGreedia is a bubble, and one day it will burst. I am VERY much looking forward to that, whether it be a year from now or 10.

You may be disappointed to hear that by the time any such bubble pops, they will still be a multi-trillion-dollar corporation.

$NVDA is priced as it is because they provide both the hardware and the software tools for AI companies to develop their products. OpenAI, for example, is a private corporation (similar to Valve), and AI is widely considered to be in its infancy. If there's one lesson here, it's not to mock a solid ecosystem.
 
Joined
Feb 20, 2019
Messages
8,209 (3.93/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
Now or never? Now - "maybe". If never, it's game over for AMD.
Never, for sure.
It's simply a question of cost: low-end parts need to be cheap, which means using expensive nodes for them makes absolutely zero sense.

I can confidently say that it has never happened in the entire history of AMD graphics cards, going back to the early ATi Mach cards 35 years ago!
Look at the column for manufacturing node: the low end of each generation is always last year's product rebranded, or - if it's actually a new product rather than a rebrand - it's always on an older process node to save money.

So yes, please drop it. I don't know how I can explain it any more clearly to you. Low-end parts don't get made on top-tier, expensive, flagship manufacturing nodes, because it's simply not economically viable. Companies aiming to make a profit will not waste their limited flagship-node wafer allocation on low-end shit - that would be corporate suicide!

If Pirelli came across a super-rare, super-expensive, extra-sticky rubber that existed only in limited quantity, they could use it to make 1,000 of the best Formula 1 racing tyres ever seen and give their brand a huge marketing boost, OR they could waste it making 5,000 more boring, cheap, everyday tyres for commuter workhorse cars like your grandma's Honda Civic.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.67/day)
Location
Ex-usa | slava the trolls
Never, for sure.
It's simply a question of cost: low-end parts need to be cheap, which means using expensive nodes for them makes absolutely zero sense.

I can confidently say that it has never happened in the entire history of AMD graphics cards, going back to the early ATi Mach cards 35 years ago!
Look at the column for manufacturing node: the low end of each generation is always last year's product rebranded, or - if it's actually a new product rather than a rebrand - it's always on an older process node to save money.

So yes, please drop it. I don't know how I can explain it any more clearly to you. Low-end parts don't get made on top-tier, expensive, flagship manufacturing nodes, because it's simply not economically viable. Companies aiming to make a profit will not waste their limited flagship-node wafer allocation on low-end shit - that would be corporate suicide!

What is expensive today will not necessarily be expensive tomorrow. Wafer prices fall; N4 will be an ancient technology in 5 or 10 years.
Saying never means you must have an alternative in mind. What is it? Making the RX 7600 on 6nm for 20 more years?

 
Joined
Sep 10, 2018
Messages
6,868 (3.05/day)
Location
California
System Name His & Hers
Processor R7 5800X/ R7 7950X3D Stock
Motherboard X670E Aorus Pro X/ROG Crosshair VIII Hero
Cooling Corsair h150 elite/ Corsair h115i Platinum
Memory Trident Z5 Neo 6000/ 32 GB 3200 CL14 @3800 CL16 Team T Force Nighthawk
Video Card(s) Evga FTW 3 Ultra 3080ti/ Gigabyte Gaming OC 4090
Storage lots of SSD.
Display(s) A whole bunch OLED, VA, IPS.....
Case 011 Dynamic XL/ Phanteks Evolv X
Audio Device(s) Arctis Pro + gaming Dac/ Corsair sp 2500/ Logitech G560/Samsung Q990B
Power Supply Seasonic Ultra Prime Titanium 1000w/850w
Mouse Logitech G502 Lightspeed/ Logitech G Pro Hero.
Keyboard Logitech - G915 LIGHTSPEED / Logitech G Pro
You may be disappointed to hear that by the time any such bubble pops they will remain a multi trillion corporation.

$NVDA is priced as is because they provide both the hardware and software tools for AI companies to develop their products. OpenAI for example is a private corporation (similar to Valve), and AI is widely considered to be in its infancy. It's the one lesson not to mock a solid ecosystem.

Yeah, even with this generation being one of the worst for Nvidia from a price-to-performance standpoint, they are still obliterating AMD in gaming revenue while really only focusing on AI, although at least with Nvidia some of that trickles down to their gaming cards.

Nvidia left the door wide open this generation for AMD, and they were like "nah, we love being stuck at an insignificant % of the market". It's the total opposite of how AMD handled Zen.

We need both these companies pushing each other to make better products, but if RDNA5 is a bust like RDNA3, I'm not sure CDNA can save the whole GPU side at AMD... Maybe we are just seeing the ceiling for an AMD-branded GPU, regardless of how good a product AMD makes.

Who knows, maybe Nvidia will open the door even wider next generation; I've been hearing ~$1,200 for a 5080 that only offers 4090 performance with less VRAM, which would be a pretty terrible product.
 
Joined
Feb 20, 2019
Messages
8,209 (3.93/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
What is expensive today will not necessarily be expensive tomorrow. Wafer prices fall; N4 will be an ancient technology in 5 or 10 years.
Saying never means you must have an alternative in mind. What is it? Making the RX 7600 on 6nm for 20 more years?

Ohhh, you mean on N4 once N4 is old and cheap?
Sure, that'll eventually happen. That's where N6 is right now - but it's not relevant to this discussion, is it?
 
Joined
Dec 25, 2020
Messages
6,597 (4.67/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG MAXIMUS Z790 APEX ENCORE
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard Generic PS/2
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
What is expensive today will not necessarily be expensive tomorrow. Wafer prices fall; N4 will be an ancient technology in 5 or 10 years.
Saying never means you must have an alternative in mind. What is it? Making the RX 7600 on 6nm for 20 more years?


But bruh, who will be interested in a 7600 ten years from now? Chrispy is right on this one. It just makes no sense.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.67/day)
Location
Ex-usa | slava the trolls
But bruh, who will be interested in a 7600 ten years from now? Chrispy is right on this one. It just makes no sense.

He is right, but the point is that AMD will not be able to sell these cards. This is an unsustainable strategy, leading to a downward spiral: bleeding market share to the more popular competitor, then being forced to exit the market segment.
 
Joined
Dec 25, 2020
Messages
6,597 (4.67/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG MAXIMUS Z790 APEX ENCORE
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard Generic PS/2
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
He is right, but the point is that AMD will not be able to sell these cards. This is an unsustainable strategy, leading to a downward spiral: bleeding market share to the more popular competitor, then being forced to exit the market segment.

The low-end market is less sensitive to bleeding-edge technology. People here would actually rather get something tried and true, so it works out in the end. Using older nodes on lower-cost products is therefore a great idea.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.67/day)
Location
Ex-usa | slava the trolls
The low-end market is less sensitive to bleeding-edge technology. People here would actually rather get something tried and true, so it works out in the end. Using older nodes on lower-cost products is therefore a great idea.

The question is: when do you expect an RX 6600/RX 7600 owner to upgrade? Following this logic, never, or maybe in 7-10 years?
Is it fine for AMD to get such infrequent purchases from gamers? If so, then it's fine.

But it would mean that this niche market will not hold for many more years. There is no reason to upgrade.
 
Joined
Sep 10, 2018
Messages
6,868 (3.05/day)
Location
California
System Name His & Hers
Processor R7 5800X/ R7 7950X3D Stock
Motherboard X670E Aorus Pro X/ROG Crosshair VIII Hero
Cooling Corsair h150 elite/ Corsair h115i Platinum
Memory Trident Z5 Neo 6000/ 32 GB 3200 CL14 @3800 CL16 Team T Force Nighthawk
Video Card(s) Evga FTW 3 Ultra 3080ti/ Gigabyte Gaming OC 4090
Storage lots of SSD.
Display(s) A whole bunch OLED, VA, IPS.....
Case 011 Dynamic XL/ Phanteks Evolv X
Audio Device(s) Arctis Pro + gaming Dac/ Corsair sp 2500/ Logitech G560/Samsung Q990B
Power Supply Seasonic Ultra Prime Titanium 1000w/850w
Mouse Logitech G502 Lightspeed/ Logitech G Pro Hero.
Keyboard Logitech - G915 LIGHTSPEED / Logitech G Pro
The low end market is less sensitive to bleeding edge technology. People would actually rather get something tried and true here, so it works out in the end. Using earlier nodes on lower cost products is therefore a great idea.

Yeah, the RX 580 has the highest presence of any discrete AMD GPU in the hardware survey, and there are a bunch of crappy 50/60-class cards from Nvidia in the top 20; people buy whatever they can afford at the low end, regardless of how meh it is.
The question is: when do you expect an RX 6600/RX 7600 owner to upgrade? Following this logic, never, or maybe in 7-10 years?
Is it fine for AMD to get such infrequent purchases from gamers? If so, then it's fine.

But it would mean that this niche market will not hold for many more years. There is no reason to upgrade.

Most of AMD's low-end base is still on 580s; I think AMD would be happy with those users actually buying 7600s as it is.
 
Joined
Jan 14, 2019
Messages
12,334 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
  • Nvidia: 50-200% faster RT/AI/DLSS
Where do you get this number from? :wtf:

Now or never? Now - "maybe". If never, it's game over for AMD.



The only things I see AMD bleeding are the performance left on the table because the chiplets are too slow, and the market share loss connected to it.
It's about making decisions.

Maybe AMD should put its profit margins on the table and instead start thinking about the gamers?
Do you think chiplets are about gamers? Far from it. The post you replied to demonstrates that it's a cost-saving technique, nothing else: better yields on smaller chips, the ability to link chips made on different nodes, etc.
 
Joined
Nov 11, 2016
Messages
3,393 (1.16/day)
System Name The de-ploughminator Mk-II
Processor i7 13700KF
Motherboard MSI Z790 Carbon
Cooling ID-Cooling SE-226-XT + Phanteks T30
Memory 2x16GB G.Skill DDR5 7200Cas34
Video Card(s) Asus RTX4090 TUF
Storage Kingston KC3000 2TB NVME
Display(s) 48" LG OLED C4
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Death Adder v3
Keyboard Razor Huntsman V3 Pro TKL
Software win11
Yeah, even with this generation being one of the worst for Nvidia from a price-to-performance standpoint, they are still obliterating AMD in gaming revenue while really only focusing on AI, although at least with Nvidia some of that trickles down to their gaming cards.

Nvidia left the door wide open this generation for AMD, and they were like "nah, we love being stuck at an insignificant % of the market". It's the total opposite of how AMD handled Zen.

We need both these companies pushing each other to make better products, but if RDNA5 is a bust like RDNA3, I'm not sure CDNA can save the whole GPU side at AMD... Maybe we are just seeing the ceiling for an AMD-branded GPU, regardless of how good a product AMD makes.

Who knows, maybe Nvidia will open the door even wider next generation; I've been hearing ~$1,200 for a 5080 that only offers 4090 performance with less VRAM, which would be a pretty terrible product.

The duopoly must continue; Nvidia is pricing their gaming GPUs just high enough to make sure of that.

It's so easy for AMD and Nvidia to figure out their competitor's minimum prices, given that they share the same chip manufacturer (TSMC), the same GDDR manufacturer (Samsung), and the same PCB manufacturers (AIBs).

Who knows, perhaps Nvidia will charge higher margins next gen, just so Radeon can improve their terrible margins.
 
Joined
Sep 10, 2018
Messages
6,868 (3.05/day)
Location
California
System Name His & Hers
Processor R7 5800X/ R7 7950X3D Stock
Motherboard X670E Aorus Pro X/ROG Crosshair VIII Hero
Cooling Corsair h150 elite/ Corsair h115i Platinum
Memory Trident Z5 Neo 6000/ 32 GB 3200 CL14 @3800 CL16 Team T Force Nighthawk
Video Card(s) Evga FTW 3 Ultra 3080ti/ Gigabyte Gaming OC 4090
Storage lots of SSD.
Display(s) A whole bunch OLED, VA, IPS.....
Case 011 Dynamic XL/ Phanteks Evolv X
Audio Device(s) Arctis Pro + gaming Dac/ Corsair sp 2500/ Logitech G560/Samsung Q990B
Power Supply Seasonic Ultra Prime Titanium 1000w/850w
Mouse Logitech G502 Lightspeed/ Logitech G Pro Hero.
Keyboard Logitech - G915 LIGHTSPEED / Logitech G Pro
The duopoly must continue; Nvidia is pricing their gaming GPUs just high enough to make sure of that.

It's so easy for AMD and Nvidia to figure out their competitor's minimum prices, given that they share the same chip manufacturer (TSMC), the same GDDR manufacturer (Samsung), and the same PCB manufacturers (AIBs).

Who knows, perhaps Nvidia will charge higher margins next gen, just so Radeon can improve their terrible margins.

Yeah, I would really like to see a BOM cost; if it was high, it would make me feel better, lol.
 