
AMD RDNA 5 a "Clean Sheet" Graphics Architecture, RDNA 4 Merely Corrects a Bug Over RDNA 3

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.60/day)
Location
Ex-usa | slava the trolls
Still curious how they're doing RDNA4.

Navi 44 is a small chip, max 250 mm², the successor to the RX 5700 XT (Navi 10, 251 mm²), RX 6600 XT (Navi 23, 237 mm²), and RX 7600 XT (Navi 33, 204 mm²).
Expected performance is between the Radeon RX 7700 XT and Radeon RX 7800 XT.

Navi 48 is a larger chip, max 350 mm², the successor to the RX 6700 XT (Navi 22, 335 mm²) and RX 7700 XT / RX 7800 XT (Navi 32, 346 mm²).
Expected performance is between the Radeon RX 7900 XT and Radeon RX 7900 XTX.

Makes sense to expect a price tag under $800. The RX 7900 XT is $690, the RX 7900 XTX is $820.

AMD is going monolithic instead of MCD for this gen, but at the same time, if they're promising big performance/$ improvements, that would be the opposite of monolithic.

They must sacrifice their extremely large profit margins in order to make a few more sales.
AMD's current position doesn't allow both at the same time. It's either one or the other.


 
Joined
Jan 14, 2019
Messages
12,612 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
$500? It's always been $700. Why are you mad about $800? $900 is such a good deal!

Seems to me we're gonna get an entire generation of ReBrandeon, with a "fix" that I'd bet decent money doesn't actually improve performance beyond margin of error. Kinda like GCN 1.1/1.2/1.4.


So given the way AMD behaved with the RX 7000s, expect it to launch at $799 and drop below $700 after a year of nobody buying it.


*the answer is whatever the consumer will support.
I wouldn't be mad about that, either. Both Nvidia and AMD have been rebranding the same stuff made on smaller processes for like 3 generations now. We've only been getting more CUs/SMs for proportionally more money.

I apologize for a lot of repetitive blah-blah-blah! Maybe I shouldn't post this, but...

It seems RDNA4 is long overdue. Ideally, this is what RDNA3 should have been, instead of the half-baked product that was released. This is important because not only dGPUs are based on it, but iGPUs as well, and the latter need these improvements even more. Things are worse because APUs/mobile have been stuck with RDNA 3/3.5 for a long time. It means the laptops coming in the next couple of years will still carry outdated iGPUs that should have been replaced by RDNA4 at least a year ago. And yet they advertise this as an achievement.

Power consumption is the biggest threat right now, I would say even bigger than the inferiority of the upscaling and RTRT technologies. No matter how much someone uses upscaling, it won't fix the situation if the card is a power hog. It's completely clear that the primary market for any company in the world is the US, where a company like AMD sells the absolute mass of its products, and where people don't count or think about the amount of electric power being used. Especially when it comes to gamers.
So why does it matter? Because, once again, the iGPUs use the same µarch, just scaled down/limited to a few CUs. If the efficiency is bad on desktop, it will be just as bad in the smaller iGPU/handheld/mobile GPUs. And sadly, the inefficiency is not only the result of an inferior node, but of an inferior design as well. Not to mention that there are ways to reduce the rendering load without dumping quality, instead of brute-forcing it, and this part relies heavily on the software side, which sadly still lags behind.

But enough complaints; they won't fix the situation. It would still be great if these delays come to good fruition and lead to an abundance of powerful, improved, energy-efficient products, even if they won't have the performance of next-gen nVidia's top solutions. But I'm not holding my breath.

Exactly. It's obvious that nVidia pulls its pricing out of thin air, just to see what threshold it can get away with. And people paid it, thus setting the pricing in stone, forever. The pricing was doomed the day the RTX 2080 was announced. AMD has no incentive to change this, as they position themselves as a "premium" brand. There can be no value-oriented market segment, and especially no value-oriented pricing for such a segment, when every market participant (except Intel, for now) is a "premium" company.
This seems like a great business model, where the consumer is pressed against the wall and has no choice but to pay extortionate prices, because both "rivals" set their SKUs and pricing virtually the same. A "parity" with no losers, except the end user.

$700 is clearly unacceptable for a mid-range GPU, even with inflation. Why mid-range? Because, by all the rumors, RDNA4 will have no high-end SKUs, just a refinement of RDNA3 in a Rebrandeon RX 8000. So AMD just wants to stretch the maximum price nVidia sets down onto their "top" mid-range solutions, while submitting little effort and maintaining the "GPU underdog" image to justify their greed and laziness. This is ridiculous.
At least nVidia still keeps up some pretense that they "care" about "gamers", and still puts some fraction of its R&D budget into consumer GeForce. AMD does none of this, while asking for as much money.
But this began a while ago. Remember when AMD had to bring down RX 5700 prices due to the outrage? They wanted to gouge people the way nVidia had a while back, with the release of the very first RDNA1: a completely new, raw µarch that had a lot of bugs and issues. But they somehow had the hubris to set price tags as if it were flawless.
The problem with AMD is not that they have bad products. The problem is that they know nVidia's pricing is delusional, but still comply with it. AMD gets bashed more because they follow suit, and thus lose public image and credibility. And it hurts them more than it hurts nVidia, because repeating an immoral move is even worse than committing it in the first place (which can at least be explained away as an "unintentional" mistake).

No matter what people say, CPUs and server are not the only branches AMD has profited a lot from. Their GPUs still made them tons of money. But Radeon is yet to get the same R&D treatment as Ryzen.
People say that AMD has no money to put into better R&D, and that they cannot compete with the scale of nVidia's budget. But everyone forgets that AMD made its first Ryzen while on the verge of bankruptcy. The absolute end, with zero financial backup. They had no additional sources of income; it was all-in.
I don't say they must put the whole budget into consumer Radeon. That's stupid. But it would be a great move to improve RTG, as it can be an additional source of income. It's impossible to gain without the input.
And the consumer/gaming branch's poor sales are not because people are uninterested in Radeon products, but because the pricing is atrocious. It's not that people prefer nVidia; it's that doubling down on margins alone is destructive for the economy and for the company's health. If they don't see this now, then nothing will help.

A couple of bucks below the nVidia counterparts, just to create an illusion of competition.

I dunno where the 7900 XT is sold for less than $799. Here, around 40 stores all sell it for $1,000. There's no competition.

But yeah, this time AMD has to put at least some effort into the consumer market. Otherwise the Radeon branch just looks like a placeholder alongside Enterprise.
And nope, I guess AMD won't sell the 7900 XT for 7800 XT money, since they have already set this pricing by shifting the entire stack one class up. I mean, the 7900 XT just cries that it is really a 7800 XT, and what they call the "7800 XT" is really a 7700 XT instead, with the 7900 GRE being the 7800 non-XT, the 7600 being a 7500, and the 7700 XT being the true 7600 XT. Only the 7900 XTX holds its top-SKU moniker, just with an excessive "X" at the end. So why would they do us the favor of undercutting themselves by "gifting" previous-gen top performance at "mid-end" prices? This is the new era.
But the true cause behind the atrocious pricing is that AMD is knee-deep in the top-profit-margin market of AI and enterprise. They won't make any concessions, no matter what.

However, the whole price/class-shifting shenanigans have in reality done more of a disservice than a help. What I mean is that generational uplift used to come from each next-gen SKU one tier lower delivering the same or more performance than the higher-tier SKU of the previous gen. In other words, the same-class SKU of the next gen should show a performance uplift. Otherwise it makes no sense, and is both counter-productive and counter-evolutionary.

But AMD has really shot themselves in both feet by sticking to these scammy shenanigans, especially after nVidia backpedaled on its dumb "4080 12GB" naming. What AMD has done is even worse. It not only makes the products less attractive price-wise, but also completely abolishes the whole generational uplift by making it look twice as bad. If they had stuck with the naming mentioned above, the performance growth would be real, and way greater than it appears. By shifting everything one tier up, the entire stack lost its performance improvements altogether.

The reasoning behind this is clear: AMD wanted to increase its profit margins. But raising prices without naming the SKUs accordingly would have made the pricing an unreasonable, blatant rip-off, and could have exposed their true pursuit of nVidia-like behavior, which might have ended in public outrage. So, here we are.

The entire point of pricing a VGA at a grand and above is that these cards were meant for prosumers, with the GPU maker (nVidia in this case) extracting maximum margins out of them. JHH himself said as much while announcing the RTX lineup of GeForce, calling it a "holy grail" for developers. Both the 2080 Ti and the Titan RTX (the true uncut 2080 Ti, rather than a Titan) were positioned as products for content creators and designers. Sort of a Quadro for the "poor".
So it's obvious that the ordinary "gamur" Joe/Jane doesn't need to waste this ungodly amount of hard-earned money on a device whose price was set and inflated artificially and unreasonably. Because they (nVidia) can.

Indeed. There is a reason why AMD's market cap is now almost twice as big as Intel's. Everyone says it's the server/data center and CPU branch. But everyone forgets that both nVidia and AMD made a fortune selling shiploads of cards directly to miners, and then used the convenient "scalper" scapegoat as a reason to keep prices inflated forever. Their true virtue signaling was claiming they were unable to do anything about it, when it was clear they could have enforced sale restrictions, or even taken the entire supply under their own control and sold cards directly from their very own store. But they didn't.
Intel was simply late with their miner-specific ASIC hardware, so they switched to AI instead, before anything came to market. They just had to hold onto Arc development, as it's now their bread.

I'm sure a huge portion of AMD's current wealth is due to one single non-stop covid-mining rampage. And both GPU makers had no incentive to return their prices to sane levels, as both were already benefiting from selling their entire stock into compute/data-center tasks, to which miners/crypto are directly related.

That's why, instead of investing in the consumer market, they just invented an AI "surge" to justify their wish to keep selling all silicon, including consumer/gaming-grade graphics cards, to non-consumer compute markets. AMD is maybe a bit more open about it, having outright dropped the consumer market in favor of AI/enterprise, while nVidia just creates a whole lot of different tricks to rebadge its gamer GPUs as "compute" cards and circumvent the restrictions.

That is the reason they are all stuffing useless "AI" features into every GPU and CPU: just to inflate the price and ride this "AI" train while it's still going full steam. Some even try to "coordinate" the independent/open AI endeavours.

They don't even try. I might be wrong, but it may happen that AMD designs RDNA5 with the main intention of using it later in workstation cards. They aren't trying to improve the RDNA cards for consumer/gamer market needs. They used to have CDNA based on Vega/GCN, so why not use the chip for workstations and sell the binned chips as the consumer RX series.
Now AMD just puts in the minimal effort needed for Radeon to continue to "exist" (which, BTW, helps nVidia avoid "monopoly" status) and to serve as R&D and a test ground for future iGPU products used in APUs and consoles, since those two and OEM are the next most important areas after AI/enterprise. The DIY market is sadly dead.
Why would AMD, a for-profit company, not follow Nvidia's pricing? They'd be stupid not to.

The DIY market has been dead forever according to some, yet, here we are with DIY PCs in 2024. No reason for unwarranted negativity as long as we're reading news about future desktop architectures, imo.
 

wolf

Better Than Native
Joined
May 7, 2007
Messages
8,249 (1.28/day)
System Name MightyX
Processor Ryzen 9800X3D
Motherboard Gigabyte X650I AX
Cooling Scythe Fuma 2
Memory 32GB DDR5 6000 CL30
Video Card(s) Asus TUF RTX3080 Deshrouded
Storage WD Black SN850X 2TB
Display(s) LG 42C2 4K OLED
Case Coolermaster NR200P
Audio Device(s) LG SN5Y / Focal Clear
Power Supply Corsair SF750 Platinum
Mouse Corsair Dark Core RGB Pro SE
Keyboard Glorious GMMK Compact w/pudding
VR HMD Meta Quest 3
Software case populated with Arctic P12's
Benchmark Scores 4k120 OLED Gsync bliss
I'd have expected a new name if it's a clean sheet, i.e. not RDNA#. Still, it's welcome; the changes between generations, while undeniably present, have yielded minimal IPC gains, and the play was to rely on nodes, clocks, and pushing power consumption skyward. RDNA3 did a bit to improve RT, but it seems that chiplets were the stumbling point.
 
Joined
Jan 14, 2019
Messages
12,612 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
I'd have expected a new name if it's a clean sheet, i.e. not RDNA#. Still, it's welcome; the changes between generations, while undeniably present, have yielded minimal IPC gains, and the play was to rely on nodes, clocks, and pushing power consumption skyward. RDNA3 did a bit to improve RT, but it seems that chiplets were the stumbling point.
RDNA 5 could still get a new name, as far as I understand.

As for RDNA 3, it only had minor changes in its RT engine compared to RDNA 2, but RDNA 4's RT will be a completely new design. Combined with RDNA 3-like CUs, I'm quite excited to see what it's capable of, despite all the negativity in this thread. :)
 
Joined
Sep 10, 2018
Messages
6,996 (3.04/day)
Location
California
System Name His & Hers
Processor R7 5800X/ R7 7950X3D Stock
Motherboard X670E Aorus Pro X/ROG Crosshair VIII Hero
Cooling Corsair h150 elite/ Corsair h115i Platinum
Memory Trident Z5 Neo 6000/ 32 GB 3200 CL14 @3800 CL16 Team T Force Nighthawk
Video Card(s) Evga FTW 3 Ultra 3080ti/ Gigabyte Gaming OC 4090
Storage lots of SSD.
Display(s) A whole bunch OLED, VA, IPS.....
Case 011 Dynamic XL/ Phanteks Evolv X
Audio Device(s) Arctis Pro + gaming Dac/ Corsair sp 2500/ Logitech G560/Samsung Q990B
Power Supply Seasonic Ultra Prime Titanium 1000w/850w
Mouse Logitech G502 Lightspeed/ Logitech G Pro Hero.
Keyboard Logitech - G915 LIGHTSPEED / Logitech G Pro
RDNA 5 could still get a new name, as far as I understand.

As for RDNA 3, it only had minor changes in its RT engine compared to RDNA 2, but RDNA 4's RT will be a completely new design. Combined with RDNA 3-like CUs, I'm quite excited to see what it's capable of, despite all the negativity in this thread. :)

I'm excited because all new hardware is exciting to me; at the same time, AMD's rumored gameplan is pretty disappointing.

Going by the interwebs, RDNA3 has done pretty terribly overall, with AMD almost dismissing it as a low-margin product they don't really care about. At the very least, their gaming division, which includes consoles, isn't doing very well going by the latest earnings call, tanking something like 50%.

 

wolf

Better Than Native
Joined
May 7, 2007
Messages
8,249 (1.28/day)
System Name MightyX
Processor Ryzen 9800X3D
Motherboard Gigabyte X650I AX
Cooling Scythe Fuma 2
Memory 32GB DDR5 6000 CL30
Video Card(s) Asus TUF RTX3080 Deshrouded
Storage WD Black SN850X 2TB
Display(s) LG 42C2 4K OLED
Case Coolermaster NR200P
Audio Device(s) LG SN5Y / Focal Clear
Power Supply Corsair SF750 Platinum
Mouse Corsair Dark Core RGB Pro SE
Keyboard Glorious GMMK Compact w/pudding
VR HMD Meta Quest 3
Software case populated with Arctic P12's
Benchmark Scores 4k120 OLED Gsync bliss
As for RDNA 3, it only had minor changes in its RT engine
Didn't RDNA3 also get the dual-issue shader ALUs like Ampere did? Or am I thinking of a different gen... Either way, RDNA4 and "RDNA5" will both be interesting in their own right.
despite all the negativity in this thread. :)
Yes, it's unfortunate but not unexpected. People actively choose to be this negative and seemingly pour effort into it, to what end I'm not sure. These corporations, their products and actions seem to live rent-free in a few minds around here.
 
Joined
Nov 22, 2023
Messages
228 (0.57/day)
Didn't RDNA3 also get the dual-issue shader ALUs like Ampere did? Or am I thinking of a different gen... Either way, RDNA4 and "RDNA5" will both be interesting in their own right.

-"Dual Issue" was always sort of a red herring. Ampere and Ada's dual issue didn't really do much of anything either, any gains off the former gen were due to more SMs and higher clocks than anything.

Same with RDNA3 vs 2, kinda performs exactly like you'd think it would with the additional CUs.
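To put a rough number on why doubled FP32 peak rarely shows up in games, here's a minimal back-of-the-envelope sketch in Python. The pairing fraction is a made-up illustrative figure, not a measured one; both RDNA3's VOPD and Ampere's second FP32 pipe have instruction-pairing restrictions that keep it well below 100% in real shaders:

```python
# Toy model: if only a fraction f of FP32 instructions can be co-issued,
# paired instructions retire two per cycle and the rest one per cycle:
#   cycles = (1 - f) * N + (f * N) / 2  =>  speedup = 1 / (1 - f / 2)

def dual_issue_speedup(f: float) -> float:
    """Throughput gain over single-issue when a fraction f of ops pair up."""
    return 1.0 / (1.0 - f / 2.0)

for f in (0.0, 0.2, 0.5, 1.0):
    print(f"pairable fraction {f:4.0%} -> {dual_issue_speedup(f):.2f}x")

# Peak TFLOPS assumes f = 1.0 (a clean 2.00x), but at a hypothetical
# ~20% pairing rate the real shader gain is only ~1.11x -- which is why
# the extra SMs/CUs and higher clocks explain most of the actual uplift.
```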
 
Joined
May 3, 2018
Messages
2,881 (1.19/day)
Oh look, another leak that is diametrically opposed to the other 99.9% of leaks, but hey, this one must be true, right? Let me guess: the 8700XT is on average 2-5% faster than the 7700XT in raster and 4-6% faster in RT. Oh, and all those leaks about it being at 7900XT+ levels of raster, please ignore.

Also, if this is true, why wouldn't AMD offer an updated 8900XT/XTX with the fixes and the original 2x IC? That would provide a healthy boost to performance and give them something to fight the 5080 with, at least for a while until RDNA5.

I call total BS.
 
Joined
Dec 25, 2020
Messages
7,063 (4.83/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG Maximus Z790 Apex Encore
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Audio Device(s) Apple USB-C + Sony MDR-V7 headphones
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard IBM Model M type 1391405 (distribución española)
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
Indeed. There is a reason why AMD's market cap is now almost twice as big as Intel's.

And that, I reckon, is due to Intel's particularly awful performance, coupled with extreme CapEx on their foundry and on product research and development, rather than to a truly stellar financial performance by AMD. They'll have to ride the AI wave if they want to impress stockholders, and so far, AMD has done little towards that.

Still curious how they're doing RDNA4. Rumours were that AMD is going monolithic instead of MCD for this gen, but at the same time, if they're promising big performance/$ improvements, that would be the opposite of monolithic.

5120 shaders like the GRE has could be very powerful at 3 GHz with twice the Infinity Cache. The GRE is basically a bandwidth-starved 7900 XT.

The problem with focusing on performance-per-dollar improvements is that it can only mean they're making cards with lower performance than you'd normally expect from them, and offsetting it in price to make an attractive product rather than an all-around generational loser.

To stand out, RDNA 4 would need to be more than just cheap: it'd need to deliver at least current-generation performance at a much lower power footprint. The GRE you mentioned is a perfect example of a product you don't want to see in this next efficiency- and cost-focused generation: much like the RX 6800 before it, it's a byproduct of lower yields on the big N31 die, with a very significant number of disabled execution units and memory channels. It has 80 execution units (vs. 84 on the 7900 XT and 96 on the full XTX) and a quarter of its memory capacity, bandwidth, and attached cache disabled, meaning it loses a lot of performance while keeping a similar power footprint.

On paper it should still be much faster than the 7800 XT, since it has far more resources available, but it ain't, and there's your answer. Even at the bandwidth-friendly 1080p resolution, the fastest GRE model is only 4% faster than the 192-bit Nvidia card that has even less bandwidth available, and only 11% faster than the 7800 XT, which is much smaller and has a much lighter footprint.
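For anyone who wants to check the bandwidth argument, the arithmetic is simple: bus width divided by 8, times the effective data rate. The memory specs below are reference figures as I recall them (treat them as approximate; AIB models can differ), with the RTX 4070 SUPER standing in for "the 192-bit Nvidia card":

```python
# Bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (Gbps).
# Reference memory specs, approximate -- the point is the ordering.
cards = {
    "RX 7900 XT":     (320, 20.0),
    "RX 7800 XT":     (256, 19.5),
    "RX 7900 GRE":    (256, 18.0),
    "RTX 4070 SUPER": (192, 21.0),
}

for name, (bus_bits, gbps) in cards.items():
    print(f"{name:14s}: {bus_bits / 8 * gbps:5.0f} GB/s")

# RX 7900 XT    :   800 GB/s
# RX 7800 XT    :   624 GB/s
# RX 7900 GRE   :   576 GB/s
# RTX 4070 SUPER:   504 GB/s
# The GRE ends up with less bandwidth than the smaller 7800 XT, despite
# having 20 more CUs to feed -- hence "bandwidth-starved 7900 XT".
```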



In all reality, this is precisely where AMD screwed up with Navi 31... it doesn't scale. It seems to have an architectural bottleneck somewhere, or some resource utilization issue stemming either from a hardware bug or from a very severe software issue you should not hope AMD can fix. The RX 7900 XTX is, on paper, faster than the RTX 4090 in almost every theoretical metric, yet in real-life use cases it languishes around the much leaner AD103 (which is closer to Navi 32 in size and scope - just not price).
 
Joined
Nov 4, 2005
Messages
12,019 (1.72/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs, 24TB Enterprise drives
Display(s) 55" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
And that, I reckon, is due to Intel's particularly awful performance, coupled with extreme CapEx on their foundry and on product research and development, rather than to a truly stellar financial performance by AMD.

[...]

In all reality, this is precisely where AMD screwed up with Navi 31... it doesn't scale. It seems to have an architectural bottleneck somewhere, or some resource utilization issue stemming either from a hardware bug or from a very severe software issue you should not hope AMD can fix. The RX 7900 XTX is, on paper, faster than the RTX 4090 in almost every theoretical metric, yet in real-life use cases it languishes around the much leaner AD103 (which is closer to Navi 32 in size and scope - just not price).


Did you bother to look at the performance of the 4070 vs. 4080 vs. 4090?

The 4070, Super, Ti, extra super duper, Super Ti, Ti special edition are all swipes at the 7900 XT, and all fail at whichever metric you want to compare.

7900 GRE: cheaper, as fast in most games.

7900 XTX: between 24-38% faster at RT, and can be found on sale for less than $180 more.

Point out where it doesn't scale at resolution? A single image from a single review.

[attachments: relative performance charts]



Look how shitty the 4070 is at RT. Merely 4% faster than the 7900GRE

[attachment: ray tracing performance chart]
 

Solaris17

Super Dainty Moderator
Staff member
Joined
Aug 16, 2005
Messages
27,097 (3.83/day)
Location
Alabama
System Name RogueOne
Processor Xeon W9-3495x
Motherboard ASUS w790E Sage SE
Cooling SilverStone XE360-4677
Memory 128gb Gskill Zeta R5 DDR5 RDIMMs
Video Card(s) MSI SUPRIM Liquid X 4090
Storage 1x 2TB WD SN850X | 2x 8TB GAMMIX S70
Display(s) 49" Philips Evnia OLED (49M2C8900)
Case Thermaltake Core P3 Pro Snow
Audio Device(s) Moondrop S8's on Schiit Gunnr
Power Supply Seasonic Prime TX-1600
Mouse Razer Viper mini signature edition (mercury white)
Keyboard Monsgeek M3 Lavender, Moondrop Luna lights
VR HMD Quest 3
Software Windows 11 Pro Workstation
Benchmark Scores I dont have time for that.
A shame to hear but kind of expected, exciting either way! I do need a new AMD test card.
 
Joined
Dec 25, 2020
Messages
7,063 (4.83/day)
Location
São Paulo, Brazil
System Name "Icy Resurrection"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard ASUS ROG Maximus Z790 Apex Encore
Cooling Noctua NH-D15S upgraded with 2x NF-F12 iPPC-3000 fans and Honeywell PTM7950 TIM
Memory 32 GB G.SKILL Trident Z5 RGB F5-6800J3445G16GX2-TZ5RK @ 7600 MT/s 36-44-44-52-96 1.4V
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 500 GB WD Black SN750 SE NVMe SSD + 4 TB WD Red Plus WD40EFPX HDD
Display(s) 55-inch LG G3 OLED
Case Pichau Mancer CV500 White Edition
Audio Device(s) Apple USB-C + Sony MDR-V7 headphones
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Classic Intellimouse
Keyboard IBM Model M type 1391405 (distribución española)
Software Windows 11 IoT Enterprise LTSC 24H2
Benchmark Scores I pulled a Qiqi~
Did you bother to look at the performance of the 4070 vs. 4080 vs. 4090?

[...]

Point out where it doesn't scale at resolution? A single image from a single review.

You missed the point of my post entirely... and no, they're not swipes at any AMD product. The truth is that Nvidia has been selling Ada cards almost as if AMD didn't exist and Radeon GPUs were nothing but fairytales, and that's reflected in the products' street prices. Before I decided to upgrade (and I caved in, I admit - it bothered me that I had a 3-year-old GPU, even though it was a 3090, and the itch was killing me), I had always been an outspoken critic of Nvidia's pricing, because that's just the reality we were, and to some extent still are, facing: Nvidia charges whatever it wants because it has no competition; AMD is far too incompetent for that.

My post is about each processor and where it stands overall in the product stack, regardless of its cost as a finished, marketable product. I prepared this comparison with the specifications of either the highest-released product or the full chip available in each processor tier at a similar size and footprint, with data from the TPU database:

NVIDIA vs. AMD, tier by tier:

AD102 (RTX 4090): ~82 TFLOPS, 443 GPixel/s, 1.12 TTexel/s, 384-bit, 72M cache, 176 ROPs/512 TMUs, ~609 mm², 450 W
N31 (7900 XTX): ~62 TFLOPS, 480 GPixel/s, 960 GTexel/s, 384-bit, 96M cache, 192 ROPs/384 TMUs, ~529 mm², 355 W

AD103 (4080S): ~52 TFLOPS, 285 GPixel/s, 816 GTexel/s, 256-bit, 64M cache, 112 ROPs/320 TMUs, ~379 mm², 320 W
(AMD offers no similarly sized processor in this segment; cards here are cut down from N31)

AD104 (4070 Ti): ~40 TFLOPS, 208 GPixel/s, 626 GTexel/s, 192-bit, 48M cache, 80 ROPs/240 TMUs, ~294 mm², 285 W
N32 (7800 XT): ~38 TFLOPS, 233 GPixel/s, 583 GTexel/s, 256-bit, 64M cache, 96 ROPs/240 TMUs, ~346 mm², 263 W

AD106 (4060 Ti): ~22 TFLOPS, 121 GPixel/s, 345 GTexel/s, 128-bit, 32M cache, 48 ROPs/136 TMUs, ~188 mm², 160 W
N33 (7600 XT): ~22 TFLOPS, 176 GPixel/s, 352 GTexel/s, 128-bit, 32M cache, 64 ROPs/128 TMUs, ~204 mm², 190 W

AD107 (4060): ~15 TFLOPS, 118 GPixel/s, 236 GTexel/s, 128-bit, 24M cache, 48 ROPs/96 TMUs, ~159 mm², 115 W
(AMD offers no similarly sized processor in this segment, as it never released Navi 34)

From this table it becomes very easy to extrapolate from general resource availability. AMD placed heavier emphasis on raster power: their cards have a very high number of raster operation pipelines and less focus on texturing capability (which is still quite capable as is, despite Nvidia winning here). At the high end, they opted to trade off a small amount of performance for TDP; low reference clocks permitted a 355 W, traditional 2x 8-pin design, as evidenced by AIB 7900 XTXs with unlocked power posting record power consumption without the performance to show for it. There is a noticeable decrease in rated TFLOPS, but this doesn't affect graphics performance much.
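As a sanity check on the TFLOPS column above: the rated figure is just shader lanes × 2 ops per clock (an FMA counts as two FLOPs) × boost clock. A minimal sketch, using approximate reference boost clocks; for RDNA 3 the lane count is doubled to reflect the dual-issue peak, which is exactly why its rated TFLOPS overstate real game performance:

```python
# FP32 TFLOPS = lanes * 2 (FMA = two FLOPs) * boost GHz / 1000.
# Reference boost clocks, approximate. RDNA3 lanes are doubled to get
# the dual-issue peak figure that spec sheets (and the table) quote.

def fp32_tflops(lanes: int, boost_ghz: float) -> float:
    return lanes * 2 * boost_ghz / 1000

print(f"RTX 4090 : {fp32_tflops(16384, 2.52):.1f} TFLOPS")     # ~82.6
print(f"7900 XTX : {fp32_tflops(6144 * 2, 2.50):.1f} TFLOPS")  # ~61.4
print(f"RTX 4080S: {fp32_tflops(10240, 2.55):.1f} TFLOPS")     # ~52.2
print(f"7800 XT  : {fp32_tflops(3840 * 2, 2.43):.1f} TFLOPS")  # ~37.3
```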

I was generous with Navi 32: despite being roughly the physical size of AD103, its specs mirror AD104 much more closely. That's also the market segment where both are sold as finished products; the 4070 Super and Ti are priced similarly to the 7800 XT.

At the lower end, AMD instead opted to juice their cards a little to ensure they kept up. But in general, Nvidia's products are always competing against an AMD product that is a tier higher, and in some cases or workloads even two tiers higher. This is most evident in N33 vs. AD107: the RTX 4060 and RX 7600 have just about the same overall performance, despite the 7600 being the superior card on paper, with a larger die and more power available to it.

And that is why we paid $1200+ for a 4080. It wasn't because AMD is a charity, our friend, or thinks better of consumers or anything; they made the best of the steaming heap of shite they had to show at the time. Without sugarcoating the truth, and I know this will make a lot of AMD fans upset, RDNA 3 is an unmitigated disaster. In most scenarios, its strengths are far outweighed by its weaknesses. The one area I cannot fault AMD on this generation is drivers: they are really investing hard there, and it seems the brass finally got the message about the gaming features that make GeForce so popular.

BUT, the important thing is that they actually made decent products out of it, as ultimately, consumers want affordable graphics cards that can run their games, and these will do that just fine at their currently practiced prices. Few will care about technicalities such as the ones I'm discussing here; they don't care which processor it is or how many execution units it has. All they care about is fps, cool features, and that the thing doesn't crash. The fact that you're so defensive of a technically shoddy product like the 7900 XT (once upon a time envisioned as an "RTX 4080 killer") is proof of that. And what makes the 7900 XT decent rather than an all-around laughing stock? It's not $900 anymore. At its original MSRP it wasn't worthy of consideration. In fact, you'd have needed to be a fool to consider the 7900 XT at its launch price at all: even if the only other option was the 7900 XTX, their pricing was too close, and the XTX is just the better product.

To be specific, my point is that as a pretentious RTX 4080 killer, the XT is garbage. As a regular product at $550, it's a great deal.

This post took far more time to make than I should have spent on it, but I think I have clarified my train of thought and laid out the way I see it, above any marketing BS, bringing the facts to the table.

Otherwise, why would AMD give up on the high end this generation to "recalibrate" and instead focus on high-volume channels to sort their issues out? AMD already took the price-war approach; it's what you're seeing with the current generation, from top to bottom, throughout the entire stack. It wasn't enough, and the result is that Radeon looks like a liability in their earnings report. Not even a single lousy billion of income last quarter, in the middle of a GPU boom. No wonder shareholders aren't happy.

Needless to say this post is my opinion and my opinion alone. Don't take it as gospel, personally or anything. It's an insomniac's rant.
 
Joined
Dec 30, 2010
Messages
2,200 (0.43/day)
The MCM design works in compute; it's a different ballgame within graphics. That's why there's not that much difference between the 6x00 and 7x00 generations.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.60/day)
Location
Ex-usa | slava the trolls
The MCM design works in compute; it's a different ballgame within graphics. That's why there's not that much difference between the 6x00 and 7x00 generations.

There is no performance difference between Navi 23 and Navi 33 (Radeon RX 6650 XT, Radeon RX 7600), because it's the same thing with a new media engine with AV1 support.
TSMC N7 to TSMC N6 is not a die shrink, but a small optimisation move. N6 is in fact an N7+.

AMD's mistake was that it didn't shrink Navi 33 to the newer TSMC N5, N4, or N3 processes! :banghead:
 
Joined
Apr 19, 2018
Messages
1,227 (0.50/day)
Processor AMD Ryzen 9 5950X
Motherboard Asus ROG Crosshair VIII Hero WiFi
Cooling Arctic Liquid Freezer II 420
Memory 32Gb G-Skill Trident Z Neo @3806MHz C14
Video Card(s) MSI GeForce RTX2070
Storage Seagate FireCuda 530 1TB
Display(s) Samsung G9 49" Curved Ultrawide
Case Cooler Master Cosmos
Audio Device(s) O2 USB Headphone AMP
Power Supply Corsair HX850i
Mouse Logitech G502
Keyboard Cherry MX
Software Windows 11
So the ghost of India has been removed? About bloody time AMD.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.60/day)
Location
Ex-usa | slava the trolls
So the ghost of India has been removed? About bloody time AMD.

You mean Raja Koduri? I don't think he is to blame, because this has been AMD's natural GPU execution for decades.
If anyone is to be removed, it's the CEO, followed by restructuring the whole company under one top priority: GPUs.
CPUs can be left as a third priority, because they are not as important: they bring less revenue, and AMD executes better there with less effort.
 
Joined
Jan 14, 2019
Messages
12,612 (5.80/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
I'm excited because all new hardware is exciting to me; at the same time, AMD's rumored gameplan is pretty disappointing.

Going by the interwebs, RDNA3 has done pretty terribly overall, with AMD almost dismissing it as a low-margin product they don't really care about. At the very least, their gaming division, which includes consoles, isn't doing very well going by the latest earnings call, tanking something like 50%.

RDNA 2 did pretty well, but now AMD is dying because RDNA 3 didn't turn out as good as hoped? Nah, I don't think so. I never give much credit to these clickbait apocalypse articles, to be honest. Business is way more complex than that.

Didn't RDNA3 also get the dual-issue shader ALUs like Ampere did? Or am I thinking of a different gen... Either way, RDNA4 and "RDNA5" will both be interesting in their own right.
Yes it did, but it didn't bring much measurable performance to the table. CU vs. CU, RDNA 2 and 3 perform pretty similarly, except for a handful of outliers.

You mean Raja Koduri? I don't think he is to blame, because this has been AMD's natural GPU execution for decades.
If anyone is to be removed, it's the CEO, followed by restructuring the whole company under one top priority: GPUs.
CPUs can be left as a third priority, because they are not as important: they bring less revenue, and AMD executes better there with less effort.
Where were you between 2012 and 2017, when AMD had basically no usable CPU for sale? Why do you think Ryzen is called Ryzen? AMD has basically "Ryzen" from its ashes, surviving on selling the GPUs that you don't seem to think highly of. GCN, especially the first gen, was actually good, believe it or not; AMD had better efficiency at the time than Nvidia did with Kepler. It only got old by the hundredth revision. Maybe we're seeing the same with RDNA.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.60/day)
Location
Ex-usa | slava the trolls
Where were you between 2012 and 2017, when AMD had basically no usable CPU for sale? Why do you think Ryzen is called Ryzen? AMD has basically "Ryzen" from its ashes, surviving on selling the GPUs that you don't seem to think highly of. GCN, especially the first gen, was actually good, believe it or not; AMD had better efficiency at the time than Nvidia did with Kepler. It only got old by the hundredth revision. Maybe we're seeing the same with RDNA.

Look at nvidia's size and AMD's size. AMD is only 10% as big as nvidia. And nvidia is a GPU-centric company.

[attachments: market cap charts]
 
Joined
Feb 20, 2019
Messages
8,342 (3.90/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Odyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
Did you bother to look at the performance of the 4070 vs. 4080 vs. 4090?

[...]

Point out where it doesn't scale at resolution? A single image from a single review.
I'm pretty sure you're barking up the wrong tree; he's talking about the cost for AMD to manufacture, not the cost you pay as a consumer to buy them.

The XTX, XT and GRE are expensive silicon, with raw specs, manufacturing costs, power envelopes, and transistor counts far exceeding their actual performance. That's the architectural bottleneck/bug we're guessing AMD is talking about fixing. I distinctly remember the launch of Navi 31 being a disappointment, with multiple channels interviewing Scott Herkelman and covering Sam Naffziger's deep dive on the pros and cons of moving GPUs to chiplets. AMD basically admitted that they couldn't scale the interconnects as well as they had hoped, which is one possible reason why RDNA3 chiplets weren't as fast or as cost-effective as the hype and pre-launch info originally predicted.

If fixing this bug opens up 20-30% more performance from the same design, then we get an RDNA4 "7800XT Revision 2.0" - the design AMD has been selling for $500 up until now, but with ~7900XT performance.
 
Joined
Nov 3, 2014
Messages
267 (0.07/day)
It has to be lower than that. If the top card is expected to have "7900XT performance", and the 7900XT can currently be bought for less than $799, then to be a good offer the new card needs to be cheaper or faster; somewhat better RT performance alone won't be enough. If it comes close to 7800XT pricing with 7900XTX performance and better RT, that sounds like a winner.

Have you already forgotten how many 7000-series products AMD launched at the same price AND the same performance as the 6000 series? They cannot help but compulsively and comically overprice their launch MSRPs.

You missed the point of my post entirely... and no, they're not swipes at any AMD product. The truth is that Nvidia has been selling Ada cards almost as if AMD didn't exist and Radeon GPUs were nothing but fairytales, and that's reflected in the products' street prices.

[...]

Needless to say this post is my opinion and my opinion alone. Don't take it as gospel, personally or anything. It's an insomniac's rant.

Across the stack, this generation can be summed up by:
  • AMD: 5-10% faster raster at ~5% lower prices
  • Nvidia: 50-200% faster RT/AI/DLSS
  • All of AMD's buyers (a historic low at 5% of the discrete consumer GPU market share): flooding all websites and forums with "WHY WON'T YOU BUY AMD IT'S 5-10% FASTER FOR 5% CHEAPER YOU MUST BE A BLIND NVIDIA FANBOY NOBODY USES RT/AI/DLSS THEY'RE SCAMS REEEEEEEEEEEEEEEEE".
 
Joined
Oct 6, 2021
Messages
1,605 (1.36/day)
I'm excited because all new hardware is exciting to me; at the same time, AMD's rumored gameplan is pretty disappointing.

Going by the interwebs, RDNA3 has done pretty terribly overall, with AMD almost dismissing it as a low-margin product they don't really care about. At the very least, their gaming division, which includes consoles, isn't doing very well going by the latest earnings call, tanking something like 50%.

It's normal: consoles have been on the market for a long time, stagnation has arrived, and with the PS5 Pro coming in the not-too-distant future, people tend to go into "waiting mode".

I just hope that among the big changes RDNA5 intends to implement, one will be MCM with multiple GCDs.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.60/day)
Location
Ex-usa | slava the trolls
I'm pretty sure you're barking up the wrong tree; he's talking about the cost for AMD to manufacture, not the cost you pay as a consumer to buy them.

The XTX, XT and GRE are expensive silicon, with raw specs, manufacturing costs, power envelopes, and transistor counts far exceeding their actual performance. That's the architectural bottleneck/bug we're guessing AMD is talking about fixing. I distinctly remember the launch of Navi 31 being a disappointment, with multiple channels interviewing Scott Herkelman and covering Sam Naffziger's deep dive on the pros and cons of moving GPUs to chiplets. AMD basically admitted that they couldn't scale the interconnects as well as they had hoped, which is one possible reason why RDNA3 chiplets weren't as fast or as cost-effective as the hype and pre-launch info originally predicted.

If fixing this bug opens up 20-30% more performance from the same design, then we get an RDNA4 "7800XT Revision 2.0" - the design AMD has been selling for $500 up until now, but with ~7900XT performance.

After the bad experience with chiplets, is it really a good idea to move on with them? :confused:
GPUs are very sensitive to latencies; they work best when latencies are extremely low, which means a monolithic design. Chiplets are good for CPUs, but extremely bad for GPUs.
That's why CrossFire is no longer supported.
Do they want to invent a new type of CrossFire?

It's normal: consoles have been on the market for a long time, stagnation has arrived, and with the PS5 Pro coming in the not-too-distant future, people tend to go into "waiting mode".

I just hope that among the big changes RDNA5 intends to implement, one will be MCM with multiple GCDs.
 
Joined
Apr 14, 2022
Messages
763 (0.77/day)
Location
London, UK
Processor AMD Ryzen 7 5800X3D
Motherboard ASUS B550M-Plus WiFi II
Cooling Noctua U12A chromax.black
Memory Corsair Vengeance 32GB 3600Mhz
Video Card(s) Palit RTX 4080 GameRock OC
Storage Samsung 970 Evo Plus 1TB + 980 Pro 2TB
Display(s) Acer Nitro XV271UM3B IPS 180Hz
Case Asus Prime AP201
Audio Device(s) Creative Gigaworks - Razer Blackshark V2 Pro
Power Supply Corsair SF750
Mouse Razer Viper
Keyboard Asus ROG Falchion
Software Windows 11 64bit
chiplets are good for CPUs, but extremely bad for GPUs.

Chiplets are good for CPUs that target MT applications. For gaming, monolithic CPUs are still the king.

The gap between nVidia and AMD has widened to a ridiculous degree.
A couple of days ago I read about companies that ordered $5b, $14b, $30b worth of chips from nVidia.
(b = billions of dollars.)

China’s internet giants order $5bn of Nvidia chips to power AI ambitions​

Spent nearly 10 billion US dollars to buy Nvidia AI chips! Meta is making every effort to develop open source AGI​

 
Joined
Feb 20, 2019
Messages
8,342 (3.90/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Odyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
After the bad experience with chiplets, is it really a good idea to move on with them? :confused:
GPUs are very sensitive to latencies; they work best when latencies are extremely low, which means a monolithic design. Chiplets are good for CPUs, but extremely bad for GPUs.
That's why CrossFire is no longer supported.
Do they want to invent a new type of CrossFire?
Of course they want to move to chiplets. RDNA3 was their first attempt, and it worked okay, but not as well as expected.
Chiplets are why 96-core EPYCs exist and are stealing huge amounts of business from Intel. AMD can sell those 12-chiplet EPYCs for $14,750 rather than selling twelve 7950X CPUs for $550 each.

So, I doubt they'll give up. If AMD ever manages to solve the latency issues and scale GPU cores (not just memory subsystems) out to multiple chiplets, they could effectively make massive GPUs like the 4090 at a tiny fraction of the cost.

Dismissing AMD because their very first attempt at GPU chiplets was mediocre would be stupid. They've demonstrated that they have the skill to improve chiplets, reducing the downsides of a multi-chip architecture with each generation. A chiplet design will never be as performant as an equivalent monolithic solution, but that's not the point: it can potentially scale far beyond what's even possible to manufacture monolithically.

Give them time. AMD might yet give up on the idea of scalable GPU chiplets, but I have my doubts.
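The economics behind that bet can be sketched with the textbook Poisson yield model, yield = exp(-D x A). A minimal sketch, assuming a made-up defect density (not foundry data) and ignoring packaging/interconnect cost, which is precisely the overhead AMD still has to engineer away:

```python
import math

# Poisson yield model: yield = exp(-D * A).
# D is a hypothetical defect density, NOT a real foundry figure.
D = 0.001  # defects per mm^2 (i.e. 0.1 per cm^2)

def good_silicon_cost(die_mm2: float, dies: int = 1) -> float:
    """Relative wafer area consumed per working part (area / yield)."""
    return dies * die_mm2 / math.exp(-D * die_mm2)

mono = good_silicon_cost(600)       # one big 600 mm^2 GPU
split = good_silicon_cost(150, 4)   # same total area as 4 chiplets

print(f"monolithic 600 mm^2  : yield {math.exp(-D * 600):.0%}, cost {mono:4.0f}")
print(f"4 x 150 mm^2 chiplets: yield {math.exp(-D * 150):.0%}, cost {split:4.0f}")
print(f"chiplet silicon costs ~{split / mono:.0%} of monolithic")
# ~55% vs ~86% yield per die: the split version needs roughly a third
# less wafer area per working GPU -- the same arithmetic that makes
# 12-chiplet EPYCs viable.
```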
 