
New Specs of AMD RDNA3 GPUs Emerge

Joined
Dec 30, 2010
Messages
2,202 (0.43/day)
They need to advertise and sponsor more Gaming Tourneys

What, spoonfed a generation that is incapable of doing any research of their own now?

AMD cards were always better (for me), from the 9600XT until today's gen. I owned a GeForce 5700 128 or 256MB, where I replaced the TIM one day and the card died, probably due to static electricity. AMD cards, on the other hand, could simply take a beating, or run for days on "ATITool" to get the best possible OC out of them. Since then I never really went the Nvidia route again, and I honestly can't relate to "crappy" or "bad drivers".

The problem with "bad drivers" isn't AMD, but devs not properly optimizing their game path for AMD (or now Intel) cards. So AMD has to clean up the mess devs made in the first place. I stopped buying games the second they release, primarily because bugs exist and need to be ironed out. Once that's done, you get a far better product anyway, with game studios increasingly treating games as a quick cash grab.

And since the K7 Slot A, I never used Intel either. Back then it was always AMD that offered better value for the money than Intel did. And especially with the AM4 route, we got quite a few years of the same platform and four generations of CPUs released for it. Even though AM5 is on its way, AM4 will still be relevant for quite some time.
 
Joined
Jun 5, 2021
Messages
284 (0.22/day)
If the core counts are this low, Nvidia will win easily with AD102; its L2 cache will bring around 5 terabytes per second of bandwidth.
 
Joined
Mar 10, 2010
Messages
11,880 (2.19/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360 EK extreme rad + 360 EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Gskill Trident Z 3900cas18 32Gb in four sticks./16Gb/16GB
Video Card(s) Asus tuf RX7900XT /Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung U28E850R 4K FreeSync / Dell secondary
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores laptop Timespy 6506
AMD has confirmed that moving data takes more energy in an MCM chip, in response to a question about why laptop CPUs are monolithic.
It's accepted that moving data is the biggest energy drain; I already agreed.
But:
AMD invented/uses Infinity Cache to mitigate this issue.

As I already said.
 
Joined
Jul 10, 2008
Messages
339 (0.06/day)
Location
Wasteland
System Name Cast Lead™
Processor Intel(R) Core(TM) i5-7600 CPU @ 3.50GHz
Motherboard Asus ROG STRIX H270F GAMING
Cooling Cooler Master Hyper 212 LED
Memory G.SKILL Ripjaws V Series 8GB (2 x 4GB) DDR4 2400
Video Card(s) MSI Radeon RX 480 GAMING X 8G
Storage Seagate 1TB 7200 rpm
Display(s) LG 24MP59G
Case Green Z4
Audio Device(s) Realtek High Definition Audio
Power Supply Green 650 UK Gold
Software Windows 10™ Pro 64 bit
They need to focus on ray tracing.
 
Joined
Jan 8, 2017
Messages
9,525 (3.26/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Hm, even EPYC with 8 chiplets is quite efficient. Don't you think you're exaggerating the power consumed by the interconnects?

No, I'm really not. The cores themselves are efficient; the interconnects between them are not. This isn't me saying MCM can't be efficient, I am simply pointing out the fact that if you go from monolithic to MCM without a node shrink or something else to drive efficiency up, it will use more power; it's just the way it is.

[attachment 1651565183850.png: power distribution on an MCM processor]
Vs what the power distribution looks like on a monolithic processor :

[attachment 1651565244294.png: power distribution on a monolithic processor]
Your interconnect point is countered by massive on-die cache, but is otherwise sound.

Caches alleviate the need to go off-chiplet, but they don't eliminate it.
 
Last edited:
Joined
Sep 8, 2020
Messages
220 (0.14/day)
System Name Home
Processor 5950x
Motherboard Asrock Taichi x370
Cooling Thermalright True Spirit 140
Memory Patriot 32gb DDR4 3200mhz
Video Card(s) Sapphire Radeon RX 6700 10gb
Storage Too many to count
Display(s) U2518D+u2417h
Case Chieftec
Audio Device(s) onboard
Power Supply seasonic prime 1000W
Mouse Razer Viper
Keyboard Logitech
Software Windows 10
This is gonna be awesome; I dreamed of the day when CS:GO hits 2000 FPS.
 
Joined
May 2, 2017
Messages
7,762 (2.77/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I thought RDNA3 was going to be MCM, so we'd see fewer unique dies and a range of GPUs structured more like Zen 2 when it first launched MCM on desktop:

Single harvested die (3600/3600X)
Single fully-enabled die (3700X/3800X)
Dual harvested dies (3900X)
Dual fully-enabled dies (3950X)

If AMD was going MCM it wouldn't need three different sizes, would it? Perhaps just a performance-class die to scale up with multiple modules and a tiny one for the entry-level and lower midrange.
what's the source on that?

It would be super dumb from a business perspective to make a special 3-shader-engine GCD die just for a low-volume halo part, and use 2-shader-engine GCDs for everything else.

Look at Zen/Threadripper/Epyc - there's a single CCD chiplet that serves the entire non-APU product stack, something like 35+ SKUs from the lowly R5 5500 all the way up to the ridiculous $9000 EPYC 7700-series using the exact same piece of silicon binned, harvested, and combined in many different ways. It's a relatively small die that is easy to make with extremely good yields and it's 100% built from the ground up to scale to multiple dies.

It's way more likely that AMD has a single GCD chiplet design with a scalable interconnect, and will add 1, 2, 3, 4, 6, or 8 of them together as necessary.
This is my take, which is obviously pure speculation and based on rumors going around:
- RDNA3 will be MCM, with compute dice separate from interconnect + memory controller + cache dice, but
- (likely due to physical characteristics, trace lengths, die shapes, etc.) they can only connect two compute dice per interconnect die
- this forces them into making at least two compute die layouts, as using only one would provide insufficient scaling across the product stack - either forcing massive, wasteful cuts to huge dice for lower-end products (say, two dice with 1/3 disabled for a third-tier product), or putting a cap on high-end core counts that they're likely judging too low to be competitive. Two die layouts (or even three, with a smaller low-end one) let them bypass this issue while still keeping most of the savings, due to the relative simplicity of producing a second compute-only die on the same node (as compared to taping out a whole new monolithic GPU, including the reorganization of the layout inherent to this)

Where I think the rumors start being iffy is in their consistent treatment of die = SKU, which has just never been how the GPU industry works. I've seen too much "Navi 31 = 7900 XT, Navi 32 = 7800 XT, Navi 33 = 7700 XT" stuff, which to me is just mind-bogglingly dumb unless there is also at least one cut-down non-XT SKU in between each of those. If I were to guess, I'd think things are somehow more spread out than that. And, of course, having a 3-SE and a 2-SE die allows for a lot of implementation flexibility, as you can make both single- and dual-die configurations of both. If paired with a 1-SE die on the low end, you can have 1x1SE, 1x2SE, 1x3SE, 2x2SE, and 2x3SE implementations, each with cut-down variants depending on yields and packaging costs.
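To make that scaling argument concrete, here's a tiny illustrative sketch; the shader-engine counts and die layouts are the rumored/speculated ones from above, not confirmed specs:

```python
# Hypothetical RDNA3-style product stack built from three speculative compute-die
# layouts (1, 2, and 3 shader engines), with one or two dice per package.
# All numbers are illustrative speculation, not confirmed AMD specifications.
die_layouts = [1, 2, 3]            # shader engines (SE) per compute die
configs = []
for se in die_layouts:
    for num_dice in (1, 2):
        if se == 1 and num_dice == 2:
            continue               # assume the smallest die ships single-die only
        total = se * num_dice
        configs.append((num_dice, se, total))

for num_dice, se, total in sorted(configs, key=lambda c: c[2]):
    print(f"{num_dice}x{se}SE -> {total} shader engines (plus cut-down bins)")
```

This reproduces the 1x1SE, 1x2SE, 1x3SE, 2x2SE, and 2x3SE spread above; real products would then be binned and cut down further within each configuration.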
MCM designs are actually going to use way more power than monolithic because of the interconnect. Inside any chip, the thing that consumes the most power is moving data around; the power these processors consume to do actual "work" is almost inconsequential.

The further you have to move data, the more power you need, so interconnects that have to cross chips are going to use more power.
You're not wrong here, but given the latency needs of a GPU, there's no chance of MCM GPUs using through-package IF like EPYC/TR - they'll be using some form of CoWoS stacking or LSI to join the dice together. They've already demonstrated a working, market-ready stacked cache with the 5800X3D and its EPYC siblings, so it seems reasonable that they would go that route (and recent rumors also point that way). And as these packaging technologies drastically cut transmission distances and keep all signalling in silicon, power consumption will also drop drastically.
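As a rough back-of-envelope illustration of why the packaging choice matters here, a sketch under assumed energy-per-bit figures (the pJ/bit values are order-of-magnitude assumptions for illustration, not vendor data):

```python
# Order-of-magnitude energy cost of moving data, per bit transferred.
# These pJ/bit values are illustrative assumptions, not measured figures.
PJ_PER_BIT = {
    "on-die wires": 0.1,
    "silicon bridge / stacked (CoWoS-style)": 0.5,
    "organic package traces (EPYC-style IFOP)": 2.0,
}

bandwidth_gbps = 1000 * 8          # 1 TB/s of traffic, expressed in gigabits/s

for link, pj in PJ_PER_BIT.items():
    # watts = (joules per bit) * (bits per second)
    watts = pj * 1e-12 * bandwidth_gbps * 1e9
    print(f"{link}: ~{watts:.0f} W at 1 TB/s")
```

Even with these rough numbers, the same traffic costs several times more power over organic package traces than over stacked or bridged silicon, which is the whole argument for CoWoS/LSI-style packaging on a bandwidth-hungry GPU.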
 
Joined
Jun 5, 2018
Messages
240 (0.10/day)
I guess there are three ways to look at this:

1) Yields: AMD will prioritize highest yields and thus margins, assuming the market will consume anything they produce; the performance crown is not a priority.
2) Architecture superiority: AMD thinks the full-fledged 15360-shader part will more than surpass anything Nvidia will offer, and is adjusting performance to match the 4000-series lineup.
3) Efficiency: AMD will choose to take a loss in max FPS but will dominate perf/watt. 600 W GPUs are ridiculous; think of the cooling requirement for a 3090 Ti and add another 33% on top!

Either way, I would personally prefer that AMD match Nvidia on performance; but as I like small ITX-size systems, these 600 W behemoths add no value for me, so better efficiency would be nice.
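For scale, the "add another 33%" figure in point 3 works out from the 3090 Ti's 450 W board power rating:

```python
# Rough check of the "add another 33%" cooling claim: the RTX 3090 Ti is rated
# at 450 W board power, so a rumored 600 W GPU is about a third more heat to move.
rtx_3090_ti_watts = 450
rumored_watts = 600
increase = (rumored_watts - rtx_3090_ti_watts) / rtx_3090_ti_watts
print(f"+{increase:.0%} more heat than a 3090 Ti")
```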
 
Joined
Dec 28, 2012
Messages
3,987 (0.91/day)
System Name Skunkworks 3.0
Processor 5800x3d
Motherboard x570 unify
Cooling Noctua NH-U12A
Memory 32GB 3600 mhz
Video Card(s) asrock 6800xt challenger D
Storage Sabarent rocket 4.0 2TB, MX 500 2TB
Display(s) Asus 1440p144 27"
Case Old arse cooler master 932
Power Supply Corsair 1200w platinum
Mouse *squeak*
Keyboard Some old office thing
Software Manjaro
So true.
The GeForce FX series of cards was hugely inferior to the ATi 9000 series, and people bought terrible GeForce FX cards instead of Radeons.
The 9000 series sold so well that ATi was solidified as a major Nvidia competitor, and it gave them the money for the excellent X series. The only reason they didn't sell more was, predictably, the numerous driver issues that plagued ATi for years. Gee, I hope that doesn't become a consistent pattern...
Fermi was a dumpster fire that was a year late and barely performed faster than AMD's previous generation at twice the power draw, but people still bought it in droves.
The HD 5000 series hit 49% market share. AMD's response was to sit on their arse and do nothing, rebranding the 5000s into the 6000s, and get caught with their pants down when Nvidia, as it turns out, was not simply sitting idle but was actively fixing Fermi's issues, resulting in the 570/580 stealing the performance crown back, even from the 6900 emergency-edition cards. And as a result of Nvidia's work, the lower-end cards, which were more competitive, saw power-draw decreases that threatened AMD's entire lineup. Losing to Fermi was entirely down to AMD's incompetence. Oh yeah, can't forget the ever-present driver issues either, as the 6000 series was the era when you needed to keep 3-4 drivers on your system depending on which game you wanted to play. I remember that vividly.
History is proof that when AMD has a better product, people will still buy Nvidia. Goes to show that having the best product doesn't automatically make it a success.
History is proof that whenever AMD fails, the entirety of the blame is placed on external factors and people ignore internal ones. Take their CPUs: everyone blames Intel's anti-competitive practices, but even with those, AMD was selling every Athlon 64 they could make. When you bring up this fact, and the fact that AMD let the Athlon 64 wither on the vine while spending 5 BILLION on ATi, all while Intel was cooking up the ROFLstomp that was Conroe, the AMD fans tend to get very, very quiet. Yeah, Intel hurt them, but AMD were the ones that seemingly had no plan for the Athlon 64, or if they did, they spent WAY too long cooking up what would become K10; K10 should have released two years earlier than it did. You bring up the numerous times it took media involvement to get AMD to admit they had serious driver issues (FCAT frame pacing with the 6000/7000 series, black screens with the 7000/200/300 series, downclocking issues with RDNA), and immediate deflections about "mindshare" are brought up.

AMD is very good at putting themselves in horrible positions, through nobody's fault but their own. Even now, they get dominance with a SINGLE generation of CPUs and their prices go through the roof. Intel, despite having dominance for nearly a decade, didn't jack up prices until Haswell/Skylake; AMD did it in six months, destroying the budget market. They fix RDNA, then jack up the prices of RDNA 2 to the point where Nvidia makes more sense. They get a good GPU arch, then ruin it with the likes of the 6500 XT gimpfest.
 
Last edited:
Joined
Jun 19, 2010
Messages
409 (0.08/day)
Location
Germany
Processor Ryzen 5600X
Motherboard MSI A520
Cooling Thermalright ARO-M14 orange
Memory 2x 8GB 3200
Video Card(s) RTX 3050 (ROG Strix Bios)
Storage SATA SSD
Display(s) UltraHD TV
Case Sharkoon AM5 Window red
Audio Device(s) Headset
Power Supply beQuiet 400W
Mouse Mountain Makalu 67
Keyboard MS Sidewinder X4
Software Windows, Vivaldi, Thunderbird, LibreOffice, Games, etc.
what's the source on that?

It would be super dumb from a business perspective to make a special 3-shader-engine GCD die just for a low-volume halo part, and use 2-shader-engine GCDs for everything else.

Look at Zen/Threadripper/Epyc - there's a single CCD chiplet that serves the entire non-APU product stack, something like 35+ SKUs from the lowly R5 5500 all the way up to the ridiculous $9000 EPYC 7700-series using the exact same piece of silicon binned, harvested, and combined in many different ways. It's a relatively small die that is easy to make with extremely good yields and it's 100% built from the ground up to scale to multiple dies.

It's way more likely that AMD has a single GCD chiplet design with a scalable interconnect, and will add 1, 2, 3, 4, 6, or 8 of them together as necessary.
I 100% hear you, and I'd be lying if I said I didn't have the same concern with that approach.
 
Joined
May 21, 2009
Messages
275 (0.05/day)
Processor AMD Ryzen 5 4600G @4300mhz
Motherboard MSI B550-Pro VC
Cooling Scythe Mugen 5 Black Edition
Memory 16GB DDR4 4133Mhz Dual Channel
Video Card(s) IGP AMD Vega 7 Renoir @2300mhz (8GB Shared memory)
Storage 256GB NVMe PCI-E 3.0 - 6TB HDD - 4TB HDD
Display(s) Samsung SyncMaster T22B350
Software Xubuntu 24.04 LTS x64 + Windows 10 x64
According to VideoCardz, VCN 4.0 apparently doesn't come with AV1 encode support for now:



:)
 
Last edited:

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.59/day)
Location
Ex-usa | slava the trolls
According to VideoCardz, VCN 4.0 apparently doesn't come with AV1 encode support for now:



:)

This is terrible news; this lack of support means the graphics cards will be weaker than the competition and will potentially cause inconvenience for users who do need this hardware acceleration.
 
Joined
Oct 12, 2005
Messages
720 (0.10/day)
This is terrible news; this lack of support means the graphics cards will be weaker than the competition and will potentially cause inconvenience for users who do need this hardware acceleration.
Indeed. We will see whether AV1 encoding becomes a thing. Most sources will still ask for either H.264 or H.265; for most people, AV1 decoding will be enough.

Nvidia claims to have an AV1 hardware encoder in Ampere, but it doesn't seem to be part of NVENC and can't be used right now. Intel has shown their AV1 hardware encoder, but their GPUs are yet to become available.

I don't think it's such a big deal that the encoding portion isn't there, as long as it has the decoding part. Still, it would have been great if it were available.

The thing that intrigues me is where this leaves Stadia. That platform is in a strange place: it uses an Intel CPU and an AMD GPU (Vega 56, if I recall) plus a hardware AV1 encoder. I wonder if this means the next-gen Stadia (or white-label Stadia, as that seems to be the direction Google wants to take) will use full Intel hardware.

To me, that either means AMD no longer plans to work with Google on Stadia, or Stadia might never get a refresh. Time will tell, but Google seems to push a lot for white-labeling their tech, while they don't seem to push the platform itself anymore. There's barely anything new there.
 
Joined
Jan 17, 2018
Messages
440 (0.17/day)
Processor Ryzen 7 5800X3D
Motherboard MSI B550 Tomahawk
Cooling Noctua U12S
Memory 32GB @ 3600 CL18
Video Card(s) AMD 6800XT
Storage WD Black SN850(1TB), WD Black NVMe 2018(500GB), WD Blue SATA(2TB)
Display(s) Samsung Odyssey G9
Case Be Quiet! Silent Base 802
Power Supply Seasonic PRIME-GX-1000
The 9000 series sold so well that ATi was solidified as a major Nvidia competitor, and it gave them the money for the excellent X series. The only reason they didn't sell more was, predictably, the numerous driver issues that plagued ATi for years. Gee, I hope that doesn't become a consistent pattern...

The HD 5000 series hit 49% market share. AMD's response was to sit on their arse and do nothing, rebranding the 5000s into the 6000s, and get caught with their pants down when Nvidia, as it turns out, was not simply sitting idle but was actively fixing Fermi's issues, resulting in the 570/580 stealing the performance crown back, even from the 6900 emergency-edition cards. And as a result of Nvidia's work, the lower-end cards, which were more competitive, saw power-draw decreases that threatened AMD's entire lineup. Losing to Fermi was entirely down to AMD's incompetence. Oh yeah, can't forget the ever-present driver issues either, as the 6000 series was the era when you needed to keep 3-4 drivers on your system depending on which game you wanted to play. I remember that vividly.

History is proof that whenever AMD fails, the entirety of the blame is placed on external factors and people ignore internal ones. Take their CPUs: everyone blames Intel's anti-competitive practices, but even with those, AMD was selling every Athlon 64 they could make. When you bring up this fact, and the fact that AMD let the Athlon 64 wither on the vine while spending 5 BILLION on ATi, all while Intel was cooking up the ROFLstomp that was Conroe, the AMD fans tend to get very, very quiet. Yeah, Intel hurt them, but AMD were the ones that seemingly had no plan for the Athlon 64, or if they did, they spent WAY too long cooking up what would become K10; K10 should have released two years earlier than it did. You bring up the numerous times it took media involvement to get AMD to admit they had serious driver issues (FCAT frame pacing with the 6000/7000 series, black screens with the 7000/200/300 series, downclocking issues with RDNA), and immediate deflections about "mindshare" are brought up.

AMD is very good at putting themselves in horrible positions, through nobody's fault but their own. Even now, they get dominance with a SINGLE generation of CPUs and their prices go through the roof. Intel, despite having dominance for nearly a decade, didn't jack up prices until Haswell/Skylake; AMD did it in six months, destroying the budget market. They fix RDNA, then jack up the prices of RDNA 2 to the point where Nvidia makes more sense. They get a good GPU arch, then ruin it with the likes of the 6500 XT gimpfest.
What in the world did AMD do to you that you despise them as a company so much?

Most of your rambling isn't even about the last 5 years, so let's ignore the rambling about 20 years ago and address the last comment:

Prices on AMD CPUs increased about $50 across the board with the 5000 series, and they didn't come out with non-X-series CPUs. Both were for a very simple reason: the chip shortage. The chip shortage is to blame for the 'destroyed budget market', as you call it. Hell, for the first 6 months after the release of the 5000 series you were lucky to find any in stock.

RDNA 1 was a fine mid-tier chip. RDNA 2 is a fine high-tier chip, and both were MSRP-priced accordingly.

The 6500 is a repurposed laptop chip. It's crap, like every other low-tier card that has ever been released, and probably isn't even capable of using more than 4 PCIe lanes.
 
Joined
May 2, 2017
Messages
7,762 (2.77/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
What in the world did AMD do to you that you despise them as a company so much?

Most of your rambling isn't even about the last 5 years, so let's ignore the rambling about 20 years ago and address the last comment:

Prices on AMD CPUs increased about $50 across the board with the 5000 series, and they didn't come out with non-X-series CPUs. Both were for a very simple reason: the chip shortage. The chip shortage is to blame for the 'destroyed budget market', as you call it. Hell, for the first 6 months after the release of the 5000 series you were lucky to find any in stock.

RDNA 1 was a fine mid-tier chip. RDNA 2 is a fine high-tier chip, and both were MSRP-priced accordingly.

The 6500 is a repurposed laptop chip. It's crap, like every other low-tier card that has ever been released, and probably isn't even capable of using more than 4 PCIe lanes.
I think there's a reasonable middle position to be found here: AMD has absolutely made consumer-unfriendly choices, but have also been somewhat forced into them by the chip shortage, their CPU design strategy, and overall market conditions. They could have made different choices, but this would have meant giving up profits through prioritizing lower-priced, lower margin products, which would ultimately run afoul of shareholders and their (US) legal obligation to chase profits above all else. Still, they could have found a balance that was friendlier to their customers overall.

To expand a bit on that:
AMD came into Zen3 on a high - they had already had a highly competitive CPU architecture for a while, and were selling well. Zen3 kicked Intel's butt across the board, in both efficiency and absolute performance. They had raised the bar for consumer core counts, from 4 to 8 then to 16, establishing a new 6-to-8 mainstream segment (and relegating anything 4c to the low end). They had started out selling on value - many cores with decent IPC for good prices - then moved to value plus absolute performance - many cores with competitive-to-leading IPC for good prices. Crucially, they had also been pushing the consumer CPU price ceiling upwards. A 7700K was $350, an 1800X was $499 (and a 1700 was $329!) - great value, but more expensive at the top end. The 16c pushed that ceiling to $750. Suddenly the price range for consumer CPUs was ~$700 (~$50-750) instead of ~$300, meaning an increasing focus on really expensive hardware. And as AMD came to win mindshare and dominate in performance, they sought to capitalize on that by increasing prices. Not much, mind you - just $50-ish per SKU. That's not bad per se, but a clear indication that AMD saw itself as done being the "better value" alternative, as they were dominant in essentially all aspects.

This, in and of itself, is not all that problematic, even if it's a bit iffy. If one wants to be generous, one can frame it as just rewards for long, dedicated work towards improving performance and core counts overall. I'm not saying that's my view, just that it's not an unreasonable one. The problem was the timing and circumstances of this move, and how things played out afterwards.

On top of this, a couple of other factors worsened the effects of this choice: TSMC's high 7nm yields, and AMD's 8-core CCD design. This meant that AMD was selling every die they could produce and had no fab capacity to spare for other products. They also had nothing to offer on the low end - the number of CCDs with more than two defective cores seems to have been essentially zero - certainly not enough to make a <6-core product out of. And seemingly all chips binned well too, qualifying for high-clocking SKUs (with the high power-per-core 5800X seeming to take the brunt of the less efficient bins).

This left AMD with no real option for the low-end market: they'd either have to cut down chips that they could have sold as more expensive, more fully enabled products, or tape out a whole new low-core-count die at a cost of millions of dollars. In a severely supply-constrained situation, either choice would likely lead to them being sued by shareholders for not seeking profits with sufficient aggression.

Of course, the fab capacity pressure from mobile APUs (which vastly outsell desktop CPUs), consoles (huge APUs that AMD are contractually obliged to prioritize and that sell in the tens of millions), and their attempt at getting back into high end graphics also put severe pressure on this, worsening the situation.

IMO, what AMD should have done in this situation is take the (overall relatively small) cost of taping out a 4-core or 6-core CCD design and use it to fill out the low-end CPU market. Instead they sat on their hands, raking in profits. Then Intel came in with Alder Lake, beating them soundly in gaming performance, and the market seemed to loosen a bit - chips were consistently in stock at MSRP. But still AMD waited, with only an upper-midrange to high-end CPU lineup - it took nearly half a year between supply normalizing and the launch of their low-end range. And, crucially, most of that low-end range underperforms against Intel competitors at the same or lower price, with chips like the i3-12100 nearly matching the gaming performance of the 5600X.

I have little doubt that AMD was under severe shareholder pressure to extract as much profit as possible during this period - anything else would be very surprising. Shareholders tend not to care about anything beyond ROI, and don't give a crap about long-term consequences most of the time. But it's also AMD's responsibility to push back in cases like that - and it seems they underestimated how well Intel would respond, overestimated customers' willingness to pay $299 and up for CPUs, and were off by at least half a year in how much time they had to fix things.

So, in sum, AMD made some poor judgements in a difficult situation under a lot of pressure. They are absolutely to blame for making consumer-unfriendly decisions, but I don't see them deserving the type of vitriol seen above - this is still pretty mild on the "consumer unfriendly corporation" scale. Intel's decade-long history of bribery and anticompetitive behaviour is far, far worse still, as it operates on an entirely different scale. But that doesn't mean that AMD is anyone's friend.


The 6500 XT and 6400 are mostly as is to be expected in this context. In some sense, the design was probably smart when it was decided on: an as-small-as-possible die design optimized for high value low power entry gaming laptops, cutting everything unnecessary, including duplicate encode/decode blocks that would be found on the APU in those laptops. But they made mistakes here as well: it seems that their pre-silicon modelling underestimated the detrimental effects of cutting both the PCIe and RAM buses to the bone - or they erroneously judged this to not be a big deal for the intended market. Of course it's also rather odd to make such an obviously mobile-first design, yet have desktop variants hit the market months before laptops. Honestly, it would probably have come off far better if they had postponed the 6500 XT until after 6500M laptops had been out a while.

And, of course, it should never have been called the 6500 XT - it's a 400-tier GPU in terms of performance. Give it a slightly more reasonable power target (100 W? 90 W? Even 75 W would be good), call it the 6400 XT, sell it for $150 or less, and it would be pretty great. Presenting it as the successor to the 5500 XT, at the same or worse performance and with fewer features... that's just a massive misstep that makes no sense. It's clear they had to push this die much further than is reasonable just to match the 5500 XT. Again, this is a clear indication of AMD chasing profits rather than customer satisfaction. The 6500 XT tier should have been a 20-24 CU cut-down Navi 23, not a 16 CU Navi 24. But again, that would mean sacrificing profits for the sake of a cheaper SKU, as there don't seem to be any defective Navi 23 dice to spare. (Of course the mobile 6000S series, all based on Navi 23, also puts a lot of pressure on that die's production.)

That doesn't make either the 6400 or the 6500 XT terrible - just terrible value at their given pricing! - nor does it make AMD evil. It just illustrates that a corporation in our current economic system will always be more beholden to its shareholders than its customers. And we're seeing direct consequences of that.
 
Joined
Apr 12, 2013
Messages
7,573 (1.77/day)
Ehhh, slides are for marketing and investors.
No, he's right: there are 3 chiplets in the 5900X and 5950X, as well as in lots of EPYC chips; the IO die also counts as one.

their (US) legal obligation to chase profits above all else.
There's no such thing, I dunno why people bring this false narrative up so often :rolleyes:
 
Joined
Mar 28, 2020
Messages
1,763 (1.01/day)
I think next Navi will likely beat the RTX 3000 series soundly in magic cherry-picked scenarios for marketing slides, with a bunch of little asterisks and paragraphs of notes at the bottom that say something like "in 1080p Medium with FSR Performance+++ enabled!!!" I've got a 6900 XT, and it's just so disappointing to lose 100+ frames when switching RT on.
That's because you are comparing cases where Nvidia GPUs have an advantage, which is no different from cherry-picking scenarios to me. The truth is, when you switch RT on, you lose significant performance whether you are using Nvidia or AMD GPUs. You claw back performance with DLSS, which gives Nvidia a significant advantage.

The 9000 series sold so well that ATi was solidified as a major Nvidia competitor, and it gave them the money for the excellent X series. The only reason they didn't sell more was, predictably, the numerous driver issues that plagued ATi for years. Gee, I hope that doesn't become a consistent pattern...

The HD 5000 series hit 49% market share. AMD's response was to sit on their arse and do nothing, rebranding through the 5000 and 6000 series, and they got caught with their pants down when Nvidia, as it turns out, was not simply sitting idle but actively fixing Fermi's issues, resulting in the 570/580 stealing the performance crown back, even from the 6900 emergency-edition cards. And as a result of Nvidia's work, the lower-end cards, which were more competitive, saw power-draw decreases that threatened AMD's entire lineup. Losing to Fermi was entirely on AMD's incompetence. Oh yeah, can't forget the ever-present driver issues either, as the 6000 series was the era when you needed to keep 3-4 drivers on your system depending on which game you wanted to play. I remember that vividly.

History is proof that whenever AMD fails, the entirety of the blame is placed on external factors while people ignore internal ones. Take their CPUs: everyone blames Intel's anti-competitive practices, but even with those, AMD was selling every Athlon 64 they could make. When you bring up this fact, and the fact that AMD let the Athlon 64 wither on the vine while spending 5 BILLION on ATi, all while Intel was cooking up the ROFLstomp that was Conroe, the AMD fans tend to get very, very quiet. Yeah, Intel hurt them, but AMD were the ones that seemingly had no plan for the Athlon 64 - or if they did, they spent WAY too long cooking up what would become K10, which should have released 2 years earlier than it did. Bring up the numerous times it took media involvement to get AMD to admit they had serious driver issues (FCAT frame pacing with the 6000/7000 series, black screens with the 7000/200/300 series, downclocking issues with RDNA) and you immediately get deflections about "mindshare".

AMD is very good at putting themselves in horrible positions, through nobody's fault but their own. Even now, they get dominance for a SINGLE generation of CPUs and their prices go through the roof. Intel, despite having dominance for nearly a decade, didn't jack up prices until Haswell/Skylake. AMD did it in 6 months, destroying the budget market. They fixed RDNA, then jacked up the prices of RDNA 2 to the point where Nvidia makes more sense. They get a good GPU arch, then ruin it with the likes of the 6500 XT gimpfest.
I certainly am not happy with AMD raising prices. But we do not completely know the reason for them doing so. We assume there are some cost increases here and there, but that's about all we know. Without the complete picture, I won't jump to the conclusion that AMD raised prices and that the increase goes to them as 100% profit.
Also, the reason AMD's price increases stand out is that they released fairly good-value Ryzen 1000 through 3000 series CPUs, so when the Ryzen 5xxx series was announced, it was indeed a sticker shock. Intel, by contrast, has constantly charged consumers a high price, which makes their price adjustments look like some sort of discount. Realistically, if AMD had not released a competitive product, Intel would still be happily selling you a quad-core CPU while charging an arm and a leg for anything more than a 4c/4t part. I did not cook this up - just look back at the decade before AMD released Ryzen; that's what Intel was doing. In any case, it is no secret that whoever has the better product gets to price it at a premium, and the longer they stay on top, the crazier the price increases. Intel did that, and so will AMD. These are profit-seeking companies that will stop at nothing to earn more, so I've grown to look beyond siding with any specific company.
 
Last edited:
Joined
May 2, 2017
Messages
7,762 (2.77/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
No he's right, there are 3 chiplets in 5900x & 5950x as well as lots of EPYC chips, the IO die also counts as one.
Two links though - two chiplets would only need one link, after all. Still, I think the only limitation here is packaging, not the interconnect - they can make however many IF controllers on a die that they want. Physical layout is the real challenge: how many dice you can fit within the square/rectangular shape of the package while allowing for efficient trace routing. With a large+small chiplet design like Ryzen, that will likely be an odd number; with equally sized dice you're more likely to find an even number (like 1st/2nd gen TR/EPYC), though if one die is an I/O die responsible for off-package signaling that inherently biases towards an odd number of dice (as long as there's room).
There's no such thing, I dunno why people bring this false narrative up so often :rolleyes:
It's not a false narrative - heck, that WaPo article you link acknowledges as much in its third paragraph:
What began in the 1970s and ’80s as a useful corrective to self-satisfied managerial mediocrity has become a corrupting, self-interested dogma peddled by finance professors, money managers and over-compensated corporate executives.
The Forbes article does the same in its third paragraph:
But I contend there is a problem with the status quo, with the current version of capitalism, which serves the shareholders well, but has proven to be catastrophic for the vast majority of the American people and detrimental to American competitiveness on the global stage
(an article that, for the record, discusses a different topic from the legal and institutional structures enabling shareholder lawsuits for insufficient profit focus, but mentions it in passing)

That is how corporations today are managed, and there is massive legal precedent for shareholders successfully suing companies for not being sufficiently profit-focused. What the quasi-critical, quasi-apologist articles above (like the NYT one) point out is that this is in and of itself only one possible interpretation of the law. But it is still the dominant one, and the one that has the greatest impact on international trade as, crucially, international trade isn't regulated by the laws of other regions that are mostly far less permissive of this type of exploitation. It is still entirely possible for a corporation to argue (in court, if necessary, and they might win, but it will be expensive!) that strategies that are not directly aimed at short-term profits first and foremost will be the best in the long term, whether through long-term profitability, stability, growth, etc.

But this also misses a crucial point: the threat of legal action in and of itself strongly incentivizes profit-seeking strategies. Why? Because legal action is really expensive, time-consuming, and will hurt share prices significantly. And while share prices are mostly arbitrary and don't really tell you anything about the viability of a business or its value to society, the world (including governments, lenders/creditors, investors and business partners) treats them as if they do. Thus, the very fact that it's possible to sue companies for this, that there's a reasonable chance said litigation will drag on for a while, that the shareholders might win, and that legal action is likely to negatively affect share prices, together acts as a strong incentive for corporations to chase profits above all else. Saying that there isn't a legal obligation to do so is effectively splitting hairs: the legal grounds for being punished (for lack of a better word) exist, the systems companies operate within are geared towards doing so, and the surrounding structures act as if there is a legal obligation, preemptively punishing companies (through dropping share prices and poorer treatment by partners) even if they might win the lawsuits. Saying this isn't a legal obligation is overplaying a technicality in order to hide the real-world effects of these systems, and (which is why we're seeing this in the types of publications you've posted) an obfuscation tactic used by the peddlers of these ideologies to distract from the fact that this is the exact outcome they've been promoting for half a century.
 
Joined
Apr 12, 2013
Messages
7,573 (1.77/day)
There is no legal obligation or fiduciary duty to maximize profits for shareholders, otherwise Amazon would be sued into oblivion 2 decades back probably Tesla as well.

how many dice you can fit within the square/rectangular shape of the package while allowing for efficient trace routing.
Interposers or 3d packaging to the rescue then!
 
Joined
May 2, 2017
Messages
7,762 (2.77/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
There is no legal obligation or fiduciary duty to maximize profits for shareholders, otherwise Amazon would be sued into oblivion 2 decades back probably Tesla as well.
As I explained above, there is effectively a legal obligation due to the risk of lawsuits and the (purely negative) knock-on effects of this. You're welcome to present an argument as to why that understanding is wrong, but shouting "nuh-uh" doesn't do that. Due to the arbitrary nature of financial markets and how receptive they are to charismatic leaders, there are always exceptions - but it's also crucial to note that the companies you mention (as well as dozens of unprofitable tech startups - Twitter, Uber, etc.) either have highly visible, charismatic leaders, or exist within the vague bubble of Silicon Valley tech utopianism, where there seems to be a shocking acceptance of spending billions with no road towards even significant revenue, let alone profitability (again, Twitter and Uber stand out, but there are plenty of others). Most companies can't claim these exceptions, as they either don't have charismatic leaders or can't glue themselves to the "we're creating disruptive tech that will revolutionize [X]" branch of Silicon Valley tech utopianism. Especially in manufacturing industries, where you're dependent on producing real, physical products from material resources and shipping them to people, these exceptions very often don't apply even within Silicon Valley (see Segway, Theranos, etc.).
Interposers or 3d packaging to the rescue then!
Yep, these technologies will change things dramatically, both in terms of how tightly dice can be packed together and how much power is needed for die-to-die interconnects. IF on 3rd gen TR/EPYC can approach 100W on its own, which shows the potential for massive performance gains by moving to packaging methods that don't include through-package signaling between chiplets.
 
Joined
Apr 12, 2013
Messages
7,573 (1.77/day)
As I explained above, there is effectively a legal obligation due to the risk of lawsuits and the (purely negative) knock-on effects of this.
Well, an effective obligation doesn't mean anything unless it's part of a law - it's either there or it isn't ~ we aren't playing Guess Who, btw. The biggest risk in litigation is arguably bad PR, which is why most firms try to negotiate settlements even when they usually wouldn't lose in court. Also, the US court system is kinda messy, with sometimes-biased juries & whatnot! You can sometimes walk out of criminal court after a cold-blooded murder & be bankrupted in some other civil suit a day later :wtf:
You're welcome to present an argument as for why that understanding is wrong
Well, how about Intel? They effectively maximized profits & put the company on a crash course with reality half a decade later! Maybe the investors should sue themselves for the continued pursuit of "maximizing shareholder value" over effective course navigation through a complex & sometimes troubled market, negatively affecting sales of direct-to-consumer products like processors. Had Intel not pushed to fleece their customers so much, they really wouldn't have felt their wrath over the last half decade since Zen launched. People still remember the $1700+ 6950X ripoff, or any number of ULV chips until 8th gen finally introduced quad cores!
 
Joined
May 2, 2017
Messages
7,762 (2.77/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Well an effective obligation doesn't mean anything unless it's part of a law, it's either there or it isn't ~ we aren't playing guess who btw.
Sorry, but no. Laws aren't black and white - that's the broad public idea of them, but if they were, we wouldn't have a massive global industry of legal professionals. And it doesn't take an explicit ban or regulatory demand to ensure a type of behaviour in the vast majority of applicable cases. Chilling effects and inherent incentives work just as well, and cause a lot less fuss. And current US law and legal precedent severely disincentivize any course of action other than explicitly seeking short-term profits.
The biggest risks in litigation is arguably wrt bad PR, this is why most firms try to negotiate settlements even when they usually wouldn't lose in courts. Also the US court system is kinda messy with the sometimes biased juries & what not! You can sometimes walk out of criminal courts through cold blooded murders & be bankrupt in some other civil suit a day later :wtf:
... yes? Is this in any way contrary to what I was saying? Being bogged down in a years-long shareholder suit will tank your stock prices, which, as you admit, leads to negotiating settlements - which are essentially direct monetary losses for the company, punitively applied by shareholders because they were unhappy.
Well how about Intel? They effectively maximized profits & put the company on a crash course with reality half a decade later! Maybe the investors should sue themselves for the continued pursuit of "maximizing shareholder value" over effective course navigation through a complex & sometimes troubled market with (negative) affecting sales or direct to consumer products like processors. Had Intel not pushed for fleecing their customer so much they really wouldn't feel their wrath in last half a decade since Zen launched. People still remember $1700+ 6950x ripoff or any number of ULV chips till 8th gen finally introduced quad cores!
You're acting as if financial logic is ... well, logically coherent. It isn't - quite far from it. Any such lawsuit against Intel would be laughed out of court - you can't sue a company for losing marketshare or falling behind the competition, you can only sue them if it can be """reasonably""" argued that they should have done more to ensure shareholder profits. The judge would kick you out. Not to mention, of course, that Intel's woes were caused - as you say! - by explicitly chasing short-term shareholder profits.

In a sane world, or in the libertarian dream of a self-regulating free market, this would indeed have consequences (though very different consequences in those two scenarios), but in real-world markets where things like wealth/cash reserves, market share, mindshare, momentum, reputations, and trust with major shareholders matter a lot, Intel wasn't all that affected in the short term, and the long-term consequences are yet to be seen. Knowing Intel and their penchant for strong-arm anticompetitive practices, they'll be "fine". And thus, shareholders will be happy. Remember, stock markets are inherently conservative and risk-averse, and backing an incumbent is always seen as a safe bet until it is unequivocally proven that they have been overtaken.
 
Joined
Jun 18, 2021
Messages
2,588 (2.00/day)
otherwise Amazon would be sued into oblivion 2 decades back probably Tesla as well

What? Open up their charts and you'll see how you're wrong. And Tesla was actually sued by shareholders multiple times ("funding secured" and the SolarCity acquisition come to mind; they were able to win/settle the cases, but it was still a massive burden).

Well how about Intel? They effectively maximized profits & put the company on a crash course with reality half a decade later! Maybe the investors should sue themselves for the continued pursuit of "maximizing shareholder value" over effective course navigation through a complex & sometimes troubled market with (negative) affecting sales or direct to consumer products like processors. Had Intel not pushed for fleecing their customer so much they really wouldn't feel their wrath in last half a decade since Zen launched. People still remember $1700+ 6950x ripoff or any number of ULV chips till 8th gen finally introduced quad cores!

You're taking things to the extreme: mismanagement, or simply put, making bad choices and bad decisions, is not necessarily legally actionable. Taking Intel for example, they were uncontested market leaders with no competition, so they sat back, built up cash reserves, "invested" in stock buybacks, and put themselves at risk from any savvy competitor - well, guess what happened: a savvy competitor appeared.

Was it shortsighted? Yes, of course, but they really didn't need to do anything else - bad strategy doesn't mean they defrauded their shareholders. On a different case, for example Tesla and SolarCity, the allegation was that Elon Musk rammed the takeover through to save his cousins' business, a company seen as not very viable (SolarCity had several problems and still has) - that would be against the interests of the shareholders and more than simple mismanagement. The line can often be thin, but it's there somewhere.
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.59/day)
Location
Ex-usa | slava the trolls
Meanwhile, leakers say that Nvidia is pulling forward the launch of the next-gen RTX 4000 series to "early Q3" this year.
Q3 is from 1st of July to 30th of September.
Early literally means from 1st of July to 14th of August.
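As a sanity check on that window (assuming "early Q3" means roughly the first half of the quarter; the exact cutoff date is my assumption, not the leakers'), a quick sketch:

```python
from datetime import date, timedelta

# Calendar Q3 of 2022: 1 July through 30 September.
q3_start = date(2022, 7, 1)
q3_end = date(2022, 9, 30)

# Halfway point of the quarter, i.e. where "early" Q3 would end.
span_days = (q3_end - q3_start).days          # 91
early_cutoff = q3_start + timedelta(days=span_days // 2)

print(early_cutoff)  # 2022-08-15, i.e. mid-August
```

So "early Q3" works out to roughly 1 July through mid-August, matching the post above to within a day.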
 