
AMD to Support DDR5, LPDDR5, and PCI-Express gen 5.0 by 2022, Intel First to Market with DDR5

Joined
Oct 12, 2005
Messages
695 (0.10/day)
Even DDR4-2133 was slower than DDR3-1866 thanks to timings; transfer speed isn't the only indicator. Remember, you needed DDR2-667 to see a 1-3% gain over DDR-400, and you needed DDR3-1333 to see the same small gain over DDR2. Likewise, DDR4-2400 is only barely quicker than DDR3-1866, again because of timings.
The thing is, the design changes between DDR3 and DDR4 are not the same as those between DDR4 and DDR5. You can't compare them. Only real tests at launch will tell us the reality.

DDR5 brings a lot of new features that will increase performance no matter what speed it runs at, not just more bandwidth and lower voltage. The changes between DDR3 and DDR4 were minor compared to the changes between DDR4 and DDR5.

Also, this is a test of DDR4 vs. DDR3, both at 2133, and it shows that it's not true that DDR4 was always slower.
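To put rough numbers on the timings argument (a back-of-the-envelope sketch, assuming typical JEDEC-style CAS latencies rather than any specific kit): first-word latency in nanoseconds is CL × 2000 / data rate in MT/s, since the I/O clock is half the data rate.

```python
# First-word CAS latency across DDR generations.
# latency_ns = CL * 2000 / data_rate_MTps (the I/O clock is half the data rate).
# CL values are typical JEDEC-style timings, assumed here for illustration only.
kits = [
    ("DDR-400 CL3",    400,  3),
    ("DDR2-667 CL5",   667,  5),
    ("DDR3-1333 CL9",  1333, 9),
    ("DDR3-1866 CL10", 1866, 10),
    ("DDR4-2133 CL15", 2133, 15),
    ("DDR4-2400 CL17", 2400, 17),
]

for name, mtps, cl in kits:
    print(f"{name:>15}: {cl * 2000 / mtps:5.1f} ns")
```

By that measure, the early speed grades of each new generation land at the same or worse absolute latency than the mature grades of the previous one, which is the point being made above.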

 
Joined
May 2, 2017
Messages
7,762 (2.91/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Even DDR4-2133 was slower than DDR3-1866 thanks to timings; transfer speed isn't the only indicator. Remember, you needed DDR2-667 to see a 1-3% gain over DDR-400, and you needed DDR3-1333 to see the same small gain over DDR2. Likewise, DDR4-2400 is only barely quicker than DDR3-1866, again because of timings.
I know - but latencies (in an absolute sense, not cycles) for common JEDEC speeds are typically more or less flat generation to generation (even if early-generation standards usually compare somewhat poorly to previous late-generation ones, like DDR4-2133 vs. DDR3-2133), while available maximum bandwidth increases. And given the massive RAM size requirements of servers and HPC applications - which are largely driving the development of new RAM standards - I doubt we'll be seeing any real drops in latency any time soon, unless CPU makers start implementing huge L4 caches to do that part of the job as a sort of in-between step. But then again memory speed/latency scaling testing shows that outside of a few memory limited applications (typically compression/decompression and very few others relevant to end users) there isn't much of a performance difference for CPU tasks across variations of reasonably fast and low-latency RAM.
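A quick sketch of that "flat latency, rising bandwidth" trend, again assuming typical JEDEC-style timings rather than measured figures:

```python
# Absolute CAS latency stays roughly flat while per-channel bandwidth climbs.
# Timings are assumed typical JEDEC-style values, not any specific module.
grades = [
    ("DDR3-1600 CL11", 1600, 11),
    ("DDR3-2133 CL14", 2133, 14),
    ("DDR4-2400 CL17", 2400, 17),
    ("DDR4-3200 CL22", 3200, 22),
    ("DDR5-4800 CL40", 4800, 40),
]

for name, mtps, cl in grades:
    latency_ns = cl * 2000 / mtps        # first-word CAS latency
    bandwidth_gbs = mtps * 8 / 1000      # GB/s per 64-bit channel
    print(f"{name}: ~{latency_ns:.1f} ns, ~{bandwidth_gbs:.1f} GB/s per channel")
```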
The thing is, the design changes between DDR3 and DDR4 are not the same as those between DDR4 and DDR5. You can't compare them. Only real tests at launch will tell us the reality.

DDR5 brings a lot of new features that will increase performance no matter what speed it runs at, not just more bandwidth and lower voltage. The changes between DDR3 and DDR4 were minor compared to the changes between DDR4 and DDR5.

Also, this is a test of DDR4 vs. DDR3, both at 2133, and it shows that it's not true that DDR4 was always slower.

If I'm reading that AT review correctly they did normalize latencies across the DDR3 and DDR4 setups, though, which would negate any latency advantage of DDR3. Going by the specs the DDR3-2133 was C11 and the DDR4-2133 C15, but they say:
AnandTech said:
The first contains the Haswell-E i7-5960X processor, cut it down to run at four cores with no HyperThreading, fixed the CPU speed at 4 GHz and placed the memory into DDR4-2133 14-14-14 350 2T timings. We did the same with the second system, a Haswell based i7-4770K moved it to 4 GHz and making sure it was in 4C/4T mode.
When they say "We did the same" I take that to mean they set the DDR3 to the same timings.
 

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
16,887 (2.34/day)
Location
Sweden
System Name Overlord Mk MLI
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte X670E Aorus Master
Cooling Noctua NH-D15 SE with offsets
Memory 32GB Team T-Create Expert DDR5 6000 MHz @ CL30-34-34-68
Video Card(s) Gainward GeForce RTX 4080 Phantom GS
Storage 1TB Solidigm P44 Pro, 2 TB Corsair MP600 Pro, 2TB Kingston KC3000
Display(s) Acer XV272K LVbmiipruzx 4K@160Hz
Case Fractal Design Torrent Compact
Audio Device(s) Corsair Virtuoso SE
Power Supply be quiet! Pure Power 12 M 850 W
Mouse Logitech G502 Lightspeed
Keyboard Corsair K70 Max
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/yfsd9w
LOL! Yeah, I actually do remember those from before SIMMs. I had a good friend building machines back then (386 and 486). We went from needing 4 sticks to only 2 with SIMMs. Ahh, memories...
Those were 30-pin rather than 72-pin SIMMs.
SIPP memory was individual memory chips that you had to add to the board. We had an old 8086 in school with a dozen or two SIPPs mounted in it.
 
Joined
Oct 12, 2005
Messages
695 (0.10/day)
Well, they clearly specified that the DDR3-2133 modules they had were CL11. To me, they are talking about putting the CPU into 4-core/4-thread mode. But even if that's not the case, people tend to forget that high-speed memory sticks with low latency cost a fortune. Higher speeds generally come with worse timings.

And again, contrary to most DDR transitions, DDR5 is not just "doubling the transfer speed while cutting the internal memory clock in half". This time, there are major design changes in the memory itself that make any past comparison useless.
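One of those design changes can be sketched concretely: DDR5 splits each DIMM into two independent 32-bit subchannels and doubles the burst length to 16, so the 64-byte cache-line granularity is preserved while two accesses can be in flight per module. A minimal illustration (simplified, ECC bits ignored):

```python
# DDR4 exposes one 64-bit channel per DIMM with burst length 8; DDR5 splits the
# DIMM into two independent 32-bit subchannels with burst length 16 (ECC ignored).
# Both deliver 64-byte bursts (one cache line), but DDR5 can serve two independent
# requests per module at once, improving efficiency regardless of clock speed.

def burst_bytes(width_bits: int, burst_length: int) -> int:
    """Bytes delivered by one burst on a single (sub)channel."""
    return width_bits // 8 * burst_length

configs = [
    ("DDR4", 1, 64, 8),
    ("DDR5", 2, 32, 16),
]

for name, subchannels, width_bits, bl in configs:
    print(f"{name}: {subchannels} x {width_bits}-bit subchannel(s), "
          f"BL{bl} -> {burst_bytes(width_bits, bl)} B per burst")
```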
 
Joined
May 2, 2017
Messages
7,762 (2.91/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Well, they clearly specified that the DDR3-2133 modules they had were CL11. To me, they are talking about putting the CPU into 4-core/4-thread mode. But even if that's not the case, people tend to forget that high-speed memory sticks with low latency cost a fortune. Higher speeds generally come with worse timings.

And again, contrary to most DDR transitions, DDR5 is not just "doubling the transfer speed while cutting the internal memory clock in half". This time, there are major design changes in the memory itself that make any past comparison useless.
I didn't say there weren't design changes, just that I believe your reading of the AT article was a bit off. After all, if they were doing a stock v. stock comparison, why tune the timings of the DDR4? Also, while DDR3-2133 wasn't cheap by any means, DDR3-2133 C11 was the base latency at that speed. In 2015 I bought a 2x4GB kit of DDR3-2133 C9 for my APU HTPC, and that cost the equivalent of less than US$100 at the time. Not cheap for 8GB of DDR3 in 2015, but by no means crazy prices. Still, it'll definitely be interesting to see DDR5 reviews when platforms start arriving.
 
Joined
Dec 28, 2006
Messages
4,378 (0.68/day)
Location
Hurst, Texas
System Name The86
Processor Ryzen 5 3600
Motherboard ASROCKS B450 Steel Legend
Cooling AMD Stealth
Memory 2x8gb DDR4 3200 Corsair
Video Card(s) EVGA RTX 3060 Ti
Storage WD Black 512gb, WD Blue 1TB
Display(s) AOC 24in
Case Raidmax Alpha Prime
Power Supply 700W Thermaltake Smart
Mouse Logitech Mx510
Keyboard Razer BlackWidow 2012
Software Windows 10 Professional
I didn't say there weren't design changes, just that I believe your reading of the AT article was a bit off. After all, if they were doing a stock v. stock comparison, why tune the timings of the DDR4? Also, while DDR3-2133 wasn't cheap by any means, DDR3-2133 C11 was the base latency at that speed. In 2015 I bought a 2x4GB kit of DDR3-2133 C9 for my APU HTPC, and that cost the equivalent of less than US$100 at the time. Not cheap for 8GB of DDR3 in 2015, but by no means crazy prices. Still, it'll definitely be interesting to see DDR5 reviews when platforms start arriving.

Funny how that works. I've got some DDR2-1200 CAS 5 on an AM2+ board with a Phenom X4 840T. Put the same chip into an AM3+ board with DDR3-1333 CAS 11, and guess which one had the better memory system.
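Running the stated timings through the same first-word latency conversion as above makes the outcome unsurprising:

```python
# First-word latency for the two kits mentioned above, using the stated timings.
for name, mtps, cl in (("DDR2-1200 CL5", 1200, 5), ("DDR3-1333 CL11", 1333, 11)):
    print(f"{name}: {cl * 2000 / mtps:.1f} ns")   # ~8.3 ns vs. ~16.5 ns
```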
 
Joined
Feb 26, 2016
Messages
551 (0.18/day)
Location
Texas
System Name O-Clock
Processor Intel Core i9-9900K @ 52x/49x 8c8t
Motherboard ASUS Maximus XI Gene
Cooling EK Quantum Velocity C+A, EK Quantum Vector C+A, CE 280, Monsta 280, GTS 280 all w/ A14 IP67
Memory 2x16GB G.Skill TridentZ @3900 MHz CL16
Video Card(s) EVGA RTX 2080 Ti XC Black
Storage Samsung 983 ZET 960GB, 2x WD SN850X 4TB
Display(s) Asus VG259QM
Case Corsair 900D
Audio Device(s) beyerdynamic DT 990 600Ω, Asus SupremeFX Hi-Fi 5.25", Elgato Wave 3
Power Supply EVGA 1600 T2 w/ A14 IP67
Mouse Logitech G403 Wireless (PMW3366)
Keyboard Monsgeek M5W w/ Cherry MX Silent Black RGBs
Software Windows 10 Pro 64 bit
Benchmark Scores https://hwbot.org/search/submissions/permalink?userId=92615&cpuId=5773
Lets just blame intel for being stagnant - we'd still be on quad cores and PCI-E 3 without ryzen shaking things up
Lol the fact that y'all really think Intel was going to stick with quad cores forever... THEY DIDN'T MAKE THE 8700K IN RESPONSE TO RYZEN LMAO IT WAS BECAUSE THEY FELT IT WAS A GOOD TIME TO GIVE NEW CORES!!! (I talked to engineers before coffee lake was announced, they told me they were making 6 core processors)

All Ryzen did was lower the price of Intel's lineup. At least in the beginning, with the previously unknown (at the time) Zen architecture. Now, yes they did add more cores in direct competition with Ryzen, however it wasn't all because of Ryzen.
 

ARF

Joined
Jan 28, 2020
Messages
4,566 (2.74/day)
Location
Ex-usa | slava the trolls
Lol the fact that y'all really think Intel was going to stick with quad cores forever... THEY DIDN'T MAKE THE 8700K IN RESPONSE TO RYZEN LMAO IT WAS BECAUSE THEY FELT IT WAS A GOOD TIME TO GIVE NEW CORES!!! (I talked to engineers before coffee lake was announced, they told me they were making 6 core processors)

All Ryzen did was lower the price of Intel's lineup. At least in the beginning, with the previously unknown (at the time) Zen architecture. Now, yes they did add more cores in direct competition with Ryzen, however it wasn't all because of Ryzen.



Well, Intel didn't lower the price but actually gave more cores, because the Core i7-7700K vs. the Ryzen 7 1800X comparison didn't look that good.
 
Joined
May 2, 2017
Messages
7,762 (2.91/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Lol the fact that y'all really think Intel was going to stick with quad cores forever... THEY DIDN'T MAKE THE 8700K IN RESPONSE TO RYZEN LMAO IT WAS BECAUSE THEY FELT IT WAS A GOOD TIME TO GIVE NEW CORES!!! (I talked to engineers before coffee lake was announced, they told me they were making 6 core processors)

All Ryzen did was lower the price of Intel's lineup. At least in the beginning, with the previously unknown (at the time) Zen architecture. Now, yes they did add more cores in direct competition with Ryzen, however it wasn't all because of Ryzen.
It's well known that Intel was working on 6-core MSDT i7s before Ryzen, sure, but if their previous practices were anything to go by they would have been much more expensive, and would have stayed around as the top of the line option for half a decade or more. If we had been lucky they would have moved the i5s to 4c8t instead of 4c4t, but no more. Pre Ryzen it took them literally a decade to move past 4 cores (C2Q in 2007 to 8700k in 2017). Post Ryzen a matching increase from 6 to 8 took a single year. That latter increase is purely due to competition, as there is zero precedent to believe otherwise (and plenty to support that Intel changes as little as possible unless they have to). It is obviously not all down to Ryzen, but that Ryzen is the main reason we now have much higher core counts across the board is undeniable. That 6-core flagship Intel made mostly on their own accord is now competing with lower midrange i5s around $200 after all, with 10-core i9s arriving any minute. That would never have happened in such a short time if it wasn't for Ryzen.
 
Joined
Nov 6, 2016
Messages
1,693 (0.60/day)
Location
NH, USA
System Name Lightbringer
Processor Ryzen 7 2700X
Motherboard Asus ROG Strix X470-F Gaming
Cooling Enermax Liqmax Iii 360mm AIO
Memory G.Skill Trident Z RGB 32GB (8GBx4) 3200Mhz CL 14
Video Card(s) Sapphire RX 5700XT Nitro+
Storage Hp EX950 2TB NVMe M.2, HP EX950 1TB NVMe M.2, Samsung 860 EVO 2TB
Display(s) LG 34BK95U-W 34" 5120 x 2160
Case Lian Li PC-O11 Dynamic (White)
Power Supply BeQuiet Straight Power 11 850w Gold Rated PSU
Mouse Glorious Model O (Matte White)
Keyboard Royal Kludge RK71
Software Windows 10
Guess I was early adopting DDR4 in 2017, but in my defense I came from a DDR2 C2Q system :D

DDR5 will mainly be interesting to me in terms of APU systems; general system performance doesn't benefit much from faster RAM, but iGPUs sure do. A 5nm APU with 15-20 RDNA 2 CUs and DDR5-6000? I would definitely like that, yes. Same goes for LPDDR5 in laptops, though at least we have LPDDR4X there to make up some of the deficit compared to dedicated VRAM.
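For a rough sense of what that jump would mean for an iGPU sharing system memory (peak theoretical numbers only, assuming standard dual-channel 128-bit setups):

```python
# Peak theoretical bandwidth of a dual-channel (128-bit) system memory setup,
# i.e. the pool an iGPU has to share with the CPU. Speeds chosen for illustration.
def dual_channel_gbs(data_rate_mtps: int) -> float:
    return data_rate_mtps * 16 / 1000   # 128 bits = 16 bytes per transfer

for name, rate in (("DDR4-3200", 3200), ("DDR5-6000", 6000)):
    print(f"{name} dual channel: ~{dual_channel_gbs(rate):.1f} GB/s")
# ~51.2 GB/s vs. ~96.0 GB/s, i.e. almost double the bandwidth for the iGPU to draw on.
```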


You might be right there, though a full calendar year without a CPU release will look weird even if the timing isn't actually any slower than previous generations. Still, the time of expecting major performance improvements year over year is past, so I suppose this is reasonable. At least that way I won't be feeling as bad for building a Renoir HTPC once they hit the desktop later this year :p

I don't think DDR5 will be as important to APUs as integrated HBM will be with the growth of heterogeneous SoCs. Personally, I'd rather have at least 4GB (though 8GB would be amazing) of embedded HBM2e acting as a unified VRAM/L4 cache, 36 RDNA2 CUs (like the PS5), and 8 CPU cores, and I wouldn't care if it required a socket and SoC package the size of Threadripper and cost $600. I would absolutely love it if APUs with the graphical horsepower of the Xbox Series X and PS5 were available on desktop, but I think a large part of their performance comes from the unified GDDR6 they use in place of separate CPU and GPU memory pools. That's why the embedded 8GB of HBM2e in my dream APU would be the important part: it has more bandwidth and, most importantly, it sits extremely close to the chip.
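A back-of-the-envelope comparison of those memory pools, using typical published pin rates as assumptions rather than any particular product's spec:

```python
# Peak bandwidth in GB/s = bus width (bits) * per-pin data rate (Gbps) / 8.
# Pin rates are typical published figures, used only for a rough comparison.
pools = [
    ("Dual-channel DDR5-6000",       128,  6.0),
    ("256-bit GDDR6 @ 14 Gbps",      256, 14.0),  # console-style unified pool
    ("One HBM2e stack @ 3.2 Gbps",  1024,  3.2),
]

for name, width_bits, gbps in pools:
    print(f"{name}: ~{width_bits * gbps / 8:.0f} GB/s")
# ~96, ~448 and ~410 GB/s respectively.
```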

Am I the only one interested in seeing powerhouse APUs? I think it'd be amazing to have the option to abstain from the dGPU market altogether and the powerful SFF builds possible with such APUs would be legendary. I haven't even mentioned what they'd do for the mobile market.
 
Joined
May 2, 2017
Messages
7,762 (2.91/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I don't think DDR5 will be as important to APUs as integrated HBM will be with the growth of heterogeneous SoCs. Personally, I'd rather have at least 4GB (though 8GB would be amazing) of embedded HBM2e acting as a unified VRAM/L4 cache, 36 RDNA2 CUs (like the PS5), and 8 CPU cores, and I wouldn't care if it required a socket and SoC package the size of Threadripper and cost $600. I would absolutely love it if APUs with the graphical horsepower of the Xbox Series X and PS5 were available on desktop, but I think a large part of their performance comes from the unified GDDR6 they use in place of separate CPU and GPU memory pools. That's why the embedded 8GB of HBM2e in my dream APU would be the important part: it has more bandwidth and, most importantly, it sits extremely close to the chip.

Am I the only one interested in seeing powerhouse APUs? I think it'd be amazing to have the option to abstain from the dGPU market altogether and the powerful SFF builds possible with such APUs would be legendary. I haven't even mentioned what they'd do for the mobile market.
You aren't the only one, but they're never going to happen. Too expensive to make and too much of a niche market. This applies to anything with HBM or something similar on board, at least for desktops: too pricey and too few takers. Nor is Windows as it's currently designed capable of handling a hybrid memory system like that, meaning you'd need to hardwire the HBM to the GPU, limiting its usefulness for other applications (further shrinking the addressable market for such a product). Not to mention that cooling something like that would likely require a new form factor, making it incompatible with pretty much any case. The appeal of something like this is compactness after all, which means ITX compatibility is a must. It would likely be possible at reasonable prices to stick a HBM2 die on a mobile substrate, but it would require dimensions outside of standard desktop sockets, so it would be mobile only (unless you want an APU for TR4, which doesn't have display outputs...).

Which, of course, is why I'm excited for DDR5 for APUs - 'cause it will actually happen, and will have huge benefits for iGPU performance. The same goes for LPDDR5 obviously.
 

dlee5495

New Member
Joined
Apr 16, 2020
Messages
8 (0.01/day)
Lets just blame intel for being stagnant - we'd still be on quad cores and PCI-E 3 without ryzen shaking things up

There are definitely signs of their having rested on their laurels, but why blame only the CPU manufacturers for slow integration with the memory chip manufacturers? Collusion on memory pricing has been let off with slaps on the wrist, and the lack of real competition or principled rule of law in international trade ends up physically enforcing the digital divide in a more brutal way than would otherwise be possible.

Am I the only one interested in seeing powerhouse APUs? I think it'd be amazing to have the option to abstain from the dGPU market altogether and the powerful SFF builds possible with such APUs would be legendary. I haven't even mentioned what they'd do for the mobile market.
The NUC Extreme brings a laptop CPU to the USFF/nettop/desktop alongside a full-fledged desktop GPU; your idea would kinda be the inversion of that. The move upward from phone to laptop capability in processing power has been a thing for a while, and the 4900HS brings desktop performance to the laptop at a 35W TDP; it's all kinda blending at the borders of these definitions, like a folding phone/tablet does.

It would likely be possible at reasonable prices to stick a HBM2 die on a mobile substrate, but it would require dimensions outside of standard desktop sockets, so it would be mobile only.

maybe this is already happening
 
Joined
May 2, 2017
Messages
7,762 (2.91/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
There are definitely signs of their having rested on their laurels, but why blame only the CPU manufacturers for slow integration with the memory chip manufacturers? Collusion on memory pricing has been let off with slaps on the wrist, and the lack of real competition or principled rule of law in international trade ends up physically enforcing the digital divide in a more brutal way than would otherwise be possible.


The NUC Extreme brings a laptop CPU to the USFF/nettop/desktop alongside a full-fledged desktop GPU; your idea would kinda be the inversion of that. The move upward from phone to laptop capability in processing power has been a thing for a while, and the 4900HS brings desktop performance to the laptop at a 35W TDP; it's all kinda blending at the borders of these definitions, like a folding phone/tablet does.



maybe this is already happening
What? That's a smartphone SoC. There is no way in the world that will have HBM. I was using the word mobile in a PC context (nothing in this thread relates to phones), as in: laptops. There would be no way of using HBM in a phone anyhow: smartphone SoCs aren't packaged in a way that would allow for the addition of an interposer for the HBM to sit on without dramatically increasing the thickness of the phone, and likely also its area (most smartphone SoCs have LPDDR RAM stacked directly on top of the SoC in a PoP (Package on Package) setup). Not to mention that its performance would be utterly wasted on a 3-5W smartphone chip. No, I was talking about a laptop BGA APU with a slightly bigger substrate to fit an interposer and HBM - which could both be done at acceptable cost and have very tangible benefits. The main issue is likely unwillingness to make an APU die big enough to properly utilize it and make its cost seem reasonable - there isn't much demand for high end APUs after all when gaming laptops are as good as they are.
 

dlee5495

New Member
Joined
Apr 16, 2020
Messages
8 (0.01/day)
What? That's a smartphone SoC. There is no way in the world that will have HBM. I was using the word mobile in a PC context (nothing in this thread relates to phones), as in: laptops. There would be no way of using HBM in a phone anyhow: smartphone SoCs aren't packaged in a way that would allow for the addition of an interposer for the HBM to sit on without dramatically increasing the thickness of the phone, and likely also its area (most smartphone SoCs have LPDDR RAM stacked directly on top of the SoC in a PoP (Package on Package) setup). Not to mention that its performance would be utterly wasted on a 3-5W smartphone chip. No, I was talking about a laptop BGA APU with a slightly bigger substrate to fit an interposer and HBM - which could both be done at acceptable cost and have very tangible benefits. The main issue is likely unwillingness to make an APU die big enough to properly utilize it and make its cost seem reasonable - there isn't much demand for high end APUs after all when gaming laptops are as good as they are.

I think you are probably right; I hadn't thought of the thickness of the interposer and the layers. I wonder if the technology has much to do with AMD's advantage in the overall CPU race, and whether it would be worth the size and thickness reduction. I remember that when they were shooting for early Ryzen, they mentioned they had left a lot of the low-hanging fruit in terms of speed and efficiency improvements untouched as they focused on the higher-level matters, and that they were going to clean up later with the easier optimizations. Is there something about going toward 7nm and finer that they prepped for architecturally that Intel didn't, or are lithography and yield unaffected by the details of the circuit design? I mean, AMD was cost-saving, selling the x86 rights to China, and staying at 20+nm for a while to buy the time to finish the architecture change; did they do something far-sighted enough with APUs and HBM that Intel will eventually have to follow or imitate, like back with x64? Thick stacking and minimizing the size of NAND flash is something Samsung and others have been doing for a while; is the interposer problem big enough to really stretch theoretical limits?
 
Joined
May 2, 2017
Messages
7,762 (2.91/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I think you are probably right; I hadn't thought of the thickness of the interposer and the layers. I wonder if the technology has much to do with AMD's advantage in the overall CPU race, and whether it would be worth the size and thickness reduction. I remember that when they were shooting for early Ryzen, they mentioned they had left a lot of the low-hanging fruit in terms of speed and efficiency improvements untouched as they focused on the higher-level matters, and that they were going to clean up later with the easier optimizations. Is there something about going toward 7nm and finer that they prepped for architecturally that Intel didn't, or are lithography and yield unaffected by the details of the circuit design? I mean, AMD was cost-saving, selling the x86 rights to China, and staying at 20+nm for a while to buy the time to finish the architecture change; did they do something far-sighted enough with APUs and HBM that Intel will eventually have to follow or imitate, like back with x64? Thick stacking and minimizing the size of NAND flash is something Samsung and others have been doing for a while; is the interposer problem big enough to really stretch theoretical limits?
That post has too many questions in various categories jumbled together for me to quite parse, but I'll make an attempt nonetheless.

- I don't think interposers are suitable for smartphone/tablet applications, partly due to cost/complexity and partly due to durability: interposers are large and thin pieces of silicon, and at least in current implementations they would have significant risks of cracking if used in something that's likely to be dropped like a phone. Of course they could be strengthened, and any kind of SoC+HBM on interposer package for a phone/tablet is likely to be packaged in a ceramic (or at least epoxy) package that will help support the interposer, but it will still be significantly more brittle than any other mobile SoC. As for cost, not only is making the interposer relatively expensive, HBM also costs many time more than an off-the-shelf LPDDR package, and production costs with mounting both chips on the interposer and then packaging them would also be many times higher. I'm not sure if an interposer could be mounted directly to the phone motherboard or not, but at least all current implementations of interposers place them on a substrate first, adding thickness. Of course this will be somewhat offset by not having DRAM stacked on top of the SoC, but you would end up with a quite large, thick package for the SoC+DRAM - large enough that it might be difficult to fit on most phone motherboards. Implementing something like this in a laptop form factor would be far easier as the change in SoC form factor would be much smaller, and there's more space to work with in the first place. Durability concerns are also much lower in an implementation like that.
- Of course this could change with new packaging methods; if chip-on-chip stacking (rather than package-on-package or chip-on-interposer-on-substrate) finally arrives for high volume and high power implementations that could allow for HBM to be stacked directly on top of an SoC. HBM cost would still be an issue for mobile, but new packaging like this would make its use somewhat more likely.

- The "low-hanging fruit" comment regarding successors to Zen was referring to upcoming architectural improvements; we saw some of that addressed with Zen+ (cache improvements) and more of it with Zen 2. After all, as you work towards and eventually settle on a base design (that is finalized 1-1.5 years before it reaches retail) you will inevitably find areas (big or small) where it could be improved with various degrees of effort. Node improvements can of course help further improve things, but the two are not necessarily related.
- Silicon yields are absolutely affected by circuit design, but the specifics of this are extremely ... well, specific and fine-grained; any silicon design must be tweaked and tuned for the node it is designed to be manufactured on, but the specific design (as in: the actual layout that is to be etched into the silicon, not just the architectural layout) is also based off node-specific design libraries specifying how various types of transistors, interconnects, etc. are shaped and laid out. This is why transferring a chip design from one node to another is far from trivial - if the nodes are very different, it is essentially a brand new design even if the overall architecture is the same. In other words, you can't really compare the implementations of entirely different CPU designs on entirely different nodes beyond high-level overviews (unless you want to write a couple of Ph.D.s on the subject, I suppose). The reason why AMD managed to overtake Intel like they have with the current generation can be summed up as a confluence of various factors related in various ways: In terms of silicon manufacturing, AMD had access to a relatively mature 7nm node while Intel struggled to get their comparable 10nm node to work properly (not directly related to silicon designs). In terms of architecture, AMD improved the Zen design enough to surpass Skylake and its derivatives in IPC, while Intel was still using Skylake (holding off its new core designs for the perpetually delayed 10nm node which, as noted above, they were designed for, and redesigning them for 14nm would be a significant undertaking). And in terms of the combination of architecture and node, AMD could reap the rewards of an efficient architecture on an efficient node with better clock scaling than the previous node, while Intel had no recourse but to push clock speeds ever higher on their aging 14nm node, compounding AMD's efficiency lead while barely managing to keep up in absolute performance (and arguably not managing this in multithreaded workloads). This won't change until at least Tiger lake (mobile 10nm, reportedly actually working well) and Rocket Lake (14nm backport of 10nm Willow Cove core) arrive, but by that time AMD will have Zen 3 CPUs out at least in the desktop space.

- HBM doesn't relate much to this: Intel has already used HBM in a mobile chip after all (Kaby Lake-G) through its EMIB interconnect tech. If price is taken out of the picture, anyone can use HBM in any non-tiny form factor should they want to. AMD could make a HBM-equipped APU tomorrow (well, not technically tomorrow, it'd take time to implement in silicon) if they wanted to, they would just need to put a HBM PHY and controller in the APU and design and manufacture an interposer and package for it. For desktops this would be more challenging as it would be extremely difficult to fit this + a reasonably sized APU within the constraints of an AM4 package and its IHS - there isn't much area there. So it's also a question of balance: HBM would be meaningless if it could only be paired with an iGPU too small for it to be utilized properly. Still, cost and addressable markets (and therefore margins) is the biggest hindrance here. KBL-G saw just a handful of implementations, and an expensive APU isn't likely to be widely adopted if OEMs can get similar or better performance at a comparable price through separate CPUs and GPUs (which they are also far more familiar with designing cooling systems and motherboards for).
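For context on why an on-package HBM stack is tempting for an APU despite the cost, here is a rough bandwidth comparison; the pin rates are typical published figures (the KBL-G class stack ran a 1024-bit interface), used as assumptions for illustration:

```python
# Interface width, not clock speed, is what an on-package HBM stack buys an APU.
# Figures are typical published values, used here only for a rough comparison.
ifaces = [
    ("Dual-channel LPDDR4X-4266",                128, 4.266),  # shared with the CPU
    ("One HBM2 stack @ 1.6 Gbps (KBL-G class)", 1024, 1.6),
    ("One HBM2e stack @ 3.2 Gbps",              1024, 3.2),
]

for name, width_bits, gbps in ifaces:
    print(f"{name}: ~{width_bits * gbps / 8:.0f} GB/s")
# ~68, ~205 and ~410 GB/s respectively.
```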
 

dlee5495

New Member
Joined
Apr 16, 2020
Messages
8 (0.01/day)
That post has too many questions in various categories jumbled together for me to quite parse, but I'll make an attempt nonetheless.

- I don't think interposers are suitable for smartphone/tablet applications, partly due to cost/complexity and partly due to durability: interposers are large and thin pieces of silicon, and at least in current implementations they would have significant risks of cracking if used in something that's likely to be dropped like a phone. Of course they could be strengthened, and any kind of SoC+HBM on interposer package for a phone/tablet is likely to be packaged in a ceramic (or at least epoxy) package that will help support the interposer, but it will still be significantly more brittle than any other mobile SoC. As for cost, not only is making the interposer relatively expensive, HBM also costs many time more than an off-the-shelf LPDDR package, and production costs with mounting both chips on the interposer and then packaging them would also be many times higher. I'm not sure if an interposer could be mounted directly to the phone motherboard or not, but at least all current implementations of interposers place them on a substrate first, adding thickness. Of course this will be somewhat offset by not having DRAM stacked on top of the SoC, but you would end up with a quite large, thick package for the SoC+DRAM - large enough that it might be difficult to fit on most phone motherboards. Implementing something like this in a laptop form factor would be far easier as the change in SoC form factor would be much smaller, and there's more space to work with in the first place. Durability concerns are also much lower in an implementation like that.
- Of course this could change with new packaging methods; if chip-on-chip stacking (rather than package-on-package or chip-on-interposer-on-substrate) finally arrives for high volume and high power implementations that could allow for HBM to be stacked directly on top of an SoC. HBM cost would still be an issue for mobile, but new packaging like this would make its use somewhat more likely.

- The "low-hanging fruit" comment regarding successors to Zen was referring to upcoming architectural improvements; we saw some of that addressed with Zen+ (cache improvements) and more of it with Zen 2. After all, as you work towards and eventually settle on a base design (that is finalized 1-1.5 years before it reaches retail) you will inevitably find areas (big or small) where it could be improved with various degrees of effort. Node improvements can of course help further improve things, but the two are not necessarily related.
- Silicon yields are absolutely affected by circuit design, but the specifics of this are extremely ... well, specific and fine-grained; any silicon design must be tweaked and tuned for the node it is designed to be manufactured on, but the specific design (as in: the actual layout that is to be etched into the silicon, not just the architectural layout) is also based off node-specific design libraries specifying how various types of transistors, interconnects, etc. are shaped and laid out. This is why transferring a chip design from one node to another is far from trivial - if the nodes are very different, it is essentially a brand new design even if the overall architecture is the same. In other words, you can't really compare the implementations of entirely different CPU designs on entirely different nodes beyond high-level overviews (unless you want to write a couple of Ph.D.s on the subject, I suppose). The reason why AMD managed to overtake Intel like they have with the current generation can be summed up as a confluence of various factors related in various ways: In terms of silicon manufacturing, AMD had access to a relatively mature 7nm node while Intel struggled to get their comparable 10nm node to work properly (not directly related to silicon designs). In terms of architecture, AMD improved the Zen design enough to surpass Skylake and its derivatives in IPC, while Intel was still using Skylake (holding off its new core designs for the perpetually delayed 10nm node which, as noted above, they were designed for, and redesigning them for 14nm would be a significant undertaking). And in terms of the combination of architecture and node, AMD could reap the rewards of an efficient architecture on an efficient node with better clock scaling than the previous node, while Intel had no recourse but to push clock speeds ever higher on their aging 14nm node, compounding AMD's efficiency lead while barely managing to keep up in absolute performance (and arguably not managing this in multithreaded workloads). This won't change until at least Tiger lake (mobile 10nm, reportedly actually working well) and Rocket Lake (14nm backport of 10nm Willow Cove core) arrive, but by that time AMD will have Zen 3 CPUs out at least in the desktop space.

- HBM doesn't relate much to this: Intel has already used HBM in a mobile chip after all (Kaby Lake-G) through its EMIB interconnect tech. If price is taken out of the picture, anyone can use HBM in any non-tiny form factor should they want to. AMD could make a HBM-equipped APU tomorrow (well, not technically tomorrow, it'd take time to implement in silicon) if they wanted to, they would just need to put a HBM PHY and controller in the APU and design and manufacture an interposer and package for it. For desktops this would be more challenging as it would be extremely difficult to fit this + a reasonably sized APU within the constraints of an AM4 package and its IHS - there isn't much area there. So it's also a question of balance: HBM would be meaningless if it could only be paired with an iGPU too small for it to be utilized properly. Still, cost and addressable markets (and therefore margins) is the biggest hindrance here. KBL-G saw just a handful of implementations, and an expensive APU isn't likely to be widely adopted if OEMs can get similar or better performance at a comparable price through separate CPUs and GPUs (which they are also far more familiar with designing cooling systems and motherboards for).

I only remembered Broadwell and what Intel did with the C chips in that series for caching.
Is HBM essential to the changes made in the Zen architecture, or is it solely for the graphics (you mention cache, which is just memory, is that HBM caching? If so, then just because Intel hasn't successfully reaped rewards of it yet doesn't mean it isn't something won't have to do eventually (still thinking of the x64 issue), since Kaby Lake G was trying to use it for graphics anyway and not cpu..

It seems like if it is fundamental to the way forward space can be made for it even with a temporary generational protrusion like is done with the camera bumps; issues like cost and size have been just a matter of time and investment for scale and process improvement, and competition by Intel (or licensing from AMD like they did with x64 and like the Chinese have now done with x86 from AMD) would seem to only improve the problems of miniaturization and economization. Smartphone CPUs today outdo what the vacuum-tube computers that filled large rooms used to, after all.

If i understand you correctly, you're saying that the logic/architecture of the chip itself has to be remapped to any given node - is that because the foundry technique itself varies widely, or because of microscopic differences in from foundry to foundry built upon the same specifications? Is it just like, intel and amd don't know why one wafer-maker gets the yields it does but the foundry does have more ground knowledge and goes through a bunch of different iterations, and the logic guys just give them the file to print and work it out? Maybe just more circuit dense regions have different electrical properties than others under a given photo/heating/chemical process or something?
 

ARF

Joined
Jan 28, 2020
Messages
4,566 (2.74/day)
Location
Ex-usa | slava the trolls
You really think that is the case?


Well, exactly three years ago, $300-$350 bought you only a quad-core CPU, and the large majority of consumers were still using dual-cores at that time.
Yes, the top bin would have been a hexa-core today if Ryzen hadn't appeared at all.
 

Mussels

Freshwater Moderator
Staff member
Joined
Oct 6, 2004
Messages
58,413 (8.05/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700Mhz 0.750v (375W down to 250W))
Storage 2TB WD SN850 NVME + 1TB Sasmsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
You really think that is the case?

Intel is still on PCIe 3; without Ryzen we'd barely have hex-core CPUs at today's clock speeds (5 GHz max)... if Ryzen made them speed things up, where would we be without that motivation to do better?
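For reference, the per-generation jump being discussed is straightforward to quantify (standard 128b/130b encoding assumed for all three generations):

```python
# Approximate usable bandwidth of a x16 slot per PCIe generation.
# All three generations use 128b/130b encoding; the transfer rate doubles each step.
gens = {"PCIe 3.0": 8, "PCIe 4.0": 16, "PCIe 5.0": 32}   # GT/s per lane

for name, gtps in gens.items():
    lane_gbs = gtps * 128 / 130 / 8     # GB/s per lane after encoding overhead
    print(f"{name} x16: ~{lane_gbs * 16:.1f} GB/s")
# ~15.8, ~31.5 and ~63.0 GB/s respectively.
```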
 
Joined
Feb 26, 2016
Messages
551 (0.18/day)
Location
Texas
System Name O-Clock
Processor Intel Core i9-9900K @ 52x/49x 8c8t
Motherboard ASUS Maximus XI Gene
Cooling EK Quantum Velocity C+A, EK Quantum Vector C+A, CE 280, Monsta 280, GTS 280 all w/ A14 IP67
Memory 2x16GB G.Skill TridentZ @3900 MHz CL16
Video Card(s) EVGA RTX 2080 Ti XC Black
Storage Samsung 983 ZET 960GB, 2x WD SN850X 4TB
Display(s) Asus VG259QM
Case Corsair 900D
Audio Device(s) beyerdynamic DT 990 600Ω, Asus SupremeFX Hi-Fi 5.25", Elgato Wave 3
Power Supply EVGA 1600 T2 w/ A14 IP67
Mouse Logitech G403 Wireless (PMW3366)
Keyboard Monsgeek M5W w/ Cherry MX Silent Black RGBs
Software Windows 10 Pro 64 bit
Benchmark Scores https://hwbot.org/search/submissions/permalink?userId=92615&cpuId=5773
It's well known that Intel was working on 6-core MSDT i7s before Ryzen, sure, but if their previous practices were anything to go by they would have been much more expensive, and would have stayed around as the top of the line option for half a decade or more. If we had been lucky they would have moved the i5s to 4c8t instead of 4c4t, but no more. Pre Ryzen it took them literally a decade to move past 4 cores (C2Q in 2007 to 8700k in 2017). Post Ryzen a matching increase from 6 to 8 took a single year. That latter increase is purely due to competition, as there is zero precedent to believe otherwise (and plenty to support that Intel changes as little as possible unless they have to). It is obviously not all down to Ryzen, but that Ryzen is the main reason we now have much higher core counts across the board is undeniable. That 6-core flagship Intel made mostly on their own accord is now competing with lower midrange i5s around $200 after all, with 10-core i9s arriving any minute. That would never have happened in such a short time if it wasn't for Ryzen.
The engineers I talked to told me to expect those 6-core [i7] processors to be around $500. They also told me that Intel can technically add as many cores as they want, which, honestly, they should at this point. I would say 16 of those Skylake cores would be pretty nice, but they are already working on newer architectures, so yeah...

Well, Intel didn't lower the price but actually gave more cores, because the Core i7-7700K vs. the Ryzen 7 1800X comparison didn't look that good.
No, Intel lowered the price. And, um, Ryzen launched in 2017; I was told about those 6-core parts back in 2016. AMD launched extremely efficient 8-core products and Intel had to lower the price, otherwise there was no reason to go Intel. Intel was going to price those 6-core processors at $500+ if Ryzen didn't exist.
 