Monday, May 20th 2024

AMD to Present "Zen 5" Microarchitecture Deep-dive at Hot Chips 2024

AMD is slated to deliver a "Zen 5" microarchitecture deep-dive at the Hot Chips 2024 conference on August 25. The company is widely expected to either unveil or announce its next-generation processors based on the architecture in its 2024 Computex keynote on June 3, so it remains to be seen whether the deep-dive follows a product launch or predates it. Either way, Hot Chips talks tend to be significantly more detailed than the product-launch pre-briefs we get, so we hope to learn a lot more about the architecture.

A lot rides on "Zen 5" delivering a double-digit percentage IPC increase over its predecessor, while also introducing new microarchitecture-level features and leveraging new TSMC foundry processes, to produce processors competitive with Intel's. Unlike Intel, which has implemented hybrid CPU cores across its product stack, AMD continues to make traditional multicore processors, and refuses to label even the chips that contain regular and high-density versions of its "Zen 4" cores as "hybrid."
Sources: Hot Chips, Wccftech

14 Comments on AMD to Present "Zen 5" Microarchitecture Deep-dive at Hot Chips 2024

#1
Chaitanya
Unlike Intel, which has implemented hybrid CPU cores across its product stack, AMD continues to make traditional multicore processors, and refuses to level even the chips that contain regular and high-density versions of its "Zen 4" cores as "hybrid."
AMD does sell chips with a mix of full-fat and efficiency-oriented "c" cores.
Posted on Reply
#2
azrael
Maybe it's just me but I think naming this conference "Hot Chips" is a bit unfortunate. And yes, I know it's been around for years. I also cannot come up with a better name right now, not that it matters.
Posted on Reply
#3
Vya Domus
Chaitanya: AMD does sell chips with mix of full fat and efficiency oriented C chips.
As far as I can tell the Zen 4c cores just have lower clocks and some cache removed, they're still very much Zen 4 cores with nearly identical IPC.
Posted on Reply
#4
Panther_Seraphin
Chaitanya: AMD does sell chips with mix of full fat and efficiency oriented C chips.
Not in the same way as Intel does.

AMD's chips use the same microarchitecture, with just a physical space optimisation and a reduction in cache.
Intel uses a different architecture for the E-cores and P-cores, which did, and still can, lead to weird interactions with certain software.
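To make that distinction visible from software, here is a rough sketch (my own illustration, not something from the thread; assumes GCC or Clang on x86) that asks CPUID whether the part is hybrid and, if so, which core type the current thread happens to be running on. As far as I know, AMD's mixed Zen 4/Zen 4c parts don't set the hybrid bit at all, which is rather the point of the comparison above.

/* Illustrative sketch: detect an Intel hybrid part via CPUID and report the
 * type of the core this thread is currently scheduled on.
 * Leaf 0x1A reports 0x40 for a Core ("P") core and 0x20 for an Atom ("E") core. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned a, b, c, d;

    /* CPUID.(EAX=7, ECX=0):EDX bit 15 is the "hybrid part" flag. */
    if (!__get_cpuid_count(7, 0, &a, &b, &c, &d) || !(d & (1u << 15))) {
        puts("Not a hybrid CPU (mixed Zen 4/4c parts land here too).");
        return 0;
    }

    /* CPUID leaf 0x1A: EAX[31:24] = core type of the core we are running on.
     * The answer can change if the scheduler migrates the thread. */
    if (__get_cpuid_count(0x1A, 0, &a, &b, &c, &d)) {
        unsigned type = a >> 24;
        printf("Running on a %s core (type 0x%02X)\n",
               type == 0x40 ? "P (Core)" : type == 0x20 ? "E (Atom)" : "unknown",
               type);
    }
    return 0;
}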
Posted on Reply
#5
Daven
“…AMD continues to make traditional multicore processors, and refuses to level even the chips…”

I think you meant ‘label’ not ‘level’.
azrael: Maybe it's just me but I think naming this conference "Hot Chips" is a bit unfortunate. And yes, I know it's been around for years. I also cannot come up with a better name right now, not that it matters.
How about ‘Chips that Matter’?
Posted on Reply
#6
azrael
Panther_Seraphin: Not in the same way as Intel does.

AMD chips are the same microarchitecture with just a physical space optimisation and reduction of cache.
Intel is a different archtetcure between E core and P Core which did and can lead to weird interactions with certain software.
Wasn't one of the reasons for Windows 11 that a new scheduler was needed that could handle Intel's new heterogeneous architecture?
Daven: How about ‘Chips that Matter’?
I see what you did there... :p
Posted on Reply
#7
Panther_Seraphin
azrael: Wasn't one of the reasons for Windows 11 that a new scheduler was needed that could handle Intel's new heterogeneous architecture?
Not directly, but things like VMware Workstation have a real tendency to crash out VMs due to the way it tries to shift running VMs from P- to E-cores and vice versa.

The scheduler was there so that things that need performance (games, video encoding, etc.) get pushed onto the P-cores, while background tasks or anything that doesn't need that performance get pushed to the E-cores, with the P-cores powered down aggressively to save power. Most of this was meant to be done by the hardware scheduler built into the CPU itself; all Windows was doing was supplying info about the app and whether it was a background task.
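To make that concrete, here is a rough user-space sketch (illustrative only; assumes Windows 10/11, and it is only a scheduling hint, not a hard pin) that uses the CPU Sets API to find the highest-performance core class and ask the scheduler to prefer it for the current thread. On a non-hybrid part every core reports the same efficiency class, so the sketch simply selects all of them.

/* Illustrative sketch: enumerate CPU sets by efficiency class and ask Windows
 * to prefer the highest-performance class for the current thread. On Intel
 * hybrid parts the P-cores report a higher EfficiencyClass than the E-cores. */
#define _WIN32_WINNT 0x0A00   /* CPU Sets API requires Windows 10 headers */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    ULONG len = 0;
    GetSystemCpuSetInformation(NULL, 0, &len, GetCurrentProcess(), 0);
    PSYSTEM_CPU_SET_INFORMATION info = malloc(len);
    if (!info || !GetSystemCpuSetInformation(info, len, &len, GetCurrentProcess(), 0))
        return 1;

    ULONG ids[256];
    ULONG count = 0;
    BYTE best = 0;

    /* First pass: find the highest efficiency class present (the "P" cores). */
    for (PSYSTEM_CPU_SET_INFORMATION p = info; (BYTE *)p < (BYTE *)info + len;
         p = (PSYSTEM_CPU_SET_INFORMATION)((BYTE *)p + p->Size))
        if (p->CpuSet.EfficiencyClass > best)
            best = p->CpuSet.EfficiencyClass;

    /* Second pass: collect the CPU-set IDs that belong to that class. */
    for (PSYSTEM_CPU_SET_INFORMATION p = info; (BYTE *)p < (BYTE *)info + len;
         p = (PSYSTEM_CPU_SET_INFORMATION)((BYTE *)p + p->Size))
        if (p->CpuSet.EfficiencyClass == best && count < 256)
            ids[count++] = p->CpuSet.Id;

    printf("%lu CPU sets in the fastest class (class %u)\n", count, (unsigned)best);

    /* Soft affinity: a hint the scheduler can still override when migrating. */
    SetThreadSelectedCpuSets(GetCurrentThread(), ids, count);
    free(info);
    return 0;
}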
Posted on Reply
#8
Caring1
AMD is hoping consumers will be like Seagulls all over these Hot Chips.
Posted on Reply
#9
Panther_Seraphin
As long as Zen 5 steps up on the IO die, especially the memory controller aspect, it'll be a hell of a step up.
Posted on Reply
#10
Minus Infinity
Panther_Seraphin: As long as Zen 5 steps up on the IO die especially the memory controller aspect itll be a hell of a step up.
The IO die is basically the same as Zen 4's; it just supports faster memory out of the box. You'll have to wait for Zen 6, apparently.
Posted on Reply
#11
JWNoctis
Minus Infinity: IO die is basically same as for Zen 4, just supports faster memory out of the box. You'll have to wait for Zen 6 apparently.
Not good if confirmed, given that the IO die - or more specifically the IF infrastructure between the IOD and the CCD, as well as the internal data path of the IOD - caps maximum memory bandwidth. I think the theoretical maximum is 64 GB/s at 2000 MHz FCLK, scaling proportionally with FCLK, no matter how fast the memory is running on the memory controller side.

This - sequential RAM read bandwidth - matters for AI workloads, which are apparently the newest thing everyone is capitalizing on. Lunar Lake is apparently going after that with its LPDDR5X-on-package design.

EDIT: To illustrate the problem, my current system does LLM inference on the CPU at an indicated 60 GB/s, while the system I upgraded from, an AMD Cezanne-based setup with DDR4-3200, did it at around 43 GB/s. The performance improvement is noticeable, but was at first less than expected. If I set my RAM to JEDEC speed with the default FCLK (DDR5-4800, 1600 MHz FCLK, same as the Cezanne), there's ~zero improvement.

For every token generated, an LLM needs to iterate through the entire set of active model weights once, and those are several GB minimum for models of useful capability. It's going to be a problem if powerful client-side AI ever works out for the general consumer.

Not all AI workloads are LLM, but LLM and Transformer-based models in other modalities are the stars of the current AI boom.
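To put rough numbers on it, here is a back-of-the-envelope sketch; the 32 bytes per FCLK cycle read width is the commonly cited figure for the CCD-to-IOD link, and the ~4 GB of quantized weights is an assumption for a small 7B-class model, not a measurement.

/* Back-of-the-envelope sketch (assumed numbers, not measurements): if token
 * generation is memory-bandwidth bound, tokens per second is roughly the
 * effective read bandwidth divided by the bytes of weights touched per token. */
#include <stdio.h>

int main(void)
{
    double fclk_ghz = 2.0;                  /* 2000 MHz FCLK                     */
    double fabric_bw_gbs = 32.0 * fclk_ghz; /* ~64 GB/s theoretical fabric cap   */
    double measured_bw_gbs = 60.0;          /* the indicated figure quoted above */
    double weights_gb = 4.0;                /* assumed ~7B model, ~4-5 bit quant */

    printf("Fabric cap:          %.0f GB/s\n", fabric_bw_gbs);
    printf("Token rate cap:      %.1f tokens/s\n", fabric_bw_gbs / weights_gb);
    printf("At measured 60 GB/s: %.1f tokens/s\n", measured_bw_gbs / weights_gb);
    return 0;
}

At those assumed figures the fabric cap works out to roughly 16 tokens/s regardless of how fast the DRAM itself could go, which is the point about the IOD being the bottleneck.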
Posted on Reply
#12
ARF
azrael: Maybe it's just me but I think naming this conference "Hot Chips" is a bit unfortunate. And yes, I know it's been around for years. I also cannot come up with a better name right now, not that it matters.
Unfortunate is not the right word. This is a natural consequence of natural processes - Moore's Law is dead, process optimisation has stagnated, and manufacturing costs are increasing exponentially as a result - which means that at some point in the future there won't be new chips at all. The question is when.

A better name would be "Innovation & Chips", "New Chips", "Modern Chips", "Fast Chips", or whatever else, but not literally "hot".
Posted on Reply
#13
Launcestonian
Can't wait to see benchmarks of this new range of chips; they look attractive on paper, but reality is another kettle of fish, as usual.
Posted on Reply
#14
Minus Infinity
ARF: Unfortunate is not the right word. This is a natural consequence of natural processes - Moore's law is dead, processes optimisation stagnation, and corresponding exponential manufacturing costs increases, which means in the future there won't be new chips, at all. The question is when.

A better name would be "Innovation & Chips", "New Chips", "Modern Chips", "Fast Chips", or whatever else, but not literally "hot".
Given it's all about AI now, I'm surprised it hasn't been renamed to Smart or Intelligent Chips.

We have a brand of beer in Australia called Minimum Chips, which I quite like.
Posted on Reply