
AMD Could Release Next Generation EPYC CPUs with Four-Way SMT

As some have pointed out, IPC is a single-thread metric. What you probably meant is saturation of the core's resources, but it's important to understand that SMT, even in perfect conditions, never exceeds the performance of a single "optimal" thread. It's simply a way to let other threads utilize the resources the first thread leaves idle, scaling towards one "optimal" thread.

There are several factors that impact IPC. One is adding more execution resources (ALUs, FPUs, AGUs, etc.), which raises peak performance but can leave resources unsaturated. Another is front-end, latency and cache improvements, which improve the utilization of the execution resources you already have. Since SMT relies on exploiting idle resources in the CPU core for other threads, the ever-increasing efficiency of CPU architectures is actually making SMT less and less useful for generic tasks, as efficiency gains in the front-end and caches will ultimately consume the "gains" of SMT.

SMT was introduced at a time when single-core CPUs were mostly idle due to stalls in the CPU pipeline, and the cost of implementing SMT in silicon was minuscule. But these days, as the gains of SMT are shrinking and its security implications keep driving up the silicon cost, it's actually time to drop it, not extend it further with 4-way or even 8-way SMT. Today, SMT only really makes sense for server workloads where latency is irrelevant and total throughput of massive numbers of requests (or work items) is the primary goal. SMT is really a relic of the past, and 2020 is not the year to push it further.
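To make the "idle resources" point concrete, here's a minimal sketch (my own illustration, not a rigorous benchmark) of the two extremes. The compute-bound kernel keeps the core busy with a dependent chain, so an SMT sibling has little slack to exploit; the latency-bound one stalls on cache misses most of the time, which is exactly the slack SMT was designed to soak up. Run one copy alone, then two copies pinned to the SMT siblings of one core and compare throughput (the core numbering in the comment is an assumption).

```cpp
// Minimal sketch, not a rigorous benchmark: two kernels with very different
// stall behaviour. Compare one copy vs. two copies pinned to the SMT siblings
// of a single core, e.g. "taskset -c 0 ./a.out & taskset -c 4 ./a.out"
// (whether CPUs 0 and 4 are actually siblings depends on the machine).
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

// Compute-bound: a dependent chain of multiply-adds keeps the core busy,
// so a sibling thread finds few idle cycles to steal -> little SMT gain.
uint64_t compute_bound(uint64_t iters) {
    uint64_t x = 1;
    for (uint64_t i = 0; i < iters; ++i)
        x = x * 6364136223846793005ULL + 1442695040888963407ULL;
    return x;
}

// Latency-bound: chasing indices through a shuffled array misses cache
// constantly, leaving the core idle -> the classic case where SMT helps.
uint32_t latency_bound(const std::vector<uint32_t>& next, uint64_t iters) {
    uint32_t i = 0;
    for (uint64_t n = 0; n < iters; ++n)
        i = next[i];
    return i;
}

int main() {
    const size_t N = 1 << 24;                      // ~64 MB of indices, far beyond L3
    std::vector<uint32_t> next(N);
    std::iota(next.begin(), next.end(), 0u);
    std::shuffle(next.begin(), next.end(), std::mt19937{42});

    auto time = [](auto&& f) {
        auto t0 = std::chrono::steady_clock::now();
        volatile auto sink = f();                  // keep the result alive
        (void)sink;
        return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
    };

    std::printf("compute-bound: %.2f s\n", time([&] { return compute_bound(1'000'000'000); }));
    std::printf("latency-bound: %.2f s\n", time([&] { return latency_bound(next, 100'000'000); }));
    return 0;
}
```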

While future gains in CPU performance won't come close to the improvements we saw in the 80s and 90s, it's important to remember that the "stagnant" single-thread performance of the last ~4+ years is not due to any theoretical limit on IPC. Obviously we are now at a "clock wall" for the current type of semiconductors, but the primary reason for Intel's stagnant CPU selection is the node problems that caused two years of delays to Ice Lake (Sunny Cove), which they claim offers an 18% IPC gain. Both Intel and AMD have their next 2-3 architectures lined up, and theoretically it is absolutely possible to achieve ~50% better IPC over Skylake just by continuing to add more execution resources, improving caches, reducing latency and improving the front-end.

But even beyond that, single-thread performance will not hit a wall any time soon. Quite the opposite: we are now on the verge of the largest single-thread gain since the 90s. Since the Pentium (1993), x86 CPUs have become increasingly superscalar, which obviously does wonders for peak performance, but it also keeps widening the gap between minimum/average and peak performance, as the CPU becomes ever more dependent on the code keeping its resources saturated. As anyone familiar with machine code knows, there are two major causes of this lack of saturation: cache misses and branch mispredictions. Optimizing for cache misses can be done fairly efficiently, but branch mispredictions are harder to deal with. Largely it's about removing bloat, but you will usually still have enough of it left to hold back performance.

In the greater scope of even a single function, most branching only has local effects, but the CPU can't know that, so on a branch misprediction it has to flush the pipeline even if some of the in-flight calculations may still be "good". This is because a lot of context is lost between your high-level code and the machine code, and even the best prediction models will only get you so far without some extra "help". I know Intel is researching a solution to this problem where these dependencies between branches are made explicit in the machine code (e.g. this branch only affects this code over here, not the bigger flow of the program). I believe they call it "threadlets" or something similar, and it would probably be done by having chains of instructions that are independent of branching in other chains, like a sort of "thread" that only exists virtually for a few dozen instructions. While this would at least require recompiling software, it would greatly improve the CPU front-end's ability to reason about true dependencies between calculations, instead of having to assume the pipeline needs to be flushed. Single-thread gains of 2-3x should not be unreasonable. While what I'm describing here may seem out of scope, it's actually not, as this would practically eliminate SMT. But don't expect it in shipping products yet; it's still experimental, and I would expect it 5-10 years down the road.
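To illustrate how much a hot loop can be held back by mispredictions alone, here is a hedged little sketch (timings and exact codegen will vary, and an aggressively vectorizing compiler can blur the gap): summing values above a threshold over random data mispredicts roughly half the time, while the same reduction written as a data dependency gives the predictor nothing to get wrong.

```cpp
// Hedged sketch of branch-misprediction cost in a hot loop.
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    std::vector<uint8_t> data(1u << 26);           // 64 MB of random bytes
    std::mt19937 rng{123};
    for (auto& v : data) v = static_cast<uint8_t>(rng());

    auto time = [](auto&& f) {
        auto t0 = std::chrono::steady_clock::now();
        volatile uint64_t sink = f();
        (void)sink;
        return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
    };

    // Branchy: the 'if' is taken essentially at random -> ~50% mispredictions.
    auto branchy = [&] {
        uint64_t sum = 0;
        for (uint8_t v : data)
            if (v >= 128) sum += v;
        return sum;
    };

    // Branchless: a data dependency (mask / conditional move) instead of a
    // control dependency -- nothing for the branch predictor to get wrong.
    auto branchless = [&] {
        uint64_t sum = 0;
        for (uint8_t v : data)
            sum += (v >= 128) ? v : 0;             // compilers typically emit cmov/masking here
        return sum;
    };

    std::printf("branchy:    %.2f s\n", time(branchy));
    std::printf("branchless: %.2f s\n", time(branchless));
    // Sorting 'data' first makes the branchy version fast again, because the
    // branch becomes almost perfectly predictable.
    return 0;
}
```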


Actually, you've got this the wrong way around. In ideal conditions SMT would not be needed at all; the only reason there are gains from SMT is that threads don't saturate the CPU enough. When you have ideal software, as you said, branch- and cache-optimized, it will saturate the CPU very well.

SMT is mostly useful for server workloads where you have an "endless" supply of "work chunks" that can be processed in parallel, which is very typical for a server running worker threads for Java code or scripts. This is code which can't be cache-optimized and is heavily abstracted, so the CPU will more or less constantly stall. This is where 4-way and even 8-way SMT makes sense (as in POWER CPUs), and even then the execution part of the CPU will be largely idle; the bottleneck will be the front-end and the caches, otherwise you could make a 32-way SMT CPU and keep scaling.
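For the curious, the workload shape I mean looks roughly like this (a hedged sketch; the "request handler" is just a stand-in): an open-ended stream of small, independent work items fanned out across every hardware thread, where per-item latency is irrelevant and only items per second matters.

```cpp
// Hedged sketch of a throughput-oriented worker pool: an "endless" queue of
// independent work items spread over all logical CPUs, SMT siblings included.
#include <algorithm>
#include <atomic>
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const unsigned hw = std::max(1u, std::thread::hardware_concurrency()); // logical CPUs, incl. SMT
    std::atomic<uint64_t> next{0};
    const uint64_t total = 1'000'000;

    auto worker = [&] {
        for (;;) {
            const uint64_t id = next.fetch_add(1);
            if (id >= total) return;
            // Stand-in for a real request handler: abstracted, pointer-heavy
            // code that stalls a lot and therefore shares a core gracefully.
            volatile uint64_t x = id;
            for (int i = 0; i < 1000; ++i) x = x * 2654435761ULL + i;
        }
    };

    std::vector<std::thread> pool;
    for (unsigned i = 0; i < hw; ++i) pool.emplace_back(worker);
    for (auto& t : pool) t.join();
    std::printf("processed %llu items on %u hardware threads\n",
                static_cast<unsigned long long>(total), hw);
    return 0;
}
```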


Oh, there can be so many, too many to discuss here. It depends on how many threads you spawn, how they are synchronized, and of course how your application is "disturbed" by background threads.
You know, I half agree but disagree with your initial standpoint. This ideal software you speak of, do you have an example? Because I would be surprised. CPUs are not fixed-function, they have built-in coprocessors and on-die coprocessors, and modern code is also broken up into micro-ops, so I can't imagine that's an easy bit of code to know how to write, never mind actually write.
 
You know, I half agree but disagree with your initial standpoint. This ideal software you speak of, do you have an example? Because I would be surprised. CPUs are not fixed-function, they have built-in coprocessors and on-die coprocessors, and modern code is also broken up into micro-ops, so I can't imagine that's an easy bit of code to know how to write, never mind actually write.
All modern x86 microarchitectures convert the x86 ISA into "RISC-like" micro-operations. These are not only Intel- or AMD-specific, but microarchitecture-specific, or even specific down to the die configuration. Exposing them so you could write targeted code is not feasible. So it's up to the CPU front-end to convert the x86 machine code into the native micro-operations, assign registers, etc.
 
All modern x86 microarchitectures convert the x86 ISA into "RISC-like" micro-operations. These are not only Intel- or AMD-specific, but microarchitecture-specific, or even specific down to the die configuration. Exposing them so you could write targeted code is not feasible. So it's up to the CPU front-end to convert the x86 machine code into the native micro-operations, assign registers, etc.
Yes, exactly where the optimisation is done.
And exactly the part that makes your point moot.
With a resource, any resource, it only gets used as much as it's regulated to at any given moment. No code uses all of a CPU's possible circuit-level compute power; if that were allowed to happen, modern cores would not last long or be efficient.
Hence why power viruses are a thing, and even then few of them really max out a CPU's full spectrum of processing abilities.

My point was and is that resource use is paramount. No code is perfect, none, and yet Intel and AMD have to make their imperfect silicon run all sorts of code optimally for many uses.
 
Yes, exactly where the optimisation is done.
And exactly the part that makes your point moot.
With a resource, any resource, it only gets used as much as it's regulated to at any given moment. No code uses all of a CPU's possible circuit-level compute power; if that were allowed to happen, modern cores would not last long or be efficient.
Hence why power viruses are a thing, and even then few of them really max out a CPU's full spectrum of processing abilities.
Do you mean my point about the irrelevance of SMT?
Well, 100% of it can never be utilized, due to power gating and resources sharing execution ports.
But SMT is mostly about utilizing the idle cycles caused by cache misses and branch mispredictions, which leave parts of the core, or the whole core, idle.

My point was and is that resource use is paramount. No code is perfect, none, and yet Intel and AMD have to make their imperfect silicon run all sorts of code optimally for many uses.
"Optimal" code is about implementing an algorithm solving a particular task in the most efficient way, not about utilizing every possible CPU resource 100% every clock cycle.
 
Do you mean my point about the irrelevance of SMT?
Well, 100% of it can never be utilized, due to power gating and resources sharing execution ports.
But SMT is mostly about utilizing the idle cycles caused by cache misses and branch mispredictions, which leave parts of the core, or the whole core, idle.


"Optimal" code is about implementing an algorithm solving a particular task in the most efficient way, not about utilizing every possible CPU resource 100% every clock cycle.
But no modern PC is made to, or actually does, run one piece of code like that, aside from supercomputers. A modern PC has many processes in flight with multiple threads each, over a thousand threads on a typical PC; that's where SMT and HTT make their money, in optimizing core use.
 
You answered your own question. They aren't in the desktop consumer scene as they don't make desktop consumer products.
Not really, I asked what they do... :wtf:
 
You know, I half agree but disagree with your initial standpoint. This ideal software you speak of, do you have an example? Because I would be surprised. CPUs are not fixed-function, they have built-in coprocessors and on-die coprocessors, and modern code is also broken up into micro-ops, so I can't imagine that's an easy bit of code to know how to write, never mind actually write.
Linpack is known to perform the same or even worse with SMT. It is far from a perfect load, but good enough to negate any potential improvement from SMT.
 
Linpack is known to perform the same or even worse with SMT. It is far from a perfect load, but good enough to negate any potential improvement from SMT.
Is it using SMT, and is it optimized? If an application can't use more threads and cores, then of course it will work less efficiently and won't scale with SMT.
 
Is it using SMT, and is it optimized? If an application can't use more threads and cores, then of course it will work less efficiently and won't scale with SMT.
It is optimized. The problem is not with threads. One thread of Linpack running on one core is the same speed as or faster than two threads running on the same core with SMT enabled.

The idea of SMT is that this is done in hardware and you do not optimize for it; there are not many generic ways of doing that anyway. The main optimization on the software side is awareness at the operating-system level (the scheduler) of which cores are physical and which are logical. Threads are ideally assigned to physical cores first, then logical ones, for best results.
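A hedged sketch of what that "physical cores first" placement looks like if done by hand rather than left to the scheduler (Linux/glibc only; the logical-CPU-to-core mapping below is an assumption, real code should read it from /sys/devices/system/cpu/cpuN/topology/):

```cpp
// Hedged sketch: pin worker threads to one logical CPU per physical core
// before touching the SMT siblings. Linux/glibc only (pthread_setaffinity_np).
#include <pthread.h>
#include <sched.h>
#include <cstdio>
#include <thread>
#include <vector>

void pin_to(std::thread& t, int logical_cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(logical_cpu, &set);
    pthread_setaffinity_np(t.native_handle(), sizeof(set), &set);
}

int main() {
    // Assumed 4C/8T layout: logical CPUs 0-3 are the first hardware thread of
    // each physical core, 4-7 are their SMT siblings (verify on real hardware).
    const std::vector<int> physical_first = {0, 1, 2, 3};

    std::vector<std::thread> workers;
    for (int cpu : physical_first) {
        workers.emplace_back([cpu] {
            std::printf("worker pinned to logical CPU %d\n", cpu);
            /* ... actual work ... */
        });
        pin_to(workers.back(), cpu);               // fill physical cores first...
    }
    for (auto& t : workers) t.join();
    // ...and only spill onto logical CPUs 4-7 (the siblings) once every
    // physical core already has a runnable thread.
    return 0;
}
```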
 
It is optimized. The problem is not with threads. One thread of Linpack running on one core is the same speed as or faster than two threads running on the same core with SMT enabled.

The idea of SMT is that this is done in hardware and you do not optimize for it; there are not many generic ways of doing that anyway. The main optimization on the software side is awareness at the operating-system level (the scheduler) of which cores are physical and which are logical. Threads are ideally assigned to physical cores first, then logical ones, for best results.
What I know is that Linpack for the AMD 3000 series (for instance, and other Ryzen processors) uses OpenMP, which is by no means optimized for AMD. An optimized compiler and libraries with full support for the new Ryzen architecture are also required. So there is still a lot to improve, and I'm not talking about the hardware now.
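Independent of which Linpack/BLAS build is used, the OpenMP runtime itself can at least be told to put one thread on each physical core instead of doubling up on SMT siblings. A minimal probe (standard OpenMP 4.5 calls, nothing Ryzen-specific; build with -fopenmp) to see where threads actually land:

```cpp
// Run e.g. as: OMP_NUM_THREADS=8 OMP_PLACES=cores OMP_PROC_BIND=close ./a.out
// "OMP_PLACES=cores" gives each OpenMP thread a whole physical core.
#include <cstdio>
#include <omp.h>

int main() {
    #pragma omp parallel
    {
        #pragma omp critical
        std::printf("thread %d of %d is bound to place %d of %d\n",
                    omp_get_thread_num(), omp_get_num_threads(),
                    omp_get_place_num(), omp_get_num_places());
    }
    return 0;
}
```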
 
Failed products don't demonstrate mastery...

Oh. My god.

No one was even trying to...

that wasn't even the...

why did I just read? This is worse than the bulldozer core "debate"

Screw it, you are all frog-god food now. So has decreed the giant green one. Blessed be his slime. I wash my hands of this.
 
Quad SMT would help a lot in the server market and help AMD gain some ground back, hopefully re-engaging with large partners like Dell.

As for the naming, I would have found something more worthy-sounding than those city names.
 
Quad SMT would help a lot in the server market and help AMD gain some ground back, hopefully re-engaging with large partners like Dell.

As for the naming, I would have found something more worthy-sounding than those city names.
They're internal code names, not marketing names. What do they matter? Whether it's Rome, Milan, Turin or... Cinque Terre? or whatever - they're all EPYC + a 4-digit identifier when they go on sale. The generation is indicated within those four digits, so the code names are never officially used for marketing purposes. That enthusiasts adopt them as shorthand is our problem, not AMD's.
 
This is very interesting indeed. Scheduling for this monstrous Zen 3 will have to be PERFECT. Personally I don't see this coming to desktop CPUs, because the Windows scheduler would have a nightmare with it. But you never know. AMD is all about innovation and firsts. Can't wait for more official details from AMD to come out.
 
Speaking of IBM, they are in hot water for age discrimination. They wholly deny it, but gdamn, they are full of it. Everyone knows once you get old they cut you. Old techs cost more than young techs.
Oh. My god.

No one was even trying to...

that wasn't even the...

why did I just read? This is worse than the bulldozer core "debate"

Screw it, you are all frog-god food now. So has decreed the giant green one. Blessed be his slime. I wash my hands of this.

Haha, get a grip, man. Whoever the initial dolt who started this was, they created this context by trolling and stating that AMD is only catching up to Intel with 4-way SMT. Maybe you should read the earlier posts. My point is that being first at something you've done ludicrously badly at isn't something to brag about, in context. It's kinda ironic... considering the world runs on AMD64 and not the crap Intel had.
 
This is great and all, but AMD, could we get some more love on OpenGL and Vulkan (yes, your own API), please?
 
This is great and all, but AMD, could we get some more love on OpenGL and Vulkan (yes, your own API), please?

OpenGL under Windows I get has always been a problem (which somehow is fine under Linux), but what issue do you have with Vulkan?
 
Haha, get a grip, man. Whoever the initial dolt who started this was, they created this context by trolling and stating that AMD is only catching up to Intel with 4-way SMT. Maybe you should read the earlier posts.

You shouldn't take troll posts so seriously, dude. My grip is fine. I followed the context fine. No one else seemed to, and the whole thing left me feeling mentally ill.

The dam just broke on your post, it wasn't just you. Doesn't matter though, the toad is always hungry.
 
OpenGL under Windows I get has always been a problem (which somehow is fine under Linux), but what issue do you have with Vulkan?
It's falling behind Intel and Nvidia.
The Linux driver is a lot better for OpenGL, but it's also very unstable.
 
It's falling behind Intel and Nvidia.
The Linux driver is a lot better for OpenGL, but it's also very unstable.

How so? Feature parity is generally fine, and AMD is still generally more performant under Vulkan than Nvidia. I don't mean to be harsh, but it feels like a very odd whinge. As for Linux OpenGL stability, AMDGPU has been much better and more stable than Nouveau or Nvidia's binary driver.
 
How so? Feature parity is generally fine, and AMD is still generally more performant under Vulkan than Nvidia. I don't mean to be harsh, but it feels like a very odd whinge. As for Linux OpenGL stability, AMDGPU has been much better and more stable than Nouveau or Nvidia's binary driver.
I'm a tester for the yuzu emulator, so I use it on my 270X. The OpenGL driver on Windows is stable, but so slow that an Intel IGP is faster than a Navi 5700 XT. The Vulkan driver on Windows is "fine" (as fast as Nvidia's OpenGL one, which is great considering the Switch is an Nvidia tablet), but AMD has already said they won't implement some extensions "because it's too much work", extensions that already work on Intel and Nvidia. The OpenGL Mesa (Linux) driver is faster, a lot faster, but it seems to only be stable on GCN2 and up; GCN1 is just a mess on both radeonsi and amdgpu: it eats RAM, crashes easily, and has geometry glitches everywhere. I haven't tested the RADV Vulkan driver yet.
All the money seems to be on Navi, but it also failed to give us a decent OpenGL driver.
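For what it's worth, which extensions a given driver exposes is easy to check at runtime rather than taking a vendor's word for it. A hedged sketch (the extension name is only an example; substitute whichever one the emulator actually needs):

```cpp
// Hedged sketch: ask each Vulkan device whether it exposes a given extension.
#include <cstdio>
#include <cstring>
#include <vector>
#include <vulkan/vulkan.h>

int main() {
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_1;
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t gpuCount = 0;
    vkEnumeratePhysicalDevices(instance, &gpuCount, nullptr);
    std::vector<VkPhysicalDevice> gpus(gpuCount);
    vkEnumeratePhysicalDevices(instance, &gpuCount, gpus.data());

    const char* wanted = "VK_EXT_custom_border_color";   // example extension only
    for (VkPhysicalDevice gpu : gpus) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpu, &props);

        uint32_t extCount = 0;
        vkEnumerateDeviceExtensionProperties(gpu, nullptr, &extCount, nullptr);
        std::vector<VkExtensionProperties> exts(extCount);
        vkEnumerateDeviceExtensionProperties(gpu, nullptr, &extCount, exts.data());

        bool found = false;
        for (const auto& e : exts)
            if (std::strcmp(e.extensionName, wanted) == 0) { found = true; break; }

        std::printf("%s: %s %s\n", props.deviceName, wanted,
                    found ? "supported" : "NOT supported");
    }
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```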
 
I am not sure about the density argument either, as the 2600K had 995 million transistors vs. the FX-8350 with 1200 million.

I think I recall someone claiming that Anandtech's die area figure is too high, at least when it comes to Piledriver. I vaguely recall that the actual die size for Piledriver was, according to this person, just below 300 mm². But the gist appears to be that AMD/GF didn't beat Intel in the Sandy Bridge era in terms of density. If the claim that Piledriver was below 300 mm² is true, then it looks like Piledriver on GF's 32 nm SOI was pretty close to Intel's Sandy Bridge-E 4C.
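As a rough sanity check (the die areas here are my assumptions from memory, not something read off the attached chart): the 4C Sandy Bridge die is usually quoted at ~216 mm², which gives roughly 995 M / 216 mm² ≈ 4.6 MTr/mm², while the 8-core Piledriver (Vishera) die at the commonly cited ~315 mm² works out to 1200 M / 315 mm² ≈ 3.8 MTr/mm², or ~4.0 MTr/mm² if the sub-300 mm² claim is right. Either way the two land in the same ballpark, with Intel still slightly denser.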

[Attached chart: density.png]
 