Wednesday, April 28th 2021
AMD Zen 5 "Strix Point" Processors Rumored To Feature big.LITTLE Core Design
AMD launched the 7 nm Zen 3 microarchitecture, which powers Ryzen 5000 processors, in late 2020. We expect AMD to follow it up with Zen 3+ on 6 nm later this year and 5 nm Zen 4 in 2022. We are now beginning to receive the first rumors about the 3 nm Zen 5 architecture, which is expected to launch in 2024 in Ryzen 8000 series products. The architecture is reportedly known as "Strix Point" and will be manufactured on TSMC's 3 nm node with a big.LITTLE core design similar to the upcoming Intel Alder Lake and the Apple M1. The Strix Point lineup will consist exclusively of APUs and could feature up to 8 high-performance and 4 low-power cores, which would be fewer than Intel plans to offer with Alder Lake. AMD has allegedly already set graphics performance targets for the processors, which will also reportedly bring significant changes to the memory subsystem. As with any rumor about a product three years from launch, take these claims with a healthy dose of skepticism.
Sources:
MEOPC, Video Cardz
Maybe in a parallel universe where Apple's own silicon isn't as strong as it is, or where the M1 wasn't ready, it could have happened and would have been interesting to see, but it would have been a stop-gap effort at most.
There was also Intel's resurrection with Core Duo, while AMD was on its downward spiral into Bulldozer and the ATI acquisition was hurting it too. But if it had been in the Pentium III/Athlon or Pentium 4/Athlon XP and 64 era, Apple would almost surely have gone with AMD. Even today Intel is not in a full NetBurst situation; those were hellish years for them.
When you need to predict the future, you use predictors based on statistics from the recent past, like branch predictors. For scheduling, I can imagine a solution based on both HW and SW. There would have to be some dedicated hardware on the CPU that collects statistics about program execution: for example, how much time is spent executing/emulating AVX, how much time is spent waiting for I/O or memory while the core is gobbling up power, or how much time is spent waiting because the other thread on the same core is using some shared resource. The scheduler would then use these statistics to determine whether the execution is optimal, and move the thread to another core if it isn't.
The executable code itself could contain some metadata, provided by the compiler or manually, for a whole DLL/library or more detailed, and the scheduler would use that data as a hint when picking the best core for that code.
As it's based on statistics, it would be called "AI scheduler", of course.
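To make the idea above concrete, here's a minimal sketch of that kind of statistics-driven core picker. Everything here is invented for illustration: the counter names, the thresholds, and the decision order are assumptions, not any real scheduler's logic.

```python
# Hypothetical sketch: pick a core class from per-thread execution
# statistics. All field names and thresholds are made up.

from dataclasses import dataclass

@dataclass
class ThreadStats:
    avx_time_frac: float        # fraction of time in wide-vector code
    io_wait_frac: float         # fraction of time stalled on I/O or memory
    smt_contention_frac: float  # fraction of time blocked by the sibling SMT thread

def pick_core(stats: ThreadStats) -> str:
    """Return 'big' or 'little' for the next scheduling interval."""
    # Heavy vector work strongly favors a big core with full-width units,
    # instead of emulating AVX slowly on a little core.
    if stats.avx_time_frac > 0.2:
        return "big"
    # A thread that mostly waits wastes a big core's power budget.
    if stats.io_wait_frac > 0.5:
        return "little"
    # Contention on shared SMT resources suggests moving the thread off;
    # here we park it on a little core to free the big one.
    if stats.smt_contention_frac > 0.3:
        return "little"
    return "big"

print(pick_core(ThreadStats(0.4, 0.1, 0.0)))  # vector-heavy thread
print(pick_core(ThreadStats(0.0, 0.8, 0.0)))  # I/O-bound thread
```

The compiler-provided metadata mentioned above would slot in as extra inputs to the same decision function, biasing it before any statistics have been collected.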
- The strix (plural striges or strixes), in the mythology of classical antiquity, was a bird of ill omen, the product of metamorphosis, that fed on human flesh and blood. It also referred to witches and related malevolent folkloric beings.
Sounds like AMD is about to get medieval on Intel's ass, lol.

I know certain game engines would completely fall apart on Bulldozer if the game wasn't made aware of the clustered nature of the threads. I'm sure something similar will have to be done with big/little cores, as you can't have a rendering thread, for example, go from a high-performance big core to a little one with half the performance without drastic performance issues.

Yeah, AMD could go lower for sure, but that's not how you run a company. Competition is weak, but performance is pretty much in line with price, especially when you consider that Intel has nothing in its stack that can do what AMD's top-end parts do; nothing about AMD's pricing is abusive. The same cannot be said for Intel over the years, with halo HEDT CPUs costing orders of magnitude more than the desktop units of the same family with absolutely no performance to justify it.
So yeah... AMD is being sensible.
What this really looks like is a 5 ms wait (for the hard drive to respond) and then 4096 bytes transferred (one sector of the hard drive loaded). A millisecond is very slow for a computer: at 4 GHz one cycle is 0.25 nanoseconds, and 5 ms is 5,000,000 nanoseconds, or 20 million cycles.
Even if it runs on a slow 200 MHz little core, this kind of background task will almost certainly be "run and sleep". Sleeping sooner (by executing on a big core) could very well be the more efficient decision.
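The arithmetic above, plus a toy "race to sleep" comparison, can be checked in a few lines. The power figures are pure assumptions (10 W active / 0.1 W idle for a big core, 1 W active / 0.05 W idle for a 200 MHz little core), invented just to show how a faster core finishing early and idling can come out ahead.

```python
# Back-of-the-envelope check of the numbers in the post above.
BIG_GHZ = 4.0
cycle_ns = 1 / BIG_GHZ            # 0.25 ns per cycle at 4 GHz
seek_ns = 5 * 1_000_000           # 5 ms = 5,000,000 ns
print(seek_ns / cycle_ns)         # -> 20000000.0 cycles spent waiting

# Toy "race to sleep": the same work (in cycles) on a big vs a little
# core, measured over a fixed 10 ms window. All wattages are invented.
WORK_CYCLES = 2_000_000

def energy_mj(freq_ghz, active_w, idle_w, window_s=0.01):
    """Millijoules to do WORK_CYCLES, then idle for the rest of the window."""
    busy_s = WORK_CYCLES / (freq_ghz * 1e9)   # must fit inside window_s
    return (active_w * busy_s + idle_w * (window_s - busy_s)) * 1000

big = energy_mj(4.0, 10.0, 0.1)    # finishes fast, sleeps for most of the window
little = energy_mj(0.2, 1.0, 0.05) # stays busy for the whole window
print(big, little)                  # big core comes out cheaper here
```

With these made-up numbers the big core wins, which is the point: "slower core" and "less energy" are not the same thing once sleep states enter the picture.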
If AMD adopt the same methods of implementation then the meme is relevant, but if their path differs to Intel's, then it's not.
So basically it's not what they do, but how they do it that matters.
Oh and welcome.
Anyway, does this story imply there will be no more 12- and 16-core desktop parts, since it says all Zen 5 products will be APUs?
And this is Zen 5, supposedly on 3 nm. At that point a 16-core APU will be nothing, particularly if it uses multiple chiplets.
ARM hardware is a lot more power efficient, so what if the OS could run on 15 W of ARM hardware while the x86 cores slept?
If you're doing render farms or other "big" tasks, a bigger core at a lower frequency (think EPYC) is the best bet. But if you're streaming data from a hard drive out of a NIC into the Internet... LITTLE cores probably win (very low CPU requirements). "Schedulers aren't smart enough" to make these decisions. Heck, I don't think anyone is really smart enough to figure out the problem right now.
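One crude way to frame the placement rule described above is compute intensity: how much work a job does per byte it moves. The function and threshold below are invented for illustration, not anyone's actual scheduling policy.

```python
# Toy placement rule (threshold and names invented): compute-dense jobs
# go to big cores, streaming jobs with little compute go to little cores.

def place(job_flops: float, bytes_moved: float) -> str:
    """Pick a core class from a job's compute-to-traffic ratio."""
    if bytes_moved == 0:
        return "big"                       # pure compute, nothing streamed
    intensity = job_flops / bytes_moved    # FLOPs per byte moved
    # A render-farm frame: enormous compute per byte of output.
    if intensity > 10:
        return "big"
    # Streaming disk -> NIC: almost no compute per byte.
    return "little"

print(place(job_flops=1e12, bytes_moved=1e7))  # render work
print(place(job_flops=1e6, bytes_moved=1e9))   # network streaming
```

Of course, real workloads mix phases and the ratio changes over time, which is exactly why a static rule like this isn't enough and the "nobody is smart enough yet" point stands.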