Tuesday, April 16th 2019
AMD Zen3 to Leverage 7nm+ EUV For 20% Transistor Density Increase
AMD's "Zen 3" microarchitecture could be designed for the enhanced 7 nm+ EUV (extreme ultraviolet) silicon fabrication node at TSMC, which promises a significant 20 percent increase in transistor density compared to the 7 nm DUV (deep ultraviolet) node on which its "Zen 2" processors are being built. The node is also expected to reduce power consumption by up to 10 percent at the same operational load. In a late-2018 interview, CTO Mark Papermaster stated AMD's design goal with "Zen 3" would be to prioritize energy efficiency, and that it would present "modest" performance improvements (read: IPC improvements) over "Zen 2." AMD made it clear that it won't stretch 7 nm DUV across more than one microarchitecture (Zen 2), and that "Zen 3" will debut in 2020 on 7 nm+ EUV.
Source:
PCGamesN
90 Comments on AMD Zen3 to Leverage 7nm+ EUV For 20% Transistor Density Increase
But I don't get why so many people are fixated on DDR5, is there really a rush? CPU vendors will make the switch when it's needed and it's ready, not before. And based on the information I've seen, DDR5 doesn't look so good yet in terms of latency.
For mainstream users, dual-channel DDR4-2666 is plenty, and if you're a content creator you can always go for a quad-channel HEDT configuration. Memory bandwidth is primarily a bottleneck for heavy server loads, which is why the upcoming Ice Lake-SP will feature 8 memory channels per socket. Higher memory bandwidth only really helps if a workload is actually bottlenecked by it, and over the past decade memory bandwidth has grown much faster than core speed, so generally you need heavy multithreading to be limited by memory bandwidth. As of now, DDR4 officially goes up to 3200 MHz, but I don't know if it can be expanded much further. Actually, probably not.
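The back-of-envelope math behind those channel counts is simple. As a sketch (the function and figures below are my own illustration, not from the post), theoretical peak bandwidth is channels × 8 bytes per transfer × transfer rate:

```python
def peak_bandwidth_gbs(channels: int, mts: int) -> float:
    """Theoretical peak memory bandwidth in GB/s.

    Each DDR4 channel is 64 bits (8 bytes) wide; mts is the
    transfer rate in MT/s (the "MHz" figure quoted for DDR4).
    """
    return channels * 8 * mts * 1e6 / 1e9

print(peak_bandwidth_gbs(2, 2666))  # mainstream dual channel: ~42.7 GB/s
print(peak_bandwidth_gbs(4, 2666))  # HEDT quad channel:       ~85.3 GB/s
print(peak_bandwidth_gbs(8, 2666))  # 8-channel server socket: ~170.6 GB/s
```

This is a theoretical ceiling; real workloads see less due to refresh cycles, page misses and controller overhead, but it shows why an 8-channel server socket has roughly 4× the headroom of a desktop board at the same memory speed.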
The primary motivation was hardware. PPC promised to replace x86, but failed to do so. Apple needed an architecture capable of scaling from a low-power laptop to a high-end workstation, and only x86 could do that.
In terms of software, OS X has no relation to Windows or its ecosystem at all. The Darwin kernel is a mix of the Mach microkernel and BSD Unix, and largely relies on the BSD ecosystem, APIs, compilers etc., though Apple is now increasingly drifting away from that and creating its own walled garden… Most of Apple's current software is already compatible with ARM, so that's not really a big concern.
But ARM is not close to performant enough to replace x86, and that won't change anytime soon. ARM devices like iPhones and iPads rely heavily on specialized instructions to accelerate workloads, and anything else will perform like crap. If Apple switches completely to ARM, it would either mean they focus solely on low-performance "web browsing devices," or a hornet's nest of software patchwork and specialized instructions to stay "competitive".
But on an interesting note regarding AMD: Zen was supposed to be the stepping stone up to K12, the new big CPU architecture from AMD, based on ARM. Meanwhile K12 is MIA, and probably already obsolete compared to other ARM designs. So is Zen 2, 3…5 the backup plan after the failure of K12? Is this why we don't hear anything about a Zen successor yet?
Zen 3 might be what I upgrade to next from my 2700X; it all really depends on price/performance, which matters more than just outright performance at like double the price, cough cough.
Apple's nearest competitor in this space is Apple itself, and that was last year's chip, so a two-year gap with a node shrink is definitely good enough IMO to get them into their own notebooks. From there it's only Apple's ambitions that stop them.
I didn't mean Apple software. They aren't making anything critical at the moment (mostly stuff you need for everyday computing, multimedia etc).
I meant 3rd party software, which is only available on x86.
Apple's shift to Intel CPUs made OS X a feasible alternative to Windows and Linux a decade ago.
If you're from the US, you may have different memories; Macs were popular in the US even before then.
Outside of the US they were very rare: too expensive for home users, too useless for most professional work.
x86 was an architecture people knew how to use. It was supported by important software. It had all the useful libraries.
Moving from Windows/Linux to Mac suddenly was very easy.
I was studying back then and I remember all the Mac Pro and MacBook boxes lying around.

But it can be frugal; that's the use case at the moment. And the cores are simpler, so you can put more of them in a package. In many ways ARM servers will target the same niche AMD does with EPYC. So if you believe EPYC makes sense, then ARM will as well.
Server scenarios built around complex, single-threaded tasks are out of ARM's reach, and that's unlikely to change; x86 will always have the edge there. Assuming the current Zen will be able to compete with Intel for at least 3 years, and that AMD may keep selling it for another 5... a Zen successor could be a fairly young project.
But who knows? The chiplet idea gives them some interesting possibilities. Can they put an ASIC or FPGA there? Could they open it up? Cooperate with clients who could customize the designs to fit their software?
PS5 is going to support RTRT. We don't know how. We know it's a custom Zen. Wouldn't it be nice if Sony could add a custom chiplet that does ray tracing? What if Nvidia made it?
:-)
They could use some of the new APUs from AMD. That would cut costs, so they could offer a budget entry-level MacBook at an affordable price. That would expand their user base greatly. Once a person gets used to macOS and knows they can get a sub-$600 laptop, they will most likely stick with it. People don't always like change; they want something that works when they need it to...

All the Windows 10 hate lately, AMD getting competitive again in the server and consumer markets, plus the decline of their overpriced iPhone market would seem like an opportune time to take back some of the consumer desktop/laptop share.

I think that would make more sense than trying to port everything from x86 to ARM, plus trying to develop a CPU that could come close to Intel's and AMD's current CPUs....
Everybody knows AMD CPUs can run OS X (or macOS); the Hackintosh community has been modding the Mach kernel since the early days of the switch to Intel, 2006-2007ish I believe. Technically, they could also just sell a version of macOS to run on PCs if they wanted to. Sell it for $150-250 and make a lot of money. I think people would pay for that instead of doing all the tricks to get macOS to run on PC hardware...

I think they need to do something other than solely rely on iPhone sales. That market will continue to decline; I remember when you could get a new phone for sub-$400 in the not too distant past. Now? Really, $1300? Forget that.... I'll stick to my $60 Wal-Mart phone that runs Android and that I replace every 1 1/2....
BTW, don't they still own a part of Adobe?

Sure, but it's worth mentioning that most professional tools available for Macs were already available for PPC; of course, everyone appreciated ditching PPC. :)

Well, if you consider that the ARM chips in mobile devices, TVs, Blu-ray players etc. are basically very weak CPUs with specialized acceleration for anything that needs performance, then ARM isn't really that efficient in generic code. Without the acceleration these CPUs would struggle to open a web browser or play a video.
ARM does fundamentally require more operations to do the same work, so it needs a much higher clock speed. Also, your high-TDP desktop CPU spends a lot of its die space on a much bigger prefetcher, more SIMD features etc., things which make a huge difference when using e.g. Photoshop or Premiere.
EPYC certainly makes sense for some workloads; ARM servers don't really. Coming from PR guys, it can probably mean just about anything, so I wouldn't read too much into it before we have real details. Light and well-crafted use of ray tracing is technically already possible on existing hardware using OpenCL or CUDA, so it all comes down to what they mean by "RTRT"…
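To illustrate the kind of per-ray math such a compute-shader ray tracer runs, here is the classic ray-sphere intersection test as a toy sketch in plain Python (my own illustration; a real OpenCL/CUDA kernel would evaluate this for millions of rays in parallel):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance t along the ray to the nearest hit, or None.

    Solves the quadratic t^2 + b*t + c = 0 for a ray origin + t*direction
    against a sphere; assumes direction is normalized (so a = 1).
    """
    oc = [o - c for o, c in zip(origin, center)]          # vector sphere -> ray origin
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))   # 2 * dot(direction, oc)
    c = sum(o * o for o in oc) - radius * radius          # dot(oc, oc) - r^2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                                       # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0                      # nearest of the two roots
    return t if t > 0 else None                           # hit must be in front of origin

# A ray from the origin pointing down +z hits a unit sphere at (0, 0, 5) at t = 4.
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

It's pure arithmetic with one branch, which is exactly why it maps well onto GPU compute even without dedicated RT hardware; what dedicated hardware accelerates is mostly the traversal of acceleration structures around millions of such tests.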
Looking forward to more information in 2020. :)