
Qualcomm Says PC Transition to Arm-based Processors is Certain, to Launch High-Performance SoCs in 2023

Joined
Feb 3, 2017
Messages
3,747 (1.32/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
it also more or less matches 20W Zen3 cores and 50+W Golden Cove cores in ST performance at around 11W/core.
Have you seen clock-for-clock (or power-limited) benchmarks with M1 vs Zen3 vs something from Intel somewhere? I am really curious about how that would work out.

Zen3 at 20W should run at 5GHz, Golden Cove at 50+W runs at 5.2-5.3GHz; both are on a quite steep part of the curve, or at the end of the reasonable part of it, at that point. At 11W, Zen3 should be around 4.2GHz and Golden Cove around 3.9GHz.
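For anyone who wants to sanity-check those ballpark figures, here is a minimal back-of-envelope sketch. It assumes dynamic power scales roughly as f·V² and uses made-up but plausible voltage points (the 1.35V/1.10V values are assumptions for illustration, not measured data):

```python
def power_scale(f1_ghz, v1, f2_ghz, v2):
    """Ratio of dynamic power when moving from (f1, v1) to (f2, v2),
    assuming P is proportional to f * V^2 (capacitance held constant)."""
    return (f2_ghz / f1_ghz) * (v2 / v1) ** 2

# Hypothetical operating points for a Zen3 core (voltages are assumptions):
p_20w = 20.0                                  # ~5.0 GHz at ~1.35 V
scale = power_scale(5.0, 1.35, 4.2, 1.10)     # drop to ~4.2 GHz at ~1.10 V
print(f"Estimated power at 4.2 GHz: {p_20w * scale:.1f} W")  # lands around 11 W
```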

Also relevant - TSMC claims 20% more performance or 40% less power for N5 over N7.
But that doesn't take away from the fact that if the rest of the ARM world wants to even remotely keep up, they need to try to follow suit.
Looking at this angle - and only at this angle - Nvidia buying ARM might actually be very beneficial. Nvidia should have both the resources and knowhow to make that happen and software support is their strong side.
The challenge is how to make a huge core design that is still economically feasible for a non-integrated market. IMO a huge portion of this will be to also have an actually fast small core, as the A53 is woefully slow by today's standards, making Apple-like 2+4 (or similar) designs perform very poorly. Thus most ARM vendors need more big cores, which doubly disadvantages them.
ARM will now have to contend with Gracemont and soon Zen4c/Zen4e. Outside Apple that picture is not looking too rosy.
 

silentbogo

Moderator
Staff member
Joined
Nov 20, 2013
Messages
5,540 (1.38/day)
Location
Kyiv, Ukraine
System Name WS#1337
Processor Ryzen 7 5700X3D
Motherboard ASUS X570-PLUS TUF Gaming
Cooling Xigmatek Scylla 240mm AIO
Memory 4x8GB Samsung DDR4 ECC UDIMM
Video Card(s) MSI RTX 3070 Gaming X Trio
Storage ADATA Legend 2TB + ADATA SX8200 Pro 1TB
Display(s) Samsung U24E590D (4K/UHD)
Case ghetto CM Cosmos RC-1000
Audio Device(s) ALC1220
Power Supply SeaSonic SSR-550FX (80+ GOLD)
Mouse Logitech G603
Keyboard Modecom Volcano Blade (Kailh choc LP)
VR HMD Google dreamview headset(aka fancy cardboard)
Software Windows 11, Ubuntu 24.04 LTS
I think you missed it. Apple provided comprehensive emulation support when they moved to ARM; Microsoft didn't. It doesn't matter how good the hardware is - if you can't run every application without issues, no one is going to want to switch over.
MS did have decent x86 emulation in place. The issue is - they still don't have a market-ready 64-bit emu, and according to recent news we won't see it until Windows 11 for ARM (i.e. not soon).
 
Joined
Jan 8, 2017
Messages
9,434 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
The issue is - they still don't have a market-ready 64-bit emu
Which, like I said, means that their efforts amount to nothing. No consumer in their right mind will buy a Windows PC that they know won't be capable of running the vast majority of software released in the last few years.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I guess, I am not really that impressed by the emulation to be honest, there are things that work in favor of x86 emulation on ARM, like the fact that you have more general purpose registers available.
Can you mention even a single example of emulation that performs as well as Apple's does across a similarly wide range of applications? If not, that lack of being impressed seems ... a bit selective?
By the way, I still don't understand how Apple got away with x86 emulation, Intel cracked down on every big corporation who wanted to do that in the past, like Nvidia and even Microsoft. You'd think they would do it to the one company where it would actually matter.
Two options: they're paying Intel (and likely AMD, for 64-bit emulation) licensing fees, or they put their massive engineering and legal resources to work in a way that circumvented Intel's legal options for stopping it.
Have you seen clock-for-clock (or power-limited) benchmarks with M1 vs Zen3 vs something from Intel somewhere? I am really curious about how that would work out.
Not strictly power limited, but Zen3 doesn't go higher than 20W-ish, and Intel generally lets their chips turbo freely in ST tasks. Anandtech has done several such comparisons: M1 Pro/Max vs. 5980HS & 11980HK; M1 vs 5950X, 1185G7 & 10900K.
Also relevant - TSMC claims 20% more performance or 40% less power for N5 over N7.
True, but remember those numbers are for the same architecture, and are about transistor switching, not overall arch efficiency (i.e. with their model chip (which is typically a simple ARM design) they can make its transistors switch 20% faster at ISO power or consume 40% less power at ISO performance). Comparisons across architectures are never quite that simple, even if the node specs do give some rough indication.
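To make the ISO-power/ISO-performance distinction concrete, here is a trivial sketch of how those two vendor claims are usually applied. The scaling factors are straight from the N7-to-N5 claim above; the N7 design point is an assumption for illustration only:

```python
# TSMC's N7 -> N5 claims: +20% performance at the same power,
# or -40% power at the same performance (for their reference test chip).
PERF_GAIN_ISO_POWER = 1.20
POWER_SCALE_ISO_PERF = 0.60

# Hypothetical N7 design point (assumed numbers):
freq_n7_ghz, power_n7_w = 3.0, 5.0

print(f"ISO-power:       ~{freq_n7_ghz * PERF_GAIN_ISO_POWER:.1f} GHz at {power_n7_w:.1f} W")
print(f"ISO-performance: ~{freq_n7_ghz:.1f} GHz at {power_n7_w * POWER_SCALE_ISO_PERF:.1f} W")
# Real designs land somewhere between the two corners, and a different
# architecture on the same node sits on its own curve entirely.
```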
Looking at this angle - and only at this angle - Nvidia buying ARM might actually be very beneficial. Nvidia should have both the resources and knowhow to make that happen and software support is their strong side.
Nah, I don't see that. Nvidia has already shown that they don't have any interest in competing in the low-margin mobile SoC space - their interest is in servers and automotive applications. I see no reason why they would spend huge R&D resources on an expensive design for a low-margin consumer market, as it doesn't align with their current mode of operations at all.
ARM will now have to contend with Gracemont and soon Zen4c/Zen4e. Outside Apple that picture is not looking too rosy.
Yeah, ARM needs to get their act together. A78 is decent for what it is, and X1 was an okay first effort, but they now need a much larger X2 design and a vastly improved efficiency core - and preferably yesterday.
 
Joined
Sep 17, 2014
Messages
22,431 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
Which, like I said, means that their efforts amount to nothing. No consumer in their right mind will buy a Windows PC that they know won't be capable of running the vast majority of software released in the last few years.

Unless they are of the opinion the cloud replaces all of what they do. A stupid notion, but it exists.
 
Joined
Jul 10, 2017
Messages
2,671 (0.99/day)
Have you read the monthly vulnerability reports for Qualcomm's chips (SoC, modems, etc.)? They are full of nail-biting nightmare stuff! A true cringe!

And you want to put this into a PC? Heeeeell naaaaw! We are barely surviving with Intel's and AMD's as it is.

Plus, I NEED my computers to run a variety of tasks, from text processors to scientific simulations. Good luck with that, ARM.
 
Joined
Oct 6, 2021
Messages
1,605 (1.40/day)
Of course they will say something like that, since they can't produce x86 processors. They wanna push their product at any cost.

I don't understand how people can find Apple's SoC impressive with nearly 60B transistors and still mediocre performance. There's nothing special about it. They just made a big, efficient chip, as big as possible on that process; that would be pretty easy for AMD.

But honestly this would result in much more expensive products and less supply, so they work intelligently, trying to increase performance at the lowest possible area/transistor cost.
 
Joined
Feb 3, 2017
Messages
3,747 (1.32/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
Not strictly power limited, but Zen3 doesn't go higher than 20W-ish, and Intel generally lets their chips turbo freely in ST tasks. Anandtech has done several such comparisons: M1 Pro/Max vs. 5980HS & 11980HK; M1 vs 5950X, 1185G7 & 10900K.
My impression from everything is that Zen3 does not go higher due to process limitations. It is simply up against the very-very steep efficiency curve at 5GHz. Intel has it (clock capability, not efficiency) a bit better but not by much - 12900K leaked OC results (that reviews seem to confirm) had 330W at 5.2GHz and 400W at 5.3GHz (and still unstable). +20% power for +100MHz (+2% clocks). M1 runs at 3.2GHz; I wonder what it would do at, say, 4GHz, both in terms of performance and power consumption, or whether it would even be capable of that.
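Just to spell out how lopsided that trade-off is, the quoted OC figures work out like this (they are leaked numbers as quoted above, so treat them as rough):

```python
# Leaked 12900K all-core OC figures quoted above (approximate).
f1_ghz, p1_w = 5.2, 330.0
f2_ghz, p2_w = 5.3, 400.0

clock_gain = f2_ghz / f1_ghz - 1   # ~ +1.9%
power_cost = p2_w / p1_w - 1       # ~ +21%
print(f"+{clock_gain:.1%} clocks costs +{power_cost:.1%} power "
      f"({power_cost / clock_gain:.0f}x worse than linear scaling)")
```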
True, but remember those numbers are for the same architecture, and are about transistor switching, not overall arch efficiency (i.e. with their model chip (which is typically a simple ARM design) they can make its transistors switch 20% faster at ISO power or consume 40% less power at ISO performance). Comparisons across architectures are never quite that simple, even if the node specs do give some rough indication.
My point is that the M1 efficiency lead is practically within these same margins, given that the CPUs are run at their optimal efficiency point - which the M1 is, but AMD/Intel CPUs generally are not (EPYCs and Xeons maybe).
Nah, I don't see that. Nvidia has already shown that they don't have any interest in competing in the low-margin mobile SoC space - their interest is in servers and automotive applications. I see no reason why they would spend huge R&D resources on an expensive design for a low-margin consumer market, as it doesn't align with their current mode of operations at all.
They tried getting an x86 license at one point, and they have tried creating (high-performance) ARM cores, both with limited success. They have been trying to get CPUs to go along with the whole GPU/HPC thing, but fairly unsuccessfully. For the software side of support, if someone pushes an ISA to wide enough adoption in servers - or automotive for that matter - that tends to trickle down to other segments as well.
 
Joined
Aug 6, 2020
Messages
729 (0.46/day)
Qualcomm has been tripping over their own internally-developed CPU architecture for the last decade!

First, the rushed-out Kryo after the A7 stunned the industry (canned after Revision 1.0 was slow)

Then they got into servers with Centriq (and killed that, despite it making some inroads and being the highest-performing ARM of its day.

They killed it several years before ARM announced Neoverse, so what makes you think this purchase will end up any different?)

As long as all Qualcomm has to compete with is Apple, they still sell hundreds of millions of chips a year (having that impressive modem, combined with their GPU, makes them currently untouchable on Android - why should they upset the Apple cart?)
 
Joined
Oct 12, 2005
Messages
707 (0.10/day)
Well, the M1 can control its power consumption more tightly because Apple controls everything, whereas on a PC you have to use a standard board with standard memory. The M1 uses soldered LPDDR4/5, while a PC most of the time has to use DIMMs that require longer traces, higher power to be stable, etc.

Also, every CPU is designed with a goal in mind. x86 cores are still mainly designed as server/desktop CPUs first, whereas the M1 was designed for laptops. There are no perfect designs, only designs adapted to the end goal.

The main problem with ARM or RISC-V, for me, is that they get excited when they get very good initial performance at lower clocks with smaller chips. They think that if they scale it up, the gains will be linear. The reality is that the first 80% seems easy to get with low power consumption and a low transistor count. It's when they want the last 20%, to reach current top CPUs like x86, that things start to become hard.

They have to implement complex mechanisms like out-of-order execution, prefetching, SIMD, etc. to feed larger and larger execution ports. This costs power and transistors. In the end the CPU becomes so complex that all the advantages of the ISA are negated. Ultimately, it's the design choices, the process and the V/F curve that matter.

And in these days when AMD and Intel are pushing things hard and are no longer trying failed architectures (Bulldozer - AMD) or milking the market (Intel for the last 10 years), ARM will have a hard time getting competitive, since they also have the ISA incompatibility problem.

But this is something Microsoft is actively working on, even more so now that Apple has its own CPU in-house and could push performance way ahead. They cannot let AMD and Intel slow down and milk the x86 market. They are trying to make Windows ISA-agnostic. How long will it take? I don't know, but all their decisions point to this end goal.

What we would need is the ability to ship both binaries for many applications and have the OS just run them transparently. Failing that, it would be nice to have offline translation, and only as a last resort a real-time one. With that, the experience would be mostly transparent.


But in the end, nobody will buy a Qualcomm laptop running Windows 11 if the experience is bad or the performance is much lower for the same price. The thing is, the CPU is just a fraction of a laptop's cost, so even cutting the price of the CPU part a lot won't make those laptops very good deals if the performance isn't there.

But if they manage to deliver low power and reasonable performance at a fair price (instead of the super high prices Intel/AMD charge for their most efficient SKUs), they might have a chance to make a dent. But that is a big if - it's something they have promised but that has never happened yet.
 
Joined
May 17, 2021
Messages
3,005 (2.34/day)
Processor Ryzen 5 5700x
Motherboard B550 Elite
Cooling Thermalright Perless Assassin 120 SE
Memory 32GB Fury Beast DDR4 3200Mhz
Video Card(s) Gigabyte 3060 ti gaming oc pro
Storage Samsung 970 Evo 1TB, WD SN850x 1TB, plus some random HDDs
Display(s) LG 27gp850 1440p 165Hz 27''
Case Lian Li Lancool II performance
Power Supply MSI 750w
Mouse G502
Watching what Apple did in such a short time (sure, lots of money, but Intel has that too), I think it's inevitable.
 
Joined
Apr 24, 2008
Messages
2,021 (0.33/day)
Processor RyZen R9 3950X
Motherboard ASRock X570 Taichi
Cooling Coolermaster Master Liquid ML240L RGB
Memory 64GB DDR4 3200 (4x16GB)
Video Card(s) RTX 3050
Storage Samsung 2TB SSD
Display(s) Asus VE276Q, VE278Q and VK278Q triple 27” 1920x1080
Case Zulman MS800
Audio Device(s) On Board
Power Supply Seasonic 650W
VR HMD Oculus Rift, Oculus Quest V1, Oculus Quest 2
Software Windows 11 64bit
Infidels !!!

Our ARM overlords have spoken !!!

So shall it be !!!
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Watching what Apple did in such a short time (sure, lots of money, but Intel has that too), I think it's inevitable.
Short time? They've been selling devices with their own ARM cores since, what, 2012? They've likely been working on the M1 series since 2016-2017-ish, as that's about how long a ground-up CPU architecture design cycle is.
My impression from everything is that Zen3 does not go higher due to process limitations.
Not only that - different processes have different clock scaling characteristics, but it's also highly architecture-dependent. Both need to align for clocks to scale better, and AMD seems to be held back by a combination of the two.
It is simply up against the very-very steep efficiency curve at 5GHz.
That is absolutely true, but it doesn't change the absolute power draw characteristics of the chip.
Intel has it (clock capability, not efficiency) a bit better but not by much - 12900K leaked OC results (that reviews seem to confirm) had 330W at 5.2GHz and 400W at 5.3GHz (and still unstable). +20% power for +100MHz (+2% clocks).
Yes, but again, same thing.
M1 runs at 3.2GHz; I wonder what it would do at, say, 4GHz, both in terms of performance and power consumption, or whether it would even be capable of that.
I'm reasonably sure that the M1 can't clock that much higher - an execution pipeline that wide is likely very, very limited in how high it can scale.
My point is that the M1 efficiency lead is practically within these same margins, given that the CPUs are run at their optimal efficiency point - which the M1 is, but AMD/Intel CPUs generally are not (EPYCs and Xeons maybe).
But that's missing the point. The point is: they are likely architecturally limited to the mid-to-low 3GHz range, yet they still manage to match the best x86 CPUs in ST. Yes, they spend tons of transistors to do so, have massive caches and an extremely wide core design, but they still manage to match a 5GHz Zen3 core at the power levels of a ~4.2GHz Zen3 core. That clearly indicates that, as you say, Ryzen 5000 is well out of its efficiency sweet spot at ~5GHz, but it also shows just how significant Apple's IPC and efficiency advantage is. If AMD had to clock down to match their efficiency, they would be significantly slower.
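A quick way to frame that argument in numbers - purely illustrative, with the scores and per-core power figures being placeholders based on the rough values discussed earlier in the thread, not measurements:

```python
# Rough ST comparison under the assumptions discussed above: M1 matches a
# ~5 GHz Zen3 core's score while drawing roughly what a ~4.2 GHz Zen3 core
# draws. Scores are normalized placeholders (score assumed ~ proportional to clock).
zen3_5ghz  = {"score": 100, "watts": 20.0}   # well out of its sweet spot
zen3_42ghz = {"score":  84, "watts": 11.0}   # near its sweet spot
m1         = {"score": 100, "watts": 11.0}   # same score, sweet-spot power

def perf_per_watt(core):
    return core["score"] / core["watts"]

for name, core in [("Zen3 @5.0GHz", zen3_5ghz), ("Zen3 @4.2GHz", zen3_42ghz), ("M1 @3.2GHz", m1)]:
    print(f"{name}: {perf_per_watt(core):.1f} points/W")
# If AMD clocked down far enough to match the M1's points/W, the score would
# drop well below 100 - the "significantly slower" point made above.
```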
They tried getting x86 license at one point, they have tried creating (high-performance) ARM cores, both with limited success. They have been trying to get CPUs to go along with the whole GPU/HPC thing but fairly unsuccessfully. For the software side of support, if someone pushes an ISA to wide enough adoption in servers - or automotive for that matter - that tends to trickle to other segments as well.
IMO, that is highly, highly doubtful. How many RISC or POWER chips do you see in consumer applications? Also, the ARM ISA is already ubiquitous in consumer mobile spaces, so that's not the issue. The issue is getting a sufficiently high performance core design out there - and one made for automotive and server tasks is likely to have performance characteristics quite unsuited for consumer applications, or a bunch of features that simply aren't used, eating up die area. And as you said, Nvidia already tried - and crucially, gave up on - building consumer SoCs. Yes, that was in part due to gross anticompetitive behaviour from Qualcomm and Intel (bribing or "sponsoring" manufacturers to not use Tegra4, among other things), but they're quite unlikely to get back into that game. They've even been reluctant to produce a new, bespoke SoC for a new Switch, despite that being a higher-margin product (for them, not necessarily Nintendo) that is guaranteed to sell in tens of millions of units. Nvidia has shown zero interest in being an end-user-friendly custodian of ARM.
Well, the M1 can control its power consumption more tightly because Apple controls everything, whereas on a PC you have to use a standard board with standard memory. The M1 uses soldered LPDDR4/5, while a PC most of the time has to use DIMMs that require longer traces, higher power to be stable, etc.
AMD and Intel can specify and package literally whatever RAM they want in whatever way they want. The only question is cost and whether any OEMs are willing to pay for it and put it to use. HBM, on-package LPDDR, whatever, they can do it if they want to. There is no system limitation for this. Also, most laptops today use soldered RAM, whether regular DDR4 or LPDDR4X, as most designs are thin-and-lights these days.
Also, every CPU is designed with a goal in mind. x86 cores are still mainly designed as server/desktop CPUs first, whereas the M1 was designed for laptops. There are no perfect designs, only designs adapted to the end goal.
Intel has been "mobile-first" in their CPU designs since at least Skylake. That's what sells the most (by an order of magnitude if not more), so that's the main focus.
The main problem with ARM or RISC-V, for me, is that they get excited when they get very good initial performance at lower clocks with smaller chips. They think that if they scale it up, the gains will be linear. The reality is that the first 80% seems easy to get with low power consumption and a low transistor count. It's when they want the last 20%, to reach current top CPUs like x86, that things start to become hard.
It's mainly down to the willingness to pay for a sufficiently substantial design. Most ARM SoCs cost well below $100 for phone or Chromebook manufacturers, while AMD and Intel CPUs/APUs easily cost $300-400 if not more for higher-end parts. It stands to reason that AMD and Intel can then afford to make bigger designs with larger caches and more substantial core designs with better performance.
They have to implement complex mechanisms like out-of-order execution, prefetching, SIMD, etc. to feed larger and larger execution ports. This costs power and transistors. In the end the CPU becomes so complex that all the advantages of the ISA are negated. Ultimately, it's the design choices, the process and the V/F curve that matter.
There isn't a single high-performance core design on the market today that isn't OoO, prefetchers are equally ubiquitous, as is SIMD hardware and ISAs. I fail to see how this would be a disadvantage for ARM and somehow not x86.
And in these days when AMD and Intel are pushing things hard and are no longer trying failed architectures (Bulldozer - AMD) or milking the market (Intel for the last 10 years), ARM will have a hard time getting competitive, since they also have the ISA incompatibility problem.
Apple has demonstrated clearly that an ARM design can compete with the fastest x86 designs. ARM, Qualcomm, Samsung and the rest just need to get their collective thumbs out of their collective rear ends and catch up. The problem seems to be a conservative and overly cost-conscious design approach, more than anything else.
But this is something Microsoft is actively working on, even more so now that Apple has its own CPU in-house and could push performance way ahead. They cannot let AMD and Intel slow down and milk the x86 market. They are trying to make Windows ISA-agnostic. How long will it take? I don't know, but all their decisions point to this end goal.
I don't think either Intel or AMD are in a position where they could milk anything. Chipmaking is - thankfully - an extremely competitive business once again.
What we would need is the ability to ship both binaries for many applications and have the OS just run them transparently. Failing that, it would be nice to have offline translation, and only as a last resort a real-time one. With that, the experience would be mostly transparent.
That sounds like a recipe for disaster IMO. Not only would application install sizes (and download sizes) balloon, but potentially having an active thread swap over to a core of an entirely incompatible ISA, being swapped on the fly including whatever data it's working on? That sounds like BSOD hell.
But in the end, nobody will buy a Qualcomm laptop running Windows 11 if the experience is bad or the performance is much lower for the same price. The thing is, the CPU is just a fraction of a laptop's cost, so even cutting the price of the CPU part a lot won't make those laptops very good deals if the performance isn't there.
That's true. But Qualcomm's SoCs are dirt cheap compared to Intel/AMD CPUs/APUs - that's why those ARM Chromebooks get so cheap. There isn't much left to cut. They need to step up their performance game, period.
But if they manage to deliver low power and reasonable performance at a fair price (instead of the super high prices Intel/AMD charge for their most efficient SKUs), they might have a chance to make a dent. But that is a big if - it's something they have promised but that has never happened yet.
Yeah, something like a 4xX1+4xA78 design that was also cheap could be interesting for a laptop. But there's no way such a design would be cheap, which lands us back to square one. Anything cheap and ARM inevitably means a bunch of A53 cores, and they just aren't even remotely competitive today.
 
Joined
Feb 3, 2017
Messages
3,747 (1.32/day)
Processor Ryzen 7800X3D
Motherboard ROG STRIX B650E-F GAMING WIFI
Memory 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5)
Video Card(s) INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2
Storage 2TB Samsung 980 PRO, 4TB WD Black SN850X
Display(s) 42" LG C2 OLED, 27" ASUS PG279Q
Case Thermaltake Core P5
Power Supply Fractal Design Ion+ Platinum 760W
Mouse Corsair Dark Core RGB Pro SE
Keyboard Corsair K100 RGB
VR HMD HTC Vive Cosmos
Watching what Apple did in such a short time (sure, lots of money, but Intel has that too), I think it's inevitable.
It has been well over a decade. Apple's first fully in-house SoC was the A4, which shipped in products released in 2010. Designing a chip like that takes years, so Apple has probably been designing SoCs for at least 15 years now.
I'm reasonably sure that the M1 can't clock that much higher - an execution pipeline that wide is likely very, very limited in how high it can scale.
I have a feeling this has more to do with the manufacturing process used than with the architecture or execution pipeline. M1 density is a little over 130 MTr/mm² while Zen3 is at 62 MTr/mm². TSMC's official word is that N5 is 80% more dense than N7. Those official specs have never really been reached in previous generations, especially with logic. Also, Zen3 should contain proportionally more (dense) memory than M1. That makes the 110% higher density... weird. I bet Apple is using the high-density/low-power variant of N5, not the high-performance one.
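The arithmetic behind that "weird" gap, for reference - the density figures are the rough public numbers quoted above and the node multiplier is TSMC's own claim, so this is only a sanity check:

```python
m1_density_mtr_mm2   = 130.0   # ~ M1 on N5 (rough public figure)
zen3_density_mtr_mm2 = 62.0    # ~ Zen3 CCD on N7 (rough public figure)
n5_vs_n7_claim       = 1.80    # TSMC: N5 is "80% denser" than N7

observed_ratio = m1_density_mtr_mm2 / zen3_density_mtr_mm2
print(f"Observed density ratio: {observed_ratio:.2f}x "
      f"(~{observed_ratio - 1:.0%} higher)")          # ~2.1x, ~110% higher
print(f"TSMC's claimed node scaling alone: {n5_vs_n7_claim:.2f}x")
# The gap beyond the node claim (~2.1x vs ~1.8x) has to come from design
# choices: cache-heavy layout, a density-optimised N5 variant, lower target clocks.
```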
 
Joined
Aug 6, 2020
Messages
729 (0.46/day)
Well, the M1 can control its power consumption more tightly because Apple controls everything, whereas on a PC you have to use a standard board with standard memory. The M1 uses soldered LPDDR4/5, while a PC most of the time has to use DIMMs that require longer traces, higher power to be stable, etc.
 
Joined
Oct 15, 2019
Messages
585 (0.31/day)
I truly believe Intel will start a trend with the hybrid processor designs, and this is where we will end up - mixing ARM and x86 in the same systems with Windows 11 and its support for hybrid, mixed architectures.
Win11 does not support mixed instruction sets (i.e. x86-64 & ARM). That is the reason why even AVX-512 has to be disabled on Alder Lake when the E-cores are in use (the E-cores do not support it, and mixing instruction sets is a big no-no).
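The practical consequence for software is that you can never assume AVX-512 is there just because the CPU family nominally has it - code has to check at runtime. A minimal sketch of such a check (reading /proc/cpuinfo is a Linux-specific assumption; on other OSes you would query CPUID differently):

```python
def cpu_flags():
    """Return the CPU feature flags reported by the kernel (Linux x86 only)."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
if "avx512f" in flags:
    print("AVX-512 foundation instructions reported - wide SIMD path OK")
else:
    print("No AVX-512 reported (e.g. Alder Lake with E-cores enabled) - fall back to AVX2/SSE")
```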

They will make more E-core-heavy products, but never mix ARM with x86, as that makes literally no sense. They may, however, make pure ARM-compatible processors.
 
Joined
Feb 1, 2019
Messages
3,580 (1.69/day)
Location
UK, Midlands
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 4080 RTX SUPER FE 16G
Storage 1TB 980 PRO, 2TB SN850X, 2TB DC P4600, 1TB 860 EVO, 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Soundblaster AE-9
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
Nothing is impossible, but any emulation needs to be very efficient, and the chips probably need a performance advantage to offset those overheads; the decades' worth of software that exists on the Windows platform won't be recompiled, so they have to get past that barrier.
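As a rough way to quantify that barrier: if emulated code only runs at some fraction of native speed, the host chip needs at least the inverse of that fraction in raw performance just to break even. A tiny sketch - the efficiency figures are assumptions for illustration, not measured numbers:

```python
def required_native_advantage(emulation_efficiency):
    """How much faster the ARM chip must be (vs. the x86 chip it replaces)
    for emulated x86 code to feel no slower than it did natively."""
    return 1.0 / emulation_efficiency

for eff in (0.9, 0.7, 0.5):   # assumed emulation efficiency levels
    print(f"{eff:.0%} emulation efficiency -> chip must be "
          f"{required_native_advantage(eff):.2f}x faster to break even")
```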
 

ARF

Joined
Jan 28, 2020
Messages
4,670 (2.65/day)
Location
Ex-usa | slava the trolls
My impression from everything is that Zen3 does not go higher due to process limitations. It is simply up against the very-very steep efficiency curve at 5GHz. Intel has it (clock capability, not efficiency) a bit better but not by much - 12900K leaked OC results (that reviews seem to confirm) had 330W at 5.2GHz and 400W at 5.3GHz (and still unstable). +20% power for +100MHz (+2% clocks). M1 runs at 3.2GHz; I wonder what it would do at, say, 4GHz, both in terms of performance and power consumption, or whether it would even be capable of that.
My point is that the M1 efficiency lead is practically within these same margins, given that the CPUs are run at their optimal efficiency point - which the M1 is, but AMD/Intel CPUs generally are not (EPYCs and Xeons maybe).
They tried getting an x86 license at one point, and they have tried creating (high-performance) ARM cores, both with limited success. They have been trying to get CPUs to go along with the whole GPU/HPC thing, but fairly unsuccessfully. For the software side of support, if someone pushes an ISA to wide enough adoption in servers - or automotive for that matter - that tends to trickle down to other segments as well.

330W and 400W are simply silly, ridiculous numbers.

You guys are discussing the topic from the wrong side.

Try getting RISC smartphone performance out of x86-64 chips in the very same power envelope.

How about running a 4K 17-inch office laptop with 1-watt power consumption and a battery that can last 48 hours or more? How about that?

How about making AMD's Ryzen 5 U-series APUs run at 2 or 3 watts with decent performance in a 4K 15-inch office notebook?
 
Joined
Oct 12, 2005
Messages
707 (0.10/day)
AMD and Intel can specify and package literally whatever RAM they want in whatever way they want. The only question is cost and whether any OEMs are willing to pay for it and put it to use. HBM, on-package LPDDR, whatever, they can do it if they want to. There is no system limitation for this. Also, most laptops today use soldered RAM, whether regular DDR4 or LPDDR4X, as most designs are thin-and-lights these days.
99% of the comparisons aren't against systems built with LPDDR*; they are against traditional PCs or laptops built by third parties.

Intel has been "mobile-first" in their CPU designs since at least Skylake. That's what sells the most (by an order of magnitude if not more), so that's the main focus.
They say that, but in reality it's more "let's build a server die and sell the best-binned CPUs to mobile".

It's mainly down to the willingness to pay for a sufficiently substantial design. Most ARM SoCs cost well below $100 for phone or Chromebook manufacturers, while AMD and Intel CPUs/APUs easily cost $300-400 if not more for higher-end parts. It stands to reason that AMD and Intel can then afford to make bigger designs with larger caches and more substantial core designs with better performance.
You have to perform better to ask for more. See Apple with the M1 Pro and Max: if it performs better, people will pay more for an ARM design.

There isn't a single high-performance core design on the market today that isn't OoO, prefetchers are equally ubiquitous, as is SIMD hardware and ISAs. I fail to see how this would be a disadvantage for ARM and somehow not x86.
The point is that the ARM ISA's benefit of being "RISC" loses most of its value as the cores grow bigger and more complex. As cores grow bigger and bigger, the ISA becomes a much smaller factor in overall performance. I'm not saying it impacts one more than the other; I'm saying the ISA becomes less and less relevant the more you push performance.

Apple has demonstrated clearly that an ARM design can compete with the fastest x86 designs. ARM, Qualcomm, Samsung and the rest just need to get their collective thumbs out of their collective rear ends and catch up. The problem seems to be a conservative and overly cost-conscious design approach, more than anything else.
The problem is that it's not that simple to be the best performer in the CPU world. Some people seem to think that just because you have a new ISA, or use a "better" one, it would be easy. So many people say ARM is the future because it's so much more performant than x86, but in the end you can probably do similar things with either ISA because, like I said, at that level the ISA is not the main factor for performance.

Anyone could do a high-performance architecture using x86 (if they have the license, indeed), ARM or RISC-V. But getting there is hard.

My point there is that ARM had it easy over the last decade, as AMD was struggling and Intel was milking the market. If both companies had still been in fierce competition, ARM would probably have had it way harder. That does not mean Apple would not have been able to get an amazing design out.

I don't think either Intel or AMD are in a position where they could milk anything. Chipmaking is - thankfully - an extremely competitive business once again.
Intel was for a decade. They no longer are, since AMD now has the mindshare and the HEDT and server performance crown, while Apple has the mobile crown. I hope a milking situation never happens again. But those years are probably the main reason why Apple decided to make their own chips in the first place.


That sounds like a recipe for disaster IMO. Not only would application install sizes (and download sizes) balloon, but potentially having an active thread swap over to a core of an entirely incompatible ISA, being swapped on the fly including whatever data it's working on? That sounds like BSOD hell.
Well, an installer could choose to install only the specific binary. Binaries are fairly small; I don't see this as a big deal. People forget that x64 code takes roughly double the space of 32-bit code and nobody freaked out about that. And it's also how Apple is doing things these days, and their M1 laptops are doing very well.
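For reference, Apple's "ship both binaries" approach is literally a container format: a universal (fat) Mach-O file holds one slice per architecture and the loader picks the matching one at launch. A minimal sketch that lists the slices in such a file - the header layout follows the public Mach-O fat header format, and the file path in the usage comment is a placeholder:

```python
import struct

FAT_MAGIC = 0xCAFEBABE          # 32-bit universal binary, big-endian header
CPU_NAMES = {0x01000007: "x86_64", 0x0100000C: "arm64"}

def list_fat_slices(path):
    """Print the architectures contained in a universal (fat) Mach-O file."""
    with open(path, "rb") as f:
        magic, nfat = struct.unpack(">II", f.read(8))
        if magic != FAT_MAGIC:
            print("not a (32-bit) universal binary")
            return
        for _ in range(nfat):
            cputype, _sub, offset, size, _align = struct.unpack(">IIIII", f.read(20))
            print(f"{CPU_NAMES.get(cputype, hex(cputype))}: {size} bytes at offset {offset}")

# Usage (path is a placeholder - any macOS universal app binary works):
# list_fat_slices("/Applications/Some.app/Contents/MacOS/Some")
```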

That's true. But Qualcomm's SoCs are dirt cheap compared to Intel/AMD CPUs/APUs - that's why those ARM Chromebooks get so cheap. There isn't much left to cut. They need to step up their performance game, period.

Yeah, something like a 4xX1+4xA78 design that was also cheap could be interesting for a laptop. But there's no way such a design would be cheap, which lands us back to square one. Anything cheap and ARM inevitably means a bunch of A53 cores, and they just aren't even remotely competitive today.

That brings us back to the main debate (let's exclude Apple from this since it's a closed system): if the ARM ecosystem doesn't release designs that match or outperform the x64 offerings, they won't penetrate the desktop/mobile market. They will also have a hard time in the server market. They may have initiatives like Amazon's CPUs and such, but in the end, people will prefer whatever performs best.

ARM isn't a magical ISA that is just so much better than x86. Like I said, at that level of complexity the ISA doesn't matter too much (except, indeed, for software compatibility).
 
Joined
Dec 28, 2012
Messages
3,877 (0.89/day)
System Name Skunkworks 3.0
Processor 5800x3d
Motherboard x570 unify
Cooling Noctua NH-U12A
Memory 32GB 3600 mhz
Video Card(s) asrock 6800xt challenger D
Storage Sabarent rocket 4.0 2TB, MX 500 2TB
Display(s) Asus 1440p144 27"
Case Old arse cooler master 932
Power Supply Corsair 1200w platinum
Mouse *squeak*
Keyboard Some old office thing
Software Manjaro
Qualcomm said in the early 2010s that ARM would be the future of servers, and that went precisely nowhere.

Turns out, if you don't have decades of software to back you up, you at least need jaw-dropping performance, which ARM also couldn't deliver: at x86 performance levels it was drawing as much or more power. We see similar behaviour from Apple's M series; despite having 5nm on their side, gigantic caches and all this vertical integration, they are only slightly faster and just as power hungry as Zen 2-based 4000-series APUs.

x86 isn't some boat anchor like it was in the 90s. 99% of what is done on x86 chips today is micro-operations, and it doesn't matter which arch does that; the low-hanging fruit was picked ages ago. An ARM-based future will only happen if they are both cheaper and more capable than what we already have, and if serious efforts are made to allow x86 apps to run on ARM without issues.
 
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
Thanos was inevitable, the Terminator was inevitable, and look at those. ARM doing in x86? I'll not hold my breath.

@Mussels I did agree with you on hybrid, but I don't think we will see ARM in the x86 mix - it would assist x86's own demise. Big and little cores, definitely; other hardware acceleration, definitely. I can't imagine why we won't see FPGA tech incorporated within a few years - both big x86 players have in-house FPGA roadmaps, and both have ARM licences too. I can't wait for some of that to come out; with oneAPI and the like, making use of such a system would be a joy.
Hopefully.
Then you have Apple: you can't knock what they've achieved, though you can reason it through - it's possible they won't always have that node advantage, or it could limit availability.
Nonetheless, Intel stepped up and showed hybrid, no doubt to claim the performance crown against a resurgent AMD, and there's no doubt in my mind they would go steps further to keep ARM's share of the pie where it is too - ultra-big cores, more ALUs, cache, whatever it takes.
 
Joined
May 3, 2018
Messages
2,881 (1.20/day)
Silly poll - where was the option "It depends on performance and price"?

Qualcomm seems to be aiming too high. How about making an iPad Pro-killer tablet first, something that could actually beat the A16 next year? If they can't pull that off, they have no hope against the likely M2, let alone AMD Phoenix APUs and Intel Raptor Lake-P.
 
Joined
Mar 21, 2016
Messages
2,508 (0.79/day)
Pair ARM with an FPGA and use the FPGA hardware to emulate the harder x86 stuff that ARM struggles with. Looking at it further, FPGAs are what will bridge the gap between x86, ARM and other chips: the FPGA is the programmable fabric, if you will, that can connect and tie different chip hardware together in various ways. I firmly believe in the future of FPGA tech; it's just a matter of stronger programmable hardware and the software to go alongside it.

Some of the stuff Xilinx is doing now is interesting; it'll help make FPGAs more user-friendly and accessible without having to be a coder and an engineer at the same time. Sort of like GameMaker for FPGAs is the best way to describe what they are starting to do with developer boards and building blocks. It's not really at that point yet, but give it time, and using FPGAs exactly the way you intend for a given purpose is going to become easier and simpler. It's like Lego blocks with circuits, so just imagine the fun in the right hands with the right vision.
 

eidairaman1

The Exiled Airman
Joined
Jul 2, 2007
Messages
42,094 (6.63/day)
Location
Republic of Texas (True Patriot)
System Name PCGOD
Processor AMD FX 8350@ 5.0GHz
Motherboard Asus TUF 990FX Sabertooth R2 2901 Bios
Cooling Scythe Ashura, 2×BitFenix 230mm Spectre Pro LED (Blue,Green), 2x BitFenix 140mm Spectre Pro LED
Memory 16 GB Gskill Ripjaws X 2133 (2400 OC, 10-10-12-20-20, 1T, 1.65V)
Video Card(s) AMD Radeon 290 Sapphire Vapor-X
Storage Samsung 840 Pro 256GB, WD Velociraptor 1TB
Display(s) NEC Multisync LCD 1700V (Display Port Adapter)
Case AeroCool Xpredator Evil Blue Edition
Audio Device(s) Creative Labs Sound Blaster ZxR
Power Supply Seasonic 1250 XM2 Series (XP3)
Mouse Roccat Kone XTD
Keyboard Roccat Ryos MK Pro
Software Windows 7 Pro 64
SuperH (Hitachi), SPARC (Sun Microsystems/Oracle), anyone?
 
Joined
Mar 28, 2020
Messages
1,753 (1.03/day)
Apple's M1 is not designed to run at high clock speeds; that, in my opinion, is a brute-force method. At least from what I see, ARM and AMD generally don't go for supercharging their processors with high clock speeds, and we can see why Intel's processors are burning hot and power hungry. ARM chips are generally focused on low power, though we have seen ARM chips from various companies pushing that power envelope in recent years. Still, a very powerful ARM chip like the Apple M1 caps out at 3.2GHz, and Apple chooses to go wide instead of creating an M1 Pro and Max on steroids by bumping up the power requirement just to enable a very high clock speed. That is inefficient, since at some point any increase in MHz results in an exponential increase in power requirement. Intel basically ignored the sweet-spot clock speed and went straight for as high as they could push it without making it impossible for most consumers to cool the CPU.

In any case, Apple's M1 has already proven that a good SoC for desktop or even professional use is viable. So that leaves Qualcomm to achieve the same, with the same people that contributed to the M1. However, I think it will take much longer for ARM to gain traction on, say, Windows, because it is currently dominated by x86 processors, and Intel will do whatever it takes to slow or stop this transition. Just as Intel's CEO mentioned, it is not just the hardware but the combination of hardware + software that matters. The Windows ecosystem is sadly very fractured/divided, which makes such transitions difficult. So I won't expect the kind of success that Apple enjoys on their first attempt.
 