
Qualcomm Says PC Transition to Arm-based Processors is Certain, to Launch High-Performance SoCs in 2023

Apple's M1 is not designed to run at high clock speeds. That, in my opinion, is a brute-force method. At least from what I see, ARM and AMD generally don't supercharge their processors with high clock speeds

You're comparing desktop-oriented to mobile-oriented architectures.

Look at AMD's RDNA2 and how its clock speeds compare to Nvidia's Ampere. Look at the jump AMD made, then look at the node each of them uses. Now look at AMD's steps on Zen, where they also push clocks alongside numerous other refinements.

The node explains the major gap in your assumptions. Every company will maximize frequency within the power envelope available to each product (stack), and that comes down to the power budget on offer, plus the node they use and how it behaves within that power budget.

Intel's 10nm node didn't 'work' because Intel set a high bar for the frequency they wanted within a certain power envelope, and 10nm couldn't deliver that while 14nm could. Effectively, this made 14nm economically far more viable than their 10nm could ever be, so they only pushed lower-clocked mobile parts with bigger margins (Tiger Lake), trying to upsell the GPU (Xe) to make it worthwhile - we know it wasn't.

If a shrink causes you to lose frequency, the net result is that you've spent R&D on a slower chip, or on a more expensive way to produce the same thing.
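To make the power-envelope point concrete: first-order dynamic CPU power scales with C·V²·f, and higher clocks generally need higher voltage, so the last few hundred MHz are disproportionately expensive. A minimal sketch, with invented voltage/frequency points purely for illustration:

# First-order dynamic power model: P is proportional to C * V^2 * f.
# The operating points below are invented for illustration; they do not
# describe any real CPU or node.

def dynamic_power(cap, volts, freq_ghz):
    # Power in arbitrary units; 'cap' is a normalised switched-capacitance constant.
    return cap * volts ** 2 * freq_ghz

CAP = 1.0
points = [(3.5, 0.90), (4.5, 1.05), (5.0, 1.20)]  # (GHz, assumed core voltage)

baseline = dynamic_power(CAP, points[0][1], points[0][0])
for freq, volts in points:
    ratio = dynamic_power(CAP, volts, freq) / baseline
    print(f"{freq:.1f} GHz @ {volts:.2f} V -> {ratio:.2f}x the power of the 3.5 GHz point")

With these made-up numbers, roughly 43% more clock costs about 2.5x the power - which is why the node and the power budget set the ceiling, not the architecture's appetite for clocks.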
 
You guys are discussing the topic from the wrong side.

Try getting RISC smartphone performance out of x86-64 chips in the very same power envelope.

How about running a 4K 17-inch office laptop with 1-watt power consumption and a battery that can last 48 hours or more? How about that?

How about making AMD's Ryzen 5 U-series APUs run at 2 or 3 watts with decent performance in a 4K 15-inch office notebook?
Show me an ARM SoC that performs decently while consuming even 1W under a meaningful load. Also, the LCD panel of that 17" 4K laptop will be consuming way more than 1 watt on its own. Phone ARM SoCs peak at 8-10W and run steady-state at around 3W (depending on the phone design). The same chips in laptops get a bit more thermal and power headroom.

ARM chips idle lower than x86 chips, but then you're talking about high vs. low mW figures, which ... well, it's a difference, but it isn't particularly meaningful. ARM chips are still more efficient at the low end and can run in lower power envelopes overall, but that is in part because they're designed to be smaller and more efficient in the first place. It'll be extremely interesting to see how Intel's E cores, and whatever analogue to them AMD comes up with in the coming years, stack up.
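To put rough numbers on the 48-hour claim above (battery capacity and panel power here are ballpark assumptions, not measurements of any specific laptop):

# Rough feasibility check on "a 4K 17-inch laptop at 1 W for 48 hours".
# Battery capacity and panel power are ballpark assumptions for illustration only.

battery_wh = 60.0      # assumed battery capacity for a mid-size laptop pack
target_hours = 48.0    # runtime asked for in the quote above

whole_system_budget_w = battery_wh / target_hours
print(f"Whole-system power budget: {whole_system_budget_w:.2f} W")   # 1.25 W

assumed_panel_w = 4.0  # assumed draw of a large high-resolution panel at modest brightness
remaining_w = whole_system_budget_w - assumed_panel_w
print(f"Left for SoC, RAM, storage and Wi-Fi: {remaining_w:.2f} W")  # negative -> not feasible

Under those assumptions the display alone blows the budget before the SoC draws a single milliwatt, regardless of the ISA.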
99% of the comparisons aren't systems built with LPDDR*; they are traditional PCs or laptops built by third parties.
... did I say that? No. I said:
most laptops today use soldered RAM, whether regular DDR4 or LPDDR4X
So at least do me the courtesy of responding to what I actually said. Also, all PCs are built by third parties. AMD and Intel don't make PCs, whether desktops or laptops. None of that stops them from designing an SoC with on-board memory - it would just force OEMs to adjust their designs.
They say that, but in reality it's more "let's build a server die and sell the best-binned CPUs to mobile."
Really? Intel launched Ice Lake for servers this spring. Ice Lake in mobile launched in mid-2019. While these aren't the same architecture in every way, there is a reason they share a name, and the server chip was literally two years later than the mobile chip. Also, there is zero overlap between Intel's server and laptop chips, as they use completely different die designs.
You have to perform better to ask for more. See Apple with the M1 Pro and Max. If it performs better, people will pay more for an ARM design.
On the Apple side that is relatively easy, as they control the OS and much of the software stack, and also built an incredibly robust backwards compatibility system through emulation. ARM, QC or anyone else would need close cooperation with MS to do the same. And so far, these efforts have failed. You are right to a certain point, but it's a chicken-and-egg problem: nobody buys ARM PCs because they underperform; nobody builds high-performance ARM PCs because there's no demand for them. The only solution towards fixing this is a significant R&D investment.
The point is that the ARM ISA loses most of its "RISC" benefits as the cores grow bigger and more complex. As cores grow bigger and bigger, the ISA becomes a much smaller factor in overall performance. I'm not saying it would impact one more than the other; I'm saying the ISA becomes less and less relevant the more you push performance.
Less, sure. But Apple's efficiency advantage still shows that there's a lot of room for movement. Sure, they have a huge die and a node advantage, but they're still matching ~5GHz x86 performance at 3.2GHz and half or less than half the power. Regardless of other factors, that speaks to a highly efficient design. Whether that can be matched by an x86 design? We'll have to wait and see.
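For what it's worth, the per-clock gap implied by those figures falls out of a one-line division (the clock and power numbers are the ones quoted above; treating performance as IPC × frequency is a simplification):

# Per-clock throughput implied by "same performance at a lower clock".
# Clock and power figures are the ones mentioned above; performance = IPC * frequency
# is a simplification that ignores memory behaviour, boost algorithms, etc.

x86_clock_ghz = 5.0
m1_clock_ghz = 3.2

# Equal performance => ipc_m1 * 3.2 == ipc_x86 * 5.0
ipc_advantage = x86_clock_ghz / m1_clock_ghz
print(f"Implied per-clock advantage: ~{ipc_advantage:.2f}x")   # ~1.56x

# If that same performance also comes at half the power (as argued above),
# the perf/W gap is at least ~2x on top of whatever the node contributes.
power_ratio = 2.0
print(f"Implied perf/W advantage: ~{power_ratio:.1f}x or more")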
Everyone could do a high-performance architecture using x86 (if they have the license, of course), ARM or RISC-V. But getting there is hard.
That statement is literally a contradiction. If everyone could make one, then it would by definition not be hard. The difficulty of doing so is precisely why everyone isn't doing that.
My point there is that ARM had it easy over the last decade, as AMD was struggling and Intel was milking the market. If both companies had still been in fierce competition, ARM would probably have had it much harder. That doesn't mean Apple wouldn't have been able to get an amazing design out.
You might be right - we'll see. IMO, ARM's main advantage has been the growth of the mobile market, not x86 stagnation, as they mostly haven't been competing in the same markets to begin with.
Well, an installer could choose to only install the specific binary. Binaries are fairly small; I don't see this as a big deal. People fail to realise that x64 code takes double the space of 32-bit code, and nobody freaked out about that. It's also how Apple is doing things these days, and their M1 laptops are doing very well.
So you want a dual-ISA CPU with applications that selectively install for one set of cores or the other? What would be the point of that? Does the user have to choose which set of cores they want to install an application for? Will it be automatic? And what about the OS? Seriously, this sounds like a nightmare scenario.
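For reference, the per-ISA selection step itself is the easy part; a minimal sketch (file names and install path are hypothetical, Python just to illustrate the "install only the specific binary" idea from the quote above):

# Minimal sketch of an installer picking the one binary that matches the host ISA.
# File names and the install path are hypothetical; this only illustrates the
# selection step described in the quote above.

import platform
import shutil

BINARIES = {
    "x86_64": "myapp-x86_64",   # hypothetical prebuilt binaries shipped in the package
    "AMD64": "myapp-x86_64",    # Windows reports x86-64 as AMD64
    "aarch64": "myapp-arm64",
    "arm64": "myapp-arm64",     # macOS reports Apple Silicon as arm64
}

def select_binary() -> str:
    machine = platform.machine()
    try:
        return BINARIES[machine]
    except KeyError:
        raise RuntimeError(f"no prebuilt binary for architecture {machine!r}")

def install(dest="/usr/local/bin/myapp"):
    # Copy only the matching binary; the other architectures never touch the disk.
    shutil.copy(select_binary(), dest)

if __name__ == "__main__":
    print("Would install:", select_binary())

The selection isn't the problem - the problem is everything after it: which set of cores the OS schedules onto, and what a dual-ISA system does when it needs both at once.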
That brings us back to the main debate (let's exclude Apple from this, since it's a closed system): if the ARM ecosystem doesn't release designs that match or outperform x64 offerings, they won't penetrate the desktop/mobile market.
That is somewhat true. At least they need to be competitive, and to deliver adequate performance. Zen1 showed us what kind of effect that can have.
They will also have a hard time in the server market. They may have initiatives like Amazon's CPUs and so on, but in the end, people will prefer whatever performs best.
They're doing decently in the server market currently, both in gaining market share and absolute performance. Look at the Altra Q80 results.

Qualcomm said in the early 2010s that ARM would be the future of servers, and that went precisely nowhere.
That is very much not true. They had a couple of false starts, but ARM servers are doing quite well these days. Look at the link directly above. Their market share is still tiny, but that's expected, as gaining ground in the server market takes a long, long time.
Turns out, if you don't have decades of software to back you up, you at least need jaw-dropping performance, which ARM also couldn't deliver: at x86 performance levels it was drawing as much power or more. We see similar behaviour from Apple's M series; despite having 5nm on their side, gigantic caches and all this vertical integration, they are only slightly faster and just as power-hungry as Zen 2-based 4000-series APUs.
While the M1 Pro and Max can scale to ~100W power draws under heavy combined CPU and GPU loads, I've seen nothing to indicate that they aren't more efficient than 4000-series APUs. Got any concrete data to back up that statement? At least in ST performance they match the fastest x86 cores while consuming ~10-11W (absolute highest is 14.5W), while in MT performance they match or (often drastically) beat 35/45W AMD/Intel mobile chips (note three links in this sentence). With MT loads never exceeding 45W, it's pretty safe to say the M1 series is significantly ahead in efficiency.

Pair ARM with an FPGA and use the FPGA hardware to emulate the harder x86 stuff that ARM struggles with. Looking at it further, FPGAs are what will bridge the gap between x86, ARM and other chips. The FPGA is the programmable fabric, if you will, that can connect and tie different chip hardware together in various ways. I firmly believe in the future of FPGA tech; it's just a matter of stronger programmable hardware and the software to go alongside it.

Some of the stuff Xilinx is doing now is interesting. It'll help make FPGAs more user-friendly and accessible without having to be a coder and an engineer at the same time; "GameMaker for FPGAs" is the best way to describe what they are starting to do with developer boards and building blocks. It's not really at that point yet, but give it time, and using FPGAs exactly the way you intend for a given purpose is going to become easier and simpler. It's like Lego blocks with circuits, so just imagine the fun in the right hands with the right vision.
FPGAs are nowhere near ASICs in power efficiency though, so if you're using some sort of FPGA-based acceleration for your x86 emulator that is likely going to get quite power hungry. Which rather undermines the proposed ARM advantage in the first place. FPGAs have tons and tons of cool use cases, but wide-scale general consumer applications are not among them - ASICs will always be a better fit there. Also, FPGA-accelerated x86 emulation sounds like a recipe for some rather heavy litigation.
 
It has been well over a decade. Apple's first completely in-house SoC was the A4, which was in products released in 2010. Designing a chip like that takes years; Apple has probably been designing SoCs for at least 15 years now.
I have a feeling this has more to do with the manufacturing process used than with the architecture or execution pipeline. M1 density is a little over 130 MTr/mm², while Zen 3 is 62 MTr/mm². TSMC's official word is that N5 is 80% more dense than N7. These official specs have never really been reached like this in previous generations, especially with logic. Also, Zen 3 should contain more (dense) memory than M1. This makes the 110% higher density... weird. I bet Apple is using the high-density/low-power variation of N5, not the high-performance one.
Why did Apple only get 130 MTr/mm²? Is TSMC's SRAM scaling that bad, that they ended up at ~76% of the claimed density of 171 MTr/mm²?
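The arithmetic behind those two comparisons, using only the density figures quoted above:

# Density arithmetic using the figures quoted above (all in MTr/mm^2).

m1_density = 130.0    # Apple M1 on N5, as quoted
zen3_density = 62.0   # Zen 3 on N7, as quoted
n5_claimed = 171.0    # TSMC's headline N5 logic density, as quoted

m1_vs_zen3 = m1_density / zen3_density
print(f"M1 vs Zen 3: {m1_vs_zen3:.2f}x (~{(m1_vs_zen3 - 1) * 100:.0f}% higher)")   # ~2.10x, ~110% higher

m1_vs_claim = m1_density / n5_claimed
print(f"M1 vs the N5 headline figure: ~{m1_vs_claim * 100:.0f}% of the claimed maximum")   # ~76%

Real designs never hit the headline logic figure, since SRAM and analogue blocks scale worse than logic, which is likely most of the gap being asked about here.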

Look at AMD's RDNA2 and how its clock speeds compare to Nvidia's Ampere. Look at the jump AMD made, then look at the node each of them uses.
AMD did fine clock gating on the architecture to get those high frequencies... compare RDNA 1's 5700 XT at ~1900 MHz with the 6700 XT at ~2400 MHz.
 