Thursday, December 1st 2011

The Move Away From x86 To ARM Processors On The Desktop To Start Soon - Survey

It looks like there's a subtle but relentless push to get ARM CPUs into desktop PCs. Morgan Stanley recently surveyed 30 PC makers (names not revealed) and discovered that 40% of them are interested in trying out ARM-based PCs within the next two years. We previously reported that the Wintel alliance appears to be crumbling, and this finding adds weight to that assertion. Of course, there's a huge mountain to climb before ARM processors can compete head to head with high-performance x86, as explained in our article: not least because Microsoft won't begin supporting ARM until Windows 8 is released late next year, and because the vast majority of existing software won't run on ARM. A real catch-22 if ever there was one. Just as crucially, the many high-performance enhancements and interface standards that currently make a modern x86 chip fly will also have to go into an ARM design - and developing that won't be cheap, although it may not take long, since these are tried and trusted technologies waiting to be applied. Still, the interest is there, and Morgan Stanley expect that 10% of PCs (39 million), excluding tablets, will have an ARM processor at their heart. If true, it will make for interesting times.
Source: Focus Taiwan

37 Comments on The Move Away From x86 To ARM Processors On The Desktop To Start Soon - Survey

#26
qubit
Overclocked quantum bit
erocker: How so?
Just compare the data sheets - I did this years ago and they were very interesting. :) But just a few of the points I can remember off the top of my head are:

- All instructions are the same length, i.e. 32-bit (and the upcoming 64-bit version keeps a fixed 32-bit instruction length)
- Multiple registers to work with. The ARM2 I'm familiar with had 16, modern ARMs might have doubled that now
- Load/store architecture
- Pipelined from the very first version, so instructions effectively complete in one clock cycle. To clarify, each instruction took 3 clock cycles, but the short pipeline overlapped them so the effective throughput was one per cycle, making the CPU very fast, as instructions per clock (IPC) was very good
- Load/store multiple instructions that transfer several registers in one go. These take several cycles, but each word of data moves in a single cycle
- All instructions are conditional, which makes if-then decisions very fast - branching, or deciding whether or not an ADD instruction should execute according to a condition flag, say (see the sketch below). The unexecuted instruction still takes a clock cycle
- No indirect addressing modes to complicate the instruction decode. All data processing is done between registers
- No internal microcode required, as the decode logic is relatively simple. Microcode slows down a processor significantly. Instructions are hard-wired instead, making them much faster to execute
- Flexible memory addressing modes enhance efficiency further, allowing one simple instruction to perform more complex tasks than it otherwise could
- This architecture lends itself well to superscalar processing (more than one instruction per clock cycle). PowerPC already did this years ago

The above are typical traits of the RISC design philosophy and you would expect them to appear in RISC processors in general, especially the fixed length instructions and load/store architecture.
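To make the conditional-execution point concrete, here's a tiny C sketch (purely illustrative - the function and the exact instructions a compiler emits are just an example, not taken from any data sheet):

/* Illustrative only: on a classic 32-bit ARM the compiler can turn this
 * whole if into predicated instructions, e.g. CMP r0, #0 followed by
 * RSBLT r0, r0, #0 (reverse-subtract if less-than), so no branch is taken
 * at all. A traditional CISC compiler would typically emit a compare and a
 * conditional jump instead. */
int abs_val(int x)
{
    if (x < 0)
        x = -x;   /* on ARM the negate is simply skipped (still costing one
                     cycle) when the flags say x >= 0 - no pipeline flush */
    return x;
}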

Actually, that was rather more info than I thought I'd remember. :cool:
erocker: In what ways is it inefficient? How would you improve the x86 instruction set to make it more efficient?
x86 is more or less the opposite of what I described above. It's a typical CISC design, intended to pack in the highest code density and the most complex instructions possible, given the tiny amount of memory that computers had back in the late '70s.

- Variable-length instructions, from a single byte up to 15 bytes
- Indirect addressing modes

The above two especially make the decode logic very complex and hard to streamline

- Microcoded to deal with the above complexity, which slows the processor down significantly compared to a hard-wired design
- Small number of registers (eight general-purpose in 32-bit x86, and several of those have special roles)

x64 improves on these, but it's still a CISC design. You can see how the RISC design is more efficient from two facts about all modern x86 processors:
- Modern x86 processors have, for the last decade or so, broken the CISC instructions down into RISC-like ones internally to help speed them up (sketched below). As Byte magazine (remember them?!) said of x86 around 1990: CISC won by stealing RISC's clothes. And by "won" they meant becoming the dominant architecture commercially, not winning on technical merit. You can bet your boots that Intel didn't want to lose its position as the near-sole manufacturer of x86 processors, so it put all its corporate might behind the architecture to make sure it would succeed
- The SSE instruction set extension looks more like RISC. Again, I'm not familiar with the fine details, but that's what I remember from an article I read about it some time ago
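As a rough sketch of that first point (illustrative only - real compiler output varies with flags and CPU), here's what one C statement turns into on each side:

/* Sketch: the same statement under the two design philosophies above. */
void add_k(int *a, int n, int k)
{
    for (int i = 0; i < n; i++) {
        /* x86 (CISC): the compiler may emit a single read-modify-write
         * instruction such as "add [mem], reg"; a modern x86 core then
         * cracks it internally into load / add / store micro-ops.
         * ARM (RISC): the load/store split is explicit in the ISA, so this
         * becomes a load, a register-to-register add and a store - three
         * simple, fixed-length instructions that are easy to decode and
         * pipeline. */
        a[i] += k;
    }
}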

I'm sure there are other things, but this is all I can remember off the top of my head. Having come from an Acorn background, where ARM originated, I'm much more familiar with the technical details of the ARM than of the x86.
Wile E: Its performance won't be any different from current CPUs. It will just do things differently, requiring everybody to port or recode their apps. Nothing scales 100% in the computer world.
Yeah, it will be better, see my answer to erocker above for why.

App portability is part of that mountain to climb to gain acceptance, which I've discussed before. However, the point I'm making here is about the raw performance of an ARM processor with that turbocharging applied.
Posted on Reply
#27
Wile E
Power User
That's all irrelevant with modern compilers, unless you are coding in assembly.
Posted on Reply
#28
FordGT90Concept
"I go fast!1!11!1!"
qubit: Without all those issues being addressed, I'd agree. Yet, there's interest, so one has to ask why that's so.
Tablet computers. ARM offers huge advantages in power requirements and legacy software support isn't a concern. ARM in a desktop computer is kind of like Windows trying to break into the cellphone market. I can see it now:

Customer: That computer costs $200 less. I want that one!
Sales Rep: If you get that one, virtually all the programs you use now won't work. I can't recommend it.
Customer: Oh, okay...

ARM desktops might be offered, but few except the most knowledgeable will be inclined to buy them. I can't see them taking up much more than 15% of the market share if you exclude tablets.


I apparently can't emphasize this enough: everything Microsoft + ARM is doing has to do with tablet computers and desktop/tablet hybrids (all-in-one computers). It has nothing to do with mainstream desktops.
Wile E: That's all irrelevant with modern compilers, unless you are coding in assembly.
Microsoft has moved to the .NET platform, and the soonest possible support for ARM is likely in Visual Studio 2012 with .NET Framework 5.0. All applications made for 4.0 and below will not be compatible without getting Visual Studio 2012 and recompiling. The same dilemma faces virtually all other applications/code, but with even more complexity (native code has to deal with the lack of x86 instructions).
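To illustrate the native-code side of that dilemma, here's a hypothetical snippet (not from any real application): anything written against x86 SSE intrinsics like this won't even compile for ARM - it has to be rewritten (against NEON, say) and the whole program rebuilt.

#include <xmmintrin.h>   /* SSE intrinsics - x86/x64 only */

/* Adds four floats at a time using one SSE instruction. On ARM this header
 * and these intrinsics don't exist, so the code has to be ported. */
void add4(float *dst, const float *a, const float *b)
{
    __m128 va = _mm_loadu_ps(a);             /* load 4 packed floats */
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(dst, _mm_add_ps(va, vb));  /* 4 additions in one go */
}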

Most developers aren't going to recompile, simply because they are not invested in ARM or its success. Instead, ARM-based computers are going to be focused on productivity software, email, internet, and cloud computing. These are not systems you're going to run developer software, CAD programs, games, or anything more than basic, basic stuff on.


Bottom line: ARM poses very, very little threat to x86 on mainstream desktop computers looking at least 10 years into the future.
Posted on Reply
#29
robal
Wile E: That's all irrelevant with modern compilers, unless you are coding in assembly.
I think Qubit's logic still holds, even with modern compilers.
You'll simply get more CPU "muscle" per watt, per transistor, per die area, per $ in general.

Cheers,
Posted on Reply
#30
pr0n Inspector
Whoa, RISC vs. CISC? It's like I went back to the 80s. Jesus.
Posted on Reply
#31
damric
It's not that ARM is going to take over PCs, it's that tablets and smartphones are making PCs (desktops and laptops) less important for general use.

You can surf the web, watch videos, and even do some office productivity on these ARM-powered devices. I'm still blown away by my wife's Le Pan tablet. Just 5 years ago I never would have imagined such a powerful device was possible, and for only $200. It's so thin and light, and it's practically all screen (and a 10" screen too). It has about the same horsepower as one of my first Windows XP desktops from ~2001. I wish I'd had one of these when I was a field service technician.

Tegra 3 looks like it is narrowing the performance gap even more.
Posted on Reply
#32
qubit
Overclocked quantum bit
Wile E: That's all irrelevant with modern compilers, unless you are coding in assembly.
Rubbish. A compiler cannot get more out of an architecture than it has. If one is superior, that one will win. In this case ARM is superior, as I've explained in detail above.
FordGT90Concept: Tablet computers. ARM offers huge advantages in power requirements and legacy software support isn't a concern. ARM in a desktop computer is kind of like Windows trying to break into the cellphone market. I can see it now:

Customer: That computer costs $200 less. I want that one!
Sales Rep: If you get that one, virtually all the programs you use now won't work. I can't recommend it.
Customer: Oh, okay...

ARM desktops might be offered, but few except the most knowledgeable will be inclined to buy them. I can't see them taking up much more than 15% of the market share if you exclude tablets.


I apparently can't emphasize this enough: everything Microsoft + ARM is doing has to do with tablet computers and desktop/tablet hybrids (all-in-one computers). It has nothing to do with mainstream desktops.
Your scenario of the customer in the shop is very apt. It is indeed one of the major hurdles that ARM has to overcome, as I've mentioned in various posts now. Along with the physical development of a supercharged ARM and other improvements, such as an industry-standards-compliant chipset, it's why I reckon it'll take 5-10 years to seriously challenge x86. Perhaps it never will, who knows? In the end, none of us can predict the future, so all we can do is make educated guesses with the information we have now.

Oh and I like your golden rule of programming to never assume. How true and just how many times did I get caught out by it! :laugh:

I remember doing some ARM assembly programming on my Acorn Archimedes around 1990. It was a little snippet of code whose function was to calculate something, I forget what. Anyway, it reliably produced the correct result on my 2MB RAM Archimedes (yes, that's 2 megabytes, hehe), but on my friend's 4MB Archimedes it would spew out random junk. Took me hours to figure out, and in the end it turned out that I hadn't referenced some memory properly with the pointers in the assembler. A real obvious, embarrassing facepalm moment when I realized what I'd done. Somehow the code just happened to work on a 2 meg machine, but of course fell over on the 4 meg one, because the page size was twice as big. I felt like such a muppet at the end of it, lol, but I was able to fix it. Yeah, never assume. ;)
Posted on Reply
#33
Steevo
I posted performance comparisons that leveled the playing field as much as possible to compare ARM with Intel and AMD.

In 8/10 tests the X86 blew away the ARM at IPC, and at any sort of advanced app it crapped on its plate while the ARM sat and ate it.

www.brightsideofnews.com/news/2011/5/19/the-coming-war-arm-versus-x86.aspx

An AMD at the same speed was 2-3 times faster than the ARM processor.
Posted on Reply
#34
qubit
Overclocked quantum bit
Steevo: I posted performance comparisons that leveled the playing field as much as possible to compare ARM with Intel and AMD.

In 8/10 tests the X86 blew away the ARM at IPC, and at any sort of advanced app it crapped on its plate while the ARM sat and ate it.

www.brightsideofnews.com/news/2011/5/19/the-coming-war-arm-versus-x86.aspx

An AMD at the same speed was 2-3 times faster than the ARM processor.
Are you Van Smith, then? I see that article is quite old and compares the Cortex-A8. The Cortex-A15 is around twice as quick or more. Of course, it should be compared to the latest Atoms, too.

That Atom consumes way more power, though. It's all in the optimisation, as I said. I don't see why people get so defensive about it. Put the same mojo into both architectures and the inherently leaner, more efficient one will win, which is ARM.
Posted on Reply
#35
Steevo
Leaner and more efficient, like x86 won over Motorola?


No, the golden rule is that the one with more gold makes the rules.


ARM is great for my cellphone, and it is great for a tablet. But to compare it to a full-featured CPU with a GPU built in is simply insanity.


They said the same about how Linux was going to come rip MS a new asshole, and how all the laptops with free Linux versions would make the world realize how bad Microsoft was and how free they could be... blah blah blah, the same recycled crap hippies said.



People will buy whatever is cheapest and works for them, or whatever gets advertised to them the most. That's why people even buy iPads, iPhones, and other gadgets that really do nothing for them.
Posted on Reply
#36
qubit
Overclocked quantum bit
I hate to say it, but I agree with your whole post. It's definitely often not technical merit that wins the day, but dollars and marketing hype, which is a very big shame. You hit the nail on the head with this bit:
Steevo: People will buy whatever is cheapest and works for them, or whatever gets advertised to them the most. That's why people even buy iPads, iPhones, and other gadgets that really do nothing for them.
However, my points in the last few posts are purely about the technical merits of the two architectures. Whether ARM is eventually successful in the desktop space truly "remains to be seen".
Posted on Reply
#37
Wile E
Power User
I just don't think that ARM scaled up to a full-blown desktop chip would be faster than the x86 offerings that are out there.

Nothing I see suggests that, not even in the material you posted. If they could take on x86 CPUs in the desktop market, they would.
Posted on Reply