
RISC-V Processor Achieves 5 GHz Frequency at Just 1 Watt of Power

Thanks for the info, but I just wanted to know what the difference between ARM and RISC-V is. From what I knew up until now, the difference was that RISC-V is open source while ARM is licensed. Both are based on RISC instructions, but RISC-V gives more custom design possibilities, unlike the licensed ARM. I know these are derivatives of the same philosophy, like you mentioned.
RISC-V is much more sparse and more customizable. It has mostly just the bare minimum, designed to make it easy to add anything people may need. The basic design is not that far off from being an ALU with some glue logic.

RISC-V is not going to compete with your x86 desktop CPU, despite some news sites and "experts" on YouTube claiming so. Meanwhile, future x86 will continue adding CISC features like more efficient operations and SIMD.

But.. Does it run Crysis? :D
Probably at about 60 SPF.

RISC is a general term, like "automobile" or "vegetable". It only means that a CPU uses a reduced instruction set, which nowadays has very little meaning.
And all x86 designs since the mid-90s have been using micro-operations, combining the best from RISC and CISC. Plus ARM has added a lot of CISC-like features, so the distinction certainly makes little sense anymore.
Most people have missed that the RISC/CISC argument is actually not about ARM vs. x86, but rather about the specialized complex designs from the 70s. I always cringe when articles dig up these decades-old arguments and try to apply them to modern CPU designs.
RISC architecture is gonna change everything, you know.

It matters a lot. One of the major reasons RISC-V is gaining traction is that it's not overburdened by legacy feature support, which makes it simpler and more efficient than ARM or MIPS.
It's mostly about being as customizable as possible, not "legacy". Even modern x86 designs are not hampered by "legacy" like most people seem to think.

Aren't WD's in-house SSD controllers the most high-profile consumer RISC-V design out there? Or is there something else that I've missed?
This is exactly the purpose of RISC-V; a small, flexible ISA which can be easily adapted to any specialized purpose, like controllers, GPU schedulers, etc.
 
Most people have missed that the RISC/CISC argument is actually not about ARM vs. x86, but rather about the specialized complex designs from the 70s. I always cringe when articles dig up these decades-old arguments and try to apply them to modern CPU designs.

Honestly, the "modern" architectures that should be debated are:

1. "Traditional" CPUs: Branch-predicted, out-of-order, pipelined, superscalar cores -- ARM, POWER9 / POWER10, RISC-V, x86.

2. SIMD -- NVIDIA Ampere, AMD Navi / GCN

3. VLIW -- Apple Neural Engine, Qualcomm Hexagon, Xilinx AI-engine

4. Systolic Engines -- NVIDIA "Tensor Cores", Google TPUs, "FPGAs"

I expect that most processors today fall into one of these 4 categories, maybe two or even three of them. (Intel Skylake is traditional + SIMD. NVIDIA Ampere is SIMD + Systolic. Xilinx AI Engine is VLIW + SIMD + Systolic.)

Apple M1 is just a really big traditional (branch-predicted / out-of-order / pipelined / superscalar) core. It's a non-standard configuration, but the performance benefits are pretty well known and decently studied at this point.
 
Ehhhh... just really M1 and A64FX.
And ThunderX, and upcoming Ampere, and AWS Graviton, and whatever Microsoft and Qualcomm cooked up last year, etc.
Baby steps, but in the right direction.
 
"x86 and legacy garbage instructions"

Let's say ARM or RISC-V achieves similar market share across servers/desktops, which will take at least 10-15 years. That's enough time for enterprise/professional software to come to rely on its own "legacy garbage instructions", leaving any new ISA in the same place where x86 is today.
RISC was great for specialized environments in the 90s, but a "generic user" on a RISC CPU today will need dedicated fixed-function accelerators for video, audio, AI, compute, encryption, compression, graphics, and maybe more in a few years.
All that fixed-function hardware WILL NOT work for anything outside its purpose, and when the CPU has no specialized instructions either, you are forced to upgrade/ditch old hardware.

Can you run an old iOS/MacOS/Android on a new smartphone?

That's a great opportunity to sell different hardware for different needs, in a world where needs keep increasing and diverging every 2 years.
A world brought to you by Apple, and every other company's wet dream: one where old software does not work on newer hardware.
B-B-BUT EMULATORS!!!
Meanwhile on x86 you can run anything you like - natively - because of that "legacy garbage".
 
ARM and RISC-V are both RISC and use the same core ISA

RISC is a design philosophy not an ISA.

Can't you just run a Saturn emulator on most modern phones? I have more experience with SNES ones, but it shouldn't be harder than that.

IIRC the Saturn didn't use triangle-based geometry, but quads, a weird alternative that might make emulation tricky. That quad approach was also shared by one of NVIDIA's first chips, the NV1, IIRC.
 
These ARM CPUs are evolving so damn fast. Saying "ARM is the future" may not necessarily be overrated after all.
ARM and RISC-V are both RISC and use the same core ISA, but each one can extend it. At least that's what I thought it was.
In case this was not made clear by other users, RISC-V is NOT the same as ARM. Just FYI there...
ARM stands for Advanced RISC Machine.
Actually it was "Acorn RISC Machine", again, just FYI.
RISC-V is not going to compete with your x86 desktop CPU, despite some news sites and "experts" on YouTube claiming so.
Not quite yet, but it's getting there. ARM SoCs are getting to the point of being "desktop replacement" grade, for example Apple's M1. RISC-V, if done right, can potentially make for a solid competitor in the mobile & desktop markets.

RISC is a design philosophy not an ISA.
Correct.
 
Meanwhile on x86 you can run anything you like - natively - because of that "legacy garbage".
Oh yeah... of course you can.... :banghead:
And after Intel stops supporting CSM and AMD follows a year or two after, you'll "can" even more.

All that fixed-function hardware WILL NOT work for anything outside its purpose, and when the CPU has no specialized instructions either, you are forced to upgrade/ditch old hardware.
We are already at that point, and in most cases it's not hardware but software that's the limiting factor (an artificial one, mind you). Just look at our current situation with Windows and Linux: wanna run old Linux software - an AppImage, a container or a VM is your best friend (unless you wanna break something else with old dependencies); wanna run an old game - use DOSBox or borrow your grandpa's PC; wanna use ancient CAD software - make a VM and install XP on it. Etc., etc. Especially in the govt. segment it's the norm to maintain old hardware just to be able to run old software, until the point of no return.
Also, radical hardware changes don't happen that often, so, let's say, by the time RISC-VI rolls out, it'll probably be powerful enough to emulate RISC-V in software.
 
Let's say ARM or RISC-V achieves similar market share across servers/desktops, which will take at least 10-15 years. That's enough time for enterprise/professional software to come to rely on its own "legacy garbage instructions", leaving any new ISA in the same place where x86 is today.
Interestingly, ARM is already 35 years old.
The "problem" of "legacy garbage instructions" is yet another myth. Modern x86 microarchitectures use their own micro-operations which are optimized for the currently relevant features, and legacy instructions are translated into those, so the designs don't really suffer from this legacy support, only a tiny overhead in the CPU front-end to translate it.
One example: modern desktop CPUs from Intel and AMD don't have standalone scalar FPUs, they only have vector units. So they convert scalar floating-point instructions, MMX, and SSE into AVX operations and run everything through the AVX units.
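A minimal sketch of the same point from the compiler side (add_scalar is just an illustrative name): even plain scalar C source turns into instructions that execute on the vector units:

/* Plain scalar code: no vectors anywhere in the source. */
float add_scalar(float a, float b) {
    return a + b;   /* with -mavx, GCC/Clang emit vaddss: the VEX/AVX-encoded
                       scalar add, which executes on the same vector units
                       that run full 256-bit AVX instructions */
}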

RISC was great for specialized environments in the 90s, but a "generic user" on a RISC CPU today will need dedicated fixed-function accelerators for video, audio, AI, compute, encryption, compression, graphics, and maybe more in a few years.

All that fixed-function hardware WILL NOT work for anything outside its purpose, and when the CPU has no specialized instructions either, you are forced to upgrade/ditch old hardware.
Yeah, these application-specific instructions are a mess: they require low-level code to be written for each ISA variant, and then they quickly become obsolete. They may be a necessity for low-power appliances, but a desktop computer should rather have much more generic performance, performance you can leverage for future software, codecs, etc.
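A hedged sketch of that per-ISA mess (crc32_step is an illustrative name, not a real library API): the same one-step CRC32-C update needs separate low-level code on x86 and ARM, plus a slow portable fallback for CPUs with neither extension:

#include <stdint.h>

#if defined(__SSE4_2__)
  #include <nmmintrin.h>                       /* x86: SSE4.2 CRC32 instruction */
  static uint32_t crc32_step(uint32_t crc, uint32_t data) {
      return _mm_crc32_u32(crc, data);
  }
#elif defined(__ARM_FEATURE_CRC32)
  #include <arm_acle.h>                        /* ARMv8: optional CRC32 extension */
  static uint32_t crc32_step(uint32_t crc, uint32_t data) {
      return __crc32cw(crc, data);             /* the CRC32-C variant */
  }
#else
  /* Portable bit-by-bit fallback (CRC32-C polynomial), much slower. */
  static uint32_t crc32_step(uint32_t crc, uint32_t data) {
      crc ^= data;
      for (int i = 0; i < 32; i++)
          crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
      return crc;
  }
#endif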

"Pure" RISC designs will ultimately fail when it comes to performance. The claimed advantage was a smaller RISC design could compete with a larger CISC design by running at a higher clock speed, and the lower die size offering lower costs. But performance today scales towards cache misses, a single one costs ~400-500 clocks for Skylake/Zen. But even if you could push your RISC design far beyond 5 GHz, you will eventually get to a point where you can no longer offset the performance lost to extra cache misses by just boosting the clock speed.

Not quite yet, but it's getting there. ARM SoCs are getting to the point of being "desktop replacement" grade, for example Apple's M1. RISC-V, if done right, can potentially make for a solid competitor in the mobile & desktop markets.
In order to close the gap, ARM needs to keep adding comparable CISC-style features. Current ARM designs rely heavily on application-specific instructions to be "competitive", so don't trust benchmarks like Geekbench to show generic performance. RISC-V will not get anywhere close; it lacks all kinds of "complex" instructions. Let's take one example: instructions like CMOV may look insignificant, but they eliminate a lot of branching, which in turn means fewer branch mispredictions, fewer cache misses and improved cache efficiency. We've had this one since the Pentium Pro, and many such instructions are essential to achieve the performance and responsiveness we expect from a modern desktop computer. A pure RISC design lacks such features, and no amount of clock speed can compensate for features that reduce cache misses.
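A minimal example of the kind of code CMOV helps with; with -O2 on x86-64, GCC and Clang typically compile both conditionals here to cmov instructions, so the function runs as straight-line code with nothing to mispredict:

/* Tiny conditional logic that becomes branch-free machine code. */
int clamp(int x, int lo, int hi) {
    if (x < lo) x = lo;   /* typically a cmovl, not a branch */
    if (x > hi) x = hi;   /* typically a cmovg: zero branches emitted */
    return x;
}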

Additionally, most x86 software is actually not compiled for an up-to-date ISA (while ARM software often is fairly up to date). A lot of the software on your computer, including your OS, drivers and many of your applications, is compiled for x86-64 and SSE2, so 17 years "behind". This is at least about to change for GCC and LLVM, for the purpose of compiling Linux with much greater performance. Hopefully MS will soon follow and unlock this "free" potential.
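For illustration, here is the sort of "free" performance at stake, assuming a recent GCC or Clang (saxpy is just an example name). The same source gets faster when the compiler is allowed to assume a newer baseline:

/* Build it two ways and compare the generated code:
 *   gcc -O3 saxpy.c                     x86-64 baseline: SSE2, 2003-era
 *   gcc -O3 -march=x86-64-v3 saxpy.c    the v3 feature level: AVX2 + FMA
 */
void saxpy(float a, const float *x, float *y, int n) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];   /* auto-vectorized; with -v3 it becomes
                                     256-bit fused multiply-adds */
}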
 
Meanwhile on x86 you can run anything you like - natively - because of that "legacy garbage".

Except anything compiled for anything else.

I used to emulate Mac OS X on x86 back in the PowerPC days. It was doggone slow.

Hopefully MS will soon follow and unlock this "free" potential.

That only works in an open-source context where you can expect recompiles. AVX2 has been supported in the Windows core since Windows 7 SP1 IIRC, but apps have not followed suit.
 
Someone build me a RISC-V implementation of the 65C102 microprocessor. And if you don't know what that is - look up the 6502 on Wikipedia. The LEGEND of a CPU that was the basis on which Acorn engineers went on to kick off ARM in Cambridge back in the 1980s.

Then, I will try to find all my cassette tapes and 100K floppy disks of 6502 assembly code and run them again!
 
That only works in an open-source context where you can expect recompiles. AVX2 has been supported in the Windows core since Windows 7 SP1 IIRC, but apps have not followed suit.

AVX2 was first supported by Haswell in 2013. Maybe you're talking about AVX, which was a bit earlier.

But AVX2 is still avoided by some application programmers because 2012-era CPUs are still kinda common (the venerable i7-2600K is still popular and kicking around in many people's builds today). AVX (the first one) was first supported by Sandy Bridge (i7-2600K), which was 2011. Probably safe to use today, but some people do run 10+ year old computers without that feature.

Worse still: the i3, "Pentium" and "Celeron" chips, as well as "Atoms", never supported AVX. So someone with an Atom from 2015 won't be able to run AVX or AVX2 code. "Open source" code which tries to run on multiple platforms only goes up to 128-bit vectors (ARM NEON is only 128-bit wide), and therefore SSE (also a 128-bit instruction set) is the ideal for compatibility. Even the Apple M1 is still 128-bit wide in its SIMD units.
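A minimal sketch of that 128-bit common denominator (add4 is an illustrative name): SSE code like this runs on essentially every x86-64 CPU ever made, including the AVX-less Atoms/Pentiums above, and maps one-to-one onto NEON's 128-bit registers:

#include <xmmintrin.h>   /* SSE: part of the x86-64 baseline */

void add4(const float *a, const float *b, float *out) {
    __m128 va = _mm_loadu_ps(a);             /* load 4 floats, unaligned OK */
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_add_ps(va, vb));  /* 4 adds in one instruction */
}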
 
AVX2 was first supported by Haswell in 2013. Maybe you're talking about AVX, which was a bit earlier.

But AVX2 is still avoided by some application programmers because 2012-era CPUs are still kinda common (the venerable i7-2600K is still popular and kicking around in many people's builds today).

AVX (the first one) was first supported by Sandy Bridge (i7-2600K), which was 2011. Probably safe to use today, but some people do run 10+ year old computers without that feature.

Perhaps. I don't know if it was AVX or AVX2, but I do know that unlike (old?) GCC, the MSVC compiler can choose the ideal code path for the hardware without precompiling for a single target.

Meaning you can target AVX and, if it is not present, it will fall back to legacy code.
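For what it's worth, the GCC/Clang way to get that fallback behaviour looks like the sketch below (MSVC would need its own mechanism, e.g. __cpuid plus a function pointer; the add_arrays_* names are illustrative, not a real API):

#include <stddef.h>

static void add_arrays_basic(float *dst, const float *src, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] += src[i];                  /* baseline x86-64 (SSE2) code */
}

__attribute__((target("avx2")))            /* this function compiled with AVX2 on */
static void add_arrays_avx2(float *dst, const float *src, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] += src[i];                  /* same C; may use 256-bit registers */
}

void add_arrays(float *dst, const float *src, size_t n) {
    if (__builtin_cpu_supports("avx2"))    /* runtime CPUID-backed check */
        add_arrays_avx2(dst, src, n);
    else
        add_arrays_basic(dst, src, n);     /* legacy path for pre-2013 CPUs */
}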
 
AVX (the first one) was first supported by Sandy Bridge (i7-2600K), which was 2011. Probably safe to use today, but some people do run 10+ year old computers without that feature.
It depends; a productivity application can certainly make AVX2 a minimum requirement, but it can be a tough decision for more mainstream applications. I believe shipping both a "current" and a "legacy" version of an application is an easy option, just like most software has been shipping a 32-bit version for ages. Hopefully the use of the new x86-64 feature levels in upcoming Linux distros will showcase how easily additional performance can be gained for "free". As you know, many applications will benefit just from a recompilation, even if no low-level intrinsics are applied.
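As a sketch, "shipping both versions" can even happen inside a single binary, assuming GCC or a recent Clang on Linux (scale is an illustrative name): function multi-versioning makes the compiler emit an AVX2 clone and a baseline clone, and the loader picks one at startup:

__attribute__((target_clones("avx2", "default")))
void scale(float *v, int n, float f) {
    for (int i = 0; i < n; i++)
        v[i] *= f;   /* the "avx2" clone may auto-vectorize to 256-bit ops;
                        the "default" clone sticks to the SSE2 baseline */
}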

Worse still: the i3, "Pentium" and "Celeron" chips, as well as "Atoms", never supported AVX. So someone with an Atom from 2015 won't be able to run AVX or AVX2 code.
Sadly, the horrible Celerons and Pentiums have been lacking AVX (until the Tiger Lake ones arrive), but at least the i3s support AVX2/FMA.
Such CPUs can be found all over offices, schools, etc.
 