Wednesday, December 2nd 2020

RISC-V Processor Achieves 5 GHz Frequency at Just 1 Watt of Power

In 2010, researchers at the University of California, Berkeley started an interesting project. They set out to develop a new RISC-like Instruction Set Architecture that would be simple and efficient while remaining open source and royalty-free. Born out of that research was the RISC-V ISA, the fifth iteration of the Reduced Instruction Set Computing (RISC) ideology. Over the years, the RISC-V ISA has become more common, and today many companies are using it to design their processors, releasing new designs every day. One of those companies is Micro Magic Inc., a provider of silicon design tools, IP, and design services. The company has developed a RISC-V processor that is rather interesting.

Apart from using the RISC-V ISA, the processor has an interesting feature: it runs at a whopping 5 GHz, a clock speed unseen on RISC-V chips before, while consuming a mere one (yes, that is 1) Watt. The chip runs at just 1.1 Volts, meaning only a very modest current needs to be supplied for it to reach the 5 GHz mark. If you are wondering about performance, the numbers show that at 5 GHz the CPU produces a score of 13,000 CoreMarks. However, that is not the company's highest-performing RISC-V core. In yesterday's PR, Micro Magic stated that its top-end design can achieve 110,000 CoreMarks/Watt, so we are waiting to hear more details about it.
Source: EE Times

65 Comments on RISC-V Processor Achieves 5 GHz Frequency at Just 1 Watt of Power

#51
dragontamer5788
efikkanMost people have missed that the RISC/CISC argument is actually not about ARM vs. x86, but rather about the specialized complex designs from the 70s. I always cringe when articles dig up these decades-old arguments and try to apply them to modern CPU designs.
Honestly, the "modern" architectures that should be debated are:

1. "Traditional" CPUs: Branch-predicted, out-of-order, pipelined, superscalar cores -- ARM, POWER9 / POWER10, RISC-V, x86.

2. SIMD -- NVidia Ampere, AMD NAVI / GCN

3. VLIW -- Apple Neural Engine, Qualcomm Hexagon, Xilinx AI-engine

4. Systolic Engines -- NVidia "Tensor Cores", Google TPUs, "FPGAs"

I expect that most computers today fall into one of these four categories, and some into two or even three of them. (Intel Skylake is traditional + SIMD. NVidia Ampere is SIMD + systolic. The Xilinx AI Engine is VLIW + SIMD + systolic.)

Apple M1 is just a really big traditional (branch-predicted / out-of-order / pipelined / superscalar) core. It's a non-standard configuration, but the performance benefits are pretty well known and decently studied at this point.
Posted on Reply
#52
silentbogo
dragontamer5788Ehhhh... just really M1 and A64Fx.
And ThunderX, and upcoming Ampere, and AWS Graviton, and whatever Microsoft and Qualcomm cooked up last year, etc.
Baby steps, but in the right direction.
Posted on Reply
#54
marios15
"x86 and legacy garbage instructions"

Let's say ARM or RISC-V achieves similar market share across servers/desktops, which will take at least 10-15 years. That's enough time for enterprise/professional software to come to rely on "legacy garbage instructions" of its own, leaving any new ISA in the same place where x86 is today.
RISC was great for specialized environments in the 90s, but a "generic user" using a RISC CPU today will need dedicated fixed-function accelerators for video, audio, AI, compute, encryption, compression, graphics, and maybe more in a few years.
All that fixed-function hardware WILL NOT work with anything outside its purpose, and when the CPU has no specialized instructions either, you are forced to upgrade/ditch old hardware.

Can you run an old iOS/MacOS/Android on a new smartphone?

That's a great opportunity to sell different hardware for different needs, in a world where needs keep growing and changing every 2 years.
A world brought to you by Apple, and every other company's wet dream, where old software does not work on newer hardware.
B-B-BUT EMULATORS!!!
Meanwhile on x86 you can run anything you like - natively - because of that "legacy garbage".
Posted on Reply
#55
R-T-B
ratirtARM and RISC-V are both RISC and use the same core ISA
RISC is a design philosophy not an ISA.
progsteCan't you just run a Saturn emulator on most modern phones? I have more experience with SNES ones but it shouldn't be harder than that.
IIRC the Saturn didn't use voxel-based geometry, but some weird alternative that might make emulation tricky. Its weird GPU was also one of NVidia's first pet projects, IIRC.
Posted on Reply
#56
lexluthermiester
ratirtThese ARM CPUs are evolving so damn fast. Saying "ARM is the future" may not necessarily be overrated after all.
ratirtARM and RISC-V are both RISC and use the same core ISA but each one can extend it. At least that's what I thought it was.
In case this was not made clear by other users, RISC-V is NOT the same as ARM. Just FYI there...
ratirtARM stands for Advanced RISC Machine.
Actually it was "Acorn RISC Machine", again, just FYI.
efikkanRISC-V is not going to compete with your x86 desktop CPU, despite some news sites and "experts" on YouTube claiming so.
Not quite yet, but it's getting there. ARM SoCs are getting to the point of being "desktop replacement" grade, for example Apple's M1. RISC-V, if done right, can potentially make for a solid competitor in the mobile & desktop markets.
R-T-BRISC is a design philosophy not an ISA.
Correct.
Posted on Reply
#57
silentbogo
marios15Meanwhile on x86 you can run anything you like - natively - because of that "legacy garbage".
Oh yeah... of course you can.... :banghead:
And after Intel stops supporting CSM and AMD follows a year or two later, you'll "can" even more.
marios15All that fixed-function hardware WILL NOT work with anything outside its purpose, and when the CPU has no specialized instructions either, you are forced to upgrade/ditch old hardware.
We are already at that point, and in most cases it's not hardware but software that's the limiting factor (artificial, mind you). Just look at our current situation with Windows and Linux: wanna run old Linux software - an AppImage, a container, or a VM is your best friend (unless you wanna break something else with old dependencies); wanna run an old game - use DOSBox or borrow your grandpa's PC; wanna use ancient CAD software - make a VM and install XP on it. Etc., etc., etc. Especially in the govt. segment it's the norm to maintain old hardware just to be able to run old software, up until the point of no return.
Also, radical hardware changes don't happen that often, so, let's say, by the time RISC-VI rolls out, it'll probably be powerful enough to emulate RISC-V in software.
Posted on Reply
#58
InVasMani
Vya DomusAnd on top of that they're only useful for a bunch of types of problems.
Oh qubit now...we've all got a bunch of problems.
Posted on Reply
#59
efikkan
marios15Let's say ARM or RISC-V achieves similar market share across servers/desktops, which will take at least 10-15 years. That's enough time for enterprise/professional software to come to rely on "legacy garbage instructions" of its own, leaving any new ISA in the same place where x86 is today.
Interestingly, ARM is already 35 years old.
The "problem" of "legacy garbage instructions" is yet another myth. Modern x86 microarchitectures use their own micro-operations, which are optimized for the most relevant current features, and legacy instructions are translated into current ones, so they are not really suffering from this legacy support; there is only a tiny overhead in the CPU front-end to translate it.
One example: modern desktop CPUs from Intel and AMD don't have dedicated scalar FPUs, they only have vector units. So they convert scalar floating-point instructions, MMX, and SSE into AVX operations and run everything through the AVX units.
marios15RISC was great for specialized environments in the 90s, but a "generic user" using a RISC CPU today will need dedicated fixed-function accelerators for video, audio, AI, compute, encryption, compression, graphics, and maybe more in a few years.

All that fixed-function hardware WILL NOT work with anything outside its purpose, and when the CPU has no specialized instructions either, you are forced to upgrade/ditch old hardware.
Yeah, these application-specific instructions are a mess; they require low-level code to be written for each ISA variant, and then they quickly become obsolete. They may be a necessity for low-power appliances, but a desktop computer should rather have much more generic performance: performance you can leverage for future software, codecs, etc.

"Pure" RISC designs will ultimately fail when it comes to performance. The claimed advantage was a smaller RISC design could compete with a larger CISC design by running at a higher clock speed, and the lower die size offering lower costs. But performance today scales towards cache misses, a single one costs ~400-500 clocks for Skylake/Zen. But even if you could push your RISC design far beyond 5 GHz, you will eventually get to a point where you can no longer offset the performance lost to extra cache misses by just boosting the clock speed.
lexluthermiesterNot quite yet, but it's getting there. ARM SoCs are getting to the point of being "desktop replacement" grade, for example Apple's M1. RISC-V, if done right, can potentially make for a solid competitor in the mobile & desktop markets.
In order to close the gap, ARM needs to keep adding comparable CISC-style features. Current ARM designs rely heavily on application-specific instructions to be "competitive", so don't trust benchmarks like Geekbench to show generic performance. RISC-V will not get anywhere close; it lacks all kinds of "complex" instructions. Let's take one example: instructions like CMOV may look insignificant, but they eliminate a lot of branching, which in turn means fewer branch mispredictions, fewer cache misses, and improved cache efficiency. We've had this one since the Pentium Pro, and many such instructions are essential to achieve the performance and responsiveness we expect from a modern desktop computer. A pure RISC design lacks such features, and no amount of clock speed can compensate for features which reduce cache misses.
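
To make the CMOV point concrete, here is a minimal C sketch (the clamp functions are just an illustration, not from any real codebase): the ternary form is the kind of code an optimizing x86 compiler will typically lower to CMOV, so there is no data-dependent branch left to mispredict.

```c
#include <stddef.h>

/* Branchy version: the compiler usually emits conditional jumps, which the
 * branch predictor has to guess. Random input data means mispredictions. */
int clamp_branchy(int x, int lo, int hi)
{
    if (x < lo) x = lo;
    if (x > hi) x = hi;
    return x;
}

/* Branchless version: with optimizations on, x86 compilers typically lower
 * these ternaries to CMOV, so nothing is mispredicted whatever the input. */
int clamp_branchless(int x, int lo, int hi)
{
    x = (x < lo) ? lo : x;
    x = (x > hi) ? hi : x;
    return x;
}

/* Summing clamped values over unpredictable data is the classic case where
 * the CMOV form wins: the branchy form mispredicts often, this one never
 * branches on the data at all. */
long sum_clamped(const int *v, size_t n, int lo, int hi)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += clamp_branchless(v[i], lo, hi);
    return sum;
}
```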

Additionally, most x86 software is actually not compiled using an up-to-date ISA (while ARM software often is fairly up-to-date). A lot of the software on your computer, including your OS, drivers, and many of your applications, is compiled for baseline x86-64 and SSE2, so roughly 17 years "behind". This is at least about to change for GCC and LLVM, which now define x86-64 feature levels so Linux can be compiled with much greater performance. Hopefully MS will soon follow, and unlock this "free" potential.
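
As a rough sketch of what those feature levels mean in practice (assuming GCC 11+ or a recent Clang, which accept the x86-64-v2/v3 level names; the saxpy function is just an illustration):

```c
/* The same C source compiled for different x86-64 feature levels:
 *
 *   gcc -O2 -march=x86-64    -c saxpy.c   # baseline: plain x86-64 + SSE2
 *   gcc -O2 -march=x86-64-v3 -c saxpy.c   # assumes AVX2/FMA-capable CPUs
 *
 * With the v3 level the auto-vectorizer can use 256-bit AVX2 registers and
 * fused multiply-add, doing several times more work per iteration than the
 * SSE2 baseline, without any change to the source code. */
#include <stddef.h>

void saxpy(float *restrict y, const float *restrict x, float a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```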
Posted on Reply
#60
R-T-B
marios15Meanwhile on x86 you can run anything you like - natively - because of that "legacy garbage".
Except anything compiled for anything else.

I used to emulate Mac OS X on x86 back in the PowerPC days. It was doggone slow.
efikkanHopefully MS will soon follow, and unlock this "free" potential.
That only works in an open source context where you can expect recompiles. AVX2 has been supported in the Windows core since Windows 7 SP1 IIRC, but apps have not followed suit.
Posted on Reply
#61
lemonadesoda
Someone build me a RISC-V implementation of the 65C102 microprocessor. And if you don't know what that is - look up 6502 on wiki. The LEGEND of a CPU that was the basis on which Acorn engineers went to kick off ARM in Cambridge back in the 1980s.

Then, I will try to find all my cassette tapes and 100K floppy disks of 6502 assembly code and run them again!
Posted on Reply
#62
dragontamer5788
R-T-BThat only works in an open source context where you can expect recompiles. AVX2 has been supported in the Windows core since Windows 7 SP1 IIRC, but apps have not followed suit.
AVX2 was first supported by Haswell in 2013. Maybe you're talking about AVX, which was a bit earlier.

But AVX2 is still avoided by some application programmers because 2012-era CPUs are still kinda common (The venerable i7-2600k is still popular and kicking around today in many people's builds). AVX (the first one) was first supported by Sandy Bridge (i7-2600k), which was 2011. Probably safe to use today, but some people do run 10+ year old computers without that feature.

Worse still: the i3 and "Pentium" and "Celeron" chips, as well as 'Atoms' never supported AVX. So someone with an Atom from 2015 won't be able to run AVX or AVX2 code. "Open Source" code which tries to run on multiple platforms often only goes up to 128-bit vectors (ARM NEON is only 128-bit wide), and therefore SSE (also a 128-bit instruction set) is the ideal for compatibility. Even the Apple M1 still has 128-bit wide SIMD units.
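
Here's a minimal sketch of why 128-bit is the portable "floor" (the add4 kernel is just an illustration): the same 4-wide operation maps one-to-one onto SSE on x86 and NEON on ARM, while a 256-bit AVX2 version simply has no NEON counterpart.

```c
#include <stddef.h>
#if defined(__SSE__)
  #include <xmmintrin.h>
#elif defined(__ARM_NEON)
  #include <arm_neon.h>
#endif

void add4(float *dst, const float *a, const float *b, size_t n)
{
    size_t i = 0;
#if defined(__SSE__)
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);            /* 128-bit SSE load   */
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(dst + i, _mm_add_ps(va, vb)); /* 4 floats at once   */
    }
#elif defined(__ARM_NEON)
    for (; i + 4 <= n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);          /* 128-bit NEON load  */
        float32x4_t vb = vld1q_f32(b + i);
        vst1q_f32(dst + i, vaddq_f32(va, vb));      /* 4 floats at once   */
    }
#endif
    for (; i < n; i++)                              /* scalar tail        */
        dst[i] = a[i] + b[i];
}
```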
Posted on Reply
#63
R-T-B
dragontamer5788AVX2 was first supported by Haswell in 2013. Maybe you're talking about AVX, which was a bit earlier.

But AVX2 is still avoided by some application programmers because 2012-era CPUs are still kinda common (The venerable i7-2600k is still popular and kicking around today in many people's builds).

AVX (the first one) was first supported by Sandy Bridge (i7-2600k), which was 2011. Probably safe to use today, but some people do run 10+ year old computers without that feature.
Perhaps. I don't know if it was AVX or AVX2, but I do know that unlike (old?) GCC, the MSVC compiler can choose the ideal code path for the hardware without precompiling to a target.

Meaning you can target AVX and, if it is not present, it will fall back to legacy code.
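
Roughly, that fallback idea looks like this sketch when done by hand (GCC/Clang built-ins shown; on MSVC you'd query __cpuid yourself; the scale functions are just an illustration).

```c
#include <stddef.h>

/* AVX2 code generation enabled just for this function. */
__attribute__((target("avx2")))
static void scale_avx2(float *v, float s, size_t n)
{
    for (size_t i = 0; i < n; i++)
        v[i] *= s;                 /* can be auto-vectorized with 256-bit AVX2 */
}

/* Baseline x86-64 / SSE2 fallback, no special attribute. */
static void scale_baseline(float *v, float s, size_t n)
{
    for (size_t i = 0; i < n; i++)
        v[i] *= s;
}

void scale(float *v, float s, size_t n)
{
    if (__builtin_cpu_supports("avx2"))
        scale_avx2(v, s, n);       /* modern path (Haswell and newer)    */
    else
        scale_baseline(v, s, n);   /* legacy path for CPUs without AVX2  */
}
```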
Posted on Reply
#64
ratirt
lexluthermiesterActually it was "Acorn RISC Machine", again, just FYI.
Actually, it was both, but thanks for the clarification.
Posted on Reply
#65
efikkan
dragontamer5788AVX (the first one) was first supported by Sandy Bridge (i7-2600k), which was 2011. Probably safe to use today, but some people do run 10+ year old computers without that feature.
It depends; a productivity application can certainly make AVX2 a minimum requirement, but it can be a tough decision for more mainstream applications. I believe shipping both a "current" and a "legacy" version of an application is an easy option, just like most software has shipped a 32-bit version for ages. Hopefully the use of the new x86-64 feature levels in upcoming Linux distros will showcase how easily additional performance can be gained for "free". As you know, many applications will benefit just from a recompilation, even if no low-level intrinsics are applied.
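
For example, GCC's function multi-versioning makes shipping a "current" and a "legacy" code path in one binary almost free (a rough sketch, assuming GCC 6+ or a recent Clang on Linux; the blend function is just an illustration): the compiler emits one clone per target and the loader picks the best one at startup.

```c
#include <stddef.h>

/* Two clones are built from the same source: a baseline x86-64/SSE2 one and
 * an AVX2 one. The right clone is selected automatically at load time. */
__attribute__((target_clones("default", "avx2")))
void blend(float *dst, const float *a, const float *b, float t, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = a[i] + t * (b[i] - a[i]);
}
```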
dragontamer5788Worse still: the i3 and "Pentium" and "Celeron" chips, as well as 'Atoms' never supported AVX. So someone with an Atom from 2015 won't be able to run AVX or AVX2 code.
Sadly, the horrible Celerons and Pentiums have been lacking AVX (until the Tiger Lake ones arrive), but at least the i3s support AVX2/FMA.
Such CPUs can be found all over offices, schools, etc.
Posted on Reply