Tuesday, August 11th 2020

Tachyum Demo Shows Prodigy Will Be Faster Than NVIDIA and Intel Chips

Tachyum Inc. today announced that it has successfully completed a demonstration showing its Prodigy Universal Processor running faster than any other processor, including HPC and AI chips from NVIDIA and Intel. This is the latest of many recent milestones achieved by Tachyum as the company continues its march towards Prodigy's product release next year.

Tachyum demonstrated the computational operation and speed of its product design, using an industry-standard Verilog simulation of the actual post-layout Prodigy hardware, positioning it as superior to current competitive offerings. Not only does Prodigy execute instructions at very high speeds, but Tachyum now has infrastructure in place for automatically checking that the Verilog RTL produces correct results. These automated tests compare the Verilog output against Tachyum's C-model, which was used to measure performance and now serves as the 'Golden Model' for the Verilog hardware simulation, ensuring both produce identical, step-by-step results.
This verification milestone dramatically increases Tachyum's productivity and its ability to test the Prodigy hardware design efficiently, finding and correcting bugs prior to tape-out. With this latest accomplishment, Tachyum has also automated constrained random test generation, which further adds to its productivity.
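Tachyum has not published its verification harness, but a golden-model co-simulation flow of the kind described above typically looks like the minimal sketch below. The tool names, trace format and constraint weights are hypothetical illustrations, not Tachyum's actual infrastructure:

```python
import random
import subprocess

OPCODES = ["add", "sub", "mul", "load", "store", "branch"]

def generate_test(n=100, seed=None):
    """Constrained random test generation: bias opcode selection so
    memory and branch instructions appear often enough to stress the
    pipeline, while keeping every generated program legal."""
    rng = random.Random(seed)
    weights = [4, 4, 2, 3, 3, 2]  # assumed constraint weights
    return [rng.choices(OPCODES, weights)[0] for _ in range(n)]

def run_model(cmd, test_file):
    """Run a simulator binary on a test program and return its per-step
    architectural trace (one line per retired instruction)."""
    result = subprocess.run(cmd + [test_file], capture_output=True, text=True)
    return result.stdout.splitlines()

def compare_traces(golden, rtl):
    """Report the first step at which the RTL diverges from the C model."""
    for step, (g, r) in enumerate(zip(golden, rtl)):
        if g != r:
            return f"MISMATCH at step {step}: golden={g!r} rtl={r!r}"
    if len(golden) != len(rtl):
        return f"LENGTH MISMATCH: {len(golden)} vs {len(rtl)} steps"
    return "PASS"

# Hypothetical flow: generate a random program, run it through both the
# C golden model and the Verilog RTL simulation, then diff the traces.
with open("test.asm", "w") as f:
    f.write("\n".join(generate_test(seed=42)))
golden = run_model(["./c_model"], "test.asm")  # hypothetical golden model
rtl = run_model(["./rtl_sim"], "test.asm")     # hypothetical RTL sim wrapper
print(compare_traces(golden, rtl))
```

The value of such a flow is exactly what Tachyum describes: once the C model is trusted as the reference, thousands of randomly generated programs can be checked automatically, with any divergence pinpointed to the first mismatching step.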


Tachyum's previous hardware design milestone, building the components and interconnecting them, was successfully completed in April. The most recent hardware design milestone - and the resulting tooling - concerns the Prodigy processor producing correct results, and its performance on test programs. Prodigy now handles branch mispredictions and compiler mispredictions of memory dependencies: it detects them, recovers, and produces correct results.
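The detect-and-recover behaviour described above is standard speculative-execution bookkeeping. The toy model below illustrates the idea only and is not Tachyum's implementation; real hardware checkpoints rename tables and flushes the pipeline rather than copying a Python dictionary:

```python
def step(program, pc, regs):
    """Execute one non-branch instruction and return the updated state."""
    kind, *args = program[pc]
    if kind == "set":    # set <reg> <value>
        regs[args[0]] = args[1]
    elif kind == "add":  # add <dst> <src> <imm>
        regs[args[0]] = regs.get(args[1], 0) + args[2]
    return regs, pc + 1

def run(program, predict):
    pc, regs = 0, {}
    while pc < len(program):
        if program[pc][0] != "br":
            regs, pc = step(program, pc, regs)
            continue
        _, cond, target = program[pc]   # br <reg> <target>: taken if reg != 0
        checkpoint = (pc, dict(regs))   # snapshot architectural state
        guess = predict(pc)             # branch predictor's guess
        spec_pc = target if guess else pc + 1
        if spec_pc < len(program) and program[spec_pc][0] != "br":
            regs, spec_pc = step(program, spec_pc, regs)  # speculative work
        actual = checkpoint[1].get(cond, 0) != 0          # branch resolves
        if guess == actual:
            pc = spec_pc                # correct guess: keep speculative results
        else:                           # misprediction detected:
            regs = dict(checkpoint[1])  # ...squash speculative state
            pc = target if actual else checkpoint[0] + 1  # ...redirect fetch
    return regs

# An always-not-taken predictor speculatively executes the wrong-path
# instruction here, then recovers and still produces the correct result.
prog = [("set", "a", 1), ("br", "a", 3), ("set", "a", 99), ("add", "a", "a", 5)]
print(run(prog, predict=lambda pc: False))  # -> {'a': 6}
```

In real silicon the same sequence of predict, speculate, resolve, squash and redirect happens many instructions deep, which is why demonstrating it against the golden model is a meaningful milestone.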

Thanks to Tachyum's IP suppliers, the company is now able to perform reads and writes from the Prodigy communications mesh to hardware memory models of its DDR5 DIMMs. The global clock is now connected from the PLL to the Prodigy cores. RAMBIST and other manufacturability features are now integrated into the Prodigy hardware design, in large part thanks to Tachyum's physical design partner.

"This latest hardware milestone is a testament to the diligent work of our engineering team and the vast human resources we have been able to assemble to complete a revolutionary solution never before seen," said Dr. Radoslav Danilak, Tachyum founder and CEO. "We set out to produce the highest performance, lowest energy and most cost-efficient processor for the hyperscale, HPC and AI marketplace, and these milestones are proving that we are achieving those goals. With a product that is faster than the fastest Intel Xeon or NVIDIA A100 Chips, Prodigy is nearing all of its stated objectives and remains on track to make its debut as planned next year."

Tachyum's Prodigy can run HPC applications, convolutional AI, explainable AI, general AI, bio AI and spiking neural networks, as well as normal data center workloads, on a single homogeneous processor platform with its simple programming model. Using CPUs, GPUs, TPUs and other accelerators in lieu of Prodigy for these different types of workloads is inefficient. A heterogeneous processing fabric, with unique hardware dedicated to each type of workload (e.g. data center, AI, HPC), results in underutilization of hardware resources and a more challenging programming environment. Prodigy's ability to seamlessly switch among these various workloads dramatically changes the competitive landscape and the economics of data centers.

Prodigy significantly improves computational performance, energy consumption, hardware (server) utilization and space requirements compared to existing chips provisioned in hyperscale data centers today. It will also allow edge developers for IoT to exploit its low power and high performance, along with its simple programming model, to deliver AI to the edge.

Prodigy is truly a universal processor. In addition to native Prodigy code, it also runs legacy x86, ARM and RISC-V binaries. And, with a single, highly efficient processor architecture, Prodigy delivers industry-leading performance across data center, AI and HPC workloads. Prodigy, the company's flagship Universal Processor, will enter volume production in 2021. In April, the Prodigy chip proved its viability with a complete chip layout that exceeded speed targets. As of August, the processor correctly executes short programs, with results automatically verified against the software model, while exceeding target clock speeds. The next step is a fully functional FPGA prototype of the chip later this year, the last milestone before tape-out.

Prodigy outperforms the fastest Xeon processors at 10x lower power on data center workloads, and outperforms NVIDIA's fastest GPU on HPC, AI training and inference. 125 Prodigy HPC racks can deliver 32 tensor exaflops. Prodigy's 3x lower cost per MIPS and 10x lower power translate to a 4x lower data center Total Cost of Ownership (TCO), enabling billions of dollars of savings for hyperscalers such as Google, Facebook, Amazon, Alibaba and others. Since Prodigy is the world's only processor that can switch between data center, AI and HPC workloads, unused servers can serve as a CAPEX-free AI or HPC cloud, because those servers have already been amortized.
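The press release does not show the arithmetic behind the 4x TCO figure, but under an assumed cost structure the claims are at least self-consistent. In the back-of-the-envelope sketch below, the 60/40 split between hardware and power-related costs is our assumption for illustration, not Tachyum's published input:

```python
# Back-of-the-envelope check of the claimed TCO math. The 60/40 split
# between server hardware and energy/cooling costs is an assumption;
# the press release does not disclose its inputs.
capex_share = 0.60      # assumed: share of TCO from server hardware
power_share = 0.40      # assumed: share of TCO from energy and cooling

cost_reduction = 3.0    # claimed: 3x lower cost per MIPS
power_reduction = 10.0  # claimed: 10x lower power

prodigy_tco = capex_share / cost_reduction + power_share / power_reduction
print(f"TCO: {prodigy_tco:.2f} of baseline, {1 / prodigy_tco:.1f}x lower")
# -> TCO: 0.24 of baseline, 4.2x lower
```

With that split, TCO falls to 0.24 of the baseline, roughly the claimed 4x reduction; a different capex/power split would change the result accordingly.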

For demo resources and videos, visit this page.

21 Comments on Tachyum Demo Shows Prodigy Will Be Faster Than NVIDIA and Intel Chips

#1
Vya Domus
Call it a gut feeling but this seems like the Theranos of silicon.
#2
Assimilator
News day so slow you have to post about vapourware, huh?
#3
TheoneandonlyMrK
Yet we see no proof of it actually doing anything, disappointed.
#4
ebivan
When will they actually release something? So far all I hear is "We can do anything better" (but only in internal tests...)
#5
Verpal
Verilog RTL.

OK.

Call me again when you got silicon sampled and start running benchmarks on it.
#6
mahirzukic2
Verpal: Verilog RTL.

OK.

Call me again when you got silicon sampled and start running benchmarks on it.
As per:
Prodigy, the company's flagship Universal Processor, will enter volume production in 2021. In April, the Prodigy chip proved its viability with a complete chip layout that exceeded speed targets. As of August, the processor correctly executes short programs, with results automatically verified against the software model, while exceeding target clock speeds. The next step is a fully functional FPGA prototype of the chip later this year, the last milestone before tape-out.
We need to wait for 2021; I would guess H1.
#7
windwhirl
Vya Domus: Call it a gut feeling but this seems like the Theranos of silicon.
I would like it if you were wrong, it means more competition, etc., but I'm kinda getting the same vibe here :laugh:
#8
Steevo
How much security is built in?

If every application were trusted and didn't have any faults, processors would be simple to build.
#9
windwhirl
Steevo: How much security is built in?

If every application were trusted and didn't have any faults, processors would be simple to build.
You inb4 massive security vulnerabilities take away 90% of the performance? :roll:
#10
Frick
Fishfaced Nincompoop
Beating Nvidia and Intel in custom AI is ... realistic. Chips built for specific tasks are way better at that task than general purpose stuff, and if you (like Amazon) build code specific for that chip ... yeah it'll be faster than anything Intel and Nvidia have to offer, for that specific task. This thing can be faster in very specific workloads, which is the point. That it also runs "legacy x86, ARM and RISC-V binaries" is ... I have no idea if it's a good idea or not. The performance will be terrible. Is there a market for that? I have no idea. Maybe?

Anyway these guys seem to be made up of some SandForce (remember SandForce?) people and some Wave Computing (uh-oh) people. At least they're serious about it, and even though I doubt anything that isn't actual numbers from a real product, it will be interesting to see how it pans out.


Also there is a myriad of AI compute companies out there right now, and I quite like it.

Some more details

www.eejournal.com/article/creating-the-universal-processor/
#11
Vya Domus
Frick: Also there is a myriad of AI compute companies out there right now, and I quite like it.
I kind of don't, they are a waste of manpower and investor cash. "AI" will be the next dot-com bubble, mark my words.
#12
bug
TheoneandonlyMrK: Yet we see no proof of it actually doing anything, disappointed.
Well, it can appear in the news today, so there's that.
#13
Frick
Fishfaced Nincompoop
Vya Domus: I kind of don't, they are a waste of manpower and investor cash. "AI" will be the next dot-com bubble, mark my words.
Lots of them will go bust fo sho, but otoh it is always good that the existing dragons have some fires under them, and if the dragons buy out the upcomers (which is likely if the tech is good) it will find its way to consumers anyway (maybe). And it's still an emergent market, so lots of things can happen before history decides what the outcome turned out to be. It's easy to tell what failed after the fact. Also things rarely happen in isolation, progress here might be applicable to there.
#14
TheoneandonlyMrK
bug: Well, it can appear in the news today, so there's that.
It was in the news the other day for saying 10x Xeon or Ampere at a 100th of the power or something.

In any workload, on any instruction set.

It was probably less boastful, but that's still an accurate review of the last PR piece and this one.

Facts/benches=zero.

Yawn. I would love a third decent architecture, but vapour isn't worth much time.
#15
kapone32
They will sign a licence with Biostar to distribute pre-built systems for everything from Ethereum to DOTA. GOG will invest in a hardware solution for their platform (if you know what I mean) and the CPU market will become ultra competitive. Nvidia or Intel would buy one another and AMD, Sony and MS would form an alliance to respond with a 5nm APU with Zen5 CPU cores and RDNA3 GPU cores. Intel/Nvidia would come with a chip about the size of TR4 with 2 3080 Tis, 64 PCIe 4.0 lanes and 24 7nm CPU cores that run at a stable 6.7 GHz with a 7 GHz boost. Of course all of this is as possible as building a livable colony on the Moon.
#16
Steevo
windwhirl: You inb4 massive security vulnerabilities take away 90% of the performance? :roll:
Yep, with cache handling, predictive branch speculation, symmetric threads, all needing hardware management that can communicate locks to programs, define addressing spaces for each, and manage security in tandem with the operating system, I don't see this happening.

Want to see how much time each thread/core spends waiting? Those metrics are available in Windows. The only reason consoles, for example, run slightly to moderately faster than PCs is software trust and closed ecosystems. But no one is going to tell me that chip A @ 4 GHz is 10 times faster than chip B @ 4 GHz in a branch-heavy, dependent, out-of-order serial workload. The only time programs run faster with more cores is when the workload isn't dependent on current results.
#17
Wshlist
Seems a fair company in the sense that they don't mention AMD and nicely avoid that comparison :)
Also I don't see Google's AI chips mentioned, come to think of it.

It's a pity though that they flirt with the DoD and 'intelligence orgs' needs.
#18
aQi
That's like someone just pretending to be something they are not.
Without anything “solid” I can't agree to any of that.
Releasing next year? Sure, take 100, just make sure to bring silicon samples with you.
#19
TheoneandonlyMrK
Wshlist: Seems a fair company in the sense that they don't mention AMD and nicely avoid that comparison :)
Also I don't see Google's AI chips mentioned, come to think of it.

It's a pity though that they flirt with the DoD and 'intelligence orgs' needs.
I'm not bothered who they compare with, just show some proof with the claims.
#20
AsRock
TPU addict
Frick: Beating Nvidia and Intel in custom AI is ... realistic. Chips built for specific tasks are way better at that task than general purpose stuff, and if you (like Amazon) build code specific for that chip ... yeah it'll be faster than anything Intel and Nvidia have to offer, for that specific task. This thing can be faster in very specific workloads, which is the point. That it also runs "legacy x86, ARM and RISC-V binaries" is ... I have no idea if it's a good idea or not. The performance will be terrible. Is there a market for that? I have no idea. Maybe?

Anyway these guys seem to be made up of some SandForce (remember SandForce?) people and some Wave Computing (uh-oh) people. At least they're serious about it, and even though I doubt anything that isn't actual numbers from a real product, it will be interesting to see how it pans out.

Also there is a myriad of AI compute companies out there right now, and I quite like it.

Some more details

www.eejournal.com/article/creating-the-universal-processor/
Oh, I remember those. The 1st SSD and the ONLY SSD to fail on me.
#21
ExcuseMeWtf
March next year seems HIGHLY optimistic, if they only have that to show.