Tuesday, October 3rd 2023

Tachyum Books Purchase Order to Build System with 25,000x ChatGPT4 Capacity and 25x Faster than Current Supercomputers

Tachyum announced that it has accepted a major purchase order from a US company to build a large-scale system, based on its 5 nm Prodigy Universal Processor chip, which delivers more than 50 exaflops of performance, exponentially exceeding the computational capabilities of the fastest inference or generative AI supercomputers available anywhere in the world today.

Prodigy, the world's first Universal Processor, is engineered to transform the capacity, efficiency and economics of datacenters through its industry-leading performance for hyperscale, high-performance computing and AI workloads. When complete, the Prodigy-powered system will deliver a 25x multiplier vs. the world's fastest conventional supercomputer built just this year, and will achieve AI model capacity 25,000x larger than that of ChatGPT4.
Exascale supercomputers can quickly analyze and process massive amounts of data to solve complex problems that had previously been intractable. Prodigy overdelivers on traditional exascale capabilities by providing industry-leading performance, significantly reduced energy consumption, and improved server utilization and space efficiency. Prodigy's exponential increases in memory, storage and compute enable breakthroughs across datacenter, AI and HPC workloads in government, research and academia, business, manufacturing and other industries.

The human brain consists of around 100 billion neurons and approximately 200 trillion synaptic connections. Assuming a few bytes per synaptic connection, it would require hundreds of terabytes of memory. Therefore, the Prodigy system with hundreds of petabytes of DRAM will have more than 100x the memory needed.
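The estimate above is easy to verify with back-of-the-envelope arithmetic. The synapse count is the article's figure; the bytes-per-synapse value (3) and DRAM size (200 PB) are illustrative picks within the ranges it states ("a few bytes", "hundreds of petabytes"):

```python
# Back-of-the-envelope check of the article's brain-memory estimate.
synapses = 200e12            # ~200 trillion synaptic connections (article's figure)
bytes_per_synapse = 3        # "a few bytes" per connection (assumed)
brain_bytes = synapses * bytes_per_synapse

dram_bytes = 200e15          # 200 PB of system DRAM (assumed within "hundreds of PB")

print(f"brain estimate: {brain_bytes / 1e12:.0f} TB")   # 600 TB
print(f"DRAM headroom: {dram_bytes / brain_bytes:.0f}x")
```

Under these assumptions the headroom works out to a few hundred times the brain estimate, comfortably above the article's "100x" claim.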

Installation of this Prodigy-enabled solution will begin in 2024 and reach full capacity in 2025. Among the deliverables are:
  • 8 Zettaflops AI training for large language models
  • 16 Zettaflops of image and video processing
  • Ability to fit more than 100,000x PaLM 2 530B parameter models OR 25,000x ChatGPT4 1.7T parameter models with base memory, and 100,000x ChatGPT4 with 4x of base DRAM
  • Upgradable memory of the base model system
  • Hundreds of petabytes of DRAM and exabytes of flash-based primary storage
  • 4-socket, liquid-cooled nodes connected to 400G RoCE Ethernet, with the capability to double to an all non-blocking, non-overprovisioned 800G switching fabric
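The model-capacity figures in the list above are internally consistent: quadrupling the base DRAM quadruples the number of ChatGPT4-class models, and the two base-memory claims imply roughly the same total parameter storage. A quick arithmetic sketch (all figures taken from the list, not measured capacities):

```python
# Consistency check of the article's model-capacity claims.
base_gpt4_models = 25_000          # ChatGPT4-class (1.7T-param) models in base DRAM
upgraded_gpt4_models = 100_000     # with 4x of base DRAM
assert upgraded_gpt4_models == base_gpt4_models * 4   # count scales with DRAM

# Both base-memory claims imply a similar order of parameter storage:
gpt4_params  = base_gpt4_models * 1.7e12   # ~4.25e16 parameters
palm2_params = 100_000 * 530e9             # ~5.30e16 parameters
print(f"{gpt4_params:.2e} vs {palm2_params:.2e}")
```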
Tachyum's proprietary TPU AI Inference IP supports the Tachyum AI (TAI) data type, providing breakthrough efficiency for video and large language model data formats that would otherwise require excessive power and expensive multipliers for matrix multiplication.

"The unprecedented scale and computational power required as part of this installation simply could not be provided by any chip manufacturer on the market today," said Dr. Radoslav Danilak, founder and CEO of Tachyum. "While there are startups receiving billions of dollars, based on their promise of achieving similar capabilities sometime in the future, only Tachyum is positioned to deliver the capability to economically build order-of-magnitude bigger machines that potentially enable the transition to cognitive AI, beginning later this year. This purchase order is a testament to our first-to-market position and our ability to provide a positive impact to worldwide AI markets."

As a Universal Processor offering utility for all workloads, Prodigy-powered data center servers can seamlessly and dynamically switch between computational domains (such as AI/ML, HPC, and cloud) on a single architecture. By eliminating the need for expensive dedicated AI hardware and dramatically increasing server utilization, Prodigy reduces CAPEX and OPEX significantly while delivering unprecedented data center performance, power, and economics. Prodigy integrates 192 high-performance custom-designed 64-bit compute cores to deliver up to 4.5x the performance of the highest-performing x86 processors for cloud workloads, up to 3x that of the highest-performing GPU for HPC, and 6x for AI applications.

8 Comments on Tachyum Books Purchase Order to Build System with 25,000x ChatGPT4 Capacity and 25x Faster than Current Supercomputers

#2
AnotherReader
Tachyum's claims are rather unbelievable:
Prodigy integrates 192 high-performance custom-designed 64-bit compute cores, to deliver up to 4.5x the performance of the highest-performing x86 processors for cloud workloads, up to 3x that of the highest performing GPU for HPC, and 6x for AI applications.
Posted on Reply
#3
theouto
That is one meaty pc, damn
Posted on Reply
#4
Steevo
AnotherReader said: Tachyum's claims are rather unbelievable.
They have their own in-house benchmark results but as far as I have seen nothing public, and they have never made a fully functional product yet. They seem to be the stuff of unicorn farts and wishful thinking, I bet they are the people who started the speak these numbers into the universe for success....
Posted on Reply
#5
atomsymbol
Steevo said: They have their own in-house benchmark results but as far as I have seen nothing public, and they have never made a fully functional product yet. They seem to be the stuff of unicorn farts and wishful thinking, I bet they are the people who started the speak these numbers into the universe for success....
The last time I checked their published papers (a couple of months ago), most of Tachyum's numbers (claims) seemed correct/believable. For example, some of the data paths in a Prodigy core (L1D cache, vector ALU width) are 2x wider than in any existing x86 CPU. Prodigy isn't a desktop CPU; it is a server CPU with a high power consumption (N*100 watts) unsuitable for a desktop machine. It isn't a GPU, so it cannot run PC games efficiently. Most people won't have direct access to Prodigy CPUs - most end-users will be using Prodigy CPUs indirectly. When reading about Prodigy performance estimates in relation to other CPUs or GPUs: unless stated otherwise, the word 'performance' means 'performance-per-watt' or 'performance-per-dollar'.
Posted on Reply
#6
Minus Infinity
LOL, who was it that said yesterday that no one would buy anything but Nvidia AI accelerators, no matter the price? Nvidia has the lead, but others have woken up and seen the $$$$$$$ signs. Competition over the next few years will be fierce.
Posted on Reply
#7
ebivan
Tachyum making more claims without any proof.
Posted on Reply
#8
Prima.Vera
ChatGPT for some reason gives crap answers and methodology. I have found that Google's Bard is light years better in accuracy and completeness.
Posted on Reply