News Posts matching #Lightmatter

Lightmatter Unveils Six‑Chip Photonic AI Processor with Incredible Performance/Watt

Lightmatter has launched its latest photonic processor, representing a fundamental shift from traditional computing architectures. The new system integrates six chips into a single 3D packaged module, each containing photonic tensor cores and control dies that work in concert to accelerate AI workloads. Detailed in a recent Nature publication, the processor combines approximately 50 billion transistors with one million photonic components interconnected via high-speed optical links. The industry has faced numerous computing challenges as conventional scaling approaches plateau, with Moore's Law, Dennard scaling, and DRAM capacity doubling all reaching physical limits per unit of silicon area. Lightmatter's solution implements an adaptive block floating point (ABFP) format with analog gain control to overcome these barriers. During matrix operations, weights and activations are grouped into blocks that share a single exponent determined by the block's most significant value, minimizing quantization error.
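
The shared-exponent grouping can be illustrated with a short NumPy sketch. This is a generic block floating point emulation written to match the description above, not Lightmatter's actual ABFP implementation; the block size, mantissa width, and the omission of the analog gain-control stage are all assumptions.

```python
import numpy as np

def block_fp_quantize(x, block_size=64, mantissa_bits=16):
    """Quantize values in blocks that share one exponent, taken from each
    block's largest-magnitude element. Simplified sketch only: block size and
    mantissa width are assumptions, and the analog gain-control stage that
    Lightmatter pairs with ABFP is not modeled here."""
    x = np.asarray(x, dtype=np.float64)
    shape = x.shape
    flat = x.ravel()
    pad = (-flat.size) % block_size
    blocks = np.pad(flat, (0, pad)).reshape(-1, block_size)

    # Shared power-of-two exponent per block, set by the most significant value.
    max_mag = np.max(np.abs(blocks), axis=1, keepdims=True)
    safe_mag = np.where(max_mag > 0, max_mag, 1.0)
    scale = 2.0 ** np.ceil(np.log2(safe_mag))

    qmax = 2 ** (mantissa_bits - 1) - 1          # signed integer mantissa range
    mantissas = np.round(blocks / scale * qmax)  # quantize mantissas only
    dequant = mantissas / qmax * scale           # reconstruct real values
    return dequant.ravel()[: flat.size].reshape(shape)

# Small matrix-vector product: quantization error stays tiny relative to FP64.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
a = rng.standard_normal(256)

y_ref = W @ a
y_abfp = block_fp_quantize(W) @ block_fp_quantize(a)
print("relative error:", np.linalg.norm(y_ref - y_abfp) / np.linalg.norm(y_ref))
```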

The processor achieves 65.5 trillion 16-bit ABFP operations per second (roughly equivalent to 16-bit TOPS) while consuming just 78 W of electrical power and 1.6 W of optical power. What sets this processor apart is its ability to run unmodified AI models with near-FP32 accuracy. The system successfully executes full-scale models, including ResNet for image classification, BERT for natural language processing, and DeepMind's Atari reinforcement learning algorithms, without specialized retraining or quantization-aware techniques. This represents the first commercially available photonic AI accelerator capable of running off-the-shelf models without fine-tuning. By computing with light, the processor's architecture addresses the prohibitive costs and energy demands of next-generation GPUs. With native integration for popular AI frameworks like PyTorch and TensorFlow, Lightmatter hopes for immediate adoption in production environments.
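
For context, dividing the quoted throughput by the quoted power gives a rough efficiency figure; this is simple arithmetic on the numbers above, not an official specification.

```python
# Back-of-the-envelope arithmetic from the figures quoted above (not an official spec).
ops_per_second = 65.5e12        # 16-bit ABFP operations per second
total_power_w = 78.0 + 1.6      # electrical + optical power

tops_per_watt = ops_per_second / 1e12 / total_power_w
print(f"~{tops_per_watt:.2f} 16-bit ABFP TOPS per watt")   # ~0.82
```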

Lightmatter Unveils Passage M1000 Photonic Superchip

Lightmatter, the leader in photonic supercomputing, today announced Passage M1000, a groundbreaking 3D Photonic Superchip designed for next-generation XPUs and switches. The Passage M1000 enables a record-breaking 114 Tbps total optical bandwidth for the most demanding AI infrastructure applications. At more than 4,000 square millimeters, the M1000 reference platform is a multi-reticle active photonic interposer that enables the world's largest die complexes in a 3D package, providing connectivity to thousands of GPUs in a single domain.

In existing chip designs, interconnects for processors, memory, and I/O chiplets are bandwidth limited because electrical input/output (I/O) connections are restricted to the edges of these chips. The Passage M1000 overcomes this limitation by unleashing electro-optical I/O virtually anywhere on its surface for the die complex stacked on top. Pervasive interposer connectivity is enabled by an extensive and reconfigurable waveguide network that carries high-bandwidth wavelength-division multiplexed (WDM) optical signals throughout the M1000. With fully integrated fiber attachment supporting an unprecedented 256 fibers, the M1000 delivers an order of magnitude higher bandwidth in a smaller package than conventional Co-Packaged Optics (CPO) and similar offerings.
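
Dividing the headline numbers gives a sense of the average per-fiber throughput; the even split across all 256 fibers is an assumption here, and the announcement does not break the 114 Tbps down per fiber or per wavelength.

```python
# Rough average from the announced totals (assumption: bandwidth split evenly
# across all 256 fibers; no per-fiber or per-wavelength breakdown is published).
total_bandwidth_tbps = 114
fiber_count = 256

per_fiber_gbps = total_bandwidth_tbps * 1_000 / fiber_count
print(f"~{per_fiber_gbps:.0f} Gbps per fiber on average")   # ~445 Gbps
```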

NVIDIA Shows Future AI Accelerator Design: Silicon Photonics and DRAM on Top of Compute

During the prestigious IEDM 2024 conference, NVIDIA presented its vision for future AI accelerator design, which the company plans to pursue in upcoming accelerator iterations. Chip packaging and silicon innovation are already being stretched to their limits, and future AI accelerators may need additional verticals to achieve the required performance improvements. The design proposed at IEDM 2024 puts silicon photonics (SiPh) at center stage. NVIDIA's architecture calls for 12 SiPh connections for intra-chip and inter-chip links, with three connections per GPU tile across four GPU tiles per tier. This marks a significant departure from traditional interconnect technologies, which have historically been limited by the physical properties of copper.

Perhaps the most striking aspect of NVIDIA's vision is the introduction of so-called "GPU tiers," a novel approach that appears to stack GPU components vertically. This is complemented by an advanced 3D stacked DRAM configuration featuring six memory units per tile, enabling fine-grained memory access and substantially improved bandwidth. This stacked DRAM would have a direct electrical connection to the GPU tiles, mimicking AMD's 3D V-Cache on a larger scale. However, the timeline for implementation reflects the significant technological hurdles that must be overcome. Scaling up silicon photonics manufacturing presents a particular challenge, with NVIDIA requiring the capacity to produce over one million SiPh connections monthly to make the design commercially viable. NVIDIA has invested in Lightmatter, which builds photonic packaging for scaling compute, so some form of its technology could end up in future NVIDIA accelerators.
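
Taking the figures from the two paragraphs above at face value, the per-tier arithmetic works out as follows; the tier counts looped over below are hypothetical, since NVIDIA has not said how many tiers a product would stack.

```python
# Per-tier counts as presented at IEDM 2024; the tier counts below are hypothetical.
gpu_tiles_per_tier = 4
siph_links_per_tile = 3
dram_units_per_tile = 6

siph_links_per_tier = gpu_tiles_per_tier * siph_links_per_tile   # 12, matching the talk
dram_units_per_tier = gpu_tiles_per_tier * dram_units_per_tile   # 24

for tiers in (1, 2, 4):
    print(f"{tiers} tier(s): {tiers * siph_links_per_tier} SiPh connections, "
          f"{tiers * dram_units_per_tier} stacked DRAM units")
```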

Lightmatter Introduces Optical Processor to Speed Compute for Next-Gen AI

Lightmatter, a leader in silicon photonics processors, today announces its artificial intelligence (AI) photonic processor, a general-purpose AI inference accelerator that uses light to compute and transport data. Using light to calculate and communicate within the chip reduces heat—leading to orders of magnitude reduction in energy consumption per chip and dramatic improvements in processor speed. Since 2010, the amount of compute power needed to train a state-of-the-art AI algorithm has grown at five times the rate of Moore's Law scaling—doubling approximately every three and a half months. Lightmatter's processor solves the growing need for computation to support next-generation AI algorithms.
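
The "five times the rate of Moore's Law" figure can be sanity-checked against the stated doubling period; the roughly 18-month Moore's Law baseline used below is an assumption, since the article does not say which baseline it compares against.

```python
# Sanity check of the growth-rate comparison (assumption: a Moore's Law
# doubling period of about 18 months; the article does not state its baseline).
ai_doubling_months = 3.5
moore_doubling_months = 18.0

rate_ratio = moore_doubling_months / ai_doubling_months
print(f"AI compute demand doubles ~{rate_ratio:.1f}x faster than Moore's Law")   # ~5.1x
```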

"The Department of Energy estimates that by 2030, computing and communications technology will consume more than 8 percent of the world's power. Transistors, the workhorse of traditional processors, aren't improving; they're simply too hot. Building larger and larger datacenters is a dead end path along the road of computational progress," said Nicholas Harris, PhD, founder and CEO at Lightmatter. "We need a new computing paradigm. Lightmatter's optical processors are dramatically faster and more energy efficient than traditional processors. We're simultaneously enabling the growth of computing and reducing its impact on our planet."