News Posts matching #ML


UL Announces the Procyon AI Image Generation Benchmark Based on Stable Diffusion

We're excited to announce we're expanding our AI Inference benchmark offerings with the UL Procyon AI Image Generation Benchmark, coming Monday, 25th March. AI has the potential to be one of the most significant new technologies hitting the mainstream this decade, and many industry leaders are competing to deliver the best AI Inference performance through their hardware. Last year, we launched the first of our Procyon AI Inference Benchmarks for Windows, which measured AI Inference performance with a workload using Computer Vision.

The upcoming UL Procyon AI Image Generation Benchmark provides a consistent, accurate and understandable workload for measuring the AI performance of high-end hardware, built with input from members of the industry to ensure fair and comparable results across all supported hardware.
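UL has not published the benchmark's internals, but conceptually the workload resembles timing a Stable Diffusion text-to-image pipeline. The sketch below, using the Hugging Face diffusers library with an assumed model ID and step count (not Procyon's actual configuration), illustrates one way such a workload can be timed; it assumes a CUDA-capable GPU.

```python
# Illustrative only: times a Stable Diffusion text-to-image run with Hugging Face
# diffusers, roughly the kind of workload an image-generation inference benchmark uses.
# The model ID and step count are assumptions, not Procyon's actual configuration.
import time
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a photograph of an astronaut riding a horse"
start = time.perf_counter()
image = pipe(prompt, num_inference_steps=50).images[0]  # one 512x512 image
elapsed = time.perf_counter() - start
print(f"Generated 1 image in {elapsed:.1f} s "
      f"({50 / elapsed:.2f} denoising steps/s)")
```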

Lenovo and Anaconda Announce Agreement to Accelerate AI Development and Deployment

Today, Lenovo announced a strategic partnership with Anaconda Inc., the leading provider of the world's most popular artificial intelligence (AI), machine learning (ML) and data science platform, to empower Lenovo's high-performance data science workstations. The partnership couples Lenovo's trusted ThinkStation and ThinkPad workstation portfolio with Anaconda's enterprise strengths in open-source leadership, security, and reliability.

The rapidly evolving world of artificial intelligence, deep learning and generative AI is opening up new opportunities for businesses and data scientists. Much of the AI innovation taking place today is driven by open-source software and cloud-based solutions, with Python being a leading language for AI applications. However, the data security risks of using open-source software at enterprise scale, privacy concerns, and the often prohibitive cost of cloud-based AI solutions are causing many organizations to rethink their approach to investment in AI development. Intel-powered Lenovo workstations are architected with the latest generations of professional NVIDIA GPUs built for large-language-model fine-tuning, while Anaconda Navigator lets businesses leverage open source and AI with enhanced security, scale, and governance mechanisms in place. Together, they allow data scientists to create and deploy AI solutions with first-class hardware and enterprise-grade AI software support within a more manageable investment framework.

Ethernet Switch Chips are Now Infected with AI: Broadcom Announces Trident 5-X12

Artificial intelligence has been a hot topic this year, and everything is now an AI processor, from CPUs to GPUs, NPUs, and many others. It was only a matter of time before AI processing elements made their way into networking chips. Today, Broadcom announced its new Ethernet switching silicon, the Trident 5-X12. It delivers 16 Tb/s of bandwidth, double that of the previous Trident generation, and adds support for fast 800G ports for connecting to Tomahawk 5 spine switch chips. The 5-X12 is software-upgradable and optimized for dense 1RU top-of-rack designs, enabling configurations with up to 48x200G downstream server ports and 8x800G upstream fabric ports. The 800G support uses 100G-PAM4 SerDes, which enables up to 4 m DAC and linear optics.

The Trident 5-X12 is more than a plain switch chip, however. Broadcom has added AI processing elements in an inference engine called NetGNT (Networking General-purpose Neural-network Traffic-analyzer), which detects common traffic patterns and optimizes data movement across the chip. The company cites AI/ML workloads as an example: there, NetGNT performs intelligent traffic analysis to avoid network congestion. It can detect so-called "incast" patterns in real time, where many flows converge simultaneously on the same port. By recognizing the start of incast early, NetGNT can invoke hardware-based congestion control techniques to prevent performance degradation without adding latency.
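Broadcom has not disclosed how NetGNT's detection works internally. As a purely illustrative sketch, the Python below shows one naive way an "incast" condition could be flagged in software: count the distinct flows converging on an egress port within a short window and trigger congestion control once a threshold is crossed. The class, window size, and threshold are all hypothetical.

```python
# Purely illustrative incast heuristic; NetGNT's actual ML-based detection is not public.
# Counts distinct flows converging on one egress port inside a short time window and
# flags the port when the count crosses a threshold. All names/thresholds are made up.
from collections import defaultdict, deque

WINDOW_S = 0.001        # 1 ms observation window (assumed)
FLOW_THRESHOLD = 32     # flows converging on one port before calling it incast (assumed)

class IncastDetector:
    def __init__(self):
        self.events = defaultdict(deque)  # egress_port -> deque[(timestamp, flow_id)]

    def observe(self, timestamp, egress_port, flow_id):
        q = self.events[egress_port]
        q.append((timestamp, flow_id))
        # drop observations older than the window
        while q and timestamp - q[0][0] > WINDOW_S:
            q.popleft()
        distinct_flows = len({fid for _, fid in q})
        if distinct_flows >= FLOW_THRESHOLD:
            self.trigger_congestion_control(egress_port)

    def trigger_congestion_control(self, port):
        # real hardware would e.g. mark ECN or pace senders; here we only log
        print(f"incast suspected on port {port}: invoking congestion control")
```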

Intel Advances Scientific Research and Performance for New Wave of Supercomputers

At SC23, Intel showcased AI-accelerated high performance computing (HPC) with leadership performance for HPC and AI workloads across Intel Data Center GPU Max Series, Intel Gaudi 2 AI accelerators and Intel Xeon processors. In partnership with Argonne National Laboratory, Intel shared progress on the Aurora generative AI (genAI) project, including an update on the 1 trillion parameter GPT-3 LLM on the Aurora supercomputer that is made possible by the unique architecture of the Max Series GPU and the system capabilities of the Aurora supercomputer. Intel and Argonne demonstrated the acceleration of science with applications from the Aurora Early Science Program (ESP) and the Exascale Computing Project. The company also showed the path to Intel Gaudi 3 AI accelerators and Falcon Shores.

"Intel has always been committed to delivering innovative technology solutions to meet the needs of the HPC and AI community. The great performance of our Xeon CPUs along with our Max GPUs and CPUs help propel research and science. That coupled with our Gaudi accelerators demonstrate our full breadth of technology to provide our customers with compelling choices to suit their diverse workloads," said Deepak Patil, Intel corporate vice president and general manager of Data Center AI Solutions.

Ayar Labs Showcases 4 Tbps Optically-enabled Intel FPGA at Supercomputing 2023

Ayar Labs, a leader in silicon photonics for chip-to-chip connectivity, will showcase its in-package optical I/O solution integrated with Intel's industry-leading Agilex Field-Programmable Gate Array (FPGA) technology. Demonstrating 5x current industry bandwidth at 5x lower power and 20x lower latency, the optical FPGA, packaged in a common PCIe card form factor, has the potential to transform the high performance computing (HPC) landscape for data-intensive workloads such as generative artificial intelligence (AI) and machine learning, and to support novel disaggregated compute and memory architectures.

"We're on the cusp of a new era in high performance computing as optical I/O becomes a 'must have' building block for meeting the exponentially growing, data-intensive demands of emerging technologies like generative AI," said Charles Wuischpard, CEO of Ayar Labs. "Showcasing the integration of Ayar Labs' silicon photonics and Intel's cutting-edge FPGA technology at Supercomputing is a concrete demonstration that optical I/O has the maturity and manufacturability needed to meet these critical demands."

NVIDIA Turbocharges Generative AI Training in MLPerf Benchmarks

NVIDIA's AI platform raised the bar for AI training and high performance computing in the latest MLPerf industry benchmarks. Among many new records and milestones, one in generative AI stands out: NVIDIA Eos - an AI supercomputer powered by a whopping 10,752 NVIDIA H100 Tensor Core GPUs and NVIDIA Quantum-2 InfiniBand networking - completed a training benchmark based on a GPT-3 model with 175 billion parameters trained on one billion tokens in just 3.9 minutes. That's a nearly 3x gain from 10.9 minutes, the record NVIDIA set when the test was introduced less than six months ago.

The benchmark uses a portion of the full GPT-3 data set behind the popular ChatGPT service; by extrapolation, Eos could now train on the full set in just eight days, 73x faster than a prior state-of-the-art system using 512 A100 GPUs. The acceleration in training time reduces costs, saves energy and speeds time-to-market. It's the heavy lifting that makes large language models widely available, so every business can adopt them with tools like NVIDIA NeMo, a framework for customizing LLMs. In a new generative AI test this round, 1,024 NVIDIA Hopper architecture GPUs completed a training benchmark based on the Stable Diffusion text-to-image model in 2.5 minutes, setting a high bar on this new workload. By adopting these two tests, MLPerf reinforces its position as the industry standard for measuring AI performance, since generative AI is the most transformative technology of our time.
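As a quick sanity check on the figures quoted above, using only the article's own numbers:

```python
# Quick check of the speedup figures quoted above.
old_minutes = 10.9   # NVIDIA's previous GPT-3 175B benchmark record
new_minutes = 3.9    # Eos result with 10,752 H100 GPUs
print(f"speedup vs. prior record: {old_minutes / new_minutes:.2f}x")  # ~2.8x, "nearly 3x"

# The "eight days" figure is an extrapolation from the benchmark slice to the full
# GPT-3 data set; the 512-GPU A100 baseline is then quoted as ~73x slower:
eos_full_days = 8
print(f"prior 512x A100 system, extrapolated: ~{eos_full_days * 73} days")
```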

AMD, Arm, Intel, Meta, Microsoft, NVIDIA, and Qualcomm Standardize Next-Generation Narrow Precision Data Formats for AI

Realizing the full potential of next-generation deep learning requires highly efficient AI infrastructure. For a computing platform to be scalable and cost-efficient, optimizing every layer of the AI stack, from algorithms to hardware, is essential. Advances in narrow-precision AI data formats and the associated optimized algorithms have been pivotal to this journey, allowing the industry to move from traditional 32-bit floating-point precision down to formats as narrow as 8 bits (i.e., OCP FP8).

Narrower formats allow silicon to execute more efficient AI calculations per clock cycle, which accelerates model training and inference times. AI models take up less space, which means they require fewer data fetches from memory and can run with better performance and efficiency. Additionally, fewer bit transfers reduce data movement over the interconnect, which can enhance application performance or cut network costs.
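As a rough illustration of the memory and bandwidth argument, the sketch below quantizes an FP32 tensor to 8 bits and compares sizes and error. It uses simple per-tensor symmetric int8 quantization as a stand-in; the OCP FP8 and related narrow formats referenced above use different bit layouts and scaling rules.

```python
# Minimal sketch of why narrower formats help: an 8-bit representation uses 4x less
# memory (and interconnect bandwidth) than FP32 for the same tensor. Per-tensor
# symmetric int8 quantization is used here as a simple stand-in for OCP FP8.
import numpy as np

weights = np.random.randn(1024, 1024).astype(np.float32)

scale = np.abs(weights).max() / 127.0          # one scale for the whole tensor
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequant = q.astype(np.float32) * scale

print(f"fp32 size: {weights.nbytes / 1e6:.1f} MB, int8 size: {q.nbytes / 1e6:.1f} MB")
print(f"mean abs quantization error: {np.abs(weights - dequant).mean():.5f}")
```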

Comcast and Broadcom to Develop the World's First AI-Powered Access Network With Pioneering New Chipset

Comcast and Broadcom today announced joint efforts to develop the world's first AI-powered access network with a new chipset that embeds artificial intelligence (AI) and machine learning (ML) within the nodes, amps and modems that comprise the last few miles of Comcast's network. With these new capabilities broadly deployed throughout the network, Comcast will be able to transform its operations by automating more network functions and delivering an improved customer experience through better and more actionable intelligence.

Additionally, the new chipset will be the first in the world to incorporate DOCSIS 4.0 Full Duplex (FDX), Extended Spectrum DOCSIS (ESD) and the ability to run both simultaneously, enabling Internet service providers across the globe to deliver DOCSIS 4.0 services using a toolkit with technology options to meet their business needs. DOCSIS 4.0 is the next-generation network technology that will introduce symmetrical multi-gigabit Internet speeds, lower latency, and even better security and reliability to hundreds of millions of people and businesses over their existing connections, without the need for major construction of new network infrastructure.

AMD to Acquire Open-Source AI Software Expert Nod.ai

AMD today announced the signing of a definitive agreement to acquire Nod.ai to expand the company's open AI software capabilities. The acquisition will bring AMD an experienced team that has developed industry-leading software technology that accelerates the deployment of AI solutions optimized for AMD Instinct data center accelerators, Ryzen AI processors, EPYC processors, Versal SoCs and Radeon GPUs. The agreement strongly aligns with the AMD AI growth strategy, which centers on an open software ecosystem that lowers the barriers of entry for customers through developer tools, libraries and models.

"The acquisition of Nod.ai is expected to significantly enhance our ability to provide AI customers with open software that allows them to easily deploy highly performant AI models tuned for AMD hardware," said Vamsi Boppana, senior vice president, Artificial Intelligence Group at AMD. "The addition of the talented Nod.ai team accelerates our ability to advance open-source compiler technology and enable portable, high-performance AI solutions across the AMD product portfolio. Nod.ai's technologies are already widely deployed in the cloud, at the edge and across a broad range of end point devices today."

Broadcom Partners with Google Cloud to Strengthen Gen AI-Powered Cybersecurity

Symantec, a division of Broadcom Inc., is partnering with Google Cloud to embed generative AI (gen AI) into the Symantec Security platform in a phased rollout that will give customers a significant technical edge for detecting, understanding, and remediating sophisticated cyber attacks.

Symantec is leveraging the Google Cloud Security AI Workbench and its security-specific large language model (LLM), Sec-PaLM 2, across its portfolio to enable natural language interfaces and generate more comprehensive and easy-to-understand threat analyses. With Security AI Workbench-powered summarization of complex incidents and alignment to MITRE ATT&CK context, security operations center (SOC) analysts of all levels can better understand threats and respond faster. That, in turn, translates into greater security and higher SOC productivity.

Supermicro Introduces a Number of Density and Power Optimized Edge Platforms for Telco Providers, Based on the New AMD EPYC 8004 Series Processor

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is announcing the AMD-based Supermicro H13 generation of WIO servers, optimized to deliver strong performance and energy efficiency for edge and telco data centers powered by the new AMD EPYC 8004 Series processors. The new Supermicro H13 WIO and short-depth front-I/O systems deliver energy-efficient single-socket servers that lower operating costs for enterprise, telco, and edge applications. These systems are designed with a dense form factor and flexible I/O options for storage and networking, making the new servers ideal for deployment in edge networks.

"We are excited to expand our AMD EPYC-based server offerings optimized to deliver excellent TCO and energy efficiency for data center networking and edge computing," said Charles Liang, president and CEO of Supermicro. "Adding to our already industry leading edge-to-cloud rack scale IT solutions, the new Supermicro H13 WIO systems with PCIe 5.0 and DDR5-4800 MHz memory show tremendous performance for edge applications."

Andes Announces General Availability of the New AndesCore RISC-V Multicore Vector Processor AX45MPV

Andes Technology, a leading supplier of high-efficiency, low-power 32/64-bit RISC-V processor cores and a Founding Premier member of RISC-V International, today proudly announces general availability of the high-performance AndesCore AX45MPV multicore vector processor IP. The AX45MPV is the third generation of the award-winning AndesCore vector processor series. Equipped with powerful RISC-V vector processing and parallel execution capability, it targets applications with large volumes of data, such as ADAS, AI inference and training, AR/VR, multimedia, robotics, and signal processing.

Andes and Meta began collaborating on datacenter AI with RISC-V vector cores in early 2019. At the end of 2019, Andes unveiled the AndesCore NX27V, a significant milestone as the industry's first commercial RISC-V vector processor core capable of generating up to four 512-bit vector (VLEN) results per cycle. It immediately attracted the attention of SoC design teams worldwide working on AI accelerators and has landed over a dozen datacenter AI projects. Since then, RISC-V vector processor cores have become a go-to choice for ML and AI chip vendors.

Google Cloud and NVIDIA Expand Partnership to Advance AI Computing, Software and Services

Google Cloud Next—Google Cloud and NVIDIA today announced new AI infrastructure and software for customers to build and deploy massive models for generative AI and speed data science workloads.

In a fireside chat at Google Cloud Next, Google Cloud CEO Thomas Kurian and NVIDIA founder and CEO Jensen Huang discussed how the partnership is bringing end-to-end machine learning services to some of the largest AI customers in the world—including by making it easy to run AI supercomputers with Google Cloud offerings built on NVIDIA technologies. The new hardware and software integrations utilize the same NVIDIA technologies employed over the past two years by Google DeepMind and Google research teams.

Tachyum Achieves 192-Core Chip After Switch to New EDA Tools

Tachyum today announced that the new EDA tools adopted during the physical design phase of the Prodigy Universal Processor have allowed the company to achieve significantly better chip specifications than previously anticipated, including an increase in the number of Prodigy cores to 192.

After RTL design coding, Tachyum began work on completing the physical design (the actual placement of transistors and wires) for Prodigy. Because the Prodigy design team had to replace IPs, it also had to replace its RTL simulation and physical design tools. Armed with the new set of EDA tools, Tachyum was able to optimize settings and options, increasing the number of cores by 50 percent and the SerDes count from 64 to 96 per chip. Die size grew minimally, from 500 mm² to 600 mm², to accommodate the improved capabilities. While Tachyum could add more of its very efficient cores and still fit within the 858 mm² reticle limit, those cores would be memory-bandwidth limited, even with 16 DDR5 controllers running in excess of 7200 MT/s. Tachyum says its cores have much higher performance than any other processor cores.

Lightelligence Introduces Optical Interconnect for Composable Data Center Architectures

Lightelligence, the global leader in photonic computing and connectivity systems, today announced Photowave, the first optical communications hardware designed for PCIe and Compute Express Link (CXL) connectivity, unleashing next-generation workload efficiency.

Photowave, an Optical Networking (oNET) transceiver leveraging the latency and energy-efficiency advantages of photonics technology, empowers data center managers to scale resources within or across server racks. The first public demonstration of Photowave will be at the Flash Memory Summit, today through Thursday, August 10, in Santa Clara, Calif.

Supermicro Expands AMD Product Lines with New Servers and New Processors Optimized for Cloud Native Infrastructure

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is announcing that its entire line of H13 AMD-based systems is now available with support for 4th Gen AMD EPYC processors based on the "Zen 4c" architecture, as well as 4th Gen AMD EPYC processors with AMD 3D V-Cache technology. Supermicro servers powered by 4th Gen AMD EPYC processors for cloud-native computing, with leading thread density and 128 cores per socket, deliver impressive rack density and scalable performance with energy efficiency to deploy cloud-native workloads in more consolidated infrastructure. These systems are targeted at cloud operators that need to meet the ever-growing demands of user sessions and deliver new AI-enabled services. Servers featuring AMD 3D V-Cache technology excel at running technical applications in FEA, CFD, and EDA; the large Level 3 cache enables these types of applications to run faster than ever before. Over 50 world-record benchmarks have been set with AMD EPYC processors over the past few years.

"Supermicro continues to push the boundary of our product lines to meet customers' requirements. We design and deliver resource-saving, application-optimized servers with rack scale integration for rapid deployments," said Charles Liang, president, and CEO of Supermicro. "With our growing broad portfolio of systems fully optimized for the latest 4th Gen AMD EPYC processors, cloud operators can now achieve extreme density and efficiency for numerous users and cloud-native services even in space-constrained data centers. In addition, our enhanced high performance, multi-socket, multi-node systems address a wide range of technical computing workloads and dramatically reduce time-to-market for manufacturing companies to design, develop, and validate new products leveraging the accelerated performance of memory intensive applications."

Arm Launches the Cortex-X4, A720 and A520, Immortalis-G720 GPU

Mobile devices touch every aspect of our digital lives. In the palm of your hand is the ability to both create and consume increasingly immersive, AI-accelerated experiences that continue to drive the need for more compute. Arm is at the heart of many of these, bringing unlimited delight, productivity and success to more people than ever. Every year we build foundational platforms designed to meet these increasing compute demands, with a relentless focus on high performance and efficiency. Working closely with our broader ecosystem, we're delivering the performance, efficiency and intelligence needed on every generation of consumer device to expand our digital lifestyles.

Today we are announcing Arm Total Compute Solutions 2023 (TCS23), which will be the platform for mobile computing, offering our best ever premium solution for smartphones. TCS23 delivers a complete package of the latest IP designed and optimized for specific workloads to work seamlessly together as a complete system. This includes a new world-class Arm Immortalis GPU based on our brand-new 5th Generation GPU architecture for ultimate visual experiences, a new cluster of Armv9 CPUs that continue our performance leadership for next-gen artificial intelligence (AI), and new enhancements to deliver more accessible software for the millions of Arm developers.

Google Merges its AI Subsidiaries into Google DeepMind

Google has announced that it is officially merging its subsidiaries focused on artificial intelligence into a single group. More specifically, Google Brain and DeepMind are joining forces to become a single unit called Google DeepMind. As Google CEO Sundar Pichai notes: "This group, called Google DeepMind, will bring together two leading research groups in the AI field: the Brain team from Google Research, and DeepMind. Their collective accomplishments in AI over the last decade span AlphaGo, Transformers, word2vec, WaveNet, AlphaFold, sequence to sequence models, distillation, deep reinforcement learning, and distributed systems and software frameworks like TensorFlow and JAX for expressing, training and deploying large scale ML models."

Demis Hassabis, previously CEO of DeepMind, will lead the group as its CEO and work alongside Jeff Dean, now promoted to Google's Chief Scientist, who will report to Sundar Pichai. In his new role, Jeff Dean will serve as Chief Scientist for both Google Research and Google DeepMind, where he will set the direction of AI research at the two units. This corporate restructuring should help the previously separate teams work toward a single plan and advance AI capabilities faster. We are eager to see what these teams accomplish together.

NVIDIA H100 AI Performance Receives up to 54% Uplift with Optimizations

On Wednesday, the MLCommons team released the MLPerf 3.0 Inference numbers, and there was an exciting submission from NVIDIA. Reportedly, NVIDIA has used software optimization to improve the already staggering performance of its latest H100 GPU by up to 54%. For reference, NVIDIA's H100 GPU first appeared on MLPerf 2.1 back in September of 2022. In just six months, NVIDIA engineers worked on AI optimizations for the MLPerf 3.0 release and found that software optimization alone can deliver performance increases anywhere from 7% to 54%. The workloads in the inference suite included RNN-T speech recognition, 3D U-Net medical imaging, RetinaNet object detection, ResNet-50 object classification, DLRM recommendation, and BERT 99/99.9% natural language processing.

What is interesting is that NVIDIA's submission is a bit nuanced. MLPerf has closed and open categories that vendors can compete in: the closed category requires a model that is mathematically equivalent to the reference neural network, which is what makes it an "apples-to-apples" hardware comparison, while the open category is flexible and allows vendors to submit results based on optimizations for their hardware. Given that NVIDIA opted for the closed category, performance optimizations from other vendors such as Intel and Qualcomm are not reflected here. Still, it is interesting that optimization can lead to a performance increase of up to 54% in NVIDIA's case with its H100 GPU. Another interesting takeaway is that some comparable hardware, like the Qualcomm Cloud AI 100, Intel Xeon Platinum 8480+, and NeuChips' ReccAccel N3000, failed to finish all the workloads. This is shown as "X" on the slides made by NVIDIA, stressing the need for proper ML system software support, which is NVIDIA's strength and a core marketing claim.
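For context, a minimal way to measure inference throughput on one of the workloads named above (ResNet-50) might look like the sketch below. This is not the MLPerf harness or its rules, just a rough PyTorch/torchvision timing loop over random input, so it reflects compute speed only, not accuracy.

```python
# Not the MLPerf harness; just a rough way to measure ResNet-50 inference throughput,
# one of the workloads listed above. Random input, so accuracy is not measured.
import time
import torch
from torchvision.models import resnet50

device = "cuda" if torch.cuda.is_available() else "cpu"
model = resnet50(weights=None).eval().to(device)
batch = torch.randn(32, 3, 224, 224, device=device)

with torch.no_grad():
    for _ in range(5):               # warm-up iterations
        model(batch)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    iters = 20
    for _ in range(iters):
        model(batch)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"{iters * batch.shape[0] / elapsed:.1f} images/s on {device}")
```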

NVIDIA Hopper GPUs Expand Reach as Demand for AI Grows

NVIDIA and key partners today announced the availability of new products and services featuring the NVIDIA H100 Tensor Core GPU—the world's most powerful GPU for AI—to address rapidly growing demand for generative AI training and inference. Oracle Cloud Infrastructure (OCI) announced the limited availability of new OCI Compute bare-metal GPU instances featuring H100 GPUs. Additionally, Amazon Web Services announced its forthcoming EC2 UltraClusters of Amazon EC2 P5 instances, which can scale in size up to 20,000 interconnected H100 GPUs. This follows Microsoft Azure's private preview announcement last week for its H100 virtual machine, ND H100 v5.

Additionally, Meta has now deployed its H100-powered Grand Teton AI supercomputer internally for its AI production and research teams. NVIDIA founder and CEO Jensen Huang announced during his GTC keynote today that NVIDIA DGX H100 AI supercomputers are in full production and will be coming soon to enterprises worldwide.

Supermicro Expands Storage Solutions Portfolio for Intensive I/O Workloads with Industry Standard Based All-Flash Servers Utilizing EDSFF E3.S and E1.S

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is announcing the latest addition to its revolutionary ultra-high-performance, high-density, petascale-class all-flash NVMe server family. Supermicro systems in this high-performance storage product family will support the next-generation EDSFF form factors, including E3.S and E1.S devices, in chassis that accommodate 16 or 32 high-performance PCIe Gen 5 NVMe drive bays.

The initial offering of the updated product line will support up to half a petabyte of storage in a 1U 16-bay rackmount system, followed by a full petabyte of storage in a 2U 32-bay rackmount system, for both Intel and AMD PCIe Gen 5 platforms. All of the Supermicro systems that support either the E1.S or E3.S form factor enable customers to realize the benefits of EDSFF in various application-optimized servers.
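Back-of-the-envelope math on those configurations (approximate, since "half a petabyte" is a rounded figure and shipping E3.S drives come in capacities such as 30.72 TB):

```python
# Rough capacity-per-drive math for the configurations described above.
petabyte_tb = 1000                 # decimal units, as storage vendors use
print(f"1U, 16 bays, 0.5 PB: ~{0.5 * petabyte_tb / 16:.1f} TB per E3.S drive")
print(f"2U, 32 bays, 1 PB:   ~{1.0 * petabyte_tb / 32:.1f} TB per E3.S drive")
```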

Intel's Transition of OpenFL Primes Growth of Confidential AI

Today, Intel announced that the LF AI & Data Foundation Technical Advisory Council accepted Open Federated Learning (OpenFL) as an incubation project to further drive collaboration, standardization and interoperability. OpenFL is an open source framework for a type of distributed AI referred to as federated learning (FL) that incorporates privacy-preserving features called confidential computing. It was developed and hosted by Intel to help data scientists address the challenge of maintaining data privacy while bringing together insights from many disparate, confidential or regulated data sets.
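OpenFL's own API is built around aggregator and collaborator components driven by an FL plan; the sketch below instead illustrates the underlying federated-averaging idea in plain NumPy: each site computes an update on its private data, and only model parameters, never raw data, are shared and averaged. The toy regression task and all names are hypothetical.

```python
# Generic illustration of the federated-learning idea behind OpenFL: each site trains
# on its own private data and only model parameters are shared and averaged. This is
# plain NumPy FedAvg, not OpenFL's actual aggregator/collaborator API.
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One step of least-squares gradient descent on a site's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(3):                                  # e.g. three hospitals or banks
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    sites.append((X, y))

weights = np.zeros(2)
for _ in range(50):
    local_models = [local_update(weights.copy(), X, y) for X, y in sites]
    weights = np.mean(local_models, axis=0)         # aggregator averages parameters only

print("federated estimate:", weights.round(3), "true:", true_w)
```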

"We are thrilled to welcome OpenFL to the LF AI & Data Foundation. This project's innovative approach to enabling organizations to collaboratively train machine learning models across multiple devices or data centers without the need to share raw data aligns perfectly with our mission to accelerate the growth and adoption of open source AI and data technologies. We look forward to collaborating with the talented individuals behind this project and helping to drive its success," said Dr. Ibrahim Haddad, executive director, LF AI & Data Foundation.

Renesas to Demonstrate First AI Implementations on the Arm Cortex-M85 Processor Featuring Helium Technology

Renesas Electronics Corporation, a premier supplier of advanced semiconductor solutions, today announced that it will present the first live demonstrations of artificial intelligence (AI) and machine learning (ML) implementations on an MCU based on the Arm Cortex-M85 processor. The demos will show the performance uplift in AI/ML applications made possible by the new Cortex-M85 core and Arm's Helium technology. They will take place at the Renesas stand, Hall 1, Stand 234 (1-234), at the embedded world 2023 Exhibition and Conference in Nuremberg, Germany, from March 14-16.

At embedded world 2022, Renesas became the first company to demonstrate working silicon based on the Arm Cortex-M85 processor. This year, Renesas is extending its leadership by showcasing the features of the new processor in demanding AI use cases. The first demonstration showcases a people-detection application, developed in collaboration with Plumerai, a leader in Vision AI, that identifies and tracks persons in the camera frame under varying lighting and environmental conditions. The compact and efficient TinyML models used in this application enable low-cost, lower-power AI solutions for a wide range of IoT implementations. The second demo showcases a motor-control predictive-maintenance use case with an AI-based unbalanced-load detection application using TensorFlow Lite for Microcontrollers with CMSIS-NN.
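The Renesas demos use Plumerai's proprietary models, but the typical TinyML deployment path they imply can be sketched as follows: build a small Keras model, then convert it to a fully int8-quantized .tflite flatbuffer that TensorFlow Lite for Microcontrollers (with CMSIS-NN kernels) can execute on a Cortex-M class core. The toy model, feature size, and random calibration data here are placeholders, not Renesas's or Plumerai's actual models.

```python
# Sketch of a TinyML deployment path: train a small Keras model, then convert it to a
# fully int8-quantized .tflite file for TensorFlow Lite for Microcontrollers.
# The model and calibration data are placeholders.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),           # e.g. vibration features for load detection
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

def representative_data():
    for _ in range(100):                          # calibration samples for int8 ranges
        yield [np.random.rand(1, 64).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("load_detector_int8.tflite", "wb") as f:
    f.write(converter.convert())                  # flatbuffer for TFLite Micro
```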

Revenue from Enterprise SSDs Totaled Just US$3.79 Billion for 4Q22 Due to Slumping Demand and Widening Decline in SSD Contract Prices, Says TrendForce

Looking back at 2H22, as server OEMs slowed the momentum of their product shipments, Chinese server buyers also held a conservative outlook on future demand and focused on inventory reduction, so the flow of orders for enterprise SSDs remained sluggish. However, NAND Flash suppliers had to step up shipments of enterprise SSDs during 2H22 because demand for storage components used in notebook (laptop) computers and smartphones had undergone very large downward corrections. Compared with other categories of NAND Flash products, enterprise SSDs represented the only significant source of bit consumption. Ultimately, due to the imbalance between supply and demand, the QoQ decline in enterprise SSD prices widened to 25% for 4Q22. This price plunge, in turn, caused quarterly total revenue from enterprise SSDs to drop by 27.4% QoQ to around US$3.79 billion. TrendForce projects that the NAND Flash industry will again post a QoQ revenue decline for this product category in 1Q23.
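A quick check of the arithmetic behind those figures:

```python
# Back-of-the-envelope check of TrendForce's figures: a 27.4% QoQ revenue drop to
# US$3.79 billion implies roughly US$5.2 billion in 3Q22.
q4_revenue = 3.79           # billion USD
qoq_drop = 0.274
print(f"implied 3Q22 revenue: ~US${q4_revenue / (1 - qoq_drop):.2f} billion")
```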

Ayar Labs Demonstrates Industry's First 4-Tbps Optical Solution, Paving Way for Next-Generation AI and Data Center Designs

Ayar Labs, a leader in the use of silicon photonics for chip-to-chip optical connectivity, today announced the public demonstration of the industry's first 4 terabit-per-second (Tbps) bidirectional Wavelength Division Multiplexing (WDM) optical solution at the upcoming Optical Fiber Communication Conference (OFC) in San Diego on March 5-9, 2023. The company achieves this latest milestone as it works with leading high-volume manufacturing and supply partners, including GlobalFoundries, Lumentum, Macom, Sivers Photonics and others, to deliver the optical interconnects needed for data-intensive applications. Separately, the company was featured in an announcement with partner Quantifi Photonics on a CW-WDM-compliant test platform for its SuperNova light source, also at OFC.

In-package optical I/O uniquely changes the power and performance trajectories of system design by enabling compute, memory and network silicon to communicate with a fraction of the power and dramatically improved performance, latency and reach versus existing electrical I/O solutions. Delivered in a compact, co-packaged CMOS chiplet, optical I/O becomes foundational to next-generation AI, disaggregated data centers, dense 6G telecommunications systems, phased array sensory systems and more.