News Posts matching #HPC


Phison Debuts the X1 to Provide the Industry's Most Advanced Enterprise SSD Solution

Phison Electronics Corp., a global leader in NAND flash controller and storage solutions, today announced the launch of its X1 controller-based solid state drive (SSD) platform, which delivers the industry's most advanced enterprise SSD solution. Engineered with Phison's technology to meet the evolving demands of faster and smarter global data-center infrastructures, the X1 SSD platform was designed in partnership with Seagate Technology Holdings plc, a world leader in mass-data storage infrastructure solutions. The customizable X1 SSD platform offers more computing with less energy consumption. A cost-effective solution that eliminates bottlenecks and improves quality of service, the X1 delivers more than a 30 percent increase in data reads over existing market competitors at the same power.

"We combined Seagate's proprietary data management and customer integration capabilities with Phison's cutting-edge technology to create highly customized SSDs that meet the ever-evolving needs of the enterprise storage market," said Sai Varanasi, senior vice president of product and business marketing at Seagate Technology. "Seagate is excited to partner with Phison on developing advanced SSD technology to provide the industry with increased density, higher performance and power efficiency for all mass capacity storage providers."

Avicena Raises $25 Million in Series A to Fund Development of High Capacity microLED-based Optical Interconnects

AvicenaTech Corp., the leader in microLED-based chip-to-chip interconnects, today announced that the company has secured $25M in Series A funding from Samsung Catalyst Fund, Cerberus Capital Management, Clear Ventures, and Micron Ventures to drive the development of products based on Avicena's breakthrough photonic I/O solution. "We believe that Avicena technology can be transformational in unlocking compute-to-memory chip-to-chip high-speed interconnects. Such technology can be central to supporting future disaggregated architectures and distributed high-performance computing (HPC) systems," said Marco Chisari, EVP of Samsung Electronics and Head of the Samsung Semiconductor Innovation Center.

"We are excited to participate in this round at Avicena," said Amir Salek, Senior Managing Director at Cerberus Capital Management and former Head of silicon for Google Infrastructure and Cloud. "Avicena has a highly differentiated technology addressing one of the main challenges in modern computer architecture. The technology offered by Avicena meets the needs for scaling future HPC and cloud compute networks and covers applications in conventional datacenter and 5G cellular networking."

Supermicro Launches Multi-GPU Cloud Gaming Solutions Based on Intel Arctic Sound-M

Super Micro Computer, Inc., a global leader in enterprise computing, storage, networking, and green computing technology, has announced upcoming Total IT Solutions for Android cloud gaming and media processing and delivery. These new solutions will incorporate the Intel Data Center GPU, codenamed Arctic Sound-M, and will be supported on several Supermicro servers: the 4U 10x GPU server for transcoding and media delivery; the Supermicro BigTwin system with up to eight of the GPUs in 2U for media processing applications; the Supermicro CloudDC server for edge AI inferencing; and the Supermicro 2U 2-Node server with three of the GPUs per node, optimized for cloud gaming. Additional systems will be made available later this year.

"Supermicro will extend our media processing solutions by incorporating the Intel Data Center GPU," said Charles Liang, president and CEO of Supermicro. "The new solutions will increase video stream rates and enable lower latency Android cloud gaming. As a result, Android cloud gaming performance and interactivity will increase dramatically with the Supermicro BigTwin systems, while media delivery and transcoding will show dramatic improvements with the new Intel Data Center GPUs. The solutions will expand our market-leading accelerated computing offerings, including everything from Media Processing & Delivery to Collaboration, and HPC."

Semiconductor Fab Order Cancellations Expected to Result in Reduced Capacity Utilization Rate in 2H22

According to TrendForce investigations, foundries have seen a wave of order cancellations, with the first of these revisions originating from large-size Driver IC and TDDI, which rely on mainstream 0.1X μm and 55 nm processes, respectively. Although products such as MCUs and PMICs were previously in short supply, foundries kept capacity utilization roughly at full capacity by adjusting their product mix. However, a recent wave of cancellations has emerged for PMIC, CIS, and certain MCU and SoC orders. Although still dominated by consumer applications, foundries are beginning to feel the strain of the copious order cancellations from customers, and capacity utilization rates have officially declined.

Looking at trends in 2H22, TrendForce indicates that, in addition to no relief in the sustained downgrade of driver IC demand, inventory adjustment has begun for smartphone, PC, and TV-related peripheral components such as SoCs, CIS, and PMICs, and companies are beginning to curtail their wafer input plans with foundries. This phenomenon of order cancellations is occurring simultaneously in 8-inch and 12-inch fabs at nodes including 0.1X μm, 90/55 nm, and 40/28 nm. Not even the advanced 7/6 nm processes are immune.

Cerebras Systems Sets Record for Largest AI Models Ever Trained on A Single Device

Cerebras Systems, the pioneer in high performance artificial intelligence (AI) computing, today announced, for the first time ever, the ability to train models with up to 20 billion parameters on a single CS-2 system - a feat not possible on any other single device. By enabling a single CS-2 to train these models, Cerebras reduces the system engineering time necessary to run large natural language processing (NLP) models from months to minutes. It also eliminates one of the most painful aspects of NLP: the partitioning of the model across hundreds or thousands of small graphics processing units (GPUs).

"In NLP, bigger models are shown to be more accurate. But traditionally, only a very select few companies had the resources and expertise necessary to do the painstaking work of breaking up these large models and spreading them across hundreds or thousands of graphics processing units," said Andrew Feldman, CEO and Co-Founder of Cerebras Systems. "As a result, only very few companies could train large NLP models - it was too expensive, time-consuming and inaccessible for the rest of the industry. Today we are proud to democratize access to GPT-3 1.3B, GPT-J 6B, GPT-3 13B and GPT-NeoX 20B, enabling the entire AI ecosystem to set up large models in minutes and train them on a single CS-2."
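For context on why such partitioning is normally unavoidable, here is a rough rule-of-thumb estimate (my own illustration, not Cerebras' accounting): mixed-precision Adam training typically costs on the order of 16 bytes per parameter for weights, gradients, and optimizer state, so the larger of these models far exceed a single GPU's memory.

```python
# Rough training-memory estimate per model (illustrative rule of thumb:
# ~16 bytes/parameter for weights + gradients + Adam optimizer state).

def training_memory_gb(n_params: float, bytes_per_param: int = 16) -> float:
    """Approximate memory needed to train a model of n_params parameters."""
    return n_params * bytes_per_param / 1e9

for name, params in [("GPT-3 1.3B", 1.3e9), ("GPT-J 6B", 6e9), ("GPT-NeoX 20B", 20e9)]:
    print(f"{name}: ~{training_memory_gb(params):.0f} GB")
# GPT-NeoX 20B works out to ~320 GB, far beyond a single 80 GB GPU,
# which is why such models are normally sharded across many devices.
```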

AMD Instinct MI300 APU to Power El Capitan Exascale Supercomputer

The Exascale supercomputing race is now well underway, as the US-based Frontier supercomputer got delivered, and now we wait to see the remaining systems join the race. Today, during 79th HPC User Forum at Oak Ridge National Laboratory (ORNL), Terri Quinn at Lawrence Livermore National Laboratory (LLNL) delivered a few insights into what El Capitan exascale machine will look like. And it seems like the new powerhouse will be based on AMD's Instinct MI300 APU. LLNL targets peak performance of over two exaFLOPs and a sustained performance of more than one exaFLOP, under 40 megawatts of power. This should require a very dense and efficient computing solution, just like the MI300 APU is.

As a reminder, the AMD Instinct MI300 is an APU that combines Zen 4 x86-64 CPU cores, CDNA3 compute-oriented graphics, large cache structures, and HBM memory used as DRAM on a single package. This is achieved using a multi-chip module design with 2.5D and 3D chiplet integration using Infinity architecture. The system will essentially utilize thousands of these APUs to become one large Linux cluster. It is slated for installation in 2023, with an operating lifespan from 2024 to 2030.

PCI-SIG Announces PCI Express 7.0 Specification to Reach 128 GT/s

PCI-SIG today announced that the PCI Express (PCIe) 7.0 specification will double the data rate to 128 GT/s and is targeted for release to members in 2025. "For 30 years the guiding principle of PCI-SIG has been, 'If we build it, they will come,'" observed Nathan Brookwood, Research Fellow at Insight 64. "Early parallel versions of PCI technology accommodated speeds of hundreds of megabytes/second, well matched to the graphics, storage and networking demands of the 1990s.

In 2003, PCI-SIG evolved to a serial design that supported speeds of gigabytes/second to accommodate faster solid-state disks and 100MbE Ethernet. Almost like clockwork, PCI-SIG has doubled PCIe specification bandwidth every three years to meet the challenges of emerging applications and markets. Today's announcement of PCI-SIG's plan to double the channel's speed to 512 GB/s (bi-directionally) puts it on track to double PCIe specification performance for another 3-year cycle."
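As a quick sanity check on those figures (my own arithmetic, not PCI-SIG's), the per-lane data rate doubles each generation, and a x16 link at 128 GT/s works out to 512 GB/s bidirectionally before encoding overhead:

```python
# Back-of-the-envelope PCIe x16 bandwidth per generation (illustrative only;
# real throughput depends on encoding, e.g. 128b/130b or PAM4 with FLITs).

def x16_bandwidth_gb_s(gt_per_s: float, bidirectional: bool = False) -> float:
    """Approximate x16 link bandwidth in GB/s from the per-lane data rate."""
    gb_s = gt_per_s * 16 / 8                  # 16 lanes, 8 bits per byte
    return gb_s * 2 if bidirectional else gb_s

# Per-lane rate doubles each generation: PCIe 3.0 (8 GT/s) ... PCIe 7.0 (128 GT/s)
rates = {gen: 8 * 2 ** (gen - 3) for gen in range(3, 8)}

for gen, rate in rates.items():
    print(f"PCIe {gen}.0: {rate} GT/s, x16 ~ {x16_bandwidth_gb_s(rate, True):.0f} GB/s bidirectional")
```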

EuroHPC Joint Undertaking Announces Five Sites to Host new World-Class Supercomputers

JUPITER, the first European exascale supercomputer, will be hosted by the Jülich Supercomputing Centre in Germany. Exascale supercomputers are systems capable of performing more than a billion billion calculations per second and represent a significant milestone for Europe. By supporting the development of high-precision models of complex systems, they will have a major impact on European scientific excellence.

Researchers Use SiFive's RISC-V SoC to Build a Supercomputer

Researchers from Università di Bologna and CINECA, the largest supercomputing center in Italy, have been exploring the concept of a RISC-V supercomputer. The team has laid the groundwork for a first-of-its-kind implementation that demonstrates the capability of the relatively novel ISA to run high-performance computing workloads. To create a supercomputer, you need hardware pieces that stack like Lego building blocks: nodes, each made from a motherboard, processor, memory, and storage, joined together into a cluster. The Italian researchers decided to try something other than the usual Intel/AMD solutions and use a processor based on the RISC-V ISA. Using SiFive's Freedom U740 SoC as the base, the researchers named their RISC-V cluster "Monte Cimone."

Monte Cimone features four dual-board servers, each in a 1U form factor. Each board carries a SiFive Freedom U740 SoC with four U74 cores running at up to 1.4 GHz and one S7 management core; in total, the eight nodes combine for 32 RISC-V cores. Each node pairs 16 GB of 64-bit DDR4 memory operating at 1866 MT/s with a PCIe Gen 3 x8 bus running at 7.8 GB/s, one Gigabit Ethernet port, and USB 3.2 Gen 1 interfaces. The system is powered by two 250 W PSUs to support future expansion and the addition of accelerator cards.
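Those aggregate figures check out with some quick arithmetic (my own illustration, ignoring protocol overhead beyond line encoding):

```python
# Sanity-checking Monte Cimone's aggregate figures (illustrative arithmetic).

servers, boards_per_server, cores_per_soc = 4, 2, 4
total_cores = servers * boards_per_server * cores_per_soc
print(total_cores)  # 32 RISC-V application cores across 8 nodes

# PCIe Gen 3 x8: 8 GT/s per lane with 128b/130b line encoding
lanes, gt_per_s = 8, 8
bandwidth_gb_s = gt_per_s * lanes * (128 / 130) / 8
print(f"{bandwidth_gb_s:.1f} GB/s")  # ~7.9 GB/s, close to the quoted 7.8 GB/s
```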

Intel Announces "Rialto Bridge" Accelerated AI and HPC Processor

During the International Supercomputing Conference on May 31, 2022, in Hamburg, Germany, Jeff McVeigh, vice president and general manager of the Super Compute Group at Intel Corporation, announced Rialto Bridge, Intel's data center graphics processing unit (GPU). Using the same architecture as the Intel data center GPU Ponte Vecchio and combining enhanced tiles with Intel's next process node, Rialto Bridge will offer up to 160 Xe cores, more FLOPs, more I/O bandwidth and higher TDP limits for significantly increased density, performance and efficiency.

"As we embark on the exascale era and sprint towards zettascale, the technology industry's contribution to global carbon emissions is also growing. It has been estimated that by 2030, between 3% and 7% of global energy production will be consumed by data centers, with computing infrastructure being a top driver of new electricity use," said Jeff McVeigh, vice president and general manager of the Super Compute Group at Intel Corporation.

Ayar Labs Partners with NVIDIA to Deliver Light-Based Interconnect for AI Architectures

Ayar Labs, the leader in chip-to-chip optical connectivity, is developing with NVIDIA groundbreaking artificial intelligence (AI) infrastructure based on optical I/O technology to meet future demands of AI and high performance computing (HPC) workloads. The collaboration will focus on integrating Ayar Labs' technology to develop scale-out architectures enabled by high-bandwidth, low-latency and ultra-low-power optical-based interconnects for future NVIDIA products. Together, the companies plan to accelerate the development and adoption of optical I/O technology to support the explosive growth of AI and machine learning (ML) applications and data volumes.

Optical I/O uniquely changes the performance and power trajectories of system designs by enabling compute, memory and networking ASICs to communicate with dramatically increased bandwidth, at lower latency, over longer distances and at a fraction of the power of existing electrical I/O solutions. The technology is also foundational to enabling emerging heterogeneous compute systems, disaggregated/pooled designs, and unified memory architectures that are critical to accelerating future data center innovation.

GIGABYTE Joins Computex to Promote Emerging Enterprise Technologies

GIGABYTE Technology (TWSE: 2376), an industry leader in high-performance servers and workstations, today announced its participation in Computex, which runs from May 24-27. GIGABYTE's booth will showcase the latest in accelerated computing, liquid and immersion cooling, and edge solutions, and give attendees a glimpse at next-gen enterprise hardware. GIGABYTE also announced plans to support the NVIDIA Grace CPU Superchip and NVIDIA Grace Hopper Superchip. With these new hardware innovations, GIGABYTE is charting a path to meet data-center requirements for the foreseeable future; many of these important technological advances will be on show at Computex, with others remaining behind the NDA curtain.

Computex, unquestionably the world's largest computer expo, has been held annually in Taiwan for over 20 years and continues to generate great interest and anticipation from the international community. The Nangang Exhibition Center will once again host a strong presence of manufacturers and component buyers, spanning everything from the consumer PC market to the enterprise.

HPE to Build Supercomputer Factory in Czech Republic

Hewlett Packard Enterprise (NYSE: HPE) today announced its ongoing commitment in Europe by building its first factory in the region for next-generation high performance computing (HPC) and artificial intelligence (AI) systems to accelerate delivery to customers and strengthen the region's supplier ecosystem. The new site will manufacture HPE's industry-leading systems as custom-designed solutions to advance scientific research, mature AI/ML initiatives, and bolster innovation.

The dedicated HPC factory, which will become the fourth of HPE's global HPC sites, will be located in Kutná Hora, Czech Republic, next to HPE's existing European site for manufacturing its industry-standard servers and storage solutions. Operations will begin in summer 2022.

Tachyum Delivers the Highest AI and HPC Performance with the Launch of the World's First Universal Processor

Tachyum today launched the world's first universal processor, Prodigy, which unifies the functionality of a CPU, GPU and TPU in a single processor, creating a homogeneous architecture, while delivering massive performance improvements at a cost many times less than competing products.

Having undertaken a mission to conquer the processor performance plateau in nanometer-class chips and the systems they power, Tachyum has now succeeded in launching its first commercial product. The Prodigy Cloud/AI/HPC supercomputer processor chip offers 4x the performance of the fastest Xeon, 3x more raw performance than NVIDIA's H100 on HPC workloads, 6x more raw performance on AI training and inference workloads, and up to 10x performance at the same power. Prodigy is poised to overcome the challenges of increasing data-center power consumption, low server utilization, and stalled performance scaling.

Ayar Labs Raises $130 Million for Light-based Chip-to-Chip Communication

Ayar Labs, the leader in chip-to-chip optical connectivity, today announced that the company has secured $130 million in additional financing led by Boardman Bay Capital Management to drive the commercialization of its breakthrough optical I/O solution. Hewlett Packard Enterprise (HPE) and NVIDIA entered this investment round, joining existing strategic investors Applied Ventures LLC, GlobalFoundries, Intel Capital, and Lockheed Martin Ventures. Other new strategic and financial investors participating in the round include Agave SPV, Atreides Capital, Berkeley Frontier Fund, IAG Capital Partners, Infinitum Capital, Nautilus Venture Partners, and Tyche Partners. They join existing investors such as BlueSky Capital, Founders Fund, Playground Global, and TechU Venture Partners.

"As a successful technology-focused crossover fund operating for over a decade, Ayar Labs represents our largest private investment to date," said Will Graves, Chief Investment Officer at Boardman Bay Capital Management. "We believe that silicon photonics-based optical interconnects in the data center and telecommunications markets represent a massive new opportunity and that Ayar Labs is the leader in this emerging space with proven technology, a fantastic team, and the right ecosystem partners and strategy."

TSMC First Quarter 2022 Financials Show 45.1% Increase in Net Income

A new quarter and another forecast-shattering revenue report from TSMC, as the company beat analysts' forecasts by over US$658 million, with total revenue for the quarter of US$17.6 billion and net income of almost US$7.26 billion. That's a 45.1 percent increase in net income on 35.5 percent growth in sales. Although the monetary figures might be interesting to some, far more interesting details were also shared, such as production updates about future nodes. As a follow-up to yesterday's news post about 3-nanometer nodes, the N3 node is officially on track for mass production in the second half of this year. TSMC says that customer engagement is stronger than at the start of its N5 and N7 nodes, with HPC and smartphone chip makers lining up to get onboard. The N3E node is, as reported yesterday, expected to enter mass production in the second half of 2023, or a year after N3. Finally, the N2 node is expected in 2025 and won't adhere to TSMC's two-year process technology cadence.

Breaking down the revenue by nodes, N7 has taken back the lead over N5, as N7 accounted for 30 percent of TSMC's Q1 revenues up from 27 percent last quarter, but down from 35 percent in the previous year. N5 sits at 20 percent, which is down from 23 percent in the previous quarter, but up from 14 percent a year ago. The 16 and 28 nm nodes still hold on to 25 percent of TSMC's revenue, which is the same as a year ago and up slightly from the previous quarter. Remaining nodes are unchanged from last quarter.
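For scale, the node shares can be translated into approximate dollar figures (my own arithmetic from the stated percentages and the roughly US$17.6 billion quarterly revenue):

```python
# Approximate Q1 2022 revenue contribution per node family (illustrative
# arithmetic from the percentages quoted above; not TSMC's own breakdown).

q1_revenue_busd = 17.6  # total quarterly revenue, in billions of USD
shares = {"N7 family": 0.30, "N5 family": 0.20, "16/28 nm": 0.25}

for node, share in shares.items():
    print(f"{node}: ~${q1_revenue_busd * share:.1f}B")
# N7 family: ~$5.3B, N5 family: ~$3.5B, 16/28 nm: ~$4.4B
```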

Tachyum Successfully Runs FreeBSD in Prodigy Ecosystem; Expands Open-Source OS Support

Tachyum today announced it has completed validation of its Prodigy Universal Processor and software ecosystem with the operating system FreeBSD, and completed the Prodigy instruction set architecture (ISA) for FreeBSD porting. FreeBSD powers modern servers, desktops, and embedded platforms in environments that value performance, stability, and security. It is the platform of choice for many of the busiest websites and the most pervasive embedded networking and storage devices.

The validation of FreeBSD extends Tachyum's support for open-source operating systems and tools, including Linux, Yocto Project, PHP, MariaDB, PostgreSQL, Apache, QEMU, Git, RabbitMQ, and more.

NVIDIA Claims Grace CPU Superchip is 2X Faster Than Intel Ice Lake

When NVIDIA announced its Grace CPU Superchip, the company officially showed its effort to create an HPC-oriented processor to compete with Intel and AMD. The Grace CPU Superchip combines two Grace CPU modules that use NVLink-C2C technology to deliver 144 Arm v9 cores and 1 TB/s of memory bandwidth. Each core is an Arm Neoverse N2 "Perseus" design, configured to achieve the highest throughput and bandwidth. As far as performance is concerned, the only detail NVIDIA provides on its website is an estimated SPECrate 2017_int_base score of over 740. Thanks to the colleagues over at Tom's Hardware, we have another performance figure to look at.

NVIDIA has published a slide comparing the chip with Intel's Ice Lake server processors. One Grace CPU Superchip was compared to two Xeon Platinum 8360Y Ice Lake CPUs configured in a dual-socket server node. The Grace CPU Superchip outperformed the Ice Lake configuration by two times and provided 2.3 times the efficiency in a WRF simulation. This HPC application is CPU-bound, allowing the new Grace CPU to show off, thanks to the Arm v9 Neoverse N2 cores pairing efficiency with outstanding performance. NVIDIA made a graph showcasing all HPC applications running on Arm today, with many more to come, which you can see below. Remember that this information comes from NVIDIA, so we have to wait for the 2023 launch to see it in action.

Intel Planning a Return to HEDT with "Alder Lake-X"?

Enthused with its IPC leadership, Intel is possibly planning a return to the high-end desktop (HEDT) market segment, with the "Alder Lake-X" line of processors, according to a Tom's Hardware report citing a curious-looking addition to an AIDA64 beta change-log. The exact nature of "Alder Lake-X" (ADL-X) still remains a mystery—one theory holds that ADL-X could be a consumer variant of the "Sapphire Rapids" microarchitecture, much like how the 10th Gen Core "Cascade Lake-X" was to "Cascade Lake," a server processor microarchitecture. Given that Intel is calling it "Alder Lake-X" and not "Sapphire Rapids-X," it could even be a whole new client-specific silicon. What's the difference between the two? It's all in the cores.

While both "Alder Lake" and "Sapphire Rapids" come with "Golden Cove" performance cores (P-cores), they use variants of it. Alder Lake has the client-specific variant with 1.25 MB L2 cache, a lighter client-relevant ISA, and other optimizations that enable it to run at higher clock speeds. Sapphire Rapids, on the other hand, will use a server-specific variant of "Golden Cove" that's optimized for the Mesh interconnect, has 2 MB of L2 cache, a server/HPC-relevant ISA, and a propensity to run at lower clock speeds, to support the silicon's overall TDP and high CPU core-count.

Fujitsu Achieves Major Technical Milestone with World's Fastest 36 Qubit Quantum Simulator

Fujitsu has successfully developed the world's fastest quantum computer simulator capable of handling 36-qubit quantum circuits on a cluster system featuring the "FUJITSU Supercomputer PRIMEHPC FX 700" ("PRIMEHPC FX 700"), which is equipped with the same A64FX CPU that powers the world's fastest supercomputer, Fugaku.

The newly developed quantum simulator can execute the quantum simulator software "Qulacs" in parallel at high speed, achieving approximately double the performance of other significant quantum simulators in 36-qubit quantum operations. Fujitsu's new quantum simulator will serve as an important bridge towards the development of quantum computing applications that are expected to be put to practical use in the years ahead.
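Full state-vector simulation is what makes 36 qubits a milestone: the simulator must hold all 2^n complex amplitudes in memory at once. A rough sizing sketch (my own arithmetic, assuming double-precision complex amplitudes at 16 bytes each, which is typical for state-vector simulators):

```python
# Memory footprint of full state-vector quantum simulation (illustrative;
# assumes double-precision complex amplitudes, i.e. 16 bytes per amplitude).

def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Bytes required to store the full 2^n amplitude state vector."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 33, 36):
    tib = state_vector_bytes(n) / 2**40
    print(f"{n} qubits: {tib:g} TiB")
# 36 qubits -> 2^36 * 16 B = exactly 1 TiB; each added qubit doubles the
# requirement, which is why a multi-node cluster is needed at this scale.
```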

TSMC Ramps up Shipments to Record Levels, 5/4 nm Production Lines at Capacity

According to DigiTimes, TSMC is working on increasing its monthly shipments of finished wafers from 120,000 to 150,000 for its 5 nm nodes, under which 4 nm also falls. This is three times as much as what TSMC was producing just a year ago. The 4 nm node is said to be in full mass production now and the enhanced N4P node should be ready for mass production in the second half of 2022, alongside N3B. This will be followed by the N4X and N3E nodes in 2023. The N3B node is expected to hit 40-50,000 wafers initially, before ramping up from there, assuming everything is on track.

The report also mentions that TSMC is expecting a 20 percent revenue increase from its 28 to 7 nm nodes this year, which shows that even these older nodes are being heavily utilised by its customers. TSMC has what NVIDIA would call a demand problem, as the company simply can't meet demand at the moment, with customers lining up to be able to get a share of any additional production capacity. NVIDIA is said to have paid TSMC at least US$10 billion in advance to secure manufacturing capacity for its upcoming products, both for consumer and enterprise products. TSMC's top three HPC customers are also said to have pre-booked capacity on the upcoming 3 and 2 nm nodes, so it doesn't look like demand is going to ease up anytime soon.

NVIDIA Unveils Grace CPU Superchip with 144 Cores and 1 TB/s Bandwidth

NVIDIA has today announced its Grace CPU Superchip, a monstrous design focused on heavy HPC and AI processing workloads. Previously, team green teased an in-house-developed CPU that is supposed to go into servers and create an entirely new segment for the company. Today, we got a more detailed look at the plan with the Grace CPU Superchip. The Superchip package combines two Grace processors, each containing 72 cores. The cores are based on the Arm v9 instruction set architecture, and the two CPUs bring the Superchip module to a total of 144 cores. They are surrounded by an as-yet-unspecified amount of LPDDR5X memory with ECC, delivering 1 TB/s of total bandwidth.

NVIDIA Grace CPU Superchip uses the NVLink-C2C cache-coherent interconnect, which delivers 900 GB/s of bandwidth, seven times more than the PCIe 5.0 protocol. The company targets a two-fold performance-per-Watt improvement over today's CPUs, aiming to bring efficiency and performance together. We have some preliminary benchmark information provided by NVIDIA: in the SPECrate2017_int_base integer benchmark, the Grace CPU Superchip scores over 740 points, though this is only a simulated estimate for now, meaning the final performance target could end up higher. The company expects to ship the Grace CPU Superchip in the first half of 2023, with an already-supported ecosystem of software, including the NVIDIA RTX, HPC, NVIDIA AI, and NVIDIA Omniverse software stacks and platforms.
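The "seven times PCIe 5.0" claim lines up with a comparison against a single x16 PCIe 5.0 link. The following is my own illustrative arithmetic, not NVIDIA's published derivation:

```python
# Where the "7x PCIe 5.0" figure plausibly comes from (illustrative arithmetic
# only): NVLink-C2C's 900 GB/s vs a PCIe 5.0 x16 link, encoding overhead ignored.

pcie5_gt_s = 32                                # PCIe 5.0 per-lane rate, GT/s
pcie5_x16_bidir = pcie5_gt_s * 16 / 8 * 2      # 16 lanes, both directions
nvlink_c2c_gb_s = 900                          # per NVIDIA's announcement

print(f"PCIe 5.0 x16: {pcie5_x16_bidir:.0f} GB/s bidirectional")
print(f"NVLink-C2C advantage: {nvlink_c2c_gb_s / pcie5_x16_bidir:.1f}x")  # ~7.0x
```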

NVIDIA GTC 2022 Keynote Liveblog: NVIDIA Hopper Architecture Unveil

NVIDIA today kicked off the 2022 Graphics Technology Conference, its annual gathering of compute and gaming developers discovering the very next in AI, data-science, HPC, graphics, autonomous machines, edge computing, and networking. At the 2022 show premiering now, NVIDIA is expected to unveil its next-generation "Hopper" architecture, which could make its debut as an AI/HPC product, much like "Ampere." Stay tuned for our live blog!

15:00 UTC: The show gets underway with a thank-you to the sponsors.

AMD Introduces Instinct MI210 Data Center Accelerator for Exascale-class HPC and AI in a PCIe Form-Factor

AMD today announced a new addition to the Instinct MI200 family of accelerators. Officially titled the Instinct MI210 accelerator, this model brings exascale-class technologies to mainstream HPC and AI customers. Based on the CDNA2 compute architecture built for heavy HPC and AI workloads, the card features 104 compute units (CUs), totaling 6,656 streaming processors (SPs). With a peak engine clock of 1700 MHz, the card can output 181 TeraFLOPs of FP16 half-precision peak compute, 22.6 TeraFLOPs of peak FP32 single-precision compute, and 22.6 TeraFLOPs of peak FP64 double-precision compute. For single-precision matrix (FP32) compute, the card can deliver a peak of 45.3 TeraFLOPs. The INT4/INT8 precision settings provide 181 TOPs, while the MI210 can compute the bfloat16 precision format at a peak of 181 TeraFLOPs.

The card uses a 4096-bit memory interface connecting 64 GB of HBM2e to the compute silicon. The total memory bandwidth is 1638.4 GB/s, with the memory modules running at a 1.6 GHz frequency. It is important to note that ECC is supported across the entire chip. AMD provides the Instinct MI210 accelerator as a PCIe solution based on the PCIe 4.0 standard. The card is rated for a TDP of 300 W and is cooled passively. Three Infinity Fabric links are enabled, each with a maximum bandwidth of 100 GB/s. Pricing is unknown; however, the card is available immediately as of its March 22nd launch.
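The quoted figures are internally consistent, as a quick cross-check shows (my own arithmetic, using the common convention of 2 FLOPs per SP per clock for vector FMA, an 8x rate for packed FP16 math, and double data rate for the HBM2e interface):

```python
# Cross-checking the MI210's quoted peak figures (illustrative arithmetic only).

cus, sps_per_cu, clock_ghz = 104, 64, 1.7
sps = cus * sps_per_cu
print(sps)                                        # 6656 streaming processors

fp32_tflops = sps * 2 * clock_ghz / 1000          # vector FMA = 2 FLOPs/clock
print(f"FP32 peak: {fp32_tflops:.1f} TFLOPs")     # ~22.6, matching the spec

fp16_tflops = fp32_tflops * 8                     # packed FP16 runs at 8x FP32
print(f"FP16 peak: {fp16_tflops:.0f} TFLOPs")     # ~181, matching the spec

bus_bits, mem_gt_s = 4096, 3.2                    # 1.6 GHz HBM2e, DDR -> 3.2 GT/s
bandwidth_gb_s = bus_bits / 8 * mem_gt_s
print(f"Memory bandwidth: {bandwidth_gb_s:.1f} GB/s")  # 1638.4, matching the spec
```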

AMD aims this card directly at NVIDIA's A100 80 GB accelerator in terms of target segment, with an emphasis on half-precision and INT4/INT8-heavy applications.

Intel to Invest Over €33 Billion for Semiconductor R&D and Manufacturing in EU

Intel today announced the first phase of its plans to invest as much as 80 billion euros in the European Union over the next decade along the entire semiconductor value chain - from research and development (R&D) to manufacturing to state-of-the-art packaging technologies. Today's announcement includes plans to invest an initial 17 billion euros into a leading-edge semiconductor fab mega-site in Germany, to create a new R&D and design hub in France, and to invest in R&D, manufacturing and foundry services in Ireland, Italy, Poland and Spain. With this landmark investment, Intel plans to bring its most advanced technology to Europe, creating a next-generation European chip ecosystem and addressing the need for a more balanced and resilient supply chain.

Pat Gelsinger, CEO of Intel, said: "Our planned investments are a major step both for Intel and for Europe. The EU Chips Act will empower private companies and governments to work together to drastically advance Europe's position in the semiconductor sector. This broad initiative will boost Europe's R&D innovation and bring leading-edge manufacturing to the region for the benefit of our customers and partners around the world. We are committed to playing an essential role in shaping Europe's digital future for decades to come."