News Posts matching #EPYC


Leak Suggests AMD 6th Gen EPYC "Venice" CPUs Linked to New SP7 Socket

Hardware leaker YuuKi_AnS has briefly turned their attention away from all things Team Blue—their latest leak points to upcoming server-grade processors from AMD. A Zen 6 core-based EPYC 9006 CPU series, codenamed "Venice," is expected to arrive within two to three years along with an all-new SP7 socket—this information appears to have been sourced from an unnamed server manufacturer's product roadmap. A partial view of said slide also reveals forthcoming equipment powered by Intel "Falcon Shores" and NVIDIA "Blackwell" GPU technologies.

As reported a couple of months ago, older insider info has AMD using "Weisshorn" as an in-house moniker for the Zen 6 "Morpheus" architecture destined for Venice CPUs—alleged to form part of a 2025/2026 EPYC lineup. YuuKi_AnS proposes that these will utilize either 12-channel or 16-channel DDR5 memory configurations—thus providing plenty of bandwidth across hundreds of Zen cores. Altogether very handy for cloud, enterprise, and HPC workloads—industry experts reckon that 384-core counts are feasible on single packages. Naturally, a Team Red timeline dictates that Zen 5 "Nirvana" is due before Zen 6 "Morpheus," so the EPYC 9005 "Turin(-X)" and 8005 "Turin-Dense" lineups are (allegedly) up for a 2024-ish launch window on the SP5 (LGA-6096) and SP6 (LGA-4844) socket types.
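A rough sense of what those rumored channel counts mean for bandwidth can be sketched with standard peak-bandwidth arithmetic. The DDR5-6400 transfer rate below is purely an assumed figure for illustration; no memory speed for "Venice" has been confirmed.

```python
# Theoretical peak memory bandwidth per socket for the rumored 12- and
# 16-channel DDR5 configurations. DDR5-6400 is an assumed transfer rate
# for illustration only, not a confirmed "Venice" specification.

def peak_bandwidth_gbs(channels: int, transfer_rate_mts: int, bus_width_bits: int = 64) -> float:
    """Peak bandwidth in GB/s: channels x MT/s x bytes per transfer."""
    return channels * transfer_rate_mts * (bus_width_bits / 8) / 1000

for channels in (12, 16):
    print(f"{channels} channels @ DDR5-6400: {peak_bandwidth_gbs(channels, 6400):.1f} GB/s")
```

At the assumed speed, the jump from 12 to 16 channels adds roughly a third more theoretical bandwidth per socket, which is the point of pairing wider memory with ever-higher core counts.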

AMD Shares Technical Details of Secure Encrypted Virtualization Technology

AMD has published the source code for AMD Secure Encrypted Virtualization (SEV) technology, the backbone of AMD EPYC processor-based confidential computing virtual machines (VMs) available from cloud service providers including Amazon Web Services (AWS), Google Cloud, Microsoft Azure and Oracle Compute Infrastructure (OCI). This release from AMD will drive greater transparency for the security industry and provide customers the opportunity to thoroughly review the technology behind confidential computing VMs powered by AMD EPYC processors.

"As a leader in confidential computing, we are committed to a relentless pursuit of innovation and creating modern security features that complement our ecosystem partners' most advanced cloud offerings," said Mark Papermaster, executive vice president and chief technology officer, AMD. "By sharing the underpinnings of our SEV technology, we are delivering transparency for confidential computing and demonstrating our dedication to open source. Involving the open-source community will further strengthen this critical technology for our partners and customers who expect nothing less than the utmost protection for their most valuable asset - their data."

AMD Showcases Continued Enterprise Data Center Momentum with EPYC CPUs and Pensando DPUs

Today, at VMware Explore 2023 Las Vegas, AMD continued to showcase its proven performance and growing adoption of AMD EPYC CPUs, AMD Pensando data processing units (DPUs) and adaptive computing products as ideal solutions for the most efficient and innovative virtualized environments. For instance, a system powered by 4th Gen AMD EPYC 9654 CPUs and a Pensando DPU delivers approximately 3.3x the Redis application performance and 1.75x the aggregate network throughput when compared to a 4th Gen EPYC system with standard NICs. Additionally, servers with 2P 4th Gen EPYC 9654 CPUs alone can enable using up to 35% fewer servers in an environment running 2,000 virtual machines (VMs) compared to 2P Intel Xeon 8490H-based servers.
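The consolidation claim reduces to simple arithmetic. In the sketch below, the VMs-per-server baseline is a made-up illustrative number; only the 35% reduction comes from AMD's claim.

```python
# Back-of-the-envelope for the consolidation claim: running 2,000 VMs with
# up to 35% fewer servers. The 100-VMs-per-server baseline is hypothetical;
# the 35% figure is the only number taken from AMD's claim.
import math

def servers_needed(total_vms: int, vms_per_server: int) -> int:
    return math.ceil(total_vms / vms_per_server)

baseline = servers_needed(2000, 100)            # hypothetical Xeon baseline
consolidated = round(baseline * (1 - 0.35))     # 35% fewer servers
print(baseline, "->", consolidated, "servers")  # 20 -> 13
```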

"AMD is helping enterprise customers fully realize the benefits of their virtualized data centers with the latest generation EPYC CPUs and Pensando DPUs," said Forrest Norrod, executive vice president and general manager, Data Center Solutions Business Group, AMD. "Consolidation and modernization enable businesses to increase server utilization and efficiency while delivering impressive performance for critical enterprise workloads. Our ongoing collaboration with VMware enables customers to get more efficient and agile to reach their digital transformation goals."

AMD Showcases Leadership Cloud Performance with New Amazon EC2 Instances Powered by 4th Gen AMD EPYC Processors

Today, AMD announced Amazon Web Services (AWS) has expanded its 4th Gen AMD EPYC processor-based offerings with the general availability of Amazon Elastic Compute Cloud (EC2) M7a and Amazon EC2 Hpc7a instances, which offer next-generation performance and efficiency for applications that benefit from high performance, high throughput and tightly coupled HPC workloads, respectively.

"For customers with increasingly complex and compute-intensive workloads, 4th Gen EPYC processor-powered Amazon EC2 instances deliver a differentiated offering for customers," said David Brown, vice president of Amazon EC2 at AWS. "Combined with the power of the AWS Nitro System, both M7a and Hpc7a instances allow for fast and low-latency internode communications, advancing what our customers can achieve across our growing family of Amazon EC2 instances."

IT Leaders Optimistic about Ways AI will Transform their Business and are Ramping up Investments

Today, AMD released the findings of a new survey of global IT leaders, which found that 3 in 4 IT leaders are optimistic about the potential benefits of AI—from increased employee efficiency to automated cybersecurity solutions—and more than 2 in 3 are increasing investments in AI technologies. However, while AI presents clear opportunities for organizations to become more productive, efficient, and secure, IT leaders expressed uncertainty about their AI adoption timelines, due to a lack of implementation roadmaps and the overall readiness of their existing hardware and technology stack.

AMD commissioned the survey of 2,500 IT leaders across the United States, United Kingdom, Germany, France, and Japan to understand how AI technologies are reshaping the workplace, how IT leaders are planning their AI technology and related client hardware roadmaps, and what their biggest challenges are for adoption. Despite some hesitation around security and a perception that training the workforce would be burdensome, it became clear that organizations that have already implemented AI solutions are seeing a positive impact, and that organizations that delay risk being left behind. Of the organizations prioritizing AI deployments, 90% report already seeing increased workplace efficiency.

Supermicro Announces High Volume Production of E3.S All-Flash Storage Portfolio with CXL Memory Expansion

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is delivering high-throughput, low-latency E3.S storage solutions supporting the industry's first PCIe Gen 5 drives and CXL modules to meet the demands of large AI training and HPC clusters, where massive amounts of unstructured data must be delivered to the GPUs and CPUs to achieve faster results.

Supermicro's Petascale systems are a new class of storage servers supporting the latest industry-standard E3.S (7.5 mm) Gen 5 NVMe drives from leading storage vendors for up to 256 TB of high-throughput, low-latency storage in 1U, or up to half a petabyte in 2U. Inside, Supermicro's innovative symmetrical architecture reduces latency by ensuring the shortest signal paths for data and maximizes airflow over critical components, allowing them to run at optimal speeds. With these new systems, a standard rack can now hold over 20 petabytes of capacity for high-throughput NVMe-oF (NVMe over Fabrics) configurations, ensuring that GPUs remain saturated with data. Systems are available with either 4th Gen Intel Xeon Scalable processors or 4th Gen AMD EPYC processors.

AMD Reports Second Quarter 2023 Financial Results, Revenue Down 18% YoY

AMD today announced revenue for the second quarter of 2023 of $5.4 billion, gross margin of 46%, operating loss of $20 million, net income of $27 million and diluted earnings per share of $0.02. On a non-GAAP basis, gross margin was 50%, operating income was $1.1 billion, net income was $948 million and diluted earnings per share was $0.58.

"We delivered strong results in the second quarter as 4th Gen EPYC and Ryzen 7000 processors ramped significantly," said AMD Chair and CEO Dr. Lisa Su. "Our AI engagements increased by more than seven times in the quarter as multiple customers initiated or expanded programs supporting future deployments of Instinct accelerators at scale. We made strong progress meeting key hardware and software milestones to address the growing customer pull for our data center AI solutions and are on-track to launch and ramp production of MI300 accelerators in the fourth quarter."

Inventec's C805G6 Data Center Solution Brings Sustainable Efficiency & Advanced Security for Powering AI

Inventec, a global leader in high-powered servers headquartered in Taiwan, is launching its cutting-edge C805G6 server for data centers based on AMD's newest 4th Gen EPYC platform—a major innovation in computing power that provides double the operating efficiency of previous platforms. These innovations are timely, as the industry worldwide faces opposing challenges—on one hand, a growing need to reduce carbon footprints and power consumption, and on the other, the push for ever-higher computing power and performance for AI. In fact, in 2022 MIT found that a tenfold improvement in a machine learning model requires a 10,000-fold increase in computational resources.

Addressing both pain points, George Lin, VP of Business Unit VI, Inventec Enterprise Business Group (Inventec EBG) notes that, "Our latest C805G6 data center solution represents an innovation both for the present and the future, setting the standard for performance, energy efficiency, and security while delivering top-notch hardware for powering AI workloads."

China Hosts 40% of all Arm-based Servers in the World

The escalating challenges in acquiring high-performance x86 servers have prompted Chinese data center companies to accelerate the shift to Arm-based system-on-chips (SoCs). Investment banking firm Bernstein reports that approximately 40% of all Arm-powered servers globally are currently being used in China. While most servers operate on x86 processors from AMD and Intel, there's a growing preference for Arm-based SoCs, especially in the Chinese market. Several global tech giants, including AWS, Ampere, Google, Fujitsu, Microsoft, and Nvidia, have already adopted or developed Arm-powered SoCs. However, Arm-based SoCs are increasingly favorable for Chinese firms, given the difficulty in consistently sourcing Intel's Xeon or AMD's EPYC. Chinese companies like Alibaba, Huawei, and Phytium are pioneering the development of these Arm-based SoCs for client and data center processors.

However, the US government's restrictions present some challenges. Both Huawei and Phytium, blacklisted by the US, cannot access TSMC's cutting-edge process technologies, limiting their ability to produce competitive processors. Although Alibaba's T-Head can leverage TSMC's latest innovations, it can't license Arm's high-performance computing Neoverse V-series CPU cores due to various export control rules. Despite these challenges, many chip designers are considering alternatives such as RISC-V, an unrestricted, rapidly evolving open-source instruction set architecture (ISA) suitable for designing highly customized general-purpose cores for specific workloads. Still, with the backing of influential firms like AWS, Google, Nvidia, Microsoft, Qualcomm, and Samsung, the Armv8 and Armv9 instruction set architectures continue to hold an edge over RISC-V. These companies' support ensures that the software ecosystem remains compatible with their CPUs, which will likely continue to drive the adoption of Arm in the data center space.

AMD Radeon RX 7900 GRE ASIC Smaller than Navi 31, Slightly Larger than Navi 21

The GPU at the heart of the China-exclusive AMD Radeon RX 7900 GRE (Golden Rabbit Edition) sparked much curiosity. It is a physically different GPU from the one found in desktop Radeon RX 7900 XT and RX 7900 XTX graphics cards. AMD wouldn't go through all the effort of designing a whole different GPU just for a limited-edition graphics card, which means this silicon could find greater use for the company; for example, it could be the package AMD uses for its upcoming mobile RX 7900 series. Likewise, AMD wouldn't design a first-party MBA (made by AMD) PCB for the silicon just for the RX 7900 GRE, so this PCB, with this particular version of the "Navi 31" silicon, could see a wider global launch, probably as the rumored Radeon RX 7800 XT or something else (although with a different set of specs from the RX 7900 GRE).

We compared the sizes of the new "Navi 31" package found in the RX 7900 GRE with those of the regular "Navi 31" powering the RX 7900 XT/XTX, the previous-generation "Navi 21" powering the RX 6900 XT, and the NVIDIA AD103 silicon powering the desktop GeForce RTX 4080. There are some interesting findings. The new, smaller "Navi 31" package is visibly smaller than the one powering the RX 7900 XT/XTX. It is a square package, compared to the larger rectangular one, and has a significantly thinner metal reinforcement brace. What's interesting is that the 5 nm GCD is still surrounded by six 6 nm MCDs. We don't know if two of the six MCDs have been disabled, or whether they're dummies. AMD uses dummy chiplets as structural reinforcement in some of its EPYC server processors; the dummies spread some of the mounting pressure applied by the IHS or cooling solution, so the logic behind surrounding the GCD with six of these MCDs could be the same.

Zenbleed Vulnerability Affects All AMD Zen 2 CPUs

A new vulnerability has been discovered in AMD Zen 2 based CPUs by Tavis Ormandy, a Google Information Security researcher. Ormandy has named the new vulnerability Zenbleed—also known as CVE-2023-20593—and it's said to affect all Zen 2 based AMD processors, which means Ryzen 3000, 4000 and 5000-series CPUs and APUs, as well as EPYC server chips. Zenbleed is of particular concern because it doesn't require a potential attacker to have physical access to the computer or server in question, and it's said to be possible to trigger the vulnerability by executing JavaScript on a webpage. This means that the attack vector ends up being massive, at least for something like a web hosting company.

Zenbleed is said to allow a potential attacker to gain access to things like encryption keys and user logins by triggering "the XMM Register Merge Optimization, followed by a register rename and a mispredicted vzeroupper." Apparently this requires some precision for the vulnerability to work, but because these registers are used system-wide, even a sandboxed attacker can gain access to them. AMD has already issued a patch for its EPYC server CPUs, which obviously are the most vulnerable systems in question, and the company plans to release patches for all of its Zen 2 based CPUs before the end of the year. Hit up the source links for more details about Zenbleed.
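As a rough illustration of how affected parts are identified, all Zen 2 CPUs report x86 family 17h via CPUID, with particular model numbers distinguishing them from Zen/Zen+ parts. The sketch below is a heuristic with illustrative model ranges, not an official detector; AMD's security advisory is the authoritative source for affected products.

```python
# Heuristic sketch: flag CPUs that may fall under CVE-2023-20593 (Zenbleed)
# by checking for AMD Family 17h and Zen 2 model numbers. The model ranges
# below are illustrative, not an exhaustive or official list -- consult
# AMD's advisory for authoritative data.

def maybe_zenbleed_affected(vendor: str, family: int, model: int) -> bool:
    if vendor != "AuthenticAMD" or family != 0x17:
        return False
    # Illustrative Zen 2 model ranges: "Rome" 0x30-0x3F, "Renoir"/"Lucienne"
    # 0x60-0x6F, "Matisse" 0x71, "Mendocino" 0xA0-0xAF.
    zen2_models = set(range(0x30, 0x40)) | set(range(0x60, 0x70)) | {0x71} | set(range(0xA0, 0xB0))
    return model in zen2_models

print(maybe_zenbleed_affected("AuthenticAMD", 0x17, 0x71))  # Ryzen 3000 "Matisse": True
```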

Cerebras and G42 Unveil World's Largest Supercomputer for AI Training with 4 ExaFLOPS

Cerebras Systems, the pioneer in accelerating generative AI, and G42, the UAE-based technology holding group, today announced Condor Galaxy, a network of nine interconnected supercomputers, offering a new approach to AI compute that promises to significantly reduce AI model training time. The first AI supercomputer on this network, Condor Galaxy 1 (CG-1), has 4 exaFLOPs and 54 million cores. Cerebras and G42 are planning to deploy two more such supercomputers, CG-2 and CG-3, in the U.S. in early 2024. With a planned capacity of 36 exaFLOPs in total, this unprecedented supercomputing network will revolutionize the advancement of AI globally.

"Collaborating with Cerebras to rapidly deliver the world's fastest AI training supercomputer and laying the foundation for interconnecting a constellation of these supercomputers across the world has been enormously exciting. This partnership brings together Cerebras' extraordinary compute capabilities, together with G42's multi-industry AI expertise. G42 and Cerebras' shared vision is that Condor Galaxy will be used to address society's most pressing challenges across healthcare, energy, climate action and more," said Talal Alkaissi, CEO of G42 Cloud, a subsidiary of G42.

AMD EPYC 7003 Series CPUs Announced as Powering SAP Applications

Today, AMD announced that SAP has chosen AMD EPYC processor-powered Google Cloud N2D virtual machines (VMs) to run its cloud ERP delivery operations for RISE with SAP; further increasing adoption of AMD EPYC for cloud-based workloads. As enterprises look toward digital modernization, many are adopting cloud-first architectures to complement their on-premises data centers. AMD, Google Cloud and SAP can help customers achieve their most stringent performance goals while delivering on energy efficiency, scalability and resource utilization needs.

AMD EPYC processors offer exceptional performance as well as robust security features, and energy efficient solutions for enterprise workloads in the cloud. RISE with SAP helps maximize customer investments in cloud infrastructure and, paired with AMD EPYC processors and Google Cloud N2D VMs, aims to modernize customer data centers and transform data into actionable insights, faster. "AMD powers some of the most performant and energy efficient cloud instances available in the world today," said Dan McNamara, senior vice president and general manager, Server Business Unit, AMD. "As part of our engagement with Google Cloud and SAP, SAP has selected AMD EPYC CPU-powered N2D instances to host its Business Suite enterprise software workloads. This decision by SAP delivers the performance and performance-per-dollar of EPYC processors to customers looking to modernize their data centers and streamline IT spending by accelerating time to value on their enterprise applications."

AMD Starts Software Enablement of Zen 5 Processors

According to the Linux Kernel Mailing List, AMD has started to enable its next-generation processors by submitting patches to the Linux kernel. Codenamed Family 1Ah (Family 26 in decimal notation), the set of patches corresponds to the upcoming AMD Zen 5 core, which is the backbone of the upcoming Ryzen 8000 series processors. The patches hold a few interesting notes: added support for the amd64_edac (Error Detection and Correction) module and temperature monitoring; added PCI IDs for these models covering 00h-1Fh and 20h; and added the required support in the k10temp driver.

The AMD EDAC driver also points out that the Zen 5 server CPUs will max out at 12-channel memory. Models 0-31 correspond to next-generation EPYC, while models 40 through 79 are desktop and laptop SKUs. Interestingly, these patches are just the start, as adding PCI IDs and temperature drivers is basic enablement. With the 2024 launch date nearing, we expect to see more Linux kernel enablement efforts, especially for more complicated parts of the kernel.
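For context on what "Family 1Ah" means: x86 CPUs report family and model through CPUID Fn0000_0001 EAX, where an extended family field is added to a base family of 0Fh. The sketch below decodes a fabricated EAX value using that standard encoding; the sample value is illustrative, not taken from real Zen 5 silicon.

```python
# Sketch of how "Family 1Ah" and the model numbers in the patches are
# decoded from the CPUID Fn0000_0001 EAX value (standard x86 encoding;
# the sample EAX value below is fabricated for illustration).

def decode_family_model(eax: int) -> tuple[int, int]:
    base_family = (eax >> 8) & 0xF
    ext_family = (eax >> 20) & 0xFF
    base_model = (eax >> 4) & 0xF
    ext_model = (eax >> 16) & 0xF
    # The extended fields only apply when the base family is 0xF.
    family = base_family + ext_family if base_family == 0xF else base_family
    model = (ext_model << 4) | base_model if base_family == 0xF else base_model
    return family, model

# Hypothetical EAX for Family 1Ah (base 0xF + extended 0xB), model 00h:
family, model = decode_family_model(0x00B00F00)
print(hex(family), hex(model))  # 0x1a 0x0
```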

Oracle Introduces Next-Gen Exadata X10M Platforms

Oracle today introduced the latest generation of the Oracle Exadata platforms, the X10M, delivering unrivaled performance and availability for all Oracle Database workloads. Starting at the same price as the previous generation, these platforms support higher levels of database consolidation with more capacity and offer dramatically greater value than previous generations. Thousands of organizations, large and small, run their most critical and demanding workloads on Oracle Exadata, including the majority of the largest financial, telecom, and retail businesses in the world.

"Our 12th generation Oracle Exadata X10M continues our strategy to provide customers with extreme scale, performance, and value, and we will make it available everywhere—in the cloud and on-premises," said Juan Loaiza, executive vice president, Mission-Critical Database Technologies, Oracle. "Customers that choose cloud deployments also benefit from running Oracle Autonomous Database, which further lowers costs by delivering true pay-per-use and eliminating database and infrastructure administration."

AMD EPYC Embedded Series Processors Power New HPE Alletra Storage MP Solution

AMD today announced that its AMD EPYC Embedded Series processors are powering Hewlett Packard Enterprise's new modular, multi-protocol storage solution, HPE Alletra Storage MP. AMD EPYC Embedded processors provide the performance and energy efficiency required for enterprise-class storage systems with high availability, resilience, and industry-leading connectivity and longevity.

The HPE Alletra Storage MP supports a disaggregated infrastructure with multiple storage protocols on the same hardware that can scale independently for performance and capacity. Configurable for block and file stores, HPE Alletra Storage MP gives customers the ability to deploy, manage, and orchestrate data and storage services via the HPE GreenLake edge-to-cloud platform, regardless of the workload and storage protocol. This eliminates data silos, reducing cost and complexity while improving performance.

AMD Zen 4c Not an E-core, 35% Smaller than Zen 4, but with Identical IPC

AMD on Tuesday (June 13) launched the EPYC 9004 "Bergamo" 128-core/256-thread high-density compute server processor, and with it debuted the new "Zen 4c" CPU microarchitecture. A lot had been made of Zen 4c in the run-up to yesterday's launch, such as rumors that it is a Zen 4 "lite" core with less number-crunching muscle, and hence lower IPC, and that Zen 4c is AMD's answer to Intel's E-core architectures, such as "Gracemont" and "Crestmont." It turns out that it's neither a lite version of Zen 4 nor an E-core, but a physically compacted version of the Zen 4 core with identical number-crunching machinery.

First things first—Zen 4c has the exact same IPC as Zen 4 (that's performance at a given clock speed). This is because its front-end, execution stage, load/store component, and internal cache hierarchy are exactly the same. It has the same 88-deep load queue, the same 64-deep store queue, the same 6.75K-entry µop cache, the exact same INT+FP issue width of 10+6, the same INT register file, the same scheduler, and the same cache latencies. The L1I and L1D caches are the same 32 KB in size as "Zen 4," and so is the dedicated L2 cache, at 1 MB.

ASRock Rack Leveraging Latest 4th Gen AMD EPYC Processors with AMD "Zen 4c" Architecture

ASRock Rack, the leading innovative server company, today announced its support of 4th Gen AMD EPYC processors with AMD "Zen 4c" architecture and 4th Gen AMD EPYC processors with AMD 3D V-Cache technology, as well as an expanded range of new products, spanning high-density storage, GPU, and multi-node servers, all for the new AMD processors.

"4th Gen AMD EPYC processors offer the highest core density of any x86 processor in the world and will deliver outstanding performance and efficiency for cloud-native workloads," said Lynn Comp, corporate vice president, Server Product and Technology Marketing, AMD. "Our latest family of data center processors allow customers to balance workload growth and flexibility with critical infrastructure consolidation mandates, enabling our customers to do more work, with more energy efficiency at a time when cloud native computing is transforming the data center."

Giga Computing Expands Support for 4th Gen AMD EPYC Processors

Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance servers, server motherboards, and workstations, today announced support for the latest 4th Gen AMD EPYC processors. The new processors, based on "Zen 4c" architecture and featuring AMD 3D V-Cache technology, enhance Giga Computing's enterprise solutions, enabling superior performance and scalability for cloud native computing and technical computing applications in GIGABYTE enterprise solutions. To date, more than thirty unique GIGABYTE systems and platforms support the latest generation of AMD EPYC 9004 processors. Over time, Giga Computing will roll out more GIGABYTE models for this platform, including more SKUs for immersion-ready servers and direct liquid cooling systems.

"For every new generation of AMD EPYC processors, GIGABYTE has been there, offering diverse platform options for all workloads and users," said Vincent Wang, Sales VP at Giga Computing. "And with the recent announcement of new AMD EPYC 9004 processors for technical computing and cloud native computing, we are also ready to support them at this time on our current AMD EPYC 9004 Series platforms."

AMD Details New EPYC CPUs, Next-Generation AMD Instinct Accelerator, and Networking Portfolio for Cloud and Enterprise

Today, at the "Data Center and AI Technology Premiere," AMD announced the products, strategy and ecosystem partners that will shape the future of computing, highlighting the next phase of data center innovation. AMD was joined on stage by executives from Amazon Web Services (AWS), Citadel, Hugging Face, Meta, Microsoft Azure and PyTorch to showcase the technological partnerships with industry leaders to bring the next generation of high-performance CPU and AI accelerator solutions to market.

"Today, we took another significant step forward in our data center strategy as we expanded our 4th Gen EPYC processor family with new leadership solutions for cloud and technical computing workloads and announced new public instances and internal deployments with the largest cloud providers," said AMD Chair and CEO Dr. Lisa Su. "AI is the defining technology shaping the next generation of computing and the largest strategic growth opportunity for AMD. We are laser focused on accelerating the deployment of AMD AI platforms at scale in the data center, led by the launch of our Instinct MI300 accelerators planned for later this year and the growing ecosystem of enterprise-ready AI software optimized for our hardware."

AMD Expands 4th Gen EPYC CPU Portfolio with Processors for Cloud Native and Technical Computing Workloads

Today, at the "Data Center and AI Technology Premiere," AMD announced the addition of two new, workload optimized processors to the 4th Gen EPYC CPU portfolio. By leveraging the new "Zen 4c" core architecture, the AMD EPYC 97X4 cloud native-optimized data center CPUs further extend the EPYC 9004 Series of processors to deliver the thread density and scale needed for leadership cloud native computing. Additionally, AMD announced the 4th Gen AMD EPYC processors with AMD 3D V-Cache technology, ideally suited for the most demanding technical computing workloads.

"In an era of workload optimized compute, our new CPUs are pushing the boundaries of what is possible in the data center, delivering new levels of performance, efficiency, and scalability," said Forrest Norrod, executive vice president and general manager, Data Center Solutions Business Group, AMD. "We closely align our product roadmap to our customers' unique environments and each offering in the 4th Gen AMD EPYC family of processors is tailored to deliver compelling and leadership performance in general purpose, cloud native or technical computing workloads."

AMD EPYC "Bergamo" Uses 16-core Zen 4c CCDs, Barely 10% Larger than Regular Zen 4 CCDs

A SemiAnalysis report sheds light on just how much smaller the "Zen 4c" CPU core is compared to the regular "Zen 4." AMD's upcoming high core-count enterprise processor for cloud data-center deployments, the EPYC "Bergamo," is based on the new "Zen 4c" microarchitecture. Although with the same ISA as "Zen 4," the "Zen 4c" is essentially a low-power, lite version of the core, with significantly higher performance/Watt. The core is physically smaller than a regular "Zen 4" core, which allows AMD to create CCDs (CPU core dies) with 16 cores, compared to the current "Zen 4" CCD with 8.

The 16-core "Zen 4c" CCD is built on the same 5 nm EUV foundry node as the 8-core "Zen 4" CCD, and internally features two CCXs (CPU core complexes), each with 8 "Zen 4c" cores. Each of the two CCXs shares a 16 MB L3 cache among its cores. The SemiAnalysis report states that the dedicated L2 cache size of the "Zen 4c" core remains at 1 MB, just like that of the regular "Zen 4." Perhaps the biggest finding is their die-size estimation, which puts the 16-core "Zen 4c" CCD just 9.6% larger in die area than the 8-core "Zen 4" CCD. That's 72.7 mm² per CCD, compared to 66.3 mm² for the regular 8-core "Zen 4" CCD.
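The die-area figures are easy to sanity-check, and they also show why the density claim is striking: dividing each CCD's area by its core count gives each Zen 4c core slot roughly half the silicon footprint of a Zen 4 slot (shared L3 and interconnect included in both cases).

```python
# Quick check of the die-area figures from the SemiAnalysis estimate:
# a 16-core Zen 4c CCD at 72.7 mm^2 versus an 8-core Zen 4 CCD at 66.3 mm^2.

zen4c_ccd_mm2, zen4c_cores = 72.7, 16
zen4_ccd_mm2, zen4_cores = 66.3, 8

growth = (zen4c_ccd_mm2 / zen4_ccd_mm2 - 1) * 100
print(f"CCD area growth: {growth:.1f}%")  # ~9.7% by this arithmetic, in line with the quoted 9.6%
print(f"Area per Zen 4c core slot: {zen4c_ccd_mm2 / zen4c_cores:.2f} mm^2")  # 4.54 mm^2
print(f"Area per Zen 4 core slot:  {zen4_ccd_mm2 / zen4_cores:.2f} mm^2")    # 8.29 mm^2
```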

Micron Announces High-Capacity 96 GB DDR5-4800 RDIMMs

Micron Technology, Inc. (Nasdaq: MU) today announced volume production availability of high-capacity 96 GB DDR5 RDIMMs in speeds up to 4800 MT/s, which deliver double the bandwidth of DDR4 memory. By unlocking the next level of monolithic technology, the integration of Micron's high-density memory solutions empowers artificial intelligence (AI) and in-memory database workloads while eliminating the need for costly die stacking that also adds latency. Micron's 96 GB DDR5 RDIMM modules are qualified with 4th Gen AMD EPYC processors. Additionally, the Supermicro 8125GS, an AMD-based system, includes the Micron 96 GB DDR5 modules and is an excellent platform for high-performance computing, artificial intelligence and deep learning training, and industrial server workloads.

"Delivering high-capacity memory solutions that enable the right performance for compute-intensive workloads is essential to Micron's role as a leading memory innovator and manufacturer. Micron's 96 GB DDR5 DRAM module establishes a new optimized total cost of ownership solution for our customers," stated Praveen Vaidyanathan, vice president and general manager of Micron's Compute Products Group. "Our collaboration with a flexible system provider like Supermicro leverages each of our strengths to provide customers with the latest memory technology to address their most challenging data center needs."
"Supermicro's time-to-market collaboration with Micron benefits a wide variety of key customers," said Don Clegg, senior vice president, Worldwide Sales, Supermicro. "Micron's portfolio of advanced memory and storage products, aligned with Supermicro's broad server and storage innovations deliver validated, tested, and proven solutions for data center deployments and advanced workloads."

Tyan Showcases Density With Updated AMD EPYC 2U Server Lineup

Tyan, a subsidiary of MiTAC, showed off its new range of AMD EPYC based servers with a distinct focus on compute density. These include new introductions to its Transport lineup of configurable servers, which now host EPYC 9004 "Genoa" series processors with up to 96 cores each. The new additions come as 2U servers, each with a different specialty focus. First up is the Transport SX TN85-B8261, aimed squarely at HPC and AI/ML deployment, with support for up to dual 96-core EPYC "Genoa" processors, 3 TB of registered ECC DDR5-4800, dual 10GbE via an Intel X550-AT2 as well as 1GbE for IPMI, six PCI-E Gen 5 x16 slots with support for four GPGPUs for ML/HPC compute, and eight NVMe drives at the front of the chassis. An optional, more storage-focused configuration, should you choose not to install GPUs, places 24 NVMe SSDs at the front, soaking up 96 lanes of PCI-E.

TYAN Server Platforms to Boost Data Center Computing Performance with 4th Gen AMD EPYC Processors at Computex 2023

TYAN, an industry-leading server platform design manufacturer and a subsidiary of MiTAC Computing Technology Corporation, will be showcasing its latest HPC, cloud and storage platforms at Computex 2023, Booth #M0701a in Taipei, Taiwan from May 30 to June 2. These platforms are powered by AMD EPYC 9004 Series processors, which offer superior energy efficiency and are designed to enhance data center computing performance.

"As businesses increasingly prioritize sustainability in their operations, data centers - which serve as the computational core of an organization - offer a significant opportunity to improve efficiency and support ambitious sustainability targets," said Eric Kuo, Vice President of the Server Infrastructure Business Unit at MiTAC Computing Technology Corporation. "TYAN's server platforms powered by 4th Gen AMD EPYC processors enable IT organizations to achieve high performance while remaining cost-effective and contributing to environmental sustainability."