News Posts matching #Genoa


AMD EPYC "Turin" with 192 Cores and 384 Threads Delivers Almost 40% Higher Performance Than Intel Xeon 6

AMD has unveiled its latest EPYC processors, codenamed "Turin," featuring Zen 5 and Zen 5C dense cores. Phoronix's thorough testing reveals remarkable advancements in performance, efficiency, and value. The new lineup includes the EPYC 9575F (64-core), EPYC 9755 (128-core), and EPYC 9965 (192-core) models, all showing impressive capabilities across various server and HPC workloads. In benchmarks, a dual-socket configuration of the 128-core EPYC 9755 Turin outperformed Intel's dual Xeon "Granite Rapids" 6980P setup with MRDIMM-8800 by 40% in the geometric mean of all tests. Surprisingly, even a single EPYC 9755 or EPYC 9965 matched the dual Xeon 6980P in expanded tests with regular DDR5-6400. Within AMD's lineup, the EPYC 9755 showed a 1.55x performance increase over its predecessor, the 96-core EPYC 9654 "Genoa". The EPYC 9965 surpassed the dual EPYC 9754 "Bergamo" by 45%.
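
As a rough illustration of how such a geometric mean condenses many individual benchmark results into one figure, here is a minimal Python sketch; the per-test speedups in it are hypothetical placeholders, not Phoronix's actual data.

```python
# Minimal sketch of how a geometric mean condenses many per-test results into
# one figure. The speedups below are hypothetical placeholders, not actual
# Phoronix benchmark data.
import math

def geometric_mean(values):
    """Return the geometric mean of a list of positive numbers."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-test speedups of one system over another
speedups = [1.25, 1.60, 1.10, 1.75, 1.40]
print(f"Geometric mean speedup: {geometric_mean(speedups):.2f}x")
```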

These gains come with improved efficiency. While power consumption increased moderately, performance improvements resulted in better overall efficiency. For example, the EPYC 9965 used 32% more power than the EPYC 9654 but delivered 1.55x the performance. Power consumption remains competitive: the EPYC 9965 averaged 275 Watts (peak 461 Watts), the EPYC 9755 averaged 324 Watts (peak 500 Watts), while Intel's Xeon 6980P averaged 322 Watts (peak 547 Watts). AMD's pricing strategy adds to the appeal. The 192-core model is priced at $14,813, compared to Intel's 128-core CPU at $17,800. This competitive pricing, combined with superior performance per dollar and watt, has resonated with hyperscalers. Estimates suggest 50-60% of hyperscale deployments now use AMD processors.
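
A quick back-of-the-envelope check of that efficiency claim, using only the 1.55x performance and 32% power figures quoted above:

```python
# Back-of-the-envelope check of the efficiency claim above: 1.55x the
# performance for 32% more power still means better performance per watt.
perf_gain = 1.55    # EPYC 9965 vs. EPYC 9654 (figure quoted above)
power_gain = 1.32   # 32% more average power (figure quoted above)

perf_per_watt_gain = perf_gain / power_gain
print(f"Performance-per-watt improvement: ~{perf_per_watt_gain:.2f}x")  # ~1.17x
```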

AMD MI300X Accelerators are Competitive with NVIDIA H100, Crunch MLPerf Inference v4.1

The MLCommons consortium on Wednesday posted MLPerf Inference v4.1 benchmark results for popular AI inferencing accelerators available in the market, across brands that include NVIDIA, AMD, and Intel. AMD's Instinct MI300X accelerators emerged competitive with NVIDIA's "Hopper" H100 series AI GPUs. AMD also used the opportunity to showcase the kind of AI inferencing performance uplifts customers can expect from its next-generation EPYC "Turin" server processors powering these MI300X machines. "Turin" features "Zen 5" CPU cores sporting a 512-bit FPU datapath and improved performance in AI-relevant 512-bit SIMD instruction sets such as AVX-512 and VNNI. The MI300X, on the other hand, banks on the strengths of its memory sub-system, FP8 data format support, and efficient KV cache management.

The MLPerf Inference v4.1 benchmark focused on the 70 billion-parameter LLaMA2-70B model. AMD's submissions included machines featuring the Instinct MI300X, powered by the current EPYC "Genoa" (Zen 4), and next-gen EPYC "Turin" (Zen 5). The GPUs are backed by AMD's ROCm open-source software stack. The benchmark evaluated inference performance using 24,576 Q&A samples from the OpenORCA dataset, with each sample containing up to 1024 input and output tokens. Two scenarios were assessed: the offline scenario, focusing on batch processing to maximize throughput in tokens per second, and the server scenario, which simulates real-time queries with strict latency limits (TTFT ≤ 2 seconds, TPOT ≤ 200 ms). This lets you see the chip's mettle in both high-throughput and low-latency queries.
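
For illustration only, the sketch below (a hypothetical helper, not MLPerf harness code) shows what the server-scenario latency limits quoted above amount to for a single query:

```python
# Hypothetical helper (not MLPerf harness code) showing what the server-scenario
# limits quoted above mean for a single query: TTFT <= 2 s and TPOT <= 200 ms.
TTFT_LIMIT_S = 2.0     # time to first token
TPOT_LIMIT_S = 0.200   # time per output token

def within_server_limits(ttft_s: float, total_latency_s: float, output_tokens: int) -> bool:
    """Return True if a query meets both latency constraints."""
    if output_tokens <= 1:
        return ttft_s <= TTFT_LIMIT_S
    tpot_s = (total_latency_s - ttft_s) / (output_tokens - 1)
    return ttft_s <= TTFT_LIMIT_S and tpot_s <= TPOT_LIMIT_S

# Example: 1.2 s to first token, 40 s total latency for 256 output tokens
print(within_server_limits(1.2, 40.0, 256))  # True (TPOT is roughly 0.15 s)
```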

HBM3e Production Surge Expected to Make Up 35% of Advanced Process Wafer Input by End of 2024

TrendForce reports that the three largest DRAM suppliers are increasing wafer input for advanced processes. Following a rise in memory contract prices, companies have boosted their capital investments, with capacity expansion focusing on the second half of this year. It is expected that wafer input for 1alpha nm and above processes will account for approximately 40% of total DRAM wafer input by the end of the year.

HBM production will be prioritized due to its profitability and increasing demand. However, limited yields of around 50-60% and a wafer area 60% larger than DRAM products mean a higher proportion of wafer input is required. Based on the TSV capacity of each company, HBM is expected to account for 35% of advanced process wafer input by the end of this year, with the remaining wafer capacity used for LPDDR5(X) and DDR5 products.
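
A rough sketch of why those two factors compound into a larger wafer-input share; the yield and die-area figures come from the paragraph above, while the ~90% yield assumed for standard DRAM is purely a placeholder for comparison.

```python
# Rough sketch of why HBM consumes a disproportionate share of wafer input.
# The yield range and the 60% larger die area come from the paragraph above;
# the ~90% yield assumed for standard DRAM is only a placeholder for comparison.
hbm_area_factor = 1.60       # HBM die area vs. standard DRAM
hbm_yield = 0.55             # midpoint of the quoted 50-60% range
standard_dram_yield = 0.90   # assumption, for comparison only

wafer_input_ratio = (hbm_area_factor / hbm_yield) / (1.0 / standard_dram_yield)
print(f"Wafer input per good die, HBM vs. standard DRAM: ~{wafer_input_ratio:.1f}x")
```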

AMD Response to "ZENHAMMER: Rowhammer Attacks on AMD Zen-Based Platforms"

On February 26, 2024, AMD received new research related to an industry-wide DRAM issue documented in "ZENHAMMER: Rowhammer Attacks on AMD Zen-based Platforms" from researchers at ETH Zurich. The research demonstrates Rowhammer attacks on DDR4 and DDR5 memory using AMD "Zen" platforms. Given the history around Rowhammer, the researchers do not consider these Rowhammer attacks to be a new issue.

Mitigation
AMD continues to assess the researchers' claim of demonstrating Rowhammer bit flips on a DDR5 device for the first time. AMD will provide an update upon completion of its assessment.

ASRock Rack Offers World's Smallest NVIDIA Grace Hopper Superchip Server, Other Innovations at MWC 2024

ASRock Rack is a subsidiary of ASRock that deals with servers, workstations, and other data-center hardware, and benefits not just from the enormous brand trust of ASRock, but also from the firm hand of parent company and OEM giant Pegatron. At the 2024 Mobile World Congress, ASRock Rack introduced several new server innovations relevant to the AI Edge and 5G cellular carrier industries. A star attraction here is the new ASRock Rack MECAI-GH200, claimed to be the world's smallest server powered by the NVIDIA GH200 Grace Hopper Superchip.

The ASRock Rack MECAI-GH200 executes one of NVIDIA's original design goals behind the Grace Hopper Superchip—AI deployments in an edge environment. The GH200 module combines an NVIDIA Grace CPU with a Hopper AI GPU, and a performance-optimized NVLink interconnect between them. The CPU features 72 Arm Neoverse V2 cores and 480 GB of LPDDR5X memory, while the Hopper GPU has 132 SMs with 528 Tensor cores and 96 GB of HBM3 memory across a 6144-bit memory interface. Given that the newer HBM3e version of the GH200 won't come out before Q2-2024, this has to be the version with HBM3. What makes the MECAI-GH200 the world's smallest server with the GH200 is its compact 2U form-factor; competing solutions tend to be 3U or larger.

AMD 5th Gen EPYC "Turin" Pictured: Who Needs Accelerators When You Have 192 Cores?

AMD's upcoming server processor, the 5th Gen EPYC "Turin," has been pictured; an engineering sample is probably being evaluated by the company's data-center or cloud customers. The processor has a mammoth core-count of 192-core/384-thread in its high-density, cloud-focused variant that uses "Zen 5c" CPU cores. Its regular version, which uses larger "Zen 5" cores that can sustain higher clock speeds, also comes with a fairly high core-count of 128-core/256-thread, up from the 96-core/192-thread of the "Zen 4" based EPYC "Genoa."

The EPYC "Turin" server processor based on "Zen 5" comes with an updated sIOD (server I/O die), surrounded by as many as 16 CCDs (CPU complex dies). AMD is expected to build these CCDs on the TSMC N4P foundry node, which is a more advanced version of the TSMC N4 node the company currently uses for its "Phoenix" client processors, and the TSMC N5 node it uses for its "Zen 4" CCD. TSMC claims that the N4P node offers an up to 22% improvement in power efficiency over N5, as well as a 6% increase in transistor density. Each of the "Zen 5" CCDs is confirmed to have 8 CPU cores sharing 32 MB L3 cache memory. A total of 16 such CCDs add up to the processor's 128-core/256-thread number. The high-density "Turin" meant for cloud data-centers, is a whole different beast.

Intel "Emerald Rapids" Xeon Platinum 8592+ Tested, Shows 20%+ Improvement over Sapphire Rapids

Yesterday, Intel unveiled its latest Xeon data center processors, codenamed Emerald Rapids, delivering the new Xeon Platinum 8592+ flagship SKU with 64 cores and 128 threads. With its fresh silicon, Intel promises boosted performance and reduced power consumption. The comprehensive tech benchmarking website Phoronix essentially confirms Intel's pitch. Testing production servers running the new 8592+ showed solid gains over prior Intel models, not to mention the older generations still commonplace in data centers. On average, upgrading to the 8592+ increased single-socket server performance by around 23.5% compared to the previous-generation Sapphire Rapids configuration, the Xeon Platinum 8490H. The dual-socket configuration records a 17% boost in performance.

However, Intel is not alone in the data center market. The 64-core AMD offering that the Xeon Platinum 8592+ competes with is the EPYC 9554, and the Emerald Rapids chip is faster by about 2.3%. However, AMD's lineup doesn't stop at 64 cores: Genoa and Genoa-X with 3D V-Cache top out at 96 cores, while Bergamo goes up to 128 cores. On the power consumption front, the Xeon Platinum 8592+ was pulling about 289 Watts compared to the Xeon Platinum 8490H average of 306 Watts. At peak, the Xeon Platinum 8592+ CPU managed to hit 434 Watts compared to the Xeon Platinum 8490H peak of 469 Watts. This aligns with Intel's claims of enhanced efficiency. However, the 64-core counterpart from AMD, the EPYC 9554, had an average power consumption of 227 Watts and a recorded peak of 369 Watts.

SilverStone Intros XE360-SP5 AIO Liquid CPU Coolers for AMD Socket SP5

SilverStone introduced the XE360-SP5, an all-in-one liquid CPU cooler for AMD Socket SP5, making it fit for servers and workstations based on the EPYC "Genoa" and "Genoa-X" processors, and for AMD's upcoming Ryzen Threadripper HEDT processors, assuming AMD sticks to this socket infrastructure. The cooler features a copper water block that's optimized for the chiplet design of Socket SP5 processors, considering the hottest components (the up to twelve "Zen 4" CCDs) are toward the edges, while the central region has the relatively cooler sIOD. The block does not have an integrated pump, which makes it 1U-capable. It measures 92 mm (W) x 25 mm (H) x 118 mm (D). The block is made of nickel-plated copper, with some of its structural parts made of aluminium.

A set of 46 cm-long coolant tubes connects the block to the 28 mm-thick, 360 mm x 120 mm radiator. The radiator has an integrated pump that spins at speeds of up to 4,000 RPM. A set of three SilverStone 120 mm fans comes included; each takes a 4-pin PWM input and spins at speeds ranging from 600 to 2,800 RPM, with a noise level of up to 46 dBA, airflow of up to 87.72 CFM, and 3.09 mm H₂O static pressure. The company didn't reveal pricing.

Lightelligence Introduces Optical Interconnect for Composable Data Center Architectures

Lightelligence, the global leader in photonic computing and connectivity systems, today announced Photowave, the first optical communications hardware designed for PCIe and Compute Express Link (CXL) connectivity, unleashing next-generation workload efficiency.

Photowave, an Optical Networking (oNET) transceiver leveraging the significant latency and energy-efficiency advantages of photonics technology, empowers data center managers to scale resources within or across server racks. The first public demonstration of Photowave will be at Flash Memory Summit, today through Thursday, August 10, in Santa Clara, Calif.

AMD Reports Second Quarter 2023 Financial Results, Revenue Down 18% YoY

AMD today announced revenue for the second quarter of 2023 of $5.4 billion, gross margin of 46%, operating loss of $20 million, net income of $27 million and diluted earnings per share of $0.02. On a non-GAAP basis, gross margin was 50%, operating income was $1.1 billion, net income was $948 million and diluted earnings per share was $0.58.

"We delivered strong results in the second quarter as 4th Gen EPYC and Ryzen 7000 processors ramped significantly," said AMD Chair and CEO Dr. Lisa Su. "Our AI engagements increased by more than seven times in the quarter as multiple customers initiated or expanded programs supporting future deployments of Instinct accelerators at scale. We made strong progress meeting key hardware and software milestones to address the growing customer pull for our data center AI solutions and are on-track to launch and ramp production of MI300 accelerators in the fourth quarter."

Supermicro Expands AMD Product Lines with New Servers and New Processors Optimized for Cloud Native Infrastructure

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is announcing that its entire line of H13 AMD-based systems is now available with support for 4th Gen AMD EPYC processors based on the "Zen 4c" architecture, as well as 4th Gen AMD EPYC processors with AMD 3D V-Cache technology. Supermicro servers powered by 4th Gen AMD EPYC processors for cloud-native computing, with leading thread density and 128 cores per socket, deliver impressive rack density and scalable performance with the energy efficiency to deploy cloud-native workloads in more consolidated infrastructure. These systems are targeted at cloud operators looking to meet the ever-growing demands of user sessions and deliver new AI-enabled services. Servers featuring AMD 3D V-Cache technology excel at running technical applications in FEA, CFD, and EDA; the large Level 3 cache enables these types of applications to run faster than ever before. Over 50 world-record benchmarks have been set with AMD EPYC processors over the past few years.

"Supermicro continues to push the boundary of our product lines to meet customers' requirements. We design and deliver resource-saving, application-optimized servers with rack scale integration for rapid deployments," said Charles Liang, president, and CEO of Supermicro. "With our growing broad portfolio of systems fully optimized for the latest 4th Gen AMD EPYC processors, cloud operators can now achieve extreme density and efficiency for numerous users and cloud-native services even in space-constrained data centers. In addition, our enhanced high performance, multi-socket, multi-node systems address a wide range of technical computing workloads and dramatically reduce time-to-market for manufacturing companies to design, develop, and validate new products leveraging the accelerated performance of memory intensive applications."

ASRock Rack Leveraging Latest 4th Gen AMD EPYC Processors with AMD "Zen 4c" Architecture

ASRock Rack, the leading innovative server company, today announced its support of 4th Gen AMD EPYC processors with AMD "Zen 4c" architecture and 4th Gen AMD EPYC processors with AMD 3D V-Cache technology, as well as an expansion of its product range, spanning high-density storage, GPU, and multi-node servers, all for the new AMD processors.

"4th Gen AMD EPYC processors offer the highest core density of any x86 processor in the world and will deliver outstanding performance and efficiency for cloud-native workloads," said Lynn Comp, corporate vice president, Server Product and Technology Marketing, AMD. "Our latest family of data center processors allow customers to balance workload growth and flexibility with critical infrastructure consolidation mandates, enabling our customers to do more work, with more energy efficiency at a time when cloud native computing is transforming the data center."

Giga Computing Expands Support for 4th Gen AMD EPYC Processors

Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance servers, server motherboards, and workstations, today announced support for the latest 4th Gen AMD EPYC processors. The new processors, based on the "Zen 4c" architecture and featuring AMD 3D V-Cache technology, enhance GIGABYTE's enterprise solutions, enabling superior performance and scalability for cloud-native computing and technical computing applications. To date, more than thirty unique GIGABYTE systems and platforms support the latest generation of AMD EPYC 9004 processors. Over time, Giga Computing will roll out more new GIGABYTE models for this platform, including more SKUs for immersion-ready servers and direct liquid cooling systems.

"For every new generation of AMD EPYC processors, GIGABYTE has been there, offering diverse platform options for all workloads and users," said Vincent Wang, Sales VP at Giga Computing. "And with the recent announcement of new AMD EPYC 9004 processors for technical computing and cloud native computing, we are also ready to support them at this time on our current AMD EPYC 9004 Series platforms."

AMD EPYC "Bergamo" Uses 16-core Zen 4c CCDs, Barely 10% Larger than Regular Zen 4 CCDs

A SemiAnalysis report sheds light on just how much smaller the "Zen 4c" CPU core is compared to the regular "Zen 4." AMD's upcoming high core-count enterprise processor for cloud data-center deployments, the EPYC "Bergamo," is based on the new "Zen 4c" microarchitecture. Although it shares the same ISA as "Zen 4," the "Zen 4c" is essentially a low-power, lite version of the core with significantly higher performance/Watt. The core is physically smaller than a regular "Zen 4" core, which allows AMD to create CCDs (CPU core dies) with 16 cores, compared to the current "Zen 4" CCD with 8.

The 16-core "Zen 4c" CCD is built on the same 5 nm EUV foundry node as the 8-core "Zen 4" CCD, and internally features two CCX (CPU core complex), each with 8 "Zen 4c" cores. Each of the two CCX shares a 16 MB L3 cache among the cores. The SemiAnalysis report states that the dedicated L2 cache size of the "Zen 4c" core remains at 1 MB, just like that of the regular "Zen 4." Perhaps the biggest finding is their die-size estimation, which puts the 16-core "Zen 4c" CCD just 9.6% larger in die-area, than the 8-core "Zen 4" CCD. That's 72.7 mm² per CCD, compared to 66.3 mm² of the regular 8-core "Zen 4" CCD.

Tyan Showcases Density With Updated AMD EPYC 2U Server Lineup

Tyan, a subsidiary of MiTAC, showed off its new range of AMD EPYC-based servers with a distinct focus on compute density. These included new introductions to its Transport lineup of configurable servers, which now host EPYC 9004 "Genoa" series processors with up to 96 cores each. The new additions come as 2U servers, each with a different specialty focus. First up is the Transport SX TN85-B8261, aimed squarely at HPC and AI/ML deployment, with support for up to dual 96-core EPYC "Genoa" processors, 3 TB of registered ECC DDR5-4800, dual 10GbE via an Intel X550-AT2 as well as 1GbE for IPMI, six PCIe Gen 5 x16 slots with support for four GPGPUs for ML/HPC compute, and eight NVMe drives at the front of the chassis. An optional, more storage-focused configuration, if you choose not to install GPUs, offers 24 total NVMe SSDs at the front, soaking up the 96 lanes of PCIe.

MediaWorkstation Packs 192 Cores and 3TB DDR5 Into Their Updated a-X2P Luggable

MediaWorkstation first announced the a-X2P mobile workstation back in 2020, turning heads with its impressive dual AMD EPYC "Rome" processors that packed 128 "Zen 2" cores into a transportable package nearly small enough to take as carry-on luggage on a flight. The a-X2P chassis hosts an "EATX" server motherboard, seven full-height expansion slots, five 5.25-inch bays, and a slot-load optical drive within a 24-inch chassis that features an integrated LCD display and fold-out mechanical keyboard. The a-X2P also supports connecting up to six total 24-inch displays, all mounted to the chassis and up to 4K resolution each, for a configuration which the company says is, "ideal for production, live broadcast, and monitoring." This incredible amount of expansion brings the weight of the a-X2P up to 55 lbs (~25 kg).

This most recent update to the a-X2P changes nothing about the exterior feature set, but instead brings the internals up to date with AMD's EPYC 9004 "Genoa" processors and support for up to 3 TB of DDR5-4800 in a 12-channel configuration. MediaWorkstation does not specify which EPYC SKUs it is employing, but the highest configuration of 192 total cores leaves little guesswork, as AMD really only has the EPYC 9654(P) at the top of its lineup providing 96 cores, at a whopping $11,805 each. An interesting note on the EPYC 9654 variants is that they officially support a cTDP down to 320 W, a bit above the advertised maximum allowed cTDP of 300 W in the a-X2P. MediaWorkstation also does not specify what kind of power supply it has sourced for this behemoth, but it's a safe bet that it will be over 1.5 kW. Don't expect to power this monster with batteries for any usable amount of time.
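
A rough, assumption-heavy sanity check of that power-supply guess; only the dual-CPU cTDP figure comes from the article, while the memory and peripheral allowances are placeholders.

```python
# Rough, assumption-heavy sanity check of the ">1.5 kW" power-supply guess.
# Only the dual-CPU cTDP figure comes from the article; the rest are placeholders.
cpu_w = 2 * 320        # dual EPYC 9654 at the 320 W cTDP cited above
dram_w = 24 * 10       # 24 DDR5 RDIMMs at ~10 W each (assumption)
other_w = 300          # drives, displays, add-in cards, fans (assumption)

estimate_w = cpu_w + dram_w + other_w
print(f"Ballpark system draw: ~{estimate_w} W before PSU headroom")  # ~1180 W
```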

Report: ASP of NAND Flash Products Will Continue to Fall 5~10% in 2Q23, Whether Prices Continue to Decline in 2H23 Will Depend on Demand

Although NAND suppliers have continued to roll back production, there is still an oversupply of NAND Flash as demand for products such as servers, smartphones, and notebooks is still too weak. Therefore, TrendForce predicts that the ASP of NAND Flash will continue to fall in 2Q23, though that decline may shrink to 5~10%. The key to supply and demand returning to a market equilibrium lies in whether NAND suppliers can cut back on production even more. TrendForce believes if demand remains stable, then the ASP of NAND Flash will have an opportunity to rebound in 4Q23; if demand is weaker than expected, then ASP will take longer to recover.

Client SSD: Currently, PC OEMs have managed to liquidate most of their component inventory and are now gearing up in preparation for mid-year sales events. Suppliers are cutting prices to clear out their inventories of PCIe Gen 3 SSDs, which are gradually being phased out. Meanwhile, prices of PCIe Gen 4 SSDs continue to face downward pressure due to a slow intake of new customer orders. The continuous decline of QLC products in 1Q23 has also dragged down the prices of TLC products, and there is relatively little room for prices to keep falling in 2Q23. While it still remains unclear whether or not demand will recover, TrendForce projects that the prices of PC client SSDs will drop 5~10% in 2Q23.

AMD EPYC Genoa-X Processor Spotted with 1,248 MB of Total Cache, Including 3D V-Cache

AMD's EPYC lineup already features the new Zen 4 core, designed for better performance and efficiency. However, ever since the release of EPYC Milan-X processors brought 3D V-Cache to the server lineup, we have wondered whether AMD would continue to make such SKUs for upcoming generations. According to a report from Wccftech, a leaked table of specifications shows what some seemingly top-end Genoa-X SKUs will look like. The two SKUs listed are the "100-000000892-04" coded engineering sample and the "100-000000892-06" coded retail sample. With support for the same SP5 platform, these CPUs should be easily integrated with existing offerings from OEMs.

As far as specifications go, this processor features 384 MB of L3 cache coming from the CCDs, 768 MB of L3 cache from the 3D V-Cache stacks, and 96 MB of L2 cache, for a total of 1,248 MB of usable cache. Another 3 MB of L1 cache is dedicated to instructions and primary CPU data. This is roughly 2.6 times the cache of the regular Genoa design, and compared to Milan-X, the Genoa-X design also progresses with 56% more cache. With a TDP of up to 400 Watts, configurable down to 320 Watts, this CPU can boost up to 3.7 GHz. AMD EPYC Genoa-X CPUs are expected to hit the shelves in the middle of 2023.
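
The headline cache figure is simply the sum of the quoted numbers:

```python
# The 1,248 MB figure is simply the sum of the quoted cache numbers.
l3_base_mb = 384      # L3 on the CCDs
l3_vcache_mb = 768    # stacked 3D V-Cache
l2_mb = 96            # L2 across 96 cores (1 MB per core)

total_mb = l3_base_mb + l3_vcache_mb + l2_mb
print(total_mb)  # 1248
```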

AMD Expected to Occupy Over 20% of Server CPU Market and Arm 8% in 2023

AMD and Arm have been gaining on Intel in the server CPU market in the past few years, and the share AMD won over was especially large in 2022, as datacenter operators and server brands began finding that solutions from the number-two maker were growing superior to those of the long-time leader, according to Frank Kung, a DIGITIMES Research analyst focusing primarily on the server industry. He anticipates that AMD's share will stand well above 20% in 2023, while Arm will reach 8%.

Prices are one of the three major drivers behind datacenter operators and server brands switching to AMD. Comparing server CPUs from AMD and Intel with similar core counts, clock speeds, and hardware specifications, most of the former's products are at least 30% cheaper than the latter's, and the difference can exceed 40%, Kung said.

EK-Pro Line Extends to AMD Socket SP5 CPU Water Blocks

EK, the leading liquid cooling gear manufacturer, launches a workstation and a 1U rack-compatible high-performance liquid cooling solution for AMD Zen 4-based EPYC server processors. Code-named "Genoa," these AMD CPUs come with up to 96 cores and 192 threads and have a TDP of up to 360W. These specifications render these processors perfect for liquid cooling, especially in a dual-socket motherboard environment.

EK-Pro CPU WB SP5 Ni + Acetal
This is a dedicated enterprise-grade water block developed specifically for AMD processors. It features three standard G1/4" threaded ports located on the top of the water block and is intended for workstations and taller server racks.

Projected YoY Growth Rate of Global Server Shipments for 2023 Has Been Lowered to 1.87% Due to North American Cloud Service Providers Cutting Demand

Facing global economic headwinds, the four major North American cloud service providers (CSPs) have scaled back their server procurement quantities for 2023 and could make further downward corrections in the future. Meta is the leader among the four in terms of server demand reduction, followed by Microsoft, Google, and AWS. TrendForce has lowered the YoY growth rate of their total server procurement quantity for this year from the original projection of 6.9% to the latest projection of 4.4%. With CSPs cutting demand, global server shipments are now estimated to grow by just 1.87% YoY for 2023. Regarding the server DRAM market, prices there are estimated to drop by around 20~25% QoQ for 1Q23 as CSPs' downward corrections exacerbate the oversupply situation.

Looking at the four CSPs individually, the YoY decline of Meta's server procurement quantity has been widened to 3.0% and could get larger. The instability of the global economy remains the largest variable for all CSPs. Besides this, Meta has also encountered a notable obstacle in expanding its operation in Europe. Specifically, its data center in Denmark has not met the regional standard for emissions. This issue is expected to hinder its progress in setting up additional data centers across the EU. Moreover, businesses related to e-commerce account for about 98% of Meta's revenue. Therefore, the decline in e-commerce activities amidst the recent easing of the COVID-19 pandemic has impacted Meta's growth momentum. Additionally, Meta's server demand has been affected by the high level of component inventory held by server ODMs.

AIC's New Edge Server Platform Powered by 4th Gen AMD EPYC Processors Will Make a Debut at SC22

AIC Inc. (hereinafter referred to as "AIC"), a leading provider in enterprise storage and server solutions, today revealed its new edge server appliance powered by 4th Gen AMD EPYC processors (codenamed Genoa). The new server, EB202-CP, is designed to deliver superior performance in a compact size while offering excellent cost efficiency. Combined with the 4th Gen AMD EPYC processors, the EB202-CP is expected to drive innovation in AI, training simulation, autonomous vehicles, and edge applications. AIC will showcase the EB202-CP at the SC22 expo from November 14th to 17th, 2022.

The AIC EB202-CP is a 2U rackmount server measuring 22 inches in depth. It supports eight E1.S/E3.S or U.2 SSDs, which are front-serviceable and hot-swappable. The E1.S/E3.S drives use the Enterprise and Datacenter SSD Form Factor (EDSFF), which enables the EB202-CP to provide high-density, all-flash NVMe storage of up to half a petabyte while improving IOPS and space utilization. The EB202-CP also has strong expansion functionality, supporting up to two double-stack GPU or accelerator cards, two FHHL/HHHL PCIe 5.0 cards, and an OCP 3.0 card. Based on the AIC Capella server board, the EB202-CP supports a single 4th Gen AMD EPYC processor and eight DDR5 DIMMs. The 4th Gen AMD EPYC processors, built on the "Zen 4" architecture, are optimized for general-purpose workloads across enterprise, cloud, and edge. This new generation of AMD EPYC features the world's highest-performing x86 processor cores, is PCIe 5.0-ready, and enables low TCO. It also delivers leadership energy efficiency as well as state-of-the-art security features.
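
The "half petabyte" figure is plausible with today's EDSFF drive capacities; the per-drive capacity below is an assumption, not something AIC specifies.

```python
# The "half petabyte" figure is plausible if each of the eight EDSFF bays holds
# a ~61 TB-class NVMe SSD; the per-drive capacity is an assumption, not an AIC spec.
drives = 8
capacity_per_drive_tb = 61.44   # assumed E1.S/E3.S capacity class

total_tb = drives * capacity_per_drive_tb
print(f"~{total_tb:.0f} TB, roughly half a petabyte")  # ~492 TB
```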

AMD 4th Generation EPYC "Genoa" Processors Benchmarked

Yesterday, AMD announced the latest addition to its data center family of processors, called EPYC Genoa. Named the 4th generation EPYC processors, they feature a Zen 4 design and bring additional I/O connectivity like PCIe 5.0, DDR5, and CXL support. To disrupt cloud, enterprise, and HPC offerings, AMD decided to manufacture SKUs with up to 96 cores and 192 threads, an increase from the previous generation's 64C/128T designs. Today, we are learning more about the performance and power aspects of the 4th generation AMD EPYC Genoa 9654, 9554, and 9374F SKUs from third-party sources, rather than the official AMD presentation. Tom's Hardware published a heap of benchmarks consisting of rendering, compilation, encoding, parallel computing, molecular dynamics, and much more.

In the comparison tests, we have the AMD EPYC Milan 7763, 75F3, and Intel Xeon Platinum 8380, Intel's current top-end offering until Sapphire Rapids arrives. Comparing 3rd-gen 64C/128T EPYC SKUs with 4th-gen 64C/128T EPYC SKUs, the new generation brings about a 30% increase in compression and parallel-compute benchmark performance. When scaling to the 96C/192T SKU, the gap widens, and AMD takes a clear performance lead in the server marketplace. For more details, refer to Tom's Hardware's full benchmark results. Compared to Intel's offerings, AMD leads the pack with stronger single- and multi-threaded performance. Of course, beating Sapphire Rapids to market is a significant win for team red, and we are still waiting to see how the 4th generation Xeon stacks up against Genoa.

AMD Launches 4th Gen EPYC "Genoa" Zen 4 Server Processors: 100% Performance Uplift for 50% More Cores

AMD at a special media event titled "together we advance_data centers," formally launched its 4th generation EPYC "Genoa" server processors based on the "Zen 4" microarchitecture. These processors debut an all-new platform, with modern I/O connectivity that includes PCI-Express Gen 5, CXL, and DDR5 memory. The processors come in CPU core-counts of up to 96-core/192-thread. There are as many as 18 processor SKUs, differentiated not just in CPU core-counts, but also in the way the cores are spread across the up to 12 "Zen 4" chiplets (CCDs). Each chiplet features up to 8 "Zen 4" CPU cores, depending on the model, and up to 32 MB of L3 cache, and is built on the 5 nm EUV process at TSMC. The CCDs talk to a centralized server I/O die (sIOD), which is built on the 6 nm process.
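
Taken at face value, the headline's "100% performance uplift for 50% more cores" implies roughly a third more performance per core, before accounting for clock, IPC, and memory-bandwidth differences; a trivial calculation:

```python
# Rough arithmetic only: what the headline figures imply about per-core gains,
# before accounting for clock, IPC, and memory-bandwidth differences.
perf_uplift = 2.00    # "100% performance uplift"
core_uplift = 1.50    # 96 cores vs. 64 cores ("50% more cores")

per_core_gain = perf_uplift / core_uplift
print(f"Implied per-core performance gain: ~{per_core_gain:.2f}x")  # ~1.33x
```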

The processors AMD is launching today are the EPYC "Genoa" series, targeting general-purpose servers, although they can be deployed in large cloud data-centers, too. For large-scale cloud providers such as AWS, Azure, and Google Cloud, AMD is readying a different class of processor, codenamed "Bergamo," which it plans to launch later. In 2023, the company will launch the "Genoa-X" line of processors for technical-compute and HPC applications that benefit from large on-die caches, as they feature the 3D Vertical Cache technology. There will also be "Siena," a class of EPYC processors targeting the telecom and edge-computing markets, which could see an integration of more Xilinx IP.

TrendForce: Annual Growth of Server Shipments Forecast to Ebb to 3.7% in 2023, While DRAM Growth Slows

According to the latest TrendForce research, pandemic-induced materials shortages abated in the second half of this year, and the supply and delivery of short-term materials have recovered significantly. However, even assuming materials supply is secure and demand can be met, the annual growth rate of server shipments in 2023 is estimated to be only 3.7%, lower than the 5.1% of 2022.

TrendForce indicates that this growth slowdown is due to three factors. First, once material mismatch issues eased, buyers began adjusting previously placed purchase-order overruns. Thus, ODM orders also decreased, but this will not affect the 2022 shipment volume of whole servers for the time being. Second, due to the impact of rising inflation and weakness in the overall economy, corporate capital investment may trend more conservative, and IT-related investment will emphasize flexibility, such as the replacement of certain server terminals with cloud services. Third, geopolitical changes will drive the continuing emergence of demand for small-scale data centers, while previous construction of hyperscale data centers will slow. The military/HPC servers covered by the ban issued by the U.S. Department of Commerce on October 7 represent a very small share of the market in terms of application category, so the impact on the overall server market is limited at present. However, if the scope of the ban is expanded further in the future, it will herald a more significant slowdown risk for China's server shipment momentum in 2023.