News Posts matching #EPYC


AMD "Zen 6" Processor Families to Use a Mix of TSMC N2 and N3 Nodes

AMD's upcoming Zen 6 CPU family will leverage a combination of TSMC's N3 and N2 process nodes, according to slide decks shared with engineers at three leading motherboard vendors. These documents outline five distinct silicon lines set to arrive in late 2026 across servers, desktops, and notebooks. On the server front, the EPYC "Venice" lineup splits into Venice classic for general-purpose deployments and Venice dense for high-density cloud racks. Both variants use TSMC's custom-tuned N2P process, offering an 8-10% clock-speed boost over today's N3E. At the same time, each classic die grows to 12 Zen 6 cores, and each dense die houses 32 Zen 6c cores, enabling up to 256-core, 512-thread dense packages when eight dies are interconnected via the existing organic interposer.
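
The dense-package core count is straightforward multiplication; the minimal sketch below works through it, assuming eight CCDs per package for both variants (the classic-die count per package is not stated in the slides and is an assumption here).

```python
# Package math implied by the leak (CCD counts per package are assumptions,
# not AMD-confirmed figures).
dense_ccds = 8
cores_per_dense_ccd = 32        # Zen 6c
classic_ccds = 8                # assumed: same die count for the classic variant
cores_per_classic_ccd = 12      # Zen 6

print(dense_ccds * cores_per_dense_ccd)      # 256 cores, 512 threads with SMT
print(classic_ccds * cores_per_classic_ccd)  # 96 cores for a classic package
```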

For client systems, AMD has adopted codenames that hint at their intended usage profiles. "Olympic Ridge" will drive the Ryzen 10000 desktop series on the N2P node, while "Gator Range" targets gaming laptops exceeding 55 W. The mainstream thin-and-light segment will be served by "Medusa Point," featuring a hybrid design that pairs an N2P compute tile with an N3P I/O tile, with entry-level models opting for a cost-efficient monolithic N3P die. A larger "Medusa Halo" part and the budget-oriented "Bumblebee" series are also on the roadmap, though their process assignments remain under review. AMD and TSMC's close co-optimization of metal layers and libraries means the final silicon closely resembles an "N2-AMD" stack rather than a standard N2P node. First silicon is expected back from the fab before Christmas, with volume ramps timed for the back-to-school 2026 notebook cycle and a subsequent server refresh wave.

Giga Computing Unveils Liquid and Air-Cooled GIGABYTE AI Servers Accelerated by NVIDIA HGX B200 Platform

Giga Computing, an industry innovator and leader in enterprise hardware and advanced cooling solutions, today announced four new GIGABYTE servers built on the NVIDIA HGX B200 platform. This expansion of the GIGABYTE GPU server portfolio brings greater thermal design flexibility and support for the latest processors, including the new AI-optimized Intel Xeon 6 CPUs, giving customers more options as they tailor their systems for workloads and efficiency.

NVIDIA HGX B200 propels the data center into a new era of accelerated computing and generative AI. Built on NVIDIA Blackwell GPUs, the HGX B200 platform delivers up to 15X faster real-time inference on trillion-parameter models.

Intel's Server Share Slips to 67% as AMD and Arm Widen the Gap

In just a few years, AMD has gone from the underdog to Intel's most serious challenger in the server world. Thanks to its EPYC processors, AMD now captures about a third of every dollar spent on server CPUs, up from essentially zero in 2017. Over that same period, Intel's share has slipped from nearly 100% to roughly 63%, signaling a significant shift in what companies choose to power their data centers. The real inflection point came with AMD's Zen architecture: by mid-2020, EPYC had already claimed more than 10% of server-CPU revenues. Meanwhile, Intel's rollout of Sapphire Rapids Xeons encountered delays and manufacturing issues, leaving customers to look elsewhere. By late 2022, AMD was over the 20% mark, and Intel found itself under 75% for the first time in years.

Looking ahead, analysts at IDC and Mercury Research, with data compiled by Bank of America, expect AMD's slice of the revenue pie to grow to about 36% by 2025, while Intel drops to around 55%. Arm-based server chips are also starting to make real inroads, forecast to account for roughly 9% of CPU revenue next year as major cloud providers seek more energy- and cost-efficient options. By 2027, AMD could approach a 40% revenue share, Intel may fall below half the market, and Arm designs could capture 10-12%. Remember that these figures track revenue rather than unit sales: AMD's gains come primarily from high-end, high-core-count processors, whereas Intel still shifts plenty of lower-priced models. With AMD continuing to build on its Genoa and Bergamo EPYCs and Intel banking on its E-core Xeon 6 series to regain its footing, the fight for server-CPU supremacy is far from over. Still, Intel's once-unbeatable lead is clearly under threat.

HPE Expands ProLiant Gen12 Server Portfolio With 5th Gen AMD EPYC Processors

HPE today announced an expansion to the HPE ProLiant Compute Gen12 server portfolio, which delivers next-level security, performance and efficiency. The expanded portfolio includes two new servers powered by 5th Gen AMD EPYC processors to optimize memory-intensive workloads, and new automation features for greater visibility and control delivered through HPE Compute Ops Management.

In addition, HPE ProLiant Compute servers are now available with HPE Morpheus VM Essentials Software support. HPE Morpheus VM Essentials is an open virtualization solution that helps reduce costs, minimize vendor lock-in, and simplify IT management. HPE also announced new HPE for Azure Local solutions with the HPE ProLiant DL145 Gen11 server to empower expansion of purpose-built edge capabilities across distributed environments.

Shadow Launches Neo: The Next Generation Cloud Gaming PC

SHADOW, the global leader in high-performance cloud computing, is proud to announce the launch of Neo, a brand-new cloud gaming PC offering designed to deliver next-level RTX experiences for gamers, creators, and professionals alike. Neo will officially roll out in Europe and North America starting June 16, 2025.

Building on the success of the company's previous offers, Neo replaces the widely adopted "Boost" tier and delivers major performance leaps of up to 150% in gaming and 200% in professional software. All existing Boost users are being upgraded to Neo at no additional cost, while rates for new users will start at $37.99 per month.

AMD Namedrops EPYC "Venice" Zen 6 and EPYC "Verano" Zen 7 Server Processors

AMD at its 2025 Advancing AI event name-dropped its two next generations of EPYC server processors to succeed the current EPYC "Turin" powered by the Zen 5 microarchitecture. 2026 will see AMD debut the Zen 6 microarchitecture, and its main workhorse for the server segment will be EPYC "Venice." This processor will likely see a generational increase in CPU core counts, increased IPC from the full-sized Zen 6 cores, support for newer ISA extensions, and an updated I/O package. AMD is looking to pack "Venice" with up to 256 CPU cores per package.

AMD is looking to increase the CPU core count per CCD (CPU complex die) with "Zen 6." The company plans to build these CCDs on the 2 nm TSMC N2 process node. The sIOD (server I/O die) of "Venice" implements PCI-Express Gen 6 for a generational doubling in bandwidth to GPUs, SSDs, and NICs. AMD is also claiming memory bandwidth as high as 1.6 TB/s. There are a couple of ways AMD could achieve this: either by increasing memory clock speeds, or by giving the processor a 16-channel DDR5 memory interface, up from the current 12-channel DDR5. The company could also add support for multiplexed DIMM standards, such as MR-DIMMs and MCR-DIMMs. All told, AMD is claiming a 70% increase in multithreaded performance over the current EPYC "Turin," which we assume compares the highest-performing part to its next-gen successor.
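
As a rough sanity check on the 1.6 TB/s figure, the sketch below works through both routes mentioned above; the transfer rates used (12,800 MT/s for a 16-channel layout, roughly 17,000 MT/s for multiplexed DIMMs on 12 channels) are illustrative assumptions, not leaked specifications.

```python
# Back-of-the-envelope check of the claimed 1.6 TB/s memory bandwidth.
# Peak theoretical bandwidth = channels * transfer rate * 8 bytes per transfer
# (each DDR5 channel carries a 64-bit data path).
def peak_bw_gb_s(channels: int, mt_per_s: int) -> float:
    return channels * mt_per_s * 8 / 1000

# Route 1: widen the interface to 16 channels (assumed 12,800 MT/s).
print(peak_bw_gb_s(16, 12_800))  # ~1638 GB/s, i.e. roughly 1.6 TB/s
# Route 2: keep 12 channels but run faster multiplexed DIMMs (assumed ~17,000 MT/s).
print(peak_bw_gb_s(12, 17_000))  # ~1632 GB/s, also roughly 1.6 TB/s
```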

Supermicro Delivers Liquid-Cooled and Air-Cooled AI Solutions with AMD Instinct MI350 Series GPUs and Platforms

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing that both liquid-cooled and air-cooled GPU solutions will be available with the new AMD Instinct MI350 series GPUs, optimized for unparalleled performance, maximum scalability, and efficiency. The Supermicro H14 generation of GPU-optimized solutions, featuring dual AMD EPYC 9005 CPUs alongside the AMD Instinct MI350 series GPUs, is designed for organizations seeking maximum performance at scale while reducing the total cost of ownership of their AI-driven data centers.

"Supermicro continues to lead the industry with the most experience in delivering high-performance systems designed for AI and HPC applications," said Charles Liang, president and CEO of Supermicro. "Our Data Center Building Block Solutions enable us to quickly deploy end-to-end data center solutions to market, bringing the latest technologies for the most demanding applications. The addition of the new AMD Instinct MI350 series GPUs to our GPU server lineup strengthens and expands our industry-leading AI solutions and gives customers greater choice and better performance as they design and build the next generation of data centers."

AMD Unveils Vision for an Open AI Ecosystem, Detailing New Silicon, Software and Systems at Advancing AI 2025

AMD delivered its comprehensive, end-to-end integrated AI platform vision and introduced its open, scalable rack-scale AI infrastructure built on industry standards at its 2025 Advancing AI event.

AMD and its partners showcased:
  • How they are building the open AI ecosystem with the new AMD Instinct MI350 Series accelerators
  • The continued growth of the AMD ROCm ecosystem
  • The company's powerful, new, open rack-scale designs and roadmap that bring leadership rack-scale AI performance beyond 2027

Pegatron Unveils AI-Optimized Server Innovations at GTC Paris 2025

PEGATRON, a globally recognized Design, Manufacturing, and Service (DMS) provider, is showcasing its latest AI server solutions at GTC Paris 2025. Built on NVIDIA Blackwell architecture, PEGATRON's cutting-edge systems are tailored for AI training, reasoning, and enterprise-scale deployment.

NVIDIA GB300 NVL72
At the forefront is the RA4802-72N2, built on the NVIDIA GB300 NVL72 rack system, featuring 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace CPUs. Designed for AI factories, it boosts output by up to 50X. PEGATRON's in-house developed Coolant Distribution Unit (CDU) delivers 310 kW of cooling capacity with redundant hot-swappable pumps, ensuring performance and reliability for mission-critical workloads.
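
For context, a simple division of the quoted CDU capacity gives a per-GPU cooling budget; this is our own arithmetic under the assumption that the 310 kW covers the entire 72-GPU rack, not a vendor-stated per-slot figure.

```python
# Rough cooling-budget arithmetic for the RA4802-72N2 rack described above.
cdu_capacity_kw = 310
gpu_count = 72

per_gpu_kw = cdu_capacity_kw / gpu_count
print(round(per_gpu_kw, 1))  # ~4.3 kW of cooling capacity per GPU position,
                             # a budget that also covers Grace CPUs, NVLink
                             # switches, and other rack components
```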

MSI Powers AI's Next Leap for Enterprises at ISC 2025

MSI, a global leader in high-performance server solutions, is showcasing its enterprise-grade, high-performance server platforms at ISC 2025, taking place June 10-12 at booth #E12. Built on standardized and modular architectures, MSI's AI servers are designed to power next-generation AI and accelerated computing workloads, enabling enterprises to rapidly advance their AI innovations.

"As AI workloads continue to grow and evolve toward inference-driven applications, we're seeing a significant shift in how enterprises approach AI deployment," said Danny Hsu, General Manager of Enterprise Platform Solutions at MSI. "With modular and standards-based architectures, enterprise data centers can now adopt AI technologies more quickly and cost-effectively than ever before. This marks a new era where AI is not only powerful but also increasingly accessible to businesses of all sizes.

AMD EPYC Processors Now Power Nokia Cloud Infrastructure for Next-Gen Telecom Networks

AMD today announced that Nokia has selected 5th Gen AMD EPYC processors to power the Nokia Cloud Platform, bringing leadership performance and performance per watt to next-generation telecom infrastructure. "Telecom operators are looking for infrastructure solutions that combine performance, scalability, and power efficiency to manage the growing complexity and scale of 5G networks," said Dan McNamara, senior vice president and general manager, Server Business, AMD. "Working together with Nokia, we're using the leadership performance and energy efficiency of the 5th Gen AMD EPYC processors to help our customers build and operate high-performance and efficient networks."

"This expanded collaboration between Nokia and AMD brings a multitude of benefits and underscores Nokia's commitment to innovation through diverse chip partnerships in 5G network infrastructure. The new 5th Gen AMD EPYC processors offer high performance and impressive energy efficiency, enabling Nokia to meet the demanding needs of its 5G customers while contributing to the industry's sustainability goals," said Kal De, senior vice president, Product and Engineering, Cloud and Network Services, Nokia.

Potential Next-gen AMD EPYC "Venice" CPU Identifier Turns Up in Linux Kernel Update

InstLatX64 has spent a significant chunk of time investigating AMD web presences; last month they unearthed various upcoming "Zen 5" processor families. This morning, four mysterious CPU identifiers—"B50F00, B90F00, BA0F00, and BC0F00"—were highlighted in a social media post. According to screen-captured information, Team Red's Linux team seems to be patching in support for "Zen 6" technologies—InstLatX64 believes that the "B50F00" ID and internal "Weisshorn" codename indicate a successor to AMD's current-gen EPYC "Turin" server-grade processor series (known internally as "Breithorn"). Earlier in the month, a set of AIDA64 Beta update release notes mentioned preliminary support for "next-gen AMD desktop, server and mobile processors."

In a mid-April (2025) announcement, Dr. Lisa Su and colleagues revealed that their: "next-generation AMD EPYC processor, codenamed 'Venice,' is the first HPC product in the industry to be taped out and brought up on the TSMC advanced 2 nm (N2) process technology." According to an official "data center CPU" roadmap, "Venice" is on track to launch in 2026. Last month, details of "Venice's" supposed mixed configuration of "Zen 6" and "Zen 6C" cores—plus other technical tidbits—were disclosed via a leak. InstLatX64 and other watchdogs reckon that some of the latest identifiers refer to forthcoming "Venice-Dense" designs and unannounced Instinct accelerators.

ASUS Announces ESC A8A-E12U Support for AMD Instinct MI350 Series GPUs

ASUS today announced that its flagship high-density AI server, ESC A8A-E12U, now supports the latest AMD Instinct MI350 series GPUs. This enhancement empowers enterprises, research institutions, and cloud providers to accelerate their AI and HPC workloads with next-generation performance and efficiency—while preserving compatibility with existing infrastructure.

Built on the 4th Gen AMD CDNA architecture, AMD Instinct MI350 series GPUs deliver powerful new capabilities, including 288 GB of HBM3E memory and up to 8 TB/s of bandwidth—enabling faster, more energy-efficient execution of large AI models and complex simulations. With expanded support for low-precision compute formats such as FP4 and FP6, the Instinct MI350 series significantly accelerates generative AI, inference, and machine-learning workloads. Importantly, Instinct MI350 series GPUs maintain drop-in compatibility with existing AMD Instinct MI300 series-based systems, such as those running Instinct MI325X—offering customers a cost-effective and seamless upgrade path. These innovations reduce server resource requirements and simplify scaling and workload management, making Instinct MI350 series GPUs an ideal choice for efficient, large-scale AI deployments.
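
The memory figures above line up with a straightforward per-stack breakdown; the stack count and per-stack values below are assumptions for illustration (eight HBM3E stacks), not numbers taken from the announcement.

```python
# Rough breakdown of the MI350-series memory spec quoted above.
stacks = 8                # assumed HBM3E stacks per GPU
gb_per_stack = 36         # assumed 12-high HBM3E stack capacity
tb_s_per_stack = 1.0      # assumed per-stack bandwidth

print(stacks * gb_per_stack)    # 288 GB total capacity
print(stacks * tb_s_per_stack)  # ~8 TB/s aggregate bandwidth
```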

MiTAC Computing Deploys Latest AMD EPYC 4005 Series Processors

MiTAC Computing Technology Corp., a subsidiary of MiTAC Holdings Corp. and a leading manufacturer in server platform design, introduced its latest offering featuring the AMD EPYC 4005 Series processors. These updated server solutions offer enhanced performance and energy efficiency to meet the growing demands of modern business workloads, including AI, cloud services, and data analytics.

"The new AMD EPYC 4005 Series processors deliver the performance and capabilities our customers need at a price point that makes ownership more attractive and attainable," said Derek Dicker, corporate vice president, Enterprise and HPC Business, AMD. "We're enabling businesses to own their computing infrastructure at an economical price, while providing the performance, security features and efficiency modern workloads demand."

ASRock Rack Announces Support for AMD EPYC 4005 Series Processors

ASRock Rack Inc., the leading innovative server company, today announced support for the newly launched AMD EPYC 4005 Series processors across its extensive lineup of AM5 socket server systems and motherboards. This announcement reinforces ASRock Rack's commitment to delivering cutting-edge performance, broad platform compatibility, and long-term value to customers in data centers, growing businesses, and edge computing environments.

Built on the AMD 'Zen 5' architecture, the AMD EPYC 4005 Series features up to 16 SMT-enabled cores and supports DDR5 memory speeds up to 5600 MT/s, delivering class-leading performance per watt within constrained IT budgets. As AI becomes embedded in everyday business software, AMD EPYC 4005 Series CPUs provide the performance headroom needed for AI-enhanced workloads such as automated customer service and data analytics while maintaining the affordability essential for small businesses. The series expands the proven AMD EPYC portfolio with solutions purpose-built for growing infrastructure demands.
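
As a quick illustration of what DDR5 at 5600 MT/s means for these platforms, the sketch below computes theoretical peak memory bandwidth, assuming the AM5 socket's two 64-bit DDR5 channels.

```python
# Peak-bandwidth estimate for an EPYC 4005 platform (assumes two DDR5 channels,
# as on the AM5 socket; 64-bit data path per channel).
channels = 2
mt_per_s = 5600
bytes_per_transfer = 8

peak_gb_s = channels * mt_per_s * bytes_per_transfer / 1000
print(peak_gb_s)  # ~89.6 GB/s theoretical peak
```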

Vultr Cloud Platform Broadened with AMD EPYC 4005 Series Processors

Vultr, the world's largest privately-held cloud infrastructure company, today announced that it is one of the first cloud providers to offer the new AMD EPYC 4005 Series processors. The AMD EPYC 4005 Series processors will be available on the Vultr platform, enabling enterprise-class features and leading performance for businesses and hosted IT service providers. The AMD EPYC 4005 Series processors extend the broad AMD EPYC processor family, powering a new line of cost-effective systems designed for growing businesses and hosted IT service providers that demand performance, advanced technologies, energy efficiency, and affordability. Servers featuring the high-performance AMD EPYC 4005 Series CPUs with streamlined memory and I/O feature sets are designed to deliver compelling system price-to-performance metrics on key customer workloads. Meanwhile, the combination of up to 16 SMT-capable cores and DDR5 memory in the AMD EPYC 4005 Series processors enables smooth execution of business-critical workloads, while maintaining the thermal and power efficiency characteristics crucial for affordable compute environments.

"Vultr is committed to delivering the most advanced cloud infrastructure with unrivaled price-to-performance," said J.J. Kardwell, CEO of Vultr. "The AMD EPYC 4005 Series provides straightforward deployment, scalability, high clock speed, energy efficiency, and best-in-class performance. Whether you are a business striving to scale reliably or a developer crafting the next groundbreaking innovation, these solutions are designed to deliver exceptional value and meet demanding requirements now and in the future." Vultr's launch of systems featuring the AMD EPYC 4245P and AMD EPYC 4345P processors will expand the company's robust line of Bare Metal solutions. Vultr will also feature the AMD EPYC 4345P as part of its High Frequency Compute (HFC) offerings for organizations requiring the highest clock speeds and access to locally-attached NVMe storage.

Supermicro Announces New MicroCloud Servers Powered by AMD EPYC 4005 Series Processors

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing that a number of Supermicro servers are now shipping with the latest addition to the AMD EPYC 4000 series family of CPUs, the AMD EPYC 4005 Series processors. These servers are optimized to deliver a powerful balance of performance density, scalability, and affordability. Supermicro will feature its new Supermicro MicroCloud multi-node solution, in a 10-node CPU and a 5-node CPU + GPU version, in a 3U form factor, ideal for organizations seeking to optimize the space, energy, and cost of their IT infrastructure. Supermicro's MicroCloud product family targets dedicated hosting markets where sharing the chassis, power, and cooling is desired while still maintaining physical separation.

"Supermicro continues to deliver first-to-market innovative rack-scale solutions for a wide range of use cases, with the addition of our new Supermicro MicroCloud multi-node solution that feature the latest AMD EPYC 4005 Series processors, designed to optimize the needs of on-premises or cloud service providers who need powerful but cost-effective solutions in a compact 3U form factor," said Mory Lin, Vice President, IoT/Embedded & Edge Computing at Supermicro. "These servers offer up to 2080 cores on a standard 42U rack, greatly reducing data center rack space and overall TCO for enterprise and small medium businesses."

MSI Introduces Server Platforms with AMD EPYC 4005 Processors for SMB Workloads

Today, MSI announced a lineup of entry-level servers and server motherboards powered by AMD EPYC 4005 Series Processors. Featuring up to 16 cores based on the Zen 5 architecture, these platforms deliver the performance, efficiency, and cost-effectiveness that small businesses, new business owners, and system integrators need to build reliable infrastructure, all within limited IT budgets and space footprints.

"We see this launch as a way to make enterprise-grade compute more accessible to smaller organizations," said Danny Hsu, General Manager of MSI's Enterprise Platform Solutions. "By pairing AMD EPYC 4005 Series Processors with MSI's proven hardware design, we're delivering practical, scalable server solutions built for real-world business scenarios—from private cloud and storage to web hosting and office IT systems."

AMD EPYC "Venice" Leak: 2 nm Zen 6 and Zen 6c to Offer Up to 256C/512T and 1 GB of L3 in a Single Socket

AMD is preparing to set a new data-center performance bar with its upcoming 6th-generation EPYC "Venice" processors, built on the latest "Zen 6" and "Zen 6C" core designs and the industry's first 2 nm-class node from TSMC. Leaked engineering diagrams and forum reports suggest Venice will offer greater core counts, memory capacity, and cache capacity for demanding server workloads. At the heart of the Venice platform lies a multi-chip module design featuring up to eight Core Complex Dies (CCDs) arrayed around one or more central I/O dies (IODs). In its Zen 6 configuration, each CCD houses 12 "classic" cores, yielding a maximum of 96 cores and 192 threads per socket. The cache per CCD is rumored to reach 128 MB of shared L3, double that of its predecessor, delivering up to 1 GB of L3 cache in a fully populated eight-CCD package.
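
The per-socket totals for the classic configuration follow directly from the per-CCD figures above; a minimal check:

```python
# Per-socket totals for the classic Zen 6 "Venice" configuration described above.
ccds = 8
cores_per_ccd = 12
l3_mb_per_ccd = 128

cores = ccds * cores_per_ccd
print(cores, cores * 2)        # 96 cores, 192 threads with SMT
print(ccds * l3_mb_per_ccd)    # 1024 MB, i.e. 1 GB of L3 per socket
```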

For customers prioritizing raw thread count over per-core performance, the Zen 6C variant pushes the envelope to 256 "dense" cores and 512 threads by leveraging a leaner core design and higher CCD count. Despite the density boost, each Zen 6C core maintains 2 MB of L3 cache, preserving latency benefits even at scale. Memory bandwidth also receives a major uplift: Venice will support both 16-channel (SP7) and 12-channel (SP8) DDR5 configurations, accommodating up to 6 TB of system RAM per socket. The PCI-Express lane count is still unknown, but it could be well over the 128 lanes offered by current 5th-generation EPYC CPUs. Thermal and power targets differentiate the two sockets: SP7 models are expected to reach TDPs around 600 W, up from 400 W on current Zen 5 chips, while SP8 parts aim for 350-400 W to suit more moderate-density racks. This tiered approach will let hyperscalers and enterprise customers balance performance, efficiency, and cooling infrastructure at the scale their deployments demand. Launch is projected for late 2025 or early 2026.

AMD Reports First Quarter 2025 Financial Results

AMD today announced financial results for the first quarter of 2025. First quarter revenue was $7.4 billion, gross margin was 50%, operating income was $806 million, net income was $709 million and diluted earnings per share was $0.44. On a non-GAAP(*) basis, gross margin was 54%, operating income was $1.8 billion, net income was $1.6 billion and diluted earnings per share was $0.96.
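
For readers who want the margins these figures imply, the arithmetic below derives GAAP and non-GAAP operating margin from the stated revenue and operating income; this is simple division on the quoted numbers, not figures taken from AMD's release.

```python
# Operating margins implied by the stated Q1 2025 figures.
revenue = 7.4e9

gaap_operating_income = 806e6
non_gaap_operating_income = 1.8e9

print(round(gaap_operating_income / revenue * 100, 1))      # ~10.9% GAAP operating margin
print(round(non_gaap_operating_income / revenue * 100, 1))  # ~24.3% non-GAAP operating margin
```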

"We delivered an outstanding start to 2025 as year-over-year growth accelerated for the fourth consecutive quarter driven by strength in our core businesses and expanding data center and AI momentum," said Dr. Lisa Su, AMD chair and CEO. "Despite the dynamic macro and regulatory environment, our first quarter results and second quarter outlook highlight the strength of our differentiated product portfolio and consistent execution positioning us well for strong growth in 2025."

TSMC Leadership Speaks of "Unprecedented" Demand for 2 nm; Greater Than Previous-gen Nodes

Despite recent whispers of TSMC losing a key 4 nm node process customer, industry analysts reckon that Taiwan's premier foundry business will remain in a comfortable leading position for the foreseeable future. "Optimistic" expert opinion points to "revenue growth for supply chain players," driven by the rapid progress of the firm's 2 nm manufacturing. Naturally, these cutting-edge capabilities—bolstered by GAA (gate-all-around) transistors—are being tracked with keen interest. According to a fairly fresh Ctee Taiwan news piece, the usual big players are reportedly queued up and present within factory order books. The likes of Apple, NVIDIA, AMD, Qualcomm, MediaTek and Broadcom are mentioned. Mid-way through last month, Team Red officially announced a big collaborative milestone: "(our) next-generation AMD EPYC processor—codenamed "Venice"—is the first HPC product in the industry to be taped out and brought up on the TSMC advanced 2 nm (N2) process technology."

Ctee's report cites recent statements by C. C. Wei. Apparently, the TSMC CEO has stressed (on multiple occasions) that there is "unprecedented" demand for his company's 2 nm production pipelines—far exceeding previous levels for 3 nm. In addition, TSMC reps—who are currently touring the States, hosting technology symposiums—have revealed 2 nm (N2) defect density trends. Ctee outlined these intriguing details: "(N2's) defect density (D0) performance is comparable to that of the 5 nm family, and even surpasses the 7 nm and 3 nm processes of the same period, making it one of the most technologically mature advanced nodes." Wei's foundry team seems to be well ahead of the main competition; insiders reckon that alleged equipment upgrades signal a push into 1.4 nm territory. Crucially, mass production of 2 nm (N2) wafers is expected to begin later this year, with industry moles pointing to a ramp spanning multiple facilities.

Leaks Suggest AMD AM5 Future Support for Ryzen 9000G "Gorgon Point" & EPYC 4005 "Grado" CPUs

PC hardware watchers continue to pore over official AMD repositories and adjacent databases, in the hopes of finding unannounced next-gen technologies. Olrak29 and InstLatX64 have presented their latest Team Red-related findings, apparently reaching across future desktop, mobile, and workstation product families. As outlined and interpreted by VideoCardz, several of these next-gen branches are already somewhat "known" properties—namely AMD's allegedly Zen 5-based Ryzen Threadripper "Shimada Peak" 9000WX (workstation) processor series. Following almost two years of leaks, an official introduction is expected to happen during Computex 2025. The Ryzen 9000G "Gorgon Point" desktop (Zen 5 + RDNA 3.5) APU series has turned up again; now "fully" linked to the AM5 socket platform (not a big surprise). The two leakers have also uncovered another rumored AM5-bound product lineup—"Grado" chips could be based on existing "Granite Ridge" foundations, but elevated to commercial/enterprise levels. These speculated basic/entry-level "EPYC 4005" processors are floated as natural successors to the currently available EPYC 4004 series (based on the Ryzen 7000 "Raphael" architecture).

Olrak29 and InstLatX64 have also found multiple mysterious FP8 socket-related Ryzen AI Mobile SoCs. "Krackan2" could be a cheaper refresh of current "Krackan Point" APUs—Tom's Hardware proposes smaller designs that sport fewer cores and omit the NPU. Kepler_L2 has weighed in on the matter of three listed "Gorgon Point" IPs—he reckons that the third variant ("Gorgon Point3") will be a spin-off (aka refresh) of a "Krackan2" design. As suggested by insider knowledge, Team Red's convoluted scheme points to "Gorgon Point" being the sequel to "Strix Point." An FF5-based "Soundwave" processor design has appeared alongside the aforementioned future Ryzen AI Mobile chips—industry whispers propose that AMD will be leveraging Arm architecture within a lower product tier. InstLatX64 pulled additional compelling information from AMD's Technical Information Portal—providing further insight into Ryzen AI "Medusa Point" APUs (Zen 6 + RDNA 3.5) being dreamt up, with a matching "larger footprint" FP10 platform.

XConn Technologies Demonstrates Dynamic Memory Allocation Using CXL Switch and AMD Technologies at CXL DevCon 2025

XConn Technologies, the innovation leader in next-generation interconnect technology for the future of high-performance computing and AI applications, today announced a groundbreaking demonstration of dynamic memory allocation using Compute Express Link (CXL) switch technology at CXL DevCon 2025, taking place April 29-30 at the Santa Clara Marriott hotel. The demonstration highlights a major advancement in memory flexibility, showcasing how CXL switching can enable seamless, on-demand memory pooling and expansion across heterogeneous systems.

The milestone, achieved in collaboration with AMD, unlocks a new level of efficiency for cloud, artificial intelligence (AI), and high-performance computing (HPC) workloads. By dynamically allocating memory via the XConn Apollo CXL switch, data centers can eliminate over-provisioning, enhance performance, and significantly reduce total cost of ownership (TCO).

MiTAC Computing Unveils Next-generation OCP Servers and Open Firmware Innovations at the OCP EMEA Summit 2025

MiTAC Computing Technology, a global leader in high-performance and energy-efficient server solutions, is proud to announce its participation at the OCP EMEA Summit 2025, taking place April 29-30 at the Convention Centre Dublin. At Booth No. B13, MiTAC will showcase its latest innovations in server design, sustainable cooling, and open-source firmware development - empowering the future of data center infrastructure.

C2810Z5 & C2820Z5: Advancing Sustainable Thermal Design
MiTAC will debut two new OCP server platforms - C2810Z5 (air-cooled) and C2820Z5 (liquid-cooled) - built to meet the demands of high-performance computing (HPC) and AI workloads. Designed around the latest AMD EPYC 9005 series processors, these multi-node servers are engineered to deliver optimal compute density and power efficiency.

MSI Servers Power the Next-Gen Datacenters at the 2025 OCP EMEA Summit

MSI, a leading global provider of high-performance server solutions, unveiled its latest ORv3-compliant and high-density multi-node server platforms at the 2025 OCP EMEA Summit, held April 29-30 at booth A19. Built on OCP-recognized DC-MHS architecture and supporting the latest AMD EPYC 9005 Series processors, these next-generation platforms are engineered to deliver outstanding compute density, energy efficiency, and scalability—meeting the evolving demands of modern, data-intensive datacenters.

"We are excited to be part of open-source innovation and sustainability through our contributions to the Open Compute Project," said Danny Hsu, General Manager of Enterprise Platform Solutions. "We remain committed to advancing open standards, datacenter-focused design, and modular server architecture. Our ability to rapidly develop products tailored to specific customer requirements is central to enabling next-generation infrastructure, making MSI a trusted partner for scalable, high-performance solutions."