News Posts matching #AMD


AMD Silently Releases the Ryzen 5 5500X3D CPU

A little over a year has passed since we last heard rumors about AMD releasing a sub-$200 chip for price-conscious gamers, the Ryzen 5 5500X3D for AM4. Well, those rumors recently became reality, as leaker @Zed__Wang on X spotted the chip on AMD's website. The "new" AMD Ryzen 5 5500X3D is a six-core, twelve-thread processor built on the Zen 3 architecture using TSMC's 7 nm process. It operates at a base clock of 3 GHz with boost speeds up to 4 GHz (the older 5600X3D runs a 3.3 GHz base and 4.4 GHz boost clock), within a 105 W power budget. The processor features 384 KB of L1 cache, 3 MB of L2 cache, and a substantial 96 MB of L3 cache.

The 5500X3D's main selling point is its 3D V-Cache technology combined with AM4 socket compatibility for existing systems. If you already have an AM4 system and aren't ready for a complete upgrade, the 5500X3D could be worth considering as a drop-in performance boost. The decision will largely depend on pricing when it becomes available, since it isn't yet listed on any e-commerce websites. For new builds, a modern AM5 processor like the Ryzen 5 7600X or 9600X would be a better choice, offering more future upgrade paths.

AMD Namedrops EPYC "Venice" Zen 6 and EPYC "Verano" Zen 7 Server Processors

AMD at its 2025 Advancing AI event name-dropped its two next generations of EPYC server processors to succeed the current EPYC "Turin" powered by Zen 5 microarchitecture. 2026 will see AMD debut the Zen 6 microarchitecture, and its main workhorse for the server segment will be EPYC "Venice." This processor will likely see a generational increase in CPU core counts, increased IPC from the full-sized Zen 6 cores, support for newer ISA, and an updated I/O package. AMD is looking to pack "Venice" with up to 256 CPU cores per package.

AMD is looking to increase the CPU core count per CCD (CPU complex die) with "Zen 6." The company plans to build these CCDs on the 2 nm TSMC N2 process node. The sIOD (server I/O die) of "Venice" implements PCI-Express Gen 6 for a generational doubling in bandwidth to GPUs, SSDs, and NICs. AMD is also claiming memory bandwidth as high as 1.6 TB/s. There are a couple of ways to achieve this: increasing memory clock speeds, or giving the processor a 16-channel DDR5 memory interface, up from the current 12 channels. The company could also add support for multiplexed DIMM standards, such as MR-DIMMs and MCR-DIMMs. All told, AMD is claiming a 70% increase in multithreaded performance over the current EPYC "Turin," which we assume compares the highest-performing part to its next-gen successor.
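The 1.6 TB/s claim can be sanity-checked with simple bandwidth arithmetic. A minimal sketch follows; the channel counts and DDR5 speed grades used below are illustrative assumptions on our part, not confirmed AMD specifications:

```python
def peak_bandwidth_gbs(channels: int, mt_per_s: int, bus_width_bits: int = 64) -> float:
    """Peak DRAM bandwidth in GB/s: channels x transfers/s x bytes per transfer."""
    return channels * mt_per_s * (bus_width_bits / 8) / 1000

# A 16-channel interface at MRDIMM-class DDR5-12800 lands right at the claim:
print(peak_bandwidth_gbs(16, 12800))  # 1638.4 GB/s, i.e. ~1.6 TB/s

# The current 12-channel layout would need roughly DDR5-17000 to match:
print(peak_bandwidth_gbs(12, 17067))  # ~1638 GB/s
```

Either route, more channels or faster multiplexed DIMMs, reaches the target; widening to 16 channels keeps per-DIMM speeds in a range the industry has already demonstrated.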

Robust AI Demand Drives 6% QoQ Growth in Revenue for Top 10 Global IC Design Companies in 1Q25

TrendForce's latest investigations reveal that 1Q25 revenue for the global IC design industry reached US$77.4 billion, marking a 6% QoQ increase and setting a new record high. This growth was fueled by early stocking ahead of new U.S. tariffs on electronics and the ongoing construction of AI data centers around the world, which sustained strong chip demand despite the traditional off-season.

NVIDIA remained the top-ranking IC design company, with Q1 revenue surging to $42.3 billion—up 12% QoQ and 72% YoY—thanks to increasing shipments of its new Blackwell platform. Although its H20 chip is constrained by updated U.S. export controls and is expected to incur losses in Q2, the higher-margin Blackwell is poised to replace the Hopper platform gradually, cushioning the financial impact.

AMD's Answer to AI Advancement: ROCm 7.0 Is Here

In August, AMD will release ROCm 7, its open computing platform for high‑performance computing, machine learning, and scientific applications. This version will support a range of hardware, from Ryzen AI-equipped laptops to Radeon AI Pro desktop cards and server-grade Instinct GPUs, which have just received an update. Before the end of 2025, ROCm 7 will be integrated directly into Linux and Windows, allowing for a seamless installation process with just a few clicks. AMD isn't planning to update ROCm once every few months, either. Instead, developers will receive day-zero fixes and a major update every two weeks, complete with performance enhancements and new features. Additionally, a dedicated Dev Cloud will provide everyone with instant access to the latest AMD hardware for testing and experimentation.

Early benchmarks are encouraging. On one test, an Instinct MI300X running ROCm 7 reached roughly three times the speed recorded with the original ROCm 6 release. Of course, your mileage will vary depending on model choice, quantization, and other factors. This shift follows comments from AMD's Senior Vice President and Chief Software Officer, Andrej Zdravkovic, whom we interviewed last September. He emphasized ROCm's open-source design and the utility of HIPIFY, a tool that converts CUDA code to run on ROCm. This will enable a full-scale ROCm transition, now accelerated by a 3x performance uplift simply by updating the software version. If ROCm 7 lives up to its promise, AMD could finally unlock the potential of its hardware across devices, both big and small, and provide NVIDIA with good competition in the coming years.

Micron HBM Designed into Leading AMD AI Platform

Micron Technology, Inc. today announced the integration of its HBM3E 36 GB 12-high offering into the upcoming AMD Instinct MI350 Series solutions. This collaboration highlights the critical role of power efficiency and performance in training large AI models, delivering high-throughput inference and handling complex HPC workloads such as data processing and computational modeling. Furthermore, it represents another significant milestone in HBM industry leadership for Micron, showcasing its robust execution and the value of its strong customer relationships.

Micron's HBM3E 36 GB 12-high solution brings industry-leading memory technology to AMD Instinct MI350 Series GPU platforms, providing outstanding bandwidth and lower power consumption. The AMD Instinct MI350 Series GPU platforms, built on AMD's advanced CDNA 4 architecture, integrate 288 GB of high-bandwidth HBM3E memory capacity, delivering up to 8 TB/s bandwidth for exceptional throughput. This immense memory capacity allows Instinct MI350 Series GPUs to efficiently support AI models with up to 520 billion parameters on a single GPU. In a full platform configuration, Instinct MI350 Series GPUs offer up to 2.3 TB of HBM3E memory and achieve peak theoretical performance of up to 161 PFLOPS at FP4 precision, with leadership energy efficiency and scalability for high-density AI workloads. This tightly integrated architecture, combined with Micron's power-efficient HBM3E, enables exceptional throughput for large language model training, inference, and scientific simulation tasks—empowering data centers to scale seamlessly while maximizing compute performance per watt. This joint effort between Micron and AMD has enabled faster time to market for AI solutions.
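The per-GPU and platform capacities are consistent with an eight-GPU configuration. A short sketch; the stack and GPU counts below are assumptions based on typical OAM baseboards, as the text does not state them:

```python
gb_per_stack = 36        # HBM3E 12-high stacks, per Micron
stacks_per_gpu = 8       # assumed: 288 GB / 36 GB per stack
gpus_per_platform = 8    # assumed OAM baseboard population

per_gpu_gb = stacks_per_gpu * gb_per_stack            # 288 GB per GPU
platform_tb = gpus_per_platform * per_gpu_gb / 1000   # 2.304, i.e. ~2.3 TB
print(per_gpu_gb, platform_tb)
```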

Giga Computing Joins AMD Advancing AI 2025 to Share Advanced Cooling AI Solutions for AMD Instinct MI355X and MI350X GPUs

Giga Computing, a subsidiary of GIGABYTE and an industry leader in generative AI servers and advanced cooling technologies, today announced participation at AMD Advancing AI 2025 to join conversations with AI thought leaders and to share powerful GIGABYTE servers for AI innovations. This one-day event will be highlighted by a keynote from AMD's Dr. Lisa Su; afterward, attendees will join customer breakout sessions, workshops, and more, including discussions with the Giga Computing team.

At AMD Advancing AI Day, GIGABYTE servers demonstrate powerful solutions for AMD Instinct MI350X and MI355X GPUs. The new server platforms are highly efficient and compute dense, and the GIGABYTE G4L3 series exemplifies this with its support for direct liquid cooling (DLC) technology for the MI355X GPU. In traditional data centers without liquid cooling infrastructure, the GIGABYTE G893 Series provides a reliable air-cooled platform for the MI350X GPU. Together, these platforms showcase GIGABYTE's readiness to meet diverse deployment needs—whether maximizing performance with liquid cooling or ensuring broad compatibility in traditional air-cooled environments. With support for the latest AMD Instinct GPUs, GIGABYTE is driving the next wave of AI innovation.

AMD Previews 432 GB HBM4 Instinct MI400 GPUs and Helios Rack‑Scale AI Solution

At its "Advancing AI 2025" event, AMD rolled out its new Instinct MI350 lineup on the CDNA 4 architecture and teased the upcoming UDNA-based AI accelerator. True to its roughly one‑year refresh rhythm, the company confirmed that the Instinct MI400 series will land in early 2026, showcasing a huge leap in memory, interconnect bandwidth, and raw compute power. Each MI400 card features twelve HBM4 stacks, providing a whopping 432 GB of on-package memory and pushing nearly 19.6 TB/s of memory bandwidth. Those early HBM4 modules deliver approximately 1.6 TB/s each, just shy of the 2 TB/s mark. On the compute front, AMD pegs the MI400 at 20 PetaFLOPS of FP8 throughput and 40 PetaFLOPS of FP4, doubling the sparse-matrix performance of today's MI355X cards. But the real game‑changer is how AMD is scaling those GPUs. Until now, you could connect up to eight cards via Infinity Fabric, and anything beyond that had to go over Ethernet.
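The per-card memory figures hang together arithmetically. A quick check using only the numbers in the paragraph above:

```python
hbm4_stacks = 12          # HBM4 stacks per MI400 card
tb_s_per_stack = 1.6      # approximate per-stack bandwidth

# 12 stacks at ~1.6 TB/s each is 19.2 TB/s, in line with "nearly 19.6 TB/s":
print(hbm4_stacks * tb_s_per_stack)

# FP4 is stated at double the FP8 rate:
fp8_pflops = 20
print(fp8_pflops * 2)     # 40 PFLOPS FP4
```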

The MI400's upgraded fabric link now offers 300 GB/s, nearly twice the speed of the MI350 series, allowing you to build full-rack clusters without relying on slower networks. That upgrade paves the way for "Helios," AMD's fully integrated AI rack solution. It combines upcoming EPYC "Venice" CPUs with MI400 GPUs and trim-to-fit networking gear, offering a turnkey setup for data center operators. AMD didn't shy away from comparisons, either. A Helios rack with 72 MI400 cards delivers approximately 3.1 ExaFLOPS of tensor performance and 31 TB of HBM4 memory. NVIDIA's Vera Rubin system, slated to feature 72 GPUs with 288 GB of memory each, is expected to achieve around 3.6 ExaFLOPS, but AMD's rack surpasses it in both memory bandwidth and capacity. And if that's not enough, whispers of a beefed‑up MI450X IF128 system are already swirling. Due in late 2026, it would directly link 128 GPUs with Infinity Fabric at 1.8 TB/s bidirectional per device, unlocking truly massive rack-scale AI clusters.
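The rack-level numbers mostly follow from the per-GPU figures. A sketch; note that 72 cards at 40 PFLOPS of FP4 works out to about 2.9 ExaFLOPS dense, so the quoted ~3.1 ExaFLOPS presumably reflects a different counting (sparsity or boost behavior), which is our assumption rather than a stated fact:

```python
gpus = 72
hbm4_gb_per_gpu = 432
fp4_pflops_per_gpu = 40

print(gpus * hbm4_gb_per_gpu / 1000)     # 31.104, i.e. the ~31 TB of HBM4 per rack
print(gpus * fp4_pflops_per_gpu / 1000)  # 2.88 EF of dense FP4
```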

AMD Instinct MI350X Series AI GPU Silicon Detailed

AMD today unveiled its Instinct MI350X series AI GPU. Based on the company's latest CDNA 4 compute architecture, the MI350X is designed to compete with NVIDIA's B200 "Blackwell" AI GPU series, with the top-spec Instinct MI355X being compared by AMD to the B200 in its presentation. The chip debuts not just the CDNA 4 architecture, but also the latest ROCm 7 software stack and a hardware ecosystem based on the industry-standard Open Compute Project specification, which combines AMD EPYC "Zen 5" CPUs, Instinct MI350 series GPUs, AMD-Pensando Pollara scale-out NICs supporting Ultra Ethernet, and industry-standard racks and nodes in both air- and liquid-cooled form factors.

The MI350 is a gigantic chiplet-based AI GPU made up of stacked silicon. There are two base tiles called I/O dies (IODs), each built on the 6 nm TSMC N6 process. This tile has microscopic wiring for up to four Accelerator Compute Die (XCD) tiles stacked on top, alongside the 128-channel HBM3E memory controllers, 256 MB of Infinity Cache memory, the Infinity Fabric interfaces, and a PCI-Express 5.0 x16 root complex. The XCDs are built on the 3 nm TSMC N3P foundry node. These contain a 4 MB L2 cache and four shader engines, each with 9 compute units. Each XCD hence has 36 CU, and each IOD seats 144 CU. The two IODs are joined by a 5.5 TB/s bidirectional interconnect that enables full cache coherency between them. The package has a total of 288 CU. Each IOD controls four HBM3E stacks for 144 GB of memory; the package totals 288 GB.
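The chiplet arithmetic above multiplies out cleanly, which is a useful way to keep the hierarchy straight:

```python
shader_engines_per_xcd = 4
cus_per_engine = 9
xcds_per_iod = 4
iods = 2

cus_per_xcd = shader_engines_per_xcd * cus_per_engine  # 36 CU per XCD
cus_per_iod = xcds_per_iod * cus_per_xcd               # 144 CU per IOD
total_cus = iods * cus_per_iod                         # 288 CU per package
print(cus_per_xcd, cus_per_iod, total_cus)

hbm_stacks_per_iod = 4
gb_per_stack = 36                                      # HBM3E 12-high
print(iods * hbm_stacks_per_iod * gb_per_stack)        # 288 GB per package
```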

Compal Optimizes AI Workloads with AMD Instinct MI355X at AMD Advancing AI 2025 and International Supercomputing Conference 2025

As AI computing accelerates toward higher density and greater energy efficiency, Compal Electronics (Compal; Stock Ticker: 2324.TW), a global leader in IT and computing solutions, unveiled its latest high-performance server platform, the SG720-2A/OG720-2A, at both AMD Advancing AI 2025 in the U.S. and the International Supercomputing Conference (ISC) 2025 in Europe. It features the AMD Instinct MI355X GPU architecture and offers both single-phase and two-phase liquid cooling configurations, showcasing Compal's leadership in thermal innovation and system integration. Tailored for next-generation generative AI and large language model (LLM) training, the SG720-2A/OG720-2A delivers exceptional flexibility and scalability for modern data center operations, drawing significant attention across the industry.

With generative AI and LLMs driving increasingly intensive compute demands, enterprises are placing greater emphasis on infrastructure that offers both performance and adaptability. The SG720-2A/OG720-2A emerges as a robust solution, combining high-density GPU integration and flexible liquid cooling options, positioning itself as an ideal platform for next-generation AI training and inference workloads.

Supermicro Delivers Liquid-Cooled and Air-Cooled AI Solutions with AMD Instinct MI350 Series GPUs and Platforms

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing that both liquid-cooled and air-cooled GPU solutions will be available with the new AMD Instinct MI350 series GPUs, optimized for unparalleled performance, maximum scalability, and efficiency. The Supermicro H14 generation of GPU-optimized solutions, featuring dual AMD EPYC 9005 CPUs along with the AMD Instinct MI350 series GPUs, is designed for organizations seeking maximum performance at scale while reducing the total cost of ownership of their AI-driven data centers.

"Supermicro continues to lead the industry with the most experience in delivering high-performance systems designed for AI and HPC applications," said Charles Liang, president and CEO of Supermicro. "Our Data Center Building Block Solutions enable us to quickly deploy end-to-end data center solutions to market, bringing the latest technologies for the most demanding applications. The addition of the new AMD Instinct MI350 series GPUs to our GPU server lineup strengthens and expands our industry-leading AI solutions and gives customers greater choice and better performance as they design and build the next generation of data centers."

AMD Unveils Vision for an Open AI Ecosystem, Detailing New Silicon, Software and Systems at Advancing AI 2025

AMD delivered its comprehensive, end-to-end integrated AI platform vision and introduced its open, scalable rack-scale AI infrastructure built on industry standards at its 2025 Advancing AI event.

AMD and its partners showcased:
  • How they are building the open AI ecosystem with the new AMD Instinct MI350 Series accelerators
  • The continued growth of the AMD ROCm ecosystem
  • The company's powerful, new, open rack-scale designs and roadmap that bring leadership rack-scale AI performance beyond 2027

TSMC Prepares "CoPoS": Next-Gen 310 × 310 mm Packages

As demand for ever-growing AI compute power continues to rise and manufacturing advanced nodes becomes more difficult, packaging is undergoing its golden era of development. Today's advanced accelerators often rely on TSMC's CoWoS modules, which are built on wafer cuts measuring no more than 120 × 150 mm in size. In response to the need for more space, TSMC has unveiled plans for CoPoS, or "Chips on Panel on Substrate," which could expand substrate dimensions to 310 × 310 mm and beyond. By shifting from round wafers to rectangular panels, CoPoS offers more than five times the usable area. This extra surface makes it possible to integrate additional high-bandwidth memory stacks, multiple I/O chiplets and compute dies in a single package. It also brings panel-level packaging (PLP) to the fore. Unlike wafer-level packaging (WLP), PLP assembles components on large, rectangular panels, delivering higher throughput and lower cost per unit. PLP makes high-volume production runs viable and allows faster iteration than WLP.
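The "more than five times" figure follows directly from the stated dimensions:

```python
cowos_mm2 = 120 * 150   # current CoWoS carrier footprint, ~120 x 150 mm
copos_mm2 = 310 * 310   # CoPoS panel, 310 x 310 mm

print(copos_mm2 / cowos_mm2)  # ~5.34x the usable area
```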

TSMC will establish a CoPoS pilot line in 2026 at its Visionchip subsidiary. In 2027, the pilot facility will focus on refining the process to meet partner requirements by the end of the year. Mass production is projected to begin between the end of 2028 and early 2029 at TSMC's Chiayi AP7 campus. That site, chosen for its modern infrastructure and ample space, is also slated to host production of multi-chip modules and System-on-Wafer technologies. NVIDIA is expected to be the launch partner for CoPoS. The company plans to leverage the larger panel area to accommodate up to 12 HBM4 chips alongside several GPU chiplets, offering significant performance gains for AI workloads. At the same time, AMD and Broadcom will continue using TSMC's CoWoS-L and CoWoS-R variants for their high-end products. Beyond simply increasing size, CoPoS and PLP may work in tandem with other emerging advances, such as glass substrates and silicon photonics. If development proceeds as planned, the first CoPoS-enabled devices could reach the market by late 2029.

Pegatron Unveils AI-Optimized Server Innovations at GTC Paris 2025

PEGATRON, a globally recognized Design, Manufacturing, and Service (DMS) provider, is showcasing its latest AI server solutions at GTC Paris 2025. Built on NVIDIA Blackwell architecture, PEGATRON's cutting-edge systems are tailored for AI training, reasoning, and enterprise-scale deployment.

NVIDIA GB300 NVL72
At the forefront is the RA4802-72N2, built on the NVIDIA GB300 NVL72 rack system, featuring 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace CPUs. Designed for AI factories, it boosts output by up to 50X. PEGATRON's in-house developed Coolant Distribution Unit (CDU) delivers 310 kW of cooling capacity with redundant hot-swappable pumps, ensuring performance and reliability for mission-critical workloads.

Lenovo Announces All-New Workstation Solutions and Updates to the ThinkStation Desktop Portfolio

Lenovo today announced at NXT BLD its new portfolio of Workstation Solutions, a series of purpose-built, expertly curated industry solutions that meet and exceed the rigorous performance and workflow requirements of engineers, designers, architects, data scientists, researchers, and creators, so these power users can work smarter, faster, and more cost-effectively. Lenovo also unveiled the latest editions of its newest ThinkStation P2 and P3 desktop workstations, designed to maximize performance and value.

Lenovo Workstation Solutions—Your Workflow, Perfected
Businesses need more than just powerful hardware—they need complete workflow solutions tailored to real-world industry challenges. Developed by Lenovo engineering experts through research and customer engagement to understand workflow bottlenecks and pain points, Lenovo Workstation Solutions deliver easily deployable blueprints—scalable and secure reference architectures powered by the state-of-the-art Lenovo Workstations—featuring superior hardware, software and services.

MSI Powers AI's Next Leap for Enterprises at ISC 2025

MSI, a global leader in high-performance server solutions, is showcasing its enterprise-grade, high-performance server platforms at ISC 2025, taking place June 10-12 at booth #E12. Built on standardized and modular architectures, MSI's AI servers are designed to power next-generation AI and accelerated computing workloads, enabling enterprises to rapidly advance their AI innovations.

"As AI workloads continue to grow and evolve toward inference-driven applications, we're seeing a significant shift in how enterprises approach AI deployment," said Danny Hsu, General Manager of Enterprise Platform Solutions at MSI. "With modular and standards-based architectures, enterprise data centers can now adopt AI technologies more quickly and cost-effectively than ever before. This marks a new era where AI is not only powerful but also increasingly accessible to businesses of all sizes."

Micron Ships HBM4 Samples: 12-Hi 36 GB Modules with 2 TB/s Bandwidth

Micron has achieved a significant advancement with its HBM4 architecture, which stacks 12 DRAM dies (12-Hi) to provide 36 GB of capacity per package. According to company representatives, initial engineering samples are scheduled to ship to key partners in the coming weeks, paving the way for full production in early 2026. The HBM4 design relies on Micron's established 1β ("one-beta") DRAM process node, in production since 2022, while the company prepares to introduce its EUV-enabled 1γ ("one-gamma") node later this year for DDR5. By increasing the interface width from 1,024 to 2,048 bits per stack, each HBM4 chip can achieve a sustained memory bandwidth of 2 TB/s, representing a 20% efficiency improvement over the existing HBM3E standard.
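The stated interface width and bandwidth imply a per-pin data rate. This is a derivation from the article's figures, not a Micron-published number:

```python
interface_bits = 2048            # HBM4 interface width per stack
bandwidth_bytes_per_s = 2.0e12   # 2 TB/s per stack

per_pin_gbps = bandwidth_bytes_per_s * 8 / interface_bits / 1e9
print(per_pin_gbps)  # 7.8125 Gb/s per pin
```

Doubling the width over HBM3E's 1,024 bits means the 2 TB/s target is reachable at per-pin speeds similar to today's HBM3E, which helps explain the claimed efficiency gain.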

NVIDIA and AMD are expected to be early adopters of Micron's HBM4. NVIDIA plans to integrate these memory modules into its upcoming Vera Rubin AI accelerators in the second half of 2026. AMD is anticipated to incorporate HBM4 into its next-generation Instinct MI400 series, with further information to be revealed at the company's Advancing AI 2025 conference. The increased capacity and bandwidth of HBM4 will address growing demands in generative AI, high-performance computing, and other data-intensive applications. Larger stack heights and expanded interface widths enable more efficient data movement, a critical factor in multi-chip configurations and memory-coherent interconnects. As Micron begins mass production of HBM4, major obstacles to overcome will be thermal performance and real-world benchmarks, which will determine how effectively this new memory standard can support the most demanding AI workloads.
Micron HBM4 Memory

El Capitan Retains Top Spot in 65th TOP500 List as Exascale Era Expands

The 65th edition of the TOP500 showed that the El Capitan system retains the No. 1 position. With El Capitan, Frontier, and Aurora, there are now three exascale systems leading the TOP500. All three are installed at Department of Energy (DOE) laboratories in the United States.

The El Capitan system at the Lawrence Livermore National Laboratory, California, remains the No. 1 system on the TOP500. The HPE Cray EX255a system was measured with 1.742 EFlop/s on the HPL benchmark. LLNL now also submitted a measurement for the HPCG benchmark, achieving 17.41 Petaflop/s, which makes the system the new No. 1 on this ranking as well.

AMD EPYC Processors Now Power Nokia Cloud Infrastructure for Next-Gen Telecom Networks

AMD today announced that Nokia has selected 5th Gen AMD EPYC processors to power the Nokia Cloud Platform, bringing leadership performance and performance per watt to next-generation telecom infrastructure. "Telecom operators are looking for infrastructure solutions that combine performance, scalability, and power efficiency to manage the growing complexity and scale of 5G networks," said Dan McNamara, senior vice president and general manager, Server Business, AMD. "Working together with Nokia, we're using the leadership performance and energy efficiency of the 5th Gen AMD EPYC processors to help our customers build and operate high-performance, and efficient networks."

"This expanded collaboration between Nokia and AMD brings a multitude of benefits and underscores Nokia's commitment to innovation through diverse chip partnerships in 5G network infrastructure. The new 5th Gen AMD EPYC processors offer high performance and impressive energy efficiency, enabling Nokia to meet the demanding needs of its 5G customers while contributing to the industry's sustainability goals," said Kal De, senior vice president, Product and Engineering, Cloud and Network Services, Nokia.

Potential Next-gen AMD EPYC "Venice" CPU Identifier Turns Up in Linux Kernel Update

InstLatX64 has spent a significant chunk of time investigating AMD web presences; last month they unearthed various upcoming "Zen 5" processor families. This morning, a couple of mysterious CPU identifiers—"B50F00, B90F00, BA0F00, and BC0F00"—were highlighted in a social media post. According to screen-captured information, Team Red's Linux team seems to be patching in support for "Zen 6" technologies—InstLatX64 believes that the "B50F00" ID and internal "Weisshorn" codename indicate a successor to AMD's current-gen EPYC "Turin" server-grade processor series (known internally as "Breithorn"). Earlier in the month, a set of AIDA64 Beta update release notes mentioned preliminary support for "next-gen AMD desktop, server and mobile processors."

In a mid-April (2025) announcement, Dr. Lisa Su and colleagues revealed that their: "next-generation AMD EPYC processor, codenamed 'Venice,' is the first HPC product in the industry to be taped out and brought up on the TSMC advanced 2 nm (N2) process technology." According to an official "data center CPU" roadmap, "Venice" is on track to launch in 2026. Last month, details of "Venice's" supposed mixed configuration of "Zen 6" and "Zen 6C" cores—plus other technical tidbits—were disclosed via a leak. InstLatX64 and other watchdogs reckon that some of the latest identifiers refer to forthcoming "Venice-Dense" designs and unannounced Instinct accelerators.

MSI Afterburner Dev Working on Support for Radeon RX 9000 Series GPUs

The popular MSI Afterburner overclocking and hardware monitoring program will be updated in the near future with support for AMD RDNA 4 hardware. Despite the Taiwanese manufacturer's semi-recent shift away from modern Team Red gaming desktop/discrete graphics solutions, the Afterburner suite's developer has committed to getting official support—at least for current flagships—up and running in the next version. Fortunately, MSI and AMD continue to collaborate on various motherboard models and Radeon iGPU-powered devices.

Last week, Unwinder (aka Alexey Nicolaychuk) outlined early details on the Guru3D discussion board: "as you know, due to some unknown reason MSI decided to skip RDNA 4 and focus on manufacturing NVIDIA GPU-based solutions only this (time) round. Meaning that I get no MSI RDNA 4 hardware samples for development, so there is no RX 9070 XT support in MSI Afterburner, yet. But I decided to close this gap myself, and grabbed a third party hardware vendor's 9070 XT special to add unofficial support for it. So next beta with RDNA 4 support is around the corner, and MSI Afterburner (AB) is a bit PowerColor AB now." As seen in an attached photo, Unwinder has picked up a barebones Reaper Radeon RX 9070 XT 16 GB model.

AMD Adds a Pair of New Ryzen Z2 SoCs to its Lineup of Handheld Gaming Chips

AMD's Z2 series of processors for handheld gaming devices has been expanded with a pair of new chips, namely the Ryzen AI Z2 Extreme and the Ryzen Z2 A. From AMD's naming scheme, one would assume that the two are quite similar, but if you've kept track of AMD's Z2 product lineup, you're most likely already aware that there are some major differences between the three older SKUs; this time around, we also get a further change at the low end. The new top-of-the-range chip, the Ryzen AI Z2 Extreme, appears to be largely the same SoC as the older Ryzen Z2 Extreme, with the addition of a 50 TOPS NPU for AI tasks that appears to be shared with many of AMD's mobile SoCs.

However, the new low-end entry, the Ryzen Z2 A, appears to have more in common with the Steam Deck SoC than with any of the other Z2 chips. It sports a quad-core, eight-thread Zen 2 CPU, an RDNA 2-based GPU with a mere eight CUs, and support for LPDDR5-6400 memory. On the plus side, it has a TDP range of 6-20 W, suggesting better battery life, assuming devices based on it get a similar-sized battery to handhelds built around the higher-end Z2 SoCs. ASUS is using both of these chips in its two new ROG Ally handheld gaming devices, and Lenovo is expected to follow shortly with its own handhelds.

ASUS Announces the New ROG Xbox Ally and ROG Xbox Ally X Gaming Handhelds

ASUS Republic of Gamers (ROG) is proud to announce an all-new series of Ally handhelds built from the ground up with improved ergonomics and a seamless player-first user experience. Developed in partnership with the incredible team at Xbox, the new ROG Xbox Ally and ROG Xbox Ally X offer best-in-class ergonomics and a full-screen Xbox experience that marries the best of Xbox and PC gaming in one cohesive package.

"We wanted to take our handheld to the next level, but we could not do it alone," said Shawn Yen, Head of the Consumer product team at ASUS. "This revolutionary partnership with Microsoft allowed us to forge a brand new device with ROG muscle and the soul of Xbox." The ROG Xbox Ally sports an AMD Ryzen Z2 A Processor with incredible power efficiency, while the ROG Xbox Ally X offers the new AMD Ryzen AI Z2 Extreme Processor for next-level gaming performance. Both launch holiday 2025 in select markets, with additional markets to follow.

Kuroutoshikou Reveals Familiar Dual-fan Radeon RX 9060 XT Card Design

Kuroutoshikou has updated its custom AMD graphics card portfolio with brand-new Radeon RX 9060 XT 16 GB and 8 GB options. As covered in the recent past, this Japanese brand seems to source card designs from better-known manufacturers—namely PowerColor/PC Partner and GALAX. Their latest offerings are unstickered black Reaper cards, albeit not in overclocked form—Kuroutoshikou has opted for Team Red's reference settings. A stamped PowerColor logo is still present on the largely featureless design's I/O shield.

When looking through Kuroutoshikou's catalog, several familiar current and past-gen unbadged Hellhound, Fighter and Low Profile models are present and accounted for. A minimalist aesthetic extends to retail packaging; the brand's tasteful signature box sports a mostly brushed gold-effect theme. Their Blade and Soul NEO crossover signalled a break from the norm—boringly, character illustrations were not applied to shroud or backplate pieces. Unsurprisingly, Kuroutoshikou products are exclusive to the Japanese PC hardware market. Fortunately, comprehensive distribution of nigh-identical PowerColor IPs is in effect across most of the globe.

NVIDIA Grabs Market Share, AMD Loses Ground, and Intel Disappears in Latest dGPU Update

Within the discrete graphics card sector, NVIDIA achieved a remarkable 92% share of the add-in board (AIB) GPU market in the first quarter of 2025, according to data released by Jon Peddie Research (JPR). This represents an 8.5-point increase over NVIDIA's previous position. By contrast, AMD's share contracted to just 8%, down 7.3 points, while Intel's presence effectively disappeared, falling to 0% after losing 1.2 points. JPR reported that AIB shipments reached 9.2 million units during Q1 2025 despite desktop CPU shipments declining to 17.8 million units. The firm projects that the AIB market will face a compound annual decline of 10.3% from 2024 to 2028, although the installed base of discrete GPUs is expected to grow to 130 million units by the end of the forecast period. By 2028, an estimated 86% of desktop PCs are expected to feature a dedicated graphics card.

NVIDIA's success this quarter can be attributed to its launch of the RTX 50 series GPUs. In contrast, AMD's RDNA 4 GPUs were released significantly later in Q1. Additionally, Intel's Battlemage Arc GPUs, which were launched in Q4 2024, have struggled to gain traction, likely due to limited availability and low demand in the mainstream market. The broader PC GPU market, which includes integrated solutions, contracted by 12% from the previous quarter, with a total of 68.8 million units shipped. Desktop graphics unit sales declined by 16%, while notebook GPUs decreased by 10%. Overall, NVIDIA's total GPU share rose by 3.6 points, AMD's dipped by 1.6 points, and Intel's declined by 2.1 points. Meanwhile, data center GPUs bucked the overall downward trend, rising by 9.6% as enterprises continue to invest in artificial intelligence applications. On the CPU side, notebook processors accounted for 71% of shipments, with desktop CPUs comprising the remaining 29%.

Coracer's GPE-01 Graphene Pad for AM5 Achieves 130 W/m·K Conductivity

Coracer, a lesser-known Chinese accessories manufacturer, recently introduced a version of its GPE-01 graphene thermal pad specifically designed for AMD's AM5 processors. Until now, this pad has been compatible only with Intel's LGA 1851 and LGA 1700 sockets. The new AM5 model measures 32×32 mm, allowing it to cover the entire IHS without hanging over the edges. Thermal paste has long been the go-to option for filling the microscopic gap between CPU and cooler, but in recent years, enthusiasts have explored alternatives like liquid metal and pre-cut thermal pads. Graphene-based products have gained traction because graphene conducts heat exceptionally well. Coracer claims its GPE-01 combines graphene with silicone to achieve a thermal conductivity of 130 W/m·K, which is about twice that of popular liquid metal compounds. An insulating layer around the graphene prevents any risk of shorting out the processor's circuits.

Coracer also asserts that the GPE-01 can maintain performance for up to ten years. Regular thermal paste tends to dry out and degrade over time, requiring reapplication every few years. A graphene pad like this could eliminate that chore for the life of the system, unless you keep the same machine for over a decade. Interestingly, Coracer has almost no online footprint. Segotep, another Chinese brand, introduced a GPE-01 pad for Intel CPUs late last year, so it's unclear whether Coracer is a spin-off or if Segotep licensed the design. As of now, there's no word on pricing or availability for the AM5 version. The Intel-focused GPE-01 sells for around $15 on Taobao, which is in line with other premium pads. Without independent reviews, it's hard to know if Coracer's conductivity claim holds up in real-world testing, but graphene's reputation does offer some reason for cautious optimism. We tested a similar product, Thermal Grizzly's KryoSheet, with a conductivity of 7.5 W/m·K, so hopes are high for the GPE-01.
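To put the conductivity numbers in perspective, here is a rough 1-D conduction estimate. It treats the quoted 130 W/m·K as an isotropic bulk value (optimistic for layered graphene, which conducts far better in-plane than through-plane) and assumes a hypothetical 0.2 mm compressed pad thickness, which Coracer has not published:

```python
def pad_resistance_k_per_w(thickness_m: float, conductivity_w_mk: float, area_m2: float) -> float:
    """1-D conduction resistance of a pad: R = t / (k * A), in K/W."""
    return thickness_m / (conductivity_w_mk * area_m2)

area = 0.032 * 0.032   # the 32 x 32 mm AM5 pad
t = 0.0002             # assumed 0.2 mm compressed thickness

print(pad_resistance_k_per_w(t, 130.0, area))  # ~0.0015 K/W at the GPE-01 claim
print(pad_resistance_k_per_w(t, 7.5, area))    # ~0.026 K/W at KryoSheet-class conductivity
```

In practice, contact resistance at the two interfaces dominates at these thicknesses, so the real-world temperature difference between the two pads would be much smaller than the raw 17x conductivity ratio suggests.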