News Posts matching #AI


UGREEN Showcases the New AI-Powered NASync iDX Series at NAB Show 2025

From April 6-9, UGREEN, a global leader in consumer electronics and charging technology, is showcasing its innovative NASync series at the NAB Show in Las Vegas. The UGREEN NASync iDX6011 and iDX6011 Pro have been the highlights of the display at Booth SL9210 in the Las Vegas Convention Center. These latest UGREEN NASync iDX models revolutionize data management and security for content creators through advanced AI technology, setting a new standard as the world's first AI-powered NAS.

UGREEN NASync is a series of network-attached storage devices tailored for personal, home, or business use. In March 2024, UGREEN launched a 44-day crowdfunding campaign on Kickstarter for the NASync DXP series, successfully raising over $6.6 million and achieving the No. 1 spot in the NAS category. This remarkable support highlights the strong demand for advanced storage solutions.

Tokyo Electron & IBM Renew Collaboration for Advanced Semiconductor Technology

This week, IBM and Tokyo Electron (TEL) announced an extension of their agreement for the joint research and development of advanced semiconductor technologies. The new 5-year agreement will focus on the continued advancement of technology for next-generation semiconductor nodes and architectures to power the age of generative AI. This agreement builds on a more than two-decade partnership between IBM and TEL for joint research and development. Previously, the two companies have achieved several breakthroughs, including the development of a new laser debonding process for producing 300 mm silicon chip wafers for 3D chip stacking technology.

Now, bringing together IBM's expertise in semiconductor process integration and TEL's leading-edge equipment, they will explore technology for smaller nodes and chiplet architectures to achieve the performance and energy efficiency requirements for the future of generative AI. "The work IBM and TEL have done together over the last 20 years has helped to push the semiconductor technology innovation to provide many generations of chip performance and energy efficiency to the semiconductor industry," said Mukesh Khare, GM of IBM Semiconductors and VP of Hybrid Cloud, IBM. "We are thrilled to be continuing our work together at this critical time to accelerate chip innovations that can fuel the era of generative AI."

MangoBoost Achieves Record-Breaking MLPerf Inference v5.0 Results with AMD Instinct MI300X

MangoBoost, a provider of cutting-edge system solutions designed to maximize AI data center efficiency, has set a new industry benchmark with its latest MLPerf Inference v5.0 submission. The company's Mango LLMBoost AI Enterprise MLOps software has demonstrated unparalleled performance on AMD Instinct MI300X GPUs, delivering the highest-ever recorded results for Llama2-70B in the offline inference category. This milestone marks the first-ever multi-node MLPerf inference result on AMD Instinct MI300X GPUs. By harnessing the power of 32 MI300X GPUs across four server nodes, Mango LLMBoost has surpassed all previous MLPerf inference results, including those from competitors using NVIDIA H100 GPUs.

Unmatched Performance and Cost Efficiency
MangoBoost's MLPerf submission demonstrates a 24% performance advantage over the best-published MLPerf result from Juniper Networks utilizing 32 NVIDIA H100 GPUs. Mango LLMBoost achieved 103,182 tokens per second (TPS) in the offline scenario and 93,039 TPS in the server scenario on AMD MI300X GPUs, outperforming the previous best result of 82,749 TPS on NVIDIA H100 GPUs. In addition to superior performance, Mango LLMBoost + MI300X offers significant cost advantages. With AMD MI300X GPUs priced between $15,000 and $17,000—compared to the $32,000-$40,000 cost of NVIDIA H100 GPUs (source: Tom's Hardware—H100 vs. MI300X Pricing)—Mango LLMBoost delivers up to 62% cost savings while maintaining industry-leading inference throughput.
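As a quick sanity check, the headline speedup and savings figures can be reproduced from the quoted values alone. The sketch below is a back-of-the-envelope calculation using the GPU price ranges cited above as assumptions, not confirmed market prices:

```python
# Back-of-the-envelope check of the MangoBoost claims, using only the
# figures quoted above. GPU prices are the cited ranges, not market data.

mi300x_tps = 103_182  # Mango LLMBoost, 32x MI300X, offline scenario
h100_tps = 82_749     # previous best published result, 32x NVIDIA H100

advantage = mi300x_tps / h100_tps - 1
print(f"Throughput advantage: {advantage:.1%}")  # ~24.7%, matching the ~24% claim

# GPU cost for a 32-accelerator cluster, list prices only
mi300x_cluster = 32 * 16_000  # midpoint of the $15k-$17k range
h100_cluster = 32 * 36_000    # midpoint of the $32k-$40k range

print(f"GPU cost savings at midpoints: {1 - mi300x_cluster / h100_cluster:.0%}")  # ~56%
print(f"Best case ($15k vs $40k): {1 - 15_000 / 40_000:.0%}")  # 62%, the 'up to' figure
```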

Tenstorrent Launches Blackhole Developer Products at Tenstorrent Dev Day

Tenstorrent launched the next-generation Blackhole chip family today at their DevDay event in San Francisco. Featuring all-new RISC-V cores, Blackhole is built to handle massive AI workloads efficiently and offers an infinitely scalable solution.

Blackhole products are now available for order on tenstorrent.com:
  • Blackhole p100, powered by one processor without Ethernet, active-cooled: available for $999
  • Blackhole p150, powered by one processor with Ethernet, and available in passive-, active-, and liquid-cooled variants: available for $1,299
  • TT-QuietBox, a liquid-cooled desktop workstation powered by 4 Blackhole processors: available for $11,999

MediaTek Introduces Kompanio Ultra SoC, Touted to Redefine AI Performance for Chromebook Plus

MediaTek has introduced the Kompanio Ultra, the latest milestone in AI-powered, high-performance Chromebooks. Leveraging MediaTek's proven expertise in flagship innovation, this powerful new platform brings fantastic on-device AI capabilities, superior computing performance, and industry-leading power efficiency to the newest Chromebook Plus devices. "The Kompanio Ultra underscores our commitment to delivering groundbreaking computing performance and efficiency that MediaTek has shown as a leader in the mobile compute space for many years," said Adam King, Vice President & General Manager of Computing and Multimedia Business at MediaTek. "We worked closely with Google to ensure the newest Chromebook Plus devices enjoy next-generation on-device AI capabilities, superior performance per watt, and immersive multimedia."

The Kompanio Ultra is MediaTek's most powerful Chromebook processor to date, integrating 50 TOPS of AI processing power to enable on-device generative AI experiences. With MediaTek's 8th-generation NPU, users can expect real-time task automation, personalized computing, and seamless AI-enhanced workflows—with local processing for enhanced speed, security, and efficiency, and support for AI workloads without an internet connection. Built on TSMC's cutting-edge 3 nm process, the Kompanio Ultra features an all-big-core CPU architecture with an Arm Cortex-X925 processor clocked at up to 3.62 GHz, delivering industry-leading single- and multi-threaded performance. Whether handling intensive applications like video editing, content creation, or high-resolution gaming, this processor ensures smooth, lag-free performance with unmatched multitasking capabilities.

Vietnamese Store Assembles AI Server, Uses Seven GIGABYTE RTX 5090 GAMING OC Cards

I_Leak_VN, a Vietnamese PC hardware influencer/leaker, reckons that the region's first GeForce RTX 5090 GPU-based "AI/mining/scalper" rig has just emerged. Earlier today, their social media post provided an informative look at a local shop's "Training AI: X7 RTX 5090 32G" build. Apparently, the retail outlet has assembled this monstrous setup for an important customer. A Nguyễn Công PC employee sent personal thanks to GIGABYTE Vietnam for the supply of seven GeForce RTX 5090 GAMING OC graphics cards. As showcased in uploaded photos (see below), these highly-prized units were placed neatly in a row—as part of an airy open-plan system. After inspecting the store's heavily watermarked shots, Western media outlets have (visually) compared the "Training AI: X7" rig to crypto mining builds of a certain vintage.

Tom's Hardware spotted multiple Super Flower Leadex 2000 W PSUs—providing sufficient juice to a system that "can easily be valued at over $30,000, considering these GPUs go for $3500-$4000 on a good day." Wccftech's report extended coverage to Nguyễn Công PC's other AI offerings, mainly "more traditional" PC builds that utilize dual MSI GeForce RTX 5090 card setups—a "dual rig" likely costs ~$10,000. The shop's selection of gaming-grade hardware is not too surprising, given the performance prowess of NVIDIA's GB202-300-A1 GPU variant. Naturally, Team Green's cutting-edge enterprise hardware unlocks the full potential of "Blackwell" GPU designs—but the company can charge sky-high prices for this level of equipment. Going back to early 2024, Tiny Corp. started to make noise about its "tinybox" AI platform—consisting of multiple XFX Speedster MERC310 RX 7900 XTX cards, rather than AMD's freshly launched Instinct MI300X accelerator.
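For context, the quoted valuation follows from straightforward arithmetic on the cited street prices; a minimal sketch, assuming the per-card figures from the Tom's Hardware quote:

```python
# GPUs alone, at the street prices quoted above
gpu_low, gpu_high = 3_500, 4_000
gpus = 7
print(f"GPUs: ${gpus * gpu_low:,} - ${gpus * gpu_high:,}")  # $24,500 - $28,000

# Multiple Super Flower Leadex 2000 W PSUs, plus CPU, motherboard, RAM,
# storage, and the open frame push the total comfortably past $30,000.
```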

China's RiVAI Technologies Introduces "Lingyu" RISC-V Server Processor

RiVAI Technologies, a Shenzhen-based semiconductor firm founded in 2018, has unveiled what it describes as the first fully domestic high-performance RISC-V server processor designed for compute-intensive applications. The Lingyu CPU features 32 general-purpose computing cores working alongside eight specialized intelligent computing cores (LPUs) in a heterogeneous "one-core, dual architecture" design. It aims for performance comparable to current x86 server processors, with the chip implementing optimized data pathways and enhanced pipelining mechanisms to maintain high clock frequencies under computational load. The architecture specifically targets maximum throughput for parallel processing workloads typical in data center environments. The chip aims to serve HPC clusters, all-flash storage arrays, and AI large language model inference operations.

Since its inception, RiVAI has accumulated 37 RISC-V-related patents and established partnerships with over 50 industry collaborators, including academic research relationships. Professor David Patterson, a RISC-V architecture pioneer, provides technical guidance to the company's development efforts. The processor's dual-architecture approach enables dynamic workload distribution between conventional processing tasks and specialized computational operations, potentially improving performance-per-watt metrics compared to traditional single-architecture designs. The Lingyu launch significantly advances China's semiconductor self-sufficiency strategy, potentially accelerating RISC-V ecosystem development while providing Chinese data centers with domestically engineered high-performance computing solutions, ultimately bypassing x86 and Arm solutions.

Official: Nintendo Switch 2 Leveled Up With NVIDIA "Custom Processor" & AI-Powered Tech

The Nintendo Switch 2, unveiled April 2, takes performance to the next level, powered by a custom NVIDIA processor featuring an NVIDIA GPU with dedicated RT Cores and Tensor Cores for stunning visuals and AI-driven enhancements. With 1,000 engineer-years of effort across every element—from system and chip design to a custom GPU, APIs and world-class development tools—the Nintendo Switch 2 brings major upgrades. The new console enables up to 4K gaming in TV mode and up to 120 FPS at 1080p in handheld mode. Nintendo Switch 2 also supports HDR and AI upscaling to sharpen visuals and smooth gameplay.

AI and Ray Tracing for Next-Level Visuals
The new RT Cores bring real-time ray tracing, delivering lifelike lighting, reflections and shadows for more immersive worlds. Tensor Cores power AI-driven features like Deep Learning Super Sampling (DLSS), boosting resolution for sharper details without sacrificing image quality. Tensor Cores also enable AI-powered face tracking and background removal in video chat use cases, enhancing social gaming and streaming. With millions of players worldwide, the Nintendo Switch has become a gaming powerhouse and home to Nintendo's storied franchises. Its hybrid design redefined console gaming, bridging TV and handheld play.

AMD Instinct GPUs are Ready to Take on Today's Most Demanding AI Models

Customers evaluating AI infrastructure today rely on a combination of industry-standard benchmarks and real-world model performance metrics—such as those from Llama 3.1 405B, DeepSeek-R1, and other leading open-source models—to guide their GPU purchase decisions. At AMD, we believe that delivering value across both dimensions is essential to driving broader AI adoption and real-world deployment at scale. That's why we take a holistic approach—optimizing performance for rigorous industry benchmarks like MLPerf while also enabling Day 0 support and rapid tuning for the models most widely used in production by our customers.

This strategy helps ensure AMD Instinct GPUs deliver not only strong, standardized performance, but also high-throughput, scalable AI inferencing across the latest generative and language models used by customers. We will explore how AMD's continued investment in benchmarking, open model enablement, software and ecosystem tools helps unlock greater value for customers—from MLPerf Inference 5.0 results to Llama 3.1 405B and DeepSeek-R1 performance, ROCm software advances, and beyond.

Framework Laptop 12 Pre-orders Open Next Week

At the end of our launch livestream last month, we teased Framework Laptop 12, a colorful little laptop that is the ultimate expression of our product philosophy. We received a ton of interest around this product, and we have a lot more to share on Framework Laptop 12… in exactly a week! We're opening pre-orders on April 9th at 8am Pacific. That's also when we'll share the full specifications, pricing, and shipment timing. We have a hunch that the early batches are going to go very quickly, so you may want to set up your Framework account ahead of time. In the meantime, you can check out the hands-on video we just posted on our YouTube channel where we go deeper on the design decisions we made.

We know that a lot of you are eager for updates on Framework Laptop 13 and Framework Desktop too. We're happy to share that we've started manufacturing ramp on the new Ryzen AI 300 Series-powered Framework Laptop 13, along with the new translucent Bezels and Expansion Cards. We expect first shipments to go out and press reviews to go live in mid-April. We have a lot of manufacturing capacity ready to work through the pre-order batches quickly.

MLCommons Releases New MLPerf Inference v5.0 Benchmark Results

Today, MLCommons announced new results for its industry-standard MLPerf Inference v5.0 benchmark suite, which delivers machine learning (ML) system performance benchmarking in an architecture-neutral, representative, and reproducible manner. The results highlight that the AI community is focusing much of its attention and efforts on generative AI scenarios, and that the combination of recent hardware and software advances optimized for generative AI has led to dramatic performance improvements over the past year.

The MLPerf Inference benchmark suite, which encompasses both datacenter and edge systems, is designed to measure how quickly systems can run AI and ML models across a variety of workloads. The open-source and peer-reviewed benchmark suite creates a level playing field for competition that drives innovation, performance, and energy efficiency for the entire industry. It also provides critical technical information for customers who are procuring and tuning AI systems. This round of MLPerf Inference results also includes tests for four new benchmarks: Llama 3.1 405B, Llama 2 70B Interactive for low-latency applications, RGAT, and Automotive PointPainting for 3D object detection.

NVIDIA Blackwell Takes Pole Position in Latest MLPerf Inference Results

In the latest MLPerf Inference V5.0 benchmarks, which reflect some of the most challenging inference scenarios, the NVIDIA Blackwell platform set records and marked NVIDIA's first MLPerf submission using the NVIDIA GB200 NVL72 system, a rack-scale solution designed for AI reasoning. Delivering on the promise of cutting-edge AI takes a new kind of compute infrastructure, called AI factories. Unlike traditional data centers, AI factories do more than store and process data: they manufacture intelligence at scale by transforming raw data into real-time insights. The goal for AI factories is simple: deliver accurate answers to queries quickly, at the lowest cost and to as many users as possible.

The complexity of pulling this off is significant and takes place behind the scenes. As AI models grow to billions and trillions of parameters to deliver smarter replies, the compute required to generate each token increases. This requirement reduces the number of tokens that an AI factory can generate and increases cost per token. Keeping inference throughput high and cost per token low requires rapid innovation across every layer of the technology stack, spanning silicon, network systems and software.
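The relationship described here can be made concrete with a simple cost model. The sketch below uses hypothetical dollar rates and throughput figures purely for illustration; none of these numbers come from NVIDIA or MLPerf:

```python
# Illustrative model of the token-economics trade-off described above.
# The hourly rate and throughput values are hypothetical placeholders.

def cost_per_million_tokens(gpu_hour_rate: float, tokens_per_second: float) -> float:
    """Dollar cost to generate one million tokens at a fixed infrastructure price."""
    return gpu_hour_rate / (tokens_per_second * 3600) * 1_000_000

# As models grow and compute per token rises, throughput falls at a fixed
# hourly cost, and cost per token climbs proportionally:
for tps in (10_000, 1_000, 100):
    print(f"{tps:>6} tok/s -> ${cost_per_million_tokens(2.50, tps):,.2f} per 1M tokens")
# 10,000 tok/s -> $0.07; 1,000 tok/s -> $0.69; 100 tok/s -> $6.94
```

This is why the stack-wide optimizations mentioned above target tokens per second directly: at fixed infrastructure spend, every gain in throughput is an equal reduction in cost per token.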

Qualcomm Announces Acquisition of VinAI Division, Aims to Expand GenAI Capabilities

Qualcomm today announced the acquisition of MovianAI Artificial Intelligence (AI) Application and Research JSC (MovianAI), the former generative AI division of VinAI Application and Research JSC (VinAI) and a part of the Vingroup ecosystem. As a leading AI research company, VinAI is renowned for its expertise in generative AI, machine learning, computer vision, and natural language processing. Combining VinAI's advanced generative AI research and development (R&D) capabilities with Qualcomm's decades of extensive R&D will expand Qualcomm's ability to drive extraordinary inventions.

For more than 20 years, Qualcomm has been working closely with the Vietnamese technology ecosystem to create and deliver innovative solutions. Qualcomm's innovations in the areas of 5G, AI, IoT and automotive have helped to fuel the extraordinary growth and success of Vietnam's information and communication technology (ICT) industry and assisted the entry of Vietnamese companies into the global marketplace.

AAEON Announces NV8600-Nano AI Developer Kit

AAEON's UP brand, a leading provider of professional developer boards, has announced the release of the NV8600-Nano AI Developer Kit, available exclusively on the company's UP Shop. Comprised of an NVIDIA Jetson Orin Nano module with Super Mode support, a custom AAEON carrier board, and a preinstalled Jetson Platform Services software package, the NV8600-Nano AI Developer Kit reflects a notable expansion for UP, which since its inception has been renowned for its ability to leverage the newest Intel technologies across a range of industrial maker boards in standardized form factors.

While not AAEON's first developer kit to provide preinstalled software tools, the NV8600-Nano AI Developer Kit is the company's first to do so on an NVIDIA-accelerated platform. This new offering will provide AAEON customers an array of software tools dedicated to AI model optimization, distinguishing the kit from AAEON's broader NVIDIA-based product catalog, which typically focuses on industrial, deployment-ready system-on-modules.

Quantum Machines Announces NVIDIA DGX Quantum Early Access Program

Quantum Machines (QM), the leading provider of advanced quantum control solutions, has recently announced the NVIDIA DGX Quantum Early Customer Program, with a cohort of six leading research groups and quantum computer builders. NVIDIA DGX Quantum, a reference architecture jointly developed by NVIDIA and QM, is the first tightly integrated quantum-classical computing solution, designed to unlock new frontiers in quantum computing research and development. As quantum computers scale, their reliance on classical resources for essential operations, such as quantum error correction (QEC) and parameter drift compensation, grows exponentially. NVIDIA DGX Quantum provides access to the classical acceleration needed to support this progress, advancing the path toward practical quantum supercomputers.

NVIDIA DGX Quantum leverages OPX1000, the best-in-class, modular high-density hybrid control platform, seamlessly interfacing with NVIDIA GH200 Grace Hopper Superchips. This solution brings accelerated computing into the heart of the quantum computing stack for the first time, achieving an ultra-low round-trip latency of less than 4 µs between quantum control and AI supercomputers, faster than any other approach. The NVIDIA DGX Quantum Early Customer Program is now underway, with selected leading academic institutions, national labs, and commercial quantum computer builders participating. These include the Engineering Quantum Systems group (equs.mit.edu) led by MIT Professor William D. Oliver, the Israeli Quantum Computing Center (IQCC), quantum hardware developer Diraq, the Quantum Circuit group (led by École Normale Supérieure de Lyon Professor Benjamin Huard), and more.

Lightmatter Unveils Passage M1000 Photonic Superchip

Lightmatter, the leader in photonic supercomputing, today announced Passage M1000, a groundbreaking 3D Photonic Superchip designed for next-generation XPUs and switches. The Passage M1000 enables a record-breaking 114 Tbps total optical bandwidth for the most demanding AI infrastructure applications. At more than 4,000 square millimeters, the M1000 reference platform is a multi-reticle active photonic interposer that enables the world's largest die complexes in a 3D package, providing connectivity to thousands of GPUs in a single domain.

In existing chip designs, interconnects for processors, memory, and I/O chiplets are bandwidth limited because electrical input/output (I/O) connections are restricted to the edges of these chips. The Passage M1000 overcomes this limitation by unleashing electro-optical I/O virtually anywhere on its surface for the die complex stacked on top. Pervasive interposer connectivity is enabled by an extensive and reconfigurable waveguide network that carries high-bandwidth WDM optical signals throughout the M1000. With fully integrated fiber attachment supporting an unprecedented 256 fibers, the M1000 delivers an order of magnitude higher bandwidth in a smaller package size compared to conventional Co-Packaged Optics (CPO) and similar offerings.
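Dividing the headline figures gives a sense of per-fiber throughput; a rough calculation, assuming bandwidth is spread evenly across all attached fibers (a simplification, since each fiber carries multiple WDM wavelengths):

```python
total_tbps = 114   # quoted total optical bandwidth
fibers = 256       # quoted fiber attachment count

per_fiber_gbps = total_tbps * 1_000 / fibers
print(f"~{per_fiber_gbps:.0f} Gb/s per fiber")  # ~445 Gb/s
```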

IBM & Intel Announce the Availability of Gaudi 3 AI Accelerators on IBM Cloud

Yesterday, at Intel Vision 2025, IBM announced the availability of Intel Gaudi 3 AI accelerators on IBM Cloud. This offering delivers Intel Gaudi 3 in a public cloud environment for production workloads. Through this collaboration, IBM Cloud aims to help clients more cost-effectively scale and deploy enterprise AI. Intel Gaudi 3 AI accelerators on IBM Cloud are currently available in Frankfurt (eu-de) and Washington, D.C. (us-east) IBM Cloud regions, with future availability for the Dallas (us-south) IBM Cloud region in Q2 2025.

IBM's AI in Action 2024 report found that 67% of surveyed leaders reported revenue increases of 25% or more due to including AI in business operations. Although AI is demonstrating promising revenue increases, enterprises are also balancing the costs associated with the infrastructure needed to drive performance. By leveraging Intel's Gaudi 3 on IBM Cloud, the two companies are aiming to help clients more cost effectively test, innovate and deploy generative AI solutions. "By bringing Intel Gaudi 3 AI accelerators to IBM Cloud, we're enabling businesses to help scale generative AI workloads with optimized performance for inferencing and fine-tuning. This collaboration underscores our shared commitment to making AI more accessible and cost-effective for enterprises worldwide," said Saurabh Kulkarni, Vice President, Datacenter AI Strategy and Product Management, Intel.

Micron Announces Memory Price Increases for 2025-2026 Amid Supply Constraints

In a letter to customers, Micron has announced upcoming memory price increases extending through 2025 and 2026, citing persistent supply constraints coupled with accelerating demand across its product portfolio. The manufacturer points to significant demand growth in DRAM, NAND flash, and high-bandwidth memory (HBM) segments as key drivers behind the pricing strategy. The memory market is rebounding from a prolonged oversupply cycle that previously depressed revenues industry-wide. Strategic production capacity reductions implemented by major suppliers have contributed to price stabilization and subsequent increases over the past twelve months. This pricing trajectory is expected to continue as data center operators, AI deployments, and consumer electronics manufacturers compete for limited memory allocation.

In communications to channel partners, Micron emphasized AI and HPC requirements as critical factors necessitating the price adjustments. The company has requested detailed forecast submissions from partners to optimize production planning and supply chain stability during the constrained market period. With its pricing announcement, Micron disclosed a $7 billion investment in a Singapore-based HBM assembly facility. The plant will begin operations in 2026 and will focus on HBM3E, HBM4, and HBM4E production—advanced memory technologies essential for next-generation AI accelerators and high-performance computing applications from NVIDIA, AMD, Intel, and other companies. The price increases could have cascading effects across the AI and GPU sector, potentially raising costs for products ranging from consumer gaming systems to enterprise data infrastructure. We are monitoring how these adjustments will impact hardware refresh cycles and technology adoption rates as manufacturers pass incremental costs to end customers.

Microsoft Copilot+ Becomes More Useful on AMD and Intel PCs

When Microsoft first introduced the Copilot+ program alongside its renewed push for Windows-on-Arm laptops, the AI-powered assistant features were mostly limited to Snapdragon X-powered devices. Now, Microsoft is bringing those features to Intel and AMD systems, and is also announcing Voice Access, a new accessibility feature that will first launch on Qualcomm Snapdragon systems before making its way to Intel- and AMD-powered systems. These new updates come by way of the March 27 Preview update titled KB505365. However, there is still no mention of an AMD and Intel launch for the much-maligned Recall feature that Microsoft was testing late last year and pulled due to privacy concerns.

According to the latest Windows Experience Blog post, users of AMD- and Intel-powered PCs will now be able to access features like Live Captions, Cocreator, Restyle Image, and Image Creator more broadly across the line-up of Copilot+ PCs with Intel Core Ultra 200V and AMD Ryzen AI 300 CPUs. Live Captions is officially pitched as an accessibility feature, while Restyle Image and Image Creator are AI-powered image editing and generation features, and Cocreator lies somewhere in between as a text-to-image tool meant to augment drawing in Paint. Cocreator is rolling out as of the announcement, and Restyle Image and Image Creator will be available in the Photos app on Intel and AMD systems. As for Voice Access, Microsoft claims that it will allow users to be more flexible with their language when using speech to navigate their PCs, as opposed to "learning complex steps, commands and syntax that voice access previously required" for voice navigation on PC. Voice Access will initially be limited to Snapdragon X PCs, but it will roll out to AMD and Intel Copilot+ PCs later this year.

Razer Unveils The Skibidi Headset with AI-Powered "Brainrot" Translator

Razer, the leading global lifestyle brand for gamers, today unveiled the Razer Skibidi headset, the world's first AI-powered, intelligent "Brainrot" translator. Powered by Razer AI Gamer Copilot, the Razer Skibidi serves as a real-time linguistic assistant that translates "brainrot" - what the internet has dubbed Gen Alpha's slang of seemingly unintelligible words - into "normal speak" and vice versa to facilitate intergenerational conversations.

In an era where internet slang evolves faster than the latest fad, communicating across generations has reached a whole new realm of complexity. Designed to help frustrated parents and older Gen Z siblings decode the often-perplexing lexicons of internet culture from Gen Alpha, the Razer Skibidi is coded with 1,337 unique Natural Language Processing (NLP) algorithms, tapping into the full capabilities of its patented AI to decipher the impossible, all in real time, at the touch of a button.

GMKtec EVO-X2 Pre-orders Begin April 7, $2000+ Price Tag Revealed for Ryzen AI "Strix Halo" APU-powered Mini PC

Over the past weekend, GMKtec's Weibo channel announced that pre-orders for its recently unveiled EVO-X2 mini PC model will start on April 7 (through JD.com), for customers located in China. Almost two weeks ago, the manufacturer boasted about its brand-new offering being the "world's first AI mini PC" equipped with AMD's Ryzen AI "Strix Halo" Max+ 395 APU. The unit gained extra international attention when Lisa Su autographed a showcased sample at the 2025 AI PC Innovation Summit (held on March 18, in Beijing). Pricing and availability were not mentioned during that press event, but GMKtec's Saturday (March 29) bulletin has revealed a (roughly) $2067 USD price point for the EVO-X2 launch model.

The manufacturer's blog entry stated: "The EVO-X2 AI supercomputing host is coming, 128 GB + 2 TB priced at 14999 yuan, pricing reconstructs the desktop computing power boundary! Equipped with AMD Ryzen AI Max+ 395 flagship processor, 16-core 32-thread architecture with 5.1 GHz acceleration frequency, combined with 128 GB LPDDR5X memory and 2 TB high-speed storage, it can realize local deployment of 70 billion parameter large models, and AI performance exceeds NVIDIA's GeForce RTX 5090D graphics card." This potent compact AI-crunching solution is cooled by GMKtec's "innovative" Arctic Ocean system, which the company advertises as using "dual-turbofans and VC heat sinks to achieve silent heat dissipation at a peak power consumption of 140 W. The body adopts a recycled aluminium suspension design, equipped with HDMI, DP and USB4 interfaces, and supports Wi-Fi 6 + 2.5G network access." The brand has not yet announced an international release, but its EVO-X2 mini PC could face serious competition. Late last month, Framework debuted its Desktop product range—consisting of configurable 4.5L Mini-ITX systems—with a top-end Ryzen AI Max+ 395 (128 GB) model starting at $1999.
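The 70-billion-parameter claim is worth unpacking: at full 16-bit precision such a model does not fit in 128 GB, so local deployment implicitly assumes quantized weights. A quick footprint estimate (weights only; the KV cache and activations need additional headroom):

```python
# Approximate weight memory for a 70B-parameter model at common precisions.
params = 70e9

for precision, bytes_per_param in [("FP16", 2.0), ("INT8/FP8", 1.0), ("4-bit", 0.5)]:
    gb = params * bytes_per_param / 1e9
    verdict = "fits" if gb < 128 else "does not fit"
    print(f"{precision:>8}: ~{gb:.0f} GB -> {verdict} in 128 GB unified memory")
# FP16 ~140 GB (no); INT8 ~70 GB (yes); 4-bit ~35 GB (yes)
```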

SMIC Reportedly On Track to Finalize 5 nm Process in 2025, Projected to Cost 40-50% More Than TSMC Equivalent

According to a report produced by semiconductor industry analysts at Kiwoom Securities—a South Korean financial services firm—Semiconductor Manufacturing International Corporation (SMIC) is expected to complete the development of a 5 nm process at some point in 2025. Jukanlosreve summarized this projection in a recent social media post. SMIC is often considered to be China's flagship foundry business; the partially state-owned organization seems to be heavily involved in the production of (rumored) next-gen Huawei Ascend 910 AI accelerators. SMIC foundry employees have reportedly struggled to break beyond a 7 nm manufacturing barrier, due to a lack of readily accessible cutting-edge EUV equipment. As covered on TechPowerUp last month, leading lights within China's semiconductor industry are (allegedly) developing lithography solutions for cutting-edge 5 nm and 3 nm wafer production.

Huawei is reportedly evaluating an in-house developed laser-induced discharge plasma (LDP)-based machine, but finalized equipment will not be ready until 2026—at least for mass production purposes. Jukanlosreve's short interpretation of Kiwoom's report reads as follows: "(SMIC) achieved mass production of the 7 nm (N+2) process without EUV and completed the development of the 5 nm process to support the mass production of the Huawei Ascend 910C. The cost of SMIC's 5 nm process is 40-50% higher than TSMC's, and its yield is roughly one-third." The nation's foundries are reliant on older ASML equipment, and are thus unable to produce products that can compete with the advanced (volume and quality) output of "global" TSMC and Samsung chip manufacturing facilities. The fresh unveiling of SiCarrier's Color Mountain series has signalled a promising new era for China's foundry industry.
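If both of Kiwoom's figures hold, they compound: a wafer that costs 40-50% more while yielding roughly a third as many good dies is far more than 50% more expensive per usable chip. A simple model, assuming equal die sizes and candidate die counts per wafer on both processes (all numbers normalized and illustrative):

```python
# Effective cost per good die = wafer cost / (candidate dies * yield).
# All values are normalized placeholders, not real foundry prices.
dies = 100                                 # same hypothetical dies per wafer
tsmc_wafer_cost, tsmc_yield = 1.00, 0.90   # baseline
smic_wafer_cost, smic_yield = 1.45, 0.30   # 45% pricier, one-third the yield

tsmc_per_good = tsmc_wafer_cost / (dies * tsmc_yield)
smic_per_good = smic_wafer_cost / (dies * smic_yield)
print(f"SMIC cost per good die: {smic_per_good / tsmc_per_good:.1f}x TSMC")  # ~4.3x
```

On these assumptions the yield gap, not the wafer price, dominates the effective cost difference.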

Intel to Receive $1.9 Billion as SK Hynix Finalizes NAND Deal

Intel and SK Hynix have finalized an $8.85 billion transaction involving Intel's NAND flash memory operations, marking the conclusion of a two-phase deal initiated in 2020. In the first phase of the transaction, SK Hynix acquired Intel's SSD division along with a NAND production facility in Dalian, China, for $6.61 billion. The Dalian facility was later rebranded as Solidigm. Notably, this phase transferred only the physical assets and operational facilities, leaving behind critical intellectual property, research and development infrastructure, and specialized technical staff. The second phase, finalized with a payment of $1.9 billion this Tuesday, addressed these remaining components. With this payment, SK Hynix secured full rights to Intel's proprietary NAND technology, R&D resources, and the technical workforce dedicated to NAND operations.

During the transition period, Intel maintained control over these elements, which limited integration between Solidigm and Intel's NAND teams. This separation was designed to manage operational risks and gradually transfer capabilities. Completing the deal supports a strategic restructuring of Intel's portfolio as it shifts focus toward high-growth areas such as AI chip development, foundry services, and next-generation semiconductor manufacturing. The $1.9 billion injection also arrives at an opportune time for Intel's Foundry business, which is burning billions per year, and will offset some of those losses. For SK Hynix, consolidating the complete range of Intel's NAND operations enhances its competitive position in the global NAND market, providing access to established technologies and key industry expertise. This finalization is part of a broader trend where companies divest from commoditized memory products to concentrate on more advanced semiconductor solutions like AI chips and other accelerators, which enjoy higher margins and a better business outlook.

NVIDIA H20 AI GPU at Risk in China, Due to Revised Energy-efficiency Guidelines & Supply Problems

NVIDIA's supply of the Chinese market-exclusive H20 AI GPU faces an uncertain future due to recently introduced energy-efficiency guidelines. As covered over a year ago, Team Green readied a regional alternative to its "full-fat" H800 "Hopper" AI GPU—designed and/or neutered to comply with US sanctions. Despite being less performant than its Western siblings, the H20 model proved highly popular by mid-2024—industry analysis projected "$12 billion in take-home revenue" for NVIDIA. According to a fresh Reuters news piece, demand for the cut-down "Hopper" hardware has surged throughout early 2025. The report cites "a rush to adopt Chinese AI startup DeepSeek's cost-effective AI models" as the main cause behind an increased snap-up rate of H20 chips, with the nation's "big three" AI players—Tencent, Alibaba and ByteDance—driving the majority of sales.

The supply of H20 AI GPUs seems to be under threat on several fronts; Reuters points out that "U.S. officials were considering curbs on sales of H20 chips to China" back in January. Returning to the present day, their report sources "unofficial" statements from H3C—one of China's largest server equipment manufacturers and a key OEM partner for NVIDIA. An anonymous company insider outlined a murky outlook: "H20's international supply chain faces significant uncertainties...We were told the chips would be available, but when it came time to actually purchase them, we were informed they had already been sold at higher prices." More (rumored) bad news has arrived in the shape of alleged Chinese government intervention—the Financial Times posits that local regulators have privately advised that Tencent, Alibaba and ByteDance not purchase NVIDIA H20 chips.

"GFX1153" Target Spotted in AMDGPU Library Amendment, RDNA 3.5 Again Linked to "Medusa Point" APU

At the tail end of 2024, AMD technical staffers added the "GFX1153" target to their first-party GPU supported chip list. Almost three months later, PC hardware news outlets and online enthusiasts have just picked up on this development. "GFX1150" family IPs were previously linked to Team Red's RDNA 3.5 architecture. This graphics technology debuted with the launch of Ryzen AI "Strix Halo," "Strix Point" and "Krackan Point" mobile processors. Recent leaks have suggested that Team Red is satisfied with the performance of RDNA 3.5-based Radeon iGPUs; warranting a rumored repeat rollout with next-gen "Medusa Point" APU designs.

Both "Medusa Point" and "Gorgon Point" mobile CPU families are expected to launch next year, with leaks pointing to the utilization of "Zen 6" and "Zen 5" processor cores (respectively) and RDNA 3.5 graphics architecture. RDNA 4 seems to be a strictly desktop-oriented generation. AMD could be reserving the "further out" UDNA tech for truly next-generation integrated graphics solutions. In the interim, Team Red's "GFX1153" IP will likely serve as "Medusa Point's" onboard GPU, according to the latest logical theories. Last year, the "GFX1152" target was associated with Ryzen AI 7 300-series "Krackan Point" APUs.