News Posts matching #AI


NVIDIA Bringing Cybersecurity Platform to Every AI Factory

As enterprises increasingly adopt AI, securing AI factories—where complex, agentic workflows are executed—has never been more critical. NVIDIA is bringing runtime cybersecurity to every AI factory with NVIDIA DOCA Argus, a new NVIDIA DOCA software framework that is part of the NVIDIA cybersecurity AI platform. Running on the NVIDIA BlueField networking platform, DOCA Argus operates on every node to immediately detect and respond to attacks on AI workloads, integrating seamlessly with enterprise security systems to deliver instant threat insights. The framework provides runtime threat detection by using advanced memory forensics to monitor threats in real time, delivering detection speeds up to 1,000x faster than existing agentless solutions—without impacting system performance.

Unlike conventional tools, Argus runs independently of the host, requiring no agents, integration or reliance on host-based resources. This agentless, zero-overhead design enhances system efficiency and ensures resilient security in any AI compute environment, including containerized and multi-tenant infrastructures. By operating outside the host, Argus remains invisible to attackers—even in the event of a system compromise. Cybersecurity professionals can seamlessly integrate the framework with their SIEM, SOAR and XDR security platforms, enabling continuous monitoring and automated threat mitigation and extending their existing cybersecurity capabilities for AI infrastructure.
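
The announcement does not detail the integration interface, but to make the SIEM/SOAR/XDR hand-off concrete, here is a minimal Python sketch of forwarding a hypothetical Argus-style threat event to a syslog-based collector. The event fields, logger name, and collector address are illustrative assumptions, not the actual DOCA Argus API.

```python
import json
import logging
import logging.handlers

# Hypothetical runtime threat event; these field names are assumptions for
# illustration only and do not reflect the actual DOCA Argus event schema.
threat_event = {
    "source": "bluefield-argus",
    "severity": "critical",
    "category": "memory_forensics",
    "description": "unexpected code injection detected in AI inference container",
    "node": "ai-factory-node-17",
    "workload": "llm-inference-pod-42",
}

# Forward the event to a syslog-compatible SIEM/SOAR/XDR collector.
# "localhost:514" is a placeholder; point this at your real collector.
logger = logging.getLogger("argus-forwarder")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=("localhost", 514)))

logger.critical(json.dumps(threat_event))
```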

VSORA Raises $46 Million to Produce its Jotunn8 AI Chip in 2025

VSORA, a French innovator and the only European provider of ultra-high-performance artificial intelligence (AI) inference chips, today announced that it has successfully raised $46 million in a new fundraising round.

The investment was led by Otium and a French family office with additional participation from Omnes Capital, Adélie Capital and co-financing from the European Innovation Council (EIC) Fund.

Acer TravelMate and Aspire PCs Win iF Design Awards in 2025

Acer today announced that several products have been honored with the iF Design Award 2025. This year's product design winners, all in the computer category, include the TravelMate P6 14 AI business laptop, the Aspire C Series All-in-One PCs, and the multiple-award-winning Aspire Vero 16. Notably, this marks the Aspire Vero 16 series' second iF Design Award, having first been recognized in 2022.

This year's evaluation process was based on five criteria: idea, form, function, differentiation, and sustainability. Nearly 11,000 submissions were assessed by 131 jurors, highlighting the rigorous selection process behind this year's award recipients. The accolades celebrate Acer's exceptional product designs and the company's efforts to align with current social trends, particularly developments in sustainability and artificial intelligence, areas noted to have a strong influence on product submissions across all disciplines.

TSMC Outlines Roadmap for Wafer-Scale Packaging and Bigger AI Packages

At this year's Technology Symposium, TSMC unveiled a multi-year roadmap for its packaging technologies. TSMC's strategy splits into two main categories: Advanced Packaging and System-on-Wafer. Back in 2016, CoWoS-S debuted with four HBM stacks paired to N16 compute dies on a 1.5× reticle-limited interposer, which was an impressive feat at the time. Fast forward to 2025, and CoWoS-S now routinely supports eight HBM chips alongside N5 and N4 compute tiles within a 3.3× reticle budget. Its successor, CoWoS-R, increases interconnect bandwidth and brings N3-node compatibility without changing that reticle constraint. Looking toward 2027, TSMC will launch CoWoS-L. First up are large N3-node chiplets, followed by N2-node tiles, multiple I/O dies, and up to a dozen HBM3E or HBM4 stacks—all housed within a 5.5× reticle ceiling. It's hard to believe that eight HBM stacks once sounded ambitious—now they're just the starting point for next-gen AI accelerators such as AMD's Instinct MI450X and NVIDIA's Vera Rubin.

Integrated Fan-Out, or InFO, adds another dimension with flexible 3D assemblies. The original InFO bridge is already powering AMD's Instinct cards. Later this year, InFO-POP (package-on-package) and InFO-2.5D arrive, promising even denser chip stacking and unlocking new scaling potential on a single package, away from the 2D and 2.5D packaging we were used to, going into the third dimension. On the wafer scale, TSMC's System-on-Wafer lineup—SoW-P and SoW-X—has grown from specialized AI engines into a comprehensive roadmap mirroring logic-node progress. This year marks the first SoIC stacks from N3 to N4, with each tile up to 830 mm² and no hard limit on top-die size. That trajectory points to massive, ultra-dense packages, which is exactly what HPC and AI data centers will demand in the coming years.
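
For a rough sense of the package areas involved, the sketch below multiplies a single-reticle area by the reticle multiples on TSMC's roadmap; the ~830 mm² baseline is borrowed from the tile figure quoted above and is an approximation, not the exact lithographic reticle limit.

```python
# Rough package-area arithmetic for TSMC's CoWoS roadmap.
# Assumption: one reticle is taken as ~830 mm^2 (the tile figure quoted
# above); the true lithographic reticle limit may differ slightly.
RETICLE_AREA_MM2 = 830

for name, multiple in [("CoWoS-S (2016)", 1.5),
                       ("CoWoS-S/R (2025)", 3.3),
                       ("CoWoS-L (2027)", 5.5)]:
    print(f"{name}: ~{multiple * RETICLE_AREA_MM2:,.0f} mm^2 of interposer area")
```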

IBM Unveils $150 Billion Investment in America to Accelerate Technology Opportunity

Today IBM announced plans to invest $150 billion in America over the next five years to fuel the economy and to accelerate its role as the global leader in computing. This includes an investment of more than $30 billion in research and development to advance and continue IBM's American manufacturing of mainframe and quantum computers.

"Technology doesn't just build the future—it defines it," said Arvind Krishna, IBM chairman, president and chief executive officer. "We have been focused on American jobs and manufacturing since our founding 114 years ago, and with this investment and manufacturing commitment we are ensuring that IBM remains the epicenter of the world's most advanced computing and AI capabilities."

AMD Discusses "World Changing" LUMI Supercomputer - Powered by EPYC CPUs & Instinct GPUs

If you're a fan of science fiction movies, you've probably seen the story where countries come together to avert or overcome a crisis. These films usually begin with some unexpected dangerous event—maybe an alien invasion, a pandemic or rogue robots. Earth's smartest scientists and engineers work non-stop to discover a solution. Governments pool their resources and, in the end—usually at the very last possible second—humanity triumphs. This might seem like a Hollywood fantasy, but believe it or not, this movie plot is playing out in real life right now. No, we aren't facing an alien invasion or fighting off AI overlords, but the earth does face some pretty serious crises. And nations of the world are working together to develop technology to help address those problems.

For example, the LUMI supercomputer, located in Kajaani, Finland receives a portion of its funding from the European High-Performance Computing Joint Undertaking (EuroHPC JU), an effort that pools EU resources to create/provide exascale computing platforms. Additional funding comes from LUMI consortium countries, which include Finland, Belgium, Czech Republic, Denmark, Estonia, Iceland, the Netherlands, Norway, Poland, Sweden and Switzerland. According to the Top500 list published in November 2024, LUMI is the 8th fastest supercomputer in the world and the fastest supercomputer in Europe. The final configuration of the LUMI supercomputer can sustain 380 petaflops of performance, which is roughly the equivalent of 1.5 million high-end laptops. It's based on the HPE Cray EX platform with AMD EPYC CPUs and AMD Instinct MI250X GPUs. According to the Green500 list, LUMI is also the world's 25th most energy efficient supercomputer. It runs on 100% hydropower and the waste heat from the facility is recaptured to heat about 100 homes in Kajaani.

Rick Tsai, MediaTek's CEO, to Deliver Keynote Speech at Computex 2025

MediaTek CEO Dr. Rick Tsai will deliver a keynote speech at COMPUTEX 2025. The presentation will outline MediaTek's AI vision, from edge to cloud, explore the evolution of next-generation connectivity, and show how cutting-edge, power-efficient, high-performance chips are shaping the future. The keynote will take place on opening day, May 20, at 11:00 AM (UTC+8) at the Taipei Nangang Exhibition Center, Hall 2, 7F. As a global leader in semiconductor technology and AI computing, MediaTek continues to drive innovation across devices, smart homes, automotive electronics, IoT, and data center technologies.

Dr. Tsai brings extensive leadership experience in the semiconductor and technology industries. Under his leadership, MediaTek has further strengthened its position as a leading innovator of advanced chip solutions, maintaining a leading position in the global mobile chipset market and driving progress across its entire portfolio of technology platforms, demonstrating MediaTek's vision of empowering a connected, intelligent world for everyone.

Oracle Cloud Infrastructure Bolstered by Thousands of NVIDIA Blackwell GPUs

Oracle has stood up and optimized its first wave of liquid-cooled NVIDIA GB200 NVL72 racks in its data centers. Thousands of NVIDIA Blackwell GPUs are now being deployed and ready for customer use on NVIDIA DGX Cloud and Oracle Cloud Infrastructure (OCI) to develop and run next-generation reasoning models and AI agents. Oracle's state-of-the-art GB200 deployment includes high-speed NVIDIA Quantum-2 InfiniBand and NVIDIA Spectrum-X Ethernet networking to enable scalable, low-latency performance, as well as a full stack of software and database integrations from NVIDIA and OCI.

OCI, one of the world's largest and fastest-growing cloud service providers, is among the first to deploy NVIDIA GB200 NVL72 systems. The company has ambitious plans to build one of the world's largest Blackwell clusters. OCI Superclusters will scale beyond 100,000 NVIDIA Blackwell GPUs to meet the world's skyrocketing need for inference tokens and accelerated computing. The torrid pace of AI innovation continues as several companies including OpenAI have released new reasoning models in the past few weeks.

Microsoft Launches Recall and Integrates AI into Search, Plus Other Updates

Microsoft has deployed three AI features to Copilot+ PCs through the Windows 11 April 2025 non-security preview update. Users can finally access the long-promised Recall, Click to Do, and enhanced Windows Search by enabling "Get the latest updates as soon as they're available" in Settings > Windows Update. Recall operates as a background capture system that takes periodic screenshots, encrypts them via the device's TPM chip, and stores them locally. The system creates a searchable index organized by keywords, dates, and applications. Privacy controls include Windows Hello authentication for settings changes, app-specific exclusions, customizable retention periods, and snapshot deletion options. Windows E3 enterprise deployments can implement Group Policy controls for centralized management.
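
To make the capture-and-index pattern concrete, here is a minimal Python sketch of a local screenshot logger with a searchable index. It is only an illustration of the general idea, not Microsoft's Recall implementation, which additionally encrypts snapshots with the TPM and gates access behind Windows Hello; it assumes Pillow is installed for screen capture.

```python
# Illustrative sketch of periodic screenshot capture with a searchable local
# index. NOT Recall's implementation: Recall also encrypts snapshots via the
# device TPM and protects access with Windows Hello.
import sqlite3
import time
from datetime import datetime
from PIL import ImageGrab  # pip install pillow

db = sqlite3.connect("snapshots.db")
db.execute("""CREATE TABLE IF NOT EXISTS snapshots (
                  id INTEGER PRIMARY KEY,
                  captured_at TEXT,
                  path TEXT,
                  keywords TEXT)""")

def capture_once(keywords: str = "") -> None:
    """Grab the screen, save it locally, and record it in the index."""
    timestamp = datetime.now().isoformat(timespec="seconds")
    path = f"snapshot_{timestamp.replace(':', '-')}.png"
    ImageGrab.grab().save(path)
    db.execute("INSERT INTO snapshots (captured_at, path, keywords) VALUES (?, ?, ?)",
               (timestamp, path, keywords))
    db.commit()

def search(term: str):
    """Return snapshots whose keywords or capture date match the query."""
    pattern = f"%{term}%"
    return db.execute("SELECT captured_at, path FROM snapshots "
                      "WHERE keywords LIKE ? OR captured_at LIKE ?",
                      (pattern, pattern)).fetchall()

if __name__ == "__main__":
    capture_once(keywords="demo, desktop")
    time.sleep(1)
    print(search("demo"))
```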

Click to Do functions at the window manager level, activated by Win + Click or touchscreen right swipe. The context-aware tool provides different capabilities based on content type, offering summarization and translation for text, and background removal and editing for images. Image processing works across all Copilot+ hardware, while text functionality currently supports only Snapdragon processors, with AMD Ryzen AI 300-series and Intel Core Ultra 200V compatibility scheduled for later release. The updated Windows Search implements a compact language model running directly on the device's NPU. This enables natural-language queries throughout Windows interfaces, allowing users to find content without exact filenames. The system operates on NPUs rated at over 40 TOPS and delivers 70 percent faster retrieval than Windows 10.

Intel's Lip-Bu Tan Outlines "Path Forward" Plan - CEO Announces Reduction of Workforce

Team, (yesterday) we reported our Q1 2025 results. It was a step in the right direction as we delivered revenue, gross margin and EPS (earnings per share) above our guidance, driven by Dave and Michelle's leadership. I want to thank them both, and all of you, for the good execution.

We need to build on this progress—and it won't be easy. We are navigating an increasingly volatile and uncertain macroeconomic environment, which is reflected in our Q2 outlook. On top of that, there are many areas where we must improve. We need to confront our challenges head-on and take swift actions to get back on track.

As I have said, this starts by revamping our culture. The feedback I have received from our customers and many of you has been consistent. We are seen as too slow, too complex and too set in our ways—and we need to change. Our flatter Executive Team (ET) structure that I shared last week was a first step. The next step is to drive greater simplicity, speed and collaboration across the entire company. To achieve these objectives, today I am announcing some important changes.

Becoming an Engineering-Focused Company
We need to get back to our roots and empower our engineers. That's why I elevated our core engineering functions to the ET. And many of the changes we will be driving are designed to make engineers more productive by removing burdensome workflows and processes that slow down the pace of innovation. To make necessary investments in our engineering talent and technology roadmaps, we need to find new ways to reduce our costs. While we have taken significant actions in the last year, our current cost structure is still well above competitive benchmarks. With that in mind, we have reduced our operating expense and capital spending targets going forward, which I will discuss during our investor call this afternoon.

Intel's AI PC Chips Underperform, "Raptor Lake" Demand Sparks Shortages

Intel's latest AI-focused laptop processors, "Lunar Lake" and "Meteor Lake," have encountered slower-than-anticipated uptake, leading device manufacturers to increase orders for the previous-generation "Raptor Lake" chips. As a result, Intel 7 manufacturing lines, originally intended to scale up production of its newest AI-ready CPUs and transition to newer nodes, are now running at full capacity on "Raptor Lake" output, limiting the availability of both the new and legacy models. In its first-quarter 2025 financial report, Intel recorded revenue of $12.7 billion, essentially flat year-over-year, and a net loss of $821 million. The results fell short of the industry's expectations, and the company's stock declined by more than 5% in after-hours trading.

Management attributed the shortfall to cautious buying patterns among OEMs, who seek to manage inventory in light of ongoing US-China tariff discussions, and to consumer hesitancy to pay higher prices for AI-enabled features that are still emerging in mainstream applications. CEO Lip-Bu Tan outlined plans to reduce operating expenses by $500 million and lower capital expenditures by approximately $2 billion to address these challenges in 2025. He also confirmed that workforce reductions are planned for the second quarter, though specific figures were not disclosed. Looking ahead, Intel intends to focus on strengthening its data-center business, where demand for Xeon processors remains robust, and to prepare for the late-2025 introduction of its Panther Lake platform. The company will also continue efforts to encourage software development that leverages on-device AI, aiming to support wider adoption of its AI-capable hardware.

Productivity Meets Gaming with Razer's New Ergonomic Pro Click V2 Mice

Razer, the leading global lifestyle brand for gamers, today announced the launch of its new line of productivity-focused wireless mice: the Razer Pro Click V2 Vertical Edition and the Razer Pro Click V2. Designed for users who want to integrate gaming products into their work setup, the new Razer Pro Click V2 mice emphasize ergonomic comfort, offering all-day support with gaming precision.

Ergonomic excellence for all-day comfort
Built with ergonomics in mind, the Razer Pro Click V2 Vertical Edition is the brand's first wireless vertical ergonomic mouse. It features a 71.7° angle that mimics a natural handshake grip, reducing strain during prolonged use. The extended thumb rest keeps the hand relaxed as the base support elevates the wrist for smoother movements.

Report: Global PC Shipments Up 6.7% YoY in Q1 2025 Amid US Tariff Anticipation

Global PC shipments grew 6.7% YoY in Q1 2025 to reach 61.4 million units, according to Counterpoint Research's preliminary data. The growth was mainly driven by PC vendors accelerating shipments ahead of US tariffs and the increasing adoption of AI-enabled PCs amid the end of Windows 10 support. However, this surge may be short-lived, as inventory levels are likely to stabilize in the next few weeks. The impact of the US tariffs is expected to dampen the growth momentum in 2025.

Apple and Lenovo delivered strong performances in the quarter, largely due to new product launches and market dynamics. Apple experienced 17% YoY growth in shipments, driven by its AI-capable M4-based MacBook series. Lenovo's 11% growth reflected its expansion into AI-enabled PCs and its diversified product portfolio. Lenovo remained the brand with the largest market share during the quarter. HP and Dell, on the other hand, benefited from US market pull-ins during the quarter, with 6% and 4% YoY growth respectively, and maintained their second and third places in Q1. Pull-ins also occurred for other major brands ahead of the tariff uncertainty, further consolidating market share around the leading vendors.

MSI Presenting AI's Next Leap at Japan IT Week Spring 2025

MSI, a leading global provider of high-performance server solutions, is bringing AI-driven innovation to Japan IT Week Spring 2025 at Booth #21-2 with high-performance server platforms built for next-generation AI and cloud computing workloads. MSI's NVIDIA MGX AI Servers deliver modular GPU-accelerated computing to optimize AI training and inference, while its Core Compute line of Multi-Node Servers maximizes compute density and efficiency for AI inference and cloud service provider workloads. MSI's Open Compute line of ORv3 Servers enhances scalability and thermal efficiency in hyperscale AI deployments, and its Enterprise Servers provide balanced compute, storage, and networking for seamless AI workloads across cloud and edge. With deep expertise in system integration and AI-driven infrastructure, MSI is advancing the next generation of intelligent computing solutions to power AI's next leap.

"AI's advancement hinges on performance efficiency, compute density, and workload scalability. MSI's server platforms are engineered to accelerate model training, optimize inference, and maximize resource utilization—ensuring enterprises have the processing power to turn AI potential into real-world impact," said Danny Hsu, General Manager of MSI Enterprise Platform Solutions.

ADATA x Giga Computing Power Up in Brazil Expanding into Latin America's Server Market

ADATA Technology, a global leader in memory modules and flash storage, officially announces its collaboration with Giga Computing, a subsidiary of GIGABYTE Technology, to establish a newly upgraded server production line in Brazil. This partnership not only highlights ADATA's regional manufacturing strengths but also enhances product competitiveness by integrating ADATA's server-grade memory and storage solutions—paving the way for joint expansion into the growing Latin American market.

Anticipating the rapid growth of the Latin American market, ADATA Technology made its early move into Brazil over a decade ago, steadily expanding its presence across manufacturing and marketing operations. Since establishing its São Paulo plant in 2016 and further expanding into Manaus in 2021, ADATA has leveraged fully automated production and high-efficiency capacity to deliver stable supply to the regional market. Its commitment to both employee well-being and product excellence has been recognized with the "Great Place to Work Brazil" certification for three consecutive years (2022-2024).

TSMC Unveils Next-Generation A14 Process at North America Technology Symposium

TSMC today unveiled its next cutting-edge logic process technology, A14, at the Company's North America Technology Symposium. Representing a significant advancement from TSMC's industry-leading N2 process, A14 is designed to drive AI transformation forward by delivering faster computing and greater power efficiency. It is also expected to enhance smartphones by improving their on-board AI capabilities, making them even smarter. Planned to enter production in 2028, the current A14 development is progressing smoothly with yield performance ahead of schedule.

Compared with the N2 process, which is about to enter volume production later this year, A14 will offer up to 15% speed improvement at the same power, or up to 30% power reduction at the same speed, along with more than a 20% increase in logic density. Leveraging the Company's experience in design-technology co-optimization for nanosheet transistors, TSMC is also evolving its TSMC NanoFlex standard cell architecture to NanoFlex Pro, enabling greater performance, power efficiency and design flexibility.
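
As a rough illustration of what those deltas mean in practice, the snippet below applies them to a hypothetical logic block; the 10 W and 3.0 GHz baseline figures are invented for illustration, and only the 15%, 30%, and 20% numbers come from TSMC's announcement.

```python
# Apply TSMC's quoted A14-vs-N2 deltas to a hypothetical logic block.
# The 10 W / 3.0 GHz / unit-area baseline is invented for illustration;
# only the 15% / 30% / 20% figures come from the announcement.
baseline_power_w = 10.0   # hypothetical N2 block power
baseline_clock_ghz = 3.0  # hypothetical N2 block clock
baseline_area = 1.0       # normalized N2 logic area

print(f"Same power: ~{baseline_clock_ghz * 1.15:.2f} GHz at {baseline_power_w:.0f} W")
print(f"Same speed: ~{baseline_power_w * 0.70:.1f} W at {baseline_clock_ghz:.1f} GHz")
print(f"Logic area: ~{baseline_area / 1.20:.2f}x of the N2 footprint (>20% denser)")
```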

NVIDIA's Project G-Assist Plug-In Builder Explained: Anyone Can Customize AI on GeForce RTX AI PCs

AI is rapidly reshaping what's possible on a PC—whether for real-time image generation or voice-controlled workflows. As AI capabilities grow, so does their complexity. Tapping into the power of AI can entail navigating a maze of system settings, software and hardware configurations. Enabling users to explore how on-device AI can simplify and enhance the PC experience, Project G-Assist—an AI assistant that helps tune, control and optimize GeForce RTX systems—is now available as an experimental feature in the NVIDIA app. Developers can try out AI-powered voice and text commands for tasks like monitoring performance, adjusting settings and interacting with supporting peripherals. Users can even summon other AIs powered by GeForce RTX AI PCs.

And it doesn't stop there. For those looking to expand Project G-Assist capabilities in creative ways, the AI supports custom plug-ins. With the new ChatGPT-based G-Assist Plug-In Builder, developers and enthusiasts can create and customize G-Assist's functionality, adding new commands, connecting external tools and building AI workflows tailored to specific needs. With the plug-in builder, users can generate properly formatted code with AI, then integrate the code into G-Assist—enabling quick, AI-assisted functionality that responds to text and voice commands.
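
The exact plug-in interface is defined by NVIDIA's G-Assist documentation and the code the Plug-In Builder generates; purely as a hypothetical sketch of the underlying pattern, mapping new text or voice commands to handler functions, it might look something like this. Every name below is an assumption for illustration, not the real G-Assist API.

```python
# Hypothetical sketch of the command-to-handler pattern a G-Assist-style
# plug-in builds on. All names here are assumptions for illustration; the
# real interface comes from NVIDIA's G-Assist plug-in documentation.
from typing import Callable, Dict

HANDLERS: Dict[str, Callable[[str], str]] = {}

def command(name: str):
    """Register a handler for a text/voice command."""
    def register(func: Callable[[str], str]) -> Callable[[str], str]:
        HANDLERS[name] = func
        return func
    return register

@command("check_gpu_temp")
def check_gpu_temp(_: str) -> str:
    # A real plug-in would query the GPU (e.g. via NVML); this stub returns
    # a fixed string so the sketch stays self-contained.
    return "GPU temperature: 62 C (placeholder value)"

@command("set_fan_profile")
def set_fan_profile(argument: str) -> str:
    return f"Fan profile set to '{argument}' (placeholder action)"

def dispatch(command_name: str, argument: str = "") -> str:
    handler = HANDLERS.get(command_name)
    return handler(argument) if handler else f"Unknown command: {command_name}"

if __name__ == "__main__":
    print(dispatch("check_gpu_temp"))
    print(dispatch("set_fan_profile", "quiet"))
```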

Sony PlayStation 5 Pro Lead Designers Perform Official Teardown of Flagship Console

The PlayStation 5 Pro console, the most innovative PlayStation console to date, elevates gaming experiences to the next level with features like an upgraded GPU, advanced ray tracing, and PlayStation Spectral Super Resolution (PSSR), an AI-driven upscaling technology that delivers super sharp image clarity with high-framerate gameplay. Today we're providing a closer look at the console's internal architecture, as Sony Interactive Entertainment engineers Shinya Tsuchida, PS5 Pro Mechanical Design Lead, and Shinya Hiromitsu, PS5 Pro Electrical Design Lead, provide a deep dive into the console's innovative technology and design philosophy.

Note: in this article, we refer to the PlayStation 5 model released in 2020 as the "original PS5," the PS5 released in 2023 as the "current PS5," and the PS5 Pro released in 2024 as the "PS5 Pro." Do not try this at home: disassembly carries a risk of fire, electric shock, or other injury, and will invalidate your manufacturer's guarantee.

AMD Announces Press Conference & Livestream at Computex 2025

AMD today announced that it will be hosting a press conference during Computex 2025. The in-person and livestreamed press conference will take place on Wednesday, May 21, 2025, at 11 a.m. (UTC+8) at the Grand Hyatt Taipei. The event will showcase the advancements AMD has driven with AI in gaming, PCs and professional workloads.

AMD senior vice president and general manager of the Computing and Graphics Group Jack Huynh, along with industry partners, will discuss how AMD is expanding its leadership across gaming, workstations, and AI PCs, and highlight the breadth of the company's high-performance computing and AI product portfolio. The livestream will start at 8 p.m. PT/11 p.m. ET on Tuesday, May 20 on AMD.com, with replay available after the conclusion of the livestream event.

Lenovo Introduces New ThinkPad Mobile Workstations and Business Laptops Designed for the AI-Ready Workforce

Lenovo today unveiled a refreshed portfolio of ThinkPad devices engineered to meet the evolving needs of modern professionals—from content creators and engineers to knowledge workers and hybrid teams. The lineup includes powerful Copilot+ PCs, such as the ThinkPad P14s Gen 6 AMD and ThinkPad P16s Gen 4 AMD mobile workstations, alongside new ThinkPad L Series business laptops and expanded ThinkPad X1 Aura Editions, delivering the performance, manageability, and intelligence today's AI-powered workflows demand.

Together, these latest ThinkPad systems reflect Lenovo's commitment to delivering smarter, more adaptive solutions that support advanced workloads, sustainability goals, and flexible work models—whether users are building complex simulations or collaborating across teams.

AMD Software Adrenalin 25.4.1 Beta Drivers Released

Yesterday, AMD released the Radeon Software Adrenalin Edition 25.4.1 Optional Beta drivers, which add FidelityFX Super Resolution 4 support to The Elder Scrolls IV: Oblivion Remastered, Assassin's Creed Shadows, Kingdom Come Deliverance 2, Dynasty Warriors Origin, Civilization 7, and Naraka Bladepoint. The release also brings Amuse 3.0 support and AMD-optimized models to Radeon RX 9000 and RX 7000 series graphics cards alongside Ryzen AI 300 series processors. Among the many fixes are corrected lighting artifacts in Topaz Photo AI's Adjust Lighting features on RX 9000 series cards, removed flicker when using AMD FreeSync, improved DirectML and GenAI performance in Amuse 3.0 on RX 7000 GPUs and Ryzen AI 300 series chips, and patched image corruption in certain diffuser models on RX 9000 hardware.

The update also smooths out stutter and performance drops in World of Warcraft's Western Plaguelands, restores integrated camera detection after factory resets on Ryzen AI Max devices, and addresses AMD Chat installation hangs. This is still an optional beta with some known issues, such as FSR 4 not activating in Naraka Bladepoint on Windows 10, crashes in The Last of Us Part 2, memory leaks in SteamVR on RX 9000 cards, and intermittent launch and stability hiccups in games like Cyberpunk 2077, Final Fantasy VII Rebirth, Battlefield 1, Monster Hunter Wilds, and Marvel's Spider-Man 2. AMD recommends using the suggested workarounds, such as disabling motion smoothing or integrated graphics in the BIOS, or waiting for your system vendor's certified driver to ensure full compatibility.

DOWNLOAD: AMD Software Adrenalin 25.4.1 Beta

NVIDIA Blackwell Platform Boosts Water Efficiency by Over 300x - "Chill Factor" for AI Infrastructure

Traditionally, data centers have relied on air cooling—where mechanical chillers circulate chilled air to absorb heat from servers, helping them maintain optimal conditions. But as AI models increase in size, and the use of AI reasoning models rises, maintaining those optimal conditions is not only getting harder and more expensive—but more energy-intensive. While data centers once operated at 20 kW per rack, today's hyperscale facilities can support over 135 kW per rack, making it an order of magnitude harder to dissipate the heat generated by high-density racks. To keep AI servers running at peak performance, a new approach is needed for efficiency and scalability.

One key solution is liquid cooling—by reducing dependence on chillers and enabling more efficient heat rejection, liquid cooling is driving the next generation of high-performance, energy-efficient AI infrastructure. The NVIDIA GB200 NVL72 and the NVIDIA GB300 NVL72 are rack-scale, liquid-cooled systems designed to handle the demanding tasks of trillion-parameter large language model inference. Their architecture is also specifically optimized for test-time scaling accuracy and performance, making it an ideal choice for running AI reasoning models while efficiently managing energy costs and heat.
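
To put those rack power levels in perspective, the short sketch below estimates the coolant flow needed to carry away 20 kW versus 135 kW per rack using Q = m·c·ΔT, assuming water as the coolant and a 10 °C inlet-to-outlet temperature rise; the ΔT and the water-only assumption are illustrative, not figures from NVIDIA.

```python
# Estimate the coolant flow needed to remove rack heat, using Q = m_dot * c_p * dT.
# Assumptions (illustrative, not NVIDIA figures): water coolant with
# c_p ~ 4186 J/(kg*K), a 10 K temperature rise across the rack, and 1 kg ~ 1 L.
C_P_WATER = 4186.0   # J/(kg*K)
DELTA_T_K = 10.0     # assumed inlet-to-outlet temperature rise

for rack_kw in (20, 135):
    flow_kg_per_s = (rack_kw * 1000) / (C_P_WATER * DELTA_T_K)
    print(f"{rack_kw:>3} kW rack: ~{flow_kg_per_s:.2f} L/s "
          f"(~{flow_kg_per_s * 60:.0f} L/min) of water")
```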

EK to Showcase Latest Liquid Cooling Innovations at Computex 2025

EK, renowned for its premium liquid cooling solutions and now managed by LM TEK, is set to present its latest innovations at COMPUTEX Taipei 2025. From May 20 to 23, attendees can visit LM TEK at Booth M1419, Hall 1 (4F), TaiNEX 1, to explore cutting-edge cooling technologies designed to meet the demands of next-generation AI computing. As artificial intelligence and high-performance computing continue to evolve, the need for efficient thermal management becomes increasingly critical. EK's newest solutions address these challenges head-on, offering advanced liquid cooling systems that elevate performance, ensure system stability, and meet the rigorous demands of modern AI workloads. This year's COMPUTEX will serve as a platform for EK to showcase how its innovative hardware—engineered for data centers, AI development platforms, and high-performance workstations—can help reshape the future of computing.

Want to know what's coming?
Visit our EK COMPUTEX 2025 page, where we will be revealing newly released products, exclusive launches, and spotlighting key partner projects on display at the show. Under the management of LM TEK, EK continues to push the boundaries of thermal management technology. This transition brings enhanced operational agility and a sharper focus on customer needs, while maintaining the trusted EK quality and design excellence. Visitors to the EK booth will have the opportunity to explore the latest liquid cooling solutions, meet the team, and discuss how EK technology can power next-gen AI innovation. Whether you're an industry professional, system integrator, or tech enthusiast, this is your chance to experience the future of liquid cooling—up close.

TSMC Can't Track Where Its Chips End Up, Annual Report Admits

TSMC has acknowledged fundamental visibility limitations in its semiconductor supply chain, stating in its latest annual report that it "inherently lacks visibility regarding the downstream use or user of final products." This disclosure relates to an incident where 7 nm chips manufactured for Sophgo were later identified in Huawei's Ascend 910B/C AI accelerators, whose hardware is subject to US export restrictions. The contract foundry outlined its standard process: receiving GDS files through intermediaries, validating technical specifications, creating photomasks, and fabricating wafers without insight into end applications. Subsequent analysis revealed that those very chips matched Huawei's specifications, providing components for approximately one million dual‑chiplet AI accelerator units, with two million dies shipped to Huawei.

The report warns that compliance violations by supply‑chain partners, such as failing to secure proper import, export or re‑export permits, could trigger regulatory investigations and penalties, even when TSMC adheres to its established protocols. The US has already proposed a $1 billion fine for TSMC. This visibility gap shows that the challenges of semiconductor manufacturing, where complex distribution networks obscure the path between fabrication and deployment, are not easily overcome. Foundries face increasing pressure to enhance tracking capabilities despite the inherent limitations of the contract manufacturing model. US sanctions on Chinese companies keep raising the walls higher, and sanction-abiding companies may end up avoiding business with Chinese entities altogether to avoid being fined.

"FA-EX9" AMD Ryzen AI 2L Mini PC from FEVM Rivals NVIDIA DGX Spark

Today, Chinese PC maker FEVM introduced the FA‑EX9 mini PC, powered by AMD's new Ryzen AI MAX+ 395 "Strix Halo" processor. This compact system measures just 192 × 190 × 55 mm (2 L volume) and packs 16 Zen 5 CPU cores alongside 40 RDNA 3.5 compute units (Radeon 8060S) and a dedicated XDNA 2 neural engine capable of 50 TOPS. FEVM configures the MAX+ 395 to run at up to 120 W sustained power, putting it in the same performance class as a Ryzen 9 9955HX paired with an RTX 4070 Laptop GPU. Memory comes as 128 GB of LPDDR5X on a 256‑bit bus, with up to 96 GB usable as video memory for large‑model inference. Storage is handled by dual M.2 PCIe 4.0 SSD slots, supporting up to 2 TB onboard and up to 16 TB total. The FA‑EX9 offers one HDMI 2.1 port, one DisplayPort 1.4, two USB4 Type‑C connectors for up to four 8K displays, and an OCuLink port for external GPU expansion. Inspiration for this mini PC seems to be NVIDIA's DGX Spark, which is Team Green's custom solution for local AI processing in an incredibly compact housing.
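
As a back-of-the-envelope illustration of what 96 GB of addressable video memory means for local large-model inference, the sketch below estimates the largest model that fits at common weight precisions; the 20% headroom reserved for the KV cache, activations, and runtime overhead is an assumption, not a vendor figure.

```python
# Estimate how many model parameters fit in 96 GB of video memory at
# different weight precisions. The 20% headroom reserved for KV cache,
# activations, and runtime overhead is an assumption for illustration.
VRAM_GB = 96
HEADROOM = 0.20
usable_bytes = VRAM_GB * (1 - HEADROOM) * 1024**3

for precision, bytes_per_param in [("FP16", 2.0), ("INT8", 1.0), ("INT4", 0.5)]:
    max_params_billions = usable_bytes / bytes_per_param / 1e9
    print(f"{precision}: roughly {max_params_billions:.0f}B parameters fit in {VRAM_GB} GB")
```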

In comparison, NVIDIA's DGX Spark brings the GB10 Grace Blackwell superchip to a palm‑sized AI appliance. It combines a 20‑core Arm CPU cluster with a Blackwell GPU featuring next‑gen Tensor and RT cores, delivering up to 1,000 FP4 AI TOPS. The DGX Spark is built around 128 GB of unified LPDDR5X memory at 273 GB/s, up to 4 TB of self‑encrypting NVMe storage, four USB4 ports, one HDMI output, and a ConnectX‑7 SmartNIC providing 200 GbE networking for multi‑node clusters. Its chassis measures 150 × 150 × 50.5 mm (1.24 L volume) and draws approximately 170 W under load, with pricing starting at $2,999 for a 1 TB model or $3,999 for the 4 TB Founder's Edition, now available for preorder. While the FA‑EX9 balances general‑purpose computing, flexible GPU expansion, and high‑speed I/O for edge AI and creative professionals, the DGX Spark focuses on out‑of‑the‑box AI throughput and scale‑out clustering. The FA-EX9 is more of a general-purpose Swiss army knife, which can be used for anything from AI to gaming. Release date and pricing are still unknown.