News Posts matching #AI

Compal Optimizes AI Workloads with AMD Instinct MI355X at AMD Advancing AI 2025 and International Supercomputing Conference 2025

As AI computing accelerates toward higher density and greater energy efficiency, Compal Electronics (Compal; Stock Ticker: 2324.TW), a global leader in IT and computing solutions, unveiled its latest high-performance server platform, the SG720-2A/OG720-2A, at both AMD Advancing AI 2025 in the U.S. and the International Supercomputing Conference (ISC) 2025 in Europe. The platform is built around the AMD Instinct MI355X GPU and offers both single-phase and two-phase liquid-cooling configurations, showcasing Compal's leadership in thermal innovation and system integration. Tailored for next-generation generative AI and large language model (LLM) training, the SG720-2A/OG720-2A delivers exceptional flexibility and scalability for modern data center operations, and has drawn significant attention across the industry.

With generative AI and LLMs driving increasingly intensive compute demands, enterprises are placing greater emphasis on infrastructure that offers both performance and adaptability. The SG720-2A/OG720-2A answers that need by combining high-density GPU integration with flexible liquid-cooling options, making it an ideal platform for next-generation AI training and inference workloads.

AMD Unveils Vision for an Open AI Ecosystem, Detailing New Silicon, Software and Systems at Advancing AI 2025

AMD delivered its comprehensive, end-to-end integrated AI platform vision and introduced its open, scalable rack-scale AI infrastructure built on industry standards at its 2025 Advancing AI event.

AMD and its partners showcased:
  • How they are building the open AI ecosystem with the new AMD Instinct MI350 Series accelerators
  • The continued growth of the AMD ROCm ecosystem
  • The company's powerful new open rack-scale designs and roadmap that bring leadership rack-scale AI performance beyond 2027

NVIDIA TensorRT Boosts Stable Diffusion 3.5 Performance on NVIDIA GeForce RTX and RTX PRO GPUs

Generative AI has reshaped how people create, imagine and interact with digital content. As AI models continue to grow in capability and complexity, they require more VRAM, or video random access memory. The base Stable Diffusion 3.5 Large model, for example, uses over 18 GB of VRAM, limiting the number of systems that can run it well. Quantization allows noncritical layers of a model to be removed or run at lower precision. NVIDIA GeForce RTX 40 Series GPUs and the Ada Lovelace generation of NVIDIA RTX PRO GPUs support FP8 quantization to run these quantized models, and the latest-generation NVIDIA Blackwell GPUs add support for FP4.

NVIDIA collaborated with Stability AI to quantize its latest model, Stable Diffusion (SD) 3.5 Large, to FP8 - reducing VRAM consumption by 40%. Further optimizations to SD3.5 Large and Medium with the NVIDIA TensorRT software development kit (SDK) double performance. In addition, TensorRT has been reimagined for RTX AI PCs, combining its industry-leading performance with just-in-time (JIT), on-device engine building and an 8x smaller package size for seamless AI deployment to more than 100 million RTX AI PCs. TensorRT for RTX is now available as a standalone SDK for developers.
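To make the memory arithmetic concrete, here is a minimal sketch of how precision alone affects a model's weight footprint. The 8-billion-parameter count is an illustrative assumption rather than an official Stable Diffusion 3.5 Large figure, and activations, text encoders and the VAE add further VRAM on top of the weights.

```python
# Back-of-the-envelope weight-memory estimate for a quantized diffusion model.
# The 8-billion-parameter count is an illustrative assumption, not an official
# Stable Diffusion 3.5 Large figure; activations, text encoders and the VAE
# add further VRAM on top of the weights alone.

BYTES_PER_PARAM = {"FP16": 2.0, "FP8": 1.0, "FP4": 0.5}

def weight_footprint_gib(num_params: float, fmt: str) -> float:
    """Approximate weight memory in GiB for a given precision format."""
    return num_params * BYTES_PER_PARAM[fmt] / 1024**3

params = 8e9  # assumed parameter count, for illustration only
for fmt in ("FP16", "FP8", "FP4"):
    print(f"{fmt}: ~{weight_footprint_gib(params, fmt):.1f} GiB")
# FP16 -> ~14.9 GiB, FP8 -> ~7.5 GiB, FP4 -> ~3.7 GiB (weights only)
```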

TSMC Prepares "CoPoS": Next-Gen 310 × 310 mm Packages

As demand for ever-growing AI compute power continues to rise and manufacturing on advanced nodes becomes more difficult, packaging is enjoying a golden era of development. Today's advanced accelerators often rely on TSMC's CoWoS modules, which are built on wafer cuts measuring no more than 120 × 150 mm. In response to the need for more space, TSMC has unveiled plans for CoPoS, or "Chips on Panel on Substrate," which could expand substrate dimensions to 310 × 310 mm and beyond. By shifting from round wafers to rectangular panels, CoPoS offers more than five times the usable area. This extra surface makes it possible to integrate additional high-bandwidth memory stacks, multiple I/O chiplets and compute dies in a single package. It also brings panel-level packaging (PLP) to the fore. Unlike wafer-level packaging (WLP), PLP assembles components on large, rectangular panels, delivering higher throughput and lower cost per unit, which makes PLP-based systems viable for volume production and allows faster iteration than WLP.
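A quick sanity check of the "more than five times" figure, using the substrate dimensions quoted above:

```python
# Sanity-check the "more than five times the usable area" claim
# using the substrate dimensions quoted above.
cowos_mm2 = 120 * 150   # current CoWoS substrate limit, mm^2
copos_mm2 = 310 * 310   # planned CoPoS panel dimensions, mm^2

print(f"CoWoS: {cowos_mm2:,} mm^2")            # 18,000 mm^2
print(f"CoPoS: {copos_mm2:,} mm^2")            # 96,100 mm^2
print(f"Ratio: {copos_mm2 / cowos_mm2:.1f}x")  # ~5.3x
```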

TSMC will establish a CoPoS pilot line in 2026 at its Visionchip subsidiary. In 2027, the pilot facility will focus on refining the process, to meet partner requirements by the end of the year. Mass production is projected to begin between the end of 2028 and early 2029 at TSMC's Chiayi AP7 campus. That site, chosen for its modern infrastructure and ample space, is also slated to host production of multi-chip modules and System-on-Wafer technologies. NVIDIA is expected to be the launch partner for CoPoS. The company plans to leverage the larger panel area to accommodate up to 12 HBM4 chips alongside several GPU chiplets, offering significant performance gains for AI workloads. At the same time, AMD and Broadcom will continue using TSMC's CoWoS-L and CoWoS-R variants for their high-end products. Beyond simply increasing size, CoPoS and PLP may work in tandem with other emerging advances, such as glass substrates and silicon photonics. If development proceeds as planned, the first CoPoS-enabled devices could reach the market by late 2029.

MAINGEAR Unleashes ULTIMA 18 - The Ultimate 18" 4K Gaming Laptop

MAINGEAR, the leader in premium-quality, high-performance gaming PCs, today announced its most powerful laptop to date, the 18-inch ULTIMA 18. Developed in collaboration with CLEVO, ULTIMA 18 redefines what a gaming laptop can be by offering desktop-level specs, like a 4K@200 Hz G-SYNC display, an Intel Core Ultra 9 275HX processor, and up to an NVIDIA GeForce RTX 5090 mobile GPU, all inside a sleek chassis outfitted with a metal lid and palm rest.

Designed for elite gamers and creators who demand top-tier performance without compromise, ULTIMA 18 is MAINGEAR's first laptop to support modern dual-channel DDR5 memory, PCIe Gen 5 SSDs, dual Thunderbolt 5 ports, and Wi-Fi 7. Whether plugged in or on the move, this system delivers unprecedented power, quiet efficiency, and immersive visuals for the most demanding workloads and graphics-rich game titles.

Synopsys Achieves PCIe 6.x Interoperability Milestone with Broadcom's PEX90000 Series Switch

Synopsys, Inc. today announced that its collaboration with Broadcom has achieved interoperability between Synopsys' PCIe 6.x IP solution and Broadcom's PEX90000 series switch. As a cornerstone of next-generation AI infrastructures, PCIe switches play a critical role in enabling the scalability required to meet the demands of modern AI workloads. This milestone demonstrates that future products integrating PCIe 6.x solutions from Synopsys and Broadcom will operate seamlessly within the ecosystem, reducing design risk and accelerating time-to-market for high-performance computing and AI data center systems.

The interoperability demonstration with Broadcom features a Synopsys PCIe 6.x IP solution, including PHY and controller, operating as a root complex and an endpoint running at 64 GT/s with Broadcom's PEX90000 switch. Synopsys will showcase this interoperability demonstration at PCI-SIG DevCon 2025 at booth #13, taking place June 11 and 12, where attendees can see a variety of successful Synopsys PCIe 7.0 and PCIe 6.x IP interoperability demonstrations in both the Synopsys booth and partners' booths.
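For context, a rough sketch of the raw bandwidth implied by a 64 GT/s PCIe 6.x link follows; the x16 link width is assumed for illustration (the announcement does not state the width), and FLIT framing and FEC overhead are ignored, so delivered throughput is somewhat lower.

```python
# Rough per-direction bandwidth implied by a PCIe 6.x link at 64 GT/s.
# The x16 link width is an assumption for illustration; FLIT framing and
# FEC overhead are ignored, so delivered throughput is somewhat lower.
gt_per_second = 64   # transfer rate per lane, GT/s
lanes = 16           # assumed link width (x16)

raw_gigabits = gt_per_second * lanes       # Gb/s across the link, one direction
raw_gigabytes = raw_gigabits / 8           # GB/s, one direction
print(f"x16 @ 64 GT/s: ~{raw_gigabytes:.0f} GB/s per direction")  # ~128 GB/s
```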

AMD Instinct MI355X Draws up to 1,400 Watts in OAM Form Factor

Tomorrow evening, AMD will host its "Advancing AI" livestream to introduce the Instinct MI350 series, a new line of GPU accelerators designed for large-scale AI training and inference. First shown in prototype form at ISC 2025 in Hamburg just a day ago, each MI350 card features 288 GB of HBM3E memory, delivering up to 8 TB/s of sustained bandwidth. Customers can choose between the single-card MI350X and the higher-clocked MI355X, or opt for a full eight-GPU platform that aggregates over 2.3 TB of memory. Both chips are built on the CDNA 4 architecture, which now supports four precision formats: FP16, FP8, FP6, and FP4. The addition of FP6 and FP4 is designed to boost throughput in modern AI workloads, where tomorrow's models, with tens of trillions of parameters, are expected to be trained using these lower-precision formats.

In half-precision tests, the MI350X achieves 4.6 PetaFLOPS on its own and 36.8 PetaFLOPS in an eight-GPU platform, while the MI355X surpasses those numbers, reaching 5.03 PetaFLOPS and just over 40 PetaFLOPS respectively. AMD is also aiming to improve energy efficiency by a factor of thirty compared with its previous generation. The MI350X runs within a 1,000 Watt power envelope and relies on air cooling, whereas the MI355X steps up to 1,400 Watts and is intended for direct-liquid-cooling setups. That 400 Watt increase puts it on par with NVIDIA's upcoming GB300 "Grace Blackwell Ultra" superchip, which is also a 1,400 W design. With memory capacity, raw compute, and power efficiency all pushed to new heights, the question remains whether real-world benchmarks will match these ambitious specifications. AMD now only lacks platform scaling beyond eight GPUs, which the Instinct MI400 series will address.
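The platform-level figures follow directly from the per-GPU numbers quoted above, as this quick cross-check shows:

```python
# Cross-check the eight-GPU platform figures against the per-GPU numbers above.
hbm_per_gpu_gb = 288
gpus_per_platform = 8
half_precision_pflops = {"MI350X": 4.6, "MI355X": 5.03}

print(f"Platform memory: {hbm_per_gpu_gb * gpus_per_platform / 1000:.1f} TB")  # ~2.3 TB
for name, pflops in half_precision_pflops.items():
    total = pflops * gpus_per_platform
    print(f"{name}: {pflops} PF per GPU -> {total:.1f} PF per platform")
# MI350X -> 36.8 PF, MI355X -> ~40.2 PF, matching the figures quoted above
```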

NVIDIA NVL72 GB200 Systems Accelerate the Journey to Useful Quantum Computing

The integration of quantum processors into tomorrow's supercomputers promises to dramatically expand the problems that can be addressed with compute—revolutionizing industries including drug and materials development.

In addition to being part of the vision for tomorrow's hybrid quantum-classical supercomputers, accelerated computing is dramatically advancing the work quantum researchers and developers are already doing to achieve that vision. And in today's development of tomorrow's quantum technology, NVIDIA GB200 NVL72 systems and their fifth-generation multinode NVIDIA NVLink interconnect capabilities have emerged as the leading architecture.

Europe Builds AI Infrastructure With NVIDIA to Fuel Region's Next Industrial Transformation

NVIDIA today announced it is working with European nations and with technology and industry leaders to build NVIDIA Blackwell AI infrastructure that will strengthen digital sovereignty, support economic growth and position the continent as a leader in the AI industrial revolution. France, Italy, Spain and the U.K. are among the nations building domestic AI infrastructure with an ecosystem of technology and cloud providers, including Domyn, Mistral AI, Nebius and Nscale, and telecommunications providers, including Orange, Swisscom, Telefónica and Telenor.

These deployments will deliver more than 3,000 exaflops of NVIDIA Blackwell compute resources for sovereign AI, enabling European enterprises, startups and public sector organizations to securely develop, train and deploy agentic and physical AI applications. NVIDIA is establishing and expanding AI technology centers in Germany, Sweden, Italy, Spain, the U.K. and Finland. These centers build on NVIDIA's history of collaborating with academic institutions and industry through the NVIDIA AI Technology Center program and the NVIDIA Deep Learning Institute to develop the AI workforce and advance scientific discovery throughout the region.

NVIDIA Partners With Europe Model Builders and Cloud Providers to Accelerate Region's Leap Into AI

NVIDIA GTC Paris at VivaTech -- NVIDIA today announced that it is teaming with model builders and cloud providers across Europe and the Middle East to optimize sovereign large language models (LLMs), providing a springboard to accelerate enterprise AI adoption for the region's industries.

Model builders and AI consortiums Barcelona Supercomputing Center (BSC), Bielik.AI, Dicta, H Company, Domyn, LightOn, the National Academic Infrastructure for Supercomputing in Sweden (NAISS) together with KBLab at the National Library of Sweden, the Slovak Republic, the Technology Innovation Institute (TII), University College London, the University of Ljubljana and UTTER are teaming with NVIDIA to optimize their models with NVIDIA Nemotron techniques to maximize cost efficiency and accuracy for enterprise AI workloads, including agentic AI.

Pegatron Unveils AI-Optimized Server Innovations at GTC Paris 2025

PEGATRON, a globally recognized Design, Manufacturing, and Service (DMS) provider, is showcasing its latest AI server solutions at GTC Paris 2025. Built on NVIDIA Blackwell architecture, PEGATRON's cutting-edge systems are tailored for AI training, reasoning, and enterprise-scale deployment.

NVIDIA GB300 NVL72
At the forefront is the RA4802-72N2, built on the NVIDIA GB300 NVL72 rack system, featuring 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace CPUs. Designed for AI factories, it boosts output by up to 50X. PEGATRON's in-house developed Coolant Distribution Unit (CDU) delivers 310 kW of cooling capacity with redundant hot-swappable pumps, ensuring performance and reliability for mission-critical workloads.

MSI Powers AI's Next Leap for Enterprises at ISC 2025

MSI, a global leader in high-performance server solutions, is showcasing its enterprise-grade, high-performance server platforms at ISC 2025, taking place June 10-12 at booth #E12. Built on standardized and modular architectures, MSI's AI servers are designed to power next-generation AI and accelerated computing workloads, enabling enterprises to rapidly advance their AI innovations.

"As AI workloads continue to grow and evolve toward inference-driven applications, we're seeing a significant shift in how enterprises approach AI deployment," said Danny Hsu, General Manager of Enterprise Platform Solutions at MSI. "With modular and standards-based architectures, enterprise data centers can now adopt AI technologies more quickly and cost-effectively than ever before. This marks a new era where AI is not only powerful but also increasingly accessible to businesses of all sizes.

ASUS Announces Key Milestone with Nebius and Showcases NVIDIA GB300 NVL72 System at GTC Paris 2025

ASUS today joined GTC Paris at VivaTech 2025 as a Gold Sponsor, highlighting its latest portfolio of AI infrastructure solutions and reinforcing its commitment to advancing the AI Factory vision with a full range of NVIDIA Blackwell Ultra solutions, delivering breakthrough performance from large-scale datacenter to personal desktop.

ASUS is also excited to announce a transformative milestone in its partnership with Nebius. Together, the two companies are enabling a new era of AI innovation built on NVIDIA's advanced platforms. Building on the success of the NVIDIA GB200 NVL72 platform deployment, ASUS and Nebius are now moving forward with strategic collaborations featuring the next-generation NVIDIA GB300 NVL72 platform. This ongoing initiative underscores ASUS's role as a key enabler in AI infrastructure, committed to delivering scalable, high-performance solutions that help enterprises accelerate AI adoption and innovation.

Micron Ships HBM4 Samples: 12-Hi 36 GB Modules with 2 TB/s Bandwidth

Micron has reached a significant milestone with its HBM4 memory, which stacks 12 DRAM dies (12-Hi) to provide 36 GB of capacity per package. According to company representatives, initial engineering samples are scheduled to ship to key partners in the coming weeks, paving the way for full production in early 2026. The HBM4 design relies on Micron's established 1β ("one-beta") process node for the DRAM tiles, in production since 2022, while the company prepares to introduce its EUV-enabled 1γ ("one-gamma") node for DDR5 later this year. By increasing the interface width from 1,024 to 2,048 bits per stack, each HBM4 package can achieve a sustained memory bandwidth of 2 TB/s, alongside a more than 20% improvement in power efficiency over the existing HBM3E standard.
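As a rough cross-check of the 2 TB/s claim, per-stack bandwidth is simply the interface width times the per-pin data rate; the ~8 Gb/s pin rate in the sketch below is an assumption for illustration, not a figure quoted by Micron.

```python
# Back-of-the-envelope check of the quoted ~2 TB/s per-stack bandwidth.
# The ~8 Gb/s per-pin data rate is an assumption for illustration;
# it is not a figure quoted by Micron in this announcement.
interface_bits = 2048   # HBM4 interface width per stack
pin_rate_gbps = 8       # assumed per-pin data rate, Gb/s

bandwidth_gbytes = interface_bits * pin_rate_gbps / 8   # GB/s per stack
print(f"~{bandwidth_gbytes / 1000:.1f} TB/s per stack")  # ~2.0 TB/s
```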

NVIDIA and AMD are expected to be early adopters of Micron's HBM4. NVIDIA plans to integrate these memory modules into its upcoming Rubin-Vera AI accelerators in the second half of 2026. AMD is anticipated to incorporate HBM4 into its next-generation Instinct MI400 series, with further information to be revealed at the company's Advancing AI 2025 conference. The increased capacity and bandwidth of HBM4 will address growing demands in generative AI, high-performance computing, and other data-intensive applications. Larger stack heights and expanded interface widths enable more efficient data movement, a critical factor in multi-chip configurations and memory-coherent interconnects. As Micron begins mass production of HBM4, major obstacles to overcome will be thermal performance and real-world benchmarks, which will determine how effectively this new memory standard can support the most demanding AI workloads.

NVIDIA Reportedly Progressing Well with "Rubin" AI GPU Development - Insiders Foresee Q3'25 Sampling

Over a year ago, industry moles started chattering about a potential "late 2025" launch of NVIDIA "Rubin" AI accelerators/GPUs. According to older rumors, one of the successors to current-gen "Blackwell" hardware could debut in chiplet-based "R100" form. Weeks ahead of Christmas 2024, Taiwanese insider reports pointed to Team Green's development of the "Rubin" AI project being six months ahead of schedule. Despite this positive outlook, experts surmised that the North American giant would not be rushing out shiny new options—especially with the recent arrival of "Blackwell Ultra" products. A lot of leaks seem to be coming from sources at (or adjacent to) TSMC.

Taiwan's top foundry service is reportedly in the "Rubin" equation, with a 3 nm (N3P) node process and CoWoS-L packaging linked to "R100." According to local murmurs, the final tape-out of Rubin GPUs and Vera CPUs is due for completion this month. Trial production is expected to run throughout the summer, with initial samples ready for distribution by September. According to a fresh Ctee TW news report, unnamed supply chain participants reckon that NVIDIA's "new chip development schedule is smoother than before, and mass production (of Rubin and Vera chips) will begin as early as 2026." In theory, the first publicly exhibited final examples could turn up at CES 2026.

GIGABYTE Reveals AI TOP 500 TRX50 Desktop - Powered by Ryzen Threadripper PRO 7965WX

Gigabyte has low-key introduced an ultra-premium AI TOP 500 TRX50 Desktop, advertised as being: "purpose-built for local AI development, multimodal fine-tuning, and high-performance gaming, delivering workstation-grade performance and advanced AI capabilities in a streamlined, plug-and-play format." A firm price point and release date were not mentioned in last week's press material. This system easily outmuscles a flagship Intel "Arrow Lake-S" Core Ultra-powered sibling, the AI TOP 100 Z890. A 24-core AMD Ryzen Threadripper PRO 7965WX sits at the heart of the AI TOP 500 TRX50, and an AORUS 360 AIO Liquid Cooler tempers this beast. Given that Team Red will be launching its next-gen Ryzen Threadripper PRO 9000 "Zen 5/Shimada Peak" processor family next month, Gigabyte's selection of older "Zen 4/Storm Peak" tech seems ill-timed. Fortunately, its TRX50 AI TOP motherboard can support the next wave of CPUs. VideoCardz believes that another variant—sporting a 32-core Threadripper PRO 7975WX CPU—will emerge in the near future.

The AI TOP 500 and 100 pre-builds have something in common: both are specced with own-brand GeForce RTX 5090 WINDFORCE graphics cards. The Taiwanese firm's signature AI TOP Utility is described as "a unified software suite that streamlines the entire AI development process. Users can explore models via RAG (Retrieval-Augmented Generation), build and manage custom datasets, fine-tune LLMs and LMMs up to 405B parameters, and monitor system performance through a real-time dashboard. It also supports multi-node clustering via Thunderbolt 5 and Dual 10G LAN, allowing users to scale computing resources as needed." The AI TOP 500 TRX50 can be equipped with up to 768 GB (8 x 96 GB) of DDR5 R-DIMM memory, "enabling smooth execution of complex AI tasks and large-scale datasets. Storage is managed by AI-ready SSDs, including a 1 TB AI TOP 100E cache drive with up to 150x TBW (total bytes written) compared to standard consumer SSDs—ensuring high durability under frequent read/write workloads." An ultra-fast AORUS Gen 4 7300 SSD 2 TB is also included in the package. Appropriately, an AI TOP-branded Ultra Durable 1600 W 80 Plus Platinum PSU (ATX 3.1) provides the necessary juice.

AMD Adds a Pair of New Ryzen Z2 SoCs to its Lineup of Handheld Gaming Chips

AMD's Z2 series of processors for handheld gaming devices has been expanded with a pair of new chips, namely the Ryzen AI Z2 Extreme and the Ryzen Z2 A. From AMD's naming scheme, one would assume that the two are quite similar, but if you have kept track of AMD's Z2 product lineup, you are most likely already aware that there are major differences between the three older SKUs, and this time around we get a further change at the low end. The new top-of-the-range chip, the Ryzen AI Z2 Extreme, appears to be largely the same SoC as the older Ryzen Z2 Extreme, with the addition of a 50 TOPS NPU for AI tasks that appears to be shared with many of AMD's mobile SoCs.

However, the new low-end entry, the Ryzen Z2 A, appears to have more in common with the Steam Deck SoC than with any of the other Z2 chips. It sports a quad-core, eight-thread Zen 2 CPU, an RDNA 2-based GPU with a mere eight CUs, and support for LPDDR5-6400 memory. On the plus side, it has a TDP range of 6-20 Watts, which should allow for better battery life, assuming devices based on it get a battery of similar size to handhelds built around the higher-end Z2 SoCs. ASUS is using both of these chips in its two new ROG Ally handheld gaming devices, and Lenovo is expected to follow shortly with its own handhelds.

NVIDIA Grabs Market Share, AMD Loses Ground, and Intel Disappears in Latest dGPU Update

Within the discrete graphics card sector, NVIDIA achieved a remarkable 92% share of the add-in board (AIB) GPU market in the first quarter of 2025, according to data released by Jon Peddie Research (JPR). This represents an 8.5-point increase over NVIDIA's previous position. By contrast, AMD's share contracted to just 8%, down 7.3 points, while Intel's presence effectively disappeared, falling to 0% after losing 1.2 points. JPR reported that AIB shipments reached 9.2 million units during Q1 2025 despite desktop CPU shipments declining to 17.8 million units. The firm projects that the AIB market will face a compound annual decline of 10.3% from 2024 to 2028, although the installed base of discrete GPUs is expected to grow to 130 million units by the end of the forecast period. By 2028, an estimated 86% of desktop PCs are expected to feature a dedicated graphics card.

NVIDIA's success this quarter can be attributed to its launch of the RTX 50 series GPUs. In contrast, AMD's RDNA 4 GPUs were released significantly later in Q1. Additionally, Intel's Battlemage Arc GPUs, which were launched in Q4 2024, have struggled to gain traction, likely due to limited availability and low demand in the mainstream market. The broader PC GPU market, which includes integrated solutions, contracted by 12% from the previous quarter, with a total of 68.8 million units shipped. Desktop graphics unit sales declined by 16%, while notebook GPUs decreased by 10%. Overall, NVIDIA's total GPU share rose by 3.6 points, AMD's dipped by 1.6 points, and Intel's declined by 2.1 points. Meanwhile, data center GPUs bucked the overall downward trend, rising by 9.6% as enterprises continue to invest in artificial intelligence applications. On the CPU side, notebook processors accounted for 71% of shipments, with desktop CPUs comprising the remaining 29%.

NVIDIA Blackwell Delivers Breakthrough Performance in Latest MLPerf Training Results

NVIDIA is working with companies worldwide to build out AI factories—speeding the training and deployment of next-generation AI applications that use the latest advancements in training and inference. The NVIDIA Blackwell architecture is built to meet the heightened performance requirements of these new applications. In the latest round of MLPerf Training—the 12th since the benchmark's introduction in 2018—the NVIDIA AI platform delivered the highest performance at scale on every benchmark and powered every result submitted on the benchmark's toughest large language model (LLM)-focused test: Llama 3.1 405B pretraining.

The NVIDIA platform was the only one that submitted results on every MLPerf Training v5.0 benchmark—underscoring its exceptional performance and versatility across a wide array of AI workloads, spanning LLMs, recommendation systems, multimodal LLMs, object detection and graph neural networks. The at-scale submissions used two AI supercomputers powered by the NVIDIA Blackwell platform: Tyche, built using NVIDIA GB200 NVL72 rack-scale systems, and Nyx, based on NVIDIA DGX B200 systems. In addition, NVIDIA collaborated with CoreWeave and IBM to submit GB200 NVL72 results using a total of 2,496 Blackwell GPUs and 1,248 NVIDIA Grace CPUs.

Intel Sets 50% Gross Margin Goal for Every New Product Before Production

Intel's tale of financial difficulties has been told for many quarters now, and the company is slowly paving the way to profitability through workforce reductions, new and more aggressive product roadmaps, and, as of now, a 50% gross margin requirement before any product enters production. At Bank of America's global technology conference, Intel Products CEO Michelle Johnston Holthaus (MJH) noted that CEO Lip-Bu Tan is "laser-focused on the fact that we need to get our gross margins back up above 50%." Explaining the reasoning behind this decision, MJH added that it is "something that we probably should have had before, but we have it now so that product doesn't move forward; you actually don't get engineers assigned to it if it's not 50% or higher gross margins moving forward."

Interestingly, this means that every new product will now be evaluated for profitability first, unlike the "build it and they will (hopefully) come" philosophy, which cost Intel many billions in R&D just to enter new markets without a solid financial plan. MJH also added, regarding the 50% gross margin expectation: "So I think our future products can all get there, I think really what it comes down to is you have to have a lot of discipline in your product life cycle planning to build products from day one that hit that and so there's a lot of things that I talked about when we talked about Lip-Bu coming on board of getting our OpEx and our CapEx in line, getting the types of products that we're going to build in alignment, really understanding market and ASPs." This suggests that the upcoming Panther Lake chips, 18A node HVM, Clearwater Forest Xeons, Xe3/Xe4 Arc GPUs, and Jaguar Shores AI accelerators are all expected to carry gross margins of 50% or more, making them viable for Intel and sustainable in the long run.

Chinese Tech Firms Reportedly Unimpressed with Overheating of Huawei AI Accelerator Samples

Mid-way through last month, Tencent's president, Martin Lau, confirmed that his company had stockpiled a huge quantity of NVIDIA H20 AI GPUs prior to new trade restrictions coming into effect. According to earlier reports, China's largest tech firms collectively spent $16 billion on hardware acquisitions in Q1'25. Team Green engineers are likely engaged in the creation of "nerfed" enterprise-grade chip designs—potentially ready for deployment later in 2025. Huawei leadership is likely keen to take advantage of this situation, although it will be difficult to compete with the sheer volume of accumulated H20 units. The Shenzhen, Guangdong-based giant's Ascend AI accelerator family is considered a valid alternative to equivalent "sanction-conformant" NVIDIA products.

The controversial 910C model and a successor seem to be worthy candidates, as demonstrated by preliminary performance data, but fresh industry murmurs suggest teething problems. The Information has picked up inside-track chatter from unnamed moles at ByteDance and Alibaba. During test runs, staffers noted the overheating of Huawei Ascend 910C trial samples. Additionally, they highlighted limitations within the Huawei Compute Architecture for Neural Networks (CANN) software platform; NVIDIA's extremely mature CUDA ecosystem holds a significant advantage here. Several of China's prime AI players—including DeepSeek—are reportedly pursuing in-house AI chip development projects, positioning themselves as future competitors to Huawei.

MSI Unveils the New Cubi NUC AI 1UMG

MSI, a global leader in innovative technology, proudly announces the launch of the Cubi NUC AI 1UMG, a next-generation mini PC designed to meet the increasing demands of AI-driven applications and contemporary business environments. Equipped with the Intel Core Ultra 7 processor and integrated Intel AI Boost NPU, the Cubi NUC AI 1UMG is fully optimized for AI computing. It delivers exceptional performance for AI tasks, offering faster response times, improved multitasking, and enhanced overall system efficiency—making it ideal for AI applications, intelligent automation, and advanced business analytics.

The Cubi NUC AI 1UMG is designed with versatility in mind, featuring outstanding display capabilities. It supports up to four monitors through two Thunderbolt 4 ports and two HDMI outputs, allowing users to create a seamless multi-display setup. This is perfect for complex workflows, automation industries, or control room applications.

IBM & Inclusive Brains Announce Collab: Combining AI, Quantum & Neurotechnologies

IBM and Inclusive Brains, a leader in non-invasive neurotechnologies and multimodal artificial intelligence, have entered a joint study agreement to experiment with advanced AI and quantum machine learning techniques. The aim of the joint study is to boost the performance of multi-modal brain-machine interfaces (BMIs).

Innovation with Positive Social Impact
BMIs have the potential to enable individuals with disabilities—particularly those who have lost the ability to use their hands or voice—to leverage connected devices and digital environments to regain control of their surroundings, eliminating the need for vocal commands or physical operation of a keyboard, screen or mouse. Inclusive Brains aims to improve access to education and employment opportunities using the insights generated in the joint study. Beyond better inclusion of people with paralysis, Inclusive Brains aims to deliver broader societal benefits, including improved prevention of both physical and mental health issues among the wider population, thanks to enhanced classification and, in turn, a better understanding of brain activity patterns.

AMD's Open AI Software Ecosystem Strengthened Again, Following Acquisition of Brium

At AMD, we're committed to building a high-performance, open AI software ecosystem that empowers developers and drives innovation. Today, we're excited to take another step forward with the acquisition of Brium, a team of world-class compiler and AI software experts with deep expertise in machine learning, AI inference, and performance optimization. Brium brings advanced software capabilities that strengthen our ability to deliver highly optimized AI solutions across the entire stack. Their work in compiler technology, model execution frameworks, and end-to-end AI inference optimization will play a key role in enhancing the efficiency and flexibility of our AI platform.

This acquisition strengthens our foundation for long-term innovation. It reflects our strategic commitment to AI, particularly to the developers who are building the future of intelligent applications. It is also the latest in a series of targeted investments, following the acquisitions of Silo AI, Nod.ai, and Mipsology, that together advance our ability to support the open-source software ecosystem and deliver optimized performance on AMD hardware.

AMD Celebrates Four Decades of FPGA Innovation - From Invention to AI Acceleration

This year marks the 40th anniversary of the first commercially available field-programmable gate array (FPGA), introducing the idea of reprogrammable hardware. By creating "hardware as flexible as software," FPGA reprogrammable logic changed the face of semiconductor design. For the first time, developers could design a chip, and if specs or requirements changed mid-stream, or even after manufacturing, they could redefine its functionality to perform a different task. This flexibility enabled more rapid development of new chip designs, accelerating time to market for new products and providing an alternative to ASICs.

The impact on the market has been phenomenal. FPGAs launched a $10+ billion industry and over the past four decades we have shipped more than 3 billion FPGAs and adaptive SoCs (devices combining FPGA fabric with a system-on-chip and other processing engines) to more than 7,000 customers across diverse market segments. In fact, we've been the programmable logic market share leader for the past 25 consecutive years, and we believe we are well positioned for continued market leadership based on the strength of our product portfolio and roadmap.