News Posts matching #AI


The Alters Studio Confirms AI Use After Community Backlash

The latest game to fall into the trap of annoying, or even alienating, fans by using generative AI without disclosing it is The Alters. Accusations over the use of generative AI in The Alters have come from all corners of the community, most prominently Steam reviews and the game's subreddit. They cover everything from AI-generated text in parts of the environment to AI-translated dialogue that shipped with part of the AI prompt still in the final game. The accusations became so commonplace that gamers even started to suspect that some of the visual assets had been created with generative AI. Adding insult to injury, there was no disclosure on the game's Steam page confirming the use of AI, so it seemed as though 11 Bit Studios was trying to hide the use of AI-generated content in The Alters.

After the allegations started to make the news, 11 Bit Studios responded, explaining that generative AI was used during the development of The Alters and specifying that "AI-generated text for a graphic asset, which was meant as a piece of background texture, was used by one of our graphical designers as a placeholder." The studio goes on to explain that the asset was never meant to make it into the final version of the game, and that it has conducted a thorough investigation and determined this is the only instance of an AI-generated placeholder making it into the game. 11 Bit Studios also says there are instances of "last-minute translations" made to some of the licensed movies that characters can watch in the game—these are seemingly the examples pointed out by a game localization expert on LinkedIn. According to the studio's statement, its usual translation partners were not involved in the localization hotfix, which only adds to the concerns of those who oppose the use of generative AI in media and gaming, since that work would otherwise have been done by a human instead of an LLM. Ultimately, the studio made the decision to use AI instead of its translation partners because of time constraints, and it acknowledges that it should have disclosed the use of AI from the outset.

NVIDIA's Dominance Challenged as Largest AI Lab Adopts Google TPUs

NVIDIA's AI hardware dominance is being challenged as the world's leading AI lab, OpenAI, taps into Google TPU hardware in a significant effort to move away from single-vendor solutions. In June 2025, OpenAI began leasing Google Cloud's Tensor Processing Units to handle ChatGPT's growing inference workload, the first time OpenAI has relied on non-NVIDIA chips in large-scale production. Until recently, NVIDIA GPUs powered both model training and inference for OpenAI's products. Training large language models on those cards is costly, but it is a periodic process; inference, by contrast, runs continuously and carries its own substantial expense. ChatGPT now serves more than 100 million daily active users, including 25 million paid subscribers, and inference operations account for nearly half of OpenAI's estimated $40 billion annual compute budget. Google's TPUs, such as the v6e "Trillium," provide a more cost-effective solution for steady-state inference, as they are designed specifically for high throughput and low latency.

Beyond cost savings, this decision reflects OpenAI's desire to reduce reliance on any single vendor. Microsoft Azure has been its primary cloud provider since early investments and collaborations. However, GPU supply shortages and price fluctuations exposed a weakness in relying too heavily on a single source. By adding Google Cloud to its infrastructure mix, OpenAI gains greater flexibility, avoids vendor lock-in, and can scale more smoothly during usage peaks. For Google, winning OpenAI as a TPU customer offers strong validation for its in-house chip development. TPUs were once reserved almost exclusively for internal projects such as powering the Gemini model. Now, they are attracting leading organizations like Apple and Anthropic. Note that, beyond v6e inference, Google also designs TPUs for training (yet-to-be-announced v6p), which means companies can scale their entire training runs on Google's infrastructure on demand.

AMD Ryzen AI 9 HX 475 & 470 "Gorgon Point" APUs Surface in Shipping Manifests

Last month, a confidential presentation in China gave AMD's laptop partners their first glimpse of what's coming next. The slides hinted at modest clock-speed improvements and the addition of more entry-level models, all under the codename "Gorgon Point." Not long after, NBD shipping manifests surfaced showing a series of new FP8, FP10, and FP12 product codes. These identifiers don't match any existing "Strix Point" or "Krackan Point" chips, so it's clear AMD is gearing up for a Ryzen AI 400-series refresh set to roll out around mid-2026. Despite the new name, Gorgon Point sticks with the same winning formula: it still uses Zen 5 and Zen 5c CPU cores alongside RDNA 3.5 graphics and XDNA 2 neural accelerators. Core counts haven't changed either, so you'll see configurations ranging from 4 to 12 cores just like before, with up to 4x Zen 5 and 8x Zen 5c cores.

What's different, according to the leaked partner slides, is a slight bump in boost clocks. The top-end Ryzen AI 9 HX 475 and 470 chips are now rated for up to 5.2 GHz, a slight but welcome increase from the previous 5.1 GHz, while maintaining a 28 W default TDP. AMD is also broadening its reach into budget laptops. In addition to the Ryzen AI 9 HX upgrades, the company will introduce new Ryzen 5 and Ryzen 3 models. Early leaks mention parts numbered 440 and 430, as well as a mystery Ryzen 3 SKU. These entry-level chips will use the same Gorgon Point silicon but will be tuned for cost-sensitive devices. Branding will likely follow AMD's recent pattern, and given how AMD has renumbered previous families, slotting Gorgon Point into familiar retail channels should be straightforward. Until AMD makes an official announcement, these shipping manifests and partner leaks are the best clues we have about the performance and efficiency gains in the next wave of AI-accelerated laptops.

Dell Announces the 16 Premium and 14 Premium Laptop Ranges

Today, Dell unveils its new lineup of flagship laptops, now under the Dell Premium name. Powered by the latest Intel Core Ultra 200H series processors, these devices deliver meaningful performance advancements designed for students, creators and entrepreneurs who rely on their PCs to fuel their ambitions and keep pace with what's next.

Staying true to the XPS tradition, the new Dell Premium laptops uphold the signature craftsmanship and innovation that customers know and love - stunning displays, elevated and smooth finishes, monochromatic colors and cutting-edge technology. The new name signals a fresh chapter—one that makes it easier than ever to find the right PC while providing the same exceptional quality, design and performance.

XMG EVO 14: 14-inch Ultrabook Updated with AMD Ryzen AI 300 Series

The first model update for XMG's ultrabooks, originally introduced last year, is now underway. Leading the way is the XMG EVO 14, which will be available with three different AMD Ryzen AI processors, including the Ryzen AI 9 HX 370. The 1.45 kg laptop features a brighter 1800p display, now offering 500 nits compared to the previous 400 nits. Other highlights include an 80 Wh battery, a dual-fan cooling system and a wide range of external connectivity options. Like its predecessor, the EVO 14 continues to rely on upgradeable, socketed SSDs and DDR5 RAM modules - now configurable with up to 128 GB of memory.

Launched in 2024, the EVO series marked XMG's first ultrabooks to forgo a dedicated graphics unit in favour of an efficient integrated graphics solution. The 14-inch model is the first to receive an update: the XMG EVO 14 (E25) is now available with a choice of three AMD processors. Alongside AMD's Ryzen AI 9 HX 370 (12 cores, 24 threads), XMG is offering the compact 311 x 220 x 17 mm, 1.45 kg ultrabook with either the Ryzen AI 9 365 (10 cores, 20 threads) or Ryzen AI 7 350 (8 cores, 16 threads). These options also differ in their integrated graphics: with 16 compute units, the iGPU in the HX 370 delivers the highest 3D performance for accelerating content creation applications or enabling entry-level gaming.

xMEMS Announces µCooling Fan-on-a-Chip Solution for XR Smart Glasses

xMEMS Labs, Inc., inventor of the world's first monolithic silicon MEMS air pump, today announced the expansion of its revolutionary µCooling fan-on-a-chip platform into XR smart glasses, providing the industry's first in-frame active cooling solution for AI-powered wearable displays.

As smart glasses rapidly evolve to integrate AI processors, advanced cameras, sensors, and high-resolution AR displays, thermal management has become a major design constraint. Total device power (TDP) is increasing from today's 0.5-1 W levels to 2 W and beyond, driving significant heat into the frame materials that rest directly on the skin. Conventional passive heat sinking struggles to maintain safe and comfortable surface temperatures for devices worn directly on the face for extended periods.

Humanoid Robots to Assemble NVIDIA's GB300 NVL72 "Blackwell Ultra"

NVIDIA's upcoming GB300 NVL72 "Blackwell Ultra" rack-scale systems will reportedly be assembled by humanoid robots, according to sources close to Reuters. As readers are aware, most steps in silicon, PCB, and server manufacturing are already automated, requiring little to no human intervention; rack-scale systems, however, have required humans for final assembly up until now. It appears that Foxconn and NVIDIA have made plans to open the first AI-powered humanoid robot assembly plant in Houston, Texas. The central plan is that, in the coming months as the plant is completed, humanoid robots will take over the final assembly process entirely, removing humans from the manufacturing loop.

And this is not a bad thing. Since server assembly typically requires lifting heavy server racks throughout the day, the humanoid robots will do the hard work, saving workers from excessive physical labor. Initially, humans will oversee the robots' operations, with fully autonomous factories expected later on; the human element will primarily involve inspecting the work. NVIDIA has been laying the groundwork for humanoid robots for some time, having developed NVIDIA Isaac, a comprehensive CUDA-accelerated platform for humanoid robots. As robots from Agility Robotics, Boston Dynamics, Fourier, Foxlink, Galbot, Mentee Robotics, NEURA Robotics, General Robotics, Skild AI, and XPENG all need models that are aware of their surroundings, NVIDIA created Isaac GR00T N1, the world's first open humanoid robot foundation model, available for anyone to use and fine-tune.

Inventory Headwinds Weigh on Top 5 Enterprise SSD Vendors in 1Q25; Recovery Expected as AI Demand Grows

TrendForce's latest investigations reveal that several negative factors weighed on the enterprise SSD market in the first quarter of 2025. These include production challenges for next-gen AI systems and persistent inventory overhang in North America. As a result, major clients significantly scaled back orders, causing the ASP of enterprise SSDs to plunge nearly 20%. This led to QoQ revenue declines for the top five enterprise SSD vendors, reflecting a period of market adjustment.

However, conditions are expected to improve in the second quarter. As shipments of NVIDIA's new chips ramp up, demand for AI infrastructure in North America is rising. Meanwhile, Chinese CSPs are steadily expanding storage capacity in their data centers. Together, these trends are set to reinvigorate the enterprise SSD market, with overall revenue projected to return to positive growth.

RuggON Unveils 12-inch SOL 7: The World's First Rugged Tablet Powered by Intel Arrow Lake Processors

RuggON, a global provider of rugged computing solutions, announces the launch of the SOL 7, a groundbreaking 12-inch fully rugged tablet and the first powered by Intel Arrow Lake processors. Designed for high-performance computing in demanding environments, the SOL 7 delivers next-generation AI capabilities, robust durability and seamless connectivity for critical sectors including public safety, automotive applications, warehousing, logistics, and agriculture.

Unmatched AI Performance with Intel Arrow Lake
Powered by the latest Intel Core Ultra 5/7 processor with integrated Intel AI Boost, the SOL 7 enables powerful on-device AI for real-time analytics, image recognition, and rapid decision-making. Equipped with a suite of data capture tools including an optional 2D barcode scanner with OCR, NFC reader with FIDO2 security, smart card reader, fingerprint reader and UHF RFID reader, the SOL 7 empowers field professionals to operate smarter, faster, and more securely.

Arteris Accelerates AI-Driven Silicon Innovation with Expanded Multi-Die Solution

In a market reshaped by the compute demands of AI, Arteris, Inc. (Nasdaq: AIP), a leading provider of system IP for accelerating semiconductor creation, today announced an expansion of its multi-die solution, delivering a foundational technology for rapid chiplet-based innovation. "In the chiplet era, the need for computational power increasingly exceeds what is available by traditional monolithic die designs," said K. Charles Janac, president and CEO of Arteris. "Arteris is leading the transition into the chiplet era with standards-based, automated and silicon-proven solutions that enable seamless integration across IP cores, chiplets, and SoCs."

Moore's Law, predicting the doubling of transistor count on a chip every two years, is slowing down. As the semiconductor industry accelerates efforts to increase performance and efficiency, especially driven by AI workloads, architectural innovation through multi-die systems has become critical. Arteris' expanded multi-die solution addresses this shift with a suite of enhanced technologies that are purpose-built for scalable and faster time-to-silicon, high-performance computing, and automotive-grade mission-critical designs.

Taiwan Adds Huawei and SMIC to Export Control List, TSMC First to Comply

On June 10, Taiwan's Ministry of Economic Affairs expanded its list of strategic export-controlled customers to include Huawei and Semiconductor Manufacturing International Corporation (SMIC). The ministry announced that this decision followed a review meeting focused on preventing the proliferation of arms and other national security concerns. Moving forward, any Taiwanese exporter must obtain formal government approval before shipping semiconductors, lithography machines, or related equipment to Huawei or SMIC. TSMC immediately confirmed its full compliance. Company representatives reminded stakeholders that no orders have been fulfilled for Huawei since September 2020 and pledged to enhance internal verification procedures to block any unauthorized transactions. These steps build on a one-billion-dollar penalty, imposed after investigators determined that two million advanced AI chiplets had been supplied for Huawei's Ascend 910B accelerator without proper clearance.

For Huawei and SMIC, this latest measure compounds the challenges created by existing US export controls, which prohibit both companies from sourcing many US-origin technologies and designs. The two Chinese giants will accelerate efforts to develop domestic alternatives, yet true semiconductor independence remains a distant goal. Designing and building reliable extreme ultraviolet lithography systems demands years of specialized research and highly precise manufacturing capabilities. Scaling production without foreign expertise could introduce costly delays. In response, Chinese research institutes report that the country's first homegrown EUV lithography machines are slated to enter trial production in the third quarter of 2025. Meanwhile, state‑backed partners are racing to develop advanced packaging tools to rival those offered by ASML. Despite these initiatives, experts warn that catching up with global leaders will require substantial time and continued investment.

Next‑Gen HBM4 to HBM8: Toward Multi‑Terabyte Memory on 15,000 W Accelerators

In a joint briefing this week, KAIST's Memory Systems Laboratory and TERA's Interconnection and Packaging group presented a forward-looking roadmap for High Bandwidth Memory (HBM) standards and the accelerator platforms that will employ them. Shared via Wccftech and VideoCardz, the outline covers five successive generations, from HBM4 to HBM8, each promising substantial gains in capacity, bandwidth, and packaging sophistication. First up is HBM4, targeted for a 2026 rollout in AI GPUs and data center accelerators. It will deliver approximately 2 TB/s per stack at an 8 Gbps pin rate over a 2,048-bit interface. Die stacks will reach 12 to 16 layers, yielding 36-48 GB per package with a 75 W power envelope. NVIDIA's upcoming Rubin series and AMD's Instinct MI500 cards are slated to employ HBM4, with Rubin Ultra doubling the number of memory stacks from eight to sixteen and AMD targeting up to 432 GB per device.

Looking to 2029, HBM5 maintains an 8 Gbps speed but doubles the I/O lanes to 4,096 bits, boosting throughput to 4 TB/s per stack. Power rises to 100 W and capacity scales to 80 GB using 16‑high stacks of 40 Gb dies. NVIDIA's tentative Feynman accelerator is expected to be the first HBM5 adopter, packing 400-500 GB of memory into a multi-die package and drawing more than 4,400 W of total power. By 2032, HBM6 will double pin speeds to 16 Gbps and increase bandwidth to 8 TB/s over 4,096 lanes. Stack heights can grow to 20 layers, supporting up to 120 GB per stack at 120 W. Immersion cooling and bumpless copper-copper bonding will become the norm. The roadmap then predicts HBM7 in 2035, which includes 24 Gbps speeds, 8,192-bit interfaces, 24 TB/s throughput, and up to 192 GB per stack at 160 W. NVIDIA is preparing a 15,360 W accelerator to accommodate this monstrous memory.
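The per-stack bandwidth figures above follow directly from pin rate and interface width. As a quick sanity check, here is a minimal sketch in Python that reproduces them using only the numbers quoted in this post (HBM8 is omitted since no figures are given here):

```python
# Per-stack bandwidth = pin rate (Gbit/s per pin) * interface width (pins) / 8 bits per byte.
# Figures are taken from the roadmap quoted above.
hbm_roadmap = {
    # generation: (pin rate in Gbps, interface width in bits)
    "HBM4": (8, 2048),
    "HBM5": (8, 4096),
    "HBM6": (16, 4096),
    "HBM7": (24, 8192),
}

for gen, (pin_gbps, bus_bits) in hbm_roadmap.items():
    gb_per_s = pin_gbps * bus_bits / 8          # GB/s per stack
    print(f"{gen}: {gb_per_s / 1000:.1f} TB/s per stack")

# Output: HBM4 ~2.0, HBM5 ~4.1, HBM6 ~8.2, HBM7 ~24.6 TB/s, matching the quoted values.
```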

This Week in Gaming (Week 25)

Welcome to the middle of June and, to those who celebrate, happy midsummer. This week is crammed full of new releases, kicking off with a large-scale real-time modern warfare tactics game as our major release. If that's too complex for you, then maybe a spot of fishing is more your thing? Alternatively, we have some first-person shooter action from Finland or a new Diablo-like game, if action is more your style. If that's not it, some 5v5 footie might be more your cup of tea, or how about a laid-back MMORPG? Don't forget the other four games that might tickle your fancy, as some of them are quite big releases as well.

Broken Arrow / This week's major release / Thursday 19 June
Broken Arrow is a large-scale real-time modern warfare tactics game that combines the complexity of joint-forces wargaming with action-packed real-time tactics gameplay. Featuring over 300 realistic military units and technologies, each battle offers an immersive experience with endless replayability. Steam link

AMD Namedrops EPYC "Venice" Zen 6 and EPYC "Verano" Zen 7 Server Processors

AMD at its 2025 Advancing AI event name-dropped its two next generations of EPYC server processors to succeed the current EPYC "Turin" powered by Zen 5 microarchitecture. 2026 will see AMD debut the Zen 6 microarchitecture, and its main workhorse for the server segment will be EPYC "Venice." This processor will likely see a generational increase in CPU core counts, increased IPC from the full-sized Zen 6 cores, support for newer ISA, and an updated I/O package. AMD is looking to pack "Venice" with up to 256 CPU cores per package.

AMD is looking to increase the CPU core count per CCD (CPU complex die) with "Zen 6." The company plans to build these CCDs on the 2 nm TSMC N2 process node. The sIOD (server I/O die) of "Venice" implements PCI-Express Gen 6 for a generational doubling in bandwidth to GPUs, SSDs, and NICs. AMD is also claiming memory bandwidth as high as 1.6 TB/s. There are a couple of ways to get there: increasing memory clock speeds, or giving the processor a 16-channel DDR5 memory interface, up from the current 12 channels. The company could also add support for multi-channel DIMM standards such as MR-DIMMs and MCR-DIMMs. All said and done, AMD is claiming a 70% increase in multithreaded performance over the current EPYC "Turin," which we assume compares the highest-performing part to its next-gen successor.
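That 1.6 TB/s figure is plausible with a wider interface running at MR-DIMM-class transfer rates. A rough sketch of the arithmetic follows; the MT/s values are illustrative assumptions, not confirmed "Venice" memory speeds:

```python
# Aggregate DRAM bandwidth ~= channels * 8 bytes per transfer * transfer rate (MT/s).
# The MT/s values below are illustrative assumptions, not confirmed "Venice" specs.
def ddr5_bandwidth_tb_s(channels: int, mt_s: int) -> float:
    bytes_per_transfer = 8  # 64-bit data path per DDR5 channel
    return channels * bytes_per_transfer * mt_s * 1e6 / 1e12

for channels, mt_s in [(12, 6400), (12, 8800), (16, 8800), (16, 12800)]:
    print(f"{channels}-channel DDR5-{mt_s}: {ddr5_bandwidth_tb_s(channels, mt_s):.2f} TB/s")

# Only the 16-channel configuration at MR-DIMM-class rates approaches the claimed
# 1.6 TB/s (16 x 8 B x 12,800 MT/s ~= 1.64 TB/s).
```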

Robust AI Demand Drives 6% QoQ Growth in Revenue for Top 10 Global IC Design Companies in 1Q25

TrendForce's latest investigations reveal that 1Q25 revenue for the global IC design industry reached US$77.4 billion, marking a 6% QoQ increase and setting a new record high. This growth was fueled by early stocking ahead of new U.S. tariffs on electronics and the ongoing construction of AI data centers around the world, which sustained strong chip demand despite the traditional off-season.

NVIDIA remained the top-ranking IC design company, with Q1 revenue surging to $42.3 billion—up 12% QoQ and 72% YoY—thanks to increasing shipments of its new Blackwell platform. Although its H20 chip is constrained by updated U.S. export controls and is expected to incur losses in Q2, the higher-margin Blackwell is poised to replace the Hopper platform gradually, cushioning the financial impact.

Compal Optimizes AI Workloads with AMD Instinct MI355X at AMD Advancing AI 2025 and International Supercomputing Conference 2025

As AI computing accelerates toward higher density and greater energy efficiency, Compal Electronics (Compal; Stock Ticker: 2324.TW), a global leader in IT and computing solutions, unveiled its latest high-performance server platform: the SG720-2A/OG720-2A, at both AMD Advancing AI 2025 in the U.S. and the International Supercomputing Conference (ISC) 2025 in Europe. It features the AMD Instinct MI355X GPU architecture and offers both single-phase and two-phase liquid cooling configurations, showcasing Compal's leadership in thermal innovation and system integration. Tailored for next-generation generative AI and large language model (LLM) training, the SG720-2A/OG720-2A delivers exceptional flexibility and scalability for modern data center operations, drawing significant attention across the industry.

With generative AI and LLMs driving increasingly intensive compute demands, enterprises are placing greater emphasis on infrastructure that offers both performance and adaptability. The SG720-2A/OG720-2A emerges as a robust solution, combining high-density GPU integration and flexible liquid cooling options, positioning itself as an ideal platform for next-generation AI training and inference workloads.

AMD Unveils Vision for an Open AI Ecosystem, Detailing New Silicon, Software and Systems at Advancing AI 2025

AMD delivered its comprehensive, end-to-end integrated AI platform vision and introduced its open, scalable rack-scale AI infrastructure built on industry standards at its 2025 Advancing AI event.

AMD and its partners showcased:
  • How they are building the open AI ecosystem with the new AMD Instinct MI350 Series accelerators
  • The continued growth of the AMD ROCm ecosystem
  • The company's powerful, new, open rack-scale designs and roadmap that bring leadership rack-scale AI performance beyond 2027

NVIDIA TensorRT Boosts Stable Diffusion 3.5 Performance on NVIDIA GeForce RTX and RTX PRO GPUs

Generative AI has reshaped how people create, imagine and interact with digital content. As AI models continue to grow in capability and complexity, they require more VRAM, or video random access memory. The base Stable Diffusion 3.5 Large model, for example, uses over 18 GB of VRAM - limiting the number of systems that can run it well. Quantizing the model allows noncritical layers to be removed or run at lower precision. NVIDIA GeForce RTX 40 Series and the Ada Lovelace generation of NVIDIA RTX PRO GPUs support FP8 quantization to help run these quantized models, and the latest-generation NVIDIA Blackwell GPUs also add support for FP4.

NVIDIA collaborated with Stability AI to quantize its latest model, Stable Diffusion (SD) 3.5 Large, to FP8 - reducing VRAM consumption by 40%. Further optimizations to SD3.5 Large and Medium with the NVIDIA TensorRT software development kit (SDK) double performance. In addition, TensorRT has been reimagined for RTX AI PCs, combining its industry-leading performance with just-in-time (JIT), on-device engine building and an 8x smaller package size for seamless AI deployment to more than 100 million RTX AI PCs. TensorRT for RTX is now available as a standalone SDK for developers.
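To see where the VRAM saving comes from, here is a hedged back-of-envelope estimate of the weight footprint at each precision. It assumes roughly 8 billion parameters for SD3.5 Large; real-world usage is higher because the text encoders, VAE and activations also occupy memory:

```python
# Back-of-envelope: weight memory = parameter count * bytes per parameter.
# Assumes ~8 billion parameters for SD3.5 Large; text encoders, the VAE and
# activations add to this, which is why total FP16 usage is reported at over 18 GB.
PARAMS = 8e9
BYTES_PER_PARAM = {"FP16": 2, "FP8": 1, "FP4": 0.5}

for fmt, nbytes in BYTES_PER_PARAM.items():
    print(f"{fmt}: ~{PARAMS * nbytes / 2**30:.1f} GiB of weights")

# FP16 ~14.9 GiB, FP8 ~7.5 GiB, FP4 ~3.7 GiB. Halving weight precision halves the
# weight footprint; the overall saving lands at roughly 40% because the other
# components do not shrink proportionally.
```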

TSMC Prepares "CoPoS": Next-Gen 310 × 310 mm Packages

As demand for ever-growing AI compute power continues to rise and manufacturing advanced nodes becomes more difficult, packaging is undergoing its golden era of development. Today's advanced accelerators often rely on TSMC's CoWoS modules, which are built on wafer cuts measuring no more than 120 × 150 mm in size. In response to the need for more space, TSMC has unveiled plans for CoPoS, or "Chips on Panel on Substrate," which could expand substrate dimensions to 310 × 310 mm and beyond. By shifting from round wafers to rectangular panels, CoPoS offers more than five times the usable area. This extra surface makes it possible to integrate additional high-bandwidth memory stacks, multiple I/O chiplets and compute dies in a single package. It also brings panel-level packaging (PLP) to the fore. Unlike wafer-level packaging (WLP), PLP assembles components on large, rectangular panels, delivering higher throughput and lower cost per unit, which makes high-volume production runs genuinely viable and allows faster iteration than WLP.
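The area claim checks out with simple arithmetic, taking the substrate dimensions quoted above:

```python
# Usable-area comparison between today's CoWoS substrate limit and a CoPoS panel,
# using the dimensions quoted above.
cowos_mm2 = 120 * 150   # 18,000 mm^2
copos_mm2 = 310 * 310   # 96,100 mm^2
print(f"CoWoS substrate: {cowos_mm2:,} mm^2")
print(f"CoPoS panel:     {copos_mm2:,} mm^2")
print(f"Ratio:           {copos_mm2 / cowos_mm2:.1f}x")  # ~5.3x, i.e. "more than five times"
```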

TSMC will establish a CoPoS pilot line in 2026 at its Visionchip subsidiary. In 2027, the pilot facility will focus on refining the process, to meet partner requirements by the end of the year. Mass production is projected to begin between the end of 2028 and early 2029 at TSMC's Chiayi AP7 campus. That site, chosen for its modern infrastructure and ample space, is also slated to host production of multi-chip modules and System-on-Wafer technologies. NVIDIA is expected to be the launch partner for CoPoS. The company plans to leverage the larger panel area to accommodate up to 12 HBM4 chips alongside several GPU chiplets, offering significant performance gains for AI workloads. At the same time, AMD and Broadcom will continue using TSMC's CoWoS-L and CoWoS-R variants for their high-end products. Beyond simply increasing size, CoPoS and PLP may work in tandem with other emerging advances, such as glass substrates and silicon photonics. If development proceeds as planned, the first CoPoS-enabled devices could reach the market by late 2029.

MAINGEAR Unleashes ULTIMA 18 - The Ultimate 18" 4K Gaming Laptop

MAINGEAR, the leader in premium-quality, high-performance gaming PCs, today announced its most powerful laptop to date, the 18-inch ULTIMA 18. Developed in collaboration with CLEVO, ULTIMA 18 redefines what a gaming laptop can be by offering desktop-level specs, like a 4K@200 Hz G-SYNC display, an Intel Core Ultra 9 275HX processor, and up to an NVIDIA GeForce RTX 5090 mobile GPU, all inside a sleek chassis outfitted with a metal lid and palm rest.

Designed for elite gamers and creators who demand top-tier performance without compromise, ULTIMA 18 is MAINGEAR's first laptop to support modern dual-channel DDR5 memory, PCIe Gen 5 SSDs, dual Thunderbolt 5 ports, and Wi-Fi 7. Whether plugged in or on the move, this system delivers unprecedented power, quiet efficiency, and immersive visuals for the most demanding workloads and graphics-rich game titles.

Synopsys Achieves PCIe 6.x Interoperability Milestone with Broadcom's PEX90000 Series Switch

Synopsys, Inc. today announced that its collaboration with Broadcom has achieved interoperability between Synopsys' PCIe 6.x IP solution and Broadcom's PEX90000 series switch. As a cornerstone of next-generation AI infrastructures, PCIe switches play a critical role in enabling the scalability required to meet the demands of modern AI workloads. This milestone demonstrates that future products integrating PCIe 6.x solutions from Synopsys and Broadcom will operate seamlessly within the ecosystem, reducing design risk and accelerating time-to-market for high-performance computing and AI data center systems.

The interoperability demonstration with Broadcom features a Synopsys PCIe 6.x IP solution, including PHY and controller, operating as a root complex and an endpoint running at 64 GT/s with Broadcom's PEX90000 switch. Synopsys will showcase this interoperability demonstration at PCI-SIG DevCon 2025 at booth #13, taking place June 11 and 12, where attendees can see a variety of successful Synopsys PCIe 7.0 and PCIe 6.x IP interoperability demonstrations in both the Synopsys booth and partners' booths.
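For context on what 64 GT/s buys at the link level, here is a rough sketch of raw per-direction bandwidth; it ignores FLIT framing and other protocol overhead, so real-world throughput is somewhat lower:

```python
# Raw PCIe 6.x bandwidth per direction: lanes * 64 Gbit/s per lane / 8 bits per byte.
# FLIT and protocol overhead are ignored, so these are upper bounds.
def pcie6_raw_gb_s(lanes: int, gt_s: int = 64) -> float:
    return lanes * gt_s / 8

for lanes in (4, 8, 16):
    print(f"x{lanes}: ~{pcie6_raw_gb_s(lanes):.0f} GB/s per direction")

# x16 works out to ~128 GB/s per direction, double PCIe 5.0, which is what makes
# these switches attractive as a scale-out fabric for AI systems.
```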

AMD Instinct MI355X Draws up to 1,400 Watts in OAM Form Factor

Tomorrow evening, AMD will host its "Advancing AI" livestream to introduce the Instinct MI350 series, a new line of GPU accelerators designed for large-scale AI training and inference. First shown in prototype form at ISC 2025 in Hamburg just a day ago, each MI350 card features 288 GB of HBM3E memory, delivering up to 8 TB/s of sustained bandwidth. Customers can choose between the single-card MI350X and the higher-clocked MI355X, or opt for a full eight-GPU platform that aggregates over 2.3 TB of memory. Both chips are built on the CDNA 4 architecture, which now supports four precision formats: FP16, FP8, FP6, and FP4. The addition of FP6 and FP4 is designed to boost throughput in modern AI workloads, where tomorrow's models with tens of trillions of parameters will be trained in these compact formats.

In half-precision tests, the MI350X achieves 4.6 PetaFLOPS on its own and 36.8 PetaFLOPS in the eight-GPU platform, while the MI355X surpasses those numbers, reaching 5.03 PetaFLOPS and just over 40 PetaFLOPS. AMD is also aiming to improve energy efficiency by a factor of thirty compared with its previous generation. The MI350X card runs within a 1,000 Watt power envelope and relies on air cooling, whereas the MI355X steps up to 1,400 Watts and is intended for direct-liquid cooling setups. That 400 Watt increase puts it on par with NVIDIA's upcoming GB300 "Grace Blackwell Ultra" superchip, which is also a 1,400 W design. With memory capacity, raw compute, and power efficiency all pushed to new heights, the question remains whether real-world benchmarks will match these ambitious specifications. AMD now only lacks platform scaling beyond eight GPUs, which the Instinct MI400 series will address.
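The platform-level figures are straightforward multiples of the single-card numbers; a quick sketch using only the values quoted in this post:

```python
# Eight-GPU platform aggregates derived from the single-card figures above
# (simple multiplication; interconnect and scaling overheads are ignored).
CARDS = 8
mi350x_fp16_pflops = 4.6
mi355x_fp16_pflops = 5.03
hbm3e_per_card_gb = 288

print(f"MI350X platform: {CARDS * mi350x_fp16_pflops:.1f} PFLOPS FP16")  # 36.8
print(f"MI355X platform: {CARDS * mi355x_fp16_pflops:.1f} PFLOPS FP16")  # ~40.2
print(f"Pooled HBM3E:    {CARDS * hbm3e_per_card_gb / 1000:.3f} TB")     # 2.304 TB (~2.3 TB)
```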

NVIDIA NVL72 GB200 Systems Accelerate the Journey to Useful Quantum Computing

The integration of quantum processors into tomorrow's supercomputers promises to dramatically expand the problems that can be addressed with compute—revolutionizing industries including drug and materials development.

In addition to being part of the vision for tomorrow's hybrid quantum-classical supercomputers, accelerated computing is dramatically advancing the work quantum researchers and developers are already doing to achieve that vision. And in today's development of tomorrow's quantum technology, NVIDIA GB200 NVL72 systems and their fifth-generation multinode NVIDIA NVLink interconnect capabilities have emerged as the leading architecture.

Europe Builds AI Infrastructure With NVIDIA to Fuel Region's Next Industrial Transformation

NVIDIA today announced it is working with European nations, and technology and industry leaders, to build NVIDIA Blackwell AI infrastructure that will strengthen digital sovereignty, support economic growth and position the continent as a leader in the AI industrial revolution. France, Italy, Spain and the U.K. are among the nations building domestic AI infrastructure with an ecosystem of technology and cloud providers, including Domyn, Mistral AI, Nebius and Nscale, and telecommunications providers, including Orange, Swisscom, Telefónica and Telenor.

These deployments will deliver more than 3,000 exaflops of NVIDIA Blackwell compute resources for sovereign AI, enabling European enterprises, startups and public sector organizations to securely develop, train and deploy agentic and physical AI applications. NVIDIA is establishing and expanding AI technology centers in Germany, Sweden, Italy, Spain, the U.K. and Finland. These centers build on NVIDIA's history of collaborating with academic institutions and industry through the NVIDIA AI Technology Center program and NVIDIA Deep Learning Institute to develop the AI workforce and scientific discovery throughout the regions.

NVIDIA Partners With Europe Model Builders and Cloud Providers to Accelerate Region's Leap Into AI

NVIDIA GTC Paris at VivaTech -- NVIDIA today announced that it is teaming with model builders and cloud providers across Europe and the Middle East to optimize sovereign large language models (LLMs), providing a springboard to accelerate enterprise AI adoption for the region's industries.

Model builders and AI consortiums Barcelona Supercomputing Center (BSC), Bielik.AI, Dicta, H Company, Domyn, LightOn, the National Academic Infrastructure for Supercomputing in Sweden (NAISS) together with KBLab at the National Library of Sweden, the Slovak Republic, the Technology Innovation Institute (TII), University College London, the University of Ljubljana and UTTER are teaming with NVIDIA to optimize their models with NVIDIA Nemotron techniques to maximize cost efficiency and accuracy for enterprise AI workloads, including agentic AI.