News Posts matching #training


This Week in Gaming (Week 26)

Welcome to the halfway point of 2025. It's the slowest week of new releases we've had in a while, with this week's major release being a JRPG from SEGA. That is followed by another Asian game in a very different style (some zombie killing, but in a cute Asian way), another RPG, this time about teenagers, a 25th anniversary remaster of a classic, and finally a 2.5D platformer.

Persona5: The Phantom X / This week's major release / Thursday 26 June
The Persona series, which has sold over 23.5 million copies worldwide, welcomes its first mobile/PC release with its newest entry. Featuring a brand-new story and a cast of captivating new characters, the world of P5X offers an experience that Persona5 fans and newcomers alike can enjoy. Steam link

Compal Optimizes AI Workloads with AMD Instinct MI355X at AMD Advancing AI 2025 and International Supercomputing Conference 2025

As AI computing accelerates toward higher density and greater energy efficiency, Compal Electronics (Compal; Stock Ticker: 2324.TW), a global leader in IT and computing solutions, unveiled its latest high-performance server platform: the SG720-2A/OG720-2A, at both AMD Advancing AI 2025 in the U.S. and the International Supercomputing Conference (ISC) 2025 in Europe. It features the AMD Instinct MI355X GPU architecture and offers both single-phase and two-phase liquid cooling configurations, showcasing Compal's leadership in thermal innovation and system integration. Tailored for next-generation generative AI and large language model (LLM) training, the SG720-2A/OG720-2A delivers exceptional flexibility and scalability for modern data center operations, drawing significant attention across the industry.

With generative AI and LLMs driving increasingly intensive compute demands, enterprises are placing greater emphasis on infrastructure that offers both performance and adaptability. The SG720-2A/OG720-2A answers that need by combining high-density GPU integration with flexible liquid cooling options, making it a strong platform for next-generation AI training and inference workloads.

AMD Instinct MI355X Draws up to 1,400 Watts in OAM Form Factor

Tomorrow evening, AMD will host its "Advancing AI" livestream to introduce the Instinct MI350 series, a new line of GPU accelerators designed for large-scale AI training and inference. First shown in prototype form at ISC 2025 in Hamburg just a day ago, each MI350 card features 288 GB of HBM3E memory, delivering up to 8 TB/s of sustained bandwidth. Customers can choose between the single-card MI350X and the higher-clocked MI355X, or opt for a full eight-GPU platform that aggregates over 2.3 TB of memory. Both chips are built on the CDNA 4 architecture, which now supports four different precision formats: FP16, FP8, FP6, and FP4. The addition of FP6 and FP4 is designed to boost throughput in modern AI workloads, as tomorrow's models with tens of trillions of parameters are expected to be trained at these lower precisions.

In half-precision tests, the MI350X achieves 4.6 PetaFLOPS on its own and 36.8 PetaFLOPS in the eight-GPU platform, while the MI355X surpasses those numbers, reaching 5.03 PetaFLOPS and just over 40 PetaFLOPS. AMD is also aiming to improve energy efficiency by a factor of thirty compared with its previous generation. The MI350X card runs within a 1,000 Watt power envelope and relies on air cooling, whereas the MI355X steps up to 1,400 Watts and is intended for direct-liquid cooling setups. That 400 Watt increase puts it on par with NVIDIA's upcoming GB300 "Grace Blackwell Ultra" superchip, also a 1,400 W design. With memory capacity, raw compute, and power efficiency all pushed to new heights, the question remains whether real-world benchmarks will match these ambitious specifications. AMD now lacks only platform scaling beyond eight GPUs, which the Instinct MI400 series will address.
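For a rough sense of what those figures imply, here is a small back-of-the-envelope sketch. All numbers are the ones quoted above; the weights-only memory math is our own simplification and ignores KV caches, activations, and overhead.

```python
# Back-of-the-envelope check on the quoted MI350-series figures.
# Numbers come from the article; real-world utilization will differ.

HBM_PER_GPU_GB = 288          # HBM3E per card
GPUS_PER_PLATFORM = 8

platform_memory_tb = HBM_PER_GPU_GB * GPUS_PER_PLATFORM / 1000
print(f"8-GPU platform memory: {platform_memory_tb:.2f} TB")   # ~2.30 TB

# Bytes per parameter at the supported CDNA 4 precisions
bytes_per_param = {"FP16": 2.0, "FP8": 1.0, "FP6": 0.75, "FP4": 0.5}
for fmt, b in bytes_per_param.items():
    params_t = HBM_PER_GPU_GB / b / 1000   # trillions of params per card
    print(f"{fmt}: ~{params_t:.3f}T parameters fit in one card's HBM (weights only)")

# Quoted half-precision throughput
mi350x_pf, mi355x_pf = 4.6, 5.03
print(f"MI355X single-card advantage: {mi355x_pf / mi350x_pf - 1:.1%}")  # ~9%
```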

NVIDIA NVL72 GB200 Systems Accelerate the Journey to Useful Quantum Computing

The integration of quantum processors into tomorrow's supercomputers promises to dramatically expand the problems that can be addressed with compute—revolutionizing industries including drug and materials development.

In addition to being part of the vision for tomorrow's hybrid quantum-classical supercomputers, accelerated computing is dramatically advancing the work quantum researchers and developers are already doing to achieve that vision. And in today's development of tomorrow's quantum technology, NVIDIA GB200 NVL72 systems and their fifth-generation multinode NVIDIA NVLink interconnect capabilities have emerged as the leading architecture.

Europe Builds AI Infrastructure With NVIDIA to Fuel Region's Next Industrial Transformation

NVIDIA today announced it is working with European nations, and technology and industry leaders, to build NVIDIA Blackwell AI infrastructure that will strengthen digital sovereignty, support economic growth and position the continent as a leader in the AI industrial revolution. France, Italy, Spain and the U.K. are among the nations building domestic AI infrastructure with an ecosystem of technology and cloud providers, including Domyn, Mistral AI, Nebius and Nscale, and telecommunications providers, including Orange, Swisscom, Telefónica and Telenor.

These deployments will deliver more than 3,000 exaflops of NVIDIA Blackwell compute resources for sovereign AI, enabling European enterprises, startups and public sector organizations to securely develop, train and deploy agentic and physical AI applications. NVIDIA is establishing and expanding AI technology centers in Germany, Sweden, Italy, Spain, the U.K. and Finland. These centers build on NVIDIA's history of collaborating with academic institutions and industry through the NVIDIA AI Technology Center program and NVIDIA Deep Learning Institute to develop the AI workforce and scientific discovery throughout the regions.

NVIDIA Partners With European Model Builders and Cloud Providers to Accelerate Region's Leap Into AI

NVIDIA GTC Paris at VivaTech -- NVIDIA today announced that it is teaming with model builders and cloud providers across Europe and the Middle East to optimize sovereign large language models (LLMs), providing a springboard to accelerate enterprise AI adoption for the region's industries.

Model builders and AI consortiums Barcelona Supercomputing Center (BSC), Bielik.AI, Dicta, H Company, Domyn, LightOn, the National Academic Infrastructure for Supercomputing in Sweden (NAISS) together with KBLab at the National Library of Sweden, the Slovak Republic, the Technology Innovation Institute (TII), University College London, the University of Ljubljana and UTTER are teaming with NVIDIA to optimize their models with NVIDIA Nemotron techniques to maximize cost efficiency and accuracy for enterprise AI workloads, including agentic AI.

MSI Powers AI's Next Leap for Enterprises at ISC 2025

MSI, a global leader in high-performance server solutions, is showcasing its enterprise-grade, high-performance server platforms at ISC 2025, taking place June 10-12 at booth #E12. Built on standardized and modular architectures, MSI's AI servers are designed to power next-generation AI and accelerated computing workloads, enabling enterprises to rapidly advance their AI innovations.

"As AI workloads continue to grow and evolve toward inference-driven applications, we're seeing a significant shift in how enterprises approach AI deployment," said Danny Hsu, General Manager of Enterprise Platform Solutions at MSI. "With modular and standards-based architectures, enterprise data centers can now adopt AI technologies more quickly and cost-effectively than ever before. This marks a new era where AI is not only powerful but also increasingly accessible to businesses of all sizes.

ASUS Announces Key Milestone with Nebius and Showcases NVIDIA GB300 NVL72 System at GTC Paris 2025

ASUS today joined GTC Paris at VivaTech 2025 as a Gold Sponsor, highlighting its latest portfolio of AI infrastructure solutions and reinforcing its commitment to advancing the AI Factory vision with a full range of NVIDIA Blackwell Ultra solutions, delivering breakthrough performance from large-scale datacenter to personal desktop.

ASUS is also excited to announce a transformative milestone in its partnership with Nebius. Together, the two companies are enabling a new era of AI innovation built on NVIDIA's advanced platforms. Building on the success of the NVIDIA GB200 NVL72 platform deployment, ASUS and Nebius are now moving forward with strategic collaborations featuring the next-generation NVIDIA GB300 NVL72 platform. This ongoing initiative underscores ASUS's role as a key enabler in AI infrastructure, committed to delivering scalable, high-performance solutions that help enterprises accelerate AI adoption and innovation.

NVIDIA Blackwell a Focal Point in AI Factories Built by Dell Technologies

Over a century ago, Henry Ford pioneered the mass production of cars and engines to provide transportation at an affordable price. Today, the technology industry manufactures the engines for a new kind of factory: those that produce intelligence. As companies and countries increasingly focus on AI, and move from experimentation to implementation, the demand for AI technologies continues to grow exponentially. Leading system builders are racing to ramp up production of AI servers, the engines of AI factories, to meet the world's exploding demand for intelligence and growth. Dell Technologies is a leader in this renaissance. Dell and NVIDIA have partnered for decades and continue to push the pace of innovation. In its last earnings call, Dell projected that its AI server business will grow to at least $15 billion this year.

"We're on a mission to bring AI to millions of customers around the world," said Michael Dell, chairman and chief executive officer, Dell Technologies, in a recent announcement at Dell Technologies World. "With the Dell AI Factory with NVIDIA, enterprises can manage the entire AI lifecycle across use cases, from training to deployment, at any scale." The latest Dell AI servers, powered by NVIDIA Blackwell, offer up to 50x more AI reasoning inference output and 5x improvement in throughput compared with the Hopper platform. Customers use them to generate tokens for new AI applications that will help solve some of the world's biggest challenges, from disease prevention to advanced manufacturing.

Red Hat & AMD Strengthen Strategic Collaboration - Leading to More Efficient GenAI

Red Hat, the world's leading provider of open source solutions, and AMD today announced a strategic collaboration to propel AI capabilities and optimize virtualized infrastructure. With this deepened alliance, Red Hat and AMD will expand customer choice across the hybrid cloud, from deploying optimized, efficient AI models to more cost-effectively modernizing traditional virtual machines (VMs). As workload demand and diversity continue to rise with the introduction of AI, organizations must have the capacity and resources to meet these escalating requirements. The average datacenter, however, is dedicated primarily to traditional IT systems, leaving little room to support intensive workloads such as AI. To answer this need, Red Hat and AMD are bringing together the power of Red Hat's industry-leading open source solutions with the comprehensive portfolio of AMD high-performance computing architectures.

AMD and Red Hat: Driving to more efficient generative AI
Red Hat and AMD are combining the power of Red Hat AI with the AMD portfolio of x86-based processors and GPU architectures to support optimized, cost-efficient and production-ready environments for AI-enabled workloads. AMD Instinct GPUs are now fully enabled on Red Hat OpenShift AI, empowering customers with the high-performance processing power necessary for AI deployments across the hybrid cloud without extreme resource requirements. In addition, using AMD Instinct MI300X GPUs with Red Hat Enterprise Linux AI, Red Hat and AMD conducted testing on Microsoft Azure ND MI300X v5 to successfully demonstrate AI inferencing for scaling small language models (SLMs) as well as large language models (LLMs) deployed across multiple GPUs on a single VM, reducing the need to deploy across multiple VMs and lowering costs.
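The announcement doesn't name the exact serving stack used in that test, but as a rough illustration of multi-GPU, single-VM inference, here is a minimal sketch using vLLM with tensor parallelism. The model name and GPU count are illustrative assumptions, not the tested configuration.

```python
# Minimal sketch: sharding one LLM across several GPUs in a single VM
# with vLLM tensor parallelism. Model choice and GPU count are assumptions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-70B-Instruct",  # hypothetical model choice
    tensor_parallel_size=8,                      # shard across 8 GPUs in one VM
)
params = SamplingParams(max_tokens=128, temperature=0.7)
outputs = llm.generate(["Summarize the benefits of hybrid cloud AI."], params)
print(outputs[0].outputs[0].text)
```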

Marvell Custom Cloud Platform Upgraded with NVIDIA NVLink Fusion Tech

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, today announced it is teaming with NVIDIA to offer NVLink Fusion technology to customers employing Marvell custom cloud platform silicon. NVLink Fusion is an innovative new offering from NVIDIA for integrating custom XPU silicon with NVIDIA NVLink connectivity, rack-scale hardware architecture, software and other technology, providing customers with greater flexibility and choice in developing next-generation AI infrastructure.

The Marvell custom platform strategy seeks to deliver breakthrough results through unique semiconductor designs and innovative approaches. By combining expertise in system and semiconductor design, advanced process manufacturing, and a comprehensive portfolio of semiconductor platform solutions and IP—including electrical and optical serializer/deserializers (SerDes), die-to-die interconnects for 2D and 3D devices, advanced packaging, silicon photonics, co-packaged copper, custom high-bandwidth memory (HBM), system-on-chip (SoC) fabrics, optical IO, and compute fabric interfaces such as PCIe Gen 7—Marvell is able to create platforms in collaboration with customers that transform infrastructure performance, efficiency and value.

NVIDIA Discusses the Revenue-Generating Potential of AI Factories

AI is creating value for everyone—from researchers in drug discovery to quantitative analysts navigating financial market changes. The faster an AI system can produce tokens, a unit of data used to string together outputs, the greater its impact. That's why AI factories are key, providing the most efficient path from "time to first token" to "time to first value." AI factories are redefining the economics of modern infrastructure. They produce intelligence by transforming data into valuable outputs—whether tokens, predictions, images, proteins or other forms—at massive scale.

They help enhance three key aspects of the AI journey—data ingestion, model training and high-volume inference. AI factories are being built to generate tokens faster and more accurately, using three critical technology stacks: AI models, accelerated computing infrastructure and enterprise-grade software. Read on to learn how AI factories are helping enterprises and organizations around the world convert the most valuable digital commodity—data—into revenue potential.
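As a concrete illustration of the "time to first token" metric mentioned above, here is a small sketch that measures it against any OpenAI-compatible streaming endpoint. The base URL and model name are placeholders, not details taken from the article.

```python
# Sketch: measure time to first token (TTFT) and sustained chunk rate
# against an OpenAI-compatible endpoint. URL/model are placeholders.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

start = time.perf_counter()
first_token_time = None
chunks = 0

stream = client.chat.completions.create(
    model="example-model",
    messages=[{"role": "user", "content": "Explain AI factories in one paragraph."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content if chunk.choices else None
    if delta:
        if first_token_time is None:
            first_token_time = time.perf_counter()
        chunks += 1

total = time.perf_counter() - start
print(f"time to first token: {first_token_time - start:.3f} s")
print(f"throughput: ~{chunks / total:.1f} chunks/s over {total:.2f} s")
```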

Tencent President Discusses Significant Stockpiling of AI GPUs - Open to Future Adoption of Native Designs

Martin Lau, President of Tencent, has divulged that his company has accumulated a "pretty strong stockpile" of NVIDIA AI chips. In a mid-week earnings call, the Chinese executive reckoned that this surplus will come in handy—upon the company unleashing its full-on upcoming "AI strategy." Lau was responding to a question regarding ripples caused by a recent introduction of revised licensing requirements for "high-end GPUs." His lengthy reply seems to align with "leaked April time" information; when industry analysts theorized a massive $16 billion spend—reportedly, big Chinese tech firms had splurged out with swift acquisitions of NVIDIA H20 GPUs. Lau commented on present day conditions: "it's actually a very dynamic situation right now. Since the last earnings call, we have seen an H20 ban, and then after that there was the BIS new guidelines that just came in overnight...If you look at the allocation of the usage of these chips, obviously they'll be used for the applications that will generate immediate returns for us. For example, in the advertising business as well as content recommendation product, where we actually would be using a lot of these GPUs to generate results and generate returns for us. Secondly, in terms of the training of our large language models, they will be of the next priority and the training actually requires higher-end chips."

Team Green's engineering team has likely been strong-armed into designing further compromised hardware; as "exclusive" sanction-conforming options for important enterprise customers in China. Tencent seems to have enough pre-ban specimens to tide things over, for a while. The firm's president envisioned a comfortable position, for the foreseeable future: "over the past few months, we (started) to move off the concept or the belief of American tech companies—which they call 'the scaling law'—which required continuous expansion of the training cluster. And now we can see even with a smaller cluster you can actually achieve very good training results. And there's a lot of potential that we can get on the post-training side which do not necessarily meet very large clusters. We should have enough high-end chips to continue our training of models for a few more generations going forward." Huawei's controversial Ascend 910C AI accelerator seems to be the top alternative contender; tech watchdogs believe that this design's fortunes will be closely tied to the rising dominance of DeepSeek. Fairly recent leaks have indicated impressive progress being made within China's domestic AI accelerator infrastructure.

IBM Announces Granite 4.0 Tiny Preview - an Extremely Compact & Compute Efficient AI Model

We're excited to present IBM Granite 4.0 Tiny Preview, a preliminary version of the smallest model in the upcoming Granite 4.0 family of language models, to the open source community. Granite 4.0 Tiny Preview is extremely compact and compute efficient: at FP8 precision, several concurrent sessions performing long context (128K) tasks can be run on consumer grade hardware, including GPUs commonly available for under $350 USD. Though the model is only partially trained—it has only seen 2.5T of a planned 15T or more training tokens—it already offers performance rivaling that of IBM Granite 3.3 2B Instruct despite fewer active parameters and a roughly 72% reduction in memory requirements. We anticipate Granite 4.0 Tiny's performance to be on par with that of Granite 3.3 8B Instruct by the time it has completed training and post-training.

As its name suggests, Granite 4.0 Tiny will be among the smallest offerings in the Granite 4.0 model family. It will be officially released this summer as part of a model lineup that also includes Granite 4.0 Small and Granite 4.0 Medium. Granite 4.0 continues IBM's firm commitment to making efficiency and practicality the cornerstone of its enterprise LLM development. This preliminary version of Granite 4.0 Tiny is now available on Hugging Face—though we do not yet recommend the preview version for enterprise use—under a standard Apache 2.0 license. Our intent is to allow even GPU-poor developers to experiment and tinker with the model on consumer-grade GPUs. The model's novel architecture is pending support in Hugging Face transformers and vLLM, which we anticipate will be completed shortly for both projects. Official support to run this model locally through platform partners including Ollama and LMStudio is expected in time for the full model release later this summer.
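Since transformers support for the new architecture is still pending, the following is only a hedged sketch of what loading the preview should look like once that support lands. The repo id is an assumption based on the naming above; check the Hugging Face model page for the actual identifier.

```python
# Sketch: loading the Granite 4.0 Tiny Preview with Hugging Face transformers
# once architecture support lands. Repo id is an assumed name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-4.0-tiny-preview"  # assumed repo id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # may also require trust_remote_code=True pre-support
)

inputs = tok("Briefly explain FP8 quantization.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```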

Oracle Cloud Infrastructure Bolstered by Thousands of NVIDIA Blackwell GPUs

Oracle has stood up and optimized its first wave of liquid-cooled NVIDIA GB200 NVL72 racks in its data centers. Thousands of NVIDIA Blackwell GPUs are now being deployed and ready for customer use on NVIDIA DGX Cloud and Oracle Cloud Infrastructure (OCI) to develop and run next-generation reasoning models and AI agents. Oracle's state-of-the-art GB200 deployment includes high-speed NVIDIA Quantum-2 InfiniBand and NVIDIA Spectrum-X Ethernet networking to enable scalable, low-latency performance, as well as a full stack of software and database integrations from NVIDIA and OCI.

OCI, one of the world's largest and fastest-growing cloud service providers, is among the first to deploy NVIDIA GB200 NVL72 systems. The company has ambitious plans to build one of the world's largest Blackwell clusters. OCI Superclusters will scale beyond 100,000 NVIDIA Blackwell GPUs to meet the world's skyrocketing need for inference tokens and accelerated computing. The torrid pace of AI innovation continues as several companies including OpenAI have released new reasoning models in the past few weeks.

MSI Presenting AI's Next Leap at Japan IT Week Spring 2025

MSI, a leading global provider of high-performance server solutions, is bringing AI-driven innovation to Japan IT Week Spring 2025 at Booth #21-2 with high-performance server platforms built for next-generation AI and cloud computing workloads. MSI's NVIDIA MGX AI Servers deliver modular GPU-accelerated computing to optimize AI training and inference, while the Core Compute line of Multi-Node Servers maximizes compute density and efficiency for AI inference and cloud service provider workloads. MSI's Open Compute line of ORv3 Servers enhances scalability and thermal efficiency in hyperscale AI deployments. MSI's Enterprise Servers provide balanced compute, storage, and networking for seamless AI workloads across cloud and edge. With deep expertise in system integration and AI-driven infrastructure, MSI is advancing the next generation of intelligent computing solutions to power AI's next leap.

"AI's advancement hinges on performance efficiency, compute density, and workload scalability. MSI's server platforms are engineered to accelerate model training, optimize inference, and maximize resource utilization—ensuring enterprises have the processing power to turn AI potential into real-world impact," said Danny Hsu, General Manager of MSI Enterprise Platform Solutions.

NVIDIA Blackwell Platform Boosts Water Efficiency by Over 300x - "Chill Factor" for AI Infrastructure

Traditionally, data centers have relied on air cooling, where mechanical chillers circulate chilled air to absorb heat from servers, helping them maintain optimal conditions. But as AI models increase in size, and the use of AI reasoning models rises, maintaining those optimal conditions is not only getting harder and more expensive, but more energy-intensive. While data centers once operated at 20 kW per rack, today's hyperscale facilities can support over 135 kW per rack, making it nearly an order of magnitude harder to dissipate the heat generated by high-density racks. To keep AI servers running at peak performance, a new approach is needed for efficiency and scalability.

One key solution is liquid cooling—by reducing dependence on chillers and enabling more efficient heat rejection, liquid cooling is driving the next generation of high-performance, energy-efficient AI infrastructure. The NVIDIA GB200 NVL72 and the NVIDIA GB300 NVL72 are rack-scale, liquid-cooled systems designed to handle the demanding tasks of trillion-parameter large language model inference. Their architecture is also specifically optimized for test-time scaling accuracy and performance, making it an ideal choice for running AI reasoning models while efficiently managing energy costs and heat.

Huawei Prepares 6 nm Ascend 920C Accelerator: 900 TeraFLOPS, 4000 GB/s HBM3

Huawei recently revealed that its CloudMatrix 384 AI super node cluster can outperform NVIDIA's GB200 NVL72 in standard benchmarks, even though it consumes more power per performance unit. That system relies on Huawei's current Ascend 910C accelerators, which deliver strong raw compute performance but lag behind in efficiency metrics. To address this gap, Huawei is preparing the Ascend 920 family, with the training‑focused Ascend 920C built on SMIC's 6 nm process. According to DigiTimes, each 920C card will deliver more than 900 TeraFLOPS of BF16 half-precision performance. It also upgrades memory to HBM3 modules, providing 4,000 GB/s of bandwidth, which is up from the Ascend 910C's eight HBM2E stacks and 3,200 GB/s.

The existing Ascend 910C peaks at 780 TeraFLOPS in BF16 operations and uses a chip‑to‑chip interconnect bandwidth of 400 GB/s. Its packaging limits that bandwidth, but it still supports high‑speed communication between nodes in ultra‑dense AI clusters. Huawei will retain the chiplet‑based design in the 920C and refine the tensor acceleration engines for Transformer and Mixture‑of‑Experts models. Internal projections estimate that overall training efficiency on the 920C will improve by 30-40 percent compared to the 910C. This should narrow the performance‑per‑watt difference against competitors' solutions. In terms of system integration, the Ascend 920C will support PCIe 5.0 and next‑generation high‑throughput interconnect protocols. These features aim to improve resource scheduling and reduce latency in super node deployments, where tight node‑to‑node synchronization is critical. Huawei has not announced a firm release date for the Ascend 920C, but DigiTimes sources claim that it will enter mass production in the second half of 2025, which could mean just a few months from now.
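To put the quoted 910C and 920C figures side by side, here is a quick comparison sketch. The numbers are the ones cited above; the efficiency projection is Huawei's own, and the arithmetic-intensity line is a standard roofline-style estimate, not a vendor claim.

```python
# Quick comparison of the quoted Ascend 910C vs. 920C figures.
chips = {
    "Ascend 910C": {"bf16_tflops": 780, "hbm_gbps": 3200},
    "Ascend 920C": {"bf16_tflops": 900, "hbm_gbps": 4000},
}
old, new = chips["Ascend 910C"], chips["Ascend 920C"]
print(f"compute uplift:   {new['bf16_tflops'] / old['bf16_tflops'] - 1:.0%}")  # ~15%
print(f"bandwidth uplift: {new['hbm_gbps'] / old['hbm_gbps'] - 1:.0%}")        # 25%

# Arithmetic intensity needed to stay compute-bound (FLOPs per byte of HBM traffic)
for name, c in chips.items():
    intensity = c["bf16_tflops"] * 1e12 / (c["hbm_gbps"] * 1e9)
    print(f"{name}: ~{intensity:.0f} FLOPs/byte to saturate compute")
```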

MangoBoost Achieves Record-Breaking MLPerf Inference v5.0 Results with AMD Instinct MI300X

MangoBoost, a provider of cutting-edge system solutions designed to maximize AI data center efficiency, has set a new industry benchmark with its latest MLPerf Inference v5.0 submission. The company's Mango LLMBoost AI Enterprise MLOps software has demonstrated unparalleled performance on AMD Instinct MI300X GPUs, delivering the highest-ever recorded results for Llama2-70B in the offline inference category. This milestone marks the first-ever multi-node MLPerf inference result on AMD Instinct MI300X GPUs. By harnessing the power of 32 MI300X GPUs across four server nodes, Mango LLMBoost has surpassed all previous MLPerf inference results, including those from competitors using NVIDIA H100 GPUs.

Unmatched Performance and Cost Efficiency
MangoBoost's MLPerf submission demonstrates a 24% performance advantage over the best-published MLPerf result from Juniper Networks utilizing 32 NVIDIA H100 GPUs. Mango LLMBoost achieved 103,182 tokens per second (TPS) in the offline scenario and 93,039 TPS in the server scenario on AMD MI300X GPUs, outperforming the previous best result of 82,749 TPS on NVIDIA H100 GPUs. In addition to superior performance, Mango LLMBoost + MI300X offers significant cost advantages. With AMD MI300X GPUs priced between $15,000 and $17,000—compared to the $32,000-$40,000 cost of NVIDIA H100 GPUs (source: Tom's Hardware—H100 vs. MI300X Pricing)—Mango LLMBoost delivers up to 62% cost savings while maintaining industry-leading inference throughput.
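Taking the throughput and price figures quoted above at face value, a rough cost-efficiency comparison looks like this. GPU street prices vary widely; the sketch simply uses the midpoints of the article's ranges.

```python
# Rough throughput-per-dollar comparison using the figures quoted above.
mi300x = {"tps": 103_182, "gpus": 32, "price": (15_000 + 17_000) / 2}
h100   = {"tps": 82_749,  "gpus": 32, "price": (32_000 + 40_000) / 2}

for name, s in (("MI300X", mi300x), ("H100", h100)):
    cluster_cost = s["gpus"] * s["price"]
    print(f"{name}: {s['tps'] / cluster_cost:.3f} offline TPS per dollar of GPU spend")

# ~25% on these raw numbers; the article's headline figure is 24%
print(f"offline speedup: {mi300x['tps'] / h100['tps'] - 1:.0%}")
```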

Vietnamese Store Assembles AI Server, Uses Seven GIGABYTE RTX 5090 GAMING OC Cards

I_Leak_VN, a Vietnamese PC hardware influencer/leaker, reckons that the region's first GeForce RTX 5090-based "AI/mining/scalper" rig has just emerged. Earlier today, their social media post provided an informative look at a local shop's "Training AI: X7 RTX 5090 32G" build. Apparently, the retail outlet assembled this monstrous setup for an important customer. A Nguyễn Công PC employee sent personal thanks to GIGABYTE Vietnam for supplying seven GeForce RTX 5090 GAMING OC graphics cards. As showcased in uploaded photos, these highly prized units were placed neatly in a row as part of an airy, open-plan system. After inspecting the store's heavily watermarked shots, Western media outlets have (visually) compared the "Training AI: X7" rig to crypto mining builds of a certain vintage.

Tom's Hardware spotted multiple Super Flower Leadex 2000 W PSUs, providing sufficient juice to a system that "can easily be valued at over $30,000, considering these GPUs go for $3500-$4000 on a good day." Wccftech's report extended coverage to Nguyễn Công PC's other AI offerings, mainly more traditional PC builds that utilize dual MSI GeForce RTX 5090 card setups; such a dual rig likely costs around $10,000. The shop's selection of gaming-grade hardware is not too surprising, given the performance prowess of NVIDIA's GB202-300-A1 GPU variant. Naturally, Team Green's cutting-edge enterprise hardware unlocks the full potential of "Blackwell" GPU designs, but the company can charge sky-high prices for this level of equipment. Going back to early 2024, Tiny Corp. started to make noise about its "tinybox" AI platform, consisting of multiple XFX Speedster MERC310 RX 7900 XTX cards rather than AMD's freshly launched Instinct MI300X accelerator.
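For context on why multiple 2000 W PSUs are needed, here is a ballpark power-budget sketch. The 575 W figure is the RTX 5090's rated board power; the platform overhead and PSU derating are our assumptions, not the shop's published spec.

```python
# Ballpark power budget for the seven-card rig (our estimate, not the shop's spec).
GPU_TBP_W = 575              # RTX 5090 rated total board power
N_GPUS = 7
PLATFORM_OVERHEAD_W = 800    # assumed CPU, storage, fans, conversion losses

gpu_load = GPU_TBP_W * N_GPUS          # 4,025 W from GPUs alone
total = gpu_load + PLATFORM_OVERHEAD_W
psus = -(-total // 1600)               # 2000 W PSUs derated to ~1600 W sustained
print(f"GPU load: {gpu_load} W, est. total: {total} W, 2 kW PSUs needed: {psus}")
```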

Chucklefish Unveils Witchbrook, a Spellbinding Witch Life-sim Game

Welcome Coven members, old and new. We—at Chucklefish and Robotality—are so excited to finally pull back the curtain and share a glimpse of what we've been brewing in the studio these past few years. Witchbrook has been a true labour of love, crafted by our small team here at Chucklefish (with some recent, incredible help from our friends at Robotality!). It's been a journey of hundreds of hours poured into art, music, and code, and we're beyond excited to give you a first look at the magical world of Witchbrook.

In the next few days we'll also release a fresh edition of The Oracle, our in-game newspaper with a new editor at the helm, where you can catch up on the latest happenings in the sunny seaside city of Mossport. It's your gateway to staying connected with the in-world happenings, so be sure to sign up here. Over the next few months, we'll be sharing more gameplay, details, and fun surprises. In the meantime, we'd love for you to join us and the rest of the community over on Discord. We pride ourselves on having a warm and welcoming community, and would love for folks to join us on our journey.

ASUS Republic of Gamers Bolsters Partnership with Team Vitality

ASUS Republic of Gamers (ROG) and Team Vitality are delighted to announce the extension of their partnership. As of today, Team Vitality's Counter-Strike 2 and VALORANT teams will also benefit from a full range of ROG laptops, desktops and handheld consoles, providing them with even better hardware on their road toward excellence.

Thanks to this new phase of collaboration, Team Vitality players will benefit from cutting-edge gaming equipment, adapted to the demands of the highest competitive level. These computers will offer unrivalled processing power, ultra-high refresh rates and minimal latency, guaranteeing maximum precision and responsiveness in both training and competition. This partnership with Team Vitality marks a further step in ROG's drive to innovate and offer professional and amateur gamers alike an increasingly immersive and high-performance gaming experience.

Google Teams up with MediaTek for Next-Generation TPU v7 Design

According to Reuters, citing The Information, Google will collaborate with MediaTek to develop its seventh-generation Tensor Processing Unit (TPU), which is also known as TPU v7. Google maintains its existing partnership with Broadcom despite the new MediaTek collaboration. The AI accelerator is scheduled for production in 2026, and TSMC is handling manufacturing duties. Google will lead the core architecture design while MediaTek manages I/O and peripheral components, as Economic Daily News reports. This differs from Google's ongoing relationship with Broadcom, which co-develops core TPU architecture. The MediaTek partnership reportedly stems from the company's strong TSMC relationship and lower costs compared to Broadcom.

There is also a possibility that MediaTek could design inference-focused TPU v7 chips while Broadcom focuses on training architecture. Nonetheless, TPU development is a massive undertaking; Google consumes so many chips that it could, hypothetically, support a third design partner. The development of TPU continues Google's vertical integration strategy for AI infrastructure. Google reduces dependency on NVIDIA hardware by designing proprietary AI chips for internal R&D and cloud operations. At the same time, competitors like OpenAI, Anthropic, and Meta rely heavily on NVIDIA's processors for AI training and inference. At Google's scale, serving billions of queries a day, designing custom chips makes sense both financially and technologically. As Google develops its own specific workloads, translating them into hardware acceleration is the game Google has been playing for years now.

Ubisoft Summarizes Rainbow Six Siege X Showcase, Announces June 10 Release

The next evolution of Rainbow Six Siege was revealed today at the Siege X Showcase. Launching on June 10, Siege X will introduce Dual Front, a dynamic new 6v6 game mode, as well as deliver foundational upgrades to the core game (including visual enhancements, an audio overhaul, rappel upgrades, and more) alongside revamped player protection systems, and free access that will allow players to experience the unique tactical action of Rainbow Six Siege at no cost. Plus, from now through March 19, a free Dual Front closed beta is live on PC via Ubisoft Connect, PS5, and Xbox Series X|S, giving players a first chance to play the exciting new mode. Read on to find out how to get into the beta and try Dual Front for yourself.

Dual Front
Taking place on an entirely new map called District, Dual Front is a new mode that pits two teams of six Operators against each other in a fight to attack enemy sectors while defending their own. Players can choose from a curated roster of 35 Operators, both Attackers and Defenders, that will rotate twice per season. During each match, two objective points are live at all times, one in each team's lane; teams must plant a sabotage kit (akin to a defuser) in the opposing team's objective room and defend it in order to capture the sector and progress towards the final objective: the Base. Sabotage the Base to claim victory, but don't forget to defend your own sector, lest your foes progress faster than you and beat you to it.

Meta Reportedly Reaches Test Phase with First In-house AI Training Chip

According to a Reuters technology report, Meta's engineering department is testing its "first in-house chip for training artificial intelligence systems." Two inside sources describe this as a significant development milestone, involving a small-scale deployment of early samples. The owner of Facebook could ramp up production if the initial batches pass muster. Despite a recent-ish showcasing of an open-architecture NVIDIA "Blackwell" GB200 system for enterprise, Meta leadership is reported to be pursuing proprietary solutions. Multiple big players in the field of artificial intelligence are attempting to break away from a total reliance on Team Green. Last month, press outlets concentrated on OpenAI's alleged finalization of an in-house design, with rumored involvement from Broadcom and TSMC.

One of the Reuters industry moles believes that Meta has signed up with TSMC; supposedly, the Taiwanese foundry was responsible for producing the test batches. Tom's Hardware reckons that Meta and Broadcom worked together on the tape-out of the social media giant's "first AI training accelerator." Development of the company's "Meta Training and Inference Accelerator" (MTIA) series stretches back a couple of years; according to Reuters, the multi-part project: "had a wobbly start for years, and at one point scrapped a chip at a similar phase of development...Meta last year, started using an MTIA chip to perform inference, or the process involved in running an AI system as users interact with it, for the recommendation systems that determine which content shows up on Facebook and Instagram news feeds." Leadership is reportedly aiming to get custom silicon solutions up and running for AI training by next year. Past examples of MTIA hardware were deployed with open-source RISC-V cores (for inference tasks), but it is not clear whether this architecture will form the basis of Meta's latest AI chip design.