News Posts matching #Google


RISC-V Ecosystem Gets More Standardization as Ubuntu Drops Non-Compliant CPUs

Canonical, the company behind Ubuntu, has announced that its next-generation release will require RISC‑V processors to meet the newly ratified RVA23 profile. The specification, approved back in April, mandates full support for the Vector extension (v1.0) and the Hypervisor extension. As Laurine Kirk, a security researcher at Google, notes, by setting this higher baseline, Ubuntu 26.04 will no longer run on roughly 90% of existing RISC-V single-board computers, including the popular Raspberry Pi-style boards, unless their hardware is upgraded. Canonical's move puts it in step with Google and Microsoft, both of which have already targeted RVA23 for their RISC‑V builds. The change should push manufacturers to ship more secure, future-proof silicon, helping to guard against exploits like GhostWrite, a memory-access vulnerability discovered last year in T-Head's XuanTie C910 CPUs.

We discussed fragmentation within the RISC-V ecosystem with Andrea Gallo, then the CTO and now the CEO of the RISC-V Foundation. "If you want to claim that you are RISC-V compatible, then there's an architecture compatibility test suite that verifies that you are complying with the ISA. We run the same tests on a golden reference model and compare the signatures of the tests to ensure alignment with the specification." He added: "We just ratified the RVA23 Profile. It is a major release for the RISC-V software ecosystem and will help accelerate widespread implementation among toolchains and operating systems." For anyone who wants to ship a working RISC-V processor, whether for the data center or mobile, the RVA23 profile is the one that guarantees compatibility and freedom from fragmentation.
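As a rough illustration of what this kind of profile gating looks like in practice, the sketch below parses a RISC-V ISA string (of the form reported in /proc/cpuinfo) and checks for the two extensions the article singles out, Vector (v) and Hypervisor (h). The real RVA23 profile mandates a much longer list of extensions, so treat this as an illustrative subset, not a compliance check.

```python
# Sketch: check a RISC-V ISA string for the extensions this article names
# as part of RVA23 (Vector 1.0 and Hypervisor). Illustrative only; the
# actual RVA23 profile mandates many more extensions than these two.

def parse_isa_string(isa: str) -> set:
    """Split an ISA string like 'rv64imafdcvh_zicsr_zifencei' into extensions."""
    _, _, rest = isa.lower().partition("rv64")
    if not rest:
        raise ValueError("not a 64-bit RISC-V ISA string")
    single, *multi = rest.split("_")
    exts = set(single)          # single-letter extensions: i, m, a, f, d, c, v, h
    if "g" in exts:             # 'g' abbreviates imafd (+ Zicsr/Zifencei)
        exts |= set("imafd")
    exts |= set(multi)          # multi-letter extensions: zicsr, zvfh, ...
    return exts

def meets_article_rva23_subset(isa: str) -> bool:
    required = {"v", "h"}       # only the extensions the article calls out
    return required <= parse_isa_string(isa)
```

For example, an ISA string like "rv64gcvh_zicsr" would pass this subset check, while the "rv64gc" reported by most of today's single-board computers would not.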

Intel Foundry Reportedly Secures Microsoft Contract for 18A Node

According to Chosun Biz, Intel Foundry's client-acquisition efforts for the 18A node have shifted into high gear, with the latest reports indicating that Microsoft has inked a substantial foundry deal based on the 18A process. Talks with Google are also said to be advancing, suggesting that Intel may soon secure a second cloud giant as a customer of its 18A technology. Intel's flagship 18A node, which entered risk production earlier this year, is slated for full-scale volume manufacturing before the end of 2025. Beyond the baseline 18A offering, the company is already developing two enhanced variants: 18A-P, scheduled for rollout in 2026, and 18A-PT, targeted for 2028. Chosun Biz reports that prototype 18A-P wafers have already come out of Intel's domestic fabs, underscoring the foundry's swift pace in bringing up new nodes.

Intel has even begun sharing early PDKs for its next-generation 14A node with select partners, paving the way for continued scaling beyond the 18A era. Strategically, Intel's extensive US fab footprint, which includes two under-construction fabs in Arizona (a USD 32 billion investment), expanded packaging facilities in New Mexico, a new 300 mm logic plant in Oregon, and two Ohio fabs earmarked for the early 2030s, could prove advantageous amid ongoing tariff uncertainties. Beyond North America, Intel is gearing up Fab 34 in Ireland for mass production of its Intel 4 node and inaugural 3 nm chips later this year. In Israel, Fab 38 is being outfitted for EUV-based, high-performance wafer manufacturing, while an advanced packaging site in Penang, Malaysia, supports global assembly and testing.

Razer Officially Launches PC Remote Play

Razer, the leading global lifestyle brand for gamers, today announced the official launch of Razer PC Remote Play, the ultimate platform for streaming PC games directly to mobile devices. Razer PC Remote Play unlocks the full potential of your gaming rig by streaming your favorite PC game titles to your phone, tablet, or Windows handheld with unmatched visual clarity and responsiveness. By leveraging the full frame rate and resolution of your mobile device, gamers will experience pristine visuals without black bars.

Introduced at CES 2025, the beta version of Razer PC Remote Play was met with great fanfare among users. The official launch brings a redesigned interface in Razer Cortex on PC, along with support for all mobile gaming controllers compatible with iOS or Android, support for the AV1 video codec for improved quality and lower latency, and software updates that improve the experience for seamless PC gaming on the go. Gamers can now look forward to ultra-smooth, high-fidelity gameplay on smartphones and tablets - anywhere, anytime.

Safe Superintelligence Inc. Uses Google TPUs Instead of Regular GPUs for Next-Generation Models

It seems Google aims to grab a bit of market share from NVIDIA and AMD by offering startups large compute deals and letting them train their massive AI models on the Google Cloud Platform (GCP). One such case is OpenAI co-founder Ilya Sutskever's startup, Safe Superintelligence Inc. (SSI). According to a GCP post, SSI is "partnering with Google Cloud to use TPUs to accelerate its research and development efforts toward building a safe, superintelligent AI." Google's latest TPU v7p, codenamed Ironwood, was released yesterday. Delivering 4,614 TeraFLOPS of FP8 compute and carrying 192 GB of HBM memory, these TPUs are interconnected using Google's custom ICI fabric and scale to pods of 9,216 chips, where Ironwood delivers 42.5 ExaFLOPS of total computing power.
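The pod-level figure quoted above can be sanity-checked with a line of arithmetic: 9,216 chips at 4,614 FP8 TeraFLOPS each works out to roughly 42.5 ExaFLOPS.

```python
# Quick check of the pod-level math: 9,216 Ironwood chips at 4,614 FP8
# TeraFLOPS each should land near the quoted 42.5 ExaFLOPS figure.
chips_per_pod = 9_216
tflops_per_chip = 4_614                  # FP8 TeraFLOPS per Ironwood chip

pod_tflops = chips_per_pod * tflops_per_chip
pod_exaflops = pod_tflops / 1_000_000    # 1 ExaFLOPS = 1,000,000 TeraFLOPS

print(f"{pod_exaflops:.1f} ExaFLOPS")    # ~42.5
```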

For AI training, this massive capacity lets models move through training runs faster, accelerating research iterations and, ultimately, model development. For SSI, the end goal is a simple mission: achieving ASI with safety at the forefront. "We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead," notes the SSI website, adding that "Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."

NVIDIA Will Bring Agentic AI Reasoning to Enterprises with Google Cloud

NVIDIA is collaborating with Google Cloud to bring agentic AI to enterprises seeking to locally harness the Google Gemini family of AI models using the NVIDIA Blackwell HGX and DGX platforms and NVIDIA Confidential Computing for data safety. With the NVIDIA Blackwell platform on Google Distributed Cloud, on-premises data centers can stay aligned with regulatory requirements and data sovereignty laws by locking down access to sensitive information, such as patient records, financial transactions and classified government information. NVIDIA Confidential Computing also secures sensitive code in the Gemini models from unauthorized access and data leaks.

"By bringing our Gemini models on premises with NVIDIA Blackwell's breakthrough performance and confidential computing capabilities, we're enabling enterprises to unlock the full potential of agentic AI," said Sachin Gupta, vice president and general manager of infrastructure and solutions at Google Cloud. "This collaboration helps ensure customers can innovate securely without compromising on performance or operational ease." Confidential computing with NVIDIA Blackwell provides enterprises with the technical assurance that their user prompts to the Gemini models' application programming interface—as well as the data they used for fine-tuning—remain secure and cannot be viewed or modified. At the same time, model owners can protect against unauthorized access or tampering, providing dual-layer protection that enables enterprises to innovate with Gemini models while maintaining data privacy.

Google Unveils Seventh-Generation AI Processor: Ironwood

Google has rolled out its seventh-generation AI chip, Ironwood, which aims to boost AI application performance. This processor focuses on "inference" computing - the quick calculations needed for chatbot answers and other AI outputs. Ironwood stands as one of the few real alternatives to NVIDIA's leading AI processors, the product of Google's decade-long, multi-billion-dollar development push. These tensor processing units (TPUs) are available exclusively through Google's cloud service or to its internal engineers.

According to Google Vice President Amin Vahdat, Ironwood combines functions from previously separate designs while increasing memory capacity. The chip can operate in groups of up to 9,216 processors and delivers twice the performance per unit of energy compared to last year's Trillium chip. When configured in pods of 9,216 chips, Ironwood delivers 42.5 ExaFLOPS of computing power - more than 24 times the capacity of El Capitan, currently the world's largest supercomputer, which delivers about 1.7 ExaFLOPS (measured at FP64 precision, so the two figures are not directly comparable).

5th Gen AMD EPYC Processors Deliver Leadership Performance for Google Cloud C4D and H4D Virtual Machines

Today, AMD announced that the new Google Cloud C4D and H4D virtual machines (VMs) are powered by 5th Gen AMD EPYC processors. The latest additions to Google Cloud's general-purpose and HPC-optimized VM families deliver leadership performance, scalability, and efficiency for demanding cloud workloads, from data analytics and web serving to high-performance computing (HPC) and AI.

Google Cloud C4D instances deliver impressive performance, efficiency, and consistency for general-purpose computing workloads and AI inference. Based on Google Cloud's testing, leveraging the advancements of the AMD "Zen 5" architecture allowed C4D to deliver up to 80% higher throughput per vCPU compared to previous generations. H4D instances, optimized for HPC workloads, feature AMD EPYC CPUs with Cloud RDMA to scale efficiently to tens of thousands of cores.

Samsung and Google Cloud Expand Partnership, Bring Gemini to Ballie, a Home AI Companion Robot by Samsung

Samsung Electronics and Google Cloud today announced an expanded partnership to bring Google Cloud's generative AI technology to Ballie, a new home AI companion robot from Samsung. Available to consumers this summer, Ballie will be able to engage in natural, conversational interactions to help users manage home environments, including adjusting lighting, greeting people at the door, personalizing schedules, setting reminders, and more.

"Through this partnership, Samsung and Google Cloud are redefining the role of AI in the home," said Yongjae Kim, Executive Vice President of the Visual Display Business at Samsung Electronics. "By pairing Gemini's powerful multimodal reasoning with Samsung's AI capabilities in Ballie, we're leveraging the power of open collaboration to unlock a new era of personalized AI companion—one that moves with users, anticipates their needs and interacts in more dynamic and meaningful ways than ever before."

ViewSonic New M1 Max Brings 360-degree Portable Projection with Built-in Google TV

ViewSonic Corp., a leading global provider of visual solutions, today unveiled the M1 Max smart portable LED projector, the latest addition to its iF Design Award-winning M1 Series, recognized for its innovative 360-degree 3-in-1 stand. Designed for modern adventurers and entertainment enthusiasts, this palm-sized projector transforms any space into a cinematic oasis with built-in Google TV, vibrant Full HD 1080p visuals, and powerful Harman Kardon audio, all in an ultra-portable, sleek design.

"At ViewSonic, we are committed to redefining entertainment through user-centric innovations, creating solutions that seamlessly blend technology with contemporary lifestyles," said Dean Tsai, General Manager of Projector & LED Display Business Unit at ViewSonic. "The M1 Max embodies this vision by integrating premium audiovisual performance, effortless streaming, and a minimalist design into an ultra-portable device. We strive to empower users to break free from traditional setups and enjoy a versatile big-screen experience wherever life takes them."

MLCommons Releases New MLPerf Inference v5.0 Benchmark Results

Today, MLCommons announced new results for its industry-standard MLPerf Inference v5.0 benchmark suite, which delivers machine learning (ML) system performance benchmarking in an architecture-neutral, representative, and reproducible manner. The results highlight that the AI community is focusing much of its attention and efforts on generative AI scenarios, and that the combination of recent hardware and software advances optimized for generative AI have led to dramatic performance improvements over the past year.

The MLPerf Inference benchmark suite, which encompasses both datacenter and edge systems, is designed to measure how quickly systems can run AI and ML models across a variety of workloads. The open-source and peer-reviewed benchmark suite creates a level playing field for competition that drives innovation, performance, and energy efficiency for the entire industry. It also provides critical technical information for customers who are procuring and tuning AI systems. This round of MLPerf Inference results also includes tests for four new benchmarks: Llama 3.1 405B, Llama 2 70B Interactive for low-latency applications, RGAT, and Automotive PointPainting for 3D object detection.

NVIDIA Blackwell Takes Pole Position in Latest MLPerf Inference Results

In the latest MLPerf Inference V5.0 benchmarks, which reflect some of the most challenging inference scenarios, the NVIDIA Blackwell platform set records - and marked NVIDIA's first MLPerf submission using the NVIDIA GB200 NVL72 system, a rack-scale solution designed for AI reasoning. Delivering on the promise of cutting-edge AI takes a new kind of compute infrastructure, called AI factories. Unlike traditional data centers, AI factories do more than store and process data - they manufacture intelligence at scale by transforming raw data into real-time insights. The goal for AI factories is simple: deliver accurate answers to queries quickly, at the lowest cost and to as many users as possible.

The complexity of pulling this off is significant and takes place behind the scenes. As AI models grow to billions and trillions of parameters to deliver smarter replies, the compute required to generate each token increases. This requirement reduces the number of tokens that an AI factory can generate and increases cost per token. Keeping inference throughput high and cost per token low requires rapid innovation across every layer of the technology stack, spanning silicon, network systems and software.

Qualcomm Announces Acquisition of VinAI Division, Aims to Expand GenAI Capabilities

Qualcomm today announced the acquisition of MovianAI Artificial Intelligence (AI) Application and Research JSC (MovianAI), the former generative AI division of VinAI Application and Research JSC (VinAI) and a part of the Vingroup ecosystem. As a leading AI research company, VinAI is renowned for its expertise in generative AI, machine learning, computer vision, and natural language processing. Combining VinAI's advanced generative AI research and development (R&D) capabilities with Qualcomm's decades of extensive R&D will expand Qualcomm's ability to drive extraordinary inventions.

For more than 20 years, Qualcomm has been working closely with the Vietnamese technology ecosystem to create and deliver innovative solutions. Qualcomm's innovations in the areas of 5G, AI, IoT and automotive have helped to fuel the extraordinary growth and success of Vietnam's information and communication technology (ICT) industry and assisted the entry of Vietnamese companies into the global marketplace.

Google's Latest Gemini 2.5 Pro Dominates AI Benchmarks and Reasoning Tasks

Google has just released its latest flagship model, Gemini 2.5 Pro. In case you didn't know, it was Google that created the original Transformer architecture that OpenAI's ChatGPT, xAI's Grok, Anthropic's Claude, and other models build on. Google has been iterating on its Gemini series for a while, and Gemini 2.5 Pro is its most powerful version yet. As part of the 2.5 family, it is a thinking model, capable of reasoning through its thoughts before producing output and revisiting them before delivering a final answer. Reasoning, achieved through reinforcement learning and chain-of-thought prompting, pushes the model to work through logical, step-by-step solutions, delivering better results.

In LMArena, where users grade pairs of AI model outputs and decide which is better, Gemini 2.5 Pro climbed to the top of the overall ranking, taking the number one spot in areas like hard prompts, coding, math, creative writing, instruction following, longer queries, and multi-turn answers. This is an impressive result for Google, which now leads the leaderboard in all these areas and beats xAI's Grok 3 and OpenAI's GPT-4.5. In standardized industry benchmarks, Gemini 2.5 Pro also leads most of the field, including AIME, LiveCodeBench, Aider, SWE-Bench, SimpleQA, and others, and scores 18.8% on Humanity's Last Exam, currently the most difficult AI benchmark. Gemini 2.5 Pro can also process massive context with a one-million-token context window, which will soon extend to two million tokens, enough to feed the model entire books of context. Gemini 2.5 Pro is now available in Google AI Studio, and Gemini Advanced users can select it in the model dropdown on desktop and mobile.
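To put a one-million-token window in perspective, here is a back-of-the-envelope estimate. The figures of ~4 characters per token and ~500,000 characters per novel are rule-of-thumb assumptions for English text, not anything Google has published.

```python
# Rough sanity check on the "entire books" claim for a 1M-token context
# window. Both constants below are rule-of-thumb assumptions.
context_tokens = 1_000_000
chars_per_token = 4                 # common English-text approximation
chars_per_novel = 500_000           # a typical ~80k-word novel

total_chars = context_tokens * chars_per_token      # ~4 million characters
novels = total_chars / chars_per_novel
print(f"~{novels:.0f} novels fit in the context window")  # ~8
```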

NVIDIA NIM Microservices Now Available to Streamline Agentic Workflows on RTX AI PCs and Workstations

Generative AI is unlocking new capabilities for PCs and workstations, including game assistants, enhanced content-creation and productivity tools and more. NVIDIA NIM microservices, available now, and AI Blueprints, in the coming weeks, accelerate AI development and improve its accessibility. Announced at the CES trade show in January, NVIDIA NIM provides prepackaged, state-of-the-art AI models optimized for the NVIDIA RTX platform, including the NVIDIA GeForce RTX 50 Series and, now, the new NVIDIA Blackwell RTX PRO GPUs. The microservices are easy to download and run. They span the top modalities for PC development and are compatible with top ecosystem applications and tools.

The experimental System Assistant feature of Project G-Assist was also released today. Project G-Assist showcases how AI assistants can enhance apps and games. The System Assistant allows users to run real-time diagnostics, get recommendations on performance optimizations, or control system software and peripherals - all via simple voice or text commands. Developers and enthusiasts can extend its capabilities with a simple plug-in architecture and new plug-in builder.

NVIDIA Project G-Assist Now Available in NVIDIA App

At Computex 2024, we showcased Project G-Assist - a tech demo that offered a glimpse of how AI assistants could elevate the PC experience for gamers, creators, and more. Today, we're releasing an experimental version of the Project G-Assist System Assistant feature for GeForce RTX desktop users, via NVIDIA app, with GeForce RTX laptop support coming in a future update. As modern PCs become more powerful, they also grow more complex to operate. Users today face over a trillion possible combinations of hardware and software settings when configuring a PC for peak performance - spanning the GPU, CPU, motherboard, monitors, peripherals, and more.

We built Project G-Assist, an AI assistant that runs locally on GeForce RTX AI PCs, to simplify this experience. G-Assist helps users control a broad range of PC settings - from optimizing game and system settings and charting frame rates and other key performance statistics, to controlling select peripheral settings such as lighting - all via basic voice or text commands.

Google Making Vulkan the Official Graphics API on Android

We're stepping up our multiplatform gaming offering with exciting news dropping at this year's Game Developers Conference (GDC). We're bringing users more games, more ways to play your games across devices, and improved gameplay. You can read all about the updates for users from The Keyword. At GDC, we'll be diving into all of the latest games coming to Play, plus new developer tools that'll help improve gameplay across the Android ecosystem.

We're sharing a closer look at what's new from Android. We're making Vulkan the official graphics API on Android, enabling you to build immersive visuals, and we're enhancing the Android Dynamic Performance Framework (ADPF) to help you deliver longer, more stable gameplay sessions. Check out our video, or keep reading below.

Google Teams up with MediaTek for Next-Generation TPU v7 Design

According to Reuters, citing The Information, Google will collaborate with MediaTek to develop its seventh-generation Tensor Processing Unit (TPU), which is also known as TPU v7. Google maintains its existing partnership with Broadcom despite the new MediaTek collaboration. The AI accelerator is scheduled for production in 2026, and TSMC is handling manufacturing duties. Google will lead the core architecture design while MediaTek manages I/O and peripheral components, as Economic Daily News reports. This differs from Google's ongoing relationship with Broadcom, which co-develops core TPU architecture. The MediaTek partnership reportedly stems from the company's strong TSMC relationship and lower costs compared to Broadcom.

There is also a possibility that MediaTek could design inference-focused TPU v7 chips while Broadcom focuses on the training architecture. In any case, TPU development is a massive undertaking: Google consumes so many chips that it could, hypothetically, keep a third design partner busy. The TPU program continues Google's vertical integration strategy for AI infrastructure. By designing proprietary AI chips for internal R&D and cloud operations, Google reduces its dependency on NVIDIA hardware, while competitors like OpenAI, Anthropic, and Meta rely heavily on NVIDIA's processors for AI training and inference. At Google's scale, serving billions of queries a day, designing custom chips makes sense both financially and technologically. As Google's workloads become increasingly specific, translating them into hardware acceleration is a game Google has been playing for years.

Global Top 10 IC Design Houses See 49% YoY Growth in 2024, NVIDIA Commands Half the Market

TrendForce reveals that the combined revenue of the world's top 10 IC design houses reached approximately US$249.8 billion in 2024, marking a 49% YoY increase. The booming AI industry has fueled growth across the semiconductor sector, with NVIDIA leading the charge, posting an astonishing 125% revenue growth, widening its lead over competitors, and solidifying its dominance in the IC industry.

Looking ahead to 2025, advancements in semiconductor manufacturing will further enhance AI computing power, with LLMs continuing to emerge. Open-source models like DeepSeek could lower AI adoption costs, accelerating AI penetration from servers to personal devices. This shift positions edge AI devices as the next major growth driver for the semiconductor industry.

You Can Now Jailbreak Your AMD Zen1-Zen4 CPU Thanks to the Latest Vulnerability

Google security researchers have published comprehensive details on "EntrySign," a significant vulnerability affecting all AMD Zen processors through Zen 4. The flaw allows attackers with local administrator privileges to install custom microcode updates on affected CPUs, bypassing AMD's cryptographic verification system. The vulnerability stems from AMD's use of AES-CMAC as a hash function in its signature verification process—a critical cryptographic error. CMAC is designed as a message authentication code, not a secure hash function. The researchers discovered that AMD had been using a published example key from NIST documentation since Zen 1, allowing them to forge signatures and deploy arbitrary microcode modifications. These modifications can alter CPU behavior at the most fundamental level, enabling sophisticated attacks that persist until the next system reboot.
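The core of the flaw is easy to demonstrate in principle: a MAC only authenticates messages while its key stays secret, so a published key lets anyone "sign" arbitrary data. The sketch below uses HMAC-SHA256 as a stand-in for AES-CMAC (Python's standard library has no CMAC) together with the well-known NIST example AES-128 key; the patch bytes and function names are purely illustrative, not AMD's actual format.

```python
import hashlib
import hmac

# The flaw in a nutshell: microcode "signatures" were computed with a keyed
# MAC whose key was a published NIST example value. HMAC-SHA256 stands in
# for AES-CMAC here; key and payload are illustrative only.
PUBLISHED_KEY = bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c")  # NIST example AES-128 key

def sign(patch: bytes, key: bytes) -> bytes:
    return hmac.new(key, patch, hashlib.sha256).digest()

def cpu_verifies(patch: bytes, tag: bytes) -> bool:
    # The CPU checks the tag with the same (leaked) key, so it cannot tell
    # vendor patches apart from attacker-forged ones.
    return hmac.compare_digest(sign(patch, PUBLISHED_KEY), tag)

malicious_patch = b"rdrand -> fixed value"            # arbitrary attacker payload
forged_tag = sign(malicious_patch, PUBLISHED_KEY)     # attacker knows the key
print(cpu_verifies(malicious_patch, forged_tag))      # True: forgery accepted
```

A collision-resistant hash inside a proper signature scheme, which is what AMD's fix moves toward, removes exactly this failure mode: verification no longer depends on a secret that was never secret.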

Google's security team has released "zentool," an open-source jailbreak toolkit allowing researchers to create, sign, and deploy custom microcode patches on vulnerable processors. The toolkit includes capabilities for microcode disassembly, patch authoring with limited assembly support, and cryptographic signing functions. As a proof of concept, the researchers demonstrated modifying the RDRAND instruction to consistently return predetermined values, effectively compromising the CPU's random number generation. AMD has issued microcode updates that replace the compromised validation routine with a custom secure hash function. The company's patches also leverage the AMD Secure Processor to update the validation routine before x86 cores can process potentially tampered microcode. While the attack requires local administrator access and doesn't persist through power cycles, it poses significant risks to confidential computing environments using technologies like SEV-SNP and DRTM. The researchers noted their findings could enable further CPU security research beyond exploit development, potentially allowing new security features to be implemented on AMD processors, similar to those already developed for Intel processors.
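A constant-output RDRAND like the one in the proof of concept would fail even the crudest statistical check. The sketch below, with a hypothetical sample callable standing in for reading the hardware RNG, flags a generator whose repeated draws are all identical; note this catches only the blunt PoC behavior, not a subtler backdoor.

```python
import random

# A constant-output RDRAND (as in the proof of concept) is caught by the
# simplest possible check: repeated draws should not all be identical.
# `sample` is a hypothetical stand-in for reading the hardware RNG.

def looks_constant(sample, draws: int = 8) -> bool:
    values = [sample() for _ in range(draws)]
    return len(set(values)) == 1     # every draw identical => clearly broken

compromised = lambda: 0x4                     # PoC-style fixed return value
healthy = lambda: random.getrandbits(64)      # stand-in for a working RNG

print(looks_constant(compromised))   # True
print(looks_constant(healthy))       # False (overwhelmingly likely)
```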

MSI Unveils New Modern MD272UPSW Smart Monitor

MSI proudly announces the release of its first Google TV smart monitor - the Modern MD272UPSW. The Smart Monitor supports Multi Control and KVM functions, making it a versatile choice for both entertainment and work. Equipped with 4K UHD resolution, an IPS panel, and wide color gamut coverage of 94% Adobe RGB and 98% DCI-P3, the monitor delivers vibrant and lifelike visuals. In addition, it includes a USB Type-C port with 65 W Power Delivery and an ergonomic stand that tilts, swivels, rotates, and is height-adjustable, for seamless connectivity and comfortable use while working.

The entertainment you love. With a little help from Google
No more jumping from app to app. Google TV brings together 400,000+ movies, TV episodes, and more from across your streaming services - organized in one place. Need inspiration? Get curated recommendations and use Google's powerful search to find shows across 10,000+ apps or to browse 800+ free live TV channels and thousands of free movies. Ask Google Assistant to find movies, stream apps, play music, and control the monitor - all with your voice. Simply press the Google Assistant button on the remote to get started.

AMD "Zen 1" to "Zen 4" Processors Affected by Microcode Signature Verification Vulnerability

The Google Security Research team has just published its latest research on a fundamental flaw in the microcode patch verification system that affects AMD processors from "Zen 1" through "Zen 4" generations. The vulnerability stems from an inadequate hash function implementation in the CPU's signature validation process for microcode updates, enabling attackers with local administrator privileges (ring 0 from outside a VM) to inject malicious microcode patches, potentially compromising AMD SEV-SNP-protected confidential computing workloads and Dynamic Root of Trust Measurement systems. Google disclosed this high-severity issue to AMD on September 25, 2024, leading to AMD's release of an embargoed fix to customers on December 17, 2024, with public disclosure following on February 3, 2025. However, due to the complexity of supply chain dependencies and remediation requirements, comprehensive technical details are being withheld until March 5, 2025, giving organizations time to implement the necessary security measures and re-establish trust in their confidential compute environments.

AMD has released comprehensive mitigation measures through AGESA firmware updates across its entire EPYC server processor lineup, from the first-generation Naples to the latest Genoa-X and Bergamo architectures. The security patch, designated as CVE-2024-56161 with a high severity rating of 7.2, introduces critical microcode updates: Naples B2 processors require uCode version 0x08001278, Rome B0 systems need 0x0830107D, while Milan and Milan-X variants mandate versions 0x0A0011DB and 0x0A001244 respectively. For the latest Genoa-based systems, including Genoa-X and Bergamo/Siena variants, the required microcode versions are 0x0A101154, 0x0A10124F, and 0x0AA00219. These updates implement robust protections across all SEV security features - including SEV, SEV-ES, and SEV-SNP - while introducing new restrictions on microcode hot-loading capabilities to prevent future exploitation attempts.
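An administrator could audit a fleet against these minimums with a simple lookup. The table below transcribes the microcode versions from the paragraph above (verify them against AMD's security bulletin before relying on them) and assumes patch levels increase monotonically, so a greater-or-equal version means patched.

```python
# Sketch: check reported microcode levels against the minimum patched
# versions listed in the article (CVE-2024-56161). Versions transcribed
# from the text above; confirm against AMD's official bulletin.
MIN_UCODE = {
    "Naples B2":     0x08001278,
    "Rome B0":       0x0830107D,
    "Milan":         0x0A0011DB,
    "Milan-X":       0x0A001244,
    "Genoa":         0x0A101154,
    "Genoa-X":       0x0A10124F,
    "Bergamo/Siena": 0x0AA00219,
}

def is_patched(platform: str, ucode_version: int) -> bool:
    """Assumes microcode patch levels increase monotonically."""
    return ucode_version >= MIN_UCODE[platform]

print(is_patched("Milan", 0x0A0011DB))    # True: exactly at the minimum
print(is_patched("Rome B0", 0x0830107C))  # False: one revision short
```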

HTC Announces XR Agreement with Google

HTC Corp. and Google LLC announced today a definitive agreement under which HTC will receive US$250 million in cash from Google, and certain HTC employees from its XR team will join Google. As part of the transaction, Google will receive a non-exclusive license for HTC's XR intellectual property (IP). Following this agreement, HTC and Google will explore future collaboration opportunities.

This agreement reinforces HTC's strategy of continued development around the XR ecosystem, enabling a more streamlined product portfolio with a focus on platforms, greater operational efficiency and financial flexibility. HTC's commitment to delivering innovative VIVE XR solutions such as the VIVE Focus Vision remains unchanged, with existing product lines and solutions to be supported and developed without interruption.

Samsung Galaxy S25 Series Sets the Standard for AI Phones as a True AI Companion

Samsung Electronics Co., Ltd. today announced the Galaxy S25 Ultra, Galaxy S25+, and Galaxy S25, setting a new standard towards a true AI companion with our most natural and context-aware mobile experiences ever created. Introducing multimodal AI agents, the Galaxy S25 series is the first step in Samsung's vision to change the way users interact with their phone—and with their world. A first-of-its-kind customized Snapdragon 8 Elite Mobile Platform for Galaxy chipset delivers greater on-device processing power for Galaxy AI plus superior camera range and control with Galaxy's next-gen ProVisual Engine.

"The greatest innovations are a reflection of their users, which is why we evolved Galaxy AI to help everyone interact with their devices more naturally and effortlessly while trusting that their privacy is secured," said TM Roh, President and Head of Mobile eXperience Business at Samsung Electronics. "Galaxy S25 series opens the door to an AI-integrated OS that fundamentally shifts how we use technology and how we live our lives."

NVIDIA's GB200 "Blackwell" Racks Face Overheating Issues

NVIDIA's new GB200 "Blackwell" racks are running into trouble (again). Big cloud companies like Microsoft, Amazon, Google, and Meta Platforms are cutting back their orders because of heat problems, Reuters reports, quoting The Information. The first shipments of racks with Blackwell chips are getting too hot and have connection issues between chips, the report says. These tech hiccups have made some customers who ordered $10 billion or more worth of racks think twice about buying.

Some are putting off their orders until NVIDIA has better versions of the racks. Others are looking at buying older NVIDIA AI chips instead. For example, Microsoft planned to set up GB200 racks with no fewer than 50,000 Blackwell chips at one of its Phoenix sites. However, The Information reports that OpenAI has asked Microsoft to provide NVIDIA's older "Hopper" chips instead, pointing to delays linked to the Blackwell racks. NVIDIA's problems with its Blackwell GPUs housed in high-density racks are not new; in November 2024, Reuters, also referencing The Information, uncovered overheating issues in servers that housed 72 processors. NVIDIA has made several changes to its server rack designs to tackle these problems; however, it seems the issue has not been entirely solved.

Technics Launches the EAH-AZ100 Earbuds With a Magnetic Fluid Driver

Technics announces the EAH-AZ100, its latest true wireless earbuds, adding to the family of award-winning audio products backed by 60 years of sound engineering and product development. The EAH-AZ100 levels up users' sound experience with a newly developed, proprietary "Magnetic Fluid Driver" that creates clean, high-resolution, low-vibration and low-distortion sounds for the most authentic, balanced audio that's true to the original source.

For stress-free communication, the EAH-AZ100 introduces "Voice Focus AI" - an innovative feature that combines an AI noise reduction chip and three microphones in each earbud. This advanced technology delivers precise tuning, ensuring the ultimate call quality for both the speaker and the listener. "Voice Focus AI" goes beyond eliminating common call distractions such as busy street traffic, wind, and other background noise. It actively analyses incoming sound, enhancing the caller's voice.