News Posts matching #ML

NVIDIA TensorRT Boosts Stable Diffusion 3.5 Performance on NVIDIA GeForce RTX and RTX PRO GPUs

Generative AI has reshaped how people create, imagine and interact with digital content. As AI models continue to grow in capability and complexity, they require more VRAM, or video random access memory. The base Stable Diffusion 3.5 Large model, for example, uses over 18 GB of VRAM - limiting the number of systems that can run it well. Quantizing the model allows noncritical layers to be removed or run at lower precision. NVIDIA GeForce RTX 40 Series and the Ada Lovelace generation of NVIDIA RTX PRO GPUs support FP8 quantization to help run these quantized models, and the latest-generation NVIDIA Blackwell GPUs also add support for FP4.
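
For a rough sense of why lower-precision formats matter, here is a minimal back-of-envelope sketch of weight memory at different precisions. The 8-billion-parameter count is an assumption used only for illustration, and real VRAM use also covers activations, text encoders, the VAE and framework overhead - which is why a weights-only FP8 figure (roughly half of FP16) lines up only loosely with the 40% full-pipeline reduction quoted below.

# Back-of-envelope estimate of model weight memory at different precisions.
# The parameter count is an assumption for illustration only; real VRAM use
# also includes activations, text encoders, the VAE and framework overhead.
BYTES_PER_PARAM = {"FP16": 2.0, "FP8": 1.0, "FP4": 0.5}

def weight_memory_gb(num_params: float, precision: str) -> float:
    """Return approximate weight storage in GB for a given precision."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

params = 8e9  # assumed parameter count, not an official figure
for precision in BYTES_PER_PARAM:
    print(f"{precision}: ~{weight_memory_gb(params, precision):.1f} GB of weights")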

NVIDIA collaborated with Stability AI to quantize its latest model, Stable Diffusion (SD) 3.5 Large, to FP8 - reducing VRAM consumption by 40%. Further optimizations to SD3.5 Large and Medium with the NVIDIA TensorRT software development kit (SDK) double performance. In addition, TensorRT has been reimagined for RTX AI PCs, combining its industry-leading performance with just-in-time (JIT), on-device engine building and an 8x smaller package size for seamless AI deployment to more than 100 million RTX AI PCs. TensorRT for RTX is now available as a standalone SDK for developers.

AnythingLLM App Best Experienced on NVIDIA RTX AI PCs

Large language models (LLMs), trained on datasets with billions of tokens, can generate high-quality content. They're the backbone for many of the most popular AI applications, including chatbots, assistants, code generators and much more. One of today's most accessible ways to work with LLMs is with AnythingLLM, a desktop app built for enthusiasts who want an all-in-one, privacy-focused AI assistant directly on their PC. With new support for NVIDIA NIM microservices on NVIDIA GeForce RTX and NVIDIA RTX PRO GPUs, AnythingLLM users can now get even faster performance for more responsive local AI workflows.

What Is AnythingLLM?
AnythingLLM is an all-in-one AI application that lets users run local LLMs, retrieval-augmented generation (RAG) systems and agentic tools. It acts as a bridge between a user's preferred LLMs and their data, and enables access to tools (called skills), making it easier and more efficient to use LLMs for specific tasks.
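
AnythingLLM's own pipeline isn't reproduced here, but the RAG pattern it implements is straightforward. The sketch below is a generic, minimal illustration in which embed() and generate() are hypothetical placeholders for whatever embedding model and LLM the app is configured to use - they are not AnythingLLM APIs.

# Minimal sketch of the retrieval-augmented generation (RAG) pattern.
# embed() and generate() are hypothetical placeholders, not AnythingLLM APIs.
import math

def embed(text: str) -> list[float]:
    # Placeholder: a real system calls an embedding model here.
    return [float(ord(c) % 7) for c in text[:16]] + [0.0] * max(0, 16 - len(text))

def generate(prompt: str) -> str:
    # Placeholder: a real system calls a local or remote LLM here.
    return f"<answer based on: {prompt[:60]}...>"

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def answer(question: str, documents: list[str], top_k: int = 2) -> str:
    # 1. Retrieve: rank the user's documents by similarity to the question.
    q_vec = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q_vec, embed(d)), reverse=True)
    # 2. Augment: place the best matches into the prompt as context.
    context = "\n".join(ranked[:top_k])
    # 3. Generate: ask the LLM to answer using that context.
    return generate(f"Context:\n{context}\n\nQuestion: {question}")

print(answer("What GPUs are supported?", ["RTX and RTX PRO GPUs are supported.",
                                          "The app stores data locally."]))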

Preparing Windows for the Quantum Age: Microsoft Hardens Windows 11 Preview with New Encryption

To defend regular users from bad actors wielding quantum hardware such as Majorana 1, Windows 11 Insider Preview now includes built-in support for post-quantum cryptography (PQC), giving developers and security teams early access to algorithms designed to withstand the capabilities of future quantum computers. Available in Canary Channel Build 27852 and above, this update integrates two new schemes, ML-KEM for key exchange and ML-DSA for digital signatures, directly into the Cryptography API: Next Generation (CNG) and certificate management functions. ML-KEM addresses the "harvest now, decrypt later" threat model, in which adversaries collect encrypted data today to decrypt it once quantum hardware has advanced. Microsoft offers three levels of ML-KEM security: a Level 1 option (ML-KEM-512) with 800-byte public encapsulation keys and a 32-byte shared secret; a Level 3 configuration (ML-KEM-768) with 1,184-byte keys and the same 32-byte secret; and a Level 5 tier (ML-KEM-1024) that increases key size to 1,568 bytes while keeping the shared secret at 32 bytes. These parameter sets allow organizations to balance performance and protection according to their threat models and operational requirements.
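
For reference, the three security levels correspond to the FIPS 203 parameter sets ML-KEM-512, ML-KEM-768 and ML-KEM-1024. The table and helper below simply summarize the standardized sizes for planning purposes; this is not an interface to Windows CNG, which is where the Canary build actually exposes these schemes.

# Reference table of the three ML-KEM parameter sets standardized in FIPS 203.
# Sizes are in bytes. This is a planning summary only, not the Windows CNG API.
ML_KEM_PARAMS = {
    "ML-KEM-512":  {"security_category": 1, "encaps_key": 800,  "ciphertext": 768,  "shared_secret": 32},
    "ML-KEM-768":  {"security_category": 3, "encaps_key": 1184, "ciphertext": 1088, "shared_secret": 32},
    "ML-KEM-1024": {"security_category": 5, "encaps_key": 1568, "ciphertext": 1568, "shared_secret": 32},
}

def handshake_overhead(param_set: str) -> int:
    """Bytes added to a key exchange: one encapsulation key out, one ciphertext back."""
    p = ML_KEM_PARAMS[param_set]
    return p["encaps_key"] + p["ciphertext"]

for name in ML_KEM_PARAMS:
    print(f"{name}: ~{handshake_overhead(name)} bytes on the wire per key exchange")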

NVIDIA and Microsoft Advance Development on RTX AI PCs

Generative AI is transforming PC software into breakthrough experiences - from digital humans to writing assistants, intelligent agents and creative tools. NVIDIA RTX AI PCs are powering this transformation with technology that makes it simpler to get started experimenting with generative AI and unlock greater performance on Windows 11. NVIDIA TensorRT has been reimagined for RTX AI PCs, combining industry-leading TensorRT performance with just-in-time, on-device engine building and an 8x smaller package size for seamless AI deployment to more than 100 million RTX AI PCs.

Announced at Microsoft Build, TensorRT for RTX is natively supported by Windows ML - a new inference stack that provides app developers with both broad hardware compatibility and state-of-the-art performance. For developers looking for AI features ready to integrate, NVIDIA software development kits (SDKs) offer a wide array of options, from NVIDIA DLSS to multimedia enhancements like NVIDIA RTX Video. This month, top software applications from Autodesk, Bilibili, Chaos, LM Studio and Topaz Labs are releasing updates to unlock RTX AI features and acceleration.
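
The article does not show Windows ML itself, but the hardware-dispatch idea it describes resembles execution-provider selection in the standard ONNX Runtime Python API. The sketch below uses that API purely as an analogy - it is not the Windows ML API, "model.onnx" is a placeholder path, and the float32 input is an assumption about the model.

# Hardware-dispatch illustration using the standard ONNX Runtime Python API.
# This is an analogy for the execution-provider idea, not Windows ML itself.
import numpy as np
import onnxruntime as ort

# Prefer the fastest provider available on this machine, falling back to CPU.
preferred = ["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in ort.get_available_providers()]

session = ort.InferenceSession("model.onnx", providers=providers)  # placeholder path

inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # fill dynamic dims with 1
outputs = session.run(None, {inp.name: np.zeros(shape, dtype=np.float32)})
print("executed with:", session.get_providers()[0])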

Cadence Accelerates Physical AI Applications with Tensilica NeuroEdge 130 AI Co-Processor

Cadence today announced the Cadence Tensilica NeuroEdge 130 AI Co-Processor (AICP), a new class of processor designed to complement any neural processing unit (NPU) and enable end-to-end execution of the latest agentic and physical AI networks on advanced automotive, consumer, industrial and mobile SoCs. Based on the proven architecture of the highly successful Tensilica Vision DSP family, the NeuroEdge 130 AICP delivers more than 30% area savings and over 20% savings in dynamic power and energy without impacting performance. It also leverages the same software, AI compilers, libraries and frameworks to deliver faster time to market. Multiple customer engagements are currently underway, and customer interest is strong.

"With the rapid proliferation of AI processing in physical AI applications such as autonomous vehicles, robotics, drones, industrial automation and healthcare, NPUs are assuming a more critical role," said Karl Freund, founder and principal analyst of Cambrian AI Research. "Today, NPUs handle the bulk of the computationally intensive AI/ML workloads, but a large number of non-MAC layers include pre- and post-processing tasks that are better offloaded to specialized processors. However, current CPU, GPU and DSP solutions involve tradeoffs, and the industry needs a low-power, high-performance solution that is optimized for co-processing and allows future proofing for rapidly evolving AI processing needs."

NVIDIA's Project G-Assist Plug-In Builder Explained: Anyone Can Customize AI on GeForce RTX AI PCs

AI is rapidly reshaping what's possible on a PC—whether for real-time image generation or voice-controlled workflows. As AI capabilities grow, so does their complexity. Tapping into the power of AI can entail navigating a maze of system settings, software and hardware configurations. Enabling users to explore how on-device AI can simplify and enhance the PC experience, Project G-Assist—an AI assistant that helps tune, control and optimize GeForce RTX systems—is now available as an experimental feature in the NVIDIA app. Developers can try out AI-powered voice and text commands for tasks like monitoring performance, adjusting settings and interacting with supporting peripherals. Users can even summon other AIs powered by GeForce RTX AI PCs.

And it doesn't stop there. For those looking to expand Project G-Assist capabilities in creative ways, the AI supports custom plug-ins. With the new ChatGPT-based G-Assist Plug-In Builder, developers and enthusiasts can create and customize G-Assist's functionality, adding new commands, connecting external tools and building AI workflows tailored to specific needs. With the plug-in builder, users can generate properly formatted code with AI, then integrate the code into G-Assist—enabling quick, AI-assisted functionality that responds to text and voice commands.
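
The plug-in format itself is not reproduced in the article, so the following is a purely hypothetical sketch of the kind of text-command dispatcher a G-Assist-style plug-in would wrap; none of these names are NVIDIA APIs, and the real format is defined by the G-Assist Plug-In Builder and its documentation.

# Hypothetical sketch of a text/voice command dispatcher, as a plug-in might
# implement one. None of these names are NVIDIA APIs.
from typing import Callable

COMMANDS: dict[str, Callable[[str], str]] = {}

def command(name: str):
    """Register a handler for a text or voice command."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        COMMANDS[name] = fn
        return fn
    return register

@command("gpu_temp")
def gpu_temp(_: str) -> str:
    # A real plug-in would query a telemetry tool here (placeholder value).
    return "GPU temperature: 62 C"

@command("set_fan")
def set_fan(arg: str) -> str:
    # A real plug-in would call into vendor tuning software here.
    return f"Fan curve set to '{arg or 'default'}' profile"

def handle(utterance: str) -> str:
    """Route 'command arg...' style requests to the registered handler."""
    name, _, arg = utterance.partition(" ")
    handler = COMMANDS.get(name)
    return handler(arg) if handler else f"Unknown command: {name}"

print(handle("gpu_temp"))
print(handle("set_fan quiet"))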

Thousands of NVIDIA Grace Blackwell GPUs Now Live at CoreWeave

CoreWeave today became one of the first cloud providers to bring NVIDIA GB200 NVL72 systems online for customers at scale, and AI frontier companies Cohere, IBM and Mistral AI are already using them to train and deploy next-generation AI models and applications. CoreWeave, the first cloud provider to make NVIDIA Grace Blackwell generally available, has already shown incredible results in MLPerf benchmarks with NVIDIA GB200 NVL72 - a powerful rack-scale accelerated computing platform designed for reasoning and AI agents. Now, CoreWeave customers are gaining access to thousands of NVIDIA Blackwell GPUs.

"We work closely with NVIDIA to quickly deliver to customers the latest and most powerful solutions for training AI models and serving inference," said Mike Intrator, CEO of CoreWeave. "With new Grace Blackwell rack-scale systems in hand, many of our customers will be the first to see the benefits and performance of AI innovators operating at scale."

AMD Instinct GPUs are Ready to Take on Today's Most Demanding AI Models

Customers evaluating AI infrastructure today rely on a combination of industry-standard benchmarks and real-world model performance metrics—such as those from Llama 3.1 405B, DeepSeek-R1, and other leading open-source models—to guide their GPU purchase decisions. At AMD, we believe that delivering value across both dimensions is essential to driving broader AI adoption and real-world deployment at scale. That's why we take a holistic approach—optimizing performance for rigorous industry benchmarks like MLPerf while also enabling Day 0 support and rapid tuning for the models most widely used in production by our customers.

This strategy helps ensure AMD Instinct GPUs deliver not only strong, standardized performance, but also high-throughput, scalable AI inferencing across the latest generative and language models used by customers. We will explore how AMD's continued investment in benchmarking, open model enablement, software and ecosystem tools helps unlock greater value for customers—from MLPerf Inference 5.0 results to Llama 3.1 405B and DeepSeek-R1 performance, ROCm software advances, and beyond.

EA Presented AI-enriched Development Tools at GDC 2025

Technology's impact on interactive entertainment experiences is nothing short of remarkable. AI and machine learning are allowing our creators to explore new ways to test games, build worlds, and create deeper, more immersive experiences at a faster rate without sacrificing quality. Our artists are empowered to create more realistic, dynamic animations and visuals at a greater scale, delivering more cinematic moments that draw us in and leave us stunned - or completely immersed in the most enthralling environments possible, surrounded by and interacting with authentic, high-fidelity characters.

Above all, these advancements connect and offer millions of EA's players and fans around the globe new ways to play, create, watch, and connect in and beyond our games like never before. At the recently concluded Game Developers Conference (GDC), EA's brilliant minds showcased some of these technological innovations, highlighting how they empower our developers to craft extraordinary interactive entertainment experiences. Their talks provided insight into the cutting-edge tools and techniques driving the future of game development at EA, and the positive impact this will have on our developers, players and fans across the globe.

AMD Introduces GAIA - an Open-Source Project That Runs Local LLMs on Ryzen AI NPUs

AMD has launched a new open-source project called GAIA (pronounced /ˈɡaɪ.ə/), an awesome application that leverages the power of the Ryzen AI Neural Processing Unit (NPU) to run private, local large language models (LLMs). In this blog, we'll dive into the features and benefits of GAIA and show how you can adopt the open-source project into your own applications.

Introduction to GAIA
GAIA is a generative AI application designed to run local, private LLMs on Windows PCs and is optimized for AMD Ryzen AI hardware (AMD Ryzen AI 300 Series Processors). This integration allows for faster, more efficient processing - i.e., lower power - while keeping your data local and secure. On Ryzen AI PCs, GAIA interacts with the NPU and iGPU to run models seamlessly by using the open-source Lemonade (LLM-Aid) SDK from ONNX TurnkeyML for LLM inference. GAIA supports a variety of local LLMs optimized to run on Ryzen AI PCs. Popular models like Llama and Phi derivatives can be tailored for different use cases, such as Q&A, summarization, and complex reasoning tasks.

Google Making Vulkan the Official Graphics API on Android

We're stepping up our multiplatform gaming offering with exciting news dropping at this year's Game Developers Conference (GDC). We're bringing users more games, more ways to play your games across devices, and improved gameplay. You can read all about the updates for users from The Keyword. At GDC, we'll be diving into all of the latest games coming to Play, plus new developer tools that'll help improve gameplay across the Android ecosystem.

We're sharing a closer look at what's new from Android. We're making Vulkan the official graphics API on Android, enabling you to build immersive visuals, and we're enhancing the Android Dynamic Performance Framework (ADPF) to help you deliver longer, more stable play sessions. Check out our video, or keep reading below.

ASUS Introduces New "AI Cache Boost" BIOS Feature - R&D Team Claims Performance Uplift

Large language models (LLMs) love large quantities of memory—so much so, in fact, that AI enthusiasts are turning to multi-GPU setups to make even more VRAM available for their AI apps. But since many current LLMs are extremely large, even this approach has its limits. At times, the GPU will decide to make use of CPU processing power for this data, and when it does, the performance of your CPU cache and DRAM comes into play. All this means that when it comes to the performance of AI applications, it's not just the GPU that matters, but the entire pathway that connects the GPU to the CPU to the I/O die to the DRAM modules. It stands to reason, then, that there are opportunities to boost AI performance by optimizing these elements.
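
A rough way to see when that CPU-side pathway starts to matter is to estimate how many model layers fit in VRAM and how many spill to system memory. All figures in the sketch below (model size, layer count, VRAM budget, reserve for the KV cache) are illustrative assumptions rather than measurements; once any layers spill, CPU cache and DRAM performance sits on the critical path for every token, which is the bottleneck ASUS says AI Cache Boost targets.

# Rough estimate of how a large LLM splits between VRAM and system memory.
# All figures are illustrative assumptions, not measurements.
def split_layers(model_gb: float, n_layers: int, vram_budget_gb: float) -> tuple[int, int]:
    """Return (layers resident in VRAM, layers spilled to CPU/DRAM)."""
    per_layer_gb = model_gb / n_layers
    on_gpu = min(n_layers, int(vram_budget_gb / per_layer_gb))
    return on_gpu, n_layers - on_gpu

# Example: a ~40 GB quantized model with 80 layers on a 24 GB GPU,
# reserving a few GB for the KV cache and activations.
gpu_layers, cpu_layers = split_layers(model_gb=40, n_layers=80, vram_budget_gb=24 - 4)
print(f"{gpu_layers} layers in VRAM, {cpu_layers} layers served from CPU/DRAM")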

That's exactly what we've found as we've spent time in our R&D labs with the latest AMD Ryzen CPUs. AMD just launched two new Ryzen CPUs with AMD 3D V-Cache Technology, the AMD Ryzen 9 9950X3D and Ryzen 9 9900X3D, pushing the series into new performance territory. After testing a wide range of optimizations in a variety of workloads, we uncovered a range of settings that offer tangible benefits for AI enthusiasts. Now, we're ready to share these optimizations with you through a new BIOS feature: AI Cache Boost. Available through an ASUS AMD 800 Series motherboard and our most recent firmware update, AI Cache Boost can accelerate performance up to 12.75% when you're working with massive LLMs.

AMD Recommends EPYC Processors for Everyday AI Server Tasks

Ask a typical IT professional today whether they're leveraging AI, and there's a good chance they'll say yes - after all, they have reputations to protect! Kidding aside, many will report that their teams use web-based tools like ChatGPT or even have internal chatbots serving employees on their intranet, but that not much AI is really being implemented at the infrastructure level. As it turns out, the true answer is a bit different. AI tools and techniques have embedded themselves firmly into standard enterprise workloads and are a more common, everyday phenomenon than even many IT people may realize. Assembly line operations now include computer vision-powered inspections. Supply chains use AI for demand forecasting to make business move faster, and of course, AI note-taking and meeting summarization is embedded in virtually every variant of collaboration and meeting software.

Increasingly, critical enterprise software tools incorporate built-in recommendation systems, virtual agents or some other form of AI-enabled assistance. AI is truly becoming a pervasive, complementary tool for everyday business. At the same time, today's enterprises are navigating a hybrid landscape where traditional, mission-critical workloads coexist with innovative AI-driven tasks. This "mixed enterprise and AI" workload environment calls for infrastructure that can handle both types of processing seamlessly. Robust, general-purpose CPUs like the AMD EPYC processors are designed to be powerful, secure, and flexible enough to address this need. They handle everyday tasks—running databases, web servers, ERP systems—and offer strong security features crucial for enterprise operations augmented with AI workloads. In essence, modern enterprise infrastructure is about creating a balanced ecosystem. AMD EPYC CPUs play a pivotal role in creating this balance, delivering high performance, efficiency, and security features that underpin both traditional enterprise workloads and advanced AI operations.

EA Details How ML & AI Bolstered Development of Latest Madden & College Football Titles

On June 1, 1988, the very first Madden video game was released to the world. Players needed to load up either a Commodore 64/Commodore 128, an Apple II, or an MS-DOS machine to launch the game. When they did, they were greeted with 8-bit animations of the NFL's most popular teams and found themselves controlling their favorite players to try and win themselves a Super Bowl. And at that time, it was amazing. Thirty-seven years later, EA SPORTS hasn't stopped advancing Madden and our American Football games.

Most recently, we launched EA SPORTS Madden NFL 25 and College Football 25, which are tentpoles of our beloved American Football Ecosystem. Yet our football games are no longer blocky pixels and four-directional controls. They're among the most realistic sports simulation titles on the planet. We even celebrated the recent Super Bowl weekend with these titles and our very own Madden Bowl, featuring championship games and incredible music all in the heart of New Orleans. This is in no small part due to the incredible teams and their mission to make our games better every single year. And technology plays a critical role in making this happen.

Arm Intros Cortex-A320 Armv9 CPU for IoT and Edge AI Applications

Arm's new Cortex-A320 is the company's first ultra-efficient CPU built on the advanced Armv9 architecture and dedicated to the needs of IoT and edge AI applications. The processor achieves over 50% higher efficiency compared to the Cortex-A520 through several microarchitecture optimizations, including a narrow fetch and decode data path, densely banked L1 caches, and a reduced-port integer register file. It also delivers 30% improved scalar performance compared with its predecessor, the Cortex-A35, via efficient branch predictors, pre-fetchers, and memory system improvements.

The Cortex-A320 is a single-issue, in-order CPU with a 32-bit instruction fetch and 8-stage pipeline. The processor offers scalability by supporting single-core to quad-core configurations. It features DSU-120T, a streamlined DynamIQ Shared Unit (DSU) which enables Cortex-A320-only clusters. Cortex-A320 supports up to 64 KB L1 caches and up to 512 KB L2, with a 256-bit AMBA5 AXI interface to external memory. The L2 cache and the L2 TLB can be shared between the Cortex-A320 CPUs. The vector processing unit, which implements the NEON and SVE2 SIMD (Single Instruction, Multiple Data) technologies, can be either private in a single core complex or shared between cores in dual-core or quad-core implementations.

SanDisk Develops HBM Killer: High-Bandwidth Flash (HBF) Allows 4 TB of VRAM for AI GPUs

During its first post-Western Digital spinoff investor day, SanDisk showed something it has been working on to tackle the AI sector. High-bandwidth flash (HBF) is a new memory architecture that combines 3D NAND flash storage with bandwidth capabilities comparable to high-bandwidth memory (HBM). The HBF design stacks 16 3D NAND BiCS8 dies using through-silicon vias, with a logic layer enabling parallel access to memory sub-arrays. This configuration achieves 8 to 16 times greater capacity per stack than current HBM implementations. A system using eight HBF stacks can provide 4 TB of VRAM to store large AI models like GPT-4 directly on GPU hardware. The architecture breaks from conventional NAND design by implementing independently accessible memory sub-arrays, moving beyond traditional multi-plane approaches. While HBF surpasses HBM's capacity specifications, it maintains higher latency than DRAM, limiting its application to specific workloads.
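
The capacity claim is easy to sanity-check from the figures given: 4 TB across eight stacks of sixteen dies implies the per-stack and per-die capacities computed below. This is a straight division of the stated numbers; the HBM3E comparison figure of 36 GB per stack (today's largest configuration) is added only for context.

# Sanity-check of the stated HBF capacity figures: 4 TB of VRAM from eight
# stacks, each stacking sixteen BiCS8 NAND dies. Straight division only.
TOTAL_TB = 4
STACKS = 8
DIES_PER_STACK = 16

per_stack_gb = TOTAL_TB * 1024 / STACKS        # ~512 GB per HBF stack
per_die_gb = per_stack_gb / DIES_PER_STACK     # ~32 GB (256 Gbit) per die
print(f"{per_stack_gb:.0f} GB per stack, {per_die_gb:.0f} GB per die")

# Compared with a 36 GB HBM3E stack, 512 GB per stack is roughly 14x,
# in line with the "8 to 16 times greater capacity" claim above.
print(f"vs. 36 GB HBM3E stack: {per_stack_gb / 36:.1f}x")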

SanDisk has not disclosed its solution for NAND's inherent write endurance limitations, though using pSLC NAND makes it possible to balance durability and cost. The bandwidth of HBF is also unknown, as the company has not yet released details. SanDisk Memory Technology Chief Alper Ilkbahar confirmed the technology targets read-intensive AI inference tasks rather than latency-sensitive applications. The company is developing HBF as an open standard, incorporating mechanical and electrical interfaces similar to HBM to simplify integration. Some challenges remain, including NAND's block-level addressing limitations and write endurance constraints. While these factors make HBF unsuitable for gaming applications, the technology's high capacity and throughput characteristics align with AI model storage and inference requirements. SanDisk has announced plans for three generations of HBF development, indicating a long-term commitment to the technology.

OnLogic Reveals the Axial AX300 Edge Server

OnLogic, a leading provider of edge computing solutions, has launched the Axial AX300, a highly customizable and powerful edge server. The AX300 is engineered to help businesses of any size better leverage their on-site data and unlock the potential of AI by placing powerful computing capabilities on-site.

The Axial AX300 empowers organizations to seamlessly move computing resources closer to the data source, providing significant advantages in performance, latency, operational efficiency, and total cost of ownership over cloud-based data management. With its robust design, flexible configuration options, and advanced security features, the Axial AX300 is the ideal platform for a wide range of highly-impactful edge computing applications, including:
  • AI/ML inference and training: Leveraging the power of AI/ML at the edge for real-time insights, predictive maintenance, and improved decision-making.
  • Data analytics: Processing and analyzing data generated by IoT devices and sensors in real-time to improve operational efficiency.
  • Virtualization: Consolidating multiple workloads onto a single server, optimizing resource utilization and simplifying deployment and management.

Supermicro Empowers AI-driven Capabilities for Enterprise, Retail, and Edge Server Solutions

Supermicro, Inc. (SMCI), a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is showcasing the latest solutions for the retail industry in collaboration with NVIDIA at the National Retail Federation (NRF) annual show. As generative AI (GenAI) grows in capability and becomes more easily accessible, retailers are leveraging NVIDIA NIM microservices, part of the NVIDIA AI Enterprise software platform, for a broad spectrum of applications.

"Supermicro's innovative server, storage, and edge computing solutions improve retail operations, store security, and operational efficiency," said Charles Liang, president and CEO of Supermicro. "At NRF, Supermicro is excited to introduce retailers to AI's transformative potential and to revolutionize the customer's experience. Our systems here will help resolve day-to-day concerns and elevate the overall buying experience."

Graid Technology Unveils SupremeRAID(TM) AE: The AI Edition Designed for GPU-Driven AI Workloads

Graid Technology, the global leader in innovative storage performance solutions, is proud to announce the launch of SupremeRAID AE (AI Edition), the most resilient RAID data protection solution for enterprises leveraging GPU servers and AI workloads. Featuring GPUDirect Storage support and an intelligent data offload engine, SupremeRAID AE redefines how AI applications manage data, delivering unmatched performance, flexibility, and efficiency.

SupremeRAID AE's cutting-edge technology empowers organizations to accelerate AI workflows by reducing data access latency and increasing I/O efficiency, while protecting mission-critical datasets with enterprise-grade reliability. Its seamless scalability enables enterprises to meet future AI demands without overhauling existing infrastructure. Designed for a wide range of users, SupremeRAID AE benefits AI/ML teams by delivering faster training and inference for data-intensive models, enterprises with GPU servers by optimizing GPU performance for critical workloads, and data scientists and researchers by providing seamless access to vast datasets without bottlenecks. IT teams also gain resilient, scalable RAID storage that integrates effortlessly into existing systems without requiring additional hardware.

Emotiv Launches MW20 EEG Active Noise-Cancelling Earphones at CES

Emotiv, a global leader in EEG technology, announces its next-generation EEG Active Noise-Cancelling Earphones. These smart earphones enhance personal wellness by integrating advanced EEG technology to provide insights into cognitive performance and overall well-being—alongside exceptional sound quality.

Building on Emotiv's MN8 earphones launched in 2018 (the world's first EEG-enabled earphones), the MW20 marks the next evolution of wearable technology. Designed with precision, the product merges premium audio with neurotechnology to deliver actionable wellness insights and BCI capabilities in an intuitive form factor. Made of machined aluminium and sapphire glass, the earphones feature an ergonomic design engineered for optimal fit and precise acoustics.

SPEC Delivers Major SPECworkstation 4.0 Benchmark Update, Adds AI/ML Workloads

The Standard Performance Evaluation Corporation (SPEC), the trusted global leader in computing benchmarks, today announced the availability of the SPECworkstation 4.0 benchmark, a major update to SPEC's comprehensive tool designed to measure all key aspects of workstation performance. This significant upgrade from version 3.1 incorporates cutting-edge features to keep pace with the latest workstation hardware and the evolving demands of professional applications, including the increasing reliance on data analytics, AI and machine learning (ML).

The new SPECworkstation 4.0 benchmark provides a robust, real-world measure of CPU, graphics, accelerator, and disk performance, ensuring professionals have the data they need to make informed decisions about their hardware investments. The benchmark caters to the diverse needs of engineers, scientists, and developers who rely on workstation hardware for daily tasks. It includes real-world applications like Blender, Handbrake, LLVM and more, providing a comprehensive performance measure across seven different industry verticals, each focusing on specific use cases and subsystems critical to workstation users. The SPECworkstation 4.0 benchmark marks a significant milestone for measuring workstation AI performance, providing an unbiased, real-world, application-driven tool for measuring how workstations handle AI/ML workloads.

Amazon AWS Announces General Availability of Trainium2 Instances, Reveals Details of Next Gen Trainium3 Chip

At AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company, today announced the general availability of AWS Trainium2-powered Amazon Elastic Compute Cloud (Amazon EC2) instances and introduced new Trn2 UltraServers, which enable customers to train and deploy today's latest AI models as well as future large language models (LLMs) and foundation models (FMs) with exceptional levels of performance and cost efficiency. The company also unveiled its next-generation Trainium3 chips.

"Trainium2 is purpose built to support the largest, most cutting-edge generative AI workloads, for both training and inference, and to deliver the best price performance on AWS," said David Brown, vice president of Compute and Networking at AWS. "With models approaching trillions of parameters, we understand customers also need a novel approach to train and run these massive workloads. New Trn2 UltraServers offer the fastest training and inference performance on AWS and help organizations of all sizes to train and deploy the world's largest models faster and at a lower cost."

US to Implement Semiconductor Restrictions on Chinese Equipment Makers

The Biden administration is set to announce new, targeted restrictions on China's semiconductor industry, focusing primarily on emerging chip manufacturing equipment companies rather than broad industry-wide limitations. According to Bloomberg, these new restrictions are expected to take effect on Monday. The new rules will specifically target two manufacturing facilities owned by Semiconductor Manufacturing International Corp. (SMIC) and will add select companies to the US Entity List, restricting their access to American technology. However, most of Huawei's suppliers can continue their operations, suggesting a milder approach. The restrictions will focus on over 100 emerging Chinese semiconductor equipment manufacturers, many of which receive government funding. These companies are developing tools intended to replace those currently supplied by industry leaders such as ASML, Applied Materials, and Tokyo Electron.

The moderated approach comes after significant lobbying efforts from American semiconductor companies, who argued that stricter restrictions could disadvantage them against international competitors. Major firms like Applied Materials, KLA, and Lam Research voiced concerns about losing market share to companies in Japan and the Netherlands, where similar but less stringent export controls are in place. Notably, Japanese companies like SUMCO are already seeing the revenue impacts of Chinese independence. Lastly, the restrictions will have a limited effect on China's memory chip sector. The new measures will not directly affect ChangXin Memory Technologies (CXMT), a significant Chinese DRAM manufacturer capable of producing high-bandwidth memory for AI applications.

Interview with RISC-V International: High-Performance Chips, AI, Ecosystem Fragmentation, and The Future

RISC-V is an industry standard instruction set architecture (ISA) born at UC Berkeley. RISC-V is the fifth iteration in the lineage of historic RISC processors. The core value of the RISC-V ISA is the freedom of usage it offers. Any organization can leverage the ISA to design the best possible core for their specific needs, with no regional restrictions or licensing costs. This freedom attracts a massive ecosystem of developers and companies building systems using the RISC-V ISA. To support these efforts and grow the ecosystem, the brains behind RISC-V decided to form RISC-V International—a non-profit foundation that governs the ISA and guides the ecosystem.

We had the privilege of talking with Andrea Gallo, Vice President of Technology at RISC-V International. Andrea oversees the technological advancement of RISC-V, collaborating with vendors and institutions to overcome challenges and expand its global presence. Andrea's career in technology spans several influential roles at major companies. Before joining RISC-V International, he worked at Linaro, where he pioneered Arm data center engineering initiatives, later overseeing diverse technological sectors as Vice President of Segment Groups, and ultimately managing crucial business development activities as executive Vice President. During his earlier tenure as a Fellow at ST-Ericsson, he focused on smartphone and application processor technology, and at STMicroelectronics he optimized hardware-software architectures and established international development teams.

Emteq Labs Unveils World's First Emotion-Sensing Eyewear

Emteq Labs, the market leader in emotion-recognition wearable technology, today announced the forthcoming introduction of Sense, the world's first emotion-sensing eyewear. Alongside the unveiling of Sense, the company is pleased to announce the appointment of Steen Strand, former head of the hardware division of Snap Inc., as its new Chief Executive Officer.

Over the past decade, Emteq Labs - led by renowned surgeon and facial musculature expert Dr. Charles Nduka - has been at the forefront of engineering advanced technologies for sensing facial movements and emotions. This data has significant implications for health and well-being, but has never been available outside of a laboratory, healthcare facility, or other controlled setting. Now, Emteq Labs has developed Sense: a patented, AI-powered eyewear platform that provides lab-quality insights in real life and in real time. This includes comprehensive measurement and analysis of the wearer's facial expressions, dietary habits, mood, posture, attention levels, physical activity, and additional health-related metrics.