News Posts matching #next generation


Wreckfest 2 Smashes into Early Access, Bugbear & THQ Nordic Reveal Launch System Requirements

Vienna, Austria / Helsinki, Finland, March 20th, 2025: Ladies and Gentlemen, the next generation of demolition derby madness is here. Wreckfest 2 is crashing into Early Access on Steam today, bringing next-level car destruction, slicker graphics, improved physics, and a whole lot of new features to fuel your inner petrolhead. And yes, it's once again developed by Bugbear Entertainment—the masters of metal-mangling mayhem.

Just like its predecessor, Wreckfest 2 will keep evolving throughout Early Access, with fresh content and features rolling in regularly. Bugbear and THQ Nordic are all about making this a game for the players, shaped by the players. We want your feedback: tell us what you love, what you want more of, and what crazy ideas we absolutely need to implement. We've got a truckload of ideas (and an actual truck), but we want to hear which ones you like the most!

MSI Outlines Claw 8 AI+ & Claw 7 AI+ Upgrades, Based on Original Claw User Feedback

The first MSI Claw was designed for ergonomic comfort and seamless gaming across platforms like Steam, Ubisoft, and Xbox. It even handled mobile games effortlessly. Now, thanks to valuable feedback from our community, we've made significant improvements in both hardware and software for the next generation—introducing the Claw 8 AI+ and Claw 7 AI+, powered by the latest Intel Lunar Lake processor for enhanced performance and efficiency.

Key Upgrades Based on Community Feedback
1) Enhanced Connectivity with Dual Thunderbolt 4 Ports. Both the Claw 8 AI+ and Claw 7 AI+ now feature two Thunderbolt 4 ports, allowing you to connect an external SSD without unplugging the power cord. This ensures greater flexibility and seamless gameplay.

HP Announces a Wide Range of New Products at its Amplify Conference

At its annual Amplify Conference, HP Inc. today announced new products and services designed to shape the future of work, empowering people and businesses to create and manage their own way of working. The company unveiled more than 80 PCs, AI-powered print tools for SMBs, and Workforce Experience Platform enhancements, all built to drive company growth and professional fulfillment.

"HP is translating AI into meaningful experiences that drive growth and fulfillment," said Enrique Lores, President and CEO at HP Inc. "We are shaping the future of work with game-changing AI innovations that seamlessly adapt to how people want to work."

Ubisoft Summarizes Rainbow Six Siege X Showcase, Announces June 10 Release

The next evolution of Rainbow Six Siege was revealed today at the Siege X Showcase. Launching on June 10, Siege X will introduce Dual Front, a dynamic new 6v6 game mode, as well as deliver foundational upgrades to the core game (including visual enhancements, an audio overhaul, rappel upgrades, and more) alongside revamped player protection systems, and free access that will allow players to experience the unique tactical action of Rainbow Six Siege at no cost. Plus, from now through March 19, a free Dual Front closed beta is live on PC via Ubisoft Connect, PS5, and Xbox Series X|S, giving players a first chance to play the exciting new mode. Read on to find out how to get into the beta and try Dual Front for yourself.

Dual Front
Taking place on an entirely new map called District, Dual Front is a new mode that pits two teams of six Operators against each other in a fight to attack enemy sectors while defending their own. Players can choose from a curated roster of 35 Operators, both Attackers and Defenders, that will rotate twice per season. During each match, two objective points are live at all times, one in each team's lane; teams must plant a sabotage kit (akin to a defuser) in the opposing team's objective room and defend it in order to capture the sector and progress towards the final objective: the Base. Sabotage the Base to claim victory, but don't forget to defend your own sector, lest your foes progress faster than you and beat you to it.

Getac Introduces Next-gen AI-ready B360 and B360 Pro "Fully Rugged" Laptops

Getac Technology Corporation (Getac), a leading provider of rugged computing and mobile video solutions, today announced the launch of its next generation B360 and B360 Pro fully rugged laptops, offering professionals across industries such as field services, utilities, and defense two powerful yet versatile solutions to overcome the daily challenges they face.

Next generation AI-ready performance
The next generation B360 and B360 Pro combine fully rugged build quality with a host of innovative new technology upgrades. These include the latest Intel Core Ultra Series 2 processors and Intel AI Boost technology, which enables users to leverage on-device Edge AI to quickly and seamlessly execute tasks. In a recent text-to-report evaluation test conducted with Getac industry customers using Llama 3.1 8B, AI applications running on the B360 were able to turn extensive texts into full reports in a matter of seconds. This powerful Edge AI performance offers significant operational advantages over cloud AI, particularly in scenarios requiring real-time processing, high levels of data privacy and security, offline capability, and cost efficiency.

Physical SIM Support Reportedly in the Balance for Ultra-thin Smartphones w/ Snapdragon 8 Elite Gen 2 SoCs

According to Digital Chat Station—a repeat leaker of unannounced Qualcomm hardware—unnamed Android smartphone manufacturers are considering an eSIM-only operating model for future flagship devices. Starting with the iPhone 14 generation (2022), Apple has continued to deliver gadgets that are not reliant on "slotted-in" physical SIM cards. According to industry insiders, competitors could copy the market leader's homework—Digital Chat Station's latest Weibo blog post discusses the space-saving benefits of eSIM operation; being "conducive to lightweight and integrated design." Forthcoming top-tier slimline Android mobile devices are tipped to utilize Qualcomm's rumored second-generation "Snapdragon 8 Elite Gen 2" (SM8850) chipset.

Digital Chat Station reckons that: "SM8850 series phones at the end of the year are testing eSIM. Whether they can be implemented in China is still a question mark. Let's wait and see the iPhone 17 Air. In order to have an ultra-thin body, this phone directly cancels the physical SIM card slot. Either it will be a special phone for the domestic market, or it will get eSIM." The phasing out of physical SIM cards within the Chinese mobile market could be a tricky prospect for local OEMs, but reports suggest that "traditionally-dimensioned" flagship offerings will continue to support the familiar subscriber identity module standard. Physical SIM card purists often point out that the format still provides superior network support range.

RayNeo Unveils Next-Gen XR Glasses RayNeo Air 3s at MWC 2025

RayNeo, a leading innovator in consumer Augmented Reality (AR) technology, has unveiled its latest XR glasses, the RayNeo Air 3s, at MWC 2025. Alongside this groundbreaking release, the company showcased two other innovations: the AI-powered, Full-Color AR Glasses RayNeo X3 Pro and the Camera AI Glasses RayNeo V3. Together, these cutting-edge devices underscore RayNeo's unwavering commitment to redefining immersive experiences and enhancing everyday usability through advanced AR solutions.

RayNeo Air 3s: Lightweight XR for Seamless Daily Use
The RayNeo Air 3s redefines the landscape of lightweight XR glasses, seamlessly merging portability with state-of-the-art display technology. Equipped with 3840 Hz high-frequency dimming, a staggering 200,000:1 contrast ratio, and a 154% sRGB color gamut, the Air 3s delivers breathtaking image quality, setting a new benchmark for birdbath display solutions. Its expansive 201-inch virtual screen and TÜV Rheinland-certified eye comfort technology ensure an immersive yet comfortable experience, making it ideal for all-day wear.

Lenovo Delivers Unmatched Flexibility, Performance and Design with New ThinkSystem V4 Servers Powered by Intel Xeon 6 Processors

Today, Lenovo announced three new infrastructure solutions, powered by Intel Xeon 6 processors, designed to modernize and elevate data centers of any size to AI-enabled powerhouses. The solutions include next generation Lenovo ThinkSystem V4 servers that deliver breakthrough performance and exceptional versatility to handle any workload while enabling powerful AI capabilities in compact, high-density designs. Whether deploying at the edge, co-locating or leveraging a hybrid cloud, Lenovo is delivering the right mix of solutions that seamlessly unlock intelligence and bring AI wherever it is needed.

The new Lenovo ThinkSystem servers are purpose-built to run the widest range of workloads, including the most compute-intensive, from algorithmic trading to web serving, astrophysics to email, and CRM to CAE. Organizations can streamline management and boost productivity with the new systems, achieving up to 6.1x higher compute performance than previous generation CPUs with Intel Xeon 6 with P-cores, and up to 2x the memory bandwidth when using new MRDIMM technology, to scale and accelerate AI everywhere.

Synopsys Expands Its Hardware-Assisted Verification (HAV) Portfolio for Next-Gen Semiconductors

Synopsys, Inc. today announced the expansion of its industry-leading hardware-assisted verification (HAV) portfolio with new HAPS prototyping and ZeBu emulation systems using the latest AMD Versal Premium VP1902 adaptive SoC. The next generation HAPS-200 prototyping and ZeBu-200 emulation systems deliver faster runtime performance, shorter compile times, and improved debug productivity. They are built on new Synopsys Emulation and Prototyping (EP-Ready) Hardware that optimizes customer return on investment by enabling emulation and prototyping use cases via reconfiguration and optimized software. ZeBu Server 5 is enhanced to deliver industry-leading scalability beyond 60 billion gates (BG) to address the escalating hardware and software complexity in SoC and multi-die designs. It continues to offer industry-best density to optimize data center space utilization.

"With the industry approaching 100s of billions of gates per chip and 100s of millions of lines of software code in SoC and multi-die solutions, verification of advanced designs poses never-before seen challenges," said Ravi Subramanian, chief product management officer, Synopsys. "Continuing our strong partnership with AMD, our new systems deliver the highest HAV performance while offering the ultimate flexibility between prototyping and emulation use. Industry leaders are adopting Synopsys EP-Ready Hardware platforms for silicon to system verification and validation."

AMD & CEA Partner for AI Compute Advancements

AMD (NASDAQ: AMD) today announced the signing of a Letter of Intent (LOI) with the Commissariat à l'énergie atomique et aux énergies alternatives (CEA) of France to collaborate on the advanced technologies, component and system architectures that will shape the future of AI computing. The collaboration will leverage the strengths of both organizations to push the boundaries on energy-efficient systems needed to support the world's most compute-intensive AI workloads in fields from energy to medicine.

Through this initiative, AMD and CEA will engage in a structured collaboration, focused on technological advancements on next generation AI compute infrastructure. AMD and CEA also are planning a symposium on the future of AI compute in 2025 that will convene European stakeholders and global technology providers, startups, supercomputing centers, universities and policy makers to accelerate collaboration around state-of-the-art and emerging AI computing technologies.

CoreWeave Launches Debut Wave of NVIDIA GB200 NVL72-based Cloud Instances

AI reasoning models and agents are set to transform industries, but delivering their full potential at scale requires massive compute and optimized software. The "reasoning" process involves multiple models, generating many additional tokens, and demands infrastructure with a combination of high-speed communication, memory and compute to ensure real-time, high-quality results. To meet this demand, CoreWeave has launched NVIDIA GB200 NVL72-based instances, becoming the first cloud service provider to make the NVIDIA Blackwell platform generally available. With rack-scale NVIDIA NVLink across 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs, scaling to up to 110,000 GPUs with NVIDIA Quantum-2 InfiniBand networking, these instances provide the scale and performance needed to build and deploy the next generation of AI reasoning models and agents.

NVIDIA GB200 NVL72 on CoreWeave
NVIDIA GB200 NVL72 is a liquid-cooled, rack-scale solution with a 72-GPU NVLink domain, which enables the six dozen GPUs to act as a single massive GPU. NVIDIA Blackwell features many technological breakthroughs that accelerate inference token generation, boosting performance while reducing service costs. For example, fifth-generation NVLink enables 130 TB/s of GPU bandwidth in one 72-GPU NVLink domain, and the second-generation Transformer Engine enables FP4 for faster AI performance while maintaining high accuracy. CoreWeave's portfolio of managed cloud services is purpose-built for Blackwell. CoreWeave Kubernetes Service optimizes workload orchestration by exposing NVLink domain IDs, ensuring efficient scheduling within the same rack. Slurm on Kubernetes (SUNK) supports the topology block plug-in, enabling intelligent workload distribution across GB200 NVL72 racks. In addition, CoreWeave's Observability Platform provides real-time insights into NVLink performance, GPU utilization and temperatures.
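As a quick sanity check on the figures quoted above (the per-GPU number is our own illustrative arithmetic, not from the press release), 130 TB/s of aggregate NVLink bandwidth shared across a 72-GPU domain works out to roughly 1.8 TB/s per GPU:

```python
# Back-of-the-envelope check on the quoted GB200 NVL72 figures.
# Assumption: 130 TB/s is the aggregate bandwidth of the 72-GPU
# NVLink domain, as stated in the article above.
aggregate_nvlink_tbps = 130.0  # TB/s across the whole domain
gpus_in_domain = 72

per_gpu_tbps = aggregate_nvlink_tbps / gpus_in_domain
print(f"Per-GPU NVLink bandwidth: ~{per_gpu_tbps:.1f} TB/s")  # ~1.8 TB/s
```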

ASUS ROG Takes a Closer Look at Astral GeForce RTX 5090 & 5080 Models

The next generation of graphics performance has arrived. We've prepared an all-new series of cards: ROG Astral. Featuring a new, sophisticated design and an outstanding cooling solution, the ROG Astral GeForce RTX 5090 and ROG Astral GeForce RTX 5080 are your premium picks for supercharging the performance of your gaming PC.

All this new hardware in the ROG Astral GeForce RTX 5090 requires no small amount of power so that it can stretch its legs and run. Your PSU should be capable of at least 1000 W to run this card—more on that later. The circuitry that delivers this power is just as important, and it's one reason why many enthusiasts prefer ROG graphics cards. We've equipped the ROG Astral GeForce RTX 5090 and 5080 for premium power delivery with 80-amp MOSFETs that can supply over 35% more headroom than standard designs. A massive 24-phase VRM array for the GPU and a seven-phase VRM for the GDDR7 memory chips distribute the work of supplying power, ensuring rock-solid stability and long-lasting performance.

For peace of mind, Power Detector+ monitoring in the GPU Tweak III app lets you verify that your 16-pin PCIe power connector is fully seated. The app can even tell you exactly which pin is not seated properly, if that ever becomes a concern.

Ada, meet Blackwell
With the GeForce RTX 50 Series, NVIDIA debuts its latest Blackwell architecture. Armed with fifth-gen Tensor cores, new streaming multiprocessors optimized for neural shaders, and fourth-gen Ray Tracing cores built for Mega Geometry, the new graphics cards unlock access to the next generation of graphics technologies. For many gamers, the highlight of the new architecture is DLSS 4. DLSS is a revolutionary suite of neural rendering technologies that uses AI to boost FPS, reduce latency, and improve image quality. The latest breakthrough, DLSS 4, brings new Multi Frame Generation and enhanced Ray Reconstruction and Super Resolution. But there's more. NVIDIA Reflex 2 with Frame Warp provides game-winning responsiveness, and these cards are equipped to give you the best experience with ray-traced graphics yet.

NVIDIA RTX 5090 Geekbench Leak: OpenCL and Vulkan Tests Reveal True Performance Uplifts

The RTX 50-series fever continues to rage on, with independent reviews for the RTX 5080 and RTX 5090 dropping towards the end of this month. That does not stop benchmarks from leaking out, unsurprisingly, and a recent lineup of Geekbench listings has revealed the raw performance uplifts that can be expected from NVIDIA's next generation GeForce flagship. A sizeable chunk of the tech community was certainly rather disappointed with NVIDIA's reliance on AI-powered frame generation for much of the claimed improvements in gaming. Now, it appears we can finally figure out how much raw improvement NVIDIA was able to squeeze out with consumer Blackwell, and the numbers, for the most part, appear decent enough.

Starting off with the OpenCL tests, the highest score that we have seen so far from the RTX 5090 puts it around 367,000 points, which marks an acceptable jump from the RTX 4090, which manages around 317,000 points according to Geekbench's official average data. Of course, there are plenty of individual cards that easily exceed the average scores, which must be kept in mind. That said, we do not know the details of the specific RTX 5090 that was tested, so pitting it against average scores is not an entirely fair comparison. Moving to Vulkan, the performance uplift is much more satisfying, with the RTX 5090 managing a minimum of 331,000 points and a maximum of around 360,000 points, compared to the RTX 4090's 262,000, a sizeable 37% improvement at the highest end. Once again, we are comparing the best results posted so far against last year's averages, so expect slightly more modest gains in the real world. Once more reviews start appearing after the embargo lifts, the improvement figures should become much more reliable.
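The uplift percentages can be reproduced directly from the quoted scores; here is a small illustrative calculation (our own arithmetic, using the best leaked RTX 5090 results against Geekbench's RTX 4090 averages):

```python
# Uplift math from the leaked Geekbench scores quoted above.
# Best-observed RTX 5090 results vs. RTX 4090 averages, so these
# percentages are optimistic upper bounds, not typical gains.
scores = {
    "OpenCL": {"rtx_5090": 367_000, "rtx_4090": 317_000},
    "Vulkan": {"rtx_5090": 360_000, "rtx_4090": 262_000},
}

for api, s in scores.items():
    uplift_pct = (s["rtx_5090"] / s["rtx_4090"] - 1) * 100
    print(f"{api}: ~{uplift_pct:.0f}% uplift")
# OpenCL: ~16% uplift
# Vulkan: ~37% uplift
```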

NVIDIA AI Expected to Transform $10 Trillion Healthcare & Life Sciences Industry

At yesterday's J.P. Morgan Healthcare Conference, NVIDIA announced new partnerships to transform the $10 trillion healthcare and life sciences industry by accelerating drug discovery, enhancing genomic research and pioneering advanced healthcare services with agentic and generative AI. The convergence of AI, accelerated computing and biological data is turning healthcare into the largest technology industry. Healthcare leaders IQVIA, Illumina and Mayo Clinic, as well as Arc Institute, are using the latest NVIDIA technologies to develop solutions that will help advance human health.

These solutions include AI agents that can speed clinical trials by reducing administrative burden, AI models that learn from biology instruments to advance drug discovery and digital pathology, and physical AI robots for surgery, patient monitoring and operations. AI agents, AI instruments and AI robots will help address the $3 trillion of operations dedicated to supporting industry growth and create an AI factory opportunity in the hundreds of billions of dollars.

Morse Micro Intros New World-Beating Wi-Fi SoC - Smallest, Fastest & Farthest-Reaching

Morse Micro, the world's leading provider of Wi-Fi HaLow chips based on the IEEE 802.11ah specification, has announced the launch of its highly anticipated second-generation MM8108 System-on-Chip (SoC). Building on the success of the first-generation MM6108 SoC, the MM8108 offers even better performance in all key areas of range, throughput, and power efficiency, while also reducing the cost, effort, and time needed to bring the next generation of Wi-Fi HaLow-enabled products to market.

The MM8108 delivers class-leading data rates of up to 43.33 Mbps using world-first sub-GHz 256-QAM modulation at an 8 MHz bandwidth, making it ideal for a range of applications in agricultural, mining, industrial, home, and city environments. Its integrated 26 dBm power amplifier (PA) and low-noise amplifier (LNA) deliver exceptional performance and enable global regulatory certification without the need for external Surface Acoustic Wave (SAW) filters. The chip's outstanding power efficiency significantly extends battery life and enables the uptake of solar-powered Wi-Fi HaLow connected cameras and IoT devices.
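For context, the headline numbers imply a realized spectral efficiency of roughly 5.4 bit/s/Hz, below the 8 bits that each raw 256-QAM symbol carries; the gap is expected, since coding overhead, guard intervals, and pilots reduce throughput. A quick illustrative calculation on the quoted figures (our own arithmetic, not Morse Micro's):

```python
import math

# Spectral-efficiency math from the quoted MM8108 figures.
# 256-QAM carries log2(256) = 8 bits per raw symbol; realized
# efficiency is lower due to coding, guard intervals, and pilots.
peak_rate_mbps = 43.33  # quoted peak data rate
bandwidth_mhz = 8.0     # quoted channel bandwidth

bits_per_symbol = math.log2(256)             # 8 bits per symbol
efficiency = peak_rate_mbps / bandwidth_mhz  # ~5.4 bit/s/Hz
print(f"{bits_per_symbol:.0f} bits/symbol, ~{efficiency:.1f} bit/s/Hz realized")
```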

VeriSilicon Unveils Next-Gen Vitality Architecture GPU IP Series

VeriSilicon today announced the launch of its latest Vitality architecture Graphics Processing Unit (GPU) IP series, designed to deliver high-performance computing across a wide range of applications, including cloud gaming, AI PC, and both discrete and integrated graphics cards.

VeriSilicon's new generation Vitality GPU architecture delivers exceptional advancements in computational performance with scalability. It incorporates advanced features such as a configurable Tensor Core AI accelerator and a 32 MB to 64 MB Level 3 (L3) cache, offering both high processing power and superior energy efficiency. Additionally, the Vitality architecture supports up to 128 channels of cloud gaming per core, addressing the needs of high-concurrency, high-image-quality cloud-based entertainment, while enabling large-scale desktop gaming and applications on Windows systems. With robust support for Microsoft DirectX 12 APIs and AI acceleration libraries, this architecture is ideally suited for a wide range of performance-intensive applications and complex computing workloads.

Google Announces Android XR

We started Android over a decade ago with a simple idea: transform computing for everyone. Android powers more than just phones—it's on tablets, watches, TVs, cars and more.

Now, we're taking the next step into the future. Advancements in AI are making interacting with computers more natural and conversational. This inflection point enables new extended reality (XR) devices, like headsets and glasses, to understand your intent and the world around you, helping you get things done in entirely new ways.

Amazon AWS Announces General Availability of Trainium2 Instances, Reveals Details of Next Gen Trainium3 Chip

At AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company, today announced the general availability of AWS Trainium2-powered Amazon Elastic Compute Cloud (Amazon EC2) instances and introduced new Trn2 UltraServers, enabling customers to train and deploy today's latest AI models, as well as future large language models (LLMs) and foundation models (FMs), with exceptional levels of performance and cost efficiency. The company also unveiled its next-generation Trainium3 chips.

"Trainium2 is purpose built to support the largest, most cutting-edge generative AI workloads, for both training and inference, and to deliver the best price performance on AWS," said David Brown, vice president of Compute and Networking at AWS. "With models approaching trillions of parameters, we understand customers also need a novel approach to train and run these massive workloads. New Trn2 UltraServers offer the fastest training and inference performance on AWS and help organizations of all sizes to train and deploy the world's largest models faster and at a lower cost."

Smartkem and AUO Partner to Develop a New Generation of Rollable, Transparent MicroLED Displays

Smartkem, positioned to power the next generation of displays using its disruptive organic thin-film transistors (OTFTs), has partnered with AUO, the largest display manufacturer in Taiwan, to jointly develop the world's first advanced rollable, transparent microLED display using Smartkem's technology.

"We believe that collaborating with global display industry leader AUO to develop a novel microLED display puts Smartkem's technology on the frontier of microLED display commercialization. Our unique transistor technology is expected to enable display manufacturers to efficiently produce microLED displays, making mass production commercially viable. Smartkem's technology has the potential to take today's microLED TVs from high end market prices of $100,000 down to mass market prices," stated Ian Jenks, Smartkem Chairman and CEO.

NVIDIA cuLitho Computational Lithography Platform is Moving to Production at TSMC

TSMC, the world leader in semiconductor manufacturing, is moving to production with NVIDIA's computational lithography platform, called cuLitho, to accelerate manufacturing and push the limits of physics for the next generation of advanced semiconductor chips. A critical step in the manufacture of computer chips, computational lithography is involved in the transfer of circuitry onto silicon. It requires complex computation involving electromagnetic physics, photochemistry, computational geometry, iterative optimization, and distributed computing. A typical foundry dedicates massive data centers to this computation, and yet this step has traditionally been a bottleneck in bringing new technology nodes and computer architectures to market.

Computational lithography is also the most compute-intensive workload in the entire semiconductor design and manufacturing process. It consumes tens of billions of CPU hours per year in leading-edge foundries. A typical mask set for a chip can take 30 million or more hours of CPU compute time, necessitating large data centers within semiconductor foundries. With accelerated computing, 350 NVIDIA H100 Tensor Core GPU-based systems can now replace 40,000 CPU systems, accelerating production time while reducing costs, space, and power.
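Taken at face value, the quoted figures imply a consolidation ratio of more than 100 to 1; a quick illustrative calculation (our own arithmetic, not NVIDIA's):

```python
# Consolidation math from the cuLitho figures quoted above.
cpu_systems_replaced = 40_000  # CPU systems quoted as replaceable
gpu_systems_needed = 350       # H100-based systems quoted as needed

consolidation_ratio = cpu_systems_replaced / gpu_systems_needed
print(f"~{consolidation_ratio:.0f} CPU systems replaced per GPU system")  # ~114
```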

LG gram Ready to Define the Next-Gen AI Laptop With New Intel Core Ultra Processors

LG Electronics (LG) is excited to announce that its newest LG gram laptop featuring the Intel Core Ultra processor (Series 2) will be showcased at the Intel Core Ultra Global Launch Event from September 3-8. Renowned for its powerful performance and ultra-lightweight design, the LG gram series now integrates advanced AI capabilities powered by the latest Intel Core Ultra processor. The LG gram 16 Pro, the first model to feature these new Intel processors, will be unveiled before its release at the end of 2024.

As the first on-device AI laptop from the LG gram series, it offers up to an impressive 48 tera operations per second (TOPS) from its neural processing unit (NPU), setting a new standard for AI PCs and providing the exceptional performance required for Copilot experiences. Powered by the latest Intel Core Ultra processor, the LG gram 16 Pro is now more efficient thanks to advanced AI functionalities such as productivity assistants, text and image creation and collaboration tools. What's more, its extended battery life helps users handle tasks without worry.

Intel "Arrow Lake" and "Lunar Lake" Are Safe from Voltage Stability Issues, Company Reports

Intel's 13th and 14th generation processors, codenamed "Raptor Lake" and "Raptor Lake Refresh," have been notoriously riddled with stability issues over the past few months, up until Intel shipped the 0x129 microcode update on August 10 to fix these issues. However, the upcoming Intel Core Ultra 200 "Arrow Lake" and 200V series "Lunar Lake" processors will not have these issues, as the company has confirmed that an all-new design is used, including for power regulation. The official company note states: "Intel confirms that its next generation of processors, codenamed Arrow Lake and Lunar Lake, are not affected by the Vmin Shift Instability issue due to the new architectures powering both product families. Intel will ensure future product families are protected against the Vmin Shift Instability issue as well."

Originally, Intel's analysis for 13th- and 14th-generation processors indicated that stability issues stemmed from excessive voltage during processor operation. These voltage increases led to degradation, raising the minimum voltage necessary for stable performance, which Intel refers to as "Vmin shift." Given that the design phase of new architectures lasts for years, Intel has evidently anticipated that the old power delivery could yield problems, and the upcoming CPU generations are now exempt from these issues, bringing stability once again to Intel's platforms. When these new products launch, all eyes will be on the platform's performance, and enthusiasts will be paying particularly close attention to stability testing.

EK Announces Next-Generation Flat Style EK Quantum Combo Units

EK, the leading premium liquid cooling gear manufacturer, is releasing the next generation of its versatile flat-style combo units. EK-Quantum Kinetic³ FLT D5 series pump-reservoir units are equipped with genuine D5 pumps made in Europe, capable of delivering flow rates of 1000 liters per hour with a maximum head pressure of up to 5.2 meters! These new FLT series combo units are available in six different sizes: 92 mm, 120 mm, 140 mm, 240 mm, 280 mm, and 360 mm.

The main upgrades of the Kinetic³ combo units include:
  • Additional G1/4" connection ports
  • Front-mounted pump for added versatility
  • Side and back mounting holes for flexibility during installation
  • A custom-made inner O-ring that prevents coolant from seeping between the channels and improves flow
  • Dense LED strip implementation
  • User-friendly LED cover for LED strip service

Intel 18A Powers On, Panther Lake and Clearwater Forest Out of the Fab and Booting OS

Intel today announced that its lead products on Intel 18A, Panther Lake (AI PC client processor) and Clearwater Forest (server processor), are out of the fab and have powered on and booted operating systems. These milestones were achieved less than two quarters after tape-out, with both products on track to start production in 2025. The company also announced that the first external customer is expected to tape out on Intel 18A in the first half of next year.

"We are pioneering multiple systems foundry technologies for the AI era and delivering a full stack of innovation that's essential to the next generation of products for Intel and our foundry customers. We are encouraged by our progress and are working closely with customers to bring Intel 18A to market in 2025." -Kevin O'Buckley, Intel senior vice president and general manager of Foundry Services

NEO Semiconductor Announces 3D X-AI Chip as HBM Successor

NEO Semiconductor, a leading developer of innovative technologies for 3D NAND flash memory and 3D DRAM, announced today the development of its 3D X-AI chip technology, targeted to replace the current DRAM chips inside high bandwidth memory (HBM) to solve data bus bottlenecks by enabling AI processing in 3D DRAM. 3D X-AI can reduce the huge amount of data transferred between HBM and GPUs during AI workloads. NEO's innovation is set to revolutionize the performance, power consumption, and cost of AI chips for applications like generative AI.

AI Chips with NEO's 3D X-AI technology can achieve:
  • 100X Performance Acceleration: contains 8,000 neuron circuits to perform AI processing in 3D memory.
  • 99% Power Reduction: minimizes the requirement of transferring data to the GPU for calculation, reducing power consumption and heat generation by the data bus.
  • 8X Memory Density: contains 300 memory layers, allowing HBM to store larger AI models.