News Posts matching #CPU


US Weighs National Security Risks of China's RISC-V Chip Development Involvement

The US government is investigating the potential national security risks associated with China's involvement in the development of open-source RISC-V chip technology. According to a letter obtained by Reuters, the Department of Commerce has informed US lawmakers that it is actively reviewing the implications of China's work in this area. RISC-V, an open instruction set architecture (ISA) created in 2014 at the University of California, Berkeley, offers an alternative to proprietary and licensed ISAs like those developed by Arm. This open-source ISA can be utilized in a wide range of applications, from AI chips and general-purpose CPUs to high-performance computing applications. Major Chinese tech giants, including Alibaba and Huawei, have already embraced RISC-V, positioning it as a new battleground in the ongoing technological rivalry between the United States and China over cutting-edge semiconductor capabilities.

In November, a group of 18 US lawmakers from both chambers of Congress urged the Biden administration to outline its strategy for preventing China from gaining a dominant position in RISC-V technology, expressing concerns about the potential impact on US national and economic security. While acknowledging the need to address potential risks, the Commerce Department noted in its letter that it must proceed cautiously to avoid unintentionally harming American companies actively participating in international RISC-V development groups. Previous attempts to restrict the transfer of 5G technology to China have created obstacles for US firms involved in global standards bodies where China is also a participant, potentially jeopardizing American leadership in the field. As the review process continues, the Commerce Department faces the delicate task of balancing national security interests with the need to maintain the competitiveness of US companies in the rapidly evolving landscape of open-source chip technologies.

Qualcomm Continues to Disrupt the PC Industry with the Addition of Snapdragon X Plus Platform

Qualcomm Technologies, Inc. today expands the leading Snapdragon X Series platform portfolio with Snapdragon X Plus. Snapdragon X Plus features the state-of-the-art Qualcomm Oryon CPU, a custom-integrated processor that delivers up to 37% faster CPU performance compared to competitors, while consuming up to 54% less power. This remarkable advancement in CPU performance sets a new standard in mobile computing, enabling users to accomplish more with greater efficiency. Snapdragon X Plus is also designed to meet the demands of on-device AI-driven applications, powered by the Qualcomm Hexagon NPU capable of 45 TOPS, making it the world's fastest NPU for laptops. This platform is a significant leap in computing innovation and is set to transform the PC industry.

"Snapdragon X Series platforms deliver leading experiences and are positioned to revolutionize the PC industry. Snapdragon X Plus will power AI-Supercharged PCs that enable even more users to excel as radical new AI experiences emerge in this period of rapid development and deployment," said Kedar Kondap, senior vice president and general manager of compute and gaming, Qualcomm Technologies, Inc. "By delivering leading CPU performance, AI capabilities, and power efficiency, we are once again pushing the boundaries of what is possible in mobile computing."

Dynabook Releases Hyperlight 14-inch Portégé X40L-M Laptop with Intel Core Ultra Processors and Powerful AI Integration

Dynabook Americas, Inc., the gold standard for long-lasting, professional-grade laptops, today unveiled the latest generation of its hyperlight 14-inch premium business laptop - the Portégé X40L-M. Now engineered with cutting-edge Intel Core Ultra (Series 1) processors and packing advanced AI capabilities, this powerful laptop redefines productivity, performance, and security for today's on-the-go professionals, while meeting Intel EVO platform and Windows 11 Secured-core PC standards.

"The Portégé X40L-M is a testament to Dynabook's commitment to delivering premium, cutting-edge solutions that empowers professionals to achieve more in their work," said James Robbins, General Manager, Dynabook Americas, Inc. "With the integration of Intel's latest Core Ultra processors, advanced AI capabilities, and seamless Windows 11 with Copilot integration, the Portégé X40L-M sets a new standard for productivity, performance, and innovation in the business laptop market."

Intel Prepares 500-Watt Xeon 6 SKUs of Granite Rapids and Sierra Forest

Intel is preparing to unveil its cutting-edge Xeon 6 series server CPUs, known as Granite Rapids and Sierra Forest. These forthcoming processors are set to deliver a significant boost in performance, heralding a new era of computing power, albeit with a trade-off of increased power consumption. Two days ago, Yuuki_Ans posted information about the Beechnut City validation platform. Today, he updated the X thread with more information, revealing that Intel is significantly boosting core counts across its new Xeon 6 lineup. The flagship Xeon 6 6980P is a behemoth, packing 128 cores with a blistering 500 Watt Thermal Design Power (TDP) rating. In fact, Intel is equipping five of its Xeon 6 CPUs with a sky-high 500 W TDP, including the top four Granite Rapids parts and even the flagship Sierra Forest SKU, which is composed entirely of efficiency cores. This marks a substantial increase from Intel's previous Xeon Scalable processors, which maxed out at 350-385 Watts.

The trade-off for this performance boost is a dramatic rise in power consumption. By nearly doubling the TDP ceiling, Intel can double the core count from 64 to 128 cores on its Granite Rapids CPUs, vastly improving its multi-core capabilities. However, this focus on raw performance over power efficiency means server manufacturers must redesign their cooling solutions to adequately accommodate Intel's flagship 500 W parts. Failure to do so could lead to thermal throttling issues. Intel's next-gen Xeon CPU architectures are shaping up to deliver one of the most significant generational leaps in recent memory. Still, they come with a trade-off in power consumption that vendors and data centers will need to address. Densely packing thousands of these 500-Watt SKUs will create new power and thermal challenges, and it remains to be seen how future data center projects will deploy them.
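A quick back-of-envelope calculation puts the TDP jump into perspective; the per-core figures below are simple divisions of the leaked numbers, not Intel-published specifications.

```python
# Rough per-core power budget implied by the leak: 128 cores at 500 W for the
# Xeon 6 6980P versus 64 cores at up to 385 W for prior Xeon Scalable parts.
# Package TDP also covers uncore/IO, so these are only ballpark figures.

def watts_per_core(tdp_w: float, cores: int) -> float:
    return tdp_w / cores

prev_gen = watts_per_core(385, 64)   # previous-gen 64-core flagship
xeon_6   = watts_per_core(500, 128)  # leaked Xeon 6 6980P

print(f"Previous gen: ~{prev_gen:.1f} W per core")  # ~6.0 W
print(f"Xeon 6 6980P: ~{xeon_6:.1f} W per core")    # ~3.9 W
# The per-core budget actually falls; the challenge is the total heat packed
# into one socket and one rack, not per-core temperatures.
```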

ZOTAC to Show Scalable GPU Platforms and Industrial Solutions at Hannover Messe 2024

ZOTAC Technology is announcing a new lineup of enterprise and healthcare-oriented mini PCs designed for specific applications and scalable deployment, as well as a whole new class of external GPU acceleration platforms for Thunderbolt 3-compatible PCs. Aside from the all-new additions, ZOTAC is also refreshing its best-selling performance mini PCs with the newest generations of Intel Core Processors and NVIDIA RTX-enabled GPUs. ZOTAC will debut these rugged, innovative solutions and showcase other AI-ready compute solutions during Hannover Messe 2024, reaffirming ZOTAC's commitment to embrace the AI-driven future.

ZOTAC ZBOX Healthcare Series: Medical AI Solution
With the all-new ZOTAC Healthcare Series, ZOTAC is bringing the reputed quality and performance of its ZBOX Mini PCs to the realm of healthcare. The ZBOX H39R5000W and ZBOX H37R3500W are equipped with 13th Generation Intel Core i9 or i7 laptop processors, as well as professional-grade NVIDIA RTX Ada Generation laptop GPUs. These mini PCs are ready to power medical imaging, algorithms, and more, with some of the latest and greatest hardware currently available.

Intel Builds World's Largest Neuromorphic System to Enable More Sustainable AI

Today, Intel announced that it has built the world's largest neuromorphic system. Code-named Hala Point, this large-scale neuromorphic system, initially deployed at Sandia National Laboratories, utilizes Intel's Loihi 2 processor and aims to support research into future brain-inspired artificial intelligence (AI) while tackling challenges related to the efficiency and sustainability of today's AI. Hala Point advances Intel's first-generation large-scale research system, Pohoiki Springs, with architectural improvements that achieve over 10 times more neuron capacity and up to 12 times higher performance.

"The computing cost of today's AI models is rising at unsustainable rates. The industry needs fundamentally new approaches capable of scaling. For that reason, we developed Hala Point, which combines deep learning efficiency with novel brain-inspired learning and optimization capabilities. We hope that research with Hala Point will advance the efficiency and adaptability of large-scale AI technology." -Mike Davies, director of the Neuromorphic Computing Lab at Intel Labs

Lenovo Prepares ThinkPad T14s and Yoga Slim 14 Laptops with Qualcomm Snapdragon X Processor

Lenovo is putting the finishing touches on the Yoga Slim 7 14 2024, one of the first non-reference laptops to feature Qualcomm's latest Snapdragon X processor. Leaked images circulating on X (formerly Twitter) reveal a sleek and stylish design, with a 14-inch or 14.5-inch display encased in a slim and portable form factor. Qualcomm has previously showcased eye-catching demo reference systems in a striking red color scheme, but the Yoga Slim 7 14 2024 marks the first time a major laptop manufacturer has appeared in actual product images incorporating the Snapdragon X chip. The Yoga Slim 7 14 2024 is part of Lenovo's popular Slim laptop lineup, which also includes models powered by Intel and AMD processors. The latest "Gen 8" iteration featured options for the AMD Ryzen 7040 series and Intel 13th Gen Core i and Core Ultra series CPUs.

One notable addition to the Snapdragon X-powered model is the inclusion of a dedicated Microsoft Copilot button. Qualcomm has heavily touted the Snapdragon X's Neural Processing Unit (NPU) performance and its ability to directly accelerate various AI and machine learning algorithms on the device. There have been a few comparison points between Meteor Lake with Intel's NPU and Snapdragon X Elite with Qualcomm's NPU. The chipmaker's X Elite and X Plus laptop offerings are expected to arrive soon, and there are strong indications that this may happen during the Computex trade show.

Update 17:28 UTC: X user WalkingCat has posted images of the Lenovo ThinkPad T14s laptop, which can be seen below.

MSI AMD 600 Series Motherboard Ready To Support Next-Gen CPU

MSI is here to announce the latest AGESA ComboPI 1.1.7.0 Patch A BIOS update for next-gen AM5 CPU support on X670E, X670, B650, and A620 motherboards. Users simply need to update the BIOS to the latest version. MSI will continue to provide the latest news for our users. Please follow MSI's official channels and check the product pages for the latest BIOS updates to guarantee an optimal experience, heightened performance, and enhanced stability.

For more about MSI AMD 600 series motherboards, please check here.

Demand for NVIDIA's Blackwell Platform Expected to Boost TSMC's CoWoS Total Capacity by Over 150% in 2024

NVIDIA's next-gen Blackwell platform, which includes B-series GPUs and integrates NVIDIA's own Grace Arm CPU in models such as the GB200, represents a significant development. TrendForce points out that the GB200 and its predecessor, the GH200, both feature a combined CPU+GPU solution, primarily equipped with the NVIDIA Grace CPU and H200 GPU. However, the GH200 accounted for only approximately 5% of NVIDIA's high-end GPU shipments. The supply chain has high expectations for the GB200, with projections suggesting that its shipments could exceed millions of units by 2025, potentially making up nearly 40 to 50% of NVIDIA's high-end GPU market.

Although NVIDIA plans to launch products such as the GB200 and B100 in the second half of this year, upstream wafer packaging will need to adopt more complex and high-precision CoWoS-L technology, making the validation and testing process time-consuming. Additionally, more time will be required to optimize the B-series for AI server systems in aspects such as network communication and cooling performance. It is anticipated that the GB200 and B100 products will not see significant production volumes until 4Q24 or 1Q25.

Sony PlayStation 5 Pro Specifications Confirmed, Console Arrives Before Holidays

Thanks to detailed information obtained by The Verge, today we can confirm previously leaked details as Sony gears up to unveil the highly anticipated PlayStation 5 Pro, codenamed "Trinity." According to insider reports, Sony is urging developers to optimize their games for the PS5 Pro, with a primary focus on enhancing ray tracing capabilities. The console is expected to feature an RDNA 3 GPU with 30 WGPs running BVH8, capable of 33.5 TeraFLOPS of FP32 single-precision computing power, and a slightly quicker CPU running at 3.85 GHz, enabling it to render games with ray tracing enabled or achieve higher resolutions and frame rates in select titles. Sony anticipates GPU rendering on the PS5 Pro to be approximately 45 percent faster than the standard PlayStation 5. The PS5 Pro GPU will be larger and utilize faster system memory to bolster ray tracing performance, boasting up to three times the speed of the regular PS5.

Additionally, the console will employ a more powerful ray tracing architecture, backed by PlayStation Spectral Super Resolution (PSSR), allowing developers to leverage graphics features like ray tracing more extensively. To support this endeavor, Sony is providing developers with test kits, and all games submitted for certification from August onward must be compatible with the PS5 Pro. Insider Gaming, the first to report the full PS5 Pro specs, suggests a potential release during the 2024 holiday period. The PS5 Pro will also feature modifications for developers regarding system memory, with Sony increasing the memory bandwidth from 448 GB/s to 576 GB/s, enhancing efficiency for an even more immersive gaming experience. For AI processing, there is a custom AI accelerator capable of 300 TOPS of 8-bit INT8 compute and 67 TeraFLOPS of 16-bit FP16 compute, in addition to the ACV audio codec running up to 35% faster.
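For context, the headline compute figure is roughly consistent with the leaked WGP count; the sketch below assumes an RDNA 3-style shader organization (two CUs of 64 shaders per WGP, dual-issue FP32), which is not confirmed for the PS5 Pro.

```python
# Sanity check of the 33.5 TFLOPS FP32 claim against 30 WGPs. Shaders per WGP
# and the dual-issue factor are assumptions borrowed from desktop RDNA 3,
# not confirmed PS5 Pro details.
WGPS            = 30
SHADERS_PER_WGP = 128   # 2 CUs x 64 shaders (assumed)
FLOPS_PER_CLOCK = 4     # FMA (2 FLOPs) x dual-issue (assumed)
TARGET_TFLOPS   = 33.5

shaders = WGPS * SHADERS_PER_WGP
implied_clock_ghz = TARGET_TFLOPS * 1e12 / (shaders * FLOPS_PER_CLOCK) / 1e9
print(f"{shaders} shaders -> implied GPU clock of ~{implied_clock_ghz:.2f} GHz")

# Memory bandwidth uplift quoted in the report: 448 GB/s -> 576 GB/s
print(f"Bandwidth gain: {576 / 448 - 1:.1%}")   # ~28.6%
```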

NVIDIA Points Intel Raptor Lake CPU Users to Get Help from Intel Amid System Instability Issues

According to a recently published help guide, spotted by the X/Twitter user @harukaze5719, NVIDIA has addressed reported stability problems users are experiencing with Intel's latest 13th and 14th generation Raptor Lake Core processors, especially the high-performance overclockable K-series models. In a recent statement, NVIDIA recommended that owners of the affected Intel CPUs consult directly with Intel if they encounter issues such as system instability, video memory errors, game crashes, or failures to launch certain applications. The problems seem particularly prevalent when running demanding workloads like gaming on Unreal Engine 5 titles or during shader compilation tasks that heavily utilize the processor and graphics capabilities. Intel has established a dedicated website to provide support for these CPU instability cases. However, the chipmaker has yet to issue a broad public statement or provide a definitive resolution.

The instability is often attributed to the very high frequencies and performance the K-series Raptor Lake chips are designed to achieve, which are among the fastest processors in Intel's lineup. While some community suggestions like undervolting or downclocking the CPUs may help mitigate issues in the short term, it remains unclear if permanent fixes will require BIOS updates from motherboard manufacturers or game patches.

Update: As the community has pointed out, motherboard makers often run the CPU outside of Intel's default spec, specifically causing overvolting by modifying or removing power limits, which could introduce instabilities into the system. To make sure the CPU runs at Intel-defined specifications, users should check the BIOS and confirm the CPU is operating at the specified targets. Intel programs the voltage curve into the CPU, and when motherboard makers remove any voltage/power limits, the CPU is free to use up the available headroom, possibly causing system instability. We advise everyone to check the power limit settings in the BIOS for the health of their own system.
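On Linux, a quick first check of the limits the platform is actually enforcing can be read from the RAPL powercap interface; this is only a rough sketch, the reference values shown are Intel's published PL1/PL2 for the Core i9-13900K and differ per SKU, and BIOS-applied limits are not always mirrored in RAPL.

```python
# Minimal sketch (Linux, intel_rapl driver) that reads the package power
# limits currently exposed by the platform so they can be compared against
# Intel's defaults. Run as root. Treat as a first check only: board firmware
# can enforce limits that RAPL does not reflect.
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")  # package-0 domain

def read_limit(constraint: int) -> tuple[str, float]:
    name = (RAPL / f"constraint_{constraint}_name").read_text().strip()
    uw = int((RAPL / f"constraint_{constraint}_power_limit_uw").read_text())
    return name, uw / 1e6  # microwatts -> watts

for c in (0, 1):  # 0 = long-term (PL1), 1 = short-term (PL2)
    name, watts = read_limit(c)
    print(f"{name}: {watts:.0f} W")

# Example reference values (Core i9-13900K defaults): PL1 = 125 W, PL2 = 253 W
```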

Intel Discontinues 13th Generation "Raptor Lake" K-Series Overclockable CPU SKUs

Intel has decided to discontinue its entire 13th Gen Raptor Lake lineup of overclockable "K-series" CPU SKUs. According to an official product change notice, the company will stop accepting orders for chips like the Core i9-13900KS, Core i9-13900K, Core i9-13900KF, Core i7-13700K, Core i7-13700KF, Core i5-13600K, and Core i5-13600KF after May 24th, 2024. Final shipments to vendors are targeted for June 28th. After those dates, availability of the unlocked Raptor Lake processors will rapidly diminish as the remaining inventory gets sold off, possibly at inflated prices due to shortages. This discontinuation comes just over a year after Raptor Lake's launch in late 2022, which delivered additional performance improvements over the previous Alder Lake generation.

Raptor Lake brought higher clocks, more cache, additional efficiency cores, and enough muscle to compete with AMD's Ryzen 7000 CPUs in many workloads. Interestingly, Intel has not yet discontinued Alder Lake, suggesting those 12th-generation chips may still be available for some time. While the death of the overclockable Raptor Lake K-series CPUs is unfortunate for enthusiasts, there is an upside—it paves the way for Intel's current generation Raptor Lake refresh, 14th generation Core processors, to clear inventory before the next-generation processors arrive. The 15th generation "Arrow Lake" Core Ultra 2 series of processors could be teased at the upcoming Computex event in June.

GEEKOM's XT12 Pro Now Available with Core i9-12900H CPU

The GEEKOM XT12 Pro, a new mini PC that rocks an incredible Intel Core i9-12900H processor, is now available for pre-orders! The unit with 32 GB of DDR4 RAM, a 1 TB SSD, and licensed Windows 11 Pro is priced at only $699, making the XT12 Pro arguably the best-bang-for-the-buck i9-powered mini PC ever.

The GEEKOM XT12 Pro employs a uni-body aluminium chassis that measures 117 x 111 x 38.5 mm (0.5 liter). The anodized matte finish gives the mini PC a refreshingly gorgeous look, and the minimalist design makes DIY customization easy. As small as it is, the XT12 Pro still packs a wide array of I/O, including two full-function 40 Gbps USB4 ports, four USB-A ports, two HDMI 2.0 outputs, a 2.5 Gbps Ethernet port, and a 3.5 mm audio jack.
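The 0.5-litre figure follows directly from the quoted dimensions, as the quick check below shows.

```python
# Chassis volume from the dimensions quoted above (1 litre = 1,000,000 mm^3).
w_mm, d_mm, h_mm = 117, 111, 38.5
volume_l = (w_mm * d_mm * h_mm) / 1_000_000
print(f"Chassis volume: {volume_l:.2f} L")  # ~0.50 L
```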

Google Launches Axion Arm-based CPU for Data Center and Cloud

Google has officially joined the club of custom Arm-based, in-house-developed CPUs. As of today, Google's in-house semiconductor development team has launched the "Axion" CPU based on the Arm instruction set architecture. Using Arm Neoverse V2 cores, Google claims that the Axion CPU outperforms general-purpose Arm chips by 30% and Intel's processors by a staggering 50%. This custom silicon will fuel various Google Cloud offerings, including Compute Engine, Kubernetes Engine, Dataproc, Dataflow, and Cloud Batch. The Axion CPU, designed from the ground up, will initially support Google's AI-driven services like YouTube ads and Google Earth Engine. According to Mark Lohmeyer, Google Cloud's VP and GM of compute and machine learning infrastructure, Axion will soon be available to cloud customers, enabling them to leverage its performance without overhauling their existing applications.

Google's foray into custom silicon aligns with the strategies of its cloud rivals, Microsoft and Amazon. Microsoft recently unveiled its own AI chip for training large language models and an Arm-based CPU called Cobalt 100 for cloud and AI workloads. Amazon, on the other hand, has been offering Arm-based servers through its custom Graviton CPUs for several years. While Google won't sell these chips directly to customers, it plans to make them available through its cloud services, enabling businesses to rent and leverage their capabilities. As Amin Vahdat, the executive overseeing Google's in-house chip operations, stated, "Becoming a great hardware company is very different from becoming a great cloud company or a great organizer of the world's information."

Acer Launches New Nitro 14 and Nitro 16 Gaming Laptops Powered by AMD Ryzen 8040 Series Processors

Acer today announced the new Nitro 14 and Nitro 16 gaming laptops, powered by AMD Ryzen 8040 Series processors with Ryzen AI. With up to NVIDIA GeForce RTX 4060 Laptop GPUs supported by DLSS 3.5 technology, both are backed by NVIDIA's RTX AI platform, providing an array of AI-enhanced capabilities in over 500 games and applications. Gamers are immersed in their 14- and 16-inch NVIDIA G-SYNC compatible panels with up to WQXGA (2560x1600) resolution.

Whether on a call or streaming in-game, Acer PurifiedVoice 2.0 harnesses the power of AI to block out external noises, while Acer PurifiedView keeps users always front and center of all the action. Microsoft Copilot in Windows (with a dedicated Copilot key) helps accelerate everyday tasks on these AI laptops, and with one month of Xbox Game Pass Ultimate included with every device, players will enjoy hundreds of high-quality PC games. To seamlessly take command of device performance and customizations, one click of the NitroSense key directs users to the control center and the library of available AI-related functions through the new Experience Zone.

AMD Extends Leadership Adaptive SoC Portfolio with New Versal Series Gen 2 Devices Delivering End-to-End Acceleration for AI-Driven Embedded Systems

AMD today announced the expansion of the AMD Versal adaptive system on chip (SoC) portfolio with the new Versal AI Edge Series Gen 2 and Versal Prime Series Gen 2 adaptive SoCs, which bring preprocessing, AI inference, and postprocessing together in a single device for end-to-end acceleration of AI-driven embedded systems.

These initial devices in the Versal Series Gen 2 portfolio build on the first generation with powerful new AI Engines expected to deliver up to 3x higher TOPS-per-watt than first-generation Versal AI Edge Series devices, while new high-performance integrated Arm CPUs are expected to offer up to 10x more scalar compute than first-gen Versal AI Edge and Prime series devices.

Imagination's new Catapult CPU is Driving RISC-V Device Adoption

Imagination Technologies today unveils the next product in the Catapult CPU IP range, the Imagination APXM-6200 CPU: a RISC-V application processor with compelling performance density, seamless security, and the artificial intelligence capabilities needed to support the compute and intuitive user experience requirements of next-generation consumer and industrial devices.

"The number of RISC-V based devices is skyrocketing with over 16Bn units forecast by 2030, and the consumer market is behind much of this growth" says Rich Wawrzyniak, Principal Analyst at SHD Group. "One fifth of all consumer devices will have a RISC-V based CPU by the end of this decade. Imagination is set to be a force in RISC-V with a strategy that prioritises quality and ease of adoption. Products like APXM-6200 are exactly what will help RISC-V achieve the promised success."

AIO Workstation Combines 128-Core Arm Processor and Four NVIDIA GPUs Totaling 28,416 CUDA Cores

All-in-one computers are traditionally seen as lower-powered alternatives to desktop workstations. However, a new offering from Alafia AI, a startup focused on medical imaging appliances, aims to shatter that perception. The company's upcoming Alafia Aivas SuperWorkstation packs serious hardware muscle, demonstrating that all-in-one systems can match the performance of their more modular counterparts. At the heart of the Aivas SuperWorkstation lies a 128-core Ampere Altra processor, running at a 3.0 GHz clock speed. This CPU is complemented by not one but three NVIDIA L4 GPUs for compute, plus a single NVIDIA RTX 4000 Ada GPU for video output, delivering a combined 28,416 CUDA cores for accelerated parallel computing tasks. The system doesn't skimp on other components, either. It features a 4K touch display with up to 360 nits of brightness, an extensive 2 TB of DDR4 RAM, and storage options up to an 8 TB solid-state drive. This combination of cutting-edge CPU, GPU, memory, and storage is squarely aimed at the demands of medical imaging and AI development workloads.
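The combined CUDA core count matches NVIDIA's published per-card specifications; the per-GPU numbers in the quick check below come from NVIDIA's datasheets rather than from Alafia AI.

```python
# Breakdown of the 28,416-core total. Per-card CUDA core counts are NVIDIA's
# published specs for the L4 and RTX 4000 Ada Generation.
L4_CORES           = 7_424   # NVIDIA L4 (compute accelerator)
RTX_4000_ADA_CORES = 6_144   # NVIDIA RTX 4000 Ada Generation (display output)

total = 3 * L4_CORES + RTX_4000_ADA_CORES
print(f"3x L4 + 1x RTX 4000 Ada = {total:,} CUDA cores")  # 28,416
```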

The all-in-one form factor packs this incredible hardware into a sleek, purposefully designed clinical research appliance. While initially targeting software developers, Alafia AI hopes that institutions able to optimize their applications for the Arm architecture can eventually deploy the Aivas SuperWorkstation for production medical imaging workloads. The company is aiming for application integration in Q3 2024 and full ecosystem device integration by Q4 2024. With this powerful new offering, Alafia AI is challenging long-held assumptions about the performance limitations of all-in-one systems. The Aivas SuperWorkstation demonstrates that the right hardware choices can transform these compact form factors into true powerhouse workstations. With the combined output of three NVIDIA L4 compute units alongside the RTX 4000 Ada graphics card, the AIO is more powerful than some high-end desktop workstations.

X-Silicon Startup Wants to Combine RISC-V CPU, GPU, and NPU in a Single Processor

While we are all used to having a system with a CPU, GPU, and, more recently, an NPU, X-Silicon Inc. (XSi), a startup founded by Silicon Valley veterans, has unveiled an interesting RISC-V processor that can simultaneously handle CPU, GPU, and NPU workloads in a single chip. This innovative chip architecture, which will be open-source, aims to provide a flexible and efficient solution for a wide range of applications, including artificial intelligence, virtual reality, automotive systems, and IoT devices. The new microprocessor combines a RISC-V CPU core with vector capabilities and GPU acceleration into a single chip, creating a versatile all-in-one processor. By integrating the functionality of a CPU and GPU into a single core, X-Silicon's design offers several advantages over traditional architectures. The chip utilizes the open-source RISC-V instruction set architecture (ISA) for both CPU and GPU operations, running a single instruction stream. This approach promises a lower memory footprint and improved efficiency, as there is no need to copy data between separate CPU and GPU memory spaces.

Called the C-GPU architecture, X-Silicon's design uses a RISC-V Vector Core, which has 16 32-bit FPUs and a Scalar ALU for processing regular integer as well as floating point instructions. A unified instruction decoder feeds the cores, which are connected to a thread scheduler, texture unit, rasterizer, clipping engine, neural engine, and pixel processors. Everything is fed into a frame buffer, which in turn feeds the video engine for video output. This core setup allows users to program each core individually for HPC, AI, video, or graphics workloads. Since a chip is unusable without software, X-Silicon is working on support for the OpenGL ES, Vulkan, Mesa, and OpenCL APIs. Additionally, the company plans to release a hardware abstraction layer (HAL) for direct chip programming. According to Jon Peddie Research (JPR), the industry has been seeking an open-standard GPU that is flexible and scalable enough to support various markets. X-Silicon's CPU/GPU hybrid chip aims to address this need by providing manufacturers with a single, open chip design that can handle any desired workload. XSi gave no timeline, but it plans to distribute the IP to OEMs and hyperscalers, so first silicon is still some way off.

Intel Xeon "Granite Rapids-SP" 80-core Engineering Sample Leaked

A CPU-Z screenshot has been shared by YuuKi_AnS—the image contains details about an alleged next-gen Intel Xeon Scalable processor engineering sample (ES). The hardware tipster noted in yesterday's post that an error had occurred in the application's identification of this chunk of prototype silicon. CPU-Z v2.09 has recognized the basics—an Intel Granite Rapids-SP processor that is specced with 80 cores, a 2.5 GHz max frequency, a whopping 672 MB of L3 cache, and a max TDP rating of 350 W. The count of 320 threads seems to be CPU-Z's big mistake here—previous Granite Rapids-related leaks have not revealed Team Blue's Hyper-Threading technology producing such impressive numbers.
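The arithmetic behind that suspicion is simple, as the quick check below illustrates; with conventional two-way Hyper-Threading, 80 cores should report 160 threads.

```python
# Why 320 threads looks like a misread: two-way SMT on 80 cores gives 160
# threads, while 320 would imply an unprecedented four threads per core.
# The per-core L3 figure is just a division of the leaked screenshot values.
cores, reported_threads, l3_mb = 80, 320, 672

print(f"Expected threads (2-way SMT): {cores * 2}")                   # 160
print(f"Implied threads per core:     {reported_threads // cores}")   # 4
print(f"L3 cache per core:            {l3_mb / cores:.1f} MB")        # 8.4 MB
```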

The alleged prototype status of this Xeon chip is very apparent in CPU-Z's tracking of single and multi-core performance—the benchmark results are really off the mark when compared to finalized current-gen scores (produced by rival silicon). Team Blue's next-gen Xeon series is likely positioned to catch up with AMD EPYC's deployment of large core counts—"Granite Rapids" has been linked to the Intel 3 foundry node, and reports from last month suggest that XCC-type processors could be configured with "counts going up to 56-core/112-threads." Micron is prepping next-gen "Tall Form Factor" memory modules, designed with future enterprise processor platforms in mind—including Intel's Xeon Scalable "Granite Rapids" family. Industry watchdogs posit that Team Blue will be launching this series in the coming months.

Entry-level Intel "Meteor Lake" SKU Appears Online: Core Ultra 5 115U

Intel's "Meteor Lake" mobile processor family launched last December, with an initial selection comprised of eleven "Core Ultra" SKUs. This week, internet sleuths have stumbled on some new additions—Team Blue has seemingly rolled out new models without much fanfare. Benchleaks discovered an intriguing Geekbench Browser entry that detailed a "Google Rex" Android device specced with an Intel Core Ultra 5 115U CPU. The benchmark database displays two errors—namely the incorrect detection of 10 cores and 10 threads. Team Blue's official product page lists 8 cores and 10 threads—specifically a configuration housing two P-Cores, four E-Cores, and two LP-Cores.

Amusingly, the official datasheet specifies that the Core Ultra 5 115U launched alongside the debut batch of Meteor Lake parts. VideoCardz posits that the chip's weaker iGPU specs separate it from the rest of the pack: "its designation as 'Ultra' might be misleading. In reality, even its graphics have been scaled down to 3 Xe-Cores, making it the sole SKU in the entire lineup with fewer than 4 Xe-Cores. The NPU is still intact and seems to be working at the same speed as the most powerful Meteor Lake chip. This suggests that the 115U could potentially excel as an AI accelerator, prioritizing AI tasks over other functions." This entry-level SKU is not fully out in the wild, but the existence of test platforms (via Geekbench Browser entries) semi-proves that Team Blue and its hardware partners are readying new portable products.

Alleged AMD Ryzen "Granite Ridge" Engineering Samples Pop Up in Shipping Manifests

Shipping manifests appear to be great sources of pre-release information—only a few hours ago, the existence of prototype AMD "Strix Point" and "Fire Range" mobile processors was highlighted by hardware sleuth harukaze5719. A related leak has appeared online fairly quickly after the discovery of laptop-oriented "Zen 5" chips. momomo_us joined in on the fun, with their exposure of speculated desktop silicon. Two brand-new AMD OPN codes have been linked to the upcoming "Granite Ridge" series of AM5 processors.

100-000001404-01 is likely an eight-core/sixteen-thread "Zen 5" Ryzen CPU with a 170 W TDP—a stepping designation, B0, indicates engineering sample status. The other listing, 100-000001290-21, seems to be an A0-type engineering sample—leaked info suggests that this is a six-core/twelve-thread (105 W TDP) next-gen mainstream desktop processor. AMD is likely nearing the finish line with its Ryzen 9000 series—a new generation of chipsets, including X870E, is reportedly in the pipeline. Additionally, VideoCardz posits that a refresh of 700-series boards could be on the cards. "Granite Ridge" CPUs are expected to retain the current-gen 6 nm client I/O die (cIOD), as sported by "Raphael" Ryzen 7000-series desktop processors.

AMD EPYC "Turin" 9000-series Motherboard Specs Suggest Support for DDR5 6000 MT/s

AMD's next-gen EPYC Zen 5 processor family seems to be nearing launch status—late last week, momomo_us uncovered an unnamed motherboard's datasheet; this particular model will accommodate a single 9000-series CPU—with a maximum 400 W TDP—via an SP5 socket. 500 W and 600 W limits have been divulged (via leaks) in the past, so the 400 W spec could be an error or a "legitimate compatibility issue with the motherboard, though 400 Watts would be in character with high-end Zen 4 SP5 motherboards," according to Tom's Hardware analysis.

AMD's current-gen "Zen 4" based EPYC "Genoa" processor family—sporting up to 96-cores/192-threads—is somewhat limited by its DDR5 support transfer rates of up to 4800 MT/s. The latest leak suggests that "Turin" is upgraded quite nicely in this area—when compared to predecessors—the SP5 board specs indicate DDR5 speeds of up to 6000 MT/s with 4 TB of RAM. December 2023 reports pointed to "Zen 5c" variants featuring (max.) 192-core/384-thread configurations, while larger "Zen 5" models are believed to be "modestly" specced with up to 128-cores and 256-threads. AMD has not settled on an official release date for its EPYC "Turin" 9000-series processors, but a loose launch window is expected "later in 2024" based on timeframes presented within product roadmaps.

AMD Response to "ZENHAMMER: Rowhammer Attacks on AMD Zen-Based Platforms"

On February 26, 2024, AMD received new research related to an industry-wide DRAM issue documented in "ZENHAMMER: Rowhammer Attacks on AMD Zen-based Platforms" from researchers at ETH Zurich. The research demonstrates performing Rowhammer attacks on DDR4 and DDR5 memory using AMD "Zen" platforms. Given the history around Rowhammer, the researchers do not consider these attacks to be a new issue.

Mitigation
AMD continues to assess the researchers' claim of demonstrating Rowhammer bit flips on a DDR5 device for the first time. AMD will provide an update upon completion of its assessment.

NVIDIA Modulus & Omniverse Drive Physics-informed Models and Simulations

A manufacturing plant near Hsinchu, Taiwan's Silicon Valley, is among facilities worldwide boosting energy efficiency with AI-enabled digital twins. A virtual model can help streamline operations, maximizing throughput for its physical counterpart, say engineers at Wistron, a global designer and manufacturer of computers and electronics systems. In the first of several use cases, the company built a digital copy of a room where NVIDIA DGX systems undergo thermal stress tests (pictured above). Early results were impressive.

Making Smart Simulations
Using NVIDIA Modulus, a framework for building AI models that understand the laws of physics, Wistron created digital twins that let engineers accurately predict the airflow and temperature in test facilities that must remain between 27 and 32 degrees C. A simulation that would've taken nearly 15 hours with traditional methods on a CPU took just 3.3 seconds on an NVIDIA GPU running inference with an AI model developed using Modulus, a whopping 15,000x speedup. The results were fed into tools and applications built by Wistron developers with NVIDIA Omniverse, a platform for creating 3D workflows and applications based on OpenUSD.
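The quoted speedup is consistent with the runtimes given; the exact CPU wall-clock time isn't stated beyond "nearly 15 hours," so the check below only confirms the order of magnitude.

```python
# Order-of-magnitude check of the ~15,000x claim: ~15 hours on CPU versus
# 3.3 s for inference with the Modulus-trained surrogate on a GPU.
cpu_seconds = 15 * 3600   # upper bound from "nearly 15 hours"
gpu_seconds = 3.3
print(f"Speedup: ~{cpu_seconds / gpu_seconds:,.0f}x")  # prints ~16,364x
```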