News Posts matching #x86


Intel's "Wildcat Lake" Emerges as New Entry-Level Processor Series

According to recently discovered shipping manifests, Intel is developing a new processor series codenamed "Wildcat Lake," potentially succeeding its entry-level "Intel Processor" lineup based on Alder Lake-N. The documents, revealed by x86deadandback, suggest a 2025 launch timeline for these chips targeting lightweight laptops and mini-PCs. The shipping records from October 30 mention CPU reball equipment compatible with BGA 1516 sockets, measuring 35 x 25 mm, indicating early validation testing is underway. These processors are expected to be manufactured using Intel's advanced 18A process technology, sharing the same manufacturing node as the upcoming Panther Lake series. Early technical specifications of Wildcat Lake point to a hybrid architecture combining next-generation "Cougar Cove" performance cores with "Darkmont" low-power efficiency (LPE) cores in a 2P+4LPE configuration.

This design appears to separate the core clusters, departing from traditional shared ring bus arrangements, similar to the approach taken in Intel's Lunar Lake and Arrow Lake processors. While Wildcat Lake's exact position in Intel's product stack remains unclear, it could serve as a modernized replacement for what were the Pentium and Celeron processor families. These chips traditionally power devices like Chromebooks, embedded systems, and home servers, with the new series potentially offering significant performance improvements for these market segments. The processor is expected to operate in a sub-double-digit TDP power envelope, positioning it below the more powerful Lunar Lake series. Graphics capabilities will likely be more modest than Lunar Lake's Xe2 architecture, aligning with its entry-level market positioning.

Intel Abandons "X86S" Plans to Focus on the Regular x86-64 ISA and Its Ecosystem Advisory Group

Intel has announced it will not proceed with X86S, an experimental instruction set architecture that aimed to simplify its processor design by removing legacy support for older 32-bit and 16-bit operating modes. The decision comes after gathering feedback from the technology ecosystem on a draft specification that was released for evaluation. The x86 ISA, and the 64-bit x86-64 extension we use today, is a giant cluster of specifications containing so many instructions that hardly anyone can say with precision how many there are. All of this stems from the era of the original 8086 processor, which had its own 16-bit instructions. Later, the industry transitioned to 32-bit and then 64-bit systems, each of which brought its own specific instructions. Adding support for processing vector, matrix, and other data types has grown the ISA specification so much that no one outside a select few engineers at Intel (and AMD) understands it in full. From that, the X86S idea was born: solve the problem of supporting legacy systems and legacy code by moving on to the X86S ISA, where "S" stands for simplified.

The X86S proposal included several notable modifications, such as eliminating support for rings 1 and 2 in the processor's protection model, removing 16-bit addressing capabilities, and discontinuing legacy interrupt controller support. These changes would have potentially reduced hardware complexity and modernized the platform's architecture. A key feature of the proposed design was a simplified boot process that would have allowed processors to start directly in 64-bit mode, eliminating the current requirement for systems to boot through various legacy modes before reaching 64-bit operation. The architecture also promised improvements in handling modern features like 5-level paging. "Intel will continue to maintain its longstanding commitment to software compatibility," the company stated in the official document on its website, acknowledging that the x86S dream is over.
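One of the modern features X86S leaned on, 5-level paging, is already discoverable on current CPUs via CPUID (leaf 7, sub-leaf 0, ECX bit 16, the LA57 flag). Below is a minimal, hypothetical sketch using GCC/Clang's <cpuid.h> helper; it only checks whether the CPU advertises LA57, not whether the OS has enabled it.

```cpp
#include <cpuid.h>
#include <cstdio>

int main() {
    unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;
    // CPUID leaf 7, sub-leaf 0: ECX bit 16 is the LA57 (5-level paging) flag.
    if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 16)))
        std::puts("CPU supports 5-level paging (LA57)");
    else
        std::puts("No LA57 support reported");
    return 0;
}
```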

TechPowerUp GPU-Z v2.61.0 Released

TechPowerUp today released the latest update to TechPowerUp GPU-Z, the graphics sub-system information and monitoring utility for PC gamers and enthusiasts. Version 2.61.0 adds support for the new Intel Arc B580 and B570 "Battlemage" graphics cards. Preliminary support is also added for AMD "Navi 48" RDNA 4. This is also the first version of GPU-Z to support detection of Qualcomm Adreno 540, 630, 640, and 642L. GPU-Z is an x86 application, although you can run it on Windows on Arm platforms, where the operating system's emulation allows GPU-Z to detect the underlying hardware.
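As a quick illustration of how an x86 application can tell that it is running under that emulation layer, the hypothetical Win32 sketch below queries IsWow64Process2(), which reports both the machine the binary was built for and the machine the OS actually runs on. This is a generic technique, not GPU-Z's actual detection code.

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    USHORT processMachine = 0, nativeMachine = 0;
    // IsWow64Process2 (Windows 10 1709+) reports the process's target
    // machine and the host's native machine in one call.
    if (IsWow64Process2(GetCurrentProcess(), &processMachine, &nativeMachine)) {
        if (nativeMachine == IMAGE_FILE_MACHINE_ARM64 &&
            processMachine == IMAGE_FILE_MACHINE_I386)
            std::puts("x86 binary running under Windows on Arm emulation");
        else
            std::puts("Running natively (or on a non-ARM64 host)");
    }
    return 0;
}
```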

Other GPUs we've added support for include the iGPU of the AMD Ryzen 7 9800X3D, NVIDIA H100 80 GB HBM3, A4000H, A800 40 GB Active, RTX 5880 Ada, and Tesla K40st. We've also added PCI vendor detection for ONIX, the new Intel Arc board partner, and Shangke. A crash on some AMD Ryzen systems with older drivers, an installed discrete GPU, and a disabled iGPU has been fixed. Grab GPU-Z from the link below.

DOWNLOAD: TechPowerUp GPU-Z 2.61.0

Intel and Qualcomm Clash Over Arm-based PC Return Rates, Qualcomm Notes It's "Within Industry Norm"

In an interesting exchange between Intel's interim co-CEO Michelle Johnston Holthaus and Qualcomm, the two companies have offered conflicting statements about the market performance of Arm-based PCs. The dispute centers on customer satisfaction and return rates for PCs powered by Qualcomm's Snapdragon X processors. During the Barclays 22nd Annual Global Technology Conference, Holthaus claimed that retailers are experiencing high return rates for Arm PCs, mainly citing software compatibility issues. According to her, customers are finding that typical applications don't work as expected on these devices. "I mean, if you look at the return rate for Arm PCs, you go talk to any retailer, their number one concern is, wow, I get a large percentage of these back. Because you go to set them up, and the things that we just expect don't work," said Holthaus.

"Our devices continue to have greater than 4+ stars across consumer reviews and our products have received numerous accolades across the industry including awards from Fast Company, TechRadar, and many consumer publications. Our device return rates are within industry norm," said Qualcomm representative for CRN. Qualcomm projects that up to 50% of laptops will transition to non-x86 platforms within five years, signaling their confidence in Arm-based solutions. While software compatibility remains a challenge for Arm PCs, with not all Windows applications fully supported, Qualcomm and Microsoft have implemented an emulation layer to address these limitations. Holthaus acknowledged that Apple's successful transition to Arm-based processors has helped pave the way for broader Arm adoption in the PC market. "Apple did a lot of that heavy lift for Arm to make that ubiquitous with their iOS and their whole walled garden stack. So I'm not going to say Arm will get more, I'm sure, than it gets today. But there are certainly, I think, some real barriers to getting there," noted Holthaus.

Advantech Unveils Hailo-8 Powered AI Acceleration Modules for High-Efficiency Vision AI Applications

Advantech, a leading provider of AIoT platforms and services, proudly unveils its latest AI acceleration modules: the EAI-1200 and EAI-3300, powered by Hailo-8 AI processors. These modules deliver AI performance of up to 52 TOPS while achieving more than 12 times the power efficiency of comparable AI modules and GPU cards. Designed in standard M.2 and PCIe form factors, the EAI-1200 and EAI-3300 can be seamlessly integrated with diverse x86 and Arm-based platforms, enabling quick upgrades of existing systems and boards to incorporate AI capabilities. With these AI acceleration modules, developers can run inference efficiently on the Hailo-8 NPU while handling application processing primarily on the CPU, optimizing resource allocation. The modules are paired with user-friendly software toolkits, including the Edge AI SDK for seamless integration with HailoRT, the Dataflow Compiler for converting existing models, and TAPPAS, which offers pre-trained application examples. These features accelerate the development of edge-based vision AI applications.

EAI-1200 M.2 AI Module: Accelerating Development for Vision AI Security
The EAI-1200 is an M.2 AI module powered by a single Hailo-8 VPU, delivering up to 26 TOPS of computing performance while consuming approximately 5 watts of power. An optional heatsink supports operation in temperatures ranging from -40 to 65°C, ensuring easy integration. This cost-effective module is designed especially to be bundled with Advantech's systems and boards, such as the ARK-1221L, AIR-150, and AFE-R770, enhancing AI applications including baggage screening, workforce safety, and autonomous mobile robots (AMR).

HighPoint M.2/E1.S NVMe RAID AICs Deliver Unparalleled Speed and Scalability for GPU Server Applications

HighPoint Technologies, a leader in advanced PCIe Switch and RAID AIC, Adapter and Storage Enclosure solutions, has announced an extensive line of M.2 and E1.S RAID AICs engineered to accommodate high-performance GPU-centric workloads. Designed for enterprise and datacenter class computing environments, HighPoint NVMe RAID AICs deliver class-leading performance and unmatched scalability, enabling modern x86/AMD and Arm platforms to support 4 to 16 NVMe SSDs via a single PCIe Gen 4 or Gen 5 x16 slot. State-of-the-art PCIe Switching Architecture and flexible RAID technology enable administrators to custom tailor M.2 and E1.S storage configurations for a broad range of data-intensive applications, and seamlessly scale or expand existing storage configurations to meet the needs of evolving workflows.

Unprecedented Storage Density
HighPoint NVMe AICs have established a new milestone for M.2 NVMe storage. HighPoint's revolutionary Dual-Width AIC architecture enables a single PCIe Gen 4 or Gen 5 x16 slot to directly host up to 16 M.2 NVMe SSDs and 128 TB of storage capacity (16 × 8 TB drives), at speeds up to 28 GB/s; a truly unprecedented advancement in compact, single-device storage expansion solutions. State-of-the-art PCIe switching technology and advanced cooling systems maximize transfer throughput and ensure M.2 configurations operate at peak efficiency by halting the performance-sapping threat of thermal throttling in its tracks.

RPCS3 PlayStation 3 Emulator Gets Native arm64 Support on Linux, macOS, and Windows

The RPCS3 team has announced the successful implementation of arm64 architecture support for their PlayStation 3 emulator. This development enables the popular emulator to run on a broader range of devices, including Apple Silicon machines, Windows-on-Arm, and even some smaller Arm-based SBC systems like the Raspberry Pi 5. The journey to arm64 support began in late 2021, following the release of Apple's M1 processors, with initial efforts focused on Linux platforms. After overcoming numerous technical hurdles, the development team, led by core developer Nekotekina and graphics specialist kd-11, achieved a working implementation by mid-2024. One of the primary challenges involved adapting the emulator's just-in-time (JIT) compiler for arm64 systems.

The team developed a solution using LLVM's intermediate representation (IR) transformer, which allows the emulator to generate code once for x86-64 and then transform it for arm64 platforms. This approach eliminated the need to maintain separate codebases for different architectures. A particular technical challenge emerged from the difference in memory management between x86 and arm64 systems. While the PlayStation 3 and traditional x86 systems use 4 KB memory pages, modern arm64 platforms typically operate with 16 KB pages. Though this larger page size can improve memory performance in native applications, it presented unique challenges for emulating the PS3's graphics systems, particularly when handling smaller textures and buffers. While the emulator now runs on arm64 devices, performance varies significantly depending on the hardware. Simple applications and homebrew software show promising results, but more demanding commercial games may require substantial computational power beyond what current affordable Arm devices can provide.
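To make the page-size mismatch concrete, the minimal sketch below (not RPCS3's actual code) queries the host page size on a POSIX system and rounds a 4 KB-aligned guest range out to host-page granularity—the kind of adjustment an emulator must make before it can back guest memory with mmap() on a 16 KB-page kernel. The guest address and length are hypothetical.

```cpp
#include <unistd.h>
#include <cstdio>

int main() {
    // 4 KB on x86-64; typically 16 KB on Apple Silicon and many arm64 kernels.
    long page = sysconf(_SC_PAGESIZE);
    std::printf("Host page size: %ld bytes\n", page);

    // Hypothetical guest range, aligned to the PS3's 4 KB pages.
    unsigned long guestAddr = 0x3000, guestLen = 0x1000;

    // Round outward to host-page boundaries before mapping.
    unsigned long start = guestAddr & ~(unsigned long)(page - 1);
    unsigned long end = (guestAddr + guestLen + page - 1) & ~(unsigned long)(page - 1);
    std::printf("Host mapping needed: [0x%lx, 0x%lx)\n", start, end);
    return 0;
}
```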

Linux Kernel Patch Fixes Minutes-Long Boot Times on AMD "Zen 1" and "Zen 2" Processors

A significant fix has been submitted to the Linux kernel 6.13-rc1 that addresses prolonged boot times affecting older AMD processors, specifically targeting "Zen 1" and "Zen 2" architectures. The issue, which has been present for approximately 18 months, could cause boot delays ranging from several seconds to multiple minutes in extreme cases. The problem was discovered by a Nokia engineer who reported inconsistent boot delays across multiple AMD EPYC servers. The most severe instances showed the initial unpacking process taking several minutes longer than expected, though not all boots were affected. Investigation revealed that the root cause stemmed from a kernel modification implemented in June 2023, specifically related to CPU microcode update handling.

The technical issue was identified as a missing step in the boot process: Zen 1 and Zen 2 processors require the patch buffer mapping to be flushed from the Translation Lookaside Buffer (TLB) after applying CPU microcode updates during startup. The fix, submitted as part of the "x86/urgent" material ahead of the Linux 6.13-rc1 release, implements the necessary TLB flush for affected AMD Ryzen and EPYC systems. This addition eliminates what developers described as "unnecessary and unnatural delays" in the boot process. While the solution will be included in the upcoming Linux 6.13 kernel release, plans are in place to back-port the fix to stable kernel versions to help cover most Linux users on older Zen architectures.
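For illustration, the pattern at the heart of the fix looks roughly like the sketch below: after writing new microcode through a temporary mapping, the kernel must evict the stale TLB entry for that mapping so subsequent accesses re-walk the page tables. This is a simplified, hypothetical rendition using the x86 invlpg instruction, not the actual patch; invlpg is privileged and only runs in kernel context.

```cpp
// Kernel-context pseudocode: invlpg is a privileged x86 instruction.
static inline void flush_tlb_one_page(const void *addr) {
    // Invalidate the TLB entry covering 'addr' so the next access
    // re-walks the page tables and sees the fresh mapping.
    asm volatile("invlpg (%0)" : : "r"(addr) : "memory");
}

// After applying a microcode patch via a temporary buffer mapping:
//   apply_microcode(patch_buffer);
//   flush_tlb_one_page(patch_buffer);  // the step Zen 1/Zen 2 were missing
```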

"Jaguar Shores" is Intel's Successor to "Falcon Shores" Accelerator for AI and HPC

Intel has prepared "Jaguar Shores," its "next-next" generation AI and HPC accelerator and the successor to its upcoming "Falcon Shores" GPU. Revealed during a technical workshop at the SC2024 conference, the chip was unveiled by Intel's Habana Labs division, albeit unintentionally. This positions Jaguar Shores as the follow-up to Falcon Shores, which is scheduled to launch next year. While details about Jaguar Shores remain sparse, its designation suggests it could be a general-purpose GPU (GPGPU) aimed at AI training, inferencing, and HPC tasks. Intel's strategy aligns with its push to incorporate advanced manufacturing nodes, such as the 18A process featuring RibbonFET and backside power delivery, which promise significant efficiency gains, so we can expect upcoming AI accelerators to incorporate these technologies.

Intel's AI chip lineup has faced numerous challenges, including shifting plans for Falcon Shores, which has transitioned from a CPU-GPU hybrid to a standalone GPU, and the cancellation of Ponte Vecchio. Despite financial constraints and job cuts, Intel has maintained its focus on developing cutting-edge AI solutions. "We continuously evaluate our roadmap to ensure it aligns with the evolving needs of our customers. While we don't have any new updates to share, we are committed to providing superior enterprise AI solutions across our CPU and accelerator/GPU portfolio," an Intel spokesperson stated. The announcement of Jaguar Shores shows Intel's determination to remain competitive. However, the company faces steep competition: NVIDIA and AMD continue to set benchmarks with performant designs, while Intel has struggled to capture a significant share of the AI training market. The company's Gaudi lineup ends with its third generation, and Gaudi IP will be integrated into Falcon Shores.

Interview with RISC-V International: High-Performance Chips, AI, Ecosystem Fragmentation, and The Future

RISC-V is an industry-standard instruction set architecture (ISA) born at UC Berkeley. RISC-V is the fifth iteration in the lineage of historic RISC processors. The core value of the RISC-V ISA is the freedom of usage it offers: any organization can leverage the ISA to design the best possible core for its specific needs, with no regional restrictions or licensing costs. This freedom attracts a massive ecosystem of developers and companies building systems using the RISC-V ISA. To support these efforts and grow the ecosystem, the brains behind RISC-V decided to form RISC-V International—a non-profit foundation that governs the ISA and guides the ecosystem.

We had the privilege of talking with Andrea Gallo, Vice President of Technology at RISC-V International. Andrea oversees the technological advancement of RISC-V, collaborating with vendors and institutions to overcome challenges and expand its global presence. Andrea's career in technology spans several influential roles at major companies. Before joining RISC-V International, he worked at Linaro, where he pioneered Arm data center engineering initiatives, later overseeing diverse technological sectors as Vice President of Segment Groups, and ultimately managing crucial business development activities as executive Vice President. During his earlier tenure as a Fellow at ST-Ericsson, he focused on smartphone and application processor technology, and at STMicroelectronics he optimized hardware-software architectures and established international development teams.

What the Intel-AMD x86 Ecosystem Advisory Group is, and What it's Not

AVX-512 was proposed by Intel more than a decade ago—in 2013 to be precise. A decade later, the implementation of this instruction set on CPU cores remains wildly spotty—Intel implemented it first on an HPC accelerator, then its Xeon server processors, then its client processors, before realizing that hardware hadn't caught up with the technology to execute AVX-512 instructions in an energy-efficient manner, and deprecating it on the client. AMD implemented it just a couple of years ago with "Zen 4," using a dual-pumped 256-bit FPU on 5 nm, before finally implementing a true 512-bit FPU on 4 nm. AVX-512 is a microcosm of what's wrong with the x86 ecosystem.
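This spotty install base is exactly why software has to probe for AVX-512 at run time rather than assume it. Below is a minimal, hypothetical dispatch sketch using the __builtin_cpu_supports() builtin provided by GCC and Clang; the scalar fallback stands in for whatever non-AVX-512 path a real product would ship.

```cpp
#include <cstdio>

// Plain fallback path; a real product would pair this with an
// AVX-512 kernel compiled with the appropriate target attributes.
float sum_scalar(const float *a, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; ++i) s += a[i];
    return s;
}

int main() {
    float data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    // CPUID-backed runtime check (GCC/Clang builtin).
    if (__builtin_cpu_supports("avx512f"))
        std::puts("AVX-512F present: dispatch to the 512-bit path");
    else
        std::puts("No AVX-512F: use the scalar/AVX2 fallback");
    std::printf("sum = %.1f\n", sum_scalar(data, 8));
    return 0;
}
```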

There are only two x86 CPU core vendors: the IP owner Intel, and its only surviving licensee capable of contemporary CPU cores, AMD. Any new addition to the ISA introduced by either of the two has to go through the grind of their duopolistic competition before software vendors can assume there's a uniform install base to implement something new. x86 is a net loser in this, and Arm is a net winner. Arm Holdings makes no hardware of its own; it continuously develops the Arm machine architecture and a first-party set of reference-design CPU cores that any licensee can implement. Arm's great march began with tiny embedded devices, before its explosion into client computing with smartphone SoCs. There are now Arm-based server processors, and the architecture is making inroads into the last market that x86 holds sway over—the PC. Apple's M-series processors compete with all segments of PC processors—right from the 7 W class to the HEDT/workstation class. Qualcomm entered this space with its Snapdragon X Elite family, and now Dell believes NVIDIA will take a swing at client processors in 2025. Then there's RISC-V. Intel finally did something it should have done two decades ago—set up a multi-brand Ecosystem Advisory Group. Here's what it is, and more importantly, what it's not.

Latest Asahi Linux Brings AAA Windows Games to Apple M1 MacBooks With Intricate Graphics Driver and Translation Stack

While Apple laptops have never really been the first stop for PC gaming, Linux is slowly shaping up to be an excellent gaming platform, largely thanks to open-source development efforts as well as work from the likes of AMD and NVIDIA, who have both put significant work into their respective Linux drivers in recent years. This makes efforts like the Asahi Linux Project all the more intriguing. Asahi Linux is a project that aims to bring Linux to Apple Silicon Macs—a task that has proven rather difficult, thanks to the intricacies of developing a bespoke GPU driver for Apple's custom GPUs. In a recent blog post, the graphics developer behind the Asahi Linux Project showed off a number of AAA games, albeit older titles, running on an Apple M1 processor on the latest Asahi Linux build.

To run the games on Apple Silicon, Asahi Linux uses a "game playing toolkit," which relies on a number of custom graphics drivers and emulators, including tools from Valve's Proton translation layer, which ironically was also the foundation for Apple's Game Porting Toolkit. Asahi uses FEX to emulate x86 on ARM, Wine as a translation layer for Windows apps, and DXVK and vkd3d-proton for DirectX-to-Vulkan translation. In the blog post, the Asahi developer claims that the alpha is capable of running games like Control, The Witcher 3, and Cyberpunk 2077 at playable frame rates. While 60 FPS is not yet attainable in the majority of new high-fidelity games, there are a number of indie titles that run quite well on Asahi Linux, including Hollow Knight, Ghostrunner, and Portal 2.

Intel Updates 64-Bit Only "X86S" Instruction Set Architecture Specification to Version 1.2

Intel has released version 1.2 of its X86S architecture specification. The X86S project, first announced last year, aims to modernize the x86 architecture that has been the heart of PCs since the late 1970s. Over the decades, Intel and AMD have continually expanded x86's capabilities, resulting in a complex instruction set that Intel now sees as partially outdated. The latest specification primarily focuses on removing legacy features, particularly 16-bit and 32-bit support—a radical departure from x86's long-standing commitment to backward compatibility, in line with Intel's goal of simplifying x86. While the specification does mention a "32-bit compatibility mode," it is not yet clear how 32-bit apps would run. This ambiguity raises questions about how X86S might handle existing 32-bit applications, which, despite declining relevance, still play a role in many computing environments.

The potential transition to X86S comes at a time when the industry is already moving away from 32-bit support. However, the proposed changes are subject to controversy. The x86 architecture's strength has long been its extensive legacy support, allowing older software to run on modern hardware. A move to X86S could disrupt this ecosystem, particularly for users relying on older applications. Furthermore, introducing X86S raises questions about the future relationship between Intel and AMD, the two primary x86 CPU designers. While Intel leads the initiative, AMD's role in the potential transition remains uncertain, given its significant contributions to the current x86-64 standard.

Intel Launches Gaudi 3 AI Accelerator and P-Core Xeon 6 CPU

As AI continues to revolutionize industries, enterprises are increasingly in need of infrastructure that is both cost-effective and available for rapid development and deployment. To meet this demand head-on, Intel today launched Xeon 6 with Performance-cores (P-cores) and Gaudi 3 AI accelerators, bolstering the company's commitment to deliver powerful AI systems with optimal performance per watt and lower total cost of ownership (TCO).

"Demand for AI is leading to a massive transformation in the data center, and the industry is asking for choice in hardware, software and developer tools," said Justin Hotard, Intel executive vice president and general manager of the Data Center and Artificial Intelligence Group. "With our launch of Xeon 6 with P-cores and Gaudi 3 AI accelerators, Intel is enabling an open ecosystem that allows our customers to implement all of their workloads with greater performance, efficiency and security."

Qualcomm Said to Have Approached Intel About Takeover Bid

This is not an April fool: Qualcomm has apparently approached Intel with a takeover bid, according to the Wall Street Journal. The news follows earlier rumours about Qualcomm having eyed the opportunity to buy parts of Intel's client PC business, especially the parts related to chip design. Now it looks like Qualcomm has decided it might as well give it a go and take over Intel entirely, if the WSJ's sources can be trusted. It's still early days, though: no official offer appears to have been made by Qualcomm so far, and it doesn't appear to be a hostile takeover attempt at this point in time. As such, this could turn out to be nothing, or we could see a huge change in the chip market if something comes of it.

It's worth keeping in mind that Intel's share price has dropped by around 57 percent so far this year—not taking into account today's small jump for Intel—and Qualcomm's market cap stands at over twice that of Intel's, at 188 vs. 93 billion US dollars. Even if Intel were to agree to a takeover offer from Qualcomm, the two giants would face several antitrust hurdles in multiple countries. This is despite the two not being direct competitors, although with Qualcomm recently having entered the Windows laptop market, they are at least competing for some market share there. It's also unclear what Qualcomm would do with Intel's x86 legacy if it acquired Intel, as Qualcomm might not be interested in keeping it, at least not on the consumer side of its business. Time will tell if this is just some advanced speculation or a serious consideration by Qualcomm.

Microsoft DirectX 12 Shifts to SPIR-V as Default Interchange Format

Microsoft's Direct3D and HLSL teams have unveiled plans to integrate SPIR-V support into DirectX 12 with the upcoming release of Shader Model 7. This significant transition marks a new era in GPU programmability, as it aims to unify the intermediate representation for graphical-shader stages and compute kernels. SPIR-V, an open standard intermediate representation for graphics and compute shaders, will replace the proprietary DirectX Intermediate Language (DXIL) as the shader interchange format for DirectX 12. The adoption of SPIR-V is expected to ease development processes across multiple GPU runtime environments. By embracing this open standard, Microsoft aims to enhance HLSL's position as the premier language for compiling graphics and compute shaders across various devices and APIs. This transition is part of a multi-year development process, during which Microsoft will work closely with The Khronos Group and the LLVM Project. The company has joined Khronos' SPIR and Vulkan working groups to ensure smooth collaboration and rapid feature adoption.

While the transition will take several years, Microsoft is providing early notice to allow developers and partners to plan accordingly. The company will offer translation tools between SPIR-V and DXIL to facilitate a gradual transition for both application and driver developers. For those not familiar with graphics development, graphics APIs ship with a virtual instruction set architecture (ISA) that abstracts standard hardware features at a higher level. As GPUs don't follow the same ISA as CPUs (x86, Arm, RISC-V), this virtual ISA is needed to define some generics of the GPU architecture and allow various APIs like DirectX and Vulkan to run. Instead of splitting its support across several formats like DXIL, Microsoft is embracing the open SPIR-V standard, which will become the de facto format for API developers in the future, allowing focus on new features instead of constantly replicating each other's functions. While DXIL is used mainly in gaming environments, SPIR-V also has adoption in high-performance computing, with OpenCL and SYCL. It has a gaming presence as well through the Vulkan API, and we expect to see SPIR-V in DirectX 12 games too.
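During the transition, engines and tools will encounter both blob formats side by side. A simple, hypothetical way to tell them apart is by their headers: SPIR-V modules begin with the magic number 0x07230203, while DXIL ships inside a container that starts with the "DXBC" FourCC. The sketch below assumes raw, untruncated blobs and does no further validation.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

enum class ShaderFormat { SpirV, DxilContainer, Unknown };

// Distinguish a SPIR-V module from a DXIL (DXBC) container by header.
ShaderFormat detectShaderFormat(const uint8_t *blob, size_t size) {
    if (size < 4) return ShaderFormat::Unknown;
    uint32_t word;
    std::memcpy(&word, blob, sizeof(word));
    if (word == 0x07230203u)                // SPIR-V magic number
        return ShaderFormat::SpirV;
    if (std::memcmp(blob, "DXBC", 4) == 0)  // DXIL lives in a DXBC container
        return ShaderFormat::DxilContainer;
    return ShaderFormat::Unknown;
}
```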

Intel Announces New Mobile Lunar Lake Core Ultra 200V Series Processors

Intel today launched its most efficient family of x86 processors ever, the Intel Core Ultra 200V series processors. They deliver exceptional performance, breakthrough x86 power efficiency, a massive leap in graphics performance, no-compromise application compatibility, enhanced security and unmatched AI compute. The technology will power the industry's most complete and capable AI PCs with more than 80 consumer designs from more than 20 of the world's top manufacturing partners, including Acer, ASUS, Dell Technologies, HP, Lenovo, LG, MSI and Samsung. Pre-orders begin today with systems available globally on-shelf and online at over 30 global retailers starting Sept. 24. All designs featuring Intel Core Ultra 200V series processors and running the latest version of Windows are eligible to receive Copilot+ PC features as a free update starting in November.

"Intel's newest Core Ultra processors set the industry standard for mobile AI and graphics performance, and smash misconceptions about x86 efficiency. Only Intel has the scale through our partnerships with ISVs and OEMs, and the broader technology ecosystem, to provide consumers with a no-compromise AI PC experience."
--Michelle Johnston Holthaus, Intel executive vice president and general manager of the Client Computing Group

Intel Announces Deployment of Gaudi 3 Accelerators on IBM Cloud

IBM and Intel announced a global collaboration to deploy Intel Gaudi 3 AI accelerators as a service on IBM Cloud. This offering, which is expected to be available in early 2025, aims to help more cost-effectively scale enterprise AI and drive innovation underpinned with security and resiliency. This collaboration will also enable support for Gaudi 3 within IBM's watsonx AI and data platform. IBM Cloud is the first cloud service provider (CSP) to adopt Gaudi 3, and the offering will be available for both hybrid and on-premise environments.

"Unlocking the full potential of AI requires an open and collaborative ecosystem that provides customers with choice and accessible solutions. By integrating Gaudi 3 AI accelerators and Xeon CPUs with IBM Cloud, we are creating new AI capabilities and meeting the demand for affordable, secure and innovative AI computing solutions," said Justin Hotard, Intel executive vice president and general manager of the Data Center and AI Group.

Intel Dives Deep into Lunar Lake, Xeon 6, and Gaudi 3 at Hot Chips 2024

Demonstrating the depth and breadth of its technologies at Hot Chips 2024, Intel showcased advancements across AI use cases - from the data center, cloud and network to the edge and PC - while covering the industry's most advanced and first-ever fully integrated optical compute interconnect (OCI) chiplet for high-speed AI data processing. The company also unveiled new details about the Intel Xeon 6 SoC (code-named Granite Rapids-D), scheduled to launch during the first half of 2025.

"Across consumer and enterprise AI usages, Intel continuously delivers the platforms, systems and technologies necessary to redefine what's possible. As AI workloads intensify, Intel's broad industry experience enables us to understand what our customers need to drive innovation, creativity and ideal business outcomes. While more performant silicon and increased platform bandwidth are essential, Intel also knows that every workload has unique challenges: A system designed for the data center can no longer simply be repurposed for the edge. With proven expertise in systems architecture across the compute continuum, Intel is well-positioned to power the next generation of AI innovation." -Pere Monclus, chief technology officer, Network and Edge Group at Intel.

Tachyum Builds Last FPGA Prototypes Batch Ahead of Tape-Out

Tachyum today announced the final build of its Prodigy FPGA emulation system in advance of chip production and general availability next year. As part of the announcement, the company is also ending its purchase program for prototype systems that was previously offered to commercial and federal customers.

These last hardware FPGA prototype units will ensure Tachyum hits its extreme-reliability test targets of more than 10 quadrillion cycles prior to tape-out and before the first Prodigy chips hit the market. Tachyum's software emulation system - and access to it - is expanding with additional availability of open-source software ported ahead of Prodigy's upstreaming.

Qualcomm Snapdragon X Elite Mini-PC Dev Kit Arrives at $899

Qualcomm has started accepting preorders for its Snapdragon Dev Kit for Windows, based on the Snapdragon X Elite processor. Initially announced in May, the device is now available for preorder through Arrow at a competitive price point of $899. Despite its relatively high cost compared to typical mini PCs, it undercuts most recent laptops equipped with Snapdragon X processors, making it an attractive option for both developers and power users alike. Measuring a mere 199 x 175 x 35 mm, it comes equipped with 32 GB of LPDDR5x RAM, a 512 GB NVMe SSD, and support for the latest Wi-Fi 7 and Bluetooth 5 technologies. The connectivity options are equally robust, featuring three USB4 Type-C ports, two USB 3.2 Type-A ports, an HDMI output, and an Ethernet port.

At this mini PC's heart lies the Snapdragon X Elite (X1E-00-1DE) processor. This chip houses 12 Oryon CPU cores capable of reaching speeds up to 3.8 GHz, with a dual-core boost potential of 4.3 GHz. The processor also integrates Adreno graphics, delivering up to 4.6 TFLOPS of performance, and a Hexagon NPU capable of up to 45 TOPS for AI tasks. While similar to its laptop counterpart, the X1E-84-100, this version is optimized for desktop use. It can consume up to 80 watts of power, enabling superior sustained performance without the constraints of battery life or heat dissipation typically associated with mobile devices. This dev kit is made primarily for optimizing x86-64 software to run on the Arm platform; hence, removing the power limit is beneficial when translating code for Windows on Arm. The Snapdragon Dev Kit for Windows ships with a 180 W power adapter and comes pre-installed with Windows 11, making it ready for immediate use upon arrival.

Intel Core Ultra 200V "Lunar Lake" CPUs Arrive on September 3rd

Intel has officially confirmed the upcoming Core Ultra 200V "Lunar Lake" CPU generation is arriving on September 3rd. The official media alert states: "Ahead of the IFA 2024 conference, join Michelle Johnston Holthaus, Intel executive vice president and general manager of the Client Computing Group, and Jim Johnson, senior vice president and general manager of the Client Business Group, and Intel partners as they launch the next generation of Intel Core Ultra processors, code-named Lunar Lake. During the livestreamed event, they will reveal details on the new processors' breakthrough x86 power efficiency, exceptional core performance, massive leaps in graphics performance and the unmatched AI computing power that will drive this and future generations of Intel products."

With IFA happening in Berlin from September 6th to 10th, Intel's Lunar Lake launch is also happening in Berlin just a few days before, on September 3rd at 6 p.m. CEST (9 a.m. PDT). We expect to see nine SKUs: Core Ultra 9 288V, Core Ultra 7 268V, Core Ultra 7 266V, Core Ultra 7 258V, Core Ultra 7 256V, Core Ultra 5 238V, Core Ultra 5 236V, Core Ultra 5 228V, and Core Ultra 5 226V. All of the aforementioned models feature four P-cores and four E-cores, with varying Xe2 GPU core counts and clocks. We also expect to see Intel present its design wins and upcoming Lunar Lake devices like laptops during the launch.

Qualcomm Snapdragon X "Copilot+" AI PCs Only Accounted for 0.3% of PassMark Benchmark Runs

The much-anticipated revolution in AI-powered personal computing seems to be off to a slower start than expected. Qualcomm's Snapdragon X CPUs, touted as game-changers in the AI PC market, have struggled to gain significant traction since their launch. Recent data from PassMark, a popular benchmarking software, reveals that Snapdragon X CPUs account for a mere 0.3% of submissions in the past 30 days. This is a massive contrast to the 99.7% share held by traditional x86 processors from Intel and AMD, which raises questions about the immediate future of ARM-based PCs. The underwhelming adoption comes despite bold predictions from industry leaders. Qualcomm CEO Cristiano Amon had projected that ARM-based CPUs could capture up to 50% of the Windows PC market by 2029. Similarly, ARM's CEO anticipated a shift away from x86's long-standing dominance.

However, it turns out that these PCs are primarily bought for their battery life, not their AI capabilities. Of course, it's premature to declare Arm's Windows venture a failure. The AI PC market is still in its infancy, and upcoming mid-tier laptops featuring Snapdragon X Elite CPUs could boost adoption rates. A lot of time will need to pass before these PCs ship in volumes approaching the millions of units moved by x86 makers. The true test will come with the launch of AMD's Ryzen AI 300 and Intel's Lunar Lake CPUs, providing a clearer picture of how ARM-based options compare in AI performance. As the AI PC landscape evolves, Qualcomm faces mounting pressure: NVIDIA's anticipated entry into the market and significant performance improvements in next-generation x86 processors from Intel and AMD pose a massive challenge. The coming months will be crucial in determining whether Snapdragon X CPUs can live up to their initial hype and carve out a significant place in the AI PC ecosystem.

AMD Zen 6 to Cram Up to 32 CPU Cores Per CCD

AMD's future "Zen 6" CPU microarchitecture is rumored to cram up to 32 cores per CCD (CPU complex die), the common client/server chiplet containing the CPU cores, according to Kepler_L2, a reliable source of hardware leaks. At this point it's not clear if they are referring to the regular "Zen 6" CPU core, or the physically compacted "Zen 6c" core meant for high core-count cloud server processors. The current pure "Zen 4c" CCD found in EPYC "Bergamo" processors packs 16 cores across two 8-core CCXs (CPU core complexes), each sharing a 16 MB L3 cache among its 8 cores. The upcoming "Zen 5c" CCD will also pack 16 cores, but in a single 16-core CCX that shares 32 MB of L3 cache among the 16 cores for improved per-core cache access. "Zen 6" is expected to double this to 32 cores per CCD.

The 32-core CCD powered by "Zen 6" (likely Zen 6c) might take advantage of process improvements to double the core count. At this point, it's not clear whether this jumbo CCD features a single large CCX with all 32 cores sharing a large L3 cache, or two 16-core CCXs that each share, say, 32 MB of L3 cache among their 16 cores. What's clear with this leak, though, is that AMD is looking to continue ramping up CPU core counts per socket. Data centers and cloud customers seem to love this, and AMD is the only x86 processor maker in serious competition with Arm-based server processor manufacturers such as Ampere, significantly increasing core counts per socket with each generation.

AMD Hits Highest-Ever x86 CPU Market Share in Q1 2024 Across Desktop and Server

AMD has reached a significant milestone, capturing a record-high share of the x86 CPU market in the first quarter of 2024, according to the latest report from Mercury Research. This achievement marks a significant step forward for the chipmaker in its long battle against rival Intel's dominance in the crucial computer processor space. The surge was fueled by strong demand for AMD's Ryzen and EPYC processors across consumer and enterprise markets. The Ryzen lineup's compelling price-to-performance ratio has struck a chord with gamers, content creators, and businesses seeking cost-effective computing power without sacrificing capabilities. This secured AMD a 23.9% desktop share, up from the 19.8% it held in Q4 2023.

The company has also made major inroads on the data center front with its EPYC server CPUs. AMD's ability to supply capable yet affordable processors has enabled cloud providers and enterprises to scale operations on AMD's platform. Several leading tech giants have embraced EPYC, contributing to AMD's surging server market footprint, which now stands at 23.6%—a significant increase from just above 10% four years ago in 2020. AMD lost some share to Intel on the mobile PC front due to the Meteor Lake ramp, but it managed to gain a small percentage of the client PC market. As AMD rides this momentum into the second half of 2024, all eyes will be on whether the chipmaker can sustain the trajectory and potentially claim an even larger slice of the x86 CPU pie from Intel in the coming quarters.
Below, you can see additional graphs of mobile PC and client PC market share.