News Posts matching #x86


"Jaguar Shores" is Intel's Successor to "Falcon Shores" Accelerator for AI and HPC

Intel is preparing "Jaguar Shores," its "next-next" generation AI and HPC accelerator and the successor to the upcoming "Falcon Shores" GPU. The chip was revealed, apparently unintentionally, by Intel's Habana Labs division during a technical workshop at the SC2024 conference. Falcon Shores itself is scheduled to launch next year. While details about Jaguar Shores remain sparse, its designation suggests it could be a general-purpose GPU (GPGPU) aimed at AI training, inference, and HPC workloads. The timing also aligns with Intel's push to incorporate advanced manufacturing nodes, such as the 18A process featuring RibbonFET and backside power delivery, which promise significant efficiency gains, so we can expect upcoming AI accelerators to incorporate these technologies.

Intel's AI chip lineup has faced numerous challenges, including shifting plans for Falcon Shores, which has transitioned from a CPU-GPU hybrid to a standalone GPU, and the cancellation of Ponte Vecchio. Despite financial constraints and job cuts, Intel has maintained its focus on developing cutting-edge AI solutions. "We continuously evaluate our roadmap to ensure it aligns with the evolving needs of our customers. While we don't have any new updates to share, we are committed to providing superior enterprise AI solutions across our CPU and accelerator/GPU portfolio," an Intel spokesperson stated. The announcement of Jaguar Shores shows Intel's determination to remain competitive, but the company faces steep competition: NVIDIA and AMD continue to set benchmarks with performant designs, while Intel has struggled to capture a significant share of the AI training market. The Gaudi lineup ends with its third generation, and Gaudi IP will be integrated into Falcon Shores.

Interview with RISC-V International: High-Performance Chips, AI, Ecosystem Fragmentation, and The Future

RISC-V is an industry-standard instruction set architecture (ISA) born at UC Berkeley, the fifth iteration in the lineage of historic RISC processors. The core value of the RISC-V ISA is the freedom it offers: any organization can leverage the ISA to design the best possible core for its specific needs, with no regional restrictions or licensing costs. That freedom attracts a massive ecosystem of developers and companies building systems on the RISC-V ISA. To support these efforts and grow the ecosystem, the brains behind RISC-V formed RISC-V International—a non-profit foundation that governs the ISA and guides the ecosystem.

We had the privilege of talking with Andrea Gallo, Vice President of Technology at RISC-V International. Andrea oversees the technological advancement of RISC-V, collaborating with vendors and institutions to overcome challenges and expand its global presence. Andrea's career in technology spans several influential roles at major companies. Before joining RISC-V International, he worked at Linaro, where he pioneered Arm data center engineering initiatives, later overseeing diverse technological sectors as Vice President of Segment Groups, and ultimately managing crucial business development activities as Executive Vice President. During his earlier tenure as a Fellow at ST-Ericsson, he focused on smartphone and application processor technology, and at STMicroelectronics he optimized hardware-software architectures and established international development teams.

What the Intel-AMD x86 Ecosystem Advisory Group is, and What it's Not

AVX-512 was proposed by Intel more than a decade ago—in 2013 to be precise. A decade later, its implementation across CPU cores remains wildly spotty: Intel implemented it first on an HPC accelerator, then on its Xeon server processors, and then on its client processors, before realizing that its hardware couldn't execute AVX-512 instructions energy-efficiently and deprecating the extension on the client. AMD implemented it only a couple of years ago with "Zen 4," using a dual-pumped 256-bit FPU on 5 nm, before finally implementing a true 512-bit FPU on 4 nm with "Zen 5." AVX-512 is a microcosm of what's wrong with the x86 ecosystem.
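This fragmented install base is exactly why x86 software can never assume AVX-512 is present: it has to probe for the feature at runtime and fall back to a narrower vector path. A minimal sketch of such runtime dispatch, assuming Linux (it reads /proc/cpuinfo); the helper names are illustrative, not from any real library:

```python
# Sketch: runtime CPU feature probing and kernel dispatch on Linux.
# Helper names are illustrative; real projects use CPUID directly or
# compiler builtins like GCC's __builtin_cpu_supports().

def cpu_flags(path="/proc/cpuinfo"):
    """Return the set of CPU feature flags reported by the kernel."""
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()  # unknown platform: report no optional features

def pick_kernel(flags):
    """Choose the widest vector code path the CPU actually supports."""
    if "avx512f" in flags:   # AVX-512 Foundation
        return "avx512"
    if "avx2" in flags:
        return "avx2"
    return "scalar"

if __name__ == "__main__":
    print(pick_kernel(cpu_flags()))
```

The same pattern, wider-first with graceful fallback, is what math libraries and codecs ship today, which is why a uniform install base matters so much to software vendors.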

There are only two x86 CPU core vendors: the IP owner Intel, and its only surviving licensee capable of contemporary CPU cores, AMD. Any addition to the ISA introduced by either of the two has to go through the grind of their duopolistic competition before software vendors can assume there's a uniform install base to target. x86 is a net loser here, and Arm is a net winner. Arm Holdings makes no hardware of its own; it continuously develops the Arm machine architecture and a first-party set of reference-design CPU cores that any licensee can implement. Arm's great march began with tiny embedded devices, before its explosion into client computing with smartphone SoCs. There are now Arm-based server processors, and the architecture is making inroads into the last market x86 holds sway over—the PC. Apple's M-series processors compete with every segment of PC processors, right from the 7 W class to the HEDT/workstation class. Qualcomm entered this space with its Snapdragon X Elite family, and Dell believes NVIDIA will take a swing at client processors in 2025. Then there's RISC-V. Intel finally did something it should have done two decades ago: set up a multi-brand Ecosystem Advisory Group. Here's what it is, and more importantly, what it's not.

Latest Asahi Linux Brings AAA Windows Games to Apple M1 MacBooks With Intricate Graphics Driver and Translation Stack

While Apple laptops have never really been the first stop for PC gaming, Linux is slowly shaping up to be an excellent gaming platform, largely thanks to open-source development efforts as well as work from the likes of AMD and NVIDIA, who have both put significant effort into their respective Linux drivers in recent years. This makes efforts like the Asahi Linux Project all the more intriguing. Asahi Linux is a project that aims to bring Linux to Apple Silicon Macs—a task that has proven rather difficult, owing to the intricacies of developing a bespoke driver for Apple's custom GPUs. In a recent blog post, the graphics developer behind the Asahi Linux Project showed off a number of AAA games, albeit older titles, running on an Apple M1 processor on the latest Asahi Linux build.

To run the games on Apple Silicon, Asahi Linux uses a "game playing toolkit" built from a number of custom graphics drivers and translation layers, including tools from Valve's Proton translation layer, which ironically was also the foundation for Apple's Game Porting Toolkit. Asahi uses FEX to emulate x86 on ARM, Wine as a translation layer for Windows apps, and DXVK and vkd3d-proton for DirectX-to-Vulkan translation. In the blog post, the Asahi developer claims that the alpha is capable of running games like Control, The Witcher 3, and Cyberpunk 2077 at playable frame rates. While 60 FPS is not yet attainable in the majority of new high-fidelity games, a number of indie titles run quite well on Asahi Linux, including Hollow Knight, Ghostrunner, and Portal 2.

Intel Updates 64-Bit Only "X86S" Instruction Set Architecture Specification to Version 1.2

Intel has released version 1.2 of its X86S architecture specification. The X86S project, first announced last year, aims to modernize the x86 architecture that has been the heart of PCs since the late 1970s. Over the decades, Intel and AMD have continually expanded x86's capabilities, resulting in a complex instruction set that Intel now sees as partially outdated. The latest specification primarily focuses on removing legacy features, particularly 16-bit and 32-bit support. This radical departure from x86's long-standing commitment to backward compatibility is in keeping with the goal of simplifying x86. While the specification does mention a "32-bit compatibility mode," Intel has yet to detail how 32-bit apps would actually run—an ambiguity that raises questions about how X86S would handle existing 32-bit applications, which, despite declining relevance, still play a role in many computing environments.

The potential transition to X86S comes at a time when the industry is already moving away from 32-bit support. However, the proposed changes are not without controversy. The x86 architecture's strength has long been its extensive legacy support, allowing older software to run on modern hardware. A move to X86S could disrupt this ecosystem, particularly for users relying on older applications. Furthermore, introducing X86S raises questions about the future relationship between Intel and AMD, the two primary x86 CPU designers. While Intel leads the initiative, AMD's role in the potential transition remains uncertain, given its significant contributions to the current x86-64 standard.

Intel Launches Gaudi 3 AI Accelerator and P-Core Xeon 6 CPU

As AI continues to revolutionize industries, enterprises are increasingly in need of infrastructure that is both cost-effective and available for rapid development and deployment. To meet this demand head-on, Intel today launched Xeon 6 with Performance-cores (P-cores) and Gaudi 3 AI accelerators, bolstering the company's commitment to deliver powerful AI systems with optimal performance per watt and lower total cost of ownership (TCO).

"Demand for AI is leading to a massive transformation in the data center, and the industry is asking for choice in hardware, software and developer tools," said Justin Hotard, Intel executive vice president and general manager of the Data Center and Artificial Intelligence Group. "With our launch of Xeon 6 with P-cores and Gaudi 3 AI accelerators, Intel is enabling an open ecosystem that allows our customers to implement all of their workloads with greater performance, efficiency and security."

Qualcomm Said to Have Approached Intel About Takeover Bid

This is not an April fool: according to the Wall Street Journal, Qualcomm has approached Intel with a takeover bid. The news follows earlier rumours that Qualcomm had eyed buying parts of Intel's client PC business, especially those related to chip design. Now it looks like Qualcomm has decided it might as well take over Intel entirely, if the WSJ's sources can be trusted. It's still early days, though: no official offer appears to have been made by Qualcomm so far, and it doesn't appear to be a hostile takeover attempt at this point. As such, this could turn out to be nothing, or we could see a huge change in the chip market if something comes of it.

It's worth keeping in mind that Intel's share price has dropped by around 57 percent so far this year—not taking into account today's small jump—and Qualcomm's market cap stands at over twice Intel's, at 188 vs. 93 billion US dollars. Even if Intel were to agree to a takeover offer from Qualcomm, the two giants would face antitrust hurdles in multiple countries. That is despite the two not being direct competitors, although with Qualcomm having recently entered the Windows laptop market, they at least compete for some share there. It's also unclear what Qualcomm would do with Intel's x86 legacy if it acquired Intel, as Qualcomm might not be interested in keeping it, at least not on the consumer side of its business. Time will tell whether this is just advanced speculation or a serious consideration by Qualcomm.

Microsoft DirectX 12 Shifts to SPIR-V as Default Interchange Format

Microsoft's Direct3D and HLSL teams have unveiled plans to integrate SPIR-V support into DirectX 12 with the upcoming release of Shader Model 7. This significant transition marks a new era in GPU programmability, as it aims to unify the intermediate representation for graphical-shader stages and compute kernels. SPIR-V, an open standard intermediate representation for graphics and compute shaders, will replace the proprietary DirectX Intermediate Language (DXIL) as the shader interchange format for DirectX 12. The adoption of SPIR-V is expected to ease development processes across multiple GPU runtime environments. By embracing this open standard, Microsoft aims to enhance HLSL's position as the premier language for compiling graphics and compute shaders across various devices and APIs. This transition is part of a multi-year development process, during which Microsoft will work closely with The Khronos Group and the LLVM Project. The company has joined Khronos' SPIR and Vulkan working groups to ensure smooth collaboration and rapid feature adoption.

While the transition will take several years, Microsoft is providing early notice to allow developers and partners to plan accordingly. The company will offer translation tools between SPIR-V and DXIL to ease the migration for both application and driver developers. For those unfamiliar with graphics development: graphics APIs ship with a virtual instruction set architecture (ISA) that abstracts standard hardware features at a higher level. Because GPUs don't share a common ISA the way CPUs do (x86, Arm, RISC-V), this virtual ISA gives shader compilers a common target, which each GPU driver then lowers to its native instructions; it is what lets APIs like DirectX and Vulkan run across vendors. Instead of maintaining its own format in DXIL, Microsoft is embracing the open SPIR-V standard, which should become the de facto interchange format for API developers, letting teams focus on new features instead of replicating each other's work. While DXIL is used mainly in gaming, SPIR-V also has adoption in high-performance computing through OpenCL and SYCL, and an existing gaming presence through the Vulkan API; we expect to see SPIR-V in DirectX 12 games as well.
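To give a sense of what an interchange format like SPIR-V actually is at the byte level: per the SPIR-V specification, every module begins with a fixed five-word header whose first word is the magic number 0x07230203, which is what validators and translation tools key on. A small sketch that packs and checks such a header (the function names are mine, not from any SDK):

```python
import struct

SPIRV_MAGIC = 0x07230203  # first word of every SPIR-V module

def make_header(major=1, minor=5, generator=0, bound=1):
    """Pack the five-word SPIR-V module header, little-endian.

    Words: magic, version (0x00MMmm00), generator magic, ID bound, schema.
    """
    version = (major << 16) | (minor << 8)
    return struct.pack("<5I", SPIRV_MAGIC, version, generator, bound, 0)

def parse_header(blob):
    """Return (major, minor) if blob starts with a valid SPIR-V header."""
    if len(blob) < 20:
        raise ValueError("too short for a SPIR-V header")
    magic, version, _gen, _bound, _schema = struct.unpack_from("<5I", blob)
    if magic != SPIRV_MAGIC:
        raise ValueError("not a SPIR-V module")
    return (version >> 16) & 0xFF, (version >> 8) & 0xFF
```

A DXIL-to-SPIR-V translator of the kind Microsoft describes would, conceptually, emit exactly this kind of header followed by a stream of SPIR-V instructions.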

Intel Announces New Mobile Lunar Lake Core Ultra 200V Series Processors

Intel today launched its most efficient family of x86 processors ever, the Intel Core Ultra 200V series processors. They deliver exceptional performance, breakthrough x86 power efficiency, a massive leap in graphics performance, no-compromise application compatibility, enhanced security and unmatched AI compute. The technology will power the industry's most complete and capable AI PCs with more than 80 consumer designs from more than 20 of the world's top manufacturing partners, including Acer, ASUS, Dell Technologies, HP, Lenovo, LG, MSI and Samsung. Pre-orders begin today with systems available globally on-shelf and online at over 30 global retailers starting Sept. 24. All designs featuring Intel Core Ultra 200V series processors and running the latest version of Windows are eligible to receive Copilot+ PC features as a free update starting in November.

"Intel's newest Core Ultra processors set the industry standard for mobile AI and graphics performance, and smash misconceptions about x86 efficiency. Only Intel has the scale through our partnerships with ISVs and OEMs, and the broader technology ecosystem, to provide consumers with a no-compromise AI PC experience."
--Michelle Johnston Holthaus, Intel executive vice president and general manager of the Client Computing Group

Intel Announces Deployment of Gaudi 3 Accelerators on IBM Cloud

IBM and Intel announced a global collaboration to deploy Intel Gaudi 3 AI accelerators as a service on IBM Cloud. This offering, which is expected to be available in early 2025, aims to help more cost-effectively scale enterprise AI and drive innovation underpinned with security and resiliency. This collaboration will also enable support for Gaudi 3 within IBM's watsonx AI and data platform. IBM Cloud is the first cloud service provider (CSP) to adopt Gaudi 3, and the offering will be available for both hybrid and on-premise environments.

"Unlocking the full potential of AI requires an open and collaborative ecosystem that provides customers with choice and accessible solutions. By integrating Gaudi 3 AI accelerators and Xeon CPUs with IBM Cloud, we are creating new AI capabilities and meeting the demand for affordable, secure and innovative AI computing solutions," said Justin Hotard, Intel executive vice president and general manager of the Data Center and AI Group.

Intel Dives Deep into Lunar Lake, Xeon 6, and Gaudi 3 at Hot Chips 2024

Demonstrating the depth and breadth of its technologies at Hot Chips 2024, Intel showcased advancements across AI use cases - from the data center, cloud and network to the edge and PC - while covering the industry's most advanced and first-ever fully integrated optical compute interconnect (OCI) chiplet for high-speed AI data processing. The company also unveiled new details about the Intel Xeon 6 SoC (code-named Granite Rapids-D), scheduled to launch during the first half of 2025.

"Across consumer and enterprise AI usages, Intel continuously delivers the platforms, systems and technologies necessary to redefine what's possible. As AI workloads intensify, Intel's broad industry experience enables us to understand what our customers need to drive innovation, creativity and ideal business outcomes. While more performant silicon and increased platform bandwidth are essential, Intel also knows that every workload has unique challenges: A system designed for the data center can no longer simply be repurposed for the edge. With proven expertise in systems architecture across the compute continuum, Intel is well-positioned to power the next generation of AI innovation." -Pere Monclus, chief technology officer, Network and Edge Group at Intel.

Tachyum Builds Last FPGA Prototypes Batch Ahead of Tape-Out

Tachyum today announced the final build of its Prodigy FPGA emulation system in advance of chip production and general availability next year. As part of the announcement, the company is also ending its purchase program for prototype systems that was previously offered to commercial and federal customers.

These last hardware FPGA prototype units will ensure Tachyum hits its extreme-reliability test targets of more than 10 quadrillion cycles prior to tape-out and before the first Prodigy chips hit the market. Tachyum's software emulation system - and access to it - is expanding with additional availability of open-source software ported ahead of Prodigy's upstreaming.

Qualcomm Snapdragon X Elite Mini-PC Dev Kit Arrives at $899

Qualcomm has started accepting preorders for its Snapdragon Dev Kit for Windows, based on the Snapdragon X Elite processor. Initially announced in May, the device is now available for preorder through Arrow at a competitive price point of $899. Despite its relatively high cost compared to typical mini PCs, it undercuts most recent laptops equipped with Snapdragon X processors, making it an attractive option for both developers and power users alike. Measuring a mere 199 x 175 x 35 mm, it comes equipped with 32 GB of LPDDR5x RAM, a 512 GB NVMe SSD, and support for the latest Wi-Fi 7 and Bluetooth 5 technologies. The connectivity options are equally robust, featuring three USB4 Type-C ports, two USB 3.2 Type-A ports, an HDMI output, and an Ethernet port.

At this mini PC's heart lies the Snapdragon X Elite (X1E-00-1DE) processor. The chip houses 12 Oryon CPU cores capable of reaching speeds up to 3.8 GHz, with a dual-core boost potential of 4.3 GHz. It also integrates Adreno graphics delivering up to 4.6 TFLOPS, and a Hexagon NPU capable of up to 45 TOPS for AI tasks. While similar to its laptop counterpart, the X1E-84-100, this version is optimized for desktop use: it can consume up to 80 watts, enabling superior sustained performance without the battery-life and heat-dissipation constraints typically associated with mobile devices. The dev kit is made primarily for porting and optimizing x86-64 software for the Arm platform, so the relaxed power limit is a boon for developers running translation and compilation workloads. The Snapdragon Dev Kit for Windows ships with a 180 W power adapter and comes pre-installed with Windows 11, making it ready for immediate use upon arrival.

Intel Core Ultra 200V "Lunar Lake" CPUs Arrive on September 3rd

Intel has officially confirmed the upcoming Core Ultra 200V "Lunar Lake" CPU generation is arriving on September 3rd. The official media alert states: "Ahead of the IFA 2024 conference, join Michelle Johnston Holthaus, Intel executive vice president and general manager of the Client Computing Group, and Jim Johnson, senior vice president and general manager of the Client Business Group, and Intel partners as they launch the next generation of Intel Core Ultra processors, code-named Lunar Lake. During the livestreamed event, they will reveal details on the new processors' breakthrough x86 power efficiency, exceptional core performance, massive leaps in graphics performance and the unmatched AI computing power that will drive this and future generations of Intel products."

With IFA happening in Berlin from September 6th to 10th, Intel's Lunar Lake launch is also happening in Berlin just a few days before, on September 3rd at 6 p.m. CEST (9 a.m. PDT). We expect to see nine SKUs: Core Ultra 9 288V, Core Ultra 7 268V, Core Ultra 7 266V, Core Ultra 7 258V, Core Ultra 7 256V, Core Ultra 5 238V, Core Ultra 5 236V, Core Ultra 5 228V, and Core Ultra 5 226V. All of the aforementioned models feature four P-cores and four E-cores, with varying Xe2 GPU core counts and clocks. We also expect to see Intel present its design wins and upcoming Lunar Lake devices like laptops during the launch.

Qualcomm Snapdragon X "Copilot+" AI PCs Only Accounted for 0.3% of PassMark Benchmark Runs

The much-anticipated revolution in AI-powered personal computing seems to be off to a slower start than expected. Qualcomm's Snapdragon X CPUs, touted as game-changers in the AI PC market, have struggled to gain significant traction since their launch. Recent data from PassMark, a popular benchmarking suite, reveals that Snapdragon X CPUs account for a mere 0.3% of submissions over the past 30 days—in stark contrast to the 99.7% share held by traditional x86 processors from Intel and AMD, raising questions about the immediate future of ARM-based PCs. The underwhelming adoption comes despite bold predictions from industry leaders: Qualcomm CEO Cristiano Amon had projected that ARM-based CPUs could capture up to 50% of the Windows PC market by 2029, and Arm's CEO similarly anticipated a shift away from x86's long-standing dominance.

However, it turns out that these PCs are primarily bought for their battery life, not their AI capabilities. Of course, it's premature to declare Arm's Windows venture a failure. The AI PC market is still in its infancy, and upcoming mid-tier laptops featuring Snapdragon X Elite CPUs could boost adoption rates. It will also take time before the installed base of these PCs approaches the millions of units shipped by x86 makers. The true test will come with the launch of AMD's Ryzen AI 300 and Intel's Lunar Lake CPUs, which will provide a clearer picture of how ARM-based options compare in AI performance. As the AI PC landscape evolves, Qualcomm faces mounting pressure: NVIDIA's anticipated entry into the market and significant performance improvements in next-generation x86 processors from Intel and AMD pose a massive challenge. The coming months will be crucial in determining whether Snapdragon X CPUs can live up to their initial hype and carve out a significant place in the AI PC ecosystem.

AMD Zen 6 to Cram Up to 32 CPU Cores Per CCD

AMD's future "Zen 6" CPU microarchitecture is rumored to cram up to 32 cores per CCD (CPU complex die), the common client/server chiplet housing the CPU cores, according to Kepler_L2, a reliable source of hardware leaks. At this point it's not clear whether they are referring to the regular "Zen 6" CPU core or the physically compacted "Zen 6c" core meant for high core-count cloud server processors. The current pure "Zen 4c" CCD found in EPYC "Bergamo" processors packs 16 cores across two 8-core CCXs (CPU core complexes), each sharing 16 MB of L3 cache within the CCX. The upcoming "Zen 5c" CCD will also pack 16 cores, but in a single 16-core CCX sharing 32 MB of L3 cache, for improved per-core cache access. "Zen 6" is expected to double this to 32 cores per CCD.

The 32-core CCD powered by "Zen 6" (likely "Zen 6c") might take advantage of process improvements to double the core count. At this point, it's not clear whether this jumbo CCD features a single large CCX with all 32 cores sharing one large L3 cache, or two 16-core CCXs that each share, say, 32 MB of L3 cache. What is clear from this leak, though, is that AMD intends to keep ramping up CPU core counts per socket. Data-center and cloud customers love this, and AMD is the only x86 processor maker in serious competition with Arm-based server processor makers such as Ampere to significantly increase core counts per socket with each generation.
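The CCD/CCX trade-off above can be made concrete with a little arithmetic: what matters per core is how many siblings share a CCX and how much L3 a core can reach without crossing a CCX boundary. A sketch using the figures mentioned in the leak—note that both "Zen 6c" layouts below are pure speculation, since the leak doesn't say how the 32 cores would be grouped or how much L3 they'd get:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CCD:
    name: str
    cores: int
    ccx_count: int
    l3_per_ccx_mb: int  # L3 shared within one CCX

    @property
    def cores_per_ccx(self) -> int:
        return self.cores // self.ccx_count

    @property
    def l3_reachable_mb(self) -> int:
        # L3 a core can access without crossing a CCX boundary
        return self.l3_per_ccx_mb

zen4c = CCD("Zen 4c (Bergamo)", cores=16, ccx_count=2, l3_per_ccx_mb=16)
zen5c = CCD("Zen 5c", cores=16, ccx_count=1, l3_per_ccx_mb=32)
# Two hypothetical Zen 6c layouts consistent with "32 cores per CCD";
# the cache sizes here are guesses, not leaked figures:
zen6_one_ccx = CCD("Zen 6c, single CCX (speculative)", 32, 1, 64)
zen6_two_ccx = CCD("Zen 6c, dual CCX (speculative)", 32, 2, 32)

for d in (zen4c, zen5c, zen6_one_ccx, zen6_two_ccx):
    print(f"{d.name}: {d.cores_per_ccx} cores/CCX, "
          f"{d.l3_reachable_mb} MB L3 reachable per core")
```

The model shows why the single-CCX option is attractive: fewer cross-CCX hops and more directly reachable L3 per core, at the cost of a larger, harder-to-arbitrate shared cache.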

AMD Hits Highest-Ever x86 CPU Market Share in Q1 2024 Across Desktop and Server

AMD has reached a significant milestone, capturing a record-high share of the x86 CPU market in the first quarter of 2024, according to the latest report from Mercury Research. This achievement marks a significant step forward in the chipmaker's long battle against rival Intel's dominance in the crucial processor space. The surge was fueled by strong demand for AMD's Ryzen and EPYC processors across consumer and enterprise markets. The Ryzen lineup's compelling price-to-performance ratio has struck a chord with gamers, content creators, and businesses seeking cost-effective computing power without sacrificing capabilities, securing AMD a 23.9% desktop share, up from 19.8% in Q4 2023.

The company has also made major inroads in the data center with its EPYC server CPUs. AMD's ability to supply capable yet affordable processors has enabled cloud providers and enterprises to scale operations on its platform, and several leading tech giants have embraced EPYC, contributing to AMD's surging server footprint: its server share now stands at 23.6%, a significant increase from just above 10% in 2020. AMD lost some mobile PC share to Intel due to the Meteor Lake ramp, but managed to gain a small amount of overall client PC share. As AMD rides this momentum into the second half of 2024, all eyes will be on whether the chipmaker can sustain the trajectory and claim an even larger slice of the x86 CPU pie from Intel in the coming quarters.
Below, you can see additional graphs of mobile PC and client PC market share.

AMD Expands Commercial AI PC Portfolio to Deliver Leadership Performance Across Professional Mobile and Desktop Systems

Today, AMD announced new products that will expand its commercial mobile and desktop AI PC portfolio, delivering exceptional productivity and premium AI and connectivity experiences to business users. The new AMD Ryzen PRO 8040 Series are the most advanced x86 processors built for business laptops and mobile workstations. AMD also announced the AMD Ryzen PRO 8000 Series desktop processor, the first AI-enabled desktop processor for business users, engineered to deliver cutting-edge performance with low power consumption.

With AMD Ryzen AI built into select models, AMD is further extending its AI PC leadership. By leveraging the CPU, GPU, and dedicated on-chip neural processing unit (NPU), new Ryzen AI-powered processors provide more dedicated AI processing power than previous generations, with up to 16 dedicated NPU TOPS (Trillions of Operations Per Second) and up to 39 total system TOPS. Commercial PCs equipped with new Ryzen AI-enabled processors will help transform user experience, offering next-gen performance for AI-enabled collaboration, content creation, and data and analytics workloads. With the addition of AMD PRO technologies, IT managers can unlock enterprise-grade manageability features to simplify IT operations and complete PC deployment faster across the organization, built-in security features for chip-to-cloud defense from sophisticated attacks, as well as unprecedented stability, reliability and platform longevity for enterprise software.

Report: Global PC Shipments Return to Growth and Pre-Pandemic Volumes in the First Quarter of 2024

After two years of decline, the worldwide traditional PC market returned to growth during the first quarter of 2024 (1Q24) with 59.8 million shipments, growing 1.5% year over year, according to preliminary results from the International Data Corporation (IDC) Worldwide Quarterly Personal Computing Device Tracker. Growth was largely achieved due to easy year-over-year comparisons as the market declined 28.7% during the first quarter of 2023, which was the lowest point in PC history. In addition, global PC shipments finally returned to pre-pandemic levels as 1Q24 volumes rivaled those seen in 1Q19 when 60.5 million units were shipped.

With inflation numbers trending down, PC shipments have begun to recover in most regions, leading to growth in the Americas as well as Europe, the Middle East, and Africa (EMEA). However, the deflationary pressures in China directly impacted the global PC market. As the largest consumer of desktop PCs, weak demand in China led to yet another quarter of declines for global desktop shipments, which already faced pressure from notebooks as the preferred form factor.

Google Launches Arm-Optimized Chrome for Windows, in Time for Qualcomm Snapdragon X Elite Processors

Google has just released an Arm-optimized version of its popular Chrome browser for Windows PCs. This new version is designed to take full advantage of Arm-based devices' hardware and operating system, promising users a faster and smoother browsing experience. The Arm-optimized Chrome for Windows has been developed in close collaboration with Qualcomm, ensuring that Chrome users get the best possible experience on current Arm-compatible PCs. Hiroshi Lockheimer, Senior Vice President at Google, stated, "We've designed Chrome browser to be fast, secure, and easy to use across desktops and mobile devices, and we're always looking for ways to bring this experience to more people." Early testers of the Arm-optimized Chrome have reported significant performance improvements compared to the x86-emulated version. The new browser is rolling out starting today and will be available on existing Arm devices, including PCs powered by Snapdragon 8cx, 8c, and 7c processors.

Chrome will soon receive a further performance boost with Qualcomm's upcoming Snapdragon X Elite SoC launch. Cristiano Amon, President and CEO of Qualcomm, expressed his excitement about the collaboration, saying, "As we enter the era of the AI PC, we can't wait to see Chrome shine by taking advantage of the powerful Snapdragon X Elite system." Qualcomm's Snapdragon X Elite devices are expected to hit the market in mid-2024 with a "dramatic performance improvement in the Speedometer 2.0 benchmark" on reference hardware. Chrome being one of the most essential applications, a native build running on Windows-on-Arm is a significant step for the platform, promising more investment from software makers.

Zhaoxin KX-7000 8-Core CPU Gets Geekbenched

Zhaoxin finally released its oft-delayed KX-7000 CPU series last December—the Chinese manufacturer claimed that its latest "Century Avenue Core" uArch consumer/desktop-oriented range was designed to "deliver double the performance of previous generations." Freshly discovered Geekbench 6.2.2 results indicate that Zhaoxin has succeeded on that front—Wccftech has pored over these figures, generated by an: "entry-level Zhaoxin KX-7000 CPU which has 8 cores, 8 threads, 4 MB of L2, and 32 MB of L3 cache. This chip was running at a base clock of 3.0 GHz and a boost clock of 3.3 GHz which is below its standard 3.6 GHz boost profile."

The new candidate was compared to Zhaoxin's previous-gen KX-U6780A and KX-6000G models, with Intel's Core i3-10100F processor thrown in as a familiar Western point of reference. The KX-7000 scored "823 points in single-core, and 3813 points in multi-core tests. For comparison, Intel's Comet Lake CPU with 4 cores and 8 threads plus a boost of up to 4.3 GHz offers a much higher score. It's around 75% faster in single and 17% faster in multi-core tests within the same benchmark." Higher clock speeds, doubled core counts, and higher TDPs do deliver "twice the performance" compared to direct forebears—mission accomplished there. It is also clear, however, that Zhaoxin's latest CPU architecture cannot keep up with a generations-old Team Blue design. Loongson's 3A6000 processor remains a more promising prospect—reports suggest that chip is somewhat comparable to mainstream AMD Zen 4 and Intel Raptor Lake products.
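As a back-of-the-envelope sanity check on the reported deltas: only the KX-7000 scores (823 single-core, 3813 multi-core) and the relative gaps ("~75% faster single-core, ~17% faster multi-core") come from the article, so the Core i3-10100F scores derived below are implied values, not measured ones.

```python
# Hypothetical check of the reported Geekbench 6 gaps. The KX-7000 scores and
# percentage deltas are from the article; the i3-10100F scores are implied.
kx7000_single, kx7000_multi = 823, 3813
implied_i3_single = round(kx7000_single * 1.75)  # "~75% faster" single-core
implied_i3_multi = round(kx7000_multi * 1.17)    # "~17% faster" multi-core
print(implied_i3_single, implied_i3_multi)       # 1440 1461-ish multi: 4461
```

The implied figures (roughly 1440 single-core, 4460 multi-core) are consistent with typical Comet Lake i3 results in Geekbench 6, which lends the quoted percentages some credibility.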

Qualcomm AI Hub Introduced at MWC 2024

Qualcomm Technologies, Inc. unveiled its latest advancements in artificial intelligence (AI) at Mobile World Congress (MWC) Barcelona. From the new Qualcomm AI Hub, to cutting-edge research breakthroughs and a display of commercial AI-enabled devices, Qualcomm Technologies is empowering developers and revolutionizing user experiences across a wide range of devices powered by Snapdragon and Qualcomm platforms.

"With Snapdragon 8 Gen 3 for smartphones and Snapdragon X Elite for PCs, we sparked commercialization of on-device AI at scale. Now with the Qualcomm AI Hub, we will empower developers to fully harness the potential of these cutting-edge technologies and create captivating AI-enabled apps," said Durga Malladi, senior vice president and general manager, technology planning and edge solutions, Qualcomm Technologies, Inc. "The Qualcomm AI Hub provides developers with a comprehensive AI model library to quickly and easily integrate pre-optimized AI models into their applications, leading to faster, more reliable and private user experiences."

Huawei's HiSilicon Taishan V120 Server Core Matches Zen 3 Performance

Huawei's new server CPU based on the HiSilicon Taishan V120 core has shown impressive single-threaded performance, matching AMD's Zen 3 architecture in a leaked Geekbench 6 benchmark. The Taishan V120 is likely manufactured on SMIC's 7 nm process node. The Geekbench 6 result posted on social media does not identify the exact Huawei server CPU model, but speculation points to it being the upcoming Kunpeng 930 chip. In the benchmark, the Taishan V120 CPU operating at 2.9 GHz scored 1527 in the single-core test. This positions it nearly equal to AMD's EPYC 7413 server CPU based on the Zen 3 architecture, which boosts up to 3.6 GHz and scored 1538 points. It also matches the single-threaded performance of Intel's Coffee Lake-based Xeon E-2136 from 2018, which scored 1553 points despite reaching 4.5 GHz boost speeds.
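The clock-speed disparity is the interesting part of this leak. A rough per-GHz normalization of the three scores makes the point; note that dividing by each chip's listed boost clock is an assumption, since the actual frequency held during the benchmark run is unknown.

```python
# Rough per-GHz normalization of the leaked Geekbench 6 single-core scores.
# Scores are from the article; using the listed boost clock as the divisor
# is an assumption -- the clock actually sustained during the run is unknown.
chips = {
    "Taishan V120 @ 2.9 GHz": (1527, 2.9),
    "EPYC 7413 (Zen 3) @ 3.6 GHz": (1538, 3.6),
    "Xeon E-2136 (Coffee Lake) @ 4.5 GHz": (1553, 4.5),
}
for name, (score, ghz) in chips.items():
    print(f"{name}: {score / ghz:.0f} points/GHz")
```

Under that assumption the Taishan V120 extracts roughly 527 points per GHz versus about 427 for Zen 3 and 345 for Coffee Lake, which is what makes the parity at only 2.9 GHz notable.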

The Taishan V120 core first appeared in Huawei's Kirin 9000 smartphone SoC in 2020. Using the core in server CPUs would give Huawei single-threaded performance competitive with AMD's last-generation EPYC Milan and Intel's older Skylake server chips. Multi-threaded benchmarks will be required to fully gauge the Kunpeng 930's overall performance when it launches. Huawei continues to innovate on its Arm-based server CPU designs even while facing restrictions on manufacturing and selling chips internationally due to its inclusion on the US Entity List in 2019. The impressive single-threaded results against leading x86 competitors demonstrate Huawei's resilience and self-reliance in developing homegrown data center technology through its HiSilicon division. More details on the Kunpeng 930 server chip will likely surface later this year, along with server configurations from Chinese OEMs.

Loongson 3A6000 CPU Reportedly Matches AMD Zen 4 and Intel Raptor Lake IPC

China's homegrown Loongson 3A6000 CPU shows promise but still needs to catch up to AMD's and Intel's latest offerings in real-world performance. According to benchmarks by Chinese tech reviewer Geekerwan, the 3A6000 delivers instructions per clock (IPC) on par with AMD's Zen 4 architecture and Intel's Raptor Lake. Using the SPEC CPU 2017 benchmark, Geekerwan clocked all the CPUs at 2.5 GHz to compare raw results against Zen 4 and Raptor Lake (Raptor Cove) processors. The Loongson 3A6000 seemingly matches the latest designs from AMD and Intel in integer workloads, with integer IPC measured at 4.8, versus 5.0 for Zen 4 and 4.9 for Raptor Cove. Floating-point performance still lags far behind, though. This demonstrates that Loongson's CPU design is catching up to global leaders but needs further development, especially in floating-point arithmetic.

However, the 3A6000 is held back by low clock speeds and limited core counts. With a maximum boost of just 2.5 GHz across four CPU cores, the 3A6000 cannot compete with flagship chips like AMD's 16-core Ryzen 9 7950X running at 5.7 GHz. While the 3A6000's IPC is impressive, its raw computing power is a fraction of that of leading x86 CPUs. Loongson must improve its manufacturing process technology to increase clock speeds, core counts, and cache sizes. The 3A6000's strengths highlight Loongson's ambitions: an in-house LoongArch ISA design fabricated on a 12 nm node achieves IPC competitive with state-of-the-art x86 chips built on the more advanced TSMC 5 nm and Intel 7 nodes. This shows the potential of Loongson's engineering. Reports suggest that next-generation Loongson 3A7000 CPUs will use SMIC's 7 nm node, allowing higher clocks and more cores to better harness the architecture's potential, so we expect the next generation to set a new bar for China's homegrown CPU performance.
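The "fraction of the raw computing power" claim can be sketched with a crude aggregate-throughput estimate, IPC × clock × cores, using only figures from the article (integer IPC from Geekerwan's fixed-clock SPEC run). This deliberately ignores SMT, memory bandwidth, and boost behavior, so it is an upper-bound illustration, not a benchmark.

```python
# Crude aggregate-throughput sketch: IPC x clock (GHz) x cores.
# Figures are from the article; SMT, memory, and boost behavior are ignored.
loongson_3a6000 = 4.8 * 2.5 * 4   # IPC 4.8, 2.5 GHz, 4 cores
ryzen_7950x = 5.0 * 5.7 * 16      # IPC 5.0, 5.7 GHz, 16 cores (Zen 4)
print(f"3A6000: {loongson_3a6000:.0f}  7950X: {ryzen_7950x:.0f}")
print(f"ratio: {loongson_3a6000 / ryzen_7950x:.1%}")
```

Even with near-parity IPC, the clock and core-count deficits leave the 3A6000 at roughly a tenth of the 7950X's peak integer throughput by this estimate, which is why the process-node upgrade matters more than further IPC work.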

AMD Zen 5 Details Emerge with GCC "Znver5" Patch: New AVX Instructions, Larger Pipelines

AMD's upcoming family of Ryzen 9000 series processors on the AM5 platform will carry new silicon under the hood—Zen 5. The latest revision of AMD's x86-64 microarchitecture features a few interesting improvements over the current Zen 4 it replaces, targeting the rumored 10-15% IPC uplift. Thanks to the latest set of patches for the GNU Compiler Collection (GCC), we now have the patch set proposing "znver5" enablement. One of the most interesting additions in Zen 5 over Zen 4 is the expansion of the AVX instruction set, with new AVX and AVX-512 instructions: AVX-VNNI, MOVDIRI, MOVDIR64B, AVX512VP2INTERSECT, and PREFETCHI.
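Software detects these instructions at runtime through CPUID leaf 7, where each one has a documented feature bit (per the Intel SDM and AMD APM listings). Reading CPUID itself requires native code, so the sketch below only builds the lookup table and a hypothetical helper over a pre-captured register dump; the helper name and dump format are illustrative assumptions.

```python
# Where the new Zen 5 instructions are advertised in CPUID leaf 7:
# (subleaf, register, bit), per the Intel SDM / AMD APM feature-bit tables.
CPUID_LEAF7_FEATURES = {
    "MOVDIRI":            (0, "ECX", 27),
    "MOVDIR64B":          (0, "ECX", 28),
    "AVX512VP2INTERSECT": (0, "EDX", 8),
    "AVX-VNNI":           (1, "EAX", 4),
    "PREFETCHI":          (1, "EDX", 14),
}

def has_feature(regs, name):
    """regs: {(subleaf, register): 32-bit value}, a hypothetical CPUID dump."""
    subleaf, reg, bit = CPUID_LEAF7_FEATURES[name]
    return bool(regs.get((subleaf, reg), 0) >> bit & 1)

# Fabricated dump that sets only the MOVDIRI bit, for illustration:
print(has_feature({(0, "ECX"): 1 << 27}, "MOVDIRI"))  # True
print(has_feature({(0, "ECX"): 1 << 27}, "AVX-VNNI"))  # False
```

In native code the equivalent check is a `CPUID` with `EAX=7` and the appropriate `ECX` subleaf, or GCC's `__builtin_cpu_supports` for the features it knows about.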

AVX-VNNI is a 256-bit vector version of the AVX-512 VNNI instruction set that accelerates neural-network inferencing workloads. It delivers the same VNNI instructions to CPUs that support 256-bit vectors but lack full 512-bit AVX-512 capability, effectively extending VNNI's AI-acceleration speedups down to 256-bit vectors and making the technology more broadly useful. While narrower in scope—it lacks opmasking and access to the extra vector registers of AVX-512 VNNI—AVX-VNNI is crucial in spreading VNNI inferencing speedups to real-world CPUs and applications. The AVX-512 VP2INTERSECT instruction is also making it into Zen 5, as noted above; it has so far been present only in Intel's Tiger Lake generation and is now considered deprecated for Intel SKUs. We don't know the rationale behind this inclusion, but AMD surely has a use case for it.
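To make the VNNI speedup concrete, here is a minimal Python sketch of the per-lane semantics of VPDPBUSD, the core VNNI dot-product instruction (non-saturating variant). On pre-VNNI CPUs the same work takes a three-instruction VPMADDUBSW/VPMADDWD/VPADDD sequence; VNNI fuses it into one operation per lane.

```python
def vpdpbusd_lane(acc, a_bytes, b_bytes):
    """One 32-bit lane of VPDPBUSD: multiply four unsigned bytes of a by four
    signed bytes of b, sum the four products, add to the 32-bit accumulator.
    Non-saturating variant (VPDPBUSDS is the saturating one)."""
    assert len(a_bytes) == len(b_bytes) == 4
    total = sum(u * s for u, s in zip(a_bytes, b_bytes))  # u: 0..255, s: -128..127
    return (acc + total) & 0xFFFFFFFF  # wrap to 32 bits, as the hardware does

print(vpdpbusd_lane(0, [1, 2, 3, 4], [10, 20, 30, 40]))  # 10+40+90+160 = 300
```

A real AVX-VNNI register performs this for eight 32-bit lanes at once (sixteen with AVX-512 VNNI), which is exactly the int8 multiply-accumulate pattern that dominates quantized inference.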