News Posts matching #SoC


Arm Announces the Cortex-X925 and Cortex-A725 Armv9 CPU Cores

Arm has announced a pair of new Armv9 CPU cores today, alongside a refresh of a third. The new additions are the Cortex-X925—a huge model number jump from the previous Cortex-X4—and the Cortex-A725, which should be an upgraded Cortex-A720. Finally, the Cortex-A520 has been refreshed to bring a 15 percent power efficiency improvement as well as support for 3 nm production nodes. Arm claims that the Cortex-X925 delivers its highest performance improvement ever over a previous generation, with a single-core uplift of up to 36 percent and an AI performance improvement of up to 46 percent compared to the Cortex-X4. The Cortex-X925 will support up to 3 MB of private L2 cache and is tape-out ready for 3 nm production nodes.

The Cortex-A725 is said to offer a 35 percent performance efficiency improvement over the Cortex-A720, and it has been given performance boosts in both AI and gaming workloads. It's said to be up to 25 percent more power efficient than the Cortex-A720, and L3 cache traffic has been improved by up to 20 percent. Again, the Cortex-A725 is ready for production on a 3 nm node. Finally, Arm has also updated its DynamIQ Shared Unit to the DSU-120, where Arm has managed to reduce typical workload power by up to 50 percent and cache miss power by up to 60 percent. The DSU-120 scales up to 14 Arm cores, suggesting that we might get to see some interesting new SoC implementations from Arm's partners in the coming years, although Arm's reference platform is a 2-4-2 configuration of the new cores.

Qualcomm's Success with Windows AI PC Drawing NVIDIA Back to the Client SoC Business

NVIDIA is eyeing a comeback to the client processor business, reveals a Bloomberg interview with the CEOs of NVIDIA and Dell. For NVIDIA, all it takes is a simple driver update that exposes every GeForce GPU with tensor cores as an NPU to Windows 11, with translation layers to get popular client AI apps to work with TensorRT. But that would require a discrete NVIDIA GPU. What about the vast market of Windows AI PCs powered by the likes of Qualcomm, Intel, and AMD, who each sell 15 W-class processors with integrated NPUs capable of 50 AI TOPS, which is all that Copilot+ needs? NVIDIA has held an Arm license for decades, and makes Arm-based CPUs to this day with NVIDIA Grace; however, that is a large server processor meant for its AI GPU servers.

NVIDIA already made client processors under the Tegra brand targeting smartphones, a business it wound down last decade. It has since been making Drive PX processors for its automotive self-driving hardware division; and of course there's Grace. NVIDIA hinted that it might have a client CPU for the AI PC market in 2025. In the interview, Bloomberg asked NVIDIA CEO Jensen Huang a pointed question on whether NVIDIA has a place in the AI PC market. Dell CEO Michael Dell, who was also in the interview, interjected "come back next year," to which Jensen affirmed "exactly." Dell would be in a front-and-center position to know if NVIDIA is working on a new PC processor for launch in 2025, and Jensen's nod almost confirms this.

Apple COO Meets with TSMC CEO to Reserve First Batch of 2 nm Allocation

Apple is locked in a fierce competition to stay ahead in the client AI applications race, and needs access to the latest foundry process at TSMC to build its future-generation SoCs on. The company's COO, Jeff Williams, reportedly paid a visit to TSMC CEO CC Wei to discuss Apple's allocation of the Taiwanese foundry's 2 nm-class silicon fabrication process for its next-generation M-series and A-series SoCs, powering its future generations of iPhone, iPad, and Mac. Taiwan-based industry observer Economic Daily, which broke this story, says that it isn't just an edge in performance and efficiency that Apple is after, but also leadership in generative AI and client AI applications. The company has reportedly invested over $100 billion in generative AI research and development over the past 5 years.

Apple's latest silicon, the M4 SoC, which debuted with the iPad Pro earlier this month, is built on TSMC's N3E (3 nm-class) node, and it's widely expected that the rest of the M4 line of SoCs for Macs, and the "A18," could be built on the same process, which would cover Apple for the rest of 2024, going into the first half of 2025. TSMC is expected to commence mass-production of chips on its 2 nm node in 2025, which is why Apple is in the TSMC boss's office to seek the first foundry allocation.

Microsoft Introduces Copilot+ PCs

Today, at a special event on our new Microsoft campus, we introduced the world to a new category of Windows PCs designed for AI, Copilot+ PCs. Copilot+ PCs are the fastest, most intelligent Windows PCs ever built. With powerful new silicon capable of an incredible 40+ TOPS (trillion operations per second), all-day battery life and access to the most advanced AI models, Copilot+ PCs will enable you to do things you can't on any other PC. Easily find and remember what you have seen in your PC with Recall, generate and refine AI images in near real-time directly on the device using Cocreator, and bridge language barriers with Live Captions, translating audio from 40+ languages into English.

These experiences come to life on a set of thin, light and beautiful devices from Microsoft Surface and our OEM partners Acer, ASUS, Dell, HP, Lenovo and Samsung, with pre-orders beginning today and availability starting on June 18. Starting at $999, Copilot+ PCs offer incredible value. This first wave of Copilot+ PCs is just the beginning. Over the past year, we have seen an incredible pace of AI innovation in the cloud with Copilot, allowing us to do things that we never dreamed possible. Now, we begin a new chapter with AI innovation on the device. We have completely reimagined the entirety of the PC - from silicon to the operating system, the application layer to the cloud - with AI at the center, marking the most significant change to the Windows platform in decades.

TSMC Unveils Next-Generation HBM4 Base Dies, Built on 12 nm and 5 nm Nodes

During the European Technology Symposium 2024, TSMC announced its readiness to manufacture next-generation HBM4 base dies using both 12 nm and 5 nm nodes. This significant development is expected to substantially improve the performance, power consumption, and logic density of HBM4 memory, catering to the demands of high-performance computing (HPC) and artificial intelligence (AI) applications. The shift from a traditional 1024-bit interface to an ultra-wide 2048-bit interface is a key aspect of the new HBM4 standard. This change will enable the integration of more logic and higher performance while reducing power consumption. TSMC's N12FFC+ and N5 processes will be used to produce these base dies, with the N12FFC+ process offering a cost-effective solution for achieving HBM4 performance and the N5 process providing even more logic and lower power consumption at HBM4 speeds.

The company is collaborating with major HBM memory partners, including Micron, Samsung, and SK Hynix, to integrate advanced nodes for HBM4 full-stack integration. TSMC's base die, fabricated using the N12FFC+ process, will be used to install HBM4 memory stacks on a silicon interposer alongside system-on-chips (SoCs). This setup will enable the creation of 12-Hi (48 GB) and 16-Hi (64 GB) stacks with per-stack bandwidth exceeding 2 TB/s. TSMC's collaboration with EDA partners like Cadence, Synopsys, and Ansys ensures the integrity of HBM4 channel signals, thermal accuracy, and electromagnetic interference (EMI) in the new HBM4 base dies. TSMC is also optimizing CoWoS-L and CoWoS-R for HBM4 integration, meaning that massive high-performance chips are already utilizing this technology and getting ready for volume manufacturing.
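As a quick sanity check of those figures, the 2048-bit interface and stack heights map onto the quoted bandwidth and capacities with a simple back-of-envelope calculation; the per-pin data rate and per-die capacity below are assumptions for illustration, not TSMC-confirmed numbers.

```python
# Back-of-envelope HBM4 stack math. The 2048-bit interface and 12-Hi/16-Hi stack
# heights come from the announcement; the per-pin rate and per-die capacity are
# assumed, illustrative values only.
interface_bits = 2048        # per-stack interface width (announced)
pin_rate_gbps = 8.0          # assumed per-pin data rate in Gb/s
die_capacity_gb = 4          # assumed capacity per DRAM die (32 Gb) in GB

bandwidth_tbs = interface_bits * pin_rate_gbps / 8 / 1000   # Gb/s -> GB/s -> TB/s
print(f"Per-stack bandwidth: ~{bandwidth_tbs:.2f} TB/s")    # ~2.05 TB/s, i.e. "exceeding 2 TB/s"

for stack_height in (12, 16):
    print(f"{stack_height}-Hi stack: {stack_height * die_capacity_gb} GB")  # 48 GB and 64 GB
```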

ASUS Leaks its own Snapdragon X Elite Notebook

Courtesy of ASUS Vietnam (via @rquandt on X/Twitter), we now have an idea of what ASUS' first Qualcomm Snapdragon X Elite notebook will look like, as well as its main specifications. It will share the Vivobook S 15 OLED branding with other notebooks from ASUS, although the leaked model carries the model number S5507QA-MA089WS. At its core is a Qualcomm Snapdragon X Elite X1E-78-100 SoC, which is the base model from Qualcomm. The SoC consists of 12 Oryon cores, of which eight are performance cores and four are energy efficient cores. A peak multi-threaded clock speed of 3.4 GHz, 42 MB of cache, and a 75 TOPS AI engine round off the SoC specs. The SoC is also home to a Qualcomm Adreno GPU, but so far Qualcomm hasn't released any useful specs about the GPU in the Snapdragon X Elite series of chips.

ASUS has paired the SoC with 32 GB of LPDDR5X memory of an unknown clock speed, although Qualcomm officially supports speeds of up to 8,448 MT/s in a configuration unusual for PC users: eight channels, each 16 bits wide, for a bandwidth of up to 135 GB/s. For comparison, Intel's latest Core Ultra processors max out at LPDDR5X-7,467 MT/s and up to 120 GB/s of memory bandwidth. Other features include a 1 TB PCIe 4.0 NVMe SSD, a glossy 15.6-inch, 2,880 x 1,620, 120 Hz OLED display with 600 nits peak brightness, and a 70 Wh battery. It's unclear what connectivity options will be on offer, but judging by the screenshot below, we can at least expect an HDMI output as well as a pair of USB Type-C ports, a microSD card slot, and a headphone jack. As far as pricing goes, Roland Quandt is suggesting a €1,500 base price on X/Twitter, but we'll have to wait for the official launch to find out what these Arm-based laptops will retail for. ASUS Vietnam has already removed the page from its website.
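For those who want to check the bandwidth math, peak theoretical LPDDR5X bandwidth is simply channels × channel width × transfer rate; the sketch below reproduces both figures, assuming the Core Ultra comparison also uses a 128-bit total bus.

```python
# Peak theoretical LPDDR5X bandwidth = channels * width (bits) * rate (MT/s) / 8 bits-per-byte.
def lpddr5x_bandwidth_gbs(channels: int, width_bits: int, mt_per_s: int) -> float:
    """Return peak bandwidth in GB/s (decimal) for a given memory configuration."""
    return channels * width_bits * mt_per_s / 8 / 1000

# Snapdragon X Elite: eight 16-bit channels at 8,448 MT/s
print(lpddr5x_bandwidth_gbs(8, 16, 8448))   # ~135.2 GB/s

# Core Ultra "Meteor Lake": assumed 128-bit total bus at 7,467 MT/s
print(lpddr5x_bandwidth_gbs(8, 16, 7467))   # ~119.5 GB/s
```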

Apple Introduces the M4 Chip

Apple today announced M4, the latest chip delivering phenomenal performance to the all-new iPad Pro. Built using second-generation 3-nanometer technology, M4 is a system on a chip (SoC) that advances the industry-leading power efficiency of Apple silicon and enables the incredibly thin design of iPad Pro. It also features an entirely new display engine to drive the stunning precision, color, and brightness of the breakthrough Ultra Retina XDR display on iPad Pro. A new CPU has up to 10 cores, while the new 10-core GPU builds on the next-generation GPU architecture introduced in M3, and brings Dynamic Caching, hardware-accelerated ray tracing, and hardware-accelerated mesh shading to iPad for the first time. M4 has Apple's fastest Neural Engine ever, capable of up to 38 trillion operations per second, which is faster than the neural processing unit of any AI PC today. Combined with faster memory bandwidth, along with next-generation machine learning (ML) accelerators in the CPU, and a high-performance GPU, M4 makes the new iPad Pro an outrageously powerful device for artificial intelligence.

"The new iPad Pro with M4 is a great example of how building best-in-class custom silicon enables breakthrough products," said Johny Srouji, Apple's senior vice president of Hardware Technologies. "The power-efficient performance of M4, along with its new display engine, makes the thin design and game-changing display of iPad Pro possible, while fundamental improvements to the CPU, GPU, Neural Engine, and memory system make M4 extremely well suited for the latest applications leveraging AI. Altogether, this new chip makes iPad Pro the most powerful device of its kind."

Radxa Launches NAS Friendly ROCK 5 ITX Motherboard with Arm SoC

Radxa is a Chinese manufacturer of various Arm-based devices and something of a minor competitor to the Raspberry Pi Foundation. The company has just launched its latest product, called the ROCK 5 ITX. As the name implies, it's a Mini-ITX form factor motherboard, which in itself is rather unusual for Arm-based hardware. However, Radxa has designed the ROCK 5 ITX to be a NAS motherboard, and this is the first time we've come across such a product, as most Arm-based boards are intended either for hobby projects, software development, or routers. This makes the ROCK 5 ITX quite unique, at least in terms of form factor, as it'll be compatible with standard Mini-ITX chassis.

The SoC on the board is a Rockchip RK3588, which sports four Cortex-A76 cores at up to 2.4 GHz and four Cortex-A55 cores at 1.8 GHz. This is not exactly cutting edge, but it should be plenty fast enough for a SATA drive based NAS. The board offers four SATA 6 Gbps connectors via an ASMedia ASM1164 controller, each with an individual power connector next to it. However, Radxa seems to have chosen fan-header style power connectors, which means it'll be hard to source replacement power cables. The board also has a PCIe 3.0 x2 M.2 slot for an NVMe drive. The OS boots from eMMC, and Radxa supports its own Roobi OS, which is based on Debian Linux.
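For NAS duty, it is worth noting that the single PCIe 3.0 x2 NVMe slot and the four SATA ports offer broadly comparable aggregate throughput; the usable-rate figures in this rough sketch are typical approximations, not Radxa specifications.

```python
# Rough, approximate usable throughput of the ROCK 5 ITX storage interfaces.
# The per-port/per-lane figures are typical real-world approximations.
sata_ports = 4
sata_usable_gbs = 0.55        # ~550 MB/s usable per SATA 6 Gbps port
pcie3_lane_gbs = 0.985        # ~985 MB/s usable per PCIe 3.0 lane

print(f"4x SATA aggregate: ~{sata_ports * sata_usable_gbs:.1f} GB/s")   # ~2.2 GB/s
print(f"PCIe 3.0 x2 NVMe:  ~{2 * pcie3_lane_gbs:.1f} GB/s")             # ~2.0 GB/s
```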

PC Market Returns to Growth in Q1 2024 with AI PCs to Drive Further 2024 Expansion

Global PC shipments grew around 3% YoY in Q1 2024 after eight consecutive quarters of declines due to demand slowdown and inventory correction, according to the latest data from Counterpoint Research. The shipment growth in Q1 2024 came on a relatively low base in Q1 2023. The coming quarters of 2024 will see sequential shipment growth, resulting in 3% YoY growth for the full year, largely driven by AI PC momentum, shipment recovery across different sectors, and a fresh replacement cycle.

Lenovo's PC shipments were up 8% in Q1 2024 off an easy comparison from last year. The brand managed to reclaim its 24% share in the market, compared to 23% in Q1 2023. HP and Dell, with market shares of 21% and 16% respectively, remained flattish, waiting for North America to drive shipment growth in the coming quarters. Apple's shipment performance was also resilient, with the 2% growth mainly supported by M3 base models.

AMD Extends Leadership Adaptive SoC Portfolio with New Versal Series Gen 2 Devices Delivering End-to-End Acceleration for AI-Driven Embedded Systems

AMD today announced the expansion of the AMD Versal adaptive system on chip (SoC) portfolio with the new Versal AI Edge Series Gen 2 and Versal Prime Series Gen 2 adaptive SoCs, which bring preprocessing, AI inference, and postprocessing together in a single device for end-to-end acceleration of AI-driven embedded systems.

These initial devices in the Versal Series Gen 2 portfolio build on the first generation with powerful new AI Engines expected to deliver up to 3x higher TOPS-per-watt than first-generation Versal AI Edge Series devices, while new high-performance integrated Arm CPUs are expected to offer up to 10x more scalar compute than first-gen Versal AI Edge and Prime series devices.

SiFive Unveils the HiFive Premier P550 Out-of-Order RISC-V Development Board

Today at Embedded World, SiFive, Inc., the pioneer and leader of RISC-V computing, unveiled its new state-of-the-art RISC-V development board, the HiFive Premier P550. The board will be available for large-scale deployment through Arrow Electronics so developers around the world can test and develop new RISC-V applications like machine vision, video analysis, AI PC and others, allowing them to use AI and other cutting-edge technologies across many different market segments.

With a quad-core SiFive Performance P550 processor, the HiFive Premier P550 is the highest performance RISC-V development board in the industry, and the latest in the popular HiFive family. Designed to meet the computing needs of modern workloads, the out-of-order P550 core delivers superior compute density and performance in an energy-efficient area footprint. Furthermore, the modular design of the HiFive Premier P550, which includes a replaceable system-on-module (SOM) board, gives developers the flexibility they need to tailor their designs.

Imagination's new Catapult CPU is Driving RISC-V Device Adoption

Imagination Technologies today unveils the next product in the Catapult CPU IP range, the Imagination APXM-6200 CPU: a RISC-V application processor with compelling performance density, seamless security and the artificial intelligence capabilities needed to support the compute and intuitive user experience needs of next-generation consumer and industrial devices.

"The number of RISC-V based devices is skyrocketing with over 16Bn units forecast by 2030, and the consumer market is behind much of this growth" says Rich Wawrzyniak, Principal Analyst at SHD Group. "One fifth of all consumer devices will have a RISC-V based CPU by the end of this decade. Imagination is set to be a force in RISC-V with a strategy that prioritises quality and ease of adoption. Products like APXM-6200 are exactly what will help RISC-V achieve the promised success."

Arm China Develops NPU Accelerator for AI, Targeting Domestic CPUs

Arm China is making strides in the AI accelerator market with its new neural processing unit (NPU) called Zhouyi. The company aims to integrate the NPU into low-cost domestic CPUs, potentially giving it an edge over competitors like AMD and Intel. Initially a part of Arm Holdings, which licensed IP in China, Arm China took on a new strategy of developing its own IP specifically for Chinese customers a few years ago. While the company does not develop high-performance general-purpose cores, its Zhouyi NPU could become a fundamental building block for affordable processors. A significant step forward is the upcoming addition of an open-source driver for Zhouyi to the Linux kernel. This will make the IP easy to program for software developers, increasing its appeal to chip designers.

As an open-source driver integrated into the Linux kernel, it gives developers assurance that the Zhouyi NPU could be the first of many generations from Arm China. While Zhouyi may not directly compete with offerings from AMD or Intel, its potential for widespread adoption in millions of devices could help Arm China acquire local customers with its IP. The project, which began three years ago with a kernel-only driver, has since evolved into a full driver stack. There is even a development kit board called EAIDK310, powered by a Rockchip SoC and the Zhouyi NPU, which is available on AliExpress and Amazon. The integration of AI accelerator technology into the Linux ecosystem is a significant development, though there is still work to be done. Nonetheless, Arm China's Zhouyi NPU and open-source driver are essential to making AI capabilities more accessible and widely available in the domestic Chinese market.

Google Launches Arm-Optimized Chrome for Windows, in Time for Qualcomm Snapdragon X Elite Processors

Google has just released an Arm-optimized version of its popular Chrome browser for Windows PCs. This new version is designed to take full advantage of Arm-based devices' hardware and operating system, promising users a faster and smoother browsing experience. The Arm-optimized Chrome for Windows has been developed in close collaboration with Qualcomm, ensuring that Chrome users get the best possible experience on current Arm-compatible PCs. Hiroshi Lockheimer, Senior Vice President at Google, stated, "We've designed Chrome browser to be fast, secure, and easy to use across desktops and mobile devices, and we're always looking for ways to bring this experience to more people." Early testers of the Arm-optimized Chrome have reported significant performance improvements compared to the x86-emulated version. The new browser is rolling out starting today and will be available on existing Arm devices, including PCs powered by Snapdragon 8cx, 8c, and 7c processors.

Chrome will shortly receive an even bigger performance boost with Qualcomm's upcoming Snapdragon X Elite SoC launch. Cristiano Amon, President and CEO of Qualcomm, expressed his excitement about the collaboration, saying, "As we enter the era of the AI PC, we can't wait to see Chrome shine by taking advantage of the powerful Snapdragon X Elite system." Qualcomm's Snapdragon X Elite devices are expected to hit the market in mid-2024 with a "dramatic performance improvement in the Speedometer 2.0 benchmark" on reference hardware. As Chrome is one of the most essential applications, getting a native build running on Windows-on-Arm is a significant step for the platform, promising more investment from software makers.

MediaTek Licenses NVIDIA GPU IP for AI-Enhanced Vehicle Processors

NVIDIA has been offering its GPU IP for more than a decade, ever since the introduction of the Kepler microarchitecture, but that IP has had relatively low traction in third-party SoCs. That trend seems to be reaching an inflection point, as NVIDIA has given MediaTek a license to use its GPU IP to produce the next generation of processors for the auto industry. The newest MediaTek Dimensity Auto Cockpit family consists of the CX-1, CY-1, CM-1, and CV-1, where the CX-1 targets premium vehicles, the CM-1 targets the mid-range, and the CV-1 targets lower-end vehicles, probably differentiated by their compute capabilities. The Dimensity Auto Cockpit family is brimming with the latest technology, as the processor core of choice is an Armv9-based design paired with "next-generation" NVIDIA GPU IP, possibly referring to Blackwell, capable of ray tracing and DLSS 3, powered by RTX and DLA.

The SoC is supposed to integrate a lot of technology to lower BOM costs of auto manufacturing, and it includes silicon for controlling displays, cameras (advanced HDR ISP), audio streams (multiple audio DSPs), and connectivity (WiFi networking). Interestingly, the SKUs can play movies with AI-enhanced video and support AAA gaming. MediaTek touts the Dimensity Auto Cockpit family with fully local AI processing capabilities, without requiring assistance from outside servers via WiFi, and 3D spatial sensing with driver and occupant monitoring, gaze-aware UI, and natural controls. All of that fits into an SoC fabricated at TSMC's fab on a 3 nm process and runs on the industry-established NVIDIA DRIVE OS.

Alibaba Unveils Plans for Server-Grade RISC-V Processor and RISC-V Laptop

Chinese e-commerce and cloud giant Alibaba announced its plans to launch a server-grade RISC-V processor later this year, and it showcased a RISC-V-powered laptop running an open-source operating system. The announcements were made by Alibaba's research division, the Damo Academy, at the recent Xuantie RISC-V Ecological Conference in Shenzhen. The upcoming server-class processor, called the Xuantie C930, is expected to launch by the end of 2024. While specific details about the chip have not been disclosed, it is anticipated to cater to AI and server workloads. This development is part of Alibaba's ongoing efforts to expand its RISC-V portfolio and reduce reliance on foreign chip technologies amid US export restrictions. To complement the C930, Alibaba is also preparing a Xuantie 907 matrix processing unit for AI, which could be an IP block inside an SoC like the C930 or an SoC of its own.

In addition to the C930, Alibaba showcased the RuyiBOOK, a laptop powered by the company's existing T-Head C910 processor. The C910, previously designed for edge servers, AI, and telecommunications applications, has been adapted for use in laptops. The RuyiBOOK laptop runs on the openEuler operating system, an open-source version of Huawei's EulerOS, which is based on Red Hat Linux. The laptop also features Alibaba's collaboration suite, DingTalk, and the open-source office suite LibreOffice, demonstrating its potential to cater to the needs of Chinese knowledge workers and consumers without relying on foreign software. Zhang Jianfeng, president of the Damo Academy, emphasized the increasing demand for new computing power and the potential for RISC-V to enter a period of "application explosion." Alibaba plans to continue investing in RISC-V research and development and fostering collaboration within the industry to promote innovation and growth in the RISC-V ecosystem, lessening reliance on US-sourced technology.

Sony Semiconductor Solutions Selects Cutting-Edge AMD Adaptive Computing Tech

Yesterday, AMD announced that its cutting-edge adaptive computing technology was selected by Sony Semiconductor Solutions (SSS) for its newest automotive LiDAR reference design. SSS, a global leader in image sensor technology, and AMD joined forces to deliver a powerful and efficient LiDAR solution for use in autonomous vehicles. Using adaptive computing technology from AMD significantly extends the SSS LiDAR system capabilities, offering extraordinary accuracy, fast data processing, and high reliability for next-generation autonomous driving solutions.

In the rapidly evolving landscape of autonomous driving, the demand for precise and reliable sensor technology has never been greater. LiDAR (Light Detection and Ranging) technology plays a pivotal role in enabling depth perception and environmental mapping for various industries. LiDAR delivers image classification, segmentation, and object detection data that is essential for 3D vision perception enhanced by AI, which cannot be provided by cameras alone, especially in low-light or inclement weather. The dedicated LiDAR reference design addresses the complexities of autonomous vehicle development with a standardized platform to enhance safety in navigating diverse driving scenarios.

Samsung Prepares Mach-1 Chip to Rival NVIDIA in AI Inference

During its 55th annual shareholders' meeting, Samsung Electronics announced its entry into the AI processor market with the upcoming launch of its Mach-1 AI accelerator chips in early 2025. The South Korean tech giant revealed its plans to compete with established players like NVIDIA in the rapidly growing AI hardware sector. The Mach-1 generation of chips is an application-specific integrated circuit (ASIC) design equipped with LPDDR memory, envisioned to excel in edge computing applications. While Samsung does not aim to directly rival NVIDIA's ultra-high-end AI solutions like the H100, B100, or B200, the company's strategy focuses on carving out a niche in the market by offering unique features and performance enhancements at the edge, where low power and efficient computing matter the most.

According to SeDaily, the Mach-1 chips boast a groundbreaking feature that significantly reduces memory bandwidth requirements for inference to approximately 0.125x compared to existing designs, an 87.5% reduction. This innovation could give Samsung a competitive edge in terms of efficiency and cost-effectiveness. As the demand for AI-powered devices and services continues to soar, Samsung's foray into the AI chip market is expected to intensify competition and drive innovation in the industry. While NVIDIA currently holds a dominant position, Samsung's cutting-edge technology and access to advanced semiconductor manufacturing nodes could make it a formidable contender. The Mach-1 has been field-verified on an FPGA, while the final design is currently going through physical design for the SoC, which includes placement, routing, and other layout optimizations.
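The bandwidth claim is easy to put in perspective: a 0.125x requirement is the quoted 87.5% reduction, and it is what would make commodity LPDDR viable in place of HBM for inference. The second half of the sketch uses a purely hypothetical baseline figure for illustration.

```python
# The 0.125x requirement equals an 87.5% reduction, as the report states.
reduction_factor = 0.125
print(f"Bandwidth reduction: {(1 - reduction_factor) * 100:.1f}%")   # 87.5%

# Illustration only (hypothetical baseline, not a Samsung figure): an inference
# workload needing 800 GB/s on a conventional design would need ~100 GB/s here,
# which is within reach of a modest LPDDR memory subsystem.
baseline_gbs = 800
print(f"Equivalent requirement: ~{baseline_gbs * reduction_factor:.0f} GB/s")
```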

NVIDIA "Blackwell" GeForce RTX to Feature Same 5nm-based TSMC 4N Foundry Node as GB100 AI GPU

Following Monday's blockbuster announcements of the "Blackwell" architecture and NVIDIA's B100, B200, and GB200 AI GPUs, all eyes are now on its client graphics derivatives, or the GeForce RTX GPUs that implement "Blackwell" as a graphics architecture. Leading the effort will be the new GB202 ASIC, a successor to the AD102 powering the current RTX 4090. This will be NVIDIA's biggest GPU with raster graphics and ray tracing capabilities. The GB202 is rumored to be followed by the GB203 in the premium segment, the GB205 a notch lower, and the GB206 further down the stack. Kopite7kimi, a reliable source with NVIDIA leaks, says that the GB202 silicon will be built on the same TSMC 4N foundry node as the GB100.

TSMC 4N is a derivative of the company's mainline N4P node; the "N" in 4N stands for NVIDIA. It is a nodelet that TSMC designed with optimizations for NVIDIA SoCs, and TSMC still considers 4N a derivative of its 5 nm EUV node. There is very little public information on the power and transistor-density improvements of TSMC 4N over TSMC N5. For reference, the N4P, which TSMC also regards as a 5 nm derivative, offers a 6% transistor-density improvement and a 22% power efficiency improvement. In related news, Kopite7kimi says that with "Blackwell," NVIDIA is focusing on enlarging the L1 caches of the streaming multiprocessors (SM), which suggests a design focus on increasing performance at the SM level.

Sony PlayStation 5 Pro Details Emerge: Faster CPU, More System Bandwidth, and Better Audio

Sony is preparing to launch its next-generation PlayStation 5 Pro console in the Fall of 2024, right around the holidays. We previously covered a few graphics details about the console; however, today we get more details about the CPU and the overall system, thanks to exclusive information from Insider Gaming. Starting off, the sources indicate that PS5 Pro system memory will get a 28% bump in bandwidth: where the standard PS5 console had 448 GB/s, the upgraded PS5 Pro will get 576 GB/s. The memory system is also apparently more efficient, likely thanks to an upgrade over the regular PS5's GDDR6 SDRAM. The next upgrade is the CPU, which gains special operating modes for the main processor. The CPU microarchitecture is likely the same, with clocks pushed to 3.85 GHz, a 10% frequency increase.
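Working back from those numbers is straightforward; the only assumption in the sketch below is that the standard PS5's 3.5 GHz CPU clock is the baseline for the 10% figure.

```python
# Quick check of the reported PS5 Pro uplifts. The 3.5 GHz baseline is the
# standard PS5's CPU clock, assumed here as the reference point.
bw_ps5, bw_pro = 448, 576            # GB/s
print(f"Bandwidth uplift: {(bw_pro / bw_ps5 - 1) * 100:.1f}%")    # ~28.6%, reported as 28%

cpu_ps5, cpu_pro = 3.5, 3.85         # GHz
print(f"CPU clock uplift: {(cpu_pro / cpu_ps5 - 1) * 100:.0f}%")  # 10%
```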

However, this is only achieved in the "High CPU Frequency Mode," which shifts SoC power away from the GPU and downclocks it slightly to allocate more power to the CPU in highly CPU-intensive scenarios. The GPU in question is an RDNA 3 design with up to 45% faster graphics rendering. Ray tracing performance can be up to four times higher than the regular PS5, while the entire GPU delivers 33.5 TeraFLOPS of FP32 single-precision compute. This comes from 30 WGPs running BVH8 shaders versus the 18 WGPs running BVH4 shaders on the regular PS5. A PSSR upscaler is present, and the GPU can output 8K resolution, which will come with future software updates. Last but not least, on the AI front there is also a custom AI accelerator capable of 300 INT8 TOPS and 67 FP16 TeraFLOPS. Audio codecs are getting some love as well, with ACV running up to 35% faster.
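The 33.5 TFLOPS figure also implies a shader clock in the low-2 GHz range if one assumes RDNA 3 conventions (two CUs per WGP, 64 stream processors per CU, dual-issue FP32); none of those layout details are confirmed PS5 Pro specifications, so treat this as a rough estimate only.

```python
# Rough estimate of the GPU clock implied by 33.5 TFLOPS FP32 across 30 WGPs.
# Assumes RDNA 3 conventions: 2 CUs/WGP, 64 SPs/CU, dual-issue FP32, 2 ops per FMA.
wgps = 30
fp32_ops_per_clock = wgps * 2 * 64 * 2 * 2      # 15,360 FP32 ops per clock
tflops = 33.5
clock_ghz = tflops * 1e12 / fp32_ops_per_clock / 1e9
print(f"Implied shader clock: ~{clock_ghz:.2f} GHz")   # ~2.18 GHz
```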

Qualcomm Teases "Snapdragon 8s Gen 3" SoC Launch

Qualcomm's Weibo social media account has teased an upcoming new product launch: "the spring dragon raises its head, and everything is reborn! The new Snapdragon flagship is about to be released. Let's welcome the New Year and the new era. On March 18, please stay tuned for the Snapdragon flagship new product launch conference." News outlets believe that a variant of the current top-of-the-line Snapdragon 8 Gen 3 (SM8650-AB) chipset will be introduced next week. Smartphone tech tipster Digital Chat Station revealed that a mysterious Qualcomm Snapdragon "SM8635" model was in the pipeline. Early February speculation pointed to a possible "Snapdragon 8s Gen 3" moniker—the added "s" implies that this mobile processor could emerge as a cheaper "sub-flagship" model.

Geekbench 6.2.2 results—posted by a trio of Realme "RMX3851" Android smartphones—revealed the speculated "8s Gen 3" specifications, including a 3.01 GHz "Big" core clock, an Adreno 735 integrated GPU, and a 1+3+4 cluster configuration. The pre-release samples could not keep up with finalized Snapdragon 8 Gen 3 hardware in performance gauntlets. A mid-range "Snapdragon 7+ Gen 3" SoC could also make an appearance on March 18, but tipsters believe that the event will be dedicated to a single new product. Digital Chat Station reckons that Qualcomm will market the Snapdragon 8s Gen 3 "as a Little 8G3."

ScaleFlux SFX 5016 Will Set New Benchmarks for Enterprise SSD Efficiency and AI Workload Performance

As the IT sector continues to seek answers for scaling data processing performance while simultaneously improving efficiency - in terms of performance and density per watt, per system, per rack, and per dollar of CapEx and OpEx - ScaleFlux is answering the call with innovative design choices in its SSD controllers. The SFX 5016 promises to set new standards both for performance and for power efficiency.

In addition to carrying forward the transparent compression feature that ScaleFlux first released in 2020 and upgraded in 2022 with the SFX 3016 computational storage drive controller, the new SFX 5016 SoC includes a number of design advances.

Qualcomm Snapdragon X Elite Benchmarked Against Intel Core Ultra 7 155H

Qualcomm Snapdragon X Elite is about to make landfall in the ultraportable notebook segment, powering a new wave of Arm-based Windows 11 devices capable of running even legacy Windows applications. The Snapdragon X Elite SoC in particular has been designed to rival the Apple M3 chip powering the 2024 MacBook Air and some of the "entry-level" variants of the 2023 MacBook Pros. These chips threaten the 15 W U-segment and even the 28 W P-segment of x86-64 processors from Intel and AMD, such as the Core Ultra "Meteor Lake" and the Ryzen 8040 "Hawk Point." Erdi Özüağ, a prominent tech journalist from Türkiye, has access to a Qualcomm reference notebook powered by the Snapdragon X Elite X1E80100 28 W SoC. He compared its performance to an off-the-shelf notebook powered by a 28 W Intel Core Ultra 7 155H "Meteor Lake" processor.

There are three tests that highlight the performance of the key components of the SoCs—CPU, iGPU, and NPU. A Microsoft Visual Studio code compile test sees the Snapdragon X Elite with its 12-core Oryon CPU finish the test in 37 seconds; compared to 54 seconds by the Core Ultra 7 155H with its 6P+8E+2LP CPU. In the 3DMark test, the Adreno 750 iGPU posts identical performance numbers to the Arc Graphics Xe-LPG of the 155H. Where the Snapdragon X Elite dominates the Intel chip is AI inferencing. The UL Procyon test sees the 45 TOPS NPU of the Snapdragon X Elite score 1720 points compared to 476 points by the 10 TOPS AI Boost NPU of the Core Ultra. The Intel machine is using OpenVINO, while the Snapdragon is using Qualcomm SNPE SDK for the test. Don't forget to check out the video review by Erdi Özüağ in the source link below.
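Putting the published numbers side by side, the Procyon score gap (~3.6x) is actually a little smaller than the gap in rated NPU throughput (4.5x); the sketch below is just a ratio of the quoted figures, not an additional benchmark.

```python
# Ratios of the published UL Procyon AI scores and the rated NPU throughput.
snapdragon_x_elite = {"score": 1720, "tops": 45}
core_ultra_7_155h  = {"score": 476,  "tops": 10}

score_ratio = snapdragon_x_elite["score"] / core_ultra_7_155h["score"]
tops_ratio = snapdragon_x_elite["tops"] / core_ultra_7_155h["tops"]
print(f"Score ratio: {score_ratio:.1f}x, rated TOPS ratio: {tops_ratio:.1f}x")  # ~3.6x vs 4.5x

for name, chip in (("Snapdragon X Elite", snapdragon_x_elite), ("Core Ultra 7 155H", core_ultra_7_155h)):
    print(f"{name}: {chip['score'] / chip['tops']:.1f} points per rated TOPS")
```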

Huawei's HiSilicon Taishan V120 Server Core Matches Zen 3 Performance

Huawei's new server CPU based on the HiSilicon Taishan V120 core has shown impressive single-threaded performance that matches AMD's Zen 3 architecture in a leaked Geekbench 6 benchmark. The Taishan V120 is likely being manufactured on SMIC's 7 nm process node. The Geekbench 6 result posted on social media does not identify the exact Huawei server CPU model, but speculation points to it being the upcoming Kunpeng 930 chip. In the benchmark, the Taishan V120 CPU operating at 2.9 GHz scored 1527 in the single-core test. This positions it nearly equal to AMD's EPYC 7413 server CPU based on the Zen 3 architecture, which boosts up to 3.6 GHz and scored 1538 points. It also matches the single-threaded performance of Intel's Coffee Lake-based Xeon E-2136 from 2018, even though that Intel chip can reach 4.5 GHz boost speeds and scored 1553 points.
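Because the three chips run at quite different boost clocks, dividing the quoted Geekbench 6 single-core scores by the quoted clocks makes the per-clock comparison explicit; this uses only the numbers above.

```python
# Geekbench 6 single-core score per GHz, using the scores and boost clocks quoted above.
results = {
    "HiSilicon Taishan V120": (1527, 2.9),
    "AMD EPYC 7413 (Zen 3)":  (1538, 3.6),
    "Intel Xeon E-2136":      (1553, 4.5),
}
for name, (score, clock_ghz) in results.items():
    print(f"{name}: {score / clock_ghz:.0f} points/GHz")
# ~527 vs ~427 vs ~345 points/GHz: similar scores despite a much lower clock
```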

The Taishan V120 core first appeared in Huawei's Kirin 9000 smartphone SoC in 2020. Using the core in server CPUs would allow Huawei to achieve competitive single-threaded performance to rival AMD's last-generation EPYC Milan and Intel's older Skylake server chips. Multi-threaded benchmarks will be required to fully gauge the Kunpeng 930's overall performance when it launches. Huawei continues innovating its Arm-based server CPU designs even while facing restrictions on manufacturing and selling chips internationally due to its inclusion on the US Entity List in 2019. The impressive single-threaded results versus leading x86 competitors demonstrate Huawei's resilience and self-reliance in developing homegrown data center technology through its HiSilicon division. More details on the Kunpeng 930 server chip will likely surface later this year, along with server configurations from Chinese OEMs.

Qualcomm "Snapdragon 8s Gen 3" SoC with Adreno 735 GPU Gets Geekbenched

A mysterious Qualcomm Snapdragon "SM8635" model emerged earlier this month—courtesy of the ever-reliable smartphone tech tipster Digital Chat Station. They claimed that the unnamed mobile chipset had posted an AnTuTu score of roughly 1.7 million, with specifications including one Cortex-X4 core clocked at 2.9 GHz and an integrated Adreno 735 GPU. TSMC's 4 nm process node was also mentioned—not a particularly big revelation, since the latest Snapdragon flagship is a 4 nm part. Early guesswork pointed to possible Snapdragon 8s Gen 2 or Snapdragon 8 Gen 3 Lite guises, but a Geekbench Browser leak indicates that the SM8635 is destined to become the "Snapdragon 8s Gen 3," in Digital Chat Station's opinion.

A Realme "RMX3851" android device was tested in Geekbench 6.2.2—stated specifications include a 3.01 GHz "Big" Core clock, Adreno 735 GPU, and a 1+3+4 cluster configuration. Many believe that the SM8635 is positioned as a cut-down alternative to Snapdragon 8 Gen 3 (SM8650-AB), given that Realme specializes in producing value-oriented "near flagship" specced smartphones. Wccftech has spent hands-on time with various Qualcomm Snapdragon 8 Gen 3-powered devices: "You can see in (Realme's Geekbench entry) that the alleged Snapdragon 8s Gen 3 does not perform on the same level as its elder brother, which scores higher in both single and multi-core. For the sake of reference, I have seen the elder sibling going as high as 2,329 in single-core tests and 7,501 in multi-core tests. So, this chipset is performing at half the speed, but of course, this seems like a device that is not completely ready, so the final scores might improve." Further (insider) leaks or an official Qualcomm announcement will confirm whether the posited "Snapdragon 8s Gen 3" moniker is a good guess, although another leaked chip suggests another path. Roland Quandt reckons that a similarly configured "SM7675" SoC will be joining the Snapdragon 7 Gen family.