News Posts matching #LPDDR5


Qualcomm Snapdragon X Elite Mini-PC Dev Kit Arrives at $899

Qualcomm has started accepting preorders for its Snapdragon Dev Kit for Windows, based on the Snapdragon X Elite processor. Initially announced in May, the device is now available for preorder through Arrow at a competitive price point of $899. Despite its relatively high cost compared to typical mini PCs, it undercuts most recent laptops equipped with Snapdragon X processors, making it an attractive option for both developers and power users alike. Measuring a mere 199 x 175 x 35 mm, it comes equipped with 32 GB of LPDDR5x RAM, a 512 GB NVMe SSD, and support for the latest Wi-Fi 7 and Bluetooth 5 technologies. The connectivity options are equally robust, featuring three USB4 Type-C ports, two USB 3.2 Type-A ports, an HDMI output, and an Ethernet port.

At the heart of this mini PC lies the Snapdragon X Elite (X1E-00-1DE) processor. This chip houses 12 Oryon CPU cores capable of reaching speeds of up to 3.8 GHz, with a dual-core boost of up to 4.3 GHz. The processor also integrates Adreno graphics, delivering up to 4.6 TFLOPS of performance, and a Hexagon NPU capable of up to 45 TOPS for AI tasks. While similar to its laptop counterpart, the X1E-84-100, this version is optimized for desktop use. It can consume up to 80 watts of power, enabling superior sustained performance without the battery-life and heat-dissipation constraints typically associated with mobile devices. This dev kit is aimed primarily at optimizing x86-64 software to run on the Arm platform, so the relaxed power limit is beneficial when porting and testing code for Windows on Arm. The Snapdragon Dev Kit for Windows ships with a 180 W power adapter and comes pre-installed with Windows 11, making it ready for immediate use upon arrival.

AMD Strix Point Silicon Pictured and Annotated

The first die shot of AMD's new 4 nm "Strix Point" mobile processor surfaced, thanks to an enthusiast on Chinese social media. "Strix Point" is a significantly larger die than "Phoenix." It measures 12.06 mm x 18.71 mm (L x W), compared to the 9.06 mm x 15.01 mm of "Phoenix." Much of this die size increase comes from the larger CPU, iGPU, and NPU. The process has been improved from TSMC N4 on "Phoenix" and its derivative "Hawk Point," to the newer TSMC N4P node.

Nemez (GPUsAreMagic) annotated the die shot in great detail. The CPU now has 12 cores spread across two CCXs, one of which contains four "Zen 5" cores sharing a 16 MB L3 cache, and the other eight "Zen 5c" cores sharing an 8 MB L3 cache. The two CCXs connect to the rest of the chip over Infinity Fabric. The rather large iGPU takes up the central region of the die. It is based on the RDNA 3.5 graphics architecture, and features 8 workgroup processors (WGPs), or 16 compute units (CUs), amounting to 1,024 stream processors. Other key components include four render backends, for 16 ROPs, and control logic. The GPU has its own 2 MB of L2 cache that cushions transfers to the Infinity Fabric.

AMD Strix Point SoC Reintroduces Dual-CCX CPU, Other Interesting Silicon Details Revealed

Since its reveal last week, we got a slightly more technical deep-dive from AMD on its two upcoming processors—the "Strix Point" silicon powering its Ryzen AI 300 series mobile processors; and the "Granite Ridge" chiplet MCM powering its Ryzen 9000 desktop processors. We present a closer look into the "Strix Point" SoC in this article. It turns out that "Strix Point" takes a significantly different approach to heterogeneous multicore than "Phoenix 2." AMD gave us a close look at how this works. AMD built the "Strix Point" monolithic silicon on the TSMC N4P foundry node, with a die-area of around 232 mm².

The "Strix Point" silicon sees the company's Infinity Fabric interconnect as its omnipresent ether. This is a point-to-point interconnect, unlike the ringbus on some Intel processors. The main compute machinery on the "Strix Point" SoC are its two CPU compute complexes (CCX), each with a 32b (read)/16b (write) per cycle data-path to the fabric. The concept of CCX makes a comeback with "Strix Point" after nearly two generations of "Zen." The first CCX contains the chip's four full-sized "Zen 5" CPU cores, which share a 16 MB L3 cache among themselves. The second CCX contains the chip's eight "Zen 5c" cores that share a smaller 8 MB L3 cache. Each of the 12 cores has a 1 MB dedicated L2 cache.

AMD Details the Radeon 890M RDNA 3.5 iGPU of "Strix Point" a bit More

AMD presented a closer look at the Radeon 890M iGPU powering the Ryzen AI 300 series "Strix Point" mobile processors. The iGPU introduces the new RDNA 3.5 graphics architecture, which builds several architecture-level improvements around the existing RDNA 3 SIMD to yield performance/Watt gains, headroom that AMD traded in for more SIMD muscle and proportionately higher performance. The iGPU features one Shader Engine with 8 workgroup processors (WGPs), which amount to 16 CUs (compute units), for a total of 1,024 stream processors, 32 AI accelerators, and 16 Ray accelerators. The iGPU also has four RB+ render backends, for 16 ROPs. It is specced with a maximum engine clock of 2.90 GHz, which yields over 11 TFLOP/s of FP32 throughput, around 30% higher than the iGPU of "Phoenix" (12 CU, RDNA 3) at comparable power.
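A sketch of the arithmetic behind the headline FP32 figure. It assumes RDNA 3-style dual-issue FP32 SIMDs (two ops per clock per lane) and counts an FMA as two FLOPs; both are assumptions about how the marketing number is derived, not details given above:

```python
# Back-of-envelope FP32 throughput for the Radeon 890M.
def fp32_tflops(stream_processors, boost_ghz, flops_per_op=2, dual_issue=2):
    # flops_per_op=2 counts FMA as multiply + add; dual_issue=2 assumes
    # RDNA 3-style dual-issue FP32 (an assumption, not stated in the article).
    return stream_processors * flops_per_op * dual_issue * boost_ghz / 1000

wgps = 8
cus = wgps * 2    # 2 compute units per WGP
sps = cus * 64    # 64 stream processors per CU
print(sps)                                # 1024
print(round(fp32_tflops(sps, 2.90), 2))   # 11.88 -> "over 11 TFLOP/s"
```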

AMD goes into the finer points of how it achieved the performance/Watt gains. The company worked on the texture subsystem to double the texture sampler rate, and introduced point-sampling acceleration. The shader sub-system features interpolation and comparison rate doubling. The raster sub-system introduces sub-batching of batch raster operations, with a programmable bin order, making the hardware more efficient. Lastly, AMD reworked the iGPU's memory management to be more aware of LPDDR5 (which has a different physical layer, and way of writing/fetching, than GDDR6). The company also refined its memory compression technologies, to lift performance and reduce the iGPU's memory footprint.

Memory Industry Revenue Expected to Reach Record High in 2025 Due to Increasing Average Prices and the Rise of HBM and QLC

TrendForce's latest report on the memory industry reveals that DRAM and NAND Flash revenues are expected to see significant increases of 75% and 77%, respectively, in 2024, driven by increased bit demand, an improved supply-demand structure, and the rise of high-value products like HBM.

Furthermore, industry revenues are projected to continue growing in 2025, with DRAM expected to increase by 51% and NAND Flash by 29%, reaching record highs. This growth is anticipated to revive capital expenditures and boost demand for upstream raw materials, although it will also increase cost pressure for memory buyers.

CTL Introduces the Next-Generation 14" Chromebook: the CTL Chromebook PX141E Series

CTL, a global cloud-computing solution leader for education and enterprise, announced today the introduction of the new CTL Chromebook PX141E Series. This series equips educators, staff, and students with the powerful performance and connectivity they need for all-day productivity. Easy ChromeOS device manageability combined with CTL's lifecycle services reduces the burden on IT departments.

"We are excited to refresh our product line with the next-generation 14" Chromebook. As the new powerhouse performer in our Chromebook lineup, we know our education and enterprise customers will appreciate the upgraded power, Wi-Fi, memory, and storage to support all day teaching and learning, while new conveniences such as the webcam privacy shutter and additional USB-A port enhance useability," said Erik Stromquist, CEO of CTL.

Intel Core Ultra 300 Series "Panther Lake" Leaks: 16 CPU Cores, 12 Xe3 GPU Cores, and Five-Tile Package

Intel is preparing to launch its next generation of mobile CPUs, with the Core Ultra 200 series "Lunar Lake" leading the charge. However, as these processors are about to hit the market, leakers have revealed Intel's plans for the next-generation Core Ultra 300 series "Panther Lake". According to rumors, Panther Lake will double the core count of Lunar Lake, which capped out at eight cores. Several configurations of Panther Lake are in the making, based on different combinations of performance (P) "Cougar Cove," efficiency (E) "Skymont," and low-power (LP) cores. First is PTL-U, with 4P+0E+4LP cores and four Xe3 "Celestial" GPU cores, delivered within a 15 W envelope. Next is the PTL-H variant, with 4P+8E+4LP cores for a total of 16 cores and four Xe3 GPU cores, inside a 25 W package. Last but not least, Intel will also make PTL-P SKUs with 4P+8E+4LP cores and 12 Xe3 cores, to create a potentially decent gaming chip at 25 W of power.
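The rumored configurations can be tabulated as follows; the tuples simply restate the leaked numbers, which remain unconfirmed:

```python
# Leaked Panther Lake SKU configurations: (P, E, LP cores, Xe3 cores, watts).
# All figures are rumors restated from the text, not confirmed specs.
configs = {
    "PTL-U": (4, 0, 4, 4, 15),
    "PTL-H": (4, 8, 4, 4, 25),
    "PTL-P": (4, 8, 4, 12, 25),
}
for name, (p, e, lp, xe3, tdp) in configs.items():
    print(f"{name}: {p + e + lp} cores, {xe3} Xe3 GPU cores, {tdp} W")
# PTL-H and PTL-P top out at 4 + 8 + 4 = 16 cores, matching the headline.
```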

Intel's Panther Lake CPU architecture uses an innovative design approach, utilizing a multi-tile configuration. The processor incorporates five distinct tiles, with three playing active roles in its functionality. The central compute operations are handled by one "Die 4" tile with the CPU and NPU, while "Die 1" is dedicated to platform control (PCD). Graphics processing is managed by "Die 5", leveraging Intel's Xe3 technology. Interestingly, two of the five tiles serve a primarily structural purpose. These passive elements are strategically placed to achieve a balanced, rectangular form factor for the chip. This design philosophy echoes a similar strategy employed in Intel's Lunar Lake processors. Panther Lake is poised to offer greater versatility compared to its Lunar Lake counterpart. It's expected to cater to a wider range of market segments and use cases. One notable advancement is the potential for increased memory capacity compared to Lunar Lake, which capped out at 32 GB of LPDDR5X memory running at 8533 MT/s. We can expect to hear more, potentially at Intel's upcoming Innovation event in September, while general availability of Panther Lake is expected in late 2025 or early 2026.

ASUS Previews Intel's "Lunar Lake" Platform with ExpertBook P5 14-Inch Laptop

ASUS has revealed its upcoming ExpertBook P5 laptop, set to debut alongside Intel's highly anticipated "Lunar Lake" processors. This ultrabook aims to boost the AI-capable laptop market, featuring an unspecified Intel Lunar Lake "Core Ultra 200V" CPU at its core. The ExpertBook P5 boasts impressive AI processing capabilities, with over 45 TOPS from its Neural Processing Unit and a combined 100+ TOPS when factoring in the CPU and GPU. The NPU provides efficient processing, with additional power coming from Lunar Lake's Xe2 "Battlemage" GPU with XMX cores. This is more than enough for Microsoft's Copilot+ certification, letting the laptop debut as an "AI PC." The ExpertBook P5 offers up to 32 GB of LPDDR5X memory running at 8333 MT/s, up to 3 TB of PCIe 4.0 SSD storage across two drives, and Wi-Fi 7 support.

The 14-inch anti-glare display features a 2.5K resolution and a smooth 144 Hz refresh rate, ensuring a premium visual experience. Despite its powerful internals, the ExpertBook P5 maintains a slim profile, weighing just 1.3 kg. The laptop is housed in an all-metal military-grade aluminium body with a 180-degree lay-flat hinge, making it both portable and versatile. ASUS has also prioritized cooling efficiency with innovative technology that optimizes thermal management, whether the laptop is open or closed. Security hasn't been overlooked either, with the ExpertBook P5 featuring a robust security ecosystem, including the Windows 11 secured-core PC framework, NIST-155-ready Commercial-Grade BIOS protection, and biometric login options. While an exact release date hasn't been confirmed, ASUS is preparing the ExpertBook P5 and other Lunar Lake-powered laptops to hit the market in the second half of 2024.

DFI Launches COM Express Mini Type 10 Module with Intel Atom x7000RE

DFI, the world's leading brand in embedded motherboards and industrial computers, introduces its latest innovation: the ASL9A2 System-on-Module (SoM). With an Intel Atom processor, the ASL9A2 achieves speeds up to 3.6 GHz and is based on Intel's Gracemont architecture. Designed for high-performance, low-power, and ruggedized edge applications, the ASL9A2 is perfect for continuous 24/7 operation, supporting a wide range of Internet of Things (IoT) solutions at the edge.

As reported by The Business Research Company, the SoM market has experienced robust growth, increasing from $2.22 billion in 2023 to $2.43 billion in 2024, a CAGR of 9.6%. From DFI's business viewpoint, a variety of SoM projects are yielding positive results. Furthermore, DFI enhances its Intel x86 COM Express modules with value-added features that improve the efficiency of customers' applications, earning praise from customers.

Intel Core Ultra 200V Lunar Lake Family Leaks: Nine Models with One Core 9 Ultra SKU

During Computex 2024, Intel announced the next-generation compute platform for the notebook segment in the form of the Core Ultra 200V series, codenamed Lunar Lake. Set for release in September 2024, these processors are generating excitement among tech enthusiasts and industry professionals alike. According to the latest leak by VideoCardz, Intel plans to unveil nine variants of Lunar Lake, including Core Ultra 7 and Core Ultra 5 models, with a single high-end Core Ultra 9 variant. While exact specifications remain under wraps, Intel's focus on artificial intelligence capabilities is clear. The company aims to secure a spot in Microsoft's Copilot+ lineup by integrating its fourth-generation Neural Processing Unit (NPU), boasting up to 48 TOPS of performance. All Lunar Lake variants are expected to feature a hybrid architecture with four Lion Cove performance cores and four Skymont efficiency cores.

This design targets low-power mobile devices, striking a balance between performance and energy efficiency. For graphics, Intel is incorporating its next-generation Arc technology, dubbed Battlemage GPU, which utilizes the Xe2-LPG architecture. The leaked information suggests that Lunar Lake processors will come with either 16 GB or 32 GB of non-upgradable LPDDR5-8533 memory. Graphics configurations are expected to include seven or eight Xe2 GPU cores, depending on the model. At the entry level, the Core Ultra 5 226V is rumored to offer a 17 W base power and 30 W maximum turbo power, with performance cores clocking up to 4.5 GHz. The top-tier Core Ultra 9 288V is expected to push the envelope with a 30 W base power, performance cores boosting to 5.1 GHz, and an NPU capable of 48 TOPS. You can check out the rest of the SKUs in the table below.
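As a back-of-envelope check on the memory spec above, peak LPDDR5-8533 bandwidth can be estimated; the 128-bit total bus width used here is an assumption for illustration, not part of the leak:

```python
# Peak theoretical memory bandwidth for LPDDR5-8533.
transfer_rate_mts = 8533   # mega-transfers per second, from the leak
bus_width_bits = 128       # assumed total bus width (not stated in the leak)

peak_gbs = transfer_rate_mts * bus_width_bits / 8 / 1000  # GB/s
print(round(peak_gbs, 1))  # 136.5
```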

Essencore KLEVV at Computex 2024: Slick Understated Styling

KLEVV by Essencore had a formidable lineup of high-end gaming PC memory and SSDs at Computex 2024. We were greeted at the booth with an Essencore-branded LPCAMM2 module with 32 GB density, and LPDDR5-8533 speeds on tap. The Genuine G560 (it's named Genuine) is a modern M.2 NVMe Gen 5 SSD with a fanless heatsink. It comes in capacities of 1 TB, 2 TB, and 4 TB; with sequential read speeds ranging from 13 GB/s to 14 GB/s and write speeds from 9.5 GB/s to 12 GB/s; depending on the capacity, endurance ranges from 700 TBW to 3000 TBW. The CRAS C930 is a premium M.2 Gen 4 SSD, with 1 TB and 2 TB models available, sequential read speeds of up to 7.4 GB/s, and sequential write speeds between 6.4 GB/s and 6.8 GB/s. Endurance ranges between 750 TBW for the 1 TB model, and 1500 TBW for the 2 TB.
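TBW endurance ratings translate into the more comparable drive-writes-per-day (DWPD) figure like this; the five-year warranty window is an assumption for illustration, as KLEVV's actual warranty term isn't stated above:

```python
# Convert a TBW endurance rating to drive writes per day (DWPD).
def dwpd(tbw, capacity_tb, warranty_years=5):
    # DWPD = total terabytes written / (capacity * days in warranty window).
    # warranty_years=5 is an assumed window, not a stated KLEVV spec.
    return tbw / (capacity_tb * warranty_years * 365)

# Genuine G560 4 TB model at 3000 TBW:
print(round(dwpd(3000, 4), 2))  # 0.41 full drive writes per day
```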

At the value end of KLEVV's SSD lineup is the CRAS C925, which offers mostly similar performance numbers to the C930, but with slightly different endurance ratings. Capacities range from 500 GB to 2 TB, with the same 7.4 GB/s maximum read speed, but slightly lower maximum write speeds of 6.2 GB/s for the 500 GB model, 6.3 GB/s for the 1 TB, and 6.5 GB/s for the 2 TB model; endurance is rated at 600 TBW, 1200 TBW, and 2400 TBW, respectively.

MSI Demonstrates Advanced Applications of AIoT Simulated Smart City with Five Exhibition Topics

MSI, a world leader in AI PC and AIoT solutions, is going to participate in COMPUTEX 2024 from 6/4 to 6/7. MSI's AIoT team has focused on product development and hardware-software integration for AI applications in recent years, achieving strong results across various fields. MSI will create an exclusive Smart City exhibition area to introduce AIoT application scenarios across five topics: AI & Datacenter, Automation, Industrial Solutions, Commercial Solutions, and Automotive Solutions.

The most iconic products this year are diverse GPU platforms for AI markets and a new CXL (Compute Express Link) memory expansion server, developed in cooperation with key players in the CXL technology field, including AMD, Samsung, and Micron. In addition, the latest Autonomous Mobile Robot (AMR), powered by NVIDIA Jetson AGX Orin, is one of the major highlights. For new energy vehicles, MSI will disclose for the first time its complete AC/DC chargers, coupled with the MSI E-Connect dashboard (EMS) and AI-powered car-recognition applications, to show its one-stop HW/SW integration service.

ASUS Announces the Chromebook CR Series

ASUS today announced the ASUS Chromebook CR series of laptops, tailored to meet the needs of K-12 students. The ASUS Chromebook CR Series stands out as the ideal companion for students, whether engaged in in-person classroom learning or remote education. The rugged and modular design, featuring replaceable internal parts, guarantees both durability and longevity. With 11.6-inch or 12.2-inch Corning Gorilla Glass touchscreens and a 180° lay-flat or 360°-flippable hinge, the laptops offer flexibility for enriched educational experiences. This design fosters the adventurous mindset of modern students, ensuring an enjoyable and secure learning journey, whether they're engaging in online courses or in-class sessions—ready for every learning journey.

A trusted study partner
For K-12 students, an everyday-use laptop should be invincible. With lively and active users, scratches and knocks are an almost-inevitable part of the daily routine, so the ASUS Chromebook CR series features an all-round rubber bumper for extra peace of mind. The laptops also feature a rugged design that's tested to meet or exceed the MIL-STD-810H US military-grade standard, and use tough Corning Gorilla Glass.

LPDDR6 LPCAMM2 Pictured and Detailed Courtesy of JEDEC

Yesterday we reported on DDR6 memory hitting new heights of performance, and it looks like LPDDR6 will follow suit, at least based on details in a JEDEC presentation. Like LPDDR5, LPDDR6 will be available as solder-down memory, but it will also come in a new LPCAMM2 module. The bus speed of LPDDR5 on LPCAMM2 modules is expected to peak at 9.2 GT/s based on JEDEC specifications, but LPDDR6 will extend this to 14.4 GT/s, an increase of more than 50 percent. However, the fastest (and only) LPCAMM2 modules on the retail market today, which use LPDDR5X, come in at 7.5 GT/s, which suggests that launch speeds of LPDDR6 will end up quite far from the peak.

There will be some other interesting changes to LPDDR6 CAMM2 modules, as the module width moves from 128 bits to 192 bits, and each channel grows from 32 bits to 48 bits. Part of the reason is that LPDDR6 is moving to a 24-bit channel width, consisting of two 12-bit sub-channels, as mentioned in yesterday's news post. This might seem odd at first, but the reason is fairly simple: LPDDR6 will have native ECC (Error Correction Code) or EDC (Error Detection Code) support, though it's currently not entirely clear how this will be implemented at the system level. JEDEC is also looking at developing a screwless retention solution for CAMM2 and LPCAMM2 memory modules, but at the moment there's no clear solution in sight. We might also get to see LPDDR6 via LPCAMM2 modules on the desktop, although the presentation only mentions CAMM2 for the desktop, something we've already seen MSI working on.
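The module-level arithmetic described above can be sketched as follows; the mapping of a 48-bit module channel onto two 24-bit LPDDR6 channels is our reading of the JEDEC slide, not a confirmed spec detail:

```python
# LPDDR6 LPCAMM2 width and peak bandwidth arithmetic.
module_bits = 192
channel_bits = 48
channels = module_bits // channel_bits            # 4 channels per module
sub_channels = channels * (channel_bits // 12)    # 16 x 12-bit sub-channels

peak_gbs = module_bits / 8 * 14.4                 # at LPDDR6's 14.4 GT/s peak
old_gbs = 128 / 8 * 9.2                           # LPDDR5 LPCAMM2 at 9.2 GT/s
print(channels, sub_channels)  # 4 16
print(peak_gbs, old_gbs)       # roughly 345.6 vs 147.2 GB/s per module
```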

Mnemonic and Foresee Showcase Several New Enterprise SSD Models

During the COMPUTEX 2024 exhibition from June 4th to 7th, Mnemonic Electronic Co., Ltd. (hereinafter referred to as Mnemonic), Longsys's Taiwan subsidiary, will showcase a series of high-capacity SSD products under the theme "Embracing the Era of High-capacity SSDs," providing solutions for global users of high-capacity SSD products.

The lineup of high-capacity products presented by Mnemonic includes the ORCA 4836 series enterprise NVMe SSDs and the UNCIA 3836 series enterprise SATA SSDs. These products are equipped with the latest enterprise-grade 128-layer TLC NAND flash memory, offering high performance, low latency, adjustable power consumption, and high reliability storage solutions for enterprise-grade users such as servers, cloud computing, and edge computing, with a maximum capacity of up to 7.68 TB.

HBM3e Production Surge Expected to Make Up 35% of Advanced Process Wafer Input by End of 2024

TrendForce reports that the three largest DRAM suppliers are increasing wafer input for advanced processes. Following a rise in memory contract prices, companies have boosted their capital investments, with capacity expansion focusing on the second half of this year. It is expected that wafer input for 1alpha nm and above processes will account for approximately 40% of total DRAM wafer input by the end of the year.

HBM production will be prioritized due to its profitability and increasing demand. However, limited yields of around 50-60% and a wafer area 60% larger than DRAM products mean a higher proportion of wafer input is required. Based on the TSV capacity of each company, HBM is expected to account for 35% of advanced process wafer input by the end of this year, with the remaining wafer capacity used for LPDDR5(X) and DDR5 products.
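A rough illustration of why HBM consumes disproportionate wafer input: wafers needed per good bit scale with die area and inversely with yield. The conventional-DRAM yield used below is an assumed baseline for comparison, not a TrendForce figure:

```python
# Relative wafer input per usable bit for HBM vs. conventional DRAM.
hbm_area_ratio = 1.6   # HBM die ~60% larger, per the report
hbm_yield = 0.55       # midpoint of the 50-60% yield range above
dram_yield = 0.90      # assumed mature-node DRAM yield (illustrative only)

relative_wafer_input = hbm_area_ratio * (dram_yield / hbm_yield)
print(round(relative_wafer_input, 2))  # 2.62 -> ~2.6x the wafers per bit
```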

Intel Prepares Core Ultra 5-238V Lunar Lake-MX CPU with 32 GB LPDDR5X Memory

Intel has prepared the Core Ultra 5-238V, a Lunar Lake-MX CPU that integrates 32 GB of LPDDR5X memory into the CPU package. This design represents a significant departure from the traditional approach of using separate memory modules, promising enhanced performance and efficiency, similar to what Apple is doing with its M series of processors. The Core Ultra 5-238V is the first of its kind from Intel to reach mass consumers; a previous attempt, Lakefield, didn't take off despite its advanced 3D-stacked Foveros packaging. With 32 GB of high-bandwidth, low-power LPDDR5X memory directly integrated into the CPU package, the Core Ultra 5-238V eliminates the need for separate memory modules, reducing latency and improving overall system responsiveness. This integration results in faster data transfers and lower power consumption, with the LPDDR5X memory running at 8533 MT/s.

Applications that demand intensive memory usage, such as video editing, 3D rendering, and high-end gaming, will be the first to experience performance gains. Users can expect smoother multitasking, quicker load times, and more efficient handling of memory-intensive tasks. The Core Ultra 5-238V is equipped with four big Lion Cove and four little Skymont cores, in combination with seven Xe2-LPG cores based on Battlemage GPU microarchitecture. The bigger siblings to Core Ultra 5, the Core Ultra 7 series, will feature eight Xe2-LPG cores instead of seven, with the same CPU core count, while all of them will run the fourth generation NPU.

Micron Delivers Crucial LPCAMM2 with LPDDR5X Memory for the New AI-Ready Lenovo ThinkPad P1 Gen 7 Workstation

Micron Technology, Inc., today announced the availability of Crucial LPCAMM2, the disruptive next-generation laptop memory form factor that features LPDDR5X mobile memory to level up laptop performance for professionals and creators. Consuming up to 58% less active power and with a 64% space savings compared to DDR5 SODIMMs, LPCAMM2 delivers higher bandwidth and dual-channel support with a single module. LPCAMM2 is an ideal high-performance memory solution for handling AI PC and complex workloads and is compatible with the powerful and versatile Lenovo ThinkPad P1 Gen 7 mobile workstations.

"LPCAMM2 is a game-changer for mobile workstation users who want to enjoy the benefits of the latest mobile high performance memory technology without sacrificing superior performance, upgradeability, power efficiency or space," said Jonathan Weech, senior director of product marketing for Micron's Commercial Products Group. "With LPCAMM2, we are delivering a future-proof memory solution, enabling faster speeds and longer battery life to support demanding creative and AI workloads."

Radxa Launches NAS Friendly ROCK 5 ITX Motherboard with Arm SoC

Radxa is a Chinese manufacturer of various Arm-based devices and something of a minor competitor to the Raspberry Pi Foundation. The company has just launched its latest product, called the ROCK 5 ITX. As the name implies, it's a Mini-ITX form factor motherboard, which in itself is rather unusual for Arm-based hardware. However, Radxa has designed the ROCK 5 ITX to be a NAS motherboard, and this is the first time we've come across such a product, as most Arm-based boards are intended for hobby projects, software development, or routers. This makes the ROCK 5 ITX quite unique, at least in terms of form factor, as it'll be compatible with standard Mini-ITX chassis.

The SoC on the board is a Rockchip RK3588 which sports four Cortex-A76 cores at up to 2.4 GHz and four Cortex-A55 cores at 1.8 GHz. This is not exactly cutting edge, but should be plenty fast enough for a SATA drive based NAS. The board offers four SATA 6 Gbps connectors via an ASMedia ASM1164 controller, each with an individual power connector next to it. However, Radxa seems to have chosen to use fan-header type power connectors, which means it'll be hard to get replacement power cables. The board also has a PCIe 3.0 x2 M.2 slot for an NVMe drive. The OS boots from eMMC and Radxa supports its own Roobi OS which is Debian Linux based.
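For a sense of how much bandwidth four SATA ports actually need, recall that SATA 6 Gbps uses 8b/10b line coding, so usable payload bandwidth per port is about 600 MB/s (ignoring protocol overhead and any uplink bottleneck at the ASM1164 controller):

```python
# Usable bandwidth of a SATA 6 Gbps port after 8b/10b line-code overhead.
line_rate_gbps = 6.0
payload_mbs = line_rate_gbps * 1000 * 8 / 10 / 8  # 8 data bits per 10 line bits
aggregate = 4 * payload_mbs                        # all four ports saturated
print(payload_mbs, aggregate)  # 600.0 2400.0
```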

SK hynix CEO Says HBM from 2025 Production Almost Sold Out

SK hynix held a press conference unveiling its vision and strategy for the AI era today at its headquarters in Icheon, Gyeonggi Province, to share the details of its investment plans for the M15X fab in Cheongju and the Yongin Semiconductor Cluster in Korea and the advanced packaging facilities in Indiana, U.S.

The event, hosted by the Chief Executive Officer Kwak Noh-Jung, three years before the May 2027 completion of the first fab in the Yongin Cluster, was attended by key executives including the Head of AI Infra Justin (Ju-Seon) Kim, Head of DRAM Development Kim Jonghwan, Head of the N-S Committee Ahn Hyun, Head of Manufacturing Technology Kim Yeongsik, Head of Package & Test Choi Woojin, Head of Corporate Strategy & Planning Ryu Byung Hoon, and the Chief Financial Officer Kim Woo Hyun.

SK hynix Strengthens AI Memory Leadership & Partnership With Host at the TSMC 2024 Tech Symposium

SK hynix showcased its next-generation technologies and strengthened key partnerships at the TSMC 2024 Technology Symposium held in Santa Clara, California on April 24. At the event, the company displayed its industry-leading HBM AI memory solutions and highlighted its collaboration with TSMC involving the host's CoWoS advanced packaging technology.

TSMC, a global semiconductor foundry, invites its major partners to this annual conference in the first half of each year so they can share their new products and technologies. Attending the event under the slogan "Memory, the Power of AI," SK hynix received significant attention for presenting the industry's most powerful AI memory solution, HBM3E. The product has recently demonstrated industry-leading performance, achieving input/output (I/O) transfer speeds of up to 10 gigabits per second (Gbps) in an AI system during a performance validation evaluation.

AMD "Strix Point" Mobile Processor Confirmed 12-core/24-thread, But Misses Out on PCIe Gen 5

AMD's next-generation Ryzen 9000 "Strix Point" mobile processor, which succeeds the current Ryzen 8040 "Hawk Point" and Ryzen 7040 "Phoenix," is confirmed to feature a CPU core-configuration of 12-core/24-thread, according to a specs-leak by HKEPC citing sources among notebook OEMs. It appears that Computex 2024 will be big for AMD, with the company preparing next-gen processor announcements across the desktop and notebook lines. Both the "Strix Point" mobile processor and "Granite Ridge" desktop processor debut the company's next "Zen 5" microarchitecture.

Perhaps the biggest takeaway from "Zen 5" is that AMD has increased the number of CPU cores per CCX from 8 in "Zen 3" and "Zen 4," to 12 in "Zen 5." While this doesn't affect the core-counts of its CCD chiplets (which are still expected to be 8-core), the "Strix Point" processor appears to use one giant CCX with 12 cores. Each of the "Zen 5" cores has a 1 MB dedicated L2 cache, while the 12 cores share a 24 MB L3 cache. The 12-core/24-thread CPU, besides the generational IPC gains introduced by "Zen 5," marks a 50% increase in CPU muscle over "Hawk Point." It's not just the CPU complex, even the iGPU sees a hardware update.

Lenovo Unveils Its New AI-Ready ThinkPad P1 Gen 7 Mobile Workstation

Today, Lenovo launched its latest mobile workstation offerings meticulously crafted to deliver the exceptional power and performance essential for handling complex workloads. Lenovo's ThinkPad P1 Gen 7, P16v i Gen 2, P16s i Gen 3, and P14s i Gen 5, with their cutting-edge AI technologies, are set to transform the way professionals engage with AI workflows. By collaborating with industry partners, Intel, NVIDIA, and Micron, Lenovo has introduced powerful and performance-packed AI PCs that meet the demands of modern-day AI-intensive tasks. The inclusion of the Intel Core Ultra processors with their integrated neural processing unit (NPU) and NVIDIA RTX Ada Generation GPUs signifies a major advancement in AI technology, boosting overall performance and productivity capabilities.

The latest ThinkPad P series mobile workstations powered by Intel Core Ultra processors and NVIDIA RTX Ada Generation GPUs deliver flexible, high-performance, and energy-efficient AI-ready PCs. The integrated NPU is dedicated to handling light, continuous AI tasks, while the NVIDIA GPU runs more demanding day-to-day AI processing. This combination enables smooth and reliable functioning of AI technologies, serving professionals engaged in diverse tasks ranging from 3D modeling and scene development to AI inferencing and training.

Acer Expands Chromebook Plus Laptop Lineup with New 14-Inch Model Powered by Intel Core Processors

Acer today expanded its line of Chromebook Plus laptops with the Acer Chromebook Plus 514 (CB514-4H/T), providing users with a performance-minded, compact and durable model that enables them to do more with the AI-powered capabilities of ChromeOS. "The new Acer Chromebook Plus 514 (CB514-4H/T) delivers the sought-after combination of a portable design, 14-inch Full HD display and performance-minded technology that lets users get the most out of exciting capabilities offered with Chromebook Plus," said James Lin, General Manager, Notebooks, Acer Inc. "Students, businesses, families, and individuals need to be more productive, connected and empowered than ever, and can achieve this using Acer Chromebook Plus devices."

The new Acer Chromebook Plus 514 is the latest addition to Acer's lineup of Chromebook Plus laptops that offer enhanced Chromebook performance and experiences, emphasizing better hardware designs with upgraded displays and cameras paired with powerful productivity, creativity, and multimedia capabilities. Like all Acer Chromebook Plus laptops, users have the power to do more with the new Chromebook Plus 514 (CB514-4H/T). Powered by an Intel Core i3-N305 processor and an ample 8 GB of LPDDR5 RAM, the Acer Chromebook Plus 514 provides 2x the speed, memory, and storage, giving responsive performance and efficient multitasking, whether running built-in AI-powered apps like Google Docs and Photos, watching favorite shows in full HD on a 1080p display, or movie-making with LumaFusion. Plus, the processor ensures all-day enjoyment with up to 11 hours of usage on the fast-charging battery.

Meta Announces New MTIA AI Accelerator with Improved Performance to Ease NVIDIA's Grip

Meta has announced the next generation of its Meta Training and Inference Accelerator (MTIA) chip, designed to train and infer AI models at scale. The newest MTIA chip is a second-generation design of Meta's custom AI silicon, built on TSMC's 5 nm technology. Running at 1.35 GHz, the new chip gets a boost to 90 Watts of TDP per package, compared to just 25 Watts for the first-generation design. Basic Linear Algebra Subprograms (BLAS) processing is where the chip shines, including matrix multiplication and vector/SIMD processing. At GEMM matrix processing, each chip can deliver 708 TeraFLOPS at INT8 (presumably meant as FP8 in the spec) with sparsity, 354 TeraFLOPS without, 354 TeraFLOPS at FP16/BF16 with sparsity, and 177 TeraFLOPS without.

Classical vector/SIMD processing is a bit slower, at 11.06 TeraFLOPS at INT8 (FP8), 5.53 TeraFLOPS at FP16/BF16, and 2.76 TFLOPS at single-precision FP32. The MTIA chip is specifically designed to run AI training and inference on Meta's PyTorch AI framework, with an open-source Triton backend that produces compiler code for optimal performance. Meta uses this for all its Llama models, and with Llama 3 just around the corner, it could be trained on these chips. To package it into a system, Meta puts two of these chips onto a board and pairs them with 128 GB of LPDDR5 memory. The board is connected via PCIe Gen 5 to a system where 12 boards are stacked densely. This is repeated six times in a single rack, for 72 boards and 144 chips, totaling 101.95 PetaFLOPS assuming linear scaling at INT8 (FP8) precision. Of course, linear scaling is not quite possible in scale-out systems, which could bring it down to under 100 PetaFLOPS per rack.
Below, you can see images of the chip floorplan, specifications compared to the prior version, as well as the system.
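The rack-level arithmetic above, spelled out; the per-chip figure is the sparse INT8 (FP8) rate, and perfect linear scaling is assumed:

```python
# MTIA rack-level throughput math as described in the article.
chips_per_board = 2
boards_per_system = 12
systems_per_rack = 6

chips = chips_per_board * boards_per_system * systems_per_rack
rack_tflops = chips * 708            # 708 sparse INT8/FP8 TFLOPS per chip
print(chips)                          # 144
print(round(rack_tflops / 1000, 2))   # 101.95 PetaFLOPS per rack
```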