News Posts matching #Memory


AMD Ryzen Threadripper PRO 7995WX Emerges: 96 Cores, DDR5 Memory, and Over 5.0 GHz Boost Frequency

AMD appears set to enhance the core count for its renowned Threadripper series. After a prolonged wait, the high-end desktop (HEDT) platform boasting massive core counts returns with the Ryzen Threadripper PRO 7995WX, which features an impressive 96 cores and 192 threads. This marks the series' first core-count upgrade since the Threadripper 3000 series. The 7995WX CPU was spotted in the HP Z6 G5 Workstation system, potentially one of the inaugural prebuilt systems from AMD's OEM partners. The Threadripper PRO series seems poised to dominate AMD's HEDT offerings, with no indications of non-PRO consumer models emerging for now.

The latest Geekbench listing reveals the 7995WX CPU's 96-core configuration. Although the base frequency appears to be misreported, the benchmark data indicates the 96-core CPU boosting as high as 5.14 GHz, a figure confirmed in Geekbench's own output. Another notable enhancement in the Threadripper series is the introduction of the DDR5 memory standard. While the benchmarking tool doesn't explicitly mention this, it does list a memory configuration of 503.27 GB (512 GB) in use. The CPU scored 2095 points in the single-core test and 81408 points in the multi-core test on Geekbench v5.5 for Linux (Ubuntu 22.04 LTS), making it one of the fastest CPUs in the database.

More Details on SK Hynix 321-Layer NAND Flash Appear at the Flash Memory Summit

Courtesy of an SK Hynix keynote speech at the Flash Memory Summit, we now have a few more details about its upcoming 321-layer NAND Flash. PC Watch Japan, which attended the industry event, shared some pictures from the keynote that add crucial details missing from last week's press release. SK Hynix's officially shared performance figures tell us to expect up to 12 percent faster program (i.e. write) performance and up to 13 percent improved read latency. Both of these metrics will obviously depend on the SSD controller the NAND is paired with, the firmware on said controller, and so forth.

PC Watch Japan also quotes a program throughput of 194 MB/s, a 26 MB/s improvement over SK Hynix's 176-layer NAND and currently the highest known program throughput of any announced NAND Flash. That said, Kioxia expects to hit 205 MB/s with its next generation of 300-layer NAND. SK Hynix also claims 10 percent better read power efficiency, which is really neither here nor there for modern SSDs, unless we're talking about server-grade SSDs with a dozen or more of these NAND chips. Rather than going with two stacks of 150-plus layers each, SK Hynix went with three 107-layer stacks; compare that to its current 238-layer product, which uses two stacks of 119 layers. This makes the new NAND package easier to produce and should, in the long term, result in higher yields. Each NAND package is expected to deliver a memory density of 20 Gbit per square millimetre or more, almost twice that of its 176-layer NAND.
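
As a rough illustration of the stack and density figures above (the implied 176-layer density is derived from the article's "almost twice" claim, not an official SK Hynix specification):

# Back-of-the-envelope check of the layer and density figures quoted above
layers_new = 3 * 107            # three 107-layer decks -> 321 layers
layers_current = 2 * 119        # current product: two 119-layer decks -> 238 layers

density_new = 20                # Gbit per square millimetre, "20 Gbit/mm^2 or more"
implied_176_layer = density_new / 2   # "almost twice" the 176-layer part -> roughly 10 Gbit/mm^2

print(f"New NAND: {layers_new} layers, current: {layers_current} layers")
print(f"Implied 176-layer density: ~{implied_176_layer:.0f} Gbit/mm^2")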

Supermicro Announces High Volume Production of E3.S All-Flash Storage Portfolio with CXL Memory Expansion

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is delivering high-throughput, low-latency E3.S storage solutions supporting the industry's first PCIe Gen 5 drives and CXL modules to meet the demands of large AI training and HPC clusters, where massive amounts of unstructured data must be delivered to the GPUs and CPUs to achieve faster results.

Supermicro's Petascale systems are a new class of storage servers supporting the latest industry-standard E3.S (7.5 mm) Gen 5 NVMe drives from leading storage vendors, for up to 256 TB of high-throughput, low-latency storage in 1U or up to half a petabyte in 2U. Inside, Supermicro's innovative symmetrical architecture reduces latency by ensuring the shortest signal paths for data and maximizes airflow over critical components, allowing them to run at optimal speeds. With these new systems, a standard rack can now hold over 20 petabytes of capacity for high-throughput NVMe-oF (NVMe over Fabrics) configurations, ensuring that GPUs remain saturated with data. Systems are available with either 4th Gen Intel Xeon Scalable processors or 4th Gen AMD EPYC processors.

V-COLOR Showcases Overclocked DDR5-6800 R-DIMM Ranging from 16GB to 64GB 8-channel Kits

V-COLOR Technology Inc., a leading memory manufacturer, is proud to present its revolutionary DDR5 OC R-DIMM Workstation Memory in 16 GB, 32 GB, and 64 GB module configurations. The memory is designed for use with the latest unlocked Intel Xeon W-2400X and W-3400X series processors on Intel W790 chipset-based motherboards supporting quad-channel and octo-channel memory.

Extreme Overclocking: Experience unrivaled power with OC R-DIMM Workstation Memory. Engineered for extreme overclocking, these modules push the limits to achieve unparalleled speeds. With v-color technology, effortlessly unlock the full potential of workstations and conquer even the most demanding tasks. Intel XMP 3.0 Certified: Seamlessly compatible with Intel XMP 3.0, OC R-DIMM Workstation Memory ensures a hassle-free and optimized experience, effortlessly enhancing the system's performance and achieving the perfect balance between speed and stability.

AMD Radeon RX 7700 XT Confirmed with 192-bit Memory Bus in ASRock Regulatory Leak

The AMD Radeon RX 7700 XT is confirmed to feature 12 GB as its standard memory size and a 192-bit wide GDDR6 memory interface, according to a leaked regulatory filing by ASRock for its upcoming graphics cards. We already know from last week's mega leak of the PowerColor RX 7800 XT Red Devil that the card maxes out the "Navi 32" silicon, enabling all 60 RDNA3 CU, and comes with 16 GB of memory across the chip's full 256-bit memory bus. This filing suggests how AMD will carve out the RX 7700 XT.

Probably designed to compete with the GeForce RTX 4070, the RX 7700 XT is based on the same "Navi 32" silicon as the RX 7800 XT, but cut down. AMD is expected to disable some of the 60 CU physically present on the 5 nm GCD, while one of the four 6 nm MCDs will be disabled, giving the chip a 192-bit memory bus to drive its 12 GB of memory. We know from the PowerColor leak that the RX 7800 XT gets 18 Gbps memory speed. It remains to be seen whether AMD sticks with this speed even for the RX 7700 XT, in which case the card would have 432 GB/s of memory bandwidth at its disposal. AMD is expected to launch the RX 7800 XT and RX 7700 XT within this quarter (before October).
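
The 432 GB/s figure follows directly from the leaked specifications; a minimal sketch of that arithmetic, assuming the 18 Gbps speed does carry over:

# bandwidth (GB/s) = data rate per pin (Gbps) * bus width (bits) / 8
data_rate_gbps = 18     # GDDR6 speed from the RX 7800 XT leak (assumed to carry over)
bus_width_bits = 192    # RX 7700 XT: one of four MCDs disabled -> 192-bit bus

bandwidth_gbs = data_rate_gbps * bus_width_bits / 8
print(f"{bandwidth_gbs:.0f} GB/s")   # 432 GB/s, matching the figure above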

Micron Launches Memory Expansion Module Portfolio to Accelerate CXL 2.0 Adoption

Micron Technology, Inc. (Nasdaq: MU) today announced sample availability of the Micron CZ120 memory expansion modules to customers and partners. The Micron CZ120 modules come in 128 GB and 256 GB capacities in the E3.S 2T form factor, which uses a PCIe Gen 5 x8 interface. Additionally, the CZ120 modules are capable of delivering up to 36 GB/s of memory read/write bandwidth and augment standard server systems when incremental memory capacity and bandwidth are required. The CZ120 modules use Compute Express Link (CXL) standards and fully support the CXL 2.0 Type 3 standard. By leveraging a unique dual-channel memory architecture and Micron's high-volume production DRAM process, the Micron CZ120 delivers higher module capacity and increased bandwidth. Workloads that benefit from more memory capacity include AI training and inference models, SaaS applications, in-memory databases, high-performance computing, and general-purpose compute workloads that run on a hypervisor on premises or in the cloud.
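
To put the 36 GB/s module figure in context, here is a rough sketch of the raw PCIe Gen 5 x8 link bandwidth behind it (illustrative link math, not a Micron specification):

# Raw PCIe 5.0 x8 link bandwidth, ignoring protocol overhead
lanes = 8
gen5_gtps = 32                    # GT/s per lane for PCIe 5.0
encoding = 128 / 130              # 128b/130b line encoding

per_direction_gbs = lanes * gen5_gtps * encoding / 8
print(f"~{per_direction_gbs:.1f} GB/s per direction, "
      f"~{2 * per_direction_gbs:.0f} GB/s bidirectional")
# ~31.5 GB/s per direction; the CZ120's 36 GB/s is a combined read+write figure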

"Micron is advancing the adoption of CXL memory with this CZ120 sampling milestone to key customers," commented Siva Makineni, vice president of the Micron Advanced Memory Systems Group. "We have been developing and testing our CZ120 memory expansion modules utilizing both Intel and AMD platforms capable of supporting the CXL standard. Our product innovation coupled with our collaborative efforts with the CXL ecosystem will enable faster acceptance of this new standard, as we work collectively to meet the ever-growing demands of data centers and their memory-intensive workloads."

V-COLOR Achieves Unprecedented Speed with 96GB (2x48GB) DDR5-7800 CL38 with Manta XPrism Series

V-COLOR Technology Inc., a leading memory manufacturer, is proud to present a new speed milestone for a DDR5 96 GB (2x 48 GB) kit: DDR5-7800 CL38. Aiming to break barriers and elevate performance, the company is offering this overclocked 96 GB (2x 48 GB) kit at DDR5-7800 with CL38-48-48-126 timings.

Testing the limits of DDR5 memory speed, v-color reached DDR5-7800 CL38-48-48-126 in a 96 GB (2x 48 GB) kit configuration, using the ASUS ROG MAXIMUS Z790 APEX and an Intel Core i9-13900KF processor. Seamlessly compatible with Intel XMP 3.0, the Manta XPrism RGB DDR5 96 GB (2x 48 GB) kit ensures a hassle-free and optimized experience, effortlessly enhancing the system's performance and achieving the perfect balance between speed and stability.
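
For a sense of what those timings mean in practice, a quick first-word latency estimate using the standard approximation (the DDR5-6000 CL30 comparison point is chosen for illustration and is not part of the announcement):

# First-word latency (ns) ~= 2000 * CL / data rate (MT/s)
def cas_latency_ns(cl: int, data_rate_mtps: int) -> float:
    return 2000 * cl / data_rate_mtps

print(f"DDR5-7800 CL38: ~{cas_latency_ns(38, 7800):.1f} ns")  # ~9.7 ns
print(f"DDR5-6000 CL30: ~{cas_latency_ns(30, 6000):.1f} ns")  # ~10.0 ns, for comparison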

MSI Releases New AGESA PI 1.0.0.7c BIOS Update for Higher Frequency Memory Modules and Stability Bug Fixes

MSI today released a new AMD AGESA PI 1.0.0.7c BIOS update for its entire X670E, X670, B650, and A620 motherboard lineup. For this BIOS release, MSI focused primarily on support for higher-frequency DDR5 memory modules, along with stability bug fixes. The latest update significantly increases the supported memory frequency on AMD Ryzen CPUs. Below is a list of models that will be ready at the time of the release, while other models will receive support in the following week.

The screenshots below demonstrate a memory stress test on an AMD Ryzen 7 7700X CPU, paired with a dual-channel DDR5-7200 EXPO-certified kit on MSI's PRO B650-P WIFI motherboard, running without any stability issues. They also show a memory stress test on an AMD Ryzen 9 7900X CPU with MSI's MEG X670E ACE motherboard reaching as high as DDR5-8000 (CL36). Beyond memory support, AGESA 1.0.0.7c adds extra reliability protections and patches a few potential vulnerabilities and security loopholes.

KIOXIA Announces the First Samples of Hardware that Supports the Linux Foundation's Software-Enabled Flash Community Project

KIOXIA America, Inc. today announced the availability of the first hardware samples that support the Linux Foundation's vendor-neutral Software-Enabled Flash Community Project, which is making flash software-defined. The company is expecting to deliver customer samples in August 2023. Built for the demanding needs of hyperscale environments, Software-Enabled Flash technology helps hyperscale cloud providers and storage developers maximize the value of flash memory. The hardware from KIOXIA is the first step to putting this working technology in the hands of developers.

The first running units will be showcased in live demonstrations in the KIOXIA booth (#307) next week at Flash Memory Summit 2023 (FMS 2023). This new class of drive consists of purpose-built, media-centric flash hardware focused on hyperscale requirements that work with an open source API and libraries to provide the needed functionality. By unlocking the power of flash, this technology breaks free from legacy hard disk drive (HDD) protocols and creates a platform specific to flash media in a hyperscale environment.

New AI Accelerator Chips Boost HBM3 and HBM3e to Dominate 2024 Market

TrendForce reports that the HBM (High Bandwidth Memory) market's dominant product for 2023 is HBM2e, employed by the NVIDIA A100/A800, AMD MI200, and most CSPs' (Cloud Service Providers) self-developed accelerator chips. As the demand for AI accelerator chips evolves, manufacturers plan to introduce new HBM3e products in 2024, with HBM3 and HBM3e expected to become mainstream in the market next year.

The distinctions between HBM generations primarily lie in their speed. The industry experienced a proliferation of confusing names when transitioning to the HBM3 generation. TrendForce clarifies that the so-called HBM3 in the current market should be subdivided into two categories based on speed. One category includes HBM3 running at speeds between 5.6 to 6.4 Gbps, while the other features the 8 Gbps HBM3e, which also goes by several names including HBM3P, HBM3A, HBM3+, and HBM3 Gen2.
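
Those pin speeds translate into per-stack bandwidth roughly as follows, assuming the standard 1024-bit HBM interface (an assumption for illustration; the article only quotes per-pin speeds):

# Per-stack bandwidth = pin speed (Gbps) * interface width (bits) / 8
def hbm_stack_bandwidth_gbs(pin_speed_gbps: float, bus_bits: int = 1024) -> float:
    return pin_speed_gbps * bus_bits / 8

for label, speed in [("HBM3 @ 5.6 Gbps", 5.6), ("HBM3 @ 6.4 Gbps", 6.4), ("HBM3e @ 8 Gbps", 8.0)]:
    print(f"{label}: ~{hbm_stack_bandwidth_gbs(speed):.0f} GB/s per stack")
# ~717, ~819 and ~1024 GB/s per stack, respectively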

Lexar NM790 M.2 2280 PCIe Gen 4 NVMe SSD and New ARES Memory Kits Now Available

Lexar, a leading global brand of flash memory solutions, is excited to announce three additions to its gaming product lineup - the new Lexar NM790 M.2 2280 PCIe Gen 4×4 NVMe SSD, Lexar ARES DDR5 Desktop Memory at 6400 MT/s, and Lexar ARES RGB DDR4 Desktop Memory at 3600 MT/s.

The NM790 M.2 NVMe SSD is perfect for gamers and content creators, delivering incredible speeds of 7400 MB/s read and 6500 MB/s write thanks to its PCIe Gen 4 interface, HMB 3.0 support, and Dynamic SLC Cache. So whether users are looking to vanquish foes in their gaming battles or conquer their latest creative pursuits, the NM790 SSD is ready to take on the challenge.

Team Group ELITE PLUS DDR5 and ELITE DDR5-6400 Desktop Memory Modules Hit the Market

Global memory brand Team Group announced the launch of its updated ELITE memory modules with enhanced frequencies today: the ELITE PLUS DDR5 and ELITE DDR5 Desktop Memory 6400 MHz (1.1 V CL52-52-52-103). Both comply with JEDEC memory standards and fulfill the needs of demanding applications and high-performance computing.

In response to the growing demand for high-speed computing and digital technology, Team Group has introduced the upgraded ELITE PLUS DDR5 6400 MHz and ELITE DDR5 6400 MHz memory modules, which boast higher frequencies and low power consumption. The updated specs of the ELITE memory fully meet the needs of learning, entertainment, and more on desktop computers. With the modules' low operating voltage of 1.1 V, power consumption is significantly reduced and the computer's lifespan is extended. In addition, DDR5's Same-Bank Refresh feature and optimized IC structure can process double the amount of data simultaneously compared to DDR4, which enables computers to operate more smoothly while multitasking and significantly improves operating efficiency.
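
As a rough illustration of that doubling, peak theoretical bandwidth per module works out as follows (the DDR4-3200 comparison point is chosen for illustration; the article only states that DDR5 moves roughly double the data of DDR4):

# Peak theoretical DIMM bandwidth = data rate (MT/s) * 64-bit bus / 8, in GB/s
def dimm_bandwidth_gbs(data_rate_mtps: int, bus_bits: int = 64) -> float:
    return data_rate_mtps * bus_bits / 8 / 1000

print(f"DDR5-6400: {dimm_bandwidth_gbs(6400):.1f} GB/s per module")  # 51.2 GB/s
print(f"DDR4-3200: {dimm_bandwidth_gbs(3200):.1f} GB/s per module")  # 25.6 GB/s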

GIGABYTE and HWiNFO Exclusively Collaborate for Accurate Information and New Memory Timings Feature

GIGABYTE TECHNOLOGY Co. Ltd, a leading manufacturer of motherboards, graphics cards, and hardware solutions, is pleased to announce its close collaboration with HWiNFO, a comprehensive system information and diagnostic tool. This partnership aims to enhance the accuracy of hardware information and diagnostics, while introducing innovative features to benefit computer enthusiasts and professionals alike.

HWiNFO, known for its detailed system information and diagnostic capabilities, has partnered with GIGABYTE to integrate their technologies and deliver more precise and comprehensive hardware information to users. By combining GIGABYTE's expertise in hardware manufacturing and HWiNFO's advanced diagnostics, users can now access a wealth of information about their computer components with unparalleled accuracy.

Samsung Electronics Announces Second Quarter 2023 Results

Samsung Electronics today reported financial results for the second quarter ended June 30, 2023. The Company posted KRW 60.01 trillion in consolidated revenue, a 6% decline from the previous quarter, mainly due to a decline in smartphone shipments despite a slight recovery in revenue of the DS (Device Solutions) Division. Operating profit rose sequentially to KRW 0.67 trillion as the DS Division posted a narrower loss, while Samsung Display Corporation (SDC) and the Digital Appliances Business saw improved profitability.

The Memory Business saw results improve from the previous quarter as its focus on High Bandwidth Memory (HBM) and DDR5 products in anticipation of robust demand for AI applications led to higher-than-guided DRAM shipments. System semiconductors posted a decline in profit due to lower utilization rates on weak demand from major applications.

Micron Delivers Industry's Fastest, Highest-Capacity HBM to Advance Generative AI Innovation

Micron Technology, Inc. today announced it has begun sampling the industry's first 8-high 24 GB HBM3 Gen2 memory with bandwidth greater than 1.2 TB/s and pin speed over 9.2 Gb/s, which is up to a 50% improvement over currently shipping HBM3 solutions. With a 2.5 times performance per watt improvement over previous generations, Micron's HBM3 Gen2 offering sets new records for the critical artificial intelligence (AI) data center metrics of performance, capacity and power efficiency. These Micron improvements reduce training times of large language models like GPT-4 and beyond, deliver efficient infrastructure use for AI inference and provide superior total cost of ownership (TCO).

The foundation of Micron's high-bandwidth memory (HBM) solution is Micron's industry-leading 1β (1-beta) DRAM process node, which allows a 24 Gb DRAM die to be assembled into an 8-high cube within an industry-standard package dimension. Moreover, Micron's 12-high stack with 36 GB capacity will begin sampling in the first quarter of calendar 2024. Micron provides 50% more capacity for a given stack height compared to existing competitive solutions. Micron's HBM3 Gen2 performance-to-power ratio and pin speed improvements are critical for managing the extreme power demands of today's AI data centers. The improved power efficiency is possible because of Micron's advancements such as the doubling of through-silicon vias (TSVs) over competitive HBM3 offerings, thermal impedance reduction through a fivefold increase in metal density, and an energy-efficient data path design.
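
The stack capacities quoted above follow directly from the die density and stack height; a minimal sketch of that arithmetic:

# Stack capacity (GB) = die capacity (Gb) * stack height / 8
die_capacity_gbit = 24           # 24 Gb per DRAM die on Micron's 1-beta node

for stack_height in (8, 12):
    capacity_gb = die_capacity_gbit * stack_height / 8
    print(f"{stack_height}-high stack: {capacity_gb:.0f} GB")
# 8-high -> 24 GB (sampling now), 12-high -> 36 GB (sampling planned for early 2024)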

SK hynix Reports Second Quarter 2023 Financial Results

SK hynix Inc. today reported financial results for the second quarter of 2023. The company recorded revenue of 7.306 trillion won, operating loss of 2.882 trillion won (with operating margin of negative 39%), and net loss of 2.988 trillion won (with net margin of negative 41%) for the three-month period ended June 30, 2023.

"Amid an expansion in generative artificial intelligence (AI) market, which has largely been centered on ChatGPT, demand for AI server memory has increased rapidly," the company said. "As a result, sales of premium products such as HBM3 and DDR5 increased, leading to a 44% sequential increase in revenue for the second quarter, while operating loss narrowed by 15%."

NVIDIA is Looking at Samsung for HBM3 Memory and 2.5D Chip Packaging

According to news out of Korea, NVIDIA is considering Samsung as a partner not only for HBM3 memory, but also as a potential partner when it comes to 2.5D chip packaging. The latter is due to TSMC having limited capacity for handling all of its customers' advanced chip packaging needs, although Samsung is apparently not the only potential partner NVIDIA is looking at. Taiwan-based SPIL and US-based Amkor Technology are two alternative candidates for the 2.5D chip packaging, according to The Elec.

As far as HBM3 memory goes, NVIDIA doesn't have as many potential options, with SK Hynix being its current partner, which NVIDIA will continue to work with for HBM memory on its high-end AI accelerators and GPUs. It's likely that Samsung is trying to win NVIDIA back as a foundry customer by proving that it's capable of handling chip packaging for NVIDIA. Samsung would likely use its I-Cube 2.5D packaging technology, and The Elec suggests that Samsung would still be packaging TSMC-made GPU wafers, which would be mated with Samsung HBM3 memory. Samsung has not yet started mass production of HBM3 memory, but it has supplied customers with evaluation samples that are said to have received very positive feedback. For now, nothing has been agreed, and TSMC is, as we know, looking to expand its 2.5D packaging business by over 40 percent, but the question is how quickly TSMC can move before its customers consider other competitors.

Samsung GDDR7 Memory Operates at Lower Voltage, Built on Same Node as 24 Gbps G6

Samsung on Wednesday announced the development of the world's first next-generation GDDR7 memory chips, and Ryan Smith from AnandTech scored a few technical details from the company. Apparently, the company's first production version of GDDR7 memory is built on the same D1z silicon foundry node as its 24 Gbps GDDR6 memory chip—the fastest GDDR6 chip in production. D1z is a 10 nm-class foundry node that utilizes EUV lithography.

Smith also scored some electrical specs. The first-gen GDDR7 memory chip offers a data rate of 32 Gbps at a DRAM voltage of 1.2 V, compared to the 1.35 V that some of the higher-speed GDDR6 chips operate at. While absolute power draw is 7% higher than the current generation, the energy per bit (pJ/b, pico-Joules per bit) at the 32 Gbps data rate on offer is 20% lower than that of the 24 Gbps GDDR6 chip. Put simply, GDDR7 is 20% more energy efficient. Smith remarks that this energy-efficiency gain is purely architectural, and isn't from any refinements to the D1z node. GDDR7 uses PAM3 signaling, compared to the NRZ signaling of conventional GDDR6 and the PAM4 signaling of the non-JEDEC GDDR6X standard that NVIDIA co-developed with Micron Technology.
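
The 7% and 20% figures reconcile as follows under this reading of the claim (power rises slightly while the data rate rises by a third, so energy per bit falls); this is illustrative arithmetic, not official Samsung math:

# energy per bit ratio = power ratio / data rate ratio
power_ratio = 1.07           # absolute power, GDDR7 @ 32 Gbps vs GDDR6 @ 24 Gbps
data_rate_ratio = 32 / 24    # ~1.33x more bits moved per unit time

energy_per_bit_ratio = power_ratio / data_rate_ratio
print(f"Energy per bit: {energy_per_bit_ratio:.2f}x -> ~{(1 - energy_per_bit_ratio) * 100:.0f}% lower")
# ~0.80x, i.e. roughly 20% better energy efficiency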

Samsung Announces Industry's First GDDR7 Memory Development, 32 Gbps Speeds

Samsung Electronics, a world leader in advanced semiconductor technology, today announced that it has completed development of the industry's first Graphics Double Data Rate 7 (GDDR7) DRAM. It will first be installed in next-generation systems of key customers for verification this year, driving future growth of the graphics market and further consolidating Samsung's technological leadership in the field.

Following Samsung's development of the industry's first 24 Gbps GDDR6 DRAM in 2022, the company's 16-gigabit (Gb) GDDR7 offering will deliver the industry's highest speed yet. Innovations in integrated circuit (IC) design and packaging provide added stability despite high-speed operations. "Our GDDR7 DRAM will help elevate user experiences in areas that require outstanding graphics performance, such as workstations, PCs and game consoles, and is expected to expand into future applications such as AI, high-performance computing (HPC) and automotive vehicles," said Yongcheol Bae, Executive Vice President of Memory Product Planning Team at Samsung Electronics. "The next-generation graphics DRAM will be brought to market in line with industry demand and we plan on continuing our leadership in the space."

Team Group Upgrades Industrial DDR5 Memory Capacities

The leading memory brand Team Group has upgraded the capacities of its industrial DDR5 series memory products, by leveraging its outstanding R&D and product design capabilities, taking the lead in launching a 48 GB module, as well as a lower 24 GB option. The capacity upgrades apply to all types of DDR5 products, including the DDR5 non-ECC U/SO-DIMM, DDR5 ECC U/SO-DIMM class, and DDR5 ECC R-DIMM memory modules. They provide higher capacity for applications using high-performance edge computing, embedded computers, personal workstations, and more.

Current industrial DDR5 memory on the market has a maximum capacity of about 32 GB per module. However, with the development of technologies such as cloud, edge computing, the Internet of Things, and big data, the demand for memory capacity is increasing. Edge computing systems need to process large amounts of data from a variety of sensors and devices and perform complex calculations and analyses that require high memory capacity and performance. To meet this demand, Team Group has made capacity upgrades across its industrial DDR5 memory products, offering new 24 GB and 48 GB modules. They bring more application flexibility and enable users to better handle large data sets, complex simulations, and analysis tasks. The enhanced capacities will significantly increase the performance and processing power of edge computing systems, providing users with the ability to run various applications and algorithms more efficiently.

Samsung Starts Mass Production of Automotive UFS 3.1 Memory Solution

Samsung Electronics, a world leader in advanced semiconductor technology, today announced that it has initiated mass production of its new automotive Universal Flash Storage (UFS) 3.1 memory solution optimized for in-vehicle infotainment (IVI) systems. The new solution offers the industry's lowest energy consumption, enabling car manufacturers to provide the best mobility experience for consumers.

The UFS 3.1 lineup will come in 128, 256, and 512-gigabyte (GB) variants to meet the different needs of customers. The enhanced lineup enables more efficient battery life management for future automotive applications such as electric or autonomous vehicles. The 256 GB model, for instance, reduces energy consumption by about 33% compared to the previous-generation product. The 256 GB model also provides a sequential write speed of 700 megabytes per second (MB/s) and a sequential read speed of 2,000 MB/s.

TechPowerUp is Hiring a Motherboard Reviewer

TechPowerUp, your place on the web for in-depth PC hardware reviews and enthusiast news, is looking for a desktop motherboard reviewer. Our current reviewer, ir_cow, is going to focus exclusively on DRAM module reviews. The motherboard reviewer job requires a high-level understanding of the layout and workings of modern motherboards, including detailed technical photography of the various onboard devices, VRM, and memory; commentary on layout and ease of installation/use; as well as performance benchmarks and overclocking capabilities.

We expect our potential motherboard reviewer to be able to identify key components of the motherboard: the various controllers, VRM, PHYs, and chipset; the expansion slot layout and the way PCIe lanes are distributed across the motherboard; and other important aspects, such as the motherboard's own cooling mechanisms. You also need a solid grasp of memory timings and tuning for a given platform, and the willingness to explore and learn. Our performance testing involves not just the CPU performance on a given motherboard, but also that of certain onboard devices, such as audio, and of the storage interfaces. Some of our recent motherboard reviews should give you a good idea of our review format, article structure, and the testing involved.

Intel Optane Still not Dead, Orders Expanded by Another Quarter

In July 2022, Intel announced that it was winding down its Optane division, effectively discontinuing development of the 3D XPoint memory it had marketed for a long time. Once viewed as a competitive advantage, support for Optane has been removed from future platforms. However, Intel has announced plans to extend Optane shipments by another quarter, owing to additional stock or significant demand from customers buying Optane DIMMs for their enterprises. Initially set to ship the final Optane Persistent Memory 100-series DIMMs on September 30, Intel has extended this date by three months to December 29, 2023.

Intel states, "Customers are recommended to secure additional Optane units at the specified 0.44% annualized failure rate (AFR) for safety stock. Intel will make commercially reasonable efforts to support last time order quantities for Intel Optane Persistent Memory 100 Series."

Micron Readying GDDR7 Memory for 2024

Last week, Micron Technology CEO Sanjay Mehrotra announced during an investors meeting that the company's next-generation GPU memory—GDDR7—will be arriving next year: "In graphics, industry analysts continue to expect graphics' TAM compound annual growth rate (CAGR) to outpace the broader market, supported by applications across client and data center. We expect customer inventories to normalize in calendar Q3. We plan to introduce our next-generation G7 product on our industry-leading 1β node in the first half of calendar year 2024." His proposed launch window seems to align with information gleaned from previous reports—with NVIDIA and AMD lined up to fit GDDR7 SGRAM onto their next-gen mainstream GPUs, although Team Green could be delaying its Ada Lovelace successor into 2025.

Micron already counts these big players as key clients for its current GDDR6 and GDDR6X video memory offerings, but Samsung could be vying for some of that action with its own GDDR7 technology (as announced late last year). Presentation material indicated that Samsung is anticipating data transfer rates in the range of 36 Gbps with the use of PAM3 signaling. Cadence has also confirmed similar numbers for its (industry-first) GDDR7 verification solution, but the different encoding standard will require revisions to memory controllers and physical interfaces.
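
A quick sketch of why PAM3 enables those data rates, assuming the commonly cited ~1.5 bits per symbol for PAM3 (three bits encoded over two symbols); this is general signaling math, not vendor-specific detail:

# symbol rate (GBaud) = data rate (Gbps) / bits per symbol
def symbol_rate_gbaud(data_rate_gbps: float, bits_per_symbol: float) -> float:
    return data_rate_gbps / bits_per_symbol

print(f"GDDR7 36 Gbps, PAM3 (~1.5 b/symbol): ~{symbol_rate_gbaud(36, 1.5):.0f} GBaud")
print(f"GDDR6 24 Gbps, NRZ (1 b/symbol):      {symbol_rate_gbaud(24, 1.0):.0f} GBaud")
# A 36 Gbps PAM3 link can run at the same symbol rate as a 24 Gbps NRZ link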

AI and HPC Demand Set to Boost HBM Volume by Almost 60% in 2023

High Bandwidth Memory (HBM) is emerging as the preferred solution for overcoming memory transfer speed restrictions due to the bandwidth limitations of DDR SDRAM in high-speed computation. HBM is recognized for its revolutionary transmission efficiency and plays a pivotal role in allowing core computational components to operate at their maximum capacity. Top-tier AI server GPUs have set a new industry standard by primarily using HBM. TrendForce forecasts that global demand for HBM will experience almost 60% growth annually in 2023, reaching 290 million GB, with a further 30% growth in 2024.
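
The growth percentages imply the following volumes (derived from the figures in the paragraph above, not separately published TrendForce numbers):

# Back out the implied 2022 base and the 2024 forecast from the growth rates
demand_2023_million_gb = 290                    # ~60% growth over 2022
implied_2022 = demand_2023_million_gb / 1.6
forecast_2024 = demand_2023_million_gb * 1.3    # a further ~30% growth

print(f"2022 (implied): ~{implied_2022:.0f} million GB")    # ~181 million GB
print(f"2024 (forecast): ~{forecast_2024:.0f} million GB")  # ~377 million GB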

For its 2025 forecast, TrendForce takes into account five large-scale AIGC products equivalent to ChatGPT, 25 mid-size AIGC products akin to Midjourney, and 80 small AIGC products; on that basis, the minimum computing resources required globally could range from 145,600 to 233,700 NVIDIA A100 GPUs. Emerging technologies such as supercomputers, 8K video streaming, and AR/VR, among others, are expected to simultaneously increase the workload on cloud computing systems due to escalating demands for high-speed computing.