News Posts matching #HBM


Ampere Announces 512-Core AmpereOne Aurora CPU for AI Computing

Ampere has announced a significant update to its product roadmap, highlighting the upcoming 512-core AmpereOne Aurora processor. This new chip is specifically designed to address the growing demands of cloud-native AI computing.

The newly announced 512-core AmpereOne Aurora integrates AI acceleration and on-chip High Bandwidth Memory (HBM), promising three times the performance per rack of current AmpereOne processors. Aurora is designed to handle both AI training and inference workloads, underscoring Ampere's ambition to become a major player in the AI computing space.

SK hynix Board Approves Yongin Semiconductor Cluster Investment Plan

SK hynix Inc. announced today that, following a board resolution on the 26th, it will invest about 9.4 trillion won in building the first fab and business facilities of the Yongin Semiconductor Cluster. The company plans to start construction of the first fab in the Yongin cluster in March next year and complete it in May 2027, having secured the board of directors' approval for the investment in advance. SK hynix will make every effort in building the fab to lay the foundation for its future growth and respond to the rapidly increasing demand for AI memory semiconductors.

The Yongin Cluster, which will be built on a 4.15 million square meter site in Wonsam-myeon, Yongin, Gyeonggi Province, is currently under site preparation and infrastructure construction. SK hynix has decided to build four state-of-the-art fabs that will produce next-generation semiconductors, and a semiconductor cooperation complex with more than 50 small local companies. After the construction of the 1st fab, the company aims to complete the remaining three fabs sequentially to grow the Yongin Cluster into a "Global AI semiconductor production base."

Memory Industry Revenue Expected to Reach Record High in 2025 Due to Increasing Average Prices and the Rise of HBM and QLC

TrendForce's latest report on the memory industry reveals that DRAM and NAND Flash revenues are expected to see significant increases of 75% and 77%, respectively, in 2024, driven by increased bit demand, an improved supply-demand structure, and the rise of high-value products like HBM.

Furthermore, industry revenues are projected to continue growing in 2025, with DRAM expected to increase by 51% and NAND Flash by 29%, reaching record highs. This growth is anticipated to revive capital expenditures and boost demand for upstream raw materials, although it will also increase cost pressure for memory buyers.
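
As a rough consistency check, the compounded effect of these projected increases can be computed directly. The sketch below normalizes 2023 revenue to 1.0, an assumption made purely for illustration, and applies the growth rates quoted above:

```python
# Compound TrendForce's projected annual revenue growth rates,
# normalizing 2023 revenue to 1.0 (illustrative assumption).
growth = {
    "DRAM": [0.75, 0.51],        # projected increases for 2024, 2025
    "NAND Flash": [0.77, 0.29],
}

for product, rates in growth.items():
    index = 1.0  # normalized 2023 revenue
    for rate in rates:
        index *= 1 + rate
    print(f"{product}: 2025 revenue ≈ {index:.2f}x the 2023 level")
# DRAM ≈ 2.64x, NAND Flash ≈ 2.28x -- consistent with record-high revenue
```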

Micron Technology, Inc. Reports Results for the Third Quarter of Fiscal 2024

Micron Technology, Inc. (Nasdaq: MU) today announced results for its third quarter of fiscal 2024, which ended May 30, 2024.

Fiscal Q3 2024 highlights
  • Revenue of $6.81 billion versus $5.82 billion for the prior quarter and $3.75 billion for the same period last year
  • GAAP net income of $332 million, or $0.30 per diluted share
  • Non-GAAP net income of $702 million, or $0.62 per diluted share
  • Operating cash flow of $2.48 billion versus $1.22 billion for the prior quarter and $24 million for the same period last year
"Robust AI demand and strong execution enabled Micron to drive 17% sequential revenue growth, exceeding our guidance range in fiscal Q3," said Sanjay Mehrotra, President and CEO of Micron Technology. "We are gaining share in high-margin products like High Bandwidth Memory (HBM), and our data center SSD revenue hit a record high, demonstrating the strength of our AI product portfolio across DRAM and NAND. We are excited about the expanding AI-driven opportunities ahead, and are well positioned to deliver a substantial revenue record in fiscal 2025."

DRAM Prices Expected to Increase by 8-13% in Q3

TrendForce reports that a recovery in demand for general servers—coupled with an increased production share of HBM by DRAM suppliers—has led suppliers to maintain their stance on hiking prices. As a result, the ASP of DRAM in the third quarter is expected to continue rising, with an anticipated increase of 8-13%. The price of conventional DRAM is expected to rise by 5-10%, showing a slight contraction compared to the increase in the second quarter.

TrendForce notes that buyers were more conservative about restocking in the second quarter, and inventory levels on both the supplier and buyer sides did not change significantly. Looking ahead to the third quarter, there is still room for inventory replenishment for smartphones and CSPs, and the peak production season is about to commence. Consequently, smartphones and servers are expected to drive an increase in memory shipments in the third quarter.

AI Startup Etched Unveils Transformer ASIC Claiming 20x Speed-up Over NVIDIA H100

A new startup emerged out of stealth mode today to power the next generation of generative AI. Etched is a company that makes an application-specific integrated circuit (ASIC) to process "Transformers." The transformer is an architecture for designing deep learning models developed by Google and is now the powerhouse behind models like OpenAI's GPT-4o in ChatGPT, Anthropic Claude, Google Gemini, and Meta's Llama family. Etched set out to create an ASIC that processes only transformer models, resulting in a chip called Sohu. The claim is that Sohu outperforms NVIDIA's latest and greatest by an entire order of magnitude: where a server with eight NVIDIA H100 GPUs serves Llama-3 70B at 25,000 tokens per second, and an eight-GPU B200 "Blackwell" cluster pushes 43,000 tokens/s, an eight-chip Sohu cluster manages 500,000 tokens per second.
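
Taking these figures at face value, the implied speedups work out as follows. Note that the throughput numbers are Etched's own claims rather than independent benchmarks; the sketch simply divides them:

```python
# Relative throughput implied by the cited Llama-3 70B serving numbers
# (tokens per second for eight-accelerator configurations; Etched's claims).
tokens_per_second = {
    "8x NVIDIA H100": 25_000,
    "8x NVIDIA B200": 43_000,
    "8x Etched Sohu": 500_000,
}

sohu = tokens_per_second["8x Etched Sohu"]
for system, tps in tokens_per_second.items():
    if system != "8x Etched Sohu":
        print(f"Sohu vs {system}: {sohu / tps:.1f}x")
# 20.0x vs H100, ~11.6x vs B200 -- roughly an order of magnitude
```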

Why is this important? Not only does the ASIC outperform Hopper by 20x and Blackwell by 10x, but it also serves so many tokens per second that it enables an entirely new fleet of AI applications requiring real-time output. The Sohu architecture is claimed to be so efficient that 90% of its FLOPS can be used, while traditional GPUs achieve a 30-40% FLOPS utilization rate. That gap translates into inefficiency and wasted power, which Etched hopes to solve by building an accelerator dedicated to powering transformers (the "T" in GPT) at massive scale. Given that frontier model development costs more than one billion US dollars, and hardware costs are measured in tens of billions of US dollars, an accelerator dedicated to a specific application can help advance AI faster. AI researchers often say that "scale is all you need" (echoing the legendary "Attention Is All You Need" paper), and Etched wants to build on that.
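
A quick way to see how much of the claimed advantage utilization alone can explain: the sketch below normalizes peak FLOPS to 1.0 for all three cases, a simplifying assumption since Sohu's and the GPUs' raw throughput certainly differ:

```python
# Effective compute at the quoted FLOPS utilization rates, with peak
# FLOPS normalized to 1.0 for every device (illustrative assumption --
# raw peak throughput differs between Sohu and general-purpose GPUs).
utilization = {
    "Sohu (claimed)": 0.90,
    "typical GPU (low end)": 0.30,
    "typical GPU (high end)": 0.40,
}

for hardware, rate in utilization.items():
    print(f"{hardware}: effective compute = {rate:.2f} of peak")
# At equal peak FLOPS, 90% vs. 30-40% utilization is only a ~2.3-3x gap;
# the rest of the claimed advantage must come from higher raw throughput.
```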

Samsung, SK Hynix, and Micron Compete for GDDR7 Dominance

Competition among Samsung, SK Hynix, and Micron is intensifying, with a focus on enhancing processing speed and efficiency in graphics DRAM (GDDR) for AI accelerators and cryptocurrency mining. Compared with High Bandwidth Memory (HBM), GDDR7 offers faster per-pin data rates at a relatively low price. Since NVIDIA is expected to use next-generation GDDR7 in its GeForce RTX 50-series "Blackwell" GPUs, competition will likely be as strong as the demand. The pace of GDDR7 announcements over the past two years illustrates this.

In July 2023, Samsung Electronics developed the industry's first 32 Gbps GDDR7 DRAM, capable of processing up to 1.5 TB of data per second, a 1.4x speed increase and 20% better energy efficiency compared to GDDR6. In February 2024, Samsung demonstrated its first GDDR7 DRAM with a pin rate of 37 Gbps. On June 4, Micron launched its new GDDR7 at Computex 2024, with speeds up to 32 Gbps, a 60% increase in bandwidth, and a 50% improvement in energy efficiency over the previous generation. Shortly after, SK Hynix introduced a 40 Gbps GDDR7, also showcased at Computex 2024, doubling the previous generation's per-chip bandwidth to 128 GB/s and improving energy efficiency by 40%.
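
These headline figures follow from simple bandwidth arithmetic: aggregate bandwidth equals the per-pin rate times the bus width, divided by 8 bits per byte. The sketch below assumes the standard 32-bit interface per GDDR device and a 384-bit card-level bus (the configuration consistent with Samsung's ~1.5 TB/s figure); note that at 40 Gbps a 32-bit device works out to 160 GB/s, while 128 GB/s corresponds to a 32 Gbps pin rate:

```python
# Aggregate GDDR bandwidth: per-pin rate (Gbps) x bus width (bits) / 8.
def bandwidth_gb_s(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Return aggregate bandwidth in GB/s."""
    return pin_rate_gbps * bus_width_bits / 8

print(bandwidth_gb_s(32, 32))   # 128.0 GB/s -- one 32 Gbps device, 32-bit bus
print(bandwidth_gb_s(32, 384))  # 1536.0 GB/s ≈ 1.5 TB/s -- full 384-bit card
print(bandwidth_gb_s(40, 32))   # 160.0 GB/s -- one 40 Gbps device
```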

SK hynix Showcases Its New AI Memory Solutions at HPE Discover 2024

SK hynix has returned to Las Vegas to showcase its leading AI memory solutions at HPE Discover 2024, Hewlett Packard Enterprise's (HPE) annual technology conference. Held from June 17-20, HPE Discover 2024 features a packed schedule with more than 150 live demonstrations, as well as technical sessions, exhibitions, and more. This year, attendees can also benefit from three new curated programs on edge computing and networking, hybrid cloud technology, and AI. Under the slogan "Memory, The Power of AI," SK hynix is displaying its latest memory solutions at the event including those supplied to HPE. The company is also taking advantage of the numerous networking opportunities to strengthen its relationship with the host company and its other partners.

The World's Leading Memory Solutions Driving AI
SK hynix's booth at HPE Discover 2024 consists of three product sections and a demonstration zone, which together showcase the capabilities of its AI memory solutions. The first section features the company's groundbreaking memory solutions for AI, including HBM solutions. In particular, the industry-leading HBM3E has emerged as a core product for meeting the growing demands of AI systems due to its exceptional processing speed, capacity, and heat dissipation. A key solution from the company's CXL lineup, the CXL Memory Module-DDR5 (CMM-DDR5), is also on display in this section. In the AI era, where high performance and capacity are vital, CMM-DDR5 has gained attention for its ability to expand system bandwidth by up to 50% and capacity by up to 100% compared to systems equipped only with DDR5 DRAM.

US Government Considers Tighter Restrictions on China's Access to GAA Transistors and HBM Memory

According to sources familiar with the matter and reported by Bloomberg, the Biden administration is considering imposing further export controls to limit China's ability to acquire advanced semiconductor technologies crucial for developing AI systems. Gate-all-around (GAA) transistor technology and high-bandwidth memory (HBM) chips are at the center of the proposed restrictions. These cutting-edge components play a pivotal role in creating powerful AI accelerators. GAA transistors, a key feature in next-generation chips, promise substantial improvements in power efficiency and processing speeds. Meanwhile, HBM chips enable high-speed data transfer between a processor and memory. While existing sanctions prevent American firms from supplying Chinese companies with equipment for manufacturing leading-edge chips, concerns persist that China could still attain advanced capabilities through other means.

For instance, China's leading chipmaker, SMIC, could potentially integrate GAA transistors into its existing 7 nm process node, markedly enhancing performance. Access to HBM would further augment China's ability to develop AI accelerators on par with cutting-edge offerings from US firms. The deliberations within the Biden administration reflect a strategic effort to preserve America's technological edge by denying China access to key semiconductor innovations. However, implementing such stringent export controls is a delicate balancing act, as it risks heightening tensions and prompting Chinese retaliation. No final decision has been made, and officials continue weighing the proposed restrictions' pros and cons. Nonetheless, the discussions highlight the pivotal role that semiconductor technology plays in the great-power rivalry between the US and China, especially in the AI era.

SK hynix Showcases Its Next-Gen Solutions at Computex 2024

SK hynix presented its leading AI memory solutions at COMPUTEX Taipei 2024 from June 4-7. As one of Asia's premier IT shows, COMPUTEX Taipei 2024 welcomed around 1,500 global participants including tech companies, venture capitalists, and accelerators under the theme "Connecting AI". Making its debut at the event, SK hynix underlined its position as a first mover and leading AI memory provider through its lineup of next-generation products.

"Connecting AI" With the Industry's Finest AI Memory Solutions
Themed "Memory, The Power of AI," SK hynix's booth featured its advanced AI server solutions, groundbreaking technologies for on-device AI PCs, and outstanding consumer SSD products. HBM3E, the fifth generation of HBM1, was among the AI server solutions on display. Offering industry-leading data processing speeds of 1.18 terabytes (TB) per second, vast capacity, and advanced heat dissipation capability, HBM3E is optimized to meet the requirements of AI servers and other applications. Another technology which has become crucial for AI servers is CXL as it can increase system bandwidth and processing capacity. SK hynix highlighted the strength of its CXL portfolio by presenting its CXL Memory Module-DDR5 (CMM-DDR5), which significantly expands system bandwidth and capacity compared to systems only equipped with DDR5. Other AI server solutions on display included the server DRAM products DDR5 RDIMM and MCR DIMM. In particular, SK hynix showcased its tall 128-gigabyte (GB) MCR DIMM for the first time at an exhibition.

Blackwell Shipments Imminent, Total CoWoS Capacity Expected to Surge by Over 70% in 2025

TrendForce reports that NVIDIA's Hopper H100 began to see a reduction in shortages in 1Q24. The new H200 from the same platform is expected to gradually ramp in Q2, with the Blackwell platform entering the market in Q3 and expanding to data center customers in Q4. However, this year will still primarily focus on the Hopper platform, which includes the H100 and H200 product lines. The Blackwell platform—based on how far supply chain integration has progressed—is expected to start ramping up in Q4, accounting for less than 10% of the total high-end GPU market.

The die size of Blackwell platform chips like the B100 is twice that of the H100. As Blackwell becomes mainstream in 2025, the total capacity of TSMC's CoWoS is projected to grow by 150% in 2024 and by over 70% in 2025, with NVIDIA's demand occupying nearly half of this capacity. For HBM, the NVIDIA GPU platform's evolution sees the H100 primarily using 80 GB of HBM3, while the 2025 B200 will feature 288 GB of HBM3e—a 3-4 fold increase in capacity per chip. The three major manufacturers' expansion plans indicate that HBM production volume will likely double by 2025.
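
Two quick consistency checks on these figures, sketched below (the 2023 CoWoS baseline is normalized to 1.0 for illustration):

```python
# 1. HBM capacity per GPU: H100 (80 GB HBM3) vs. B200 (288 GB HBM3e).
print(f"Capacity ratio: {288 / 80:.1f}x")  # 3.6x -- the "3-4 fold" increase

# 2. Compounded CoWoS capacity growth: +150% in 2024, then +70% in 2025,
#    with 2023 capacity normalized to 1.0 (illustrative assumption).
capacity_index = 1.0 * (1 + 1.50) * (1 + 0.70)
print(f"2025 CoWoS capacity ≈ {capacity_index:.2f}x the 2023 level")  # 4.25x
```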

Micron DRAM Production Plant in Japan Faces Two-Year Delay to 2027

Last year, Micron unveiled plans to construct a cutting-edge DRAM factory in Hiroshima, Japan. However, the project has faced a significant two-year delay, pushing back the initial timeline for mass production of the company's most advanced memory products. Originally slated to begin mass production by the end of 2025, Micron now aims to have the new facility operational by 2027. The complexity of integrating extreme ultraviolet lithography (EUV) equipment, which enables the production of highly advanced chips, has contributed to the delay. The Hiroshima plant will produce next-generation 1-gamma DRAM and high-bandwidth memory (HBM) designed for generative AI applications. Micron expects the HBM market, currently dominated by rivals SK Hynix and Samsung, to experience rapid growth, with the company targeting a 25% market share by 2025.

The project is expected to cost between 600 and 800 billion Japanese yen ($3.8 to $5.1 billion), with Japan's government covering one-third of the cost. Micron has received a subsidy of up to 192 billion yen ($1.2 billion) for construction and equipment, as well as a subsidy covering half of the necessary funding to produce HBM at the plant, amounting to 25 billion yen ($159 million). Despite the delay, the increased investment in the factory reflects Micron's commitment to advancing its memory technology and capitalizing on the growing demand for HBM. One indication of that demand: customers have already pre-ordered 100% of Micron's HBM capacity for 2024, leaving none unsold.
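
As a sanity check on the subsidy figures, the two named subsidies should land near the one-third share the government is said to cover:

```python
# Sanity check: Japan's government covers one-third of the project cost,
# so the two named subsidies should fall inside that band.
cost_low, cost_high = 600, 800  # project cost, billion yen
subsidies = 192 + 25            # construction/equipment + HBM subsidy, billion yen

print(f"One-third of cost: {cost_low / 3:.0f}-{cost_high / 3:.0f} billion yen")
print(f"Named subsidies:   {subsidies} billion yen")
# 217 billion yen sits within the 200-267 billion yen band
```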

NVIDIA Reportedly Having Issues with Samsung's HBM3 Chips Running Too Hot

According to Reuters, NVIDIA is having some major issues with Samsung's HBM3 chips, as NVIDIA hasn't managed to finalize its validation of the chips. Reuters cites multiple sources familiar with the matter, and if they are correct, Samsung is having serious issues with its HBM3 chips. Not only do the chips run hot, which is itself a big issue given that NVIDIA already struggles to cool some of its higher-end products, but the power consumption is apparently not where it should be either. According to Reuters' sources, Samsung has been trying to get its HBM3 and HBM3E parts validated by NVIDIA since sometime in 2023, which suggests that there have been issues for at least six months, if not longer.

The sources claim there are issues with both the 8- and 12-layer stacks of Samsung's HBM3E parts, suggesting that NVIDIA might only be able to source parts from Micron and SK Hynix for now; the latter has been supplying HBM3 chips to NVIDIA since the middle of 2022 and HBM3E chips since March of this year. It's unclear whether this is a production issue at Samsung's DRAM fabs, a packaging-related issue, or something else entirely. The Reuters piece goes on to speculate that Samsung has not had enough time to develop its HBM parts compared to its competitors and that the product was rushed, but Samsung told the publication that it is a matter of customizing the product for its customers' needs. Samsung also said it is in "the process of optimising its products through close collaboration with customers," without naming the customer(s). Samsung issued a further statement saying that "claims of failing due to heat and power consumption are not true" and that testing was proceeding as expected.

HBM3e Production Surge Expected to Make Up 35% of Advanced Process Wafer Input by End of 2024

TrendForce reports that the three largest DRAM suppliers are increasing wafer input for advanced processes. Following a rise in memory contract prices, companies have boosted their capital investments, with capacity expansion focused on the second half of this year. Wafer input for 1-alpha nm and more advanced processes is expected to account for approximately 40% of total DRAM wafer input by the end of the year.

HBM production will be prioritized due to its profitability and increasing demand. However, limited yields of around 50-60% and a die size roughly 60% larger than that of conventional DRAM products mean a higher proportion of wafer input is required. Based on each company's TSV capacity, HBM is expected to account for 35% of advanced process wafer input by the end of this year, with the remaining wafer capacity used for LPDDR5(X) and DDR5 products.
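
The arithmetic behind HBM's outsized wafer appetite is straightforward: wafer input per good bit scales with die area and inversely with yield. The sketch below uses the ranges quoted above and assumes a 90% yield for mature conventional DRAM, a figure chosen purely for illustration:

```python
# Wafer input per good bit scales with die area and inversely with yield.
die_area_factor = 1.6      # HBM die ~60% larger than conventional DRAM
conventional_yield = 0.90  # assumed mature-node DRAM yield (illustrative)

for hbm_yield in (0.50, 0.60):
    relative_input = die_area_factor * conventional_yield / hbm_yield
    print(f"At {hbm_yield:.0%} HBM yield: ~{relative_input:.1f}x the wafer "
          f"input of conventional DRAM per good bit")
# ~2.4-2.9x -- why HBM claims a disproportionate share of wafer starts
```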

TSMC Unveils Next-Generation HBM4 Base Dies, Built on 12 nm and 5 nm Nodes

At its European Technology Symposium 2024, TSMC announced its readiness to manufacture next-generation HBM4 base dies using both 12 nm and 5 nm nodes. This development is expected to substantially improve the performance, power consumption, and logic density of HBM4 memory, catering to the demands of high-performance computing (HPC) and artificial intelligence (AI) applications. A key aspect of the new HBM4 standard is the shift from the traditional 1024-bit interface to an ultra-wide 2048-bit interface. This change will enable the integration of more logic and higher performance while reducing power consumption. TSMC's N12FFC+ and N5 processes will be used to produce these base dies, with the N12FFC+ process offering a cost-effective solution for achieving HBM4 performance and the N5 process providing even more logic and lower power consumption at HBM4 speeds.

The company is collaborating with major HBM memory partners, including Micron, Samsung, and SK Hynix, to integrate advanced nodes for HBM4 full-stack integration. TSMC's base die, fabricated using the N12FFC+ process, will be used to mount HBM4 memory stacks on a silicon interposer alongside system-on-chips (SoCs). This setup will enable the creation of 12-Hi (48 GB) and 16-Hi (64 GB) stacks with per-stack bandwidth exceeding 2 TB/s. TSMC's collaboration with EDA partners like Cadence, Synopsys, and Ansys ensures HBM4 channel signal integrity, thermal accuracy, and electromagnetic interference (EMI) compliance in the new HBM4 base dies. TSMC is also optimizing CoWoS-L and CoWoS-R for HBM4 integration, readying both packaging variants for volume manufacturing of massive high-performance chips.
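
The stack-capacity and bandwidth figures are internally consistent, as a short sketch shows; the 4 GB-per-layer value is inferred from the 12-Hi/48 GB and 16-Hi/64 GB pairings above:

```python
# HBM4 stack capacity and the pin rate implied by >2 TB/s per stack.
interface_bits = 2048  # HBM4's widened interface (vs. 1024 for HBM3E)
per_layer_gb = 4       # inferred from 12-Hi = 48 GB and 16-Hi = 64 GB

for layers in (12, 16):
    print(f"{layers}-Hi stack: {layers * per_layer_gb} GB")

target_tb_s = 2.0
min_pin_rate_gbps = target_tb_s * 1000 * 8 / interface_bits
print(f"Pin rate for {target_tb_s} TB/s: ~{min_pin_rate_gbps:.1f} Gbps")  # ~7.8
```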

DRAM Contract Prices for Q2 Adjusted to a 13-18% Increase; NAND Flash around 15-20%

TrendForce's latest forecasts reveal that contract prices for DRAM in the second quarter are expected to increase by 13-18%, while the NAND Flash forecast has been adjusted to a 15-20% increase. Only eMMC/UFS will see a smaller price increase of about 10%.

Before the April 3 earthquake, TrendForce had initially predicted that DRAM contract prices would see a seasonal rise of 3-8% and NAND Flash 13-18%, tapering significantly from Q1, as spot price indicators showed weakening price momentum and reduced transaction volumes. This was primarily due to subdued demand outside of AI applications, particularly with no signs of recovery in demand for notebooks and smartphones. Inventory levels were gradually increasing, especially among PC OEMs. Additionally, with DRAM and NAND Flash prices having risen for two to three consecutive quarters, buyers' willingness to accept further substantial price increases had diminished.

HBM Prices to Increase by 5-10% in 2025, Accounting for Over 30% of Total DRAM Value

Avril Wu, TrendForce Senior Research Vice President, reports that the HBM market is poised for robust growth, driven by significant pricing premiums and increased capacity needs for AI chips. HBM's unit sales price is several times higher than that of conventional DRAM and about five times that of DDR5. This pricing, combined with product iterations in AI chip technology that increase single-device HBM capacity, is expected to dramatically raise HBM's share in both the capacity and market value of the DRAM market from 2023 to 2025. Specifically, HBM's share of total DRAM bit capacity is estimated to rise from 2% in 2023 to 5% in 2024 and surpass 10% by 2025. In terms of market value, HBM is projected to account for more than 20% of the total DRAM market value starting in 2024, potentially exceeding 30% by 2025.
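
A simplified two-product model shows how a roughly 5x per-bit premium reconciles HBM's small bit share with its large value share; the premium and share figures below are those quoted above, with conventional DRAM priced at 1x per bit for illustration:

```python
# Two-product market: HBM at a 5x per-bit premium, conventional DRAM at 1x.
premium = 5.0  # HBM price per bit vs. conventional DRAM (DDR5 reference)

for year, bit_share in (("2024", 0.05), ("2025", 0.10)):
    hbm_value = premium * bit_share
    total_value = hbm_value + (1 - bit_share)
    print(f"{year}: HBM ≈ {hbm_value / total_value:.0%} of DRAM market value")
# 2024: ~21% (">20%"); 2025: ~36% ("potentially exceeding 30%")
```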

2024 sees HBM demand growth rate near 200%, set to double in 2025
Wu also pointed out that negotiations for 2025 HBM pricing have already commenced in 2Q24. However, due to the limited overall capacity of DRAM, suppliers have preliminarily increased prices by 5-10% to manage capacity constraints, affecting HBM2e, HBM3, and HBM3e. This early negotiation phase is attributed to three main factors: Firstly, HBM buyers maintain high confidence in AI demand prospects and are willing to accept continued price increases.

Huawei Aims to Develop Homegrown HBM Memory Amidst US Sanctions

According to The Information, in a strategic maneuver to circumvent the constraints imposed by US sanctions, Huawei is accelerating efforts to establish domestic production capabilities for High Bandwidth Memory (HBM) within China. This move addresses the limitations that have hampered the company's advancements in AI and high-performance computing (HPC) sectors. HBM technology plays a pivotal role in enhancing the performance of AI and HPC processors by mitigating memory bandwidth bottlenecks. Recognizing its significance, Huawei has assembled a consortium comprising memory manufacturers backed by the Chinese government and prominent semiconductor companies like Fujian Jinhua Integrated Circuit. This consortium is focused on advancing HBM2 memory technology, which is crucial for Huawei's Ascend-series processors for AI applications.

Huawei's initiative comes at a time when the company faces challenges in accessing HBM from external sources, impacting the availability of its AI processors in the market. Despite facing obstacles such as international regulations restricting the sale of advanced chipmaking equipment to China, Huawei's efforts underscore China's broader push for self-sufficiency in critical technologies essential for AI and supercomputing. By investing in domestic HBM production, Huawei aims to secure a stable supply chain for these vital components, reducing reliance on external suppliers. This strategic shift not only demonstrates Huawei's resilience in navigating geopolitical challenges but also highlights China's determination to strengthen its technological independence in the face of external pressures. As the global tech landscape continues to evolve, Huawei's move to develop homegrown HBM memory could have far-reaching implications for China's AI and HPC capabilities, positioning the country as a significant player in the memory field.

SK hynix Strengthens AI Memory Leadership & Partnership With Host at the TSMC 2024 Tech Symposium

SK hynix showcased its next-generation technologies and strengthened key partnerships at the TSMC 2024 Technology Symposium held in Santa Clara, California on April 24. At the event, the company displayed its industry-leading HBM AI memory solutions and highlighted its collaboration with TSMC involving the host's CoWoS advanced packaging technology.

TSMC, a global semiconductor foundry, invites its major partners to this annual conference in the first half of each year so they can share their new products and technologies. Attending the event under the slogan "Memory, the Power of AI," SK hynix received significant attention for presenting the industry's most powerful AI memory solution, HBM3E. The product recently demonstrated industry-leading performance, achieving per-pin input/output (I/O) transfer speeds of up to 10 gigabits per second (Gbps) in an AI system during a performance validation evaluation.

TSMC Celebrates 30th North America Technology Symposium with Innovations Powering AI with Silicon Leadership

TSMC today unveiled its newest semiconductor process, advanced packaging, and 3D IC technologies for powering the next generation of AI innovations with silicon leadership at the company's 2024 North America Technology Symposium. TSMC debuted its A16 technology, featuring leading nanosheet transistors with an innovative backside power rail solution for production in 2026, bringing greatly improved logic density and performance. TSMC also introduced its System-on-Wafer (TSMC-SoW) technology, an innovative solution that brings revolutionary performance to the wafer level to address future AI requirements for hyperscaler data centers.

This year marks the 30th anniversary of TSMC's North America Technology Symposium; more than 2,000 people attended the event, up from fewer than 100 attendees 30 years ago. The North America Technology Symposium in Santa Clara, California kicks off TSMC's Technology Symposiums around the world in the coming months. The symposium also features an "Innovation Zone," designed to highlight the technology achievements of TSMC's emerging start-up customers.

SK hynix to Produce DRAM from M15X in Cheongju

SK hynix Inc. announced today that it plans to expand production capacity for next-generation DRAM, including HBM, a core component of AI infrastructure, in response to the rapidly increasing demand for AI semiconductors. With the plan approved by its board of directors, the company will build the M15X fab in Cheongju, North Chungcheong Province, as a new DRAM production base, investing about 5.3 trillion won in fab construction.

The company plans to start construction at the end of April, aiming to complete the fab in November 2025 for early mass production. With a gradual increase in equipment investment planned, the total investment in the new production base will exceed 20 trillion won over the long term. As a global leader in AI memory, SK hynix expects the expanded investment to help revitalize the domestic economy while reaffirming Korea's reputation as a semiconductor powerhouse.

Demand for NVIDIA's Blackwell Platform Expected to Boost TSMC's CoWoS Total Capacity by Over 150% in 2024

NVIDIA's next-gen Blackwell platform, which includes B-series GPUs and integrates NVIDIA's own Grace Arm CPU in models such as the GB200, represents a significant development. TrendForce points out that the GB200 and its predecessor, the GH200, both feature a combined CPU+GPU solution, primarily equipped with the NVIDIA Grace CPU and H200 GPU. However, the GH200 accounted for only approximately 5% of NVIDIA's high-end GPU shipments. The supply chain has high expectations for the GB200, with projections suggesting that its shipments could exceed a million units in 2025, potentially making up nearly 40-50% of NVIDIA's high-end GPU market.

Although NVIDIA plans to launch products such as the GB200 and B100 in the second half of this year, upstream wafer packaging will need to adopt more complex and high-precision CoWoS-L technology, making the validation and testing process time-consuming. Additionally, more time will be required to optimize the B-series for AI server systems in aspects such as network communication and cooling performance. It is anticipated that the GB200 and B100 products will not see significant production volumes until 4Q24 or 1Q25.

DRAM Manufacturers Gradually Resume Production, Impact on Total Q2 DRAM Output Estimated to Be Less Than 1%

In the wake of the earthquake that struck Taiwan on April 3, TrendForce undertook an in-depth analysis of its effects on the DRAM industry, uncovering a sector that has shown remarkable resilience and faced minimal interruptions. Despite some damage and the necessity for inspections or disposal of wafers among suppliers, the facilities' strong earthquake preparedness has kept the overall impact to a minimum.

Leading DRAM producers, including Micron, Nanya, PSMC, and Winbond, had all returned to full operational status by April 8th. In particular, Micron's progression to cutting-edge processes, specifically the 1-alpha and 1-beta nm technologies, is anticipated to significantly alter the landscape of DRAM bit production. In contrast, other Taiwanese DRAM manufacturers are still working with 38 and 25 nm processes, contributing less to total output. TrendForce estimates that the earthquake's effect on DRAM production for the second quarter will be limited to less than 1%.

SK hynix Signs Investment Agreement of Advanced Chip Packaging with Indiana

SK hynix Inc., the world's leading producer of High-Bandwidth Memory (HBM) chips, announced today that it will invest an estimated $3.87 billion in West Lafayette, Indiana to build an advanced packaging fabrication and R&D facility for AI products. The project, the first of its kind in the United States, is expected to drive innovation in the nation's AI supply chain, while bringing more than a thousand new jobs to the region.

The company held an investment agreement ceremony with officials from the State of Indiana, Purdue University, and the U.S. government at Purdue University in West Lafayette on the 3rd, where it officially announced the plan. Participants at the event included Governor of Indiana Eric Holcomb, Senator Todd Young, Director of the White House Office of Science and Technology Policy Arati Prabhakar, Assistant Secretary of Commerce Arun Venkataraman, Indiana Secretary of Commerce David Rosenberg, Purdue University President Mung Chiang, Chairman of the Purdue Research Foundation Mitch Daniels, Mayor of West Lafayette Erin Easter, Ambassador of the Republic of Korea to the United States Hyundong Cho, Consul General of the Republic of Korea in Chicago Junghan Kim, SK Vice Chairman Jeong Joon Yu, SK hynix CEO Kwak Noh-Jung, and SK hynix Head of Package & Test Choi Woojin.

Chinese Company Revives AMD Vega GPU in a Unique NAS Motherboard

Chinese company Topton has brought new life to the AMD Vega graphics architecture by integrating it into a Network Attached Storage (NAS) motherboard. The Topton N9 NAS motherboard features the Intel Core i7-8705G, a unique chip that combines Intel CPU cores with AMD's RX Vega M GL graphics. The Intel Core i7-8705G, initially released in 2018, is an unusual choice for a NAS system. This 14 nm processor features four cores, eight threads, and a boost clock of up to 4.1 GHz. What sets it apart is the integrated AMD RX Vega M GL GPU with 20 Compute Units and 4 GB of HBM2 memory.

The Topton N9 NAS motherboard is designed for the 17×17 cm ITX form factor and offers a range of features, including support for up to 64 GB of DDR4 RAM, M.2 NVMe/SATA and SATA 3.0 storage, eight Intel i226-V controllers for 2.5 Gbit networking, USB 3.0, USB Type-C, and HDMI 2.0 connectivity. While the Intel Core i7-8705G may not be the most obvious choice for a NAS system, the Topton N9 motherboard demonstrates how this unique processor can be repurposed to provide affordable computing power. The integrated AMD RX Vega graphics offer capabilities beyond typical NAS requirements, making this motherboard suitable for various applications, such as home firewalls and routers. The collaboration between Intel and AMD on the Kaby Lake-G processors was a rare occurrence in the industry. The Topton N9 starts at $288.56 without a fan/cooler; adding roughly $20 for one brings the price to $308.46.