News Posts matching #Memory

KLEVV Introduces CRAS V RGB ASUS ROG-certified DDR5 Memory Kits

KLEVV, the premier consumer memory and storage brand introduced by Essencore, proudly presents the all-new CRAS V RGB ROG-certified DDR5 Gaming Memory. Engineered through a close collaboration between KLEVV and the ROG (Republic of Gamers) expert team, the CRAS V RGB ROG-certified DDR5 memory is purpose-built to unleash mind-bending performance with supreme stability. Available in 16 GB x2 and 24 GB x2 kits with a base speed of DDR5-7200, it is tailored for hardcore gamers and PC enthusiasts who demand the best.

Furthermore, when paired with a compatible ROG motherboard, the CRAS V RGB ROG-certified DDR5 can achieve blazing speeds of DDR5-7400, enabled by ROG's superior hardware synergy and software tuning, making it the ideal choice for high-end AAA games and power-hungry software. Inspired by KLEVV's award-winning CRAS V RGB series, which won the Red Dot and iF Design Awards in 2024, this special edition memory stands out with its unique and striking duotone color design. Rocking a sleek black-and-white aesthetic, complemented by an eye-catching advanced RGB light array, the CRAS V RGB ROG-certified DDR5 memory perfectly matches an all-ROG system that exudes undeniable power at a glance.
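
For context, here is a minimal sketch of the peak dual-channel bandwidth these data rates imply, assuming a standard 128-bit (two-channel) desktop memory configuration; the figures are illustrative calculations, not KLEVV specifications.

```python
# Peak dual-channel bandwidth implied by the kit's data rates, assuming a standard
# 128-bit (two-channel) desktop configuration. Illustrative only, not a KLEVV figure.
def dual_channel_bandwidth_gb_s(transfer_rate_mt_s: int, bus_width_bits: int = 128) -> float:
    return transfer_rate_mt_s * bus_width_bits / 8 / 1000

print(dual_channel_bandwidth_gb_s(7200))   # 115.2 GB/s at the DDR5-7200 base speed
print(dual_channel_bandwidth_gb_s(7400))   # 118.4 GB/s at DDR5-7400 on a compatible ROG board
```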

Intel Arc "Battlemage" Graphics Card with 12 GB of 19 Gbps GDDR6 Memory Surfaces

A prototype discrete GPU based on the Intel Arc "Battlemage" graphics architecture was spotted in a public boot log by the Intel GFX Continuous Integration group, which is likely testing a prototype board with a Linux driver. The OS loads the driver at boot, which prints a few messages in the boot log, including an explicit mention of "Battlemage" as BMG. The log also lists a memory size of 12 GB, a memory speed of 19 Gbps, and a 192-bit memory bus width.

It is hence likely that this is a mid-tier GPU from the series, with the top-tier one probably featuring a 256-bit memory interface. This aligns with Intel's strategy of targeting the bulk of the gaming graphics market instead of gunning for the enthusiast class. The new "Battlemage" architecture is expected to make Intel competitive against contemporary rival architectures in the segment, such as NVIDIA "Ada" and AMD RDNA 3, although it remains to be seen whether it can square off against the next-generation NVIDIA "Blackwell" and AMD RDNA 4.
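
The reported figures are enough to estimate peak memory bandwidth; the sketch below applies the standard bandwidth formula to the boot-log values and, as an assumption, to a hypothetical 256-bit top-tier configuration at the same speed.

```python
# Peak memory bandwidth from the boot-log figures:
# bandwidth (GB/s) = per-pin data rate (Gbps) x bus width (bits) / 8 bits per byte.
def gddr_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

print(gddr_bandwidth_gb_s(19, 192))   # 456.0 GB/s for the 12 GB, 192-bit prototype
print(gddr_bandwidth_gb_s(19, 256))   # 608.0 GB/s for a hypothetical 256-bit top-tier card
```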

MSI Announces New Features and Support for AMD Ryzen 9000 Series Processors

MSI is excited to announce support for the latest AMD Ryzen 9000 Series processors, set to debut on the AM5 platform. Powered by advanced 4 nm CPU process technology, the Ryzen 9000 Series promises to revolutionize the computing landscape with unmatched performance, efficiency, and versatility for gamers and content creators. At launch on August 8th, the AMD Ryzen 7 9700X and Ryzen 5 9600X will be available, while the Ryzen 9 9950X and 9900X will follow on August 15th. These processors will feature up to 16 cores and 32 threads, with a theoretical maximum boost clock speed of 5.7 GHz, 64 MB of L3 cache, and a maximum TDP of 170 W.

The AMD Ryzen 9000 Series will also support PCIe 5.0 for the GPU and M.2 slots while enhancing DDR5 memory speeds. Notably, the AMD Ryzen 7 9700X offers approximately 12% better overall performance than the first-gen AMD 3D V-Cache CPU. All of these processors are compatible with the AM5 socket, and existing AMD 600 Series motherboards can support Ryzen 9000 Series processors seamlessly after updating to the latest BIOS, available on MSI's product support page.

Samsung Electronics Begins Mass Production of Industry's Thinnest LPDDR5X DRAM Packages

Samsung Electronics, the world leader in advanced memory technology, today announced it has begun mass production of the industry's thinnest 12 nanometer (nm)-class 12-gigabyte (GB) and 16 GB LPDDR5X DRAM packages, solidifying its leadership in the low-power DRAM market. Leveraging its extensive expertise in chip packaging, Samsung is able to deliver ultra-slim LPDDR5X DRAM packages that create additional space within mobile devices, facilitating better airflow. This supports easier thermal control, a factor that is becoming increasingly critical for high-performance applications with advanced features such as on-device AI.

"Samsung's LPDDR5X DRAM sets a new standard for high-performance on-device AI solutions, offering not only superior LPDDR performance but also advanced thermal management in an ultra-compact package," said YongCheol Bae, Executive Vice President of Memory Product Planning at Samsung Electronics. "We are committed to continuous innovation through close collaboration with our customers, delivering solutions that meet the future needs of the low-power DRAM market."

NEO Semiconductor Announces 3D X-AI Chip as HBM Successor

NEO Semiconductor, a leading developer of innovative technologies for 3D NAND flash memory and 3D DRAM, announced today the development of its 3D X-AI chip technology, targeted to replace the current DRAM chips inside high bandwidth memory (HBM) to solve data bus bottlenecks by enabling AI processing in 3D DRAM. 3D X-AI can reduce the huge amount of data transferred between HBM and GPUs during AI workloads. NEO's innovation is set to revolutionize the performance, power consumption, and cost of AI Chips for AI applications like generative AI.

AI Chips with NEO's 3D X-AI technology can achieve:
  • 100X Performance Acceleration: contains 8,000 neuron circuits to perform AI processing in 3D memory.
  • 99% Power Reduction: minimizes the requirement of transferring data to the GPU for calculation, reducing power consumption and heat generation by the data bus.
  • 8X Memory Density: contains 300 memory layers, allowing HBM to store larger AI models.

Corsair Gaming Reports Q2 2024 Financial Results, 100 People Getting Fired

Corsair Gaming, Inc. (Nasdaq: CRSR) ("Corsair" or the "Company"), a leading global provider and innovator of high-performance products for gamers, streamers, content-creators, and gaming PC builders, today announced financial results for the second quarter ended June 30, 2024, and its updated financial outlook for the full year 2024.

Second Quarter 2024 Select Financial Metrics
  • Net revenue was $261.3 million compared to $325.4 million in the second quarter of 2023, a decrease of 19.7%. Gaming Components and Systems segment net revenue was $167.1 million compared to $246.7 million in the second quarter of 2023, while Gamer and Creator Peripherals segment net revenue was $94.2 million compared to $78.8 million in the second quarter of 2023.
  • Net loss attributable to common shareholders was $29.6 million, or a net loss of $0.28 per diluted share, compared to net income of $1.1 million, or a net income of $0.01 per diluted share, in the second quarter of 2023.
  • Adjusted net loss was $6.8 million, or an adjusted net loss of $0.07 per diluted share, compared to adjusted net income of $9.8 million, or an adjusted net income of $0.09 per diluted share, in the second quarter of 2023.
  • Adjusted EBITDA was a loss of $1.2 million, compared to adjusted EBITDA of $17.8 million in the second quarter of 2023.
  • Cash and restricted cash was $94.6 million as of June 30, 2024.
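
As a quick arithmetic cross-check of the figures above, the headline decline and the segment split are internally consistent (a minimal sketch; all inputs are from the release, in millions of dollars).

```python
# Cross-check the reported year-over-year change and segment split (figures in $M).
q2_2024, q2_2023 = 261.3, 325.4
print(round((q2_2023 - q2_2024) / q2_2023 * 100, 1))   # 19.7% decrease, as reported

components, peripherals = 167.1, 94.2
print(round(components + peripherals, 1))              # 261.3, matching total net revenue
```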

Samsung Electronics Launches Enhanced 1TB microSD Cards

Samsung Electronics, the world leader in advanced memory technology, today announced the release of its 1-terabyte (TB) high-capacity microSD cards, the PRO Plus and EVO Plus. By adopting Samsung's eighth-generation V-NAND (V8) technology, the PRO Plus and EVO Plus boast enhanced performance and high capacity, ideal for content creators and tech enthusiasts who need quick file transfers and ample storage for everyday use across their devices.

"Creators and tech enthusiasts are increasingly using portable devices such as smartphones and handheld gaming devices to store data that demands high performance and high capacity," said Hangu Sohn, Vice President of the Memory Brand Product Biz Team at Samsung Electronics. "The new high-capacity microSD cards, PRO Plus and EVO Plus, are a response to the demand for storing large amounts of high-quality data in a reliable and secure way."

MaxLinear to Showcase Panther III at Future of Memory and Storage 2024 Trade Show

MaxLinear, Inc., a leading provider of data storage acceleration solutions for enterprise and data center applications, today announced it will demonstrate the advanced compression, encryption, and security performance of its storage acceleration solution, Panther III, at the Future of Memory and Storage (FMS) 2024 trade show from August 6-8, 2024. The demos will show that Panther III can achieve up to 40 times more throughput, up to 190 times better latency, and up to 1000 times less CPU utilization than a software-only solution, leading to significant cost savings in terms of flash drives and needed CPU cores.

MaxLinear's Panther III creates a bold new product category for maximizing the performance of data storage systems - a comprehensive, all-in-one "storage accelerator." Unlike standalone encryption and/or compression solutions, MaxLinear's Panther III consolidates a comprehensive suite of storage acceleration functions, including compression, deduplication, encryption, data protection, and real-time validation, in a single hardware-based solution. Panther III is engineered to offload and expedite specific data processing tasks, thus providing a significant performance boost, storage cost savings, and energy savings compared to traditional software-only, FPGA, and other competing solutions.

Marvell Introduces Breakthrough Structera CXL Product Line to Address Server Memory Bandwidth and Capacity Challenges in Cloud Data Centers

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, today launched the Marvell Structera product line of Compute Express Link (CXL) devices that enable cloud data center operators to overcome memory performance and scaling challenges in general-purpose servers.

To address memory-intensive applications, data center operators add extra servers to get higher memory bandwidth and higher memory capacity. The compute capabilities from the added processors are typically not utilized for these applications, making the servers inefficient from cost and power perspectives. The CXL industry standard addresses this challenge by enabling new architectures that can efficiently add memory to general-purpose servers.

SK hynix Launches Its New GDDR7 Graphics Memory

SK hynix Inc. announced today that it has introduced the industry's best-performing GDDR7, a next-generation graphics memory product. The product, whose development was completed in March, comes amid growing interest from global customers in the AI space in DRAM that delivers both the specialized performance needed for graphics processing and fast speeds. The company said that it will start volume production in the third quarter.

The new product comes with an operating speed of 32 Gbps, a 60% improvement over the previous generation, and the speed can reach up to 40 Gbps depending on the use environment. When adopted in high-end graphics cards, the product can process more than 1.5 TB of data per second, equivalent to 300 Full-HD movies (5 GB each) every second.
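
The card-level 1.5 TB/s figure follows directly from the per-pin speed once a bus width is assumed; the sketch below assumes a 384-bit bus, which is typical of high-end graphics cards but is not stated by SK hynix.

```python
# Rough card-level bandwidth estimate for 32 Gbps GDDR7.
# The 384-bit bus width is an assumption (typical of high-end cards), not an SK hynix figure.
per_pin_gbps = 32
bus_width_bits = 384
bandwidth_gb_s = per_pin_gbps * bus_width_bits / 8
print(bandwidth_gb_s)               # 1536.0 GB/s, i.e. roughly 1.5 TB/s
print(bandwidth_gb_s / 5)           # ~307 Full-HD movies of 5 GB each, every second
```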

Micron Introduces 9550 NVMe Data Center SSD

Micron Technology, Inc., today announced availability of the Micron 9550 NVMe SSD - the world's fastest data center SSD and industry leader in AI workload performance and power efficiency. The Micron 9550 SSD showcases Micron's deep expertise and innovation by integrating its own controller, NAND, DRAM and firmware into one world-class product. This integrated solution enables class-leading performance, power efficiency and security features for data center operators.

The Micron 9550 SSD delivers best-in-class performance with 14.0 GB/s sequential reads and 10.0 GB/s sequential writes to provide up to 67% better performance over similar competitive SSDs and enables industry-leading performance for demanding workloads such as AI. In addition, its random reads of 3,300 KIOPS are up to 35% better and random writes of 400 KIOPS are up to 33% better than competitive offerings.
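
For a sense of scale, the random-I/O ratings can be converted into approximate throughput; the sketch below assumes 4 KiB transfers, the usual unit for random I/O specifications, which the release does not state explicitly.

```python
# Convert the quoted random-I/O ratings into approximate throughput,
# assuming 4 KiB transfers (a common convention, not stated in the release).
def kiops_to_gb_s(kiops: float, io_size_kib: int = 4) -> float:
    return kiops * 1_000 * io_size_kib * 1024 / 1e9

print(round(kiops_to_gb_s(3300), 1))   # ~13.5 GB/s of 4 KiB random reads
print(round(kiops_to_gb_s(400), 1))    # ~1.6 GB/s of 4 KiB random writes
```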

JEDEC Unveils Plans for DDR5 MRDIMM and LPDDR6 CAMM Standards to Propel High-Performance Computing and AI

JEDEC Solid State Technology Association, the global leader in the development of standards for the microelectronics industry, proudly announces upcoming standards for advanced memory modules designed to power the next generation of high-performance computing and AI applications. JEDEC today revealed key details about its upcoming standards for DDR5 Multiplexed Rank Dual Inline Memory Modules (MRDIMM) and a next-generation Compression-Attached Memory Module (CAMM) for LPDDR6. The new MRDIMM and CAMM for LPDDR6 are set to revolutionize the industry with unparalleled bandwidth and memory capacity.

The JEDEC MRDIMM standard is set to deliver up to twice the peak bandwidth of native DRAM, enabling applications to surpass current data rates and achieve new levels of performance. It maintains the same capacity and the same reliability, availability, and serviceability (RAS) features as JEDEC RDIMMs. The committee aims to double the bandwidth by raising the pin speed to 12.8 Gbps. MRDIMM is envisioned to support more than two ranks and is being designed to utilize standard DDR5 DIMM components, ensuring compatibility with conventional RDIMM systems.
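
A minimal sketch of what the doubling means in per-module terms, assuming the standard 64-bit DDR5 data width (two 32-bit subchannels, ECC excluded); the 6.4 Gbps baseline is used here only as a representative RDIMM speed.

```python
# Peak per-module bandwidth at a given effective pin speed, assuming the standard
# 64-bit DDR5 data width (two 32-bit subchannels, ECC bits excluded).
def module_bandwidth_gb_s(pin_speed_gbps: float, data_bits: int = 64) -> float:
    return pin_speed_gbps * data_bits / 8

print(module_bandwidth_gb_s(6.4))    # 51.2 GB/s for a DDR5-6400 RDIMM
print(module_bandwidth_gb_s(12.8))   # 102.4 GB/s at the 12.8 Gbps MRDIMM target
```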

Memory Industry Revenue Expected to Reach Record High in 2025 Due to Increasing Average Prices and the Rise of HBM and QLC

TrendForce's latest report on the memory industry reveals that DRAM and NAND Flash revenues are expected to see significant increases of 75% and 77%, respectively, in 2024, driven by increased bit demand, an improved supply-demand structure, and the rise of high-value products like HBM.

Furthermore, industry revenues are projected to continue growing in 2025, with DRAM expected to increase by 51% and NAND Flash by 29%, reaching record highs. This growth is anticipated to revive capital expenditures and boost demand for upstream raw materials, although it will also increase cost pressure for memory buyers.
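
Indexing 2023 revenue to 1.0, the two years of projected growth compound as in the sketch below (a simple illustration of the percentages above, not additional TrendForce data).

```python
# Compound the projected growth rates, indexing 2023 revenue to 1.0.
dram_2024, dram_2025 = 1.75, 1.51    # +75% in 2024, +51% in 2025
nand_2024, nand_2025 = 1.77, 1.29    # +77% in 2024, +29% in 2025

print(round(dram_2024 * dram_2025, 2))   # ~2.64x 2023 DRAM revenue by 2025
print(round(nand_2024 * nand_2025, 2))   # ~2.28x 2023 NAND Flash revenue by 2025
```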

G.SKILL Announces Trident Z5 Royal Neo Series DDR5 Memory with AMD EXPO, Up to DDR5-8000

G.SKILL International Enterprise Co., Ltd., the world's leading brand of performance overclock memory and PC components, is excited to announce the Trident Z5 Royal Neo series DDR5 memory, designed for AMD AM5 platforms. Trident Z5 Royal Neo memory comes with AMD EXPO (Extended Profiles for Overclocking) technology and will be available in specifications of up to DDR5-8000 CL38-48-48-127 in 32 GB (2x16 GB) and 48 GB (2x24 GB) kit capacities. With a mirrored finish in gold or silver, this new memory series is the ideal DDR5 solution for enthusiasts and overclockers building a stylish, high-performance AMD PC system.
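
The quoted timings translate into absolute latency in the usual way; the sketch below computes the first-word CAS latency for the flagship kit and, for comparison, for a common DDR5-6000 CL30 configuration (the comparison kit is an illustrative assumption, not a G.SKILL product claim).

```python
# First-word CAS latency in nanoseconds: CL cycles divided by the memory clock,
# which is half the DDR transfer rate.
def cas_latency_ns(transfer_rate_mt_s: int, cl: int) -> float:
    memory_clock_mhz = transfer_rate_mt_s / 2
    return cl / memory_clock_mhz * 1000

print(cas_latency_ns(8000, 38))   # 9.5 ns for the DDR5-8000 CL38 kit
print(cas_latency_ns(6000, 30))   # 10.0 ns for a typical DDR5-6000 CL30 kit, for comparison
```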

Luxury-Class DDR5 with AMD EXPO
Following the luxury-class Trident Z5 Royal design, with a CNC-cut aluminum heatspreader in mirrored-finish gold or silver and a crystalline light bar for magnificent RGB lighting, the Trident Z5 Royal Neo is designed for AMD AM5 platforms with AMD EXPO profile support. Customizable RGB lighting is supported through the G.SKILL Trident Z Lighting Control software or third-party motherboard lighting software.

Samsung Planning for CXL 2.0 DRAM Mass Production Later This Year

Samsung Electronics Co. is putting a lot of effort into securing its involvement in next-generation memory technology, CXL (Compute Express Link). In a media briefing on Thursday, Jangseok Choi, vice president of Samsung's new business planning team, announced plans to mass-produce 256 GB DRAM supporting CXL 2.0 by the end of this year. CXL technology promises to significantly enhance the efficiency of high-performance server systems by providing a unified interface for accelerators, DRAM, and storage devices used with CPUs and GPUs.

The company projects that CXL technology will increase memory capacity per server by eight to ten times, marking a significant leap in computing power. Samsung's long investment in CXL development is now in its final stages, with the company currently testing products with partners for performance verification; Samsung also recently established the industry's first CXL infrastructure certified by Red Hat. "We expect the CXL market to start blooming in the second half and explosively grow from 2028," Choi stated, highlighting the technology's potential to expand memory capacity and bandwidth far beyond current limitations.
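
To illustrate where an eight-to-ten-fold capacity gain could come from, here is a hypothetical configuration; the baseline DIMM population and the number of CXL expanders are assumptions for illustration, with only the 256 GB module capacity taken from Samsung's announcement.

```python
# Hypothetical illustration of the "eight to ten times" memory-per-server projection.
# Baseline and expander counts are assumptions; only the 256 GB module is Samsung's figure.
baseline_dimms = 8                 # assumed: 8 populated DIMM slots
dimm_capacity_gb = 64              # assumed: 64 GB RDIMMs
baseline_gb = baseline_dimms * dimm_capacity_gb            # 512 GB direct-attached DRAM

cxl_modules = 16                   # assumed: 16 CXL expander modules behind a switch
cxl_module_gb = 256                # Samsung's announced CXL 2.0 DRAM module capacity
expanded_gb = baseline_gb + cxl_modules * cxl_module_gb    # 4,608 GB total

print(expanded_gb / baseline_gb)   # 9.0x, within the projected eight-to-ten-fold range
```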

Micron Technology Unveils MRDIMMs to Scale Up Memory Densities on Servers

Micron Technology, Inc., today announced it is now sampling its multiplexed rank dual inline memory modules (MRDIMMs). The MRDIMMs will enable Micron customers to run increasingly demanding workloads and obtain maximum value out of their compute infrastructure. For applications requiring more than 128 GB of memory per DIMM slot, Micron MRDIMMs outperform current TSV RDIMMs by enabling the highest bandwidth and largest capacity with the lowest latency and improved performance per watt to accelerate memory-intensive virtualized multi-tenant, HPC, and AI data center workloads. The new memory offering is the first generation in the Micron MRDIMM family and will be compatible with Intel Xeon 6 processors.

"Micron's latest innovative main memory solution, MRDIMM, delivers the much-needed bandwidth and capacity at lower latency to scale AI inference and HPC applications on next-generation server platforms," said Praveen Vaidyanathan, vice president and general manager of Micron's Compute Products Group. "MRDIMMs significantly lower the amount of energy used per task while offering the same reliability, availability and serviceability capabilities and interface as RDIMMs, thus providing customers a flexible solution that scales performance. Micron's close industry collaborations ensure seamless integration into existing server infrastructures and smooth transitions to future compute platforms."

Samsung Completes Validation of Industry's Fastest LPDDR5X for Use With MediaTek's Flagship Mobile Platform

Samsung Electronics, the world leader in advanced memory technology, today announced it has successfully completed verification of the industry's fastest 10.7 gigabit-per-second (Gbps) Low Power Double Data Rate 5X (LPDDR5X) DRAM for use on MediaTek's next-generation Dimensity platform.

The 10.7 Gbps operation speed verification was carried out using Samsung's LPDDR5X 16-gigabyte (GB) package on MediaTek's upcoming flagship Dimensity 9400 System on Chip (SoC), scheduled to be released in the second half of this year. The two companies have closely collaborated to complete the verification within just three months.
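
As a rough sketch of what 10.7 Gbps per pin means at the package level, the calculation below assumes a 64-bit-wide interface, which is typical of flagship-phone memory subsystems but is not stated in the announcement.

```python
# Peak package bandwidth of 10.7 Gbps LPDDR5X, assuming a 64-bit-wide interface
# (typical for flagship phones; the width is an assumption, not from the release).
per_pin_gbps = 10.7
bus_width_bits = 64
print(round(per_pin_gbps * bus_width_bits / 8, 1))   # 85.6 GB/s
```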

Quinas Receives £1.1M to Enable Industrialisation of ULTRARAM

An Innovate UK project worth £1.1M has been awarded to Lancaster University spinout firm Quinas, global semiconductor company IQE, and Lancaster and Cardiff Universities. Quinas will coordinate the ambitious project, which is the first step towards volume production of ULTRARAM, the universal computer memory invented by Lancaster Physics Professor Manus Hayne.

ULTRARAM has extraordinary properties, combining the non-volatility of a data storage memory, like flash, with the speed, energy efficiency, and endurance of a working memory, like DRAM. Most of the funding for the one-year project will be spent at IQE, which will scale up the manufacture of the compound semiconductor layers from a Lancaster University process to an industrial process at the Cardiff-based firm. This will involve IQE developing, for the first time, advanced capability for the growth of the compound semiconductors gallium antimonide and aluminium antimonide. The project follows significant investment to boost the UK semiconductor industry and the establishment of the world's first compound semiconductor cluster in South Wales.

JEDEC Approaches Finalization of HBM4 Standard, Eyes Future Innovations

JEDEC Solid State Technology Association, the global leader in the development of standards for the microelectronics industry, today announced it is nearing completion of the next version of its highly anticipated High Bandwidth Memory (HBM) DRAM standard: HBM4. Designed as an evolutionary step beyond the currently published HBM3 standard, HBM4 aims to further enhance data processing rates while maintaining essential features such as higher bandwidth, lower power consumption, and increased capacity per die and/or stack. These advancements are vital for applications that require efficient handling of large datasets and complex calculations, including generative artificial intelligence (AI), high-performance computing, high-end graphics cards, and servers.

HBM4 is set to introduce a doubled channel count per stack compared to HBM3, with a larger physical footprint. To support device compatibility, the standard ensures that a single controller can work with both HBM3 and HBM4 if needed. Different configurations will require different interposers to accommodate the differing footprints. HBM4 will specify 24 Gb and 32 Gb layers, with options for 4-high, 8-high, 12-high, and 16-high TSV stacks. The committee has reached initial agreement on speed bins up to 6.4 Gbps, with discussion ongoing for higher frequencies.
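
A minimal sketch of the per-stack capacity and bandwidth these draft figures imply; the 2048-bit stack interface is an assumption (double HBM3's 1024 bits, matching the doubled channel count), since the standard is not yet final.

```python
# Per-stack capacity and bandwidth implied by the draft HBM4 figures.
# The 2048-bit interface width is an assumption; the standard is not yet finalized.
def stack_capacity_gb(layer_gbit: int, stack_height: int) -> float:
    return layer_gbit * stack_height / 8

print(stack_capacity_gb(24, 12))   # 36.0 GB for 24 Gb layers in a 12-high stack
print(stack_capacity_gb(32, 16))   # 64.0 GB for 32 Gb layers in a 16-high stack

pin_speed_gbps, interface_bits = 6.4, 2048
print(round(pin_speed_gbps * interface_bits / 8, 1))   # ~1638 GB/s per stack at the 6.4 Gbps bin
```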

Nanya and Winbond Boost Memory Production Amid Rising Demand and Prices

As memory prices and volumes increase, manufacturers Nanya and Winbond have ended the production cuts they implemented last year, with production now back to normal levels. Market research agencies and supply chain analysts indicate that memory shipments are expected to continue recovering in Q3 2024. Currently, memory factories are operating at a capacity utilization rate of 90% to full capacity, significantly higher than the 60% to 70% utilization rate of wafer foundries running mature processes. Last year, Nanya reduced its production volume by up to 20%. This year, output has gradually increased, reaching 70% to over 80% of capacity in the second quarter, and has now returned to normal levels.

Nanya anticipates that DRAM market conditions and prices will improve quarter by quarter, with the overall industry trending positively and potentially turning losses into profits in the third quarter. Nanya announced yesterday that its consolidated revenue for June was NT$3.363 billion, marking a monthly increase of 0.35% and an annual increase of 36.83%, a high for the year. Cumulative consolidated revenue for the first half of the year was NT$19.424 billion, a 44.4% increase compared to the same period last year. Nanya will hold a press conference on July 10 to announce its second-quarter financial results and operating outlook.

Panmnesia Uses CXL Protocol to Expand GPU Memory with Add-in DRAM Card or Even SSD

South Korean startup Panmnesia has unveiled an interesting solution to address the memory limitations of modern GPUs. The company has developed a low-latency Compute Express Link (CXL) IP that could help expand GPU memory with an external add-in card. Current GPU-accelerated applications in AI and HPC are constrained by the fixed amount of memory built into GPUs. With data sizes growing by 3x yearly, GPU networks must keep getting larger just to fit the application in local memory, which benefits latency and token generation. Panmnesia's proposed fix leverages the CXL protocol to expand GPU memory capacity using PCIe-connected DRAM or even SSDs. The company has overcome significant technical hurdles, including the absence of CXL logic fabric in GPUs and the limitations of existing unified virtual memory (UVM) systems.

At the heart of Panmnesia's solution is a CXL 3.1-compliant root complex with multiple root ports and a host bridge featuring a host-managed device memory (HDM) decoder. This sophisticated system effectively tricks the GPU's memory subsystem into treating PCIe-connected memory as native system memory. Extensive testing has demonstrated impressive results. Panmnesia's CXL solution, CXL-Opt, achieved two-digit-nanosecond round-trip latency, significantly outperforming both UVM and earlier CXL prototypes. In GPU kernel execution tests, CXL-Opt showed execution times up to 3.22 times faster than UVM. Older CXL memory extenders recorded around 250 nanoseconds of round-trip latency, while CXL-Opt can potentially achieve less than 80 nanoseconds. As is typical with CXL, the usual concern is that memory pools add latency and degrade performance, and CXL extenders tend to add cost as well. However, Panmnesia's CXL-Opt could find a use case, and we are waiting to see whether anyone adopts it in their infrastructure.
Below are some benchmarks by Panmnesia, as well as the architecture of the CXL-Opt.
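
To put the round-trip numbers in perspective, here is a toy average-access-time model; the 250 ns and sub-80 ns round trips are the figures above, while the local-HBM latency and the fraction of accesses that spill to CXL memory are illustrative assumptions.

```python
# Toy model: average access time when a fraction of GPU accesses spill to CXL memory.
# The 250 ns and 80 ns round trips come from the article; the local latency and the
# spill fraction are illustrative assumptions.
def avg_access_ns(local_ns: float, remote_ns: float, spill_fraction: float) -> float:
    return (1 - spill_fraction) * local_ns + spill_fraction * remote_ns

local_hbm_ns = 120     # assumed local HBM access latency
spill = 0.2            # assumed 20% of accesses hit the CXL expander
print(round(avg_access_ns(local_hbm_ns, 250, spill), 1))   # ~146 ns with a prior-generation extender
print(round(avg_access_ns(local_hbm_ns, 80, spill), 1))    # ~112 ns with CXL-Opt's <80 ns round trip
```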

SK Hynix to Invest $75 Billion by 2028 in Memory Solutions for AI

South Korean giant SK Group has unveiled plans for substantial investments in AI and semiconductor technologies worth almost $75 billion. SK Group subsidiary, SK Hynix, will lead this initiative with a staggering 103 trillion won ($74.6 billion) investment over the next three years, with plans to realize the investment by 2028. This commitment is in addition to the ongoing construction of a $90 billion mega fab complex in Gyeonggi Province for cutting-edge memory production. SK Group has further pledged an additional $58 billion, bringing the total investment to a whopping $133 billion. This capital infusion aims to enhance the group's competitiveness in the AI value chain while funding operations across its 175 subsidiaries, including SK Hynix.

While specific details remain undisclosed, SK Group is reportedly exploring various options, including potential mergers and divestments. SK Group has signaled that its business practices need to change amid a shifting geopolitical situation and the massive boost that AI is bringing to the overall economy. We may see more interesting products from SK Group in the coming years as it potentially enters new markets centered around AI. This strategic pivot comes after SK Hynix reported its first loss in a decade in 2022. However, the company has since shown signs of recovery, fueled by surging demand for memory solutions for AI chips. The company currently holds a 35% share of the global DRAM market and plans to build an even stronger presence in the coming years. The massive investment aligns with the South Korean government's recently announced $19 billion support package for the domestic semiconductor industry, which will be distributed across companies like SK Hynix and Samsung.

DDR5-6400 Confirmed as Sweetspot Speed of Ryzen 9000 "Zen 5" Desktop Processors

AMD's upcoming Ryzen 9000 series "Granite Ridge" desktop processors based on the "Zen 5" microarchitecture will see a slight improvement in memory overclocking capabilities. A chiplet-based processor, just like the Ryzen 7000 "Raphael," "Granite Ridge" combines one or two "Zen 5" CCDs, each built on the TSMC 4 nm process, with a client I/O die (cIOD) built on the 6 nm node. The cIOD of "Granite Ridge" appears to be almost identical to that of "Raphael." This is the chiplet that contains the processor's DDR5 memory controllers.

As part of the update, Ryzen 9000 "Granite Ridge" should be able to run DDR5-6400 with a 1:1 ratio between the MCLK and FCLK domains. This is a slight increase from the DDR5-6000 sweetspot speed of Ryzen 7000 "Raphael" processors. AMD is reportedly making it possible for motherboard manufacturers and prebuilt OEMs to enable a 1:2 ratio, allowing high memory speeds such as DDR5-8000, although performance returns would begin to diminish beyond the DDR5-6400 @ 1:1 setting. Memory manufacturers should launch a new wave of DDR5 memory kits with AMD EXPO profiles for DDR5-6400.
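
The ratios translate into clock frequencies as sketched below, using the common convention that the memory clock (MCLK) is half the DDR transfer rate and that the controller-side clock runs at either 1:1 or 1:2 against it; the article frames the ratio in terms of the MCLK and FCLK domains, and naming conventions vary between platforms.

```python
# Clock frequencies implied by the quoted ratios. MCLK is taken as half the DDR transfer
# rate; the divider models the 1:1 and 1:2 modes described in the article.
def clocks_mhz(transfer_rate_mt_s: int, divider: int):
    mclk = transfer_rate_mt_s / 2
    controller_clk = mclk / divider
    return mclk, controller_clk

print(clocks_mhz(6400, 1))   # (3200.0, 3200.0) - the DDR5-6400 1:1 sweetspot
print(clocks_mhz(8000, 2))   # (4000.0, 2000.0) - DDR5-8000 with the 1:2 divider
```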

Micron Confirms US Fab Expansion Plan: Idaho and New York Fabs by 2026-2029

Micron has announced more precise timeframes for the commencement of operations at its two new memory facilities in the United States during its Q3 FY2024 results presentation. The company expects these fabs, located in Idaho and New York, to begin production between late 2026 and 2029. The Idaho fab, currently under construction near Boise, is slated to start operations between September 2026 and September 2027. Meanwhile, the New York facility is projected to come online in the calendar year 2028 or later, pending the completion of regulatory and permitting processes. These timelines align with Micron's original plans announced in 2022 despite recent spending optimizations. The company emphasizes that these investments are crucial to support supply growth in the latter half of this decade.

Micron's capital expenditure for FY2024 is set at approximately $8 billion, with a planned increase to around $12 billion in FY2025. This substantial rise in spending, targeting a mid-30s percentage of revenue, will support various technological advancements and facility expansions. A substantial portion of this increased investment - over $2 billion - will be dedicated to constructing the new fabs in Idaho and New York. Additional funds will support high-bandwidth memory assembly and testing, as well as the development of other fabrication and back-end facilities. Sanjay Mehrotra, Micron's CEO, underscored the importance of these investments, stating that the new capacity is essential to meet long-term demand and maintain the company's market position. He added that these expansions, combined with ongoing technology transitions in Asian facilities, will enable Micron to grow its memory bit supply in line with industry demand.
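
Reading the "mid-30s percentage of revenue" target literally, the planned FY2025 capital expenditure implies a revenue expectation in the low-to-mid $30 billion range; the exact percentages used below are assumptions within the band Micron stated.

```python
# Back out the revenue level implied by ~$12B of FY2025 capex at a "mid-30s" percentage
# of revenue. The specific percentages are assumptions within the stated band.
capex_fy2025_b = 12.0
for capex_share in (0.33, 0.35, 0.37):
    print(round(capex_fy2025_b / capex_share, 1))   # 36.4, 34.3, 32.4 ($B implied revenue)
```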

Micron Technology, Inc. Reports Results for the Third Quarter of Fiscal 2024

Micron Technology, Inc. (Nasdaq: MU) today announced results for its third quarter of fiscal 2024, which ended May 30, 2024.

Fiscal Q3 2024 highlights
  • Revenue of $6.81 billion versus $5.82 billion for the prior quarter and $3.75 billion for the same period last year
  • GAAP net income of $332 million, or $0.30 per diluted share
  • Non-GAAP net income of $702 million, or $0.62 per diluted share
  • Operating cash flow of $2.48 billion versus $1.22 billion for the prior quarter and $24 million for the same period last year
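
A quick arithmetic check of the sequential growth figure quoted in the statement below, using the revenue figures above (a minimal sketch; all inputs are from the release, in billions of dollars).

```python
# Check the quoted sequential growth against the reported revenue figures (in $B).
q3, q2, q3_prior_year = 6.81, 5.82, 3.75
print(round((q3 / q2 - 1) * 100, 1))               # ~17.0% quarter-over-quarter
print(round((q3 / q3_prior_year - 1) * 100, 1))    # ~81.6% year-over-year
```
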
"Robust AI demand and strong execution enabled Micron to drive 17% sequential revenue growth, exceeding our guidance range in fiscal Q3," said Sanjay Mehrotra, President and CEO of Micron Technology. "We are gaining share in high-margin products like High Bandwidth Memory (HBM), and our data center SSD revenue hit a record high, demonstrating the strength of our AI product portfolio across DRAM and NAND. We are excited about the expanding AI-driven opportunities ahead, and are well positioned to deliver a substantial revenue record in fiscal 2025."