News Posts matching #DRAM


Nanya and Winbond Boost Memory Production Amid Rising Demand and Prices

As memory prices and volumes increase, manufacturers Nanya and Winbond have ceased the production cuts they implemented last year, with production now back to normal levels. Market research agencies and supply chain analysts indicate that memory shipments are expected to continue recovering in Q3 2024. Currently, memory factories are operating at a capacity utilization rate of 90% to full capacity, which is significantly higher than the 60% to 70% capacity utilization rate of wafer foundries with mature processes. Last year, Nanya adjusted its production volume, reducing it by up to 20%. This year, production has gradually increased, reaching 70% to over 80% utilization in the second quarter, and has now returned to normal levels.

Nanya anticipates that DRAM market conditions and prices will improve quarter by quarter, with the overall industry trending positively, potentially turning losses into profits in the third quarter. Nanya announced yesterday that its consolidated revenue for June was 3.363 billion yuan, marking a monthly increase of 0.35% and an annual increase of 36.83%, setting a high for the year. The cumulative consolidated revenue for the first half of the year was 19.424 billion yuan, a 44.4% increase compared to the same period last year. Nanya will hold a press conference on July 10 to announce its second-quarter financial results and operating outlook.

Panmnesia Uses CXL Protocol to Expand GPU Memory with Add-in DRAM Card or Even SSD

South Korean startup Panmnesia has unveiled an interesting solution to address the memory limitations of modern GPUs. The company has developed a low-latency Compute Express Link (CXL) IP that could help expand GPU memory with an external add-in card. Current GPU-accelerated applications in AI and HPC are constrained by the fixed amount of memory built into GPUs. With data sizes growing by 3x yearly, GPU networks must keep getting larger just to fit the application in local memory, which benefits latency and token generation. Panmnesia's proposed approach to fixing this leverages the CXL protocol to expand GPU memory capacity using PCIe-connected DRAM or even SSDs. The company has overcome significant technical hurdles, including the absence of CXL logic fabric in GPUs and the limitations of existing unified virtual memory (UVM) systems.

At the heart of Panmnesia's solution is a CXL 3.1-compliant root complex with multiple root ports and a host bridge featuring a host-managed device memory (HDM) decoder. This sophisticated system effectively tricks the GPU's memory subsystem into treating PCIe-connected memory as native system memory. Extensive testing has demonstrated impressive results. Panmnesia's CXL solution, CXL-Opt, achieved double-digit nanosecond round-trip latency, significantly outperforming both UVM and earlier CXL prototypes. In GPU kernel execution tests, CXL-Opt showed execution times up to 3.22 times faster than UVM. Older CXL memory extenders recorded around 250 nanoseconds of round-trip latency, while CXL-Opt can potentially achieve less than 80 nanoseconds. The usual problem with CXL is that memory pools add latency and degrade performance, and these extenders tend to add to the cost model as well. However, the Panmnesia CXL-Opt could find a use case, and we are waiting to see if anyone adopts this in their infrastructure.
Below are some benchmarks by Panmnesia, as well as the architecture of the CXL-Opt.

SK Hynix to Invest $75 Billion by 2028 in Memory Solutions for AI

South Korean giant SK Group has unveiled plans for substantial investments in AI and semiconductor technologies worth almost $75 billion. SK Group subsidiary, SK Hynix, will lead this initiative with a staggering 103 trillion won ($74.6 billion) investment over the next three years, with plans to realize the investment by 2028. This commitment is in addition to the ongoing construction of a $90 billion mega fab complex in Gyeonggi Province for cutting-edge memory production. SK Group has further pledged an additional $58 billion, bringing the total investment to a whopping $133 billion. This capital infusion aims to enhance the group's competitiveness in the AI value chain while funding operations across its 175 subsidiaries, including SK Hynix.

While specific details remain undisclosed, SK Group is reportedly exploring various options, including potential mergers and divestments. SK Group has signaled that business practices need change amid shifting geopolitical situations and the massive boost that AI is bringing to the overall economy. We may see more interesting products from SK Group in the coming years as it potentially enters new markets centered around AI. This strategic pivot comes after SK Hynix reported its first loss in a decade in 2022. However, the company has since shown signs of recovery, fueled by the surging demand for memory solutions for AI chips. The company currently has a 35% share of the global DRAM market and plans to have an even stronger presence in the coming years. The massive investment aligns with the South Korean government's recently announced $19 billion support package for the domestic semiconductor industry, which will be distributed across companies like SK Hynix and Samsung.

Micron Technology, Inc. Reports Results for the Third Quarter of Fiscal 2024

Micron Technology, Inc. (Nasdaq: MU) today announced results for its third quarter of fiscal 2024, which ended May 30, 2024.

Fiscal Q3 2024 highlights
  • Revenue of $6.81 billion versus $5.82 billion for the prior quarter and $3.75 billion for the same period last year
  • GAAP net income of $332 million, or $0.30 per diluted share
  • Non-GAAP net income of $702 million, or $0.62 per diluted share
  • Operating cash flow of $2.48 billion versus $1.22 billion for the prior quarter and $24 million for the same period last year
"Robust AI demand and strong execution enabled Micron to drive 17% sequential revenue growth, exceeding our guidance range in fiscal Q3," said Sanjay Mehrotra, President and CEO of Micron Technology. "We are gaining share in high-margin products like High Bandwidth Memory (HBM), and our data center SSD revenue hit a record high, demonstrating the strength of our AI product portfolio across DRAM and NAND. We are excited about the expanding AI-driven opportunities ahead, and are well positioned to deliver a substantial revenue record in fiscal 2025."

DRAM Prices Expected to Increase by 8-13% in Q3

TrendForce reports that a recovery in demand for general servers—coupled with an increased production share of HBM by DRAM suppliers—has led suppliers to maintain their stance on hiking prices. As a result, the ASP of DRAM in the third quarter is expected to continue rising, with an anticipated increase of 8-13%. The price of conventional DRAM is expected to rise by 5-10%, showing a slight contraction compared to the increase in the second quarter.

TrendForce notes that buyers were more conservative about restocking in the second quarter, and inventory levels on both the supplier and buyer sides did not show significant changes. Looking ahead to the third quarter, there is still room for inventory replenishment for smartphones and CSPs, and the peak season for production is soon to commence. Consequently, it is expected that smartphones and servers will drive an increase in memory shipments in the third quarter.

CSPs to Expand into Edge AI, Driving Average NB DRAM Capacity Growth by at Least 7% in 2025

TrendForce has observed that in 2024, major CSPs such as Microsoft, Google, Meta, and AWS will continue to be the primary buyers of high-end AI servers, which are crucial for LLM and AI modeling. After establishing significant AI training server infrastructure in 2024, these CSPs are expected to actively expand into edge AI in 2025. This expansion will include the development of smaller LLM models and setting up edge AI servers to facilitate AI applications across various sectors, such as manufacturing, finance, healthcare, and business.

Moreover, AI PCs or notebooks share a similar architecture to AI servers, offering substantial computational power and the ability to run smaller LLM and generative AI applications. These devices are anticipated to serve as the final bridge between cloud AI infrastructure and edge AI for small-scale training or inference applications.

Kingston Intros FURY Renegade RGB Limited Edition DDR5 Memory

Kingston today formally launched the FURY Renegade RGB Limited Edition DDR5 memory kits. These were shown at the company's Computex 2024 booth earlier this month. The module's design involves a two-tone die-cast metal shroud over the aluminium heat-spreaders, which are crowned by silicone diffusers for the RGB LEDs. The modules have a 19-preset lighting controller, and you control the lighting using the first-party FURY CTRL software. Kingston says that the design of these modules is inspired by race cars.

The Kingston FURY Renegade RGB Limited Edition is available in only one density—48 GB (2x 24 GB kit), and in only one speed variant, DDR5-8000, with timings of CL36-48-48, and DRAM voltage of 1.45 V. The module also includes profiles for DDR5-7200 and DDR5-6400, with tighter timings. The modules pack an Intel XMP 3.0 SPD profile that enables the advertised speeds on Intel platforms. Kingston has extensively tested the modules on the latest Intel platforms, such as the 14th Gen Core "Raptor Lake Refresh" for compatibility with the advertised XMP speeds. The company didn't reveal pricing.

SK hynix Showcases Its New AI Memory Solutions at HPE Discover 2024

SK hynix has returned to Las Vegas to showcase its leading AI memory solutions at HPE Discover 2024, Hewlett Packard Enterprise's (HPE) annual technology conference. Held from June 17-20, HPE Discover 2024 features a packed schedule with more than 150 live demonstrations, as well as technical sessions, exhibitions, and more. This year, attendees can also benefit from three new curated programs on edge computing and networking, hybrid cloud technology, and AI. Under the slogan "Memory, The Power of AI," SK hynix is displaying its latest memory solutions at the event including those supplied to HPE. The company is also taking advantage of the numerous networking opportunities to strengthen its relationship with the host company and its other partners.

The World's Leading Memory Solutions Driving AI
SK hynix's booth at HPE Discover 2024 consists of three product sections and a demonstration zone which showcase the unprecedented capabilities of its AI memory solutions. The first section features the company's groundbreaking memory solutions for AI, including HBM solutions. In particular, the industry-leading HBM3E has emerged as a core product to meet the growing demands of AI systems due to its exceptional processing speed, capacity, and heat dissipation. A key solution from the company's CXL lineup, CXL Memory Module-DDR5 (CMM-DDR5), is also on display in this section. In the AI era where high performance and capacity are vital, CMM-DDR5 has gained attention for its ability to expand system bandwidth by up to 50% and capacity by up to 100% compared to systems only equipped with DDR5 DRAM.

Gigabyte Promises 219,000 TBW for New AI TOP 100E SSD

Gigabyte has quietly added a new SSD to its growing lineup, and this time around it's something quite different. The drive is part of Gigabyte's new AI TOP (Trillions of Operations per Second) lineup and was announced at Computex with little fanfare. At the show, the company only announced that it would have 150x the TBW of regular SSDs and that it was built specifically for AI model training. What that 150x means in reality is that the 2 TB version of the AI TOP 100E SSD will deliver no less than 219,000 TBW (TeraBytes Written), whereas most high-end 2 TB consumer NVMe SSDs end up somewhere around 1,200 TBW. The 1 TB version promises 109,500 TBW, and both drives have an MTBF of 1.6 million hours and a five-year warranty.

Gigabyte didn't reveal the host controller or the exact NAND used, but the drives are said to use 3D NAND flash, with an LPDDR4 DRAM cache of 1 or 2 GB depending on the drive size. However, the pictures of the drive suggest it might be a Phison-based reference design. The AI TOP 100E SSDs are standard PCIe 4.0 drives, so the sequential read speed tops out at 7,200 MB/s, with the write speed of the 1 TB SKU being up to 6,500 MB/s and the 2 TB SKU slightly behind at 5,900 MB/s. No other performance figures were provided. The drives are said to draw up to 11 Watts in use, which seems very high for PCIe 4.0 drives. No word on pricing or availability as yet.
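To put those TBW figures in context, a quick back-of-the-envelope conversion to Drive Writes Per Day (DWPD, the standard enterprise endurance metric) is useful. The sketch below uses the TBW, capacity, and warranty numbers quoted above; the formula is the conventional industry definition.

```python
def dwpd(tbw: float, capacity_tb: float, warranty_years: float) -> float:
    """Drive Writes Per Day sustainable over the warranty period."""
    return tbw / (capacity_tb * warranty_years * 365)

# AI TOP 100E 2 TB: 219,000 TBW over a 5-year warranty
print(dwpd(219_000, 2, 5))            # -> 60.0 full drive writes per day

# Typical high-end consumer 2 TB NVMe drive: ~1,200 TBW
print(round(dwpd(1_200, 2, 5), 2))    # -> 0.33 drive writes per day
```

In other words, the claimed rating would let a user rewrite the entire 2 TB drive 60 times a day, every day, for five years, which is far beyond even most enterprise write-intensive drives.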

SK hynix Showcases Its Next-Gen Solutions at Computex 2024

SK hynix presented its leading AI memory solutions at COMPUTEX Taipei 2024 from June 4-7. As one of Asia's premier IT shows, COMPUTEX Taipei 2024 welcomed around 1,500 global participants including tech companies, venture capitalists, and accelerators under the theme "Connecting AI". Making its debut at the event, SK hynix underlined its position as a first mover and leading AI memory provider through its lineup of next-generation products.

"Connecting AI" With the Industry's Finest AI Memory Solutions
Themed "Memory, The Power of AI," SK hynix's booth featured its advanced AI server solutions, groundbreaking technologies for on-device AI PCs, and outstanding consumer SSD products. HBM3E, the fifth generation of HBM, was among the AI server solutions on display. Offering industry-leading data processing speeds of 1.18 terabytes (TB) per second, vast capacity, and advanced heat dissipation capability, HBM3E is optimized to meet the requirements of AI servers and other applications. Another technology which has become crucial for AI servers is CXL, as it can increase system bandwidth and processing capacity. SK hynix highlighted the strength of its CXL portfolio by presenting its CXL Memory Module-DDR5 (CMM-DDR5), which significantly expands system bandwidth and capacity compared to systems only equipped with DDR5. Other AI server solutions on display included the server DRAM products DDR5 RDIMM and MCR DIMM. In particular, SK hynix showcased its tall 128-gigabyte (GB) MCR DIMM for the first time at an exhibition.

Mnemonic Electronic Debuts at COMPUTEX 2024, Embracing the Era of High-Capacity SSDs

On June 4th, COMPUTEX 2024 was successfully held at the Taipei Nangang Exhibition Center. Mnemonic Electronic Co., Ltd., the Taiwanese subsidiary of Longsys, showcased industry-leading high-capacity SSDs under the theme "Embracing the Era of High-Capacity SSDs." The products on display included the Mnemonic MS90 8 TB SATA SSD, FORESEE ORCA 4836 series enterprise NVMe SSDs, and FORESEE XP2300 PCIe Gen 4 SSDs, as well as a rich lineup of embedded storage, memory modules, memory cards, and more. The company offers reliable industrial-grade, automotive-grade, and enterprise-grade storage products, providing high-capacity solutions for global users.

High-Capacity SSDs
For SSDs, Mnemonic Electronic presented products in various form factors and interfaces, including PCIe M.2, PCIe BGA, SATA M.2, and SATA 2.5-inch. The Mnemonic MS90 8 TB SATA SSD supports the SATA interface with a speed of up to 6 Gb/s (Gen 3) and is backward compatible with Gen 1 and Gen 2. It also supports various SATA low-power states (Partial/Sleep/Device Sleep) and can be used for nearline HDD replacement, surveillance, and high-speed rail systems.

Patriot Shows 14 GB/s PCIe 5.0 NVMe SSD and 11,500 MT/s DDR5 Memory at Computex 2024

At Computex 2024, we paid a visit to the Patriot booth and found a few new product announcements from the company. From record-shattering DDR5 memory speeds to next-generation Gen 5 SSDs, the company has prepared it all. Headlining the showcase is the Viper Xtreme 5 DDR5 memory series, achieving regular speeds of up to 8,200 MT/s and an astonishing 11,500 MT/s when overclocked. Patriot is also launching something for professional workstations with its overclockable ECC RDIMM modules, offering error correction, larger capacities, and the ability to exceed industry specifications through overclocking.

Biwin Brings New PC Memory and Flash Storage Lineup Under its Own Brand to Computex

Biwin is a licensee of SSDs, PC memory, and flash storage products for some of the biggest PC brands out there, including Acer and HP. This year, the company decided to launch a whole product stack under its own brand, so it could sell to the retail channel directly. We also spotted several licensed products under the coveted Acer Predator brand. Let's start our tour with them: the company showed us an Acer Predator Hera memory kit with 48 GB (2x 24 GB), which does an impressive DDR5-8000 at 40-48-48-128 and 1.35 V. The kit includes a DDR5-8000 @ 1.35 V XMP profile. The module features a mirror-finish metal heat spreader and an RGB-illuminated top. There are also 32 GB (2x 16 GB) kits in the series that go up to DDR5-8200.

Biwin's own first-party brand isn't too far behind the Predator Hera. The company showed us the Biwin (Editor's note: Wookong is local for Asia and will not be part of international branding) DW100 RGB, a high-end memory series, with kit capacities ranging from 32 GB (2x 16 GB) to 64 GB (2x 32 GB), speeds ranging from DDR5-6000 to DDR5-8200, and vDIMM going up to 1.45 V on the top-spec kit. There's also the DX100, which trades a little bit of performance for a more elaborate RGB LED setup. It comes in capacities up to 64 GB (2x 32 GB) and speeds of up to DDR5-8000. The HX100 is the mid-range kit; it lacks any lighting, capacities range up to 64 GB, and speeds go up to DDR5-7200. The timings aren't as tight as the ones on the DX100. Biwin also has a DDR5 LPCAMM2, with capacities of up to 64 GB and speeds of up to 9600 MT/s. Most of Biwin's DRAM products launch in July 2024.

Samsung Strike Has No Immediate Impact on Memory Production, with No Shipment Shortages

The Samsung Electronics Union is reportedly planning to strike on June 7. TrendForce reports that this strike will not impact DRAM and NAND Flash production, nor will it cause any shipment shortages. Additionally, the spot prices for DRAM and NAND Flash had been declining prior to the strike announcement, and there has been no change in this downtrend since the announcement.

Samsung's global share of DRAM and NAND Flash output in 2023 was 46.8% and 32.4%, respectively. Even though the South Korean plants account for all of the company's 46.8% of global DRAM production and about 17.8% of global NAND Flash production, TrendForce identifies four reasons why this strike will not impact production. Firstly, the strike involves employees at Samsung's headquarters in Seocho, Seoul, where union participation is higher, but these employees do not directly engage in production. Secondly, the strike is planned for only one day, which falls within the flexible scheduling range for production.

Micron DRAM Production Plant in Japan Faces Two-Year Delay to 2027

Last year, Micron unveiled plans to construct a cutting-edge DRAM factory in Hiroshima, Japan. However, the project has faced a significant two-year delay, pushing back the initial timeline for mass production of the company's most advanced memory products. Originally slated to begin mass production by the end of 2025, Micron now aims to have the new facility operational by 2027. The complexity of integrating extreme ultraviolet lithography (EUV) equipment, which enables the production of highly advanced chips, has contributed to the delay. The Hiroshima plant will produce next-generation 1-gamma DRAM and high-bandwidth memory (HBM) designed for generative AI applications. Micron expects the HBM market, currently dominated by rivals SK Hynix and Samsung, to experience rapid growth, with the company targeting a 25% market share by 2025.

The project is expected to cost between 600 and 800 billion Japanese yen ($3.8 to $5.1 billion), with Japan's government covering one-third of the cost. Micron has received a subsidy of up to 192 billion yen ($1.2 billion) for construction and equipment, as well as a subsidy to cover half of the necessary funding to produce HBM at the plant, amounting to 25 billion yen ($159 million). Despite the delay, the increased investment in the factory reflects Micron's commitment to advancing its memory technology and capitalizing on the growing demand for HBM. An indication of that is the fact that customers have already pre-ordered 100% of its HBM capacity for 2024, leaving not a single HBM die unsold.

NVIDIA Reportedly Having Issues with Samsung's HBM3 Chips Running Too Hot

According to Reuters, NVIDIA is having some major issues with Samsung's HBM3 chips, as NVIDIA hasn't managed to finalise its validation of the chips. Reuters cites multiple sources familiar with the matter, and it seems like Samsung is having some serious issues with its HBM3 chips if the sources are correct. Not only do the chips run hot, which itself is a big issue as NVIDIA already has trouble cooling some of its higher-end products, but the power consumption is apparently not where it should be either. Samsung is said to have been trying to get its HBM3 and HBM3E parts validated by NVIDIA since sometime in 2023, according to Reuters' sources, which suggests that there have been issues for at least six months, if not longer.

The sources claim there are issues with both the 8- and 12-layer stacks of HBM3E parts from Samsung, suggesting that NVIDIA might only be able to source parts from Micron and SK Hynix for now, the latter of which has been supplying HBM3 chips to NVIDIA since the middle of 2022 and HBM3E chips since March of this year. It's unclear whether this is a production issue at Samsung's DRAM fabs, a packaging-related issue, or something else entirely. The Reuters piece goes on to speculate that Samsung has not had enough time to develop its HBM parts compared to its competitors and that it's a rushed product, but Samsung issued a statement to the publication saying it's a matter of customising the product for its customers' needs. Samsung also said that it's in "the process of optimising its products through close collaboration with customers," without going into which customer(s). Samsung issued a further statement saying that "claims of failing due to heat and power consumption are not true" and that testing was going as expected.

Phison Announces Pascari Brand of Enterprise SSDs, Debuts X200 Series Across Key Form-factors

Phison is arguably the most popular brand for SSD controllers in the client segment, but it is turning more of its attention to the vast enterprise segment. The company had been making first-party enterprise SSDs under its main brand, but decided that the lineup needed its own brand that enterprise customers could better distinguish from the controller ASIC business. We hence have Pascari and Imagin. Pascari is an entire product family of fully built enterprise SSDs from Phison. The company's existing first-party drives under the main brand will probably migrate to the Pascari catalog. Imagin, on the other hand, is a design service for large cloud and data-center customers, so they can develop bespoke tiered storage solutions at scale.

The Pascari line of enterprise SSDs is designed completely in-house by Phison, featuring the company's latest controllers, firmware, PCB, and PMIC, with on-device power-failure protection on select products. The third-party components here are the NAND flash and DRAM chips, which have both been thoroughly evaluated by Phison for the best performance, endurance, and reliability at its enterprise SSD design facility in Broomfield, Colorado. Phison already had a constellation of industry partners and suppliers, and the company's drives even power space missions; but the Pascari brand better differentiates the fully built SSD lineup from the ASIC business. Pascari makes its debut with the X200 series high-performance SSDs for frequently accessed (hot) data. The drives leverage Phison's latest PCIe Gen 5 controller technology and the most optimized memory components, and are available in all contemporary server storage form factors.

NEO Semiconductor Reveals a Performance Boosting Floating Body Cell Mechanism for 3D X-DRAM during IEEE IMW 2024 in Seoul

NEO Semiconductor, a leading developer of innovative technologies for 3D NAND flash and DRAM memory, today announced a performance-boosting Floating Body Cell mechanism for 3D X-DRAM. Andy Hsu, Founder & CEO, presented groundbreaking Technology CAD (TCAD) simulation results for NEO's 3D X-DRAM during the 16th IEEE International Memory Workshop (IMW) 2024 in Seoul, Republic of Korea.

NEO Semiconductor revealed a unique performance-boosting mechanism called Back-gate Channel-depth Modulation (BCM) for its Floating Body Cell that can increase data retention by 40,000X and the sensing window by 20X.

SK hynix Develops Next-Generation Mobile NAND Solution ZUFS 4.0

SK hynix announced today that it has developed Zoned UFS, or ZUFS 4.0, a mobile NAND solution product for on-device AI applications. SK hynix said that ZUFS 4.0, optimized for on-device AI on mobile devices such as smartphones, is the industry's best of its kind. The company expects the latest product to help expand its AI memory leadership into the NAND space, extending its success in high-performance DRAM, represented by HBM.

The ZUFS is a differentiated technology that classifies and stores data generated from smartphones in different zones in accordance with characteristics. Unlike a conventional UFS, the latest product groups and stores data with similar purposes and frequencies in separate zones, boosting the speed of a smartphone's operating system and management efficiency of the storage devices.
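The zone-placement idea described above can be illustrated with a toy model. Note that the data categories and zone mapping below are purely hypothetical for illustration, not SK hynix's actual classification scheme; the point is only that writes with similar purpose and update frequency land in the same zone, so stale data is invalidated in clusters and garbage-collection overhead shrinks.

```python
from collections import defaultdict

# Hypothetical mapping from data category to zone number.
ZONE_OF = {"os": 0, "app": 1, "media": 2, "temp": 3}

def place(writes):
    """Group incoming writes by zone, mimicking zoned data placement."""
    zones = defaultdict(list)
    for name, kind in writes:
        zones[ZONE_OF[kind]].append(name)
    return dict(zones)

result = place([("kernel.img", "os"), ("cache.bin", "temp"), ("photo.jpg", "media")])
print(result)  # each file lands in the zone matching its category
```

A conventional (non-zoned) UFS device would interleave all three writes in the same flash blocks, so deleting `cache.bin` later would fragment a block that still holds OS and media data.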

Microsoft is Switching from MHz to MT/s in Task Manager for Measuring RAM Speeds

The battle is over. Microsoft is finally changing the measuring methodology in its Task Manager from megahertz (MHz) to megatransfers per second (MT/s). This comes amid the industry push for more technical correctness in RAM measurement, where the MHz nomenclature does not technically represent the speed at which the memory is actually running. While DRAM manufacturers list both MHz and MT/s, the advertised "MHz" number is really a transfer rate, double the clock at which the DRAM actually runs, resulting in confusion and arguments in the industry about the correct labeling of DRAM. A little history lesson teaches us that when single data rate (SDR) RAM was introduced, 100 MHz memory would perform 100 MT/s. However, when double data rate (DDR) memory appeared, it allowed two memory transfers per clock cycle.

This introduced some confusion, with MHz often mixed up with MT/s. Hence, Microsoft is trying to repair the damage and list memory speeds in MT/s. Modern DDR5 memory makers advertise kits as "DDR5-4800" or "DDR5-6000," without any suffix like MHz or MT/s. This is because, for example, while a DDR5-6000 kit runs at 6,000 MT/s, its actual clock is only 3,000 MHz. The real clock of the memory is only half of what is advertised, so the MT/s terminology is more accurate and describes memory better. This Task Manager update is in the Windows 11 Insider Preview Build 22635.3570 in the Beta Channel, and will trickle down to stable Windows 11 updates for everyone soon.
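The SDR/DDR relationship described above boils down to a simple factor-of-transfers-per-clock conversion, sketched below:

```python
def effective_transfer_rate(clock_mhz: float, transfers_per_clock: int = 2) -> float:
    """Data rate in MT/s; DDR memory transfers on both clock edges, so 2 per cycle."""
    return clock_mhz * transfers_per_clock

def real_clock(mts: float, transfers_per_clock: int = 2) -> float:
    """Actual I/O clock in MHz behind an advertised DDR data rate."""
    return mts / transfers_per_clock

print(real_clock(6000))                  # DDR5-6000 -> 3000.0 MHz actual clock
print(effective_transfer_rate(100, 1))   # SDR-100: 100 MHz -> 100.0 MT/s
```

This is why labeling a DDR5-6000 kit as "6,000 MHz" overstates the clock by exactly 2x, and why MT/s is the technically correct unit.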

DRAM Contract Prices for Q2 Adjusted to a 13-18% Increase; NAND Flash around 15-20%

TrendForce's latest forecasts reveal that contract prices for DRAM in the second quarter are expected to increase by 13-18%, while NAND Flash contract prices have been adjusted to a 15-20% increase. Only eMMC/UFS will see a smaller price increase of about 10%.

Before the 4/03 earthquake, TrendForce had initially predicted that DRAM contract prices would see a seasonal rise of 3-8% and NAND Flash 13-18%, significantly tapering from Q1 as seen from spot price indicators which showed weakening price momentum and reduced transaction volumes. This was primarily due to subdued demand outside of AI applications, particularly with no signs of recovery in demand for notebooks and smartphones. Inventory levels were gradually increasing, especially among PC OEMs. Additionally, with DRAM and NAND Flash prices having risen for 2-3 consecutive quarters, the willingness of buyers to accept further substantial price increases had diminished.

Apacer Showcases the Latest in Backup and Recovery Technology at Automate 2024

Thanks to recent developments in the AI field, and following in the wake of the world's recovery from COVID-19, the transition of factories to partial or full automation proceeds with unstoppable momentum. And the best place to learn about the latest technologies that aim to make this transition as painless as possible is at Automate 2024. This is North America's largest robotics and automation event, and it will be held in Chicago, Illinois from May 6 to 9.

Automate attracts professionals from around the world, and Apacer is no exception. The Apacer team will be on hand to discuss the latest technological developments created by our experienced R&D team. Many of these developments were specifically created to reduce the pain points commonly experienced by fully automated facilities. Take CoreSnapshot, for example. This backup and recovery technology can restore a crashed system to full operation in just a few seconds, reducing downtime and associated maintenance costs. Apacer recently updated CoreSnapshot, creating CoreRescue ASR and CoreRescue USR. The name CoreRescue ASR refers to Auto Self Recovery. This technology harnesses AI to learn the system boot process and analyze how long a boot should take. If a boot takes significantly longer than this average, the system triggers the self-recovery process and reverts to an earlier, uncorrupted version of the drive's contents. CoreRescue USR offers similar functionality, except the self-recovery process is triggered by connecting a small USB stick drive.

SK hynix Presents CXL Memory Solutions Set to Power the AI Era at CXL DevCon 2024

SK hynix participated in the first-ever Compute Express Link Consortium Developers Conference (CXL DevCon) held in Santa Clara, California from April 30-May 1. Organized by a group of more than 240 global semiconductor companies known as the CXL Consortium, CXL DevCon 2024 welcomed a majority of the consortium's members to showcase their latest technologies and research results.

CXL is a technology that unifies the interfaces of different devices in a system such as semiconductor memory, storage, and logic chips. As it can increase system bandwidth and processing capacity, CXL is receiving attention as a key technology for the AI era in which high performance and capacity are essential. Under the slogan "Memory, The Power of AI," SK hynix showcased a range of CXL products at the conference that are set to strengthen the company's leadership in AI memory technology.

Micron First to Ship Critical Memory for AI Data Centers

Micron Technology, Inc. (Nasdaq: MU), today announced it is leading the industry by validating and shipping its high-capacity monolithic 32Gb DRAM die-based 128 GB DDR5 RDIMM memory in speeds up to 5,600 MT/s on all leading server platforms. Powered by Micron's industry-leading 1β (1-beta) technology, the 128 GB DDR5 RDIMM memory delivers more than 45% improved bit density, up to 22% improved energy efficiency and up to 16% lower latency over competitive 3DS through-silicon via (TSV) products.

Micron's collaboration with industry leaders and customers has yielded broad adoption of these new high-performance, large-capacity modules across high-volume server CPUs. These high-speed memory modules were engineered to meet the performance needs of a wide range of mission-critical applications in data centers, including artificial intelligence (AI) and machine learning (ML), high-performance computing (HPC), in-memory databases (IMDBs) and efficient processing for multithreaded, multicore count general compute workloads. Micron's 128 GB DDR5 RDIMM memory will be supported by a robust ecosystem including AMD, Hewlett Packard Enterprise (HPE), Intel, Supermicro, along with many others.

Enthusiast Transforms QLC SSD Into SLC With Drastic Endurance and Performance Increase

A few months ago, we covered proof of overclocking an off-the-shelf 2.5-inch SATA III NAND Flash SSD thanks to Gabriel Ferraz, Computer Engineer and TechPowerUp's SSD database maintainer. Now, he is back with another equally interesting project: modifying a Quad-Level Cell (QLC) SATA III SSD into a Single-Level Cell (SLC) SATA III SSD. Using the Crucial BX500 512 GB SSD, he aimed to transform the QLC drive into a higher-endurance, higher-performance SLC drive. The drive of choice is powered by the Silicon Motion SM2259XT2, with a single-core ARC 32-bit CPU clocked at 550 MHz and two channels running at 800 MT/s (400 MHz), without a DRAM cache. This particular SSD uses four NAND Flash dies from Micron with NY240 part numbers, two dies per channel. These NAND Flash dies were designed to operate at 1,600 MT/s (800 MHz) but are limited to only 525 MT/s in this drive in the real world.

The average endurance of these dies is 1,500 P/E cycles for the FortisFlash grade and about 900 P/E cycles for Mediagrade. Converting the drive to pSLC bumps those numbers to 100,000 and 60,000, respectively. However, getting it to work is the tricky part. To achieve this, you have to download MPtools for the Silicon Motion SM2259XT2 controller from the USBdev.ru website and identify the exact die used in the SSD. Then, the software's case-sensitive configuration file is carefully modified to enable SLC mode, which forces the dies to run as SLC NAND Flash. Finally, the firmware folder must be opened and files moved around as shown in the video.
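The endurance trade-off behind the mod can be estimated from the P/E figures above. The sketch below assumes usable capacity drops to one quarter (512 GB → 128 GB, since QLC stores four bits per cell and SLC one); the actual post-mod capacity may differ slightly, and write amplification is ignored.

```python
def lifetime_writes_tb(capacity_gb: float, pe_cycles: int) -> float:
    """Raw lifetime write volume in TB: capacity times program/erase cycles."""
    return capacity_gb * pe_cycles / 1000

qlc = lifetime_writes_tb(512, 1_500)        # as QLC, FortisFlash rating
slc = lifetime_writes_tb(512 / 4, 100_000)  # as pSLC, quarter capacity assumed
print(qlc, slc, round(slc / qlc, 1))        # 768.0 12800.0 16.7
```

So even after surrendering three quarters of the capacity, the drive's total lifetime write volume grows by roughly 16.7x under these assumptions.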