News Posts matching #PCIe

SK hynix Presents Extensive AI Memory Lineup at Expanded FMS 2024

SK hynix has returned to Santa Clara, California to present its full array of groundbreaking AI memory technologies at the Future of Memory and Storage (FMS) 2024 from August 6-8. Previously known as Flash Memory Summit, the conference changed its name to reflect its broader focus on all types of memory and storage products amid growing interest in AI. Bringing together industry leaders, customers, and IT professionals, FMS 2024 covers the latest trends and innovations shaping the memory industry.

Participating in the event under the slogan "Memory, The Power of AI," SK hynix is showcasing its outstanding memory capabilities through a keynote presentation, multiple technology sessions, and product exhibits.

NVM Express Releases NVMe 2.1 Specifications

NVM Express, Inc. today announced the release of three new specifications and eight updated specifications. This update to NVMe technology builds on the strengths of previous NVMe specifications, introducing significant new features for modern computing environments while also streamlining development and time to market.

"Beginning as a single PCIe SSD specification, NVMe technology has grown into nearly a dozen specifications, including multiple command sets, that provide pivotal support for NVMe technology across all major transports and standardize many aspects of storage," said Peter Onufryk, NVM Express Technical Workgroup Chair. "NVMe technology adoption continues to grow and has succeeded in unifying client, cloud, AI and enterprise storage around a common architecture. The future of NVMe technology is bright and we have 75 new authorized technical proposals underway."

Micron Develops Industry's First PCIe Gen 6 Data Center SSD for Ecosystem Enablement

Micron Technology, Inc., today announced it is the first to develop PCIe Gen 6 data center SSD technology for ecosystem enablement as part of a portfolio of memory and storage products to support the broad demand for AI. Addressing these demands, Raj Narasimhan, senior vice president and general manager of Micron's Compute and Networking Business Unit, will present a keynote at FMS titled, "Data is at the heart of AI: Micron memory and storage are fueling the AI revolution," on Wednesday, Aug. 7, at 11:00 a.m. Pacific time. The session will focus on how Micron's industry-leading products are impacting AI system architectures while enabling faster and more power-efficient solutions to manage vast data sets.

At FMS, Micron will demonstrate that it is the first to develop a PCIe Gen 6 SSD for ecosystem enablement, once again showcasing its storage technology leadership. By making this technology — which delivers sequential read bandwidths of over 26 GB/s — available to partners, Micron is kickstarting the PCIe Gen 6 ecosystem. This achievement builds on Micron's recent announcement of the world's fastest data center SSD, the Micron 9550, and further bolsters Micron's leadership position in AI storage.

Silicon Motion Launches Power Efficient PCIe Gen 5 SSD Controller

Silicon Motion Technology Corporation, a global leader in designing and marketing NAND flash controllers for solid-state storage devices, today announced the SM2508, the most power-efficient PCIe Gen 5 NVMe 2.0 client SSD controller for AI PCs and gaming consoles. It is the world's first PCIe Gen 5 client SSD controller built on TSMC's 6 nm EUV process, offering a 50% reduction in power consumption compared to competing controllers built on 12 nm processes. With less than 7 W of power consumption for the entire SSD, it delivers 1.7x better power efficiency than PCIe Gen 4 SSDs and up to 70% better than current competing PCIe Gen 5 offerings on the market. Silicon Motion will be showcasing its SM2508-based SSD design and other innovations during the Future of Memory and Storage event from Aug. 6 to 8 at booth #315.

Silicon Motion's SM2508 is a high-performance, low-power PCIe Gen 5 x4 NVMe 2.0 SSD controller designed for AI-capable PC notebooks. It supports eight NAND channels at up to 3,600 MT/s per channel, delivering sequential read and write speeds of up to 14.5 GB/s and 13.6 GB/s, respectively, and random performance of up to 2.5 million IOPS, up to 2x higher than PCIe Gen 4 products. The SM2508 maximizes PCIe Gen 5 performance at an impressive power consumption of approximately 3 W. It features Silicon Motion's proprietary 8th-generation NANDXtend technology, which includes an on-disk training algorithm designed to reduce ECC timing. This enhancement boosts performance and maximizes power efficiency while ensuring compatibility with the latest 3D TLC/QLC NAND technologies, enabling higher data density and meeting the evolving demands of next-generation AI PCs.
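
As a rough back-of-the-envelope check (our arithmetic, not official Silicon Motion figures), the quoted channel count and speed can be compared against the PCIe Gen 5 x4 link to see where the 14.5 GB/s ceiling comes from; the sketch below assumes an 8-bit NAND bus and 128b/130b line encoding.

```python
# Rough, illustrative bandwidth math for a PCIe Gen 5 x4 SSD controller.
# Figures below are assumptions for illustration, not vendor specifications.

NAND_CHANNELS = 8          # SM2508: eight NAND channels
NAND_MT_S = 3600           # mega-transfers per second per channel
NAND_BUS_BYTES = 1         # assumed 8-bit NAND bus -> 1 byte per transfer

raw_nand_gbps = NAND_CHANNELS * NAND_MT_S * NAND_BUS_BYTES / 1000    # GB/s
print(f"Aggregate raw NAND bandwidth: ~{raw_nand_gbps:.1f} GB/s")    # ~28.8 GB/s

PCIE5_GT_S = 32            # PCIe 5.0 raw rate per lane
LANES = 4
ENCODING = 128 / 130       # 128b/130b line encoding

pcie_gb_s = PCIE5_GT_S * LANES * ENCODING / 8                        # GB/s per direction
print(f"PCIe Gen 5 x4 link bandwidth: ~{pcie_gb_s:.2f} GB/s per direction")  # ~15.75 GB/s

# The NAND array can source more data than the host link can carry, so the
# drive's ~14.5 GB/s sequential read figure sits just under the PCIe limit.
```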

Pineboards Launches AI Bundle Hailo 8L Raspberry Pi HAT+ with NVMe SSD Support

It feels like only a few days have passed since we announced the HatDrive! Nano, and there is much more in the pipeline that we're excited to share with you! Today, though, we have our Pineboards AI Bundle (Hailo 8L) to whet your ever-hungry AI appetites, and we think you're going to love it.

Combining an M.2 2280 M-Key NVMe connection with an M.2 2230 A/E-Key connection pre-loaded with a Hailo-8L, this bottom-mounted Raspberry Pi 5 HAT lets you get your AI fix while also booting from and making use of fast NVMe storage. This builds on the success of our ever-popular Raspberry Pi 5 AI HAT and Google Coral combinations, but massively bumps the processing power, enabling you to do so much more!

Micron Announces Volume Production of Ninth-Generation NAND Flash Technology

Micron Technology, Inc., announced today that it is shipping ninth-generation (G9) TLC NAND in SSDs, making it the first in the industry to achieve this milestone. Micron G9 NAND features the industry's highest transfer speed of 3.6 GB/s, delivering unsurpassed bandwidth for reading and writing data. The new NAND enables best-in-class performance for artificial intelligence (AI) and other data-intensive use cases from personal devices and edge servers to enterprise and cloud data centers.

"The shipment of Micron G9 NAND is a testament to Micron's prowess in process technology and design innovations," said Scott DeBoer, executive vice president of Technology and Products at Micron. "Micron G9 NAND is up to 73% denser than competitive technologies in the market today, allowing for more compact and efficient storage solutions that benefit both consumers and businesses."

Alphawave Semi Launches Industry's First 3nm UCIe IP with TSMC CoWoS Packaging

Alphawave Semi, a global leader in high-speed connectivity and compute silicon for the world's technology infrastructure, has launched the industry's first 3 nm successful silicon bring-up of Universal Chiplet Interconnect Express (UCIe) Die-to-Die (D2D) IP with TSMC's Chip-on-Wafer-on-Substrate (CoWoS) advanced packaging technology.

The complete PHY and Controller subsystem was developed in collaboration with TSMC and targets applications such as hyperscaler, high-performance computing (HPC) and artificial intelligence (AI).

Micron Introduces 9550 NVMe Data Center SSD

Micron Technology, Inc., today announced availability of the Micron 9550 NVMe SSD - the world's fastest data center SSD and industry leader in AI workload performance and power efficiency. The Micron 9550 SSD showcases Micron's deep expertise and innovation by integrating its own controller, NAND, DRAM and firmware into one world-class product. This integrated solution enables class-leading performance, power efficiency and security features for data center operators.

The Micron 9550 SSD delivers best-in-class performance with 14.0 GB/s sequential reads and 10.0 GB/s sequential writes to provide up to 67% better performance over similar competitive SSDs and enables industry-leading performance for demanding workloads such as AI. In addition, its random reads of 3,300 KIOPS are up to 35% better and random writes of 400 KIOPS are up to 33% better than competitive offerings.

OpenAI in Talks with Broadcom About Developing Custom AI Chips to Power Next Generation Models

According to The Information, OpenAI is reportedly in talks with Broadcom about developing a custom AI accelerator to power OpenAI's growing demand for high-performance solutions. Broadcom is a fabless chip designer known for a wide range of silicon solutions spanning networking, PCIe, SSD controllers, and PHYs all the way up to custom ASICs. The latter is what OpenAI wants to focus on, but all of the aforementioned IP developed by Broadcom is of use in a data center. Should OpenAI decide to use Broadcom's solutions, the fabless silicon designer offers a complete vertical stack of products: inter-system communication using protocols such as PCIe, system-to-system communication using Ethernet networking with Broadcom Tomahawk 6 and future revisions, alongside storage solutions and many other complementary elements of a data center.

Skilled in a wide range of IP, Broadcom also builds ASIC solutions for other companies and has assisted Google in designing its Tensor Processing Unit (TPU), which is now in its sixth generation. Google's TPUs are massively successful, as Google deploys millions of them and provides AI solutions to billions of users across the globe. Now, OpenAI wants to be part of the AI chip game, and Broadcom could come to the rescue with its already-established AI success and various other data center componentry to help make a custom AI accelerator to power OpenAI's infrastructure needed for the next generation of AI models. With each new AI model released by OpenAI, compute demand spikes by several orders of magnitude, and having an AI accelerator that exactly matches its needs will help the company move faster and run even bigger AI models.

Tenstorrent Launches Next Generation Wormhole-based Developer Kits and Workstations

Tenstorrent is launching its next-generation Wormhole chip in PCIe cards and workstations designed for developers interested in scalability for multi-chip development using Tenstorrent's powerful open-source software stacks.

These Wormhole-based cards and systems are now available for immediate order on tenstorrent.com:
  • Wormhole n150, powered by a single processor
  • Wormhole n300, powered by two processors
  • TT-LoudBox, a developer workstation powered by four Wormhole n300s (eight processors)

Qualitas Semiconductor Develops First In-House PCIe 6.0 PHY IP

Qualitas Semiconductor Co., Ltd. has developed a new PCIe 6.0 PHY IP, marking a significant advance in computer interconnect technology. The new product, created using an advanced 5 nm process technology, is designed to meet the high-speed data transfer needs of the AI era. Qualitas' PCIe PHY IP, built on 5 nm FinFET CMOS technology, consists of a hard-macro PMA and a PCS compliant with the PCIe Base 6.0 specification.

The PCIe 6.0 PHY IP can achieve transmission speeds of up to 64 GT/s per lane. When using all 16 lanes, it can transfer data at rates of up to 256 GB/s. These speeds make it well suited for data centers and self-driving car technologies, where rapid data processing is essential. Qualitas achieved this performance by implementing 100G PAM4 signaling technology. Highlighting the importance of the new IP, Qualitas CEO Dr. Duho Kim signaled the company's intent to continue pushing boundaries in semiconductor technology.
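
For context, the 256 GB/s figure follows from simple arithmetic if it is read as aggregate bidirectional bandwidth; the sketch below works through that assumption (64 GT/s PAM4 carries roughly 64 Gb/s of payload per lane per direction in FLIT mode).

```python
# Illustrative PCIe 6.0 x16 bandwidth arithmetic (assumptions, not vendor data).
GT_PER_LANE = 64          # PCIe 6.0 raw rate per lane (PAM4, GT/s)
LANES = 16
BITS_PER_BYTE = 8

per_direction = GT_PER_LANE * LANES / BITS_PER_BYTE     # GB/s, one direction
bidirectional = 2 * per_direction                       # GB/s, both directions combined

print(f"Per direction: ~{per_direction:.0f} GB/s")      # ~128 GB/s
print(f"Bidirectional: ~{bidirectional:.0f} GB/s")      # ~256 GB/s, matching the quoted figure
# FLIT-mode FEC/CRC overhead trims a few percent off these raw numbers in practice.
```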

NVIDIA GeForce RTX 50 Series "Blackwell" TDPs Leaked, All Powered by 16-Pin Connector

In the run-up to NVIDIA's upcoming GeForce RTX 50 Series of GPUs, codenamed "Blackwell," one power supply manufacturer accidentally leaked the power configurations of all SKUs. Seasonic operates a power supply wattage calculator that allows users to configure their systems online and get power supply recommendations. This means the calculator's database often gets filled with CPU/GPU SKUs to accommodate the massive variety of components. This time it lists the upcoming GeForce RTX 50 series, from the RTX 5050 all the way up to the top RTX 5090. Starting with the GeForce RTX 5050, this SKU is expected to carry a 100 W TDP. Its bigger brother, the RTX 5060, bumps the TDP to 170 W, 55 W higher than the previous-generation "Ada Lovelace" RTX 4060.

The GeForce RTX 5070, with a 220 W TDP, sits in the middle of the stack, a 20 W increase over its Ada-generation counterpart. For the higher-end SKUs, NVIDIA has prepared the GeForce RTX 5080 and RTX 5090, with 350 W and 500 W TDPs, respectively. This also represents a jump in TDP from the Ada generation, with an increase of 30 W for the RTX 5080 and 50 W for the RTX 5090. Interestingly, this time NVIDIA wants to unify the power connection system of the entire family around the 16-pin 12V-2x6 connector, per the updated PCIe 6.0 CEM specification. The increase in power requirements across the "Blackwell" SKUs is interesting, and we are eager to see whether the performance gains are enough to balance efficiency.
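
As a quick sanity check on those deltas, the leaked figures line up with the published Ada Lovelace board-power specs; the Ada numbers in the sketch below are NVIDIA's official TDPs and are not part of the leak.

```python
# Compare leaked "Blackwell" TDPs against published "Ada Lovelace" board power.
# Blackwell numbers come from the leak discussed above; Ada numbers are official specs.

tdps_w = {
    # model tier: (ada_lovelace_tdp, leaked_blackwell_tdp)
    "xx60": (115, 170),
    "xx70": (200, 220),
    "xx80": (320, 350),
    "xx90": (450, 500),
}

for tier, (ada, blackwell) in tdps_w.items():
    print(f"RTX {tier}: {ada} W -> {blackwell} W (+{blackwell - ada} W)")

# Output: +55 W, +20 W, +30 W and +50 W, matching the generational increases quoted above.
```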

Panmnesia Uses CXL Protocol to Expand GPU Memory with Add-in DRAM Card or Even SSD

South Korean startup Panmnesia has unveiled an interesting solution to address the memory limitations of modern GPUs. The company has developed a low-latency Compute Express Link (CXL) IP that could help expand GPU memory with an external add-in card. Current GPU-accelerated applications in AI and HPC are constrained by the fixed amount of memory built into GPUs. With data sizes growing 3x yearly, GPU clusters must keep getting larger just to fit applications in local memory, which benefits latency and token generation. Panmnesia's proposed fix leverages the CXL protocol to expand GPU memory capacity using PCIe-connected DRAM or even SSDs. The company has overcome significant technical hurdles, including the absence of CXL logic fabric in GPUs and the limitations of existing unified virtual memory (UVM) systems.

At the heart of Panmnesia's solution is a CXL 3.1-compliant root complex with multiple root ports and a host bridge featuring a host-managed device memory (HDM) decoder. This sophisticated system effectively tricks the GPU's memory subsystem into treating PCIe-connected memory as native system memory. Extensive testing has demonstrated impressive results. Panmnesia's CXL solution, CXL-Opt, achieved two-digit-nanosecond round-trip latency, significantly outperforming both UVM and earlier CXL prototypes. In GPU kernel execution tests, CXL-Opt showed execution times up to 3.22 times faster than UVM. Older CXL memory extenders recorded around 250 nanoseconds of round-trip latency, whereas CXL-Opt can potentially achieve less than 80 nanoseconds. The usual problem with CXL is that memory pools add latency and degrade performance, while the extenders add cost as well. However, Panmnesia's CXL-Opt could find a use case, and we are waiting to see if anyone adopts it in their infrastructure.
Below are some benchmarks by Panmnesia, as well as the architecture of the CXL-Opt.

Intel Demonstrates First Fully Integrated Optical IO Chiplet

Intel Corporation has achieved a revolutionary milestone in integrated photonics technology for high-speed data transmission. At the Optical Fiber Communication Conference (OFC) 2024, Intel's Integrated Photonics Solutions (IPS) Group demonstrated the industry's most advanced and first-ever fully integrated optical compute interconnect (OCI) chiplet co-packaged with an Intel CPU and running live data. Intel's OCI chiplet represents a leap forward in high-bandwidth interconnect by enabling co-packaged optical input/output (I/O) in emerging AI infrastructure for data centers and high performance computing (HPC) applications.

"The ever-increasing movement of data from server to server is straining the capabilities of today's data center infrastructure, and current solutions are rapidly approaching the practical limits of electrical I/O performance. However, Intel's groundbreaking achievement empowers customers to seamlessly integrate co-packaged silicon photonics interconnect solutions into next-generation compute systems. Our OCI chiplet boosts bandwidth, reduces power consumption and increases reach, enabling ML workload acceleration that promises to revolutionize high-performance AI infrastructure," said Thomas Liljeberg, senior director, Product Management and Strategy, Integrated Photonics Solutions (IPS) Group.

Realtek is Aiming to Make 5 Gbps Ethernet Switches More Affordable with New Platform

At Computex, Realtek was showing off a new 5 Gbps switch platform that is set to bring much more affordable high-speed Ethernet switches to the consumer market. At the core of the new platform sits Realtek's RTL9303, an eight-port 10 Gbps switch controller. It was released a few years ago as a low-cost 10 Gbps switch IC, but as it still required third-party PHYs, it never really took off. The RTL9303 is built around an 800 MHz MIPS 34Kc CPU and supports up to 1 GB of DDR3 RAM as well as 64 MB of SPI NOR Flash for the firmware.

When combined with Realtek's RTL8251B 5 Gbps PHY, the end result is a comparably low-cost 5 Gbps switch. According to AnandTech, Realtek is expecting a price of about US$25 per port, which is only about $10 more per port than a typical 2.5 Gbps switch today, even though some of those go for as little as US$10 per port. Paired with a Realtek RTL8126 PCIe-based 5 Gbps NIC, which retails from around US$30, 5 Gbps Ethernet looks like a very sensible option in terms of price/performance. Admittedly, 2.5 Gbps Ethernet cards can be had for as little as $13, but they started out at a higher price point than what 5 Gbps NICs are already selling for. Meanwhile, 10 Gbps NICs are still stuck at around US$80-90, with switches in most cases costing at least US$45 per port, but often a lot more. 5 Gbps Ethernet also has the advantage of being able to operate on CAT 5e cabling at up to 60 metres and CAT 6 cabling at up to 100 metres, which means there's no need to replace older cabling to benefit from it.
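
To make the price/performance argument concrete, here is a rough cost-per-Gbps comparison using the ballpark street prices quoted above (illustrative figures only, not a formal market survey):

```python
# Rough $/Gbps comparison of consumer Ethernet tiers, using the approximate
# prices quoted in the article (one switch port plus one NIC per endpoint).

tiers = {
    # speed_gbps: (switch_price_per_port_usd, nic_price_usd)
    2.5: (10, 13),
    5.0: (25, 30),
    10.0: (45, 85),
}

for speed, (port, nic) in tiers.items():
    total = port + nic
    print(f"{speed:>4} Gbps: ~${total} per endpoint -> ~${total / speed:.1f} per Gbps")

# Approximate output:
#  2.5 Gbps: ~$23 per endpoint -> ~$9.2 per Gbps
#  5.0 Gbps: ~$55 per endpoint -> ~$11.0 per Gbps
# 10.0 Gbps: ~$130 per endpoint -> ~$13.0 per Gbps
# On a per-Gbps basis, 5 Gbps lands much closer to 2.5 Gbps than 10 Gbps does.
```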

Zephyr Unveils ITX-sized Sakura Blizzard RTX 4070 Graphics Card

PC enthusiasts who crave powerful graphics in compact systems have a new option from Zephyr. The Chinese manufacturer has launched the world's first ITX-sized GeForce RTX 4070 graphics card. Dubbed the Sakura Blizzard, this GPU packs NVIDIA's AD104-250 chip and 12 GB of GDDR6X memory into a footprint of just 172 x 123 x 42 mm. While slightly taller than standard PCIe brackets, the two-slot cooler should fit most Mini-ITX cases. The card's cute pink shroud and solitary cooling fan give it a unique aesthetic. But don't let the pink looks fool you - Zephyr claims this compact powerhouse can keep the GPU and memory up to 10°C cooler than some dual-fan RTX 4070 designs, which needs to be taken with a grain of salt. Thermal testing videos show the fan spinning at 2,400 RPM to maintain GPU temperatures around 73°C under load in a 25°C room. Meanwhile, synthetic benchmarks reportedly demonstrate no performance compromises versus full-sized RTX 4070 implementations.

Zephyr's initial production run has already sold out in China. However, a second batch is slated for mid-July availability to meet the apparently higher demand for small form factor RTX 40-series GPUs. The launch comes just weeks after NVIDIA unveiled new "SFF-ready" design guidelines at Computex 2024. As the power-hungry RTX 40 lineup hit the market, many voiced concerns over the cards' ever-growing dimensions. NVIDIA's renewed SFF PC focus signals that options like the Sakura Blizzard could become more common. For space-constrained enthusiasts, having top-tier GPU muscle in a properly cooled Mini-ITX card is a big win. Zephyr's ITX-sized RTX 4070 shows powerful things can come in small packages, and we hope more manufacturers follow this philosophy.

Western Digital Introduces New Enterprise AI Storage Solutions and AI Data Cycle Framework

Fueling the next wave of AI innovation, Western Digital today introduced a six-stage AI Data Cycle framework that defines the optimal storage mix for AI workloads at scale. This framework will help customers plan and develop advanced storage infrastructures to maximize their AI investments, improve efficiency, and reduce the total cost of ownership (TCO) of their AI workflows. AI models operate in a continuous loop of data consumption and generation - processing text, images, audio and video among other data types while simultaneously producing new unique data. As AI technologies become more advanced, data storage systems must deliver the capacity and performance to support the computational loads and speeds required for large, sophisticated models while managing immense volumes of data. Western Digital has strategically aligned its Flash and HDD product and technology roadmaps to the storage requirements of each critical stage of the cycle, and today introduced a new industry-leading, high-performance PCIe Gen 5 SSD to support AI training and inference; a high-capacity 64 TB SSD for fast AI data lakes; and the world's highest capacity ePMR, UltraSMR 32 TB HDD for cost-effective storage at scale.

"There's no doubt that Generative AI is the next transformational technology, and storage is a critical enabler. The implications for storage are expected to be significant as the role of storage, and access to data, influences the speed, efficiency and accuracy of AI Models, especially as larger and higher-quality data sets become more prevalent," said Ed Burns, Research Director at IDC. "As a leader in Flash and HDD, Western Digital has an opportunity to benefit in this growing AI landscape with its strong market position and broad portfolio, which meets a variety of needs within the different AI data cycle stages."

Sparkle Presents Streamer 4K60 Video Capture Card at Computex 2024

At Computex 2024, Sparkle has unveiled its latest product aimed at streamers, the Streamer 4K60. This innovative PCIe video capture card is engineered to cater to the demands of high-quality video production. Sparkle has seamlessly integrated many advanced features into this remarkable device, ensuring that content creators and streamers can unleash their creativity without compromise. The Streamer 4K60 boasts the ability to capture ultra-high-definition video at a smooth 60 frames per second in 4K resolution or even 120 frames per second in 1080p resolution. Its dual HDMI input support facilitates video passthrough and enables users to explore the realms of Picture-in-Picture (PiP) and cooperative streaming functionalities. Furthermore, this cutting-edge device effortlessly combines HDR10 support, a robust metal cover design for durability, and comprehensive software compatibility with popular platforms like OBS and XSplit. For connection to the PC, it requires a PCIe 2.0 x1 connector. For video passthrough and video input, HDMI 2.1 is used. The card is priced at 299 USD and will be available in late July or early August.

Maxsun Puts the PCIe x16 Slot on the Back of its Latest B760 Mini-ITX Motherboard

We've seen a lot of motherboards with the power, SATA, and USB connectors on the rear at Computex this year, but Chinese manufacturer Maxsun decided to put the PCIe x16 slot on the back of its Mini-ITX MS-Terminator B760BKB D5 motherboard. This might seem like a crazy move, but with the right chassis, it means the graphics card won't need a PCIe extension cable while still fitting in a compact build. It has also become increasingly hard to retain signal integrity over ribbon cables, and with PCIe 5.0 it might be impossible. Another added benefit of this design is that the board also has a PCIe 3.0 x4 slot available for something like a 10 Gbps network card, or just about anything else you'd want to plug into a compact PC.

The overall board specs don't really stick out from the crowd, as the Intel B760 chipset is a somewhat limiting factor. In addition to the PCIe 5.0 x16 slot and the PCIe 3.0 x4 slot, the board has a pair of PCIe 4.0 M.2 NVMe slots—one on each side of the PCB—and a slim SAS connector using an SFF-8654 connector. On top of this, the board offers two DDR5 DIMM slots, four SATA ports, a 2.5 Gbps Ethernet port, a DisplayPort 1.2 output, an HDMI 2.0 port, a USB Type-C port of unknown speed, and WiFi 6 and Bluetooth 5.2. The power design consists of a fairly basic 8+1+1 phase setup. We'd like to see this board design become a standard, as it makes a lot of sense for the SFF market: better placement of the PCIe x16 slot allows for a more compact chassis without having to compromise on the choice of graphics card.

Mnemonic Electronic Debuts at COMPUTEX 2024, Embracing the Era of High-Capacity SSDs

On June 4th, COMPUTEX 2024 opened at the Taipei Nangang Exhibition Center. Mnemonic Electronic Co., Ltd., the Taiwanese subsidiary of Longsys, showcased industry-leading high-capacity SSDs under the theme "Embracing the Era of High-Capacity SSDs." The products on display included the Mnemonic MS90 8 TB SATA SSD, FORESEE ORCA 4836 series enterprise NVMe SSDs, FORESEE XP2300 PCIe Gen 4 SSDs, and a rich product line comprising embedded storage, memory modules, memory cards, and more. The company offers reliable industrial-grade, automotive-grade, and enterprise-grade storage products, providing high-capacity solutions for global users.

High-Capacity SSDs
For SSDs, Mnemonic Electronic presented products in various form factors and interfaces, including PCIe M.2, PCIe BGA, SATA M.2, and SATA 2.5-inch. The Mnemonic MS90 8 TB SATA SSD supports the SATA interface at speeds of up to 6 Gb/s (Gen 3) and is backward compatible with Gen 1 and Gen 2. It also supports the various SATA low-power states (Partial/Slumber/Device Sleep) and can be used for nearline HDD replacement, surveillance, and high-speed rail systems.

Marvell Expands Connectivity Portfolio With New PCIe Gen 6 Retimer Product Line

Marvell Technology, a leader in data infrastructure semiconductor solutions, today expanded its connectivity portfolio with the launch of the new Alaska P PCIe retimer product line built to scale data center compute fabrics inside accelerated servers, general-purpose servers, CXL systems and disaggregated infrastructure. The first two products, 8- and 16-lane PCIe Gen 6 retimers, connect AI accelerators, GPUs, CPUs and other components inside server systems.

Artificial intelligence (AI) and machine learning (ML) applications are driving data flows and connections inside server systems at significantly higher bandwidth, necessitating PCIe retimers to meet the required connection distances at faster speeds. PCIe is the industry standard for inside-server-system connections between AI accelerators, GPUs, CPUs and other server components. AI models are doubling their computation requirements every six months and are now the primary driver of the PCIe roadmap, with PCIe Gen 6 becoming a requirement.

Lenovo Announces its New AI PC ThinkPad P14s Gen 5 Mobile Workstation Powered by AMD Ryzen PRO Processors

Today, Lenovo launched the Lenovo ThinkPad P14s Gen 5, designed for professionals who need top-notch performance in a portable 14-inch chassis. Featuring a stunning 16:10 display, this mobile workstation is powered by AMD Ryzen PRO 8040 HS-Series processors. These advanced, energy-efficient processors are perfect for use in thin and light mobile workstations. The AMD Ryzen PRO HS-Series processors also come with built-in Artificial Intelligence (AI) capabilities, including an integrated Neural Processing Unit (NPU) for optimized performance in AI workflows.

The Lenovo ThinkPad P14s Gen 5 comes with independent software vendor (ISV) certifications and integrated AMD Radeon graphics, making it ideal for running applications like AutoCAD, Revit, and SOLIDWORKS with seamless performance. Aimed at mobile power users, the workstation offers advanced ThinkShield security features and has passed comprehensive MIL-SPEC testing for ultimate durability.

European Supercomputer Chip SiPearl Rhea Delayed, But Upgraded with More Cores

The rollout of SiPearl's much-anticipated Rhea processor for European supercomputers has been pushed back by a year to 2025, but the delay comes with a silver lining - a significant upgrade in core count and potential performance. Originally slated to arrive in 2024 with 72 cores, the homegrown high-performance chip will now pack 80 cores when it eventually launches. This decisive move by SiPearl and its partners is a strategic choice to ensure the utmost quality and capabilities for the flagship European processor. The additional 12 months will allow the engineering teams to further refine the chip's architecture, carry out extensive testing, and optimize software stacks to take full advantage of Rhea's computing power. Now called the Rhea1, the chip is a crucial component of the European Processor Initiative's mission to develop domestic high-performance computing technologies and reduce reliance on foreign processors. Supercomputer-scale simulations spanning climate science, drug discovery, energy research and more all require astonishing amounts of raw compute grunt.

By scaling up to 80 cores based on the latest Arm Neoverse V1, Rhea1 aims to go toe-to-toe with the world's most powerful processors optimized for supercomputing workloads. SiPearl plans to use TSMC's N6 manufacturing process. The CPU will have 256-bit DDR5 memory connections, 104 PCIe 5.0 lanes, and four stacks of HBM2E memory. The roadmap shift also provides more time for the expansive European supercomputing ecosystem to prepare robust software stacks tailored for the upgraded Rhea silicon. Ensuring a smooth deployment with existing models and enabling future breakthroughs are top priorities. While the delay is a setback for SiPearl's launch schedule, the substantial upgrade could pay significant dividends for Europe's ambitions to join the elite ranks of worldwide supercomputer power. All eyes, particularly those of the European governments funding the project, will be on Rhea1's delivery in 2025.

Apacer Showcases the Latest in Backup and Recovery Technology at Automate 2024

Thanks to recent developments in the AI field, and following in the wake of the world's recovery from COVID-19, the transition of factories to partial or full automation proceeds with unstoppable momentum. And the best place to learn about the latest technologies that aim to make this transition as painless as possible is at Automate 2024. This is North America's largest robotics and automation event, and it will be held in Chicago, Illinois from May 6 to 9.

Automate attracts professionals from around the world, and Apacer is no exception. The Apacer team will be on hand to discuss the latest technological developments created by our experienced R&D team. Many of these developments were specifically created to reduce the pain points commonly experienced by fully automated facilities. Take CoreSnapshot, for example. This backup and recovery technology can restore a crashed system to full operation in just a few seconds, reducing downtime and associated maintenance costs. Apacer recently updated CoreSnapshot, creating CoreRescue ASR and CoreRescue USR. The name CoreRescue ASR refers to Auto Self Recovery. This technology harnesses AI to learn the system booting process and determine how long a boot should take. If a boot takes significantly longer than this learned average, the system will trigger the self-recovery process and revert to an earlier, uncorrupted version of the drive's content. CoreRescue USR offers similar functionality, except the self-recovery process is triggered by connecting a small USB stick drive.

AMD "Strix Point" Mobile Processor Confirmed 12-core/24-thread, But Misses Out on PCIe Gen 5

AMD's next-generation Ryzen 9000 "Strix Point" mobile processor, which succeeds the current Ryzen 8040 "Hawk Point" and Ryzen 7040 "Phoenix," is confirmed to feature a CPU core-configuration of 12-core/24-thread, according to a specs-leak by HKEPC citing sources among notebook OEMs. It appears like Computex 2024 will be big for AMD, with the company preparing next-gen processor announcements across the desktop and notebook lines. Both the "Strix Point" mobile processor and "Granite Ridge" desktop processor debut the company's next "Zen 5" microarchitecture.

Perhaps the biggest takeaway from "Zen 5" is that AMD has increased the number of CPU cores per CCX from 8 in "Zen 3" and "Zen 4," to 12 in "Zen 5." While this doesn't affect the core-counts of its CCD chiplets (which are still expected to be 8-core), the "Strix Point" processor appears to use one giant CCX with 12 cores. Each of the "Zen 5" cores has a 1 MB dedicated L2 cache, while the 12 cores share a 24 MB L3 cache. The 12-core/24-thread CPU, besides the generational IPC gains introduced by "Zen 5," marks a 50% increase in CPU muscle over "Hawk Point." It's not just the CPU complex, even the iGPU sees a hardware update.