News Posts matching #PCIe

NVIDIA GeForce RTX 50 Series "Blackwell" TDPs Leaked, All Powered by 16-Pin Connector

In the run-up to NVIDIA's upcoming GeForce RTX 50 Series of GPUs, codenamed "Blackwell," a power supply manufacturer has seemingly leaked the power configurations of all SKUs. Seasonic operates an online power supply wattage calculator that lets users configure a system and get a PSU recommendation, which means its database is routinely populated with CPU and GPU SKUs to cover the massive variety of components. This time it lists the upcoming GeForce RTX 50 series, from the RTX 5050 all the way up to the flagship RTX 5090. Starting at the bottom, the GeForce RTX 5050 is expected to carry a 100 W TDP. Its bigger brother, the RTX 5060, bumps the TDP to 170 W, 55 W higher than the previous-generation "Ada Lovelace" RTX 4060.

The GeForce RTX 5070 sits in the middle of the stack with a 220 W TDP, a 20 W increase over its Ada counterpart. At the higher end, NVIDIA has prepared the GeForce RTX 5080 and RTX 5090 with 350 W and 500 W TDPs, respectively, jumps of 30 W and 50 W over the Ada generation. Interestingly, NVIDIA this time wants to unify power delivery across the entire family with the 16-pin 12V-2x6 connector, following the updated PCIe 6.0 CEM specification. The across-the-board increase in power requirements for the "Blackwell" generation is notable, and we are eager to see whether the performance gains are large enough to keep efficiency in check.
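
For a quick sanity check of the generational deltas quoted above, here is a minimal Python tally; the Ada figures are the publicly known board-power values, not part of the leak itself:

    # Leaked "Blackwell" TDPs tallied against known Ada Lovelace board power
    # (Ada values are the standard published figures, included purely for comparison).
    blackwell = {"RTX 5060": 170, "RTX 5070": 220, "RTX 5080": 350, "RTX 5090": 500}
    ada = {"RTX 4060": 115, "RTX 4070": 200, "RTX 4080": 320, "RTX 4090": 450}

    for new_sku, old_sku in zip(blackwell, ada):
        delta = blackwell[new_sku] - ada[old_sku]
        print(f"{new_sku}: {blackwell[new_sku]} W ({delta:+} W vs. {old_sku} at {ada[old_sku]} W)")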

Panmnesia Uses CXL Protocol to Expand GPU Memory with Add-in DRAM Card or Even SSD

South Korean startup Panmnesia has unveiled an interesting solution to address the memory limitations of modern GPUs. The company has developed a low-latency Compute Express Link (CXL) IP that could help expand GPU memory with an external add-in card. Current GPU-accelerated applications in AI and HPC are constrained by the fixed amount of memory built into GPUs, and with data sizes growing roughly 3x per year, GPU clusters must keep getting larger just to fit applications in local memory, which is what keeps latency and token-generation rates acceptable. Panmnesia's proposed fix leverages the CXL protocol to expand GPU memory capacity using PCIe-connected DRAM or even SSDs. The company has overcome significant technical hurdles, including the absence of CXL logic fabric in GPUs and the limitations of existing unified virtual memory (UVM) systems.

At the heart of Panmnesia's solution is a CXL 3.1-compliant root complex with multiple root ports and a host bridge featuring a host-managed device memory (HDM) decoder. This system effectively tricks the GPU's memory subsystem into treating PCIe-connected memory as native system memory. Extensive testing has demonstrated impressive results: Panmnesia's CXL solution, CXL-Opt, achieved double-digit-nanosecond round-trip latency, significantly outperforming both UVM and earlier CXL prototypes. In GPU kernel execution tests, CXL-Opt showed execution times up to 3.22 times faster than UVM. Older CXL memory extenders recorded around 250 nanoseconds of round-trip latency, whereas CXL-Opt can potentially achieve less than 80 nanoseconds. As with any CXL deployment, the usual concern is that memory pools add latency and degrade performance, and CXL extenders add cost as well. Still, Panmnesia's CXL-Opt could find a use case, and we are waiting to see whether anyone adopts it in their infrastructure.
Below are some benchmarks by Panmnesia, as well as the architecture of the CXL-Opt.
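
To illustrate why the round-trip latency figures matter, here is a toy Python model of our own (not Panmnesia's data) showing the throughput ceiling of strictly serialized 64-byte accesses at the quoted latencies:

    # Toy model: throughput ceiling of strictly serialized 64 B accesses at a given
    # round-trip latency, assuming no pipelining or prefetching (illustrative only).
    ACCESS_BYTES = 64

    for name, latency_ns in [("older CXL extender (~250 ns)", 250), ("CXL-Opt (<80 ns claimed)", 80)]:
        throughput_gb_s = ACCESS_BYTES / (latency_ns * 1e-9) / 1e9
        print(f"{name}: ~{throughput_gb_s:.2f} GB/s if every access waits for the previous one")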

Intel Demonstrates First Fully Integrated Optical IO Chiplet

Intel Corporation has achieved a revolutionary milestone in integrated photonics technology for high-speed data transmission. At the Optical Fiber Communication Conference (OFC) 2024, Intel's Integrated Photonics Solutions (IPS) Group demonstrated the industry's most advanced and first-ever fully integrated optical compute interconnect (OCI) chiplet co-packaged with an Intel CPU and running live data. Intel's OCI chiplet represents a leap forward in high-bandwidth interconnect by enabling co-packaged optical input/output (I/O) in emerging AI infrastructure for data centers and high performance computing (HPC) applications.

"The ever-increasing movement of data from server to server is straining the capabilities of today's data center infrastructure, and current solutions are rapidly approaching the practical limits of electrical I/O performance. However, Intel's groundbreaking achievement empowers customers to seamlessly integrate co-packaged silicon photonics interconnect solutions into next-generation compute systems. Our OCI chiplet boosts bandwidth, reduces power consumption and increases reach, enabling ML workload acceleration that promises to revolutionize high-performance AI infrastructure," said Thomas Liljeberg, senior director, Product Management and Strategy, Integrated Photonics Solutions (IPS) Group.

Realtek is Aiming to Make 5 Gbps Ethernet Switches More Affordable with New Platform

At Computex, Realtek was showing off a new 5 Gbps switch platform that is set to bring much more affordable high-speed Ethernet switches to the consumer market. At the core of the platform sits Realtek's RTL9303, an eight-port 10 Gbps switch controller. It was released a few years ago as a low-cost 10 Gbps switch IC, but since it still required third-party PHYs, it never really took off. The RTL9303 is built around an 800 MHz MIPS 34Kc CPU and supports up to 1 GB of DDR3 RAM as well as 64 MB of SPI NOR Flash for the firmware.

When combined with Realtek's RTL8251B 5 Gbps PHY, the end result is a comparatively low-cost 5 Gbps switch. According to AnandTech, Realtek is expecting a price of around US$25 per port, which is only about $10 more per port than a typical 2.5 Gbps switch today, even though some of those go for as little as US$10 per port. Paired with a Realtek RTL8126 PCIe-based 5 Gbps NIC, which retails for around US$30, 5 Gbps Ethernet looks like a very sensible option in terms of price/performance. Admittedly, 2.5 Gbps Ethernet cards can be had for as little as $13, but they started out at a higher price point than 5 Gbps NICs are already selling for. Meanwhile, 10 Gbps NICs are still stuck at around US$80-90, with switches in most cases costing at least US$45 per port, and often a lot more. 5 Gbps Ethernet also has the advantage of operating over CAT 5e cabling at up to 60 metres and CAT 6 cabling at up to 100 metres, which means there is no need to replace older cabling to benefit from it.
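
Using the ballpark prices quoted above, a rough cost-per-gigabit comparison for one switch port plus one NIC (our arithmetic; street prices vary) works out roughly as follows:

    # Rough cost per connected device and per Gbps, using the approximate prices above.
    # Format: (switch_port_usd, nic_usd, speed_gbps) -- all ballpark figures.
    options = {
        "2.5 GbE": (15, 13, 2.5),   # ~$15/port typical, some switches as low as $10/port
        "5 GbE": (25, 30, 5.0),
        "10 GbE": (45, 85, 10.0),
    }

    for name, (port_usd, nic_usd, speed) in options.items():
        total = port_usd + nic_usd
        print(f"{name}: ~${total} per device, ~${total / speed:.1f} per Gbps")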

Zephyr Unveils ITX-sized Sakura Blizzard RTX 4070 Graphics Card

PC enthusiasts who crave powerful graphics in compact systems have a new option from Zephyr. The Chinese manufacturer has launched the world's first ITX-sized GeForce RTX 4070 graphics card. Dubbed the Sakura Blizzard, this GPU packs NVIDIA's AD104-250 chip and 12 GB of GDDR6X memory into a footprint of just 172 x 123 x 42 mm. While slightly taller than standard PCIe brackets, the two-slot cooler should fit most Mini-ITX cases. The card's cute pink shroud and solitary cooling fan give it a unique aesthetic. But don't let the pink looks fool you - Zephyr claims this compact powerhouse can keep the GPU and memory up to 10°C cooler than some dual-fan RTX 4070 designs, which needs to be taken with a grain of salt. Thermal testing videos show the fan spinning at 2,400 RPM to maintain GPU temperatures around 73°C under load in a 25°C room. Meanwhile, synthetic benchmarks reportedly demonstrate no performance compromises versus full-sized RTX 4070 implementations.

Zephyr's initial production run has already sold out in China, but a second batch is slated for mid-July availability to meet the apparently high demand for small form factor RTX 40-series GPUs. The launch comes just weeks after NVIDIA unveiled new "SFF-ready" design guidelines at Computex 2024. As the power-hungry RTX 40 lineup hit the market, many voiced concerns over the cards' ever-growing dimensions, and NVIDIA's renewed SFF focus signals that options like the Sakura Blizzard could become more common. For space-constrained enthusiasts, having top-tier GPU muscle in a properly cooled Mini-ITX card is a big win. Zephyr's ITX-sized RTX 4070 shows that powerful things can come in small packages, and we hope more manufacturers follow this philosophy.

Western Digital Introduces New Enterprise AI Storage Solutions and AI Data Cycle Framework

Fueling the next wave of AI innovation, Western Digital today introduced a six-stage AI Data Cycle framework that defines the optimal storage mix for AI workloads at scale. This framework will help customers plan and develop advanced storage infrastructures to maximize their AI investments, improve efficiency, and reduce the total cost of ownership (TCO) of their AI workflows. AI models operate in a continuous loop of data consumption and generation - processing text, images, audio and video among other data types while simultaneously producing new unique data. As AI technologies become more advanced, data storage systems must deliver the capacity and performance to support the computational loads and speeds required for large, sophisticated models while managing immense volumes of data. Western Digital has strategically aligned its Flash and HDD product and technology roadmaps to the storage requirements of each critical stage of the cycle, and today introduced a new industry-leading, high-performance PCIe Gen 5 SSD to support AI training and inference; a high-capacity 64 TB SSD for fast AI data lakes; and the world's highest capacity ePMR, UltraSMR 32 TB HDD for cost-effective storage at scale.

"There's no doubt that Generative AI is the next transformational technology, and storage is a critical enabler. The implications for storage are expected to be significant as the role of storage, and access to data, influences the speed, efficiency and accuracy of AI Models, especially as larger and higher-quality data sets become more prevalent," said Ed Burns, Research Director at IDC. "As a leader in Flash and HDD, Western Digital has an opportunity to benefit in this growing AI landscape with its strong market position and broad portfolio, which meets a variety of needs within the different AI data cycle stages."

Sparkle Presents Streamer 4K60 Video Capture Card at Computex 2024

At Computex 2024, Sparkle has unveiled its latest product aimed at streamers, the Streamer 4K60. This innovative PCIe video capture card is engineered to cater to the demands of high-quality video production. Sparkle has seamlessly integrated many advanced features into this remarkable device, ensuring that content creators and streamers can unleash their creativity without compromise. The Streamer 4K60 boasts the ability to capture ultra-high-definition video at a smooth 60 frames per second in 4K resolution or even 120 frames per second in 1080p resolution. Its dual HDMI input support facilitates video passthrough and enables users to explore the realms of Picture-in-Picture (PiP) and cooperative streaming functionalities. Furthermore, this cutting-edge device effortlessly combines HDR10 support, a robust metal cover design for durability, and comprehensive software compatibility with popular platforms like OBS and XSplit. For connection to the PC, it requires a PCIe 2.0 x1 connector. For video passthrough and video input, HDMI 2.1 is used. The card is priced at 299 USD and will be available in late July or early August.

Maxsun Puts the PCIe x16 Slot on the Back of its Latest B760 Mini-ITX Motherboard

We've seen a lot of motherboards with the power, SATA and USB connectors on the rear at Computex this year, but Chinese manufacturer Maxsun decided to put the PCIe x16 slot on the back of its Mini-ITX MS-Terminator B760BKB D5 motherboard. This might seem like a crazy move, but with the right chassis it means the graphics card won't need a PCIe extension cable while still fitting into a compact enclosure. It has also become increasingly hard to retain signal integrity over ribbon cables, and with PCIe 5.0 it might be impossible. Another benefit of this design is that the board still has a PCIe 3.0 x4 slot available for something like a 10 Gbps network card, or just about anything else you'd want to plug into a compact PC.

The overall board specs don't really stick out from the crowd, with the Intel B760 chipset being a somewhat limiting factor. In addition to the PCIe 5.0 x16 slot and the PCIe 3.0 x4 slot, the board has a pair of PCIe 4.0 M.2 NVMe slots—one on each side of the PCB—and a SlimSAS (SFF-8654) connector. On top of that, it offers two DDR5 DIMM slots, four SATA ports, a 2.5 Gbps Ethernet port, a DP 1.2 port, an HDMI 2.0 port, a USB Type-C port of unspecified speed, plus WiFi 6 and Bluetooth 5.2. The power design consists of a fairly basic 8+1+1 phase setup. We'd like to see this board design become a standard, as moving the PCIe x16 slot to the rear makes a lot of sense for the SFF market, allowing a more compact chassis without having to compromise on the choice of graphics card.

Mnemonic Electronic Debuts at COMPUTEX 2024, Embracing the Era of High-Capacity SSDs

On June 4th, COMPUTEX 2024 was successfully held at the Taipei Nangang Exhibition Center. Mnemonic Electronic Co., Ltd., the Taiwanese subsidiary of Longsys, showcased industry-leading high-capacity SSDs under the theme "Embracing the Era of High-Capacity SSDs." The products on display included the Mnemonic MS90 8TB SATA SSD, FORESEE ORCA 4836 series enterprise NVMe SSDs, FORESEE XP2300 PCIe Gen 4 SSDs, and rich product lines comprising embedded storage, memory modules, memory cards, and more. The company offers reliable industrial-grade, automotive-grade, and enterprise-grade storage products, providing high-capacity solutions for global users.

High-Capacity SSDs
For SSDs, Mnemonic Electronic presented products in various form factors and interfaces, including PCIe M.2, PCIe BGA, SATA M.2, and SATA 2.5-inch. The Mnemonic MS90 8 TB SATA SSD supports the SATA interface with a speed of up to 6 Gb/s (Gen 3) and is backward compatible with Gen 1 and Gen 2. It also supports various SATA low-power states (Partial/Sleep/Device Sleep) and can be used for nearline HDD replacement, surveillance, and high-speed rail systems.

Marvell Expands Connectivity Portfolio With New PCIe Gen 6 Retimer Product Line

Marvell Technology, a leader in data infrastructure semiconductor solutions, today expanded its connectivity portfolio with the launch of the new Alaska P PCIe retimer product line built to scale data center compute fabrics inside accelerated servers, general-purpose servers, CXL systems and disaggregated infrastructure. The first two products, 8- and 16-lane PCIe Gen 6 retimers, connect AI accelerators, GPUs, CPUs and other components inside server systems.

Artificial intelligence (AI) and machine learning (ML) applications are driving data flows and connections inside server systems at significantly higher bandwidth, necessitating PCIe retimers to meet the required connection distances at faster speeds. PCIe is the industry standard for inside-server-system connections between AI accelerators, GPUs, CPUs and other server components. AI models are doubling their computation requirements every six months and are now the primary driver of the PCIe roadmap, with PCIe Gen 6 becoming a requirement.
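
Doubling every six months compounds to a 4x increase per year; a minimal sketch of the implied multiplier over a few years:

    # Compute demand doubling every 6 months equals 4x per year, i.e. 2^(2 * years) overall.
    for years in (1, 2, 3, 4):
        print(f"After {years} year(s): ~{2 ** (2 * years)}x the compute requirement")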

Lenovo Announces its New AI PC ThinkPad P14s Gen 5 Mobile Workstation Powered by AMD Ryzen PRO Processors

Today, Lenovo launched the Lenovo ThinkPad P14s Gen 5, designed for professionals who need top-notch performance in a portable 14-inch chassis. Featuring a stunning 16:10 display, this mobile workstation is powered by AMD Ryzen PRO 8040 HS-Series processors. These processors are ultra-advanced and energy-efficient, making them perfect for thin and light mobile workstations. The AMD Ryzen PRO HS-Series processors also come with built-in Artificial Intelligence (AI) capabilities, including an integrated Neural Processing Unit (NPU) for optimized performance in AI workflows.

The Lenovo ThinkPad P14s Gen 5 comes with independent software vendor (ISV) certifications and integrated AMD Radeon graphics, making it ideal for running applications like AutoCAD, Revit, and SOLIDWORKS with seamless performance. Aimed at mobile power users, the workstation offers advanced ThinkShield security features and passes comprehensive MIL-SPEC testing for ultimate durability.

European Supercomputer Chip SiPearl Rhea Delayed, But Upgraded with More Cores

The rollout of SiPearl's much-anticipated Rhea processor for European supercomputers has been pushed back by a year to 2025, but the delay comes with a silver lining - a significant upgrade in core count and potential performance. Originally slated to arrive in 2024 with 72 cores, the homegrown high-performance chip will now pack 80 cores when it eventually launches. This decisive move by SiPearl and its partners is a strategic choice to ensure the utmost quality and capabilities for the flagship European processor. The additional 12 months will allow the engineering teams to further refine the chip's architecture, carry out extensive testing, and optimize software stacks to take full advantage of Rhea's computing power. Now called the Rhea1, the chip is a crucial component of the European Processor Initiative's mission to develop domestic high-performance computing technologies and reduce reliance on foreign processors. Supercomputer-scale simulations spanning climate science, drug discovery, energy research and more all require astonishing amounts of raw compute grunt.

By scaling up to 80 cores based on the latest Arm Neoverse V1, Rhea1 aims to go toe-to-toe with the world's most powerful processors optimized for supercomputing workloads. SiPearl plans to use TSMC's N6 manufacturing process. The CPU will have a 256-bit DDR5 memory interface, 104 PCIe 5.0 lanes, and four stacks of HBM2E memory. The roadmap shift also provides more time for the expansive European supercomputing ecosystem to prepare robust software stacks tailored for the upgraded Rhea silicon. Ensuring smooth deployment with existing models and enabling future breakthroughs are top priorities. While the delay is a setback for SiPearl's launch schedule, the substantial upgrade could pay significant dividends for Europe's ambitions to join the elite ranks of worldwide supercomputing power. All eyes will be on Rhea1's delivery in 2025, particularly from the European governments funding the project.

Apacer Showcases the Latest in Backup and Recovery Technology at Automate 2024

Thanks to recent developments in the AI field, and following in the wake of the world's recovery from COVID-19, the transition of factories to partial or full automation proceeds with unstoppable momentum. And the best place to learn about the latest technologies that aim to make this transition as painless as possible is at Automate 2024. This is North America's largest robotics and automation event, and it will be held in Chicago, Illinois from May 6 to 9.

Automate attracts professionals from around the world, and Apacer is no exception. Apacer's team will be on hand to discuss the latest technological developments created by its experienced R&D team. Many of these developments were specifically created to reduce the pain points commonly experienced by fully automated facilities. Take CoreSnapshot, for example. This backup and recovery technology can restore a crashed system to full operation in just a few seconds, reducing downtime and associated maintenance costs. Apacer recently updated CoreSnapshot, creating CoreRescue ASR and CoreRescue USR. The ASR in CoreRescue ASR stands for Auto Self Recovery: the technology harnesses AI to learn the system's booting process and how long a boot normally takes. If a boot takes significantly longer than that learned average, the system triggers the self-recovery process and reverts to an earlier, uncorrupted version of the drive's contents. CoreRescue USR offers similar functionality, except the self-recovery process is triggered by connecting a small USB stick drive.
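
Apacer has not published implementation details, but the described behavior (learn a typical boot duration, then trigger a rollback when a boot runs far longer than normal) could be sketched roughly as follows. This is a hypothetical illustration, not Apacer's code; restore_snapshot stands in for the CoreSnapshot rollback:

    # Hypothetical sketch of boot-time anomaly detection in the spirit of CoreRescue ASR.
    # All names and thresholds are illustrative assumptions, not Apacer's implementation.
    from statistics import mean

    class BootWatchdog:
        def __init__(self, threshold_factor=2.0, history_size=20):
            self.history = []                    # recent successful boot durations (seconds)
            self.threshold_factor = threshold_factor
            self.history_size = history_size

        def record_successful_boot(self, duration_s):
            self.history.append(duration_s)
            self.history = self.history[-self.history_size:]

        def check(self, current_boot_s, restore_snapshot):
            # Trigger recovery only once a baseline has been learned and the
            # current boot takes much longer than the learned average.
            if self.history and current_boot_s > self.threshold_factor * mean(self.history):
                restore_snapshot()               # revert to an earlier, uncorrupted image
                return True
            return False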

AMD "Strix Point" Mobile Processor Confirmed 12-core/24-thread, But Misses Out on PCIe Gen 5

AMD's next-generation Ryzen 9000 "Strix Point" mobile processor, which succeeds the current Ryzen 8040 "Hawk Point" and Ryzen 7040 "Phoenix," is confirmed to feature a 12-core/24-thread CPU configuration, according to a specs leak by HKEPC citing sources among notebook OEMs. It appears that Computex 2024 will be big for AMD, with the company preparing next-gen processor announcements across its desktop and notebook lines. Both the "Strix Point" mobile processor and the "Granite Ridge" desktop processor debut the company's next "Zen 5" microarchitecture.

Perhaps the biggest takeaway from "Zen 5" is that AMD has increased the number of CPU cores per CCX from 8 in "Zen 3" and "Zen 4" to 12 in "Zen 5." While this doesn't affect the core counts of its CCD chiplets (which are still expected to be 8-core), the "Strix Point" processor appears to use one large CCX with 12 cores. Each "Zen 5" core has 1 MB of dedicated L2 cache, while the 12 cores share a 24 MB L3 cache. Beyond the generational IPC gains introduced by "Zen 5," the 12-core/24-thread CPU marks a 50% increase in CPU muscle over "Hawk Point." And it's not just the CPU complex; the iGPU sees a hardware update as well.
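
The cache and core figures above add up as follows (simple arithmetic on the leaked numbers):

    # Simple tally of the leaked "Strix Point" CCX configuration.
    cores, l2_per_core_mb, shared_l3_mb = 12, 1, 24
    print(f"Total L2: {cores * l2_per_core_mb} MB, shared L3: {shared_l3_mb} MB")
    print(f"Core-count increase over an 8-core predecessor: {(cores - 8) / 8:.0%}")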

Aetina Accelerates Embedded AI with High-performance, Small Form-factor Aetina IA380E-QUFL Graphics Card

Aetina, a leading Edge AI solution provider, announced the launch of the Aetina IA380E-QUFL at Embedded World 2024 in Nuremberg, Germany. This groundbreaking product is a small form factor PCIe graphics card powered by the high-performance Intel Arc A380E GPU.

Unmatched Power in a Compact Design
The Aetina IA380E-QUFL delivers workstation-level performance packed into a low-profile, single-slot form factor. This innovative solution consumes only 50 W, making it ideal for space- and power-constrained edge computing environments. Embedded system manufacturers and integrators can leverage the 4.096 TFLOPS of peak FP32 performance delivered by the Intel Arc A380E GPU.

Intel Launches Gaudi 3 AI Accelerator: 70% Faster Training, 50% Faster Inference Compared to NVIDIA H100, Promises Better Efficiency Too

During the Vision 2024 event, Intel announced its latest Gaudi 3 AI accelerator, promising significant improvements over its predecessor. Intel claims the Gaudi 3 offers up to 70% better training performance, 50% better inference, and 40% better efficiency than NVIDIA's H100 processors. The new AI accelerator comes as a PCIe Gen 5 dual-slot add-in card with a 600 W TDP or as an OAM module with a 900 W TDP. The PCIe card has the same peak 1,835 TeraFLOPS of FP8 performance as the OAM module despite the 300 W lower TDP. The PCIe version works in groups of four per system, while the OAM HL-325L modules can run in an eight-accelerator configuration per server. The lower TDP will likely result in lower sustained performance, but it confirms that the same silicon is used, just fine-tuned to a lower frequency. Built on TSMC's N5 5 nm node, the AI accelerator features 64 Tensor Cores, delivering double the FP8 and quadruple the FP16 performance of the previous-generation Gaudi 2.

The Gaudi 3 AI chip comes with 128 GB of HBM2E offering 3.7 TB/s of bandwidth and 24 integrated 200 Gbps Ethernet NICs, with dual 400 Gbps NICs used for scale-out. All of that is laid out across the 10 tiles that make up the Gaudi 3 accelerator. There is 96 MB of SRAM split between the two compute tiles, acting as a low-level cache that bridges data communication between the Tensor Cores and HBM memory. Intel also announced support for the new performance-boosting standardized MXFP4 data format and is developing an AI NIC ASIC for Ultra Ethernet Consortium-compliant networking. Gaudi 3 supports clusters of up to 8,192 cards, built from 1,024 nodes of eight accelerators each. It is on track for volume production in Q3, offering a cost-effective alternative to NVIDIA accelerators with the additional promise of a more open ecosystem. More information and a deeper dive can be found in the Gaudi 3 whitepaper.
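
A couple of quick figures derived from the specs above (our own arithmetic, not Intel's numbers):

    # Derived figures from the quoted Gaudi 3 specs.
    fp8_tflops = 1835
    pcie_tdp_w, oam_tdp_w = 600, 900
    accelerators_per_node, nodes = 8, 1024

    print(f"Peak FP8 per watt (PCIe card): {fp8_tflops / pcie_tdp_w:.2f} TFLOPS/W")
    print(f"Peak FP8 per watt (OAM module): {fp8_tflops / oam_tdp_w:.2f} TFLOPS/W")
    print(f"Maximum cluster size: {accelerators_per_node * nodes} accelerators")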

AMD Launches Ryzen Embedded 8000 Series Processors with Integrated NPUs for Industrial AI

AMD has introduced the Ryzen Embedded 8000 Series processors, the first AMD embedded devices to combine NPUs based on the AMD XDNA architecture with traditional CPU and GPU elements, optimized for workload versatility and adaptability targeting industrial AI applications. Embedded solution engineers and developers can harness the processing power and leadership features for a variety of industrial AI applications including machine vision, robotics, and industrial automation. AI is widely used in machine vision applications today to enhance quality control and inspection processes.

AI can also help robots make real-time, route-planning decisions and adapt to dynamic environments. In industrial automation, AI processing helps intelligent edge devices perform complex analysis and decision-making without relying on cloud connectivity. This allows for real-time monitoring, predictive maintenance, and autonomous control of industrial processes, enhancing operational efficiency and reducing downtime.

Cervoz Introduces T425 Series of Industrial M.2 NVMe SSDs

Cervoz brings its new storage solution to industrial applications with the launch of the T425 Series M.2 NVMe SSDs, available in the compact M.2 2230 (B+M) and M.2 2242 (B+M) form factors. These PCIe Gen 3 x2 SSDs pack impressive performance into small footprints. Engineered for reliability and efficiency, the T425 Series provides industrial-grade solutions for embedded systems and space-constrained applications.

Space-Saving Form Factors for Seamless Integration
The tiny size of the T425 Series SSDs enables easy integration into small, fanless devices where internal space is limited. From in-vehicle systems and handheld scanners to medical equipment and industrial PCs, these SSDs allow seamless upgrades without compromising capacity or performance.

Silicon Motion Unveils High-Performance Single Chip PCIe Gen4.0 BGA Ferri SSD with i-temp for Industrial and Automotive Applications

Silicon Motion Technology Corporation ("Silicon Motion"), a global leader in designing and marketing NAND flash controllers for solid-state storage devices, today introduced the new generation FerriSSD NVMe PCIe Gen 4 x4 BGA SSD. This latest solution features support for i-temp and integrates advanced IntelligentSeries technology, delivering robust data integrity in extreme temperature environments that meet the rigorous demands of industrial embedded systems and automotive applications.

The latest FerriSSD BGA SSD supports PCIe Gen 4 x4 and uses high-density 3D NAND within a compact 16 mm x 20 mm BGA chip-scale package. With storage capacities up to 1 TB, these high-performance embedded SSDs utilize Silicon Motion's latest innovations to achieve sequential read speeds exceeding 6 GB/s and sequential write speeds exceeding 4 GB/s. The drive is equipped with Silicon Motion's proprietary IntelligentSeries data protection technology, which enhances reliability and performance through encryption, data caching, data scanning and protection features, and it meets i-temp requirements for operation in extreme temperatures from -40°C to +85°C. This latest FerriSSD offers a high-performance, highly reliable embedded storage solution for a broad range of applications and operating environments, including in-car computing, thin client computing, point-of-sale terminals, multifunction printers, telecommunications equipment, factory automation tools, and a wide range of server applications.

Other World Computing Launches SoftRAID 8 Setting a New Standard for Reliability, Speed and Data Safeguards

Other World Computing, the leading provider of computer hardware, accessories, and software that bring artistic expression and the digital world together for creative professionals and consumers of technology, today unveiled SoftRAID 8, a groundbreaking new software release that redefines RAID management for Mac and Windows environments.

Quickly accessing data with the right safeguards is a difficult balance. Whether it is enhancing multimedia production workflows, protecting critical business files, or ensuring uninterrupted access to valuable data, OWC's SoftRAID is the ideal solution to manage RAID arrays. SoftRAID implements the latest performance technology unleashing remarkable speeds on a RAID system. Simply connect the drive array, format the preferred RAID level, and experience the breakneck speeds first-hand. RAID management has never been this powerful and easy to use.

SK hynix Unveils Highest-Performing SSD for AI PCs at NVIDIA GTC 2024

At GPU Technology Conference (GTC) 2024, SK hynix unveiled a new consumer product based on its latest solid-state drive (SSD), PCB01, which boasts industry-leading performance. Hosted by NVIDIA in San Jose, California from March 18-21, GTC is one of the world's leading conferences for AI developers. Aimed at on-device AI PCs, PCB01 is a fifth-generation PCIe SSD which recently had its performance and reliability verified by a major global customer. After completing product development in the first half of 2024, SK hynix plans to launch two versions of PCB01 by the end of the year, targeting both major technology companies and general consumers.

Optimized for AI PCs, Capable of Loading LLMs Within One Second
Offering the industry's highest sequential read speed of 14 gigabytes per second (GB/s) and a sequential write speed of 12 GB/s, PCB01 doubles the speed specifications of its previous generation. This enables the loading of LLMs required for AI learning and inference in less than one second. To make on-device AI work, PC manufacturers store an LLM in the PC's internal storage and quickly transfer the data to DRAM for AI tasks; in this process, the PCB01 inside the PC efficiently supports the loading of LLMs. SK hynix expects these characteristics of its latest SSD to greatly increase the speed and quality of on-device AI.
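
The sub-one-second claim is plausible from a simple sequential-read estimate; as an illustrative assumption (not SK hynix's stated test case), a 7-billion-parameter model stored in FP16 weighs roughly 14 GB:

    # Back-of-the-envelope LLM load time at the quoted sequential read speed.
    # The model size is an illustrative assumption: 7B parameters x 2 bytes (FP16).
    params = 7e9
    bytes_per_param = 2
    read_speed_gb_s = 14  # PCB01's quoted sequential read speed

    model_gb = params * bytes_per_param / 1e9
    print(f"~{model_gb:.0f} GB model loads in ~{model_gb / read_speed_gb_s:.1f} s at {read_speed_gb_s} GB/s")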

ScaleFlux To Integrate Arm Cortex-R82 Processors in Its Next-Generation Enterprise SSD Controllers

ScaleFlux, a leader in deploying computational storage at scale, today announced its commitment to integrating the Arm Cortex-R82 processor in its forthcoming line of enterprise solid-state drive (SSD) controllers. The Cortex-R82 is the highest-performance real-time processor from Arm and the first to implement the 64-bit Armv8-R AArch64 architecture, representing a significant advancement in processing power and efficiency for enterprise storage solutions.

ScaleFlux's adoption of the Cortex-R82 is a strategic move to leverage the processor's high performance and energy efficiency. This collaboration underscores ScaleFlux's dedication to delivering cutting-edge technology in its SSD controllers, enhancing data processing capabilities and efficiency for data center and AI infrastructure worldwide.

QNAP Releases the TL-R2400PES-RP PCIe JBOD Storage Enclosure

QNAP Systems, Inc. today launched the TL-R2400PES-RP, a new PCIe JBOD storage enclosure featuring PCIe Gen 3 x8 connectivity and providing up to 64 Gb/s of data transfer. Following the release of the 12- and 16-bay models of the TL-Rx00PES-RP PCIe JBOD series, QNAP extends the lineup with the 24-bay 4U rackmount TL-R2400PES-RP. Users can expand existing NAS storage volumes to petabyte-class by connecting multiple TL-Rx00PES-RP series JBODs, without requiring RAID rebuilding on the host NAS. The TL-Rx00PES-RP series uses SATA drives, allowing businesses to choose from a wide range of enterprise hard drives. The series is ideal for businesses that want to archive or back up virtualization applications, surveillance recordings, multimedia, and other large data sets.

Power on/off for the TL-Rx00PES-RP series is linked to the host NAS, which helps reduce hardware management tasks for IT staff. A QXP-3X8PES (PCIe Gen 3 x8) or QXP-3X4PES (PCIe Gen 3 x4) storage expansion card is required for the NAS to scale up using TL-Rx00PES-RP series expansion enclosures.
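
The quoted 64 Gb/s figure follows directly from the link configuration; a minimal check of the arithmetic:

    # PCIe Gen 3 x8 raw and usable throughput.
    lanes = 8
    gen3_gt_s_per_lane = 8           # GT/s per lane for PCIe Gen 3
    encoding_efficiency = 128 / 130  # PCIe Gen 3 uses 128b/130b encoding

    raw_gbps = lanes * gen3_gt_s_per_lane
    usable_gbps = raw_gbps * encoding_efficiency
    print(f"Raw: {raw_gbps} Gb/s, usable: ~{usable_gbps:.1f} Gb/s (~{usable_gbps / 8:.1f} GB/s)")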

Supermicro Accelerates Performance of 5G and Telco Cloud Workloads with New and Expanded Portfolio of Infrastructure Solutions

Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, delivers an expanded portfolio of purpose-built infrastructure solutions to accelerate performance and increase efficiency in 5G and telecom workloads. With one of the industry's most diverse offerings, Supermicro enables customers to expand public and private 5G infrastructures with improved performance per watt and support for new and innovative AI applications. As a long-term advocate of open networking platforms and a member of the O-RAN Alliance, Supermicro's portfolio incorporates systems featuring 5th Gen Intel Xeon processors, AMD EPYC 8004 Series processors, and the NVIDIA Grace Hopper Superchip.

"Supermicro is expanding our broad portfolio of sustainable and state-of-the-art servers to address the demanding requirements of 5G and telco markets and Edge AI," said Charles Liang, president and CEO of Supermicro. "Our products are not just about technology, they are about delivering tangible customer benefits. We quickly bring data center AI capabilities to the network's edge using our Building Block architecture. Our products enable operators to offer new capabilities to their customers with improved performance and lower energy consumption. Our edge servers contain up to 2 TB of high-speed DDR5 memory, 6 PCIe slots, and a range of networking options. These systems are designed for increased power efficiency and performance-per-watt, enabling operators to create high-performance, customized solutions for their unique requirements. This reassures our customers that they are investing in reliable and efficient solutions."

Cadence Digital and Custom/Analog Flows Certified for Latest Intel 18A Process Technology

Cadence's digital and custom/analog flows are certified on the Intel 18A process technology. Cadence design IP supports this node from Intel Foundry, and the corresponding process design kits (PDKs) are delivered to accelerate the development of a wide variety of low-power consumer, high-performance computing (HPC), AI and mobile computing designs. Customers can now begin using the production-ready Cadence design flows and design IP to achieve design goals and speed up time to market.

"Intel Foundry is very excited to expand our partnership with Cadence to enable key markets for the leading-edge Intel 18A process technology," said Rahul Goyal, Vice President and General Manager, Product and Design Ecosystem, Intel Foundry. "We will leverage Cadence's world-class portfolio of IP, AI design technologies, and advanced packaging solutions to enable high-volume, high-performance, and power-efficient SoCs in Intel Foundry's most advanced process technology. Cadence is an indispensable partner supporting our IDM2.0 strategy and the Intel Foundry ecosystem."