News Posts matching #Scalable


Intel Prepares 500-Watt Xeon 6 SKUs of Granite Rapids and Sierra Forest

Intel is preparing to unveil its cutting-edge Xeon 6 series server CPUs, known as Granite Rapids and Sierra Forest. These forthcoming processors are set to deliver a significant boost in performance, foreshadowing a new era of computing power, albeit with a trade-off in increased power consumption. Two days ago, Yuuki_Ans posted information about the Beechnut City validation platform. Today, he updated the X thread with more information showing that Intel is significantly boosting core counts across its new Xeon 6 lineup. The flagship Xeon 6 6980P is a behemoth, packing 128 cores with a blistering 500 Watt Thermal Design Power (TDP) rating. In fact, Intel is equipping five of its Xeon 6 CPUs with a sky-high 500 W TDP, including the top four Granite Rapids parts and even the flagship Sierra Forest SKU, which is composed entirely of efficiency cores. This marks a substantial increase from Intel's previous Xeon Scalable processors, which maxed out at 350-385 Watts.

The trade-off for this performance boost is a dramatic rise in power consumption. By nearly doubling the TDP ceiling, Intel can double the core count from 64 to 128 cores on its Granite Rapids CPUs, vastly improving its multi-core capabilities. However, this focus on raw performance over power efficiency means server manufacturers must redesign their cooling solutions to adequately accommodate Intel's flagship 500 W parts. Failure to do so could lead to thermal throttling. Intel's next-gen Xeon CPU architectures are shaping up to be one of the most considerable generational leaps in recent memory. Still, they come with a trade-off in power consumption that vendors and data centers will need to address. Densely packing thousands of these 500-Watt SKUs will create new power and thermal challenges, and we await future data center projects that utilize them.
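For a rough sense of what the higher TDP ceiling buys, the sketch below simply divides the TDP figures quoted above by the corresponding core counts; it restates the reported numbers and is not an efficiency measurement.

```c
/* Rough power-per-core comparison implied by the figures above: the TDP
 * ceiling rises from 350 W to 500 W while the flagship core count doubles
 * from 64 to 128, so nominal power per core actually drops. */
#include <stdio.h>

int main(void) {
    printf("previous flagship: %.1f W per core\n", 350.0 / 64.0);  /* ~5.5 W */
    printf("Xeon 6 6980P     : %.1f W per core\n", 500.0 / 128.0); /* ~3.9 W */
    return 0;
}
```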

Intel Xeon Scalable Gets a Rebrand: Intel "Xeon 6" with Granite Rapids and Sierra Forest Start a New Naming Scheme

During the Vision 2024 event, Intel announced that its upcoming Xeon processors will be branded under the new "Xeon 6" moniker. This rebranding effort aims to simplify the company's product stack and align with the recent changes made to its consumer CPU naming scheme. In contrast to the previous "Nth Generation Xeon Scalable" convention, the new branding simplifies the product family. The highly anticipated Sierra Forest and Granite Rapids chips will be the first processors to bear the Xeon 6 branding, and they are set to launch in the coming months. Intel has confirmed that Sierra Forest, designed entirely with efficiency cores (E-cores), remains on track for release this quarter. Supermicro has already announced early availability and remote testing programs for these chips. Intel's Sierra Forest is set to deliver a substantial leap in performance. According to the company, it will offer a 2.4X improvement in performance per watt and a staggering 2.7X better performance per rack compared to the previous generation. This means that 72 Sierra Forest server racks will provide the same performance as 200 racks equipped with older second-gen Xeon CPUs, leading to significant power savings and a boost in overall efficiency for data centers upgrading their systems.
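As a quick sanity check on the rack-consolidation claim, the snippet below computes the implied per-rack improvement from the two rack counts quoted above; no power figures are assumed.

```c
/* If 72 Sierra Forest racks match 200 racks of older 2nd Gen Xeon systems
 * for the same workload, the per-rack performance factor is 200 / 72. */
#include <stdio.h>

int main(void) {
    double old_racks = 200.0;  /* 2nd Gen Xeon racks for a fixed workload */
    double new_racks = 72.0;   /* Sierra Forest racks for the same workload */
    printf("performance per rack: %.2fx\n", old_racks / new_racks); /* ~2.78x */
    return 0;                  /* consistent with the quoted 2.7X figure */
}
```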

Intel has also teased an exciting feature in its forthcoming Granite Rapids processors: support for the MXFP4 data format. This new precision format, backed by the Open Compute Project (OCP) and major industry players like NVIDIA, AMD, and Arm, promises to revolutionize performance. It could reduce next-token latency by up to 6.5X compared to fourth-gen Xeons using FP16. Additionally, Intel stated that Granite Rapids will be capable of running 70 billion parameter Llama-2 models, a capability that could open up new possibilities in data processing. Intel claims that a 70-billion-parameter model quantized to 4-bit runs entirely on Xeon in just 86 milliseconds. While Sierra Forest is slated for this quarter, Intel has not provided a specific launch timeline for Granite Rapids, stating only that it will arrive "soon after" its E-core counterpart. The Xeon 6 branding aims to simplify the product stack and clarify customer performance tiers as the company gears up for these major releases.
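For readers unfamiliar with MXFP4: under the OCP Microscaling (MX) formats, a small block of values shares one power-of-two scale while each element is stored as a 4-bit FP4 (E2M1) code. The sketch below illustrates that block-quantization idea with toy data and a simplified scale rule; it is not Intel's implementation and does not follow the exact OCP rounding specification.

```c
/* A minimal, illustrative sketch of MXFP4-style block quantization: a block
 * of 32 values shares one power-of-two scale, and each element is snapped to
 * a 4-bit FP4 (E2M1) magnitude. Simplified model for intuition only.
 * Compile with -lm. */
#include <math.h>
#include <stdio.h>

#define BLOCK 32

/* The eight non-negative magnitudes representable by FP4 E2M1. */
static const float fp4_levels[8] = {0.0f, 0.5f, 1.0f, 1.5f, 2.0f, 3.0f, 4.0f, 6.0f};

/* Snap a non-negative scaled value to the nearest FP4 E2M1 level. */
static float fp4_round(float v) {
    float best = fp4_levels[0];
    for (int i = 1; i < 8; i++)
        if (fabsf(v - fp4_levels[i]) < fabsf(v - best))
            best = fp4_levels[i];
    return best;
}

int main(void) {
    float x[BLOCK], xq[BLOCK];
    for (int i = 0; i < BLOCK; i++)      /* toy input block */
        x[i] = sinf((float)i) * 0.37f;

    /* Shared power-of-two scale chosen so the block maximum fits within
     * FP4's largest magnitude (6.0) without clipping. */
    float amax = 0.0f;
    for (int i = 0; i < BLOCK; i++)
        if (fabsf(x[i]) > amax) amax = fabsf(x[i]);
    float scale = exp2f(ceilf(log2f(amax / 6.0f)));

    /* Quantize each element to the FP4 grid, then dequantize with the scale. */
    double sq_err = 0.0;
    for (int i = 0; i < BLOCK; i++) {
        float mag = fp4_round(fabsf(x[i]) / scale);
        xq[i] = copysignf(mag * scale, x[i]);
        sq_err += (double)(x[i] - xq[i]) * (x[i] - xq[i]);
    }
    printf("shared scale = %g, RMS quantization error = %g\n",
           scale, sqrt(sq_err / BLOCK));
    return 0;
}
```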

Intel Xeon "Granite Rapids-SP" 80-core Engineering Sample Leaked

A CPU-Z screenshot has been shared by YuuKi_AnS—the image contains details about an alleged next-gen Intel Xeon Scalable processor engineering sample (ES). The hardware tipster noted in yesterday's post that an error had occurred in the application's identification of this chunk of prototype silicon. CPU-Z v2.09 has recognized the basics—an Intel Granite Rapids-SP processor specced with 80 cores, a 2.5 GHz max frequency, a whopping 672 MB of L3 cache, and a max TDP rating of 350 W. The reported count of 320 threads appears to be CPU-Z's big mistake here—previous Granite Rapids leaks have given no indication that Team Blue's Hyper-Threading technology could produce such impressive numbers.

The alleged prototype status of this Xeon chip is very apparent in CPU-Z's tracking of single and multi-core performance—the benchmark results are well off the mark when compared to finalized current-gen scores (produced by rival silicon). Team Blue's next-gen Xeon series is likely positioned to catch up with AMD EPYC's deployment of large core counts—"Granite Rapids" has been linked to the Intel 3 foundry node, and reports from last month suggest that XCC-type processors could be configured with "counts going up to 56-core/112-threads." Micron is prepping next-gen "Tall Form Factor" memory modules, designed with future enterprise processor platforms in mind—including Intel's Xeon Scalable "Granite Rapids" family. Industry watchers expect Team Blue to launch this series in the coming months.

MiTAC Unleashes Revolutionary Server Solutions, Powering Ahead with 5th Gen Intel Xeon Scalable Processors Accelerated by Intel Data Center GPUs

MiTAC Computing Technology, a subsidiary of MiTAC Holdings Corp., proudly reveals its groundbreaking suite of server solutions that deliver unsurpassed capabilities with the 5th Gen Intel Xeon Scalable Processors. MiTAC introduces its cutting-edge signature platforms that seamlessly integrate Intel Data Center GPUs, both the Intel Max Series and the Intel Flex Series, unleashing an unparalleled leap in computing performance targeting HPC and AI applications.

MiTAC Announces its Full Array of Platforms Supporting the latest 5th Gen Intel Xeon Scalable Processors
Last year, Intel transitioned the right to manufacture and sell products based on Intel Data Center Solution Group designs to MiTAC. MiTAC confidently announces a transformative upgrade to its product offerings, unveiling advanced platforms that epitomize the future of computing. Featuring up to 64 cores, expanded shared cache, increased UPI and DDR5 support, the latest 5th Gen Intel Xeon Scalable Processors deliver remarkable performance-per-watt gains across various workloads. MiTAC's Intel Server M50FCP Family and Intel Server D50DNP Family fully support the latest 5th Gen Intel Xeon Scalable Processors through a quick BIOS update and straightforward technical resource revisions, providing unsurpassed performance to diverse computing environments.

IBM Introduces LinuxONE 4 Express, a Value-oriented Hybrid Cloud & AI Platform

IBM has announced IBM LinuxONE 4 Express, extending the latest performance, security and AI capabilities of LinuxONE to small and medium-sized businesses and within new data center environments. The pre-configured rack mount system is designed to offer cost savings and to remove client guesswork when spinning up workloads quickly and getting started with the platform to address new and traditional use cases such as digital assets, medical imaging with AI, and workload consolidation.

Building an integrated hybrid cloud strategy for today and years to come
As businesses move their products and services online quickly, oftentimes they are left with a hybrid cloud environment created by default, with siloed stacks that are not conducive to alignment across businesses or the introduction of AI. In a recent IBM IBV survey, 84% of executives surveyed acknowledged that their enterprise struggles to eliminate silo-to-silo handoffs. And 78% of responding executives said that an inadequate operating model impedes successful adoption of their multicloud platform. With the pressure to accelerate and scale the impact of data and AI across the enterprise - and improve business outcomes - another approach that organizations can take is to more carefully identify which workloads should be on-premises versus in the cloud.

6th Gen Intel Xeon "Granite Rapids" CPU L3 Cache Totals 480 MB

Intel has recently updated its Software Development Emulator (now version 9.33.0)—InstLatX64 noted some intriguing cache designations for Fifth Generation Xeon Scalable Processors. The "Emerald Rapids" family was introduced at last December's "AI Everywhere" event—with sample units released soon after for review. Tom's Hardware was impressed by the Platinum 8592+ CPU's tripled L3 Cache (over the previous generation): "(it) contributed significantly to gains in Artificial Intelligence inference, data center, video encoding, and general compute workloads. While AMD EPYC generally remains the player to beat in the enterprise CPU space, Emerald Rapids marks a significant improvement from Intel's side of that battlefield, especially as it pertains to Artificial Intelligence workloads and multi-core performance in general."

Intel's SDE 9.33.0 update confirms 320 MB of L3 cache for "Emerald Rapids," but the next line down provides a major "Granite Rapids" insight—480 MB of L3 cache, a 1.5x increase over the previous generation's total. Team Blue's 6th Gen (all P-core) Xeon processor series is expected to launch within the latter half of 2024. The American multinational technology company is evidently keen to take on AMD in the enterprise CPU market segment, although Team Red is already well ahead with its current crop of L3 cache configurations. EPYC CPUs in Genoa and Genoa-X guises offer maximum totals of 384 MB and 1152 MB (respectively). Intel's recently launched "Emerald Rapids" server chips are seen as a good match for Team Red's EPYC "Bergamo" options.
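To put the L3 totals above in perspective, the short program below works out cache per core for the parts mentioned; the 128-core figure for Granite Rapids is an assumption taken from the leaks covered elsewhere on this page, not a confirmed specification.

```c
/* Per-core view of the L3 totals quoted above. Core counts for unreleased
 * parts are assumptions based on leaks, not confirmed specifications. */
#include <stdio.h>

int main(void) {
    struct { const char *name; double l3_mb; int cores; } cpu[] = {
        {"Emerald Rapids (8592+)", 320.0,  64},
        {"Granite Rapids (leak)",  480.0, 128},   /* assumed top core count */
        {"EPYC Genoa",             384.0,  96},
        {"EPYC Genoa-X",          1152.0,  96},
    };
    for (int i = 0; i < 4; i++)
        printf("%-24s %6.0f MB L3 / %3d cores = %5.2f MB per core\n",
               cpu[i].name, cpu[i].l3_mb, cpu[i].cores,
               cpu[i].l3_mb / cpu[i].cores);
    return 0;
}
```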

GIGABYTE Launches Servers Powered by Intel Xeon E-2400 processors and Shares Updates to Support 5th Gen Intel Xeon Scalable Processors

GIGABYTE Technology: Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance servers and IT infrastructure, is thrilled to present a cutting-edge series of servers optimized for the newly launched Intel Xeon E-2400 processors. These servers deliver essential computing power and dependable operation for a wide range of enterprise and edge computing workloads, all while maintaining an impressive price-to-performance ratio.

"We are thrilled to unveil our latest server product line, which is engineered to deliver unparalleled performance and reliability," said Vincent Wang, sales VP at Giga Computing. "By leveraging the power of the new Intel Xeon E processors, our servers empower businesses to elevate their computational capabilities, enabling them to achieve greater efficiency and productivity. Whether it's for enterprise applications or edge computing tasks, GIGABYTE servers are the cornerstone of innovation in the digital landscape." ⁠

TYAN Upgrades HPC, AI and Data Center Solutions with the Power of 5th Gen Intel Xeon Scalable Processors

TYAN, a leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, today introduced upgraded server platforms and motherboards based on the brand-new 5th Gen Intel Xeon Scalable Processors, formerly codenamed Emerald Rapids.

The 5th Gen Intel Xeon processor has increased to 64 cores, featuring a larger shared cache, higher UPI and DDR5 memory speeds, as well as PCIe 5.0 with 80 lanes. Growing and excelling with workload-optimized performance, 5th Gen Intel Xeon delivers more compute power and faster memory within the same power envelope as the previous generation. "5th Gen Intel Xeon is the second processor offering inside the 2023 Intel Xeon Scalable platform, offering improved performance and power efficiency to accelerate TCO and operational efficiency", said Eric Kuo, Vice President of Server Infrastructure Business Unit, MiTAC Computing Technology Corporation. "By harnessing the capabilities of Intel's new Xeon CPUs, TYAN's 5th-Gen Intel Xeon-supported solutions are designed to handle the intense demands of HPC, data centers, and AI workloads."

TYAN Unveils its Robust Immersion Cooling Solution that Delivers Significant PUE Enhancement at SC23

TYAN, an industry leader in server platform design and a subsidiary of MiTAC Computing Technology Corporation, unveils an immersion cooling solution that delivers significant PUE (Power Usage Effectiveness) enhancement and showcases its latest server platforms powered by 4th Gen Intel Xeon Scalable Processors targeting HPC, AI and Cloud Computing applications at SC23, Booth #1917.

Significant PUE Enhancement shown in an Immersion-cooling Tank vs. Conventional Air-cooling Operation Cabinet
The immersion cooling system demonstrated live at the TYAN booth during SC23 is a 4U hybrid single-phase tank enclosure equipped with four TYAN GC68A-B7136 cloud computing servers. Compared to a conventional air-cooled cabinet, this hybrid immersion cooling system offers a substantial improvement in PUE, making it an ideal mission-critical solution for users focused on energy savings and green operation.
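PUE is simply total facility power divided by IT equipment power, so "significant PUE enhancement" translates directly into facility-level savings. The figures in the sketch below are illustrative assumptions, not measurements from TYAN's demo.

```c
/* Back-of-the-envelope PUE comparison: PUE = total facility power / IT power.
 * All numbers here are illustrative assumptions, not measured values. */
#include <stdio.h>

int main(void) {
    double it_kw = 100.0;        /* assumed IT load, kW */
    double air_overhead = 60.0;  /* assumed cooling + facility overhead, air-cooled, kW */
    double imm_overhead = 8.0;   /* assumed overhead with single-phase immersion, kW */

    double pue_air = (it_kw + air_overhead) / it_kw;
    double pue_imm = (it_kw + imm_overhead) / it_kw;

    printf("air-cooled PUE ~ %.2f\n", pue_air);   /* ~1.60 */
    printf("immersion PUE  ~ %.2f\n", pue_imm);   /* ~1.08 */
    printf("facility power saved per 100 kW of IT load: %.0f kW\n",
           air_overhead - imm_overhead);
    return 0;
}
```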

Intel "Emerald Rapids" 8592+ and 8558U Xeon CPUs with 64C and 48C Configurations Spotted

Intel's next-generation Emerald Rapids Xeon lineup is just around the corner, and we are now receiving more leaks as the launch nears. Today, we get to see leaks of two models: a 64-core Xeon 8592+ Platinum and a 48-core Xeon 8558U processor. First is the Xeon 8592+ Platinum, which is possibly Intel's top-end design with 64 cores and 128 threads. Running at a base frequency of 1.9 GHz, the CPU can boost up to 3.9 GHz. This SKU carries 488 MB of total cache, of which 120 MB is L2 and 320 MB is L3. With a TDP of 350 Watts, the CPU can be configured up to 420 Watts.

Next up, we have the Xeon 8558U processor, which has been spotted in Geekbench. The Xeon 8558U is a 48-core, 96-thread CPU with a 2.0 GHz base clock whose boost frequency has yet to be shown or enabled, likely because it is an engineering sample. It carries 96 MB of L2 cache and 260 MB of L3 cache, making for a total of 356 MB of cache (which includes L1D and L1I as well). Both of these SKUs should launch with the remaining models in the Emerald Rapids family, dubbed 5th generation Xeon Scalable, on December 14 this year.

Intel, Dell Technologies and University of Cambridge Announce Deployment of Dawn Supercomputer

Dell Technologies, Intel and the University of Cambridge announce the deployment of the co-designed Dawn Phase 1 supercomputer. Leading technical teams built the U.K.'s fastest AI supercomputer that harnesses the power of both artificial intelligence (AI) and high performance computing (HPC) to solve some of the world's most pressing challenges. This sets a clear way forward for future U.K. technology leadership and inward investment into the U.K. technology sector. Dawn kickstarts the recently launched U.K. AI Research Resource (AIRR), which will explore the viability of associated systems and architectures. Dawn brings the U.K. closer to reaching the compute threshold of a quintillion (10¹⁸) floating point operations per second - one exaflop, better known as exascale. For perspective: Every person on earth would have to make calculations 24 hours a day for more than four years to equal a second's worth of processing power in an exascale system.
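The people-versus-exaflop illustration can be checked with a few lines of arithmetic; the world-population figure below is an assumption of roughly 8 billion.

```c
/* Quick check of the exascale illustration above: one exaFLOP/s is 1e18
 * operations per second, so spreading one second of that work across
 * everyone on Earth at one calculation per second each takes on the
 * order of four years. */
#include <stdio.h>

int main(void) {
    double exaflop = 1e18;                 /* operations in one exascale-second */
    double people = 8.0e9;                 /* assumed world population */
    double per_person = exaflop / people;  /* calculations each person must do */
    double years = per_person / (3600.0 * 24.0 * 365.0); /* at 1 calc/second */
    printf("%.3g calcs per person ~ %.1f years\n", per_person, years);
    return 0;
}
```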

"Dawn considerably strengthens the scientific and AI compute capability available in the U.K., and it's on the ground, operational today at the Cambridge Open Zettascale Lab. Dell PowerEdge XE9640 servers offer a no-compromises platform to host the Intel Data Center GPU Max Series accelerator, which opens up the ecosystem to choice through oneAPI. I'm very excited to see the sorts of early science this machine can deliver and continue to strengthen the Open Zettascale Lab partnership between Dell Technologies, Intel and the University of Cambridge, and further broaden that to the U.K. scientific and AI community," said Adam Roe, EMEA HPC technical director at Intel.

Supermicro Starts Shipments of NVIDIA GH200 Grace Hopper Superchip-Based Servers

Supermicro, Inc., a Total IT Solution manufacturer for AI, Cloud, Storage, and 5G/Edge, is announcing one of the industry's broadest portfolios of new GPU systems based on the NVIDIA reference architecture, featuring the latest NVIDIA GH200 Grace Hopper and NVIDIA Grace CPU Superchip. The new modular architecture is designed to standardize AI infrastructure and accelerated computing in compact 1U and 2U form factors while providing ultimate flexibility and expansion ability for current and future GPUs, DPUs, and CPUs. Supermicro's advanced liquid-cooling technology enables very high-density configurations, such as a 1U 2-node configuration with 2 NVIDIA GH200 Grace Hopper Superchips integrated with a high-speed interconnect. Supermicro can deliver thousands of rack-scale AI servers per month from facilities worldwide and ensures Plug-and-Play compatibility.

"Supermicro is a recognized leader in driving today's AI revolution, transforming data centers to deliver the promise of AI to many workloads," said Charles Liang, president and CEO of Supermicro. "It is crucial for us to bring systems that are highly modular, scalable, and universal for rapidly evolving AI technologies. Supermicro's NVIDIA MGX-based solutions show that our building-block strategy enables us to bring the latest systems to market quickly and are the most workload-optimized in the industry. By collaborating with NVIDIA, we are helping accelerate time to market for enterprises to develop new AI-enabled applications, simplifying deployment and reducing environmental impact. The range of new servers incorporates the latest industry technology optimized for AI, including NVIDIA GH200 Grace Hopper Superchips, BlueField, and PCIe 5.0 EDSFF slots."

Fujitsu Details Monaka: 150-core Armv9 CPU for AI and Data Center

Ever since the creation of A64FX for the Fugaku supercomputer, Fujitsu has been plotting the development of a next-generation CPU design for accelerating AI and general-purpose HPC workloads in the data center. Codenamed Monaka, the CPU is the latest creation for TSMC's 2 nm semiconductor manufacturing node. Based on the Armv9-A ISA, the CPU will feature up to 150 cores with Scalable Vector Extensions 2 (SVE2), so it can process a wide variety of vector data sets in parallel. Using a 3D chiplet design, the 150 cores will be split into different dies and placed alongside SRAM and an I/O controller. The current width of the SVE2 implementation is unknown.

The CPU is designed to support DDR5 memory and PCIe 6.0 connectivity for attaching storage and other accelerators. To bring cache coherency among application-specific accelerators, CXL 3.0 is present as well. Interestingly, Monaka is planned to arrive in FY2027 (Fujitsu's fiscal year begins on April 1). The CPU will supposedly use air cooling, meaning the design aims for power efficiency. Additionally, it is essential to note that Monaka is not a processor that will power the post-Fugaku supercomputer. The post-Fugaku supercomputer will use a post-Monaka design, likely iterating on the design principles that Monaka uses and refining them for the launch of the post-Fugaku supercomputer scheduled for 2030. Below are the slides from Fujitsu's presentation, in Japanese, which highlight the design goals of the CPU.
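As background on the SVE2 mention, the sketch below shows the vector-length-agnostic style that SVE/SVE2 enables through the Arm C Language Extensions: the same loop runs unchanged regardless of the hardware vector width. It is a generic illustration compiled with an assumed flag such as -march=armv9-a+sve2, not Fujitsu code.

```c
/* Minimal sketch of vector-length-agnostic SVE code (Arm C Language
 * Extensions). The loop adapts to whatever vector width the hardware
 * provides, which is what lets SVE2 parts process wide vector data sets
 * in parallel. Function name and usage are illustrative. */
#include <arm_sve.h>
#include <stddef.h>

/* y[i] = a * x[i] + y[i] over n elements, without knowing the vector width. */
void saxpy_sve(float a, const float *x, float *y, size_t n) {
    for (size_t i = 0; i < n; i += svcntw()) {
        svbool_t pg = svwhilelt_b32((uint64_t)i, (uint64_t)n); /* active lanes */
        svfloat32_t vx = svld1_f32(pg, x + i);
        svfloat32_t vy = svld1_f32(pg, y + i);
        vy = svmla_n_f32_x(pg, vy, vx, a);   /* per-lane multiply-add */
        svst1_f32(pg, y + i, vy);
    }
}
```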

MiTAC to Showcase Cloud and Datacenter Solutions, Empowering AI at Intel Innovation 2023

Intel Innovation 2023 - September 13, 2023 - MiTAC Computing Technology, a professional IT solution provider and a subsidiary of MiTAC Holdings Corporation, will showcase its DSG (Datacenter Solutions Group) product lineup powered by 4th Gen Intel Xeon Scalable processors for enterprise, cloud and AI workloads at Intel Innovation 2023, booth #H216 in the San Jose McEnery Convention Center, USA, from September 19-20.

"MiTAC has seamlessly and successfully managed the Intel DSG business since July. The datacenter solution product lineup enhances MiTAC's product portfolio and service offerings. Our customers can now enjoy a comprehensive one-stop service, ranging from motherboards and barebones servers to Intel Data Center blocks and complete rack integration for their datacenter infrastructure needs," said Eric Kuo, Vice President of the Server Infrastructure Business Unit at MiTAC Computing Technology.

Supermicro Launches Industry Leading vSAN HCI Solution

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, today announced a new VMware vSAN solution optimized to run enterprise class hyperconverged virtualized workloads. As virtualized workloads become more advanced, processing power and storage performance requirements increase, requiring greater capacity to meet application SLAs and maximize virtual machine density. This solution also utilizes the latest Intel AMX accelerator for AI workloads.

When compared to the Supermicro X11 BigTwin, benchmark testing conducted by Supermicro showed up to 4.7X higher IO throughput and 8.2X lower latency on the HCIBench benchmark, up to 4.9X faster image classification inference throughput on the ResNet50 model and up to 4X faster natural language processing throughput on BERT-Large model. In addition, the superior power and efficiency of the Supermicro X13 BigTwin architecture can deliver up to 3X cost and performance improvement within the same node footprint compared to a similar deployment based on older generation Supermicro systems, creating a compelling case for organizations to upgrade their aging infrastructure.

MaxLinear Announces Production Availability of Panther III Storage Accelerator OCP Adapter Card

MaxLinear, Inc., a leader in data storage accelerator solutions, today announced the production release of the OCP 3.0 storage accelerator adapter card for Panther III. The ultra-low latency accelerator is designed to accelerate key storage workloads, including database acceleration, storage offload, encryption, compression, and deduplication enablement for maximum data reduction. The Panther III OCP card is ideal for use in modern data centers, including public to edge clouds, enterprise data centers, and telecommunications infrastructure, allowing users to access, process, and transfer data up to 12 times faster than without a storage accelerator. The OCP version of the card is available immediately, with a PCIe version available in Q3 2023.

"In an era where the amount of data generated exceeds new storage installations by multiple fold, Panther III helps reduce the massive storage gap while improving TCO per bit stored," said Dylan Patel, Chief Analyst at SemiAnalysis.

IBM Launches AI-informed Cloud Carbon Calculator

IBM has launched a new tool to help enterprises track greenhouse gas (GHG) emissions across cloud services and advance their sustainability performance throughout their hybrid, multicloud journeys. Now generally available, the IBM Cloud Carbon Calculator - an AI-informed dashboard - can help clients access emissions data across a variety of IBM Cloud workloads such as AI, high performance computing (HPC) and financial services.

Across industries, enterprises are embracing modernization by leveraging hybrid cloud and AI to digitally transform with resiliency, performance, security, and compliance at the forefront, all while remaining focused on delivering value and driving more sustainable business practices. According to a recent study by IBM, 42% of CEOs surveyed pinpoint environmental sustainability as their top challenge over the next three years. At the same time, the study reports that CEOs are facing pressure to adopt generative AI while also weighing the data management needs to make AI successful. The increase in data processing required for AI workloads can present new challenges for organizations that are looking to reduce their GHG emissions. With more than 43% of CEOs surveyed already using generative AI to inform strategic decisions, organizations should prepare to balance executing high performance workloads with sustainability.

AIC Launches HA401-TU, a New High-availability Server Model

AIC has launched the new high-availability storage server HA401-TU, which is optimized for mission-critical, enterprise-level storage applications. This cluster-in-a-box solution features an active-active failover design and eliminates single points of failure. HA401-TU is a 4U high-availability (HA) server with two controller nodes and supports 24 3.5" SAS 12 Gb/s drives. Each controller node is equipped with an AIC Tucana server board that is powered by dual 3rd Gen Intel Xeon Scalable processors and the Intel C621A chipset, which supports UPI speeds up to 11.2 GT/s. HA401-TU provides enterprise users with a number of crucial benefits. The redundant hardware components ensure that there is no single point of failure.

With the hot-swappable functionality, the controller canisters protect enterprises from the loss of revenue that can occur when access to mission-critical data or applications is disrupted. Both controller nodes process data input/output (I/O) operations and users can experience simultaneous and balanced access to logical devices. In the event of failover, the secondary node will automatically take over the devices, client connections and all the processes and services running in the system. This high-availability design significantly enhances the overall performance of clusters, enabling seamless handling of demanding workloads.
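Conceptually, the active-active behavior described above boils down to each controller serving its own logical devices while watching its peer's heartbeat and importing the peer's resources on failure. The sketch below is a highly simplified model of that idea; the node names, LUN counts, and logic are invented for illustration and do not represent AIC's firmware.

```c
/* Highly simplified model of active-active failover: each controller serves
 * its own LUNs; if the peer's heartbeat disappears, the survivor imports the
 * peer's devices and client sessions. Conceptual only; names and counts are
 * made up for illustration. */
#include <stdbool.h>
#include <stdio.h>

struct node {
    const char *name;
    bool alive;          /* peer heartbeat seen within the timeout window */
    int owned_luns;      /* logical devices this controller currently serves */
};

static void heartbeat_check(struct node *self, struct node *peer) {
    if (!peer->alive && peer->owned_luns > 0) {
        printf("%s: peer %s missed heartbeat, taking over %d LUNs\n",
               self->name, peer->name, peer->owned_luns);
        self->owned_luns += peer->owned_luns;   /* import devices and sessions */
        peer->owned_luns = 0;
    }
}

int main(void) {
    struct node a = {"controller-A", true, 12}, b = {"controller-B", true, 12};
    b.alive = false;            /* simulate a failure of controller B */
    heartbeat_check(&a, &b);    /* controller A now serves all 24 LUNs */
    return 0;
}
```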

ASUS Unveils ESC N8-E11, an HGX H100 Eight-GPU Server

ASUS today announced ESC N8-E11, its most advanced HGX H100 eight-GPU AI server, along with a comprehensive PCI Express (PCIe) GPU server portfolio—the ESC8000 and ESC4000 series empowered by Intel and AMD platforms to support higher CPU and GPU TDPs to accelerate the development of AI and data science.

ASUS is one of the few HPC solution providers with its own all-dimensional resources that consist of the ASUS server business unit, Taiwan Web Service (TWS) and ASUS Cloud—all part of the ASUS group. This uniquely positions ASUS to deliver in-house AI server design, data-center infrastructure, and AI software-development capabilities, plus a diverse ecosystem of industrial hardware and software partners.

ASUS Demonstrates Liquid Cooling and AI Solutions at ISC High Performance 2023

ASUS today announced a showcase of the latest HPC solutions to empower innovation and push the boundaries of supercomputing, at ISC High Performance 2023 in Hamburg, Germany on May 21-25, 2023. The ASUS exhibition, at booth H813, will reveal the latest supercomputing advances, including liquid-cooling and AI solutions, as well as outlining a slew of sustainability breakthroughs - plus a whole lot more besides.

Comprehensive Liquid-Cooling Solutions
ASUS is working with Submer, the industry-leading liquid-cooling provider, to demonstrate immersion-cooling solutions at ISC High Performance 2023, focused on ASUS RS720-E11-IM - the Intel-based 2U4N server that leverages our trusted legacy server architecture and popular features to create a compact new design. This fresh outlook improves access to I/O ports, storage and cable routing, and strengthens the structure to allow the server to be placed vertically in the tank, with durability assured.

Supermicro Launches Industry's First NVIDIA HGX H100 8-GPU and 4-GPU Servers with Liquid Cooling

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, continues to expand its data center offerings with liquid cooled NVIDIA HGX H100 rack scale solutions. Advanced liquid cooling technologies entirely from Supermicro reduce the lead time for a complete installation, increase performance, and result in lower operating expenses while significantly reducing the PUE of data centers. Savings for a data center are estimated to be 40% for power when using Supermicro liquid cooling solutions compared to an air-cooled data center. In addition, up to 86% reduction in direct cooling costs compared to existing data centers may be realized.

"Supermicro continues to lead the industry supporting the demanding needs of AI workloads and modern data centers worldwide," said Charles Liang, president, and CEO of Supermicro. "Our innovative GPU servers that use our liquid cooling technology significantly lower the power requirements of data centers. With the amount of power required to enable today's rapidly evolving large scale AI models, optimizing TCO and the Total Cost to Environment (TCE) is crucial to data center operators. We have proven expertise in designing and building entire racks of high-performance servers. These GPU systems are designed from the ground up for rack scale integration with liquid cooling to provide superior performance, efficiency, and ease of deployments, allowing us to meet our customers' requirements with a short lead time."

Nfina Technologies Releases Two New 3rd Gen Intel Xeon Scalable Processor-based Systems

Nfina announces the addition of two new server systems to its lineup, customized for small to medium businesses and virtualized environments. Featuring 3rd Gen Intel Xeon Scalable Processors, these scalable server systems fill a void in the marketplace, bringing exceptional multi-socket processing performance, easy setup, operability, and Nfina's five-year warranty.

"We are excited to add two new 3rd generation Intel systems to Nfina's lineup. Performance, scalability, and flexibility are key deciding factors when expanding our offerings," says Warren Nicholson, President and CEO of Nfina. "Both servers are optimized for high- performance computing, virtualized environments, and growing data needs." He continues by saying, "The two servers can also be leased through our managed services division. We provide customers with choices that fit the size of their application and budget - not a one size fits all approach."

Intel Presents a Refreshed Xeon CPU Roadmap for 2023-2025

All eyes - especially investors' eyes - are on Intel's data center business today. Intel's Sandra Rivera, Greg Lavender and Lisa Spelman hosted a webinar focused on the company's Data Center and Artificial Intelligence business unit. They offered a big update on Intel's latest market forecasts, hardware plans and the way Intel is empowering developers with software.

Executives dished out updates on Intel's data center business for investors. This included disclosures about future generations of Intel Xeon chips, progress updates on 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) and demos of Intel hardware tackling the competition, heavy AI workloads and more.

Xeon Roadmap Roll Call
Among Sapphire Rapids, Emerald Rapids, Sierra Forest and Granite Rapids, there is a lot going on in the server CPU business. Here are your Xeon roadmap updates in order of appearance:

TYAN to Showcase Cloud Platforms for Data Centers at CloudFest 2023

TYAN, an industry-leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, will showcase its latest cloud server platforms powered by AMD EPYC 9004 Series processors and 4th Gen Intel Xeon Scalable processors for next-generation data centers at CloudFest 2023, Booth #H12 in Europa-Park from March 21-23.

"With the exponential advancement of technologies like AI and Machine Learning, data centers require robust hardware and infrastructure to handle complex computations while running AI workloads and processing big data," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's Server Infrastructure BU. "TYAN's cloud server platforms with storage performance and computing capability can support the ever-increasing demand for computational power and data processing."

Supermicro Expands Storage Solutions Portfolio for Intensive I/O Workloads with Industry Standard Based All-Flash Servers Utilizing EDSFF E3.S and E1.S

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is announcing the latest addition to its revolutionary ultra-high performance, high-density petascale class all-flash NVMe server family. Supermicro systems in this high-performance storage product family will support the next-generation EDSFF form factor, including E3.S and E1.S devices, in chassis that accommodate 16 and 32 high-performance PCIe Gen 5 NVMe drive bays.

The initial offering of the updated product line will support up to one-half of a petabyte of storage space in a 1U 16-bay rackmount system, followed by a full petabyte of storage space in a 2U 32-bay rackmount system for both Intel and AMD PCIe Gen 5 platforms. All of the Supermicro systems that support either the E1.S or E3.S form factors enable customers to realize the benefits in various application-optimized servers.