News Posts matching #HPC

Samsung Electronics Announces First Quarter 2023 Results, Profits Lowest in 14 Years

Samsung Electronics today reported financial results for the first quarter ended March 31, 2023. The Company posted KRW 63.75 trillion in consolidated revenue, a 10% decline from the previous quarter, as overall consumer spending slowed amid the uncertain global macroeconomic environment. Operating profit was KRW 0.64 trillion as the DS (Device Solutions) Division faced decreased demand, while profit in the DX (Device eXperience) Division increased.

The DS Division's profit declined from the previous quarter due to weak demand in the Memory Business, a decline in utilization rates in the Foundry Business and continued weak demand and inventory adjustments from customers. Samsung Display Corporation (SDC) saw earnings in the mobile panel business decline quarter-on-quarter amid a market contraction, while the large panel business slightly narrowed its losses. The DX Division's results improved on the back of strong sales of the premium Galaxy S23 series as well as an enhanced sales mix focusing on premium TVs.

TSMC Showcases New Technology Developments at 2023 Technology Symposium

TSMC today showcased its latest technology developments at its 2023 North America Technology Symposium, including progress in 2 nm technology and new members of its industry-leading 3 nm technology family, offering a range of processes tuned to meet diverse customer demands. These include N3P, an enhanced 3 nm process for better power, performance and density, N3X, a process tailored for high performance computing (HPC) applications, and N3AE, enabling early start of automotive applications on the most advanced silicon technology.

With more than 1,600 customers and partners registered to attend, the North America Technology Symposium in Santa Clara, California is the first of TSMC's Technology Symposiums to be held around the world in the coming months. The North America symposium also features an Innovation Zone spotlighting the exciting technologies of 18 emerging start-up customers.

Samsung Hit With $303 Million Fine, Sued Over Alleged Memory Patent Infringements

Netlist Inc., an enterprise solid-state storage drive specialist, has been awarded over $303 million in damages by a federal jury in Texas on April 21, over apparent patent infringement on Samsung's part. Netlist has alleged that the South Korean multinational electronics corporation knowingly infringed on five patents, all relating to improvements in data processing within the design makeup of memory modules intended for high performance computing (HPC) purposes. The Irvine, CA-based computer-memory specialist has sued Samsung before, in a suit filed at the Federal District Court for the Central District of California.

Netlist was seemingly pleased by the verdict reached at the time (2021), when the court "granted summary judgements in favor of Netlist and against Samsung for material breach of various obligations under the Joint Development and License Agreement (JDLA), which the parties executed in November 2015. A summary judgment is a final determination rendered by the judge and has the same force and effect as a final ruling after a jury trial in litigation."

SK hynix Develops Industry's First 12-Layer HBM3, Provides Samples To Customers

SK hynix announced today it has become the industry's first to develop a 12-layer HBM3 product with a 24 gigabyte (GB) memory capacity, currently the largest in the industry, and said customers' performance evaluation of samples is underway. HBM (High Bandwidth Memory) is a high-value, high-performance memory that vertically interconnects multiple DRAM chips and dramatically increases data processing speed in comparison to traditional DRAM products. HBM3 is the 4th-generation product, succeeding the previous generations HBM, HBM2 and HBM2E.

"The company succeeded in developing the 24 GB package product that increased the memory capacity by 50% from the previous product, following the mass production of the world's first HBM3 in June last year," SK hynix said. "We will be able to supply the new products to the market from the second half of the year, in line with growing demand for premium memory products driven by the AI-powered chatbot industry." SK hynix engineers improved process efficiency and performance stability by applying Advanced Mass Reflow Molded Underfill (MR-MUF) technology to the latest product, while Through Silicon Via (TSV) technology reduced the thickness of a single DRAM chip by 40%, achieving the same stack height level as the 16 GB product.
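The stated figures are easy to sanity-check with simple arithmetic. The sketch below assumes (the announcement does not say so explicitly) that the previous 16 GB product stacks eight 2 GB DRAM dies; a die thinned by 40% then lets twelve layers fit within the same stack height:

```python
# Sanity check of SK hynix's stated figures. Assumption (not stated in the
# announcement): the previous 16 GB HBM3 product stacks eight 2 GB DRAM dies.
PER_DIE_GB = 2
OLD_LAYERS, NEW_LAYERS = 8, 12
THICKNESS_REDUCTION = 0.40            # per-die thinning attributed to TSV

old_capacity_gb = OLD_LAYERS * PER_DIE_GB              # 16 GB
new_capacity_gb = NEW_LAYERS * PER_DIE_GB              # 24 GB
capacity_gain = new_capacity_gb / old_capacity_gb - 1  # 0.5, i.e. +50%

# Normalized stack heights: 12 thinner dies vs. 8 full-thickness dies.
old_height = OLD_LAYERS * 1.0                            # 8.0 units
new_height = NEW_LAYERS * (1.0 - THICKNESS_REDUCTION)    # 7.2 units

print(new_capacity_gb, capacity_gain, new_height <= old_height)  # -> 24 0.5 True
```

Under these assumptions the 12-layer stack (7.2 units) actually comes in slightly below the old 8-layer height (8.0 units), consistent with the "same stack height level" claim.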

AMD Joins AWS ISV Accelerate Program

AMD announced it has joined the Amazon Web Services (AWS) Independent Software Vendor (ISV) Accelerate Program, a co-sell program for AWS Partners - like AMD - who provide integrated solutions on AWS. The program helps AWS Partners drive new business by directly connecting participating ISVs with the AWS Sales organization.

Through the AWS ISV Accelerate Program, AMD will receive focused co-selling support from AWS, including access to additional sales enablement resources, reduced AWS Marketplace listing fees, and incentives for AWS Sales teams. The program also gives participating ISVs access to millions of active AWS customers globally.

Bulk Order of GPUs Points to Twitter Tapping Big Time into AI Potential

According to Business Insider, Twitter has made a substantial investment into hardware upgrades at its North American datacenter operation. The company has purchased somewhere in the region of 10,000 GPUs - destined for the social media giant's two remaining datacenter locations. Insider sources claim that Elon Musk has committed to a large language model (LLM) project, in an effort to rival OpenAI's ChatGPT system. The GPUs will not provide much computational value in normal day-to-day tasks at Twitter - the source reckons that the extra processing power will be utilized for deep learning purposes.

Twitter has not revealed any concrete plans for its relatively new in-house artificial intelligence project, but something was afoot when, earlier this year, Musk recruited several research personnel from Alphabet's DeepMind division. It was theorized that he was incubating a resident AI research lab at the time, following personal criticisms he had levelled at his former colleagues at OpenAI and their very popular, widely adopted chatbot.

Intel Discontinues Brand New Max 1350 Data Center GPU, Successor Targets Alternative Markets

Intel has decided to re-organize its Max series of Data Center GPUs (codenamed Ponte Vecchio), as revealed to Tom's Hardware this week, with one model - the Data Center GPU Max 1350 - set for removal from the lineup. Industry experts are puzzled by this decision, given that the 1350 has been officially "available" on the market since January 2023, following soon after the announcement of the entire Max range in November 2022. Intel has removed listings and entries for the Data Center GPU Max 1350 from its various web presences.

A (sort of) successor is in the works: Intel has lined up the Data Center GPU Max 1450 for release later in the year. This model will have trimmed I/O bandwidth - a modification likely targeting companies in China, where allowable performance levels are capped by U.S. export restrictions on GPUs. An Intel spokesperson provided further details and reasons for rearranging the Max product range: "We launched the Intel Data Center Max GPU 1550 (600 W), which was initially targeted for liquid-cooled solutions only. We have since expanded our support by offering Intel Data Center Max GPU 1550 (600 W) to include air-cooled solutions."

Chinese GPU Maker Biren Technology Loses its Co-Founder, Only Months After Revealing New GPUs

Golf Jiao, a co-founder and general manager of Biren Technology, left the company late last month, according to insider sources in China. No official statement has been issued by the executive team at Biren Tech, and Jiao has not provided any details regarding his departure from the fabless semiconductor design company. The Shanghai-based firm is a relatively new startup - it was founded in 2019 by veterans of NVIDIA, Qualcomm and Alibaba. Biren Tech received $726.6 million in funding for its debut range of general-purpose graphics processing units (GPGPUs), also described as high-performance computing graphics processing units (HPC GPUs).

The company revealed its ambitions to take on NVIDIA's Ampere A100 and Hopper H100 compute platforms, and last August announced two HPC GPUs in the form of the BR100 and BR104. The specifications and performance charts demonstrated impressive figures, but Biren Tech had to roll back its numbers when it was hit by U.S. government-enforced sanctions in October 2022. The fabless company had contracted with TSMC to produce its Biren range, and the new set of rules resulted in shipments from the Taiwanese foundry being halted. Biren Tech cut its workforce by a third soon after losing its supply chain with TSMC, and the engineering team had to reassess how the BR100 and BR104 would perform on a process node larger than the original 7 nm design. It was decided that a downgrade in transfer rates would appease the legal teams, and get newly redesigned Biren silicon back onto the assembly line.

NVIDIA Executive Says Cryptocurrencies Add Nothing Useful to Society

In an interview with The Guardian, NVIDIA's Chief Technical Officer (CTO) Michael Kagan shared his remarks on the company's position on cryptocurrency. Being the maker of the world's most powerful graphics cards and compute accelerators, NVIDIA is the most prominent player in the industry regarding any computing application, from cryptocurrencies to AI and HPC. In the interview, Mr. Kagan argued that newly found applications such as ChatGPT bring much higher value to society compared to cryptocurrencies. "All this crypto stuff, it needed parallel processing, and [Nvidia] is the best, so people just programmed it to use for this purpose. They bought a lot of stuff, and then eventually it collapsed, because it doesn't bring anything useful for society. AI does," said Kagan, adding that "I never believed that [crypto] is something that will do something good for humanity. You know, people do crazy things, but they buy your stuff, you sell them stuff. But you don't redirect the company to support whatever it is."

When it comes to AI and other applications, the company has a very different position. "With ChatGPT, everybody can now create his own machine, his own programme: you just tell it what to do, and it will. And if it doesn't work the way you want it to, you tell it 'I want something different'," he added, arguing that the new AI applications offer a level of usability beyond that of crypto. Interestingly, trading applications are also familiar to NVIDIA, as it has had clients (banks) using its hardware for faster trading execution. Mr. Kagan noted: "We were heavily involved in also trading: people on Wall Street were buying our stuff to save a few nanoseconds on the wire, the banks were doing crazy things like pulling the fibers under the Hudson taut to make them a little bit shorter, to save a few nanoseconds between their datacentre and the stock exchange."
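The "few nanoseconds" figure checks out from first principles: light in silica fibre travels at roughly two-thirds of its vacuum speed, so every metre of fibre removed saves on the order of 5 ns one-way. A quick check, assuming a typical refractive index of about 1.47 (not given in the interview):

```python
# Order-of-magnitude check on the "few nanoseconds" claim. Assumption: a
# typical silica fibre refractive index of ~1.47.
C_M_PER_S = 299_792_458
FIBER_INDEX = 1.47

v_fiber = C_M_PER_S / FIBER_INDEX      # signal speed in fibre, ~2.04e8 m/s
ns_per_meter = 1e9 / v_fiber           # one-way delay saved per metre removed

print(round(ns_per_meter, 2))          # -> 4.9
```

So pulling a few metres of slack out of a fibre run is worth tens of nanoseconds - a meaningful edge in high-frequency trading.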

Supermicro Expands GPU Solutions Portfolio with Deskside Liquid-Cooled AI Development Platform, Powered by NVIDIA

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is announcing the first in a line of powerful yet quiet and power-efficient NVIDIA-accelerated AI development platforms, which give information professionals and developers the most powerful technology available today at their deskside. The new AI development platform, the SYS-751GE-TNRT-NV1, is an application-optimized system that excels when developing and running AI-based software. This innovative system gives developers and users a complete HPC and AI resource for department workloads. In addition, this powerful system can support a small team of users running training, inference, and analytics workloads simultaneously.

The self-contained liquid-cooling feature addresses the thermal design power needs of the four NVIDIA A100 Tensor Core GPUs and the two 4th Gen Intel Xeon Scalable CPUs to enable full performance while improving the overall system's efficiency and enabling quiet (approximately 30 dB) operation in an office environment. In addition, this system is designed to accommodate high-performing CPUs and GPUs, making it ideal for AI/DL/ML and HPC applications. The system can reside in an office environment or be rack-mounted when installed in a data center environment, simplifying IT management.

ASUS Announces NVIDIA-Certified Servers and ProArt Studiobook Pro 16 OLED at GTC

ASUS today announced its participation in NVIDIA GTC, a developer conference for the era of AI and the metaverse. ASUS will offer comprehensive NVIDIA-certified server solutions that support the latest NVIDIA L4 Tensor Core GPU—which accelerates real-time video AI and generative AI—as well as the NVIDIA BlueField-3 DPU, igniting unprecedented innovation for supercomputing infrastructure. ASUS will also launch the new ProArt Studiobook Pro 16 OLED laptop with the NVIDIA RTX 3000 Ada Generation Laptop GPU for mobile creative professionals.

Purpose-built GPU servers for generative AI
Generative AI applications enable businesses to develop better products and services, and deliver original content tailored to the unique needs of customers and audiences. ASUS ESC8000 and ESC4000 are fully certified NVIDIA servers that support up to eight NVIDIA L4 Tensor Core GPUs, which deliver universal acceleration and energy efficiency for AI with up to 2.7X more generative AI performance than the previous GPU generation. ASUS ESC and RS series servers are engineered for HPC workloads, with support for the NVIDIA BlueField-3 DPU to transform data center infrastructure, as well as NVIDIA AI Enterprise applications for streamlined AI workflows and deployment.

Supermicro Servers Now Featuring NVIDIA HGX and PCIe-Based H100 8-GPU Systems

Supermicro, Inc., a Total IT Solution Provider for AI/ML, Cloud, Storage, and 5G/Edge, today has announced that it has begun shipping its top-of-the-line new GPU servers that feature the latest NVIDIA HGX H100 8-GPU system. Supermicro servers incorporate the new NVIDIA L4 Tensor Core GPU in a wide range of application-optimized servers from the edge to the data center.

"Supermicro offers the most comprehensive portfolio of GPU systems in the industry, including servers in 8U, 6U, 5U, 4U, 2U, and 1U form factors, as well as workstations and SuperBlade systems that support the full range of new NVIDIA H100 GPUs," said Charles Liang, president and CEO of Supermicro. "With our new NVIDIA HGX H100 Delta-Next server, customers can expect 9x performance gains compared to the previous generation for AI training applications. Our GPU servers have innovative airflow designs which reduce fan speeds, lower noise levels, and consume less power, resulting in a reduced total cost of ownership (TCO). In addition, we deliver complete rack-scale liquid-cooling options for customers looking to further future-proof their data centers."

NVIDIA to Lose Two Major HPC Partners in China, Focuses on Complying with Export Control Rules

NVIDIA's presence in high-performance computing has steadily increased, with various workloads benefiting from the company's AI and HPC accelerator GPUs. One of the important markets for the company is China, and export regulations are about to complicate NVIDIA's business dealings with the country. NVIDIA's major partners in the Asia Pacific region are Inspur and Huawei, which make servers powered by A100 and H100 GPU solutions. In the latest development, the Biden Administration is considering further limits on the export of US-designed goods to Chinese entities. Back in 2019, the US blacklisted Huawei and restricted the sales of the latest GPU hardware to the company. Last week, the Biden Administration also blacklisted Inspur, the world's third-largest server maker.

At the Morgan Stanley conference, NVIDIA's Chief Financial Officer Colette Kress noted: "Inspur is a partner for us, when we indicate a partner, they are helping us stand up computing for the end customers. As we work forward, we will probably be working with other partners, for them to stand-up compute within the Asia-Pac region or even other parts of the world. But again, our most important focus is focusing on the law and making sure that we follow export controls very closely. So in this case, we will look in terms of other partners to help us." This indicates that NVIDIA will lose millions of dollars in revenue due to the inability to sell its GPUs to partners like Inspur. As the company stated, complying with export regulations is its most crucial focus.

Intel's Ponte Vecchio HPC GPU Successor Rialto Bridge Gets the Axe

Late on Friday, in a newsroom posting by Intel's Interim GM Jeff McVeigh, a roadmap update was quietly revealed. Rialto Bridge, the process-improved version of Ponte Vecchio currently shipping under the Max Series GPU branding, has been pulled from the roadmap in favor of doubling down on the future design code-named Falcon Shores. Rialto Bridge was first announced last May at SC22 as the direct successor to Ponte Vecchio, and was set to begin sampling later this year. In the same post, Intel also cancelled Lancaster Sound, its Visual Cloud GPU meant to replace the Arctic Sound Flex series of GPUs based on similar Xe cores to Arc Alchemist. In its stead, the follow-up architecture Melville Sound will receive focused development efforts.

Falcon Shores is described as a new foundational chiplet architecture that will integrate more diverse compute tiles, creating what Intel originally dubbed the XPU. This next architectural step would combine what Intel is already doing with products such as Sapphire Rapids and Ponte Vecchio into one CPU+GPU package, and would offer even further flexibility to add other kinds of accelerators. With this roadmap update there is some uncertainty as to whether the XPU designation will make the transition as it is notably absent in the letter. It is clear though that Falcon Shores will directly replace Ponte Vecchio as the next HPC GPU, with or without CPU tiles included.

Revenue from Enterprise SSDs Totaled Just US$3.79 Billion for 4Q22 Due to Slumping Demand and Widening Decline in SSD Contract Prices, Says TrendForce

Looking back at 2H22, as server OEMs slowed down the momentum of their product shipments, Chinese server buyers also held a conservative outlook on future demand and focused on inventory reduction. Thus, the flow of orders for enterprise SSDs remained sluggish. However, NAND Flash suppliers had to step up shipments of enterprise SSDs during 2H22 because the demand for storage components equipped in notebook (laptop) computers and smartphones had undergone very large downward corrections. Compared with other categories of NAND Flash products, enterprise SSDs represented the only significant source of bit consumption. Ultimately, due to the imbalance between supply and demand, the QoQ decline in prices of enterprise SSDs widened to 25% for 4Q22. This price plunge, in turn, caused the quarterly total revenue from enterprise SSDs to drop by 27.4% QoQ to around US$3.79 billion. TrendForce projects that the NAND Flash industry will again post a QoQ decline in the revenue from this product category for 1Q23.
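TrendForce's figures also imply the prior-quarter baseline: dividing the 4Q22 revenue by one minus the quarterly decline backs out roughly US$5.2 billion for 3Q22.

```python
# Backing out the implied 3Q22 baseline from TrendForce's 4Q22 figures:
# ~US$3.79 billion after a 27.4% quarter-on-quarter revenue decline.
revenue_4q22_busd = 3.79
qoq_decline = 0.274

revenue_3q22_busd = revenue_4q22_busd / (1 - qoq_decline)
print(round(revenue_3q22_busd, 2))   # -> 5.22
```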

Ayar Labs Demonstrates Industry's First 4-Tbps Optical Solution, Paving Way for Next-Generation AI and Data Center Designs

Ayar Labs, a leader in the use of silicon photonics for chip-to-chip optical connectivity, today announced the public demonstration of the industry's first 4 terabit-per-second (Tbps) bidirectional Wavelength Division Multiplexing (WDM) optical solution at the upcoming Optical Fiber Communication Conference (OFC) in San Diego on March 5-9, 2023. The company achieves this latest milestone as it works with leading high-volume manufacturing and supply partners including GlobalFoundries, Lumentum, Macom, Sivers Photonics and others to deliver the optical interconnects needed for data-intensive applications. Separately, the company was featured in an announcement with partner Quantifi Photonics on a CW-WDM-compliant test platform for its SuperNova light source, also at OFC.

In-package optical I/O uniquely changes the power and performance trajectories of system design by enabling compute, memory and network silicon to communicate with a fraction of the power and dramatically improved performance, latency and reach versus existing electrical I/O solutions. Delivered in a compact, co-packaged CMOS chiplet, optical I/O becomes foundational to next-generation AI, disaggregated data centers, dense 6G telecommunications systems, phased array sensory systems and more.

Supermicro Accelerates A Wide Range of IT Workloads with Powerful New Products Featuring 4th Gen Intel Xeon Scalable Processors

Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, will be showcasing its latest generation of systems that accelerate workloads for the entire Telco industry, specifically at the edge of the network. These systems are part of the newly introduced Supermicro Intel-based product line: better, faster, and greener systems based on the brand-new 4th Gen Intel Xeon Scalable processors (formerly codenamed Sapphire Rapids) that deliver up to 60% better workload-optimized performance. From a performance standpoint, these new systems demonstrate up to 30X faster AI inference on large models for AI and edge workloads with NVIDIA H100 GPUs. In addition, Supermicro systems support the new Intel Data Center GPU Max Series (formerly codenamed Ponte Vecchio) across a wide range of servers. The Intel Data Center GPU Max Series contains up to 128 Xe-HPC cores and will accelerate a range of AI, HPC, and visualization workloads. Supermicro X13 AI systems will support next-generation built-in accelerators and GPUs of up to 700 W from Intel, NVIDIA, and others.

Supermicro's wide range of product families is deployed in a broad range of industries to speed up workloads and allow faster and more accurate decisions. With the addition of purpose-built servers tuned for networking workloads, such as Open RAN deployments and private 5G, the 4th Gen Intel Xeon Scalable processor vRAN Boost technology reduces power consumption while improving performance. Supermicro continues to offer a wide range of environmentally friendly servers for workloads from the edge to the data center.

AMD Envisions Stacked DRAM on top of Compute Chiplets in the Near Future

AMD in its ISSCC 2023 presentation detailed how it has advanced data-center energy-efficiency and managed to keep up with Moore's Law, even as semiconductor foundry node advances have tapered. Perhaps its most striking prediction for server processors and HPC accelerators is multi-layer stacked DRAM. The company has, for some time now, made logic products, such as GPUs, with stacked HBM. These have been multi-chip modules (MCMs), in which the logic die and HBM stacks sit on top of a silicon interposer. While this conserves PCB real-estate compared to discrete memory chips/modules; it is inefficient on the substrate, and the interposer is essentially a silicon die that has microscopic wiring between the chips stacked on top of it.

AMD envisions that the high-density server processor of the near-future will have many layers of DRAM stacked on top of logic chips. Such a method of stacking conserves both PCB and substrate real-estate, allowing chip-designers to cram even more cores and memory per socket. The company also sees a greater role of in-memory compute, where trivial simple compute and data-movement functions can be executed directly on the memory, saving round-trips to the processor. Lastly, the company talked about the possibility of an on-package optical PHY, which would simplify network infrastructure.

Server DRAM Will Overtake Mobile DRAM in Supply in 2023 and Comprise 37.6% of Annual Total DRAM Bit Output, Says TrendForce

Since 2022, DRAM suppliers have been adjusting their product mixes so as to assign more wafer input to server DRAM products while scaling back the wafer input for mobile DRAM products. This trend is driven by two reasons. First, the demand outlook is bright for the server DRAM segment. Second, the mobile DRAM segment was in significant oversupply during 2022. Moving into 2023, the projections on the growth of smartphone shipments and the increase in the average DRAM content of smartphones remain quite conservative. Therefore, DRAM suppliers intend to keep expanding the share of server DRAM in their product mixes. According to TrendForce's analysis on the distribution of the DRAM industry's total bit output for 2023, server DRAM is estimated to comprise around 37.6%, whereas mobile DRAM is estimated to comprise around 36.8%. Hence, server DRAM will formally surpass mobile DRAM in terms of the portion of the overall supply within this year.

Atos to Build Max Planck Society's new BullSequana XH3000-based Supercomputer, Powered by AMD MI300 APU

Atos today announces a contract to build and install a new high-performance computer for the Max Planck Society, a world-leading science and technology research organization. The new system will be based on Atos' latest BullSequana XH3000 platform, which is powered by AMD EPYC CPUs and Instinct accelerators. In its final configuration, the application performance will be three times higher than the current "Cobra" system, which is also based on Atos technologies.

The new supercomputer, with a total order value of over 20 million euros, will be operated by the Max Planck Computing and Data Facility (MPCDF) in Garching near Munich and will provide high-performance computing (HPC) capacity for many institutes of the Max Planck Society. Particularly demanding scientific projects, such as those in astrophysics, life science research, materials research, plasma physics, and AI will benefit from the high-performance capabilities of the new system.

NVIDIA Pairs 4th Gen Intel Xeon Scalable Processors with H100 GPUs

AI is at the heart of humanity's most transformative innovations—from developing COVID vaccines at unprecedented speeds and diagnosing cancer to powering autonomous vehicles and understanding climate change. Virtually every industry will benefit from adopting AI, but the technology has become more resource intensive as neural networks have increased in complexity. To avoid placing unsustainable demands on electricity generation to run this computing infrastructure, the underlying technology must be as efficient as possible.

Accelerated computing powered by NVIDIA GPUs and the NVIDIA AI platform offer the efficiency that enables data centers to sustainably drive the next generation of breakthroughs. And now, timed with the launch of 4th Gen Intel Xeon Scalable processors, NVIDIA and its partners have kicked off a new generation of accelerated computing systems that are built for energy-efficient AI. When combined with NVIDIA H100 Tensor Core GPUs, these systems can deliver dramatically higher performance, greater scale and higher efficiency than the prior generation, providing more computation and problem-solving per watt.

TYAN Refines Server Performance with 4th Gen Intel Xeon Scalable Processors

TYAN, an industry-leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, today introduced 4th Gen Intel Xeon Scalable processor-based server platforms highlighting built-in accelerators to improve performance across the fastest-growing workloads in AI, analytics, cloud, storage, and HPC.

"Greater availability of new technology like 4th Gen Intel Xeon Scalable processors continues to drive changes in the business landscape," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's Server Infrastructure Business Unit. "The advances in TYAN's new portfolio of server platforms, with features such as DDR5, PCIe 5.0 and Compute Express Link 1.1, are bringing high levels of compute power within reach for everyone from smaller organizations to data centers."

AIC Introduces Server Systems Powered By 4th Gen Intel Xeon Scalable Processors

AIC Inc. (from now on referred to as "AIC"), a leading provider of enterprise storage and server solutions, today unveiled its new server systems powered by 4th Gen Intel Xeon Scalable processors (formerly codenamed Sapphire Rapids). The new server platforms are designed to accelerate performance across the most in-demand workloads that businesses rely on, including enterprise, storage, AI and HPC.

The newly launched AIC servers, SB102-HK, SB201-HK and HP202-KT, are designed to offer superior processing performance and energy efficiency by leveraging the innovative features of 4th Gen Intel Xeon Scalable processors. With built-in accelerators, the 4th Gen Intel Xeon Scalable processors optimize the utilization of CPU core resources and feature enhanced memory bandwidth with DDR5, advanced I/O with PCIe Gen 5 and Compute Express Link (CXL) 2.0/1.1, and the ability to accelerate PyTorch real-time inference performance by up to 10x using Intel Advanced Matrix Extensions (Intel AMX) compared to the previous generation. The new AIC servers are empowered by advanced security technologies from 4th Gen Intel Xeon Scalable processors, allowing them to protect data and unlock new opportunities for business collaborations.

Lenovo Unveils Next Generation of Intel-Based Smart Infrastructure Solutions to Accelerate IT Modernization

Today, Lenovo unveiled 25 new ThinkSystem and ThinkAgile server and hyperconverged solutions powered by Intel's 4th Generation Xeon Scalable Processors as part of its recently announced Infrastructure Solutions V3 portfolio. Designed to help accelerate global IT modernization for organizations of all sizes, the integrated solutions deliver advanced performance, efficiency and management capabilities specifically optimized for complex workloads, including mission-critical, AI, HPC and containerized applications.

"In today's competitive business climate, modern infrastructure solutions that generate faster insights and more efficiently enable complex workloads from the edge to the cloud are critical across every major industry," said Kamran Amini, Vice President and General Manager of Server & Storage, Lenovo Infrastructure Solutions Group. "With the performance and management improvements of the Intel-based ThinkSystem V3 portfolio, customers can reduce their IT footprint by up to three times to achieve greater ROI and more easily transform their infrastructure with one seamless platform designed for today's AI, virtualization, multi-cloud and sustainable computing demands."

Giga Computing Announces Its GIGABYTE Server Portfolio for the 4th Gen Intel Xeon Scalable Processor

Giga Computing, an industry leader in high-performance servers and workstations, today announced the next generation of GIGABYTE servers and server motherboards for the new 4th Gen Intel Xeon Scalable processor, achieving efficient performance gains with built-in accelerators. The new processors have the most built-in accelerators of any processor on the market to help maximize performance efficiency for emerging workloads, while boosting virtualization and AI performance. Generational improvements make this platform ideal for AI, cloud computing, advanced analytics, HPC, networking, and storage applications. For these markets, Giga Computing has announced fourteen new series comprising seventy-eight configurations for customers to choose from. All these new GIGABYTE products support the full portfolio of 4th Gen Intel Xeon Scalable processors, including those with high bandwidth memory (HBM) in the Intel Xeon Max Series.