Apr 10th, 2025 05:55 EDT

News Posts matching #Server

NVIDIA Will Bring Agentic AI Reasoning to Enterprises with Google Cloud

NVIDIA is collaborating with Google Cloud to bring agentic AI to enterprises seeking to locally harness the Google Gemini family of AI models using the NVIDIA Blackwell HGX and DGX platforms and NVIDIA Confidential Computing for data safety. With the NVIDIA Blackwell platform on Google Distributed Cloud, on-premises data centers can stay aligned with regulatory requirements and data sovereignty laws by locking down access to sensitive information, such as patient records, financial transactions and classified government information. NVIDIA Confidential Computing also secures sensitive code in the Gemini models from unauthorized access and data leaks.

"By bringing our Gemini models on premises with NVIDIA Blackwell's breakthrough performance and confidential computing capabilities, we're enabling enterprises to unlock the full potential of agentic AI," said Sachin Gupta, vice president and general manager of infrastructure and solutions at Google Cloud. "This collaboration helps ensure customers can innovate securely without compromising on performance or operational ease." Confidential computing with NVIDIA Blackwell provides enterprises with the technical assurance that their user prompts to the Gemini models' application programming interface—as well as the data they used for fine-tuning—remain secure and cannot be viewed or modified. At the same time, model owners can protect against unauthorized access or tampering, providing dual-layer protection that enables enterprises to innovate with Gemini models while maintaining data privacy.

5th Gen AMD EPYC Processors Deliver Leadership Performance for Google Cloud C4D and H4D Virtual Machines

Today, AMD announced that the new Google Cloud C4D and H4D virtual machines (VMs) are powered by 5th Gen AMD EPYC processors. The latest additions to Google Cloud's general-purpose and HPC-optimized VMs deliver leadership performance, scalability, and efficiency for demanding cloud workloads, from data analytics and web serving to high-performance computing (HPC) and AI.

Google Cloud C4D instances deliver impressive performance, efficiency, and consistency for general-purpose computing workloads and AI inference. Based on Google Cloud's testing, leveraging the advancements of the AMD "Zen 5" architecture allowed C4D to deliver up to 80% higher throughput/vCPU compared to previous generations. H4D instances, optimized for HPC workloads, feature AMD EPYC CPUs with Cloud RDMA for efficient scaling of up to tens of thousands of cores.

Industry's First-to-Market Supermicro NVIDIA HGX B200 Systems Demonstrate AI Performance Leadership

Super Micro Computer, Inc. (SMCI), a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, has announced first-to-market, industry-leading performance on several MLPerf Inference v5.0 benchmarks, using the NVIDIA HGX B200 8-GPU system. The 4U liquid-cooled and 10U air-cooled systems achieved the best performance in select benchmarks. Supermicro demonstrated more than 3 times the tokens per second (Token/s) generation for the Llama2-70B and Llama3.1-405B benchmarks compared to H200 8-GPU systems. "Supermicro remains a leader in the AI industry, as evidenced by the first new benchmarks released by MLCommons in 2025," said Charles Liang, president and CEO of Supermicro. "Our building block architecture enables us to be first-to-market with a diverse range of systems optimized for various workloads. We continue to collaborate closely with NVIDIA to fine-tune our systems and secure a leadership position in AI workloads." Learn more about the new MLPerf v5.0 Inference benchmarks here.

Supermicro is the only system vendor publishing record MLPerf inference performance (on select benchmarks) for both air-cooled and liquid-cooled NVIDIA HGX B200 8-GPU systems. Both were operational before the MLCommons benchmark start date, and Supermicro engineers optimized the systems and software, as allowed by the MLCommons rules, to showcase the impressive performance. Within the operating margin, the air-cooled B200 system exhibited the same level of performance as the liquid-cooled one. Supermicro has been delivering these systems to customers while conducting the benchmarks. MLCommons requires that all results be reproducible, that the products are available, and that the results can be audited by other MLCommons members.
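The tokens-per-second metric quoted in these results is simply generated tokens divided by wall-clock time. A minimal sketch with illustrative numbers (not Supermicro's measured results):

```python
# Token/s as reported in MLPerf-style inference results: total generated
# tokens divided by elapsed wall-clock time. All numbers are illustrative.

def tokens_per_second(total_tokens: int, elapsed_s: float) -> float:
    return total_tokens / elapsed_s

baseline = tokens_per_second(10_000, 10.0)   # hypothetical H200 8-GPU run
improved = tokens_per_second(30_000, 10.0)   # hypothetical B200 8-GPU run
print(improved / baseline)  # ratio of the two throughput figures
```

A "3 times the Token/s" claim corresponds to exactly this kind of ratio between two runs over the same workload.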

China's RiVAI Technologies Introduces "Lingyu" RISC-V Server Processor

RiVAI Technologies, a Shenzhen-based semiconductor firm founded in 2018, unveiled its first fully domestic high-performance RISC-V server processor designed for compute-intensive applications. The Lingyu CPU features 32 general-purpose computing cores working alongside eight specialized intelligent computing cores (LPUs) in a heterogeneous "one-core, dual architecture" design. It aims for performance comparable to current x86 server processors, with the chip implementing optimized data pathways and enhanced pipelining mechanisms to maintain high clock frequencies under computational load. The architecture specifically targets maximum throughput for parallel processing workloads typical in data center environments. The chip aims to serve HPC clusters, all-flash storage arrays, and AI large language model inference operations.

Since its inception, RiVAI has accumulated 37 RISC-V-related patents and established partnerships with over 50 industry collaborators, including academic research relationships. Professor David Patterson, a RISC-V architecture pioneer, provides technical guidance to the company's development efforts. The processor's dual-architecture approach enables dynamic workload distribution between conventional processing tasks and specialized computational operations, potentially improving performance-per-watt metrics compared to traditional single-architecture designs. The Lingyu launch significantly advances China's semiconductor self-sufficiency strategy, potentially accelerating RISC-V ecosystem development while providing Chinese data centers with domestically engineered high-performance computing solutions, ultimately bypassing x86 and Arm solutions.

Intel Xeon Remains the Only Server CPU on MLPerf

Today, MLCommons released its latest MLPerf Inference v5.0 benchmarks, showcasing Intel Xeon 6 with Performance-cores (P-cores) across six key benchmarks. The results reveal a remarkable 1.9x boost in AI performance over the previous generation of processors, affirming Xeon 6 as a top solution for modern AI systems.

"The latest MLPerf results demonstrate Intel Xeon 6 as the ideal CPU for AI workloads, offering a perfect balance of performance and energy efficiency. Intel Xeon remains the leading CPU for AI systems, with consistent gen-over-gen performance improvements across a variety of AI benchmarks." - Karin Eibschitz Segal, Intel corporate vice president and interim general manager of the Data Center and AI Group

IBM & Intel Announce the Availability of Gaudi 3 AI Accelerators on IBM Cloud

Yesterday, at Intel Vision 2025, IBM announced the availability of Intel Gaudi 3 AI accelerators on IBM Cloud. This offering delivers Intel Gaudi 3 in a public cloud environment for production workloads. Through this collaboration, IBM Cloud aims to help clients more cost-effectively scale and deploy enterprise AI. Intel Gaudi 3 AI accelerators on IBM Cloud are currently available in Frankfurt (eu-de) and Washington, D.C. (us-east) IBM Cloud regions, with future availability for the Dallas (us-south) IBM Cloud region in Q2 2025.

IBM's AI in Action 2024 report found that 67% of surveyed leaders reported revenue increases of 25% or more due to including AI in business operations. Although AI is demonstrating promising revenue increases, enterprises are also balancing the costs associated with the infrastructure needed to drive performance. By leveraging Intel Gaudi 3 on IBM Cloud, the two companies aim to help clients more cost-effectively test, innovate, and deploy generative AI solutions. "By bringing Intel Gaudi 3 AI accelerators to IBM Cloud, we're enabling businesses to help scale generative AI workloads with optimized performance for inferencing and fine-tuning. This collaboration underscores our shared commitment to making AI more accessible and cost-effective for enterprises worldwide," said Saurabh Kulkarni, Vice President, Datacenter AI Strategy and Product Management, Intel.

Other World Computing (OWC) Launches OWC Jellyfish B24 and S24 Storage Solutions

Other World Computing (OWC), a trusted leader in high-performance storage, docks, and memory card solutions that empower professionals in video and audio production, photography, and business to seamlessly maximize the performance and reliability of their workflows, today announced the general availability (GA) of the OWC Jellyfish B24 and OWC Jellyfish S24, two powerful new additions to its award-winning shared storage lineup. Designed to meet the evolving needs of media teams, the OWC Jellyfish B24 delivers a cost-effective, high-capacity solution for seamless collaboration and nearline backup, while the OWC Jellyfish S24 offers an all-SSD production server with lightning-fast performance for demanding video workflows. With scalable expansion options and rock-solid reliability, these new OWC Jellyfish solutions give video editors, post-production teams, and content creators the tools they need to work faster, collaborate more easily, and keep their projects moving, without storage ever slowing them down.

Video editors, post-production teams, and content creators are constantly juggling massive file sizes, complex collaborations, and the need for seamless access to their media, all while making sure their work is safely backed up. But as video resolutions continue to climb to 4K, 8K, and beyond, many storage solutions just can't keep up, creating frustrating bottlenecks that slow down the creative process. The OWC Jellyfish B24 and S24 are built to solve these problems, delivering high-performance, scalable shared storage that keeps workflows smooth, file transfers fast, and backups reliable. Whether a team needs affordable nearline storage with plenty of capacity or lightning-fast SSDs for real-time editing, these solutions ensure creatives can focus on what they do best, telling great stories, without storage getting in the way.

MiTAC Computing Unveils Advanced AI Server Solutions Accelerated by NVIDIA at GTC 2025

MiTAC Computing Technology Corporation, a leading server platform design manufacturer and a subsidiary of MiTAC Holdings Corporation, will present its latest innovations in AI infrastructure at GTC 2025. At booth #1505, MiTAC Computing will showcase its cutting-edge AI server platforms, fully optimized for NVIDIA MGX architecture, including the G4527G6, which supports NVIDIA H200 NVL platform and NVIDIA RTX PRO 6000 Blackwell Server Edition to address the evolving demands of enterprise AI workloads.

Enabling Next-Generation AI with High-Performance Computing
With the increasing adoption of generative AI and accelerated computing, MiTAC Computing introduces its latest NVIDIA MGX-based server solution, the MiTAC G4527G6, designed to support complex AI and high-performance computing (HPC) workloads. Powered by Intel Xeon 6 processors, the G4527G6 accommodates up to eight NVIDIA GPUs, 8 TB of DDR5-6400 memory, sixteen hot-swappable E1.S drives, four NVIDIA ConnectX-7 NICs for high-speed east-west data transfer, and an NVIDIA BlueField-3 DPU for efficient north-south connectivity. The G4527G6 further enhances workload scalability with the NVIDIA NVLink interconnect, ensuring seamless performance for enterprise AI and HPC applications.

Lenovo Announces Hybrid AI Advantage with NVIDIA Blackwell Support

Today, at NVIDIA GTC, Lenovo unveiled new Lenovo Hybrid AI Advantage with NVIDIA solutions designed to accelerate AI adoption and boost business productivity by fast-tracking agentic AI that can reason, plan and take action to reach goals faster. The validated, full-stack AI solutions enable enterprises to quickly build and deploy AI agents for a broad range of high-demand use cases, increasing productivity, agility and trust while accelerating the next wave of AI reasoning for the new era of agentic AI.

New global IDC research commissioned by Lenovo reveals that ROI remains the greatest AI adoption barrier, despite a three-fold spend increase. AI agents are revolutionizing enterprise workflows and lowering barriers to ROI by supporting employees with complex problem-solving, coding, and multistep planning that drives speed, innovation and productivity. As CIOs and business leaders seek tangible return on AI investment, Lenovo is delivering hybrid AI solutions that unleash and customize agentic AI at every scale.

Supermicro Expands Enterprise AI Portfolio With Support for Upcoming NVIDIA RTX PRO 6000 Blackwell Server Edition and NVIDIA H200 NVL Platform

Supermicro, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, today announced support for the new NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs on a range of workload-optimized GPU servers and workstations. Specifically optimized for the NVIDIA Blackwell generation of PCIe GPUs, the broad range of Supermicro servers will enable more enterprises to leverage accelerated computing for LLM inference and fine-tuning, agentic AI, visualization, graphics & rendering, and virtualization. Many Supermicro GPU-optimized systems are NVIDIA Certified, guaranteeing compatibility and support for NVIDIA AI Enterprise to simplify the process of developing and deploying production AI.

"Supermicro leads the industry with its broad portfolio of application optimized GPU servers that can be deployed in a wide range of enterprise environments with very short lead times," said Charles Liang, president and CEO of Supermicro. "Our support for the NVIDIA RTX PRO 6000 Blackwell Server Edition GPU adds yet another dimension of performance and flexibility for customers looking to deploy the latest in accelerated computing capabilities from the data center to the intelligent edge. Supermicro's broad range of PCIe GPU-optimized products also support NVIDIA H200 NVL in 2-way and 4-way NVIDIA NVLink configurations to maximize inference performance for today's state-of-the-art AI models, as well as accelerating HPC workloads."

Giga Computing Showcases Rack Scale Solutions at NVIDIA GTC 2025

Giga Computing, a subsidiary of GIGABYTE and an industry leader in generative AI servers and advanced cooling technologies, today announced its participation at NVIDIA GTC 2025, bringing to market the best in GPU-based solutions for generative AI, media acceleration, and large language models (LLMs). To this end, GIGABYTE booth #1409 at NVIDIA GTC showcases a rack-scale turnkey AI solution, GIGAPOD, that offers both air- and liquid-cooling designs for the NVIDIA HGX B300 NVL16 system. Also on display at the booth is a compute node from the newly announced NVIDIA GB300 NVL72 rack-scale solution, along with two servers for modularized compute architectures supporting the newly announced NVIDIA RTX PRO 6000 Blackwell Server Edition.

Complete AI solution - GIGAPOD
With its depth of expertise in hardware and system design, Giga Computing combines infrastructure hardware, platform software, and architecting services to deliver scalable units composed of GIGABYTE GPU servers with NVIDIA GPU baseboards, running GIGABYTE POD Manager, a powerful software suite designed to enhance operational efficiency, streamline management, and optimize resource utilization. GIGAPOD's scalable unit is designed for either nine air-cooled racks or five liquid-cooled racks: two approaches to the same goal, one powerful GPU cluster using NVIDIA HGX Hopper and Blackwell GPU platforms at scale to meet demand from AI data centers.

NVIDIA Launches RTX PRO 6000 Blackwell Series Professional Graphics Cards

NVIDIA today launched the RTX PRO 6000 Blackwell series of professional graphics cards. These cards are based on the latest GeForce "Blackwell" graphics architecture and the three chips the company has already launched on it. Leading the pack is the RTX PRO 6000, a card that completely maxes out the massive "GB202" silicon, featuring more shaders than even the GeForce RTX 5090, albeit at lower clock speeds. The idea behind this product is to give pro-vis users more shader power driving a large amount of GDDR7 ECC memory. Specifically, the card comes with 24,064 CUDA cores across the 192 SM physically present on the silicon, besides 768 Tensor cores, 192 RT cores, 768 TMUs, and 192 ROPs. The card gets a humongous 96 GB of ECC GDDR7 memory across the chip's 512-bit wide memory interface, probably using 48 Gbit density memory chips. The card has a TGP of 600 W, maxing out the 12V2x6 power input, and comes with a board design resembling the RTX 5090.
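Peak bandwidth for the 512-bit GDDR7 interface follows directly from bus width and per-pin data rate. The 28 Gbps rate used below is an assumption for illustration; the article does not quote one.

```python
# Peak memory bandwidth of a GDDR7 interface: (bus width in bits / 8 bits
# per byte) * per-pin data rate in Gbps gives GB/s. The 28 Gbps figure is
# an assumed per-pin rate, not a value stated in the article.

def peak_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return bus_width_bits / 8 * pin_rate_gbps

print(peak_bandwidth_gbs(512, 28.0))  # 1792.0 GB/s at the assumed rate
```

The same formula applies to any GDDR generation; only the achievable per-pin rate changes.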

Next up is the RTX PRO 6000 Blackwell Max-Q. This card has essentially the same core configuration as the RTX PRO 6000, but with a reduced TGP and a simpler 2-slot board design that uses a lateral blower. It is meant for machines with multiple such cards installed that aren't quite rendering servers. Lastly, there's the RTX PRO 6000 Server Edition. This card again has a core configuration identical to the others in the lineup, but with a board design optimized for rackmount servers and large rendering farms; its cooler relies on the rack's airflow.

Server Market Revenue Increased 91% During Q4 2024, NVIDIA Continues Dominating the GPU Server Space

According to the International Data Corporation (IDC) Worldwide Quarterly Server Tracker, the server market reached a record $77.3 billion in revenue during the last quarter of the year. The quarter showed the second-highest growth rate since 2019, with a year-over-year increase of 91% in vendor revenue. Revenue generated from x86 servers increased 59.9% in 2024Q4 to $54.8 billion, while non-x86 servers increased 262.1% year over year to $22.5 billion.

Revenue for servers with an embedded GPU grew 192.6% year-over-year in the fourth quarter of 2024, and for the full year 2024, more than half of server market revenue came from servers with an embedded GPU. NVIDIA continues to dominate the server GPU space, accounting for over 90% of total shipments with an embedded GPU in 2024Q4. The fast pace at which hyperscalers and cloud service providers have been adopting servers with embedded GPUs has fueled the server market's growth; the market has more than doubled in size since 2020, with revenue of $235.7 billion for the full year 2024.
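The IDC figures above are internally consistent, which a quick check confirms: x86 and non-x86 revenue sum to the quarterly total, and the 91% growth rate implies the prior-year baseline.

```python
# Cross-checking the quoted IDC numbers (all values in $ billions).
x86_q4_2024 = 54.8
non_x86_q4_2024 = 22.5
total_q4_2024 = x86_q4_2024 + non_x86_q4_2024
print(round(total_q4_2024, 1))        # matches the reported $77.3B total

yoy_growth = 0.91                     # 91% year-over-year growth
implied_q4_2023 = 77.3 / (1 + yoy_growth)
print(round(implied_q4_2023, 1))      # implied baseline a year earlier
```

The implied Q4 2023 baseline of roughly $40.5 billion is a derived figure, not one stated in the tracker excerpt above.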

ASRock Rack Debuts High-Performance AI Server Solutions at GTC 2025

ASRock Rack Inc., the leading innovative server company, will showcase its high-performance AI server lineup at GTC 2025 in San Jose, California, from March 18-21. The featured solutions include next-gen AI servers based on the NVIDIA Blackwell Ultra platform and RTX PRO 6000 Blackwell Server Edition, and the debut of the liquid-cooled 4U8X-GNR2.

Agentic AI (autonomous systems that perceive, decide, and act) is gaining traction across industries, from healthcare to robotics. The growing demand for real-time interactions and adaptive learning is driving the need for accelerated computing servers with exceptional computational power. At GTC today, ASRock Rack showcases servers based on NVIDIA Blackwell Ultra, the latest addition to the NVIDIA Blackwell accelerated computing platform, offering optimized compute and increased memory, leading the way for a new era of AI reasoning, agentic AI, and physical AI applications.

MSI Powers the Future of Cloud Computing at CloudFest 2025

MSI, a leading global provider of high-performance server solutions, unveiled its next-generation server platforms—ORv3 Servers, DC-MHS Servers, and NVIDIA MGX AI Servers—at CloudFest 2025, held from March 18-20 at booth H02. The ORv3 Servers focus on modularity and standardization to enable seamless integration and rapid scalability for hyperscale growth. Complementing this, the DC-MHS Servers emphasize modular flexibility, allowing quick reconfiguration to adapt to diverse data center requirements while maximizing rack density for sustainable operations. Together with NVIDIA MGX AI Servers, which deliver exceptional performance for AI and HPC workloads, MSI's comprehensive solutions empower enterprises and hyperscalers to redefine cloud infrastructure with unmatched flexibility and performance.

"We're excited to present MSI's vision for the future of cloud infrastructure." said Danny Hsu, General Manager of MSI's Enterprise Platform Solutions. "Our next-generation server platforms address the critical needs of scalability, efficiency, and sustainability. By offering modular flexibility, seamless integration, and exceptional performance, we empower businesses, hyperscalers, and enterprise data centers to innovate, scale, and lead in this cloud-powered era."

Equal1 Launches Bell-1: The First Quantum System Purpose-Built for the HPC Era

Equal1 today unveils Bell-1, the first quantum system purpose-built for the HPC era. Unlike first-generation quantum computers that demand dedicated rooms, infrastructure, and complex cooling systems, Bell-1 is designed for direct deployment in HPC-class environments. As a rack-mountable quantum node, it integrates directly alongside classical compute—as compact as a GPU server, yet exponentially more powerful for the world's hardest problems. Bell-1 is engineered to eliminate the traditional barriers of cost, infrastructure, and complexity, setting a new benchmark for scalable quantum computing integration.

Bell-1 rewrites the rule book. While today's quantum computers demand specialized infrastructure, Bell-1 is a silicon-powered quantum computer that integrates seamlessly into existing HPC environments. Simply rack it, plug it in, and unlock quantum capabilities wherever your classical computers already operate. No new cooling systems. No extraordinary power demands. Just quantum computing that works in the real world, as easy to deploy as a high-end GPU server. It plugs into a standard power socket, operates at just 1600 W, and delivers on-demand quantum computing for computationally intensive workloads.

GIGABYTE Showcases Cutting-Edge AI and Cloud Computing Solutions at CloudFest 2025

Giga Computing, a subsidiary of GIGABYTE, a global leader in IT technology solutions, is thrilled to announce its participation at CloudFest 2025, the world's premier cloud, hosting, and internet infrastructure event. As a key exhibitor, Giga Computing will highlight its latest innovations in AI, cloud computing, and edge solutions at the GIGABYTE booth. In line with its commitment to shaping the future of AI development and deployment, the GIGABYTE booth will showcase its industry-leading hardware and platforms optimized for AI workloads, cloud applications, and edge computing. As cloud adoption continues to accelerate, Giga Computing solutions are designed to empower businesses with unparalleled performance, scalability, and efficiency.

At CloudFest 2025, Giga Computing invites attendees to visit booth #E03 to experience firsthand its cutting-edge cloud computing solutions. From state-of-the-art hardware to innovative total solutions, a comprehensive suite of products and services designed to meet the evolving needs of the cloud industry are being showcased.

Supermicro Intros New Systems Optimized for Edge and Embedded Workloads

Supermicro, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is introducing a wide range of new systems that are fully optimized for edge and embedded workloads. Several of these new compact servers, based on the latest Intel Xeon 6 SoC processor family (formerly codenamed Granite Rapids-D), empower businesses to optimize real-time AI inferencing and enable smarter applications across many key industries.

"As the demand for Edge AI solutions grows, businesses need highly reliable, compact systems that can process data at the edge in real-time," said Charles Liang, president and CEO of Supermicro. "At Supermicro, we design and deploy the industry's broadest range of application optimized systems from the data center to the far edge. Our latest generation of edge servers deliver advanced AI capabilities for enhanced efficiency and decision-making close to where the data is generated. With up to 2.5 times core count increase at the edge with improved performance per watt and per core, these new Supermicro compact systems are fully optimized for workloads such as Edge AI, telecom, networking, and CDN."

Advantech Announces Edge & AI Solutions with 5th Gen AMD EPYC Embedded Series Processors

Advantech, a leading provider of edge computing and edge AI solutions, is pleased to announce that its high-performance server and network appliances are now powered by the latest AMD EPYC Embedded 9005 Series processors. By leveraging these cutting-edge platforms, Advantech is driving edge computing and AI to new heights, making its solutions ideal for 5G edge cloud, AI, machine learning, and enhanced data security.

"We are excited about the launch of Advantech's latest generation of innovative edge computing and AI solutions powered by AMD EPYC Embedded 9005 Series processors," said Amey Deosthali, Senior Director of embedded core markets at AMD. "Optimized for embedded markets, EPYC Embedded 9005 provides exceptional compute performance for edge AI applications while delivering enhanced IO capabilities, product longevity, and system resiliency."

MiTAC Computing Showcases Cutting-Edge AI and HPC Servers at Supercomputing Asia 2025

MiTAC Computing Technology Corp., a subsidiary of MiTAC Holdings Corp. and a global leader in server design and manufacturing, will showcase its latest AI and HPC innovations at Supercomputing Asia 2025, taking place from March 11 at Booth #B10. The event highlights MiTAC's commitment to delivering cutting-edge technology with the introduction of the G4520G6 AI server and the TN85-B8261 HPC server—both engineered to meet the growing demands of artificial intelligence, machine learning, and high-performance computing (HPC) applications.

G4520G6 AI Server: Performance, Scalability, and Efficiency Redefined
The G4520G6 AI server redefines computing performance with an advanced architecture tailored for intensive workloads. Key features include:
  • Exceptional Compute Power - Supports dual Intel Xeon 6 processors with TDP up to 350 W, delivering high-performance multicore processing for AI-driven applications.
  • Enhanced Memory Performance - Equipped with 32 DDR5 DIMM slots (16 per CPU) and 8 memory channels per CPU, supporting up to 8,192 GB of DDR5 RDIMM/3DS RDIMM at 6400 MT/s for superior memory bandwidth.
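The theoretical memory bandwidth implied by the configuration above (8 DDR5 channels per CPU at 6400 MT/s, 8 bytes per channel transfer) works out as follows; this is a peak per-socket figure derived from the spec, not a number MiTAC quotes.

```python
# Peak DDR5 bandwidth per CPU socket: channels * transfer rate (MT/s) *
# bytes per transfer (64-bit channel = 8 bytes), converted to GB/s.

def ddr5_peak_gbs(channels: int, mts: int, bytes_per_transfer: int = 8) -> float:
    """Theoretical peak bandwidth in GB/s (decimal)."""
    return channels * mts * bytes_per_transfer / 1000

print(ddr5_peak_gbs(8, 6400))  # 409.6 GB/s per socket at 6400 MT/s
```

With two sockets populated, the system's aggregate theoretical peak doubles accordingly.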

Avalue Technology Unveils HPM-GNRDE High-Performance Server Motherboard

Avalue Technology introduces the HPM-GNRDE high-performance server motherboard, powered by the latest Intel Xeon 6 Processors (P-Core) 6500P & 6700P.

Designed to deliver quality computing performance, ultra-fast memory bandwidth, and advanced PCIe 5.0 expansion, the HPM-GNRDE is the ideal solution for AI workloads, high-performance computing (HPC), Cloud data centers, and enterprise applications. The HPM-GNRDE will make its debut at embedded world 2025, showcasing Avalue's innovation in high-performance computing.

Bitspower, KRAMBU & XFX Collaborate on Next-Gen AI Server Cooling

Bitspower, KRAMBU, and XFX have joined forces to bring advanced liquid-cooling solutions to AMD's cutting-edge RDNA 4.0 architecture. As AI workloads demand ever-increasing processing power, efficient thermal management is crucial to unlocking peak performance and long-term reliability. By integrating Bitspower's industry-leading cooling technology with KRAMBU's expertise in high-performance computing and XFX's GPU innovation, this partnership delivers a powerful, scalable solution for data centers and enterprise applications.

Designed for maximum efficiency, these liquid-cooled servers support up to 20 GPUs per system, significantly increasing computing density and accelerating AI, machine learning, and data analytics workloads. At the heart of these systems is AMD's Radeon RX 9070 XT, built on the 5 nm RDNA 4 Navi 48 architecture, which boasts 25% greater transistor density than NVIDIA's Blackwell GPUs—packing 53.9 billion transistors into a die smaller than GB203. By maintaining lower temperatures under heavy workloads, Bitspower's cooling solutions allow these high-density servers to operate at peak performance while reducing thermal throttling and energy consumption.
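The transistor-density comparison above can be reproduced from die area and transistor count. The die areas and the GB203 transistor count used below are assumptions based on commonly reported figures (Navi 48 around 357 mm², GB203 around 378 mm² with about 45.6 billion transistors), not values stated in this article.

```python
# Transistor density in millions of transistors per mm^2, from a chip's
# transistor count (in billions) and die area. Die areas and the GB203
# transistor count are assumed figures, not quotes from the article.

def density_mtx_per_mm2(transistors_b: float, die_mm2: float) -> float:
    return transistors_b * 1000 / die_mm2

navi48 = density_mtx_per_mm2(53.9, 357)  # 53.9B transistors is quoted above
gb203 = density_mtx_per_mm2(45.6, 378)   # assumed GB203 figures
print(round(navi48 / gb203 - 1, 2))      # relative density advantage
```

Under these assumed figures the ratio lands close to the roughly 25% advantage claimed in the text.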

Team Group To Showcase Cutting-Edge Solutions at Embedded World 2025

Team Group Inc. announced today that it will participate in Embedded World 2025, a premier global event for embedded technology. The company will unveil its latest industrial-grade solutions, spanning three key product lines designed to meet the evolving demands of AI, high-performance computing, and data security. These innovations demonstrate Team Group's commitment to advancing efficiency and stability in industrial applications while laying a solid foundation for future technological progress.

Highlighting its leadership in embedded applications and groundbreaking technology, Team Group's D500R WORM Memory Card has been recognized with the prestigious Embedded Award. Join Team Group from March 11-13, 2025, at NürnbergMesse Convention Center in Nuremberg, Germany (Hall 5 / 5-239) to explore the latest breakthroughs and optimal solutions in industrial solutions.

Weak Consumer Electronics Demand Drives 4Q24 NAND Flash Revenue Down 6.2% QoQ, Says TrendForce

TrendForce's latest research reveals that the NAND Flash market faced downward pressure in 4Q24 as PC and smartphone manufacturers continued inventory clearance efforts, leading to significant supply chain adjustments. Consequently, NAND Flash prices reversed downward, with ASP dropping 4% QoQ, while overall bit shipments declined by 2%. Total industry revenue fell 6.2% QoQ to US$16.52 billion.

Looking ahead to 1Q25, the traditional slow-season effect remains unavoidable despite suppliers actively reducing production. Server and other key end-market inventory restocking has slowed, and with both order volumes and contract prices declining sharply, NAND Flash industry revenue is expected to drop by up to 20% QoQ. However, as production cuts take effect and prices stabilize, the NAND Flash market is expected to recover in the second half of 2025.
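The reported 6.2% QoQ revenue decline decomposes approximately into the price and shipment moves quoted above: revenue change is roughly the product of the ASP change and the bit-shipment change, with the small residual attributable to product-mix effects across suppliers.

```python
# Approximate decomposition of a revenue change into price (ASP) and
# volume (bit shipment) components: (1 + dP) * (1 + dQ) - 1.

asp_change = -0.04   # ASP down 4% QoQ, as quoted above
bit_change = -0.02   # bit shipments down 2% QoQ, as quoted above

implied_revenue_change = (1 + asp_change) * (1 + bit_change) - 1
print(round(implied_revenue_change, 4))  # close to the reported -6.2% QoQ
```

The implied -5.9% sits within a few tenths of a point of the reported -6.2%, which is typical when aggregating suppliers with different price and mix trajectories.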

Alibaba Adds New "C930" Server-grade Chip to XuanTie RISC-V Processor Series

Damo Academy—a research and development wing of Alibaba—launched its debut "server-grade processor" design late last week, in Beijing. According to a South China Morning Post (SCMP) news article, the C930 model is a brand-new addition to the e-commerce platform's XuanTie RISC-V CPU series. Company representatives stated that their latest product is designed as a server-level and high-performance computing (HPC) solution. Going back to March 2024, TechPowerUp and other Western hardware news outlets picked up on Alibaba's teasing of the Xuantie C930 SoC, and a related Xuantie 907 matrix processing unit. Fast-forward to the present day; Damo Academy has disclosed that initial shipments—of finalized C930 units—will be sent out to customers this month.

The newly released open-source RISC-V architecture-based HPC chip is an unknown quantity in terms of technical specifications; Damo Academy reps did not provide any detailed information during last Friday's conference (February 28). SCMP's report noted the R&D division's emphasis on "its role in advancing RISC-V adoption" within various high-end fields. Apparently, the XuanTie engineering team has "supported the implementation of more than thirty percent of RISC-V high-performance processors." Upcoming additions will arrive in the form of the C908X for AI acceleration, the R908A for automotive processing, and an XL200 model for high-speed interconnection. These XuanTie projects are reportedly still deep in development.