News Posts matching #Enterprise


NVIDIA & Storage Industry Leaders Unveil New Class of Enterprise Infrastructure for the AI Era

At GTC 2025, NVIDIA announced the NVIDIA AI Data Platform, a customizable reference design that leading providers are using to build a new class of AI infrastructure for demanding AI inference workloads: enterprise storage platforms with AI query agents fueled by NVIDIA accelerated computing, networking and software. Using the NVIDIA AI Data Platform, NVIDIA-Certified Storage providers can build infrastructure to speed AI reasoning workloads with specialized AI query agents. These agents help businesses generate insights from data in near real time, using NVIDIA AI Enterprise software—including NVIDIA NIM microservices for the new NVIDIA Llama Nemotron models with reasoning capabilities—as well as the new NVIDIA AI-Q Blueprint.

Storage providers can optimize their infrastructure to power these agents with NVIDIA Blackwell GPUs, NVIDIA BlueField DPUs, NVIDIA Spectrum-X networking and the NVIDIA Dynamo open-source inference library. Leading data platform and storage providers—including DDN, Dell Technologies, Hewlett Packard Enterprise, Hitachi Vantara, IBM, NetApp, Nutanix, Pure Storage, VAST Data and WEKA—are collaborating with NVIDIA to create customized AI data platforms that can harness enterprise data to reason and respond to complex queries. "Data is the raw material powering industries in the age of AI," said Jensen Huang, founder and CEO of NVIDIA. "With the world's storage leaders, we're building a new class of enterprise infrastructure that companies need to deploy and scale agentic AI across hybrid data centers."
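As background on how such AI query agents are typically exercised, NIM microservices generally expose an OpenAI-compatible HTTP API. The snippet below is a minimal sketch only; the endpoint URL, API key handling, and model identifier are illustrative assumptions rather than confirmed details of the NVIDIA AI Data Platform.

```python
# Minimal sketch of querying a NIM-style, OpenAI-compatible endpoint.
# Assumptions: a NIM microservice is reachable at the placeholder URL below,
# and the model id follows the Llama Nemotron naming used in NVIDIA's catalog.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # placeholder local NIM endpoint
    api_key="not-needed-for-local-nim",   # local deployments often ignore the key
)

response = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",  # assumed model id
    messages=[
        {"role": "system", "content": "Answer using the retrieved enterprise documents."},
        {"role": "user", "content": "Summarize last quarter's storage utilization trends."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```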

Server Market Revenue Increased 91% During Q4 2024, NVIDIA Continues Dominating the GPU Server Space

According to the International Data Corporation (IDC) Worldwide Quarterly Server Tracker, the server market reached a record $77.3 billion in revenue during the last quarter of the year. The quarter showed the second-highest growth rate since 2019, with a year-over-year increase of 91% in vendor revenue. Revenue from x86 servers increased 59.9% in 2024Q4 to $54.8 billion, while non-x86 servers increased 262.1% year over year to $22.5 billion.

Revenue for servers with an embedded GPU grew 192.6% year-over-year in the fourth quarter of 2024, and for the full year 2024, more than half of the server market revenue came from servers with an embedded GPU. NVIDIA continues to dominate the server GPU space, accounting for over 90% of total shipments with an embedded GPU in 2024Q4. The fast pace at which hyperscalers and cloud service providers have been adopting servers with embedded GPUs has fueled the server market's growth; the market has more than doubled in size since 2020, with revenue of $235.7 billion for the full year 2024.
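As a quick back-of-envelope check (ours, not IDC's), the segment figures quoted above reconcile with the headline numbers:

```python
# Back-of-envelope check of the IDC Q4 2024 server figures quoted above.
x86_q4_2024, x86_growth = 54.8, 0.599          # $B, +59.9% YoY
non_x86_q4_2024, non_x86_growth = 22.5, 2.621  # $B, +262.1% YoY

total_q4_2024 = x86_q4_2024 + non_x86_q4_2024  # ~77.3 $B, matches the reported record
total_q4_2023 = x86_q4_2024 / (1 + x86_growth) + non_x86_q4_2024 / (1 + non_x86_growth)
implied_yoy = total_q4_2024 / total_q4_2023 - 1  # ~0.91, consistent with the 91% headline

print(f"Q4 2024 total: ${total_q4_2024:.1f}B, implied YoY growth: {implied_yoy:.0%}")
```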

MSI Powers the Future of Cloud Computing at CloudFest 2025

MSI, a leading global provider of high-performance server solutions, unveiled its next-generation server platforms—ORv3 Servers, DC-MHS Servers, and NVIDIA MGX AI Servers—at CloudFest 2025, held from March 18-20 at booth H02. The ORv3 Servers focus on modularity and standardization to enable seamless integration and rapid scalability for hyperscale growth. Complementing this, the DC-MHS Servers emphasize modular flexibility, allowing quick reconfiguration to adapt to diverse data center requirements while maximizing rack density for sustainable operations. Together with NVIDIA MGX AI Servers, which deliver exceptional performance for AI and HPC workloads, MSI's comprehensive solutions empower enterprises and hyperscalers to redefine cloud infrastructure with unmatched flexibility and performance.

"We're excited to present MSI's vision for the future of cloud infrastructure." said Danny Hsu, General Manager of MSI's Enterprise Platform Solutions. "Our next-generation server platforms address the critical needs of scalability, efficiency, and sustainability. By offering modular flexibility, seamless integration, and exceptional performance, we empower businesses, hyperscalers, and enterprise data centers to innovate, scale, and lead in this cloud-powered era."

Global Top 10 IC Design Houses See 49% YoY Growth in 2024, NVIDIA Commands Half the Market

TrendForce reveals that the combined revenue of the world's top 10 IC design houses reached approximately US$249.8 billion in 2024, marking a 49% YoY increase. The booming AI industry has fueled growth across the semiconductor sector, with NVIDIA leading the charge, posting an astonishing 125% revenue growth, widening its lead over competitors, and solidifying its dominance in the IC industry.

Looking ahead to 2025, advancements in semiconductor manufacturing will further enhance AI computing power, with LLMs continuing to emerge. Open-source models like DeepSeek could lower AI adoption costs, accelerating AI penetration from servers to personal devices. This shift positions edge AI devices as the next major growth driver for the semiconductor industry.

Silicon Motion Announces PCIe Gen5 Enterprise SSD Reference Design Kit Supporting up to 128TB

Silicon Motion Technology Corporation ("Silicon Motion"), a global leader in designing and marketing NAND flash controllers for solid-state storage devices, today announced the sampling of its groundbreaking MonTitan SSD Reference Design Kit (RDK), which supports up to 128 TB with QLC NAND. Built on the advanced MonTitan PCIe Gen 5 SSD Development Platform, the new offering aims to accelerate enterprise and data center AI SSD storage solutions by providing a robust and efficient RDK for OEMs and partners.

The SSD RDK incorporates Silicon Motion's dual-ported, enterprise-grade SM8366 controller, which supports PCIe Gen 5 x4, NVMe 2.0, and the OCP 2.5 data center specification, offering unmatched performance, QoS, and capacity for next-generation large data lake storage needs.
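For context on the interface itself, a standard back-of-envelope calculation (not a Silicon Motion figure) puts the raw bandwidth of a PCIe Gen 5 x4 link at roughly 15.8 GB/s per direction:

```python
# Theoretical per-direction bandwidth of a PCIe 5.0 x4 link (upper bound before protocol overhead).
gt_per_s_per_lane = 32           # PCIe 5.0 signaling rate, GT/s per lane
encoding_efficiency = 128 / 130  # 128b/130b line encoding
lanes = 4

gbytes_per_s = gt_per_s_per_lane * encoding_efficiency * lanes / 8
print(f"PCIe 5.0 x4 raw bandwidth: ~{gbytes_per_s:.1f} GB/s per direction")
# ~15.8 GB/s, which is why Gen 5 x4 drives top out around 14 GB/s sequential reads in practice.
```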

Meta Reportedly Reaches Test Phase with First In-house AI Training Chip

According to a Reuters technology report, Meta's engineering department is engaged in the testing of their "first in-house chip for training artificial intelligence systems." Two inside sources have described this significant development milestone, which involves a small-scale deployment of early samples. The owner of Facebook could ramp up production if the initial batches pass muster. Despite a recent-ish showcasing of an open-architecture NVIDIA "Blackwell" GB200 system for enterprise, Meta leadership is reported to be pursuing proprietary solutions. Multiple big players—in the field of artificial intelligence—are attempting to break away from a total reliance on Team Green. Last month, press outlets concentrated on OpenAI's alleged finalization of an in-house design, with rumored involvement coming from Broadcom and TSMC.

One of the Reuters industry moles believes that Meta has signed up with TSMC—supposedly, the Taiwanese foundry was responsible for the production of test batches. Tom's Hardware reckons that Meta and Broadcom worked together on the tape-out of the social media giant's "first AI training accelerator." Development of the company's "Meta Training and Inference Accelerator" (MTIA) series has stretched back a couple of years—according to Reuters, this multi-part project "had a wobbly start for years, and at one point scrapped a chip at a similar phase of development...Meta last year, started using an MTIA chip to perform inference, or the process involved in running an AI system as users interact with it, for the recommendation systems that determine which content shows up on Facebook and Instagram news feeds." Leadership is reportedly aiming to get custom silicon solutions up and running for AI training by next year. Past examples of MTIA hardware were deployed with open-source RISC-V cores (for inference tasks), but it is not clear whether this architecture will form the basis of Meta's latest AI chip design.

Kingston Debuts DC3000ME PCIe 5.0 NVMe U.2 Enterprise SSD with eTLC NAND

Kingston has introduced its new DC3000ME line of enterprise-grade PCIe 5.0 SSDs. These top-tier storage devices come in a U.2 15 mm form factor (100.50 mm × 69.8 mm × 14.8 mm) and use 3D eTLC NAND flash memory. The drives include built-in power loss protection and AES 256-bit hardware-based encryption. Kingston DC3000ME SSDs are designed for server applications such as AI, HPC, OLTP, databases, cloud infrastructure, and edge computing. As an enterprise-grade product, the DC3000ME SSDs also feature various built-in telemetry such as media wear, temperature, health, etc.

At the moment, they are offered in three capacities: 3.84 TB, 7.68 TB, and 15.36 TB. Each version is rated for 1 DWPD endurance and carries a 5-year warranty. Power consumption is 8 W at idle and up to 24 W during write operations. Kingston points out the drives' steady I/O performance and quick response times, with 99th-percentile read latencies under 10 µs and write latencies under 70 µs, plus sequential read/write speeds of up to 14,000/10,000 MB/s and up to 2,800,000/500,000 4K random read/write IOPS. The drives also offer NVMe-MI 1.2b remote management, end-to-end data protection, and support for TCG Opal 2.0. Exact pricing is still to be announced; however, we found them listed online at €686.90 (3.84 TB), €1,226.90 (7.68 TB), and €2,252.90 (15.36 TB).
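The built-in telemetry mentioned above (media wear, temperature, health) is normally exposed through the standard NVMe SMART/health log, which can be read on Linux with nvme-cli. A minimal sketch, assuming nvme-cli is installed and the drive enumerates as /dev/nvme0; JSON field names can vary between nvme-cli versions:

```python
# Sketch: read standard NVMe SMART/health telemetry (temperature, wear, warnings) on Linux.
# Assumes nvme-cli is installed and the target drive is /dev/nvme0; requires root.
import json
import subprocess

raw = subprocess.run(
    ["nvme", "smart-log", "/dev/nvme0", "--output-format=json"],
    check=True, capture_output=True, text=True,
).stdout
smart = json.loads(raw)

# Field names below follow common nvme-cli JSON output and may differ by version.
print("Critical warning bits:", smart.get("critical_warning"))
print("Composite temperature:", smart.get("temperature"), "(reported in Kelvin by recent nvme-cli releases)")
print("Percentage used (wear):", smart.get("percent_used"), "%")
```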

Phison Showcases Edge AI and Embedded Solutions at Embedded World

Embedded World is one of the most influential exhibitions in the global embedded technology sector, attracting numerous experts in industrial computing, automotive electronics, IoT, and AI technologies each year. Phison Electronics (8299TT), a leading innovator of NAND controller and NAND storage solutions, will participate in the Taiwan Excellence Pavilion at Embedded World 2025 in Germany from March 11 to 13.

Phison will showcase its exclusive AI solution aiDAPTIV+, the enterprise ultra-high-capacity PASCARI PCIe 5.0 122.88 TB SSD, and its latest automotive storage technology, the MPT5 Automotive PCIe Gen 4 SSD, demonstrating over 15 years of technical expertise in the embedded market.

Insiders Predict Introduction of NVIDIA "Blackwell Ultra" GB300 AI Series at GTC, with Fully Liquid-cooled Clusters

Supply chain insiders believe that NVIDIA's "Blackwell Ultra" GB300 AI chip design will get a formal introduction at next week's GTC 2025 conference. Jensen Huang's keynote presentation is scheduled—the company's calendar is marked with a very important date: Tuesday, March 18. Team Green's chief has already revealed a couple of Blackwell B300 series details to investors; a recent earnings call touched upon the subject of a second half (of 2025) launch window. Industry moles have put spotlights on the GB300 GPU's alleged energy-hungry nature. According to inside tracks, power consumption has "significantly" increased when compared to a slightly older equivalent: NVIDIA's less refined "Blackwell" GB200 design.

A Taiwan Economic Daily news article predicts an upcoming "second cooling revolution," due to reports of "Blackwell Ultra" parts demanding greater heat dissipation solutions. Supply chain leakers have suggested effective countermeasures—in the form of fully liquid-cooled systems: "not only will more water cooling plates be introduced, but the use of water cooling quick connectors will increase four times compared to GB200." The pre-Christmas 2024 news cycle proposed a 1400 W TDP rating. Involved "Taiwanese cooling giants" are expected to pull in tidy sums of money from the supply of optimal heat dissipating gear, with local "water-cooling quick-connector" manufacturers also tipped to benefit greatly. The UDN report pulled quotes from a variety of regional cooling specialists; the consensus being that involved partners are struggling to keep up with demand across GB200 and GB300 product lines.

Avalue Technology Unveils HPM-GNRDE High-Performance Server Motherboard

Avalue Technology introduces the HPM-GNRDE high-performance server motherboard, powered by the latest Intel Xeon 6 Processors (P-Core) 6500P & 6700P.

Designed to deliver quality computing performance, ultra-fast memory bandwidth, and advanced PCIe 5.0 expansion, the HPM-GNRDE is the ideal solution for AI workloads, high-performance computing (HPC), Cloud data centers, and enterprise applications. The HPM-GNRDE will make its debut at embedded world 2025, showcasing Avalue's innovation in high-performance computing.

QNAP Releases Cloud NAS Operating System QuTScloud c5.2

QNAP Systems, Inc. today released QuTScloud c5.2, the latest version of its Cloud NAS operating system. This update introduces Security Center, a proactive security application that monitors Cloud NAS file activities and defends against ransomware threats. Additionally, QuTScloud c5.2 provides extensive optimizations, streamlining operations and management for a more seamless user experience.

QuTScloud Cloud NAS revolutionizes enterprise data storage and management. By deploying a QuTScloud image on virtual machines, businesses can flexibly implement Cloud NAS on public cloud platforms or virtualization environments. With a subscription-based pricing model starting at just US $4.99 per month, users can allocate resources efficiently and optimize costs.

Advantech Launches the SQFlash EDSFF and EU-2 PCIe Gen5 x4 SSDs

Advantech, a global leader in industrial flash storage solutions, introduces the SQFlash EDSFF and EU-2 PCIe Gen 5 x4 SSDs, designed to meet the demands of next-generation enterprise and data center applications. Through 2024, PCIe Gen 4 solutions accounted for 50% of the market, and PCIe Gen 5 products are rapidly gaining traction across diverse applications. Among PCIe storage products, form factors such as U.2, E3.S, and E1.S are driving market growth.

The SQFlash E1.S SSD, built on the EDSFF standard, delivers exceptional performance with PCIe Gen 5 read and write speeds of up to 14,000 MB/s and 8,500 MB/s, respectively, while offering scalability, power efficiency, and thermal optimization. Meanwhile, the SQFlash EU-2 PCIe Gen 5 x4 SSD leverages cutting-edge PCIe 5.0 technology, setting new standards in speed, reliability, and thermal management, making it ideal for data centers, enterprise computing, real-time analytics, and AI-driven workloads.

Alibaba Adds New "C930" Server-grade Chip to XuanTie RISC-V Processor Series

Damo Academy—a research and development wing of Alibaba—launched its debut "server-grade processor" design late last week, in Beijing. According to a South China Morning Post (SCMP) news article, the C930 model is a brand-new addition to the e-commerce platform's XuanTie RISC-V CPU series. Company representatives stated that their latest product is designed as a server-level and high-performance computing (HPC) solution. Going back to March 2024, TechPowerUp and other Western hardware news outlets picked up on Alibaba's teasing of the XuanTie C930 SoC and a related XuanTie 907 matrix processing unit. Fast-forward to the present day: Damo Academy has disclosed that initial shipments of finalized C930 units will be sent out to customers this month.

The newly released open-source RISC-V architecture-based HPC chip is an unknown quantity in terms of technical specifications. Damo Academy reps did not provide any detailed information during last Friday's conference (February 28). SCMP's report noted the R&D division's emphasis on "its role in advancing RISC-V adoption" within various high-end fields. Apparently, the XuanTie engineering team has "supported the implementation of more than thirty percent of RISC-V high-performance processors." Upcoming additions will arrive in the form of the C908X for AI acceleration, the R908A for automotive processing solutions, and an XL200 model for high-speed interconnection. These XuanTie projects are reportedly still deep in development.

Montage Technology Delivers I/O Expansion Solution for New-Generation CPU Platforms

Montage Technology today announced the mass production of its I/O expansion device for the new-generation CPU platforms—the I/O Hub (IOH) chip M88IO3020. This product is specifically designed for Intel's Birch Stream platform, aiming to provide a highly integrated and flexible I/O expansion solution for applications such as cloud computing, big data, and enterprise storage.

Montage's IOH chip establishes connectivity with Intel's latest Granite Rapids processors via the PCIe bus, achieving a maximum bandwidth of 64 Gbps. The chip features configurable high-speed interfaces including PCIe, SATA, and USB to meet diverse application requirements.

AMD to Discuss Advancing of AI "From the Enterprise to the Edge" at MWC 2025

GSMA MWC Barcelona runs from March 3 to 6, 2025, at the Fira Barcelona Gran Via in Barcelona, Spain. AMD is proud to participate in forward-thinking discussions and demos around AI, edge and cloud computing, the long-term revolutionary potential of moonshot technologies like quantum processing, and more. Check out the AMD hospitality suite in Hall 2 (Stand 2M61) and explore our demos and system design wins. Attendees are welcome to stop by informally or schedule a time slot with us.

As modern networks evolve, high-performance computing, energy efficiency, and AI acceleration are becoming just as critical as connectivity itself. AMD is at the forefront of this transformation, delivering solutions that power next-generation cloud, AI, and networking infrastructure. Our demos this year showcase AMD EPYC, AMD Instinct, and AMD Ryzen AI processors, as well as AMD Versal adaptive SoC and Zynq UltraScale+ RFSoC devices.

Qualcomm and IBM Scale Enterprise-grade Generative AI from Edge to Cloud

Ahead of Mobile World Congress 2025, Qualcomm Technologies, Inc. and IBM (NYSE: IBM) announced an expanded collaboration to drive enterprise-grade generative artificial intelligence (AI) solutions across edge and cloud devices, designed to enable increased immediacy, privacy, reliability, and personalization, along with reduced cost and energy consumption. Through this collaboration, the companies plan to integrate watsonx.governance for generative AI solutions powered by Qualcomm Technologies' platforms, and enable support for IBM's Granite models through the Qualcomm AI Inference Suite and Qualcomm AI Hub.

"At Qualcomm Technologies, we are excited to join forces with IBM to deliver cutting-edge, enterprise-grade generative AI solutions for devices across the edge and cloud," said Durga Malladi, senior vice president and general manager, technology planning and edge solutions, Qualcomm Technologies, Inc. "This collaboration enables businesses to deploy AI solutions that are not only fast and personalized but also come with robust governance, monitoring, and decision-making capabilities, with the ability to enhance the overall reliability of AI from edge to cloud."

IBM Completes Acquisition of HashiCorp, Creates Comprehensive, End-to-End Hybrid Cloud Platform

IBM (NYSE: IBM) today announced it has completed its acquisition of HashiCorp, whose products automate and secure the infrastructure that underpins hybrid cloud applications and generative AI. Together the companies' capabilities will help clients accelerate innovation, strengthen security, and get more value from the cloud.

Today, nearly 75% of enterprises are using hybrid cloud, including public clouds from hyperscalers and on-prem data centers, which can enable true innovation when paired with a consistent approach to delivering and managing that infrastructure at scale. Enterprises are looking for ways to more efficiently manage and modernize cloud infrastructure and security tasks, from initial planning and design to ongoing maintenance. By 2028, it is projected that generative AI will lead to the creation of 1 billion new cloud-native applications. Supporting this scale requires infrastructure automation far beyond the capacity of the workforce alone.

IBM Introduces New Multi-Modal and Reasoning AI "Granite" Models Built for the Enterprise

IBM today debuted the next generation of its Granite large language model (LLM) family, Granite 3.2, in a continued effort to deliver small, efficient, practical enterprise AI for real-world impact. All Granite 3.2 models are available under the permissive Apache 2.0 license on Hugging Face. Select models are available today on IBM watsonx.ai, Ollama, Replicate, and LM Studio, and expected soon in RHEL AI 1.5 - bringing advanced capabilities to businesses and the open-source community.
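Since the weights are published under Apache 2.0 on Hugging Face, a Granite 3.2 model can be loaded with the standard transformers API. A minimal sketch; the model id below is an assumption, so check IBM's Granite collection on the Hub for the exact name:

```python
# Minimal sketch of running a Granite 3.2 instruct model with Hugging Face transformers.
# The model id is assumed; verify the exact name in IBM's Granite collection on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3.2-8b-instruct"  # assumed model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "List three risks to track when deploying LLMs in production."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```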

NComputing Launches RX540 Thin Client Powered by Raspberry Pi Compute Module 5 Platform

NComputing, a global leader in thin client computing solutions, announces the launch of its RX540 thin client, powered by the Raspberry Pi Compute Module 5 platform. This state-of-the-art device offers a substantial leap in CPU performance, memory bandwidth, and local GPU performance.

Optimized for a wide range of Virtual Desktop Infrastructure (VDI) and Desktop-as-a-Service (DaaS) solutions, the RX540 provides an exceptional blend of functionality, performance, and affordability. By optimizing LEAF OS on the CM5 platform, NComputing has built a thin client that delivers performance on par with traditional x86-64 endpoints while maintaining cost-effectiveness, enhancing multitasking capabilities and boosting productivity across the board.

Advantech Unveils New AI, Industrial and Network Edge Servers Powered by Intel Xeon 6 Processors

Advantech, a global leader in industrial and embedded computing, today announced the launch of seven new server platforms built on Intel Xeon 6 processors, optimized for industrial, transportation, and communications applications. Designed to meet the increasing demands of complex AI, storage, and networking workloads, these innovative edge servers and network appliances provide superior performance, reliability, and scalability for system integrator, solution provider, and service provider customers.

Intel Xeon 6 - Exceptional Performance for the Widest Range of Workloads
Intel Xeon 6 processors feature advanced performance and efficiency cores, delivering up to 86 cores per CPU, DDR5 memory support with speeds up to 6400 MT/s, and PCIe Gen 5 lanes for high-speed connectivity. Designed to optimize both compute-intensive and scale-out workloads, these processors ensure seamless integration across a wide array of applications.
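To put the DDR5-6400 figure in perspective, peak theoretical memory bandwidth works out to about 51.2 GB/s per channel; the per-socket total depends on the channel count, which varies by Xeon 6 SKU and is assumed below:

```python
# Peak theoretical DDR5 bandwidth for a Xeon 6 socket (channel count is SKU-dependent; 8 assumed here).
mt_per_s = 6400          # DDR5-6400 transfer rate
bytes_per_transfer = 8   # 64-bit data bus per channel
channels = 8             # assumption; varies by Xeon 6 SKU

per_channel_gbs = mt_per_s * bytes_per_transfer / 1000
socket_gbs = per_channel_gbs * channels
print(f"~{per_channel_gbs:.1f} GB/s per channel, ~{socket_gbs:.0f} GB/s per socket with {channels} channels")
```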

MSI Announces New Server Platforms Supporting Intel Xeon 6 Family of Processors

MSI introduces new server platforms powered by the latest Intel Xeon 6 family of processors with the Performance Cores (P-Cores). Engineered for high-density performance, seamless scalability, and energy-efficient operations, these servers deliver exceptional throughput, dynamic workload flexibility, and optimized power efficiency. Optimized for AI-driven applications, modern data centers, and cloud-native workloads, MSI's new platforms help lower total cost of ownership (TCO) while maximizing infrastructure efficiency and resource optimization.

"As data-driven transformation accelerates across industries, businesses require solutions that not only deliver performance but also enable sustainable growth and operational agility," said Danny Hsu, General Manager of MSI's Enterprise Platform Solutions. "Our Intel Xeon 6 processor-based servers are designed to support this shift by offering high-core scalability, energy-efficient performance, and dynamic workload optimization. These capabilities empower organizations to maximize compute density, streamline their digital ecosystems, and respond to evolving market demands with greater speed and efficiency."

Intel Xeon 6 Processors with E-Cores Achieve Rapid Ecosystem Adoption by Industry-Leading 5G Core Solution Partners

Intel today showcased how Intel Xeon 6 processors with Efficient-cores (E-cores) have dramatically accelerated time-to-market adoption for the company's solutions in collaboration with the ecosystem. Since product introduction in June 2024, 5G core solution partners have independently validated a 3.2x performance improvement, a 3.8x performance per watt increase and, in collaboration with the Intel Infrastructure Power Manager launched at MWC 2024, a 60% reduction in run-time power consumption.

"As 5G core networks continue to build out using Intel Xeon processors, which are deployed in the vast majority of 5G networks worldwide, infrastructure efficiency, power savings and uncompromised performance are essential criteria for communication service providers (CoSPs). Intel is pleased to announce that our 5G core solution partners have accelerated the adoption of Intel Xeon 6 with E-cores and are immediately passing along these benefits to their customers. In addition, with Intel Infrastructure Power Manager, our partners have a run-time software solution that is showing tremendous progress in reducing server power in CoSP environments on existing and new infrastructure." -Alex Quach, Intel vice president and general manager of Wireline and Core Network Division

Intel Unveils Leadership AI and Networking Solutions with Xeon 6 Processors

As enterprises modernize infrastructure to meet the demands of next-gen workloads like AI, high-performing and efficient compute is essential across the full spectrum - from data centers to networks, edge and even the PC. To address these challenges, Intel today launched its Xeon 6 processors with Performance-cores (P-cores), providing industry-leading performance for the broadest set of data center and network infrastructure workloads and best-in-class efficiency to create an unmatched server consolidation opportunity.

"We are intensely focused on bringing cutting-edge leadership products to market that solve our customers' greatest challenges and help drive the growth of their business," said Michelle Johnston Holthaus, interim co-CEO of Intel and CEO of Intel Products. "The Xeon 6 family delivers the industry's best CPU for AI and groundbreaking features for networking, while simultaneously driving efficiency and bringing down the total cost of ownership."

MiTAC Computing Announces Intel Xeon 6 CPU-powered Next-gen AI & HPC Server Series

MiTAC Computing Technology Corporation, a leading server platform design manufacturer and a subsidiary of MiTAC Holdings Corporation, today announced the launch of its latest server systems and motherboards powered by the latest Intel Xeon 6 processors with P-cores. These industry-leading processors are designed for compute-intensive workloads, providing up to twice the performance for the widest range of workloads, including AI and HPC.

Driving Innovation in AI and High-Performance Computing
"For over a decade, MiTAC Computing has collaborated with Intel to push the boundaries of server technology, delivering cutting-edge solutions optimized for AI and high-performance computing (HPC)," said Rick Hwang, President of MiTAC Computing Technology Corporation. "With the integration of the latest Intel Xeon 6 P-core processors our servers now unlock groundbreaking AI acceleration, boost computational efficiency, and scale cloud operations to new heights. These innovations provide our customers with a competitive edge, empowering them to tackle demanding workloads with superior empower our customers with a competitive edge through superior performance and an optimized total cost of ownership."

GIGABYTE Showcases Comprehensive AI Computing Portfolio at MWC 2025

GIGABYTE, a global leader in computing innovation and technology, will showcase its full-spectrum AI computing solutions that bridge development to deployment at MWC 2025, taking place from March 3-6.

"AI+" and "Enterprise-Reinvented" are two of the themes for MWC. As enterprises accelerate their digital transformation and intelligent upgrades, the transition of AI applications from experimental development to democratized commercial deployment has become a critical turning point in the industry. Continuing its "ACCEVOLUTION" initiative, GIGABYTE provides the comprehensive infrastructure products and solutions spanning cloud-based supercomputing centers to edge computing terminals, aiming to accelerate the next evolution and empower industries to scale AI applications efficiently.