News Posts matching #Enterprise

Enterprise SSD Market Sees Strong 3Q24 Growth, Revenue Soars 28.6% on Surging Demand for High-Capacity Models

TrendForce's latest investigations found that the enterprise SSD market experienced significant growth in 3Q24, driven by robust demand from AI-related applications. Prices surged as suppliers struggled to keep pace with market needs, pushing overall industry revenue up by an impressive 28.6% QoQ. Demand for high-capacity models was especially strong, fueled by the arrival of NVIDIA's H-series products and sustained orders for AI training servers. As a result, the total procurement volume for enterprise SSDs rose 15% compared to the previous quarter.

Looking ahead to 4Q24, TrendForce forecasts a slowdown in enterprise SSD revenue as procurement demand begins to cool. Total procurement volume is expected to dip, with the peak buying period now past and server OEM orders revised slightly downward. As shipment volume declines, overall industry revenue is also projected to decrease in the fourth quarter.

KIOXIA NVMe SSD Cryptographic Module Achieves FIPS 140-3 Level 2 Validation

KIOXIA America, Inc. today announced that the cryptographic module used in KIOXIA CM7 Series PCIe 5.0 NVMe Enterprise SSDs has been validated to meet Federal Information Processing Standard (FIPS) 140-3, Level 2 for cryptographic modules.

The FIPS 140-3 standard specifies the security requirements for cryptographic modules under the Cryptographic Module Validation Program administered by the National Institute of Standards and Technology (NIST), and is used as a security benchmark when federal agencies procure validated IT equipment. Companies and federal agencies may prefer, or may now be required, to deploy equipment that meets the newer, more stringent government standard, a requirement that SSDs validated to FIPS 140-3 satisfy. Compared to the previous FIPS 140-2 requirements, FIPS 140-3 sets a higher bar for SSDs, including a stronger authentication method and updated implementation guidance.

IonQ Unveils Its First Quantum Computer in Europe, Online Now at a Record #AQ36

IonQ, a leader in the quantum computing and networking industry, today announced the delivery of IonQ Forte Enterprise to its first European Innovation Center at the uptownBasel campus in Arlesheim, Switzerland. Achieved in partnership with QuantumBasel, this major milestone marks the first datacenter-ready quantum computer IonQ has delivered that will operate outside the United States and the first quantum system for commercial use in Switzerland.

Forte Enterprise is now online and serving compute jobs, performing at a record algorithmic qubit count of #AQ36, significantly more powerful than the promised #AQ35. With each additional #AQ, the useful computational space for running quantum algorithms doubles; a system with #AQ36 can consider more than 68 billion different possibilities simultaneously. With this milestone, IonQ once again leads the industry in delivering production-ready systems to customers.
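
The doubling claim is simple powers-of-two arithmetic; the minimal sketch below assumes #AQ n corresponds to a 2^n-state computational space, which is how the "more than 68 billion possibilities" figure for #AQ36 is obtained (2^36 ≈ 68.7 billion).

```python
# Minimal arithmetic sketch: relate the algorithmic qubit count (#AQ) to the
# size of the computational state space, assuming #AQ n corresponds to 2**n
# basis states (the "possibilities" quoted in the announcement).
def state_space(aq: int) -> int:
    return 2 ** aq

for aq in (35, 36):
    print(f"#AQ{aq}: {state_space(aq):,} basis states")
# #AQ36 -> 68,719,476,736 (~68.7 billion); each additional #AQ doubles the space.
```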

Advantech Unveils AMD-Powered Network Appliances

To address the growing demand for agile embedded networking, intelligent edge, and secure communication, Advantech, a leading provider of network security solutions, has launched a new series of x86 network appliances: FWA-6183, FWA-5082, and FWA-1081. Powered by AMD EPYC 9004 and 8004 series and AMD Ryzen V3000 series processors, the lineup delivers advanced computing performance, high bandwidth, and lower TDP. These appliances are optimized for a wide range of network security workloads, from SMEs to large-scale enterprises, including edge computing, WAN optimization, DPI/IPS/IDS, SD-WAN/SASE, and NGFW/UTM.

Key AMD Embedded Network Advantages
AMD EPYC & Ryzen Series Processors:
  • Breakthrough Performance: up to 96 cores/192 threads, ensuring scalable processing power
  • Expansive I/O Options: PCIe Gen 5 with up to 128 lanes for high bandwidth and maximum I/O flexibility
  • Optimized Power Efficiency

Aetina Debuts at SC24 With NVIDIA MGX Server for Enterprise Edge AI

Aetina, a subsidiary of the Innodisk Group and an expert in edge AI solutions, is pleased to announce its debut at Supercomputing (SC24) in Atlanta, Georgia, showcasing the innovative SuperEdge NVIDIA MGX short-depth edge AI server, AEX-2UA1. By integrating an enterprise-class on-premises large language model (LLM) with the advanced retrieval-augmented generation (RAG) technique, the Aetina NVIDIA MGX short-depth server demonstrates exceptional enterprise edge AI performance, setting a new benchmark in edge AI innovation. The server is powered by the latest Intel Xeon 6 processor and dual high-end double-width NVIDIA GPUs, delivering ultimate AI computing power in a compact 2U form factor and accelerating generative AI at the edge.

The SuperEdge NVIDIA MGX server expands Aetina's product portfolio from specialized edge devices to comprehensive AI server solutions, propelling a key milestone in Innodisk Group's AI roadmap, from sensors and storage to AI software, computing platforms, and now AI edge servers.

Microsoft is Introducing a $349 Mini PC That Streams Windows 11 from the Cloud

Microsoft is introducing Windows 365 Link, a compact cloud PC for business users. The device costs $349 and measures just 120 x 120 x 30 mm, making it smaller than Apple's Mac mini. The compact size is enabled by a fanless cooling design and the absence of local storage. The small computer still offers a variety of connectivity options, including one USB-C and three USB-A ports, HDMI, DisplayPort, and Ethernet, supports two 4K monitors, and provides Bluetooth 5.3 and Wi-Fi 6E wireless connectivity. Microsoft has not yet revealed the specific hardware details.

It requires Windows 365 with Microsoft Intune and Entra ID, and it works with Windows 365 Frontline, Enterprise, and Business editions. As with other cloud-based solutions, Microsoft will lock down some of the security options: "features like Secure Boot, the dedicated Trusted Platform Module, Hypervisor Code Integrity, BitLocker encryption, and the Microsoft Defender for Endpoint detection and response sensor can't be turned off, further helping to secure the device." Microsoft plans to launch the device in April 2025, with early previews in the US, Canada, UK, Germany, Japan, Australia, and New Zealand. Businesses interested in testing the device can contact their Microsoft account team before December 15, 2024, to join the preview program.

QSAN Technology Expands Its XCube Enterprise NAS Series

QSAN Technology Inc., a leading provider of innovative storage solutions, is thrilled to announce the latest update to its enterprise NAS product line, featuring the enterprise unified storage manager QSM 4, designed to set new standards for performance and management. The XN8100R and XN5100R series (single-controller models) now deliver significantly enhanced capabilities for today's data-intensive business environments.

Enhanced Performance for Greater Efficiency with Enterprise NAS
QSAN's XCubeNAS models with the newly developed QFS (QSAN File System) demonstrate up to 10x faster data access than QSM 3 models. The high-core models of the XN8100R series work with SD4 NVMe SSD to further optimize latency and IOPS, delivering exceptional responsiveness and high-speed data access.

HighPoint Releases the RocketAIC 6542AWW With 500 TB of RAID Storage Capacity in a Compact Chassis

HighPoint's RocketAIC 6542AWW sets a new milestone for turnkey NVMe storage. Despite the external RAID solution's tidy dimensions (it stands only 4.84 inches tall, 8.27 inches deep, and 9.25 inches long), it is equipped with eight of Solidigm's D5-P5336 NVMe SSDs and provides an astonishing 491.52 TB of enterprise storage capable of delivering 28 GB/s of real-world transfer speed, making it ideal for edge applications that demand fast, high-density storage with a small hardware footprint.
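
The capacity figure is straightforward arithmetic; here is a minimal sketch, assuming eight drives of 61.44 TB each (the per-drive capacity implied by the 491.52 TB total) and the quoted 28 GB/s real-world throughput.

```python
# Back-of-the-envelope check of the RocketAIC 6542AWW capacity and fill time,
# assuming eight 61.44 TB Solidigm D5-P5336 drives (implied by the 491.52 TB total)
# and the ~28 GB/s real-world transfer speed quoted by HighPoint.
DRIVES = 8
CAPACITY_TB = 61.44          # per drive, decimal terabytes
THROUGHPUT_GBS = 28          # GB/s, real-world figure

total_tb = DRIVES * CAPACITY_TB
fill_hours = (total_tb * 1000) / THROUGHPUT_GBS / 3600

print(f"Total capacity: {total_tb:.2f} TB")                    # 491.52 TB
print(f"Time to write the full array: ~{fill_hours:.1f} h")    # ~4.9 h
```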

The compact external storage chassis' integrated power supply and powerful cooling system allow the NVMe media to be completely isolated from the host hardware platform. Not only does this free up interior space in the host server or workstation, it can also significantly improve reliability and efficiency by moving the SSDs' power draw off the host and ensuring that the waste heat they generate never enters the host's computing environment.

HPE Expands Direct Liquid-Cooled Supercomputing Solutions With Two AI Systems for Service Providers and Large Enterprises

Today, Hewlett Packard Enterprise announces its new high performance computing (HPC) and artificial intelligence (AI) infrastructure portfolio that includes leadership-class HPE Cray Supercomputing EX solutions and two systems optimized for large language model (LLM) training, natural language processing (NLP) and multi-modal model training. The new supercomputing solutions are designed to help global customers fast-track scientific research and invention.

"Service providers and nations investing in sovereign AI initiatives are increasingly turning to high-performance computing as the critical backbone enabling large-scale AI training that accelerates discovery and innovation," said Trish Damkroger, senior vice president and general manager, HPC & AI Infrastructure Solutions at HPE. "Our customers turn to us to fast-track their AI system deployment to realize value faster and more efficiently by leveraging our world-leading HPC solutions and decades of experience in delivering, deploying and servicing fully-integrated systems."

Innodisk Introduces E1.S Edge Server SSD for Edge Computing and AI Applications

Innodisk, a leading global AI solution provider, has introduced its new E1.S SSD, which is specifically designed to meet the demands of growing edge computing applications. The E1.S edge server SSD offers exceptional performance, reliability, and thermal management capabilities to address the critical needs of modern data-intensive environments and bridge the gap between traditional industrial SSDs and data center SSDs.

As AI and 5G technologies rapidly evolve, the demands on data processing and storage continue to grow. The E1.S SSD addresses the challenges of balancing heat dissipation and performance, which has become a major concern for today's SSDs. Traditional industrial and data center SSDs often struggle to meet the needs of edge applications. Innodisk's E1.S eliminates these bottlenecks with its Enterprise and Data Center Standard Form Factor (EDSFF) design and offers a superior alternative to U.2 and M.2 SSDs.

DapuStor Officially Launches High-Capacity QLC eSSDs up to 64TB

As AI accelerates data expansion, enterprises face increasing challenges in managing large volumes of data effectively. Tiered storage has emerged as the preferred approach for balancing performance and cost. Solid-state drives (SSDs), with their fast read and write speeds, low latency, and high power efficiency, are becoming the dominant storage choice for data centers and AI servers. Among SSD technologies, QLC SSDs offer unique advantages in cost and storage density, making them particularly well suited to read-intensive AI applications. As a result, high-capacity SSDs, such as 32 TB and 64 TB models, are gaining traction as a new storage option in the market.

Read-Intensive Applications: Mainstream Enterprise SSD Use Cases
According to the latest research from FI (Forward Insights), up to 91% of current PCIe SSD deployments serve applications with a DWPD rating of less than 1, and that share is expected to reach 99% by 2028. This shift underscores the increasing prevalence of read-intensive applications in data centers, a space where QLC SSDs excel.
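
For context, DWPD (drive writes per day) expresses a drive's rated endurance (terabytes written, TBW) relative to its capacity over the warranty period. The sketch below uses the standard formula with hypothetical drive figures that are not taken from the FI/TrendForce data.

```python
# Illustrative DWPD (drive writes per day) calculation:
# DWPD = rated TBW / (capacity in TB * warranty period in days).
# The example numbers below are hypothetical, not from the article.
def dwpd(tbw: float, capacity_tb: float, warranty_years: float = 5) -> float:
    return tbw / (capacity_tb * warranty_years * 365)

# A hypothetical 15.36 TB read-intensive enterprise SSD rated for 28,000 TBW:
print(round(dwpd(tbw=28_000, capacity_tb=15.36), 2))  # ~1.0 DWPD
```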

Cisco Unveils Plug-and-Play AI Solutions Powered by NVIDIA H100 and H200 Tensor Core GPUs

Today, Cisco announced new additions to its data center infrastructure portfolio: an AI server family purpose-built for GPU-intensive AI workloads with NVIDIA accelerated computing, and AI PODs to simplify and de-risk AI infrastructure investment. They give organizations an adaptable and scalable path to AI, supported by Cisco's industry-leading networking capabilities.

"Enterprise customers are under pressure to deploy AI workloads, especially as we move toward agentic workflows and AI begins solving problems on its own," said Jeetu Patel, Chief Product Officer, Cisco. "Cisco innovations like AI PODs and the GPU server strengthen the security, compliance, and processing power of those workloads as customers navigate their AI journeys from inferencing to training."

Increased Production and Weakened Demand to Drive NAND Flash Prices Down 3-8% in 4Q24

TrendForce's latest findings reveal that NAND Flash products have been impacted by weaker-than-expected seasonal demand in the second half of 2024, leading to a decline in wafer contract prices in Q3. This downward trend is projected to deepen, with prices expected to drop by more than 10% in Q4.

Enterprise SSDs are the only segment likely to see modest price growth, supported by stable order momentum, with contract prices forecast to rise by 0-5% in Q4. PC SSDs and UFS, by contrast, face more cautious procurement, as weaker-than-expected sales of end products push buyers toward a conservative approach. As a result, TrendForce projects overall NAND Flash contract prices will decline by 3-8% in Q4.

Western Digital Enterprise SSDs Certified to Support NVIDIA GB200 NVL72 System for Compute-Intensive AI Environments

Western Digital Corp. today announced that its PCIe Gen 5 DC SN861 E.1S enterprise-class NVMe SSDs have been certified to support the NVIDIA GB200 NVL72 rack-scale system.

The rapid rise of AI, ML, and large language models (LLMs) is confronting companies with two opposing forces: data generation and consumption are accelerating, while organizations face pressure to derive value from that data quickly. Performance, scalability, and efficiency are essential for AI technology stacks as storage demands rise. Certified compatible with the GB200 NVL72 system, Western Digital's enterprise SSD addresses the AI market's growing need for high-speed accelerated computing combined with the low latency required by compute-intensive AI environments.

KIOXIA Introduces PCIe 5.0 NVMe EDSFF E1.S SSDs for Cloud and Hyperscale Environments

KIOXIA America, Inc. today announced the availability of its new KIOXIA XD8 Series PCIe 5.0 Enterprise and Datacenter Standard Form Factor (EDSFF) E1.S SSDs. The new drives are the third generation of E1.S SSDs from KIOXIA and are compliant with PCIe 5.0 (32 GT/s x 4) and NVMe 2.0 specifications, and support the Open Compute Project (OCP) Datacenter NVMe SSD v2.5 specification.

Designed for cloud and hyperscale environments, the KIOXIA XD8 Series meets the growing demand for higher performance, enhanced efficiency, and greater scalability in data centers. The new drives empower cloud providers and hyperscalers to optimize their infrastructure, delivering superior performance while maintaining operational efficiency.

ASRock Rack Unveils New Server Platforms Supporting AMD EPYC 9005 Series Processors and AMD Instinct MI325X Accelerators at AMD Advancing AI 2024

ASRock Rack Inc., a leading innovative server company, announced upgrades to its extensive lineup to support AMD EPYC 9005 Series processors. Among these updates is the introduction of the new 6U8M-TURIN2 GPU server. This advanced platform features AMD Instinct MI325X accelerators, specifically optimized for intensive enterprise AI applications, and will be showcased at AMD Advancing AI 2024.

ASRock Rack Introduces GPU Servers Powered by AMD EPYC 9005 Series Processors
AMD today revealed the 5th Generation AMD EPYC processors, offering a wide range of core counts (up to 192 cores), frequencies (up to 5 GHz), and expansive cache capacities. Select high-frequency processors, such as the AMD EPYC 9575F, are optimized for use as host CPUs in GPU-enabled systems. Additionally, the just-launched AMD Instinct MI325X accelerators feature substantial HBM3E memory and 6 TB/s of memory bandwidth, enabling quick access and efficient handling of large datasets and complex computations.

Supermicro Introduces New Versatile System Design for AI Delivering Optimization and Flexibility at the Edge

Super Micro Computer, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, announces the launch of a new, versatile, high-density infrastructure platform optimized for AI inferencing at the network edge. As companies seek to embrace complex large language models (LLMs) in their daily operations, they need new hardware capable of running inference on high volumes of data in edge locations with minimal latency. Supermicro's innovative system combines versatility, performance, and thermal efficiency to deliver up to 10 double-width GPUs in a single system capable of running in traditional air-cooled environments.

"Owing to the system's optimized thermal design, Supermicro can deliver all this performance in a high-density 3U 20 PCIe system with 256 cores that can be deployed in edge data centers," said Charles Liang, president and CEO of Supermicro. "As the AI market is growing exponentially, customers need a powerful, versatile solution to inference data to run LLM-based applications on-premises, close to where the data is generated. Our new 3U Edge AI system enables them to run innovative solutions with minimal latency."

Inflection AI and Intel Launch Enterprise AI System

Today, Inflection AI and Intel announced a collaboration to accelerate the adoption and impact of AI for enterprises and developers. Inflection AI is launching Inflection for Enterprise, an industry-first, enterprise-grade AI system powered by Intel Gaudi and Intel Tiber AI Cloud (AI Cloud), to deliver empathetic, conversational, employee-friendly AI capabilities and provide the control, customization, and scalability required for complex, large-scale deployments. The system is currently available through the AI Cloud and will ship to customers as an industry-first AI appliance powered by Gaudi 3 in Q1 2025.

"Through this strategic collaboration with Inflection AI, we are setting a new standard with AI solutions that deliver immediate, high-impact results. With support for open-source models, tools, and competitive performance per watt, Intel Gaudi 3 solutions make deploying GenAI accessible, affordable, and efficient for enterprises of any size." -Justin Hotard, Intel executive vice president and general manager of the Data Center and AI Group

Accenture to Train 30,000 of Its Employees on NVIDIA AI Full Stack

Accenture and NVIDIA today announced an expanded partnership, including Accenture's formation of a new NVIDIA Business Group, to help the world's enterprises rapidly scale their AI adoption. With generative AI demand driving $3 billion in Accenture bookings in its recently closed fiscal year, the new group will help clients lay the foundation for agentic AI functionality using Accenture's AI Refinery, which uses the full NVIDIA AI stack (including NVIDIA AI Foundry, NVIDIA AI Enterprise, and NVIDIA Omniverse) to advance areas such as process reinvention, AI-powered simulation, and sovereign AI.

Accenture AI Refinery will be available on all public and private cloud platforms and will integrate seamlessly with other Accenture Business Groups to accelerate AI across the SaaS and Cloud AI ecosystem.

Intel Clearwater Forest Pictured, First 18A Node High Volume Product

Yesterday, Intel launched its Xeon 6 family of server processors based on P-cores manufactured on the Intel 3 node. While early reviews seem promising, Intel is preparing a more advanced generation of processors that will make or break its product and foundry leadership. Codenamed "Clearwater Forest," these CPUs are expected to be the first high-volume production chips built on the Intel 18A node. We have pictures of the five-tile Clearwater Forest processor thanks to Tom's Hardware, which managed to photograph the complex design during the Enterprise Tech Tour event in Portland, Oregon. While the compute logic is built on 18A, the CPU uses Intel's 3-T process technology for the base die, marking that node's debut in this role. Compute dies are stacked on the base die, making the CPU's construction more complex but also more flexible.

The Foveros Direct 3D and EMIB technologies enable large-scale integration in a single package, achieving capabilities that previous monolithic single-chip designs could not deliver. Other technologies, such as RibbonFET and PowerVia, will also be present in Clearwater Forest. If everything continues to advance according to plan, we expect to see this next-generation CPU sometime next year. Crucially, if this CPU demonstrates that high-volume production on Intel 18A is viable, many Intel Foundry customers will be reassured that Intel can compete with TSMC and Samsung in producing high-performance silicon on advanced nodes at scale.

HP Showcases New AI Solutions at HP Imagine Event

Today at HP Imagine, HP Inc. (NYSE: HPQ) announced the company's newest innovations, including next-gen AI PCs, AI-enabled video conferencing solutions, and a scalable GPU performance sharing solution for AI developers - all designed to transform the future of work.

"HP is deeply ambitious in its commitment to reshape the way people work, fostering growth, nurturing creativity, and unleashing limitless innovation," said Alex Cho, President of Personal Systems at HP Inc. "We're bringing AI to life and delivering powerful new experiences through our next-gen AI PCs, advanced audio and video solutions, and innovative AI development platform."

GIGABYTE Introduces Accelerated Computing Servers With NVIDIA HGX H200

Giga Computing, a subsidiary of GIGABYTE and an industry leader in generative AI servers and advanced cooling technologies, today added two new 8-GPU baseboard servers to the GIGABYTE G593 series that support the NVIDIA HGX H200, a GPU memory platform ideal for large AI datasets, as well as scientific simulations and other memory-intensive workloads.

G593 Series for Scale-up Computing in AI & HPC
With dedicated real estate for cooling the GPUs, the G593 series delivers stable performance under demanding workloads in a compact 5U chassis with high airflow, for remarkable compute density. Maintaining the same power requirements as air-cooled NVIDIA HGX H100-based systems, the NVIDIA H200 Tensor Core GPU pairs optimally with the road-tested GIGABYTE G593 series server, which is purpose-built for an 8-GPU baseboard. To alleviate memory bandwidth constraints on AI workloads, including AI inference, the NVIDIA H200 GPU offers a sizable increase in memory capacity and bandwidth compared to the NVIDIA H100 Tensor Core GPU: up to 141 GB of HBM3e memory and 4.8 TB/s of memory bandwidth, translating to a 1.7x increase in memory capacity and a 1.4x increase in throughput.
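
Those ratios can be reproduced from the per-GPU figures; the minimal sketch below assumes the commonly cited H100 SXM specifications of 80 GB of HBM3 and roughly 3.35 TB/s, which are not stated in the article.

```python
# Reproduce the ~1.7x capacity and ~1.4x bandwidth figures for H200 vs. H100,
# assuming the commonly cited H100 SXM specs (80 GB HBM3, ~3.35 TB/s).
h100 = {"memory_gb": 80, "bandwidth_tbs": 3.35}
h200 = {"memory_gb": 141, "bandwidth_tbs": 4.8}

print(f"Capacity:  {h200['memory_gb'] / h100['memory_gb']:.2f}x")          # ~1.76x
print(f"Bandwidth: {h200['bandwidth_tbs'] / h100['bandwidth_tbs']:.2f}x")  # ~1.43x
```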

Eurocom Launches 14-inch and 16-inch Blitz Ultra Enterprise Class Laptop

Eurocom launches the Blitz Ultra, a 14-inch and 16-inch enterprise-class laptop loaded with security and manageability features while offering unmatched connectivity and expandability. It is powered by Intel's ultra-efficient Core Ultra "Meteor Lake" processors, a robust 73 Wh battery rated for 10+ hours, and a captivating 16:10 screen, so your productivity will soar to new heights. The heavy-duty yet lightweight 1.6 kg / 3.52 lbs design meets the MIL-STD-810H military standard. On the security side, the Blitz Ultra has a built-in TPM 2.0 data-encryption module, BIOS support for SED (Self-Encrypting Drives), and a Kensington lock slot, making it the ultimate enterprise-class laptop for security, connectivity, and expandability.

"The Eurocom Blitz Ultra is designed for government, military, security, healthcare and corporate professionals engaged in mission-critical computing and/or handling corporate IP assets and/or customer's sensitive data. It provides secure access via data encryption via TPM 2.0 module. Blitz Ultra has a Factory- installed Offline Permanent Disconnect Option. This is an optional upgrade to physically remove all connectivity and communications components to ensure a 100% offline system for maximum security of sensitive data and protection of intellectual property. " - Mark Bialic, Eurocom President.

GIGABYTE Rolls Out High Memory Capacity Servers Using AMD EPYC 9004 Processors

Giga Computing, a subsidiary of GIGABYTE and an industry leader in generative AI servers and advanced cooling technologies, today released two GIGABYTE R-series servers (R183-ZK0 and R283-ZK0) with enhanced performance and reliability for cloud services and data-intensive applications. These highly scalable memory capacity servers support AMD EPYC 9004 processors and are ready for select 5th generation AMD EPYC processors with up to 192 CPU cores.

These new GIGABYTE servers are the first and only ones on the market to support 48 memory DIMMs in a single two-CPU node. GIGABYTE's rich history in motherboard design and its engineering for signal integrity make this possible: a server with 12 TB of memory using 256 GB DDR5 3DS RDIMMs. To accommodate a 12-channel memory platform in a 2DPC configuration without compromise, a new memory layout was developed.
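
The DIMM count and total capacity follow directly from the platform's memory topology described above; a minimal sketch of the arithmetic:

```python
# Sanity check of the 48-DIMM / 12 TB figure:
# 2 sockets x 12 memory channels per socket x 2 DIMMs per channel (2DPC) = 48 DIMMs,
# and 48 x 256 GB 3DS RDIMMs = 12,288 GB (~12 TB).
SOCKETS = 2
CHANNELS_PER_SOCKET = 12
DIMMS_PER_CHANNEL = 2        # 2DPC
DIMM_SIZE_GB = 256

dimms = SOCKETS * CHANNELS_PER_SOCKET * DIMMS_PER_CHANNEL
total_gb = dimms * DIMM_SIZE_GB
print(f"{dimms} DIMMs, {total_gb} GB (~{total_gb / 1024:.1f} TB)")  # 48 DIMMs, 12288 GB (~12.0 TB)
```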

Supermicro Launches Plug-and-Play SuperCluster for NVIDIA Omniverse

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing a new addition to its SuperCluster portfolio of plug-and-play AI infrastructure solutions for the NVIDIA Omniverse platform, delivering high-performance, generative AI-enhanced 3D workflows at enterprise scale. This new SuperCluster features the latest Supermicro NVIDIA OVX systems and allows enterprises to easily scale as workloads increase.

"Supermicro has led the industry in developing GPU-optimized products, traditionally for 3D graphics and application acceleration, and now for AI," said Charles Liang, president and CEO of Supermicro. "With the rise of AI, enterprises are seeking computing infrastructure that combines all these capabilities into a single package. Supermicro's SuperCluster features fully interconnected 4U PCIe GPU NVIDIA-Certified Systems for NVIDIA Omniverse, with up to 256 NVIDIA L40S PCIe GPUs per scalable unit. The system helps deliver high performance across the Omniverse platform, including generative AI integrations. By developing this SuperCluster for Omniverse, we're not just offering a product; we're providing a gateway to the future of application development and innovation."