News Posts matching #Enterprise


Sharp NEC Launches Two MultiSync 27-inch Enterprise Displays

Sharp NEC Display Solutions Europe introduces the first desktop displays launched under the Sharp brand, underscoring its ongoing commitment to Quality, Service and Sustainability. Rooted in the MultiSync legacy of quality and reliability, the two new 27" monitor models, Sharp MultiSync EA272Q and Sharp MultiSync EA272U, are ideally suited to corporate offices, home-office scenarios, and control room applications.

For detail-rich applications, the high pixel density of the Sharp MultiSync EA272Q (Quad HD) and the Sharp MultiSync EA272U (Ultra HD), coupled with the 27-inch active screen area, results in an expansive digital canvas for the most productive personal workspaces. Users can extend their visual command even further with a near-uninterrupted multi-screen configuration, thanks to the ultra-narrow bezel and DisplayPort OUT daisy-chaining. Featuring powerful 90 W USB Type-C connectivity and embedded LAN, the desktop models become fully functioning docking-hub solutions. With just a single cable connecting the notebook/PC to the display while also powering the device, the workspace remains clear of cable clutter and e-waste is minimised. The displays' sustainable functionality continues with energy savings of up to 30%, using sensor technology to automatically reduce brightness or power down as appropriate to the usage scenario.

AMD to Acquire Silo AI to Expand Enterprise AI Solutions Globally

AMD today announced the signing of a definitive agreement to acquire Silo AI, the largest private AI lab in Europe, in an all-cash transaction valued at approximately $665 million. The agreement represents another significant step in the company's strategy to deliver end-to-end AI solutions based on open standards and in strong partnership with the global AI ecosystem. The Silo AI team consists of world-class AI scientists and engineers with extensive experience developing tailored AI models, platforms and solutions for leading enterprises spanning cloud, embedded and endpoint computing markets.

Silo AI CEO and co-founder Peter Sarlin will continue to lead the Silo AI team as part of the AMD Artificial Intelligence Group, reporting to AMD senior vice president Vamsi Boppana. The acquisition is expected to close in the second half of 2024.

Q3 Contract Prices of NAND Flash Products Constrained by Increased Production and Lower End-User Demand; Estimated to Rise by 5-10%

TrendForce reports that while the enterprise sector continues to invest in server infrastructure—especially with the rising adoption of AI driving demand for enterprise SSDs—the consumer electronics market remains lackluster. This, combined with NAND suppliers aggressively ramping up production in the second half of the year, is expected to push the NAND Flash sufficiency ratio up to 2.3% in the third quarter, curbing the blended price hike to a modest 5-10%.

This year, NAND Flash prices saw a robust rebound as manufacturers kept production in check during the first half, helping them regain profitability. However, with a noticeable ramp-up in production and sluggish retail demand, wafer spot prices have dropped significantly. Some wafer prices are now over 20% below contract prices, casting doubts on the sustainability of future price hikes.
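To make the pricing relationships concrete, here is a minimal arithmetic sketch using hypothetical figures (the supply, demand, and price values are illustrative, not TrendForce data, and TrendForce's exact sufficiency-ratio definition may differ):

```python
# Hypothetical figures for illustration only; not TrendForce data.
supply_bits = 102.3   # NAND bits available in the quarter (arbitrary units)
demand_bits = 100.0   # NAND bits demanded in the quarter (same units)

# One common way to express a sufficiency ratio: oversupply relative to demand.
sufficiency_ratio = (supply_bits - demand_bits) / demand_bits * 100
print(f"Sufficiency ratio: {sufficiency_ratio:.1f}%")        # ~2.3% oversupply

contract_price = 10.00   # hypothetical contract price per wafer
spot_price = 7.90        # hypothetical spot price per wafer

# Spot sitting "over 20% below contract" corresponds to this discount.
spot_discount = (contract_price - spot_price) / contract_price * 100
print(f"Spot discount vs. contract: {spot_discount:.0f}%")   # ~21%
```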

Western Digital Introduces New Enterprise AI Storage Solutions and AI Data Cycle Framework

Fueling the next wave of AI innovation, Western Digital today introduced a six-stage AI Data Cycle framework that defines the optimal storage mix for AI workloads at scale. This framework will help customers plan and develop advanced storage infrastructures to maximize their AI investments, improve efficiency, and reduce the total cost of ownership (TCO) of their AI workflows.

AI models operate in a continuous loop of data consumption and generation - processing text, images, audio and video among other data types while simultaneously producing new unique data. As AI technologies become more advanced, data storage systems must deliver the capacity and performance to support the computational loads and speeds required for large, sophisticated models while managing immense volumes of data. Western Digital has strategically aligned its Flash and HDD product and technology roadmaps to the storage requirements of each critical stage of the cycle, and today introduced a new industry-leading, high-performance PCIe Gen 5 SSD to support AI training and inference; a high-capacity 64 TB SSD for fast AI data lakes; and the world's highest capacity ePMR, UltraSMR 32 TB HDD for cost-effective storage at scale.

"There's no doubt that Generative AI is the next transformational technology, and storage is a critical enabler. The implications for storage are expected to be significant as the role of storage, and access to data, influences the speed, efficiency and accuracy of AI Models, especially as larger and higher-quality data sets become more prevalent," said Ed Burns, Research Director at IDC. "As a leader in Flash and HDD, Western Digital has an opportunity to benefit in this growing AI landscape with its strong market position and broad portfolio, which meets a variety of needs within the different AI data cycle stages."

MiTAC, TYAN Unveil Intel Xeon 6 Servers for AI, HPC, Cloud, and Enterprise at Computex 2024

MiTAC Computing Technology and its server brand TYAN, the leading manufacturer in server platform design worldwide, unveil their new server systems and motherboards optimized for today's AI, HPC, cloud, and enterprise workloads at COMPUTEX 2024, Booth #M1120 in Taipei, Taiwan, from June 4 to June 7. Harnessing the power of the latest Intel Xeon 6 processors and 4th and 5th Gen Intel Xeon Scalable processors, these solutions deliver cutting-edge performance.

"For over a decade, MiTAC has worked with Intel at the forefront of server technology innovation, consistently delivering cutting-edge solutions tailored for AI and high-performance computing (HPC). The integration of Intel's latest Xeon 6 processors into our MiTAC and TYAN server platforms transforms computational capabilities, significantly enhancing AI performance, boosting efficiency, and scaling cloud operations. These advancements empower our customers with a competitive edge through superior performance and optimized total cost of ownership." said Rick Hwang, President of MiTAC Computing Technology Corporation.

MSI New Server Platforms Drive Cloud-Scale Efficiency with Intel Xeon 6 Processor

MSI, a leading global server provider, today introduced its latest Intel Xeon 6 processor-based server platforms at Computex 2024, booth #M0806 in Taipei, Taiwan from June 4-7. These new products showcase a new level of performance and efficiency, tailored to meet the diverse demands of cloud-native and hyperscale workloads.

"Data center infrastructure requirements are diversifying, with certain workloads requiring performance measures beyond just cores and watts," said Danny Hsu, General Manager of Enterprise Platform Solutions. "To better cater to today's evolving compute demands, data centers must achieve consistent performance even during peak loads. MSI's new server platforms, built around Intel Xeon 6 processor, drive density and power efficiency, making them ideal for cloud-scale workloads."

GIGABYTE Joins COMPUTEX to Unveil Energy Efficiency and AI Acceleration Solutions

Giga Computing, a subsidiary of GIGABYTE and an industry leader in AI servers and green computing, today announced its participation in COMPUTEX, where it is unveiling solutions that tackle complex AI workloads at scale, as well as advanced cooling infrastructure that leads to greater energy efficiency. Additionally, to support innovations in accelerated computing and generative AI, GIGABYTE will have NVIDIA GB200 NVL72 systems available in Q1 2025. Discussions around GIGABYTE products will be held in booth #K0116 in Hall 1 at the Taipei Nangang Exhibition Center. As an NVIDIA-Certified System provider, GIGABYTE servers also support NVIDIA NIM inference microservices, part of the NVIDIA AI Enterprise software platform.

Redefining AI Servers and Future Data Centers
All new and upcoming CPU and accelerated computing technologies are being showcased at the GIGABYTE booth alongside GIGA POD, a rack-scale AI solution by GIGABYTE. The flexibility of GIGA POD is demonstrated with the latest solutions such as the NVIDIA HGX B100, NVIDIA HGX H200, NVIDIA GH200 Grace Hopper Superchip, and other OAM baseboard GPU systems. As a turnkey solution, GIGA POD is designed to support baseboard accelerators at scale with switches, networking, compute nodes, and more, including support for NVIDIA Spectrum-X to deliver powerful networking capabilities for generative AI infrastructures.

AMD, Broadcom, Cisco, Google, HPE, Intel, Meta and Microsoft Form Ultra Accelerator Link (UALink) Promoter Group to Combat NVIDIA NVLink

AMD, Broadcom, Cisco, Google, Hewlett Packard Enterprise (HPE), Intel, Meta and Microsoft today announced they have aligned to develop a new industry standard dedicated to advancing high-speed, low-latency communication for linking scale-up AI systems in data centers.

This initial group will define and establish an open industry standard, called Ultra Accelerator Link (UALink), that will enable AI accelerators to communicate more effectively. By creating an interconnect based upon open standards, UALink will enable system OEMs, IT professionals and system integrators to create a pathway for easier integration, greater flexibility and scalability of their AI-connected data centers.

Micron First to Achieve Qualification Sample Milestone to Accelerate Ecosystem Adoption of CXL 2.0 Memory

Micron Technology, a leader in innovative data center solutions, today announced it has achieved its qualification sample milestone for the Micron CZ120 memory expansion modules using Compute Express Link (CXL). Micron is the first in the industry to achieve this milestone, which accelerates the adoption of CXL solutions within the data center to tackle the growing memory challenges stemming from existing data-intensive workloads and emerging artificial intelligence (AI) and machine learning (ML) workloads.

Using a new and emerging CXL standard, the CZ120 required substantial hardware testing for reliability, quality and performance across CPU providers and OEMs, along with comprehensive software testing for compatibility and compliance with OS and hypervisor vendors. This achievement reflects the collaboration and commitment across the data center ecosystem to validate the advantages of CXL memory. By testing the combined products for interoperability and compatibility across hardware and software, the Micron CZ120 memory expansion modules satisfy the rigorous standards for reliability, quality and performance required by customers' data centers.
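CXL memory expanders such as the CZ120 typically surface to a Linux host as memory-only (CPU-less) NUMA nodes once onlined. Below is a minimal sketch, assuming a Linux system with the usual sysfs layout, of how an administrator might spot such nodes during interoperability testing; the paths and the interpretation reflect general Linux behaviour, not Micron-specific documentation:

```python
# Minimal sketch: list NUMA nodes that have memory but no local CPUs,
# which is how CXL memory expansion typically appears on Linux.
# Assumes the standard sysfs layout; adapt paths for your distribution.
from pathlib import Path

for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    cpulist = (node / "cpulist").read_text().strip()
    meminfo = (node / "meminfo").read_text()
    # The MemTotal line looks like: "Node 1 MemTotal: 264241152 kB"
    mem_kb = int(meminfo.split("MemTotal:")[1].split("kB")[0])
    kind = "CPU-less (possibly CXL) node" if cpulist == "" else "regular node"
    print(f"{node.name}: {mem_kb / 1024 / 1024:.1f} GiB, cpus='{cpulist}' -> {kind}")
```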

GIGABYTE Announces Support for AMD EPYC 4004 Series Processors

Giga Computing, a subsidiary of GIGABYTE Technology and an industry leader in AI servers, server motherboards, and workstations, today announced its support of AMD EPYC 4004 Series processors on AM5 socket servers and server motherboards for entry-level enterprise customers. Support requires a BIOS update, which will come pre-installed on products shipping in the near future.

The new AMD EPYC 4004 platform, built on the AM5 socket, delivers enterprise-grade features that allow small businesses and cloud services to run dependable daily operations with minimal downtime. For reliability and manageability, the platform has been validated for compatibility with the Ubuntu, RHEL, and Windows Server operating systems, allowing IT administrators to better control and monitor systems and to protect businesses against cyberthreats.

Phison Announces Pascari Brand of Enterprise SSDs, Debuts X200 Series Across Key Form-factors

Phison is arguably the most popular brand for SSD controllers in the client segment, but it is turning more of its attention to the vast enterprise segment. The company had been making first-party enterprise SSDs under its main marque, but decided that the lineup needed its own brand that enterprise customers could more easily distinguish from the controller ASIC business. We hence have Pascari and Imagin. Pascari is an entire product family of fully built enterprise SSDs from Phison; the company's existing first-party drives under the main brand will probably migrate to the Pascari catalog. Imagin, on the other hand, is a design service for large cloud and data-center customers, allowing them to develop bespoke tiered storage solutions at scale.

The Pascari line of enterprise SSDs is designed completely in-house by Phison, featuring the company's latest controllers, firmware, PCBs, and PMICs, with on-device power-failure protection on select products. The third-party components are the NAND flash and DRAM chips, both of which have been thoroughly evaluated by Phison for the best performance, endurance, and reliability at its enterprise SSD design facility in Broomfield, Colorado. Phison already has a constellation of industry partners and suppliers, and the company's drives even power space missions; the Pascari brand better differentiates the fully built SSD lineup from the controller ASIC business. Pascari makes its debut with the X200 series of high-performance SSDs for data with high access heat. The drives leverage Phison's latest PCIe Gen 5 controller technology and carefully optimized memory components, and are available in all contemporary server storage form factors.

Micron First to Ship Critical Memory for AI Data Centers

Micron Technology, Inc. (Nasdaq: MU) today announced it is leading the industry by validating and shipping its high-capacity monolithic 32 Gb DRAM die-based 128 GB DDR5 RDIMM memory in speeds up to 5,600 MT/s on all leading server platforms. Powered by Micron's industry-leading 1β (1-beta) technology, the 128 GB DDR5 RDIMM memory delivers more than 45% improved bit density, up to 22% improved energy efficiency and up to 16% lower latency over competitive 3DS through-silicon via (TSV) products.

Micron's collaboration with industry leaders and customers has yielded broad adoption of these new high-performance, large-capacity modules across high-volume server CPUs. These high-speed memory modules were engineered to meet the performance needs of a wide range of mission-critical applications in data centers, including artificial intelligence (AI) and machine learning (ML), high-performance computing (HPC), in-memory databases (IMDBs) and efficient processing for multithreaded, multicore count general compute workloads. Micron's 128 GB DDR5 RDIMM memory will be supported by a robust ecosystem including AMD, Hewlett Packard Enterprise (HPE), Intel, Supermicro, along with many others.

AI Demand Drives Rapid Growth in QLC Enterprise SSD Shipments for 2024

North American customers are increasing their orders for storage products as energy efficiency becomes a key priority for AI inference servers. This, in turn, is driving up demand for QLC enterprise SSDs. Currently, only Solidigm and Samsung have certified QLC products, with Solidigm actively promoting its QLC products and standing to benefit the most from this surge in demand. TrendForce predicts shipments of QLC enterprise SSD bits to reach 30 exabytes in 2024—increasing fourfold in volume from 2023.

TrendForce identifies two main reasons for the increasing use of QLC SSDs in AI applications: the products' fast read speeds and TCO advantages. AI inference servers primarily perform read operations and write data far less frequently than AI training servers. In comparison to HDDs, QLC enterprise SSDs offer superior read speeds and have capacities that have expanded up to 64 TB.
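A quick back-of-the-envelope reading of those figures: the 2024 forecast implies a 2023 baseline of roughly 7.5 EB, and a purely hypothetical all-64 TB drive mix illustrates what 30 EB means in unit terms:

```python
# Illustrative arithmetic from the figures above (not TrendForce calculations).
shipments_2024_eb = 30          # forecast QLC enterprise SSD bit shipments, EB
growth_factor = 4               # "fourfold" increase over 2023

implied_2023_eb = shipments_2024_eb / growth_factor
print(f"Implied 2023 shipments: {implied_2023_eb:.1f} EB")   # ~7.5 EB

# Hypothetical: if every bit shipped as a 64 TB drive (decimal units).
drive_tb = 64
drives = shipments_2024_eb * 1_000_000 / drive_tb
print(f"Equivalent 64 TB drives: {drives:,.0f}")             # ~468,750
```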

MSI Showcases GPU Servers for Media and Entertainment Industry at 2024 NAB Show

MSI, a leading global server provider, is showcasing its latest GPU servers powered by AMD processors at the 2024 NAB Show, Booth #SL9137 in the Las Vegas Convention Center from April 14-17. These servers are designed to meet the evolving needs of modern creative projects in the Media and Entertainment industry. "As AI continues to reshape the Media and Entertainment industry, it brings unprecedented speed and performance to tasks such as animation, visual effects, video editing, and rendering," said Danny Hsu, General Manager of Enterprise Platform Solutions. "MSI's GPU platforms empower content creators to execute every project with efficiency, speed, and uncompromising quality."

The G4101 is a 4U 4GPU server platform, purpose-built to unleash the full potential of creative professionals in the Media and Entertainment industry. It supports a single AMD EPYC 9004 Series processor equipped with a liquid cooling module, along with twelve DDR5 RDIMM slots. Additionally, it features four PCIe 5.0 x16 slots tailored for triple-slot graphics cards with coolers, ensuring increased airflow and sustained performance. With twelve front 2.5-inch U.2 NVMe/SATA drive bays, it offers high-speed and flexible storage options, catering to the diverse needs of AI workloads. The G4101 combines airflow spacing and closed-loop liquid cooling, making it the optimal thermal management solution for even the most demanding tasks.

PNY Expands Enterprise Portfolio With PICO's Enterprise VR Solutions

PNY Technologies, a global leader in memory and storage solutions, has expanded its enterprise portfolio through a strategic partnership with PICO, a VR solutions company with independent innovation and R&D capabilities. This partnership highlights PNY's dedication to providing state-of-the-art solutions across a spectrum of applications, from enhancing sports, entertainment, and video consumption to revolutionizing industries such as education, healthcare, and corporate training.

This partnership leverages PICO's XR platform solutions, enhancing PNY's enterprise offerings with unparalleled performance, scalability, and cost efficiency. This move reinforces PNY's position as a key player in the enterprise market.

Samsung Introduces "Petabyte SSD as a Service" at GTC 2024, "Petascale" Servers Showcased

Leaked Samsung PBSSD presentation material popped up online a couple of days prior to the kick-off day of NVIDIA's GTC 2024 conference (March 18)—reports (at the time) jumped on the potential introduction of a "petabyte (PB)-level SSD solution," alongside an enterprise subscription service for the US market. Tom's Hardware took the time to investigate this matter—in-person—on the showroom floor up in San Jose, California. It turns out that interpretations of pre-event information were slightly off—according to on-site investigations: "despite the name, PBSSD is not a petabyte-scale solid-state drive (Samsung's highest-capacity drive can store circa 240 TB), but rather a 'petascale' storage system that can scale-out all-flash storage capacity to petabytes."

Samsung showcased a Supermicro Petascale server design, but a lone unit is nowhere near capable of providing a petabyte of storage—the Tom's Hardware reporter found out that the demonstration model housed: "sixteen 15.36 TB SSDs, so for now the whole 1U unit can only pack up to 245.76 TB of 3D NAND storage (which is pretty far from a petabyte), so four of such units will be needed to store a petabyte of data." Company representatives also had another Supermicro product at their booth: "(an) H13 all-flash petascale system with CXL support that can house eight E3.S SSDs (with) four front-loading E3.S CXL bays for memory expansion."
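The capacity arithmetic behind that quote checks out, as the short sketch below shows (decimal terabytes and petabytes assumed):

```python
# Back-of-the-envelope check of the capacities quoted above (decimal units).
drive_tb = 15.36
drives_per_1u = 16

unit_tb = drive_tb * drives_per_1u
print(f"Per 1U unit: {unit_tb:.2f} TB")          # 245.76 TB

petabyte_tb = 1000.0
units_for_pb = petabyte_tb / unit_tb
print(f"Units per petabyte: {units_for_pb:.2f}") # ~4.07, i.e. roughly four units
```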

Dell Expands Generative AI Solutions Portfolio, Selects NVIDIA Blackwell GPUs

Dell Technologies is strengthening its collaboration with NVIDIA to help enterprises adopt AI technologies. By expanding the Dell Generative AI Solutions portfolio, including with the new Dell AI Factory with NVIDIA, organizations can accelerate integration of their data, AI tools and on-premises infrastructure to maximize their generative AI (GenAI) investments. "Our enterprise customers are looking for an easy way to implement AI solutions—that is exactly what Dell Technologies and NVIDIA are delivering," said Michael Dell, founder and CEO, Dell Technologies. "Through our combined efforts, organizations can seamlessly integrate data with their own use cases and streamline the development of customized GenAI models."

"AI factories are central to creating intelligence on an industrial scale," said Jensen Huang, founder and CEO, NVIDIA. "Together, NVIDIA and Dell are helping enterprises create AI factories to turn their proprietary data into powerful insights."

NVIDIA Announces New Switches Optimized for Trillion-Parameter GPU Computing and AI Infrastructure

NVIDIA today announced a new wave of networking switches, the X800 series, designed for massive-scale AI. The world's first networking platforms capable of end-to-end 800 Gb/s throughput, NVIDIA Quantum-X800 InfiniBand and NVIDIA Spectrum-X800 Ethernet push the boundaries of networking performance for computing and AI workloads. They feature software that further accelerates AI, cloud, data processing and HPC applications in every type of data center, including those that incorporate the newly released NVIDIA Blackwell architecture-based product lineup.

"NVIDIA Networking is central to the scalability of our AI supercomputing infrastructure," said Gilad Shainer, senior vice president of Networking at NVIDIA. "NVIDIA X800 switches are end-to-end networking platforms that enable us to achieve trillion-parameter-scale generative AI essential for new AI infrastructures."

NVIDIA Launches Blackwell-Powered DGX SuperPOD for Generative AI Supercomputing at Trillion-Parameter Scale

NVIDIA today announced its next-generation AI supercomputer—the NVIDIA DGX SuperPOD powered by NVIDIA GB200 Grace Blackwell Superchips—for processing trillion-parameter models with constant uptime for superscale generative AI training and inference workloads.

Featuring a new, highly efficient, liquid-cooled rack-scale architecture, the new DGX SuperPOD is built with NVIDIA DGX GB200 systems and provides 11.5 exaflops of AI supercomputing at FP4 precision and 240 terabytes of fast memory—scaling to more with additional racks.
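Those headline figures are consistent with a build of eight GB200 NVL72 racks; the per-rack numbers in the sketch below are assumptions drawn from NVIDIA's published NVL72 specifications rather than from this announcement:

```python
# Sanity check of the headline figures, assuming the SuperPOD is built from
# eight GB200 NVL72 racks (per-rack figures are assumptions based on
# NVIDIA's published NVL72 specifications, not stated in this announcement).
racks = 8
fp4_exaflops_per_rack = 1.44   # ~1,440 PFLOPS of FP4 compute per NVL72 rack
fast_memory_tb_per_rack = 30   # ~30 TB of HBM3e + LPDDR5X per rack

print(f"FP4 compute: {racks * fp4_exaflops_per_rack:.1f} exaflops")  # ~11.5
print(f"Fast memory: {racks * fast_memory_tb_per_rack} TB")          # 240
```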

Samsung Expected to Unveil Enterprise "PBSSD" Subscription Service at GTC

Samsung Electronics is all set to discuss the future of AI, alongside Jensen Huang, at NVIDIA's upcoming GTC 2024 conference. South Korean insiders have leaked the company's intentions, only days before the event's March 18 kickoff time. Their recently unveiled 36 GB HBM3E 12H DRAM product is expected to be the main focus of official presentations—additionally, a new storage subscription service is marked down for a possible live introduction. An overall "Redefining AI Infrastructure" presentation could include—according to BusinessKorea—a planned launch of: "petabyte (PB)-level SSD solution, dubbed 'PBSSD,' along with a subscription service in the US market within the second quarter (of 2024) to address the era of ultra-high-capacity data."

A Samsung statement—likely sourced from leaked material—summarized this business model: "the subscription service will help reduce initial investment costs in storage infrastructure for our customers and cut down on maintenance expenses." Under the agreed-upon conditions, customers are not required to purchase ultra-high-capacity SSD solutions outright: "enterprises using the service can flexibly utilize SSD storage without the need to build separate infrastructure, while simultaneously receiving various services from Samsung Electronics related to storage management, security, and upgrades." A special session—"The Value of Storage as a Service for AI/ML and Data Analysis"—is alleged to be on the company's GTC schedule.

Microsoft Z1000 960 GB NVMe SSD Leaked

According to TPU's SSD database, the Microsoft Z1000 M.2 22110 form factor solid-state drive launched back in 2020—last week, well-known hardware tipster yuuki_ans leaked a set of photos and specifications. Their March 7 social media post showcases close-ups of a potential enterprise product—sporting a CNEX Labs CNX-2670AA-CB2T controller, Toshiba BiCS4 96-layer eTLC NAND flash dies, and a 1 GB Micron MT40A1G8SA-075:E DDR4 RAM cache. The mysterious storage device appears to be an engineering sample (PV1.1)—an attached label lists a possible manufacturing date of May 18, 2020, but its part number and serial code are redacted in yuuki's set of photos. PCIe specifications are not disclosed, but experts reckon that a PCIe 4.0 interface is present (given the prototype's age).

The long form factor and presence of a CNEX Labs controller suggest that Microsoft has readied a 960 GB capacity model for usage in data servers. Unoccupied spaces on the board provide evidence of different configurations. Extra BGA mounting points could introduce another DRAM chip, and there is enough room for additional capacitors—via solder pads on both sides of the Z1000's PCB. It is speculated that 2 TB and 4 TB variants exist alongside the leaked 960 GB example—a "broad portfolio" of finalized Z1000 products could be in service right now, but the wider public is unlikely to see these items outside of Microsoft facilities.

NVIDIA Calls for Global Investment into Sovereign AI

Nations have long invested in domestic infrastructure to advance their economies, control their own data and take advantage of technology opportunities in areas such as transportation, communications, commerce, entertainment and healthcare. AI, the most important technology of our time, is turbocharging innovation across every facet of society. It's expected to generate trillions of dollars in economic dividends and productivity gains. Countries are investing in sovereign AI to develop and harness such benefits on their own. Sovereign AI refers to a nation's capabilities to produce artificial intelligence using its own infrastructure, data, workforce and business networks.

Why Sovereign AI Is Important
The global imperative for nations to invest in sovereign AI capabilities has grown since the rise of generative AI, which is reshaping markets, challenging governance models, inspiring new industries and transforming others—from gaming to biopharma. It's also rewriting the nature of work, as people in many fields start using AI-powered "copilots." Sovereign AI encompasses both physical and data infrastructures. The latter includes sovereign foundation models, such as large language models, developed by local teams and trained on local datasets to promote inclusiveness with specific dialects, cultures and practices. For example, speech AI models can help preserve, promote and revitalize indigenous languages. And LLMs aren't just for teaching AIs human languages, but for writing software code, protecting consumers from financial fraud, teaching robots physical skills and much more.

Enterprise SSD Industry Hits US$23.1 Billion in Revenue in 4Q23, Growth Trend to Continue into Q1 This Year

The third quarter of 2023 witnessed suppliers dramatically cutting production, which underpinned enterprise SSD prices. The fourth quarter saw a resurgence in contract prices, driven by robust buying activity and heightened demand from server brands and buoyed by optimistic capital expenditure forecasts for 2024. This, combined with increased demand from various end products entering their peak sales period and ongoing reductions in OEM NAND Flash inventories, resulted in some capacity shortages. Consequently, fourth-quarter enterprise SSD prices surged by over 15%. TrendForce highlights that this surge in demand and prices led to a 47.6% QoQ increase in enterprise SSD industry revenues in 4Q23, reaching approximately $23.1 billion.

The stage is set for continued fervor as we settle into the new year and momentum from server brand orders continues to heat up—particularly from Chinese clients. On the supply side, falling inventory levels and efforts to exit loss-making positions have prompted enterprise SSD prices to climb, with contract prices expected to increase by over 25%. This is anticipated to fuel 20% revenue growth in Q1.
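As a simple cross-check of the quarter-over-quarter figures (assuming the stated growth rates apply directly to the revenue totals):

```python
# Cross-checking the revenue figures above (billions of USD).
revenue_4q23 = 23.1            # reported enterprise SSD revenue, 4Q23
qoq_growth_4q23 = 0.476        # 47.6% QoQ growth

implied_3q23 = revenue_4q23 / (1 + qoq_growth_4q23)
print(f"Implied 3Q23 revenue: ${implied_3q23:.1f}B")     # ~$15.7B

q1_growth = 0.20               # forecast 20% revenue growth in 1Q24
projected_1q24 = revenue_4q23 * (1 + q1_growth)
print(f"Projected 1Q24 revenue: ${projected_1q24:.1f}B") # ~$27.7B
```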

IBM Intros AI-enhanced Data Resilience Solution - a Cyberattack Countermeasure

Cyberattacks are an existential risk, with 89% of organizations ranking ransomware as one of the top five threats to their viability, according to a November 2023 report from TechTarget's Enterprise Strategy Group, a leading analyst firm. And this is just one of many risks to corporate data—insider threats, data exfiltration, hardware failures, and natural disasters also pose significant danger. Moreover, as the just-released 2024 IBM X-Force Threat Intelligence Index states, as the generative AI market becomes more established, it could trigger the maturity of AI as an attack surface, mobilizing even further investment in new tools from cybercriminals. The report notes that enterprises should also recognize that their existing underlying infrastructure is a gateway to their AI models that doesn't require novel tactics from attackers to target.

To help clients counter these threats with earlier and more accurate detection, IBM is announcing new AI-enhanced versions of the IBM FlashCore Module technology available inside new IBM Storage FlashSystem products and a new version of IBM Storage Defender software to help organizations improve their ability to detect and respond to ransomware and other cyberattacks that threaten their data. The newly available fourth generation of FlashCore Module (FCM) technology enables artificial intelligence capabilities within the IBM Storage FlashSystem family. FCM works with Storage Defender to provide end-to-end data resilience across primary and secondary workloads, with AI-powered sensors designed for earlier notification of cyber threats to help enterprises recover faster.

Huawei Launches OptiXtrans DC908 Pro, a Next-gen DCI Platform for the AI Era

At MWC Barcelona 2024, Huawei launched the Huawei OptiXtrans DC908 Pro, a new platform for Data Center Interconnect (DCI) designed for the intelligent era. This innovative platform ensures the efficient, secure, and stable transmission of data between data centers (DCs), setting a new standard for DCI networks. As AI continues to proliferate across various service scenarios, the demand for foundation models has intensified, leading to an explosion in data volume. DCs are now operating at the petabyte level, and DCI networks have evolved from single-wavelength 100 Gbit/s to single-wavelength Tbit/s.

In response to the challenges posed by massive data transmission in the intelligent era, Huawei introduces the next-generation DCI platform, the Huawei OptiXtrans DC908 Pro. Compared to its predecessor, the DC908 Pro offers higher bandwidth, reliability, and intelligence.