News Posts matching #Enterprise

Return to Keyword Browsing

Industry's First-to-Market Supermicro NVIDIA HGX B200 Systems Demonstrate AI Performance Leadership

Super Micro Computer, Inc. (SMCI), a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, has announced first-to-market, industry-leading performance on several MLPerf Inference v5.0 benchmarks using its NVIDIA HGX B200 8-GPU systems. The 4U liquid-cooled and 10U air-cooled systems achieved the best performance in select benchmarks. Supermicro demonstrated more than three times the token-per-second (tokens/s) generation on the Llama2-70B and Llama3.1-405B benchmarks compared to H200 8-GPU systems. "Supermicro remains a leader in the AI industry, as evidenced by the first new benchmarks released by MLCommons in 2025," said Charles Liang, president and CEO of Supermicro. "Our building block architecture enables us to be first-to-market with a diverse range of systems optimized for various workloads. We continue to collaborate closely with NVIDIA to fine-tune our systems and secure a leadership position in AI workloads." Learn more about the new MLPerf v5.0 Inference benchmarks here.

Supermicro is the only system vendor to publish record MLPerf Inference performance (on select benchmarks) for both air-cooled and liquid-cooled NVIDIA HGX B200 8-GPU systems. Both systems were operational before the MLCommons benchmark start date, and Supermicro engineers optimized the systems and software, as allowed by the MLCommons rules, to showcase their performance. Within the operating margin, the air-cooled B200 system delivered the same level of performance as the liquid-cooled B200 system. Supermicro was already delivering these systems to customers while the benchmarks were being conducted. MLCommons requires that all results be reproducible, that the products be available, and that the results be auditable by other MLCommons members.

MangoBoost Achieves Record-Breaking MLPerf Inference v5.0 Results with AMD Instinct MI300X

MangoBoost, a provider of cutting-edge system solutions designed to maximize AI data center efficiency, has set a new industry benchmark with its latest MLPerf Inference v5.0 submission. The company's Mango LLMBoost AI Enterprise MLOps software has demonstrated unparalleled performance on AMD Instinct MI300X GPUs, delivering the highest-ever recorded results for Llama2-70B in the offline inference category. This milestone marks the first-ever multi-node MLPerf inference result on AMD Instinct MI300X GPUs. By harnessing the power of 32 MI300X GPUs across four server nodes, Mango LLMBoost has surpassed all previous MLPerf inference results, including those from competitors using NVIDIA H100 GPUs.

Unmatched Performance and Cost Efficiency
MangoBoost's MLPerf submission demonstrates a 24% performance advantage over the best-published MLPerf result from Juniper Networks utilizing 32 NVIDIA H100 GPUs. Mango LLMBoost achieved 103,182 tokens per second (TPS) in the offline scenario and 93,039 TPS in the server scenario on AMD MI300X GPUs, outperforming the previous best result of 82,749 TPS on NVIDIA H100 GPUs. In addition to superior performance, Mango LLMBoost + MI300X offers significant cost advantages. With AMD MI300X GPUs priced between $15,000 and $17,000—compared to the $32,000-$40,000 cost of NVIDIA H100 GPUs (source: Tom's Hardware—H100 vs. MI300X Pricing)—Mango LLMBoost delivers up to 62% cost savings while maintaining industry-leading inference throughput.
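The headline figures above are easy to sanity-check. A minimal sketch, using only the throughput and per-GPU price figures quoted in the article:

```python
# Back-of-the-envelope check of MangoBoost's MLPerf v5.0 claims
# (all numbers taken from the article; prices are quoted street-price ranges).

MI300X_TPS_OFFLINE = 103_182   # Mango LLMBoost, 32x AMD MI300X, offline scenario
H100_TPS_OFFLINE = 82_749      # previous best published result, 32x NVIDIA H100

advantage = MI300X_TPS_OFFLINE / H100_TPS_OFFLINE - 1
print(f"Throughput advantage: {advantage:.1%}")  # ~24.7%, reported as 24%

mi300x_price = (15_000, 17_000)  # per-GPU price range, AMD MI300X
h100_price = (32_000, 40_000)    # per-GPU price range, NVIDIA H100

# Best case for MI300X: its low-end quote against the H100 high-end quote
max_savings = 1 - mi300x_price[0] / h100_price[1]
print(f"Maximum per-GPU cost savings: {max_savings:.0%}")  # ~62%
```

Note that the ~62% figure is the best case (the lowest MI300X quote against the highest H100 quote); comparing the midpoints of the two ranges gives a smaller, though still substantial, saving.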

Forget Reboots, Live Patches are Coming to Windows 11 Enterprise Clients

Microsoft is introducing live patch updates for Windows 11 Enterprise, version 24H2, that allow critical security fixes to be applied without interrupting users. These updates, known as hotpatches, are available for x64 devices running on AMD or Intel CPUs. Hotpatch updates are designed to install quickly and take effect immediately. Unlike standard monthly security updates that require a system restart, hotpatch updates provide instant protection against vulnerabilities while allowing users to continue working. This new process can reduce the number of restarts from twelve per year to just four. The update schedule follows a quarterly cycle. In January, April, July, and October, devices install a complete security update with new features and fixes that do require a restart. In the two months that follow each of these baseline updates, devices receive hotpatch updates that only include security fixes and do not need a reboot. This approach ensures that essential protections are applied quickly without impacting daily work.
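The restart math follows directly from the schedule described above. A small illustration, assuming the January/April/July/October baseline cadence the article lays out:

```python
# Illustration of the hotpatch cadence described in the article (an
# assumption for sketch purposes, not Microsoft's implementation):
# baseline updates in Jan/Apr/Jul/Oct require a restart; the two months
# after each baseline ship restart-free hotpatch updates.

BASELINE_MONTHS = {1, 4, 7, 10}

def update_type(month: int) -> str:
    """Return the kind of monthly update a hotpatch-enrolled device receives."""
    if month in BASELINE_MONTHS:
        return "baseline (restart required)"
    return "hotpatch (no restart)"

restarts_per_year = sum(1 for m in range(1, 13) if m in BASELINE_MONTHS)
print(restarts_per_year)  # 4 restarts per year instead of 12
```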

To use hotpatch updates, organizations need a Microsoft subscription that includes Windows 11 Enterprise (or Windows 365 Enterprise) and devices running build 26100.2033 or later. These devices must also be managed using Microsoft Intune, where IT administrators can set up a hotpatch-enabled quality update policy. The Intune admin center automatically detects eligible devices and manages the update process. Hotpatch updates are currently available on Intel and AMD-powered devices. For Arm64 devices, hotpatch updates are still in public preview and require an extra configuration step: disabling CHPE support via a registry key or the upcoming DisableCHPE CSP. This update system represents a more efficient way to secure Windows client devices. By minimizing the need for restarts and delivering updates in a predictable, quarterly cycle, Microsoft aims to help organizations protect their systems with minimal disruption. We expect these live patches to trickle down to more Windows 11 versions, like Home and Pro editions.

AAEON Launches UP 710S Edge, Its Smallest Mini PC Powered by Intel N Series Processor

AAEON's UP brand has released the UP 710S Edge, the company's smallest Intel Processor N-powered Mini PC with Wi-Fi support, measuring just 92 mm x 77 mm x 38 mm.

Positioned as a compact platform for companies to upgrade industrial automation setups, the UP 710S Edge is available in models featuring the full Intel Processor N series family (formerly Alder Lake-N), as well as offering an 8-bit GPIO with optional SPI, I2C, and PWM, a first for the product line.

IBM & Intel Announce the Availability of Gaudi 3 AI Accelerators on IBM Cloud

Yesterday, at Intel Vision 2025, IBM announced the availability of Intel Gaudi 3 AI accelerators on IBM Cloud. This offering delivers Intel Gaudi 3 in a public cloud environment for production workloads. Through this collaboration, IBM Cloud aims to help clients more cost-effectively scale and deploy enterprise AI. Intel Gaudi 3 AI accelerators on IBM Cloud are currently available in Frankfurt (eu-de) and Washington, D.C. (us-east) IBM Cloud regions, with future availability for the Dallas (us-south) IBM Cloud region in Q2 2025.

IBM's AI in Action 2024 report found that 67% of surveyed leaders reported revenue increases of 25% or more due to including AI in business operations. Although AI is demonstrating promising revenue increases, enterprises are also balancing the costs associated with the infrastructure needed to drive performance. By leveraging Intel's Gaudi 3 on IBM Cloud, the two companies are aiming to help clients more cost effectively test, innovate and deploy generative AI solutions. "By bringing Intel Gaudi 3 AI accelerators to IBM Cloud, we're enabling businesses to help scale generative AI workloads with optimized performance for inferencing and fine-tuning. This collaboration underscores our shared commitment to making AI more accessible and cost-effective for enterprises worldwide," said Saurabh Kulkarni, Vice President, Datacenter AI Strategy and Product Management, Intel.

Supermicro Ships Over 20 New Systems that Redefine Single-Socket Performance

Super Micro Computer, Inc., a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is announcing the availability of new single-socket servers capable of supporting applications that previously required dual-socket servers for a range of data center workloads. By leveraging a single-socket architecture, enterprises and data center operators can reduce initial acquisition costs, ongoing operational costs such as power and cooling, and the physical footprint of server racks compared to previous generations of systems based on older processors.

"We are entering a new era of compute where energy-efficient and thermally optimized single-socket architectures are becoming a viable alternative to traditional dual-processor servers," said Charles Liang, president and CEO of Supermicro. "Our new single-socket servers support 100% more cores per system than previous generations and have been designed to maximize acceleration, networking, and storage flexibility. Supporting up to 500-watt TDP processors, these new systems can be configured to fulfill a wide range of workload requirements."

Japanese Retailer Reportedly Prepping NVIDIA RTX PRO 6000 96 GB Stock For Sale in May, Leak Indicates $8435+ Pricing

During GTC 2025, NVIDIA unveiled the professional (PRO) side of its "Blackwell" GPU line—headlined by a monstrous 96 GB GDDR7 option that unleashes the full potential of the GB202 die. Industry watchdogs anticipated sky-high pricing, as befits such a potent specification sheet/feature set. As reported by VideoCardz over the past weekend, a North American enterprise PC hardware store—Connection—has populated its webshop with several of Team Green's brand-new RTX PRO Blackwell Series SKUs. The publication received tip-offs from a portion of its readership, including some well-heeled individuals who have already placed pre-orders. Starting off, the investigation highlighted upper-crust offerings: "the flagship model, called the RTX PRO 6000 with 96 GB of VRAM, will launch at $8435 (bulk) to $8565 (box), and this price seemingly applies to both models: the Workstation Edition and a sub-variant called Max-Q. Both are equipped with the same specs, but the latter is capped at 300 W TDP while retaining 88% of the AI performance, claimed NVIDIA."

Connection has removed its RTX PRO 6000 Blackwell and RTX PRO 6000 Blackwell Max-Q product pages, but the rest of Team Green's professional stack is still visible (see relevant screenshot below). The RTX PRO 5000 Blackwell 48 GB card is priced at $4569.24 (or $4439.50 for bulk). The cheapest offering is a $696.54 RTX PRO 2000 Blackwell 8 GB model. Officially, NVIDIA and its main professional series board partner—PNY—only revealed 4500, 5000 and 6000 product tiers. VideoCardz put a spotlight on some of these unannounced options, including: "the RTX 4000 non-SFF version, while this retailer has six listings for such SKUs (two SFF and two non-SFF, both in bulk and box variants). Presumably, this would suggest that NVIDIA may launch a non-SFF version later. However, the company didn't put 'SFF' in the official card's name, so perhaps this information is no longer valid, and there's only one model." According to a GDM/Hermitage AkiHabara Japan press release, a local reseller—Elsa—is preparing NVIDIA RTX PRO 6000 Blackwell Workstation Edition and RTX PRO 6000 Blackwell Max-Q Workstation Edition stock for scheduled release "in May 2025, while the other models are scheduled for release around summer." Additionally, another retailer (ASK Co., Ltd.): "has stated that the price and release date are subject to inquiry."

NVIDIA & Storage Industry Leaders Unveil New Class of Enterprise Infrastructure for the AI Era

At GTC 2025, NVIDIA announced the NVIDIA AI Data Platform, a customizable reference design that leading providers are using to build a new class of AI infrastructure for demanding AI inference workloads: enterprise storage platforms with AI query agents fueled by NVIDIA accelerated computing, networking and software. Using the NVIDIA AI Data Platform, NVIDIA-Certified Storage providers can build infrastructure to speed AI reasoning workloads with specialized AI query agents. These agents help businesses generate insights from data in near real time, using NVIDIA AI Enterprise software—including NVIDIA NIM microservices for the new NVIDIA Llama Nemotron models with reasoning capabilities—as well as the new NVIDIA AI-Q Blueprint.

Storage providers can optimize their infrastructure to power these agents with NVIDIA Blackwell GPUs, NVIDIA BlueField DPUs, NVIDIA Spectrum-X networking and the NVIDIA Dynamo open-source inference library. Leading data platform and storage providers—including DDN, Dell Technologies, Hewlett Packard Enterprise, Hitachi Vantara, IBM, NetApp, Nutanix, Pure Storage, VAST Data and WEKA—are collaborating with NVIDIA to create customized AI data platforms that can harness enterprise data to reason and respond to complex queries. "Data is the raw material powering industries in the age of AI," said Jensen Huang, founder and CEO of NVIDIA. "With the world's storage leaders, we're building a new class of enterprise infrastructure that companies need to deploy and scale agentic AI across hybrid data centers."

Server Market Revenue Increased 91% in Q4 2024, NVIDIA Continues Dominating the GPU Server Space

According to the International Data Corporation (IDC) Worldwide Quarterly Server Tracker, the server market reached a record $77.3 billion in revenue during the last quarter of 2024. The quarter showed the second-highest growth rate since 2019, with a year-over-year increase of 91% in vendor revenue. Revenue generated from x86 servers increased 59.9% in 2024Q4 to $54.8 billion, while non-x86 servers increased 262.1% year over year to $22.5 billion.

Revenue for servers with an embedded GPU grew 192.6% year-over-year in the fourth quarter of 2024, and for the full year 2024, more than half of the server market revenue came from servers with an embedded GPU. NVIDIA continues to dominate the server GPU space, accounting for over 90% of total shipments with an embedded GPU in 2024Q4. The fast pace at which hyperscalers and cloud service providers have been adopting servers with embedded GPUs has fueled the server market's growth; the market has more than doubled in size since 2020, with revenue of $235.7 billion for the full year 2024.
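A quick back-of-the-envelope check of the IDC figures quoted above (all values in billions of US dollars, taken from the article):

```python
# Sanity-check the IDC Q4 2024 server market figures from the article.

q4_2024_total = 77.3   # total server market revenue, Q4 2024 ($B)
x86 = 54.8             # x86 server revenue ($B)
non_x86 = 22.5         # non-x86 server revenue ($B)

# The two segments should sum to the reported total
assert abs((x86 + non_x86) - q4_2024_total) < 0.1

# Implied Q4 2023 revenue, working backwards from the 91% YoY growth rate
q4_2023_total = q4_2024_total / 1.91
print(f"Implied Q4 2023 revenue: ${q4_2023_total:.1f}B")  # ~$40.5B
```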

MSI Powers the Future of Cloud Computing at CloudFest 2025

MSI, a leading global provider of high-performance server solutions, unveiled its next-generation server platforms—ORv3 Servers, DC-MHS Servers, and NVIDIA MGX AI Servers—at CloudFest 2025, held from March 18-20 at booth H02. The ORv3 Servers focus on modularity and standardization to enable seamless integration and rapid scalability for hyperscale growth. Complementing this, the DC-MHS Servers emphasize modular flexibility, allowing quick reconfiguration to adapt to diverse data center requirements while maximizing rack density for sustainable operations. Together with NVIDIA MGX AI Servers, which deliver exceptional performance for AI and HPC workloads, MSI's comprehensive solutions empower enterprises and hyperscalers to redefine cloud infrastructure with unmatched flexibility and performance.

"We're excited to present MSI's vision for the future of cloud infrastructure," said Danny Hsu, General Manager of MSI's Enterprise Platform Solutions. "Our next-generation server platforms address the critical needs of scalability, efficiency, and sustainability. By offering modular flexibility, seamless integration, and exceptional performance, we empower businesses, hyperscalers, and enterprise data centers to innovate, scale, and lead in this cloud-powered era."

Global Top 10 IC Design Houses See 49% YoY Growth in 2024, NVIDIA Commands Half the Market

TrendForce reveals that the combined revenue of the world's top 10 IC design houses reached approximately US$249.8 billion in 2024, marking a 49% YoY increase. The booming AI industry has fueled growth across the semiconductor sector, with NVIDIA leading the charge, posting an astonishing 125% revenue growth, widening its lead over competitors, and solidifying its dominance in the IC industry.

Looking ahead to 2025, advancements in semiconductor manufacturing will further enhance AI computing power, with LLMs continuing to emerge. Open-source models like DeepSeek could lower AI adoption costs, accelerating AI penetration from servers to personal devices. This shift positions edge AI devices as the next major growth driver for the semiconductor industry.

Silicon Motion Announces PCIe Gen5 Enterprise SSD Reference Design Kit Supporting up to 128TB

Silicon Motion Technology Corporation ("Silicon Motion"), a global leader in designing and marketing NAND flash controllers for solid-state storage devices, today announced the sampling of its groundbreaking MonTitan SSD Reference Design Kit (RDK), which supports up to 128 TB with QLC NAND and is built on the advanced MonTitan PCIe Gen 5 SSD Development Platform. This new offering aims to accelerate enterprise and data center AI SSD solutions by providing a robust and efficient RDK for OEMs and partners.

The SSD RDK incorporates Silicon Motion's dual-port, enterprise-grade SM8366 controller, which supports PCIe Gen 5 x4, NVMe 2.0, and the OCP 2.5 data center specification, offering unmatched performance, QoS, and capacity for next-generation large data lake storage needs.

Meta Reportedly Reaches Test Phase with First In-house AI Training Chip

According to a Reuters technology report, Meta's engineering department is testing the company's "first in-house chip for training artificial intelligence systems." Two inside sources describe this as a significant development milestone, involving a small-scale deployment of early samples. The owner of Facebook could ramp up production once the initial batches pass muster. Despite recently showcasing an open-architecture NVIDIA "Blackwell" GB200 system for enterprise, Meta leadership is reported to be pursuing proprietary solutions. Multiple big players in the field of artificial intelligence are attempting to break away from a total reliance on Team Green. Last month, press outlets concentrated on OpenAI's alleged finalization of an in-house design, with rumored involvement from Broadcom and TSMC.

One of the Reuters industry moles believes that Meta has signed up with TSMC—supposedly, the Taiwanese foundry was responsible for the production of test batches. Tom's Hardware reckons that Meta and Broadcom worked together on the tape-out of the social media giant's "first AI training accelerator." Development of the company's "Meta Training and Inference Accelerator" (MTIA) series has stretched back a couple of years—according to Reuters, this multi-part project: "had a wobbly start for years, and at one point scrapped a chip at a similar phase of development...Meta last year, started using an MTIA chip to perform inference, or the process involved in running an AI system as users interact with it, for the recommendation systems that determine which content shows up on Facebook and Instagram news feeds." Leadership is reportedly aiming to get custom silicon solutions up and running for AI training by next year. Past examples of MTIA hardware were deployed with open-source RISC-V cores (for inference tasks), but it is not clear whether this architecture will form the basis of Meta's latest AI chip design.

Kingston Debuts DC3000ME PCIe 5.0 NVMe U.2 Enterprise SSD with eTLC NAND

Kingston has introduced its new DC3000ME line of enterprise-grade PCIe 5.0 SSDs. These top-tier storage devices come in a U.2 15 mm form factor (100.50 mm × 69.8 mm × 14.8 mm) and use 3D eTLC NAND flash memory. The drives include built-in power loss protection and AES 256-bit hardware-based encryption. Kingston DC3000ME SSDs are designed for server applications such as AI, HPC, OLTP, databases, cloud infrastructure, and edge computing. As an enterprise-grade product, the DC3000ME SSDs also feature various built-in telemetry such as media wear, temperature, health, etc.

At this moment, the drives are offered in three capacities: 3.84 TB, 7.68 TB, and 15.36 TB. Each version is rated for 1 DWPD of endurance and carries a 5-year warranty. In terms of power consumption, the drives draw 8 W when idle and up to 24 W during write operations. Kingston points out the drives' steady I/O performance and quick response times, with 99th-percentile read latency under 10 µs and write latency under 70 µs (up to 14,000/10,000 MB/s sequential read/write and up to 2,800,000/500,000 4K random read/write IOPS). The drives also offer NVMe-MI 1.2b remote management, end-to-end data protection, and support for TCG Opal 2.0. Exact pricing is still to be announced; however, we found the drives online at €686.90 (3.84 TB), €1,226.90 (7.68 TB), and €2,252.90 (15.36 TB).
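The endurance rating translates into total write volume in a straightforward way. A minimal sketch, assuming 1 DWPD means one full drive write per day over the 5-year warranty (decimal terabytes, ignoring leap days):

```python
# Total bytes written implied by a 1 DWPD rating over a 5-year warranty
# (capacities from the article; DWPD = drive writes per day).

capacities_tb = [3.84, 7.68, 15.36]
DWPD = 1
WARRANTY_DAYS = 5 * 365

for cap in capacities_tb:
    tbw = cap * DWPD * WARRANTY_DAYS  # total terabytes written over warranty
    print(f"{cap} TB -> {tbw:,.0f} TB written (~{tbw / 1000:.1f} PB)")
```

So even the smallest 3.84 TB model is rated for roughly 7 PB of writes over its warranty period, which is typical for read-intensive enterprise drives.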

Phison Showcases Edge AI and Embedded Solutions at Embedded World

Embedded World is one of the most influential exhibitions in the global embedded technology sector, attracting numerous experts in industrial computing, automotive electronics, IoT, and AI technologies each year. Phison Electronics (8299TT), a leading innovator of NAND controller and NAND storage solutions, will participate in the Taiwan Excellence Pavilion at Embedded World 2025 in Germany from March 11 to 13.

Phison will showcase its exclusive AI solution aiDAPTIV+, the enterprise ultra-high-capacity PASCARI PCIe 5.0 122.88 TB SSD, and its latest automotive storage technology, the MPT5 Automotive PCIe Gen 4 SSD, demonstrating over 15 years of technical expertise in the embedded market.

Insiders Predict Introduction of NVIDIA "Blackwell Ultra" GB300 AI Series at GTC, with Fully Liquid-cooled Clusters

Supply chain insiders believe that NVIDIA's "Blackwell Ultra" GB300 AI chip design will get a formal introduction at next week's GTC 2025 conference. Jensen Huang's keynote presentation is scheduled for a very important date on the company's calendar: Tuesday, March 18. Team Green's chief has already revealed a couple of Blackwell B300 series details to investors; a recent earnings call touched upon the subject of a second-half-of-2025 launch window. Industry moles have put spotlights on the GB300 GPU's alleged energy-hungry nature. According to inside tracks, power consumption has "significantly" increased compared to a slightly older equivalent: NVIDIA's less refined "Blackwell" GB200 design.

A Taiwan Economic Daily news article predicts an upcoming "second cooling revolution," due to reports of "Blackwell Ultra" parts demanding greater heat dissipation solutions. Supply chain leakers have suggested effective countermeasures—in the form of fully liquid-cooled systems: "not only will more water cooling plates be introduced, but the use of water cooling quick connectors will increase four times compared to GB200." The pre-Christmas 2024 news cycle proposed a 1400 W TDP rating. Involved "Taiwanese cooling giants" are expected to pull in tidy sums of money from the supply of optimal heat dissipating gear, with local "water-cooling quick-connector" manufacturers also tipped to benefit greatly. The UDN report pulled quotes from a variety of regional cooling specialists; the consensus being that involved partners are struggling to keep up with demand across GB200 and GB300 product lines.

Avalue Technology Unveils HPM-GNRDE High-Performance Server Motherboard

Avalue Technology introduces the HPM-GNRDE high-performance server motherboard, powered by the latest Intel Xeon 6 Processors (P-Core) 6500P & 6700P.

Designed to deliver quality computing performance, ultra-fast memory bandwidth, and advanced PCIe 5.0 expansion, the HPM-GNRDE is the ideal solution for AI workloads, high-performance computing (HPC), Cloud data centers, and enterprise applications. The HPM-GNRDE will make its debut at embedded world 2025, showcasing Avalue's innovation in high-performance computing.

QNAP Releases Cloud NAS Operating System QuTScloud c5.2

QNAP Systems, Inc. today released QuTScloud c5.2, the latest version of its Cloud NAS operating system. This update introduces Security Center, a proactive security application that monitors Cloud NAS file activities and defends against ransomware threats. Additionally, QuTScloud c5.2 provides extensive optimizations, streamlining operations and management for a more seamless user experience.

QuTScloud Cloud NAS revolutionizes enterprise data storage and management. By deploying a QuTScloud image on virtual machines, businesses can flexibly implement Cloud NAS on public cloud platforms or virtualization environments. With a subscription-based pricing model starting at just US $4.99 per month, users can allocate resources efficiently and optimize costs.

Advantech Launches the SQFlash EDSFF and EU-2 PCIe Gen5 x4 SSDs

Advantech, a global leader in industrial flash storage solutions, introduces the SQFlash EDSFF and EU-2 PCIe Gen 5 x4 SSDs, designed to meet the demands of next-generation enterprise and data center applications. Through 2024, PCIe Gen 4 solutions accounted for 50% of the market, and PCIe Gen 5 products are rapidly gaining traction across diverse applications. Among PCIe storage products, form factors such as U.2, E3.S, and E1.S are driving market growth.

The SQFlash E1.S SSD, built on the EDSFF standard, delivers exceptional performance with PCIe Gen 5 read and write speeds of up to 14,000 MB/s and 8,500 MB/s, respectively, while offering scalability, power efficiency, and thermal optimization. Meanwhile, the SQFlash EU-2 PCIe Gen.5 x4 SSD leverages cutting-edge PCIe 5.0 technology, setting new standards in speed, reliability, and thermal management, making it ideal for data centers, enterprise computing, real-time analytics, and AI-driven workloads.

Alibaba Adds New "C930" Server-grade Chip to XuanTie RISC-V Processor Series

Damo Academy—a research and development wing of Alibaba—launched its debut "server-grade processor" design late last week, in Beijing. According to a South China Morning Post (SCMP) news article, the C930 model is a brand-new addition to the e-commerce platform's XuanTie RISC-V CPU series. Company representatives stated that their latest product is designed as a server-level and high-performance computing (HPC) solution. Going back to March 2024, TechPowerUp and other Western hardware news outlets picked up on Alibaba's teasing of the Xuantie C930 SoC, and a related Xuantie 907 matrix processing unit. Fast-forward to the present day; Damo Academy has disclosed that initial shipments—of finalized C930 units—will be sent out to customers this month.

The newly released open-source RISC-V architecture-based HPC chip is an unknown quantity in terms of technical specifications. Damo Academy reps did not provide any detailed information during last Friday's conference (February 28). SCMP's report noted the R&D division's emphasizing of "its role in advancing RISC-V adoption" within various high-end fields. Apparently, the XuanTie engineering team has: "supported the implementation of more than thirty percent of RISC-V high-performance processors." Upcoming additions will arrive in the form of the C908X for AI acceleration, R908A for automotive processing solutions, and an XL200 model for high-speed interconnection. These XuanTie projects are reportedly still deep in development.

Montage Technology Delivers I/O Expansion Solution for New-Generation CPU Platforms

Montage Technology today announced the mass production of its I/O expansion device for the new-generation CPU platforms—the I/O Hub (IOH) chip M88IO3020. This product is specifically designed for Intel's Birch Stream platform, aiming to provide a highly integrated and flexible I/O expansion solution for applications such as cloud computing, big data, and enterprise storage.

Montage's IOH chip establishes connectivity with Intel's latest Granite Rapids processors via the PCIe bus, achieving a maximum bandwidth of 64 Gbps. The chip features configurable high-speed interfaces including PCIe, SATA, and USB to meet diverse application requirements.

AMD to Discuss Advancing of AI "From the Enterprise to the Edge" at MWC 2025

GSMA MWC Barcelona runs from March 3 to 6, 2025 at the Fira Barcelona Gran Via in Barcelona, Spain. AMD is proud to participate in forward-thinking discussions and demos around AI, edge and cloud computing, the long-term revolutionary potential of moonshot technologies like quantum processing, and more. Check out the AMD hospitality suite in Hall 2 (Stand 2M61) and explore our demos and system design wins. Attendees are welcome to stop by informally or schedule a time slot with us.

As modern networks evolve, high-performance computing, energy efficiency, and AI acceleration are becoming just as critical as connectivity itself. AMD is at the forefront of this transformation, delivering solutions that power next-generation cloud, AI, and networking infrastructure. Our demos this year showcase AMD EPYC, AMD Instinct, and AMD Ryzen AI processors, as well as AMD Versal adaptive SoC and Zynq UltraScale+ RFSoC devices.

Qualcomm and IBM Scale Enterprise-grade Generative AI from Edge to Cloud

Ahead of Mobile World Congress 2025, Qualcomm Technologies, Inc. and IBM (NYSE: IBM) announced an expanded collaboration to drive enterprise-grade generative artificial intelligence (AI) solutions across edge and cloud devices designed to enable increased immediacy, privacy, reliability, personalization, and reduced cost and energy consumption. Through this collaboration, the companies plan to integrate watsonx.governance for generative AI solutions powered by Qualcomm Technologies' platforms, and enable support for IBM's Granite models through the Qualcomm AI Inference Suite and Qualcomm AI Hub.

"At Qualcomm Technologies, we are excited to join forces with IBM to deliver cutting-edge, enterprise-grade generative AI solutions for devices across the edge and cloud," said Durga Malladi, senior vice president and general manager, technology planning and edge solutions, Qualcomm Technologies, Inc. "This collaboration enables businesses to deploy AI solutions that are not only fast and personalized but also come with robust governance, monitoring, and decision-making capabilities, with the ability to enhance the overall reliability of AI from edge to cloud."

IBM Completes Acquisition of HashiCorp, Creates Comprehensive, End-to-End Hybrid Cloud Platform

IBM (NYSE: IBM) today announced it has completed its acquisition of HashiCorp, whose products automate and secure the infrastructure that underpins hybrid cloud applications and generative AI. Together the companies' capabilities will help clients accelerate innovation, strengthen security, and get more value from the cloud.

Today nearly 75% of enterprises are using hybrid cloud, including public clouds from hyperscalers and on-prem data centers, which can enable true innovation with a consistent approach to delivering and managing that infrastructure at scale. Enterprises are looking for ways to more efficiently manage and modernize cloud infrastructure and security tasks from initial planning and design, to ongoing maintenance. By 2028, it is projected that generative AI will lead to the creation of 1 billion new cloud-native applications. Supporting this scale requires infrastructure automation far beyond the capacity of the workforce alone.

IBM Introduces New Multi-Modal and Reasoning AI "Granite" Models Built for the Enterprise

IBM today debuted the next generation of its Granite large language model (LLM) family, Granite 3.2, in a continued effort to deliver small, efficient, practical enterprise AI for real-world impact. All Granite 3.2 models are available under the permissive Apache 2.0 license on Hugging Face. Select models are available today on IBM watsonx.ai, Ollama, Replicate, and LM Studio, and expected soon in RHEL AI 1.5 - bringing advanced capabilities to businesses and the open-source community.
Apr 8th, 2025 15:43 EDT
