News Posts matching #Enterprise


IBM Power11 Raises the Bar for Enterprise IT

Today, IBM revealed IBM Power11, the next generation of IBM Power servers. Redesigned with innovations across its processor, hardware architecture, and virtualization software stack, Power11 is designed to deliver the availability, resiliency, performance, and scalability enterprises demand, for seamless hybrid deployment on-premises or in IBM Cloud.

Organizations across industries have long run their most mission-critical, data-intensive workloads on IBM Power, most notably those within the banking, healthcare, retail, and government spaces. Now, enterprises face an onslaught of new technologies and solutions as they transition into the age of AI. IDC found that one billion new logical applications are expected by 2028, and the proliferation of these systems poses new complexities for companies. IBM built Power11 to deliver simplified, always-on operations with hybrid cloud flexibility for enterprises to maintain competitiveness in the AI era.

DapuStor Unveils J5060 QLC 122 TB Ultra Capacity SSD Series

DapuStor proudly announces the launch of its ultra-high capacity J5060 QLC SSD, delivering an unprecedented 122.88 TB of capacity—twice that of the previous 61 TB model. This milestone redefines what's possible in high-density flash storage.

Massive Capacity, Minimal Footprint
For personal users, the J5060 122 TB SSD can store up to 10,000 90-minute 4K movies, all in a device that fits in the palm of your hand—a perfect combination of capacity and portability. For enterprise deployments, the benefits are even greater. Replacing 24 TB HDDs with the J5060 can reduce system footprint and complexity by up to 5x, significantly lowering equipment count and saving valuable data center space.
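A quick back-of-the-envelope check of these figures (using only the capacities quoted above; the per-movie size is implied by the claim, not stated by DapuStor):

```python
# Sanity check of the J5060 capacity claims above (illustrative only).

ssd_tb = 122.88   # J5060 QLC SSD raw capacity, TB
hdd_tb = 24.0     # capacity of the HDDs being replaced, TB
movies = 10_000   # claimed number of 90-minute 4K movies

# How many 24 TB HDDs one J5060 matches, capacity-wise
drives_replaced = ssd_tb / hdd_tb
print(f"One J5060 holds as much as {drives_replaced:.1f} HDDs")  # ~5.1, consistent with "up to 5x"

# Implied average size per movie
gb_per_movie = ssd_tb * 1000 / movies
print(f"Implied size per movie: {gb_per_movie:.1f} GB")  # ~12.3 GB per 90-minute 4K film
```

Note that the claimed 5x applies to raw capacity per device; real consolidation ratios also depend on endurance, performance, and redundancy requirements.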

Inventory Headwinds Weigh on Top 5 Enterprise SSD Vendors in 1Q25; Recovery Expected as AI Demand Grows

TrendForce's latest investigations reveal that several negative factors weighed on the enterprise SSD market in the first quarter of 2025. These include production challenges for next-gen AI systems and persistent inventory overhang in North America. As a result, major clients significantly scaled back orders, causing the ASP of enterprise SSDs to plunge nearly 20%. This led to QoQ revenue declines for the top five enterprise SSD vendors, reflecting a period of market adjustment.

However, conditions are expected to improve in the second quarter. As shipments of NVIDIA's new chips ramp up, demand for AI infrastructure in North America is rising. Meanwhile, Chinese CSPs are steadily expanding storage capacity in their data centers. Together, these trends are set to reinvigorate the enterprise SSD market, with overall revenue projected to return to positive growth.

Supermicro Unveils Industry's Broadest Enterprise AI Solution Portfolio for NVIDIA Blackwell Architecture

Supermicro, Inc., a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing an expansion of the industry's broadest portfolio of solutions designed for NVIDIA Blackwell Architecture to the European market. The introduction of more than 30 solutions reinforces Supermicro's industry leadership by providing the most comprehensive and efficient solution stack for NVIDIA HGX B200, GB200 NVL72, and RTX PRO 6000 Blackwell Server Edition deployments, enabling rapid time-to-online for European enterprise AI factories across any environment. Through close collaboration with NVIDIA, Supermicro's solution stack enables the deployment of NVIDIA Enterprise AI Factory validated design and supports the upcoming introduction of NVIDIA Blackwell Ultra solutions later this year, including NVIDIA GB300 NVL72 and HGX B300.

"With our first-to-market advantage and broad portfolio of NVIDIA Blackwell solutions, Supermicro is uniquely positioned to meet the accelerating demand for enterprise AI infrastructure across Europe," said Charles Liang, president and CEO of Supermicro. "Our collaboration with NVIDIA, combined with our global manufacturing capabilities and advanced liquid cooling technologies, enables European organizations to deploy AI factories with significantly improved efficiency and reduced implementation timelines. We're committed to providing the complete solution stack enterprises need to successfully scale their AI initiatives."

MSI Powers AI's Next Leap for Enterprises at ISC 2025

MSI, a global leader in high-performance server solutions, is showcasing its enterprise-grade, high-performance server platforms at ISC 2025, taking place June 10-12 at booth #E12. Built on standardized and modular architectures, MSI's AI servers are designed to power next-generation AI and accelerated computing workloads, enabling enterprises to rapidly advance their AI innovations.

"As AI workloads continue to grow and evolve toward inference-driven applications, we're seeing a significant shift in how enterprises approach AI deployment," said Danny Hsu, General Manager of Enterprise Platform Solutions at MSI. "With modular and standards-based architectures, enterprise data centers can now adopt AI technologies more quickly and cost-effectively than ever before. This marks a new era where AI is not only powerful but also increasingly accessible to businesses of all sizes."

NVIDIA Blackwell Delivers Breakthrough Performance in Latest MLPerf Training Results

NVIDIA is working with companies worldwide to build out AI factories—speeding the training and deployment of next-generation AI applications that use the latest advancements in training and inference. The NVIDIA Blackwell architecture is built to meet the heightened performance requirements of these new applications. In the latest round of MLPerf Training—the 12th since the benchmark's introduction in 2018—the NVIDIA AI platform delivered the highest performance at scale on every benchmark and powered every result submitted on the benchmark's toughest large language model (LLM)-focused test: Llama 3.1 405B pretraining.

The NVIDIA platform was the only one that submitted results on every MLPerf Training v5.0 benchmark—underscoring its exceptional performance and versatility across a wide array of AI workloads, spanning LLMs, recommendation systems, multimodal LLMs, object detection and graph neural networks. The at-scale submissions used two AI supercomputers powered by the NVIDIA Blackwell platform: Tyche, built using NVIDIA GB200 NVL72 rack-scale systems, and Nyx, based on NVIDIA DGX B200 systems. In addition, NVIDIA collaborated with CoreWeave and IBM to submit GB200 NVL72 results using a total of 2,496 Blackwell GPUs and 1,248 NVIDIA Grace CPUs.

Chinese Tech Firms Reportedly Unimpressed with Overheating of Huawei AI Accelerator Samples

Midway through last month, Tencent's President—Martin Lau—confirmed that the company had stockpiled a huge quantity of NVIDIA H20 AI GPUs prior to new trade restrictions coming into effect. According to earlier reports, China's largest tech firms collectively spent $16 billion on hardware acquisitions in Q1'25. Team Green engineers are likely engaged in the creation of "nerfed" enterprise-grade chip designs—potentially ready for deployment later in 2025. Huawei leadership is likely keen to take advantage of this situation, although it will be difficult to compete with the sheer volume of accumulated H20 units. The Shenzhen, Guangdong-based giant's Ascend AI accelerator family is considered a valid alternative to equivalent "sanction-conformant" NVIDIA products.

The controversial 910C model and its successor seem to be worthy candidates, as demonstrated by preliminary performance data, but fresh industry murmurs suggest teething problems. The Information has picked up insider chatter from unnamed sources at ByteDance and Alibaba. During test runs, staffers noted the overheating of Huawei Ascend 910C trial samples. Additionally, they highlighted limitations within the Huawei Compute Architecture for Neural Networks (CANN) software platform. NVIDIA's extremely mature CUDA ecosystem holds a significant advantage here. Several of China's prime AI players—including DeepSeek—are reportedly pursuing in-house AI chip development projects, positioning themselves as future competitors to Huawei.

NVIDIA on AI Factories: The More You Buy, the More You Make

How NVIDIA's AI factory platform balances maximum performance and minimum latency, optimizing AI inference to power the next industrial revolution.

When we prompt generative AI to answer a question or create an image, large language models generate tokens of intelligence that combine to provide the result. One prompt. One set of tokens for the answer. This is called AI inference. Agentic AI uses reasoning to complete tasks. AI agents aren't just providing one-shot answers. They break tasks down into a series of steps, each one a different inference technique. One prompt. Many sets of tokens to complete the job.

The engines of AI inference are called AI factories—massive infrastructures that serve AI to millions of users at once. AI factories generate AI tokens. Their product is intelligence. In the AI era, this intelligence grows revenue and profits. Growing revenue over time depends on how efficient the AI factory can be as it scales. AI factories are the machines of the next industrial revolution.

HPE Expands Its Aruba Networking Wired and Wireless Portfolio

Hewlett Packard Enterprise today announced expansions of its HPE Aruba Networking wired and wireless portfolio, along with new HPE Aruba Networking CX 10K distributed services switches, which feature built-in programmable data processing units (DPU) from AMD Pensando to offload security and network services to free up resources for complex AI workload processing.

The new expansions from HPE Aruba Networking include:
  • The HPE Aruba Networking CX 10040 is HPE's latest distributed services switch -- also known as a "smart switch" -- that doubles the scale and performance of the previous networking and security solution.
  • Four new HPE Aruba Networking CX 6300M campus networking switches, which provide faster data speeds for enterprise IoT, AI, or high-performance computing with a more compact footprint.
  • New Wi-Fi 7 access points (APs) and capabilities for AI-driven indoor and outdoor connectivity that deliver the highest quality of service for data, voice, and video communications.

NVIDIA Blackwell a Focal Point in AI Factories Built by Dell Technologies

Over a century ago, Henry Ford pioneered the mass production of cars and engines to provide transportation at an affordable price. Today, the technology industry manufactures the engines for a new kind of factory—one that produces intelligence. As companies and countries increasingly focus on AI and move from experimentation to implementation, the demand for AI technologies continues to grow exponentially. Leading system builders are racing to ramp up production of AI servers—the engines of AI factories—to meet the world's exploding demand for intelligence. Dell Technologies is a leader in this renaissance. Dell and NVIDIA have partnered for decades and continue to push the pace of innovation. In its last earnings call, Dell projected that its AI server business will reach at least $15 billion this year.

"We're on a mission to bring AI to millions of customers around the world," said Michael Dell, chairman and chief executive officer, Dell Technologies, in a recent announcement at Dell Technologies World. "With the Dell AI Factory with NVIDIA, enterprises can manage the entire AI lifecycle across use cases, from training to deployment, at any scale." The latest Dell AI servers, powered by NVIDIA Blackwell, offer up to 50x more AI reasoning inference output and 5x improvement in throughput compared with the Hopper platform. Customers use them to generate tokens for new AI applications that will help solve some of the world's biggest challenges, from disease prevention to advanced manufacturing.

AI Demand Fuels Enterprise SSD Growth; 3Q25 NAND Flash Prices Likely to Rise Further

TrendForce's latest investigations reveal that continued AI investments by major North American CSPs are expected to drive a significant increase in enterprise SSD demand in the third quarter of 2025. The enterprise SSD market is likely to shift toward undersupply with finished product inventory levels remaining low, supporting a potential price increase of up to 10% QoQ.

TrendForce notes that earlier this year, suppliers adopted a conservative production strategy as they gradually brought the NAND Flash market back into supply-demand balance. However, the introduction of new U.S. reciprocal tariff policies in early April disrupted the market's momentum in Q2 and introduced volatility into price trends. Although some PC manufacturers accelerated shipments in Q2, it failed to substantially boost overall demand for NAND Flash products. Meanwhile, persistent weakness in the retail market has prompted suppliers to tighten capacity controls even further.

Synology Announces PAS7700, Active-Active NVMe Storage for Enterprise Applications

Synology today unveiled PAS7700, an active-active NVMe all-flash storage solution engineered to deliver uninterrupted high-performance services for enterprise mission-critical workloads. Combining dual controllers with 48 NVMe SSD bays in a space-efficient 4U chassis, PAS7700 scales seamlessly to 1.65 PB of raw capacity with the addition of seven expansion units. PAS7700 features comprehensive support for a range of file and block protocols, including NVMe-oF. With redundant memory upgradable to 2,048 GB across both controllers and support for high-speed 100GbE networking, PAS7700 delivers exceptional performance, availability, and scalability to meet enterprise storage demands.

"PAS7700 is the culmination of Synology's 25 years of engineering experience in data management and storage," said Kenneth Hsu, Director of the System Group at Synology. "By combining our deep software and hardware development expertise with close collaboration with partners and enterprise customers, we've engineered PAS7700 to deliver ultra-high performance at a price point previously unseen in the enterprise storage market."

Red Hat & AMD Strengthen Strategic Collaboration - Leading to More Efficient GenAI

Red Hat, the world's leading provider of open source solutions, and AMD today announced a strategic collaboration to propel AI capabilities and optimize virtualized infrastructure. With this deepened alliance, Red Hat and AMD will expand customer choice across the hybrid cloud, from deploying optimized, efficient AI models to more cost-effectively modernizing traditional virtual machines (VMs). As workload demand and diversity continue to rise with the introduction of AI, organizations must have the capacity and resources to meet these escalating requirements. The average datacenter, however, is dedicated primarily to traditional IT systems, leaving little room to support intensive workloads such as AI. To answer this need, Red Hat and AMD are bringing together the power of Red Hat's industry-leading open source solutions with the comprehensive portfolio of AMD high-performance computing architectures.

AMD and Red Hat: Driving to more efficient generative AI
Red Hat and AMD are combining the power of Red Hat AI with the AMD portfolio of x86-based processors and GPU architectures to support optimized, cost-efficient, and production-ready environments for AI-enabled workloads. AMD Instinct GPUs are now fully enabled on Red Hat OpenShift AI, empowering customers with the high-performing processing power necessary for AI deployments across the hybrid cloud without extreme resource requirements. In addition, using AMD Instinct MI300X GPUs with Red Hat Enterprise Linux AI, Red Hat and AMD conducted testing on Microsoft Azure ND MI300X v5 to successfully demonstrate AI inferencing for scaling small language models (SLMs) as well as large language models (LLMs) deployed across multiple GPUs on a single VM, reducing the need to deploy across multiple VMs and lowering performance costs.

Red Hat Introduces Red Hat Enterprise Linux 10

Red Hat, the world's leading provider of open source solutions, today introduced Red Hat Enterprise Linux 10, the evolution of the world's leading enterprise Linux platform to help meet the dynamic demands of hybrid cloud and the transformative power of AI. More than just an iteration, Red Hat Enterprise Linux 10 provides a strategic and intelligent backbone for enterprise IT to navigate increasing complexity, accelerate innovation and build a more secure computing foundation for the future.

As enterprise IT grapples with the proliferation of hybrid environments and the imperative to integrate AI workloads, the need for an intelligent, resilient and durable operating system has never been greater. Red Hat Enterprise Linux 10 rises to this challenge, delivering a platform engineered for agility, flexibility and manageability, all while retaining a strong security posture against the software threats of the future.

NVIDIA & Microsoft Accelerate Agentic AI Innovation - From Cloud to PC

Agentic AI is redefining scientific discovery and unlocking research breakthroughs and innovations across industries. Through deepened collaboration, NVIDIA and Microsoft are delivering advancements that accelerate agentic AI-powered applications from the cloud to the PC. At Microsoft Build, Microsoft unveiled Microsoft Discovery, an extensible platform built to empower researchers to transform the entire discovery process with agentic AI. This will help research and development departments across various industries accelerate the time to market for new products, as well as speed and expand the end-to-end discovery process for all scientists.

Microsoft Discovery will integrate the NVIDIA ALCHEMI NIM microservice, which optimizes AI inference for chemical simulations, to accelerate materials science research with property prediction and candidate recommendation. The platform will also integrate NVIDIA BioNeMo NIM microservices, tapping into pretrained AI workflows to speed up AI model development for drug discovery. These integrations equip researchers with accelerated performance for faster scientific discoveries. In testing, researchers at Microsoft used Microsoft Discovery to detect a novel coolant prototype with promising properties for immersion cooling in data centers in under 200 hours, rather than months or years with traditional methods.

Marvell Custom Cloud Platform Upgraded with NVIDIA NVLink Fusion Tech

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, today announced it is teaming with NVIDIA to offer NVLink Fusion technology to customers employing Marvell custom cloud platform silicon. NVLink Fusion is an innovative new offering from NVIDIA for integrating custom XPU silicon with NVIDIA NVLink connectivity, rack-scale hardware architecture, software and other technology, providing customers with greater flexibility and choice in developing next-generation AI infrastructure.

The Marvell custom platform strategy seeks to deliver breakthrough results through unique semiconductor designs and innovative approaches. By combining expertise in system and semiconductor design, advanced process manufacturing, and a comprehensive portfolio of semiconductor platform solutions and IP—including electrical and optical serializer/deserializers (SerDes), die-to-die interconnects for 2D and 3D devices, advanced packaging, silicon photonics, co-packaged copper, custom high-bandwidth memory (HBM), system-on-chip (SoC) fabrics, optical IO, and compute fabric interfaces such as PCIe Gen 7—Marvell is able to create platforms in collaboration with customers that transform infrastructure performance, efficiency and value.

Dell Technologies Unveils Next Generation Enterprise AI Solutions with NVIDIA

The world's top provider of AI-centric infrastructure, Dell Technologies, announces innovations across the Dell AI Factory with NVIDIA - all designed to help enterprises accelerate AI adoption and achieve faster time to value.

Why it matters
As enterprises make AI central to their strategy and progress from experimentation to implementation, their demand for accessible AI skills and technologies grows exponentially. Dell and NVIDIA continue the rapid pace of innovation with updates to the Dell AI Factory with NVIDIA, including robust AI infrastructure, solutions and services that streamline the path to full-scale implementation.

MiTAC Computing Unveils Full Server Lineup for Data Centers and Enterprises with Intel Xeon 6 at Computex 2025

MiTAC Computing Technology Corporation, a leading server platform designer, manufacturer, and a subsidiary of MiTAC Holdings Corporation, has launched its full suite of next-generation servers for data centers and enterprises at COMPUTEX 2025 (Booth M1110). Powered by Intel Xeon 6 processors, including those with Performance-cores (P-cores), MiTAC's new platforms are purpose-built for AI, HPC, cloud, and enterprise applications.

"For over five decades, MiTAC and Intel have built a close, collaborative relationship that continues to push innovation forward. Our latest server lineup reflects this legacy—combining Intel's cutting-edge processing power with MiTAC Computing's deep expertise in system design to deliver scalable, high-efficiency solutions for modern data centers." - Rick Hwang, President of MiTAC Computing.

MSI Unveils Next-Level AI Solutions Using NVIDIA MGX and DGX Station at COMPUTEX 2025

MSI, a leading global provider of high-performance server solutions, unveils its latest AI innovations using NVIDIA MGX and NVIDIA DGX Station reference architectures at COMPUTEX 2025, held from May 20-23 at booth J0506. Purpose-built to address the growing demands of AI, HPC, and accelerated computing workloads, MSI's AI solutions feature modular, scalable building blocks designed to deliver next-level AI performance for enterprises and cloud data center environments.

"AI adoption is transforming enterprise data centers as organizations move quickly to integrate advanced AI capabilities," said Danny Hsu, General Manager of Enterprise Platform Solutions at MSI. "With the explosive growth of generative AI and increasingly diverse workloads, traditional servers can no longer keep pace. MSI's AI solutions, built on the NVIDIA MGX and NVIDIA DGX Station reference architectures, deliver the scalability, flexibility, and performance enterprises need to future-proof their infrastructure and accelerate their AI innovation."

MiTAC Computing Launches the Latest Scale-out AI Server G4527G6 by NVIDIA MGX at Computex 2025

MiTAC Computing Technology Corporation, a leading server platform designer and manufacturer and a subsidiary of MiTAC Holdings Corporation (TSE:3706), will present its latest innovations in AI infrastructure at COMPUTEX 2025. At booth M1110, MiTAC Computing will display its next-level AI server platform, the MiTAC G4527G6, fully optimized for NVIDIA MGX architecture, which supports NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and the NVIDIA H200 NVL platform to address the evolving demands of enterprise AI workloads.

Next-Gen AI with High-Performance Computing
With the increasing adoption of generative AI and accelerated computing, MiTAC Computing introduces the latest NVIDIA MGX-based server solution, the MiTAC G4527G6, designed to support complex AI and high-performance computing (HPC) workloads. Built on Intel Xeon 6 processors, the G4527G6 accommodates up to eight NVIDIA GPUs, 8 TB of DDR5-6400 memory, sixteen hot-swappable E1.s drives, and an NVIDIA BlueField-3 DPU for efficient north-south connectivity. Crucially, it integrates four next-generation NVIDIA ConnectX-8 SuperNICs, delivering up to 800 gigabits per second (Gb/s) of NVIDIA InfiniBand and Ethernet networking—significantly enhancing system performance for AI factories and cloud data center environments.

NVIDIA Discusses the Revenue-Generating Potential of AI Factories

AI is creating value for everyone—from researchers in drug discovery to quantitative analysts navigating financial market changes. The faster an AI system can produce tokens—the units of data that combine to form its outputs—the greater its impact. That's why AI factories are key, providing the most efficient path from "time to first token" to "time to first value." AI factories are redefining the economics of modern infrastructure. They produce intelligence by transforming data into valuable outputs—whether tokens, predictions, images, proteins, or other forms—at massive scale.

They help enhance three key aspects of the AI journey—data ingestion, model training and high-volume inference. AI factories are being built to generate tokens faster and more accurately, using three critical technology stacks: AI models, accelerated computing infrastructure and enterprise-grade software. Read on to learn how AI factories are helping enterprises and organizations around the world convert the most valuable digital commodity—data—into revenue potential.
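As a toy illustration of the token-factory economics described above (revenue potential scales with sustained token throughput, utilization, and price per token), here is a minimal sketch; every input value below is a hypothetical example, not an NVIDIA figure:

```python
# Toy model of AI-factory token economics. All input values are
# hypothetical examples, not NVIDIA or vendor figures.

def daily_revenue(tokens_per_sec: float,
                  utilization: float,
                  usd_per_million_tokens: float) -> float:
    """Revenue potential per day for a system serving tokens at a sustained rate."""
    tokens_per_day = tokens_per_sec * utilization * 86_400  # seconds in a day
    return tokens_per_day / 1e6 * usd_per_million_tokens

# Hypothetical deployment: 1M tokens/s sustained, 70% utilization,
# $2 per million tokens served.
print(f"${daily_revenue(1_000_000, 0.70, 2.00):,.0f} per day")  # prints "$120,960 per day"
```

In this simple model, doubling throughput at fixed utilization and price doubles the revenue potential, which is the sense in which "the more you buy, the more you make."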

Tencent President Discusses Significant Stockpiling of AI GPUs - Open to Future Adoption of Native Designs

Martin Lau, President of Tencent, has divulged that his company has accumulated a "pretty strong stockpile" of NVIDIA AI chips. In a mid-week earnings call, the Chinese executive reckoned that this surplus will come in handy once the company unleashes its full upcoming "AI strategy." Lau was responding to a question regarding ripples caused by the recent introduction of revised licensing requirements for "high-end GPUs." His lengthy reply seems to align with information leaked in April, when industry analysts theorized a massive $16 billion spend—big Chinese tech firms had reportedly made swift acquisitions of NVIDIA H20 GPUs. Lau commented on present-day conditions: "it's actually a very dynamic situation right now. Since the last earnings call, we have seen an H20 ban, and then after that there was the BIS new guidelines that just came in overnight...If you look at the allocation of the usage of these chips, obviously they'll be used for the applications that will generate immediate returns for us. For example, in the advertising business as well as content recommendation product, where we actually would be using a lot of these GPUs to generate results and generate returns for us. Secondly, in terms of the training of our large language models, they will be of the next priority and the training actually requires higher-end chips."

Team Green's engineering team has likely been strong-armed into designing further compromised hardware as "exclusive" sanction-conforming options for important enterprise customers in China. Tencent seems to have enough pre-ban specimens to tide things over for a while. The firm's president envisioned a comfortable position for the foreseeable future: "over the past few months, we (started) to move off the concept or the belief of American tech companies—which they call 'the scaling law'—which required continuous expansion of the training cluster. And now we can see even with a smaller cluster you can actually achieve very good training results. And there's a lot of potential that we can get on the post-training side which do not necessarily meet very large clusters. We should have enough high-end chips to continue our training of models for a few more generations going forward." Huawei's controversial Ascend 910C AI accelerator seems to be the top alternative contender; tech watchdogs believe that this design's fortunes will be closely tied to the rising dominance of DeepSeek. Fairly recent leaks have indicated impressive progress within China's domestic AI accelerator infrastructure.

KIOXIA Announces First Enterprise NVMe SSD with 8th Gen BiCS FLASH Technology

KIOXIA America, Inc. today announced the development and prototype demonstration of its new KIOXIA CM9 Series PCIe 5.0 NVMe SSDs. These next-generation drives are the first enterprise SSDs built with KIOXIA's 8th generation BiCS FLASH TLC-based 3D flash memory, which incorporates CBA (CMOS directly Bonded to Array) technology - an architectural innovation that delivers significant advances in power efficiency, performance and density, while doubling available capacity per flash device.

As the demands of modern computing intensify, applications such as AI, machine learning, and high-performance computing require a sophisticated solid state storage infrastructure - requiring not only enterprise-class performance but also higher power efficiency to ensure scalability and manageable operational costs. Addressing these requirements is central to the design of the KIOXIA CM9 Series, which is purpose-built to support next-generation data center workloads.

Lenovo Unveils New Generation of AI PC Desktops and Monitors

Lenovo unveiled its new generation of business devices for modern workplaces, featuring a comprehensive selection of AI-powered ThinkCentre M Series Gen 6 desktops and ThinkVision T Series Gen 40 monitors. Designed to address the needs of businesses of all sizes, the ThinkCentre family of desktops combines performance and reliability across multiple form factors, including tower, compact, and all-in-one (AIO), while the ThinkVision T Series monitors blend outstanding display quality, connectivity, and manageability.

"Nearly half of businesses believe that AI-powered devices boost employee productivity, and 90 percent of those are already piloting, planning or exploring AI-powered PC rollouts, according to a recent Lenovo and IDC global survey of IT decision-makers. AI has and will continue to reshape the future of work, and Lenovo is proud to lead the way with a new generation of AI desktop PCs and monitors that meet the changing needs of businesses in this new era," said Johnson Jia, senior vice president of Intelligent Devices Group's Global Innovation Center at Lenovo. "Our latest ThinkCentre M Series Gen 6 desktops and ThinkVision T Series Gen 40 monitors power businesses of all sizes with scalable performance to unlock next-gen AI productivity and creativity."

MiTAC Computing Deploys Latest AMD EPYC 4005 Series Processors

MiTAC Computing Technology Corp., a subsidiary of MiTAC Holdings Corp. and a leading manufacturer in server platform design, introduced its latest offering featuring the AMD EPYC 4005 Series processors. These updated server solutions offer enhanced performance and energy efficiency to meet the growing demands of modern business workloads, including AI, cloud services, and data analytics.

"The new AMD EPYC 4005 Series processors deliver the performance and capabilities our customers need at a price point that makes ownership more attractive and attainable," said Derek Dicker, corporate vice president, Enterprise and HPC Business, AMD. "We're enabling businesses to own their computing infrastructure at an economical price, while providing the performance, security features and efficiency modern workloads demand."
Jul 12th, 2025 03:00 CDT
