News Posts matching #AI

Acer Unveils New Swift Go AI PCs with Intel Core Ultra Processors

Acer has expanded its Swift family of thin-and-light laptops with new Intel Core Ultra processors, which feature Intel's first neural processing unit (NPU) and integrated AI acceleration capabilities. Even more performance-minded, capable, and intuitive for content creation, schoolwork, productivity, and play, the new Swift laptops pair powerful processing with AI-supported features to improve everyday usability.

"After unveiling our first Intel Core Ultra laptops last month, we're debuting even more products in our Swift line to help a wider range of customers take advantage of premium laptop experiences and AI-supported technology for more exciting and effective PC use," said James Lin, General Manager, Notebooks, IT Products Business, Acer. "Plus, these laptops feature impressive updates that help customers do more - and do them even better."

Acer Expands SpatialLabs Stereoscopic 3D Portfolio with New Laptop and Gaming Monitor

Acer today announced the extension of its SpatialLabs stereoscopic 3D lineup to the Aspire line of laptops and Predator gaming monitors.

The new Aspire 3D 15 SpatialLabs Edition laptop delivers captivating 3D content for entertainment and creation on its 15.6-inch UHD display. It also comes with a suite of AI-powered SpatialLabs applications for glasses-free 3D viewing and content creation, delighting users as they watch their favorite content and letting developers see their designs in their real 3D forms. With Microsoft Copilot in Windows 11, users can experience upscaled creativity and productivity through AI-powered task assistance, while Acer's AI-supported solutions, Acer PurifiedView and PurifiedVoice, elevate conference calls on the 3D laptop.

Intel Appoints Justin Hotard to Lead Data Center and AI Group

Intel Corporation today announced the appointment of Justin Hotard as executive vice president and general manager of its Data Center and AI Group (DCAI), effective Feb. 1. He joins Intel with more than 20 years of experience driving transformation and growth in computing and data center businesses, and is a leader in delivering scalable AI systems for the enterprise.

Hotard will become a member of Intel's executive leadership team and report directly to CEO Pat Gelsinger. He will be responsible for Intel's suite of data center products spanning enterprise and cloud, including its Intel Xeon processor family, graphics processing units (GPUs) and accelerators. He will also play an integral role in driving the company's mission to bring AI everywhere.

Microsoft Copilot Becomes a Dedicated Key on Windows-Powered PC Keyboards

Microsoft today announced the introduction of a new Copilot key devoted to its AI assistant on Windows PC keyboards. The key will provide instant access to Microsoft's conversational Copilot feature, offering a ChatGPT-style AI bot at the press of a button. The Copilot key represents the first significant Windows keyboard change in nearly 30 years, since the addition of the Windows key itself in the 90s. Microsoft sees it as similarly transformative - making AI an integrated part of devices. The company expects broad adoption from PC manufacturers starting this spring. The Copilot key will likely replace keys like the menu or Office key on standard layouts. While it currently just launches Copilot, Microsoft could also enable combo presses in the future.

The physical keyboard button helps make AI feel native rather than an add-on, as Microsoft aggressively pushes Copilot into Windows 11 and Edge. The company declared its aim to make 2024 the "year of the AI PC", with Copilot as the entry point. Microsoft envisions AI eventually becoming seamlessly woven into computing through system, silicon, and hardware advances. The Copilot key may appear minor, but it signals that profound change is on the horizon. However, users will only embrace the vision if Copilot proves consistently beneficial rather than gimmicky. Microsoft is betting that injecting AI deeper into PCs will provide usefulness, justifying the disruption. With major OS and hardware partners already committed to adopting the Copilot key, Microsoft's AI-first computer vision is materializing rapidly. The button press that invokes Copilot may soon feel as natural as hitting the Windows key or spacebar. As we await the reported launch of Windows 12, we can expect deeper integration with Copilot to appear.

MINISFORUM Unveils V3 AMD Tablet

MINISFORUM unveiled a high-end tablet convertible based on the Windows 11 x64 platform. Called simply the V3 AMD Tablet, this 3-in-1 convertible can be used as a 14-inch tablet or combined with a dock that adds a keyboard, trackpad, and a stand. The tablet measures 318 mm x 213.8 mm x 9.8 mm (WxDxH) and weighs 946 g. Its 14-inch, 16:10 aspect-ratio display offers a 2560 x 1600 pixel resolution, a 165 Hz refresh rate, 100% DCI-P3 coverage, and 500 nits maximum brightness. The display is backed by a sensitive touchscreen that supports MPP 2.6 SLA stylus input, with sensitivity suited to natural handwriting.

Connectivity includes Wi-Fi 6E with Bluetooth 5.3, a USB-C V-Link port (DP in), two 40 Gbps USB4 ports, a fingerprint reader, and a 4-pole headset jack. Under the hood, the MINISFORUM V3 is powered by an AMD Ryzen 7 8040U series "Hawk Point" processor, paired with 32 GB of LPDDR5-6400 memory and a 2 TB M.2 Gen 4 NVMe SSD. The SoC has a 28 W configured TDP, and MINISFORUM has devised a cooling solution with four flat copper heat pipes and dual fans. The tablet also has a four-speaker setup and a multi-directional microphone. The front camera is 2 MP with full Windows Hello compatibility, while the rear camera is 5 MP. Powering it all is a 50.82 Wh battery, charged from a 65 W USB-PD power source over a Type-C connector. Windows 11 Pro 23H2 with Ryzen AI enablement comes pre-installed. The company didn't reveal pricing.

Report: Global Semiconductor Capacity Projected to Reach Record High 30 Million Wafers Per Month in 2024

Global semiconductor capacity is expected to increase 6.4% in 2024 to top the 30 million wafers per month (wpm) mark for the first time, after rising 5.5% to 29.6 million wpm in 2023, SEMI announced today in its latest quarterly World Fab Forecast report.

The 2024 growth will be driven by capacity increases in leading-edge logic and foundry, applications including generative AI and high-performance computing (HPC), and the recovery in end-demand for chips. The capacity expansion slowed in 2023 due to softening semiconductor market demand and the resulting inventory correction.

Intel and DigitalBridge Launch Articul8, an Enterprise Generative AI Company

Intel Corp and DigitalBridge Group, Inc., a global investment firm, today announced the formation of Articul8 AI, Inc. (Articul8), an independent company offering enterprise customers a full-stack, vertically-optimized and secure generative artificial intelligence (GenAI) software platform. The platform delivers AI capabilities that keep customer data, training and inference within the enterprise security perimeter. The platform also provides customers the choice of cloud, on-prem or hybrid deployment.

Articul8 was created with intellectual property (IP) and technology developed at Intel, and the two companies will remain strategically aligned on go-to-market opportunities and collaborate on driving GenAI adoption in the enterprise. Arun Subramaniyan, formerly vice president and general manager in Intel's Data Center and AI Group, has assumed leadership of Articul8 as its CEO.

DEEPX's DX-M1 Chip Recognized at CES 2024 as Leading AI of Things Solution

DEEPX (CEO, Lokwon Kim), an original AI semiconductor technology company, is announcing that it has surpassed 40 customers for its flagship chip solution, DX-M1—the only AI accelerator on the market to combine low power consumption, high efficiency and performance, and cost-effectiveness. The groundbreaking solution has been deployed for a hands-on trial to this customer pool, which spans global companies and domestic Korean enterprises across various sectors.

DEEPX is currently running an Early Engagement Customer Program (EECP) to provide customers with early access to its small camera module, a one-chip solution featuring DX-V1; M.2 module featuring DX-M1; and DXNN, the company's developer environment. This allows customers to receive pre-production validation of DEEPX's hardware and software, integrate them into mass-produced products, and realize AI technology innovations with the brand's technical support.

SK hynix to Exhibit AI Memory Leadership at CES 2024

SK hynix Inc. announced today that it will showcase the technology behind its ultra-high-performance memory products, the core of future AI infrastructure, at CES 2024, the most influential tech event in the world, taking place January 9 through 12 in Las Vegas. SK hynix said that it will highlight its future vision, represented by its Memory Centric theme at the show, and promote the importance of memory products in accelerating technological innovation in the AI era, as well as its competitiveness in the global memory markets.

The company will run a space titled SK Wonderland jointly with other major SK Group affiliates including SK Inc., SK Innovation and SK Telecom, and showcase its major AI memory products including HBM3E. SK hynix plans to provide HBM3E, the world's best-performing memory product that it successfully developed in August, to the world's largest AI technology companies by starting mass production from the first half of 2024.

Neuchips to Showcase Industry-Leading Gen AI Inferencing Accelerators at CES 2024

Neuchips, a leading AI Application-Specific Integrated Circuits (ASIC) solutions provider, will demo its revolutionary Raptor Gen AI accelerator chip (previously named N3000) and Evo PCIe accelerator card LLM solutions at CES 2024. Raptor, the new chip solution, enables enterprises to deploy large language models (LLMs) inference at a fraction of the cost of existing solutions.

"We are thrilled to unveil our Raptor chip and Evo card to the industry at CES 2024," said Ken Lau, CEO of Neuchips. "Neuchips' solutions represent a massive leap in price to performance for natural language processing. With Neuchips, any organisation can now access the power of LLMs for a wide range of AI applications."

Alphawave Semi Partners with Keysight to Deliver a Complete PCIe 6.0 Subsystem Solution

Alphawave Semi (LSE: AWE), a global leader in high-speed connectivity for the world's technology infrastructure, today announced successful collaboration with Keysight Technologies, a market-leading design, emulation, and test solutions provider, demonstrating interoperability between Alphawave Semi's PCIe 6.0 64 GT/s Subsystem (PHY and Controller) Device and Keysight PCIe 6.0 64 GT/s Protocol Exerciser, negotiating a link to the maximum PCIe 6.0 data rate. Alphawave Semi, already on the PCI-SIG 5.0 Integrators list, is accelerating next-generation PCIe 6.0 Compliance Testing through this collaboration.

Alphawave Semi's leading-edge silicon implementation of the new PCIe 6.0 64 GT/s Flow Control Unit (FLIT)-based protocol enables higher data rates for hyperscale and data infrastructure applications. Keysight and Alphawave Semi achieved another milestone by successfully establishing a CXL 2.0 link, setting the stage for future cache coherency in the data center.

LG Ushers in 'Zero Labor Home' With Its Smart Home AI Agent at CES 2024

LG Electronics (LG) is ready to unveil its innovative smart home Artificial Intelligence (AI) agent at CES 2024. LG's smart home AI agent boasts robotic, AI and multi-modal technologies that enable it to move, learn, comprehend and engage in complex conversations. An all-around home manager and companion rolled into one, LG's smart life solution enhances users' daily lives and showcases the company's commitment to realizing its "Zero Labor Home" vision.

With its advanced 'two-legged' wheel design, LG's smart home AI agent is able to navigate the home independently. The intelligent device can verbally interact with users and express emotions through movements made possible by its articulated leg joints. Moreover, the use of multi-modal AI technology, which combines voice and image recognition along with natural language processing, enables the smart home AI agent to understand context and intentions as well as actively communicate with users.

Samsung Electronics and Red Hat Partnership To Lead Expansion of CXL Memory Ecosystem With Key Milestone

Samsung Electronics, a world leader in advanced memory technology, today announced that for the first time in the industry, it has successfully verified Compute Express Link (CXL) memory operations in a real user environment with open-source software provider Red Hat, leading the expansion of its CXL ecosystem.

Due to the exponential growth of data throughput and memory requirements in emerging fields like generative AI, autonomous driving, and in-memory databases (IMDBs), the demand for systems with greater memory bandwidth and capacity is also increasing. CXL is a unified interface standard that connects various processors, such as CPUs and GPUs, with memory devices over a PCIe interface, and it can serve as a solution to the speed, latency, and expandability limitations of existing systems.

SUNON: Pioneering Innovative Liquid Cooling Solutions for Modern Data Centers

In the era of high-tech development and the ever-increasing demand for data processing power, data centers are consuming more energy and generating excess heat. As a global leader in thermal solutions, SUNON is at the forefront, offering a diverse range of cutting-edge liquid cooling solutions tailored to advanced data centers equipped with high-capacity CPU and GPU computing for AI, edge, and cloud servers.

SUNON's liquid cooling design services are ideally suited for modern data centers, generative AI computing, and high-performance computing (HPC) applications. These solutions are meticulously customized to fit the cooling space and server density of each data center. With their compact yet comprehensive design, they guarantee exceptional cooling efficiency and reliability, ultimately contributing to a significant reduction in a client's total cost of ownership (TCO) in the long term. In the pursuit of net-zero emissions standards, SUNON's liquid cooling solutions play a pivotal role in enhancing corporate sustainability. They offer a win-win scenario for clients seeking to transition toward greener and more digitalized operations.

MemryX Demos Production Ready AI Accelerator (MX3) During 2024 CES Show

MemryX Inc. is announcing the availability of production level silicon of its cutting-edge AI Accelerator (MX3). MemryX is a pioneering startup specializing in accelerating artificial intelligence (AI) processing for edge devices. In less than 30 days after receiving production silicon from TSMC, MemryX will publicly showcase the ability to efficiently run hundreds of unaltered AI models at the 2024 Consumer Electronics Show (CES) in Las Vegas from Jan 9 through Jan 12.

Apple Wants to Store LLMs on Flash Memory to Bring AI to Smartphones and Laptops

Apple has been experimenting with the Large Language Models (LLMs) that power most of today's AI applications. The company wants these LLMs to serve users well and to run efficiently, which is a difficult task as they require a lot of resources, including compute and memory. Traditionally, LLMs have required AI accelerators in combination with large DRAM capacity to store model weights. However, Apple has published a paper that aims to bring LLMs to devices with limited memory capacity. By storing LLMs on NAND flash memory (regular storage), the method builds an inference cost model that harmonizes with flash memory behavior, guiding optimization in two critical areas: reducing the volume of data transferred from flash and reading data in larger, more contiguous chunks. Instead of keeping the model weights in DRAM, Apple wants to store them in flash memory and pull them into DRAM on demand only when they are needed.

Two principal techniques are introduced within this flash memory-informed framework: "windowing" and "row-column bundling." These methods collectively enable running models up to twice the size of the available DRAM, with a 4-5x and 20-25x increase in inference speed compared to naive loading approaches on CPU and GPU, respectively. Integrating sparsity awareness, context-adaptive loading, and a hardware-oriented design paves the way for practical inference of LLMs on devices with limited memory, such as SoCs with 8/16/32 GB of available DRAM. Especially with DRAM being far more expensive per gigabyte than NAND flash, setups such as smartphone configurations could store and run inference on LLMs with multi-billion parameters, even if the available DRAM alone isn't sufficient. For a more technical deep dive, read the paper on arXiv here.
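To make the "windowing" idea more concrete, here is a minimal sketch, not Apple's implementation: the class name, sizes, and cost constants are illustrative assumptions. It models a DRAM-resident cache that keeps recently used weight rows in memory and fetches misses from flash in one large, contiguous batch, tracking the two quantities the paper optimizes: the number of flash reads and the bytes transferred.

# Hypothetical sketch of flash-backed "windowing" weight loading (Python).
# Not Apple's code: names, sizes, and cost constants are illustrative assumptions.
from collections import OrderedDict

FLASH_READ_LATENCY_S = 1e-4   # assumed per-request flash latency
FLASH_BANDWIDTH_BPS = 2e9     # assumed sequential flash throughput, bytes/s

class WindowedWeightCache:
    """Keeps recently used weight rows in DRAM; loads misses from flash in one contiguous batch."""
    def __init__(self, flash_store, row_bytes, dram_budget_bytes):
        self.flash = flash_store                 # dict: row_id -> bytes, standing in for NAND
        self.row_bytes = row_bytes
        self.capacity = dram_budget_bytes // row_bytes
        self.window = OrderedDict()              # LRU window of rows held in DRAM
        self.flash_reads = 0
        self.bytes_from_flash = 0

    def get_rows(self, row_ids):
        missing = [r for r in row_ids if r not in self.window]
        if missing:
            # One large read for all misses instead of many small ones,
            # mirroring "reading data in larger, more contiguous chunks".
            self.flash_reads += 1
            self.bytes_from_flash += len(missing) * self.row_bytes
            for r in sorted(missing):
                self.window[r] = self.flash[r]
        for r in row_ids:
            self.window.move_to_end(r)           # mark as recently used
        result = {r: self.window[r] for r in row_ids}
        while len(self.window) > self.capacity:  # evict least recently used rows
            self.window.popitem(last=False)
        return result

    def estimated_io_time(self):
        # Toy cost model: request latency plus raw transfer time.
        return (self.flash_reads * FLASH_READ_LATENCY_S
                + self.bytes_from_flash / FLASH_BANDWIDTH_BPS)

# Example: 1,000 weight rows on "flash", DRAM budget for only 64 of them.
flash = {i: bytes(4096) for i in range(1000)}
cache = WindowedWeightCache(flash, row_bytes=4096, dram_budget_bytes=64 * 4096)
cache.get_rows([1, 2, 3]); cache.get_rows([2, 3, 4])   # second call fetches only row 4
print(cache.flash_reads, cache.bytes_from_flash, cache.estimated_io_time())

The same bookkeeping generalizes to the paper's neuron-level sparsity: only rows predicted to be active for the current tokens need to be requested at all, which is what keeps the working set inside the DRAM window.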

Phison Predicts 2024: Security is Paramount, PCIe 5.0 NAND Flash Infrastructure Imminent as AI Requires More Balanced AI Data Ecosystem

Phison Electronics Corp., a global leader in NAND flash controller and storage solutions, today announced the company's predictions for 2024 trends in NAND flash infrastructure deployment. The company predicts that rapid proliferation of artificial intelligence (AI) technologies will continue apace, with PCIe 5.0-based infrastructure providing high-performance, sustainable support for AI workload consistency as adoption rapidly expands. PCIe 5.0 NAND flash solutions will be at the core of a well-balanced hardware ecosystem, with private AI deployments such as on-premise large language models (LLMs) driving significant growth in both everyday AI and the infrastructure required to support it.

"We are moving past initial excitement over AI toward wider everyday deployment of the technology. In these configurations, high-quality AI output must be achieved by infrastructure designed to be secure, while also being affordable. The organizations that leverage AI to boost productivity will be incredibly successful," said Sebastien Jean, CTO, Phison US. "Building on the widespread proliferation of AI applications, infrastructure providers will be responsible for making certain that AI models do not run up against the limitations of memory - and NAND flash will become central to how we configure data center architectures to support today's developing AI market while laying the foundation for success in our fast-evolving digital future."

Top Ten IC Design Houses Ride Wave of Seasonal Consumer Demand and Continued AI Boom to See 17.8% Increase in Quarterly Revenue in 3Q23

TrendForce reports that 3Q23 has been a historic quarter for the world's leading IC design houses as total revenue soared 17.8% to reach a record-breaking US$44.7 billion. This remarkable growth is fueled by a robust season of stockpiling for smartphones and laptops, combined with a rapid acceleration in the shipment of generative AI chips and components. NVIDIA, capitalizing on the AI boom, emerged as the top performer in revenue and market share. Notably, analog IC supplier Cirrus Logic overtook US PMIC manufacturer MPS to snatch the tenth spot, driven by strong demand for smartphone stockpiling.

NVIDIA's revenue soared 45.7% to US$16.5 billion in the third quarter, bolstered by sustained demand for generative AI and LLMs. Its data center business—accounting for nearly 80% of its revenue—was a key driver in this exceptional growth.

Intel Preparing Habana "Gaudi2C" SKU for the Chinese AI Market

Intel's software team has added support in its open-source Linux drivers for an unannounced Habana "Gaudi2C" AI accelerator variant. Little is documented about the mysterious Gaudi2C, which shares a core identity with Intel's flagship Gaudi2 data center training and inference chip that is otherwise broadly available. The new revision is distinguished only by a PCI ID of "3" in the latest patch set for Linux 6.8. Speculation circulates that Gaudi2C may be a version tailored to meet China-specific demands, similar to Intel's Gaudi2 HL-225B SKU launched in July with reduced interconnect links. With US export bans restricting sales of advanced hardware to China, including Intel's leading Gaudi2 products, creating reduced-capability spinoffs that meet export regulations lets Intel maintain crucial Chinese revenue.

Meanwhile, Intel's upstream Linux contributions remain focused on hardening Gaudi/Gaudi2 support, which is now considered "very stable" by lead driver developer Oded Gabbay. Minor new additions reflect maturity, not instability. The open-sourced foundations contrast with NVIDIA's proprietary driver model, a key competitive argument Intel makes to service developers using Habana Labs hardware. With the SynapseAI software suite reaching stability, some enterprises could consider Gaudi accelerators as an alternative to NVIDIA. And with Gaudi3 arriving next year, the ecosystem should become more competitive thanks to higher performance targets.

Moore Threads Launches MTT S4000 48 GB GPU for AI Training/Inference and Presents 1000-GPU Cluster

Chinese chipmaker Moore Threads has launched its first domestically produced 1,000-card AI training cluster, dubbed the KUAE Intelligent Computing Center. A central part of the KUAE cluster is Moore Threads' new MTT S4000 accelerator card, with 48 GB of VRAM, the company's third-generation MUSA GPU architecture, and 768 GB/s of memory bandwidth. The card delivers 25 TeraFLOPS in FP32, 50 TeraFLOPS in TF32, and up to 200 TeraFLOPS in FP16/BF16; INT8 is also supported at 200 TOPS. The MTT S4000 targets both training and inference, leveraging Moore Threads' high-speed MTLink 1.0 intra-system interconnect to scale cards for distributed model-parallel training of models with hundreds of billions of parameters. The card also provides graphics, video encoding/decoding, and 8K display capabilities for graphics workloads. Moore Threads' KUAE cluster combines the S4000 GPU hardware with RDMA networking, distributed storage, and integrated cluster management software. The KUAE Platform oversees multi-datacenter resource allocation and monitoring, while KUAE ModelStudio hosts training frameworks and model repositories to streamline development.

With integrated solutions now proven at the thousand-GPU scale, Moore Threads is positioned to power ubiquitous intelligent applications, from scientific computing to the metaverse. The KUAE cluster reportedly achieves near-linear scaling at 91% efficiency. Taking a 200-billion-token training run as an example, Zhiyuan Research Institute's 70-billion-parameter Aquila2 can complete training in 33 days, and a model with 130 billion parameters can complete training in 56 days on the KUAE cluster. In addition, the Moore Threads KUAE kilocard cluster supports long-term continuous and stable operation, breakpoint resume of training, and asynchronous checkpointing that takes less than two minutes. On the software side, Moore Threads also boasts full compatibility with NVIDIA's CUDA framework: its MUSIFY tool translates CUDA code to the MUSA GPU architecture at supposedly zero migration cost, i.e., no performance penalty.
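As a rough sanity check on those training-time figures, the widely used ~6·N·D FLOPs rule of thumb for transformer training (N parameters, D tokens) can be compared against the cluster's aggregate BF16 throughput; the utilization value in the sketch below is an assumption for illustration, not a number Moore Threads has published.

# Back-of-the-envelope estimate using the common ~6*N*D training-FLOPs rule of thumb (Python).
# The 15% end-to-end utilization is an assumed value, not a Moore Threads figure.
def training_days(params, tokens, num_gpus, peak_flops_per_gpu, utilization):
    total_flops = 6 * params * tokens                         # approximate training compute
    cluster_flops = num_gpus * peak_flops_per_gpu * utilization
    return total_flops / cluster_flops / 86_400               # 86,400 seconds per day

# 70B-parameter model, 200B tokens, 1,000 cards at 200 TFLOPS (BF16) each.
print(round(training_days(70e9, 200e9, 1_000, 200e12, 0.15)))  # ~32 days, near the quoted 33

Under these assumptions the quoted 33-day figure is consistent with an effective utilization in the mid-teens, which is plausible for large model-parallel runs once communication and checkpointing overheads are included.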

Samsung Announces the Galaxy Book4 Series: The Most Intelligent and Powerful Galaxy Book Yet

Samsung Electronics Co., Ltd. today announced the release of its most intelligent PC lineup yet: Galaxy Book4 Ultra, Book4 Pro and Book4 Pro 360. The latest series comes with a new intelligent processor, a more vivid and interactive display and a robust security system—beginning a new era of AI PCs that offers ultimate productivity, mobility and connectivity. These enhancements not only improve the device itself but also elevate the entire Samsung Galaxy ecosystem, advancing the PC category and accelerating Samsung's vision of AI innovation—for both today and tomorrow.

"Samsung is committed to empowering people to experience new possibilities that enhance their everyday lives. This new paradigm can be achieved through our expansive Galaxy ecosystem and open collaboration with other industry leaders," said TM Roh, President and Head of Mobile eXperience Business at Samsung Electronics. "The Galaxy Book4 series plays a key role in bringing best-in-class connectivity to our ecosystem that will broaden how people interact with their PC, phone, tablet and other devices for truly intelligent and connected experiences."

GIGABYTE Introduces New AORUS 17 and AORUS 15 AI-Powered Gaming Laptops with Intel Core Ultra 7 Processors

GIGABYTE, the world's leading computer brand, proudly introduces the latest evolution in gaming laptops for 2024 - the AORUS 17 and AORUS 15 - delivering cutting-edge performance in their signature sleek and portable package.

Powered by the all-new Intel Core Ultra 7 processors and equipped with full-powered NVIDIA GeForce RTX 40 Series Laptop GPUs alongside expandable DDR5 memory, the AORUS 17 and AORUS 15 effortlessly handle demanding gaming and creative tasks on the go. The exclusive WINDFORCE Infinity cooling technology ensures optimal performance in a super-portable chassis, while the addition of Dolby Vision and Dolby Atmos technologies provides an immersive personal cinema experience.

Acer Debuts AI-Ready Swift Go 14 Laptop with New Intel Core Ultra Processors and OLED Display

Acer today announced new models of the AI-ready Acer Swift Go 14 (SFG14-72) powered by Intel Core Ultra processors that feature Intel Arc graphics processing unit (GPU) and Intel AI Boost, its new integrated neural processing unit (NPU), to deliver efficient computing performance of AI workloads and immersive experiences on the thin-and-light laptop. Students, professionals, and creators can leverage the Swift Go 14's array of AI features such as Acer PurifiedVoice and Acer PurifiedView for videoconferencing and customization tools on the OLED laptop. Accomplishing tasks and workflows are also made easier on the Swift laptop with Microsoft's Copilot in Windows 11.

"Our new Swift Go 14 goes beyond its stylish design and high-resolution display, delivering the latest suite of collaboration technology to support a wide variety of functions and lifestyles," said James Lin, General Manager, Notebooks, Acer Inc. "The Swift Go 14 is one of the first devices in the market to be outfitted with Intel Core Ultra processors, paving the way to enhance support of generative AI tasks on more Acer devices moving forward."

Acer Unleashes New Predator Triton Neo 16 with Intel Core Ultra Processors

Acer today announced the new Predator Triton Neo 16 (PTN16-51) gaming laptop, designed with the new Intel Core Ultra processors with dedicated AI acceleration capabilities and NVIDIA GeForce RTX 40 Series GPUs to support demanding games and creative applications. Players and content creators can marvel at enhanced video game scenes and designs on the laptop's Calman-Verified 16-inch display with up to a stunning 3.2K resolution and a 165 Hz refresh rate, producing accurate colors right out of the box.

The state-of-the-art cooling system combines a 5th Gen AeroBlade fan and liquid metal thermal grease on the CPU to keep the laptop running at full steam, while users stay on top of communications and device management thanks to the AI-enhanced Acer PurifiedVoice 2.0 software and the PredatorSense utility app. This Windows 11 gaming PC also provides players with amazing performance experiences and one month of Xbox Game Pass for access to hundreds of high-quality PC games.

TYAN Upgrades HPC, AI and Data Center Solutions with the Power of 5th Gen Intel Xeon Scalable Processors

TYAN, a leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, today introduced upgraded server platforms and motherboards based on the brand-new 5th Gen Intel Xeon Scalable Processors, formerly codenamed Emerald Rapids.

The 5th Gen Intel Xeon processor scales up to 64 cores and features a larger shared cache, higher UPI and DDR5 memory speeds, as well as PCIe 5.0 with 80 lanes. Growing and excelling with workload-optimized performance, the 5th Gen Intel Xeon delivers more compute power and faster memory within the same power envelope as the previous generation. "5th Gen Intel Xeon is the second processor offering inside the 2023 Intel Xeon Scalable platform, offering improved performance and power efficiency to accelerate TCO and operational efficiency," said Eric Kuo, Vice President of Server Infrastructure Business Unit, MiTAC Computing Technology Corporation. "By harnessing the capabilities of Intel's new Xeon CPUs, TYAN's 5th Gen Intel Xeon-supported solutions are designed to handle the intense demands of HPC, data centers, and AI workloads."