News Posts matching #TensorFlow


Intel Announces "Cooper Lake" 4P-8P Xeons, New Optane Memory, PCIe 4.0 SSDs, and FPGAs for AI

Intel today introduced its 3rd Gen Intel Xeon Scalable processors and additions to its hardware and software AI portfolio, enabling customers to accelerate the development and use of AI and analytics workloads running in data center, network, and intelligent-edge environments. As the industry's first mainstream server processors with built-in bfloat16 support, Intel's new 3rd Gen Xeon Scalable processors make artificial intelligence (AI) inference and training more widely deployable on general-purpose CPUs for applications that include image classification, recommendation engines, speech recognition, and language modeling.
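
The appeal of bfloat16 is that it keeps float32's 8-bit exponent (and therefore its dynamic range) while truncating the mantissa from 23 bits to 7, so conversion is just dropping the low 16 bits. A minimal Python sketch of that truncation, purely illustrative (Intel's hardware does this natively):

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate an IEEE-754 float32 to bfloat16 by keeping the top 16 bits."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    """Widen bfloat16 bits back to float32 by zero-filling the low 16 bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

# Round-tripping shows the precision loss: the value survives to ~2-3
# significant decimal digits, often enough for neural-network weights.
approx = bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.14159265))
# approx == 3.140625
```

Production code would also round-to-nearest before truncating; this sketch simply drops the bits to show the format's layout.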

"The ability to rapidly deploy AI and data analytics is essential for today's businesses. We remain committed to enhancing built-in AI acceleration and software optimizations within the processor that powers the world's data center and edge solutions, as well as delivering an unmatched silicon foundation to unleash insight from data," said Lisa Spelman, Intel corporate vice president and general manager, Xeon and Memory Group.

AMD Announces Radeon Pro VII Graphics Card, Brings Back Multi-GPU Bridge

AMD today announced its Radeon Pro VII professional graphics card targeting 3D artists, engineering professionals, broadcast media professionals, and HPC researchers. The card is based on AMD's "Vega 20" multi-chip module that incorporates a 7 nm (TSMC N7) GPU die, along with a 4096-bit wide HBM2 memory interface, and four memory stacks adding up to 16 GB of video memory. The GPU die is configured with 3,840 stream processors across 60 compute units, 240 TMUs, and 64 ROPs. The card is built in a workstation-optimized add-on card form-factor (rear-facing power connectors and lateral-blower cooling solution).

What separates the Radeon Pro VII from last year's Radeon VII is full double-precision floating-point support at a 1:2 rate relative to FP32, whereas the Radeon VII is locked to 1:4. Specifically, the Radeon Pro VII offers 6.55 TFLOPS of double-precision performance (vs. 3.36 TFLOPS on the Radeon VII). Another major difference is the physical Infinity Fabric bridge interface, which lets you pair up to two of these cards in a multi-GPU setup to double the memory capacity to 32 GB. Each GPU has two Infinity Fabric links running at 1333 MHz, with a per-direction bandwidth of 42 GB/s. This brings the total bidirectional bandwidth to a whopping 168 GB/s, more than twice the PCIe 4.0 x16 limit of 64 GB/s.
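
As a back-of-the-envelope sanity check (our arithmetic, not AMD's), the quoted figures are mutually consistent, assuming each stream processor executes one FMA (2 FLOPs) per clock:

```python
# Quoted specs from the announcement.
STREAM_PROCESSORS = 3840
FP64_TFLOPS = 6.55        # double-precision peak
FP64_TO_FP32_RATE = 0.5   # 1:2 FP64:FP32 rate

# FP32 peak implied by the 1:2 rate.
fp32_tflops = FP64_TFLOPS / FP64_TO_FP32_RATE           # 13.1 TFLOPS

# One FMA = 2 FLOPs per SP per clock, so the implied engine clock is:
implied_clock_ghz = fp32_tflops * 1e12 / (2 * STREAM_PROCESSORS) / 1e9
# ~1.71 GHz, in line with a "Vega 20" peak clock

# Infinity Fabric: two links, 42 GB/s per direction each.
links, per_direction_gbps = 2, 42
total_bidirectional = links * per_direction_gbps * 2    # 168 GB/s
```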

ASUS Announces Tinker Edge R with AI Machine-Learning Capabilities

ASUS today announced Tinker Edge R, a single-board computer (SBC) specially designed for AI applications. It is built around the Rockchip RK3399Pro SoC, whose integrated neural processing unit (NPU) serves as a machine-learning (ML) accelerator that speeds up processing, lowers power demands, and makes it easier to build connected devices and intelligent applications.

With this integrated ML accelerator, Tinker Edge R can perform three tera-operations per second (3 TOPS) at low power consumption. It also features an optimized neural-network (NN) architecture, so Tinker Edge R supports multiple ML frameworks and allows many common ML models to be compiled and run easily.

Arm Delivers New Edge Processor IPs for IoT

Today, Arm announced significant additions to its artificial intelligence (AI) platform, including new machine learning (ML) IP: the Arm Cortex-M55 processor and the Arm Ethos-U55 NPU, the industry's first microNPU (Neural Processing Unit) for Cortex-M, designed to deliver a combined 480x leap in ML performance for microcontrollers. The new IP and its supporting unified toolchain give AI hardware and software developers more ways to innovate, thanks to unprecedented levels of on-device ML processing for billions of small, power-constrained IoT and embedded devices.

Intel Announces Broadest Product Portfolio for Moving, Storing, and Processing Data

Intel Tuesday unveiled a new portfolio of data-centric solutions consisting of 2nd-Generation Intel Xeon Scalable processors, Intel Optane DC memory and storage solutions, and software and platform technologies optimized to help its customers extract more value from their data. Intel's latest data center solutions target a wide range of use cases within cloud computing, network infrastructure and intelligent edge applications, and support high-growth workloads, including AI and 5G.

Building on more than 20 years of world-class data center platforms and deep customer collaboration, Intel's data center solutions target server, network, storage, internet of things (IoT) applications and workstations. The portfolio of products advances Intel's data-centric strategy to pursue a massive $300 billion data-driven market opportunity.

Micron 5210 ION SSD Now Generally Available

Micron Technology, Inc. today announced broad market availability of the Micron 5210 ION enterprise SATA SSD, the world's first quad-level cell (QLC) SSD, which began shipping to select customers and partners in May of this year. Available through global distributors, the 5210 ION further accelerates Micron's lead in the QLC market, enabling the replacement of hard disk drives (HDDs) with SSDs and building on Micron's recent launch of the Crucial P1 NVMe QLC SSD for consumer markets.

Enterprise storage needs are increasing as data center applications deliver real-time user insights and intelligent and enhanced user experiences, leveraging artificial intelligence (AI), machine learning, big data and real-time analytics. At the same time, there is a growing consumer need for higher storage capacity to support digital experiences. QLC SSDs are uniquely designed to address these requirements.

QNAP Introduces the TS-2888X AI-ready NAS

QNAP Systems, Inc. introduces the brand-new TS-2888X AI-Ready NAS, an all-in-one AI solution combining robust storage with a ready-to-use software environment that simplifies AI workflows with high cost-efficiency. Built on next-gen Intel Xeon W processors with up to 18 cores and employing a hybrid storage architecture with eight hard drives and twenty high-performance SSDs (including four U.2 SSDs), the TS-2888X also supports up to four high-end graphics cards and runs QNAP's "QuAI" AI developer package. The TS-2888X packs everything required for machine learning, helping organizations quickly and easily implement AI applications.

"Compared with typical AI workstations, the TS-2888X combines high-performance computing with huge-capacity storage to greatly reduce latency, accelerate data transfer, and eliminate performance bottlenecks caused by network connectivity," said David Tsao, Product Manager at QNAP. "Integrating AI-focused hardware and software reduces the time and complexity of implementing and managing AI tasks, making the TS-2888X the ideal AI solution for most organizations."

AMD and Xilinx Announce a New World Record for AI Inference

At today's Xilinx Developer Forum in San Jose, Calif., Xilinx CEO Victor Peng was joined by AMD CTO Mark Papermaster for a Guinness. But not the kind that comes in a pint - the kind that comes in a record book. The companies revealed that AMD and Xilinx have been working jointly to connect AMD EPYC CPUs and the new Xilinx Alveo line of acceleration cards for high-performance, real-time AI inference processing. To back it up, they demonstrated a world-record 30,000 images-per-second inference throughput!

The impressive system, which will be featured in the Alveo ecosystem zone at XDF today, leverages two AMD EPYC 7551 server CPUs with their industry-leading PCIe connectivity, along with eight of the freshly announced Xilinx Alveo U250 acceleration cards. The inference performance is powered by Xilinx ML Suite, which allows developers to optimize and deploy accelerated inference and supports numerous machine-learning frameworks such as TensorFlow. The benchmark was performed on GoogLeNet, a widely used convolutional neural network.
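
To put the record in perspective, a quick breakdown (our arithmetic, not from the announcement, assuming GoogLeNet's standard 224x224x3 input):

```python
# Quoted record and system configuration.
total_images_per_second = 30_000
alveo_u250_cards = 8

# Per-card share of the throughput.
per_card = total_images_per_second / alveo_u250_cards   # 3750 images/s/card

# Host-to-card ingest bandwidth needed for raw uint8 GoogLeNet inputs.
input_bytes = 224 * 224 * 3                             # 150,528 bytes/image
ingest_gb_s = total_images_per_second * input_bytes / 1e9  # ~4.5 GB/s total
```

The ~4.5 GB/s aggregate ingest rate is comfortably within the PCIe lane budget of two EPYC 7551 CPUs, which is presumably why the press release highlights their PCIe connectivity.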

VIA Launches ALTA DS 3 Edge AI System Powered by Qualcomm Snapdragon 820E

VIA Technologies, Inc., today announced the launch of the VIA ALTA DS 3 Edge AI system. Powered by the Qualcomm Snapdragon 820E Embedded Platform, the system enables the rapid development and deployment of intelligent signage, kiosk, and access control devices that require real-time image and video capture, processing, and display capabilities.

The VIA ALTA DS 3 harnesses the cutting-edge compute, graphics, and AI processing capabilities of the Qualcomm Snapdragon 820E Embedded Platform to facilitate the creation of vibrant new user experiences by allowing customers to combine their own AI applications with immersive multimedia signage display content in a compact, low-power system.

The Laceli AI Compute Stick is Here to Compete Against Intel's Movidius

Gyrfalcon Technology Inc, an emerging AI chip maker in Silicon Valley, CA, launches its Laceli AI Compute Stick, following Intel Movidius' announcement of its deep-learning Neural Compute Stick in July of last year. Built around the company's first ultra-low-power, high-performance AI processor, the Lightspeeur 2801S, the Laceli AI Compute Stick delivers 2.8 TOPS within 0.3 W of power, roughly 90 times the power efficiency of the Movidius USB stick (0.1 TOPS within 1 W).
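
The efficiency claim checks out on paper (our arithmetic, using the figures quoted above):

```python
# Figures quoted in the announcement.
laceli_tops, laceli_watts = 2.8, 0.3
movidius_tops, movidius_watts = 0.1, 1.0

laceli_eff = laceli_tops / laceli_watts          # ~9.33 TOPS/W
movidius_eff = movidius_tops / movidius_watts    # 0.1 TOPS/W
ratio = laceli_eff / movidius_eff                # ~93x, "90 times" as quoted
```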

Lightspeeur is based on Gyrfalcon Technology Inc's APiM architecture, which uses memory as the AI processing unit. This eliminates the huge data movement that results in high power consumption. The architecture features true, on-chip parallelism, in situ computing, and eliminates memory bottlenecks. It has roughly 28,000 parallel computing cores and does not require external memory for AI inference.
