News Posts matching #Deep Learning


Intel Puts Out Additional "Cascade Lake" Performance Numbers

Intel late last week put out additional real-world HPC and AI compute performance numbers for its upcoming "Cascade Lake" 2x 48-core (96 cores in total) machine, compared to AMD's EPYC 7601 2x 32-core (64 cores in total) machine. You'll recall that on November 5th, the company put out Linpack, Stream Triad, and Deep Learning Inference numbers, which are all synthetic benchmarks. In a new set of slides, the company revealed a few real-world HPC/AI application performance numbers, including MIMD Lattice Computation (MILC), Weather Research and Forecasting (WRF), OpenFOAM, NAMD scalable molecular dynamics, and YASK.

The Intel 96-core setup with its 12-channel memory interface belts out up to 1.5X performance in MILC, up to 1.6X in WRF and OpenFOAM, up to 2.1X in NAMD, and up to 3.1X in YASK, compared to the AMD EPYC 7601 2P machine. The company also put out system configuration and disclaimer slides with the usual forward-looking CYA. "Cascade Lake," which comes out by the end of 2018, will be Intel's main competitor to AMD's 2P-capable 64-core EPYC "Rome" processor. Intel's product is a multi-chip module of two 24~28-core dies, with a 2x 6-channel DDR4 memory interface.

Intel Announces Cascade Lake Advanced Performance and Xeon E-2100

Intel today announced two new members of its Intel Xeon processor portfolio: Cascade Lake advanced performance (expected to be released the first half of 2019) and the Intel Xeon E-2100 processor for entry-level servers (general availability today). These two new product families build upon Intel's foundation of 20 years of Intel Xeon platform leadership and give customers even more flexibility to pick the right solution for their needs.

"We remain highly focused on delivering a wide range of workload-optimized solutions that best meet our customers' system requirements. The addition of Cascade Lake advanced performance CPUs and Xeon E-2100 processors to our Intel Xeon processor lineup once again demonstrates our commitment to delivering performance-optimized solutions to a wide range of customers," said Lisa Spelman, Intel vice president and general manager of Intel Xeon products and data center marketing.

Intel and Philips Accelerate Deep Learning Inference on CPUs in Medical Imaging

Using Intel Xeon Scalable processors and the OpenVINO toolkit, Intel and Philips tested two healthcare use cases for deep learning inference models: one on X-rays of bones for bone-age-prediction modeling, the other on CT scans of lungs for lung segmentation. In these tests, Intel and Philips achieved a speed improvement of 188 times for the bone-age-prediction model, and a 38 times speed improvement for the lung-segmentation model over the baseline measurements.

"Intel Xeon Scalable processors appear to be the right solution for this type of AI workload. Our customers can use their existing hardware to its maximum potential, while still aiming to achieve quality output resolution at exceptional speeds," said Vijayananda J., chief architect and fellow, Data Science and AI at Philips HealthSuite Insights.

Intel "Cooper Lake" Latest 14nm Stopgap Between "Cascade Lake" and "Ice Lake"

With no end to its 10 nm transition woes in sight (at least not until late 2019), Intel is left refining its existing CPU micro-architectures on the 14 nanometer node. The client-desktop segment sees the introduction of "Whiskey Lake" (aka Coffee Lake Refresh) later this year, while the enterprise segment gets the 14 nm "Cascade Lake." To its credit, "Cascade Lake" introduces a few major platform innovations, such as support for Optane Persistent Memory, silicon-level hardening against recent security vulnerabilities, and Deep Learning Boost, hardware acceleration for neural-network workloads built around the new VNNI (Vector Neural Network Instructions). "Cascade Lake" makes its debut towards the end of 2018. It will be succeeded in 2019 not by "Ice Lake," but by the new "Cooper Lake" architecture.
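The centerpiece of VNNI is the VPDPBUSD instruction, which fuses what previously took a sequence of AVX-512 instructions: multiplying pairs of 8-bit values, widening the products, and accumulating them into 32-bit lanes. A minimal Python sketch of one lane's semantics (the function name is ours, for illustration):

```python
def vpdpbusd_lane(acc, u8x4, s8x4):
    """Emulate one 32-bit lane of VNNI's VPDPBUSD: four
    unsigned-8 x signed-8 products are summed and added to a
    32-bit accumulator, all in a single instruction."""
    assert all(0 <= u <= 255 for u in u8x4), "first operand is unsigned 8-bit"
    assert all(-128 <= s <= 127 for s in s8x4), "second operand is signed 8-bit"
    return acc + sum(u * s for u, s in zip(u8x4, s8x4))

# A tiny int8 dot product, processed four pairs at a time:
acc = 0
a = [1, 2, 3, 4, 5, 6, 7, 8]      # e.g. unsigned 8-bit activations
w = [1, -1, 1, -1, 1, -1, 1, -1]  # e.g. signed 8-bit weights
for i in range(0, len(a), 4):
    acc = vpdpbusd_lane(acc, a[i:i + 4], w[i:i + 4])
print(acc)  # 1-2+3-4+5-6+7-8 = -4
```

This is why the feature matters for inference: quantized int8 models get a dot product's multiply-widen-accumulate chain in one instruction per lane instead of three.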

"Cooper Lake" is a refresh of "Cascade Lake," and a stopgap in Intel's saga of getting 10 nm right, so it could build "Ice Lake" on it. It will be built on the final (hopefully) iteration of the 14 nm node. It will share its platform with "Cascade Lake," and so Optane Persistent Memory support carriers over. What's changed is the Deep Learning Boost feature-set, which will be augmented with a few new instructions, including BFLOAT16 (a possible half-precision floating point instruction). Intel could also be presented with the opportunity to crank up clock speeds across the board.

GIGABYTE Announces Two New Powerful Deep Learning Engines

GIGABYTE, an industry leader in server hardware for high performance computing, has released two new powerful 4U GPU servers to bring massive parallel computing capabilities into your datacenter: the 8 x SXM2 GPU G481-S80, and the 10 x GPU G481-HA0. Both products offer some of the highest GPU density available on the market in this form factor.

As artificial intelligence is becoming more widespread in our daily lives, such as for image recognition, autonomous vehicles or medical research, more organizations need deep learning capabilities in their datacenter. Deep learning requires a powerful engine that can deal with the massive volumes of data processing required. GIGABYTE is proud to provide our customers with two new solutions for such an engine.