Tuesday, December 5th 2017

IBM Unveils Industry's Most Advanced Server Designed for Artificial Intelligence

IBM today unveiled its next-generation Power Systems servers incorporating its newly designed POWER9 processor. Built specifically for compute-intensive AI workloads, the new POWER9 systems can improve the training times of deep learning frameworks by nearly 4x, allowing enterprises to build more accurate AI applications, faster.

The new POWER9-based AC922 Power Systems are the first to embed PCI-Express 4.0, next-generation NVIDIA NVLink and OpenCAPI, which combined can accelerate data movement, calculated at 9.5x faster than PCI-E 3.0 based x86 systems. The system was designed to drive demonstrable performance improvements across popular AI frameworks such as Chainer, TensorFlow and Caffe, as well as accelerated databases such as Kinetica. As a result, data scientists can build applications faster, from deep learning insights in scientific research to real-time fraud detection and credit risk analysis.
POWER9 is at the heart of the soon-to-be most powerful data-intensive supercomputers in the world, the U.S. Department of Energy's "Summit" and "Sierra" supercomputers, and has been tapped by Google.

"Google is excited about IBM's progress in the development of the latest POWER technology," said Bart Sano, VP of Google Platforms. "The POWER9 OpenCAPI Bus and large memory capabilities allow for further opportunities for innovation in Google data centers."

"We've built a game-changing powerhouse for AI and cognitive workloads," said Bob Picciano, SVP of IBM Cognitive Systems. "In addition to arming the world's most powerful supercomputers, IBM POWER9 systems are designed to enable enterprises around the world to scale to unprecedented insights, driving scientific discovery and enabling transformational business outcomes across every industry."
Accelerating the Future with POWER9
Deep learning is a fast-growing machine learning method that extracts information by crunching through millions of data points to detect and rank the most important features of the data.
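As a toy illustration of that idea (a from-scratch sketch, not any of the frameworks named in this article): a single linear "neuron" trained by gradient descent crunches through thousands of examples and learns to rank which input actually matters.

```python
import random

# Toy learning step: discover the weights of y = 3*x0 + 0*x1 by
# repeatedly crunching through random examples (gradient descent).
random.seed(0)
w = [0.0, 0.0]
lr = 0.1
for _ in range(2000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    y = 3 * x[0]                        # ground truth: only x0 matters
    pred = w[0] * x[0] + w[1] * x[1]
    err = pred - y
    for i in range(2):                  # gradient step on squared error
        w[i] -= lr * err * x[i]

# After training, the model has "ranked" x0 as the important feature:
# w[0] is close to 3 and w[1] is close to 0.
print(w)
```

Real deep learning stacks many such layers and runs millions of these updates, which is exactly the compute-bound loop GPUs and fast interconnects accelerate.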

To meet these growing industry demands, IBM set out four years ago to design the POWER9 chip from a blank sheet, building a new architecture to manage free-flowing data, streaming sensors and algorithms for data-intensive AI and deep learning workloads on Linux.

IBM is the only vendor that can provide enterprises with an infrastructure that incorporates cutting-edge hardware and software with the latest open-source innovations.

With PowerAI, IBM has optimized and simplified the deployment of deep learning frameworks and libraries on the Power architecture with acceleration, allowing data scientists to be up and running in minutes. IBM Research is developing a wide array of technologies for the Power architecture. IBM researchers have already cut deep learning times from days to hours with the PowerAI Distributed Deep Learning toolkit.
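The idea behind distributed deep learning can be sketched in a few lines (a toy model of data parallelism; the function names are illustrative, not PowerAI's actual API): each worker computes a gradient on its own shard of the data, the gradients are averaged across workers, and every worker applies the same update.

```python
# Miniature data-parallel training step: the pattern distributed deep
# learning toolkits scale across many nodes and GPUs.
def local_gradient(w, shard):
    # Gradient of squared error for the target y = 2*x on this shard.
    return sum(2 * (w * x - 2 * x) * x for x in shard) / len(shard)

def distributed_step(w, shards, lr=0.05):
    grads = [local_gradient(w, s) for s in shards]  # one per "worker"
    avg = sum(grads) / len(grads)                   # all-reduce average
    return w - lr * avg                             # identical update everywhere

w = 0.0
shards = [[1.0, 2.0], [3.0, 4.0]]  # the dataset split across two workers
for _ in range(200):
    w = distributed_step(w, shards)
print(w)  # converges toward 2.0
```

Because each worker only touches its shard, adding workers shrinks the per-step data each one must process; the cost that remains is the gradient exchange, which is why interconnect bandwidth matters so much for training time.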
Building an Open Ecosystem to Fuel Innovation
The era of AI demands more than tremendous processing power and unprecedented speed; it also demands an open ecosystem of innovative companies delivering technologies and tools. IBM serves as a catalyst for innovation to thrive, fueling an open, fast-growing community of more than 300 OpenPOWER Foundation and OpenCAPI Consortium members.

8 Comments on IBM Unveils Industry's Most Advanced Server Designed for Artificial Intelligence

#2
Renald
Why is IBM always out of touch with the market?

AI is about scalability, because it's very expensive. And there you go, they just tell us "we have the biggest one, and for sure the most expensive one. And on top of that, it's POWER9 architecture." U MAD BRO?

Seriously, everybody works with x86 ... mainly because scaling the system and managing the cost is easier. Here you have to buy a 20 MT nuclear warhead just to get started. WTF ...
#3
erixx
I want to marry her! :oops:
#4
CheapMeat
I wish AMD & Intel did more L4 (eDRAM or HBM2) on their enthusiast and server lines. This thing has an amazing amount of cache, and IBM's SMT is also great: 4-way and 8-way. AnandTech did a review and it scales really well.
#5
efikkan
Renald said:
Why is IBM always out of touch with the market?

AI is about scalability, because it's very expensive. And there you go, they just tell us "we have the biggest one, and for sure the most expensive one. And on top of that, it's POWER9 architecture." U MAD BRO?

Seriously, everybody works with x86 ... mainly because scaling the system and managing the cost is easier. Here you have to buy a 20 MT nuclear warhead just to get started. WTF ...
They might have a tough time due to the architecture and the high price, but Power does have a niche market with customers needing high bandwidth for GPUs.

But Power is dying, just like SPARC, Alpha and Itanium. Its last stand is very specialized compute workloads, but even there I don't see a bright future for Power. And if IBM goes all in on AI, then they will surely kill it. Power + Nvidia Tesla may offer the ultimate AI performance at the moment, but this will change within a few years when AI ASICs arrive, and then super-expensive Power CPUs along with Teslas won't stand a chance against "cheap" specialized chips.
CheapMeat said:
I wish AMD & Intel did more L4 (eDRAM or HBM2) on their enthusiast and server lines. This thing has an amazing amount of cache, and IBM's SMT is also great: 4-way and 8-way. AnandTech did a review and it scales really well.
Well, only certain workloads scale well above two threads per core. As you increase the thread count, cache efficiency decreases, resulting in more cache misses and stalls. Power9 actually introduces a 4-way SMT variant alongside the 8-way SMT of Power8, because fewer threads per core scale better for many workloads, such as VMs. Having many threads per core is only beneficial when the threads are mostly stalled anyway and the cache misses don't get much worse from it.
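The stall argument can be seen in miniature with ordinary threads (a loose analogy, not SMT itself; here Python's single interpreter lock plays the role of the shared core): extra threads only pay off when each one spends most of its time waiting.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def stalled_task(_):
    time.sleep(0.05)  # mostly waiting, like a thread stalled on a cache miss

# Eight stalled "threads" sharing one interpreter: their waits overlap.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(stalled_task, range(8)))
overlapped = time.perf_counter() - t0  # close to a single 0.05 s wait

# The same eight stalls, run one after another.
t0 = time.perf_counter()
for i in range(8):
    stalled_task(i)
serialized = time.perf_counter() - t0  # close to 8 * 0.05 s

print(f"overlapped: {overlapped:.2f}s, serialized: {serialized:.2f}s")
```

If the tasks were compute-bound instead of sleeping, the shared interpreter would serialize them and the extra threads would buy nothing, which is the same trade-off SMT8 faces on cache-friendly, compute-heavy code.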
#6
Renald
efikkan said:
this will change within a few years when AI ASICs arrive
Indeed!
Modern CPUs and GPUs are not made for this kind of work, and ASIC systems end up far more advanced and cheaper (mainly in terms of space and power consumption).

Even if the niche exists and is profitable, it is going to collapse really fast in the coming years. I think it's a really bad move to commit to this kind of change right now. Many European companies, or companies with activities in Europe, will have to deal with the new data-protection law (GDPR: publications.europa.eu/en/publication-detail/-/publication/3e485e15-11bd-11e6-ba9a-01aa75ed71a1/language-en).
It's going to be a huge mess in the next few years, and we are more focused on that than on buying a powerful server.

Not respecting it can cost you €10M (yes, million) or 5% of your revenue, and you pay whichever is higher! So a startup can face a €10M fine ...
#7
rtwjunkie
PC Gaming Enthusiast
Why is everyone complaining about cost? It's not for you consumers.

:p It's the beginning of Skynet...we will all be irrelevant anyway soon.
#8
bim27142
rtwjunkie said:
Why is everyone complaining about cost? It's not for you consumers.
my thoughts exactly...