Monday, September 21st 2015
NVIDIA GP100 Silicon Moves to Testing Phase
NVIDIA's next-generation flagship graphics processor, codenamed "GP100," has reportedly graduated to the testing phase, in which a limited batch of completed chips is sent from the foundry partner to NVIDIA for testing and evaluation. The chips were spotted passing through changeover airports on their way to NVIDIA. 3DCenter.org predicts that the GP100, based on the company's "Pascal" GPU architecture, will feature no fewer than 17 billion transistors, and will be built on TSMC's 16 nm FinFET+ node. The GP100 will feature an HBM2 memory interface; HBM2 allows up to 32 GB of memory to be crammed onto a single package, and the flagship product based on GP100 could feature about 16 GB. NVIDIA's design goal could be to squeeze out anywhere between 60-90% higher performance than the current-generation flagship GTX TITAN X.
Source:
3DCenter.org
65 Comments on NVIDIA GP100 Silicon Moves to Testing Phase
Seriously though, goodbye 28 nm, you shall not be missed. About time we moved to a smaller process :rockout:
I really do hope they are 16nm parts. It's long past due.
It will cost buyers more to pull a Titan/GF Ti on GP100. Neither brand produces 'affordable' flagships these days. Unfortunately.
What is more important is how the architecture stacks up, as AMD do have a bit of a laurel to sit on for DX12. Pascal has been touted as 'mixed' compute, but that doesn't mean too much without knowing what the mix is. It needs heavy parallelism to match GCN's ability to process lots of disparate compute queues. All those transistors will be less meaningful if Pascal doesn't address DX12's close-to-the-metal programming model.
Pascal is a long way from being finished.
We will all know what it can do not at launch, but when review samples are tested.
Anyway I'll be waiting for a full GP104 based product.
FWIW, I think you'll find that AMD will also target mixed compute modes (FP16/32/64) for the same reasons that Nvidia and ARM are integrating it. Not every workload requires the power budget or precision constraints of FP32 or FP64.
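To make the FP16/32 trade-off concrete, here's a minimal NumPy sketch (the values are illustrative, not from the article): FP16 halves storage and memory bandwidth per value, but its 10-bit mantissa gives only about three decimal digits of precision, so fine detail that FP32 retains is rounded away.

```python
import numpy as np

# A value representable in FP32 but not in FP16: near 1.0, FP16 values
# are spaced 2^-10 (~0.00098) apart, so the 0.0001 offset is lost.
x = np.float32(1.0001)
y = np.float16(x)          # rounds to the nearest half-precision value

print(float(y))                   # 1.0 -- the fractional detail is gone
print(np.finfo(np.float16).eps)   # ~0.000977, FP16 machine epsilon
print(np.finfo(np.float32).eps)   # ~1.19e-07, FP32 machine epsilon
```

The upshot is that workloads tolerant of coarse precision (like neural-network inference) can run at FP16 for higher throughput, while FP32/FP64 remain available where accuracy matters.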
The full slide deck: on-demand.gputechconf.com/gtc/2015/presentation/S5715-Keynote-Jen-Hsun-Huang.pdf And you're right: right now the best deep-learning architecture for NVIDIA GPUs and cuDNN is the deep CNN (Convolutional Neural Network), which most researchers use for image (2D) classification and detection.
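The core operation a CNN repeats (and the one cuDNN accelerates) is 2D convolution: sliding a small filter over an image. A minimal, unoptimized sketch of that one operation, with an assumed toy edge-detection filter:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the basic op of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1), dtype=image.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot-product of the filter with one image patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy example (assumed, for illustration): a vertical-edge filter
# applied to an image whose right half is bright.
edge = np.array([[1.0, 0.0, -1.0]] * 3)
img = np.zeros((5, 5))
img[:, 2:] = 1.0
response = conv2d(img, edge)
print(response)   # strong negative response at the dark-to-bright edge
```

A real CNN stacks many such filtered outputs with nonlinearities and pooling in between; GPU libraries replace the Python loops with highly parallel kernels, which is exactly the workload GP100-class chips target.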
$500 ten years ago is equivalent to about $610 today due to inflation.
data.bls.gov/cgi-bin/cpicalc.pl?cost1=500&year1=2005&year2=2015