Friday, April 22nd 2016

NVIDIA GP104 "Pascal" ASIC Pictured

Here are two of the first pictures of NVIDIA's upcoming "GP104" graphics processor. This chip will drive at least three new GeForce SKUs bound for a June 2016 launch, and succeeds the GM204 silicon that drives the current-gen GTX 980 and GTX 970. Based on the "Pascal" architecture, the GPU will be built on TSMC's latest 16 nm FinFET+ node. The chip appears to feature a 256-bit wide GDDR5 memory interface, and is rumored to run at a memory clock of 8 Gbps, yielding a memory bandwidth of 256 GB/s.
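The quoted bandwidth figure follows directly from the rumored specs; a quick sketch of the arithmetic (the 8 Gbps effective data rate is the rumor here, not a confirmed spec):

```python
# Peak GDDR5 bandwidth: (bus width in bytes) x per-pin effective data rate.
def gddr5_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return (bus_width_bits / 8) * data_rate_gbps

print(gddr5_bandwidth_gbs(256, 8.0))  # 256.0 GB/s, matching the rumor
# For comparison, GM204 (GTX 980) ships with 256-bit GDDR5 at 7 Gbps:
print(gddr5_bandwidth_gbs(256, 7.0))  # 224.0 GB/s
```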
Sources: ChipHell, AnandTech Forums

56 Comments on NVIDIA GP104 "Pascal" ASIC Pictured

#2
happita
Darn. No GDDR5X then. It woulda been nice, but from what I know production is still not in full swing to get them into chips in time. Maybe they'll do a memory swap halfway into Pascal's life and call them GTX 1070X and GTX 1080X?
Posted on Reply
#3
Nihilus
Hmm, looks like the GTX "1080" will have a core count a bit lower than the 980 Ti's (despite crazy rumors) AND 50% lower bandwidth, possibly with 4 GB and 8 GB variants.

Not sure how you can get excited about that. Launch price will probably be close to what the faster 980 Ti is now, too.

The 1080 Ti is rumored to launch in July as well, but I HIGHLY doubt that. Maybe it's Nvidia's way of making people hold off the Polaris/Vega train.
Posted on Reply
#4
dj-electric
NihilusHmm, looks like the GTX "1080" will have a core count a bit lower than the 980 Ti's (despite crazy rumors)
Who said that? It might be true, but we're talking about 16 nm here, not 28 nm. Core counts rise as lithography shrinks.
Posted on Reply
#5
Masoud1980
Hi to all the good users of this site.
I'm from Iran, and one of the users who loves this site very much.
Excuse me, I'm using Google Translate for English.

I'd be grateful for complete and accurate answers to my questions.

My first question: did NVIDIA really deliberately manipulate the GTX 970's RAM?
Second question: will NVIDIA's Pascal cards be named GTX 1080 and GTX 1070? Is rival AMD angry about that?
Posted on Reply
#6
dj-electric
A1: Nobody really knows if the GTX 970 memory thing was intentional. Some might say it was a simple miscommunication between engineers and the PR team, while others may claim that NVIDIA never wanted the public to know about the whole memory issue. Of course, that didn't prevent the GTX 970 from being one of the most successful pieces of hardware of all time.

A2: Nobody will publicly tell you with 100% certainty that GTX 1080 and GTX 1070 are the names. The people who know are at NVIDIA, plus people who received samples and signed a non-disclosure agreement (NDA), such as people who work at AIB companies like Asus, Gigabyte and MSI.
Posted on Reply
#7
Masoud1980
Dj-ElectriCA1: Nobody really knows if the GTX 970 memory thing was intentional. Some might say it was a simple miscommunication between engineers and the PR team, while others may claim that NVIDIA never wanted the public to know about the whole memory issue.

A2: Nobody will publicly tell you with 100% certainty that GTX 1080 and GTX 1070 are the names. The people who know are at NVIDIA, plus people who received samples and signed a non-disclosure agreement (NDA), such as people who work at AIB companies like Asus, Gigabyte and MSI.
Thank you for the complete answer.

I'm an NVIDIA fan, but I think the 970's VRAM was intentionally manipulated: in tests I see little difference at Full HD, but at 2K or 4K the defective portion of the 970's VRAM makes the difference between the 970 and the 980.

Third question: what is really the difference between the Anger and the Nano? All their specs are the same except for the Nano's 50 MHz lower core clock, yet the frame rates between them in games differ too.
Posted on Reply
#8
Fluffmeister
Dj-ElectriCA1: Nobody really knows if the GTX 970 memory thing was intentional. Some might say it was a simple miscommunication between engineers and the PR team, while others may claim that NVIDIA never wanted the public to know about the whole memory issue.
I think ultimately it boils down to creating a reasonable separation between their product stacks; either way, the GTX 970 still shines bright... I certainly loved mine.

And despite the conspiracy theories, miscommunication does happen:

www.google.co.uk/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=amd%20800%20million

Keeping it local:
www.techpowerup.com/156123/amd-realizes-that-bulldozer-has-800-million-less-transistors-than-it-thought
AMD Realizes That Bulldozer Has 800 Million LESS Transistors Than It Thought!
Posted on Reply
#9
PP Mguire
People are too caught up in the VRAM. Memory bandwidth hasn't been an issue, and the amount is supposed to be 8 GB for GP104. What we have needed is raw GPU power to push 4K pixel density, and that's what this should achieve, or at the very least be a single-GPU 1440p beast. With Titan X's, increasing my memory clock doesn't gain more FPS, but increasing core clock greatly increases FPS at 4K. Meaning we don't really need HBM2; it'll just be nice come GP100's release.
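To put "4K pixel density" in numbers, a quick sketch of per-frame pixel counts, which is roughly how raw shading work scales:

```python
# Pixels per frame at common gaming resolutions. Shading work scales
# roughly with these counts, which is why core throughput matters more
# than extra VRAM capacity when moving up to 4K.
resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
pixels = {name: w * h for name, (w, h) in resolutions.items()}

print(pixels["4K"] / pixels["1080p"])  # 4.0  - 4K is 4x the pixels of 1080p
print(pixels["4K"] / pixels["1440p"])  # 2.25 - and 2.25x those of 1440p
```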
Posted on Reply
#10
Nihilus
Masoud1980Thank you for the complete answer.

I'm an NVIDIA fan, but I think the 970's VRAM was intentionally manipulated: in tests I see little difference at Full HD, but at 2K or 4K the defective portion of the 970's VRAM makes the difference between the 970 and the 980.

Third question: what is really the difference between the Anger and the Nano? All their specs are the same except for the Nano's 50 MHz lower core clock, yet the frame rates between them in games differ too.
Haha, 'Anger' here is Fury translated into Persian and then back out as 'Anger'. Fury = Anger. I understand now.

Anyhow, the Fury and Nano are very close in games. The Nano uses less power. For the same price, I would go with the Fury personally.
I think some made too big a deal of the GTX 970 VRAM debacle. It is still a great card at 1080p, and at 1440p in some cases. Having 4 GB instead of 3.5 GB of VRAM at full speed would not have changed its capabilities.
Posted on Reply
#11
PP Mguire
NihilusHaha, 'Anger' here is Fury translated into Persian and then back out as 'Anger'. Fury = Anger. I understand now.

Anyhow, the Fury and Nano are very close in games. The Nano uses less power. For the same price, I would go with the Fury personally.
I think some made too big a deal of the GTX 970 VRAM debacle. It is still a great card at 1080p, and at 1440p in some cases. Having 4 GB instead of 3.5 GB of VRAM at full speed would not have changed its capabilities.
Good luck trying to tell some people here that. They act like it's the end of the world. My roomie definitely is happy with his 970.
Posted on Reply
#12
HD64G
NihilusHmm, looks like the GTX "1080" will have a core count a bit lower than the 980 Ti's (despite crazy rumors) AND 50% lower bandwidth, possibly with 4 GB and 8 GB variants.

Not sure how you can get excited about that. Launch price will probably be close to what the faster 980 Ti is now, too.

The 1080 Ti is rumored to launch in July as well, but I HIGHLY doubt that. Maybe it's Nvidia's way of making people hold off the Polaris/Vega train.
+1000

My thoughts exactly. Performance GPUs from the green camp won't be on sale before autumn, imho.
Posted on Reply
#13
Basard
Dj-ElectriCWho said that? It might be true, but we're talking about 16 nm here, not 28 nm. Core counts rise as lithography shrinks.
I have no idea what those guys are doing over there... AMD or Nvidia... my "new" 780 has 48/192/2304 ROPs/TMUs/shaders while the 980 has 64/128/2048... The shader counts don't do anything but piss me off; they seem so pointless. They've gone up so high, but the only thing that seems to matter is finding the right balance, I guess.

I didn't pay much attention back in the "pixel pipeline" days; I caught that train after it set sail, with an X800 back when the 7950 GT and GTX were king of the molehill. The X800 had 12-12-12 is all I know, maybe it was 8-8-8... I don't know what the numbers represented back then, just that they were pixel pipelines; I assume they mean the same thing. So what's with the "unified shaders"? Meh, all hoopla...

Even having red or green say "we have this many", it's like AMD vs Intel... So what? If they are garbage cores, then who cares?

Meh, now I'm just making myself look dumb. LOL.....:laugh:

I'll quit before I finish my first beer, before this gets out of hand!
Posted on Reply
#14
FordGT90Concept
"I go fast!1!11!1!"
DarkOCeanThat's pretty small...
You read my mind.
Posted on Reply
#15
newtekie1
Semi-Retired Folder
NihilusHmm, looks like the GTX "1080" will have a core count a bit lower than the 980 Ti's (despite crazy rumors) AND 50% lower bandwidth, possibly with 4 GB and 8 GB variants.

Not sure how you can get excited about that. Launch price will probably be close to what the faster 980 Ti is now, too.
The 970 had a core count that was a lot lower than the 780 Ti's. The 970 had a 256-bit bus while the 780 Ti had 384-bit, like the 980 Ti. Yet the 970 performs basically the same as the 780 Ti, and was half the price. So, yeah, I'm excited about the 1070.
Posted on Reply
#16
Frick
Fishfaced Nincompoop
BasardI have no idea what those guys are doing over there... AMD or Nvidia... my "new" 780 has 48/192/2304 ROPs/TMUs/shaders while the 980 has 64/128/2048... The shader counts don't do anything but piss me off; they seem so pointless. They've gone up so high, but the only thing that seems to matter is finding the right balance, I guess.

I didn't pay much attention back in the "pixel pipeline" days; I caught that train after it set sail, with an X800 back when the 7950 GT and GTX were king of the molehill. The X800 had 12-12-12 is all I know, maybe it was 8-8-8... I don't know what the numbers represented back then, just that they were pixel pipelines; I assume they mean the same thing. So what's with the "unified shaders"? Meh, all hoopla...

Even having red or green say "we have this many", it's like AMD vs Intel... So what? If they are garbage cores, then who cares?

Meh, now I'm just making myself look dumb. LOL.....:laugh:

I'll quit before I finish my first beer, before this gets out of hand!
Pixel pipelines were very important; if you could unlock them with a BIOS edit, you got a very nice boost even before overclocking.
Posted on Reply
#17
BiggieShady
BasardI don't know what the numbers represented back then, just that they were pixel pipelines; I assume they mean the same thing. So what's with the "unified shaders"? Meh, all hoopla...
This is how I remember it: back in the day, pixels were processed in the pixel pipeline and vertices in the vertex pipeline... the balance was everything, because you couldn't use pixel pipes for vertices and vice versa. Why pipelines? Because a good part of the functionality, that is, the actual layout of the pipes, was fixed like the plumbing of a building. Things were half programmable: you could change how the engine renders pixels on surfaces, but not everything the engine does to complete a frame.
Today everything is programmable: unified shading units process both pixels and vertices, and the layout of the pipes is not fixed (graphics pipelines are kind of virtual today and exist in software; we have a DX11 graphics pipeline and a DX12 graphics pipeline, both mapped onto universal parallel compute architectures that may or may not be used for graphics).
Today, with 16 nm and all the optimization done at the level of a single unified shader processor, the units get smaller, so more of them fit on the same die area. Now the balance question is: do we want more of the simpler shader processors, or fewer, smarter ones?
The trend seems to be towards more, simpler cores with better cache systems... which is in line with the approach that favors parallelism.
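That balance point can be sketched with toy numbers (purely illustrative; these pipe counts don't correspond to any real GPU):

```python
# Why unified shaders beat a fixed pixel/vertex split: utilization.
# Numbers are made up; the point is the load balancing, not real hardware.

def fixed_throughput(pixel_pipes, vertex_pipes, pixel_work, vertex_work):
    # Each pool only handles its own kind of work, so the more loaded
    # side determines the frame time while the other side sits idle.
    frame_time = max(pixel_work / pixel_pipes, vertex_work / vertex_pipes)
    return 1 / frame_time

def unified_throughput(units, pixel_work, vertex_work):
    # Any unit can take any work, so the load always balances perfectly.
    frame_time = (pixel_work + vertex_work) / units
    return 1 / frame_time

# A pixel-heavy frame on a 12 pixel + 6 vertex design vs 18 unified units:
print(fixed_throughput(12, 6, pixel_work=120, vertex_work=20))  # 0.1
print(unified_throughput(18, pixel_work=120, vertex_work=20))   # ~0.129, no idle pipes
```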
Posted on Reply
#19
Assimilator
DarkOCeanThat's pretty small...
Is that what she said to you? :p
Posted on Reply
#20
Enterprise24
Small die size. Maybe not 2/3 of GP100's cores (like Maxwell) but 1/2 of GP100's cores (nearly like Kepler).
GP104 should have 1920-2048 cores, not 2560.
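For reference, the big-chip-to-x04-chip ratios being alluded to, using full-die CUDA core counts from public specs (GP100's 3840-core full die is from NVIDIA's Tesla P100 material; any GP104 figure is pure speculation):

```python
# Full-die CUDA core counts for the known chips, and the ratios
# between each generation's big chip and its x04 part.
cores = {
    "GK110": 2880, "GK104": 1536,  # Kepler
    "GM200": 3072, "GM204": 2048,  # Maxwell
    "GP100": 3840,                 # Pascal big chip (Tesla P100)
}
print(cores["GK104"] / cores["GK110"])  # ~0.533, "nearly 1/2"
print(cores["GM204"] / cores["GM200"])  # ~0.667, exactly 2/3
# If GP104 follows Kepler's ratio rather than Maxwell's:
print(cores["GP100"] // 2)              # 1920
```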
Posted on Reply
#21
medi01
Wow, look at AMD stock... +50% overnight.

"Analysts say"... AMD "poised to gain market share" yada yada, Q2 Polaris for the win, CPU licensing deal with Chinese, GPU deal with Apple, promises of "sweet deals" on perf/dollar and perf/watt front

Yeah, I'll keep my fingers crossed. AMD back in the game would be good for all consumers.
Posted on Reply
#22
Ruru
S.T.A.R.S.
Hm, no shim there, unlike the GTX 980/780/680 etc.
Posted on Reply
#23
the54thvoid
Intoxicated Moderator
medi01Wow, look at AMD stock... +50% overnight.

"Analysts say"... AMD "poised to gain market share" yada yada, Q2 Polaris for the win, CPU licensing deal with Chinese, GPU deal with Apple, promises of "sweet deals" on perf/dollar and perf/watt front

Yeah, I'll keep my fingers crossed. AMD back in the game would be good for all consumers.
Yes, it will be nice when AMD actually has a good market presence. For too long, Nvidia's superiority has allowed it to simply stay a step ahead of AMD without even trying too hard on R&D. With AMD promising so much more, we can only expect Nvidia to actually try harder. Despite Fury's reasonably good performance, we can hope AMD's Vega chip (not Polaris) gives us a huge leap in performance. With Zen coming too, with its promised awesome gains, I can see both Intel and Nvidia retreating from the markets. Yes, AMD will have a reigning period where they are unstoppable.

But of course, none of that is actually anywhere near true until they release the products. For now, AMD is still clawing its way back up while Intel and Nvidia are pissing about on other work.
Posted on Reply
#24
Masoud1980
NihilusHaha, 'Anger' here is Fury translated into Persian and then back out as 'Anger'. Fury = Anger. I understand now.

Anyhow, the Fury and Nano are very close in games. The Nano uses less power. For the same price, I would go with the Fury personally.
I think some made too big a deal of the GTX 970 VRAM debacle. It is still a great card at 1080p, and at 1440p in some cases. Having 4 GB instead of 3.5 GB of VRAM at full speed would not have changed its capabilities.
Thanks for the answer.
Google Translate doesn't translate well.
I sense something familiar; are you Iranian or Persian?
Posted on Reply
#25
rtwjunkie
PC Gaming Enthusiast
NihilusHmm, looks like the GTX "1080" will have a core count a bit lower than the 980 Ti's (despite crazy rumors) AND 50% lower bandwidth, possibly with 4 GB and 8 GB variants.
So.....the 1080 (or whatever it shall be) with 256-bit bus has 50% less bandwidth than the 256-bit bus of the 980? :confused: Am I understanding your complaint right?

It fills the same slot the 980 does now (upper mid-level), so I'm not sure why you would compare it to 980Ti. Is it very likely to equal or come very close to the 980Ti in performance? Yes, which is a win all around for consumers, as it will be cheaper than the current 980Ti flagship.
Posted on Reply