Monday, June 8th 2015

NVIDIA Tapes Out "Pascal" Based GP100 Silicon

Sources tell 3DCenter.org that NVIDIA has successfully taped out its next big silicon based on its upcoming "Pascal" GPU architecture, codenamed GP100. A successor to GM200, this chip will be the precursor to several others based on the architecture. A tape-out means that the chip's final design has been handed off to the foundry, which produces a small quantity of working prototypes for internal testing and further development. It's usually seen as a major milestone in a product's development cycle.

With "Pascal," NVIDIA will pole-vault HBM1, which is making its debut with AMD's "Fiji" silicon; and jump straight to HBM2, which will allow SKU designers to cram up to 32 GB of video memory. 3DCenter.org speculates that GP100 could feature anywhere between 4,500 to 6,000 CUDA cores. The chip will be built on TSMC's upcoming 16 nanometer silicon fab process, which will finally hit the road by 2016. The GP100, and its companion performance-segment silicon, the GP104 (successor to GM204), are expected to launch between Q2 and Q3, 2016.
Source: 3DCenter.org

49 Comments on NVIDIA Tapes Out "Pascal" Based GP100 Silicon

#1
Bad Bad Bear
Hopefully 4K screens will have 100 Hz+ refresh rates with G-Sync by then :) 2 of these Pascal beauties should churn out a pretty pixel or two in SLI
Posted on Reply
#2
ZoneDymo
Bad Bad BearHopefully 4K screens will have 100 Hz+ refresh rates with G-Sync by then :) 2 of these Pascal beauties should churn out a pretty pixel or two in SLI
Companies must love consumers like you: instead of demanding a single card that can do that, you're happily dreaming of the day you need to purchase two $600+ cards to do it.

On the article:
Is it just me, or does Nvidia look borderline desperate at this moment to keep consumers away from the not-even-shown R9 390X?
Making statements about their next card and how they will follow AMD with HBM memory, but jump straight to version 2.0.
Posted on Reply
#3
btarunr
Editor & Senior Moderator
ZoneDymoMaking statements about their next card and how they will follow AMD with HBM memory, but jump straight to version 2.0.
They didn't make any statement.
Posted on Reply
#4
the54thvoid
Super Intoxicated Moderator
ZoneDymoIs it just me, or does Nvidia look borderline desperate at this moment to keep consumers away from the not-even-shown R9 390X?
Making statements about their next card and how they will follow AMD with HBM memory, but jump straight to version 2.0.
Yeah, it's just you. As Bta says, it's from a leaked source, not a press release. I didn't see you making the same statements when AMD's Arctic Islands info was released around Titan X time. Or did I miss that?
Posted on Reply
#5
arbiter
ZoneDymoOn the article:
Is it just me, or does Nvidia look borderline desperate at this moment to keep consumers away from the not-even-shown R9 390X?
Making statements about their next card and how they will follow AMD with HBM memory, but jump straight to version 2.0.
Pascal is a 2016 GPU. Why would Nvidia use HBM1 just to be a year behind in GPU memory? You have to realize that what you said is completely stupid. HBM2 is slated for later this year. As for being desperate, I'm not sure if you've seen the market share numbers, but I wouldn't say Nvidia is desperate in the slightest to keep customers from the R9 390X, a GDDR5 card.
Posted on Reply
#6
Bad Bad Bear
ZoneDymoCompanies must love consumers like you: instead of demanding a single card that can do that, you're happily dreaming of the day you need to purchase two $600+ cards to do it.
Why settle for 1 when I can afford 2 or 3 ? Overkill tastes like sweet honey to me. I'll write a stern letter to Nvidia immediately and demand they release a single card that will push 4K res with ultra settings, or else !!!!!:roll:
Posted on Reply
#7
ZoneDymo
Bad Bad BearWhy settle for 1 when I can afford 2 or 3 ? Overkill tastes like sweet honey to me. I'll write a stern letter to Nvidia immediately and demand they release a single card that will push 4K res with ultra settings, or else !!!!!:roll:
or else... you won't buy their product? Good! And if many more did the same, then maybe we would get some honest progress instead of them just pushing 10% more performance every time for 600 dollars.
They do it this way to get the most money out of people. Sure, a company wants max profit and all that, but as a consumer you can stand up against it by not buying BS updates.
Posted on Reply
#8
ZoneDymo
btarunrThey didn't make any statement.
Oh right, they "leaked" it, right right right.
the54thvoidYeah, it's just you. As Bta says, it's from a leaked source, not a press release. I didn't see you making the same statements when AMD's Arctic Islands info was released around Titan X time. Or did I miss that?
Don't know anything about Arctic Islands or any statement, sooo yeah, you probably did not see me make a statement about something I was, and still am, unaware of.
arbiterPascal is a 2016 GPU. Why would Nvidia use HBM1 just to be a year behind in GPU memory? You have to realize that what you said is completely stupid. HBM2 is slated for later this year. As for being desperate, I'm not sure if you've seen the market share numbers, but I wouldn't say Nvidia is desperate in the slightest to keep customers from the R9 390X, a GDDR5 card.
It's not about them using newer tech in newer cards.
It's about how they phrase it. If AMD came out now and said "yeah well, we are going to release a card a little later that will have 100% faster HBM3", would you not think that's a little "don't waste your time on that product, wait a bit and get something much better from us!"?
Posted on Reply
#10
Breit
What happened to Volta? Shouldn't Volta be the first architecture from nVidia with stacked DRAM and Pascal the architecture after Volta?
Posted on Reply
#11
Bad Bad Bear
ZoneDymoor else... you won't buy their product? Good! And if many more did the same, then maybe we would get some honest progress instead of them just pushing 10% more performance every time for 600 dollars.
They do it this way to get the most money out of people. Sure, a company wants max profit and all that, but as a consumer you can stand up against it by not buying BS updates.
Young man, the 'bleeding edge' never has been and never will be about value for money. Never assume that users on this forum are not capable of researching a product's real performance capabilities and making an informed decision, however subjective that may be. Each to their own, of course.
Posted on Reply
#12
NC37
And if the hype is to be believed, Pascal will really be something compared to Maxwell. Which is exactly what I'm sure they want people to do: hold out till 2016 and not get caught up in the Fury bandwagon.
Posted on Reply
#13
arbiter
ZoneDymoIt's not about them using newer tech in newer cards.
It's about how they phrase it. If AMD came out now and said "yeah well, we are going to release a card a little later that will have 100% faster HBM3", would you not think that's a little "don't waste your time on that product, wait a bit and get something much better from us!"?
Even if HBM3 were 100% faster, given the performance of HBM2, I doubt it would provide any big jump over HBM2. The bottleneck at that point won't be memory so much as the GPU being fast enough to process it all. Kinda like CPUs are now with 1600 MHz DDR3 RAM vs 2400 MHz DDR3: yeah, there is some boost in certain uses, but not many that the majority of people ever notice, unless you do that kind of work every day on a time-sensitive schedule.
Posted on Reply
#14
Breit
arbiterEven if HBM3 were 100% faster, given the performance of HBM2, I doubt it would provide any big jump over HBM2. The bottleneck at that point won't be memory so much as the GPU being fast enough to process it all. Kinda like CPUs are now with 1600 MHz DDR3 RAM vs 2400 MHz DDR3: yeah, there is some boost in certain uses, but not many that the majority of people ever notice, unless you do that kind of work every day on a time-sensitive schedule.
I'm not sure you can compare it that way...?! o_O
For system memory, latency is what matters most, as opposed to graphics memory, where bandwidth matters most. Going from DDR3-1600 with CL7 to DDR3-2400 with CL10, there isn't much latency gained (if any), only bandwidth. Unfortunately that doesn't translate into much of a performance boost in most applications.
On the other hand, GPUs are mostly bandwidth-starved anyway, so more is always better. Don't forget that current-gen GPUs are built around the memory architecture and the bandwidth it can deliver. So with nearly limitless memory bandwidth, I'm sure they'll come up with a GPU architecture to put that bandwidth to good use. :rolleyes:
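To put rough numbers on the latency point (a back-of-the-envelope sketch using the CL7 and CL10 figures above, nothing more):

    # Absolute CAS latency = CAS cycles / memory clock. DDR3-1600 clocks at 800 MHz
    # and DDR3-2400 at 1200 MHz, since the MT/s rating is double the clock.
    def cas_latency_ns(transfer_rate_mts, cas_cycles):
        clock_mhz = transfer_rate_mts / 2
        return cas_cycles / clock_mhz * 1000  # nanoseconds

    print(f"DDR3-1600 CL7:  {cas_latency_ns(1600, 7):.2f} ns")   # ~8.75 ns
    print(f"DDR3-2400 CL10: {cas_latency_ns(2400, 10):.2f} ns")  # ~8.33 ns
    # Nearly identical latency; only the bandwidth (1600 vs. 2400 MT/s) differs.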
Posted on Reply
#15
Vayra86
Oh come on.

We 'the customers' are not going to force AMD or Nvidia to push large performance gains with every new release, even with a 50/50 market share. The market doesn't work like that, and has never worked like that. Ever since the Riva TNT days we have seen marginal performance increases, and all companies involved push for a relative performance increase per gen. This makes sense economically, and because the market relies very strongly on a 'trickle-down' effect for new tech.

HBM is exactly the same as everything else. Fury won't be an astounding performance difference and neither will Pascal. Does that mean HBM will be 'handicapped' to perform less than it could? No - the increased bandwidth is being balanced against the power of the GPU core, resulting in a minor performance increase across the board. Overall, HBM will relieve some bottlenecks imposed by GDDR5, but all that does is introduce a new bottleneck on the core.

The occasional 'jumps' that we see always get pushed back into the relative pricing scheme. Look at the 970: a major performance jump, but they just cut down the silicon and placed the fully enabled chip at a much higher price point.
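One way to make the 'new bottleneck on the core' point concrete is a roofline-style ridge point: the number of FLOPs a kernel must do per byte fetched before the core, rather than memory, becomes the limit. The sketch below is purely illustrative; the GPU figures are placeholders, not leaked Pascal specs:

    # Ridge point = peak compute / memory bandwidth, in FLOPs per byte.
    def ridge_point(peak_tflops, bandwidth_gbs):
        return peak_tflops * 1000 / bandwidth_gbs

    gddr5_card = ridge_point(peak_tflops=6.0, bandwidth_gbs=336)    # roughly GM200-class (placeholder)
    hbm2_card = ridge_point(peak_tflops=10.0, bandwidth_gbs=1024)   # hypothetical HBM2 part (placeholder)

    print(f"GDDR5-era ridge point: {gddr5_card:.1f} FLOPs/byte")    # ~17.9
    print(f"HBM2-era ridge point:  {hbm2_card:.1f} FLOPs/byte")     # ~9.8
    # A lower ridge point means more workloads end up limited by the core, not memory -
    # the balancing act described above.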
Posted on Reply
#16
Aquinus
Resident Wat-man
Bad Bad BearWhy settle for 1 when I can afford 2 or 3 ? Overkill tastes like sweet honey to me. I'll write a stern letter to Nvidia immediately and demand they release a single card that will push 4K res with ultra settings, or else !!!!!:roll:
Most people don't like it when others brag about the resources they have available to do such things. Given the hardware in your machine, it seems to me you have more money than brains because most people would have to think long and hard before buying a 650 USD GPU and you have 3 of them. Don't even get me started on EE CPUs. Now you might want "the best," but don't act like a snob about it. Nothing pisses me off more than people who show off how they have money.

The simple fact is that we want GPU technology to move forward without a ludicrous price attached to it so us lowly regular people can afford them. If you have so much money, maybe you could send everyone in this thread a brand new 980 Ti. I'm sure each person would be forever grateful. Otherwise, leave the gloating and arrogance at home. We really don't care if you can afford the most expensive, latest tech, every time it's released.
Posted on Reply
#17
Bad Bad Bear
AquinusMost people don't like it when others brag about the resources they have available to do such things. Given the hardware in your machine, it seems to me you have more money than brains because most people would have to think long and hard before buying a 650 USD GPU and you have 3 of them. Don't even get me started on EE CPUs. Now you might want "the best," but don't act like a snob about it. Nothing pisses me off more than people who show off how they have money.

The simple fact is that we want GPU technology to move forward without a ludicrous price attached to it so us lowly regular people can afford them. If you have so much money, maybe you could send everyone in this thread a brand new 980 Ti. I'm sure each person would be forever grateful. Otherwise, leave the gloating and arrogance at home. We really don't care if you can afford the most expensive, latest tech, every time it's released.
Calm down. The relevance of my post was not to brag, simply to point out that the motives behind people's purchases are very subjective and not always centred around value. I have studied and worked very hard to afford the things I like, so accusing me of snobbery and vocalising class distinctions is just silly and emotive. If you choose to knee-jerk react over my system specs then so be it. I conceded to ZoneDymo in my last post that it's "each to their own".
Posted on Reply
#18
Vayra86
AquinusMost people don't like it when others brag about the resources they have available to do such things. Given the hardware in your machine, it seems to me you have more money than brains because most people would have to think long and hard before buying a 650 USD GPU and you have 3 of them. Don't even get me started on EE CPUs. Now you might want "the best," but don't act like a snob about it. Nothing pisses me off more than people who show off how they have money.

The simple fact is that we want GPU technology to move forward without a ludicrous price attached to it so us lowly regular people can afford them. If you have so much money, maybe you could send everyone in this thread a brand new 980 Ti. I'm sure each person would be forever grateful. Otherwise, leave the gloating and arrogance at home. We really don't care if you can afford the most expensive, latest tech, every time it's released.
Well said, sir.

The first thought in my mind with all these people who jump on the newest GPU at every gen (it is mostly the same people every time) is: 'your stupidity knows no bounds, and you are proud of showing it'. Kudos for that, I guess? :p

Fun fact: those people are also the ones displaying the *least* amount of actual knowledge about GPUs and gaming in particular. I wonder why. A vast majority of them are now disgruntled Titan / Titan X owners who wonder why that $1k card won't get them 60 fps at ultra on 4K. So they buy another...
Posted on Reply
#19
Caring1
okidnaThe idea to put stacked DRAM on a discrete GPU first came from NVIDIA.
Three years after AMD designed it to use on their APUs, i.e. graphics. :slap:
You can be pedantic if you want, but they didn't invent it, they just chose another route.
Posted on Reply
#20
Tatty_Two
Gone Fishing
Gentlemen, this is an enthusiast site. We should not criticise those who want to spend LOTS of money on their hobby, just as we should not criticise me for having a Haswell dual-core system for my part-time interest. I see no egos here; can we get back on topic, please?
Posted on Reply
#21
ensabrenoir
ZoneDymoor else... you won't buy their product? Good! And if many more did the same, then maybe we would get some honest progress instead of them just pushing 10% more performance every time for 600 dollars.
They do it this way to get the most money out of people. Sure, a company wants max profit and all that, but as a consumer you can stand up against it by not buying BS updates.
......what business puts out a product that's massively better than the previous version year after year? Even if they could.... they wouldn't. (A business's long-term survival tends to stem from returning customers.) Should we punish successful companies because they are good at their jobs? Should we all rally to save a bad one that'll squander our hard-earned income anyway....... sheeesh.... who knew video cards were so deep.....
Posted on Reply
#22
qubit
Overclocked quantum bit
Pascal's gonna be the real upgrade that I'm waiting for and will go nicely with my Skylake CPU. I just hope the GTX version won't be cut down like the 980 Ti.
Tatty_OneGentlemen, this is an enthusiast site. We should not criticise those who want to spend LOTS of money on their hobby, just as we should not criticise me for having a Haswell dual-core system for my part-time interest. I see no egos here; can we get back on topic, please?
You only have two cores?!! (The qubit professionally disapprovez :p)
Posted on Reply
#23
Prima.Vera
I don't know about you guys, but I'd rather have a game with photo-realistic graphics at 1080p than one that's merely playable at 4K.
Posted on Reply
#24
midnightoil
BreitWhat happened to Volta? Shouldn't Volta be the first architecture from nVidia with stacked DRAM and Pascal the architecture after Volta?
Volta was delayed by 2-3 years, to 2018. NVIDIA never intended to use HBM; they backed the wrong horse in HMC and didn't think AMD would actually launch consumer products this soon. Hence why, a while ago, they announced Pascal, a stop-gap HBM product before they leave it for HMC with Volta. Both are heavily aimed at CUDA users and supercomputer setups, though... hence why almost nothing has been said about consumer versions... it's all been about NVLink and their partnership with IBM (both of which have zero bearing on the consumer market - or even graphics professionals).

I don't believe the story, though... TSMC supposedly have a lot of problems with their 16nmFF process, and the 16nmFF+ that Pascal will use is nowhere close. I think we'll see a small Pascal chip if we're lucky (a 750 Ti replacement) around this time next year, then Q3-Q4 for the 970/980 replacements, then a new Titan SKU in Q1 '17. And that's being optimistic.

On the other hand, I think AMD fully intends to launch the whole Arctic Islands range at this time next year (on Samsung / GF 14nmFFLP+).

IMO either Pascal in its entirety or its timing could be a complete disaster for NVIDIA.
Posted on Reply
#25
64K
Pascal is probably going to surprise a few people with its performance. I think we will see a bigger performance increase over Maxwell than Maxwell gave us over Kepler. Nvidia has done the engineering for this GPU, but if TSMC can't step up to the plate then it may be in short supply and retailers will gouge.
Posted on Reply