Friday, March 11th 2016
NVIDIA "GP104" Silicon to Feature GDDR5X Memory Interface
It looks like NVIDIA's next GPU architecture launch will play out much like its previous two generations: launch the second-biggest chip first, as a well-priced "enthusiast" SKU that outperforms the previous-generation enthusiast product, and launch the biggest chip later, as the high-end enthusiast product. The second-biggest chip based on NVIDIA's upcoming "Pascal" architecture, the "GP104," which could let NVIDIA win the crucial $550 and $350 price-points, will be a lean machine. NVIDIA will design the chip to keep manufacturing costs low enough to score big on price-performance, and in a potential price war with AMD.
As part of its efforts to keep the GP104 as cost-effective as possible, NVIDIA could give exotic new tech such as HBM2 memory a miss and go with GDDR5X. Implementing GDDR5X should be straightforward and cost-effective for NVIDIA, given that it has implemented the nearly identical GDDR5 standard across three previous generations. The new standard doubles memory densities, so one could expect NVIDIA to build its GP104-based products with 8 GB of memory as standard. GDDR5X gives GDDR5 a new lease of life, as GDDR5 clock speeds had plateaued around 7 Gbps per pin. The new standard could launch at speeds of up to 10 Gbps, with 12 Gbps and 14 Gbps to follow. NVIDIA could reserve HBM2 for its biggest "Pascal" chip, on which it could launch its next TITAN product.
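To put those per-pin speeds in perspective, here is a minimal sketch of the peak-bandwidth arithmetic. The 256-bit bus width is an assumption for illustration (GP104's actual bus width had not been confirmed at the time); the per-pin rates are the ones discussed above.

```python
def gddr_bandwidth_gbs(per_pin_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width,
    divided by 8 to convert bits to bytes."""
    return per_pin_gbps * bus_width_bits / 8

# GDDR5 at its ~7 Gbps/pin plateau, on a hypothetical 256-bit bus:
print(gddr_bandwidth_gbs(7, 256))   # 224.0 GB/s
# GDDR5X at the initial 10 Gbps/pin, same hypothetical bus:
print(gddr_bandwidth_gbs(10, 256))  # 320.0 GB/s
# And at the eventual 14 Gbps/pin:
print(gddr_bandwidth_gbs(14, 256))  # 448.0 GB/s
```

Even at the initial 10 Gbps, GDDR5X on an unchanged bus width would deliver roughly 43% more bandwidth than plateaued GDDR5, which is why it is an attractive alternative to HBM2 for a cost-focused chip.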
The GP104 will be built by TSMC on its 16 nm FinFET process. NVIDIA is hoping to unveil the first GP104-based products in April, at the GPU Technology Conference (GTC) it hosts annually, with possible market availability by late May or early June 2016.
Source:
Benchlife.info
135 Comments on NVIDIA "GP104" Silicon to Feature GDDR5X Memory Interface
Imagine that you buy a car advertised to have 4-wheel drive and later discover that it somehow only has, lol, 3. Would your car become any worse because of it? Nope. Did the manufacturer save costs by cutting that corner? Yep. Did he lie to you? Yep. That's the whole point.
I see both sides of this. In the end, A LOT of people made more out of this than they needed to (and seemingly are still at it...). Performance never changed, people rarely hit the slowdown, and there actually is 4GB, not 3.5GB; it's just that the last 512MB is slower. Nobody shit themselves when they did this on the 660 Ti... but now it's a big deal... MEH.
So, they knew what they were buying; no lies were told to those who bought after the disclosure. In short, it really isn't an issue for most people, and user reviews on multiple sites, which indicate buyers knew of the issue before purchase, back this up.
a) there are special profiles needed for some games
b) nVidia has the power to make this GPU useless if they want to.
With a normal memory architecture that's not so easy, and you can still use your graphics card.
But with the GTX 970's memory architecture, nVidia is able to make this chip useless more or less at the flip of a switch.
And have you guys learned nothing from The Witcher 3??
Yes, that game where a GTX 780/Titan was beaten by a ~€200 card like the GTX 960.
This happened with a couple of other games, like Batman...
Thus you can take this as confirmation that nVidia easily drops support for older hardware (well, at least the optimisation work).
What do you think will happen when there is a successor to the GTX 970??
Do you really think that nVidia will still optimize for a card it no longer manufactures or sells??
And they could also use the memory architecture to cripple this card so that most newer games become unplayable -> most GTX 970 users will run to the next store and buy the next GeForce card...
I had the 780, and used it to play TW3 the first two times I played the game. Most who know me know I value image quality over frame rate. I used the 780 to render at almost all high to very high settings, except shadows on medium and no HairWorks, and I still played at a consistent 50 to 60 fps.
Also owning a 960, it's almost embarrassing how badly it gets beaten. For it to put out a mere 30 to 35 fps, most settings MUST be dropped way down to medium. Think about that... same resolution, settings that don't look as good as what the 780 ran, and it's only JUST playable on a 960.
In conclusion, the only thing a 960 wins at versus a 780 in TW3 is the title of Lower Performing Card.
Hopefully it'll be cheaper on the electric bill than the HD 5870 I have at the mo.
Imo, for someone on a high-end Maxwell GM200 chip, a GP104 GTX 1080 is not really a worthy upgrade.
I would maybe trade my new 980 Ti for a full GP100 chip, but I don't see the point at 1080p, well, unless it's literally 2x faster..
Think I'll just stick with my original plan to wait for Volta and AMD's offering in Q1 2018... :)