Friday, March 11th 2016
NVIDIA "GP104" Silicon to Feature GDDR5X Memory Interface
It looks like NVIDIA's next GPU architecture launch will play out much like its previous two generations - launching the second-biggest chip first, as a well-priced "enthusiast" SKU that outperforms the previous-generation enthusiast product, and launching the biggest chip later, as the high-end enthusiast product. The second-biggest chip based on NVIDIA's upcoming "Pascal" architecture, the "GP104," which could let NVIDIA win the crucial $550 and $350 price-points, will be a lean machine. NVIDIA will design the chip to keep manufacturing costs low enough to score big on price-performance, and to weather a potential price-war with AMD.
As part of its efforts to keep the GP104 as cost-effective as possible, NVIDIA could give exotic new tech such as HBM2 memory a skip, and go with GDDR5X. Implementing GDDR5X could be straightforward and cost-effective for NVIDIA, given that it has implemented the nearly-identical GDDR5 standard on three previous generations. The new standard doubles densities, and one could expect NVIDIA to build its GP104-based products with 8 GB of memory as standard. GDDR5X breathes new life into GDDR5, whose clock speeds had plateaued around 7 Gbps per pin. The new standard could come in speeds of up to 10 Gbps at first, and eventually 12 Gbps and 14 Gbps. NVIDIA could reserve HBM2 for its biggest "Pascal" chip, on which it could launch its next TITAN product.
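For a rough sense of what those per-pin speeds translate to at the card level, here is a quick back-of-the-envelope sketch in Python; the 256-bit bus width is an assumption for illustration, as the source does not state GP104's memory bus configuration.

def bandwidth_gbs(per_pin_gbps, bus_width_bits):
    # Peak memory bandwidth in GB/s = per-pin rate (Gbps) x bus width (bits) / 8
    return per_pin_gbps * bus_width_bits / 8

print(bandwidth_gbs(7, 256))    # plain GDDR5 at 7 Gbps -> 224.0 GB/s
print(bandwidth_gbs(10, 256))   # GDDR5X at 10 Gbps     -> 320.0 GB/s
print(bandwidth_gbs(14, 256))   # GDDR5X at 14 Gbps     -> 448.0 GB/s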
The GP104 will be built on TSMC's 16 nm FinFET process. NVIDIA is hoping to unveil the first GP104-based products by April, at the GPU Technology Conference (GTC) event it hosts annually, with possible market availability by late May or early June 2016.
Source:
Benchlife.info
135 Comments on NVIDIA "GP104" Silicon to Feature GDDR5X Memory Interface
"NVIDIA could give exotic new tech such as HBM2 memory a skip, and go with GDDR5X. "
Translate to;
"NVIDIA "GP104" Silicon to Feature GDDR5X Memory Interface"
eg.
About the same as;
"I could grow wings and fly"
Translate to;
"Man grows wings and flies"
Why do those other things matter? Why would you worry about those? Will anything in the universe change if the entire earth blew up? Would it matter that something changes in the first place?
If you take the discussion away from what memory is used on what video card to starving kids in Africa, then why not take it to the even more nonsensical, redundant point of life itself and everything else?
Would anything be different if you were never born? Would it matter?
Obviously, if anyone had the choice between GDDR5X and HBM2, and choosing GDDR5X would mean the starving children get fed, we would go with that, but it has NOTHING AT ALL TO DO WITH THE CURRENT DISCUSSION.
Hope that cleared some things up; now stop making silly non-arguments.
Ermm, I think you have to read a few steps back. It was the person I was replying to who made the ridiculous leap (hyperbole) from
"what memory is used on what videocard"
to
"erh mah gerd what about the poor children and dictators and cancer etc etc this all does not matter people!!!".
What a GPU company does or sells is nothing in the grand scheme of things that matter, and won't actually affect your life.
Nowhere did I go on about starving kids in Africa and whatnot. That was your stretch, not mine.
rtwjunkie simply said to another poster that they might want to put the business of the GTX 970 in particular, and Nvidia in general, into an appropriate context rather than railing against a piece of hardware in some OTT outpouring of anger.
You then decided to insert yourself into the conversation, attempting to undermine rtw's perfectly reasonable and measured stance by launching into some hyperbolic nonsense, and are now upping the derp ante by trying to play the persecution card.
And yes, as an MA graduate of poli-sci, I'm much more focused on things that really matter, like finally getting a proportional voting system in Canada, which actually looks like it can happen now.
As for this:
Frick: "And wow, TWO games in which AMD are faster? Need moar data."
OK, here's another one:
fudzilla.com/news/graphics/40084-amd-dooms-nvidia-in-benchmarks
techfrag.com/2016/02/18/amd-beats-nvidia-in-doom-alpha-benchmarks/
So in the new Doom, even the R9 280X (yes 280X, not 290X) is beating the GreedForce 980Ti. I guess when it gets up to TEN, you'll be saying "TEN games in which AMD are faster? Need moar data," and when it's a hundred, "A HUNDRED games in which AMD are faster? Need moar data," etc. etc.
Perhaps I misjudged your response, because it's really hard to go about despising any company. Having been a senior manager in a Fortune 500 company, I can tell you, business is business, and they all pretty much operate the same. They all have the top goal of making as much money as they can.
As for GameWorks, this is the very last straw for me. It's now so clearly a mere rear-guard action to slow down AMD's increasingly successful DX12 counter-offensive, because nVidia was obviously caught off-guard by its release and quick adoption. I understand it from a business perspective, but it's really like sabotaging another company's products. It's not just aggressive competition anymore; it's more like paying some guy to stick his foot out to trip a competitor during a race, or to slash a tire on a competitor's car during a pit-stop. It's dirty and underhanded. And actually, it doesn't upset me; on the contrary, it reassures me that nVidia truly is a desperate company now, or else they wouldn't be risking such naked cheating and getting caught. Legions of tech nerds ARE going to find out, and when they do, there is a point where a majority of them will say the same thing I'm saying, and tell all the people who trust their tech advice not to buy nVidia products and support that company.
If you watched the 'Nvidia gameworks - game over for you' youtube video I linked to, you'll see near the end of it that it's not just Radeon owners getting hosed by the sabotage nVidia's pulling right now, with unnecessarily high tessellation that you can't even see, tessellated water that isn't even visible to the player, or PhysX integrated into game engines so you can't easily turn it off; it's previous-generation nVidia owners, too. nVidia is now gimping their OWN cards from just one generation back, just to sell more Maxwells, and as the author of that video points out, they're likely to deliberately gimp Maxwell too once Pascal is out, in order to accelerate adoption of Pascal. For me, this is no longer a 'fan-boy' issue; it's a simple self-respect issue. Nobody with self-respect, in my opinion, can watch and learn about nVidia's actions and still buy their products, knowing the underhanded lengths they'll go to to squeeze a buck out of you.
Considering these were posted in the middle of last month and Nvidia just released a preliminary Vulkan driver, I'd be willing to bet AMD is using Vulkan and Nvidia is using OpenGL 4.4. The increased memory usage for Nvidia over AMD seems to indicate this.
So we have 2 alphas and a game with reported issues on DX12. How about actually waiting to see matured instances being benchmarked by proper sites that actually know what they're doing instead of sites trying to create a clickbait article for fanboys to argue over?
And I see the new post, and you're still at it with the 4 GB crap? What games don't run fine on the 970 because of the supposed memory issue? I will literally fire up my 970 and test them, because I bet they run fine. My roomie plays all AAA titles with no issues @ 1080p on it mated to a 4690K.
If you want to crap on Nvidia for supposed dishonest behavior and unethical business practices, then why are you using an Intel CPU? They have been caught red-handed doing downright dirty shit, next to which this inflated RAM debacle is nothing. Sounds like a bunch of excuses to me, if I'm honest.
Going on about Gameworks? You realize AMD has its own version of Gameworks, right? You realize that both companies do the same crap to market their cards by pairing up with devs for AAA titles, right? You realize that both companies are doing what they need to do to have an edge to sell product, right? Sheesh.
Nvidia is not desperate at all. They've had the majority of the market in their hands due to having simply faster cards for a couple of generations, which is why they are able to sell midrange chips for full price and then make more money by releasing the big chips later. My 980s creamed my buddy's 290Xs in everything, and we have identical CPUs. I now have Titans and he has Fury X Crossfire, and I still have the upper hand. If AMD had any upper hand I wouldn't be using Nvidia cards, that's for sure, but they don't, and I have a good feeling it'll continue to be this way for the next generation as well. You can rant and rave to try and justify your distaste for Nvidia, but to the majority of us it's just blatant bashing for no reason. If you don't want to consume some logic and put aside your decade-old hate for the company, then at least give the rest of us some peace, stop derailing an Nvidia thread, and continue to game happily on your AMD graphics cards.
As for the 'inflated RAM' debacle, if nVidia didn't really do anything wrong, why did the CEO pretend to apologize (www.technobuffalo.com/2015/02/25/nvidias-ceo-apologizes-for-gtx-970-memory-controversy/) and try to reassure everybody that it 'won't happen again', yet allow board partners to continue doing it? That doesn't sound kosher to me. "I'm so sorry I did this unethical thing that I'm going to continue doing it for as long as I can"?
As for 'ranting and raving', I think I've maintained a civil level of decorum, and backed up everything I've said with an explanation from personal experience or references, so I hardly think it's just 'blatant bashing for no reason.' As for giving you peace so you can continue to be deceived by your beloved company, absolutely. I've said my piece here, and will stop 'derailing' the thread. Looking forward to seeing you with a Radeon in your system specs sometime soon. :)
Fury X had 512 GB/s of bandwidth (4 x 128 GB/s, 1 GB stacks)...and Polaris 11 will probably have 512 GB/s of bandwidth as well (2 x 256 GB/s, 4 GB stacks). If you go by straight compute, that's good for a Fury X up to 1110-1120 MHz, around 1200 MHz if you figure in where memory compression is applicable. While the cache structure and compression could change, let's assume for a sec they don't, substantially. I think it's somewhat fair to assume 14 nm can probably clock up to ~1200 MHz supremely efficiently and top out around 1400 MHz or so. I think it's pretty obvious that if you're AMD you're essentially shrinking Fiji one way or another, with or without subtracting 4/8 compute units (which should add to overall efficiency) and raising clock speed (or not) to compensate. Given they probably want to use it in the mobile space, and these parts should be all about simply staying within 225 W at most (probably with a buffer at stock, let's say 200 W...gotta make sure a 375 W x2 will work, and perhaps a super-efficient voltage/clock can fit under 150 W), I'm inclined to believe that gives them wiggle room to opt for fewer units and to pump up (at least the potential of) the clock, even if raising the clock is half as efficient as building those (mostly redundant) transistors in.
For instance, they could do something like 3584 @ 1200 MHz, which for all intents and purposes should be similar overall to a Fury X in compute but much more efficient/easy to yield, faster than a stock 980 Ti (which is what, essentially 3520 units at 11xx MHz?), and could potentially clock higher to tap out max bandwidth (perhaps compete with or beat an overclocked 980 Ti). I'm not ruling out 3840 and/or higher stock clocks either, or a lower-end part tackling the 980 Ti, perhaps specifically to put a nail in GM200. Let's not forget there is 1600 MHz HBM2 coming as well, which fits pretty darn well with product differentiation (80%), competing with GM200, if not a perfect spot to try to stay under 150 W...
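To put some rough numbers behind that comparison, here is a quick FP32 throughput check in Python; the Fury X and 980 Ti figures are public specs, while the 3584 @ 1200 MHz configuration is just the hypothetical one described above.

def tflops(shaders, clock_mhz):
    # FP32 throughput: shaders x 2 ops per clock (FMA) x clock
    return shaders * 2 * clock_mhz / 1e6

print(tflops(4096, 1050))  # Fury X at stock                 -> ~8.6 TFLOPS
print(tflops(3584, 1200))  # the hypothetical Polaris config -> ~8.6 TFLOPS
print(tflops(2816, 1075))  # 980 Ti around reference boost   -> ~6.1 TFLOPS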
Does this whole thing sound familiar?
It would be like an optimal mix of when AMD shrank a 420 mm² bear of a chip down to 192 mm² (R600 -> RV670, a 2.1875x smaller chip on a 2x smaller process, or a net efficiency from the arch of ~10% die space) and then cranked the shit out of the voltage (that chip ran at 1.3+ V) to clock/yield it decently while selling it for peanuts, mixed with that other time their 256 mm² chip (using a new memory standard, GDDR5) was clocked to put a nail in the former high-end from nvidia (G92b) and gave the affordable version of nvidia's big-ass chip (GT200) a run for its money...all while being good enough to hit certain performance levels to keep people content at an affordable price. Remember that net-10%-from-the-arch figure above? Well, 14 nm should be closer to 2.1-2.32x smaller, and AMD has said their improvements to the arch account for ~30% of the efficiency improvement (which, when added together, starts to look a lot like how much brighter Polaris looks compared to when it was first observed, blah blah blah). While that's probably accounting for the shrink (i.e. the net efficiency is divided by the shrink), that's still a similar if not greater efficiency improvement in the arch as RV670....somewhere to the tune of ~13-15%. Just throwing it out there, but 4096/3584 is also ~14%...and surely having half the memory controllers (even if faster ones) amounts to something.
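For what it's worth, the arithmetic in that analogy checks out; a tiny sketch (the 2x process factor is the figure claimed above, not a measured one):

print(420 / 192)           # R600 -> RV670 area ratio              -> 2.1875
print(420 / 192 / 2 - 1)   # minus the claimed 2x process shrink   -> ~0.09, i.e. ~10% from the arch
print(4096 / 3584 - 1)     # Fiji shaders vs a 3584-SP part        -> ~0.14, the "~14%" above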
As for nvidia:
Guys....do you know that with the way Maxwell is set up, it essentially requires 1/3 less bandwidth than AMD (not counting the compression of Tonga/Fiji, which lowers it to around 25% or slightly less) due to the cache/SFU setup? It's true. That alone should pretty much offset any worries between AMD's 512 GB/s and whatever nvidia comes to bat with, assuming they can at least muster 13,000 MHz GDDR5X (6500 MHz x 2). Given Micron has said they have exactly that in the labs (a coincidence, I'm sure, not at all theoretically a requirement imposed by nvidia), I wouldn't be too worried. While we'd all love for nvidia to come to bat with a higher compute ratio (say 224-240 SP per module instead of the 192 of Kepler or 128 of Maxwell), there's no saying they won't, and simply up the ratio of cache to supplement the design. It's gonna be fine.
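As a sanity check on that claim, here is the bandwidth math in a short sketch; the 256-bit bus and the 25-33% savings figures are assumptions carried over from the post above, not confirmed specs.

gddr5x_bw = 13 * 256 / 8   # 13 Gbps GDDR5X on a 256-bit bus -> 416 GB/s raw
amd_bw = 512               # GB/s, the HBM figure quoted earlier
for savings in (0.25, 1 / 3):
    # effective demand if a Maxwell-style design really needs 25% / 33% less raw bandwidth
    print(gddr5x_bw >= amd_bw * (1 - savings))   # True in both cases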
At the end of the day,
I have no idea who's going to perform better in any situation, but I wouldn't be surprised if both designs are fairly similar (and optimized versions of their former high-end). My gut (and yours, probably) says nvidia wins the pure gaming metrics. I also don't know who will be cheaper, but my gut (and yours, probably) says AMD. Still, though, if both can achieve 4K30 in mostly everything...and can overclock to 4K60 in the titles we expect they should...does it really matter? Each will surely have their strengths, be it compute features, pure gaming performance, price, etc...but I think we're in for one hell of a fight.
I'm just happy that one company, let alone both, is targeting <225 W, 8 GB, and probably short, easily cooled cards. That's what I want, and I'm fairly certain what the community needs, especially at an affordable price. While I feel for the guys who want 4K60 at all costs (and I'm one of them)...hey, there's always multi-GPU....and at least now it will make sense (given a game rarely uses over an 8 GB frame-buffer, which can't be said for 6 GB, let alone 4 GB, even if it's fast enough to switch textures out quickly).
If the new GTX_80's performance is comparable to the 980 Ti, and the GTX_80 core peaks out at 1200-1300 MHz vs 1500-1600 MHz on the 980 Ti's core, then we are not talking about too much gain per dollar, seeing that Nvidia will likely charge $550 for the GTX_80 at release.
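Purely to illustrate the magnitude being argued about, a tiny perf-per-dollar sketch; the performance index and both prices are assumptions (the 980 Ti's roughly $650 launch MSRP and the rumored $550), not measured results.

def perf_per_dollar(perf_index, price_usd):
    return perf_index / price_usd

baseline = perf_per_dollar(1.00, 650)   # 980 Ti near launch MSRP
new_card = perf_per_dollar(1.00, 550)   # a GTX_80 that merely matches it at $550
print(new_card / baseline)              # -> ~1.18, i.e. roughly 18% more performance per dollar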
That said, hopefully we see at least a 20-25% advantage over comparable cards and awesome clocking potential again. It would also be sick to have a VRM capable of 400+ watts with a chip that draws 150 watts @ stock speeds, but I am sure they will lock and skimp on the reference design as always.
But, as a hobbyist PC builder, I will buy what is best for the job, and last year that was Nvidia's 980 Ti. I held off for the release of Fury X, as the hype promised so much, but it failed to 'blow me away'. So I bought my current card after that disappointing release.
If AMD RTG are on the ascendancy (and I wish they'd broken them off as ATI), I will again buy what is best going forward. What would be beneficial to all of us is seeing the Pascal architecture in a GP104 card. There's no doubt the benchmark test suites in use will show Nvidia's strengths and weaknesses. If they haven't evolved their warp schedulers to deal with more queues, then we'll know about it.
As long as AMD allow their next card to have AIB versions I'll be happy to buy.
The starving kids in Africa are exactly the same joke stretch you made with your heart valve problems, etc.; it literally has nothing to do with the conversation or with what to get worked up about. Honestly, how you cannot see this is beyond me.
We are talking about GPUs and you start on about retire...well, do I really have to repeat it? You put all that irrelevant information right on display...
It's again a non-argument.
And wait...starving children in Africa is a stretch, but "deal with raising and providing for your children" is not? It's basically the same issue, except with a little less selfishness (aka your children above other children) involved. In the context of PC hardware being discussed on a PC hardware forum, you mean?
Yeah...don't know what he was thinking...
Totally, this is the place we should talk about starving children, cancer, etc. etc., what is important in life, like life itself....yep, seems just about right.
Honestly, how you cannot see that what rtwjunkie said is exactly the opposite of putting things in context, aka taking them OUT of context, is beyond me.
My remark about life itself takes that out-of-context argument to a further extreme to illustrate how much of a non-argument it really is.
And if that is too much to understand, then I'm sorry; I really cannot see how I can possibly make it any clearer.
JHH publicly apologizing was a professional courtesy which wasn't necessary at all. In fact, nobody got cheated, and there is a very technical reason why the last 0.5 GB of VRAM can't be addressed at full speed when the rest of the RAM is occupied. Actually, it isn't that technical at all, and anybody with limited hardware knowledge can understand why it happens. The real fact of the matter is it doesn't actually hurt performance, and I've done extensive testing to prove this, as did renowned tech sites. I call it an inflated debacle simply because one dude ran some code, said a thing, and then everybody went spreading a metric fuck ton of misinformation, which in turn caused JHH to apologize like that. In fact, if Nvidia were such an evil company as you believe, he wouldn't have bothered, and the company as a whole would have ignored it completely without a care. People keep gobbling the cards up because of the above fact that they run great and are great little 1080p beasts. If the last 0.5 GB of VRAM were such an issue, it wouldn't be the dominating card on Steam and wouldn't have such a market presence like it does. In fact, it wouldn't be one of the most sought-after cards of the generation for gaming.
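For anyone curious, the "very technical reason" boils down to the GTX 970's 4 GB being split into two partitions; the bus widths and the 7 Gbps speed below are from Nvidia's public explanation of the card, and this short sketch just turns them into bandwidth figures.

def partition_bw(per_pin_gbps, bus_width_bits):
    # bandwidth in GB/s for one memory partition
    return per_pin_gbps * bus_width_bits / 8

print(partition_bw(7, 224))  # fast 3.5 GB partition -> 196.0 GB/s
print(partition_bw(7, 32))   # slow 0.5 GB partition -> 28.0 GB/s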
Sure, you're way more civil than I've seen, and I'll give you that, but it doesn't really make my statement any less true. You had two bad experiences and want to go off on your own stance, like a bitter old woman (not a personal attack, just how it looks), that Nvidia is the most evil company out there, when really they're not. They didn't steal your sweet roll; they're a business out there to make money, just like AMD. I'd say Samsung is by far worse than Nvidia when it comes to it, but how many Samsung products do you own (that's rhetorical, btw)? I wouldn't say Nvidia is my 'beloved' company; I just go for the absolute best performance in graphics. I don't have AMD product in my specs, but I do own a lot of AMD product, the most recent being the 390X. I don't own a Fury card because my bud has two of them and I can borrow them whenever I want for testing. I have no problem using AMD product if they have something superior, because I go for whoever has the best performance and ignore the politically correct bullshit. I don't let bad experiences hinder my purchasing cycle unless it's something to do with CS and RMA (like Gigabyte). I don't go on every Gigabyte thread bashing their product, though; I just simply don't buy their stuff and leave it at that.
No, they don't have proprietary tech, but I do know there are some instances where some of the stuff they do come out with runs obviously, and probably purposely, worse on Nvidia cards (clearly remembers TressFX's initial release). It's really no different. Most of the enhanced technology that Nvidia offers in their Gameworks program doesn't even get integrated into games, even though a LOT of it is awesome. I've never seen Grassworks, Waveworks, or a few others, simply because they are game changers and developers don't want to segregate their gaming experience to one party. I know of two titles that are supposed to implement these features but haven't seen them see the light of day. It is a well-known fact that AMD still collaborates with developers just like Nvidia does on certain games to push their brand, and it's literally the same thing. One such example was linked in this very thread: Hitman. AoS is another well-known example, and typically AMD cards show a better performance figure than Nvidia does in such titles because the games are coded from the start to run better on them. I guess this is where PhysX is naturally going to be the counter-argument, but on AMD-based rigs PhysX is downgraded and run on the CPU, and in most cases it can be turned off, either by an in-game setting or otherwise.
The whole deal is exceptionally blown out of proportion, but in all reality, if it all wasn't so split, things like Grassworks would be awesome. Idk how many times I've come across grassy segments in a game and thought to myself that this would make my experience 10x better if it looked actually realistic instead of a bunch of flat sprites tossed together to form foliage. Being able to walk on a grassy field and have it actually flatten and interact with the character is just a minute detail that people overlook, but it could be very awesome. Of course, most would argue against it simply to argue, with the reasoning "burn Nvidia, down with the evil Gameworks!", whatever.
Besides, the top tier will probably still be using it, so it's erroneous to cast HBM as the memory of all Pascal cards. They could even keep it for the compute part only.
I mean, smartphones sometimes contain different chips despite being the same model. If the advertised performance is delivered, the internal hardware is actually irrelevant.
Edit: NVLink won't be on mainstream desktop either; it's for the compute cards.