Friday, March 11th 2016

NVIDIA "GP104" Silicon to Feature GDDR5X Memory Interface

It looks like NVIDIA's next GPU architecture launch will play out much like its previous two generations - launching the second-biggest chip first, as a well-priced "enthusiast" SKU that outperforms the previous-generation enthusiast product, and launching the biggest chip later, as the high-end enthusiast product. The second-biggest chip based on NVIDIA's upcoming "Pascal" architecture, the "GP104," which could let NVIDIA win the crucial $550 and $350 price-points, will be a lean machine. NVIDIA will design the chip to keep manufacturing costs low enough to score big on price-performance, and to weather a potential price-war with AMD.

As part of its efforts to keep GP104 as cost-effective as possible, NVIDIA could give exotic new tech such as HBM2 memory a skip, and go with GDDR5X. Implementing GDDR5X should be straightforward and cost-effective for NVIDIA, given that it has implemented the nearly-identical GDDR5 standard on three previous generations. The new standard doubles densities, so one could expect NVIDIA to equip its GP104-based products with 8 GB of memory as standard. GDDR5X breathed new life into GDDR5, which had seen its data rates plateau around 7 Gbps/pin. The new standard could come in speeds of up to 10 Gbps at first, and eventually 12 Gbps and 14 Gbps. NVIDIA could reserve HBM2 for its biggest "Pascal" chip, on which it could launch its next TITAN product.
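For a sense of scale, here is a minimal back-of-the-envelope sketch of what those per-pin data rates translate to in card-level bandwidth. The 256-bit bus width is purely an illustrative assumption; the article does not confirm GP104's memory bus configuration.

```python
# Peak theoretical memory bandwidth = per-pin data rate x bus width / 8.
# The 256-bit bus is an assumed figure for illustration only.

def bandwidth_gb_s(rate_gbps_per_pin: float, bus_width_bits: int) -> float:
    """Peak theoretical bandwidth in GB/s."""
    return rate_gbps_per_pin * bus_width_bits / 8

BUS_WIDTH = 256  # bits (assumption, not confirmed for GP104)

for rate in (7.0, 10.0, 12.0, 14.0):  # GDDR5 plateau, then the GDDR5X steps
    print(f"{rate:4.1f} Gbps/pin x {BUS_WIDTH}-bit = "
          f"{bandwidth_gb_s(rate, BUS_WIDTH):3.0f} GB/s")
```

On those assumptions, 10 Gbps GDDR5X on a 256-bit bus would deliver 320 GB/s, up from 224 GB/s for 7 Gbps GDDR5, with 14 Gbps eventually reaching 448 GB/s.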

The GP104 will be built on TSMC's 16 nm FinFET process. NVIDIA is hoping to unveil the first GP104-based products in April, at the GPU Technology Conference (GTC), an event it hosts annually, with possible market availability by late May or early June 2016.
Source: Benchlife.info

135 Comments on NVIDIA "GP104" Silicon to Feature GDDR5X Memory Interface

#51
bpgt64
How does
"NVIDIA could give exotic new tech such as HBM2 memory a skip, and go with GDDR5X."

translate to:
"NVIDIA "GP104" Silicon to Feature GDDR5X Memory Interface"?

E.g.,

about the same as:
"I could grow wings and fly"

translating to:
"Man grows wings and flies"
#52
ZoneDymo
rtwjunkie: Read the rest of what I wrote. There are real things to worry about.
I did read; where did you think my words came from?
Why do those other things matter? Why would you worry about those? Will anything in the universe change if the entire earth blew up? Would it matter that anything changes in the first place?

If you take the discussion away from what memory is used on what video card to starving kids in Africa, then why not take it to the even more nonsensical, redundant point of life itself and everything?

Would anything be different if you were never born? Would it matter?

Obviously, if anyone had the choice between GDDR5X memory and HBM2, and GDDR5X would mean the starving children would be fed, we would go with that, but it has NOTHING AT ALL TO DO WITH THE CURRENT DISCUSSION.
Hope that cleared some things up; now stop making silly non-arguments.
HumanSmoke: 10/10 for hyperbole.
Life can, and often does, stretch decades with a near-infinite variation of experiences. A graphics card for an enthusiast typically measures its lifespan in a year or less. During that time, the options in decent gaming might be measured on the fingers of one hand, between immature drivers, broken games, and short games split over months thanks to artificially partitioning a game into bite-sized DLCs.
I have one life, but I've lost count of the cards and generations of cards that have passed through my hands. Once upon a time, my 9700 PRO was manna from heaven - now just a fading memory, as were my 8800s, GTX 280s, HD 4890s, HD 5970, triple HD 5850s, GTX 580s, GTX 670s and a host of other cards I've owned since Mach 64 and S3-based Diamond Stealth cards.

Apples and Oranges dude.
Ermm, I think you have to read a few steps back; it was the person I was replying to that made the ridiculous leap (hyperbole) from
"what memory is used on what videocard"
to
"erh mah gerd what about the poor children and dictators and cancer etc etc this all does not matter people!!!".
#53
rtwjunkie
PC Gaming Enthusiast
ZoneDymo: I did read; where did you think my words came from?
Why do those other things matter? Why would you worry about those? Will anything in the universe change if the entire earth blew up? Would it matter that anything changes in the first place?

If you take the discussion away from what memory is used on what video card to starving kids in Africa, then why not take it to the even more nonsensical, redundant point of life itself and everything?

Would anything be different if you were never born? Would it matter?

Obviously, if anyone had the choice between GDDR5X memory and HBM2, and GDDR5X would mean the starving children would be fed, we would go with that, but it has NOTHING AT ALL TO DO WITH THE CURRENT DISCUSSION.
Hope that cleared some things up; now stop making silly non-arguments.
No, you did not read. Reading comprehension is fundamental. What I said was: once you start working toward retirement, have people you know and love near death because of heart valve problems, deal with raising and providing for your children, work to pay your bills, and enjoy as well as deal with real life, then getting as worked up and angry as anubis44 is about a GPU company becomes laughable.

What a GPU company does or sells is nothing in the grand scheme of things that matter, and won't actually affect your life.

Nowhere did I go on about starving kids in Africa and whatnot. That was your stretch, not mine.
#54
64K
You're funny, @ZoneDymo, I like you, but please stop trolling this thread. Thanks.
#55
HumanSmoke
ZoneDymo: Ermm, I think you have to read a few steps back; it was the person I was replying to that made the ridiculous leap (hyperbole) from
"what memory is used on what videocard"
to
"erh mah gerd what about the poor children and dictators and cancer etc etc this all does not matter people!!!".
Then you clearly have no understanding of what the word "hyperbole" actually means.
rtwjunkie simply said to another poster that they might want to put the business of the GTX 970 in particular, and Nvidia in general, into an appropriate context rather than railing against a piece of hardware in some OTT outpouring of anger.
You then decided to insert yourself into the conversation, attempting to undermine rtw's perfectly reasonable and measured stance by launching into some hyperbolic nonsense, and are now upping the derp ante by trying to play the persecution card.
#56
PP Mguire
HumanSmoke: Then you clearly have no understanding of what the word "hyperbole" actually means.
rtwjunkie simply said to another poster that they might want to put the business of the GTX 970 in particular, and Nvidia in general, into an appropriate context rather than railing against a piece of hardware in some OTT outpouring of anger.
You then decided to insert yourself into the conversation, attempting to undermine rtw's perfectly reasonable and measured stance by launching into some hyperbolic nonsense, and are now upping the derp ante by trying to play the persecution card.
Lold at upping the derp ante.
#57
anubis44
rtwjunkie: Why do you effin' care? It's always the people that haven't used a 970 that get angry.

And angry you are. Really, dude, you're going to die an early death of a stroke getting all worked up about things THAT DON'T MATTER AT ALL in the grand scheme of LIFE. There's enough real crap in this world to focus on.
Oh, don't worry. I'm not worked up about this stuff at all. I resolved not to buy nVidia products, and left it at that. I just reply once in a while to people who seem so enamored with the company that it's sickening; never for one moment am I 'angry' about it. In fact, my revenge is not being 'angry', it's owning over 12,000 shares of AMD stock right now at an average cost of $2.00/share :) (but that's not why I'm criticizing nVidia; it's because I really despise how they operate).

And yes, as an MA graduate of poli-sci, I'm much more focused on things that really matter, like finally getting a proportional voting system in Canada, which actually looks like it can happen now.

As for this:

Frick: "And wow, TWO games in which AMD are faster? Need moar data."

OK, here's another one:

fudzilla.com/news/graphics/40084-amd-dooms-nvidia-in-benchmarks
techfrag.com/2016/02/18/amd-beats-nvidia-in-doom-alpha-benchmarks/

So in the new Doom, even the R9 280X (yes, 280X, not 290X) is beating the GreedForce 980 Ti. I guess when it gets up to TEN, you'll be saying "TEN games in which AMD are faster? Need moar data," and when it's a hundred, "A HUNDRED games in which AMD are faster? Need moar data," etc. etc.
#58
rtwjunkie
PC Gaming Enthusiast
anubis44: And yes, as an MA graduate of poli-sci, I'm much more focused on things that really matter, like finally getting a proportional voting system in Canada, which actually looks like it can happen now.
I'm glad to hear it, and pleased to know it's not keeping you up at night.

Perhaps I misjudged your response, because it's really hard to go about despising any company. Having been a senior manager in a Fortune 500 company, I can tell you business is business, and they all pretty much operate the same. They all have the top goal of making as much money as they can.
#59
anubis44
rtwjunkie: I'm glad to hear it, and pleased to know it's not keeping you up at night.

Perhaps I misjudged your response, because it's really hard to go about despising any company. Having been a senior manager in a Fortune 500 company, I can tell you business is business, and they all pretty much operate the same. They all have the top goal of making as much money as they can.
Yes, I understand, and I acknowledge that many companies do things and get away with them. I may not even know about some of these transgressions. But there are certain ethical boundaries for me, and when a company crosses one of them, I won't support it. It's that simple. There's marketing, and then there's just simple lying. AMD has never sold a product as having x amount of memory that really had y amount of usable memory. It's straight-up dishonesty, and nVidia can only get away with it on a technicality, because most people don't understand the implications of 0.5 GB of the 4 GB only running at 1/7th the speed of the rest. There are plenty of games right now that require a full 4 GB of video memory and should run fine on the GTX 970, but don't, and the owners of these cards are being told to 'man up', or that they're 'AMD fanboys', when really, they bought these cards expecting that '4GB' printed on the box actually meant 4GB.
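To illustrate the claim about the slow segment, here is a rough sketch using the widely reported GTX 970 figures (about 196 GB/s for the 3.5 GB segment, and roughly 1/7th of that for the last 0.5 GB); the exact numbers and the even-usage model are simplifying assumptions, not measured data.

```python
# Rough model of the GTX 970's segmented VRAM, using widely reported
# figures: ~196 GB/s for the 3.5 GB segment, ~1/7th of that for the
# last 0.5 GB. Purely illustrative; real access patterns are not even.

FAST_GB, FAST_GBPS = 3.5, 196.0
SLOW_GBPS = FAST_GBPS / 7  # ~28 GB/s

def effective_bandwidth(used_gb: float) -> float:
    """Weighted-average bandwidth if 'used_gb' of VRAM is touched evenly."""
    fast = min(used_gb, FAST_GB)
    slow = max(used_gb - FAST_GB, 0.0)
    return (fast * FAST_GBPS + slow * SLOW_GBPS) / (fast + slow)

for gb in (3.0, 3.5, 3.8, 4.0):
    print(f"{gb:.1f} GB in use -> ~{effective_bandwidth(gb):5.1f} GB/s average")
```

Under this toy model, a game touching all 4 GB evenly sees an average of about 175 GB/s rather than the full-speed 196 GB/s, which is the gap at the heart of the controversy.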

As for GameWorks, this is the very last straw for me. It's now so clearly a mere rear-guard action to slow down AMD's increasingly successful DX12 counter-offensive, because nVidia was obviously caught off-guard by its release and quick adoption. I understand it from a business perspective, but it's really like sabotaging another company's products. It's not just aggressive competition anymore; it's more like paying some guy to stick his foot out to trip a competitor during a race, or to slash a tire on a competitor's car during a pit-stop. It's dirty and underhanded, and actually, it doesn't upset me. On the contrary, it reassures me that nVidia truly is a desperate company now, or else they wouldn't be risking such naked cheating and getting caught, because legions of tech nerds ARE going to find out, and when they do, there is a point where a majority of them will say the same thing I'm saying, and tell all the people who trust their tech advice not to buy nVidia products and support that company.

If you watched the 'Nvidia gameworks - game over for you' YouTube video I linked to, you'll see near the end of it that it's not just Radeon owners getting hosed by the sabotage nVidia's pulling right now, with unnecessarily high tessellation that you can't even see, tessellating water that isn't even visible to the player, or integrating PhysX into game engines so you can't easily turn it off; it's previous-generation nVidia owners, too. nVidia's now gimping their OWN cards from just one generation back, just to sell more Maxwells, and as the author of that video points out, they're likely to deliberately gimp Maxwells, too, once Pascal is out, in order to accelerate adoption of Pascal. For me, this is no longer a 'fan-boy' issue; it's a simple self-respect issue. Nobody with self-respect, in my opinion, can watch and learn about nVidia's actions and still buy their products, knowing the underhanded lengths they'll go to to squeeze a buck out of you.
#60
Prima.Vera
One other thing is certain in life, besides death and taxes: people love to derail forums and write shit.
#61
PP Mguire
anubis44: Oh, don't worry. I'm not worked up about this stuff at all. I resolved not to buy nVidia products, and left it at that. I just reply once in a while to people who seem so enamored with the company that it's sickening; never for one moment am I 'angry' about it. In fact, my revenge is not being 'angry', it's owning over 12,000 shares of AMD stock right now at an average cost of $2.00/share :) (but that's not why I'm criticizing nVidia; it's because I really despise how they operate).

And yes, as an MA graduate of poli-sci, I'm much more focused on things that really matter, like finally getting a proportional voting system in Canada, which actually looks like it can happen now.

As for this:

Frick: "And wow, TWO games in which AMD are faster? Need moar data."

OK, here's another one:

fudzilla.com/news/graphics/40084-amd-dooms-nvidia-in-benchmarks
techfrag.com/2016/02/18/amd-beats-nvidia-in-doom-alpha-benchmarks/

I guess when it gets up to TEN, you'll be saying "TEN games in which AMD are faster? Need moar data," and when it's a hundred, "A HUNDRED games in which AMD are faster? Need moar data," etc. etc.
Lol, you post the worst benchmarks. Dude, at 1080p they are showing a 280X beating all of the cards with a 61 fps average, when the game looks like it's capped at 60 fps.

Considering these were posted in the middle of last month, and Nvidia just released a preliminary Vulkan driver, I'd be willing to bet AMD is using Vulkan and Nvidia is using OpenGL 4.4. The increased memory usage for Nvidia over AMD seems to indicate this.

So we have two alphas and a game with reported issues on DX12. How about actually waiting for matured instances to be benchmarked by proper sites that actually know what they're doing, instead of sites trying to create a clickbait article for fanboys to argue over?

And I see the new post, and you're still at it with the 4GB crap? What games don't run fine on the 970 because of the supposed memory issue? I will literally fire up my 970 and test them, because I bet they run fine. My roomie plays all AAA titles with no issues @ 1080p on it, mated to a 4690K.

If you want to crap on Nvidia for supposedly dishonest behavior and unethical business practices, then why are you using an Intel CPU? They have been caught red-handed doing downright dirty shit that makes this inflated RAM debacle moot in comparison. Sounds like a bunch of excuses to me, if I'm honest.

Going on about Gameworks? You realize AMD has its own version of Gameworks, right? You realize that both companies do the same crap to market their cards by pairing up with devs for AAA titles, right? You realize that both companies are doing what they need to do to have an edge to sell product, right? Sheesh.
Nvidia is not desperate at all. They've had the majority of the market in their hands due to simply having faster cards for a couple of generations, which is why they are able to sell midrange chips for full price and then make more money by releasing the big chips later. My 980s creamed my buddy's 290Xs in everything, and we have identical CPUs. I now have Titans and he has Fury X Crossfire, and I still have the upper hand. If AMD had any upper hand I wouldn't be using Nvidia cards, that's for sure, but they don't, and I have a good feeling it'll continue to be this way for the next generation as well. You can rant and rave to try and justify your distaste for Nvidia, but to the majority of us it's just blatant bashing for no reason. If you don't want to consume some logic and put aside your decade-old hate for the company, then at least give the rest of us some peace and not derail an Nvidia thread, while continuing to game happily on your AMD graphics cards.
#62
anubis44
PP Mguire: And I see the new post, and you're still at it with the 4GB crap? What games don't run fine on the 970 because of the supposed memory issue? I will literally fire up my 970 and test them, because I bet they run fine. My roomie plays all AAA titles with no issues @ 1080p on it, mated to a 4690K.

If you want to crap on Nvidia for supposedly dishonest behavior and unethical business practices, then why are you using an Intel CPU? They have been caught red-handed doing downright dirty shit that makes this inflated RAM debacle moot in comparison. Sounds like a bunch of excuses to me, if I'm honest.

Going on about Gameworks? You realize AMD has its own version of Gameworks, right? You realize that both companies do the same crap to market their cards by pairing up with devs for AAA titles, right? You realize that both companies are doing what they need to do to have an edge to sell product, right? Sheesh.
Nvidia is not desperate at all. They've had the majority of the market in their hands due to simply having faster cards for a couple of generations, which is why they are able to sell midrange chips for full price and then make more money by releasing the big chips later. My 980s creamed my buddy's 290Xs in everything, and we have identical CPUs. I now have Titans and he has Fury X Crossfire, and I still have the upper hand. If AMD had any upper hand I wouldn't be using Nvidia cards, that's for sure, but they don't, and I have a good feeling it'll continue to be this way for the next generation as well. You can rant and rave to try and justify your distaste for Nvidia, but to the majority of us it's just blatant bashing for no reason. If you don't want to consume some logic and put aside your decade-old hate for the company, then at least give the rest of us some peace and not derail an Nvidia thread, while continuing to game happily on your AMD graphics cards.
I'm currently using an Intel CPU because, as I explained in a much older thread, I decided to confirm for myself reports of 'much faster performance' than my FX-8350 system in many games. So I loaded up on some Gravol, held my nose (it was the first Intel CPU I'd bought since my Celeron 300A that overclocked to 450 MHz back in 1998, after all), bought an i5 4690K on deep discount for $229.00 Canadian, and built a system around it. It turned out to be exactly 3 FPS faster than the FX-8350 in my favourite game, Company of Heroes 2, at the definitely-not-GPU-bound resolution of 1680x1050, but I had already given my FX-8350 system to my girlfriend, so I'm stuck with this piece of crap. :) Believe me, when Zen comes out, I'll be tearing out this i5 4690K and donating it to a relative faster than you can say 'Intel caught cheating, gets slap on the wrist'.

As for the 'inflated RAM' debacle: if nVidia didn't really do anything wrong, why did the CEO pretend to apologize (www.technobuffalo.com/2015/02/25/nvidias-ceo-apologizes-for-gtx-970-memory-controversy/) and try to reassure everybody that it 'won't happen again', yet allow board partners to continue doing it? That doesn't sound kosher to me. I'm so sorry I did this unethical thing that I'm going to continue doing it for as long as I can?

As for 'ranting and raving', I think I've maintained a civil level of decorum, and backed up everything I've said with an explanation from personal experience or with references, so I hardly think it's just 'blatant bashing for no reason.' As for giving you peace so you can continue to be deceived by your beloved company: absolutely. I've said my piece here, and will stop 'derailing' the thread. Looking forward to seeing you with a Radeon in your system specs sometime soon. :)
#63
alwayssts
I think y'all pretty much agree with me on the direction things are going.

Fury X had (4x128 GB/s, 1 GB stacks) 512 GB/s of bandwidth... and Polaris 11 will probably have (2x256 GB/s, 4 GB stacks) 512 GB/s of bandwidth. If you go by straight compute, that's good for a Fury X up to 1110-1120 MHz, around 1200 MHz if you figure in where memory compression is applicable. While cache structure and compression could change, let's assume for a second they don't change substantially. I think it's somewhat fair to assume 14 nm can probably clock up to ~1200 MHz supremely efficiently and top out around 1400 MHz or so. I think it's pretty obvious that if you're AMD you're essentially shrinking Fiji one way or another, with or without subtracting 4/8 compute units (which should add to overall efficiency) and (or not) raising clock speed to compensate. Given they probably want to use it in the mobile space, and these parts should be all about simply staying within 225 W at most (probably with a buffer at stock, let's say 200 W... gotta make sure a 375 W x2 will work, and perhaps a super-efficient voltage/clock can fit under 150 W), I'm inclined to believe that gives them wiggle room to opt for fewer units and to pump up (at least the potential of) the clock, even if raising the clock is half as efficient as building those (mostly redundant) transistors in.

For instance, they could do something like 3584 units @ 1200 MHz, which for all intents and purposes should be similar overall to a Fury X in compute but much more efficient and easier to yield, faster than a stock 980 Ti (which is what, essentially 3520 units at 11xx MHz?), and could potentially clock higher to tap out the available bandwidth (perhaps compete with, or beat, an overclocked 980 Ti). I'm not ruling out 3840 units and/or higher stock clocks either, or a lower-end part tackling the 980 Ti, perhaps specifically to put a nail in GM200. Let's not forget there is 1600 MHz HBM2 coming as well, which fits pretty darn well with product differentiation (80%), competing with GM200, if not a perfect spot to try to stay under 150 W...
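As a minimal sketch of the shader-count versus clock trade-off being described (single-precision throughput = shaders x 2 FLOPs per FMA x clock): the non-Fury X configurations below are this post's speculation, not announced specifications.

```python
# Single-precision compute: shaders * 2 FLOPs (FMA) * clock.
# Fury X figures are its launch specs; the others are the speculative
# shrunk-Fiji configurations discussed above, not real products.

def tflops(shaders: int, clock_mhz: float) -> float:
    return shaders * 2 * clock_mhz * 1e6 / 1e12

configs = [
    ("Fury X (4096 SP @ 1050 MHz)",    4096, 1050),
    ("Speculative 3584 SP @ 1200 MHz", 3584, 1200),
    ("Speculative 3840 SP @ 1200 MHz", 3840, 1200),
]
for name, sp, mhz in configs:
    print(f"{name}: {tflops(sp, mhz):.2f} TFLOPS")
```

The arithmetic shows why 3584 units at 1200 MHz reads as "similar overall to a Fury X in compute": both land at roughly 8.6 TFLOPS.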

Does this whole thing sound familiar?

It would be like an optimal mix of when AMD shrank a 420 mm² bear of a chip down to 192 mm² (R600 -> RV670, a 2.1875x smaller chip on a 2x smaller process, or a net efficiency gain to the arch of 10% die space) and then cranked the shit out of the voltage (that chip ran at 1.3+ V) to clock/yield it decently while selling it for peanuts, mixed with that other time their 256 mm² chip (using a new memory standard: GDDR5) was clocked to put a nail in the former high-end from nvidia (G92b) and gave the affordable version of nvidia's big-ass chip (GT200) a run for its money... all while being good enough to hit certain performance levels to keep people content at an affordable price. You know the bolded sentence above? Well, 14 nm should be closer to 2.1-2.32x smaller, and AMD has said their improvements to the arch account for ~30% of the efficiency improvement (which, when added together, starts to look a lot like how much brighter Polaris looks compared to when it was observed, blah blah blah). While that's probably accounting for the shrink (i.e. the net efficiency is divided by the shrink), that's still a similar if not greater efficiency improvement in arch than RV670... somewhere to the tune of ~13-15%. Just throwing it out there, but 4096/3584 is also ~14%... and surely having half the memory controllers (even if faster) amounts to something.


As for nvidia:

Guys... do you know that, with the way Maxwell is set up, it essentially requires 1/3 less bandwidth than AMD (not counting the compression of Tonga/Fiji, which lowers it to around 25% or slightly less) due to its cache/SFU setup? It's true. That alone should pretty much offset any worries between AMD's 512 GB/s and whatever nvidia comes to bat with, assuming they can at least muster 13000 MHz GDDR5X (6500 MHz x 2). Given Micron has said they have exactly that in the labs (coincidence, I'm sure, not at all theoretically a requirement imposed by nvidia), I wouldn't be too worried. While we'd all love for nvidia to come to bat with a higher compute ratio (say 224-240 SP per module instead of the 192 of Kepler or the 128 of Maxwell), there's no saying they won't... and simply won't up the ratio of cache to supplement the design. It's gonna be fine.
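A rough sketch of that effective-bandwidth argument follows; the 256-bit bus and the 25-33% traffic-savings figures are illustrative assumptions drawn from the post, not confirmed numbers.

```python
# If cache/compression cut memory traffic by a fraction 's', raw
# bandwidth B behaves like B / (1 - s) worth of uncompressed bandwidth.
# Bus width and savings fractions below are assumptions for illustration.

def effective_gb_s(rate_gbps_pin: float, bus_bits: int, savings: float) -> float:
    raw = rate_gbps_pin * bus_bits / 8
    return raw / (1.0 - savings)

RAW = 13.0 * 256 / 8  # 13 Gbps/pin GDDR5X on an assumed 256-bit bus
print(f"Raw: {RAW:.0f} GB/s")
for s in (0.25, 0.33):
    print(f"  ~{s:.0%} traffic savings -> worth ~{effective_gb_s(13.0, 256, s):.0f} GB/s")
```

On those assumptions, 416 GB/s of raw GDDR5X bandwidth behaves like roughly 550-620 GB/s of uncompressed traffic, which is the sense in which it could offset AMD's 512 GB/s.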

At the end of the day,

I have no idea who's going to perform better in any situation, but I wouldn't be surprised if both designs are fairly similar (and optimized versions of their former high-end). My gut (and yours, probably) says nvidia wins the pure gaming metrics. I also don't know who will be cheaper, but my gut (and yours, probably) says AMD. Still, though, if both can achieve 4K30 in mostly everything... and can overclock to 4K60 in the titles we expect they should... does it really matter? Each will surely have their strengths, be it compute features, pure gaming performance, price, etc... but I think we're in for one hell of a fight.

I'm just happy that one company, let alone both, is targeting <225 W, 8 GB, and probably short, easily coolable cards. That's what I want, and, I'm fairly certain, what the community needs, especially at an affordable price. While I feel for the guys that want 4K60 at all costs (and I'm one of them)... hey, there's always multi-GPU... and at least now it will make sense (given rarely does a game use over an 8 GB frame-buffer, which can't be said for 6 GB, let alone 4 GB, even if fast enough to switch textures out quickly).
#64
sweet
PP Mguire: Going on about Gameworks? You realize AMD has its own version of Gameworks, right?
Actually, AMD doesn't have any black-box features like GameWorks. Their proposed features are mostly open source.
#65
RejZoR
Wasn't it initially said that GDDR5X doesn't need new controllers, unlike HBM? That's why GDDR5X is more competitive while still doubling (or whatever the boost was) the performance of regular GDDR5.
#66
t88powered
The biggest question I look forward to having answered is whether the 16 nm silicon will clock like the third-generation 28 nm did.

If the new GTX_80's performance is comparable to the 980 Ti, and the GTX_80 core peaks out at 1200-1300 MHz vs 1500-1600 MHz on the 980 Ti's core, then we are not talking about too much gain per dollar, seeing that Nvidia will likely charge $550 for the GTX_80 at release.

That said, hopefully we see at least a 20-25% advantage over comparable cards and awesome clocking potential again. It would also be sick to have a VRM capable of 400+ watts with a chip that draws 150 watts @ stock speeds, but I am sure they will lock and skimp on the reference design as always.
#67
the54thvoid
Super Intoxicated Moderator
anubis44: I'm currently using an Intel CPU because, as I explained in a much older thread, I decided to confirm for myself reports of 'much faster performance' than my FX-8350 system in many games. So I loaded up on some Gravol, held my nose (it was the first Intel CPU I'd bought since my Celeron 300A that overclocked to 450 MHz back in 1998, after all), bought an i5 4690K on deep discount for $229.00 Canadian, and built a system around it. It turned out to be exactly 3 FPS faster than the FX-8350 in my favourite game, Company of Heroes 2, at the definitely-not-GPU-bound resolution of 1680x1050, but I had already given my FX-8350 system to my girlfriend, so I'm stuck with this piece of crap. :) Believe me, when Zen comes out, I'll be tearing out this i5 4690K and donating it to a relative faster than you can say 'Intel caught cheating, gets slap on the wrist'.

As for the 'inflated RAM' debacle: if nVidia didn't really do anything wrong, why did the CEO pretend to apologize (www.technobuffalo.com/2015/02/25/nvidias-ceo-apologizes-for-gtx-970-memory-controversy/) and try to reassure everybody that it 'won't happen again', yet allow board partners to continue doing it? That doesn't sound kosher to me. I'm so sorry I did this unethical thing that I'm going to continue doing it for as long as I can?

As for 'ranting and raving', I think I've maintained a civil level of decorum, and backed up everything I've said with an explanation from personal experience or with references, so I hardly think it's just 'blatant bashing for no reason.' As for giving you peace so you can continue to be deceived by your beloved company: absolutely. I've said my piece here, and will stop 'derailing' the thread. Looking forward to seeing you with a Radeon in your system specs sometime soon. :)
I can understand your position. I will not buy Apple products because I disagree with their business policy and marketing smoke and mirrors.
But, as a hobbyist PC builder, I will buy what is best for the job, and last year that was Nvidia's 980 Ti. I held off for the release of Fury X, as the hype promised so much, but it failed to 'blow me away'. So I bought my current card after that disappointing release.

If AMD RTG are in the ascendancy (and I wish they'd broken them off as ATI), I will again buy what is best going forward. What would be beneficial to all of us is seeing the Pascal architecture in a GP104 card. There is no doubt the benchmark test suites in use will show Nvidia's strengths and weaknesses. If they haven't evolved their warp schedulers to deal with more queues, then we'll know about it.

As long as AMD allow their next card to have AIB versions, I'll be happy to buy.
#68
HumanSmoke
RejZoR: Wasn't it initially said that GDDR5X doesn't need new controllers, unlike HBM? That's why GDDR5X is more competitive while still doubling (or whatever the boost was) the performance of regular GDDR5.
The memory controllers need minor revision (almost certainly done by now) - the pinout is increased (190 pins per IC, compared with 170 for standard GDDR5 chips) to allow for the doubled data rate per cycle, and the prefetch architecture is likewise doubled. The IMC logic blocks are very likely similar enough to be interchanged readily with the rest of the GPU. The plus side is that the chips are supposed to operate in the 2-2.5 W range (so 16-20 W for 8x1 GB chips) and offer the quad data rate. The downside is that, due to the increased pinout, trace layout for the PCB will be a little more complex and assembly will be a little more exacting - the GDDR5X chips are also slightly smaller (14x10 mm rather than 14x12 mm for GDDR5), so the higher pin count means smaller solder balls closer together.
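Tallying up those figures as a quick sketch (per-chip power range and pin counts as given above; the 8x1 GB layout is an assumed configuration matching the 8 GB cards discussed earlier):

```python
# Quick tally of the GDDR5X figures above: total memory power for an
# assumed 8 x 1 GB configuration, and the extra pins to route per board.

CHIPS = 8
POWER_RANGE_W = (2.0, 2.5)          # reported per-IC operating range
PINS_GDDR5X, PINS_GDDR5 = 190, 170  # balls per IC

lo, hi = (p * CHIPS for p in POWER_RANGE_W)
extra_per_ic = PINS_GDDR5X - PINS_GDDR5
print(f"8 x 1 GB GDDR5X: {lo:.0f}-{hi:.0f} W total memory power")
print(f"Extra balls per IC: {extra_per_ic} "
      f"({CHIPS * extra_per_ic} more to route across the board)")
```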
the54thvoid: I can understand your position. I will not buy Apple products because I disagree with their business policy and marketing smoke and mirrors.
I tend to also *try* to boycott some companies based on how they operate. Apple is one I can happily avoid, but the true predators of the tech world are quite hard to stay sanitized from. Samsung is about the most despicable conglomerate on the face of the earth - bribery, extortion, kickbacks, price fixing, bid rigging - they pretty much tick every box for scumbag practices, but it can be bloody difficult to boycott a company that is OEM/ODM for so many third party products. The same can be said for Qualcomm and LG (anubis44's panel of choice) and a host of other companies whose tentacles wind through a huge volume of products not directly carrying their brand.
#69
matar
I can't wait. If they sell the GTX 1070 for $350, I will buy two for SLI rather than one GTX 1080 for $550.
#70
Kaotik
The problem is, GDDR5X manufacturing won't start until late summer; Micron has mentioned August earlier. There's no way for NVIDIA to launch a GDDR5X product in April or May.
#71
ZoneDymo
rtwjunkie: No, you did not read. Reading comprehension is fundamental. What I said was: once you start working toward retirement, have people you know and love near death because of heart valve problems, deal with raising and providing for your children, work to pay your bills, and enjoy as well as deal with real life, then getting as worked up and angry as anubis44 is about a GPU company becomes laughable.

What a GPU company does or sells is nothing in the grand scheme of things that matter, and won't actually affect your life.

Nowhere did I go on about starving kids in Africa and whatnot. That was your stretch, not mine.
Ermm, so you are contradicting yourself: you say I did not read, yet then bring up reading comprehension... aka reading but not understanding... good job.

The starving kids in Africa are exactly the same joke stretch you made with your heart valve problems etc.; it literally has nothing to do with the conversation or with what to get worked up about. Honestly, how you cannot see this is beyond me.

We are talking about GPUs and you start on about retirement... well, do I really have to repeat it? You put all that irrelevant information right on display...
It's again a non-argument.

And wait... starving children in Africa is a stretch, but "deal with raising and providing for your children" is not? It's basically the same issue, except with a little less selfishness (aka your children above other children) involved.
HumanSmoke: Then you clearly have no understanding of what the word "hyperbole" actually means.
rtwjunkie simply said to another poster that they might want to put the business of the GTX 970 in particular, and Nvidia in general, into an appropriate context rather than railing against a piece of hardware in some OTT outpouring of anger.
You then decided to insert yourself into the conversation, attempting to undermine rtw's perfectly reasonable and measured stance by launching into some hyperbolic nonsense, and are now upping the derp ante by trying to play the persecution card.
In the context of PC hardware being discussed on a PC hardware forum, you mean?
Yeah... don't know what he was thinking...
Totally, this is the place we should talk about starving children, cancer, etc. etc., what is important in life, like life itself... yep, seems just about right.
Honestly, how you cannot see that what rtwjunkie said is exactly the opposite of putting things in context, aka taking them OUT of context, is beyond me.
My remark about life itself takes that out-of-context reasoning to a further extreme, to illustrate how much of a non-argument it really is.
And if that is too much to understand, then I'm sorry; I really cannot see how I can possibly make it any clearer.
#72
HumanSmoke
Kaotik: The problem is, GDDR5X manufacturing won't start until late summer; Micron has mentioned August earlier. There's no way for NVIDIA to launch a GDDR5X product in April or May.
That is incorrect. Micron announced that mass production wouldn't start until the summer. That does not mean chips won't be available during the ramp to volume production, and Micron have confirmed as much, having had a test, evaluation, and verification program going for some time:
Micron’s GDDR5X program is in full swing and first components have already completed manufacturing. We plan to hit mass production this summer.
Silicon programs don't ramp to volume instantaneously. As I mentioned earlier, Hynix announced mass production of HBM just a week before the Fury X launch. Is it really feasible that the whole Fiji GPU package, card assembly, packaging, and shipping were all accomplished in a week?
#73
PP Mguire
anubis44: I'm currently using an Intel CPU because, as I explained in a much older thread, I decided to confirm for myself reports of 'much faster performance' than my FX-8350 system in many games. So I loaded up on some Gravol, held my nose (it was the first Intel CPU I'd bought since my Celeron 300A that overclocked to 450 MHz back in 1998, after all), bought an i5 4690K on deep discount for $229.00 Canadian, and built a system around it. It turned out to be exactly 3 FPS faster than the FX-8350 in my favourite game, Company of Heroes 2, at the definitely-not-GPU-bound resolution of 1680x1050, but I had already given my FX-8350 system to my girlfriend, so I'm stuck with this piece of crap. :) Believe me, when Zen comes out, I'll be tearing out this i5 4690K and donating it to a relative faster than you can say 'Intel caught cheating, gets slap on the wrist'.

As for the 'inflated RAM' debacle: if nVidia didn't really do anything wrong, why did the CEO pretend to apologize (www.technobuffalo.com/2015/02/25/nvidias-ceo-apologizes-for-gtx-970-memory-controversy/) and try to reassure everybody that it 'won't happen again', yet allow board partners to continue doing it? That doesn't sound kosher to me. I'm so sorry I did this unethical thing that I'm going to continue doing it for as long as I can?

As for 'ranting and raving', I think I've maintained a civil level of decorum, and backed up everything I've said with an explanation from personal experience or with references, so I hardly think it's just 'blatant bashing for no reason.' As for giving you peace so you can continue to be deceived by your beloved company: absolutely. I've said my piece here, and will stop 'derailing' the thread. Looking forward to seeing you with a Radeon in your system specs sometime soon. :)
So you're basing your experience on one game that isn't demanding at all? The minimum requirements for that game are based around Core 2. It definitely didn't hit machines as hard as the first one did when it came out; it was rather docile compared to the tech we had available 3 years ago. The 4690K is basically better than the 8350 at practically anything besides a few multi-threaded instances. Even the benchmarks you posted showed DX12 carrying the FX line because their single-threaded performance is so low. Idk how many people I've built rigs for that were coming from an FX 8*** or 6*** and said it was a night-and-day difference, but they actually went in with an unbiased view and play a wide variety of games. 'Course, if you want to continue chugging along on the hate train, be my guest.

JHH publicly apologizing was a professional courtesy which wasn't necessary at all. In fact, nobody got cheated, and there is a very technical reason why the last half of the RAM can't be addressed at full speed when the RAM is occupied. Actually, it isn't that technical at all, and anybody with limited hardware knowledge can understand why it happens. The real fact of the matter is that it doesn't actually hurt performance, and I've done extensive testing to prove this, as did renowned tech sites. I call it an inflated debacle simply because one dude ran some code, said a thing, and then everybody went spreading a metric fuck ton of misinformation, which in turn caused JHH to apologize like that. In fact, if Nvidia were such an evil company as you believe, he wouldn't have bothered, and the company as a whole would have ignored it completely without a care. People keep gobbling the cards up because of the above fact that they run great and are great little 1080p beasts. If the last half of VRAM were such an issue, then it wouldn't be the dominating card on Steam, and wouldn't have such a market presence like it does. In fact, it wouldn't be one of the most sought-after cards of the generation for gaming.

Sure, you're way more civil than I've seen, and I'll give you that, but it doesn't really make my statement any less true. You had two bad experiences and want to go off on your own stance, like a bitter old woman (not a personal attack, just how it looks), that Nvidia is the most evil company out there, when really they're not. They didn't steal your sweet roll; they're a business out there to make money, just like AMD. I'd say Samsung is by far worse than Nvidia when it comes to it, but how many Samsung products do you own (that's rhetorical, btw)? I wouldn't say Nvidia is my 'beloved' company; I just go for the absolute best performance in graphics. I don't have an AMD product in my specs, but I do own a lot of AMD product, with the most recent being the 390X. I don't own a Fury card because my bud has two of them and I can borrow them when I want to for testing. I have no problem using AMD product if they have something superior, though, because I go for whoever has the best performance and ignore the politically correct bullshit. I don't let bad experiences hinder my purchasing cycle unless it's something to do with CS and RMA (like Gigabyte). I don't go on every Gigabyte thread bashing their product, though; I just simply don't buy their stuff and leave it at that.
sweet: Actually, AMD doesn't have any black-box features like GameWorks. Their proposed features are mostly open source.
No, they don't have proprietary tech, but I do know there are some instances where some of the stuff they come out with runs obviously, and probably purposely, worse on Nvidia cards (clearly remembers TressFX's initial release). It's really no different. Most of the enhanced technology that Nvidia offers in their Gameworks program doesn't even get integrated into games, even though a LOT of it is awesome. I've never seen Grassworks, Waveworks, or a few others, simply because they are game changers and developers don't want to segregate their gaming experience to one party. I know of two titles that are supposed to implement these features but haven't seen them see the light of day.

It is a well-known fact that AMD still collaborates with developers, just like Nvidia does, on certain games to push their brand, and it's literally the same thing. One such example was linked in this very thread: Hitman. AoS is another well-known example, and typically AMD cards show a better performance figure than Nvidia does on such titles because the games are initially coded to run better on them. I guess this is where PhysX is naturally going to be the counter-argument, but on AMD-based rigs the PhysX is downgraded and run on the CPU, and in most cases it can be turned off, either by a setting in-game or otherwise.

The whole deal is exceptionally blown out of proportion, but in all reality, if it all weren't so split, things like Grassworks would be awesome. Idk how many times I've come across grassy segments in a game and thought to myself that this would make my experience 10x better if it actually looked realistic instead of like a bunch of flat sprites tossed together to form foliage. Being able to walk on a grassy field that actually flattens and interacts with the character is just a minute detail that people overlook, but it could be very awesome. Of course, most would argue against it simply to argue, with the reason "burn Nvidia, down with the evil Gameworks!", whatever.
#74
StefanM
FYI: the whitepaper from last November reads HBM

#75
the54thvoid
Super Intoxicated Moderator
StefanM: FYI: the whitepaper from last November reads HBM

A white paper is not the finished article. If a selected component isn't ready to be utilised, they may use an alternative, i.e. GDDR5X.
Besides, the top tier will probably still be using it, so it's erroneous to cast HBM as the memory of all Pascal cards. They could even keep it for the compute part only.
I mean, smartphones sometimes contain different chips despite being the same model. If the advertised performance is delivered, the internal hardware is actually irrelevant.

Edit: NVLink won't be on mainstream desktop either; it's for the compute cards.