Wednesday, October 17th 2012

NVIDIA Kepler Refresh GPU Family Detailed

A 3DCenter.org report sheds light on what NVIDIA's GPU lineup for 2013 could look like. According to the report, NVIDIA's next-generation GPUs could follow a path similar to the previous-generation "Fermi Refresh" (GF11x), which turned the performance-per-watt equation back in NVIDIA's favor; this time around, though, the company's current GeForce Kepler family already holds an established energy-efficiency lead. The "Kepler Refresh" family of GPUs (GK11x), according to the report, could deliver significant gains in cost-performance with a bit of clever re-shuffling of the GPU lineup.

NVIDIA's GK104 GPU exceeded performance expectations, which allowed it to drive this generation's flagship single-GPU graphics card for NVIDIA, the GTX 680. That gave the company time to perfect the largest chip of this generation, and its foundry partner time to refine the 28 nm manufacturing process. By the time Kepler Refresh goes to market, TSMC will have refined the process enough for mass production of GK110, a 7.1 billion-transistor chip on which NVIDIA's low-volume Tesla K20 GPU compute accelerator is currently based.

The GK110 will take back the reins of powering NVIDIA's flagship single-GPU product, the GeForce GTX 780. This product could offer a massive 40-55% performance increase over the GeForce GTX 680, at a price anywhere between US $499 and $599. The same chip could even power the second-fastest single-GPU SKU, the GTX 770. The GK110 physically packs 2880 CUDA cores and a 384-bit wide GDDR5 memory interface.
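To put the wider memory interface in perspective, here is a quick theoretical-bandwidth sketch in Python. Note the 6 Gbps GDDR5 data rate for GK110 is an assumption carried over from the GTX 680; the report only gives the 384-bit bus width.

```python
# Theoretical memory bandwidth: bus width (bits) / 8 * per-pin data rate (Gbps).
# The 6 Gbps data rate for GK110 is assumed (same as the GTX 680), not reported.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(f"GTX 680 (256-bit): {bandwidth_gb_s(256, 6.0):.0f} GB/s")  # 192 GB/s
print(f"GK110   (384-bit): {bandwidth_gb_s(384, 6.0):.0f} GB/s")  # 288 GB/s, +50%
```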

Moving on, the real successor to the GK104, the GK114, could form the foundation of high-performance SKUs such as the GTX 760 Ti and GTX 760. The chip has the exact same specifications as the GK104, leaving NVIDIA to tinker with clock speeds to increase performance. The GK114 will be relegated to performance-segment SKUs from the high-end segment it currently powers, so even with minimal increases in clock speed, the chip will achieve sizable performance gains over the current GTX 660 Ti and GTX 660.

Lastly, the GK106 could be refreshed into the GK116, likewise retaining its specifications and leaving room for clock-speed increases, much in the same way as the GK114. It, too, gets a demotion, to the GTX 750 Ti and GTX 750, so with minimal R&D the GTX 750 series gains a sizable performance lead over the previous generation.
Source: 3DCenter.org

127 Comments on NVIDIA Kepler Refresh GPU Family Detailed

#51
BigMack70
@cadaveca

Unless I'm seriously misunderstanding you, you're arguing that GK110 is/was scrapped due to inherent problems with die size, even though we're sitting here reading and commenting on an article proposing that GK110 is going to be released just fine for the 7xx-series cards.

That makes no sense.

Much more likely is that their yields sucked last year on this chip, so they bumped GK104 up a tier from 660 Ti to 680, while putting GK110 on the back burner until they got the yield issues fixed.
#52
cadaveca
My name is Dave
BigMack70: That makes no sense.
Neither does the doubling of transistor count, but only a 55% performance increase...unless...it's so big they had to drop clocks by 40%...


:eek:



:shadedshu

What's in the news doesn't make sense. Period. I'm not gonna argue that it's bogus...if you don't realize that yourself...well...I won't argue with you.


:D


It's twice the size of the GTX 680, this news says, but only 55% faster.

We know it'll use the same silicon process as the current GTX 680...how can die size NOT be a problem?


If it wasn't, then doubling the same transistors should equal a doubling of performance, no?
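For what it's worth, here's a naive back-of-the-envelope check of what the rumored numbers imply (a sketch assuming performance scales purely with cores × clock, which real GPUs never quite achieve; the GTX 680 figures are its published specs, the rest follows from the rumor):

```python
# Naive scaling sketch: assume performance ~ cores * clock.
# GTX 680 (GK104): 1536 CUDA cores at ~1006 MHz base (published specs).
gk104_cores, gk104_clock_mhz = 1536, 1006
gk110_cores = 2880  # full GK110, per the article

# Taking the rumored +55% over GTX 680 at face value, this is the
# clock GK110 would need under pure cores*clock scaling:
implied_clock = 1.55 * gk104_cores * gk104_clock_mhz / gk110_cores
print(f"implied GK110 clock: {implied_clock:.0f} MHz")                        # ~832 MHz
print(f"clock deficit vs GK104: {1 - implied_clock / gk104_clock_mhz:.0%}")   # ~17%
```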
#54
cadaveca
My name is Dave
It would still leave it twice as large as the GTX 680...
#55
NHKS
BigMack70: We'll never know with 100% certainty, but I think that it makes better sense of the available data that the original GTX 6xx lineup was to include both GK110 (680/670?) and GK104 (660 Ti/660).
that's right.. 'no one can say for sure', but I am inclined to believe your case.. also that at least on paper, i.e., in the 'plans', nvidia could possibly have included the GK1x0 (100 or 110) in the GTX 6xx line-up, but since it's a big chip (500+ mm²) and TSMC's 28nm process was in its nascent stages, nvidia might have changed plans anticipating poor yields (they might have made a pre-production study too)
cadaveca: HD 7970:

www.techpowerup.com/reviews/ASUS/HD_7970_Matrix/images/gpu.jpg

GTX 680:

www.techpowerup.com/reviews/Point_Of_View/GeForce_GTX_680_TGT_Ultra_4_GB/images/gpu.jpg


Note how the AMD chip has nearly 33% more transistors, but is barely physically larger than GTX 680.

If nVidia could have fit more functionality into the same space, they would have.

...
you seem to make a valid point, sir.. but I am not convinced just looking at the pics (they are zoomed at slightly different levels, judging by the match-stick size).. moreover, based on calculation:
Tahiti ≈ 365 mm²
GK104 ≈ 295 mm²

Difference in size ≈ 24%
Difference in # of transistors ≈ 33%

looking at the above numbers, Tahiti does pack more transistors, but the die sizes are not too close either
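(Taking those two approximate percentages at face value, the implied density gap is actually small; a one-liner to check, using only the numbers above:)

```python
# Implied transistor-density advantage, using only the two approximate
# percentages quoted above (both are rough figures from this thread).
area_diff, transistor_diff = 0.24, 0.33
density_ratio = (1 + transistor_diff) / (1 + area_diff)
print(f"Tahiti packs ~{density_ratio - 1:.0%} more transistors per mm²")  # ~7%
```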
#56
BigMack70
cadaveca: It would still leave it twice as large as the GTX 680...
Hey, if this is the smoking gun that for you means that all this isn't possible/makes no sense, then that's fine.

However, there's nothing that inherently makes that the case. S|A's article (linked above) offers a fairly good explanation here, I think.

And of course, we're only dealing in rumorville, so we'll see what happens.
#57
cadaveca
My name is Dave
NHKS: you seem to make a valid point, sir.. but I am not convinced just looking at the pics (they are zoomed at slightly different levels, judging by the match-stick size).. moreover, based on calculation:
Tahiti ≈ 365 mm²
GK104 ≈ 295 mm²

Difference in size ≈ 24%
Difference in # of transistors ≈ 33%

looking at the above numbers, Tahiti does pack more transistors, but the die sizes are not too close either
Yeah, and Tahiti has a 384-bit bus, so it really needs to be physically bigger, for more connections to the PCB for the added RAM chips.

see, to me, a mid-range chip is under 250 mm², like the GTX 660 and HD 7870. All these claims of the GTX 680 being mid-range do not make sense.
BigMack70: Hey, if this is the smoking gun that for you means that all this isn't possible/makes no sense, then that's fine.

However, there's nothing that inherently makes that the case. S|A's article (linked above) offers a fairly good explanation here, I think.

And of course, we're only dealing in rumorville, so we'll see what happens.
No, really, the smoking gun is that design schedules ALWAYS work this way.


See, nVidia and AMD are both constrained by what TSMC offers. They both buy wafers from TSMC, TSMC makes all chips for both, and as such, they are even using the same process.


AMD packs 33% more transistors into HD 7970. It's not 33% bigger.

Nvidia may be able to further increase transistor density, for sure, but it's not going to be enough to qualify the GTX 680 as "mid-range".
#58
BigMack70
cadaveca: Yeah, and Tahiti has a 384-bit bus, so it really needs to be physically bigger, for more connections to the PCB for the added RAM chips.

see, to me, a mid-range chip is under 250 mm², like the GTX 660 and HD 7870. All these claims of the GTX 680 being mid-range do not make sense.
That's because you a priori exclude the possibility of GK110's existence/plausibility based on size. If GK110 exists as stated, then it defines what the high-end chip is, and GK104 is comfortably midrange in comparison.
#59
cadaveca
My name is Dave
What I am denying is the ability to cool a chip that large in size, yes.

I'm not denying it might have been planned...but reality says, since chips take like 2 years to design, that they knew since day one it wasn't going to happen. They knew LONG before those "claims" came out that GTX 680 was the chip we got.




GK110 or GK100 or whatever...was NEVER meant to be GTX 680. Nor was it meant to compete with the current HD7970.


I'm not denying that a new GPU is coming, either. :p
#60
BigMack70
cadaveca: I'm not denying it might have been planned...
All I've been arguing is that they planned GK110 as the GTX 680 and had to scrap it and bump the GK104 up a level.

Basically, I'm arguing that GK104 was not drawn up as a high end GPU. It wound up filling that role just fine, but that doesn't mean it was planned that way.
#61
eidairaman1
The Exiled Airman
I can only guess the reason this is happening is because they're losing a little share, unless they're gearing up to launch a new series after the HD 8s come out.
#62
Casecutter
HumanSmoke: logic would dictate that the best GK110s are destined to end up as Tesla/Quadro parts, which would leave the GeForce parts as either salvage and/or high-leakage parts. In either event I wouldn't expect the GTX 780 to be widely available, which is why the pricing is a head-scratcher. GTX 780 (GK110) @ $550 (or more, depending on % of full die)
That’s going to be the question… has TSMC got their process to the point that it makes parts that are viable for gaming enthusiasts and not out of bounds on power? I think with geldings from Tesla and a more tailored second-gen boost mapping this is saying they can, but I would say it won't be $550. Something tells me these will all be like GTX 690s: Nvidia only outing a singular design and construction as a factory release.
iO: I just don't believe these performance increase claims. They're gonna double the transistor count but won't get even 50% more speed. GK110 will be highly optimised for DP computing.
GK110 will definitely shine in some selected benchmarks, but there will be a lot of die area that won't be touched by even the latest games.
Big Kepler just makes no sense as a gaming card. Huge die size, huge power consumption and a huge price tag. GK114 might be worth waiting for...
Nvidia can minimize the shortcomings and extol the virtues to attain a card that enthusiasts will exculpate, just to exclaim its presence in the market, proclaiming how great thou art! (for $600 and a 280 W TDP)
crazyeyesreaper: they were conservative in order to get better yields. Essentially, most chips yes can do 1050, but not all can at the proper voltage or TDP level. They also have to harvest chips for the 7950; lower clocks meant more chips, and more usable chips means greater volume to put on store shelves.

Regardless, the refresh will probably see Nvidia take the lead, but not by a whole lot; they have more room to play when it comes to TDP than AMD does right now.
I think it was always a TSMC issue that caused both their woes, but yes, once Nvidia got good stuff, GK104 surprised them as to what could be wrung out, but they had to use Boost to ensure they wouldn't have chips committing hara-kiri. This time around they'll get more aggressive with boost, more tolerant of heat and power, so that's where the gains will really come from, but it will effectively quell any OC'n.
#63
cadaveca
My name is Dave
BigMack70: All I've been arguing is that they planned GK110 as the GTX 680 and had to scrap it and bump the GK104 up a level.

Basically, I'm arguing that GK104 was not drawn up as a high end GPU. It wound up filling that role just fine, but that doesn't mean it was planned that way.
It HAD to be. You can only fit so many transistors into so much space. :p There is no way it could have ever worked, just like AMD's 2560-shader Cypress couldn't work either.
#64
BigMack70
So Nvidia couldn't possibly have just made a design mistake?? :wtf:

Because companies never do that...
#65
NHKS
cadaveca: see, to me, a mid-range chip is under 250 mm², like the GTX 660 and HD 7870. All these claims of the GTX 680 being mid-range do not make sense.
if you expect mid-range chips to have sub-250 mm² die sizes, then GF104 (GTX 460) & even GF114 were well over 300 mm².. as for me, I am going by the 'naming' convention of the Fermi gen.. it had GF100 & GF110 as the high-end chips, so the same could be said for Kepler (knowing that GK110 exists)...

anyways, with due respect I wish to end it here, to each his own (it's all speculation)..
#66
Benetanegia
cadaveca: All these claims of the GTX 680 being mid-range do not make sense.
It may not make sense to you, but it makes all the sense in the world. You are arguing against history. Are you going to suggest that GF104 was not a midrange chip? It was 332 mm^2. Significantly bigger than GK104 and definitely bigger than your 250 mm^2 figure.

All Nvidia high-end chips (GPU + HPC) of past generations have been close to or bigger than 500 mm^2: G80 484 mm^2, GT200 576 mm^2, GF100 520 mm^2.

Time to have a reality check, man. GK100/110 IS the high-end chip. A chip that Nvidia decided was not economically feasible these past months, when TSMC supply was so constrained and yields (for everybody) were not good. End of story. It really is. There's no problem with it other than that and the fact that by being bigger it's going to have lower yields and a lower number of dies, nothing that Nvidia didn't do previously or that they are afraid of. GK106 took long to release too. Was it because it was not possible? No, because it was economically less "interesting" than GK104, and so was GK110. If they could win with a 294 mm^2 chip there was absolutely no reason to release the big one and have lower margins, as they had to with the first Fermi "generation". HPC moves slower and relies on designs like the Oak Ridge supercomputer that would not have been ready back at the time, so more reason to delay.
#67
cadaveca
My name is Dave
Oh, I never meant to say that my expectations are the same as what the industry sets, but yes, if a 28nm, and let me repeat...a 28nm chip is over 250 mm², then yes, I would not consider it a mid-range chip. If you need more than that space (and neither AMD nor nvidia did), then you've got some serious engineering issues, for sure.


Of course bigger processes took up more space. :p


Silly.:roll:


I never said GK100 or GK110 is NOT the high-end chip...it sure is...but it was NEVER meant to be the GTX 680.

TSMC had yield issues. :p That is comical. Yeah, blame the infant technology. :laugh:

Of course it was horrible. nVidia KNEW it would be, as did AMD...and they dealt with it, as they have with every process.
BigMack70: So Nvidia couldn't possibly have just made a design mistake?? :wtf:

Because companies never do that...
Actually, no, I think nVidia did NOT make a big mistake at all, and that GK100 was planned for next year ALWAYS, rather than this January or whatever.


It's not like Kepler is some new thing...it's a tweaked Fermi. nVidia admitted big mistakes with Fermi, so I do expect that they were extra-cautious with Kepler.


As will be the next chip.
#68
BigMack70
I like that you repeatedly just a priori dismiss dozens and dozens of reputable stories/rumors from the past year for no real reason other than your own theories. :laugh:
#69
cadaveca
My name is Dave
BigMack70: I like that you repeatedly just a priori dismiss dozens and dozens of reputable stories/rumors from the past year for no real reason other than your own theories. :laugh:
Stories and rumours. Yep.


Except, of course, as a reviewer, I do have a bit more info than the average joe, although, not as much as many other reviewers do, I'm sure.


See, the difference between me and other reviewers..I do this for fun, as a hobby..and not for cash.


I'm not posting news for hits, because that garners money for the site with ads...


TPU isn't built upon that, at all.

This is speculation, after all, not fact, so yeah, I offer a different perspective...So?

At the end of the day, it's me playing NOW with the hardware you guys want to buy IN THE FUTURE. I don't really care who has the faster chips, who is cheaper, or what you buy...this stuff just shows up on my doorstep, at ZERO cost.



I'm just not afraid to be wrong. :laugh: In the future, we can say "look, this was right, and this wasn't"...and I won't care if I'm wrong. :p You might...but I won't.:roll:
#70
BigMack70
Pulling rank as a reviewer doesn't mean rumors/stories are untrustworthy just because you don't believe them and/or they don't fit your ideas of what is or is not going on. Maybe if we were talking about some isolated or crazy things, but not when we're talking about widespread info.
#71
cadaveca
My name is Dave
BigMack70: Pulling rank as a reviewer doesn't mean rumors/stories are untrustworthy just because you don't believe them and/or they don't fit your ideas of what is or is not going on. Maybe if we were talking about some isolated or crazy things, but not when we're talking about widespread info.
If I had actual info about an unreleased product, I wouldn't be able to talk about it.

That's where me being a reviewer is important.


Who cares that I review stuff. It's not important, really. Like, really...big deal..I get to play with broken stuff 9/10 times, when it's pre-release. I've said it before, I'd much rather have stuff later, but I guess some OEMs value my feedback prior to launch. That's like the whole "ES is better for OC" BS.

The fact I do that for them, for free...well...it's not as big of a deal as most seem to think it is. I actually think it's kind of the opposite...

At the same time though, those that DO have info about unreleased products, like myself, also cannot say much, except what they are allowed, or their info cannot be real.


THAT is a fact I learned as a reviewer, that many seem to not know. That is just how it works. Either this info is force-fed, or it's fake.
#72
eidairaman1
The Exiled Airman
cadaveca: If I had actual info about an unreleased product, I wouldn't be able to talk about it.

That's where me being a reviewer is important.


Who cares that I review stuff. It's not important, really. Like, big deal..I get to play with broken stuff 9/10 times, when it's pre-release. I've said it before, I'd much rather have stuff later, but I guess some OEMs value my feedback prior to launch.

The fact I do that for them, for free...well...it's not as big of a deal as most seem to think it is.

At the same time though, those that DO have info about unreleased products, like myself, also cannot say much, except what they are allowed, or their info cannot be real.


THAT is a fact I learned as a reviewer, that many seem to not know. That is just how it works.
well, that's how they improve it later, but honestly it's for PR when the NDA is lifted too, bro
#73
Benetanegia
cadaveca: I never said GK100 or GK110 is NOT the high-end chip...it sure is...but it was NEVER meant to be the GTX 680.
Explain why GK110 wastes so much space on 240 texture mapping units, ROPs, tessellators and whatnot, if it was never meant for a high-end gaming card? ;)
cadaveca: TSMC had yield issues. :p That is comical. Yeah, blame the infant technology. Of course it was horrible. nVidia KNEW it would be, as did AMD...and they dealt with it, as they have with every process.
Of course they dealt with it. They released the mid-range chip as the high-end card knowing that it would be able to compete with AMD's fastest chip. :laugh:

No one's blaming the "infant tech". Both AMD and Nvidia design their chips according to TSMC's guidance on the process. They have to, since they have to design the chips long before TSMC is ready for production. They design around it and weigh the feasibility and profitability based on it. Guidance is one thing and reality is often a very different one. Of course AMD, having been a fabbed chip maker in the past, knows better than Nvidia how to deal with that. We are not discussing that, so to the point: trying to deny that volume and yield issues are TSMC's problem is stupid. Their guidance for the process and reality didn't match, and everyone has suffered from it, be it Nvidia, Qualcomm or AMD, even if AMD has not been as vocal. Each company has very different things to address in their conference calls, and trying to extract any conclusions from whether they talk about TSMC issues or not is again stupid. AMD is in far more trouble and has many more things to excuse than having to explain why profit margins on the GPU business are slightly lower than expected.

So imagine we are Nvidia. 28nm is not as good as it was "promised" to be. We get close to Kepler release dates. Volume is not good, and yields are not good either, if not worse than at 40nm, as Jen-Hsun Huang said. Nvidia had 2 options: repeat GF100, or release GK104 as the high end. The answer is simple. In a wafer you can have 201 GK104 die candidates, and ~100 GK110 candidates. Again, knowing that GK104 would be close to Tahiti performance or beat it, it's an easy choice*. GK104 at $500. There was no price point at which GK110 would have been more profitable, no matter how much faster than the HD 7970 it could have been. With the severely low 28nm volume, they would never be able to sell enough GK110 cards so as to be more profitable than they have been with GK104, even if they had achieved 100% market share.

* More so when you know that the next node will not be ready until 2-3 years later. You'll have to do a refresh and you'll have to make it appealing, faster, so by doing what they did, they can kill 2 birds with one stone.
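The die-count figures above are easy to sanity-check with the standard gross-dies-per-wafer formula (a rough sketch: it ignores defect density and scribe lines, and the ~294 mm² and ~550 mm² die sizes are the ones discussed in this thread, not official figures):

```python
import math

def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """Standard gross-die estimate: wafer area / die area, minus an
    edge-loss term. Ignores defect density and scribe lines, so this
    is a candidate count, not a yield figure."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

print(gross_dies_per_wafer(294))  # GK104: ~201 candidates, matching the figure above
print(gross_dies_per_wafer(550))  # GK110 (assumed ~550 mm^2): ~100 candidates
```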
#74
BigMack70
What I'm saying is that your status as a reviewer gives no inherent credibility to your dismissal of tech rumors/stories (sorry to break it to you...). That might be true if the stories were from people clueless about tech, or if everyone who is well informed about GPUs agreed with you, but that's not the case. When you get corroborating evidence from many reliable and semi-reliable tech sources, there's something to it.

en.wikipedia.org/wiki/Argument_from_authority#Disagreement
en.wikipedia.org/wiki/Appeal_to_accomplishment
#75
WhoDecidedThat
Hmmm. Nvidia. One request: try to release these GPUs without GPU Boost. It really hampers overclocking. If the GTX 680 didn't have GPU Boost, it would have easily reached 1.4 GHz with good binning.