Wednesday, October 17th 2012

NVIDIA Kepler Refresh GPU Family Detailed

A 3DCenter.org report shed light on what NVIDIA's GPU lineup for 2013 could look like. According to the report, NVIDIA's next-generation GPUs could follow a path similar to the previous-generation "Fermi Refresh" (GF11x), which turned the performance-per-watt equation back in NVIDIA's favor, although this time the company's current GeForce Kepler lineup already holds an established energy-efficiency lead. The "Kepler Refresh" family of GPUs (GK11x), according to the report, could bring significant increases in cost-performance with a bit of clever re-shuffling of the GPU lineup.

NVIDIA's GK104 GPU exceeded performance expectations, which allowed it to drive this generation's flagship single-GPU graphics card for NVIDIA, the GTX 680. That gave the company time to perfect the biggest chip of this generation, and its foundry partner time to refine the 28 nm manufacturing process. When it's time for Kepler Refresh to go to market, TSMC will have refined the process enough for mass production of the GK110, a 7.1 billion-transistor chip on which NVIDIA's low-volume Tesla K20 GPU compute accelerator is currently based.

The GK110 will take back the reins of powering NVIDIA's flagship single-GPU product, the GeForce GTX 780. This product could offer a massive 40-55% performance increase over the GeForce GTX 680, at a price ranging anywhere between US $499 and $599. The same chip could even power the second-fastest single-GPU SKU, the GTX 770. The GK110 physically packs 2,880 CUDA cores and a 384-bit wide GDDR5 memory interface.
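
As a rough illustration of what the wider bus buys, peak GDDR5 bandwidth scales linearly with interface width. This is a sketch only; the 6 Gbps effective data rate below is an assumed figure, since the report does not specify memory clocks:

```python
# Peak GDDR5 bandwidth = (bus width in bytes) x (effective data rate).
# The 6 Gbps data rate is assumed for illustration only.

def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Return theoretical peak memory bandwidth in GB/s."""
    return (bus_width_bits / 8) * data_rate_gbps

print(peak_bandwidth_gbs(256, 6.0))  # GTX 680-style 256-bit bus: 192.0 GB/s
print(peak_bandwidth_gbs(384, 6.0))  # GK110-style 384-bit bus:   288.0 GB/s
```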

Moving on, the real successor to the GK104, the GK114, could form the foundation of high-performance SKUs such as the GTX 760 Ti and GTX 760. The chip has the exact same specifications as the GK104, leaving NVIDIA to tinker with clock speeds to increase performance. The GK114 will be relegated from the high-end segment it currently powers to performance-segment SKUs, so even with minimal increases in clock speed the chip will achieve sizable performance gains over the current GTX 660 Ti and GTX 660.

Lastly, the GK106 could be refreshed into the GK116, likewise retaining its specifications and leaving room for clock-speed increases, much in the same way as the GK114. It, too, gets a demotion, to the GTX 750 Ti and GTX 750, so with minimal R&D the GTX 750 series gains a sizable performance advantage over the previous generation.
Source: 3DCenter.org

127 Comments on NVIDIA Kepler Refresh GPU Family Detailed

#76
BigMack70
blanarahul: Hmmm. Nvidia. One request: try to release these GPUs without GPU Boost. It really hampers overclocking. If the GTX 680 didn't have GPU Boost, it would have easily reached 1.4 GHz with good binning.
+1 I would LOVE an option to disable GPU-Boost. Maybe put a dual BIOS or throw an option into the control panel or something.

If they do that and start allowing voltage control again, they'll have a far superior product for people wanting to overclock. GPU Boost nonsense + no voltage control kept me away from the GTX 680.
#77
eidairaman1
The Exiled Airman
BigMack70: +1 I would LOVE an option to disable GPU Boost. Maybe put in a dual BIOS or throw an option into the control panel or something.

If they do that and start allowing voltage control again, they'll have a far superior product for people wanting to overclock. GPU Boost nonsense + no voltage control kept me away from the GTX 680.
With the trend NV is following, especially after forcing EVGA to disable voltage tuning, I honestly don't think they will listen to customer feedback in this sense.
#78
BigMack70
I know, and I'm very sad about that. I often prefer Nvidia's GPUs, but if AMD offers me something that I can turn into a superior product via overclocking while Nvidia cripples that capability, as happened with the 7970 and 680, I'll take AMD's offering every time.
#79
cadaveca
My name is Dave
BigMack70: What I'm saying is that your status as a reviewer gives no inherent credibility to your dismissal of tech rumors/stories (sorry to break it to you...). That might be true if the stories were from people clueless about tech, or if everyone who is well informed about GPUs agreed with you, but that's not the case. When you get corroborating evidence from many reliable and semi-reliable tech sources, there's something to it.

en.wikipedia.org/wiki/Argument_from_authority#Disagreement
en.wikipedia.org/wiki/Appeal_to_accomplishment
I never said it did. I said that you must assume that what I post IS speculation only, since I do what I do, and I cannot post any real info about unreleased products.


And likewise, the same applies to any tech site.


That is all. GK110 is unreleased; nobody except NVIDIA employees and those who work at NVIDIA board partners knows anything about it, and none of them can comment due to NDA.


So anything, anything at all about it...is questionable.


Heck, it might not even actually exist, and is only an idea.


Post a pic of a GK100 chip, full specs and everything else official from NVIDIA, and I'll stop my speculation.

Otherwise, if you don't like my post... that's just too bad. The report button is to the left, if you like.

You can say all you like that it was planned; you have no proof, and neither do I. And neither of us, if we did, could post it. So I can think and post what I like, and so can you. It's no big deal... only you are making it a big deal that I do not agree with this news.
#80
Benetanegia
cadaveca: Post a pic of a GK100 chip, full specs and everything else official from NVIDIA, and I'll stop my speculation. You can say all you like that it was planned; you have no proof, and neither do I. And neither of us, if we did, could post it. So I can think and post what I like, and so can you. It's no big deal... only you are making it a big deal that I do not agree with this news.
Argument from ignorance. You ARE claiming both that GK100 never existed and that it didn't exist because it cannot be made, based on the fact that we cannot provide proof to disprove your theory. You are the only one claiming anything, using this argument-from-ignorance fallacy to back it up.

The rest of us are just saying that it is entirely possible and probable that GK100 existed and was simply delayed or slightly redesigned into GK110, in a move similar to GF100 -> GF110. The proofs, although rumors, are out there and have been for a long time. Rumors about chips don't always end up being entirely true, but there's always some truth to them. GK100 was mentioned many times. GK110 DOES exist. 2+2=4

All in all, Nvidia has already shipped cards based on the 7.1-billion-transistor GK110 chip, so the notion that such a chip cannot be made is obviously false.
#81
BigMack70
I've just never seen someone so ready to cavalierly dismiss a multitude of tech rumors based on their own idea of what is or is not possible from a manufacturing perspective...

To each his own I guess.
#82
cadaveca
My name is Dave
Benetanegia: Argument from ignorance. You ARE claiming both that GK100 never existed and that it didn't exist because it cannot be made, based on the fact that we cannot provide proof to disprove your theory. You are the only one claiming anything, using this argument-from-ignorance fallacy to back it up.

The rest of us are just saying that it is entirely possible and probable that GK100 existed and was simply delayed or slightly redesigned into GK110, in a move similar to GF100 -> GF110. The proofs, although rumors, are out there and have been for a long time. Rumors about chips don't always end up being entirely true, but there's always some truth to them. GK100 was mentioned many times. GK110 DOES exist. 2+2=4

All in all, Nvidia has already shipped cards based on the 7.1-billion-transistor GK110 chip, so the notion that such a chip cannot be made is obviously false.
BigMack70: I've just never seen someone so ready to cavalierly dismiss a multitude of tech rumors based on their own idea of what is or is not possible from a manufacturing perspective...

To each his own I guess.
Nah, actually, I'm claiming this since I know all the specs of GK110 already. I even have a die shot. And yeah, like you said, it is now for sale.

You can find info just as easy, too.

And because of this, I do think NVIDIA knew long before AMD's 7970 release that GK110 was not possible (which is when that news of the GTX 680 being a mid-range chip appeared), and as such it was never meant to be the GTX 680. Is GK110 the ultimate Kepler design? Sure. But it was NEVER intended to be released as the GTX 680. It was always meant as a Tesla GPGPU card.

Likewise, AMD knew that Steamroller... and Excavator were coming... and that they are the "big daddy" of the Bulldozer design... but that doesn't mean that Bulldozer or Piledriver are mid-range chips.
#83
Benetanegia
cadaveca: Nah, actually, I'm claiming this since I know all the specs of GK110 already. I even have a die shot.

You can find it just as easily, too.

And because of this, I do think NVIDIA knew long before AMD's release that GK110 was not possible, and as such it wasn't meant to be.

Likewise, AMD knew that Steamroller... and Excavator were coming... and that they are the "big daddy" of the Bulldozer design... but that doesn't mean that Bulldozer or Piledriver are mid-range chips.
Everybody has known the specs and seen die shots for a long time already. That means nothing to the discussion at hand. Specs and die shots say nothing about whether it is feasible to build or not (it IS; it's already been created AND shipped to customers) and certainly say nothing regarding the intentions of Nvidia.

If GK100/110 was so unfeasible as a gaming card that it was never meant to be one, they would have designed a new chip to fill in that massive ~250mm^2 difference between GK104 and GK110, instead of using GK110 as the refreshed high-end card. GK110 being an HPC chip, it wouldn't have so many gaming features wasting space either.

EDIT: SteamRoller, etc. Worst analogy ever.
#84
cadaveca
My name is Dave
Benetanegia: If GK100/110 was so unfeasible as a gaming card that it was never meant to be one, they would have designed a new chip to fill in that massive ~250mm^2 difference between GK104 and GK110, instead of using GK110 as the refreshed high-end card. GK110 being an HPC chip, it wouldn't have so many gaming features wasting space either.
I dunno. You know, the one thing that nVidia is really good at is getting the most out of every R&D dollar, and designing another chip kinda goes against that mantra.

I mean, it's like dropping the hot clock. They knew they had to.

Keeping within the 300W power envelope with the full-blown Kepler design was obviously not possible, proved by Fermi, IMHO.

Jen Hsun said "The interconnecting mesh was the problem" for Fermi. That mesh...is cache.

And gaming doesn't need that cache. But... HPC does. Gaming needs power-savings, and dropping the hotclock and lowering cache and DP lowered power consumption enough that GTX680 is pretty damn good.

GK104..was that chip you just mentioned.


HPC is more money. WAY MORE MONEY. So for THAT market, yes, a customized chip makes sense.


See, Fermi was the original here. GF100 is the original, NOT GK100 or GK110.


If nvidia started with Kepler as the new core design, then I would have sided with you guys, for sure, but really, to me, Kepler is a bunch of customized Fermi designs, customized in such a way to deliver the best product possible for the lowest cost, for each market.

You may think the Steamroller analogy is wrong here, but to me, that is EXACTLY what Kepler is. And you know what..nVidia says the same thing, too. :p


The hotclock, to me, and the lack of DP functionality say it all. The hotclock lets you use less die space, but requires more power. DP functionality also requires more power, because it requires more cache. Dropping 128 bits of memory control..again, to save on power...


If the current GTX680 was meant to be a mid-range chip, after doing all that to save on power, damn, Nvidia really does suck hard. :p
#85
Casecutter
blanarahul: Try to release these GPUs without GPU Boost.
Sure, and if you click the check box to enable OC, or break the seal on the switch, and then it bricks? I would love to have heard how GK104 could do with the dynamic nanny turned off... While you may believe it might find 1.4 GHz... would it live on for any duration?

I speculate it wouldn't, or NVIDIA wouldn't have put such restrictions in place if there weren't good reasons. Will they still have it that way for the next generation? Yes, almost assuredly, but at that point better TDP and improved clock and thermal profiles will mean there's no gain over operating at an exaggerated fixed clock. I think for the mainstream both sides will continue to refine boost-type control. It provides them the best of both worlds: lower claimed power usage with the highest FPS return.
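
Purely as a sketch of the general idea (this is NOT NVIDIA's actual GPU Boost algorithm; every constant below is invented for illustration), boost-type control amounts to a feedback loop that trades clock speed against a power and thermal cap:

```python
# Toy sketch of a power-capped boost loop. Illustrative only; not
# NVIDIA's real GPU Boost algorithm. All constants are made up.

BASE_CLOCK = 1006   # MHz, guaranteed base clock (hypothetical)
MAX_BOOST  = 1110   # MHz, highest boost bin (hypothetical)
STEP       = 13     # MHz per boost bin (hypothetical)
POWER_CAP  = 195.0  # watts, board power target (hypothetical)

def next_clock(current_mhz, measured_power_w, temp_c, temp_limit_c=98):
    """Raise the clock one bin while under the power/thermal caps,
    drop one bin when over; never go below the base clock."""
    if measured_power_w > POWER_CAP or temp_c > temp_limit_c:
        return max(BASE_CLOCK, current_mhz - STEP)
    return min(MAX_BOOST, current_mhz + STEP)

print(next_clock(1058, 150.0, 70))  # light load, cool card -> 1071
print(next_clock(1110, 210.0, 80))  # over the power cap    -> 1097
```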
#86
[H]@RD5TUFF
crazyeyesreaper: I love how everyone is saying AMD will have a hard time competing :roll: Did everyone forget that the yawn that is the 7970 GHz Edition still beat out the GTX 680, and that this gen, for the most part, each company is equal at the typical price points?

The 8970 is expected to be 40% faster than the 7970.

The GTX 780 is expected to be 40-55% faster than the 680.

Add in overclocking on both and we end up with the exact same situation as this generation. So in reality it just plain doesn't matter, lol; performance is all I care about, and who gets product onto store shelves and from there into my hands. Doesn't matter who's fastest if it takes 6 months for stock to catch up.
:roll::roll::roll::roll::roll::roll:

It's always entertaining when fanboys get butthurt over people's opinions. The fact is a stock reference 7970 is slower than a stock reference 680; I know it hurts you to accept this fact, but it is true. As for the GHz Edition, compare it to a 680 that is factory overclocked and the result is the same. My 680 Classified walks all over any 7970, so :cry::cry: less and check facts more.
#87
BigMack70
Guys, we should only look to cadaveca now for tech rumors, this guy obviously knows what's up and we can't trust dozens of other knowledgeable people/sites. They all just make stuff up and obviously only release info to get page views.

:rolleyes:
#88
BigMack70
[H]@RD5TUFF: My 680 Classified walks all over any 7970, so :cry::cry: less and check facts more.
My 1200/1800 Lightnings very seriously doubt that... OC'd 680s and 7970s are about even overall, trading blows depending on the game/benchmark.

The 7970 gets beat by the 680, sure, but the pricing has updated to reflect that now - the 7970 is priced about the same as the 670, and the GHz Edition around the 680.
#89
Benetanegia
cadaveca: I dunno. You know, the one thing that nVidia is really good at is getting the most out of every R&D dollar, and designing another chip kinda goes against that mantra.

I mean, it's like dropping the hot clock. They knew they had to.

Keeping within the 300W power envelope with the full-blown Kepler design was obviously not possible, proved by Fermi, IMHO.

Jen Hsun said "The interconnecting mesh was the problem" for Fermi. That mesh...is cache.

And gaming doesn't need that cache. But... HPC does. Gaming needs power-savings, and dropping the hotclock and lowering cache and DP lowered power consumption enough that GTX680 is pretty damn good.

GK104..was that chip you just mentioned.

HPC is more money. WAY MORE MONEY. So for THAT market, yes, a customized chip makes sense.

See, Fermi was the original here. GF100 is the original, NOT GK100 or GK110.

If nvidia started with Kepler as the new core design, then I would have sided with you guys, for sure, but really, to me, Kepler is a bunch of customized Fermi designs, customized in such a way to deliver the best product possible for the lowest cost, for each market.

You may think the Steamroller analogy is wrong here, but to me, that is EXACTLY what Kepler is. And you know what..nVidia says the same thing, too. :p

The hotclock, to me, and the lack of DP functionality say it all. The hotclock lets you use less die space, but requires more power. DP functionality also requires more power, because it requires more cache.
:laugh: at Kepler being Fermi. Sure and Fermi is Tesla arch (GT200). :laugh:

If we go by similarities, as in they look the same to me with a few tweaks, we can go back to G80 days. Same on the AMD side. But you know what? They have very little in common. Abandoning hot-clocks is not a trivial thing. Tripling the number of SPs on a similar transistor budget is not trivial either, and it denotes exactly the opposite of what you're saying. Fermi and Kepler schematics may look the same, but they aren't the same at all.

As to the rest: it makes little sense to think that GK104 is the only thing they had planned. In previous generations they created 500 mm^2 chips that were 60%-80% faster than their previous gen, and AMD was close, 15%-20% behind. But on this gen they said: "You know what? What the heck. Let's create a 300mm^2 chip that is only 25% faster than our previous gen. Let's make the smallest (by far) jump in performance that we've ever had; let's just leave all that potential there. Later we'll make GK110 a 550 mm^2 chip, so we know we can do it, and it's going to be a refresh part so it IS going to be a gaming card. But for now, let's not make a 450mm^2 chip, or a 350mm^2 one; no, no sir, a 294mm^2 one, with a 256-bit interface that will clearly be the bottleneck even at 6000 MHz. Let's just let AMD rip us a new one..."

EDIT: If GK110 had not been fabbed and shipped to customers already, you'd have the start of a point. But since it's already been shipped, it means that it's physically possible to create a 7.1-billion-transistor chip and make it economically viable (the process hasn't changed much in 6 months). So like I said, something in the middle, like a 5-billion-transistor and/or 400mm^2 chip, would be entirely possible, and Nvidia would have gone with that, because AMD's trend has been upwards in regards to die size, and there's no way in hell Nvidia would have tried to compete with a 294mm^2 chip when they knew 100% that AMD had a bigger chip AND they have been historically more competent at making more in less area. Nvidia can be a lot of things, but they are not stupid and would not commit suicide.
#90
cadaveca
My name is Dave
BigMack70: Guys, we should only look to cadaveca now for tech rumors, this guy obviously knows what's up and we can't trust dozens of other knowledgeable people/sites. They all just make stuff up and obviously only release info to get page views.

:rolleyes:
Yep. :p


The fact that you can't ignore that bit says something.


What, I cannot speculate myself?

And when you can't attack my points, you go after my character? lulz.

As if I want to be the source of rumours. :p Yes, I want to be a gossip queen.


Like, do you get that? I'm not the one that posted the news...BTA didn't either...he just brought it here for us to discuss...

These same sites you trust get it wrong just as often as right. Oh yeah, Bulldozer is awesome, smokes Intel outright..yeah..that worked...


The HD 7990 from AMD in August... but it was PowerColor...


Rumours are usually only part-truths, so to count them all as fact... is not my prerogative. :p
Benetanegia: :laugh: at Kepler being Fermi. Sure and Fermi is Tesla arch (GT200). :laugh:

If we go by similarities, as in they look the same to me with a few tweaks, we can go back to G80 days. Same on the AMD side. But you know what? They have very little in common. Abandoning hot-clocks is not a trivial thing. Tripling the number of SPs on a similar transistor budget is not trivial either, and it denotes exactly the opposite of what you're saying. Fermi and Kepler schematics may look the same, but they aren't the same at all.

As to the rest: it makes little sense to think that GK104 is the only thing they had planned. In previous generations they created 500 mm^2 chips that were 60%-80% faster than their previous gen, and AMD was close, 15%-20% behind. But on this gen they said: "You know what? What the heck. Let's create a 300mm^2 chip that is only 25% faster than our previous gen. Let's make the smallest (by far) jump in performance that we've ever had; let's just leave all that potential there. Later we'll make GK110 a 550 mm^2 chip, so we know we can do it, and it's going to be a refresh part so it IS going to be a gaming card. But for now, let's not make a 450mm^2 chip, or a 350mm^2 one; no, no sir, a 294mm^2 one, with a 256-bit interface that will clearly be the bottleneck even at 6000 MHz. Let's just let AMD rip us a new one..."
Well, that's just it. This is complicated stuff.

I am not saying at all that GK104 was the only thing... it isn't. But GK110 was never meant to be a GTX part. Kepler is where GeForce and Tesla become truly separate products.



And yeah, it probably did work exactly like that... 300mm^2... the best they could get IN THAT SPACE, since that dictates how many chips they can get per wafer. You know, designs do work like that, so they can optimize wafer usage... right?
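
To put rough numbers on that wafer-usage point, here is a back-of-the-envelope sketch using a common dies-per-wafer approximation (yield, defect density and scribe-line overhead ignored; the die areas are the ones discussed in this thread):

```python
# Rough dies-per-wafer estimate on a 300 mm wafer, using the common
# approximation: pi*r^2/A - pi*d/sqrt(2*A). Ignores yield and defects.

import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    r = wafer_diameter_mm / 2
    gross = math.pi * r**2 / die_area_mm2            # area-only candidate count
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

print(dies_per_wafer(294))  # GK104-sized die: ~201 candidates per wafer
print(dies_per_wafer(550))  # GK110-sized die: ~100 candidates per wafer
```

Roughly twice as many GK104-sized dies fit on a wafer as GK110-sized ones, before yield even enters the picture.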
#91
BigMack70
Didn't mean it as an attack on your character, I'm just saying that your last couple posts had an "I know what I'm talking about because I'm a reviewer and you peons don't" flavor to them, that's all.

Could just be reading them wrong, I suppose, but I think not.

Anyways, rumors are rumors, but they exist for a reason, and this particular family of rumors has been around for almost a year now... plenty long enough to indicate there's something to it.

Enough debating about rumors for me, though.
#92
cadaveca
My name is Dave
BigMack70: Could just be reading them wrong, I suppose, but I think not.
Yeah, you're reading that wrong. I was saying explicitly that I don't know WTF I'm talking about here, since I'm a reviewer. If I did know what I was talking about, I'd not be able to discuss it.
#93
Benetanegia
cadaveca: But GK110 was never meant to be a GTX part.
Oh gosh. Before you say that one more time, can you please explain at least once why it has so many units that are completely useless in a Tesla card?

Looking at the whitepaper, anyone who knows a damn about GPUs can see that GK110 has been designed to be a fast GPU as much as it's been designed to be a fast HPC chip. Even GF100/110 was castrated in that regard compared to GF104, and G80 and G9x had the same kind of castration, but in Kepler, the family where "GeForce and Tesla become truly separate products," they chose to maintain all those unnecessary TMUs, tessellators and geometry engines.

- If GK104 was at least close to 400mm^2, your argument would hold some water. At 294mm^2, it does not.
- If GK104 was 384 bits, your argument would hold water. At 256 bits, it does not.
- If GK110 didn't exist and had not released 6 months after GK104 did...
- If GK110 had no gaming features and wasn't used as the high-end refresh card...
- If GK104 had been named GK100... you get it.
#94
cadaveca
My name is Dave
Benetanegia: Oh gosh. Before you say that one more time, can you please explain at least once why it has so many units that are completely useless in a Tesla card?
Because all those things are needed for medical imaging. HPC products still need 3D video capability too. Medical imaging is a vast market, worth billions. 3D is not gaming. That's where you miss some things. :p

And no, I do not agree with the summation that GK110 was intended to be a "fast GPU". The needed die size says that is not really possible.


But since it's for HPC, where precision is prioritized over speed, that's OK, and lower clocks with greater functionality make sense.


However, for the desktop market, where speed wins overall, the functionality side isn't so much needed, so it was stripped out. This makes for two distinct product lines with staggered releases, hence not competing with each other.

I mean likewise, what do all those HPC features have to do with a gaming product? :p
#95
[H]@RD5TUFF
BigMack70: My 1200/1800 Lightnings very seriously doubt that... OC'd 680s and 7970s are about even overall, trading blows depending on the game/benchmark.

The 7970 gets beat by the 680, sure, but the pricing has updated to reflect that now - the 7970 is priced about the same as the 670, and the GHz Edition around the 680.
You're confusing value in a debate about performance; they're not the same thing at all, nor valid in any way. :shadedshu
#96
Stephen.
BigMack70: Anyways, rumors are rumors, but they exist for a reason, and this particular family of rumors has been around for almost a year now... plenty long enough to indicate there's something to it.
Yes, there is something, and what we know for sure is GK110 Tesla/Quadro... for now.

And as cadaveca said, the info we have right now is just rumors and speculation; let's just wait and sooner or later we will all know for sure.
#97
Benetanegia
cadaveca: Because all those things are needed for medical imaging. HPC products still need 3D video capability too. Medical imaging is a vast market, worth billions. 3D is not gaming. That's where you miss some things. :p
Medical imaging is not HPC. Maybe you should have been clearer. That being said, Nvidia has announced a GK110-based Tesla, but no Quadro:

www.techpowerup.com/170096/NVIDIA-Maximus-Fuels-Workstation-Revolution-With-Kepler-Architecture.html

Their Maximus platform is composed of GK104-based Quadro and GK110-based Tesla cards. So I think that you're missing much more than I am.

And oh, I don't doubt there will be a GK110-based Quadro, but it's not been announced yet AFAIK. I've only heard about them in the same rumors as the GeForce part, so... ;)
cadaveca: And no, I do not agree with the summation that GK110 was intended to be a "fast GPU". The needed die size says that is not really possible.
And yet it all points to Nvidia using it. And in the past they have used chips of the same size and quite successfully.

And an HPC chip has never been profitable on its own, and I don't think it is right now either.
#98
cadaveca
My name is Dave
Benetanegia: And an HPC chip has never been profitable on its own, and I don't think it is right now either.
I bet nVidia would disagree.


For me, medical imaging is part of the HPC market. Precise imaging isn't needed just for medical uses either; anything that needs an accurate picture, from oil and gas exploration to military uses, falls under the same usage. Both Tesla and Quadro cards are meant to be used together, building an infrastructure that can scale to customer demands, called Maximus. If you need more rendering power, say for movie production, you've got it, or if you need more compute, for stock market simulation, that's there too, so I fail to see that you've posted much that supports your stance there. Nvidia doesn't build single GPUs... they build compute infrastructure.


Welcome to 2012.
With this second generation of Maximus, compute work is assigned to run on the new NVIDIA Tesla K20 GPU computing accelerator, freeing up the new NVIDIA Quadro K5000 GPU to handle graphics functions. Maximus unified technology transparently and automatically assigns visualization and simulation or rendering work to the right processor.
Did you read that press release?


:p

I mean, that whole press release is nVidia claiming it IS profitable, or they wouldn't be marketing towards it. :p



In fact, that press release kinda proves my whole original point, now doesn't it? ;) GK104 for imaging (3D, Quadro and GeForce), GK110 for compute (Tesla).


Like, maybe I'm crazy...but...well...whatever. I'm gonna play some BF3. :p
#99
crazyeyesreaper
Not a Moderator
I couldn't care less if you want to call me a fanboy, [H]@RD5TUFF, but honestly it just makes you look like a child.

I couldn't care less about your Classified 680s, blah blah; I still had my card months before the 680 was available, enjoying roughly the same performance.

Simple fact is, if I want to be a dick and pull useless numbers, the 7970 holds the world record for 3DMark 11 Extreme, Heaven Extreme, among others.

When both cards are overclocked they perform the same; they excel in certain games over their rival and vice versa:

AvP favors AMD
BF3 favors NVIDIA
etc
#100
BigMack70
[H]@RD5TUFF: You're confusing value in a debate about performance; they're not the same thing at all, nor valid in any way. :shadedshu
Except that your 680 isn't going to outperform an overclocked 7970. They'll be about the same.

And I'm not confusing anything... the 7970 and 670 are about the same, and the 7970GE and 680 are about the same. If you overclock the 7970 or 7970GE, they'll match an overclocked 680 - trading blows depending on the game/test and overall being about the same.

I know you have to defend your silly purchase of an overclocking-oriented, voltage-locked card, though :laugh: