Wednesday, October 17th 2012

NVIDIA Kepler Refresh GPU Family Detailed

A 3DCenter.org report shed light on what NVIDIA's GPU lineup for 2013 could look like. According to the report, NVIDIA's next-generation GPUs could follow a path similar to the previous-generation "Fermi Refresh" (GF11x), which turned the performance-per-Watt equation back in NVIDIA's favor, even though the company's current GeForce Kepler lineup already holds an established energy-efficiency lead. The "Kepler Refresh" family of GPUs (GK11x), according to the report, could bring significant increases in cost-performance, with a bit of clever re-shuffling of the GPU lineup.

NVIDIA's GK104 GPU exceeded performance expectations, which allowed it to drive this generation's flagship single-GPU graphics card for NVIDIA, the GTX 680, giving the company time to perfect the largest chip of this generation, and for its foundry partner to refine the 28 nm manufacturing process. By the time Kepler Refresh arrives, TSMC will have refined its process enough for mass production of the GK110, a 7.1 billion-transistor chip on which NVIDIA's low-volume Tesla K20 GPU compute accelerator is currently based.

The GK110 will take back the reins of powering NVIDIA's flagship single-GPU product, the GeForce GTX 780. This product could offer a massive 40-55% performance increase over GeForce GTX 680, with a price ranging anywhere between US $499 and $599. The same chip could even power the second fastest single-GPU SKU, the GTX 770. The GK110 physically packs 2880 CUDA cores, and a 384-bit wide GDDR5 memory interface.
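As a rough sanity check of the rumored figures above (all speculative, per the report), the cost-performance shift can be worked out directly. The uplift and price values below are the article's rumored numbers, not confirmed specifications:

```python
# Rumored GTX 780 vs. GTX 680 cost-performance, using the article's figures.
GTX680_PRICE = 499.0   # USD launch price
GTX680_PERF = 1.0      # normalized performance baseline

for uplift in (0.40, 0.55):          # rumored 40-55% performance increase
    for price in (499.0, 599.0):     # rumored price range
        baseline = GTX680_PERF / GTX680_PRICE
        perf_per_dollar = GTX680_PERF * (1 + uplift) / price
        change = (perf_per_dollar / baseline - 1) * 100
        print(f"+{uplift:.0%} at ${price:.0f}: perf/$ change {change:+.1f}%")
```

Even in the worst case (+40% performance at $599), perf/$ would still improve over the GTX 680's launch position; at +55% and $499 it improves by the full 55%.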

Moving on, the real successor to the GK104, the GK114, could form the foundation of high-performance SKUs such as the GTX 760 Ti and 760. The chip has the exact same specifications as the GK104, leaving NVIDIA to tinker with clock speeds to increase performance. The GK114 will be relegated from the high-end segment it currently powers to performance-segment SKUs, so even with minimal increases in clock speed, the chip will achieve sizable performance gains over the current GTX 660 Ti and GTX 660.

Lastly, the GK106 could be refreshed into the GK116, likewise retaining specifications and leaving room for clock-speed increases, much in the same way as the GK114. It, too, gets a demotion, to the GTX 750 Ti and GTX 750, and so with minimal R&D the GTX 750 series gains a sizable performance uplift over the previous generation.
Source: 3DCenter.org

127 Comments on NVIDIA Kepler Refresh GPU Family Detailed

#101
Benetanegia
Visualization and HPC are not the same thing even if both require high computational capability. Games require heavy computation too and are not labelled HPC. ;)
cadavecaDid you read that press release?

:p

I mean, that whole press release is nVidia claiming it IS profitable, or they wouldn't be marketing towards it. :p
Yeah I read it and it says nothing about its profitability on its own. That's why GK110-based GeForce cards are going to be released. ;)
In fact, that press release kinda proves my whole original point, now doesn't it? GK104 for imaging(3D, Quadro and Geforce), GK110 for compute(Tesla).
No it doesn't. It would prove it if GK110 had no gaming features. Visualization != Gaming. DirectX and all of its gaming features are not needed, so why does it have them, and in fact improve on them over GK104?
#102
cadaveca
My name is Dave
BenetanegiaDirectX and all of its gaming features are not needed, so why does it have them, and in fact improve on them over GK104?
Because Windows is the standard GUI for most users, and Windows uses DirectX?


:laugh:


This isn't the first generation Maximus tech, either...
#103
Benetanegia
cadavecaBecause Windows is the standard GUI for most users, and Windows uses DirectX?


:laugh:
Erm OpenGL is used in MOST if not all of those systems??
This isn't the first generation Maximus tech, either...
lol and what does that tell us? It definitely does not tell us that GK104 -> visualization and GK110 -> HPC. It tells us that Nvidia is willing to mix and match Quadro and Tesla cards to get more $$ and that's all. :laugh:

The thing is, for the time being there's no Quadro GK110, just as there's no GeForce GK110. And the reason is not that one is feasible and the other isn't. Such big chips were possible in GeForce in the past and surely are right now (more so since 28nm is so much better in regards to power consumption). And you'll see them, you can be sure of this, when Nvidia sees fit.
#104
HumanSmoke
GK110, like most of the big-die Nvidia GPUs, is aimed more at the professional market than gaming. Gaming allows for some high-visibility PR and a useful ongoing marketing tool going forward...they represent an iconic face of each generation, but as a segment, $500+ gaming cards are a minuscule part of sales....it's also the reason Nvidia developed CUDA, and also why Nvidia have a stranglehold on the professional graphics market. At $3k per single-GPU card it's relatively easy to see where Nvidia's priorities lie.


A couple of points; can't be fucked looking for the quotes in this drag race of a thread.

Medical imaging. My GF works in radiology (CAT, MRI etc) and the setup is Quadro for image output and 3D representation and Tesla for computation (math co-processor). There is no real difference between medical imaging and any HPC task ( weather forecast, economics/physics/ warfare simulation or any other complex number crunching).
Die size (Dave?) Posting pictures means the square root of fuck-all. Show a picture of an Nvidia chip that isn't covered by a heatspreader if you're making a comparison. BTW: A few mm here or there doesn't sound like a lot, but it impacts the number of usable die candidates substantially ( Die per wafer calculator)
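The die-candidates point can be illustrated with the standard first-order dies-per-wafer approximation (a sketch only; real counts also depend on scribe lines, edge exclusion and yield):

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """First-order gross die estimate: pi*r^2/A - pi*d/sqrt(2*A).

    The second term approximates partial dies lost at the wafer edge.
    """
    d = wafer_diameter_mm
    a = die_area_mm2
    return int(math.pi * (d / 2) ** 2 / a - math.pi * d / math.sqrt(2 * a))

# Die areas discussed in this thread: GK104, Tahiti, GF110, rumored GK110.
for area in (294, 365, 521, 550):
    print(f"{area} mm^2 -> ~{dies_per_wafer(area)} candidates per 300 mm wafer")
```

A ~550 mm^2 die yields roughly half the candidates of a ~294 mm^2 one per wafer, which is why a few mm^2 either way matters so much to cost.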

GK110 is pretty much on schedule judging by its estimated tape-out. It looks to have had no more than two silicon revisions (and possibly only one) from initial risk wafer lot to commercial shipping. ORNL started receiving GK110 last month.

EDIT: Graph link
#105
cadaveca
My name is Dave
HumanSmokeShow a picture of an Nvidia chip that isn't covered by a heatspreader if you're making a comparison.
I did. :p
BenetanegiaErm OpenGL is used in MOST if not all of those systems??
Sure, but nearly everyone runs Windows. Linux, yes if it's a server, but most stuff that has actual users making use of it is Windows-based. Not sure why..honestly...but it is what it is. From banks to hospitals, most run Windows.
BenetanegiaIt tell us that Nvidia is willing to mix and match Quadro and Tesla cards to get more $$ and that's all.
Actually..no.
As a result of the time needed to context switch, Quadro products are not well suited to doing rendering and compute at the same time. They certainly can, but depending on what applications are being used and what they’re trying to do the result can be that compute eats up a great deal of GPU time, leaving the GUI to only update at a few frames per second with significant lag. On the consumer side NVIDIA’s ray-tracing Design Garage tech demo is a great example of this problem, and we took a quick video on a GTX 580 showcasing how running the CUDA based ray-tracer severely impacts GUI performance.

Alternatively, a highly responsive GUI means that the compute tasks aren’t getting a lot of time, and are only executing at a fraction of the performance that the hardware is capable of. As part of their product literature NVIDIA put together a few performance charts, and while they should be taken with a grain of salt, they do quantify the performance advantage of moving compute over to a dedicated GPU.

For these reasons if an application needs to do both compute and rendering at the same time then it’s best served by sending the compute task to a dedicated GPU. This is the allocation work developers previously had to take into account and that NVIDIA wants to eliminate. At the end of the day the purpose of Maximus is to efficiently allow applications to do both rendering and compute by throwing their compute workload on another GPU, because no one wants to spend $3500 on a Quadro 6000 only for it to get bogged down.
www.anandtech.com/show/5094/nvidias-maximus-technology-quadro-tesla-launching-today/2



That's from before Kepler's launch. Long before. Nvidia has long planned dual-GPU infrastructure, because really, that's what makes sense. So making GK104 as GTX without all the cache, and GK110 with the cache for compute, and then doing the same for the next generation too, makes a whole lot of sense.
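The trade-off described in that AnandTech excerpt can be sketched with a toy model (hypothetical numbers, not measured data): a single GPU time-slicing between compute and rendering starves the GUI, while a dedicated compute card leaves rendering at full speed.

```python
# Toy model of the problem Maximus addresses: GUI frame rate as a function
# of how much GPU time is left over for rendering. Numbers are illustrative.

def gui_fps(render_share: float, max_fps: float = 60.0) -> float:
    """Frame rate when rendering only gets `render_share` (0..1) of GPU
    time; the remainder is consumed by compute kernels on the same GPU."""
    return max_fps * render_share

# Shared GPU: a heavy compute job eats ~90% of GPU time, GUI crawls.
print(f"shared GPU:    {gui_fps(0.10):.0f} fps")
# Maximus-style split: compute runs on a dedicated card, rendering keeps 100%.
print(f"dedicated GPU: {gui_fps(1.00):.0f} fps")
```

The 90/10 split is an assumption for illustration; the real point is only that any compute time taken on the rendering GPU comes straight out of GUI responsiveness.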
#106
Benetanegia
cadavecaSure, but nearly everyone runs windows. Linux, yes if it's a server, but most stuff that have actual users making use of it is Windows-based. Not sure why..honestly...but it is what it is. From Banks to hospitals, most run Windows.
Yes, Windows, but not DirectX with all its features, which for the pro market do mostly nothing but make the shader processors much fatter. If there was a really serious push from Nvidia to split the markets they would have done it. Putting GPU features in GK110 when they are so clearly pushing for Maximus integration is stupid if they really wanted to split the market. And they are not stupid. Look, I remember the exact same things being said back when Fermi (GF100) was unveiled, because just like with GK110 they unveiled it at an HPC event and the focus was 100% on HPC features. But they were excellent gaming GPUs too. Power consumption was "bad" on the HPC-feature-stripped GF104/114 too: same perf/watt as GTX 580, so nothing to do with the added HPC features. And GK110 is much of the same, I'm pretty sure of that.

tpucdn.com/reviews/ASUS/HD_7970_Matrix/images/perfwatt_1920.gif
cadavecaThat's from before Kepler's launch. Long before. Nvidia has long planned dual-GPU infrastructure, because really, that's what makes sense. So making GK104 as GTX without all the cache, and GK110 with the cache for compute, and then doing the same for the next generation too, makes a whole lot of sense.
Don't you see that your logic fails? That's why you are not being coherent. Why does GK110 have gaming/visualization features AT ALL if it was never meant for them since the beginning and they have Maximus as the final goal? It's as simple as that. You're now trying to legitimize the idea that Maximus is the way to go* and that it has been Nvidia's plan since last generation. Again, why those features on GK110?? Makes no sense, don't you see that? It's either one thing or the other; both cannot coexist. Either GK110 was conceived as a visualization/gaming (DirectX) powerhouse or not. If Maximus is Nvidia's idea for HPC and was their only intention with GK110, a GK104 plus a GK110 completely stripped of any functionality other than HPC would have ended up with a smaller die area, both combined. But they didn't go that route and you really, really have to think why. Why did they follow that route? Why are there rumors about GK110-based GeForces and so on?

*I agree but it's beside the point, and my comment regarding $$ is also true and you know that :) the context switching is simply also convenient, and Kepler has vastly improved context switching, so at some point 2 cards would not be required.
#107
HumanSmoke
cadavecaI did. :p
I was thinking more along the lines of:
#108
cadaveca
My name is Dave
:roll:


Like really....that pic to me says it all. :roll:

Is it really 512mm?

What clockspeeds are the Tesla cards? 600 Mhz, I'm guessing?
BenetanegiaWhy did they follow that route.
Because their customers asked for it.

CUDA can use all those features you call "useless". It's not quite like how you put it...there's not really much if any dedicated hardware for the purposes you mention. At least not any that takes up any die space worth mentioning.


See that picture above? Point to me where these "DirectX features" are located...
#109
HumanSmoke
cadaveca:roll:
Like really....that pic to me says it all. :roll:
Is it really 512mm?
What clockspeeds are the Tesla cards? 600 Mhz, I'm guessing?
The overlay shows a GF100 of 521mm^2

The K20 spec released says 705 MHz. Standard practice to keep the board power under the 225W limit (pushing clocks high to inflate FLOP performance doesn't fly in a compute environment). I'd expect the GeForce card to be bound closer to (if not fudging over) the ATX 300W limit (maybe 900 MHz or so).
With the shader count, the larger cache structure, and provision for 72-bit (64 + 8 ECC) memory controllers, I think the rumoured 550mm^2 die size is probably very close; another thing that argues against the GK110 being a gaming card (at least primarily). AFAIK, Nvidia's own whitepaper describes their ongoing strategy as gaming and compute becoming distinct product/architectural lines for the most part (see Maxwell and the imminent threat of Intel's Xeon Phi).
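A back-of-envelope check of those clock/power figures. The assumptions here are mine, not NVIDIA's: a K20-class part with 2496 of the 2880 CUDA cores enabled, single-precision FLOPS = cores × 2 ops (one FMA) × clock, and board power scaling roughly linearly with clock and active core count at fixed voltage.

```python
# Theoretical throughput vs. clock for a GK110-class part (assumptions above).

def sp_gflops(cores: int, clock_mhz: float) -> float:
    """Peak single-precision GFLOPS: one FMA (2 FLOPs) per core per clock."""
    return cores * 2 * clock_mhz / 1000.0

tesla = sp_gflops(2496, 705)      # K20-like part at 705 MHz, ~225 W board
geforce = sp_gflops(2880, 900)    # speculative full-die GeForce at ~900 MHz
print(f"Tesla-like:   {tesla:.0f} GFLOPS")
print(f"GeForce-like: {geforce:.0f} GFLOPS")

# Crude power estimate for the hypothetical GeForce clocks and core count:
est_power = 225.0 * (900 / 705) * (2880 / 2496)
print(f"estimated board power: ~{est_power:.0f} W")  # at or just over the 300 W ATX limit
```

The linear power scaling is deliberately crude (voltage increases would make it worse), but even this simple model lands the hypothetical 900 MHz full-die GeForce at or beyond the 300 W ceiling, consistent with the "fudging over" guess above.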
#110
Benetanegia
cadaveca:roll:


Like really....that pic to me says it all. :roll:

Is it really 512mm?

What clockspeeds are the Tesla cards? 600 Mhz, I'm guessing?
So you didn't know the size of GF100/110? How are you even attempting to make a half-arsed argument regarding all of this (especially what is feasible) if you lack such an essential bit of info? (I demand you are denied reviewing rights right now :ohwell: j/k)

And yes that pic says it all. If you really think that after 521 mm^2 GF110 they would put up a 294mm^2 chip against AMD's 365 mm^2 one, you are deluded sir.
CUDA can use all those features you call "useless".
No it can't.
It's not quite like how you put it...there's not really much if any dedicated hardware for the purposes you mention. At least not any that takes up any die space worth mentioning.
Yes it does. Shader processors have to be fatter, including more instructions, or implementing them differently, than would be ideal for HPC. The ISA has to be much wider, resulting in more complex fetch and decode, which not only widens the front end but makes it significantly slower. And there's tessellation, of course. There's absolutely no sense in adding more functionality than is required. If the functionality is there, it's because it's meant to be used.
See that picture above? Point to me where these "DirectX features" are located...
Are you serious? Too many beers today or what? You seem to be trolling now... :ohwell:
#111
cadaveca
My name is Dave
HumanSmokeThe overlay shows a GF100 of 521mm^2

The K20 spec released says 705M. Standard practice to keep the board power under the 225W limit (this is what happens when you try to keep clocks high to inflate FLOP performance in a compute enviroment) . I'd expect the GeForce card to be bound closer to (if not fudging over) the ATX 300W limit ( maybe 900 MHz or so)
With the shader count, the larger cache structure, provision for 72-bit (64 + 8 ECC) memory controllers I think the rumoured 550mm^2 die size is probably very close- another thing that argues against the GK110 being a gaming card (at least primarily). AFAIK, Nvidia's own whitepaper describes their ongoing strategy as gaming and compute becoming distinct product/architectural lines for the most part ( see Maxwell and the imminent threat of Intel's Xeon Phi)
Huh. We don't seem to disagree, then. That's what I had thought. Just Bene here disagrees, but maybe only because I said GK104 was always supposed to be GTX 680, not GK110.


And yes, Bene...I had no idea...as I said like 5 times earlier....because I review motherboards and memory, not GPUs. GPUs are W1zz's territory.


wait.


K20 is GF110, not GK110.


:p

Still...damn that's a huge chip.
BenetanegiaAre you serious? Too many beers today or what? You seem to be trolling now...
Yes, serious. Show me EXACTLY where DirectX makes the die bigger. Because from what I've been led to believe by nVidia, it's actually the opposite of what you indicate...as does the rest of the info I got from them. :p
#112
Benetanegia
cadavecaYes, serious. Show me EXACTLY where DirectX makes the die bigger. Because from what I've been lead to beelive by nVidia, it's actually the opposite of what you indicate...as does the rest fo the info i got from them. :p
It's not something you can see there lol, that's why I asked if you were being serious.

Ever wondered why DX11 shader processors require more transistors/die area and are clock-for-clock slower than DX10 SPs? (i.e. HD4870 vs HD5770)

Whatever, an easier example to show how stupid the question is. Can you point me to where the tessellators are? Tessellators actually are a separate entity, unlike functionality included in shader processors.

BTW K20 is GK110, lol.

EDIT: A more fitting analogy:



Please point me to where exactly they planted potatoes, where wheat and where corn.
#113
[H]@RD5TUFF
BigMack70Except that your 680 isn't going to outperform an overclocked 7970. They'll be about the same.

And I'm not confusing anything... the 7970 and 670 are about the same, and the 7970GE and 680 are about the same. If you overclock the 7970 or 7970GE, they'll match an overclocked 680 - trading blows depending on the game/test and overall being about the same.

I know you have to defend your silly purchase of an overclocking-oriented voltage locked card, though :laugh:
Yes, it's totally silly to score a nice video card... I am not defending anything, but I would seriously doubt many if any 7970s, outside of ones with water cooling or LN2, would be able to keep up with my 680 @ 1320 core while being cooled on air...

That said you seem to only care about suckling on AMD's teat, and disparaging things you don't like, rather than having a discussion of substance.:shadedshu
#114
HumanSmoke
cadavecaHuh. We don't seem to disagree, then. That's what I had thought
We probably are in agreement on the point. But then, Nvidia have been integrating GPGPU since G80 (8800GTX/Ultra). Back then the strategy seemed a one-size-fits-all mentality (plus Jen Hsun's win-at-all-costs ego, I would suggest) that is fine so long as the GPU's gestation isn't protracted and yields are good. G200 (GTX 280 etc.) seemed to signal that Nvidia was walking a knife edge of what can be achieved against the pitfalls of process design, and Fermi seems to have been a big wake-up call and an example of "what can go wrong will go wrong". Loss of prestige could well have translated into loss of market share had it not been that the pro market is very slow to change/update and Nvidia's software environment is top notch.

Kepler compute cards have orders in the region of 100-150,000 units. At $3K+ apiece (even taking into account low-end GK104, since a 4xGPU "S" 1U system is bound to materialize taking the place of the S2090) it isn't hard to see how Nvidia would look at a modular mix-and-match approach to a gaming/workstation GPU and compute/workstation GPU. In a way, it matches AMD's past strategy, which is ironic considering that AMD have adopted compute at the expense of a larger die. The difference is that AMD wouldn't contemplate a monolithic GPU like GK110 - the risk is too great (process worries), and the return too small (not enough presence in the markets that it would be aimed at).
#115
cadaveca
My name is Dave
HumanSmokeWe probably are in agreement on the point. But then, Nvidia have been integrating GPGPU since G80 (8800GTX/Ultra). Back then the strategy seemed a one-size-fits-all mentality (plus Jen Hsun's ego of win at all costs I would suggest) that is fine so long as the GPU's gestation isn't protracted and yields are good. G200 (GTX 280 etc) seemed to signal that Nvidia was walking a knife edge of what can be achieved against the pitfalls of process design, and Fermi seems to have been a big wake up call and an example of what can go wrong will go wrong. Loss of prestige could well have translated into loss of market share had it not been that the pro market is very slow to change/update and Nvidia's software enviroment being top notch.

Kepler compute cards have orders in the region of 100-150,000 units. At $3K + apiece (even taking into account low end GK104, since a 4xGPU "S" 1U system is bound to materialize taking the place of the S2090) it isn't hard too see how Nvidia would look at a modular mix-and-match approach to a gaming/workstation GPU and compute/workstation GPU. In a way, it matches AMD's past strategy, which is ironic considering that AMD have adopted compute at the expense of a larger die. The difference is that AMD wouldn't contemplate a monolithic GPU like GK110 - the risk is too great (process worries), and the return too small (not enough presence in the markets that it would be aimed at).
Yeah, I actually kinda like how they are similar, but since they both make GPUs for kinda the same audience, that only makes sense.

As to the whole monolithic thing, since the Fermi thing, it made sense to me that they would diverge, since they identified the problem there, and then realized that it could be an issue again in the future..one they could avoid on their higher-numbers-sold-but-less-profit products.

And considering the market, and nvidia's plans with ARM, it makes sense they'd want to sell you both a Tesla card, and a Quadro card, and a motherboard for it all with an arm chip. It's the same as buying CPU/GPU/board...
#116
BigMack70
[H]@RD5TUFFYes it's totally silly to score a nice video card . . . I am not defending anything, but I would seriously doubt many if any 7970's outside of ones with water cooling or LN would be able to keep up with my 680 @ 1320 core, while being cooled on air . .

That said you seem to only care about suckling on AMD's teat, and disparaging things you don't like, rather than having a discussion of substance.:shadedshu
A 680 with a GPU boost frequency of ~1300 is roughly equivalent to a 7970 with a core clock of ~1200...

If you managed to get a GPU core clock of 1300 and a boost of 1400+, you got very lucky, akin to someone who got a 7970 and managed to hit 1300 MHz.
#117
HumanSmoke
BigMack70A 680 with a GPU boost frequency of ~1300 is roughly equivalent to a 7970 with a core clock of ~1200...
If you managed to get a GPU core clock of 1300 and a boost of 1400+, you got very lucky, akin to someone who got a 7970 and managed to hit 1300 MHz.
If the GK114 and Sea Islands are both basically refreshes, as seems likely, then it should also seem likely that Nvidia have more wiggle room on clocks, since the power draw of GK104 is lower than that of Tahiti. I'd assume that with 28nm being more mature the next round of silicon would be more refined (lower leakage) for both vendors, so unless there is a fundamental redesign in silicon, I'd assume that AMD would look to lower power usage (less heat, higher boost/OC over stock), and Nvidia, higher clocks including memory to counter bandwidth limitation.

The other alternative is that AMD and Nvidia hack and slash the GPU, which doesn't seem all that likely. Adding compute, or beefing up the ROP/TMU count, adds substantially to power draw and die size.
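The "wiggle room" argument can be put in rough numbers (illustrative TDP figures, not measurements; assumes dynamic power scales roughly linearly with clock at constant voltage, with a 300 W ATX board ceiling):

```python
# Clock headroom available to a refresh under the 300 W ATX board limit.

def max_clock_scale(current_watts: float, limit_watts: float = 300.0) -> float:
    """Clock multiplier available at constant voltage (power ~ clock)."""
    return limit_watts / current_watts

gk104_tdp, tahiti_tdp = 195.0, 250.0   # illustrative stock board-power figures
print(f"GK104-class headroom:  x{max_clock_scale(gk104_tdp):.2f}")
print(f"Tahiti-class headroom: x{max_clock_scale(tahiti_tdp):.2f}")
```

Under these assumptions the lower-draw part has roughly a 1.5x clock ceiling versus 1.2x for the higher-draw one, which is the whole "more leeway on clocks for the refresh" point; in practice voltage scaling and leakage shrink both numbers considerably.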
#118
BigMack70
HumanSmokeIf the GK114 and Sea Islands are both basically refreshes, as seems likely, then it should also seem likely that Nvidia have more wiggle room on clocks, since the power draw of GK104 is lower than that of Tahiti. I'd assume that with 28nm being more mature the next round of silicon would be more refined (lower leakage) for both vendors, so unless there is a fundamental redesign in silicon, I'd assume that AMD would look to lower power usage (less heat, higher boost/OC over stock), and Nvidia, higher clocks including memory to counter bandwidth limitation.

The other alternative is that AMD and Nvidia hack and slash the GPU, which doesn't seem all that likely. Adding compute, or beefing up the ROP/TMU count, adds substantially to power draw and die size.
I think it's too early to draw conclusions about this, as the 680 can have its power draw go absolutely through the roof with OC+OV:


Right now, we see that the 7970 and 680 perform about the same when overclocked and so there's no real performance crown. Whether or not one of the companies manages to grab that crown in the next round remains to be seen.
#120
HumanSmoke
BigMack70I think it's too early to draw conclusions about this, as the 680 can have its power draw go absolutely through the roof with OC+OV
Well, firstly, an overvolting comparison is really only valid if comparing max overclock. (BTW: It's common courtesy to link to the site that did the review (EVGA GTX 680 Classy 4GB).)
In point of fact, you've just made my point.
EVGA 680 @ 1287 Core, 1377 boost, 6500 effective memory = 425W under OCCT
HD 7970GE @ 1150 Core, 1200 boost, 6400 effective memory= 484W under OCCT
If 425 watts is "absolutely through the roof", what's 484 watts ?

[source]
BigMack70Right now, we see that the 7970 and 680 perform about the same when overclocked and so there's no real performance crown
True enough, but since overclocked performance isn't guaranteed, stock-vs-stock is probably a better indicator of current performance. Overclocking ability might be more an indicator of how a refresh might perform.
#121
BigMack70
I didn't mean to indicate that the 7970's didn't, only that we don't know enough from this current gen to predict how next gen will play out with power and clocks. This goes double since we keep hearing rumors of a really big chip e.g. GK110 showing up.
#122
crazyeyesreaper
Not a Moderator
Really, we are enthusiasts; boohoo, GPU x uses more power than GPU y, honestly who gives a fuck? I don't buy based on TDP or power usage, I buy based on availability and performance, as do most of you; a lower power requirement is just icing on the damn cake. I don't care if the GPU uses 200W or 500W as long as it does its job.
#123
HumanSmoke
crazyeyesreaperreally we are enthusiasts boohoo gpu x uses more power than gpu y honestly who give a fuck really? I don't buy based on TDP or power usage I buy based on availability and performance
Missed the point by a country mile.
Lower power usage envelope now generally means more leeway on clocks on the refresh (all other things being equal)
crazyeyesreaperAs do most of you a lower power requirement is just icing on the damn cake. I dont care if the GPU uses 200w or 500w long as it does its job.
Who gives a fuck about an individual user in an industry context? When individual users buy more cards than OEM's then AMD and Nvidia will give Dell, HP and every other prebuilder the dismissive wanking gesture and thumb their nose at the ATX specification. Until then, both AMD and Nvidia are pretty much going to adhere to the 300W limit. OEM's don't buy out-of-spec parts.

/Majority of posters talk about the industry situation, crazyeyes talks about crazyeyes' situation
BigMack70I didn't mean to indicate that the 7970's didn't, only that we don't know enough from this current gen to predict how next gen will play out with power and clocks. This goes double since we keep hearing rumors of a really big chip e.g. GK110 showing up.
Everyone here is speculating on details from an article that is itself speculating on the possible makeup of an IHV's card refresh. I put forward a hypothesis based on previous design history (HD 4870 -> HD 4890, GTX 480 -> GTX 580, for isolated examples) where refinement and design headroom produced performance increases. It is by no means the only argument, as shown by the thread, but I don't see it as being proved false by the graph you added, or the one I added to complement it. And if we are commenting upon a speculative article with known fact only, I think the post count on the thread could be reduced by ~120 posts.
#124
atticus14
HumanSmokeInteresting stuff if true. GK110 takes GK104's place in the product stack, and the GTX 680 refresh gets pricing in the GTX 660 Ti's territory. Given that AMD's refresh seems to be looking at the same ~15% increase, it would seem that AMD might end up being pressured pretty hard in perf/mm^2, perf/$ and margins if they have to fight a GTX 680 successor that is 40% cheaper than the current model. A pricing overhaul like that will surely lay waste to the resell market- by the same token, a GTX670 or 680 SLI setup should be cheap as chips come March.

At least all the people screaming about Nvidia pricing a supposed mainstream/performance GK 104 at enthusiast prices, will now be able to vent their rage elsewhere.
hmm? This only confirms their right to rage. If this comes to pass, they are justified and the real rage has only begun!
#125
F0XFOUND
GoldenTigerOh, perhaps it was originally, but GK100 was certainly not "held back" so they could "put out a midrange card as high-end for mad profits!!!!" as some people like to proclaim.
cadavecaThis is always what I thought. If nVidia could truly release a card twice as fast as what AMD has, using the same foundry, then they would, since that would ensure far more sales and profit than selling something that "saves on costs" instead.

In fact, had nVidia done this, it would to a degree amount to price fixing, which of course is illegal.

Of course, now that both cards are here, and we can see the physical size of each chip, we can easily tell that this is certainly NOT the case, at all, so whatever, it's all just marketing drivel.

In fact, it wouldn't really be any different than AMD talking about Steamroller. :p "Man, we got this chip coming...";)
If GK104 was truly the high-end chip for the GTX 680, then why did Nvidia claim it was 3 times as powerful as the GTX 580 and could run the Samaritan demo on just one 680? Can the retail GTX 680 be 3x as fast as the GTX 580 and/or run the Samaritan demo all by itself? I remember the unreleased GTX 680 was touted as being able to.