
NVIDIA Kepler Refresh GPU Family Detailed

Joined
Mar 23, 2012
Messages
570 (0.12/day)
Processor Intel i9-9900KS @ 5.2 GHz
Motherboard Gigabyte Z390 Aorus Master
Cooling Corsair H150i Elite
Memory 32GB Viper Steel Series DDR4-4000
Video Card(s) RTX 3090 Founders Edition
Storage 2TB Sabrent Rocket NVMe + 2TB Intel 960p NVMe + 512GB Samsung 970 Evo NVMe + 4TB WD Black HDD
Display(s) 65" LG C9 OLED
Case Lian Li O11D-XL
Audio Device(s) Audeze Mobius headset, Logitech Z906 speakers
Power Supply Corsair AX1000 Titanium
You're confusing value with performance in this debate; they're not the same thing at all, nor is it a valid argument. :shadedshu

Except that your 680 isn't going to outperform an overclocked 7970. They'll be about the same.

And I'm not confusing anything... the 7970 and 670 are about the same, and the 7970GE and 680 are about the same. If you overclock the 7970 or 7970GE, they'll match an overclocked 680 - trading blows depending on the game/test and overall being about the same.

I know you have to defend your silly purchase of an overclocking-oriented voltage locked card, though :laugh:
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.48/day)
Location
Reaching your left retina.
Visualization and HPC are not the same thing, even if both require a lot of computational power. Games require a lot of computation too, and they're not labelled HPC. ;)

Did you read that press release?

:p

I mean, that whole press release is nVidia claiming it IS profitable, or they wouldn't be marketing toward it. :p

Yeah, I read it, and it says nothing about its profitability on its own. That's why GK110-based GeForce cards are going to be released. ;)

In fact, that press release kinda proves my whole original point, now doesn't it? GK104 for imaging (3D, Quadro and GeForce), GK110 for compute (Tesla).

No, it doesn't. It would prove it if GK110 had no gaming features. Visualization != gaming. If DirectX and all of its gaming features are not needed, why does GK110 have them, and why does it improve on them over GK104, in fact?
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.53/day)
If DirectX and all of its gaming features are not needed, why does GK110 have them, and why does it improve on them over GK104, in fact?

Because Windows is the standard GUI for most users, and Windows uses DirectX?


:laugh:


This isn't the first generation Maximus tech, either...
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.48/day)
Location
Reaching your left retina.
Because Windows is the standard GUI for most users, and Windows uses DirectX?


:laugh:

Erm, OpenGL is used in MOST if not all of those systems??

This isn't the first generation Maximus tech, either...

lol, and what does that tell us? It definitely does not tell us that GK104 -> visualization and GK110 -> HPC. It tells us that Nvidia is willing to mix and match Quadro and Tesla cards to get more $$, and that's all. :laugh:

The thing is, for the time being there's no GK110 Quadro, just as there's no GK110 GeForce. And the reason is not that one is feasible and the other isn't. Such big chips were possible in GeForce cards in the past and surely are right now (more so since 28nm is so much better with regard to power consumption). And you'll see them, you can be sure of that, when Nvidia sees fit.
 
Last edited:
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
GK110, like most of the big-die Nvidia GPUs, is aimed more at the professional market than at gaming. Gaming provides high-visibility PR and a useful ongoing marketing tool going forward...these cards represent an iconic face of each generation, but as a segment, $500+ gaming cards are a minuscule part of sales....It's also the reason Nvidia developed CUDA, and why Nvidia has a stranglehold on the professional graphics market. At $3K per single-GPU card, it's relatively easy to see where Nvidia's priorities lie.


A couple of points - can't be fucked looking for the quotes in this drag race of a thread.

Medical imaging: My GF works in radiology (CAT, MRI, etc.) and the setup is Quadro for image output and 3D representation, and Tesla for computation (math co-processor). There is no real difference between medical imaging and any HPC task (weather forecasting, economics/physics/warfare simulation, or any other complex number crunching).
Die size (Dave?): Posting pictures means the square root of fuck-all. Show a picture of an Nvidia chip that isn't covered by a heatspreader if you're making a comparison. BTW: a few mm here or there doesn't sound like a lot, but it impacts the number of usable die candidates substantially (Die per wafer calculator).
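For anyone who wants to sanity-check that, here is a minimal sketch of the usual gross-dies-per-wafer approximation; the 300 mm wafer and the example die areas are my own assumptions for illustration, not figures from this thread:

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Rough gross die count per wafer, before defect yield.

    Common approximation: pi * r^2 / A  -  pi * d / sqrt(2 * A),
    where the second term accounts for partial dice lost at the wafer edge.
    """
    radius = wafer_diameter_mm / 2
    area_term = math.pi * radius ** 2 / die_area_mm2
    edge_term = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(area_term - edge_term)

# Example: a GK104-sized die (~294 mm^2) versus a rumoured GK110-sized die (~550 mm^2)
for area in (294, 550):
    print(f"{area} mm^2 -> ~{gross_dies_per_wafer(area)} gross dies per 300 mm wafer")
```

Going from ~294 mm^2 to ~550 mm^2 roughly halves the gross candidates before yield even enters the picture, which is why the few millimetres matter.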

GK110 is pretty much on schedule judging by its estimated tape-out. It looks to have had no more than two silicon revisions (and possibly only one) from the initial risk wafer lot to commercial shipping. ORNL started receiving GK110 last month.

EDIT: Graph link
 
Last edited:

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.53/day)
Show a picture of an Nvidia chip that isn't covered by a heatspreader if you're making a comparison.

I did. :p

Erm OpenGL is used in MOST if not all of those systems??

Sure, but nearly everyone runs Windows. Linux, yes, if it's a server, but most systems with actual users in front of them are Windows-based. Not sure why...honestly...but it is what it is. From banks to hospitals, most run Windows.

It tells us that Nvidia is willing to mix and match Quadro and Tesla cards to get more $$, and that's all.

Actually..no.

As a result of the time needed to context switch, Quadro products are not well suited to doing rendering and compute at the same time. They certainly can, but depending on what applications are being used and what they’re trying to do the result can be that compute eats up a great deal of GPU time, leaving the GUI to only update at a few frames per second with significant lag. On the consumer side NVIDIA’s ray-tracing Design Garage tech demo is a great example of this problem, and we took a quick video on a GTX 580 showcasing how running the CUDA based ray-tracer severely impacts GUI performance.

Alternatively, a highly responsive GUI means that the compute tasks aren’t getting a lot of time, and are only executing at a fraction of the performance that the hardware is capable of. As part of their product literature NVIDIA put together a few performance charts, and while they should be taken with a grain of salt, they do quantify the performance advantage of moving compute over to a dedicated GPU.

For these reasons if an application needs to do both compute and rendering at the same time then it’s best served by sending the compute task to a dedicated GPU. This is the allocation work developers previously had to take into account and that NVIDIA wants to eliminate. At the end of the day the purpose of Maximus is to efficiently allow applications to do both rendering and compute by throwing their compute workload on another GPU, because no one wants to spend $3500 on a Quadro 6000 only for it to get bogged down.

http://www.anandtech.com/show/5094/nvidias-maximus-technology-quadro-tesla-launching-today/2



That's from before Kepler's launch. Long before. Nvidia has long planned a dual-GPU infrastructure, because really, that's what makes sense. So making GK104 the GTX part without all the cache, and GK110 the part with the cache for compute, and then doing the same for the next generation too, makes a whole lot of sense.
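To make the "throw the compute workload on another GPU" idea concrete, here is a minimal sketch of splitting work onto a dedicated compute device, assuming a two-GPU box (device 0 driving the display, device 1 for compute) and using numba.cuda purely for illustration; none of this is NVIDIA's actual Maximus code:

```python
import numpy as np
from numba import cuda

@cuda.jit
def scale(arr, factor):
    # Trivial stand-in for a heavy compute kernel (e.g. a ray-tracer or solver).
    i = cuda.grid(1)
    if i < arr.size:
        arr[i] *= factor

def run_compute_on_dedicated_gpu(data, device_id=1):
    # Bind the heavy work to the non-display GPU so the GUI device stays responsive.
    cuda.select_device(device_id)
    d_data = cuda.to_device(data)
    threads = 256
    blocks = (data.size + threads - 1) // threads
    scale[blocks, threads](d_data, 2.0)
    return d_data.copy_to_host()

if __name__ == "__main__":
    result = run_compute_on_dedicated_gpu(np.arange(1024, dtype=np.float32))
    print(result[:8])
```

The point of Maximus, per the AnandTech piece quoted above, is that this kind of allocation happens without developers having to hand-tune it per application.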
 
Last edited:

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.48/day)
Location
Reaching your left retina.
Sure, but nearly everyone runs Windows. Linux, yes, if it's a server, but most systems with actual users in front of them are Windows-based. Not sure why...honestly...but it is what it is. From banks to hospitals, most run Windows.

Yes, Windows, but not DirectX with all its features, which for the pro market do mostly nothing but make the shader processors much fatter. If there were a really serious push from Nvidia to split the markets, they would have done it. Putting GPU features in GK110 when they are so clearly pushing for Maximus integration is stupid if they really wanted to split the market. And they are not stupid. Look, I remember the exact same things being said back when Fermi (GF100) was unveiled, because just like GK110 it was unveiled at an HPC event and the focus was 100% on HPC features. But they were excellent gaming GPUs too. Power consumption was "bad" on the HPC-feature-stripped GF104/GF114 too - same perf/watt as the GTX 580, so it had nothing to do with the added HPC features. And GK110 is much the same, I'm pretty sure of that.

http://tpucdn.com/reviews/ASUS/HD_7970_Matrix/images/perfwatt_1920.gif

That's from before Kepler's launch. Long before. Nvidia has long planned a dual-GPU infrastructure, because really, that's what makes sense. So making GK104 the GTX part without all the cache, and GK110 the part with the cache for compute, and then doing the same for the next generation too, makes a whole lot of sense.

Don't you see that your logic fails? That's why you are not being coherent. Why does GK110 have gaming/visualization features AT ALL if it was never meant for them from the beginning and Maximus is the final goal? It's as simple as that. You're now trying to legitimize the idea that Maximus is the way to go* and that it's been Nvidia's plan since last generation. Again, why those features on GK110?? It makes no sense, don't you see that? It's either one thing or the other; both cannot coexist. Either GK110 was conceived as a visualization/gaming (DirectX) powerhouse or it wasn't. If Maximus is Nvidia's idea for HPC and was their only intention with GK110, then GK104 plus a GK110 completely stripped of any functionality other than HPC would have ended up with a smaller die area, both combined. But they didn't go that route, and you really, really have to think about why. Why did they follow that route? Why are there rumors about GK110-based GeForces, and so on?

*I agree, but that's beside the point, and my comment regarding $$ is also true and you know it. :) The context switching is simply also convenient, and Kepler has vastly improved context switching, so at some point two cards would not be required.
 
Last edited:
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.53/day)
:roll:


Like really....that pic to me says it all. :roll:

Is it really 512 mm^2?

What clock speeds are the Tesla cards? 600 MHz, I'm guessing?

Why did they follow that route?

Because their customers asked for it.

CUDA can use all those features you call "useless". It's not quite how you put it...there isn't really much, if any, dedicated hardware for the purposes you mention. At least not any that takes up die space worth mentioning.


See that picture above? Point out to me where these "DirectX features" are located...
 
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
:roll:
Like really....that pic to me says it all. :roll:
Is it really 512 mm^2?
What clock speeds are the Tesla cards? 600 MHz, I'm guessing?

The overlay shows a GF100 of 521 mm^2.

The released K20 spec says 705 MHz. Standard practice to keep the board power under the 225 W limit (this is what happens when you try to keep clocks high to inflate FLOP performance in a compute environment). I'd expect the GeForce card to be bound closer to (if not fudging over) the ATX 300 W limit (maybe 900 MHz or so).
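To show why the clock matters so much for the headline number, here is a quick sketch of the peak-FLOPS arithmetic; the 2880-shader figure is an assumption for illustration (a hypothetical full 15-SMX GK110), not a confirmed K20 spec:

```python
def peak_sp_gflops(cuda_cores, clock_mhz):
    """Theoretical single-precision peak: cores x 2 ops per clock (FMA) x clock."""
    return cuda_cores * 2 * clock_mhz / 1000.0

cores = 2880  # assumed full GK110 shader count, for illustration only
for clock_mhz in (705, 900):
    print(f"{clock_mhz} MHz -> {peak_sp_gflops(cores, clock_mhz):.0f} GFLOPS SP peak")
```

Peak FLOPS scales linearly with clock, so a Tesla part held to ~705 MHz for the 225 W envelope gives up a sizeable chunk of its headline number compared with a ~900 MHz GeForce-style clock.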
With the shader count, the larger cache structure, and provision for 72-bit (64 + 8 ECC) memory controllers, I think the rumoured 550 mm^2 die size is probably very close - another thing that argues against GK110 being a gaming card (at least primarily). AFAIK, Nvidia's own whitepaper describes their ongoing strategy as gaming and compute becoming distinct product/architectural lines for the most part (see Maxwell and the imminent threat of Intel's Xeon Phi).
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.48/day)
Location
Reaching your left retina.
:roll:


Like really....that pic to me says it all. :roll:

Is it really 512mm?

What clockspeeds are the Tesla cards? 600 Mhz, I'm guessing?

So you didn't know the size of GF100/110? How are you even attempting to make a half-arsed argument regarding all of this (especially what is feasible) if you lack such an essential bit of info? (I demand you be denied reviewing rights right now :ohwell: j/k)

And yes, that pic says it all. If you really think that after the 521 mm^2 GF110 they would put up a 294 mm^2 chip against AMD's 365 mm^2 one, you are deluded, sir.


CUDA can use all those features you call "useless".

No, it doesn't.

It's not quite how you put it...there isn't really much, if any, dedicated hardware for the purposes you mention. At least not any that takes up die space worth mentioning.

Yes, it does. Shader processors have to be fatter and include more instructions, or different ones from what would be best for HPC. The ISA has to be much wider, resulting in more complex fetch and decode, which not only widens the front end but also makes it significantly slower. And there's tessellation, of course. There's absolutely no sense in adding more functionality than is required. If the functionality is there, it's because it's meant to be used.

See that picture above? Point out to me where these "DirectX features" are located...

Are you serious? Too many beers today or what? You seem to be trolling now... :ohwell:
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.53/day)
The overlay shows a GF100 of 521 mm^2.

The released K20 spec says 705 MHz. Standard practice to keep the board power under the 225 W limit (this is what happens when you try to keep clocks high to inflate FLOP performance in a compute environment). I'd expect the GeForce card to be bound closer to (if not fudging over) the ATX 300 W limit (maybe 900 MHz or so).
With the shader count, the larger cache structure, and provision for 72-bit (64 + 8 ECC) memory controllers, I think the rumoured 550 mm^2 die size is probably very close - another thing that argues against GK110 being a gaming card (at least primarily). AFAIK, Nvidia's own whitepaper describes their ongoing strategy as gaming and compute becoming distinct product/architectural lines for the most part (see Maxwell and the imminent threat of Intel's Xeon Phi).

Huh. We don't seem to disagree, then. That's what I had thought. Just Bene here disagrees, but maybe only because I said GK104 was always supposed to be the GTX 680, not GK110.


And yes, Bene...I had no idea...as I said like 5 times earlier...because I review motherboards and memory, not GPUs. GPUs are W1zz's territory.


wait.


K20 is GF110, not GK110.


:p

Still...damn that's a huge chip.

Are you serious? Too many beers today or what? You seem to be trolling now...

Yes, serious. Show me EXACTLY where DirectX makes the die bigger. Because from what I've been led to believe by nVidia, it's actually the opposite of what you indicate...as does the rest of the info I got from them. :p
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.48/day)
Location
Reaching your left retina.
Yes, serious. Show me EXACTLY where DirectX makes the die bigger. Because from what I've been led to believe by nVidia, it's actually the opposite of what you indicate...as does the rest of the info I got from them. :p

It's not something you can see there, lol; that's why I asked if you were being serious.

Ever wondered why DX11 shader processors require more transistors/die area and are clock-for-clock slower than DX10 SPs? (e.g. HD 4870 vs HD 5770)

Whatever, here's an easier example to show how stupid the question is. Can you point out to me where the tessellators are? Tessellators actually are a separate entity, unlike functionality included in the shader processors.

BTW K20 is GK110, lol.

EDIT: A more fitting analogy:



Please point me to exactly where they planted potatoes, where wheat, and where corn.
 
Last edited:
Joined
Nov 13, 2009
Messages
5,614 (1.02/day)
Location
San Diego, CA
System Name White Boy
Processor Core i7 3770k @4.6 Ghz
Motherboard ASUS P8Z77-I Deluxe
Cooling CORSAIR H100
Memory CORSAIR Vengeance 16GB @ 2177
Video Card(s) EVGA GTX 680 CLASSIEFIED @ 1250 Core
Storage 2 Samsung 830 256 GB (Raid 0) 1 Hitachi 4 TB
Display(s) 1 Dell 30U11 30"
Case BIT FENIX Prodigy
Audio Device(s) none
Power Supply SeaSonic X750 Gold 750W Modular
Software Windows Pro 7 64 bit || Ubuntu 64 Bit
Benchmark Scores 2017 Unigine Heaven :: P37239 3D Mark Vantage
Except that your 680 isn't going to outperform an overclocked 7970. They'll be about the same.

And I'm not confusing anything... the 7970 and 670 are about the same, and the 7970GE and 680 are about the same. If you overclock the 7970 or 7970GE, they'll match an overclocked 680 - trading blows depending on the game/test and overall being about the same.

I know you have to defend your silly purchase of an overclocking-oriented voltage locked card, though :laugh:

Yes, it's totally silly to score a nice video card... I am not defending anything, but I seriously doubt many, if any, 7970s outside of ones with water cooling or LN2 would be able to keep up with my 680 @ 1320 core while being cooled on air...

That said, you seem to only care about suckling on AMD's teat and disparaging things you don't like, rather than having a discussion of substance. :shadedshu
 
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
Huh. We don't seem to disagree, then. That's what I had thought
We probably are in agreement on the point. But then, Nvidia have been integrating GPGPU since G80 (8800 GTX/Ultra). Back then the strategy seemed to be a one-size-fits-all mentality (plus Jen-Hsun's win-at-all-costs ego, I would suggest), which is fine so long as the GPU's gestation isn't protracted and yields are good. G200 (GTX 280 etc.) seemed to signal that Nvidia was walking a knife edge between what can be achieved and the pitfalls of process design, and Fermi seems to have been a big wake-up call and an example of "what can go wrong will go wrong". Loss of prestige could well have translated into loss of market share had it not been for the pro market being very slow to change/update and Nvidia's software environment being top notch.

Kepler compute cards have orders in the region of 100,000-150,000 units. At $3K+ apiece (even taking into account the low-end GK104, since a 4x GPU "S" 1U system is bound to materialize, taking the place of the S2090) it isn't hard to see how Nvidia would look at a modular mix-and-match approach to a gaming/workstation GPU and a compute/workstation GPU. In a way, it matches AMD's past strategy, which is ironic considering that AMD have adopted compute at the expense of a larger die. The difference is that AMD wouldn't contemplate a monolithic GPU like GK110 - the risk is too great (process worries), and the return too small (not enough presence in the markets it would be aimed at).
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.53/day)
We probably are in agreement on the point. But then, Nvidia have been integrating GPGPU since G80 (8800 GTX/Ultra). Back then the strategy seemed to be a one-size-fits-all mentality (plus Jen-Hsun's win-at-all-costs ego, I would suggest), which is fine so long as the GPU's gestation isn't protracted and yields are good. G200 (GTX 280 etc.) seemed to signal that Nvidia was walking a knife edge between what can be achieved and the pitfalls of process design, and Fermi seems to have been a big wake-up call and an example of "what can go wrong will go wrong". Loss of prestige could well have translated into loss of market share had it not been for the pro market being very slow to change/update and Nvidia's software environment being top notch.

Kepler compute cards have orders in the region of 100,000-150,000 units. At $3K+ apiece (even taking into account the low-end GK104, since a 4x GPU "S" 1U system is bound to materialize, taking the place of the S2090) it isn't hard to see how Nvidia would look at a modular mix-and-match approach to a gaming/workstation GPU and a compute/workstation GPU. In a way, it matches AMD's past strategy, which is ironic considering that AMD have adopted compute at the expense of a larger die. The difference is that AMD wouldn't contemplate a monolithic GPU like GK110 - the risk is too great (process worries), and the return too small (not enough presence in the markets it would be aimed at).

Yeah, I actually kinda like how they are similar, but since they both make GPUs for kinda the same audience, that only makes sense.

As to the whole monolithic thing: ever since Fermi, it made sense to me that they would diverge, since they identified the problem there and then realized it could be an issue again in the future...one they could avoid on their higher-volume-but-lower-profit products.

And considering the market, and Nvidia's plans with ARM, it makes sense they'd want to sell you a Tesla card, a Quadro card, and a motherboard for it all with an ARM chip. It's the same as buying a CPU/GPU/board...
 
Joined
Mar 23, 2012
Messages
570 (0.12/day)
Processor Intel i9-9900KS @ 5.2 GHz
Motherboard Gigabyte Z390 Aorus Master
Cooling Corsair H150i Elite
Memory 32GB Viper Steel Series DDR4-4000
Video Card(s) RTX 3090 Founders Edition
Storage 2TB Sabrent Rocket NVMe + 2TB Intel 960p NVMe + 512GB Samsung 970 Evo NVMe + 4TB WD Black HDD
Display(s) 65" LG C9 OLED
Case Lian Li O11D-XL
Audio Device(s) Audeze Mobius headset, Logitech Z906 speakers
Power Supply Corsair AX1000 Titanium
Yes, it's totally silly to score a nice video card... I am not defending anything, but I seriously doubt many, if any, 7970s outside of ones with water cooling or LN2 would be able to keep up with my 680 @ 1320 core while being cooled on air...

That said, you seem to only care about suckling on AMD's teat and disparaging things you don't like, rather than having a discussion of substance. :shadedshu

A 680 with a GPU Boost frequency of ~1300 MHz is roughly equivalent to a 7970 with a core clock of ~1200 MHz...

If you managed to get a GPU core clock of 1300 MHz and a boost of 1400+ MHz, you got very lucky, akin to someone who got a 7970 and managed to hit 1300 MHz.
 
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
A 680 with a GPU Boost frequency of ~1300 MHz is roughly equivalent to a 7970 with a core clock of ~1200 MHz...
If you managed to get a GPU core clock of 1300 MHz and a boost of 1400+ MHz, you got very lucky, akin to someone who got a 7970 and managed to hit 1300 MHz.
If GK114 and Sea Islands are both basically refreshes, as seems likely, then it would also seem likely that Nvidia have more wiggle room on clocks, since the power draw of GK104 is lower than that of Tahiti. I'd assume that with 28 nm being more mature, the next round of silicon would be more refined (lower leakage) for both vendors, so unless there is a fundamental redesign of the silicon, I'd expect AMD to look at lowering power usage (less heat, higher boost/OC over stock), and Nvidia at higher clocks, including memory, to counter the bandwidth limitation.

The other alternative is that AMD and Nvidia hack and slash the GPU, which doesn't seem all that likely. Adding compute and beefing up the ROP/TMU count adds substantially to power draw and die size.
 
Joined
Mar 23, 2012
Messages
570 (0.12/day)
Processor Intel i9-9900KS @ 5.2 GHz
Motherboard Gigabyte Z390 Aorus Master
Cooling Corsair H150i Elite
Memory 32GB Viper Steel Series DDR4-4000
Video Card(s) RTX 3090 Founders Edition
Storage 2TB Sabrent Rocket NVMe + 2TB Intel 960p NVMe + 512GB Samsung 970 Evo NVMe + 4TB WD Black HDD
Display(s) 65" LG C9 OLED
Case Lian Li O11D-XL
Audio Device(s) Audeze Mobius headset, Logitech Z906 speakers
Power Supply Corsair AX1000 Titanium
If GK114 and Sea Islands are both basically refreshes, as seems likely, then it would also seem likely that Nvidia have more wiggle room on clocks, since the power draw of GK104 is lower than that of Tahiti. I'd assume that with 28 nm being more mature, the next round of silicon would be more refined (lower leakage) for both vendors, so unless there is a fundamental redesign of the silicon, I'd expect AMD to look at lowering power usage (less heat, higher boost/OC over stock), and Nvidia at higher clocks, including memory, to counter the bandwidth limitation.

The other alternative is that AMD and Nvidia hack and slash the GPU, which doesn't seem all that likely. Adding compute and beefing up the ROP/TMU count adds substantially to power draw and die size.

I think it's too early to draw conclusions about this, as the 680 can have its power draw go absolutely through the roof with OC+OV:


Right now, we see that the 7970 and 680 perform about the same when overclocked and so there's no real performance crown. Whether or not one of the companies manages to grab that crown in the next round remains to be seen.
 
Joined
Feb 24, 2009
Messages
3,516 (0.61/day)
System Name Money Hole
Processor Core i7 970
Motherboard Asus P6T6 WS Revolution
Cooling Noctua UH-D14
Memory 2133Mhz 12GB (3x4GB) Mushkin 998991
Video Card(s) Sapphire Tri-X OC R9 290X
Storage Samsung 1TB 850 Evo
Display(s) 3x Acer KG240A 144hz
Case CM HAF 932
Audio Device(s) ADI (onboard)
Power Supply Enermax Revolution 85+ 1050w
Mouse Logitech G602
Keyboard Logitech G710+
Software Windows 10 Professional x64
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
I think it's too early to draw conclusions about this, as the 680 can have its power draw go absolutely through the roof with OC+OV
Well, firstly, an overvolting comparison is really only valid if you're comparing max overclocks. (BTW: it's common courtesy to link to the site that did the review (EVGA GTX 680 Classy 4GB).)
In point of fact, you've just made my point.
EVGA 680 @ 1287 core, 1377 boost, 6500 effective memory = 425 W under OCCT
HD 7970 GE @ 1150 core, 1200 boost, 6400 effective memory = 484 W under OCCT
If 425 watts is "absolutely through the roof", what's 484 watts?

[source]
Right now, we see that the 7970 and 680 perform about the same when overclocked and so there's no real performance crown
True enough, but since overclocked performance isn't guaranteed, stock-vs-stock is probably a better indicator of current performance. Overclocking ability might be more of an indicator of how a refresh might perform.
 
Joined
Mar 23, 2012
Messages
570 (0.12/day)
Processor Intel i9-9900KS @ 5.2 GHz
Motherboard Gigabyte Z390 Aorus Master
Cooling Corsair H150i Elite
Memory 32GB Viper Steel Series DDR4-4000
Video Card(s) RTX 3090 Founders Edition
Storage 2TB Sabrent Rocket NVMe + 2TB Intel 960p NVMe + 512GB Samsung 970 Evo NVMe + 4TB WD Black HDD
Display(s) 65" LG C9 OLED
Case Lian Li O11D-XL
Audio Device(s) Audeze Mobius headset, Logitech Z906 speakers
Power Supply Corsair AX1000 Titanium
I didn't mean to indicate that the 7970s didn't, only that we don't know enough from this current gen to predict how the next gen will play out with power and clocks. This goes double since we keep hearing rumors of a really big chip, e.g. GK110, showing up.
 

crazyeyesreaper

Not a Moderator
Staff member
Joined
Mar 25, 2009
Messages
9,816 (1.72/day)
Location
04578
System Name Old reliable
Processor Intel 8700K @ 4.8 GHz
Motherboard MSI Z370 Gaming Pro Carbon AC
Cooling Custom Water
Memory 32 GB Crucial Ballistix 3666 MHz
Video Card(s) MSI RTX 3080 10GB Suprim X
Storage 3x SSDs 2x HDDs
Display(s) ASUS VG27AQL1A x2 2560x1440 8bit IPS
Case Thermaltake Core P3 TG
Audio Device(s) Samson Meteor Mic / Generic 2.1 / KRK KNS 6400 headset
Power Supply Zalman EBT-1000
Mouse Mionix NAOS 7000
Keyboard Mionix
Really, we are enthusiasts. Boohoo, GPU X uses more power than GPU Y - honestly, who gives a fuck, really? I don't buy based on TDP or power usage; I buy based on availability and performance, as do most of you. A lower power requirement is just icing on the damn cake. I don't care if the GPU uses 200 W or 500 W as long as it does its job.
 
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
Really, we are enthusiasts. Boohoo, GPU X uses more power than GPU Y - honestly, who gives a fuck, really? I don't buy based on TDP or power usage; I buy based on availability and performance
Missed the point by a country mile.
A lower power envelope now generally means more leeway on clocks in the refresh (all other things being equal).

As do most of you. A lower power requirement is just icing on the damn cake. I don't care if the GPU uses 200 W or 500 W as long as it does its job.
Who gives a fuck about an individual user in an industry context? When individual users buy more cards than OEMs, then AMD and Nvidia will give Dell, HP, and every other prebuilder the dismissive wanking gesture and thumb their noses at the ATX specification. Until then, both AMD and Nvidia are pretty much going to adhere to the 300 W limit. OEMs don't buy out-of-spec parts.

/Majority of posters talk about the industry situation, crazyeyes talks about the crazyeyes situation
I didn't mean to indicate that the 7970s didn't, only that we don't know enough from this current gen to predict how the next gen will play out with power and clocks. This goes double since we keep hearing rumors of a really big chip, e.g. GK110, showing up.
Everyone here is speculating on details from an article that is itself speculating on the possible makeup of an IHV's card refresh. I put forward a hypothesis based on previous design history (HD 4870 -> HD 4890, GTX 480 -> GTX 580 as isolated examples) where refinement and design headroom produced performance increases. It is by no means the only argument, as shown by the thread, but I don't see it as being proved false by the graph you added, or by the one I added to complement it. And if we were commenting on a speculative article with known facts only, I think the post count on this thread could be reduced by ~120 posts.
 
Last edited:
Joined
Apr 28, 2012
Messages
12 (0.00/day)
Interesting stuff if true. GK110 takes GK104's place in the product stack, and the GTX 680 refresh gets pricing in the GTX 660 Ti's territory. Given that AMD's refresh seems to be looking at the same ~15% increase, it would seem that AMD might end up being pressured pretty hard in perf/mm^2, perf/$, and margins if they have to fight a GTX 680 successor that is 40% cheaper than the current model. A pricing overhaul like that will surely lay waste to the resale market; by the same token, a GTX 670 or 680 SLI setup should be cheap as chips come March.

At least all the people screaming about Nvidia pricing a supposedly mainstream/performance GK104 at enthusiast prices will now be able to vent their rage elsewhere.

Hmm? This only confirms their right to rage. If this comes to pass and they are justified, the real rage has only begun!
 