Thursday, August 9th 2018

NVIDIA GTX 1080-successor a Rather Hot Chip, Reference Cooler Has Dual-Fans

The GeForce GTX 1080 set high standards for efficiency. Launched as a high-end product that was faster than any other client-segment graphics card at the time, the GTX 1080 made do with just a single 8-pin PCIe power connector and had a TDP of just 180W. The reference-design PCB, accordingly, has a rather simple VRM setup. The alleged GTX 1080-successor, called either GTX 1180 or GTX 2080 depending on who you ask, could deviate from this philosophy of extreme efficiency. There were telltale signs of the departure in the first bare PCB shots.

The PCB pictures revealed preparation for an unusually strong VRM design, given that this is an NVIDIA reference board. It draws power from a combination of 6-pin and 8-pin PCIe power connectors, and features a 10+2 phase setup, with up to 10 vGPU and 2 vMem phases. The size of the ASIC pad and provision for no more than 8 memory chips confirmed that the board is meant for the GTX 1080-successor. Adding to the theory of this board running unusually hot is an article by Chinese publication Benchlife.info, which mentions that the reference-design (Founders Edition) cooling solution does away with the single lateral blower, and features a strong aluminium fin-stack heatsink ventilated by two top-flow fans (like most custom-design cards). Given that NVIDIA avoided such a design even for big-chip cards such as the GTX 1080 Ti FE or the TITAN V, the GTX 1080-successor is proving to be an interesting card to look forward to. But then what if this is the fabled GTX 1180+ / GTX 2080+, slated for late-September?
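As a rough back-of-the-envelope sketch of what the 6-pin + 8-pin connector combination implies, the snippet below sums the standard PCIe power-delivery ceilings (75 W from the slot, 75 W from a 6-pin, 150 W from an 8-pin). These are spec limits, not the card's actual TDP, which NVIDIA sets separately:

```python
# Spec power ceilings per the PCIe Card Electromechanical spec
# (limits, not measured draw).
PCIE_SLOT_W = 75    # PCI Express x16 slot
SIX_PIN_W = 75      # 6-pin auxiliary connector
EIGHT_PIN_W = 150   # 8-pin auxiliary connector

def max_board_power(connectors):
    """Theoretical ceiling: slot limit plus the given aux connectors."""
    return PCIE_SLOT_W + sum(connectors)

# GTX 1080 reference: single 8-pin gives a 225 W ceiling vs. its 180 W TDP.
print(max_board_power([EIGHT_PIN_W]))             # 225
# Rumored successor: 6-pin + 8-pin raises the ceiling to 300 W.
print(max_board_power([SIX_PIN_W, EIGHT_PIN_W]))  # 300
```

The jump from a 225 W to a 300 W ceiling is what makes the beefed-up VRM and dual-fan cooler plausible.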
Sources: VideoCardz, BenchLife.info

75 Comments on NVIDIA GTX 1080-successor a Rather Hot Chip, Reference Cooler Has Dual-Fans

#26
Vya Domus
RTX looks more and more like fringe technology to me, something that can't be used extensively for proper results. Reminds me a lot of tessellation: we got bombarded with it when DX11 arrived, and you would have been convinced that in 3-4 years every surface rendered in a game would be tessellated. Fast forward to today and it's still used sparingly, as its cost still far outweighs the results, and it will likely remain that way; stuff like parallax occlusion mapping proved to be much more feasible across all types of hardware and platforms.

Integrating that into hardware and building an entire product line based on it? That might be too much even for Nvidia.
Posted on Reply
#27
FordGT90Concept
"I go fast!1!11!1!"
uuuaaaaaaTotally, I think it will also be on the same node as the current gen, which fits with the subject of this thread's OP.
Exactly. Probably 12nm which means Turing chips are big, hot, and hungry.
Posted on Reply
#28
Fluffmeister
FordGT90ConceptExactly. Probably 12nm which means Turing chips are big, hot, and hungry.
But hopefully fast, and as we all know heat and power consumption are a non-issue these days apparently.
Posted on Reply
#29
Vya Domus
Fluffmeisterand as we all know heat and power consumption are a non-issue these days apparently.
Not when the sticker is green.
Posted on Reply
#31
bug
Vya DomusRTX looks more and more like fringe technology to me, something that can't be used extensively for proper results. Reminds me a lot of tessellation: we got bombarded with it when DX11 arrived, and you would have been convinced that in 3-4 years every surface rendered in a game would be tessellated. Fast forward to today and it's still used sparingly, as its cost still far outweighs the results, and it will likely remain that way; stuff like parallax occlusion mapping proved to be much more feasible across all types of hardware and platforms.

Integrating that into hardware and building an entire product line based on it? That might be too much even for Nvidia.
Lighting is not a local thing, it's scene-wide and much more likely to yield visible results. That said, at the end of the day it's still just a tool, so it very much depends on how you use it. That, and if history repeats itself, only the second hardware iteration will be able to handle it properly (but that's just an assumption on my part).

Also, if you truly believed that about tessellation, you must have missed TruForm before it.
Vya DomusNot when the sticker is green.
Or blue?
Posted on Reply
#32
cucker tarlson
uuuaaaaaaThis is allegedly leaked info by an nVidia employee:

7nm would be great.
Posted on Reply
#33
$ReaPeR$
most people don't give a shit as long as it's perceived as the "fastest" gpu. this will sell like "hot" cakes no matter its actual specs and abilities, just because nvidia is a "trusted" brand in the eyes of the average consumer. this will be the fastest gpu tho, and probably the hottest if nvidia is trying to put in it the "compute" abilities that previous gens didn't get. OR, as mentioned previously, nvidia is trying to outsell its AIB vendors by implementing such a design. OR it's a combination of both. regardless, atm all this is speculation; we shall see what's what in due time.
Posted on Reply
#34
Tsukiyomi91
I would just wait for proper reviews where retail-ready samples are tested & see if these new hot running chips are worth the upgrade or not. Until then, I'll hold onto my wallet.
Posted on Reply
#36
bug
Tsukiyomi91I would just wait for proper reviews where retail-ready samples are tested & see if these new hot running chips are worth the upgrade or not. Until then, I'll hold onto my wallet.
Worth the upgrade is relative. If you own another card from last generation's same tier, it's usually not worth it. If you own something older or are looking to jump up a tier, it's usually worth it.
Posted on Reply
#37
medi01
Vya Domus
FluffmeisterBut hopefully fast, and as we all know heat and power consumption are a none issue these days apparently.
Not when the sticker is green.
The irony of these posts is incredible.
FluffmeisterTwo fans though, the drama!
Yeah, VRMs, shmms, what are those anyhow?
Posted on Reply
#38
Fluffmeister
medi01The irony of these posts is incredible.
Irony, double standards, it's all comedy gold frankly.
medi01Yeah, VRMs, shmms, what are those anyhow?
My card has two fans now, and they aren't even spinning.
Posted on Reply
#39
medi01
FluffmeisterIrony, double standards, it's all comedy gold frankly.
Right, it was "somebody else" not bentoverbackwards team greens calling cards consuming 250W "power hogs" and 50-70W difference in power consumption "huge".

I'm preparing popcorn for the price justification talks.
Posted on Reply
#40
Fluffmeister
medi01Right, it was "somebody else" not bentoverbackwards team greens calling cards consuming 250W "power hogs" and 50-70W difference in power consumption "huge".

I'm preparing popcorn for the price justification talks.
I guess AMD shouldn't have made that stupid video mocking Fermi's power consumption in the first place, it was always going to come back and bite them in the arse.

Also, people moaning about the price of Nvidia cards is a given.
Posted on Reply
#41
Vya Domus
FluffmeisterI guess AMD shouldn't have made that stupid video mocking Fermi's power consumption in the first place
Making fun of something that was actually true and relevant is stupid now, huh? Figured as much; comedy sure has gotten weird over the years.
Posted on Reply
#42
bug
medi01Right, it was "somebody else" not bentoverbackwards team greens calling cards consuming 250W "power hogs" and 50-70W difference in power consumption "huge".

I'm preparing popcorn for the price justification talks.
So your problem is what, exactly?
Vya DomusMaking fun of something that was actually true and relevant is stupid now, huh? Figured as much; comedy sure has gotten weird over the years.
It's not stupid, it just goes to show even AMD agrees power draw is important. What puts them in a bad light is they lost the power efficiency crown right after that hardware generation and never got it back.
Posted on Reply
#43
Fluffmeister
bugIt's not stupid, it just goes to show even AMD agrees power draw is important. What puts them in a bad light is they lost the power efficiency crown right after that hardware generation and never got it back.
Bingo, at least someone gets it.
Posted on Reply
#44
bug
FluffmeisterBingo, at least someone gets it.
Don't worry, everybody gets it. It's just more convenient to play dumb when you don't like it ;)
Posted on Reply
#45
Fluffmeister
bugDon't worry, everybody gets it. It's just more convenient to play dumb when you don't like it ;)
Yep, so in short medi01 is upset about green team members calling cards "power hogs" and highlighting a 50-70W difference in power consumption as "huge".

Yet it was his beloved that made it a pissing contest in the first place.

I made a sarcastic comment about power consumption not being important apparently... and you know the rest.
Posted on Reply
#46
bug
FluffmeisterYep, so in short medi01 is upset about green team members calling cards "power hogs" and highlighting a 50-70W difference in power consumption as "huge".

Yet it was his beloved that made it a pissing contest in the first place.

I made a sarcastic comment about power consumption not being important apparently... and you know the rest.
It's such a useless discussion, Idk why it gets brought up so often. Yes, Nvidia had Fermi (and even FX5000 before that). Yes, AMD has fallen back since their failure to implement TBR. But other than that, they were pretty close to each other. So usually you can pick from either camp. But when the difference widens, it makes sense to warn potential buyers. That is all. The only idiotic thing here is trying to make it look like only one of the players can fall behind. More idiotic to keep reiterating that.
Posted on Reply
#47
Diverge
krykryRemember that Titan V gets a lot of power efficiency from HBM2, which uses about a third of the power GDDR5 uses for the same performance, and doesn't have a significant increase in the number of cores. So HBM2 efficiency covers the computing-core increase.

...Which means that Titan V and Titan Xp perf/watt are roughly the same.
Memory power draw is a small % of the total power draw of the graphics card. The GPU itself draws most of the power.
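To put rough numbers on that, here's an illustrative calculation with hypothetical figures (the 220 W core and 30 W memory split is an assumption for the sake of the example, not a measured breakdown): even if HBM2 cuts memory power to a third, the effect on total board power is small because memory is a small slice of the whole.

```python
# Hypothetical board-power split (illustrative, not measured figures).
gpu_w = 220.0   # assumed core + VRM draw
mem_w = 30.0    # assumed GDDR5 memory draw (small share of the total)

total_before = gpu_w + mem_w                 # 250 W with GDDR5
total_after = gpu_w + mem_w * (1.0 / 3.0)    # HBM2 at ~1/3 the memory power
saving_pct = 100 * (total_before - total_after) / total_before

print(round(saving_pct, 1))  # a ~67% memory saving trims total power ~8%
```

So a dramatic memory-efficiency gain barely moves whole-card perf/watt, which is Diverge's point.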
Posted on Reply
#48
Slizzo
For what it's worth, the PCB for the Founders Edition GTX 1080 had solder points for a 6-pin PCIe power connector in addition to the 8-pin that was used.

And just because the PCB has two fan headers doesn't mean both will be used. Definitely possible, yes. But it's folly to assume this is definitely happening.
Posted on Reply
#49
Prince Valiant
FluffmeisterHey, at least their top dog isn't using LC yet, unlike...
I know your intent with this post but at least FE cards would have a reason to exist if that were the case.
Posted on Reply
#50
efikkan
It's about time that they use a decent stock cooler. Both Pascal and Vega cards boost way into throttle territory, and this card will probably (unfortunately) push this even further.
btarunrThe GeForce GTX 1080 set high standards for efficiency. Launched as a high-end product that was faster than any other client-segment graphics card at the time, the GTX 1080 made do with just a single 8-pin PCIe power connector, and had a TDP of just 180W. The reference-design PCB, accordingly, has a rather simple VRM setup. The alleged GTX 1080-successor, called either GTX 1180 or GTX 2080 depending on who you ask, could deviate from its ideology of extreme efficiency. There were telltale signs of this departure on the first bare PCB shots.
I guess you mean higher TDP rather than lower efficiency? GV104 does have up to 40% more CUDA cores, so some TDP increase is to be expected, but "Volta" is still more energy efficient.
stimpy88If this is true, and it's running hot, then it looks like nVidia is struggling to innovate, and is instead relying on overclocking the chip to get performance...

nVidia has had a long time with no pressure on them to make this "new" GPU, so this is rather telling, if true.
Then I would recommend you get educated on the Volta architecture.

Some notable quotes:
- Similar to Pascal GP100, the GV100 SM incorporates 64 FP32 cores and 32 FP64 cores per SM. However, the GV100 SM uses a new partitioning method to improve SM utilization and overall performance.
- Integration within the shared memory block ensures the Volta GV100 L1 cache has much lower latency and higher bandwidth than the L1 caches in past NVIDIA GPUs.
- Unlike Pascal GPUs, which could not execute FP32 and INT32 instructions simultaneously, the Volta GV100 SM includes separate FP32 and INT32 cores, allowing simultaneous execution of FP32 and INT32 operations at full throughput, while also increasing instruction issue throughput.

Learn the basics before you start complaining about lack of innovation.
uuuaaaaaaThis is allegedly leaked info by an nVidia employee:
<snip>
Definitely fake. This is just the usual rambling from AdoredTV, the Youtuber behind "AMD's master plan", and other ridiculous claims.
FordGT90ConceptExactly. Probably 12nm which means Turing chips are big, hot, and hungry.
It should come as no surprise that an architecture launched towards the end of a node lifecycle should be larger and push the node further. The same will also happen with 7nm; first relatively modest GPUs, then gradually pushing the node.

"Volta" is still more energy efficient than Pascal, and miles ahead of the competition.
Posted on Reply