
NVIDIA GTX 1080-successor a Rather Hot Chip, Reference Cooler Has Dual-Fans

It's about time that they use a decent stock cooler. Both Pascal and Vega cards boost way into throttle territory, and this card will probably (unfortunately) push this even further.

The GeForce GTX 1080 set high standards for efficiency. Launched as a high-end product that was faster than any other client-segment graphics card at the time, the GTX 1080 made do with just a single 8-pin PCIe power connector, and had a TDP of just 180W. The reference-design PCB, accordingly, has a rather simple VRM setup. The alleged GTX 1080-successor, called either GTX 1180 or GTX 2080 depending on who you ask, could deviate from this philosophy of extreme efficiency. The first bare PCB shots already showed telltale signs of that departure.
I guess you mean higher TDP rather than lower efficiency? GV104 does have up to 40% more CUDA cores, so some TDP increase is to be expected, but "Volta" is still more energy efficient.

If this is true, and it's running hot, then it looks like nVidia is struggling to innovate, and is instead relying on overclocking the chip to get performance...

nVidia has had a long time with no pressure on them to make this "new" GPU, so this is rather telling, if true.
Then I would recommend you get educated on the Volta architecture.

Some notable quotes:
- Similar to Pascal GP100, the GV100 SM incorporates 64 FP32 cores and 32 FP64 cores per SM. However, the GV100 SM uses a new partitioning method to improve SM utilization and overall performance.
- Integration within the shared memory block ensures the Volta GV100 L1 cache has much lower latency and higher bandwidth than the L1 caches in past NVIDIA GPUs.
- Unlike Pascal GPUs, which could not execute FP32 and INT32 instructions simultaneously, the Volta GV100 SM includes separate FP32 and INT32 cores, allowing simultaneous execution of FP32 and INT32 operations at full throughput, while also increasing instruction issue throughput.

Learn the basics before you start complaining about lack of innovation.
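
To make that third point concrete, here is a minimal CUDA sketch (my own illustration, not from the whitepaper; the kernel name and constants are made up). The INT32 index arithmetic and the FP32 FMAs in the loop are independent of each other, which is exactly the mix Volta's separate INT32 and FP32 pipes can issue concurrently, while Pascal has to interleave them on the same cores:

// Hypothetical kernel, for illustration only.
__global__ void mixed_int_fp32(const float* __restrict__ in,
                               float* __restrict__ out,
                               int n, int stride)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float acc = 0.0f;
    for (int k = 0; k < 8; ++k) {
        int idx = (i + k * stride) % n;  // INT32 pipe: index arithmetic
        acc = fmaf(in[idx], 0.5f, acc);  // FP32 pipe: fused multiply-add
    }
    out[i] = acc;
}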

This is allegedly leaked info by an nVidia employee:
<snip>
Definitely fake. This is just the usual rambling from AdoredTV, the Youtuber behind "AMD's master plan", and other ridiculous claims.

Exactly. Probably 12nm, which means Turing chips are big, hot, and hungry.
It should come as no surprise that an architecture launched towards the end of a node's lifecycle is larger and pushes the node further. The same will also happen with 7nm: first relatively modest GPUs, then gradually pushing the node.

"Volta" is still more energy efficient than Pascal, and miles ahead of the competition.
 
So many people "freaking out" because of a rumor based on a rumor. :laugh:
 
Expecting the same level of innovation from companies with vastly different market shares is... I'll be kind and say non-rational. So when we are talking about NVIDIA and Intel, yes, it is expected for them to be on the bleeding edge of tech innovation regardless of what the competition is doing, and not just sit on their asses and wait until AMD pokes them. BUT that's not how the market works, so please cut the crap about them doing something they are not. We all saw Intel's "innovation" with the quad-core 7700K, and we are seeing NVIDIA's "innovation" with the two-year-old 10xx series. But... "it's AMD's fault for not being competitive enough"... lel :kookoo: It really says a lot about the primitiveness of human civilization when competition is the main reason for innovation.
 
Remember that Titan V gets a lot of its power efficiency from HBM2, which uses about a third of the power GDDR5 needs for the same performance, and it doesn't come with a significant increase in core count. So the HBM2 efficiency savings cover the extra compute cores.

...Which means that Titan V and Titan Xp perf/watt are roughly the same.

Umm no, 21.1 billion transistors vs 12 billion (I'm pretty sure GP102 has 12). That's almost a 2x increase, not to mention that the Titan V has FP16, FP64 and tensor cores, which are not featured in GP102 (they are there, but at something like a 1/32 FP64-to-FP32 rate, whereas the Titan V has a 1/2 DP-to-SP ratio).
 
In that adoredtv leak,
[Image: NVIDIA registers the "Turing", "GeForce RTX" and "Quadro RTX" trademarks]

I think he didn't really pay attention to that 23/4 number. The 23 could be the SM count (23 × 128 = 2944 CUDA cores), but that "4" on the RTX 2080 could be tensor units: 4 × 64/128 = 256/512 tensor cores on the RTX 2080?
 
They could make anything hot if they wanted to. Why not a souped-up GTX 1080 with two PCI-E 8-pins? This is just a standard MOAR POWER approach. For a more efficient card, wait for the GTX 2070 or lower.

What's the point of tensor cores on a consumer graphics card? I thought those were for AI. What do they do for gaming? Maybe the miners will take advantage of them somehow?
 
What's the point of tensor cores on a consumer graphics card? I thought those were for AI. What do they do for gaming? Maybe the miners will take advantage of them somehow?
They can accelerate ray tracing.
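For anyone wondering what that means: a tensor core computes D = A*B + C on small FP16 matrix tiles, and the neural-network denoisers NVIDIA pairs with real-time ray tracing boil down to exactly that kind of matrix math. Here's a rough sketch, assuming the warp-level wmma API that CUDA 9 exposes on Volta (kernel name made up; a real denoiser obviously involves far more than one tile):

#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

// Illustrative only: one warp asks the tensor cores for D = A*B + C on a
// 16x16x16 tile, with FP16 inputs and FP32 accumulation.
__global__ void tensor_tile_mma(const half* A, const half* B, float* D)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

    wmma::fill_fragment(acc, 0.0f);    // start the accumulator at C = 0
    wmma::load_matrix_sync(a, A, 16);  // load a 16x16 FP16 tile of A
    wmma::load_matrix_sync(b, B, 16);  // load a 16x16 FP16 tile of B
    wmma::mma_sync(acc, a, b, acc);    // tensor cores: acc = A*B + acc
    wmma::store_matrix_sync(D, acc, 16, wmma::mem_row_major);
}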
 
Why not a souped-up GTX 1080 with two PCI-E 8-pins?
Because that would be 366W worth of 12V power: 150W from each of the two 8-pins plus 66W of 12V from the slot. Pascal maxes out at 250W--even the HBM2 models.
 
Because that would be 366W worth of 12V power: 150W from each of the two 8-pins plus 66W of 12V from the slot. Pascal maxes out at 250W--even the HBM2 models.
It was a theoretical, out-of-my-ass example to illustrate my point. They can always juice it up.

What causes Pascal to "max out" at 250w? Surely they could push that figure further, if they wanted to. Such a card would be absurd, but it could be done, especially if they're just gonna roid up the current architecture without much real innovation. As said already, those who want the fastest will still buy it even if it consumes 300w or more, because it's the fastest. As for me, I think my 1070 will serve me well for a long time. I'll probably upgrade only when something comes along that's faster and also draws less power...
 
They could have released GP102 on day one too. The reason they didn't is lack of competition. NVIDIA has two major reasons for debuting the 20## series:
1) promote ray tracing/tensor cores
2) move to 12nm to decrease production costs
Both secure higher profitability in the future regardless of what AMD does.
 
NVIDIA has two major reasons for debuting the 20## series:
1) promote ray tracing/tensor cores
2) move to 12nm to decrease production costs
Both secure higher profitability in the future regardless of what AMD does.

12nm is not a major advancement, and I have yet to see proper RTX implementations in actual games. Actually, scratch that: I have yet to see RTX being used to do anything remotely useful at all for these types of products.

None of these things secures NVIDIA anything. Whatever advantage 12nm brings will be offset by the increase in die space and clocks, and RTX is nothing more than a promise for an obscure technology which, in spite of all the aggressive coverage and marketing, lacks any prospect of becoming relevant and feasible in the future (in other words, no support from the console side of things). This is one of Nvidia's weakest releases in the last decade, similar to the 500 and 700 series (more so the former, actually).

Nvidia is well within the realm of diminishing returns on their efforts to garner the remaining market share, and they know it; hence the late and lackluster new generation of cards. Contrary to popular belief, that's their number one reason not to put a lot of effort into their products, not the absence of competition. They dealt with that more than a decade ago, when they made it so that whatever their competition does, they still win.
 
12nm is not a major advancement, and I have yet to see proper RTX implementations in actual games. Actually, scratch that: I have yet to see RTX being used to do anything remotely useful at all for these types of products.
You have to look at it from NVIDIA's perspective: pay Epic to implement RTX in Unreal Engine, and Unity to implement it in theirs. Release some cards with RTX support and some without. Years later, games start coming out on these engines with native RTX support, and they've made the average consumer spend 25% more for an RTX-branded card over a GTX-branded one. They make more money per card sold, they have a hardware feature only their cards and software support, and they present barriers to anyone else trying to accomplish the same thing (especially AMD and Intel). It's win-win-win for NVIDIA, and it costs them next to nothing they didn't already pay for (because Volta).

It's PhysX all over again. Soon they'll be dancing around, screaming in people's faces that they need a second GPU so they can ray trace their games in real time.

RTX does have professional uses but NVIDIA isn't going to settle for that.
 
They could have released GP102 on day one too. The reason they didn't is lack of competition.
Untrue. Nvidia struggled for months with GP102: the Titan X (Pascal) was sold out for a long time, and the GTX 1080 Ti was delayed from Q4 2016 to 2017-03-05.

move to 12nm to decrease production costs
TSMC "12nm" isn't a node shrink over 16nm. At best it provides marginally better yields.
 
Untrue. Nvidia struggled for months with GP102: the Titan X (Pascal) was sold out for a long time, and the GTX 1080 Ti was delayed from Q4 2016 to 2017-03-05.
NVIDIA was waiting, and waiting, and waiting for Vega. By the time the Titan X sold out, they likely already had the GTX 1080 Ti and Titan Xp ready to push out the door, but didn't because Vega was still coming. The GTX 1080 Ti came out after a leaked Vega benchmark showed it performing between the GTX 1070 and GTX 1080. NVIDIA preempted the debut of the Frontier Edition by two months, and quickly responded to Vega's launch with the 1070 Ti six months later.


TSMC "12nm" isn't a node shrink over 16nm. At best it provides marginally better yields.
Which means more profitability.


NVIDIA is launching GTX 20xx knowing AMD doesn't have a response, hence creating their own market pressure to get people to upgrade via RTX.
 
Which means more profitability.

They didn't do it for profitability, though; it was a node developed to allow them to make the biggest dies with the highest transistor density possible. They even admitted it was insanely expensive. We are talking about a fabless semiconductor company co-developing a node with TSMC; it was anything but profitable, but it got them what they wanted.
 
Expecting the same level of innovation from companies with vastly different market shares is... I'll be kind and say non-rational.
<snip>

Actually, that's as rational as it gets. If you can't compete, gtfo, you're in the wrong business.
If companies performed proportionally to their market share, the margin between them would only widen, to the point that keeping the little guy alive would require government intervention. But I guess it's hard to think straight when you have one point to make. And one point only.
 
Why does the wording in the OP sound like it was written by an nVidia employee in the marketing department?

Elaborate on "hope for AMD", I might have missed the problems they had recently, looking at their stock.

I've been ignoring AMD's stock because, to my nose, there's a bitcoin-like "pump and dump" odor to the push articles.

1. While Ryzen is an engineering marvel in the sense that it delivers well in both the workstation and consumer/gaming market niches, it isn't topping the charts in either. So while it's an excellent choice for the gamer who spends 30% of his/her time video editing, outside that it's hard to justify on a numbers basis. For the gamer, all the extra cores are not delivering fps... and for the video editor, the $7800 1950x leads the $450 7820x by less than 1 percentage point. It's kinda like a hamburger chain having the best gluten-free veggie burger sandwich... a great accomplishment, but not going to have a big impact on the bottom line.

2. While AMD got a nice bump from recent releases, their market share has decreased 1.16% in the last two months and 1.05% in the last month alone. And with Intel having some significant new releases in Q3 and Q4, that trend can only continue.

3. While, year over year, AMD has shown a nice bump, nVidia's year-over-year growth is still larger.

4. AMD doesn't have a competitive offering from the 1060 on up.

5. nVidia has expanded their dominance into an additional market tier with each successive generation, for 3 generations now.

6. nVidia has 5.5 times the market share of AMD in the discrete GFX card arena, and Intel has 5.6 times the market share in the CPU arena.

7. For every 5 non-nVidia GPUs detected by Steam servers, 3 are AMD and 2 are Intel. AMD's most popular card ranks 30th in market share.

8. A significant % of AMD stock buys (25% the last time I saw figures published) are still being sold short within 24 hours.

9. In the US, tariffs... and more tariffs. This will cut across all vendors, but a 25% tariff on imports (so far) will certainly mean fewer new builds. And of those built, budget considerations will push down sales of the high-margin, top-end products.
 
AMD doesn't have a competitive offering from the 1060 on up.
<snip>
For every 5 non-nVidia GPUs detected by Steam servers, 3 are AMD and 2 are Intel. AMD's most popular card ranks 30th in market share.
Unfortunately, they don't have anything mid-to-upper range lined up for the next three years either…
Their next candidate Navi will be targeting consoles, and all their GPU focus is on SoCs these days. Their market participation on the desktop is practically non-existent, and the best we can hope for is Navi to reach up into the mid-range. Most of AMD's GPU sales for the desktop are low-end OEM parts.

These SoC deals might seem like easy money for AMD now, but they do take focus away from making competitive products in the gaming market. And if they fall further behind, they won't even land such deals.
 
The GPU side of AMD really needs some kind of major innovation right now. I honestly don't mind refining a good existing architecture as long as it's done right, but what they have now is just not keeping up. I don't think they can squeeze much more out of it. Even Vega was a flop.

Pascal, on the other hand, is a good example of refinement done right. It's basically a re-tuned Maxwell (or so I've read, I'm no hardware engineer). Whatever monstrosity they're coming up with now doesn't seem to go that way. It seems they're just taking the same architecture and attaching the afterburners... which I think is fine within the same generation anyway (e.g. a fictional GTX 1090 or something), but making a whole new product line that way is just silly.

I was also okay with Intel basically refining the same architecture since Sandy Bridge. Each time, it got a little faster and more efficient. There were a lot of things they did that I didn't like, but I was okay with the architecture itself. 6-core chips could have come to the mainstream platform a little sooner (Skylake would have been good), but again, there's that lack of innovation... only Intel did it because they could. AMD's GPU team has some serious issues right now.
 
@John Naylor It's true Ryzen doesn't dominate the desktop landscape as much as reviews would have you believe. But at the same time, AMD did something brilliant and put all those cores to good use in Epyc. The server market is where the big margins are; it's just that reviews never peek into that segment.

@hat AMD was in such a mess, they couldn't possibly compete on several fronts at once. Let's just hope they use the cash Zen rakes in to boost their GPU division as well.
 
Are you for real?
Are you?
Vega is more power hungry, more expensive and harder to come by than equivalent Nvidia cards. It does offer roughly the same performance, but we're not defining "competitive" going by HP alone, are we?
 
Are you?
Vega is more power hungry, more expensive and harder to come by than equivalent Nvidia cards. It does offer roughly the same performance, but we're not defining "competitive" going by HP alone, are we?
You wrote "from 1060 and up". AMD has RX 580 to compete with 1060. You should have written "from 1070 and up". The guy's totally right.
 