Thursday, January 31st 2019

Mobile NVIDIA GeForce RTX GPUs Will Vary Wildly in Performance, Clocks Lowered Substantially

NVIDIA is in the process of rolling out the first implementations of its RTX 2000 series GPUs in mobile form, and if reports are accurate, users will have a rough time extrapolating performance from product to product. This is because manufacturers are apparently getting a great deal of leeway in how they clock their products, according to each solution's thermal characteristics and design philosophy.

What this means is that NVIDIA's RTX 2080 Max-Q, for example, can be clocked as low as 735 MHz, a more than 50% downclock from its desktop counterpart (1,515 MHz). The non-Max-Q implementation of NVIDIA's RTX 2080, for now, seems to be clocked at around 1,380 MHz, which is still a 135 MHz downclock. Of course, these lowered clocks are absolutely normal - and necessary - for these products, particularly on a huge chip such as the one powering the RTX 2080. The problem arises when manufacturers don't disclose the GPU's clockspeeds in their particular implementation - a user might buy, say, an MSI laptop and an ASUS one with the exact same apparent configuration, but with GPUs operating at very different clockspeeds and delivering very different levels of performance. Users should do their research when choosing which mobile solution sporting one of these NVIDIA GPUs to buy.
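As a quick sanity check on those figures, here is a minimal Python sketch; the clock numbers are the ones quoted above, and the helper function is purely illustrative:

```python
# Back-of-the-envelope check of the clock deltas quoted in the article
# (base clocks in MHz).
desktop_2080 = 1515          # desktop RTX 2080 base clock
max_q_2080_floor = 735       # lowest reported RTX 2080 Max-Q clock
mobile_2080 = 1380           # reported non-Max-Q mobile RTX 2080 clock

def downclock(mobile_mhz, desktop_mhz):
    """Return the downclock as (absolute MHz delta, percent of desktop)."""
    delta = desktop_mhz - mobile_mhz
    return delta, 100 * delta / desktop_mhz

print(downclock(max_q_2080_floor, desktop_2080))  # (780, ~51.5%): "more than 50%"
print(downclock(mobile_2080, desktop_2080))       # (135, ~8.9%)
```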
Sources: TechSpot, Tweakers.net

100 Comments on Mobile NVIDIA GeForce RTX GPUs Will Vary Wildly in Performance, Clocks Lowered Substantially

#27
ShurikN
EarthDogHere's a benchmark I just ran into with the 2080 Max-Q... www.notebookcheck.net/We-benchmark-the-mobile-RTX-2060-2070-and-2080-Max-Q-and-compare-them-to-the-desktop-RTX-2080-and-GTX-1080.402036.0.html

"In short, the mobile RTX 2080 and RTX 2080 Max-Q are about 15 percent and 30 percent slower than the desktop RTX 2080 according to 3DMark benchmarks. The deltas are somewhat similar to - if not just slightly wider than - the deltas between mobile GTX 1080 and GTX 1080 Max-Q to the desktop GTX 1080 by about 5 percentage points each".

So, the gap is bigger by a small amount according to this little write-up... certainly not the end of the world, as some would like to convey... ;)

It is quite typical for these to drop a card level down, which tends to put them in bed with the same tier from last gen.
Yeah, but look at it from this perspective.
The 2070 replaces the 1080 and either matches it or beats it. And that's true for desktop and laptop. No arguing there.
But the 2070 max-q loses to the 1080 max-q in all the games tested on that site. Not to mention the fact that the 1080m-q laptop had a weaker 7700HQ against an 8750H powering the 2070m-q.
Posted on Reply
#28
R0H1T
ShurikNYeah, but look at it from this perspective.
The 2070 replaces the 1080 and either matches it or beats it. And that's true for desktop and laptop. No arguing there.
But the 2070 max-q loses to the 1080 max-q in all the games tested on that site. Not to mention the fact that the 1080m-q laptop had a weaker 7700HQ against an 8750H powering the 2070m-q.
Well probably there's your answer, a hex core in a laptop is not cool - literally speaking.
Posted on Reply
#29
ShurikN
R0H1TWell probably there's your answer, a hex core in a laptop is not cool - literally speaking.
Both are 45 W TDP parts, that's not the issue. What could be the issue is that it's two different laptops, with two different cooling systems.
But while it's not apples to apples, it does show A picture.
Posted on Reply
#32
R0H1T
ShurikNBoth are 45 W TDP parts, that's not the issue, so the only issue it could be is that it's two different laptops, with two different cooling systems.
That's true, but both chips will have different PL1 & PL2 limits, not to mention the 8750H will downclock more frequently. Unfortunately there's no objective way to measure the two & gauge which is superior with respect to cooling.
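(For context on what those limits do: PL2 is the short-burst package power cap, PL1 the sustained one. A toy Python sketch of the behavior follows; the wattages and burst window are illustrative assumptions, not either laptop's actual limits, and real firmware enforces PL2 through a moving-average power budget rather than a hard time cutoff.)

```python
# Toy model of Intel's PL1/PL2 limits: the CPU may draw up to PL2 for a
# short burst window (tau seconds), then must settle down to PL1 sustained.
# All numbers are illustrative assumptions, not real laptop limits.
def allowed_package_power(t, pl1=45.0, pl2=78.0, tau=28.0):
    """Allowed package power (W) t seconds into a sustained full load."""
    return pl2 if t < tau else pl1

for t in (0, 10, 30, 60):
    print(f"t={t:>2}s -> {allowed_package_power(t):.0f} W")
```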
ShurikNBut while it's not apples to apples, it does show A picture.
Yes & the picture is distorted IMO. We need more data to ascertain how good, or bad, the RTX mGPUs are.
Posted on Reply
#33
ShurikN
R0H1TYes & the picture is distorted IMO. We need more data to ascertain how good, or bad, the RTX mGPUs are.
I'm not bringing into question regular RTX mobile parts, but rather max-q versions, which were butchered much more than Pascal max-q. That's my biggest issue with this.
Posted on Reply
#35
londiste
ShurikNYeah, but look at it from this perspective.
The 2070 replaces the 1080 and either matches it or beats it. And that's true for desktop and laptop. No arguing there.
But the 2070 max-q loses to the 1080 max-q in all the games tested on that site. Not to mention the fact that the 1080m-q laptop had a weaker 7700HQ against an 8750H powering the 2070m-q.
It seems the 1080 Max-Q is a 90 W TDP part while the 2070 Max-Q is an 80 W part.
Posted on Reply
#36
EarthDog
ShurikNYeah, but look at it from this perspective.
The 2070 replaces the 1080 and either matches it or beats it. And that's true for desktop and laptop. No arguing there.
But the 2070 max-q loses to the 1080 max-q in all the games tested on that site. Not to mention the fact that the 1080m-q laptop had a weaker 7700HQ against an 8750H powering the 2070m-q.
There is an argument there. It is quite common that laptop Max-Q cards drop a whole card level. This has happened for generations and is nothing new.

You state that the 2070 Max-Q loses to the 1080 Max-Q, but in this image, I see it beating it out except for whatever that overall performance score is... please note that these are all GRAPHICS scores in the benchmark, not overall, so the CPU being one generation behind or short a couple of cores/threads does not play a major role here.



Obviously synthetics don't give the most encompassing results, but we can see here it beats out the last gen by several %. Where did you see game benchmarks here? That site, to me, isn't easy to navigate.
bugYes, but that's what I meant. We don't know how much DXR eats into the TDP. Without DXR those cards could boost significantly higher than with it turned on. I hope I worded it better now.
I'd say not. I have no idea what you are trying to say. :) If you need more out of the card, don't enable RT and its significant FPS drop. It has little (nothing?) to do with TDP, but everything to do with the performance impact of RT.
Posted on Reply
#38
Vya Domus
RecusStop trying to be a smartass. If you are talking about relative performance, it shows the 2080 mobile is 1% faster than the 1080 Ti.
Forget it, you can't do it after all.
Posted on Reply
#39
Recus
Vya DomusForget it, you can't do it after all.
Vya DomusThe 2080 Max-Q can be slower than even a desktop 1080 and sometimes barely any faster than a 1080 Max-Q despite the major spec bump. This is deception on a whole new level.


Next lie?
Posted on Reply
#40
GlacierNine
ShurikNI'm not bringing into question regular RTX mobile parts, but rather max-q versions, which were butchered much more than Pascal max-q. That's my biggest issue with this.
This really isn't complicated.

RTX GPUs on desktop consume more power and run hotter than GTX 10-series GPUs did. That is an established fact, even when not using RTX/DXR.

GTX 10-Series GPUs in laptops and mobile devices already had to be downclocked to avoid thermal and power consumption issues.

If the RTX chips are hotter and more power-hungry, then the downclocking will have to be more aggressive, assuming cooling stays the same; otherwise they will simply burn.

If the downclocking is more aggressive, then the performance gap between laptops and desktops will widen.

End of story. Done. Finished. That's that. The laws of physics dictate no less.
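(To put rough numbers on that chain of reasoning: dynamic power scales roughly with f·V², and voltage tracks frequency across the DVFS range, so power grows roughly with the cube of the clock. A minimal Python sketch under that first-order model, ignoring leakage and fixed memory power; the 215 W and 80 W figures are the TDPs discussed in this thread:)

```python
# First-order estimate: P ~ f^3 (dynamic power ~ f * V^2, and V tracks f).
# Ignores leakage and memory power, so treat the result as a rough ceiling.
def clock_under_power_cap(f_desktop_mhz, p_desktop_w, p_cap_w):
    """Highest clock that fits under p_cap_w, assuming P ~ f^3."""
    return f_desktop_mhz * (p_cap_w / p_desktop_w) ** (1 / 3)

# Squeezing a 215 W desktop RTX 2080 (1,515 MHz base) into an 80 W envelope:
print(round(clock_under_power_cap(1515, 215, 80)))  # ~1090 MHz
```

Even this crude cube-law estimate lands well above the 735 MHz floor quoted in the article, which suggests leakage, memory power and binning push real Max-Q clocks down further still.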
Posted on Reply
#41
EarthDog
ShurikNwww.notebookcheck.net/Nvidia-GeForce-RTX-2060-RTX-2070-RTX-2080-Laptop-GPUs-Performance-Review.401266.0.html
TY. Are we reading the same thing?

BF V (1080p/QHD/UHD) = 7/11/14% lead over 1080 MaxQ in that (ONE) gaming title. The CPU difference isn't much here considering the card gains more distance the higher the resolution goes.
GlacierNineGTX 10-Series GPUs in laptops and mobile devices already had to be downclocked to avoid thermal and power consumption issues.
So do all laptop GPUs in most cases... in particular all Max-Qs do...
Posted on Reply
#42
Vya Domus
RecusNext lie?
Sure, how about this one: you can read and understand what is being said.

The 2080 Max-Q can be slower than even a desktop 1080 and sometimes barely any faster than a 1080 Max-Q despite the major spec bump.
Posted on Reply
#43
ShurikN
EarthDogTY. Are we reading the same thing?

BF V (1080p/QHD/UHD) = 7/11/14% lead over 1080 MaxQ in that (ONE) gaming title. The CPU difference isn't much here considering the card gains more distance the higher the resolution goes.
You understood me wrong, I was comparing the 1080m-q against the 2070m-q, and it's 22/8/4% faster across the three resolutions. You were reading the regular 2070 graphs.
Posted on Reply
#44
londiste
GlacierNineRTX GPUs on desktop consume more power and run hotter than GTX 10-series GPUs did. That is an established fact, even when not using RTX/DXR.
RTX 2080 is 215W vs GTX 1080Ti 250W
RTX 2070 175W vs GTX 1080 180W
RTX 2060 160W vs GTX 1070Ti 180W
Vya DomusThe 2080 Max-Q can be slower than even a desktop 1080 and sometimes barely any faster than a 1080 Max-Q despite the major spec bump. This is deception on a whole new level.
Assuming MaxQ means minimum possible spec (which it mostly does), RTX 2080 MaxQ is 80W, GTX 1080 MaxQ is 90W and Desktop GTX 1080 is 180W.
Power is the main limitation in mobile.
From the Shadow of Mordor graph, 9.5% better at a 12% power-limit deficit is not a bad result.
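(Spelled out as perf-per-watt, a quick Python check using only the numbers quoted in this post:)

```python
# ~9.5% faster at a lower power limit (80 W vs 90 W, per this thread).
perf_ratio = 1.095      # RTX 2080 Max-Q vs GTX 1080 Max-Q, Shadow of Mordor
power_ratio = 80 / 90   # ~0.89
print(f"perf/W gain: {perf_ratio / power_ratio:.2f}x")  # ~1.23x
```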
Posted on Reply
#45
moproblems99
Vya DomusNo matter how power-efficient the architecture and node are, you simply can't get these monstrously large chips inside a laptop without major compromises. I see Nvidia is still at it, trying to use the same name and imply the same level of performance when in reality that is far from the truth. The 2080 Max-Q can be slower than even a desktop 1080 and sometimes barely any faster than a 1080 Max-Q despite the major spec bump. This is deception on a whole new level.
While I don't necessarily disagree with you, people really need to start using their gray matter. A logical person knows damn well you can't squeeze a 2080 into a laptop tit for tat. I think it is high time stupid people start paying the consequences for being stupid. Darwinism needs to make a comeback.

On the other hand, this is a slippery slope for NV. Get too many stupid people buying these expecting desktop performance and getting, well, laptop performance, and they'll get a little unwanted negative publicity and negative performance reviews on forums.
Posted on Reply
#46
ShurikN
Is that a Vega 56 inside a gaming laptop? I didn't even know those existed...
And they also downclocked the hell out of it; it's slower than a 1070
By a lot :laugh:
Posted on Reply
#47
EarthDog
ShurikNYou understood me wrong, I was comparing the 1080m-q against the 2070m-q, and it's 22/8/4% faster across the three resolutions. You were reading the regular 2070 graphs.
Ahhh, I did. I get you now. :)

Still, this is expected... I don't understand why (well, I do understand why...) this is a 'whole nother level' when it's normal, within a few %, to see this behavior... I agree that more clarity can come to this situation...


...however I do not agree with the toxicity levels and with people replying instead of reporting posts (not you, shur)... it's the same people creating the same toxic environment here. I need to find my ignore button and stop worrying about the sinking ship since nobody else appears to.
londisteRTX 2080 is 215W vs GTX 1080Ti 250W
RTX 2070 175W vs GTX 1080 180W
RTX 2060 160W vs GTX 1070Ti 180W
That is through performance, not generational namesake... I'm not sure why some users are thinking this way. The next-gen Honda Accord is still an Accord. Just because it's as fast as an Acura TLX doesn't suddenly turn a Honda into an Acura... bad analogy aside, let's think about that a little. :)
Posted on Reply
#48
londiste
EarthDogThat is through performance, not generational namesake...
Does not matter. We can do the generational namesakes, but then we'd have to account for performance anyway. Turing efficiency remains better than Pascal's, which was the point. Not by very much, but the difference is well measurable and noticeable.
RTX 2080 at 215W vs GTX 1080 at 180W - 20% more power, ~40% more performance (from here)
RTX 2070 at 175W vs GTX 1070 at 150W - 17% more power, ~42% more performance
RTX 2060 at 160W vs GTX 1060 at 120W - 33% more power, ~60% more performance
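(The same figures worked out as perf-per-watt, a quick Python check; the power and performance ratios are the ones quoted above:)

```python
# Perf-per-watt implied by the (power ratio, performance ratio) pairs above.
pairs = {
    "RTX 2080 vs GTX 1080": (215 / 180, 1.40),
    "RTX 2070 vs GTX 1070": (175 / 150, 1.42),
    "RTX 2060 vs GTX 1060": (160 / 120, 1.60),
}
for name, (power, perf) in pairs.items():
    print(f"{name}: {perf / power:.2f}x perf/W")
# -> ~1.17x, ~1.22x, ~1.20x: measurable, if not dramatic
```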
Posted on Reply
#49
ShurikN
londisteDoes not matter. We can do the generational namesakes, but then we'd have to account for performance anyway. Turing efficiency remains better than Pascal's, which was the point. Not by very much, but the difference is well measurable and noticeable.
RTX 2080 at 215W vs GTX 1080 at 180W - 20% more power, ~40% more performance (from here)
RTX 2070 at 175W vs GTX 1070 at 150W - 17% more power, ~42% more performance
RTX 2060 at 160W vs GTX 1060 at 120W - 33% more power, ~60% more performance
Yeah, but you forgot to mention die sizes. Of course it will have more performance when the chips have more cores. Also, it's on 12 nm. And considering that max clock speeds haven't increased, some power reduction was to be expected simply from the smaller process.
Posted on Reply
#50
Recus
Vya DomusSure, how about this one : You can read and understand what is being said.

Wins in some games, loses in some games. Nothing extremely bad for a heavily downclocked 2080 Max-Q, as you claimed.
Posted on Reply