Thursday, January 31st 2019

Mobile NVIDIA GeForce RTX GPUs Will Vary Wildly in Performance, Clocks Lowered Substantially

NVIDIA is in the process of rolling out the first implementations of its RTX 2000 series GPUs in mobile form, and if reports are accurate, it's going to be difficult for users to extrapolate performance from product to product. This is because manufacturers are apparently being given a great deal of leeway in how they clock their products, according to their solution's thermal characteristics and design philosophy.

What this means is that NVIDIA's RTX 2080 Max-Q, for example, can be clocked as low as 735 MHz, a more than 50% downclock from its desktop counterpart's base clock (1,515 MHz). The non-Max-Q implementation of NVIDIA's RTX 2080, for now, seems to be clocked at around 1,380 MHz, which is still a roughly 135 MHz downclock. Of course, these lowered clocks are absolutely normal - and necessary - for these products, particularly on a huge chip such as the one powering the RTX 2080. The problem arises when manufacturers don't disclose the GPU clockspeeds of their particular implementation - a user might buy, say, an MSI laptop and an ASUS one with the exact same apparent configuration, but with GPUs operating at very different clockspeeds and very different levels of performance. Users should do their due diligence when choosing which mobile solution sporting one of these NVIDIA GPUs to buy.
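For a rough sense of the spread, here is a quick Python sketch using only the figures quoted above (illustrative only; it compares against the desktop base clock rather than boost clocks, and actual sustained clocks will depend on the individual laptop):

```python
# Rough comparison of the clock figures quoted above, relative to the
# desktop RTX 2080 base clock of 1,515 MHz (boost clocks are not considered).
DESKTOP_BASE_MHZ = 1515

mobile_clocks_mhz = {
    "RTX 2080 Max-Q (lowest reported)": 735,
    "RTX 2080 mobile (non-Max-Q)": 1380,
}

for name, clock in mobile_clocks_mhz.items():
    deficit = DESKTOP_BASE_MHZ - clock
    print(f"{name}: {clock} MHz, {deficit} MHz "
          f"({deficit / DESKTOP_BASE_MHZ:.0%}) below the desktop base clock")
```

Under those figures the Max-Q floor sits roughly 51% below the desktop base clock, while the non-Max-Q mobile part sits roughly 9% below it.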
Sources: TechSpot, Tweakers.net

100 Comments on Mobile NVIDIA GeForce RTX GPUs Will Vary Wildly in Performance, Clocks Lowered Substantially

#51
Vya Domus
moproblems99While I don't necessarily disagree with you, people really need to start using their gray matter. A logical person knows damn well you can't squeeze a 2080 into a laptop tit for tat. I think it is high time stupid people start paying the consequences for being stupid. Darwinism needs to make a comeback.

On the other hand, this is a slippery slope for NV. Get too many stupid people buying these expecting desktop performance and getting, well, laptop performance, then they'll get a little unwanted negative publicity and negative performance reviews on forums.
Nvidia is smart.

All this began with the 980, which they made into a fully enabled chip that could be used in laptops in addition to the cut-down 980M released previously. With the 10 series they did things in a different order: first the fully fledged parts were made available and then the Max-Q ones, which are functionally nothing more than M versions.

Now they are releasing them concurrently; the way they keep changing all this isn't incidental. I for one didn't even know there were distinct Mobile and Max-Q versions already out, and if you look on their site there is nothing that explicitly explains the difference between the two. Not only that, but we now find out that even among those you may get wildly variable performance.

They intentionally smeared the line between them so much that even adept consumers would be hard pressed to know for sure what exactly they are going to get.
#52
EarthDog
Max-Q defines the size of the card and it is different from their mobile counterparts, correct. This is nothing new. Max-Q designs are typically found in less expensive, but typically more power-efficient, thin models. It was expected upon release of them last gen and nothing has changed here. They are a different tier compared to their more robust mobile and especially desktop counterparts. This should not be a surprise. :)

This, from NVIDIA, explicitly explains what Max-Q is: www.nvidia.com/en-us/geforce/gaming-laptops/max-q/
or this: blogs.nvidia.com/blog/2018/12/14/what-is-max-q/

Why we are comparing last gen desktop cards with this gen Max-Q is beyond me...

Anyway, enjoy gents. :)
#53
Vya Domus
EarthDogMax-Q designs are typically found in less expensive, but typically more power efficient
You sure about that? Can you find a comparable 2080 Max-Q laptop cheaper than a regular one?

This is why this is exceptionally problematic, they sell you something that's more expensive without explicitly saying what sort of performance you are going to get. It will be a surprise, that's the point.
#54
jabbadap
Less expensive? The thinner the notebook is, the more expensive it is.
Vya DomusYou sure about that? Can you find a 2080 Max-Q laptop cheaper than a regular one?

This is why this is exceptionally problematic, they sell you something that's more expensive without explicitly saying what sort of performance you are going to get. It will be a surprise, that's the point.
If it is clearly marketed as Max-Q, as most of the Pascal thin laptops were, I don't see that much of a problem. Yes, they are two different products, but I don't really see why they would need a wholly different naming scheme (heck, the config is the same, just clocks are lowered to achieve a lower TDP). It goes back to the good old days: GT, Ti vs. Ultra, Pro vs. XT, etc., and I don't remember anyone complaining much about those.
#55
moproblems99
EarthDogMax-Q defines the size of the card and it is different from their mobile counterparts, correct. This is nothing new. Max-Q designs are typically found in less expensive, but typically more power-efficient, thin models. It was expected upon release of them last gen and nothing has changed here. They are a different tier compared to their more robust counterparts. This should not be a surprise. :)

This, from NVIDIA, explicitly explains what Max-Q is: www.nvidia.com/en-us/geforce/gaming-laptops/max-q/
or this: blogs.nvidia.com/blog/2018/12/14/what-is-max-q/

Why we are comparing last gen desktop cards with this gen Max-Q is beyond me...

Anyway, enjoy gents. :)
I guess my expectation is to have a crappy gaming experience on a laptop, and then when I get surprised, it's even better. If I am on a laptop, that means I am travelling and probably in a place I have never been before, so I'll go explore life instead of sitting in front of a screen in a hotel.
#57
EarthDog
Apologies... for thinner laptops. The Max-Q design is also physical, as well as the movie magic that makes them work. But they try to save power for these thinner devices and mitigate heat. Again, I can see where an average consumer would have issues distinguishing things, and the clearer the better, but honestly, a bit of searching (or asking questions) goes a long way. Even my mom called to ask me what the difference was between a Max-Q card and the mobile version... it's obvious enough to ask a question (and my mom is soooo not tech savvy, LOL). I certainly wouldn't use some of the more severe terms that were tossed about in this thread to describe them, but again... par for the course.
#58
Vayra86
EarthDogApologies... for thinner laptops. The Max-Q design is also physical, as well as the movie magic that makes them work. But they try to save power for these thinner devices and mitigate heat. Again, I can see where an average consumer would have issues distinguishing things, and the clearer the better, but honestly, a bit of searching (or asking questions) goes a long way. Even my mom called to ask me what the difference was between a Max-Q card and the mobile version... it's obvious enough to ask a question (and my mom is soooo not tech savvy, LOL). I certainly wouldn't use some of the more severe terms that were tossed about in this thread to describe them, but again... par for the course.
Sorry, but Nvidia's laptop product stacks have always been a mess, complete with cross-generational rebrands and downright misleading product names. You can ask questions all day and still not have a good picture of it. Much the same as Intel's approach. And it's all intentional, let's not fool ourselves here, and it seems to work. So yes, you can ask questions, but most people simply don't have the capacity to grasp it.

I mean, even your very example, and that is considering a fully refreshed product stack with clear naming conventions (Max-Q, 10xx)... but now the clockspeed may vary wildly. So even if you ask 'what is Max-Q?' you don't know a thing about actual performance relative to the rest.
#59
GlacierNine
londisteRTX 2080 is 215W vs GTX 1080Ti 250W
RTX 2070 175W vs GTX 1080 180W
RTX 2060 160W vs GTX 1070Ti 180W
Assuming MaxQ means minimum possible spec (which it mostly does), RTX 2080 MaxQ is 80W, GTX 1080 MaxQ is 90W and Desktop GTX 1080 is 180W.
Power is the main limitation in mobile.
From the Shadow of Mordor graph, 9.5% better at 12% power limit deficit is not a bad result.
First off, you're comparing different levels of card, so your comparison is invalid here. Secondly, the paper specs do not reflect real-world power consumption.

When TPU tested both cards, the 2080Ti Founders consumed 18W more than the 1080Ti Founders while running Furmark, and peak gaming load was 267W for the 1080Ti versus 289W for the 2080Ti.


Secondly, we aren't discussing whether 2080Ti laptops will be faster than 1080Ti ones. We're discussing whether fitting a 2080Ti into a laptop requires more aggressive downclocking than fitting a 1080Ti into a laptop does.

Since the 2080Ti consumes more power, the simple answer is yes, it will require more aggressive downclocking, because while it is very easy to make a GPU cooler larger and better on desktop, the same cannot be said of increasing the size of cooler in a laptop.

The Desktop's thermal envelope can expand, and the laptop's stays the same. That means the performance delta also grows.

#60
Vayra86
GlacierNineFirst off, you're comparing different levels of card, so your comparison is invalid here. Secondly, the paper specs do not reflect real-world power consumption.

When TPU tested both cards, 2080Ti Founders consumed 18W more than the 1080Ti Founders while running Furmark, and the peak gaming load for 1080Ti was 267W versus 289W.


Yes, but the 2080ti FE also has a higher TDP of 260W on the box. Seems pretty consistent to me. Additionally, we're not mentioning the power plan: Nvidia CP offers three power management modes and it's unlikely to be set to 'adaptive' for benchmarking.
#61
GlacierNine
Vayra86Yes, but the 2080ti FE also has a higher TDP of 260W on the box. Seems pretty consistent to me.
Secondly, we aren't discussing whether 2080Ti laptops will be faster than 1080Ti ones. We're discussing whether fitting a 2080Ti into a laptop requires more aggressive downclocking than fitting a 1080Ti into a laptop does.

Since the 2080Ti consumes more power, the simple answer is yes, it will require more aggressive downclocking, because while it is very easy to make a GPU cooler larger and better on desktop, the same cannot be said of increasing the size of cooler in a laptop.

The Desktop's thermal envelope can expand, and the laptop's stays the same. That means the performance delta also grows.
This was the point I was making all along and I've never strayed from it. I'm not debating which is ultimately 5% faster. I'm pointing out that if your 150W laptop GPU is limited to 100W before it thermally throttles, and you then replace it with a 160W GPU, then even if it's faster in the end, you lose more performance relative to the capability of the silicon than you did with the 150W part, even though both are being constrained thermally.
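A back-of-the-envelope sketch of that proportionality argument, using the hypothetical 100W/150W/160W figures above and assuming, unrealistically simply, that sustained performance scales linearly with the available power budget:

```python
# Illustrative only: assume, very crudely, that sustained performance scales
# linearly with the power the chassis can actually dissipate. The 100 W cap
# and the 150 W / 160 W ratings are the hypothetical figures from the post.
CHASSIS_CAP_W = 100

for rated_w in (150, 160):
    usable_fraction = min(CHASSIS_CAP_W, rated_w) / rated_w
    lost_fraction = 1 - usable_fraction
    print(f"{rated_w} W part capped at {CHASSIS_CAP_W} W: "
          f"~{lost_fraction:.0%} of its potential performance is lost")
```

Under that crude assumption, the higher-TDP part leaves a larger share of its potential on the table in the same chassis, which is the proportionality point being made.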
#62
EarthDog
Vayra86Sorry but Nvidia's laptop product stacks have always been a mess, complete with cross-generational rebrands and downright misleading product names. You can ask questions all day and still not have a good picture of it. Much the same as Intel's approach. And its all intentional, let's not fool ourselves here, and it seems to work. So yes, you can ask questions, but most people simply don't have the capacity to grasp it.

I mean even your very example, and that is considering a fully refreshed product stack with clear naming conventions (Max-Q, 10xx)... but now the clockspeed may vary wildly. So even if you ask 'what is Max-Q' you don't know a thing about actual performance relative to the rest.
Clockspeed varies wildly not by SKU, but by environmental conditions, right? That is nothing new either, right?

Ugh, so annoying when I try to read the first post to get this important information and it's nowhere to be found. Soooooooooooooooooooo tired of copy-paste news...
#63
londiste
ShurikNYeah but you forgot to mention die sizes. Of course it will have more performance when the chips have more cores. Also it's on 12nm. And considering that max clock speeds haven't increased, some power reduction was to be expected simply from smaller process.
We were talking about power consumption and efficiency. 12nm is a very minor step from 14/16nm.
Die size will affect things like GPU/card price but this was not what we were discussing right now.
GlacierNineSecondly, the paper specs do not reflect real-world power consumption.
It does and quite precisely so.
#65
jabbadap
EarthDogClockspeed varies wildly not on SKU, but on environmental conditions, right? That is nothing new either, right?
Well yeah, when Pascal laptops came out, thermal compound replacement was almost mandatory to stop them from throttling (those HP Omens). Not only the GPU, but the Intel CPU too.

Well, this one I don't really like: according to Notebookcheck there is a 90W TDP variant of the RTX 2080 too: base 990 MHz, boost 1230 MHz, TDP 90W.
#66
EarthDog
I'm just trying to wrap my head around why some people's heads are popping off over this news. This is how NV laptop GPUs work, peeps. There is little difference.

Mobile/Max-Q Pascal GPU performance varied wildly as well.


News leading the lemmings off the cliff, gents!
#67
GlacierNine
londisteIt does and quite precisely so.
Clearly not, or the real world power consumption wouldn't be higher than advertised when TPU tested the items.

Secondly, read the rest of my post instead of constantly attempting to muddy the waters.

This thread is about how aggressively, compared to desktop parts, the Max-Q versions of RTX parts will have to be downclocked in order to meet the thermal constraints of being in a laptop.

The answer to that is quite simply "More aggressively than 10 series GPUs had to be", because the entire RTX product line consumes more power and therefore generates more heat than the 10 Series did.

The performance comparison bullshit you're trying to discuss was not the original topic of this thread. It was brought up by some idiot on the first page who made an unrelated assertion that everyone else is now jumping on them for. I don't care about that. I care about the thread topic, and I'm discussing the thread topic, not the self-gratifying tangent you're trying to drag me into.
#68
trparky
Combine this with the fact that most OEMs are pushing the whole "thin for the sake of thin" trend, which of course results in notebooks not being able to properly cool themselves, and what did you think was going to happen?
#69
londiste
GlacierNineThe answer to that is quite simply "More aggressively than 10 series GPUs had to be", because the entire RTX product line consumes more power and therefore generates more heat than the 10 Series did.
Given that Turing is more power-efficient than Pascal, is the "more aggressively" part that important?
#70
EarthDog
trparkyCombine the fact that most OEMs are pushing the whole "thin for the sake of thin" which of course results in notebooks not being able to properly cool themselves, what did you think was going to happen?
The same thing that has been happening in this space already.. :)

IMO, there is enough clarity (more is always better!) already for an average user to at least ask what the difference is between the two. The specs CLEARLY show there is a range for both mobile GPUs and Max-Q GPUs, which will vary depending on the laptop they are in. This is not new; this is, in fact, old news. Could more clarity be brought to it... sure...

...but HOW should NVIDIA report this data? Sell themselves short and list the performance off the minimum boost? Or simply report a range of possible clock speeds and note that it will vary depending on the environment, like they have done? Consumers, when making purchases like this, hell, almost ANY major purchase like this, need to put due diligence into it!!! Someone mentioned Darwinism earlier.... and I have to agree... especially considering my mom (the noobliest of the noobs in PCs) even picked it out when looking at specs!
#71
trparky
That of course depends upon whether people do the proper research and don't just listen to the marketing speak and look at all the pretty marketing cards placed next to the systems in the big box stores.
#72
B-Real
bugThis isn't really different from desktops. Better cooling solutions allow for higher TDPs, which in turn allow for higher frequencies. The only thing different is manufacturers get more leeway to play with the TDP.
There is a huge difference... if you check different AIB cooling solutions, generally there are only 1-3 fps differences at the smallest (FHD) resolution. These clock rates, on the other hand, indicate a much bigger performance difference.
#73
Unregistered
The 1080 Max-Q was already rubbish, closer to the 1070 (mobile) than the real thing; it's just stupid. NVIDIA should've done something similar to the 1070 mobile and given the GPU more CUDA cores so the lower clocks are less painful. The whole Max-Q thing is a scam: they sell two very different GPUs under the same naming scheme. A normal 1080 can run at lower clocks if they wanted it to, so I think maybe some poor-quality GPUs that can't handle higher clocks get named Max-Q.
#74
EarthDog
trparkyThat of course depends upon if people do the proper research and not just listen to the marketing speak and look at all the pretty marketing cards that are placed next to the systems in the big box stores.
Of course! But how is the way BB (or any brick and mortar store) advertises it, versus NVIDIA and its specifications, really NVIDIA's fault? How far down the line do they need to go to be absolved of someone else's unrelated advertising?

So I ask (anyone) again... how should they advertise it? For those who are up in arms... what would be the best way to advertise this??? How do you pin down performance metrics and clocks when they can vary by 500 MHz and several percent in performance from min to max clocks? The 2080m is different from the 2080 Max-Q. It's a different name. I expect differences between a Honda Accord LE and the Special Edition as well. Do I know what those are? No, but I can search and see the differences at the dealership or online (much like this card: look up the specs and performance analysis).
#75
trparky
EarthDogSo I ask (anyone) again... how should they advertise it? For those who are up in arms... what would be the best way to advertise this??? How do you pin down performance metrics and clocks when it can vary 500 MHz and several % performance difference from min to max clocks? 2080m is different from 2080 Max-Q. It's a different name. I expect differences in a Honda Accord LE to the Special Edition as well. Do I know what those are? No, but I can search and see the differences in the dealership or online (much like this card, look up specs and performance analysis)
You got me on that one, I don't know. You can only dumb things down so much.