Thursday, January 31st 2019
Mobile NVIDIA GeForce RTX GPUs Will Vary Wildly in Performance, Clocks Lowered Substantially
NVIDIA is in the process of rolling out the first mobile implementations of its RTX 2000 series GPUs, and if reports are accurate, users are going to have a hard time extrapolating performance from one product to another. This is because manufacturers are apparently being given a great deal of leeway in how they clock their products, according to each solution's thermal characteristics and design philosophy.
What this means is that NVIDIA's RTX 2080 Max-Q, for example, can be clocked as low as 735 MHz, a more than 50% downclock from its desktop counterpart (1,515 MHz). The non-Max-Q implementation of the RTX 2080, for now, seems to be clocked at around 1,380 MHz, still a roughly 135 MHz downclock. Of course, these lowered clocks are absolutely normal - and necessary - for these products, particularly on a chip as large as the one powering the RTX 2080. The problem arises when manufacturers don't disclose the GPU clock speeds of their particular implementation - a user might buy, say, an MSI laptop and an ASUS laptop with the exact same apparent configuration, but with GPUs operating at very different clock speeds and delivering very different levels of performance. Users should do their due diligence when choosing which mobile solution sporting one of these NVIDIA GPUs to buy.
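To put those figures in perspective, here is a minimal back-of-the-envelope sketch (plain Python, not from the original report) that computes the relative downclock from the clock values quoted above; the function name and script structure are purely illustrative.

```python
# Rough check of the downclocks quoted above.
# Clock figures (MHz) are the ones reported; everything else is illustrative.

DESKTOP_RTX_2080_BASE = 1515  # desktop RTX 2080 base clock, MHz


def downclock_percent(mobile_clock_mhz: float,
                      desktop_clock_mhz: float = DESKTOP_RTX_2080_BASE) -> float:
    """Return how far below the desktop base clock a mobile part sits, in percent."""
    return (1 - mobile_clock_mhz / desktop_clock_mhz) * 100


if __name__ == "__main__":
    print(f"RTX 2080 Max-Q  @  735 MHz: {downclock_percent(735):.1f}% below desktop")   # ~51.5%
    print(f"RTX 2080 mobile @ 1380 MHz: {downclock_percent(1380):.1f}% below desktop")  # ~8.9%
```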
Sources:
TechSpot, Tweakers.net
100 Comments on Mobile NVIDIA GeForce RTX GPUs Will Vary Wildly in Performance, Clocks Lowered Substantially
All this began with the 980, which they turned into a fully enabled chip that could be used in laptops alongside the cut-down 980M released previously. With the 10 series they did things in a different order: first the fully fledged parts were made available, and then the Max-Q ones, which are functionally nothing more than M versions.
Now they are releasing them concurrently, and the way they keep changing all this isn't incidental. I, for one, didn't even know there were distinct Mobile and Max-Q versions already out; if you look on their site there is nothing that explicitly explains the difference between the two. Not only that, but we now find out that even among those you may get wildly variable performance.
They have intentionally smeared the line between them to the point where even adept consumers would be hard pressed to know for sure what exactly they are going to get.
This, from NVIDIA, explicitly explains what Max-Q is: www.nvidia.com/en-us/geforce/gaming-laptops/max-q/
or this: blogs.nvidia.com/blog/2018/12/14/what-is-max-q/
Why we are comparing last gen desktop cards with this gen Max-Q is beyond me...
Anyway, enjoy gents. :)
This is why this is exceptionally problematic: they sell you something more expensive without explicitly saying what sort of performance you are going to get. It will be a surprise - that's the point.
I burned that review to the ground @ Tweakers.net :D For good reason - similar reasons to those mentioned here, actually - and the review completely fails to highlight that point.
I mean, take even your very example, and that is with a fully refreshed product stack with clear naming conventions (Max-Q, 10xx)... but now the clock speed may vary wildly. So even if you ask 'what is Max-Q', you still don't know a thing about actual performance relative to the rest.
When TPU tested both cards, the 2080 Ti Founders Edition consumed 18 W more than the 1080 Ti Founders Edition while running Furmark, and peak gaming load was 267 W for the 1080 Ti versus 289 W for the 2080 Ti.
Secondly, we aren't discussing whether 2080Ti laptops will be faster than 1080Ti ones. We're discussing whether fitting a 2080Ti into a laptop requires more aggressive downclocking than fitting a 1080Ti into a laptop does.
Since the 2080 Ti consumes more power, the simple answer is yes, it will require more aggressive downclocking, because while it is very easy to make a GPU cooler larger and better on a desktop, the same cannot be said of increasing the size of a cooler in a laptop.
The desktop's thermal envelope can expand while the laptop's stays the same, which means the performance delta also grows.
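For reference, here is a minimal sketch of that argument in numbers. The 267 W and 289 W figures are the peak gaming loads quoted above; the 150 W laptop budget is a hypothetical example, and the "power scales roughly with clock at fixed voltage" simplification is my assumption, not something from the thread.

```python
# Back-of-the-envelope look at the power figures quoted above.
GTX_1080TI_PEAK_W = 267   # peak gaming load cited for the 1080 Ti FE
RTX_2080TI_PEAK_W = 289   # peak gaming load cited for the 2080 Ti FE
LAPTOP_GPU_BUDGET_W = 150  # hypothetical fixed mobile power budget

extra_draw_pct = (RTX_2080TI_PEAK_W / GTX_1080TI_PEAK_W - 1) * 100
print(f"2080 Ti draws ~{extra_draw_pct:.1f}% more at peak gaming load")  # ~8.2%

# Assuming power scales roughly with clock at a fixed voltage (a loose,
# illustrative assumption), fitting each card into the same fixed budget
# means the 2080 Ti has to give up a larger share of its desktop clocks:
for name, desktop_w in (("1080 Ti", GTX_1080TI_PEAK_W), ("2080 Ti", RTX_2080TI_PEAK_W)):
    budget_fraction = LAPTOP_GPU_BUDGET_W / desktop_w
    print(f"{name}: laptop budget is {budget_fraction:.0%} of desktop draw")
```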
Ugh, so annoying when I try to read the first post to get this important information and it's nowhere to be found. Soooooooooooooooooooo tired of copy-paste news...
Die size will affect things like GPU/card price, but that wasn't what we were discussing right now. It does, and quite precisely so.
Again. This isn't new...
Here is Max-Q - www.anandtech.com/show/11471/nvidia-announces-geforce-gtx-maxq
Well, this one I don't really like: according to Notebookcheck there is a 90 W TDP variant of the RTX 2080 too - 990 MHz base, 1,230 MHz boost, 90 W TDP.
Mobile/Max-Q Pascal GPU performance varied wildly as well.
News leading the lemmings off the cliff, gents!
Secondly, read the rest of my post instead of constantly attempting to muddy the waters.
This thread is about how aggressively, compared to desktop parts, the Max-Q versions of RTX parts will have to be downclocked in order to meet the thermal constraints of being in a laptop.
The answer to that is quite simply "More aggressively than 10 series GPUs had to be", because the entire RTX product line consumes more power and therefore generates more heat than the 10 Series did.
The performance comparison bullshit you're trying to discuss was not the original topic of this thread. It was brought up by some idiot on the first page who made an unrelated assertion that everyone else is now jumping on them for. I don't care about that. I care about the thread topic, and I'm discussing the thread topic, not the self-gratifying tangent you're trying to drag me into.
IMO, there is already enough clarity (more is always better!) for an average user to at least ask what the difference is between the two. The specs CLEARLY show there is a range for both Mobile GPUs and Max-Q GPUs, which will vary depending on the laptop they are in. This is not new; this is, in fact, old news. Could more clarity be brought to it... sure...
...but HOW should NVIDIA report this data? Sell themselves short and list performance off the minimum boost? Or simply report a range of possible clock speeds and note that it will vary depending on the environment, like they have done? When making purchases like this - hell, almost ANY major purchase like this - consumers need to put due diligence into it!!! Someone mentioned Darwinism earlier... and I have to agree... especially considering my mom (the noobliest of the noobs in PCs) even picked it out when looking at the specs!
So I ask (anyone) again... how should they advertise it? For those who are up in arms... what would be the best way to advertise this??? How do you pin down performance metrics and clocks when they can vary by 500 MHz and several percent in performance from min to max clocks? The 2080 Mobile is different from the 2080 Max-Q. It's a different name. I expect differences between a Honda Accord LE and the Special Edition as well. Do I know what those are? No, but I can search and see the differences at the dealership or online (much like with this card - look up specs and performance analysis).
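On the "look up the specs" point: once a machine is in hand (or on a store demo unit), the clock and power limits the vendor actually shipped can be read straight from the driver. A minimal sketch, assuming nvidia-smi is on the PATH; the exact query field names can vary slightly between driver versions, and the sample output below is illustrative, not a measured result.

```python
# Query the shipped clock and power limits via nvidia-smi.
# Assumes an NVIDIA driver with nvidia-smi on the PATH; check
# `nvidia-smi --help-query-gpu` if a field name is rejected.
import subprocess

FIELDS = "name,clocks.max.graphics,power.limit"


def read_gpu_limits() -> str:
    result = subprocess.run(
        ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout


if __name__ == "__main__":
    # Example output (illustrative only):
    #   name, clocks.max.graphics [MHz], power.limit [W]
    #   GeForce RTX 2080 with Max-Q Design, 1095 MHz, 80.00 W
    print(read_gpu_limits())
```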