Thursday, January 31st 2019

Mobile NVIDIA GeForce RTX GPUs Will Vary Wildly in Performance, Clocks Lowered Substantially

NVIDIA is in the process of rolling out the first implementations of its RTX 2000 series GPUs in mobile form, and if reports are accurate, users will have a hard time extrapolating performance from product to product. This is because manufacturers are apparently getting a whole lot of leeway in how they clock their products, according to each solution's thermal characteristics and design philosophy.

What this means is that NVIDIA's RTX 2080 Max-Q, for example, can be clocked as low as 735 MHz, a more than 50% downclock from its desktop counterpart (1,515 MHz). The non-Max-Q implementation of NVIDIA's RTX 2080, for now, seems to be clocked at around 1,380 MHz, which still trails the desktop part's 1,515 MHz base clock by some 135 MHz. Of course, these lowered clocks are absolutely normal - and necessary - for these products, particularly on a chip as huge as the one powering the RTX 2080. The problem arises when manufacturers don't disclose the GPU's clock speeds in their particular implementation: a user might buy, say, an MSI laptop and an ASUS one with the exact same apparent configuration, but with GPUs operating at very different clock speeds and delivering very different levels of performance. Users should do their due diligence when choosing which mobile solution sporting one of these NVIDIA GPUs to buy.
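For buyers who want to verify what they actually got, NVIDIA's nvidia-smi utility (installed alongside the driver) can report a board's rated clocks and power limit, the latter being the quickest way to tell an 80 W variant from a 150 W one. Below is a minimal sketch, assuming a reasonably recent driver that exposes these query fields, wrapped in Python only for readability:

```python
import subprocess

# Ask nvidia-smi for the GPU's name, maximum graphics clock and power limits.
# The field names assume a reasonably recent NVIDIA driver.
fields = "name,clocks.max.graphics,power.default_limit,power.limit"

result = subprocess.run(
    ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)

# Prints one CSV line per GPU: name, max graphics clock, default power limit, current power limit.
for line in result.stdout.strip().splitlines():
    print(line)
```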
Sources: TechSpot, Tweakers.net

100 Comments on Mobile NVIDIA GeForce RTX GPUs Will Vary Wildly in Performance, Clocks Lowered Substantially

#76
king of swag187
Wow! It's almost like laptop cooling has gotten worse over time and given the power requirements for RTX cards, they have to limit them to these insane power targets!
What's funny is that the fully fledged RTX 2080 in the P775TM1 was only around 20 FPS faster than the 1080 in the same laptop, both with 9900Ks/32 GB RAM and a 150 W TDP
#77
HTC
king of swag187: Wow! It's almost like laptop cooling has gotten worse over time and given the power requirements for RTX cards, they have to limit them to these insane power targets!
Whats funny is that the fully fledged RTX 2080 in the P775TM1 was only around 20FPS faster than the 1080 in the same laptop, both 9900K's/32GB RAM and 150W TDP
The problem is that the chips have become smaller but the TDP hasn't decreased as much (in proportion), meaning the heat is much more concentrated, which can be circumvented in a desktop environment using beefier coolers but not in a laptop environment, where space is quite limited.
#78
EarthDog
These chips are huge, actually...
#79
Vya Domus
Larger processors at lower clocks tend to be much more power efficient. The fact that they have to pare back the clocks so much suggests these were hardly made with mobile in mind.
#80
Vayra86
EarthDog: Clockspeed varies wildly not on SKU, but on environmental conditions, right? That is nothing new either, right?

Ugh, so annoying when I try to read the first post to get this important information and it's nowhere to be found. Soooooooooooooooooooo tired of copy paste news...
bug: This isn't really different from desktops. Better cooling solutions allow for higher TDPs which in turn allow for higher frequencies. The only thing different is manufacturers get more leeway to play with the TDP.
This summarizes it perfectly. And at the same time, manufacturers, along with Nvidia as the happy provider of model names/product stacks, love to play along. Why is a significantly worse performing card 'called' the same as its desktop 'equivalent' SKU? That is deceptive. It is the same deception as Intel calling dual cores an i7 because they have HT... while on desktop you find them as an i3. These companies know it is deceptive, and they know that it works. All the marketing they do contains these same product names. Nowhere will you see the clocks advertised along with it, nor the performance gaps between desktop and laptop 'equivalents'. Only by reading the fine print will you discover this difference - and you don't know what you don't know.

We can debate Darwinism and while I fully agree that people are required to do their research/due diligence, I still think it's a horrible practice.

As for 'how should they do it'... simple. Take the closest-performing desktop equivalent model name and put that sticker on your laptop part. That goes for both the Max-Qs and the regular variants. If I recall, Nvidia used to put an "M" at the end until Pascal. That was also less deceptive than the way they have been marketing it since Max-Q launched.
#81
Gasaraki
"Users should do their due research when it comes to the point of choosing what mobile solution sporting one of these NVIDIA GPUs they should choose."

People should be doing that all the time.
bug: Keep in mind we don't have many benchmarks comparing TDP with DXR on and off. Without DXR, it's possible these will offer decent performance (definitely not desktop levels, though).
No. It doesn't work like that. There's no such thing as DXR off. That core is part of the chip.
#82
jabbadap
Vayra86: This summarizes it perfectly. And at the same time, manufacturers, along with Nvidia as the happy provider of model names/product stacks, loves to play along. Why is a significantly worse performing card 'called' the same as its desktop 'equivalent' SKU. That is deceptive. It is the same deception as Intel calling dualcores an i7 because they have HT... and on desktop you find them as an i3. These companies know it is deceptive, and they know that it works.

We can debate Darwinism and while I fully agree that people are required to do their research/due diligence, I still think its a horrible practice.

As for 'how should they do it'... simple. Get the closest performing desktop equivalent model name and put that sticker on your laptop part. That goes for both the Max-Q's and the regular variants. If I recall, Nvidia used to put an "M" at the end until Pascal. That was also a distinction less deceptive than the way they are marketing it since Max-Q launched.
Uhh, I would argue that the current naming scheme is way better than it used to be. At least Nvidia is really using the same chips on same-named SKUs. E.g., in Fermi times the GTX 580M was not even a GF110 GPU, it was the tier-lower GF114.
#83
Gasaraki
People need to stop comparing Max-Q vs. regular cards vs. desktop cards. A 1080 Max-Q and a regular 1080 in a laptop didn't have the same performance, and both are slower than the desktop version of the 1080. That's common sense. The desktop version has better cooling, better power, and can boost longer.

This stupid article is just getting people to argue about stupid things that are obvious. The performance will vary because some laptops have bad cooling etc.
EarthDog: Max-Q defines the size of the card and it is different from their mobile counterparts, correct. This is nothing new. Max-Q designs are typically found in less expensive, but typically more power efficient, and thin models. It was expected upon release of them last gen and nothing has changed here. They are a different tier compared to their more robust mobile and especially desktop counterparts. This should not be a surprise. :)

This, from NVIDIA, explicitly explains what Max-Q is: www.nvidia.com/en-us/geforce/gaming-laptops/max-q/
or this: blogs.nvidia.com/blog/2018/12/14/what-is-max-q/

Why we are comparing last gen desktop cards with this gen Max-Q is beyond me...

Anyway, enjoy gents. :)
Thank god there's someone that understands...
#85
king of swag187
HTC: The problem is that the chips have become smaller but the TDP hasn't decreased as much (in proportion), meaning the heat is much more concentrated, which can be circumvented in desktop environment using beefier coolers but not in laptop environment, where space is quite limited.
The size of the dies has gone up, as well as the TDP, while cooling capacity has gone down.
Makes me miss the days of the Clevo X7200 and the M17X R2..
#86
bug
Vayra86: This summarizes it perfectly. And at the same time, manufacturers, along with Nvidia as the happy provider of model names/product stacks, loves to play along. Why is a significantly worse performing card 'called' the same as its desktop 'equivalent' SKU. That is deceptive. It is the same deception as Intel calling dualcores an i7 because they have HT... and on desktop you find them as an i3. These companies know it is deceptive, and they know that it works. All marketing they do, contains these same product names. Nowhere will you see the clocks advertised along with it, nor the performance gaps between desktop and laptop 'equivalents'. Only by reading the fine print will you discover this difference - and you don't know what you don't know.

We can debate Darwinism and while I fully agree that people are required to do their research/due diligence, I still think its a horrible practice.

As for 'how should they do it'... simple. Get the closest performing desktop equivalent model name and put that sticker on your laptop part. That goes for both the Max-Q's and the regular variants. If I recall, Nvidia used to put an "M" at the end until Pascal. That was also a distinction less deceptive than the way they are marketing it since Max-Q launched.
Well, from an engineering and historical point of view, mobile GPUs used to be quite different from their desktop counterparts. Hell, oftentimes the mobile GPUs were from a whole previous generation. Nvidia only recently started using the same designs across product lines and now, for the first time, they can actually sell you GPUs running the same configuration as their desktop siblings. To me, the constraints of a laptop are well known and I fully expect anything that runs on a laptop to do so considerably slower than on a desktop (part of the reason I still hate laptops and my main rig is a fully-fledged desktop), but if that is confusing/deceptive to you... well, there's nothing for me to add there.
#87
danbert2000
Vayra86: This summarizes it perfectly. And at the same time, manufacturers, along with Nvidia as the happy provider of model names/product stacks, loves to play along. Why is a significantly worse performing card 'called' the same as its desktop 'equivalent' SKU. That is deceptive. It is the same deception as Intel calling dualcores an i7 because they have HT... and on desktop you find them as an i3. These companies know it is deceptive, and they know that it works. All marketing they do, contains these same product names. Nowhere will you see the clocks advertised along with it, nor the performance gaps between desktop and laptop 'equivalents'. Only by reading the fine print will you discover this difference - and you don't know what you don't know.

We can debate Darwinism and while I fully agree that people are required to do their research/due diligence, I still think its a horrible practice.

As for 'how should they do it'... simple. Get the closest performing desktop equivalent model name and put that sticker on your laptop part. That goes for both the Max-Q's and the regular variants. If I recall, Nvidia used to put an "M" at the end until Pascal. That was also a distinction less deceptive than the way they are marketing it since Max-Q launched.
There's a distinct difference between the old M GPUs and the new Max-Q ones. Maxwell and earlier, the chips would actually be different between, say, a 970 and a 970m. They would usually have a specific mobility chip that would be wider and lower clocked. Starting with Pascal, the chips got so efficient that Nvidia just kept the same chip for the same branding. They would reduce the clocks slightly, but the performance would be generally the same. Then the Max-Q chips came and dropped the clocks and voltages even further, but it is still the same chip.

I have no problem with them selling the same chip in a graphics card and a laptop and having the same branding. And Max-Q is just a factory underclocked and undervolted part. I do agree that they need to have a better distinction between the performance levels, but there's a reason they don't put an "m" after these chips. They are the desktop chips in a mobile form factor.
#88
Captain_Tom
Well the approach they took with the MX150 was incredibly successful, so why wouldn't they?

On the surface the MX150 is a 25 W cheap mobile card that theoretically should outperform its desktop counterpart (GT 1030) by a solid 5-10%. However, the most prevalent version of the MX150 was actually limited to an 8 W TDP. I have a laptop with one of these, and in stock configuration it is almost worse than having Intel graphics - and that is because it starts jumping between 500 MHz and 1,200 MHz after 5 minutes of gaming (causing immense stutter).

However, if you force the clocks to stay at 1,150 MHz, overclock the memory, and undervolt it substantially, it can actually match the GT 1030 within an 8 W profile. Very cool - but annoying for non-overclockers. I expect the same of these RTX laptop cards.

Nvidia wants to be able to say they can fit an RTX 2060 in an ultrabook...... even if it will perform like an AMD APU under prolonged load lol. But to those up to the challenge, I bet you can get it close to 1070 performance after undervolting.
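For anyone curious what that kind of tweaking looks like in practice, here is a rough sketch using nvidia-smi's clock-locking and power-limit switches. This assumes a driver/GPU combination that actually supports them (clock locking is only exposed on newer architectures, many mobile parts refuse power-limit changes, and the undervolting/memory overclocking described above still needs a tool such as MSI Afterburner); the numbers are purely illustrative:

```python
import subprocess

def nvsmi(*args):
    """Run an nvidia-smi management command (these generally need admin/root rights)."""
    cmd = ["nvidia-smi", *args]
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Pin the graphics clock to a fixed range (min,max in MHz) instead of letting it
# bounce around; 1150 MHz mirrors the figure mentioned above and is illustrative only.
nvsmi("--lock-gpu-clocks=1150,1150")

# Cap the board power (watts). The value must fall within the limits the vendor
# exposes (check with `nvidia-smi -q -d POWER`); many laptops don't allow changes at all.
nvsmi("--power-limit=80")

# To undo the clock lock later:
# nvsmi("--reset-gpu-clocks")
```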
jabbadap: Uhh, I would argue that current naming scheme is way better than it used to be. At least Nvidia is really using same chips on same named skus. I.E. in Fermi times gtx580m was not even gf110 gpu, it was tier lower gf114.
Even funnier, the GTX 480M was a GTX 465 running at (no joke) only around 300-400 MHz if I remember correctly lol. It had incredible reliability and overheating issues (like all of Fermi) while also only really being usable in laptops the size of a desktop.
#89
XXL_AI
80 watts!? What the ... for the 2060? Nope. Try again, Nvidia.
#90
jchambers2586
I have a notebook with an MXM slot; how do I buy one of these GPUs when they launch?
#91
londiste
XXL_AI: 80Watts!? What the ... for the 2060? Nope. Try again Nvidia.
Wait 'til you hear about the 80W RTX 2080 :)
#92
XXL_AI
londiste: Wait 'til you hear about the 80W RTX 2080 :)
Nope, I have no intention of waiting; it's better to get a desktop GPU with an external box and run it with nvidia-smi -pl (half of the supported power).
#93
jabbadap
jchambers2586: I have a notebook with a mxm slot how do I buy one of these gpus when they launch?
Contact your notebook vendor to see if it's even possible. They usually don't want to make upgrading possible. If it is possible, then there are sites like Eurocom where you can buy upgrade kits, which cost a lot (e.g. a GTX 1070 with all the needed heatsinks and mounting bits costs $1k). In fact it's so expensive that you would be better off buying a new notebook anyway.
#94
EarthDog
XXL_AI: nope, I have no intention of waiting, its better to get a desktop gpu with an external box and use it at nvidia-smi -pl (half of the supported power)
Why would you do that when an external box can handle a full-size card and cooling?
#95
Mescalamba
Hot piece of HW eh? :D

Let's hope that the memory or IMC won't die in these too.
#96
king of swag187
jchambers2586: I have a notebook with a mxm slot how do I buy one of these gpus when they launch?
They likely won't fit in whatever laptop you have unless you have one of the most recent Clevos or a GT7x or GT8x from MSI.
jabbadap: Contact your notebook vendor if it's even possible. They usually don't want to make upgrading possible. If it's possible then there's sites like eurocom where to buy upgrade kits, which costs a lot(i.e. gtx 1070 with all needed heatsinks and mounting bits costs $1k). In fact it's so expensive that you would better of buying new notebook anyway.
Eurocom is hot steaming garbage.
You can get a proper 1070 MXM card for around the $500 mark; still not great, but good luck finding a laptop with a 1070 for around $500.
#97
John Naylor
GlacierNine: This really isn't complicated.

GTX 10-Series GPUs in laptops and mobile devices already had to be downclocked to avoid thermal and power consumption issues.

If the RTX chips are hotter and more power hungry, then the downclocking will have to be more aggressive assuming cooling stays the same, otherwise they will simply burn.

End of story. Done. Finished. That's that. The laws of physics dictate no less.
The laws of physics, well, thermodynamics, don't change ... but the comparison is not apples to apples. What are we comparing?

We can compare the models by series ... in which case the 2080 will likely use more power than the 1080. But the apples-to-apples comparison must be on the basis of performance delivered ... in which case the 2080 should be compared with something close in performance, such as the 1080 Ti.

MSI 1080 Ti Gaming = 282 watts peak gaming per TPU testing
MSI 2080 Gaming = 244 watts peak gaming per TPU testing

Of course manufacturers and vendors will do their best to hide which of the 20X0 series GPUs is being used. And nVidia is supplying what their clients are asking for; people are going to choose a stylish laptop and will want 2080 performance. The ones who won't look past the web price or store placard will go home with a thin mass-market lappie with a heavily downclocked 2080. Others will spend less on a well designed, preferably custom built laptop with a hefty cooling system and a 2070 with similar performance.

Be like an engineer: forget about "sexy" thin laptops, purchase from a quality outfit known for high performance, or better yet "custom built", and verify that you are getting what you want.... If it doesn't weigh 5.5 pounds (15") or 7.5+ pounds (17"), it's not likely going to support Max-Q AND deliver long battery life.

RTX 2080 Max-Q versus regular mobile RTX 2080:
RTX-OPS - 37T versus 53T
Giga Rays/s - 5 versus 7
Boost Clock (MHz) - 1095 versus 1590
Base Clock (MHz) - 735 versus 1380
Thermal Design Power - 80 W versus 150+ W

Here are MSI's offerings ... I had to configure more extras than I'd like to get apples to apples.

MSI GS75 Stealth 202 - $3,227
17.3" FHD, IPS-Level 144Hz 3ms 100%sRGB 72%NTSC
NVIDIA GeForce RTX 2080 8GB GDDR6 w/ Qmax
8th Generation Intel® Core™ Coffee Lake i7-8750H
16GB (1x16GB) DDR4 2666MHz SO-DIMM Memory
512GB Samsung 970 PRO M.2 NVMe Solid State Drive
2TB Western Digital BLUE SATA III Solid State Disk Drive
802.11AC wireless LAN + Bluetooth
4.96 lbs with 4-cell Battery

MSI GE75 Raider 048 - $3,198
17.3" FHD, IPS-Level 144Hz 3ms 100%sRGB 72%NTSC
NVIDIA GeForce RTX 2080 8GB GDDR6
8th Generation Intel® Core™ Coffee Lake i7-8750H
16GB (1x16GB) DDR4 2666MHz SO-DIMM Memory
512GB Samsung 970 PRO M.2 NVMe Solid State Drive
2TB Western Digital BLUE SATA III Solid State Disk Drive
802.11AC wireless LAN + Bluetooth
5.75 lbs with 6-cell Battery

So the Max-Q is an extra $29 but the battery is smaller ... I'd be concerned about the extra power combined with the smaller battery and perhaps less robust cooling.