Saturday, January 23rd 2021

NVIDIA to Drop Max-Q and Max-P Differentiators in Mobile GPU Specifications

NVIDIA has recently introduced its RTX 3000 series of Ampere graphics cards for mobile/laptop devices. For the past few years, these mobile GPUs have been divided into two configurations: Max-P and Max-Q. The Max-P variant is the maximum-performance configuration, allowing higher power draw and temperatures, and represents the standard GPU setup. The Max-Q design is, according to NVIDIA, "a system-wide approach to deliver high performance in thin and light gaming laptops. Every aspect of the laptop, chip, software, PCB design, power delivery, and thermals, are optimized for power and performance." In other words, the Max-Q variants are more tightly TGP-limited than their Max-P counterparts.


Update 23rd of January 11:35 UTC: An NVIDIA spokesperson told Tom's Hardware: "No, Max-Q branding is not going away. When we originally introduced Max-Q back in 2017, the brand was initially used in GPU naming since Max-Q referred to the GPU TGP only. Today, 3rd Generation Max-Q is broader, and is a holistic set of platform technologies and design approach to building powerful and thin laptops. In addition, to be more transparent about a laptop's exact capabilities, RTX 30 Series laptops now show more information than ever, listing exact TGP, clocks and features supported. You will find this in the control panel which now reports maximum power (TGP+Boost), and support for key features including Dynamic Boost 2, WhisperMode 2, Advanced Optimus, and others, all of which fall under the Max-Q umbrella. We strongly encourage OEMs to list clocks and other technologies a laptop supports, including Advanced Optimus, Dynamic Boost 2, and more. Ultimately, like all laptop features and specs, it is up to the OEM to market what their particular laptop configuration supports."

Today, according to Notebookcheck, NVIDIA has decided to drop these naming differentiators from product listings, leaving customers unable to tell whether their product uses the high-TGP or low-TGP configuration of a specific GPU SKU. From now on, laptop makers will no longer list the GPU configuration in their specifications and will simply list a GPU model, without any indication of whether it is a Max-P or Max-Q arrangement. The company originally announced both types of SKUs for RTX 3000 series laptops, but the change appears to have happened recently, and laptop specification sheets now omit the GPU variant. Below you can check out the slide posted on Chinese social media and spotted by @9550pro on Twitter, showing the differences between the Max-P and Max-Q offerings. Laptop OEMs have already removed the Max-P designation from their specification lists.
Sources: Notebookcheck, via VideoCardz

45 Comments on NVIDIA to Drop Max-Q and Max-P Differentiators in Mobile GPU Specifications

#26
evernessince
P4-630: So read reviews first before buying a gaming laptop.
That's not a possibility for every laptop model and their many variations. You are lucky if you get one review per model, let alone all of its potential variations. On top of that, there have been instances in the past where Nvidia has sold "the same model" of a chip with vastly different performance capabilities, so even if you did read reviews, there was no telling whether you were getting the faster one or the slower one.
Posted on Reply
#27
Vya Domus
Well, good luck navigating this minefield if you want a high-end laptop, I guess; you're on your own. It's a shame they keep getting away with these falsely advertised parts, but since they never seem to have gotten into any trouble, this will carry on forever.
londiste: I do not even know what the good solution to this would be.
Come on, you know very well what the solution is: you list the TDP limit imposed on the GPU on your product page. Nvidia can easily make OEMs do this, it's their product. Of course, that would require both Nvidia and the OEM to have a backbone and offer truthful and comprehensive information to their customers.

Alternatively, they can go back to just having a single tier with one specification and that's it. If your laptop can't accommodate the power and cooling required, then you don't use it and you implement a lower-performance part; in practice, that's what you get performance-wise anyway. The real problem here isn't the TDP or performance; it's the fact that all of this is just a really contrived way for them to sell you an "RTX 3080"-powered laptop that is anything but that under the hood. In other words, this is done to sell you a falsely advertised product.

I think the only real solution at this point is a really nasty lawsuit and a big enough settlement/fine to hurt Nvidia and the OEMs.
Posted on Reply
#28
Minus Infinity
How about the still-fraudulent advertising that is being allowed? E.g. the 3070 mobile is not remotely close to 3070 desktop performance and should be labelled a 3060 based on its CUDA core count.

Note to self: don't buy laptops with Nvidia GPUs. Such a d!ck company.
Posted on Reply
#29
Crackong
Paying 3080 prices and it is GA104?
Nope, just Nope
Posted on Reply
#30
Mussels
Freshwater Moderator
They don't want it to be clear - it's been garbage getting decent specs for laptop purchases for years now. I've helped people shop for the damn things and found out display models had different specs to what arrived, but the advertised specs were vague enough that they couldn't return them.
Posted on Reply
#31
londiste
Vya Domus: Come on, you know very well what the solution is: you list the TDP limit imposed on the GPU on your product page. Nvidia can easily make OEMs do this, it's their product. Of course, that would require both Nvidia and the OEM to have a backbone and offer truthful and comprehensive information to their customers.
Would you trust OEMs with this? The GPU manufacturer would have to exert quite some control to keep that meaningful.
Easy example: set the TDP limit to whatever but have the card limited by cooling. Done, the spec looks OK and is technically valid :D
Vya Domus: Alternatively, they can go back to just having a single tier with one specification and that's it. If your laptop can't accommodate the power and cooling required, then you don't use it and you implement a lower-performance part; in practice, that's what you get performance-wise anyway. The real problem here isn't the TDP or performance; it's the fact that all of this is just a really contrived way for them to sell you an "RTX 3080"-powered laptop that is anything but that under the hood. In other words, this is done to sell you a falsely advertised product.
I get your point, and in principle I would agree, but I think in reality the possible range is too big for that, plus there are niches to cater for. For example, an 80 W 3080 - it'll be more expensive but also faster than an 80 W 3070. Slower but wider has benefits.
Posted on Reply
#32
Vayra86
londiste: Would you trust OEMs with this? The GPU manufacturer would have to exert quite some control to keep that meaningful.
Easy example: set the TDP limit to whatever but have the card limited by cooling. Done, the spec looks OK and is technically valid :D


I get your point, and in principle I would agree, but I think in reality the possible range is too big for that, plus there are niches to cater for. For example, an 80 W 3080 - it'll be more expensive but also faster than an 80 W 3070. Slower but wider has benefits.
Right now the choice lands squarely in the most advantageous way for the companies selling the product. Any semblance of ethical behaviour is out the window. This is apparently the norm for laptops, and it's the reason the vast majority of consumers have been very eager to move to tablets, smartphones, or even the cloud to get their mobile gaming fix.

We all know there is absolutely no reason not to tailor laptop models to GPU TDPs. Every generation there is a new range that devices can be sized for. Additionally, Nvidia is perfectly capable of filling a stack with more than three models and having their performance match the desktop counterparts more closely.

Who's really winning here, I wonder? As usual this is just the devastating effect of unregulated commerce and greed.
Posted on Reply
#33
londiste
Would, for example, three models with fixed TDPs - say, from the Nvidia lineup, a 3060 at 80 W, a 3070 at 115 W and a 3080 at 150 W - be a good solution? OEMs clearly do not think so, and ignoring what they want would be bad for the GPU manufacturer's bottom line. Lower TDP means a smaller and lighter laptop. Again, if one is willing to pay for it, a 3080 at 80 W can be a thing. Yes, it will fall short of a 3080 at 150 W, but it will outperform a 3060 at 80 W and possibly a 3070 at 115 W. Sure, the GPU range could then be extended beyond the three models, but that would defeat the purpose of a simple and understandable lineup.

Like I said, I do not know what a perfect solution would be.
It seems to me that having the GPU TDP spec listed for each specific laptop is the best we can hope for and a reasonable compromise.

This is not an Nvidia thing. AMD has the same vague specs for mobile GPUs and for the same reasons.
Posted on Reply
#34
Vayra86
The purpose was never to be simple and understandable; that is the whole point. Companies can do many things to make a product stack clear and understandable, but they obviously prefer not to, because misdirection is better for margins.

Nvidia also manages to fill OEM channels with GPUs that share a name but have different core counts, or even entirely different memory setups. It's the same filth, and yes, all companies are guilty of it. That still doesn't make it right, and there is absolutely no excuse for not offering twenty model numbers instead of hammering it down to three and hiding radically different performance behind the TDP.

Nvidia had marketing for their wider GPUs and apparently it didn't help their margin - thát is the gist of this article. You don't need to make silly excuses for them or find the logic behind the madness. This is all about money and selling the smallest possible chip with the biggest possible number.
Posted on Reply
#35
nguyen
Much ado about nothing here: <15-inch laptops will have a 65 W TDP GPU, 15-inch laptops will have an 80-90 W TDP GPU, and 17-inch laptops will have a 115-150 W GPU. It is that simple.
Customers will first have to pick the chassis best suited to their needs, then pick the specs depending on how deep their pockets are.
Nobody should care if their 15-inch laptop with a higher-spec GPU is slower than a 17-inch laptop with a lower-spec one (3080 at 80 W vs 3070 at 150 W); if they do, then just build a freaking PC.
Posted on Reply
#36
Vader
Isn't Nvidia the one that makes the cards, dictates the GPU's TBP and then sets that power limit in the BIOS of the card?

As I see it, when an order is placed by the OEM, Nvidia knows exactly the TBP that will be used for the cards they are selling, so why are they making this anti-consumer move when they could come up with a naming scheme that accurately informs the buyer of the level of performance to be expected?

Something like 3070 TL (thin and light, 80 W), MS (mainstream, 90 W), MP (max performance, 115 W).
Posted on Reply
#37
windwhirl
Vader: Isn't Nvidia the one that makes the cards, dictates the GPU's TBP and then sets that power limit in the BIOS of the card?

As I see it, when an order is placed by the OEM, Nvidia knows exactly the TBP that will be used for the cards they are selling, so why are they making this anti-consumer move when they could come up with a naming scheme that accurately informs the buyer of the level of performance to be expected?

Something like 3070 TL (thin and light, 80 W), MS (mainstream, 90 W), MP (max performance, 115 W).
I think there's a configurable TDP setting for this kind of card.
Posted on Reply
#38
Mussels
Freshwater Moderator
windwhirl: I think there's a configurable TDP setting for this kind of card.
Yup there is, that was the whole point of P and Q - to let us know what they chose.


They'll probably just let this gen be auto throttled, giving us good hardware on paper that runs like ass in reality.
Posted on Reply
#39
Vader
windwhirl: I think there's a configurable TDP setting for this kind of card.
There is, but the selected setting is in the vBIOS (or included in the main BIOS). There is no way Nvidia wouldn't know which setting the OEM chose, since they have to provide the proper vBIOS to make the card work.
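
A minimal sketch of what that means in practice, assuming the pynvml bindings (the nvidia-ml-py package) are installed and the driver exposes power-limit queries over NVML - the configured limit can be read back in software regardless of what the spec sheet says:

# Sketch: read the power limit the OEM actually configured, via NVML.
# Assumes pynvml (pip install nvidia-ml-py); may raise NVMLError if the
# platform does not expose power-management queries for this GPU.
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex, nvmlDeviceGetName,
    nvmlDeviceGetEnforcedPowerLimit, nvmlDeviceGetPowerManagementLimitConstraints,
)

nvmlInit()
try:
    gpu = nvmlDeviceGetHandleByIndex(0)                  # first NVIDIA GPU
    name = nvmlDeviceGetName(gpu)
    if isinstance(name, bytes):                          # older pynvml returns bytes
        name = name.decode()
    limit_w = nvmlDeviceGetEnforcedPowerLimit(gpu) / 1000.0      # mW -> W
    lo, hi = nvmlDeviceGetPowerManagementLimitConstraints(gpu)   # mW
    print(f"{name}: enforced power limit {limit_w:.0f} W "
          f"(allowed range {lo / 1000:.0f}-{hi / 1000:.0f} W)")
finally:
    nvmlShutdown()

Nothing here is hidden from the driver; the question is only whether the number makes it onto the spec sheet.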
Posted on Reply
#40
nguyen
Mussels: Yup there is, that was the whole point of P and Q - to let us know what they chose.


They'll probably just let this gen be auto throttled, giving us good hardware on paper that runs like ass in reality.
Welcome to gaming laptops, where everything can and will throttle :D
Posted on Reply
#41
bug
Imho, laptops are a lost cause.

I mean, whatever shenanigans manufacturers try to pull with PCs or smartphones, sooner or later they get called out. Laptops? Almost never. Why? Because almost nobody reviews laptops in the first place. Why? Because they are so "customized" by manufacturers, it's basically impossible to objectively compare between several models.
Between configurable TDPs for CPU and GPU, various casings and cooling solutions, and TDPs that can vary based on said cooling solutions, manufacturers have long had the tools to make two laptops built from the same parts behave very differently.

Now, I'm not raving about the removal of P and Q. At least you knew which segment you were buying into. But between two Ps or two Qs, you already had no idea which performs better. And while you can dig up the cTDP somewhere, you'd still have no idea how a card will perform, because cooling still affects how often it throttles.
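
For what it's worth, a rough way to see that in practice: a short sketch, again assuming pynvml, that samples power draw, SM clock and temperature once a second while a game or benchmark runs (the 60-sample window is just illustrative):

# Sketch: sample GPU power draw, SM clock and temperature to watch for
# throttling under load. Assumes pynvml (pip install nvidia-ml-py).
import time
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetPowerUsage, nvmlDeviceGetClockInfo, nvmlDeviceGetTemperature,
    NVML_CLOCK_SM, NVML_TEMPERATURE_GPU,
)

nvmlInit()
try:
    gpu = nvmlDeviceGetHandleByIndex(0)
    for _ in range(60):                                  # ~1 minute of samples
        watts = nvmlDeviceGetPowerUsage(gpu) / 1000.0    # mW -> W
        sm_mhz = nvmlDeviceGetClockInfo(gpu, NVML_CLOCK_SM)
        temp_c = nvmlDeviceGetTemperature(gpu, NVML_TEMPERATURE_GPU)
        print(f"{watts:6.1f} W  {sm_mhz:4d} MHz  {temp_c:3d} C")
        time.sleep(1)
finally:
    nvmlShutdown()

If the clock sags while power sits at the enforced limit, the card is power-throttled; if it sags while temperature plateaus, the cooling is the bottleneck.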
Posted on Reply
#42
DeathtoGnomes
bug: Imho, laptops are a lost cause.

I mean, whatever shenanigans manufacturers try to pull with PCs or smartphones, sooner or later they get called out. Laptops? Almost never. Why? Because almost nobody reviews laptops in the first place. Why? Because they are so "customized" by manufacturers, it's basically impossible to objectively compare between several models.
Between configurable TDPs for CPU and GPU, various casings and cooling solutions, and TDPs that can vary based on said cooling solutions, manufacturers have long had the tools to make two laptops built from the same parts behave very differently.

Now, I'm not raving about the removal of P and Q. At least you knew which segment you were buying into. But between two Ps or two Qs, you already had no idea which performs better. And while you can dig up the cTDP somewhere, you'd still have no idea how a card will perform, because cooling still affects how often it throttles.
I have to say I agree with this.

I do think gamers will do better research on a new purchase than the average computer user, wanting to know which card spec performs better in this game or that. OEMs' desire to showcase their own bells and whistles plays second fiddle to the GPU makers, so with this move from TeemGrean ( :p ) they can let their PR departments have their fun again.
Posted on Reply
#43
AusWolf
Get ready for under 1 kg laptops equipped with an RTX 3090 Mobile with a stunning 800 MHz boost clock!
Posted on Reply
#44
Chomiq
So the first laptops with 30xx series GPUs are showing up at online retailers, and the TDP is nowhere to be seen. Nvidia should expect a class action lawsuit over this.

ASUS TUF Dash F15 FX516PR-AZ024 with 3070:

The specification only lists the GPU model and the amount of VRAM.
There's no information that the GPU has a TDP of 80 W.

That's from the retailer; now this is from the spec listed on Asus' website:

Again, no information about TDP.
Posted on Reply
#45
bug
Chomiq: So the first laptops with 30xx series GPUs are showing up at online retailers, and the TDP is nowhere to be seen. Nvidia should expect a class action lawsuit over this.

ASUS TUF Dash F15 FX516PR-AZ024 with 3070:

The specification only lists the GPU model and the amount of VRAM.
There's no information that the GPU has a TDP of 80 W.

That's from the retailer; now this is from the spec listed on Asus' website:

Again, no information about TDP.
This isn't on Nvidia any more than it is on VRM suppliers. Nvidia supplied the parts, manufacturers chose a TDP, and they should bear (or not) the responsibility for making it available.
Posted on Reply