Yes, in a few synthetics it looks like there isn't much performance difference. But did you look at the actual game tests? In every test that actually has an MX130 score, the MX150's minimums beat the MX130:
FF XV, Low, 720p: MX130 = 22.5 FPS; MX150 minimum = 25.2 FPS
Assassin's Creed Origins, Low, 720p: MX130 = 29 FPS; MX150 minimum = 42 FPS
Middle-earth: SoW, Low, 720p: MX130 = 43 FPS; MX150 minimum = 47 FPS
Rocket League, Low, 720p: MX130 = 94 FPS; MX150 minimum = 127 FPS
So, obviously, even the weaker version of the MX150 is still outperforming the MX130 in real world tests.
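Just to put numbers on the gap, here's a quick back-of-the-envelope Python sketch using only the four minimum-FPS figures quoted above (nothing here comes from official specs, and the game labels are just shorthand):

```python
# Relative minimum-FPS advantage of the weaker MX150 over the MX130,
# using the four game results quoted above (Low preset, 720p).
# Values: (MX130 FPS, MX150 minimum FPS)
results = {
    "FF XV": (22.5, 25.2),
    "AC Origins": (29.0, 42.0),
    "Middle-earth: SoW": (43.0, 47.0),
    "Rocket League": (94.0, 127.0),
}

for game, (mx130, mx150_min) in results.items():
    gain = (mx150_min / mx130 - 1) * 100  # percent advantage of the MX150
    print(f"{game}: MX150 minimum is +{gain:.0f}% over the MX130")
```

That works out to roughly +9% in the worst case (Middle-earth: SoW) and up to +45% (AC Origins), so even the low-power MX150's floor is above the MX130's result in all four titles.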
But, again, the biggest factor here is likely the laptop's thermals, which is exactly why the more efficient, cooler-running MX150 outperforms the MX130 in the real-world tests. Even notebookcheck's own reviews of laptops with the weaker MX150 note that it will boost to over 1600MHz while the laptop is cool. The problem is that these 13" ultrathins don't keep things cool for very long; the same reviews note that under load, even the processor starts lowering its boost because of heat.
And the MX130's TDP is 30W for the configuration that comes close to matching the weaker MX150, while the weaker MX150 has a TDP of only 10W. That is why the weaker MX150 exists. If they put a top-spec MX130 into one of these 13" laptops, it would throttle so hard it wouldn't even come close to its synthetic scores, never mind the real-world tests. I doubt it would even be able to maintain its base clock.
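A rough performance-per-watt comparison makes the same point. This little sketch uses the Middle-earth: SoW numbers quoted earlier (the closest of the four results) together with the 30W and 10W TDP figures from this post; treat it as an illustration, not a measured efficiency figure:

```python
# Rough FPS-per-watt, using Middle-earth: SoW (Low, 720p) minimums
# and the TDP figures discussed in the thread.
mx130_fps, mx130_tdp = 43.0, 30.0  # 30W MX130 configuration
mx150_fps, mx150_tdp = 47.0, 10.0  # 10W low-power MX150

mx130_eff = mx130_fps / mx130_tdp
mx150_eff = mx150_fps / mx150_tdp

print(f"MX130: {mx130_eff:.2f} FPS/W")
print(f"MX150: {mx150_eff:.2f} FPS/W")
print(f"Efficiency ratio: {mx150_eff / mx130_eff:.1f}x")
```

Even in the game where the raw FPS gap is smallest, the 10W MX150 is delivering on the order of three times the frames per watt, which is the whole reason it can live inside a 13" chassis at all.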
You're right, and I'm not contesting a word of what you're saying. But now I'm pretending to be Mr. Average Joe, with a small, maybe average, knowledge of tech, having learned to always check benchmarks. I go shopping for a laptop with a 'bit more' than just a weak IGP, and I find Nvidia's MX150 on the net. It takes an awful lot of knowledge, even reading between the lines and digging into a laptop's cooling capabilities, to really learn anything at all. The name MX150 by itself tells me nothing; worst case, it'll do less than my old, slightly bulkier laptop with an MX130 in it.
*That* is what's leaving a really sour taste in my mouth. And you mentioned Intel's confusing forest of mobile CPUs, and yes, I agree on that as well. Was it intended? Of course it was intended... and that is why I feel these articles are very much right to exist and pop up once in a while, or in fact, EVERY TIME a new low has been reached. The gaps are widening as the form factors get more extreme, and this is a really bad trend.
Apart from this, taking a longer look at the four game benches you linked, I see some striking patterns: AC Origins is CPU-heavy, and Rocket League runs at very high FPS, which also means high CPU load. FFXV and Middle-earth are most certainly not, and there the MX130 and MX150 scores get frighteningly close. It underlines what you're saying about heat and Maxwell versus Pascal, but at the same time it shows that the MX150's graphics performance is extremely inconsistent, whereas the MX130's is not. On the desktop, this boost behaviour is touted as Pascal's greatest feature and praised for its headroom; on mobile, it's abused to the limit to feign greater performance than we're actually getting. This is a change from the norm with regard to what Nvidia is, or was, offering. What's next, reversed GPU Boost? 'It may boost to the spec sheet's clock once in a while, if you're lucky'? Because essentially that is what this is.
It's exactly as you're telling it: Nvidia doesn't need to do this for ANY reason whatsoever, except to mislead and to help OEMs pull that trick along with them. Even if they had just specified a TDP and clock speed range, not even all the possible configurations but just min/max, the performance gap would be excusable and explainable. Even Intel doesn't play the game this badly; they at least specify the core counts somewhere.