Monday, April 13th 2020
Leaked Benchmark shows Possible NVIDIA MX450 with GDDR6 Memory
A new listing spotted in the 3DMark results browser appears to show the NVIDIA MX450 laptop GPU. The MX450 is expected to be based on the TU117, the same chip as the GTX 1650, as speculated by @_rogame. The leaked benchmark shows the MX450 running at a clock speed of 540 MHz with 2 GB of GDDR6 memory. The memory clock is listed at 2505 MHz, which would translate to an effective data rate of roughly 10 Gbit/s. It is interesting to see the shift to GDDR6 across NVIDIA's product stack, likely due to a shortage of GDDR5 or simply because GDDR6 is now cheaper.
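For a rough sense of scale, here is a quick Python sketch of what those figures would mean for memory bandwidth. The 4x multiplier follows from the ~10 Gbit/s figure above, while the 64-bit bus width is purely an assumption, as the listing does not state it.

    # Back-of-the-envelope check of the leaked memory figures.
    # Assumptions (not confirmed by the 3DMark listing): the 2505 MHz reading maps
    # to the per-pin data rate via a 4x multiplier, as the ~10 Gbit/s figure above
    # implies, and the 2 GB of GDDR6 sits on a 64-bit bus.
    memory_clock_mhz = 2505                          # clock reported in the listing
    data_rate_gbps = memory_clock_mhz * 4 / 1000     # ~10 Gbit/s per pin
    assumed_bus_width_bits = 64                      # hypothetical bus width

    peak_bandwidth_gbs = data_rate_gbps * assumed_bus_width_bits / 8
    print(f"Per-pin data rate: {data_rate_gbps:.2f} Gbit/s")                       # ~10.0
    print(f"Peak bandwidth (64-bit bus assumed): {peak_bandwidth_gbs:.1f} GB/s")   # ~80.2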
The TU117 GPU found in the GTX 1650 GDDR6 has proven itself to be a solid 1080p gaming option. The chip is manufactured on TSMC's 12 nm process and features 1024 shading units, 64 texture mapping units and 32 ROPs. The MX450 should provide a significant boost over integrated graphics at a TDP of 25 W, and will sit under the GTX 1650 Mobile due to its reduced RAM and power/thermal constraints.
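For reference, here is a quick sketch of the theoretical throughput those specs imply at the leaked 540 MHz clock. That figure is likely a base or early-sample clock rather than a final boost clock, so treat these numbers as a floor rather than a spec.

    # Theoretical throughput from the listed TU117 configuration at the leaked clock.
    shaders, tmus, rops = 1024, 64, 32
    clock_ghz = 0.540                                # leaked clock, likely not final boost

    fp32_tflops = 2 * shaders * clock_ghz / 1000     # 2 FLOPs per shader per clock (FMA)
    texture_rate = tmus * clock_ghz                  # GTexel/s
    pixel_rate = rops * clock_ghz                    # GPixel/s

    print(f"FP32: {fp32_tflops:.2f} TFLOPS")              # ~1.11
    print(f"Texture fill rate: {texture_rate:.1f} GTexel/s")  # ~34.6
    print(f"Pixel fill rate: {pixel_rate:.1f} GPixel/s")      # ~17.3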
Sources:
Guru3D, @_rogame
21 Comments on Leaked Benchmark shows Possible NVIDIA MX450 with GDDR6 Memory
In this case, one reason for the very low core clock speed is likely the GDDR6 itself eating into the power budget. The second reason, if the rumor that this is using the TU117 chip is true, is that we have seen how much power it takes to jump from the GTX 1650 to the 1650 Super. Shrinking a 100+ W part down to 25 W means a significant reduction in clock speed.
In fact, I think the MX350 should be sufficient to fend off the competition for now. The MX450 should really be built on a newer node, because 12 nm is clearly struggling to deliver a hefty improvement in specs within these power requirements.
In any modern GPU, VRAM is a relatively small percentage of the power usage. Even if you doubled the power consumption of the GDDR5, it would make such a small difference to total board power that you'd be hard pressed to separate it from the margin of error in your measurements. And all evidence points towards GDDR6 actually consuming less power clock-for-clock; don't forget the 14 Gbps GDDR6 in the 1660 Super actually runs at a lower real clock than the 8 Gbps GDDR5 in the vanilla 1660.
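To put numbers on that "lower clock" point, using the usual convention that GDDR5 moves 4 bits per pin per clock and GDDR6 moves 8:

    # Real command clock = per-pin data rate divided by transfers per clock
    # (4 for GDDR5, 8 for GDDR6).
    def command_clock_mhz(data_rate_gbps, transfers_per_clock):
        return data_rate_gbps * 1000 / transfers_per_clock

    gddr5_1660 = command_clock_mhz(8, 4)          # vanilla 1660: 8 Gbps GDDR5 -> 2000 MHz
    gddr6_1660_super = command_clock_mhz(14, 8)   # 1660 Super: 14 Gbps GDDR6 -> 1750 MHz
    print(gddr5_1660, gddr6_1660_super)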
Meanwhile, the power cost of extra GPU performance climbs far faster than linearly, because dynamic power scales roughly with frequency times the square of voltage (the same square law you get from combining P = I²R with V = IR). If 10% higher clocks require something like 15% more voltage, that works out to roughly 45% more power. That's why overclocking guzzles so much energy: ~45% more juice for 10% extra clockspeed.
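Here's the back-of-the-envelope version, assuming the usual dynamic power relation P ≈ C·V²·f and the (guessed) 10% clock / 15% voltage figures above:

    # Rough scaling sketch: dynamic power ~ frequency * voltage^2.
    # The 10% clock / 15% voltage bump is a guess from the post above, not a measurement.
    clock_scale = 1.10
    voltage_scale = 1.15
    power_scale = clock_scale * voltage_scale ** 2
    print(f"~{(power_scale - 1) * 100:.0f}% more power for 10% more clock")  # ~45%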
So yeah, the 1660 Super uses 15-20% more power than the 1660 and provides about 12-15% more performance. I 100% guarantee you that the consumption increase is down to higher GPU utilisation resulting in higher core clocks. I'm making an educated guess here, but it lines up with everything else, unlike the insane assumption that lower-clocked, more efficient memory is somehow driving total board power up by 15-20%. If that increase were caused by the memory change alone, it would imply that GDDR6 uses around 900% more power than GDDR5, not the 20% less power claimed by Micron/Samsung.
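To make that arithmetic explicit, here is a quick parametrised sketch. The 120 W figure is the GTX 1660's rated TDP, the 17.5% is the middle of the 15-20% range above, and the GDDR5 baseline is deliberately left as a free parameter, since that is exactly the number being argued about in this thread:

    # If the whole 1660 -> 1660 Super power delta were blamed on the memory swap,
    # what GDDR6-vs-GDDR5 power ratio would that imply?
    def implied_gddr6_over_gddr5(board_power_w, extra_fraction, gddr5_power_w):
        extra_w = board_power_w * extra_fraction          # whole delta pinned on memory
        return (gddr5_power_w + extra_w) / gddr5_power_w  # implied GDDR6/GDDR5 ratio

    # 120 W = GTX 1660 rated TDP; 0.175 = middle of the 15-20% range quoted above.
    # The GDDR5 wattage is an assumption you can change; only a low single-digit
    # baseline gets anywhere near the ~10x ("900% more") figure.
    for gddr5_w in (2.5, 5.0, 10.0):
        print(gddr5_w, "W ->", round(implied_gddr6_over_gddr5(120, 0.175, gddr5_w), 1), "x")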
Granted, GPU memory controllers have almost certainly become more efficient since the RX 480, but I would still be surprised if the GDDR consumes under 20% of the TBP on even the latest cards. Like everyone else, your assumption that clock speed is the only factor in power usage is manifestly incorrect. GDDR5 is quadruple-pumped and GDDR6 octuple-pumped; do you really think pushing twice the amount of data through in the same time is free? The effective clock speed is quoted for a reason, it's not just a marketing term. I 100% guarantee you're wrong, again. GN compared the GDDR5 and GDDR6 models of the 1650, and the GDDR6 model draws more power at its stock clocks. With GPU clocks normalised to the GDDR5 model's, it draws yet more power.
The point *I* was making is that the GDDR5 is bottlenecking the 1650, and that the GDDR6's extra bandwidth allows the core to do more work, which obviously requires more power. I know you understand that fact despite the matched clock speeds, because you just acknowledged it yourself. And then you immediately say something that puts you right back in the "your assumption that clock speed is the only factor in power usage is manifestly incorrect" category, by outright stating that the normalised clocks mean the power difference must be down to the different VRAM, and only the VRAM. Which side are you taking? You clearly understand that clock speed is not the single factor determining power use, yet you're using normalised clock speeds to argue that the power difference must be the VRAM and only the VRAM. You can't have it both ways!
I admit that the line "We also know that an RX 480 uses 40-50W for its 8GB" was a bit of an eye-opener for me. I'm not disputing GN, but as a counter-argument it's clear that not all GDDR5 consumes that much. The 1070 Max-Q has a total TDP of just 80 W, and I really don't believe that the 8 GB of GDDR5 in a 1070 Max-Q uses 40-50 W. Let's face it, if it were using 50 W, that would mean the GP104 is somehow providing decent performance on just 30 W. That's pretty absurd. At best, I think we can assume that AMD struggles to make an efficient memory controller where Nvidia has that part nailed. At worst, it's possible that GN were wrong? I doubt it. Steve Burke is pretty passionate about GPUs and knows his stuff.