Monday, April 13th 2020

Leaked Benchmark shows Possible NVIDIA MX450 with GDDR6 Memory

A new listing has been spotted in the 3DMark results browser for what could be the NVIDIA MX450 laptop GPU. The MX450 is expected to be based on the TU117, the same chip as the GTX 1650, as speculated by @_rogame. The leaked benchmark shows the MX450 with a clock speed of 540 MHz and 2 GB of GDDR6 memory. The memory is listed at 2505 MHz, which works out to an effective data rate of roughly 10 Gbit/s. It is interesting to see the shift to GDDR6 across NVIDIA's product stack, likely due either to a shortage of GDDR5 or simply to GDDR6 now being cheaper.
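For reference, here is a rough Python sketch of the arithmetic behind that ~10 Gbit/s figure. The 4x multiplier on the reported clock and the 64-bit bus width are assumptions (typical for MX-class parts), not something the leak confirms:

    reported_clock_mhz = 2505    # from the 3DMark listing
    transfers_per_clock = 4      # assumed multiplier for a GDDR6 clock reported this way
    bus_width_bits = 64          # assumed, typical for MX-class GPUs

    data_rate_gbps = reported_clock_mhz * transfers_per_clock / 1000
    bandwidth_gbs = data_rate_gbps * bus_width_bits / 8
    print(f"~{data_rate_gbps:.1f} Gbit/s per pin, ~{bandwidth_gbs:.0f} GB/s peak")  # ~10.0 Gbit/s, ~80 GB/s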

The TU117 GPU found in the GTX 1650 GDDR6 has proven itself to be a solid 1080p gaming option. The chip is manufactured on TSMC's 12 nm process and features 1024 shading units, 64 texture mapping units, and 32 ROPs. The MX450 should provide a significant boost over integrated graphics at a TDP of 25 W, and it will sit below the GTX 1650 Mobile due to its reduced memory and tighter power/thermal constraints.
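For a rough sense of scale, a back-of-the-envelope FP32 throughput estimate in Python, assuming a fully enabled TU117 and treating the leaked 540 MHz as a sustained clock (the shipping part may be cut down and will boost differently):

    shaders = 1024                    # full TU117; the MX450 may ship with fewer enabled
    clock_ghz = 0.540                 # leaked clock, not necessarily the boost clock
    flops_per_shader_per_clock = 2    # one FMA counts as two FLOPs

    tflops = shaders * clock_ghz * flops_per_shader_per_clock / 1000
    print(f"~{tflops:.2f} TFLOPS FP32")   # ~1.11 TFLOPS at 540 MHz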
Sources: Guru3D, @_rogame

21 Comments on Leaked Benchmark shows Possible NVIDIA MX450 with GDDR6 Memory

#1
ShurikN
It is interesting to see the shift to GDDR6 across NVIDIA's product stack, likely due either to a shortage of GDDR5 or simply to GDDR6 now being cheaper.
GDDR6 also uses less power, which is essential for devices that will feature this GPU.
Posted on Reply
#2
TheLostSwede
News Editor
This looks a lot more interesting than the MX350.
Posted on Reply
#3
Ferrum Master
TheLostSwede: This looks a lot more interesting than the MX350.
I thought it was a new GeForce 4 at first... :laugh:
Posted on Reply
#4
Assimilator
ShurikN: GDDR6 also uses less power, which is essential for devices that will feature this GPU.
Incorrect, it uses more power. GTX 1650 GDDR6 models have to clock their GPUs lower than the GDDR5 models to fit within the same power budget.
Posted on Reply
#5
ShurikN
Assimilator: Incorrect, it uses more power. GTX 1650 GDDR6 models have to clock their GPUs lower than the GDDR5 models to fit within the same power budget.
Doesn't GDDR6 operate at a lower voltage? Unless the current on them is cranked up over GDDR5.
Posted on Reply
#6
Assimilator
ShurikN: Doesn't GDDR6 operate at a lower voltage? Unless the current on them is cranked up over GDDR5.
Correct, but GDDR6 operating frequencies are far higher than GDDR5. For example the GTX 1650 GDDR6 uses 12Gbps chips vs the 8Gbps on the GDDR5 model - that 50% extra bandwidth has to come from somewhere, and GDDR6 is really just "GDDR5 version 2" so power usage will scale with frequency in the same manner.
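For reference, a quick sanity check of that 50% figure in Python, assuming the GTX 1650's 128-bit memory bus for both variants:

    bus_width_bits = 128              # GTX 1650 memory bus

    def bandwidth_gbs(data_rate_gbps):
        # peak bandwidth = per-pin data rate x bus width
        return data_rate_gbps * bus_width_bits / 8

    gddr5 = bandwidth_gbs(8)          # 128 GB/s
    gddr6 = bandwidth_gbs(12)         # 192 GB/s
    print(f"{gddr5:.0f} GB/s vs {gddr6:.0f} GB/s (+{(gddr6 / gddr5 - 1) * 100:.0f}%)")  # +50%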
Posted on Reply
#7
R0H1T
Assimilator: Incorrect, it uses more power. GTX 1650 GDDR6 models have to clock their GPUs lower than the GDDR5 models to fit within the same power budget.
And how did you measure GDDR6 power consumption from that? While early models may have been power hungry, it's also true that they deliver a lot more bits/W than comparable GDDR5 solutions!
Posted on Reply
#8
IceShroom
Now people will have this instead of the MX350 with their Renoir/Tiger Lake CPU + 4 GB RAM + 1 TB HDD laptop. Nice.
Posted on Reply
#9
Assimilator
R0H1T: And how did you measure GDDR6 power consumption from that? While early models may have been power hungry, it's also true that they deliver a lot more bits/W than comparable GDDR5 solutions!
There's no good reason I can think of for NVIDIA to clock the GDDR6 models' GPUs lower, except for keeping the TBP similar to the GDDR5 models so that the same boards and coolers can be used.
Posted on Reply
#10
R0H1T
That doesn't support what you claimed, i.e. that GDDR6 uses more power; it's conjecture at best.
Assimilator: Correct, but GDDR6 operating frequencies are far higher than GDDR5. For example the GTX 1650 GDDR6 uses 12Gbps chips vs the 8Gbps on the GDDR5 model - that 50% extra bandwidth has to come from somewhere, and GDDR6 is really just "GDDR5 version 2" so power usage will scale with frequency in the same manner.
You do know that GDDR6 is based on DDR4, right, just like GDDR5 was based on DDR3? You don't suppose they're lying about regular desktop memory power numbers as well?
Posted on Reply
#11
Assimilator
R0H1T: That doesn't support what you claimed, i.e. that GDDR6 uses more power; it's conjecture at best.
Yes, it's conjecture. But if you have any other explanation for why NVIDIA would clock their GPUs slower on GDDR6 models, please feel free to share.
R0H1T: You do know that GDDR6 is based on DDR4, right, just like GDDR5 was based on DDR3? You don't suppose they're lying about regular desktop memory power numbers as well?
Where did I say anybody was lying about anything? The fact of the matter is that increasing frequency is going to consume more power.
Posted on Reply
#12
davideneco
More of a rebrand than Polaris.

And shame on you.
Posted on Reply
#13
R0H1T
Assimilator: Where did I say anybody was lying about anything? The fact of the matter is that increasing frequency is going to consume more power.
The actual frequency isn't increased that much; remember GDDR5X - this is the same. The effective rate has increased much more than the actual frequency, since GDDR memory runs at QDR. Unless you are going to say DDR4 consumes more power than DDR3, or LPDDR4 versus LPDDR4X?
Posted on Reply
#14
lexluthermiester
Assimilator: Incorrect, it uses more power. GTX 1650 GDDR6 models have to clock their GPUs lower than the GDDR5 models to fit within the same power budget.
That is a misconception. At similar clocks, GDDR6 uses much less energy and produces less heat than GDDR5.
Posted on Reply
#15
Houd.ini
Assimilator: Yes, it's conjecture. But if you have any other explanation for why NVIDIA would clock their GPUs slower on GDDR6 models, please feel free to share.
50% higher bandwidth would screw up performance and pricing in their convoluted midrange space.
Posted on Reply
#16
TheGuruStud
Houd.ini: 50% higher bandwidth would screw up performance and pricing in their convoluted midrange space.
Surely, you jest. What performance? :roll:
Posted on Reply
#17
R0H1T
lexluthermiester: That is a misconception. At similar clocks, GDDR6 uses much less energy and produces less heat than GDDR5.
I'm not sure it's a "popular" misconception, if at all. Any new memory tech is generally more efficient & less power hungry than the previous gen, exceptions being something like LPDDR4x vs say regular DDR5. The higher efficiency is one of the most crucial reasons why manufacturers switch to newer, better mem besides the increased bandwidth.
Posted on Reply
#18
watzupken
R0H1T: And how did you measure GDDR6 power consumption from that? While early models may have been power hungry, it's also true that they deliver a lot more bits/W than comparable GDDR5 solutions!
I feel that when pushed to spec, GDDR6 draws more power than GDDR5, i.e. at 14 Gbps vs 8 Gbps. This is very clear when you compare the GTX 1660 vs the 1660 Super, where the only change is the memory. I don't deny that the memory upgrade improves performance significantly, but it will increase the power envelope of the GPU for sure.

In this case, one of the reasons for the very low core clock speed is likely the GDDR6 used. The second reason, if the rumor that this uses the TU117 chip is true, is that we have seen how much more power that chip needs when jumping from the GTX 1650 to the 1650 Super. Shrinking a 100+ W part down to 25 W means a significant reduction in clock speed.

In fact, I think the MX350 should be sufficient to fend off the competition for now. The MX450 should be based on a newer process node, because 12 nm is clearly struggling to deliver such a hefty improvement in specs within this power budget.
Posted on Reply
#19
Chrispy_
watzupken: I feel that when pushed to spec, GDDR6 draws more power than GDDR5, i.e. at 14 Gbps vs 8 Gbps. This is very clear when you compare the GTX 1660 vs the 1660 Super, where the only change is the memory. I don't deny that the memory upgrade improves performance significantly, but it will increase the power envelope of the GPU for sure.
The assumption I made from that was that the 1660 is being held back by its 8 Gbps VRAM, and that switching to GDDR6 removed the bottleneck, allowing the GPU to run unhindered, which obviously increases power consumption.

In any modern GPU, VRAM is a relatively small percentage of the power usage. Even if you double the power consumption of GDDR5, it would make such a small difference in the total board power consumption that you'd be hard pressed to separate it from the margin of error in your measurements (and all evidence points towards GDDR6 actually consuming less power clock-for-clock - don't forget the 14Gbps GDDR6 in the 1660 Super is actually a lower clock than the 8Gbps GDDR5 in the vanilla 1660)
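For reference, a small Python sketch of the clock arithmetic behind that remark, using the commonly quoted memory clocks (per-pin data rate divided by transfers per clock):

    gddr5_clock_mhz = 8 * 1000 / 4    # GTX 1660: 8 Gbps GDDR5 -> 2000 MHz
    gddr6_clock_mhz = 14 * 1000 / 8   # GTX 1660 Super: 14 Gbps GDDR6 -> 1750 MHz
    print(gddr5_clock_mhz, gddr6_clock_mhz)   # the GDDR6 actually runs a lower memory clock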

Meanwhile, the power cost of extra GPU performance climbs much faster than linearly, because dynamic power scales roughly with V²·f. If you want 10% higher clocks, you will likely need a voltage bump too, and even a ~7% voltage increase on top of that 10% clock increase works out to roughly 25% more power. That's why overclocking guzzles so much energy: ~25% more juice for 10% extra clockspeed.
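For reference, a minimal Python sketch of that scaling under the usual dynamic-power approximation P ≈ C·V²·f; the 7% voltage bump is purely illustrative:

    def relative_power(clock_scale, voltage_scale):
        # dynamic power ratio: constants cancel, so P2/P1 = (f2/f1) * (V2/V1)**2
        return clock_scale * voltage_scale ** 2

    increase = (relative_power(1.10, 1.07) - 1) * 100
    print(f"~{increase:.0f}% more power for 10% higher clocks")   # ~26%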

So yeah, the 1660 Super uses 15-20% more power than the 1660 and provides about 12-15% more performance. I 100% guarantee you that the consumption increase is due to raised GPU utilisation resulting in higher core clocks. I'm making an educated guess here, but it correlates with everything else, unlike the insane assumption that lower-clocked, more efficient memory is somehow driving total board power up by 15-20%; if the memory change alone were responsible, that would imply GDDR6 uses around 900% more power than GDDR5, not the 20% less power claimed by Micron/Samsung.
Posted on Reply
#20
Assimilator
Chrispy_: In any modern GPU, VRAM is a relatively small percentage of the power usage.
GN did an excellent breakdown of why AMD went with HBM for Fury and Vega, and there are various important tidbits therein. To quote: "We also know that an RX 480 uses 40-50W for its 8GB". RX 480 TBP is north of 150 W, so you are claiming that 27-33% of total power consumption is "relatively small"? Laughable.

Granted, GPU memory controllers have almost certainly become more efficient since RX 480, but I would still be surprised if GDDR consumes under 20% of the TBP in even the latest cards.
Chrispy_: don't forget the 14Gbps GDDR6 in the 1660 Super is actually a lower clock than the 8Gbps GDDR5 in the vanilla 1660
Like everyone else, your assumption that clock speed is the only factor in power usage is manifestly incorrect. GDDR5 is quadruple-pumped, GDDR6 octuple-pumped, you really think that pushing twice the amount of data through at the same time is free? The effective clock speed is quoted for a reason, it's not just a marketing term.
Chrispy_: I 100% guarantee you that the consumption increase is due to raised GPU utilisation resulting in higher core clocks.
I 100% guarantee you're wrong, again. GN compared the GDDR5 and GDDR6 models of the 1650 and the GDDR6 model draws more power at its stock clocks. With GPU clocks normalised to the GDDR5 model's, it draws yet more power.
Posted on Reply
#21
Chrispy_
The point of my post seems to have gone over your head, despite the fact that you are clearly understanding the concepts behind it. The GN video I just watched is beautiful because it's the same GPU and they clock-match the GDDR5 and GDDR6 versions.

The point *I* was making is that the GDDR5 is bottlenecking the 1650 and that the GDDR6's extra bandwidth allows the core to do more work, which obviously requires more power. I know you understand that fact despite the matched clock speeds because of this thing you just said:
Assimilator: Like everyone else, your assumption that clock speed is the only factor in power usage is manifestly incorrect. GDDR5 is quadruple-pumped, GDDR6 octuple-pumped, you really think that pushing twice the amount of data through at the same time is free?
And then you immediately say something that puts you back in the "Like everyone else, your assumption that clockspeed is the only factor in power usage is manifestly incorrect" category, by outright stating that the normalised clocks mean the power consumption difference must be down to the different VRAM, and only the VRAM:
Assimilator: I 100% guarantee you're wrong, again. GN compared the GDDR5 and GDDR6 models of the 1650 and the GDDR6 model draws more power at its stock clocks. With GPU clocks normalised to the GDDR5 model's, it draws yet more power.
Which side are you taking? You clearly understand that clockspeed is not the single factor determining power use, but then you immediately use normalised clockspeeds to defend your argument that the power difference must be the VRAM and only the VRAM. You can't have it both ways!

I admit that the line "We also know that an RX 480 uses 40-50W for its 8GB" was a bit of an eye-opener for me. I'm not disputing GN, but as a counter-argument it's clear that not all GDDR5 consumes that much. The 1070 Max-Q has a total TDP of just 80 W; I really don't believe that the 8 GB of GDDR5 in a 1070 Max-Q uses 40-50 W. Let's face it, if it were using 50 W, that would mean the GP104 is somehow providing decent performance on just 30 W. That's pretty absurd. At best, I think we can assume that AMD struggles to make an efficient memory controller where NVIDIA has that part nailed. At worst, it's possible that GN were wrong? I doubt it. Steve Burke is pretty passionate about GPUs and knows his stuff.
Posted on Reply