Saturday, December 18th 2021
NVIDIA Announces Three New Mobile GPUs With Spring 2022 Availability
NVIDIA has just announced three new mobile GPUs, although the question is how new any of them really are, as the model names suggest they're anything but. First up is the GeForce RTX 2050, which should be based on the Turing architecture. The other two GPUs are the GeForce MX550 and MX570, both presumably based on the Ampere architecture, although NVIDIA hasn't confirmed the specifics.
The GeForce RTX 2050 features 2048 CUDA cores, more than the mobile RTX 2060, but it has lower clock speeds and a vastly lower power draw of 30-45 W, depending on the notebook design and cooling. It's also limited to 4 GB of GDDR6 memory on a 64-bit bus, which puts it in GeForce MX territory when it comes to memory bandwidth, as NVIDIA quotes a peak of a mere 112 GB/s.

The two GeForce MX parts also support GDDR6 memory, but beyond that, NVIDIA hasn't released anything tangible in terms of specs. NVIDIA mentions that the GeForce MX550 will replace the MX450, stating that both GPUs are intended to boost performance for video and photo editing on the move, with the MX570 also being suitable for gaming. All three GPUs are said to ship in laptops sometime this coming spring.
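As a quick sanity check, that 112 GB/s figure lines up with 14 Gbps effective GDDR6 on a 64-bit bus. Here's a minimal sketch of the arithmetic, assuming the 14 Gbps data rate (inferred from the quoted bandwidth, not confirmed by NVIDIA):

```python
# Peak memory bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (Gbps).
# The 14 Gbps GDDR6 data rate is inferred from NVIDIA's quoted 112 GB/s,
# not an officially confirmed spec.

def peak_bandwidth_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Return peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

print(peak_bandwidth_gbps(64, 14.0))   # RTX 2050 (64-bit bus): 112.0 GB/s
print(peak_bandwidth_gbps(192, 14.0))  # mobile RTX 2060 (192-bit bus): 336.0 GB/s
```

The same data rate on the mobile RTX 2060's 192-bit bus yields three times the bandwidth, which is why the RTX 2050 lands in MX territory despite its core count.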
Update: According to a tweet by ComputerBase, which has confirmed the information with NVIDIA, the RTX 2050 and the MX570 are based on Ampere and the GA107 GPU, while the MX550 is based on Turing and the TU117 GPU. The MX570 is also said to support DLSS and "limited" RTX features, whatever that means.
Sources:
Nvidia, Nvidia 2, @ComputerBase
33 Comments on NVIDIA Announces Three New Mobile GPUs With Spring 2022 Availability
Even though I paid a lot more than I should have, I'm glad I started having parts ordered for my new rig build back in March-April. The damn fans didn't get here until two weeks ago... so I should be going to pick it up within the next week and have it before Christmas holiday. :rockout:
RTX 2050 = GA107
MX570 = GA107
MX550 = TU117
MX550 also doesn't support RT.
Source: Computerbase
I would stay away from their products; I don't buy anything Nvidia.
All that matters is what nVidia can achieve in under 50W. If you want more than 50W of dGPU gaming in a portable form factor you pretty much NEED to sit down at a desk and find a power outlet, at which point you don't really need a laptop. A decade ago, travelling to a location for LAN play was a reasonable thing, but the world has changed dramatically since then, COVID being just one of the factors for this shift in trends.
As far as I can tell, Nvidia don't really have any compelling Ampere chips in the sub-50W range yet. Turing made do with the 1650 Max-Q variants at 35W, which were barely fast enough to justify themselves over some of the far more efficient APUs like the 4700U and 5700U, which at 25W all-in allowed for far slimmer, cooler, quieter, lighter, longer-lasting laptops that can still game.
Unfortunately, these are still based on the GA107, and we've already seen how disappointing and underwhelming that chip is at lower TDPs. The 3050 isn't exactly an easy performance recommendation even at its default 80W TDP; a cut-down version running far slower than that may struggle to differentiate itself from the ancient 1650, and any wins it does score over the 1650 will likely come at the cost of increased power draw, which kills the appeal.
I think the real problem with Ampere is that Samsung 8nm simply isn't very power-efficient. Yes, it's dense, but performance/Watt isn't where it shines. Sadly, there aren't any compelling TSMC 7nm GPUs making their way into laptops right now so we're all stuck with 2019-esque performance/Watt solutions.
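To make that trade-off concrete, here's a minimal performance-per-watt sketch; the FPS and wattage figures are hypothetical placeholders chosen to illustrate the argument, not measured results:

```python
# Illustrative performance-per-watt comparison. The FPS and TDP numbers
# below are hypothetical placeholders, not measured benchmark results.

def perf_per_watt(avg_fps: float, tdp_watts: float) -> float:
    """Frames per second delivered per watt of GPU power budget."""
    return avg_fps / tdp_watts

# Scenario from the argument above: a cut-down Ampere part that edges out
# the old 1650 Max-Q on raw FPS but needs a bigger power budget to do it.
old_1650_maxq = perf_per_watt(avg_fps=60.0, tdp_watts=35.0)   # ~1.71 fps/W
cut_down_ga107 = perf_per_watt(avg_fps=70.0, tdp_watts=45.0)  # ~1.56 fps/W

print(f"1650 Max-Q:     {old_1650_maxq:.2f} fps/W")
print(f"cut-down GA107: {cut_down_ga107:.2f} fps/W")
# A small raw-FPS win bought with a larger power budget is still a
# perf/W regression, which is what matters most under 50W.
```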
There are no differences between the two vendors' image quality when outputting the same signal and the same pixel format. What can cause apparent differences is incorrect settings, whether reported by the EDID or entered manually by the user, since NVIDIA has supported chroma subsampling since older hardware generations (including YCbCr 4:2:0 for 4K 60 Hz on Kepler GPUs with a DVI-D DL/HDMI 1.4 port), while AMD did not until Polaris.
Nowadays, every monitor and every GPU should be capable of displaying a full-range RGB signal at 8 bpc at a minimum.
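For reference, the difference between full-range and limited-range RGB is just a linear remapping of the 8-bit code values; a minimal sketch of the standard studio-swing mapping (the function names are mine, not from any driver API):

```python
# Full-range (PC) RGB uses codes 0-255; limited-range (TV) RGB uses 16-235.
# Function names here are illustrative, not from any vendor API.

def full_to_limited(value: int) -> int:
    """Map a full-range 8-bit code (0-255) to limited range (16-235)."""
    return round(16 + value * 219 / 255)

def limited_to_full(value: int) -> int:
    """Map a limited-range 8-bit code (16-235) back to full range (0-255)."""
    return round((value - 16) * 255 / 219)

# If a limited-range signal is displayed as if it were full range,
# black lands at code 16 instead of 0 -- the "grey blacks" effect.
print(full_to_limited(0))    # 16: black lifted to dark grey
print(full_to_limited(255))  # 235: white dimmed
print(limited_to_full(16))   # 0: correct black when interpreted properly
```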
But look at the green colour of the grass between RX 6900 XT on the right and RTX 3090 on the left.
RX 6900 XT vs RTX 3090 - Test in 8 Games l 4K l - YouTube
Sorry bro, that stuff is hearsay... has always been. The color reproduction on both companies' GPUs is fully accurate when set correctly.
In the bad old days we had ATi's Quack3.exe affair and Nvidia's FX-series cheating on aniso filtering. Both companies fixed those cheats once called out by the media. After that, Nvidia was slow on the uptake with gamma-corrected MSAA; that was the last real rendering difference between the two companies, around the time AMD bought ATi.
There are, however, some exceptionally dumb issues with the Nvidia driver from this decade, one of which is still a problem even to this day: the driver defaulting to limited-range RGB output over HDMI.
I guess this is why so many videos still exist in shitty limited RGB with grey blacks and poor contrast. It's even spawned several AMD vs Nvidia comparison videos where the AMD footage clearly has higher contrast than the Nvidia footage, and the ignorant vlogger just presents the two as "differences" when they simply needed to set their drivers to output at full dynamic range.
Yes, they're ignorant, but no this is not their fault. It's Nvidia's fault and their drivers have been a trainwreck for years. I kind of wish I was running an AMD GPU right now because the Nvidia control panel makes me want to rant about negligence and bad UI every time I have to actually use it (which is mercifully almost never).
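For anyone wanting to check whether a given video was actually encoded with limited or full range, ffprobe reports it; here's a small sketch wrapping a standard ffprobe invocation (the helper function and filename are illustrative):

```python
# Check a video's encoded color range with ffprobe (part of FFmpeg).
# "tv" means limited range (16-235); "pc" means full range (0-255).
# The wrapper function and filename are illustrative, not a standard tool.
import subprocess

def get_color_range(path: str) -> str:
    """Return the color_range of the first video stream in the file."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=color_range",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip() or "unknown"

print(get_color_range("capture.mp4"))  # e.g. "tv" (limited) or "pc" (full)
```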