Friday, August 12th 2022

SCHENKER XMG NEO 17 M22 Released: Ryzen 9 6900HX, RTX 3080 Ti, 16:10 Display, Liquid Cooling

Designed for maximum performance, XMG adds two models with AMD's Ryzen 9 6900HX to its NEO series. The NEO 17 has been completely redesigned and features a keyboard with CHERRY's MX ULP Tactile RGB switches, a 16:10 display with 2,560 x 1,600 pixels, a 99 Wh battery and a revised cooling system that can fully exploit the power limits of the graphics card (up to the GeForce RTX 3080 Ti). The NEO 15, on the other hand, is a platform update based on the existing chassis. The new AMD edition (model generation M22) is compatible with the external XMG OASIS laptop liquid cooling system and, in contrast to the Intel version introduced at the beginning of the year (model generation E22), is fully VR-compatible.

Featuring AMD's Ryzen 9 6900HX with 8 cores and 16 threads as well as freely configurable Nvidia graphics cards (GeForce RTX 3080 Ti, 3080 and 3070 Ti), the new NEO 17 packs a mighty punch. In addition, XMG fits the gaming and content creation laptop with a powerful cooling system: with five heat pipes, separate heat sinks on all four air outlets and two 11 mm fans with unobtrusive, low-frequency sound characteristics, even the 175 watt TGP of the RTX 3080 Ti with 16 GB GDDR6 VRAM (150 watts plus 25 watts Dynamic Boost 2.0) can be sustained permanently. The AMD Ryzen 9 6900HX also operates continuously at 65 watts under full load, with boost peaks of up to 85 watts.
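Summing the power figures quoted above gives the thermal load the cooling system must handle. This is a quick sketch using only the numbers from this article, treating simultaneous CPU and GPU full load as the worst case (an assumption; real workloads rarely peg both chips at once):

```python
# Power budget of the XMG NEO 17 (M22), using the figures quoted above.
GPU_BASE_TGP = 150       # watts: RTX 3080 Ti base TGP
GPU_DYNAMIC_BOOST = 25   # watts: Dynamic Boost 2.0 headroom
CPU_SUSTAINED = 65       # watts: Ryzen 9 6900HX continuous full load
CPU_BOOST_PEAK = 85      # watts: short CPU boost peaks

gpu_max = GPU_BASE_TGP + GPU_DYNAMIC_BOOST     # 175 W
sustained_total = gpu_max + CPU_SUSTAINED      # 240 W of continuous heat
peak_total = gpu_max + CPU_BOOST_PEAK          # 260 W short-term worst case

print(f"GPU max: {gpu_max} W, sustained: {sustained_total} W, peak: {peak_total} W")
```

So the heat-pipe array and both fans are sized around roughly 240 W of continuous dissipation, with short excursions above that.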
Like the new NEO 15 (M22), the laptop can be enhanced with the optional XMG OASIS external laptop liquid cooling system. Resulting benefits include significantly quieter operation under full load as well as lower CPU and GPU temperatures. While the Overboost performance profile, which can be selected at the touch of a button, already provides the maximum power limits, the XMG Control Center offers experienced experts a wide range of further tuning options.

Mechanical keyboard with ultra-flat CHERRY MX switches
XMG's NEO 17 is one of the first laptops to feature CHERRY's new, fully mechanical MX Ultra Low Profile Tactile RGB switches. These stand out for their tactile, non-clicking switching characteristics and boast a uniquely flat design with an overall height of only 3.5 mm. The actuation force is 65 cN; total travel and pre-travel are 1.8 and 0.8 mm respectively. Moreover, the switches, rated for a lifespan of 50 million keystrokes, impress with a highly precise typing feel and clearly defined feedback. Only the upper row of function keys and the numeric keypad of the NEO 17 utilise membrane switches.

Additional features include N-key rollover, anti-ghosting, RGB per-key illumination as well as a dedicated number pad and arrow keys. The Microsoft Precision-compliant glass touchpad is very generous with a sliding surface of 15 x 9.5 cm.

High-resolution, true-colour G-Sync 240 Hz display with 16:10 aspect ratio
The NEO 17 is the first laptop in the XMG range to integrate a display with a 16:10 aspect ratio. The 240 Hz WQXGA panel offers a resolution of 2,560 x 1,600 pixels, 380 nits of brightness and 99 percent sRGB colour space coverage. Considering these key specs, the display is perfectly suited for photo and video editing as well as gaming. The low response time of 6 milliseconds and support for Nvidia G-Sync are particularly appealing to gamers. Advanced Optimus allows for automatic switching of the display connection between the efficient iGPU from AMD and the dedicated Nvidia graphics card without a system restart; as an alternative, manual selection via a MUX switch in the BIOS is also possible.

Up to 64 GB DDR5 RAM, two PCIe 4.0 SSDs and VR-ready USB-C port with dGPU connection
As usual for XMG, the NEO 17 offers two SO-DIMM sockets providing a maximum of 64 GB DDR5-4800 RAM and two M.2 slots for fast PCI Express 4.0 SSDs. In terms of external connectivity, the laptop has all ports for large-format connectors on the back. These include the docking option for the XMG OASIS, HDMI 2.1, 2.5 Gigabit Ethernet and a USB-C 3.2 port with integrated DisplayPort 1.4 stream and connection to the Nvidia graphics card - the NEO (M22) with AMD processors is thus compatible with current VR headsets, unlike the NEO (E22) with Intel CPUs. This also explains the intended omission of USB 4.0: The corresponding controller is part of the CPU package in AMD's Rembrandt platform and is designed to connect the DisplayPort signal via the iGPU. Since the NEO 17 only has one Type-C port, XMG has opted for the more flexible USB-C 3.2, which allows a dGPU connection and therefore VR compatibility and G-Sync.
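For context, the theoretical peak bandwidth of that dual-channel DDR5-4800 configuration follows directly from the memory spec. A sketch: 8 bytes per 64-bit channel per transfer, with real-world throughput naturally lower:

```python
# Theoretical peak bandwidth of dual-channel DDR5-4800 (2x SO-DIMM).
TRANSFERS_PER_SEC = 4800e6  # DDR5-4800: 4800 MT/s
BYTES_PER_TRANSFER = 8      # 64-bit channel width
CHANNELS = 2                # two SO-DIMM sockets, dual channel

peak_gb_s = TRANSFERS_PER_SEC * BYTES_PER_TRANSFER * CHANNELS / 1e9
print(f"Peak bandwidth: {peak_gb_s:.1f} GB/s")  # 76.8 GB/s
```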
Spread over the left and right sides of the laptop, there are three USB-A ports, a card reader for full-size SD cards and separate jacks for a headset and microphone.

99 Wh battery and 330 watt power supply unit
In line with the powerful configuration of the NEO 17, XMG is shipping the laptop with a 330 watt power supply unit. As a result, it can still be recharged even at full load. Mobile power is supplied by a 99 Wh battery, which, together with the efficient AMD processor and Advanced Optimus, provides long runtimes under partial load.
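Rough runtime estimates from the 99 Wh pack are easy to sketch; the power draws below are illustrative assumptions, not XMG measurements:

```python
# Rough battery runtime estimates from the 99 Wh pack.
# The per-scenario draw figures are assumptions for illustration only.
BATTERY_WH = 99

for scenario, draw_w in [("light office use", 15),
                         ("video playback", 20),
                         ("on-battery gaming", 60)]:
    hours = BATTERY_WH / draw_w
    print(f"{scenario} (~{draw_w} W): ~{hours:.1f} h")
```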

Manufactured predominantly from aluminium, the chassis (display lid and bottom panel) measures 381.7 x 272.8 x 27 mm and weighs 2.8 kg. A stabilising magnesium frame reinforces the palm rest area: it provides additional rigidity so that the palm rest, which is finished with a pleasant grip-touch surface, remains solid even under heavy usage.

XMG NEO 15 (M22) also comes with AMD Ryzen 9 6900HX
The platform update of the new XMG NEO 15 (M22) is based on the same chassis as the already popular XMG NEO 15 (E22) and is therefore more compact and, with a weight of 2.2 kg, also more mobile than the NEO 17 (M22). The laptop is additionally offered in an entry-level version with GeForce RTX 3060 graphics. Another new feature of this model is a USB-C 3.2 port with DisplayPort connected to the dGPU: The NEO 15 (E22), released at the beginning of 2022 and featuring processors from Intel's 12th Core generation, offers Thunderbolt 4, but the DisplayPort stream is connected to the iGPU and therefore not suitable for connecting VR headsets. Other key features of the predecessor model are still included in the M22 version, such as a 240 Hz WQHD display (in a 16:9 format), an optomechanical keyboard with tactile silent switches, compatibility with the XMG OASIS and graphics cards up to the GeForce RTX 3080 Ti (up to 175 watts).

Pricing and availability
The base configuration of the XMG NEO 17 (M22), which can be freely configured on bestware.com, includes AMD's Ryzen 9 6900HX, a GeForce RTX 3070 Ti, 16 (2x8) GB DDR5-4800, a 500 GB Samsung 980 SSD and a 240 Hz WQXGA IPS display. The starting price including 19% VAT is € 2,949. Upgrade options for the graphics card include a GeForce RTX 3080 (€ 495) and RTX 3080 Ti (€ 879). The base configuration of the XMG NEO 15 (M22) is available from € 2,099 and includes a GeForce RTX 3060 and a 240 Hz WQHD IPS display. More powerful graphics cards such as the RTX 3070 Ti (€ 497), RTX 3080 (€ 992) and RTX 3080 Ti (€ 1,376) can be configured as desired; the upgrade prices are based on the starting configuration with an RTX 3060. The addition of the XMG OASIS external liquid cooling system adds € 199.
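The configuration prices above combine additively; a small sketch using only the quoted figures (actual bestware totals may differ once other components are changed):

```python
# Example configuration totals (EUR, incl. 19% VAT), from the prices quoted above.
NEO17_BASE = 2949    # NEO 17 with RTX 3070 Ti
NEO15_BASE = 2099    # NEO 15 with RTX 3060
OASIS = 199          # external liquid cooling add-on

neo17_with_3080ti = NEO17_BASE + 879           # 3828
neo15_with_3080ti = NEO15_BASE + 1376          # 3475
neo15_full_bundle = neo15_with_3080ti + OASIS  # 3674

print(neo17_with_3080ti, neo15_with_3080ti, neo15_full_bundle)
```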

Both laptops are available for pre-order now and are expected to ship from early September (NEO 17) or mid-September (NEO 15). Independently of this, a discount campaign is running at bestware.com until August 16th: anyone who pre-orders an XMG NEO (M22) or buys any other laptop during this period and chooses either a Windows 10 or Windows 11 licence (Home or Pro) will receive a discount of 100 euros on the total price.

50 Comments on SCHENKER XMG NEO 17 M22 Released: Ryzen 9 6900HX, RTX 3080 Ti, 16:10 Display, Liquid Cooling

#26
bug
Xex360With Pascal they put the same desktop GPUs on laptops, just reduced clocks, now they sell an overpriced 3080ti that is basically a slow 3070ti, for 5000€ or so I expect to see the full GPU at least the 3080.
You put way too much weight on labels/model numbers.
#27
Valantar
Xex360With Pascal they put the same desktop GPUs on laptops, just reduced clocks, now they sell an overpriced 3080ti that is basically a slow 3070ti, for 5000€ or so I expect to see the full GPU at least the 3080.
Pascal is also pretty much the only series in which this has been true. And it makes perfect sense - names are only arbitrary signifiers of relative performance between tiers, and laptops are not desktops, so there's no reason to expect them to be identical, especially given the drastically different power and thermal envelopes. And, once again, you're not getting a GA102 in a mobile form factor - it's too large, needs too much space for VRAM, needs too wide a VRM to reasonably fit in even a large gaming laptop, even if they underclocked it for efficiency. It would be a super expensive, bespoke product tier that would only exist in DTR laptops - which barely sell at all. It would lose money for both Nvidia and laptop makers. As @bug said, you put way too much weight on model numbers. Higher number=better, but mobile and desktop are entirely separate lists, and tiers are not cross-comparable.
#28
maxfly
There aren't many options for ANY kind of wcing in a mass-produced laptop. There are simply far too many variables at play, not to mention the problems associated with trying to physically fit something that might actually work into such a tight space.
Once you get the idea of a custom wced loop out of your head and look at it in terms of such a small physical area, you're not left with much else than what we see here.
Unless you think your customer might be willing to carry around a 240 rad and res/pump combo along with their laptop (not likely). It could work, but then you've got to design some kind of ultra-flat CPU and GPU blocks that won't turn your laptop into something military-grade thick. Again, the space constraints.
With all the complaints we see about gaming laptops overheating... I think this is a good example of people thinking outside the box for once.
#29
bug
maxflyThere aren't many options for ANY kind of wcing in a mass-produced laptop. There are simply far too many variables at play, not to mention the problems associated with trying to physically fit something that might actually work into such a tight space.
Once you get the idea of a custom wced loop out of your head and look at it in terms of such a small physical area, you're not left with much else than what we see here.
Unless you think your customer might be willing to carry around a 240 rad and res/pump combo along with their laptop (not likely). It could work, but then you've got to design some kind of ultra-flat CPU and GPU blocks that won't turn your laptop into something military-grade thick. Again, the space constraints.
With all the complaints we see about gaming laptops overheating... I think this is a good example of people thinking outside the box for once.
Not to mention "liquid cooling" is not mentioned on the official website anyway, it's just an unfortunate addition on TPU.
The official website only states:
The excellent cooling system of the XMG NEO 17 is one of the most striking features of the laptop, which is designed for uncompromising high performance. The interconnected system with five heat pipes and four heat sinks and air outlets allows the processor and graphics card to be pushed to their ultimate limits, while the two 11 mm fans are characterised by an unobtrusive, low-frequency sound characteristic.
#30
XMG Support
Schenker Rep
Xex360I'm curious though, is AMD also not supporting this, I had a 6800xt and during gameplay seemed to hover around 230w, with lower voltages and clocks it would be a better solution to the hopelessly slow 3080ti mobile.
The requirements between laptop and desktop cards are just different. Laptops have quite different PCB layouts where the same key components need to fit in a much tighter space. Silicon vendors don't like to spend R&D resources on helping OEMs cram a desktop layout into a laptop, basically re-inventing the laptop layout that already exists.

The 230 watts total board power of your 6800 XT is way beyond anything that is currently possible in laptops. NVIDIA RTX 3080 Ti takes up to 175 watts and is already at the limit of what most vendors can do.

It's true that AMD's RDNA2 has made huge gains in terms of performance-per-watt efficiency. That's why we'd be excited to bring a laptop with RDNA2 to the market. But this is not about desktop vs. laptop; it's about supply, vendor support and delivering a full, working product. We are not ready to reveal any specific plans yet.
bugNot to mention "liquid cooling" is not mentioned on the official website anyway, it's just an unfortunate addition on TPU.
Support for XMG OASIS is mentioned on the product page and you can select XMG OASIS on bestware when configuring your XMG NEO (E22 and M22).

But good point, the word "liquid" is not to be found on the product page at the moment. Perhaps we will add a link to the XMG OASIS micro-page which explains the liquid cooling solution in details.

From the XMG OASIS FAQ:

What is XMG OASIS?

XMG OASIS is a modular liquid cooling system that has been specially developed for the XMG NEO series. The system consists of an external housing that contains a liquid reservoir, a pump, a fan and a radiator. The radiator is the heart of the liquid cooling system: warm liquid flows through a tube from the laptop into the radiator and is cooled there by the large-area case fan. The cold liquid flows back into the laptop via a second tube, thus forming a closed circuit. Due to the large surface area of the 120mm fan, XMG OASIS generates significantly less fan noise than a usual laptop air cooling system.

The two cooling tubes are connected to a metal pipe inside the laptop through a 2-in-1 quick release connector on the back of the laptop. The pipe inside the laptop is soldered to the traditional heat pipes of the laptop's air cooling system. The tube follows a curvy path across the cooling system, indirectly touching the laptop's main heat emitters: processor, graphics card, voltage regulators and video memory. Liquid flows through the inside of this tube and, thanks to its high thermal conductivity, transports the excess heat from the aforementioned hotspots directly to the outside with surprisingly high efficiency.
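The FAQ's claim is easy to sanity-check with the basic heat-transport relation Q = m_dot * c * dT; the flow rate and coolant temperature rise below are assumed values for illustration, not XMG specifications:

```python
# Heat a modest water flow can carry: Q = m_dot * c * dT.
# Flow rate and temperature rise are assumptions, not official figures.
C_WATER = 4186         # J/(kg*K): specific heat capacity of water
FLOW_L_PER_MIN = 1.0   # assumed pump flow rate
DELTA_T = 10.0         # assumed coolant temperature rise in kelvin

m_dot = FLOW_L_PER_MIN / 60  # kg/s (1 L of water is ~1 kg)
q_watts = m_dot * C_WATER * DELTA_T
print(f"Transportable heat: ~{q_watts:.0f} W")
```

Even at this modest flow, the loop can move far more heat than the laptop's combined CPU and GPU output, so the external radiator and fan, not the liquid itself, set the practical limit.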

Cheers,
Tom
#31
lexluthermiester
ValantarDesktop PC watercooling: here's my $200, 1kg pure copper GPU waterblock with precision machined microchannels and highly optimized flow paths, alongside my $150 CPU block that's nearly as complex.

Laptop water cooling: yeah, so there's this tiny, kinda flat copper pipe here, you see, that runs kind of across the CPU and GPU in a little loop, on top of the other heatpipes, and it's got water in it.
Actually, physics says it should work. You can joke, but I'm betting it's effective. That said...

I like this system. Ticks the boxes for me. I would undervolt/underclock it all, and I'd really like someone to do a 16:10 19" or 20" notebook, but this system is solid IMHO.
#32
bug
lexluthermiesterActually, physics says it should work. You can joke, but I'm betting it's effective. That said...
Sure it should work. But like @XMG Support said above, 175W is about the limit of what it can do. Whereas on a desktop, you can "easily" manage 100W on top of that. Of course, it's still physics. Airflow, more material and more room to work with have to amount to something.
#33
lexluthermiester
bug175W is about the limit of what it can do. Whereas on a desktop, you can "easily" manage 100W on top of that. Of course, it's still physics. Airflow, more material and more room to work with have to amount to something.
Those are good points. However, this is a mobile system. It doesn't need to handle more. What it does, it likely does very well. I haven't seen one personally, so I can't say for sure, but the short math says it should work well.
#34
Valantar
lexluthermiesterActually, physics says it should work. You can joke, but I'm betting it's effective. That said...
Way ahead of you there:
ValantarI think this is the same OEM solution that LTT has covered a few times, and apparently it works kind of decently
The fact that it works doesn't make the difference between this and desktop custom water cooling any less funny.
#35
lexluthermiester
ValantarWay ahead of you there:

The fact that it works doesn't make the difference between this and desktop custom water cooling any less funny.
Yeah I didn't see that: TLDR. Fair enough though.
#36
Jism
I'm sure they could develop or install something like an ancient Thermaltake Tide Water inside of it.

Other than collecting dust, it's pretty much maintenance-free.
#37
Karti
AssimilatorIt's so funny how obviously shoehorned in there that one pathetic pipe is. It doesn't even cover the maximum surface area it could, FFS! Just goes to show that fools and their money...
The small pipe of liquid isn't cooling the GPU/CPU die itself so much as the heat pipes that actually cool the CPU/GPU.

It actually works very well.
#38
VSG
Editor, Reviews & News
I feel like I should share these directly in here so people see the data:

I have a Ryzen-based XMG Neo 15 here already that will be tested on the OASIS too, but after that I'll get the NEO 17 in so you guys will have that data as well. Point is the internal liquid cooling setup works fine, just that the design of the external OASIS cooler unit could be improved for this market.
#39
Unregistered
ValantarPascal is also pretty much the only series in which this has been true. And it makes perfect sense - names are only arbitrary signifiers of relative performance between tiers, and laptops are not desktops, so there's no reason to expect them to be identical, especially given the drastically different power and thermal envelopes. And, once again, you're not getting a GA102 in a mobile form factor - it's too large, needs too much space for VRAM, needs too wide a VRM to reasonably fit in even a large gaming laptop, even if they underclocked it for efficiency. It would be a super expensive, bespoke product tier that would only exist in DTR laptops - which barely sell at all. It would lose money for both Nvidia and laptop makers. As @bug said, you put way too much weight on model numbers. Higher number=better, but mobile and desktop are entirely separate lists, and tiers are not cross-comparable.
Before we had an "m" after the name to differentiate between laptop and desktop GPUs; with Pascal (and later Turing) nVidia removed it because they put the same die in laptops.
The rest of your arguments fall apart as the 3080 Ti laptops are insanely expensive, and Turing laptop GPUs had huge dies, like the 2080 Super which had 200 W, unlike the fake 3080 Ti mobile with 175 W.
You and @bug are naïve enough to fall for nVidia marketing; the new naming scheme is a lie to mislead people (alongside the awful power nonsense).
#40
Valantar
Xex360Before we had an "m" after the name to differentiate between laptop and desktop GPUs; with Pascal (and later Turing) nVidia removed it because they put the same die in laptops.
The rest of your arguments fall apart as the 3080 Ti laptops are insanely expensive, and Turing laptop GPUs had huge dies, like the 2080 Super which had 200 W, unlike the fake 3080 Ti mobile with 175 W.
You and @bug are naïve enough to fall for nVidia marketing; the new naming scheme is a lie to mislead people (alongside the awful power nonsense).
Well, it's good to know that we are dealing with someone with deep engineering knowledge on a level rivalling at least Nvidia's.

/s

We never saw a mobile TU102 implementation, so... yeah. Still topped out at the second largest die, still topped out at 256-bit memory. Same as now. Turing didn't have quite the same massive transient power spikes as Ampere either, which might go some way towards explaining its marginally higher power limits. But OEMs are also free to configure the 3080 Ti mobile at 175W+ if they want to - that this hasn't happened likely says more about diminishing returns for performance than anything else. Also, do you really need that 'm'? It's a laptop GPU. It sits on a laptop motherboard. It is not a desktop GPU - it can't be. It's obvious Nvidia doesn't want to highlight the difference between the two, but... does it matter? Yes, it's a branding and marketing exercise that removes a tiny sliver of clarity, but the only point at which I find it even remotely problematic is if it were actually confusing to buyers - and I don't believe there are enough people cross-shopping laptops and desktops for that to be much of a problem.
#41
Unregistered
ValantarWell, it's good to know that we are dealing with someone with deep engineering knowledge on a level rivalling at least Nvidia's.

/s

We never saw a mobile TU102 implementation, so... yeah. Still topped out at the second largest die, still topped out at 256-bit memory. Same as now. Turing didn't have quite the same massive transient power spikes as Ampere either, which might go some way towards explaining its marginally higher power limits. But OEMs are also free to configure the 3080 Ti mobile at 175W+ if they want to - that this hasn't happened likely says more about diminishing returns for performance than anything else. Also, do you really need that 'm'? It's a laptop GPU. It sits on a laptop motherboard. It is not a desktop GPU - it can't be. It's obvious Nvidia doesn't want to highlight the difference between the two, but... does it matter? Yes, it's a branding and marketing exercise that removes a tiny sliver of clarity, but the only point at which I find it even remotely problematic is if it were actually confusing to buyers - and I don't believe there are enough people cross-shopping laptops and desktops for that to be much of a problem.
You are completely missing my point: nVidia shouldn't use misleading names, that's all; they used to have a reasonable naming scheme. Now it doesn't make any sense.
I am quite aware of laptops' limitations; as such I didn't suggest a 3090, but rather to put the same die on both if the same name is used, and to give it more TGP, like 200 W; that way a 3070 Ti in a laptop would have similar levels of performance to its desktop counterpart.
As for the rest of your "arguments", as I said, you fell for nVidia marketing; to be fair, it's quite effective. It is sad to see here; we ought to be able to see through marketing.
#42
lexluthermiester
Xex360You are completely missing my point, nVidia shouldn't use misleading names that's all, they used to have a reasonable naming scheme.
No, they saw right through your point. You are missing some context which Valantar was trying to help you understand.
#43
Unregistered
lexluthermiesterNo, they saw right through your point. You are missing some context which Valantar was trying to help you understand.
What context? The only thing I read was excuses/speculations to justify nVidia's misleading marketing. None of my arguments were addressed.
#44
bug
Xex360What context? The only thing I read was excuses/speculations to justify nVidia's misleading marketing. None of my arguments were addressed.
There are no arguments to address. The correlation between laptop and desktop part names does not exist. Simple as that.
#45
Unregistered
bugThere are no arguments to address. The correlation between laptop and desktop part names does not exist anymore. Simple as that.
Fixed that for you!
/S
Let's just agree to disagree, though it is interesting to hear from people with views we disagree with.
#46
Valantar
Xex360What context? The only thing I read was excuses/speculations to justify nVidia's misleading marketing. None of my arguments were addressed.
Your arguments weren't addressed? Have you even had multiple? Let's see:
Xex360Didn't know that, probably nVidia doesn't want competition for their pathetic laptop GPUs, with Pascal they managed to put the same GPUs on laptops (better with the 1070 which had more cuda cores).
Xex360With Pascal they put the same desktop GPUs on laptops, just reduced clocks
These are essentially the same thing being said, just from slightly different angles. "Laptop GPUs used to be closer to desktop GPUs, or named differently." Which was, among other places, addressed in this post:
ValantarPascal is also pretty much the only series in which this has been true. And it makes perfect sense - names are only arbitrary signifiers of relative performance between tiers, and laptops are not desktops, so there's no reason to expect them to be identical, especially given the drastically different power and thermal envelopes.
And once again:
Xex360Before we had an "m" after the name to differentiate between laptop and desktop GPUs,
This is just naming. Names are arbitrary. You can think that one naming convention is better or worse than another, but that's just an opinion.
Xex360with Pascal (and later Turing) nVidia removed it because they put the same die on laptops.
No, they removed it because they wanted to remove the stigma of "m GPUs are crap" which had - deservedly - cemented itself over the years.

For the response to that, again, see above.

Moving on:
Xex360The rest of your arguments fall apart as the 3080ti laptops are insanely expensive
This isn't an argument, at least not one that relates to the topic at hand whatsoever. Does one segment being more expensive somehow make desktop and laptop GPUs inherently comparable? What? Does the fact that a large excavator costs more than a family sedan suddenly render them somehow comparable? Different things are different things. Mobile GPUs are not desktop GPUs.
Xex360and Turing laptop GPU had huge dies like the 2080 super which had 200w unlike the fake 3080ti mobile with 175w.
And you have already been told that this is not true. The size difference between mobile Turing and Ampere is marginal - they're both huge. Mobile Turing topped out at TU104. I was actually wrong before, as Ampere doesn't top out at GA104, but has the GA103S as its largest mobile variant. These are very comparable in terms of die size - 545mm2 and 496mm2 respectively. Yes, GA103S is marginally smaller, but not to a degree that matters. You have also persistently failed to address the actually space-consuming components of a laptop GPU, which matter far more than die size: the board space needed for ancillary circuitry, mainly VRMs and VRAM. Ampere boosts much more aggressively than Turing and as a consequence has significant power excursions/current spikes, and thus needs a beefier VRM for the same wattage. They also top out at the same 256-bit VRAM bus, with 8 VRAM dice - they're at the same level.

What does this tell us? Well, we can use our eyes and look at the boards of desktop GPUs and compare them to the boards of high end gaming laptops. Two things are clear: high end GPUs have huge and/or jam-packed boards with a lot of Z-height for VRM components; laptops are always jam-packed, do not have the luxury for lots of Z-height (outside of a few very thick laptops), and are thus more limited in what you can pack into them. Which is why there's no TU102 or GA102 laptop variants. There just isn't room, outside of the tiny, hyper-expensive niche of DTR laptops - and developing a specific SKU for that would be a massive money sink for everyone involved.

You're also blatantly ignoring the massive power consumption increase from Turing to Ampere and its attendant consequences when looking at mobile SKUs in your insistence on comparing them to desktop cards. The 2080 Super mobile has a 100W lower TDP than its desktop counterpart (configurable up to a 50W lower TDP), at 150W (up to 200W) vs. 250W. So, that's a reasonable gap, right? But how does that change when the 3080 Ti has a full 100W TDP increase over the 2080 Super? That delta grows - you can't just choose to increase laptop TDPs by 100W - physics doesn't work that way. We've also seen a fundamental change in how laptop GPUs are segmented from Nvidia, with the Max-Q labeling being abandoned, as they've realized that it's better to allow each OEM to configure the GPU to what their chassis can actually cool (with a lower bound for acceptable performance per model) than forcing fixed configurations that might not fit a given thermal envelope. This has the disadvantage of less clarity for consumers, as you can't look for the Max-Q label for low power or its absence for high performance, but also has the advantage of providing a wider, more diverse set of options for consumers as laptops can be designed at any point in between those two extremes.
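Put as numbers, a quick sketch based on the TDP figures in this post (the desktop 3080 Ti value follows from the stated 100 W increase over the 2080 Super's 250 W):

```python
# Desktop-vs-mobile TDP gap, Turing vs Ampere, using the figures above.
desktop_2080_super = 250
mobile_2080_super_max = 200   # configurable upper limit
desktop_3080_ti = desktop_2080_super + 100  # the "full 100W TDP increase"
mobile_3080_ti_max = 175

turing_gap = desktop_2080_super - mobile_2080_super_max  # 50 W
ampere_gap = desktop_3080_ti - mobile_3080_ti_max        # 175 W

print(f"Turing gap: {turing_gap} W, Ampere gap: {ampere_gap} W")
```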

There is also of course the fact that with Turing, 80-tier desktop GPUs were 04-series (with the exception of the Ti, which had no mobile counterpart), while with Ampere, they are 02. This also changes things, no? Again: are you expecting GA102 on mobile? That is a significantly larger die - 628 mm2 - again, and crucially, one equipped with 12 memory channels. Would you want a mobile GA102 cut down to 8 channels? Or are you just selectively ignoring the impossibility of implementing this much VRAM in any reasonably sized gaming laptop?

And, once again, remember the market segments and how they are evolving. Gaming laptops are shrinking. Drastically - the 17" segment today is dominated by thin-bezeled laptops around the size of 15" laptops just a few years ago. Larger than 17" is essentially nothing. And they're getting thinner and more portable too. While you could argue that the growth of SFF means desktops are also shrinking, it's still a world apart in terms of cooling capability and what can actually be fit within the confines of the chassis. A GA102 laptop GPU isn't feasible in the types of laptops that sell in any type of quantity today. Period.

This isn't "falling for Nvidia's marketing". It is taking into account the actual relevant realities surrounding these two separate product segments, and accepting that despite having similar names, names are ultimately arbitrary and for GPUs only designate relative performance within their same generation and brand (with a lot of flexibility). You are insisting that the names should be non-arbitrary, i.e. that there must be a material commonality between a mobile 3080 Ti and a desktop 3080 Ti, beyond both of them being at or near the top of their respective product stacks - extreme performance, highly priced GPUs in their respective markets. This insistence of yours is misguided and misunderstood, and from your arguments seems to be based on some weak reasoning and unrealistic desires. You cannot compare a 3080 Ti laptop to a 3080 Ti desktop and expect similar performance - they'll both be crazy fast, but still wildly different. That was true with Turing and Pascal too. And with previous generations.
#47
Unregistered
ValantarYour arguments weren't addressed? Have you even had multiple? Let's see:


These are essentially the same thing being said, just from slightly different angles. "Laptop GPUs used to be closer to desktop GPUs, or named differently." Which was, among other places, addressed in this post:

And once again:

This is just naming. Names are arbitrary. You can think that one naming convention is better or worse than another, but that's just an opinion.

No, they removed it because they wanted to remove the stigma of "m GPUs are crap" which had - deservedly - cemented itself over the years.

For the response to that, again, see above.

Moving on:

This isn't an argument, at least not one that relates to the topic at hand whatsoever. Does one segment being more expensive somehow make desktop and laptop GPUs inherently comparable? What? Does the fact that a large excavator costs more than a family sedan suddenly render them somehow comparable? Different things are different things. Mobile GPUs are not desktop GPUs.

And you have already been told that this is not true. The size difference between mobile Turing and Ampere is marginal - they're both huge. Mobile Turing topped out at TU104. I was actually wrong before, as Ampere doesn't top out at GA104, but has the GA103S as its largest mobile variant. These are very comparable in terms of die size - 545 mm² and 496 mm² respectively. Yes, GA103S is marginally smaller, but not to a degree that matters. You have also persistently failed to address the actually space-consuming components of a laptop GPU, which matter far more than die size: the board space needed for ancillary circuitry, mainly VRMs and VRAM. Ampere boosts much more aggressively than Turing, and as a consequence has significant power excursions/current spikes, and thus needs a beefier VRM for the same wattage. They also top out at the same 256-bit VRAM bus, with 8 VRAM dice - they're at the same level.
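[Editor's note: a quick sanity check on the die sizes quoted above, using only the figures given in the post (TU104 at 545 mm², GA103S at 496 mm²), to show how small the difference actually is:]

```python
# Die-size comparison of the largest mobile GPU of each generation,
# using the figures quoted in the post above.
tu104_mm2 = 545   # largest mobile Turing die (TU104)
ga103s_mm2 = 496  # largest mobile Ampere die (GA103S)

shrink_pct = (tu104_mm2 - ga103s_mm2) / tu104_mm2 * 100
print(f"GA103S is {shrink_pct:.1f}% smaller than TU104")  # roughly 9% smaller
```

A single-digit percentage difference between two ~500 mm² dice, supporting the point that both are in the same size class.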

What does this tell us? Well, we can use our eyes and look at the boards of desktop GPUs and compare them to the boards of high-end gaming laptops. Two things are clear: high-end desktop GPUs have huge and/or jam-packed boards with a lot of Z-height for VRM components, while laptops are always jam-packed, do not have the luxury of much Z-height (outside of a few very thick laptops), and are thus more limited in what can be packed into them. Which is why there are no TU102 or GA102 laptop variants. There just isn't room, outside of the tiny, hyper-expensive niche of DTR laptops - and developing a specific SKU for that would be a massive money sink for everyone involved.

You're also blatantly ignoring the massive power consumption increase from Turing to Ampere and its attendant consequences for mobile SKUs in your insistence on comparing them to desktop cards. The 2080 Super mobile has a 100 W lower TDP than its desktop counterpart (configurable up to a 50 W lower TDP), at 150 W (up to 200 W) vs. 250 W. So, that's a reasonable gap, right? But how does that change when the 3080 Ti has a full 100 W TDP increase over the 2080 Super? That delta grows - you can't just increase laptop TDPs by 100 W to match; physics doesn't work that way. We've also seen a fundamental change in how Nvidia segments laptop GPUs, with the Max-Q labeling being abandoned: they've realized that it's better to allow each OEM to configure the GPU to what their chassis can actually cool (with a lower bound for acceptable performance per model) than to force fixed configurations that might not fit a given thermal envelope. This has the disadvantage of less clarity for consumers, as you can't look for the Max-Q label for low power or its absence for high performance, but it also has the advantage of providing a wider, more diverse set of options, as laptops can be designed at any point between those two extremes.
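[Editor's note: the widening gap can be put in numbers. Desktop TDPs below are Nvidia's official figures; the 175 W maximum mobile 3080 Ti TGP (150 W + 25 W Dynamic Boost 2.0) is the figure from the article at the top of this thread:]

```python
# Back-of-the-envelope comparison of the desktop-to-mobile TDP gap,
# Turing (2080 Super) vs. Ampere (3080 Ti).
desktop_2080_super = 250        # W, desktop RTX 2080 Super TDP
mobile_2080_super = (150, 200)  # W, OEM-configurable mobile range

desktop_3080_ti = 350           # W, desktop RTX 3080 Ti TDP
mobile_3080_ti_max = 175        # W, max mobile TGP incl. 25 W Dynamic Boost

turing_gap_min = desktop_2080_super - mobile_2080_super[1]  # 50 W
turing_gap_max = desktop_2080_super - mobile_2080_super[0]  # 100 W
ampere_gap = desktop_3080_ti - mobile_3080_ti_max           # 175 W

print(f"Turing gap: {turing_gap_min}-{turing_gap_max} W; Ampere gap: {ampere_gap} W")
```

Even against the highest-TGP mobile configuration, the Ampere gap is larger than the worst-case Turing gap.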

There is also, of course, the fact that with Turing, 80-tier desktop GPUs were 04-series (with the exception of the Ti, which had no mobile counterpart), while with Ampere, they are 02. This also changes things, no? Again: are you expecting GA102 on mobile? That is a significantly larger die - 628 mm² - and, crucially, one equipped with 12 memory channels. Would you want a mobile GA102 cut down to 8 channels? Or are you just selectively ignoring the impossibility of implementing this much VRAM in any reasonably sized gaming laptop?

Again, you fail miserably to address my points, or even to understand them. I'm aware of all the points you raised, but they are irrelevant.
As I said earlier, we can agree to disagree.
#48
bug
Xex360: Again, you fail miserably to address my points, or even to understand them. I'm aware of all the points you raised, but they are irrelevant.
As I said earlier, we can agree to disagree.
This isn't us agreeing to disagree. It's just you looking at things from a very specific point of view, one that allows you to talk trash about Nvidia. It's your right, of course. Just be a man and admit it.
Posted on Reply
#49
Valantar
Xex360: Again, you fail miserably to address my points, or even to understand them. I'm aware of all the points you raised, but they are irrelevant.
As I said earlier, we can agree to disagree.
... so, perhaps try to expand on your arguments, if they are so incomprehensible to us? You said you have brought up arguments that haven't been addressed; I quoted the majority of your posts in this thread and showed how they had been responded to. What are we missing, beyond your vague and general disagreement that desktop and laptop GPUs should be considered separately, as distinct and separate types of products?
Posted on Reply
#50
bug
Valantar: ... so, perhaps try to expand on your arguments, if they are so incomprehensible to us? You said you have brought up arguments that haven't been addressed; I quoted the majority of your posts in this thread and showed how they had been responded to. What are we missing, beyond your vague and general disagreement that desktop and laptop GPUs should be considered separately, as distinct and separate types of products?
Leave him be, this conversation has run its course.
Posted on Reply