Thursday, May 2nd 2024

NVIDIA Advertises "Premium AI PC" Mocking the Compute Capability of Regular AI PCs

According to a report from BenchLife, NVIDIA has started a marketing push for the "Premium AI PC," squarely aimed at the industry's latest trend, championed by Intel, AMD, and Qualcomm, of the "AI PC": a system that features a dedicated NPU for processing smaller models locally. NVIDIA's approach comes from a different point of view: every PC with an RTX GPU is a "Premium AI PC," which holds a lot of truth. Generally, GPUs (regardless of manufacturer) hold more computing potential than the CPU and NPU combined. With its push to include Tensor cores in its GPUs, NVIDIA is preparing for next-generation software from vendors and OS providers that will harness these powerful pieces of silicon and embed more functionality in the PC.

More details about Premium AI PCs and general AI PCs should emerge at the Computex event in Taiwan. In its marketing materials, NVIDIA compares AI PCs to its Premium AI PCs, which have enhanced capabilities across various applications like image/video editing and upscaling, productivity, gaming, and developer applications. Another relevant selling point is the user base for these Premium AI PCs, which NVIDIA touts to be 100 million users. Those PCs support over 500 AI applications out of the box, highlighting the importance of proper software support. NVIDIA's systems are usually more powerful, with GeForce RTX GPUs delivering anywhere from 100 to 1,300+ TOPS, compared to the roughly 40 TOPS of NPU-based AI PCs. How other AI PC makers plan to compete in the AI PC era remains to be seen, but there is a high chance this will be the spotlight of the upcoming Computex show.
Source: BenchLife.info

28 Comments on NVIDIA Advertises "Premium AI PC" Mocking the Compute Capability of Regular AI PCs

#1
wNotyarD
NVIDIA putting the AI in "The way it's meant to be pAId".
Posted on Reply
#2
Vayra86
Nvidia seems so hellbent on telling us we need this; I'm still trying to figure out why, though.

It's really getting awfully close to that shady salesman selling you real antique coins or something.
Posted on Reply
#3
Vya Domus
Oh, you know things are rough when you have to shame your consumers into caring about your features.

Conventional GPUs were easy to sell; RT-capable GPUs less so. ML-capable GPUs, now that's a really tough sell, because hardly anyone cares; if someone wants to use something that needs ML, they just use web-based resources.
Posted on Reply
#4
Denver
I still have a brain. So... no, thank you.
Posted on Reply
#5
windwhirl
Vayra86: Nvidia seems so hellbent on telling us we need this; I'm still trying to figure out why, though.

It's really getting awfully close to that shady salesman selling you real antique coins or something.
I still haven't heard of anything AI-related that I can run locally in my computer that meets the following criteria:
  • Needs to be fast (i.e., I don't have to wait for whatever result it is for longer than 10 seconds)
  • Doesn't require an obscene amount of computing power or purchasing new hardware
  • Is actually useful
  • Doesn't require touching a command prompt or doing any kind of configuration the regular user wouldn't know how to do.
  • EDIT: Doesn't require to be a privacy nightmare or have ethical concerns regarding whether the model was built with properly licensed data instead of stolen data
Posted on Reply
#6
Darmok N Jalad
At least their AI chat security bugs can operate at premium-level speeds.
Posted on Reply
#7
Nioktefe
They are salty because they will be excluded from all the portable AI craziness.

Every device that is sub-50 W will not use an Nvidia GPU and will rely on an integrated NPU (which is good, because that way it's less of a power hog).

They could be part of the Arm-on-PC race and try to compete with Apple and Qualcomm. But it seems they look more toward the insane margins of the datacenter, and they will have only salt for consumers.
Posted on Reply
#8
Steevo
It's another PhysX situation, just like their pre-cooked stuff back when GPUs weren't powerful enough to run real-time physics plus rendering.

Actually, it's the turd in the punch bowl: getting PC nerds to pay for them to create more AI business so they can ignore gaming, just to get dumb results from inadequate datasets while having the privilege of paying to beta test it.
Posted on Reply
#9
HisDivineOrder
I can't wait till Jensen gets up and tells us how AI-powered Nvidia VLSS Pro (Voice Layer Sound System) with VLI (Voice Language Interface) is the secret sauce to making NPCs talk to us for realsies, and shows off incredible conversations between players and NPCs. Then everyone floods YouTube with videos of them talking about dingdongs and hohos and making the NPCs glitch. The scripting language can be called VUDA. Bring together all the buzzwords into a salad so chock full of hyperbole and exaggeration that we'll all spend thousands of dollars on 80-series cards to have weird conversations with NPCs and pine for the days of actual writers writing dialogue.

But by then writers will be dead, and AI-written articles will tell us they were never real and were a myth. "Humans cannot write stories," the articles will say. "Preposterous."
Posted on Reply
#10
Dr. Dro
Vya Domus: Oh, you know things are rough when you have to shame your consumers into caring about your features.

Conventional GPUs were easy to sell; RT-capable GPUs less so. ML-capable GPUs, now that's a really tough sell, because hardly anyone cares; if someone wants to use something that needs ML, they just use web-based resources.
Well, it just so happens that practically every modern GPU is ML-capable. It all boils down to "AI is the new kool thing and if you aren't using it, you're lame."

Guess I'm lame, then.
Nioktefe: They are salty because they will be excluded from all the portable AI craziness.
Quite the contrary: NVIDIA retains the most profitable slice of the AI market with a very comfortable lead, and any significantly powerful mobile system will sport their hardware anyway. It'll actually be a combination of Intel's or AMD's NPUs + a GeForce "AI GPU," which is somewhat the point.

$NVDA goes BRR.

Posted on Reply
#11
Random_User
Vayra86Nvidia seems so hellbent on telling us that we need this, I'm still trying to figure out why I need this though.

Its really getting awfully close to that shady salesman selling you real antique coins or something.
This mockery isn't new at all. Just like everyone was supposed to get PhysX, which suddenly became CUDA-dependent. And then tessellation (yeah, yeah, it was AMD's train first), HairWorks... and eventually RTRT, as the sole reason to use and push the Tensor cores.
This is an old game. People have been fooled and tricked for a while, but somehow completely missed it; instead they began to praise it as a "holy grail," even though it had already existed and been in use for about half a century. Never mind that nobody can actually use it to its full extent with the capabilities of current consumer devices, but that's another story. And now the gullible masses are being baited by the fancy AI stuff, but no one can really give a reasonable answer for why they need it. Even worse, most people don't even ask themselves whether it's something they really need and want. Most just consume, as if that is the way it should be.

The answer is: about a hundred years ago, electricity was introduced. Hardly anyone, from big industrial capital owners down to ordinary people, could imagine any use for it beyond some light bulbs hanging from the ceiling. And then entire continents became dependent on it like there's no tomorrow. The most horrible thing is that most electric power is criminally misused for things that are not only unneeded but should not exist at all. So much stuff that people could easily get along without, but acquire only out of laziness and curiosity. People put themselves into this cozy cell, because being weak prey is easy and beneficial.

And Nvidia is the first company to know this, and to exploit this dependency and weakness to the maximum. They've caught this whole "wave" and keep introducing even more useless ideas to employ. So far, people greet them with huge praise. It's a rampage.
HisDivineOrder: I can't wait till Jensen gets up and tells us how AI-powered Nvidia VLSS Pro (Voice Layer Sound System) with VLI (Voice Language Interface) is the secret sauce to making NPCs talk to us for realsies, and shows off incredible conversations between players and NPCs. Then everyone floods YouTube with videos of them talking about dingdongs and hohos and making the NPCs glitch. The scripting language can be called VUDA. Bring together all the buzzwords into a salad so chock full of hyperbole and exaggeration that we'll all spend thousands of dollars on 80-series cards to have weird conversations with NPCs and pine for the days of actual writers writing dialogue.

But by then writers will be dead, and AI-written articles will tell us they were never real and were a myth. "Humans cannot write stories," the articles will say. "Preposterous."

Some very popular games feel like their scripts, dialogues, and even entire plots were written by a machine, even though they were made almost a decade ago. Some writers are just so awful at their jobs that it's scary that AI picks up their "style" as an example and makes it the basis of its own "creativity."

It's all worse now. The Ministry's speakers are being replaced with an AI, so nobody ends up questioning the ones in power.
Posted on Reply
#12
phints
The average consumer gives zero f***s about an "AI PC." Really, what Nvidia is saying here is: a PC with an iGPU vs. a PC with one of their RTX cards.
Posted on Reply
#13
Totally
The usual Nvidia grift: rush out to get ahead of everyone and establish a foothold. Make a push to be the industry leader in an area. Once established, wall it off and charge out the nose for access while attempting to corner the market. Then throw a tantrum and go on a mocking campaign when the industry gravitates away toward a more open or royalty-free standard.
Posted on Reply
#14
JohH
Well, they're right. If you're buying a PC for inference, then they're better at it. And it can do training too.

Are they not supposed to point that out? Why does it make some people here mad? I'm not gonna buy a PC based on any of this, but maybe someone else will.
Posted on Reply
#15
Camm
Nvidia is acting like a petulant child while refusing to address the fact that many of its AI products simply consume too much power to be left on all the time.
Posted on Reply
#16
Carillon
JohH: Well, they're right. If you're buying a PC for inference, then they're better at it. And it can do training too.

Are they not supposed to point that out? Why does it make some people here mad? I'm not gonna buy a PC based on any of this, but maybe someone else will.
The problem is that a "premium AI PC" could have no NPU and thus not be compatible with "AI PC" software.
IMHO the whole AI PC thing is BS; putting an NPU next to an iGPU that already has AI cores makes no sense. NPUs, again in my opinion, will end up consuming more power than a discrete GPU, because people will eventually use them (willingly or unwillingly, pressing the AI key by mistake), versus not having an NPU and therefore never trying it.
Posted on Reply
#17
Minus Infinity
Hey guys, we don't make an NPU, so all of them are shit and our uber-expensive, power-hungry GPUs are the way to go, even for laptops where battery life is important. Just keep it plugged in, you'll be fine.

Seriously, Nvidia will eventually be in a pickle, as it doesn't make CPUs for laptops or desktops, and in laptops dGPUs will eventually be a distant memory as APUs keep getting better and better and come with very powerful NPUs. Lunar Lake will offer 100 TOPS of total performance, Panther Lake will double that, and Strix's successor will no doubt do the same.

Nvidia can't just make a dNPU, so how will it compete long term? Or maybe it just won't give a shit, since it earns hundreds of billions from datacenter and commercial AI. Will they try to get into desktop/laptop CPUs/APUs based on Arm?
Posted on Reply
#18
Dr. Dro
Totally: The usual Nvidia grift: rush out to get ahead of everyone and establish a foothold. Make a push to be the industry leader in an area. Once established, wall it off and charge out the nose for access while attempting to corner the market. Then throw a tantrum and go on a mocking campaign when the industry gravitates away toward a more open or royalty-free standard.
Unfortunately for us, it would seem they're doing it right. Going by the latest earnings report, things are going really badly over at AMD's graphics division (not that it should be unexpected, given how things have been going). Last quarter brought in only ~$900M while the GPU market is booming this much. Imagine if it weren't.
Posted on Reply
#19
Hyderz
But NVIDIA didn't put down the premium "price".
Posted on Reply
#21
azrael
Personally, I'm mocking every AI PC.
Posted on Reply
#22
vmarv
AleksandarK: In its marketing materials, NVIDIA compares AI PCs to its Premium AI PCs, which have enhanced capabilities across various applications like image/video editing and upscaling, productivity, gaming, and developer applications. Another relevant selling point is the user base for these Premium AI PCs, which NVIDIA touts to be 100 million users. Those PCs support over 500 AI applications out of the box, highlighting the importance of proper software support.
I'd like to see the list of these 500 AI applications. :rolleyes:
Posted on Reply
#23
stimpy88
You could buy two or three AI PCs for what nGreedia charges for just a 4090.
Posted on Reply
#24
sephiroth117
Minus Infinity: Hey guys, we don't make an NPU, so all of them are shit and our uber-expensive, power-hungry GPUs are the way to go, even for laptops where battery life is important. Just keep it plugged in, you'll be fine.

Seriously, Nvidia will eventually be in a pickle, as it doesn't make CPUs for laptops or desktops, and in laptops dGPUs will eventually be a distant memory as APUs keep getting better and better and come with very powerful NPUs. Lunar Lake will offer 100 TOPS of total performance, Panther Lake will double that, and Strix's successor will no doubt do the same.

Nvidia can't just make a dNPU, so how will it compete long term? Or maybe it just won't give a shit, since it earns hundreds of billions from datacenter and commercial AI. Will they try to get into desktop/laptop CPUs/APUs based on Arm?
AI is a GPU thing; you're speaking of consumer AI products (phones, laptops, etc.) where a CPU could indeed be enough. However, an idle/near-idle Nvidia card consumes a ridiculous amount of power, especially in a laptop configuration.

It's a budget thing too: a CPU+GPU combination is more expensive than a CPU alone.
Posted on Reply
#25
Jism
Vayra86: Nvidia seems so hellbent on telling us we need this; I'm still trying to figure out why, though.

It's really getting awfully close to that shady salesman selling you real antique coins or something.
Microsoft's next Windows will have some sort of AI that requires dedicated AI hardware.

But it's a big scam if you ask me, to give the whole PC industry another "impulse," as if we really must have it.

Truth is, all that AI integration, whether it comes from Google, MS, Apple, or Meta, has one big thing in common: the collection of telemetry, or simply put, big data, across a billion devices all over the world.
Posted on Reply