Monday, December 13th 2021

Intel Envisions a Future in which Your Devices Share Hardware Resources

We've had remote gaming for several years now—where your laptop with a basic iGPU can stream gameplay as it's being rendered on your gaming desktop, either across your home network or across the Internet. We've also seen cloud gaming, where you pay a provider like NVIDIA GeForce NOW to host your digital game licenses and render your games in its datacenters, streaming them to your device. What if this idea were turned on its head—what if your laptop held your software or games, and simply streamed close-to-metal data over the network to other devices, using their hardware resources? Intel thinks this is possible.

Intel today delivered a fascinating presentation titled "Powering Metaverses" on the sidelines of the Real-Time Conference 2021 (RTC 2021) virtual summit. The company is working on a technology, dubbed "resource abstraction," that intelligently senses compute resources available to a device across other devices on the network; accounts for network bandwidth and latency; and treats those resources as if they were available to the local machine. Intel showed a conceptual demo of a laptop with a basic iGPU dramatically improving gaming performance by roping in the hardware resources of a gaming desktop on the network, without the game actually being installed on that desktop. If latency-sensitive applications like games can be pulled off on such a system, it bodes well for applications that aren't as latency-sensitive, or even as bandwidth-sensitive. Resource abstraction will feature prominently as Intel steers toward Web 3.0 and metaverses. The concept video can be found here.
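To make the idea concrete, here is a minimal, purely illustrative sketch of what "resource abstraction" could look like: ranking remote devices by how much of their compute is actually usable once network bandwidth and latency are accounted for. None of the names, numbers, or weights below come from Intel's presentation; they are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class RemoteDevice:
    name: str
    compute_tflops: float   # raw GPU compute the device advertises
    bandwidth_gbps: float   # measured link bandwidth to that device
    latency_ms: float       # measured round-trip latency to that device

def effective_score(dev, frame_budget_ms=16.7):
    # Discount raw compute by how much of a 60 FPS frame budget the network
    # round trip eats, and by how thin the pipe is. The weighting is invented.
    if dev.latency_ms >= frame_budget_ms:
        return 0.0  # the round trip alone blows the frame budget
    latency_factor = 1.0 - dev.latency_ms / frame_budget_ms
    bandwidth_factor = min(dev.bandwidth_gbps / 10.0, 1.0)  # treat 10 Gbps as "enough"
    return dev.compute_tflops * latency_factor * bandwidth_factor

devices = [
    RemoteDevice("gaming-desktop", compute_tflops=20.0, bandwidth_gbps=1.0, latency_ms=1.0),
    RemoteDevice("htpc", compute_tflops=4.0, bandwidth_gbps=2.5, latency_ms=3.0),
]

# "Abstract" the most useful remote device into the local resource pool.
best = max(devices, key=effective_score)
print(best.name, round(effective_score(best), 2))

A real scheduler would obviously be far more involved; the point is only that the decision of which remote hardware to borrow has to weigh the network path, not just the silicon on the other end.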

11 Comments on Intel Envisions a Future in which Your Devices Share Hardware Resources

#1
chris.london
“The company put out a conceptual demo of a laptop with a basic iGPU dramatically improving gaming performance by roping in hardware resources of a gaming desktop on the network.”

The key word is conceptual. Even Thunderbolt seriously bottlenecks GPUs; you would need very expensive wired networking gear to pull this off, and that gear tends to generate quite a bit of heat too. Wireless definitely wouldn’t work.

What I don’t understand is why you wouldn’t just stream from the desktop computer in the first place. Intel’s approach has only one advantage over streaming, namely that you wouldn’t have to install the game on the desktop computer, but is that enough considering all the downsides?

I don’t want to call this a dumb idea just yet, but I am definitely thinking it.
Posted on Reply
#2
Flanker
They need to envision a future where they are selling their graphics cards to consumers
Posted on Reply
#3
dragontamer5788
This sounds like Intel is poorly copying IBM's Z-system announcement, where chips share their L2 cache to make L3 or L4 cache for the whole system. (Apparently, remote L2 cache is still faster than reading/writing to DDR4, so it's still a net benefit.)
Posted on Reply
#4
Mussels
Freshwater Moderator
If they can spread GPU power out reasonably and keep things fast enough (gigabit can keep things in the 1ms range, and latency matters a lot here), this could prove rather interesting


It'd be like GeForce NOW/Steam in-home streaming/Moonlight, but mixed with GPU mining
chris.london said:
“The company put out a conceptual demo of a laptop with a basic iGPU dramatically improving gaming performance by roping in hardware resources of a gaming desktop on the network.”

The key word is conceptual. Even Thunderbolt seriously bottlenecks GPUs; you would need very expensive wired networking gear to pull this off, and that gear tends to generate quite a bit of heat too. Wireless definitely wouldn’t work.

What I don’t understand is why you wouldn’t just stream from the desktop computer in the first place. Intel’s approach has only one advantage over streaming, namely that you wouldn’t have to install the game on the desktop computer, but is that enough considering all the downsides?

I don’t want to call this a dumb idea just yet, but I am definitely thinking it.
Because the program has to be installed on the desktop.
Their example is getting the GPU power without needing the game/program installed at the desktop end


Edit: I can't reply lower down, server issue
TheinsanegamerN said:
“We already have that; in-home streaming was solved by Valve 8 years ago.”
No, we don't

Steam in-home streaming requires Steam installed at both ends, and the game at the server end.
This is asking it to be done the other way around, with a remote machine adding GPU power

125MB/s can stream 4K from Netflix... five times. They're not using this as an HDMI cable here; they're using compressed video data of some kind.
Posted on Reply
#5
TheinsanegamerN
Mussels said:
“If they can spread GPU power out reasonably and keep things fast enough (gigabit can keep things in the 1ms range, and latency matters a lot here), this could prove rather interesting.”
Gigabit Ethernet tops out at 125MB/s. The GeForce GT 1030 has 48GB/s of memory bandwidth. NVLink, used to link two GPUs, is 300GB/s.

In the realm of "fast enough", gigabit is somewhere between molasses and the heat death of the universe.
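To put those figures side by side, a quick back-of-the-envelope comparison (using only the numbers quoted above):

# Back-of-the-envelope comparison of the link speeds quoted above, in GB/s.
gigabit_ethernet = 0.125   # 1 Gb/s is roughly 125 MB/s
gt_1030_vram = 48.0        # GT 1030 (GDDR5) memory bandwidth
nvlink = 300.0             # NVLink between two GPUs

print(f"GT 1030 VRAM: {gt_1030_vram / gigabit_ethernet:.0f}x gigabit Ethernet")   # ~384x
print(f"NVLink:       {nvlink / gigabit_ethernet:.0f}x gigabit Ethernet")         # ~2400x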
Mussels said:
“It'd be like GeForce NOW/Steam in-home streaming/Moonlight, but mixed with GPU mining.”
We already have that; in-home streaming was solved by Valve 8 years ago.
Mussels said:
“Because the program has to be installed on the desktop. Their example is getting the GPU power without needing the game/program installed at the desktop end.”
We can't get two GPUs, linked together through a high-speed interface in a PC, to sync properly, and you think Intel is going to achieve this with two entirely different computers, possibly running different architectures, linked over a network? Intel's paper is pure conjecture on "what could happen," and wouldn't be out of place in Disney's "Tomorrowland" from the '60s.
Posted on Reply
#6
Steevo
How many home routers have enough backplane bandwidth to make this remotely feasible? Since the image isn't or hasn't been rendered locally, the resources have to be streamed to the target machine, loaded into VRAM, run through the graphics pipeline without anything like a PCIe bus' worth of bandwidth available, rendered, compressed, streamed back to the client, decompressed, and displayed. There is no way that can be done with any assurance of performance, let alone with safety protocols to prevent the direct access from being exploited. There is simply not enough bandwidth, or low enough latency, in almost any home or basic office hardware to make this work.
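For a sense of scale, here's a rough per-frame budget for that round trip. Every number below is an illustrative guess, not a measurement:

# Hypothetical per-frame budget for the remote-rendering round trip, in milliseconds.
# All values are illustrative guesses, not measurements.
steps_ms = {
    "send draw data/assets to the remote GPU": 4.0,
    "upload to VRAM and render":               6.0,
    "compress the rendered frame":             2.0,
    "stream the frame back over the network":  3.0,
    "decompress and display":                  1.5,
}

total = sum(steps_ms.values())
print(f"{total:.1f} ms per frame -> at best ~{1000 / total:.0f} FPS, before any local work")

With guesses like those, the round trip alone eats essentially the whole 16.7 ms budget of a 60 FPS frame.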

So, out of everyone on the Internet, maybe 5% could pull this off, and all of them are people who know what that means and will usually have a dedicated gaming rig and a decent gaming laptop with Steam in-home streaming already installed.

Congratulations Intel, you envisioned 2018. It’s time to stop dreaming of the past and trying to rebrand it or repackage it as your own.
Posted on Reply
#7
Mussels
Freshwater Moderator
TheinsanegamerN said:
“We already have that; in-home streaming was solved by Valve 8 years ago.”
No, we don't

Steam in-home streaming requires Steam installed at both ends, and the game at the server end.
This is about NOT needing the games installed anywhere else, and not needing a server... just borrowing their GPU power.
This is asking it to be done the other way around, with a remote machine adding GPU power

125MB/s can stream 4K from Netflix... five times. They're not using this as an HDMI cable here; they're using compressed video data of some kind.
Posted on Reply
#8
InVasMani
So they want to borrow GPU resources from the desktop? Wouldn't it just be easier to stream them? Just put a capture card in the laptop, or one that connects via USB-C or Thunderbolt. This design seems convoluted and crazy; it's like trying to do something like Lucid Hydra from a desktop to a laptop. Seems a bit like an "if it's not broke, don't fix it" scenario. What is the advantage of this envisioned idea, exactly?! Xilinx has something a bit like it. I'm not sure what Intel is targeting exactly with this laptop idea; probably many use-case segments, I imagine. I'm perplexed about what they are targeting, and why this method over something that seems easier to accomplish. There must be more to it than "let's stream GPU data to a laptop for games."




Why reverse which system runs the software, exactly? It seems like they want the client to just run the GPU drivers and use the hardware of a server, essentially. But why do they want that over the server running both and streaming it to the client, which can already be done pretty well and easily?

Also Intel's presentation slide be like...it's never too late to Mario party! My whole perspective of the internet has forever changed...
Posted on Reply
#9
Mussels
Freshwater Moderator
InVasMani said:
“So they want to borrow GPU resources from the desktop? Wouldn't it just be easier to stream them? Just put a capture card in the laptop, or one that connects via USB-C or Thunderbolt. This design seems convoluted and crazy; it's like trying to do something like Lucid Hydra from a desktop to a laptop. Seems a bit like an "if it's not broke, don't fix it" scenario. What is the advantage of this envisioned idea, exactly?! Xilinx has something a bit like it. I'm not sure what Intel is targeting exactly with this laptop idea; probably many use-case segments, I imagine. I'm perplexed about what they are targeting, and why this method over something that seems easier to accomplish. There must be more to it than "let's stream GPU data to a laptop for games."

Why reverse which system runs the software, exactly? It seems like they want the client to just run the GPU drivers and use the hardware of a server, essentially. But why do they want that over the server running both and streaming it to the client, which can already be done pretty well and easily?

Also Intel's presentation slide be like...it's never too late to Mario party! My whole perspective of the internet has forever changed...”
They want to stream GPU resources.
They never said they wanted to limit it to one GPU max.


This could be something they envision powering NVIDIA's GeForce NOW server racks, with the load spread across multiple machines, or a workplace where everyone gets ARM-based Apple products with no real horsepower but can remotely add the grunt on demand from servers elsewhere in (or out of) the office.


I wouldn't mind having one big GPU in the house, and the rest on iGPU, knowing the GPU power could be streamed around as needed.
Posted on Reply
#10
TheoneandonlyMrK
And their example is total balls anyway; I can already Steam Link games, and I can skip buying that PC on the server and use NVIDIA GRID!
Too niche perhaps, but as a concept I like shared resources.
Posted on Reply
#11
InVasMani
I just don't get what they gain out of moving the software from the server to the client. That actually seems like it would be slower and add latency, because the client then has to instruct the server what to do rather than the server directly doing it and sending the result to the client. I agree with the above, though: shared resources are great. The only thing I've drawn from it that makes sense is that it's aimed at pooling resources for the client from different servers, or really each machine could be both a client and a server and switch dynamically on a whim.

It seems like they could've presented the idea better. I think things will head in this direction eventually, though, out of ease and necessity, as opposed to building a single server bigger and bigger and more interconnected. Much like with yields on larger chip dies, you can only expand so much, so easily, before alternatives start to make better sense.
Posted on Reply