Wednesday, August 31st 2022

Intel Meteor Lake Can Play Videos Without a GPU, Thanks to the New Standalone Media Unit

Intel's upcoming Meteor Lake (MTL) processor is set to deliver a wide range of exciting solutions, the first being the Intel 4 manufacturing node. Today, however, we have some interesting Linux kernel patches indicating that Meteor Lake will have a dedicated "Standalone Media" Graphics Technology (GT) block to process video/audio. Moving encoding and decoding off the GPU to a dedicated media engine will allow MTL to play back video without the GPU, leaving the GPU free to be used as a parallel processing powerhouse. Features like Intel QuickSync will be built into this unit. What is interesting is that this unit will sit on a separate tile, joined with the rest using the tile-based packaging found in Ponte Vecchio (which has 47 tiles).
Intel Linux Patches:
Starting with [Meteor Lake], media functionality has moved into a new, second GT at the hardware level. This new GT, referred to as "standalone media" in the spec, has its own GuC, power management/forcewake, etc. The general non-engine GT registers for standalone media start at 0x380000, but otherwise use the same MMIO offsets as the primary GT.

Standalone media has a lot of similarity to the remote tiles present on platforms like [Xe HP Software Development Vehicle] and [Ponte Vecchio], and our i915 [kernel graphics driver] implementation can share much of the general "multi GT" infrastructure between the two types of platforms.
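
In driver terms, that last point means the media GT can reuse the primary GT's register layout with a per-GT base added on top. Here is a minimal sketch of the idea in C; it is not the actual i915 code, and the struct and helper names are hypothetical:

```c
#include <stdint.h>

/* Per-GT MMIO base: the primary GT keeps base 0, while the standalone
 * media GT's general (non-engine) registers start at 0x380000, reusing
 * the same register offsets as the primary GT (per the patch notes). */
#define PRIMARY_GT_BASE  0x000000u
#define MEDIA_GT_BASE    0x380000u

struct gt {
    uint32_t mmio_base;  /* added to every shared register offset */
    /* ... per-GT GuC, forcewake and power-management state ... */
};

/* Hypothetical helper: turn a shared register offset into the absolute
 * MMIO offset for a given GT, so one accessor path serves both GTs. */
static inline uint32_t gt_reg(const struct gt *gt, uint32_t offset)
{
    return gt->mmio_base + offset;
}
```

With the base folded in like this, a single set of register accessors can serve both GTs, which is presumably what the patches mean by sharing the "multi GT" infrastructure.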
Sources: Linux patches, via Phoronix

32 Comments on Intel Meteor Lake Can Play Videos Without a GPU, Thanks to the New Standalone Media Unit

#2
Verpal
ExcuseMeWtf: This actually sounds very useful.
IMO Intel graphics are always the best when used for low-power applications and QuickSync; stable and functional.
#3
thewan
I hope this means we can get F-series CPUs that drop the iGPU but retain the video encode/decode block. Or that we can disable the iGPU while retaining video encode/decode in the K and non-K series. The video engine from Intel is miles ahead of the competition, with the exception of the Apple M series, that is.
#5
john_
I wonder if this is what AMD is trying to do with their new GPUs: take everything but the CUs (if I understand it right) and move it to another chiplet/tile. Later they can just increase the size of the chiplet/tile that houses the CUs, and/or keep making everything else, like the media engine, on an older process, as they are doing with the I/O die in their CPUs.
#6
bug
Besides easier manufacturing, is there an actual advantage to this? I mean, GPUs can power gate unused areas anyway, so they're basically turned off while decoding video (unless using shaders for post-processing).
#7
aQi
Things just got interesting on a whole new level.
#8
AM4isGOD
Meteor Lake will be a new dawn for Intel, as an end to the monolithic CPU.
#9
ThrashZone
Hi,
Seems like a clash in drivers for audio and graphics, and in quality, is coming soon.
Adding complexity where none is needed.

Personally, if I use a graphics card I'd prefer it to be used primarily, and only if I didn't use one would onboard graphics be preferred, for obvious reasons.

Maybe it's just to make browser hardware acceleration easier to maintain?
#10
Wirko
thewan: I hope this means we can get F-series CPUs that drop the iGPU but retain the video encode/decode block. Or that we can disable the iGPU while retaining video encode/decode in the K and non-K series. The video engine from Intel is miles ahead of the competition, with the exception of the Apple M series, that is.
Yes, the logical assumption would be that GPU means iGPU here.
#11
Assimilator
bug: Besides easier manufacturing, is there an actual advantage to this? I mean, GPUs can power gate unused areas anyway, so they're basically turned off while decoding video (unless using shaders for post-processing).
Not so much easier manufacturing as lowering defect rates to increase product value. Video encode/decode is relatively uncomplicated fixed-function hardware that has a low failure rate during manufacture, versus something intricate like a full-blown GPU that is far more likely to end up defective. It makes complete sense to split the two so that more CPUs ship with functioning video decode/encode (which, let's be honest, is all that 99.9% of Intel's iGPUs are used for).
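
As a back-of-the-envelope illustration of the defect-rate argument, here is a sketch using the simple Poisson yield model Y = exp(-D*A); the defect density and die areas below are made-up numbers, not real Intel data:

```c
#include <math.h>
#include <stdio.h>

/* Poisson yield model: Y = exp(-D * A), where D is the defect density
 * (defects/cm^2) and A is the die area (cm^2). Illustrative numbers only. */
int main(void)
{
    const double D = 0.1;           /* defects per cm^2 (made up) */
    const double gpu_area   = 2.0;  /* big GPU tile (made up)     */
    const double media_area = 0.2;  /* small media tile (made up) */

    printf("GPU tile yield:   %.1f%%\n", 100.0 * exp(-D * gpu_area));
    printf("Media tile yield: %.1f%%\n", 100.0 * exp(-D * media_area));
    /* A monolithic die containing both fails if either part does,
     * so its yield is roughly the product of the two. */
    printf("Monolithic yield: %.1f%%\n",
           100.0 * exp(-D * (gpu_area + media_area)));
    return 0;
}
```

The tiny media tile survives almost every defect draw (about 98% here versus about 82% for the big tile), so splitting it out means it can still be sold even when the GPU tile next to it is scrap.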
#12
bug
Assimilator: Not so much easier manufacturing as lowering defect rates to increase product value. Video encode/decode is relatively uncomplicated fixed-function hardware that has a low failure rate during manufacture, versus something intricate like a full-blown GPU that is far more likely to end up defective. It makes complete sense to split the two so that more CPUs ship with functioning video decode/encode (which, let's be honest, is all that 99.9% of Intel's iGPUs are used for).
Yeah, poor choice of words. Manufacturing is actually more complicated, but when you put out smaller dies, you get to churn out more chips per wafer; that's what I meant. Not to mention you can build this on a not-so-cutting-edge node, since it's not that performance-critical.
#13
Chrispy_
I don't understand.

Intel IGPs already have fixed-function decode/encode in a media block, and have had for at least a decade. That fixed-function block was previously under the umbrella of "IGP", but that was merely a naming convention; it was part of the CPU silicon and not actually responsible for display output or GPU functions.

As far as I can tell (please correct me if I'm missing something fundamental), this announcement is just a shift in the way the existing encode/decode FF hardware is named. They call it a tile now, rather than a logic block, and that makes their block diagrams tidier to look at, but the end result is that it's still baked into the CPU die as before.

The only benefit or change that I can imagine is that QuickSync will potentially be available for the SKUs ending in F.
#14
ZoneDymo
Verpal: IMO Intel graphics are always the best when used for low-power applications and QuickSync; stable and functional.
I do wanna chime in here: I have a 12600K, so currently still the latest Intel has to offer, and it's not actually THAT stable. Watching YouTube vids on it, the screen sometimes goes black, and when I have 2 videos open with one playing, sometimes it glitches out and copies that vid to the other window, and I have to reload it to get it back to normal.

Not a huge issue, but not as stable as I want it.

On topic:

So now we have an even, ermm, smaller iGPU?
I wonder if this means that that specific line of Intel CPUs without iGPUs (I think the F designation) can then still do video etc. with Meteor Lake.
#15
R0H1T
I don't think Intel's gonna make a separate tile just for A/V; from the looks of it, this is just a slight name change.
#16
dyonoctis
ThrashZone: Hi,
Seems like a clash in drivers for audio and graphics, and in quality, is coming soon.
Adding complexity where none is needed.

Personally, if I use a graphics card I'd prefer it to be used primarily, and only if I didn't use one would onboard graphics be preferred, for obvious reasons.

Maybe it's just to make browser hardware acceleration easier to maintain?
For decoding, Intel's media engine is actually the best. It can decode stuff that even NVIDIA can't, and much, much faster too. It's why Intel-based systems tend to always beat AMD for video editing.
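
If you want to see exactly what the media engine on a given chip can decode under Linux, you can list its VA-API profiles. A minimal sketch, assuming the iGPU's render node is /dev/dri/renderD128 and the libva development headers are installed (build with gcc probe.c -lva -lva-drm):

```c
/* List the codec profiles the media engine exposes through VA-API. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <va/va.h>
#include <va/va_drm.h>
#include <va/va_str.h>

int main(void)
{
    /* Assumption: the Intel iGPU is the first render node. */
    int fd = open("/dev/dri/renderD128", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    VADisplay dpy = vaGetDisplayDRM(fd);
    int major, minor;
    if (vaInitialize(dpy, &major, &minor) != VA_STATUS_SUCCESS) {
        fprintf(stderr, "vaInitialize failed\n");
        return 1;
    }

    int num = vaMaxNumProfiles(dpy);
    VAProfile *profiles = malloc(num * sizeof(*profiles));
    vaQueryConfigProfiles(dpy, profiles, &num);

    for (int i = 0; i < num; i++)
        printf("%s\n", vaProfileStr(profiles[i]));  /* e.g. VAProfileHEVCMain */

    free(profiles);
    vaTerminate(dpy);
    close(fd);
    return 0;
}
```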
#17
ThrashZone
Hi,
For PhysX I often switch to CPU instead of auto in the Nvidia control panel, so I guess all this is just no big deal of a change.
#18
bug
Chrispy_: I don't understand.

Intel IGPs already have fixed-function decode/encode in a media block, and have had for at least a decade. That fixed-function block was previously under the umbrella of "IGP", but that was merely a naming convention; it was part of the CPU silicon and not actually responsible for display output or GPU functions.

As far as I can tell (please correct me if I'm missing something fundamental), this announcement is just a shift in the way the existing encode/decode FF hardware is named. They call it a tile now, rather than a logic block, and that makes their block diagrams tidier to look at, but the end result is that it's still baked into the CPU die as before.

The only benefit or change that I can imagine is that QuickSync will potentially be available for the SKUs ending in F.
As far as I understand, it's still a block, but not within the CPU. It's a chiplet (or whatever Intel calls it) on its own now.
#19
Chrispy_
bug: As far as I understand, it's still a block, but not within the CPU. It's a chiplet (or whatever Intel calls it) on its own now.
Tile, but "tile" is just Intel's internal name for that logic block. It's still monolithic silicon, not an MCP like Ryzen.

Edit:
Wait, that's true for Alder Lake; Meteor Lake may actually use true tiles (which are effectively chiplets, like AMD's MCPs, under a slightly different interposer and trademarked name).
#20
Darmok N Jalad
Sounds like what the Neural Engine does in Apple Silicon. The NE, when leveraged properly, is quite impressive. For example, DXO PureRAW v2 uses the NE, where v1 used the iGPU. It knocks the processing time down on my 20MP RAW files from 20s to about 8s. If QS is using this hardware too, I suspect other programs like DXO PureRAW will as well, provided it’s easy for developers to implement. I suspect Intel would assist in that.
#21
Punkenjoy
bug: Besides easier manufacturing, is there an actual advantage to this? I mean, GPUs can power gate unused areas anyway, so they're basically turned off while decoding video (unless using shaders for post-processing).
What if, and it's a big what if, Intel is really abandoning GPUs, wants to keep its nice video encoding/decoding, but wants to use a third-party GPU instead...

That would be a great first step down that path. But at the same time, it's probably about bringing those engines closer to the CPU, as that may make more sense for Intel if they choose not to compete on high-end GPUs.
#22
bug
Punkenjoy: What if, and it's a big what if, Intel is really abandoning GPUs, wants to keep its nice video encoding/decoding, but wants to use a third-party GPU instead...

That would be a great first step down that path. But at the same time, it's probably about bringing those engines closer to the CPU, as that may make more sense for Intel if they choose not to compete on high-end GPUs.
Intriguing as that may be, it's not possible (or a smart move). IGPs are a must for laptops/ultrabooks; what would Intel use instead?
Also, abandoning GPUs right after launching Arc? It's probably the worst moment in the history of Intel to do that.
#23
Punkenjoy
bug: Intriguing as that may be, it's not possible (or a smart move). IGPs are a must for laptops/ultrabooks; what would Intel use instead?
Also, abandoning GPUs right after launching Arc? It's probably the worst moment in the history of Intel to do that.
There are rumours that Intel wants to ditch its GPU division. I think it would be a mistake for them, but they could outsource it to a company like PowerVR, Qualcomm, or Nvidia. Maybe not on all SKUs; they could keep a minimal iGPU for some workloads, but they could add a tile for a third party in their package, since they are going chiplets with Meteor Lake and beyond.
#24
Darmok N Jalad
iGPUs are essential, even in some very basic form (which is what Intel did for years with the 14 nm CPUs). People who don't game can get by with an iGPU, and almost every basic work PC fits that bill. I'm surprised AMD didn't include a GPU in all Ryzens for all these years, and it's good to see them bringing one back with Zen 4. Just being able to drive a display without a dGPU is great for troubleshooting.
#25
evernessince
dyonoctis: For decoding, Intel's media engine is actually the best. It can decode stuff that even NVIDIA can't, and much, much faster too. It's why Intel-based systems tend to always beat AMD for video editing.
No, it's really not: www.pugetsystems.com/labs/articles/Adobe-Premiere-Pro---NVIDIA-GeForce-RTX-3070-3080-3090-Performance-1951/

You are comparing CPU rendering to accelerated rendering, apples and oranges. In both decoding and encoding, Nvidia's NVDEC/NVENC are superior by a wide margin. Quick Sync supports some newer standards like 4:2:2, but that won't be relevant for another 8 years.

Intel's media engine is the worst of the 3 when it comes to quality output, bitrate, etc. It only beats AMD in regards to supported applications. It loses to Nvidia in everything aside from fringe feature support.
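
For what it's worth, you can check which of the three vendors' hardware encoders your FFmpeg build exposes before arguing from benchmarks. A small sketch using libavcodec (build with gcc probe.c -lavcodec); note this only shows whether the encoder was compiled in, the actual hardware still has to be present for it to open:

```c
/* Probe which hardware H.265 encoders this FFmpeg build exposes. */
#include <stdio.h>
#include <libavcodec/avcodec.h>

int main(void)
{
    const char *names[] = { "hevc_qsv", "hevc_nvenc", "hevc_amf", "hevc_vaapi" };
    for (size_t i = 0; i < sizeof(names) / sizeof(names[0]); i++) {
        const AVCodec *c = avcodec_find_encoder_by_name(names[i]);
        printf("%-12s %s\n", names[i], c ? "available" : "not available");
    }
    return 0;
}
```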