Wednesday, August 31st 2022

Intel Meteor Lake Can Play Videos Without a GPU, Thanks to the new Standalone Media Unit

Intel's upcoming Meteor Lake (MTL) processor is set to deliver a wide range of exciting solutions, the first being the Intel 4 manufacturing node. However, today we have some interesting Linux kernel patches indicating that Meteor Lake will have a dedicated "Standalone Media" Graphics Technology (GT) block to process video/audio. Moving encoding and decoding off the GPU to a dedicated media engine will allow MTL to play back video without involving the GPU, leaving the GPU free to serve as a parallel-processing powerhouse. Features like Intel Quick Sync will be built into this unit. What is interesting is that this unit will sit on a separate tile, joined to the rest of the chip using the tile-based manufacturing found in Ponte Vecchio (which has 47 tiles).
Intel Linux Patches:
Starting with [Meteor Lake], media functionality has moved into a new, second GT at the hardware level. This new GT, referred to as "standalone media" in the spec, has its own GuC, power management/forcewake, etc. The general non-engine GT registers for standalone media start at 0x380000, but otherwise use the same MMIO offsets as the primary GT.

Standalone media has a lot of similarity to the remote tiles present on platforms like [Xe HP Software Development Vehicle] and [Ponte Vecchio], and our i915 [kernel graphics driver] implementation can share much of the general "multi GT" infrastructure between the two types of platforms.
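The addressing scheme the patch describes, where the standalone media GT reuses the primary GT's MMIO register offsets at a new base of 0x380000, can be sketched as follows. This is an illustrative model only; the function name and example offset are hypothetical and not taken from the actual i915 driver code:

```python
# Sketch of the "multi GT" register addressing described in the patch notes.
# The standalone media GT uses the same register offsets as the primary GT,
# but its general non-engine registers are based at 0x380000.

SA_MEDIA_GT_BASE = 0x380000  # base stated in the patch

def gt_register_address(offset: int, standalone_media: bool) -> int:
    """Translate a primary-GT register offset into the MMIO address
    for either the primary GT or the standalone media GT."""
    base = SA_MEDIA_GT_BASE if standalone_media else 0x0
    return base + offset

# Hypothetical register at primary-GT offset 0xA000:
print(hex(gt_register_address(0xA000, standalone_media=False)))  # 0xa000
print(hex(gt_register_address(0xA000, standalone_media=True)))   # 0x38a000
```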
Sources: Linux patches, via Phoronix

32 Comments on Intel Meteor Lake Can Play Videos Without a GPU, Thanks to the new Standalone Media Unit

#26
Operandi
Chrispy_: Tile, but "tile" is just Intel's internal name for that logic block. It's still monolithic silicon, not an MCP like Ryzen's.

Edit:
Wait, that's true for Alder Lake; Meteor Lake may actually use true tiles (which are effectively chiplets like AMD's MCPs, under a slightly different interposer and trademarked name).
I think the idea of making this a dedicated "tile" is that they can scale it up and put different variations of it into different products without it necessarily being bound directly to a powerful GPU.

These media engines still use parts of the GPU's main logic to do their thing though, right?
Darmok N Jalad: iGPUs are essential, even in some very basic form (which is what Intel did for years with the 14nm CPUs). People who don't game can get by with an iGPU, and almost every base work PC fits that bill. I'm surprised AMD didn't include a GPU in all Ryzens for all these years, and it's good to see them bringing it back with Zen4. Just being able to drive a display without a dGPU is great for troubleshooting.
I'm sure they really wanted that, but given how small AMD was at the time, all they could afford to do was get the core CPU architecture and core layout done and out the door, and every desktop and server Zen CPU pretty much still follows the same basic layout. They probably could have built some sort of basic GPU into the first IO die, but moving to a chiplet approach was a big enough change at the time.
Punkenjoy: There are rumours that Intel wants to ditch its GPU division. I think that would be a mistake for them, but they could outsource it to a company like PowerVR, Qualcomm or Nvidia. Maybe not on all SKUs; they could keep a minimal iGPU for some workloads, but they could add a tile for a third party in their package, since they are going chiplets with Meteor Lake and beyond.
Too soon for them to give up on a dedicated GPU, and there is no way a company the size of Intel wants to be beholden to anyone else for something so critical.
#27
dyonoctis
evernessince: No, it's really not: www.pugetsystems.com/labs/articles/Adobe-Premiere-Pro---NVIDIA-GeForce-RTX-3070-3080-3090-Performance-1951/

You are comparing CPU rendering to hardware-accelerated rendering, which is apples and oranges. In both decoding and encoding, NVENC is superior by a wide margin. Quick Sync supports some newer standards like 4:2:2, but that won't be relevant for another 8 years.

Intel's media engine is the worst of the three when it comes to output quality, bitrate, etc. It only beats AMD in application support. It loses to Nvidia in everything aside from fringe feature support.
So you are quoting an older article from Puget to demonstrate that they are wrong in their newer article, where they say that Intel Quick Sync has the fastest hardware decoding? And don't you think it would be dumb on their part to compare CPU decoding against hardware decoding in a benchmark meant for video editors? As if someone working in the field would ever use CPU decoding when working with H.265. If someone is looking to buy a Core i9, they can afford a GPU that can decode H.265.

www.pugetsystems.com/labs/articles/Adobe-Premiere-Pro-Intel-Core-i9-12900KS-Performance-2314/
#28
Punkenjoy
dyonoctis: So you are quoting an older article from Puget to demonstrate that they are wrong in their newer article, where they say that Intel Quick Sync has the fastest hardware decoding? And don't you think it would be dumb on their part to compare CPU decoding against hardware decoding in a benchmark meant for video editors? As if someone working in the field would ever use CPU decoding when working with H.265. If someone is looking to buy a Core i9, they can afford a GPU that can decode H.265.

www.pugetsystems.com/labs/articles/Adobe-Premiere-Pro-Intel-Core-i9-12900KS-Performance-2314/
Probably another good reason why Intel wants to decouple Quick Sync from the GPU.
#29
evernessince
dyonoctis: So you are quoting an older article from Puget to demonstrate that they are wrong in their newer article, where they say that Intel Quick Sync has the fastest hardware decoding? And don't you think it would be dumb on their part to compare CPU decoding against hardware decoding in a benchmark meant for video editors? As if someone working in the field would ever use CPU decoding when working with H.265. If someone is looking to buy a Core i9, they can afford a GPU that can decode H.265.

www.pugetsystems.com/labs/articles/Adobe-Premiere-Pro-Intel-Core-i9-12900KS-Performance-2314/
You are again cherry-picking. From that same article:

"However, for the H.264 and HEVC tests, Intel can be 2-3x faster than AMD. This definitely means that our benchmark favors Intel CPUs with Quick Sync"

As they state, Intel Quick Sync is only faster with certain settings in Adobe Premiere, mostly against hardware where the workload isn't being accelerated. That's only considering Premiere, and they specifically state that Quick Sync's performance compared to NVENC is less than impressive in other applications.

Overall, NVENC is 100% better. Please try to cherry-pick harder.
#30
Chrispy_
evernessince: Intel's media engine is the worst of the three when it comes to output quality, bitrate, etc. It only beats AMD in application support. It loses to Nvidia in everything aside from fringe feature support.
I used to think that, but I was corrected a few months back by someone doing a Quick Sync/NVENC/VCE comparison, and Quick Sync is now good: generally better than AMD's VCE and comparable in many situations to NVENC. Turing NVENC is still more efficient at very low bitrates, but Quick Sync did well enough that you wouldn't be worrying about the differences too much.

My dated understanding was based on decade-old Quick Sync, which was basically as you described: quick and dirty, with huge file sizes and generally worse quality than AMD or Nvidia at any given bitrate. All it had back then was speed, and only in a limited range of codecs and fixed resolutions.
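Comparisons like the one described above are typically done by encoding the same clip at a matched bitrate with each vendor's hardware encoder and then comparing quality. A minimal sketch of how that might be scripted with FFmpeg, whose hardware H.264 encoders are really named h264_qsv (Quick Sync), h264_nvenc (NVENC) and h264_amf (AMD VCE/VCN); the filenames and bitrate are placeholders, and an FFmpeg build with the corresponding hardware support is assumed:

```python
# Build matched-bitrate FFmpeg encode commands for each vendor's
# hardware H.264 encoder, so the outputs can be compared for quality.

def encode_cmd(encoder: str, bitrate: str = "3M") -> list[str]:
    """Return an FFmpeg command encoding input.mp4 with the given
    hardware encoder at a fixed target bitrate."""
    return [
        "ffmpeg", "-y", "-i", "input.mp4",
        "-c:v", encoder,      # hardware encoder under test
        "-b:v", bitrate,      # matched bitrate for a fair comparison
        f"out_{encoder}.mp4",
    ]

for enc in ("h264_qsv", "h264_nvenc", "h264_amf"):
    print(" ".join(encode_cmd(enc)))
```

The resulting files can then be scored against the source with a metric such as VMAF or SSIM.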
#31
dyonoctis
evernessince: You are again cherry-picking. From that same article:

"However, for the H.264 and HEVC tests, Intel can be 2-3x faster than AMD. This definitely means that our benchmark favors Intel CPUs with Quick Sync"

As they state, Intel Quick Sync is only faster with certain settings in Adobe Premiere, mostly against hardware where the workload isn't being accelerated. That's only considering Premiere, and they specifically state that Quick Sync's performance compared to NVENC is less than impressive in other applications.

Overall, NVENC is 100% better. Please try to cherry-pick harder.
Can you show me where they say that Quick Sync is absolute dogshit at decoding? Because even before 12th gen, that's not what Puget was saying about Quick Sync vs NVENC. Yes, for DaVinci Resolve they said that Intel supporting more formats gives them an edge, but they never said it was bad even then. The link you showed me never mentioned Intel, nor Quick Sync. If you have a source that shows otherwise, I will change my stance.
But at the moment, whether it's Puget or YouTube channels covering video editing, they all say that 12th-gen Quick Sync is good. One guy even tested Resolve; look at the scores. That doesn't scream "dogshit" to me. (Resolve allows you to choose which media engine you want to use.)
www.pugetsystems.com/labs/articles/Premiere-Pro-GPU-Decoding-for-H-264-and-HEVC-media---is-it-faster-1908/

#32
Punkenjoy
Well, I think one is arguing about video playback in Premiere and the other is arguing about encoding speed and quality. You don't seem to be talking about the same thing at all.