Saturday, October 31st 2020

Intel Storms into 1080p Gaming and Creator Markets with Iris Xe MAX Mobile GPUs

Intel today launched its Iris Xe MAX discrete graphics processor for thin-and-light notebooks powered by 11th Gen Core "Tiger Lake" processors. Dell, Acer, and ASUS are launch partners, debuting the chip in their Inspiron 15 7000, Swift 3x, and VivoBook TP470, respectively. The Iris Xe MAX is based on the Xe LP graphics architecture, a compact-scale implementation of the Xe SIMD targeted at mainstream consumer graphics. Its most interesting features are Intel Deep Link and a powerful media acceleration engine that includes hardware encode acceleration for popular video formats, including HEVC, which should make the Iris Xe MAX a formidable video content production solution on the move.
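For readers who want to check whether their ffmpeg build exposes Intel's hardware HEVC encoder, a quick probe might look like the sketch below; "hevc_qsv" is ffmpeg's name for the Quick Sync HEVC path, and we assume ffmpeg is on the PATH:

```python
# Minimal sketch: probe an ffmpeg build for Intel's Quick Sync HEVC encoder.
# Assumes ffmpeg is installed and on PATH.
import subprocess

out = subprocess.run(["ffmpeg", "-hide_banner", "-encoders"],
                     capture_output=True, text=True).stdout
print("hevc_qsv available" if "hevc_qsv" in out else "no Quick Sync HEVC encoder found")
```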

The Iris Xe MAX is a fully discrete GPU built on Intel's 10 nm SuperFin silicon fabrication process. It features a dedicated LPDDR4X memory interface with 4 GB of memory at 68 GB/s of bandwidth, and uses PCI-Express 4.0 x4 to talk to the processor, but those are just the physical layers. On top of these sits what Intel calls Deep Link, an all-encompassing hardware abstraction layer that enables not only explicit multi-GPU with the Xe LP iGPU of "Tiger Lake" processors, but also certain implicit multi-GPU functions, such as fine-grained division of labor between the dGPU and iGPU that ensures the right kind of workload lands on the right chip. Intel refers to this as GameDev Boost, and we detailed it in an older article.
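As a quick sanity check on that link, PCI-Express 4.0 runs at 16 GT/s per lane with 128b/130b encoding, so an x4 link tops out at roughly 7.9 GB/s per direction, an order of magnitude below the 68 GB/s local framebuffer; this is one reason fine-grained division of labor beats naively shuttling data between the two GPUs. A back-of-the-envelope calculation:

```python
# Back-of-the-envelope: PCIe 4.0 x4 link bandwidth vs. local LPDDR4X bandwidth.
GT_PER_S = 16e9        # PCIe 4.0 raw signaling rate per lane
ENCODING = 128 / 130   # 128b/130b line-encoding overhead
LANES = 4

link_gbs = GT_PER_S * ENCODING / 8 * LANES / 1e9
print(f"PCIe 4.0 x{LANES}: ~{link_gbs:.2f} GB/s per direction")  # ~7.88 GB/s
print("Local LPDDR4X: 68 GB/s")  # the dedicated framebuffer is ~8.6x faster
```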
Deep Link goes beyond the 3D graphics rendering domain: it also augments the Xe Media Multi-Format Encoders of the iGPU and dGPU to linearly scale video encoding performance. Intel claims that an Xe iGPU+dGPU combination offers more than double the encoding performance of NVENC on a GeForce RTX 2080 graphics card. All this is possible because a common software framework ties together the media encoding capabilities of the "Tiger Lake" CPU and the Iris Xe MAX GPU, making the solution more than the sum of its parts. Intel refers to this as Hyper Encode.
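Intel hasn't published Hyper Encode's internals beyond saying its framework splits the work across both encoders, but the divide-the-timeline idea can be illustrated with plain ffmpeg. The sketch below is a conceptual approximation under our own assumptions (an input.mp4, two fixed 30-second segments, a Quick Sync build of ffmpeg), not Intel's implementation:

```python
# Conceptual sketch of split-and-merge encoding across two hardware encoders.
# Hyper Encode does this in the driver/Media SDK stack; here we just cut the
# timeline into segments, encode them in parallel, and concatenate the results.
import subprocess
from concurrent.futures import ThreadPoolExecutor

SEGMENTS = [(0, 30), (30, 30)]  # (start_seconds, duration_seconds) -- assumed

def encode_segment(i, start, dur):
    out = f"seg{i}.mp4"
    subprocess.run([
        "ffmpeg", "-y", "-ss", str(start), "-t", str(dur), "-i", "input.mp4",
        "-c:v", "hevc_qsv", out,
    ], check=True)
    return out

# In Hyper Encode the segments would go to the iGPU and dGPU encoders
# simultaneously; here both jobs simply run in parallel on whatever
# encoder ffmpeg selects.
with ThreadPoolExecutor(max_workers=2) as pool:
    parts = list(pool.map(lambda a: encode_segment(*a),
                          [(i, s, d) for i, (s, d) in enumerate(SEGMENTS)]))

with open("list.txt", "w") as f:
    f.writelines(f"file '{p}'\n" for p in parts)
subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                "-i", "list.txt", "-c", "copy", "output.mp4"], check=True)
```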
Deep Link also scales up AI deep-learning performance between "Tiger Lake" processors and the Xe MAX dGPU, since the chip features a DL Boost DP4a accelerator for INT8 inference. As of today, Intel has onboarded major brands in the media encoding software ecosystem to support Deep Link (HandBrake, OBS, XSplit, Topaz Gigapixel AI, Huya, Joyy, etc.), and is working with Blender, CyberLink, Fluendo, and MAGIX for full support in the coming months.
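DP4a itself is a simple operation: multiply four pairs of INT8 values and accumulate the results into an INT32 accumulator, the basic building block of quantized neural-network inference. A minimal sketch of what the instruction computes (the function name is ours):

```python
# What a DP4a instruction computes: a 4-element INT8 dot product
# accumulated into INT32, as used in quantized (INT8) inference.
import numpy as np

def dp4a(a, b, acc):
    # Widen to int32 before multiplying so products don't overflow int8.
    return acc + int(np.dot(a.astype(np.int32), b.astype(np.int32)))

a = np.array([1, -2, 3, 4], dtype=np.int8)
b = np.array([5, 6, -7, 8], dtype=np.int8)
print(dp4a(a, b, 0))  # 1*5 + (-2)*6 + 3*(-7) + 4*8 = 4
```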
Under the hood, the Iris Xe MAX, as we mentioned earlier, is built on the 10 nm SuperFin process. This is a brand new piece of silicon, and not a "Tiger Lake" with its CPU component disabled, as its specs might otherwise suggest. It features 96 Xe execution units (EUs), translating to 768 programmable shaders, along with 96 TMUs and 24 ROPs. Its LPDDR4X memory interface provides 68 GB/s of memory bandwidth, and the GPU is clocked at 1.65 GHz. It talks to "Tiger Lake" processors over a PCI-Express 4.0 x4 bus. Notebooks with Iris Xe MAX ship with both the iGPU and dGPU enabled to leverage Deep Link.
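Those numbers cross-check neatly: Xe LP EUs are commonly counted as 8 FP lanes each, and LPDDR4X-4266 on a 128-bit bus lands right on the quoted bandwidth (the bus width and memory speed are our inference from the published 68 GB/s figure, not Intel-confirmed):

```python
# Cross-checking the published Iris Xe MAX specs.
eus = 96
lanes_per_eu = 8                # Xe LP EUs are commonly counted as 8 FP lanes
print(eus * lanes_per_eu)       # 768 "programmable shaders"

transfers_per_s = 4266e6        # assumed LPDDR4X-4266
bus_width_bytes = 128 / 8       # assumed 128-bit interface
print(f"{transfers_per_s * bus_width_bytes / 1e9:.1f} GB/s")  # ~68.3 GB/s
```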
Media and AI paint only half the picture; the other half is gaming. Intel is taking a swing at the 1080p mainstream gaming segment, with the Iris Xe MAX offering over 30 FPS (playable) in AAA games at 1080p. It trades blows with notebooks that use the NVIDIA GeForce MX450 discrete GPU. We reckon that most e-sports titles should be playable at over 45 FPS at 1080p. Over the coming months, one should expect Intel and its ISVs to invest more in GameDev Boost, which should increase performance further. The Xe LP architecture features DirectX 12 support, including Variable Rate Shading (tier 1).
But what about other mobile platforms, and desktop, you ask? The Iris Xe MAX is debuting exclusively in thin-and-light notebooks based on 11th Gen Core "Tiger Lake" processors, but Intel plans to develop desktop add-in cards with Iris Xe MAX GPUs sometime in the first half of 2021. We predict that, if priced right, this card could sell in droves to the creator community, which could leverage its media encoding and AI DNN acceleration capabilities. It should also appeal to the HEDT and mission-critical workstation crowds that require discrete graphics and prefer to minimize the number of vendors they source from.
Update Nov 1st: Intel clarified that the desktop Iris Xe MAX add-in card will be sold exclusively to OEMs for pre-builts.

The complete press-deck follows.

74 Comments on Intel Storms into 1080p Gaming and Creator Markets with Iris Xe MAX Mobile GPUs

#1
TheLostSwede
News Editor
It's a start I guess, but MX350 beating performance is hardly something to brag about.
To be impressive, this would have to beat the GTX 1650 in mobile.
Can't find any mention of which codecs are supported for encoding, beyond H.265.
#2
ixi
Weak gpu - sad. Will be overpriced as well because intel, that's why...
#4
XL-R8R
TheLostSwede: It's a start I guess, but MX350 beating performance is hardly something to brag about.
I arrived to say the same; "storming" is hardly the word to use when you're beating a lowly MX350.
#5
InVasMani
tigger: Well it's a start for their GPU section
It's closer to a continuation of the same with slight refinements. I can see why Jim Keller left Intel so quickly; they seem stuck in their ways of doing things. I'm curious where Jim Keller goes next, or if he simply retires; he's at that age where he certainly could do either. I think it's a matter of personal ambition; the money certainly isn't a big driving reason at this point for a man like that, as he's made his personal fortune well enough by now. I think a man like that would be rather excited to return to AMD and help guide and lead the Xilinx division, perhaps, if that merger is wrapped up, finalized, and approved quickly. It seems to me it would be right up his alley. I could even envision him lending a hand to help speed along ARM or RISC-V as well. I don't know if he's the type of person who can retire and sit still; I don't know the man's personality, but he is rather passionate about driving the future of technology.
ixi: Weak gpu - sad. Will be overpriced as well because intel, that's why...
Classic case of Intel stuck in its ways: they want to innovate the iGPU just enough to scrape by, so discrete graphics cards aren't an additional cost consideration when selling their products to businesses for mobile or SFF desktop devices.
#6
X71200
68 GB/s of memory bandwidth, that's equal to cards from 10+ years ago. Laughing stock material here.
#7
dj-electric
X71200: 68 GB/s of memory bandwidth, that's equal to cards from 10+ years ago. Laughing stock material here.
Well, that's how DDR works. Did you expect GDDR5+ speeds out of an iGPU?
#8
X71200
That's not exactly how the article explains this GPU, though. At that point, I'd simply buy a Zen laptop and stick with older games if this realistically gives me nothing other than esports.
#9
iO
Being on par with a two-gens-old entry-level GPU is a bit underwhelming, and "launching" it on a Saturday without supplying test samples shows how much confidence they have in their product...

But I guess OEMs are happy as Intel will likely give these chips away practically for free to claim design wins.
#11
X71200
That RAM is also likely soldered btw, Acer has been doing it for a while now on Swifts.
#12
ZoneDymo
Sooo, this would be good for doing a livestream with?
Aka, have this CPU in your system and let the GPU part of it do the streaming work?
#13
Dave65
1080p is so yesterday.
#14
X71200
If you're going to stream basic stuff, maybe. But then again, the NVENC in the 2080 is not actually as smooth as they make it out to be; it lags on heavier loads easily. I've had it happen when I tried to stream PUBG. Vermeer is what you're looking for.
#15
silentbogo
One helluva storm... gotta get my umbrella :roll:
As we discussed in the previous underwhelming press release, that's the maximum capability of the current-gen Iris Xe. If Xe MAX can only keep up with the MX350, which is actively being phased out either by the underclocked/TDP-capped 1650 or by the less powerful but more efficient Vega iGPs on 4000-series Ryzens, it can only mean that competitive Intel dGPUs aren't even close to being ready. The specs and performance alone show clearly that it's an overclocked G7 iGPU, only modified to have its own framebuffer.
#16
john_
Well, things change. And in the future we might be seeing systems with AMD CPUs and Intel discrete GPUs.

AMD and NVIDIA are focusing on cards that cost over $200-$300, leaving the low-end market to older models, APUs, and integrated graphics. But new and cheap graphics cards are needed in HTPCs, for example. An older CPU without integrated graphics could use a discrete Intel graphics card that costs less than $100 and offers Quick Sync and hardware support for all modern video codecs. If AMD and NVIDIA start ignoring the market for cards under $100, Intel might have a chance to sell some more GPUs to consumers.
#17
X71200
I mean, those convertible laptops look like the stuff where you would see components soldered, and if they integrate that directly into other laptops, that's failure. If you're soldering stuff anyway, why not make multiple variations of the GPU with faster clocks, RAM, etc.? I don't see the point of sticking to a single design, as this has been done before and didn't help sales even with AMD. I know I didn't buy into it.
#18
Vayra86
TheLostSwede: It's a start I guess, but MX350 beating performance is hardly something to brag about.
To be impressive, this would have to beat the GTX 1650 in mobile.
Can't find any mention of which codecs are supported for encoding, beyond H.265.
What is MX350 performance, even? The max TDP budget? Or some low-power, nondescript ultrabook implementation?!

Totally worthless bench; they've wasted a dozen slides and our time because they still haven't got anything here.

Xe is thus far going nowhere. Is this a start? They are not really competing with anything here that is more than a glorified IGP. We've been there with Broadwell already... and that was done with just a simple i7 CPU!
#19
TheLostSwede
News Editor
Vayra86: What is MX350 performance, even? The max TDP budget? Or some low-power, nondescript ultrabook implementation?!

Totally worthless bench; they've wasted a dozen slides and our time because they still haven't got anything here.

Xe is thus far going nowhere. Is this a start? They are not really competing with anything here that is more than a glorified IGP. We've been there with Broadwell already... and that was done with just a simple i7 CPU!
C'mon, be nice to Intel, they made a standalone GPU again. Now they just have 22 years of catching up to do...
#20
dyonoctis
The biggest issue for any GPU trying to make it into the "content creator" market is the fact that CUDA exists... Apple killed OpenCL for Metal, and Microsoft doesn't seem interested in making a Windows API that could enable any GPU. While it does seem great for video editing, there are too many 3D workloads that use CUDA/OptiX.
#21
InVasMani
There needs to be a viable, non-proprietary alternative to CUDA that actually gains traction, but it won't happen without a few companies banding together for the benefit of increased competition. That won't happen, of course, unless enough of these companies feel threatened by CUDA's impact on their bottom line. Unless there is a financial incentive, it's unlikely to happen, because at the end of the day they are businesses.
#22
londiste
dyonoctis: The biggest issue for any GPU trying to make it into the "content creator" market is the fact that CUDA exists... Apple killed OpenCL for Metal, and Microsoft doesn't seem interested in making a Windows API that could enable any GPU. While it does seem great for video editing, there are too many 3D workloads that use CUDA/OptiX.
oneAPI?
#23
dyonoctis
londiste: oneAPI?
That's interesting, but I wonder how long it will take to become mainstream. AMD's ROCm was also supposed to make Radeon run CUDA... since 2016, but there's still no news of AMD support in mainstream CUDA apps.
#24
dorsetknob
"YOUR RMA REQUEST IS CON-REFUSED"
TheLostSwede: Now they just have 22 years of catching up to do...
More like 30 years and at least $1B to catch up to and pass either NV or AMD.
At the moment they are about on par with ten-year-old tech.
#25
TechLurker
The only novel thing here is their ability to have an iGPU and dGPU link up and work together, which isn't something AMD is really doing currently. Although, I recall AMD had a more rudimentary version of it with CrossFire, and was revisiting the idea for future MCM GPUs and heterogeneous computing via Infinity Architecture in general.

Still, it is a small edge (feature-wise) Intel has for now, and if they're able to go further with a SAM-like equivalent, they could potentially squeeze out a bit more performance that way. That said, it'll be a while until they can sufficiently catch up in actual performance, unless AMD or NVIDIA trips up hard.