Wednesday, March 30th 2022

Intel Formally Announces Arc A-series Graphics

For decades, Intel has been a champion for PC platform innovation. We have delivered generations of CPUs that provide the computing horsepower for billions of people. We advanced connectivity through features like USB, Thunderbolt and Wi-Fi. And in partnership with the PC ecosystem, we developed the ground-breaking PCI architecture and the Intel Evo platform, pushing the boundary for what mobile products can do. Intel is uniquely positioned to deliver PC platform innovations that meet the ever-increasing computing demands of professionals, consumers, gamers and creators around the world. Now, we take the next big step.

Today, we are officially launching our Intel Arc graphics family for laptops, completing the Intel platform. These are the first discrete GPUs from our Intel Arc A-Series graphics portfolio for laptops, with our desktop and workstation products coming later this year. You can visit our Newsroom for our launch video, product details and technical demos, but I will summarize the highlights of how our Intel Arc platform and A-Series mobile GPU family will deliver hardware, software, services and - ultimately - high-performance graphics experiences.
  • New Laptops with Intel Arc Graphics: We've partnered with top OEMs to co-engineer an amazing lineup of laptops that feature new and improved gaming and content creation capabilities with Intel Arc graphics and 12th Gen Intel Core processors. Many new systems with Intel Arc 3 graphics will feature the Intel Evo platform's trademark responsiveness, battery life and Wi-Fi 6 connectivity in thin-and-light form factors. Laptops with Intel Arc 3 graphics offer enhanced 1080p gaming and advanced content creation, and those with Intel Arc 5 and Intel Arc 7 graphics will offer the same cutting-edge, content-creation capabilities coupled with increased graphics and computing performance. The first laptops with Intel Arc 3 GPUs are available to preorder now and will be followed by the more powerful designs with Intel Arc 5 and Intel Arc 7 graphics in early summer.
  • Unleashing the Laptop Platform: The foundation of products with Intel Arc A-Series GPUs and our platform-level approach to graphics innovation starts with our new Xe High Performance Graphics microarchitecture (Xe HPG), which is engineered for gamers and creators. We have packed a ton of great technology into Xe HPG, including powerful Xe-cores with Intel XMX AI engines, a graphics pipeline optimized for DirectX 12 Ultimate with hardware acceleration for ray tracing, the Xe Media Engine tuned to accelerate existing and future creator workloads and the Xe Display Engine ready for DisplayPort 2.0 UHBR 10.
    • Intel Xe Matrix Extensions (XMX) AI engines provide more compute capability for accelerating AI workloads. These AI engines deliver 16 times the compute throughput for AI inferencing operations compared with traditional GPU vector units, which can increase performance in productivity, gaming and creator applications.
    • Xe Super Sampling (XeSS) is our solution that leverages the power of Intel Arc graphics' XMX AI engines to deliver high-performance, AI-accelerated upscaling. XeSS is a novel upscaling technology that uses deep learning to synthesize images that are very close to the quality of native high-res rendering. XeSS is coming in the summer and will be supported on all products with Arc A-Series graphics.
    • Intel Arc A-Series GPUs are the first in the industry to offer full AV1 hardware acceleration, including both encode and decode, delivering faster video encoding and higher-quality streaming at the same internet bandwidth. We've worked with industry partners to ensure that AV1 support is available today in many of the most popular media applications, with broader adoption expected this year. The AV1 codec will be a game changer for the future of video encoding and streaming; a brief hardware-encode sketch follows this list.
    • We've integrated Intel Deep Link technologies to enable Intel Arc GPUs to work seamlessly with Intel CPUs and integrated graphics for a performance improvement across gaming, creation and streaming workloads. Intel Deep Link enables dynamic power sharing, intelligently distributing power across the platform to increase application performance by up to 30% in creation and compute-intensive applications. With Hyper Encode and Hyper Compute, Deep Link allows multi-engine acceleration in transcoding and AI tasks. More details are available in our product fact sheet.
  • Community Experiences: Our Intel Arc graphics are more than another piece of hardware in your PC. They are your portal to play and create. We have a dedicated team focused on delivering Day 0 game-ready drivers, which you'll be able to track in our new Intel Arc Control interface, an all-in-one hub that puts you in full control of the gaming experience. Intel Arc Control includes custom performance profiles, built-in streaming, a virtual camera, integrated Game ON driver downloading, automatic game capture, and more. The app supports Intel Iris Xe graphics and Intel Arc GPUs for a unified software experience. By working with our developer partners, we are making a growing portfolio of Intel-optimized games and multimedia applications available to discrete graphics customers through special launch bundles. Bundles will vary based on the system and the region, but the first of these gamer and creator bundles is rolling out in April with the launch of our A-Series mobile products. Our goal is to deliver something new and fun to the community every day of the year. We invite you to connect with us and join the conversation on our Intel Insiders Discord.
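
As a rough illustration of the AV1 encode capability called out in the list above, here is a minimal sketch of driving a hardware AV1 encode through ffmpeg's Quick Sync path. It assumes an ffmpeg build with QSV/oneVPL support that exposes the `av1_qsv` encoder and a driver that enables AV1 encode on Arc; the file names and bitrate are placeholders, and none of these specifics come from Intel's announcement.

```python
import subprocess

def av1_qsv_encode(src: str, dst: str, bitrate: str = "8M") -> None:
    """Transcode src to AV1 using ffmpeg's Quick Sync (av1_qsv) hardware encoder."""
    cmd = [
        "ffmpeg", "-y",
        "-hwaccel", "qsv",       # let the GPU handle decode where possible
        "-i", src,
        "-c:v", "av1_qsv",       # hardware AV1 encode (assumes Arc + QSV-enabled ffmpeg)
        "-b:v", bitrate,         # target video bitrate
        "-c:a", "copy",          # pass the audio stream through untouched
        dst,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    av1_qsv_encode("gameplay_1080p.mp4", "gameplay_av1.mkv")
```
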
Looking Ahead
Today marks the first step in our journey. You'll see Intel Arc graphics continue to improve and evolve, with new features and an ever-expanding ecosystem coming throughout the year. And for desktop enthusiasts, our Intel Arc graphics add-in-cards will be coming this summer.

We are excited, and we hope you are too. It's going to be a big year for Intel Arc graphics.

The complete slide-deck follows.

53 Comments on Intel Formally Announces Arc A-series Graphics

#1
jeremyshaw
AV1 Encode! I am definitely curious now.
Posted on Reply
#2
Cutechri
About time. More interested in desktop dGPUs though.
Posted on Reply
#3
AnotherReader
I hope that, in the long run, Intel gives AMD and Nvidia a run for their money. This duopoly, verging on a monopoly, is not good for us.
Posted on Reply
#4
20mmrain
I would have liked them to show their FPS slides against AMD or Nvidia laptop equivalents, not Iris.
Posted on Reply
#5
Selaya
jeremyshaw: AV1 Encode! I am definitely curious now.
well judging by the performance of nvenc i wouldn't hold my breath (too much)
Posted on Reply
#6
Dr. Dro
Selaya: well judging by the performance of nvenc i wouldn't hold my breath (too much)
Realtime AV1 encoding is a huge deal! This format beats the pants out of HEVC/h265 and is not bound by the same restrictive licensing that led the format to be mostly ignored.

NVENC's advantage over AMD's competing Video Core Next is pretty much only in AVC/h264 encoding; the sole reason this is relevant is that, due to royalties charged by the HEVC patent holders, video streaming services refused to adopt the format and kept using the old AVC format, whose patents have already expired, or adopted the VP9 codec.

At 5 Mbps (Twitch's maximum bitrate), 1080p30 should encode with no perceptible quality loss using the AV1 format; streams will have practically Blu-ray quality.
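
A quick back-of-the-envelope check on that 5 Mbps figure, as a minimal sketch; the ~35% bitrate saving assumed for AV1 over HEVC is a commonly cited ballpark, not a measured number.

```python
def bits_per_pixel(bitrate_bps: float, width: int, height: int, fps: float) -> float:
    """Average bits available per pixel per frame at a given bitrate."""
    return bitrate_bps / (width * height * fps)

twitch_cap = 5_000_000  # 5 Mbps ceiling
bpp = bits_per_pixel(twitch_cap, 1920, 1080, 30)
print(f"1080p30 at 5 Mbps: {bpp:.3f} bits per pixel")  # roughly 0.080

# Assumed (not measured): AV1 needs ~35% less bitrate than HEVC for similar
# quality, so the same 5 Mbps budget behaves like a higher HEVC bitrate.
print(f"Roughly comparable HEVC bitrate: {twitch_cap / (1 - 0.35) / 1e6:.1f} Mbps")
```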
Posted on Reply
#7
Denver
AnotherReader: I hope that, in the long run, Intel gives AMD and Nvidia a run for their money. This duopoly, verging on a monopoly, is not good for us.
I don't think it will help much if it depends on TSMC's limited capacity.
Posted on Reply
#8
AnotherReader
Denver: I don't think it will help much if it depends on TSMC's limited capacity.
Right now, that is the case. In the future, if Intel sorts out its process woes, then that won't be a concern.
Posted on Reply
#9
EatingDirt
The continued lack of desktop details is disappointing. That, along with comparing their new GPUs only against their own Iris Xe iGPU, does not inspire confidence in anyone except shareholders who know nothing about the competition.
Posted on Reply
#10
Steevo
So a lot of numbers, like TW3 at Medium settings hitting 68 FPS, which is faster than no number at all from Iris, at an unknown TDP, an unknown frequency, with unknown memory and an unknown cooling system.

A lot of fluff in them there slides. Good thing it's e-fluff and not real; they would have a mess on their hands trying to dispose of that much physical fluff.

"The Elden Ring was CAPTURED on a series 7 GPU," not rendered, merely captured.
Posted on Reply
#11
Upgrayedd
Denver: I don't think it will help much if it depends on TSMC's limited capacity.
I thought they had first dibs on their 5nm?
Posted on Reply
#12
Selaya
Dr. Dro: Realtime AV1 encoding is a huge deal! This format beats the pants out of HEVC/h265 and is not bound by the same restrictive licensing that led the format to be mostly ignored.

NVENC's advantage over AMD's competing Video Core Next is pretty much only in AVC/h264 encoding; the sole reason this is relevant is that, due to royalties charged by the HEVC patent holders, video streaming services refused to adopt the format and kept using the old AVC format, whose patents have already expired, or adopted the VP9 codec.

At 5 Mbps (Twitch's maximum bitrate), 1080p30 should encode with no perceptible quality loss using the AV1 format; streams will have practically Blu-ray quality.
yes but the thing is, will realtime hardware-accelerated av1 encode have the same compression rate as software?
because nvenc h264 is like, total ass and produces either significantly larger files or significantly worse quality compared to software
Posted on Reply
#13
Dr. Dro
Selaya: yes but the thing is, will realtime hardware-accelerated av1 encode have the same compression rate as software?
because nvenc h264 is like, total ass and produces either significantly larger files or significantly worse quality compared to software
It should be at least on par with faster encoding presets. AV1 encoders have come a long way since the early svt-av1 days that required dual Skylake-SP Xeon processors for realtime :)

ffmpeg already supports the format, and some front-ends like Shutter Encoder have already been updated to support it. 1080p30 basically transcodes into 3 Mbps AV1 from an AVC source at practically real time on my laptop's Ryzen 5 5600H; one minute of footage takes around 1m7s to complete, while the same settings on HEVC take 1m35s. A dedicated and optimized hardware encoder that meets this level of performance should be able to easily do real-time encoding while retaining a very high level of detail in the image. Of course, as newer generations of AV1 encoders come out, they will become faster and more accurate, too, but even in a basic state, the codec is simply far ahead of the traditional options we are used to working with.
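
For reference, a minimal sketch of that kind of software timing comparison, shelling out to ffmpeg; the `libsvtav1` and `libx265` encoders exist in stock ffmpeg builds, but the presets, bitrate and source file here are assumptions, not the exact settings described above.

```python
import subprocess
import time

def timed_encode(src: str, dst: str, vcodec: str, extra: list[str]) -> float:
    """Run an ffmpeg software encode and return elapsed wall-clock seconds."""
    cmd = ["ffmpeg", "-y", "-i", src, "-c:v", vcodec, *extra, "-an", dst]
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

source = "clip_1080p30_avc.mp4"  # placeholder one-minute AVC clip

t_av1 = timed_encode(source, "out_av1.mkv", "libsvtav1",
                     ["-preset", "8", "-b:v", "3M"])
t_hevc = timed_encode(source, "out_hevc.mkv", "libx265",
                      ["-preset", "fast", "-b:v", "3M"])

print(f"SVT-AV1: {t_av1:.0f} s, x265: {t_hevc:.0f} s")
```

Wall-clock timing like this is crude, but it is enough to show the relative gap between the two software encoders.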

On a semi-unrelated note, there is something interesting that I came across just now: Xaymar, the original author of the AMD AMF encoder for OBS, has also conducted a study of their own regarding AVC/H264 encoder performance; you may check it out here:

www.xaymar.com/articles/2022/01/10/h264-encoder-showdown/

Turing-class NVENC seems to do particularly well against even CPU encoding at reasonable speeds, so I expect great things from Lovelace's.
Posted on Reply
#14
Fatalfury
AMD needs to STEP UP their game (by improving availability and stock) in the laptop market.
They already have a very low share in the mobile (laptop) space, with 80% of manufacturers going with the Intel + Nvidia combo.
All they do is release one or two all-AMD setups just for name's sake and call it a day.

AMD had better do something soon or they're going to end up with less than 5% market share in the laptop GPU segment.
Posted on Reply
#15
Selaya
Dr. Dro: [ ... ]
www.xaymar.com/articles/2022/01/10/h264-encoder-showdown/

Turing-class NVENC seems to do particularly well against even CPU encoding at reasonable speeds, so I expect great things from Lovelace's.
wait what, that can't be right, unless i'm stupid or something (or the software is, idk)

i ran a few of my recordings through handbrake at 30 constant quality and turing nvenc basically threw a file twice the size of software medium at me
curiously, slower and up also threw a file of a larger size than medium at me, but maybe they have higher fidelity despite the identical 30 quality preset? (idk, that would be the only logical explanation since i did not really inspect/watch the results any further)
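
For what it's worth, a minimal sketch of automating that HandBrake comparison; it assumes HandBrakeCLI is on the PATH, that the `x264` and `nvenc_h264` encoder names and the `-q` constant-quality flag match your build, and that the input file is a placeholder.

```python
import os
import subprocess

def hb_encode(src: str, dst: str, encoder: str, quality: int = 30) -> int:
    """Encode with HandBrakeCLI at constant quality; return output size in bytes."""
    subprocess.run(
        ["HandBrakeCLI", "-i", src, "-o", dst,
         "-e", encoder,        # e.g. "x264" (software) or "nvenc_h264" (hardware)
         "-q", str(quality)],  # constant-quality level
        check=True,
    )
    return os.path.getsize(dst)

source = "recording.mkv"  # placeholder gameplay capture
size_sw = hb_encode(source, "out_x264.mp4", "x264")
size_nv = hb_encode(source, "out_nvenc.mp4", "nvenc_h264")
print(f"x264: {size_sw / 1e6:.0f} MB, NVENC: {size_nv / 1e6:.0f} MB "
      f"({size_nv / size_sw:.2f}x the software size)")
```

As noted above, identical constant-quality numbers don't guarantee identical visual quality across encoders, so file size alone only tells part of the story.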
Posted on Reply
#16
ModEl4
Looking at these performance slides, we would be lucky to get RX 570 ($169 five years ago) performance from the desktop part (8 Xe cores) on the TPU game test suite, because although it will be faster than the RX 570 in newer games like Doom Eternal, the average score will come down a lot due to the lack of optimization for older games, DX11 titles, etc.
I hope I'm wrong, but right now this is the impression I'm getting.
Posted on Reply
#17
HisDivineOrder
AV1 encoding, but HDMI 2.0b.

XeSS, but PCIe 4.0 in 2H 2022.

Price better be incredible.
Posted on Reply
#18
Mysteoa
I like how Intel doesn't want to compare against any other dGPUs, but instead decided to compare them to their own iGPUs.
Posted on Reply
#19
Daven
AnotherReader: I hope that, in the long run, Intel gives AMD and Nvidia a run for their money. This duopoly, verging on a monopoly, is not good for us.
But 40 years of an actual Intel CPU monopoly is just fine and dandy?
Posted on Reply
#20
HisDivineOrder
Daven: But 40 years of an actual Intel CPU monopoly is just fine and dandy?
Did anyone say that?
Posted on Reply
#21
Valantar
I'm ... curious? as to the relatively down-to-earth launch of this. Very little fanfare, and clearly a slow rollout, with just low-end laptop GPUs first, and not a single comparison against any competitor. Possible interpretations:
-underwhelming performance
-undercooked drivers, expecting much better performance in the coming months
-not wanting to overpromise (yeah, lol, this is a tech company doing PR, so ...)
-????

I can't say I'm blown away by those Xe comparisons - it's essentially a wider version of the same arch with dedicated VRAM and a bespoke power budget, so outperforming it by something that looks like 25-33% isn't all that impressive. Guess that depends on the power draw of that chip though.
Posted on Reply
#22
AnotherReader
Daven: But 40 years of an actual Intel CPU monopoly is just fine and dandy?
Intel's monopoly didn't last 40 years. AMD was very competitive for a large part of that period. I miss the old days when there were other x86 CPU design companies, but two is better than one.
Posted on Reply
#23
Steevo


The 5700G has half the shaders, shares DDR4, and runs at half the speed at low settings.

The 6500 has the exact same number of cores, memory & bus, but runs at almost 2x the core speed at high settings, and it's gimped by its PCIe link.

From what it seems, the A370M is about 30-40% slower than the 6500 XT at half the power budget. So watt for watt they seem to be about 15% behind AMD on TSMC 6 nm, with Intel being on 7 nm. If their hardware scales and is priced right with good drivers, they could be here for the fight.
Dr. Dro: It should be at least on par with faster encoding presets. AV1 encoders have come a long way since the early svt-av1 days that required dual Skylake-SP Xeon processors for realtime :)

ffmpeg already supports the format, and some front-ends like Shutter Encoder have already been updated to support it. 1080p30 basically transcodes into 3 Mbps AV1 from an AVC source at practically real time on my laptop's Ryzen 5 5600H; one minute of footage takes around 1m7s to complete, while the same settings on HEVC take 1m35s. A dedicated and optimized hardware encoder that meets this level of performance should be able to easily do real-time encoding while retaining a very high level of detail in the image. Of course, as newer generations of AV1 encoders come out, they will become faster and more accurate, too, but even in a basic state, the codec is simply far ahead of the traditional options we are used to working with.

On a semi-unrelated note, there is something interesting that I came across just now: Xaymar, the original author of the AMD AMF encoder for OBS, has also conducted a study of their own regarding AVC/H264 encoder performance; you may check it out here:

www.xaymar.com/articles/2022/01/10/h264-encoder-showdown/

Turing-class NVENC seems to do particularly well against even CPU encoding at reasonable speeds, so I expect great things from Lovelace's.
"The Hyper Encode workload measures the time it takes to transcode a 4K/30fps AVC @ 57.9Mbps clip to 4K/30fps HEVC @ 30Mbps High Quality format with 3 applications: HandBrake, DaVinci Resolve, Cyberlink PowerDirector. The comparison for the claim is using both Alder Lake integrated graphics and Alchemist to encode in a I+I configuration versus the integrated graphics adapter alone."

"The AV1 workload measures the time it takes to transcode a 4K/30fps AVC @ 57.9Mbps clip to 4K/30fps AV1 @ 30Mbps High Speed format. The comparison for the 50x claim is using the Alder Lake CPU (software) to transcode the clip on a public FFMPEG build versus Alchemist (hardware) on a proof-of-concept Intel build."
Posted on Reply
#24
medi01
As Intel CPU + AMD GPU notebooks are non-existent, it's mainly NV that needs to worry.
Posted on Reply
#25
Steevo
medi01: As Intel CPU + AMD GPU notebooks are non-existent, it's mainly NV that needs to worry.
They are working hard at making a competitive CPU, and given their success in GPU tech and FPGA-like hardware they might pull it off, but they won't make a dent in the serious laptop/desktop market, as the whole system is x86-64 based and compatibility is too much of an issue.
Posted on Reply