Wednesday, March 30th 2022

Intel Formally Announces Arc A-series Graphics

For decades, Intel has been a champion for PC platform innovation. We have delivered generations of CPUs that provide the computing horsepower for billions of people. We advanced connectivity through features like USB, Thunderbolt and Wi-Fi. And in partnership with the PC ecosystem, we developed the ground-breaking PCI architecture and the Intel Evo platform, pushing the boundary for what mobile products can do. Intel is uniquely positioned to deliver PC platform innovations that meet the ever-increasing computing demands of professionals, consumers, gamers and creators around the world. Now, we take the next big step.

Today, we are officially launching our Intel Arc graphics family for laptops, completing the Intel platform. These are the first discrete GPUs from our Intel Arc A-Series graphics portfolio for laptops, with our desktop and workstation products coming later this year. You can visit our Newsroom for our launch video, product details and technical demos, but I will summarize the highlights of how our Intel Arc platform and A-Series mobile GPU family will deliver hardware, software, services and - ultimately - high-performance graphics experiences.
  • New Laptops with Intel Arc Graphics: We've partnered with top OEMs to co-engineer an amazing lineup of laptops that feature new and improved gaming and content creation capabilities with Intel Arc graphics and 12th Gen Intel Core processors. Many new systems with Intel Arc 3 graphics will feature the Intel Evo platform's trademark responsiveness, battery life and Wi-Fi 6 connectivity in thin-and-light form factors. Laptops with Intel Arc 3 graphics offer enhanced 1080p gaming and advanced content creation, and those with Intel Arc 5 and Intel Arc 7 graphics will offer the same cutting-edge, content-creation capabilities coupled with increased graphics and computing performance. The first laptops with Intel Arc 3 GPUs are available to preorder now and will be followed by the more powerful designs with Intel Arc 5 and Intel Arc 7 graphics in early summer.
  • Unleashing the Laptop Platform: The foundation of products with Intel Arc A-Series GPUs and our platform-level approach to graphics innovation starts with our new Xe High Performance Graphics microarchitecture (Xe HPG), which is engineered for gamers and creators. We have packed a ton of great technology into Xe HPG, including powerful Xe-cores with Intel XMX AI engines, a graphics pipeline optimized for DirectX 12 Ultimate with hardware acceleration for ray tracing, the Xe Media Engine tuned to accelerate existing and future creator workloads and the Xe Display Engine ready for DisplayPort 2.0 UHBR 10.
    • Intel Xe Matrix Extensions (XMX) AI engines provide more compute capability for accelerating AI workloads. These engines deliver 16 times the compute for AI inference operations compared with traditional GPU vector units, which can increase performance in productivity, gaming and creator applications.
    • Xe Super Sampling (XeSS) leverages the XMX AI engines in Intel Arc graphics to deliver high-performance, AI-accelerated upscaling: a deep-learning model synthesizes images that are very close to the quality of native high-resolution rendering (a conceptual sketch of this style of upscaler follows this list). XeSS is coming in the summer and will be supported on all products with Arc A-Series graphics.
    • Intel Arc A-Series GPUs are the first in the industry to offer full AV1 hardware acceleration, including both encode and decode, delivering faster video encode and higher quality streaming while consuming the same internet bandwidth. We've worked with industry partners to ensure that AV1 support is available today in many of the most popular media applications, with broader adoption expected this year. The AV1 codec will be a game changer for the future of video encoding and streaming.
    • We've integrated Intel Deep Link technologies to enable Intel Arc GPUs to work seamlessly with Intel CPUs and integrated graphics for a performance improvement across gaming, creation and streaming workloads. Intel Deep Link enables dynamic power sharing, intelligently distributing power across the platform to increase application performance by up to 30% in creation and compute-intensive applications. With Hyper Encode and Hyper Compute, Deep Link allows multi-engine acceleration in transcoding and AI tasks. More details are available in our product fact sheet.
  • Community Experiences: Our Intel Arc graphics are more than another piece of hardware in your PC. They are your portal to play and create. We have a dedicated team focused on delivering Day 0 game-ready drivers, which you'll be able to track in our new Intel Arc Control interface, an all-in-one hub that puts you in full control of the gaming experience. Intel Arc Control includes custom performance profiles, built-in streaming, a virtual camera, integrated Game ON driver downloading, automatic game capture, and more. The app supports Intel Iris Xe graphics and Intel Arc GPUs for a unified software experience. By working with our developer partners, we are making a growing portfolio of Intel-optimized games and multimedia applications available to discrete graphics customers through special launch bundles. Bundles will vary based on the system and the region, but the first of these gamer and creator bundles is rolling out in April with the launch of our A-Series mobile products. Our goal is to deliver something new and fun to the community every day of the year. We invite you to connect with us and join the conversation on our Intel Insiders Discord.
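To make the XeSS item above concrete, here is a minimal, purely illustrative sketch of how a temporal, AI-accelerated upscaler of this general kind operates: render at low resolution, reproject the previous output using motion vectors, and let a learned model blend the two. Everything here is hypothetical; the real XeSS network, its inputs and its API are not public in this form, and the fixed-weight blend below merely stands in for the neural network.

```python
# Illustrative sketch of a temporal AI upscaler (NOT Intel's XeSS code).
import numpy as np

def upsample_nn(frame: np.ndarray, scale: int) -> np.ndarray:
    """Nearest-neighbour upsample of an (H, W, 3) frame."""
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

def warp(history: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """Reproject last frame's output using per-pixel motion vectors."""
    h, w, _ = history.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys - motion[..., 1].round().astype(int), 0, h - 1)
    src_x = np.clip(xs - motion[..., 0].round().astype(int), 0, w - 1)
    return history[src_y, src_x]

def blend_network(upsampled: np.ndarray, warped_history: np.ndarray) -> np.ndarray:
    """Stand-in for the learned model: a fixed-weight temporal blend."""
    return 0.1 * upsampled + 0.9 * warped_history

def upscale_step(low_res, motion, history, scale=2):
    """One frame of temporal upscaling: upsample, reproject, blend."""
    up = upsample_nn(low_res, scale)
    if history is None:
        return up
    return blend_network(up, warp(history, motion))

# Render at 960x540, present at 1920x1080.
low_res = np.random.rand(540, 960, 3).astype(np.float32)
motion = np.zeros((1080, 1920, 2), dtype=np.float32)  # target-resolution motion vectors
frame1 = upscale_step(low_res, motion, history=None)
frame2 = upscale_step(low_res, motion, history=frame1)
print(frame2.shape)  # (1080, 1920, 3)
```

The point of running the model on XMX hardware is that this per-frame inference has to fit into a few milliseconds of frame budget, which is where the matrix engines' extra throughput matters.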
Looking Ahead
Today marks the first step in our journey. You'll see Intel Arc graphics continue to improve and evolve, with new features and an ever-expanding ecosystem coming throughout the year. And for desktop enthusiasts, our Intel Arc graphics add-in cards will be coming this summer.

We are excited, and we hope you are too. It's going to be a big year for Intel Arc graphics.


53 Comments on Intel Formally Announces Arc A-series Graphics

#26
AnotherReader
Steevo

The 5700G has half the shaders, shares the DDR4, and runs at half the speed at low settings.

The 6500 has the exact same number of cores, memory & bus, but runs at almost 2X the core speed, at high settings, and it's gimped by the PCIe.

From what it seems, the A370M is about 30-40% slower than the 6500 XT at half the power budget. So watt for watt they seem to be about 15% behind AMD on TSMC 6 nm, with Intel on 7 nm. If their hardware scales and is priced right with good drivers, they could be here for the fight.



"The Hyper Encode workload measures the time it takes to transcode a 4K/30fps AVC @ 57.9Mbps clip to 4K/30fps HEVC @ 30Mbps High Quality format with 3 applications: HandBrake, DaVinci Resolve, Cyberlink PowerDirector. The comparison for the claim is using both Alder Lake integrated graphics and Alchemist to encode in a I+I configuration versus the integrated graphics adapter alone."

"The AV1 workload measures the time it takes to transcode a 4K/30fps AVC @ 57.9Mbps clip to 4K/30fps AV1 @ 30Mbps High Speed format. The comparison for the 50x claim is using the Alder Lake CPU (software) to transcode the clip on a public FFMPEG build versus Alchemist (hardware) on a proof-of-concept Intel build."
40% slower sounds like a good estimate. TPU only tests at Ultra settings, so I have to use Gamers Nexus's really old review of The Witcher 3.

If this still holds true, then it's in the 1050 Ti range, or around the new 6000-series APUs from AMD. I'm using TPU's review of the 1050 Ti for my estimate.

#27
Steevo
AnotherReader: 40% slower sounds like a good estimate. TPU only tests at Ultra settings, so I have to use Gamers Nexus's really old review of The Witcher 3.

If this still holds true, then it's in the 1050 Ti range, or around the new 6000-series APUs from AMD. I'm using TPU's review of the 1050 Ti for my estimate.

1080p 60 FPS gaming at high settings will be here next year with an iGPU or basic dedicated hardware. My whole (really old) machine could be replaced with a laptop and gain at least 50% more CPU performance and on-par GPU performance, if not more with scaling tech.
#28
Denver
Steevo

The 5700G has half the shaders, shares the DDR4, and runs at half the speed at low settings.

The 6500 has the exact same number of cores, memory & bus, but runs at almost 2X the core speed, at high settings, and it's gimped by the PCIe.

From what it seems, the A370M is about 30-40% slower than the 6500 XT at half the power budget. So watt for watt they seem to be about 15% behind AMD on TSMC 6 nm, with Intel on 7 nm. If their hardware scales and is priced right with good drivers, they could be here for the fight.



"The Hyper Encode workload measures the time it takes to transcode a 4K/30fps AVC @ 57.9Mbps clip to 4K/30fps HEVC @ 30Mbps High Quality format with 3 applications: HandBrake, DaVinci Resolve, Cyberlink PowerDirector. The comparison for the claim is using both Alder Lake integrated graphics and Alchemist to encode in a I+I configuration versus the integrated graphics adapter alone."

"The AV1 workload measures the time it takes to transcode a 4K/30fps AVC @ 57.9Mbps clip to 4K/30fps AV1 @ 30Mbps High Speed format. The comparison for the 50x claim is using the Alder Lake CPU (software) to transcode the clip on a public FFMPEG build versus Alchemist (hardware) on a proof-of-concept Intel build."
It must be RX 680M level, depending on the laptop/TDP/RAM speed.

It's the same full 96EU iGPU from Intel's slides.
#29
Dr. Dro
Selaya: wait what, that can't be right, unless I'm stupid or something (or the software is, idk)

I ran a few of my recordings through HandBrake at 30 constant quality, and Turing NVENC basically threw a file twice the size of software medium at me.
Curiously, slower and up also threw a file of a larger size than medium at me, but maybe they have higher fidelity despite the identical 30 quality preset? (idk, that would be the only logical explanation, since I did not really inspect/watch the results any further)
It's probably your configuration; you can often tune frame settings, optimizations and bit rates for a given medium to maximize the size-to-quality ratio :)
Steevo"The Hyper Encode workload measures the time it takes to transcode a 4K/30fps AVC @ 57.9Mbps clip to 4K/30fps HEVC @ 30Mbps High Quality format with 3 applications: HandBrake, DaVinci Resolve, Cyberlink PowerDirector. The comparison for the claim is using both Alder Lake integrated graphics and Alchemist to encode in a I+I configuration versus the integrated graphics adapter alone."

"The AV1 workload measures the time it takes to transcode a 4K/30fps AVC @ 57.9Mbps clip to 4K/30fps AV1 @ 30Mbps High Speed format. The comparison for the 50x claim is using the Alder Lake CPU (software) to transcode the clip on a public FFMPEG build versus Alchemist (hardware) on a proof-of-concept Intel build."
Personally I am less concerned with the 50x marketing claim and more concerned with its ability to record high-resolution AV1 in real time without bogging down my processor :D

It's not like the Gen 6 NVENC in Turing/Ampere cards is 50 times faster than a modern competent processor either... those claims do give me some eerie vibes from the earliest days of GPU-processed video coding. Remember Badaboom? I feel old. :eek:
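For reference, here is a minimal sketch of what handing such a transcode to the GPU looks like in practice, assuming an FFmpeg build that includes Intel's Quick Sync AV1 encoder (av1_qsv) and a driver that exposes it; the file names and bitrate target are illustrative, borrowed from Intel's test description quoted above.

```python
# Sketch: offload an AVC -> AV1 transcode to the GPU's media engine via FFmpeg.
# Assumes an FFmpeg build with Intel Quick Sync (QSV) support; encoder
# availability varies by build, driver and hardware.
import subprocess

cmd = [
    "ffmpeg",
    "-hwaccel", "qsv",           # decode the AVC source in hardware
    "-i", "input_4k30_avc.mp4",  # illustrative input file name
    "-c:v", "av1_qsv",           # hardware AV1 encoder
    "-b:v", "30M",               # ~30 Mbps target, as in Intel's test clip
    "output_4k30_av1.mkv",
]
subprocess.run(cmd, check=True)
```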
#30
zlobby
Dr. Dro: Realtime AV1 encoding is a huge deal! This format beats the pants out of HEVC/h265 and is not bound by the same restrictive licensing that led the format to be mostly ignored.

NVENC's advantage over AMD's competing Video Core Next is pretty much only in AVC/h264 encoding; the sole reason this is relevant is that, due to royalties charged by the HEVC patent holders, video streaming services refused to adopt the format and kept using the old AVC format, whose patents have already expired, or adopted the VP9 codec.

At 5 Mbps (Twitch's maximum bandwidth), 1080p30 should encode with virtually imperceptible loss using the AV1 format; streams will have practically Blu-ray quality.
Not quite as you describe it but hey...

Even if so, I can't wait to see my favorite e-thot's mascara mistakes in glorious 4K at 10Mbps! Gotta preserve bandwidth, hence preserve data plan, hence save more money to donate as simping!
#32
Luminescent
I think these mobile GPUs are more intended to at least put up a fight with Apple M1 laptops than to actually compete with AMD and Nvidia in the gaming space.
Somewhere at the beginning of the presentation they show some NLE timeline and then Adobe Photoshop, I think; this indicates Intel could've beefed up the media engine to decode files like 4K 120 fps and 8K 30/60 fps, and added better support for Adobe.
Right now Windows laptops are a joke for photo/video content creation compared to Apple: they consume a lot of power and get beaten by super thin and light Apple laptops. Even full-tower computers with an RTX 3090 and 5950X can't match Apple M1 timeline playback in video editing programs; it's getting ridiculous.
#33
Valantar
Luminescent: I think these mobile GPUs are more intended to at least put up a fight with Apple M1 laptops than to actually compete with AMD and Nvidia in the gaming space.
Somewhere at the beginning of the presentation they show some NLE timeline and then Adobe Photoshop, I think; this indicates Intel could've beefed up the media engine to decode files like 4K 120 fps and 8K 30/60 fps, and added better support for Adobe.
Right now Windows laptops are a joke for photo/video content creation compared to Apple: they consume a lot of power and get beaten by super thin and light Apple laptops. Even full-tower computers with an RTX 3090 and 5950X can't match Apple M1 timeline playback in video editing programs; it's getting ridiculous.
None of that changes with this architecture - outside of AV1 support it's pretty standard. It doesn't have anything even moderately resembling Apple's ProRes support, nor their level of software/hardware optimization for these tasks. I would love to see Windows platforms improve in this respect, but this ain't it.
#34
Luminescent
Valantar: None of that changes with this architecture - outside of AV1 support it's pretty standard. It doesn't have anything even moderately resembling Apple's ProRes support, nor their level of software/hardware optimization for these tasks. I would love to see Windows platforms improve in this respect, but this ain't it.
Do you have some inside information? How do you know?
Apple ProRes is not hard at all to decode. The Apple M1 shines with long-GOP codecs and files like 8K 30 fps from the Canon R5, 4K 120 fps from the Sony a7S III, and 5.7K from the GH6; they are perfectly smooth on an Apple M1 laptop, and not smooth at all on a 5950X and an RTX 3090.
If this is not addressed ASAP, then Intel and Nvidia can kiss that crowd goodbye: a 10-20 W laptop beats a 500 W or more full-tower computer, one doing it with brute force and the other with dedicated hardware.
#35
Vayra86
Wow, they shrunk Iris down again and added EUs and features. +30% over their IGP. Surprising, indeed. This is 'Arc'? Appearances can be deceiving...
Luminescent: I think these mobile GPUs are more intended to at least put up a fight with Apple M1 laptops than to actually compete with AMD and Nvidia in the gaming space.
Somewhere at the beginning of the presentation they show some NLE timeline and then Adobe Photoshop, I think; this indicates Intel could've beefed up the media engine to decode files like 4K 120 fps and 8K 30/60 fps, and added better support for Adobe.
Right now Windows laptops are a joke for photo/video content creation compared to Apple: they consume a lot of power and get beaten by super thin and light Apple laptops. Even full-tower computers with an RTX 3090 and 5950X can't match Apple M1 timeline playback in video editing programs; it's getting ridiculous.
You might be on to something; the gaming performance here is not going to turn heads at all.
#36
Valantar
Luminescent: Do you have some inside information? How do you know?
Apple ProRes is not hard at all to decode. The Apple M1 shines with long-GOP codecs and files like 8K 30 fps from the Canon R5, 4K 120 fps from the Sony a7S III, and 5.7K from the GH6; they are perfectly smooth on an Apple M1 laptop, and not smooth at all on a 5950X and an RTX 3090.
If this is not addressed ASAP, then Intel and Nvidia can kiss that crowd goodbye: a 10-20 W laptop beats a 500 W or more full-tower computer, one doing it with brute force and the other with dedicated hardware.
... it doesn't require inside information, it just requires noticing that Intel isn't advertising any form of hardware ProRes decoding. If they had it, they would be advertising it. I'm well aware that it's quite easy to decode - it's nowhere near as heavily compressed as H.264, H.265 or AV1, after all - but most likely Apple just won't give licences to build hardware decoders to its major competitors. That's my guess, at least.

As for how Apple handles those other codecs, that's that hardware/software optimization I'm talking about. They've got several decades worth of experience in optimizing for media playback, encoding, and editing. This has been a core focus for Apple since the 1980s. It's hardly surprising that they vastly outperform competitors that have never shown much of a long-term interest in doing this, and that also lack the full stack integration of Apple. I wouldn't be surprised at all to learn that Apple's decode blocks also accelerate a lot of non-prores codecs that aren't advertised - they ultimately don't need to do so specifically, as long as it works well and their target audience knows it. But of course they've also got some serious software/OS chops here - they managed to make Final Cut vastly outperform Premiere and other competitors on Intel Macs after all.

I completely agree that both Intel and AMD need to step up their hardware acceleration game, but they've got a significant hurdle there: their target markets are much more diverse, which makes it all the harder to justify the die area required by large hardware accelerator arrays that only serve a relatively small niche of that market. Apple of course sells to a lot of people beyond media professionals, but those are their core audience, and they really don't care about anyone else when it comes to Macs - which can be seen in a lot of their hardware choices. They would be fine if everyone else just had an iPhone, and left the Mac to the professionals.
Vayra86: Wow, they shrunk Iris down again and added EUs and features. +30% over their IGP. Surprising, indeed. This is 'Arc'? Appearances can be deceiving...

You might be on to something; the gaming performance here is not going to turn heads at all.
I don't disagree - and the first DG1 implementations were marketed only towards media production, after all - but remember that these are low-end comparisons, in a market where iGPUs are much more powerful than 1-2 years ago. Most likely they expected a bigger difference when these designs were first made, but even still, the larger Arc GPUs are likely to go far past this level (unless they've really messed up the design).
#37
medi01
Steevo: They are working hard at making a CPU to be competitive, and given their success in GPU tech and FPGA-like hardware they might pull it off, but they won't make a dent in the serious laptop/desktop market, as the whole system is x86-64 based and the compatibility is too much of an issue.
Tens of millions of bazingas like the MX series are being sold, and if you check the price diff, it's about 100 bucks for that "faux discrete GPU" alone.

At least that part of the market would be wiped out. (AMD APUs have already been beating that for quite some time; now Intel will.)

The "serious laptop market", if you mean laptops wielding 6800XT/3080-like GPUs, is just a small fraction of the market; most of it is on crap like the MX and 1650. All that is now threatened by Intel. (And, mind you, a nice discount if a CPU+GPU bundle is used is a given.)
#38
Valantar
Honestly, Intel's performance comparison here is kind of funny. On the one hand you have an i7-1280P, a 28W 6P+8E 96EU Xe CPU, and on the other hand you have a 12700H, a 45W 6P+8E 96EU (disabled/inactive in this testing, I assume) CPU alongside a 128EU GPU with an undisclosed power budget. And the latter outperforms the former by ... 25-33%? Yeah, that's not something I'd be shouting from the rooftops either. Couldn't they at least have compared them using the same CPU?
#39
ratirt
Valantar: Honestly, Intel's performance comparison here is kind of funny. On the one hand you have an i7-1280P, a 28W 6P+8E 96EU Xe CPU, and on the other hand you have a 12700H, a 45W 6P+8E 96EU (disabled/inactive in this testing, I assume) CPU alongside a 128EU GPU with an undisclosed power budget. And the latter outperforms the former by ... 25-33%? Yeah, that's not something I'd be shouting from the rooftops either. Couldn't they at least have compared them using the same CPU?
I've noticed that too. Maybe it's meant to confuse the people looking at it. Also, it's good advertisement for the new Intel CPU, like 'this one runs faster with the new Intel CPU'.
#40
trsttte
Steevo: So a lot of numbers, like TW3 at medium settings is 68 FPS, which is faster than no number from Iris, at an unknown TDP, an unknown frequency, with unknown memory and an unknown cooling system.

A lot of fluff in them there slides. Good thing it's e-fluff and not real; they would have a mess on their hands trying to dispose of that much physical fluff.

"The Elden Ring was CAPTURED on a series 7 GPU," not rendered, merely captured.
Hmm, are you suggesting they used a Thunderbolt eGPU to render the game, or what?
HisDivineOrder: AV1 encoding, but HDMI 2.0b.

XeSS, but PCIe 4.0 in 2H 2022.

Price better be incredible.
Alder Lake mobile (or Ryzen for that matter) doesn't have PCIe 5.0, and it's also not necessary. I'd think the desktop cards (at least the higher-end 5 and 7 series) will have PCIe 5.0, but it's not like it will make the performance any different (desktop Alder Lake including it was pretty much marketing when there are no devices that use it).

HDMI 2.0 is disappointing (at least they were honest instead of slapping on the now-allowed 2.1 sticker), but they can offer 2.1 ports with converters from the DisplayPort connectors. I also wonder if this is just for the laptop market and whether, again, things will be different on the desktop cards (in laptops they need to deal with the connection to/from the iGPU and/or a mux, which might be the limiting factor).
#41
Valantar
ratirt: I've noticed that too. Maybe it's meant to confuse the people looking at it. Also, it's good advertisement for the new Intel CPU, like 'this one runs faster with the new Intel CPU'.
But the i7-1280P is just as new as the 12700H (actually newer, the U and P series released later than the H series). It just doesn't add up beyond these GPUs just not performing very well.
trsttte: Alder Lake mobile (or Ryzen for that matter) doesn't have PCIe 5.0, and it's also not necessary. I'd think the desktop cards (at least the higher-end 5 and 7 series) will have PCIe 5.0, but it's not like it will make the performance any different (desktop Alder Lake including it was pretty much marketing when there are no devices that use it).
Yeah, PCIe 5.0 would have been an utter waste. Consumer GPUs don't meaningfully saturate PCIe 3.0 x16 yet, and 4.0 x16 is plenty still. Going to 5.0 just ramps up power consumption (not by much, but still something) and increases board complexity for no good reason.
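A quick back-of-envelope check of those link rates (per-lane transfer rate times 128b/130b line coding, across 16 lanes, ignoring protocol overhead):

```python
# x16 link bandwidth per PCIe generation.
gens = {
    "PCIe 3.0": 8.0,    # GT/s per lane
    "PCIe 4.0": 16.0,
    "PCIe 5.0": 32.0,
}
for name, gts in gens.items():
    gbytes = gts * (128 / 130) * 16 / 8   # GB/s across 16 lanes
    print(f"{name} x16: ~{gbytes:.1f} GB/s")
# PCIe 3.0 x16: ~15.8 GB/s, 4.0: ~31.5 GB/s, 5.0: ~63.0 GB/s;
# each generation doubles headroom that a low-end mobile GPU
# wasn't going to use anyway.
```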
#42
Luminescent
Valantar: ... it doesn't require inside information, it just requires noticing that Intel isn't advertising any form of hardware ProRes decoding. If they had it, they would be advertising it. I'm well aware that it's quite easy to decode - it's nowhere near as heavily compressed as H.264, H.265 or AV1, after all - but most likely Apple just won't give licences to build hardware decoders to its major competitors. That's my guess, at least.
ProRes is easy to decode and important to high-end productions for the flexibility; I don't think those people edit on laptops.
If someone shoots ProRes, they are gonna color grade, and once you start to heavily color grade your footage, then an RTX 3090 makes sense.
Valantar: As for how Apple handles those other codecs, that's that hardware/software optimization I'm talking about. They've got several decades worth of experience in optimizing for media playback, encoding, and editing. This has been a core focus for Apple since the 1980s. It's hardly surprising that they vastly outperform competitors that have never shown much of a long-term interest in doing this, and that also lack the full stack integration of Apple. I wouldn't be surprised at all to learn that Apple's decode blocks also accelerate a lot of non-prores codecs that aren't advertised - they ultimately don't need to do so specifically, as long as it works well and their target audience knows it. But of course they've also got some serious software/OS chops here - they managed to make Final Cut vastly outperform Premiere and other competitors on Intel Macs after all.
You talk like this is something very complicated that can't be done; it's very simple: they just need the hardware to decode those files so they play in real time. You can find that hardware even in cheap Android phones, even in the cameras that actually shoot that footage at 8K; they have dedicated silicon so you can play that file back in camera in real time.
They didn't do this until now because it takes dedicated silicon space for just that; they'd rather brute-force it at 300 W than reserve silicon space for a dedicated media engine that can decode at 5-10 W.
Also, as an editor, I just need that file to play in real time at full resolution when I edit it; I don't need it to play at 10x.
#43
Valantar
Luminescent: ProRes is easy to decode and important to high-end productions for the flexibility; I don't think those people edit on laptops.
If someone shoots ProRes, they are gonna color grade, and once you start to heavily color grade your footage, then an RTX 3090 makes sense.
You seem unaware that quite a few entry-level cameras and low-end recorders like Atomos's products record in ProRes. It's not the most common format, but it's still really common, and certainly not only among high-end productions.
Luminescent: You talk like this is something very complicated that can't be done; it's very simple: they just need the hardware to decode those files so they play in real time. You can find that hardware even in cheap Android phones, even in the cameras that actually shoot that footage at 8K; they have dedicated silicon so you can play that file back in camera in real time.
They didn't do this until now because it takes dedicated silicon space for just that; they'd rather brute-force it at 300 W than reserve silicon space for a dedicated media engine that can decode at 5-10 W.
Also, as an editor, I just need that file to play in real time at full resolution when I edit it; I don't need it to play at 10x.
I never said it was complicated, I said that Apple has this down pat because they've worked concertedly towards it for decades while their competitors haven't. I'm quite aware of the ubiquitous nature of hardware video encoders and decoders as well - I don't live under a rock. But that doesn't change any of what I've said - and, for the record, Apple are AFAIK the only computer chipmaker with ProRes hardware accelerations (cameras, recorders etc. are another thing entirely). Also, what you're saying isn't quite accurate: the improved timeline smoothness on Apple devices illustrates precisely that achieving real-time playback by itself isn't necessarily enough. You also need the surrounding software to be responsive, you need to be able to fetch the right data quickly, to handle interrupts and jumping around a file smoothly, and you need the OS to handle the IO and threads in a way that's conducive to this being smooth. Heck, Apple laptops consistently outperformed Intel laptops with the exact same CPU and equally fast storage and GPUs (or faster GPUs) in these workloads, and that certainly wasn't limited to ProRes, nor to QuickSync-accelerated codecs.
#44
Dr. Dro
Valantar: I never said it was complicated, I said that Apple has this down pat because they've worked concertedly towards it for decades while their competitors haven't. I'm quite aware of the ubiquitous nature of hardware video encoders and decoders as well - I don't live under a rock. But that doesn't change any of what I've said - and, for the record, Apple are AFAIK the only computer chipmaker with ProRes hardware accelerations (cameras, recorders etc. are another thing entirely). Also, what you're saying isn't quite accurate: the improved timeline smoothness on Apple devices illustrates precisely that achieving real-time playback by itself isn't necessarily enough. You also need the surrounding software to be responsive, you need to be able to fetch the right data quickly, to handle interrupts and jumping around a file smoothly, and you need the OS to handle the IO and threads in a way that's conducive to this being smooth. Heck, Apple laptops consistently outperformed Intel laptops with the exact same CPU and equally fast storage and GPUs (or faster GPUs) in these workloads, and that certainly wasn't limited to ProRes, nor to QuickSync-accelerated codecs.
That'd be because it was developed by and is maintained by Apple. Their computers and software really are very much designed for each other, at the complete expense of everything else. It certainly is a peculiar, weird format. Apple's documentation claims that the full-quality ProRes (4444 XQ) was designed for a ~500 Mbps bit rate assuming 1080p at the NTSC frame rate (29.97 fps). I get that the quality must be downright insane, but... it's such an unwieldy format; it's really intended for use during the mastering and development stage, definitely not for consumption. At the 4K IVTC film standard (23.976 fps), 4444 XQ has a data rate of around 1.7 Gbit/s, totaling ~764 GB per hour of footage. That's insane, and imo the biggest bottleneck on managing a format like ProRes is storage performance.
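The arithmetic behind that per-hour figure is easy to verify:

```python
# ~1.7 Gbit/s of ProRes 4444 XQ over an hour, converted to gigabytes.
bitrate_gbit_s = 1.7
seconds_per_hour = 3600
gb_per_hour = bitrate_gbit_s * seconds_per_hour / 8  # bits -> bytes
print(f"~{gb_per_hour:.0f} GB/hour")  # ~765 GB/hour; matches the ~764 GB figure above within rounding
```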

Vegas Pro, which was under Sony Creative Software for a very long time before it was acquired by Magix, also supports(ed? out of the loop here) a whole host of Sony-specific codecs used by their high-end cinema cams and many film industry standards, so I would guess that it's just a thing of the trade.
#45
Luminescent

@Valantar Did you ever shoot ProRes? You get huge files for a few seconds of footage; it's insane.
The latest Panasonic GH6 has internal ProRes: 5.7K 25p 4:2:2 at 1.6 Gbps.
What I want to say is that most people don't care about ProRes hardware decoding: for the majority of YouTubers and event and corporate videographers, it's all H.264 and H.265, maybe 10-bit for a bit more data.
#46
Jack Slayter
This is for mid-2022 and still no HDMI 2.1. Very disappointing!
#47
MikeMurphy
I'm quite happy with a modest low-cost GPU upgrade available in laptops. Features seem spot on.

No doubt beefy GPUs are coming to the discrete desktop space. Couldn't care less if Intel competes at the very high end, as long as it's competitive in whatever performance brackets those products end up in.
#48
Valantar
Luminescent: @Valantar Did you ever shoot ProRes? You get huge files for a few seconds of footage; it's insane.
Yep, quite familiar with that. The SSDs my partner uses for her Atomos recorder definitely get to stretch their legs. The file size is definitely a negative, but it's not that problematic.
Luminescent: The latest Panasonic GH6 has internal ProRes: 5.7K 25p 4:2:2 at 1.6 Gbps.
And yet that's a relatively affordable, compact camera, of a brand, type and class widely used by all kinds of video producers.
Luminescent: What I want to say is that most people don't care about ProRes hardware decoding: for the majority of YouTubers and event and corporate videographers, it's all H.264 and H.265, maybe 10-bit for a bit more data.
But now you're adding all kinds of weird caveats that don't apply to my initial statement that you are arguing against. I never specified that this applied to "most people", nor did I say anything about the preferences of various content producers. None of this changes the fact that even on identical hardware, Apple managed to make their systems and software (but also third party software to some extent) outperform Windows systems, which speaks to the importance of system, OS and software design on top of hardware encode/decode performance.

And, for the record, I think you're really underestimating the ubiquity of ProRes. Is it mainly for professionals? Absolutely. Is it also used by pretty much anyone with access to it who wants to do color grading or other advanced editing, or want to preserve the full dynamic range of their shots? Again, yes. HEVC or other compressed codecs, even when recording in some kind of log color format, lose way too much information for most videographers I've met who have some kind of artistic ambition. They could no doubt mostly do what they do 95-99% as well without ProRes, but they still use ProRes. It being a widely accepted industry standard is also a huge draw in this regard.
#49
ratirt
Valantar: But the i7-1280P is just as new as the 12700H (actually newer, the U and P series released later than the H series). It just doesn't add up beyond these GPUs just not performing very well.
But the H series is the upper model in the mobile stack. So from that perspective it has some sort of meaning I think.
#50
Valantar
ratirt: But the H series is the upper model in the mobile stack. So from that perspective it has some sort of meaning I think.
I would argue the opposite - that that very point undermines the comparison, as it implies that the GPU needs a 45W CPU to outperform a 28W CPU and its iGPU, which ... well, that's not a good look. I would also honestly be surprised if this is a common system configuration - it doesn't make sense to me to pair a 45W H-series CPU with a ... let's guess at 35-50W low-end GPU. That would be much better paired with a 28W P-series CPU, even if the H would likely perform a few % better. It just doesn't add up in the end.