Friday, July 15th 2022
Intel Previews Arc A750 Graphics Card Performance
Intel has decided to share some more details on its upcoming Arc A750 graphics card, the same one that appeared briefly in a Gamers Nexus video just the other day. The exact product being previewed is the Intel Arc A750 Limited Edition graphics card, but the company didn't reveal any specifications of the card in the video it posted. What is revealed is that the card will outperform an NVIDIA GeForce RTX 3060 card at 1440p in the five titles that Intel provided performance indications and average frame rates for. The five games are F1 2021, Cyberpunk 2077, Control, Borderlands 3 and Fortnite; in other words, mostly quite demanding games, with F1 2021 and Fortnite being the exceptions.
The only game for which the video gives any real insight into actual performance is Cyberpunk 2077, where Ryan Shrout details the game settings and the resulting frame rates. At 2560 x 1440, using high settings, the Arc A750 delivers 60.79 FPS, with a low of 50.54 FPS and a max of 77.92 FPS. Intel claims this is 1.17 times the performance of an EVGA GeForce RTX 3060 XC Gaming 12G graphics card. At least Intel didn't try to pull a fast one, as the company provided average frame rates for all the other games tested as well, not just how many times faster the Intel card was, and you can see those results below. The test system consisted of an Intel Core i9-12900K fitted to an ASUS ROG Maximus Z690 Hero board, 32 GB of 4800 MHz DDR5 memory and a Corsair MP600 Pro XT 4 TB NVMe SSD, running Windows 11 Pro. According to the video, the Arc graphics cards should launch "this summer" and Intel will be releasing more details between now and the launch.
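For context, Intel's relative-performance claim can be converted back into an absolute number for the comparison card. The sketch below derives an implied RTX 3060 average purely from the 1.17x figure; the RTX 3060 result is not stated in the video, so treat it as an inference rather than a measurement.

```python
# Derive the implied RTX 3060 average frame rate from Intel's own numbers.
# Only the A750 figure and the 1.17x multiplier come from the video; the
# RTX 3060 value below is inferred, not measured.

a750_avg_fps = 60.79          # Cyberpunk 2077, 2560x1440, high settings (per Intel)
relative_performance = 1.17   # Intel's claimed A750 vs. RTX 3060 ratio

rtx3060_implied_fps = a750_avg_fps / relative_performance
print(f"Implied RTX 3060 average: {rtx3060_implied_fps:.2f} FPS")  # ~51.96 FPS
```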
Source:
Intel
99 Comments on Intel Previews Arc A750 Graphics Card Performance
260 dollars max and include a game.
Add another 200-300 on top of that in today's economy.
What's more likely is that these will be sold in volume to East Asian markets and OEM markets at a lower price. You fall into the trap of thinking that driver optimizations can just compensate for any shortcoming, but they can't.
The underlying problem for this architecture is clearly showcased when synthetic benchmarks scale much better than real games; that tells us the problem is scheduling inside the GPU, which is not done by the driver at all. This happens because synthetics are typically designed to be more computationally intensive, while games are designed to render efficiently.
Intel's problem is essentially very similar to the problem that AMD has had for years, just a fair bit worse. If you remember back when the RX 400/500 were competing with Pascal, AMD typically needed 40-50% more FLOPS, more memory bandwidth, etc. to produce the same performance as Nvidia. This led many AMD fans to claim that these GPUs had a huge potential to unlock with driver optimizations, thinking there would be 30-40+% more to gain here. Even some reviewers claimed that these cards would perform better over time. So did the RX 400/500 ever get that ~30-40% performance gain over Nvidia's counterparts? No, because the gap was never caused by driver overhead.
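As a rough illustration of the kind of gap being described, here is a back-of-the-envelope per-TFLOP comparison for that era. The throughput figures are approximate public spec-sheet numbers, and the "roughly equal gaming performance" premise is the commenter's, not a measurement.

```python
# Rough per-TFLOP efficiency comparison, Polaris vs. Pascal.
# Approximate spec-sheet FP32 throughput at boost clocks:
rx_580_tflops   = 6.17  # Radeon RX 580
gtx_1060_tflops = 4.4   # GeForce GTX 1060 6GB

# Premise from the comment: both cards land at roughly the same average FPS.
# Under that assumption, AMD needed this much more raw compute for parity:
extra_compute_needed = rx_580_tflops / gtx_1060_tflops - 1
print(f"Extra FP32 throughput for parity: {extra_compute_needed:.0%}")  # ~40%
```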
Intel can certainly fix their driver bugs, and perhaps squeeze out a few percent here and there if they find some overhead, but the overall performance level compared to competing products will remain roughly the same. Think about it: going from RTX 3060 performance to an RTX 3060 Ti is about ~30%, and to an RTX 3070 about ~45%; there is no way that removing driver overhead is going to yield that amount of performance gain. And never mind competing with Nvidia's high-end products, which have roughly twice the performance.
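That point about driver overhead can be made concrete with a simple bound: if only the CPU-side (driver) share of frame time shrinks, the achievable speedup is capped by how large that share was to begin with. A minimal sketch using a simplified serial frame-time model and assumed, illustrative overhead shares (none of these numbers come from Intel):

```python
# Upper bound on the gain from eliminating driver overhead, Amdahl-style.
# Simplified model: frame_time = driver (CPU) portion + GPU portion.

def max_speedup_from_driver_fix(driver_share: float) -> float:
    """Best case: the driver's share of frame time drops to zero."""
    return 1.0 / (1.0 - driver_share)

# Illustrative, assumed driver shares of total frame time:
for share in (0.05, 0.10, 0.15):
    gain = max_speedup_from_driver_fix(share) - 1
    print(f"driver share {share:.0%} -> at most {gain:.1%} faster")

# Even a generous 15% share caps the gain well below the ~30-45% gap
# to an RTX 3060 Ti / RTX 3070 mentioned above.
```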
Lastly, there is not even a guarantee that performance is going to improve in all cases. There is a very real chance that some cases will get slightly worse with bugfixes, especially if they end up doing quick fixes to eliminate bugs.
The big hole in the low-end graphics market is a nice spot for Intel to aim for, but they're going to be edging into the mid range on their first try.
AMD Bulldozer was fun to mess with too; seeing what you could do with it and what it couldn't do.
After the 7000 series, AMD moved its focus from high-end GPUs to heterogeneous computing. It wasn't until Lisa Su that AMD moved its focus back to high-end GPUs. To say "look how long it took AMD to become competitive again" is misleading because it ignores the fact that AMD intentionally moved its focus away from high-end GPUs for a while, in addition to almost going bankrupt.
Intel doesn't have either of those two factors in play. In reality, once Lisa Su did come in, RDNA 1 released relatively shortly after, proving that when AMD does set out to create a competitive GPU, it can. RDNA 1 was competitive with Turing on an architectural level; it didn't release into the high end due to a lack of resources on AMD's part. Again, not a restriction Intel has.
Intel's GPU engineers are former AMD and Nvidia employees. With Intel's resources at their disposal, they have everything needed to make a good GPU uArch in the first generation. Let's not make excuses for Intel.
"intentionally", give me a break. They intentionally made Vega suck after hyping it
People still have some kick-ass rigs out there.
What you get is a similar approach to AMD Vega: good at compute, not so good at rendering graphics. The RX 400/500 series was the same. It's Raja all over again.
The problem is they need 50% more power (as of right now) in order to compete with AMD or Nvidia cards.
Nvidia and AMD, on the other hand, are using fully graphics-oriented chips that are designed for gaming. That explains why both are so efficient compared to Intel. For a few generations, Intel won't be a threat. But they are marketing it very cleverly, by using esports and whatnot.
Obviously, from a marketing standpoint, creating GPUs for compute and reselling the ones that didn't meet quality guidelines as graphics cards is a good thing to do; nothing gets lost. But when I saw the 8-pin header and that it can't even compete with a lower-end AMD graphics card, I'm like: stop it already. Just release it, stop the hype, stop picking titles that favor Intel graphics. Aim for the 2nd generation and perhaps then it can make a difference.
RT is pointless to bring up on a card with this performance level.
The performance that Intel showed was indicative of that.
Given the new info, the lineup seems something like the below to me:
A770 16GB 32Xe $399-379 ($399?)
A770 8GB 32Xe $359-329 ($349?)
A750 8GB 24-28Xe $319-279 ($299?)
A580 8GB 16-24Xe $249-229 ($249?)
A380 6GB 8Xe $139-129 ($129?)
A310 4GB 4-6Xe $109-99 ($99?)
If it's not good then they won't be much of a competitor.
Navi 33 will be at least 55% faster at 1080p vs. the A770.
Wondering whether anyone will even buy Intel :D
Regarding pricing, when Navi 33 launches Intel can just adjust pricing based on AMD's pricing and market conditions.
Regarding QHD performance in newer titles, the A770 will on average be faster than the 6650 XT, and the 6650 XT has a $399 SRP and a ≥$349 street price, so an A770 8GB at $349 will be just fine for the time being!
The VGA market needs a third player IMO, so it will be great for all of us if Intel succeeds.
Probably by Q4 2024 we will have the next Nvidia/AMD architectures, and Intel's response (Celestial) will be much closer (although it's early, an easy napkin prediction is that Intel's top SKU will be at least 2X faster in 4K relative to Navi 33!).
Should we be happy for Intel that in 2022 they MAYBE will release a GPU powerful enough to run the latest titles on the highest settings at over 144 FPS in FHD? I doubt that.
Personally, I'll be interested in an Intel GPU when its price is 30% lower than AMD or Nvidia at the same performance level.
Does Intel even provide an alternative to Nvidia or AMD software, like image or video capture and all the other stuff that comes in GeForce Experience or Adrenalin?
But one thing is for sure: Intel's first dGPU lineup will cost collectors a small fortune in the future, even if they deliver crap products.
The RX 6650 XT dropped to €399 at Mindfactory two days ago, probably in order to match the recent RTX 3060 drop to €399. Then clearly it's not for you.
There is no reason for Intel to discount a product so much when it is at the same performance level as the competition.
All the previews that we saw from China, and the new ones from GamersNexus etc., apparently didn't have major software bugs in the games tested; that's a good sign, and I expected a worse situation, to tell you the truth.
Performance level can easily be balanced with the right price, and you will find that a lot of people have much greater tolerance (-30%) than you.