Thursday, August 11th 2022
Intel Arc A750 Trades Blows with GeForce RTX 3060 in 50 Games
Intel earlier this week released its own performance numbers for as many as 50 benchmarks spanning the DirectX 12 and Vulkan APIs. From our own testing, the Arc A380 performs below par against its rivals in games based on the DirectX 11 API. Intel tested the A750 at 1080p and 1440p, and compared its performance numbers with those of the NVIDIA GeForce RTX 3060. Broadly, the testing shows the A750 to be 3% faster than the RTX 3060 in DirectX 12 titles at 1080p; about 5% faster at 1440p; about 4% faster in Vulkan titles at 1080p; and about 5% faster at 1440p.
All testing was done without ray tracing, and performance enhancements such as XeSS or DLSS weren't used. The small set of 6 Vulkan API titles shows a more consistent performance lead for the A750 over the RTX 3060, whereas the DirectX 12 titles see the two trade blows, with results varying widely among game engines. In "Dolmen," for example, the RTX 3060 scores 347 FPS compared to the Arc's 263. In "Resident Evil VIII," the Arc scores 160 FPS compared to the GeForce's 133 FPS. Such variations among the titles pull up the average in favor of the Intel card. Intel stated that the A750 is on course to launch "later this year," without being any more specific than that. The individual test results can be seen below. The testing notes and configuration follow.
Source:
Intel Graphics
85 Comments on Intel Arc A750 Trades Blows with GeForce RTX 3060 in 50 Games
And yes, I can imagine 3090s selling for $550. I mean, there are already used ones popping up at $850-900. Besides, even Rolex prices are coming back down this year, if we try to draw parallels with premium products, which these GPUs are. So yeah, I rest my case; I guess we'll see in a few months who's going to be right.
Here are your statements for the record:
... a 10+ core CPU and modern GPUs are huge carpets to hide underneath any performance problem.
Also an unoptimised game will sell more CPUs and GPUs than an optimized one, meaning not only you can market it faster, you can also get nice sponsor money from Nvidia, AMD and Intel, by partially optimizing for their architecture instead for everyones. Nice attempt at a straw man argument there. :rolleyes:
The literature on parallelization has been known since the '60s, and the limits of scaling are described by Amdahl's law. This is basic knowledge in CS studies; don't attempt to approach this subject before understanding it. Assuming you're limited to the scope of gaming here, game simulation (the "game loop") and rendering are both pipelined workloads, which means you have to apply Amdahl's law to each step of the pipeline, and you need to resync all the worker threads before continuing. Combining this with fixed deadlines for each workload (e.g. a 100 Hz tick rate gives 10 ms for game simulation, a 120 Hz framerate gives 8.3 ms for rendering) leaves little wiggle room for using large quantities of threads for small tasks. Each synchronization increases the risk of delays, either from the CPU side (SMT) or from the OS scheduler. Delays at each synchronization point pile up, and if the accumulated delay is large enough, it causes stutter or potentially game-breaking bugs. So in conclusion, there are limits to how many threads a game engine can make use of.
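To put rough numbers on that argument, here is a minimal sketch assuming 10% serial code per stage, a fixed 0.5 ms synchronization cost, and 30 ms / 20 ms of single-threaded work for simulation / render preparation; all of these figures are illustrative assumptions, not measurements:

```python
# Illustrative only: Amdahl's law per pipeline stage plus a fixed sync cost,
# checked against the 10 ms tick and 8.3 ms frame budgets mentioned above.
def stage_time_ms(single_thread_ms, serial_fraction, threads, sync_overhead_ms):
    """Time for one pipeline stage under Amdahl's law, plus a resync cost."""
    parallel_fraction = 1.0 - serial_fraction
    scaled = single_thread_ms * (serial_fraction + parallel_fraction / threads)
    return scaled + sync_overhead_ms  # all worker threads must resync here

for threads in (2, 4, 8, 16):
    sim = stage_time_ms(30.0, 0.10, threads, 0.5)      # game simulation stage
    render = stage_time_ms(20.0, 0.10, threads, 0.5)   # render preparation stage
    print(f"{threads:2d} threads: sim {sim:5.2f} ms (budget 10.0), "
          f"render {render:5.2f} ms (budget 8.3)")

# Returns diminish fast: going from 8 to 16 threads shaves under 2 ms off the
# simulation stage, and the serial fraction plus sync cost put a hard floor of
# 30*0.10 + 0.5 = 3.5 ms on it no matter how many threads are thrown at it.
```

Tighten the deadlines or add more sync points per frame and the usable thread count shrinks even further.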
And if you think what I'm posting is opinions, then you're mistaken. What I'm doing here is citing facts and making logical deductions; these are essential skills for engineers. I think even $200 might be a little too much, considering AMD's and Nvidia's next gen will arrive soon. If the extra features like the various types of sync, OC, etc. in the control panel are still not stable, I would say much less. I think it might soon be time for a poll of what people would be willing to pay for it in its current state; for me it would probably be $120/$150 for the A750/A770, and I would probably not put it in my primary machine. But I want to see a deeper dive into framerate consistency on the A750/A770.
On CoD Vanguard at 1440p, the A750 gets 75 fps and the RTX 3060 gets 107 fps.
But in the graph made by Intel, it says they perform EXACTLY the same.
¿¿¿???
Has anyone checked some of the rest?
It could be a mistake in either the table or the graph, but chances are the table is the more accurate of the two.
«After discovering that the previous slides of Intel's GPU performance bore little resemblance to the real game performance, how is anyone expected to trust these Intel slides?
Intel, you were caught lying, and you haven't addressed that yet»
Please send me the link to the previous Intel slides/statements that proved Intel was lying about gaming performance, because I'm not aware of them.
I said way, way back, when the rumor was that ARC-512 would be between the 3070 and 3070 Ti in performance, that it would end up around or below the 3060 Ti, and that if the A380 matched the RX 570 it would be a miracle. I can't answer for the expectations of others.
I'm not aware of Intel setting these expectations, but leakers did (like I said in the past, they probably correlated the 3DMark scores that were leaked to them with actual gaming performance).
If you find any official Intel statement supporting those performance claims, please send it to me.
Sure, the hardware should be capable of near-3070 performance in some new DX12/Vulkan games that are well optimized for Intel's architecture (at some resolutions), and from what I can tell even Intel themselves thought the delta between synthetic and gaming performance would be smaller (so the software team underdelivered). But did they actually, in an official slide shown to the public, claim much higher performance than what they claim here?
Edit:
I have a feeling that Intel's original internal performance forecast from a year ago was around 3060 Ti performance or slightly above for ARC-512, and around 13% below the GTX 1650 Super for ARC-128, so the software team underdelivered by around 10-12% imo.
Don't get me wrong, I would love one more player, but Intel did all the stuff Raja did when he was at AMD: they pushed the hype train uphill and forgot to build rails downhill. That's typical PR vs. engineering. We will see if they make it to Battlemage or not.
Now, selling at high prices (let's say the 3090 Ti at $700, the 4070 for $800, the 4080 for $1,200 and the 4090 for $2,000) will open up opportunities for the competition, right? Well, what can the competition do?
Intel. Absolutely NOTHING.
AMD. Follow Nvidia's pricing. Why? Because they don't have unlimited capacity at TSMC, and whatever capacity they have, they prioritize first for EPYC and Instinct, then for Sony and Microsoft console APUs, then CPUs and GPUs for big OEMs (with mobile probably coming before desktop), then retail Ryzen CPUs, and lastly retail GPUs. That's why AMD had such nice financial results: because they are the smaller player, with the least capacity, selling almost everything they build to corporations and OEMs, not retail customers. We just get whatever is left. So, can AMD start selling much cheaper than Nvidia? No. Why? Because of capacity limitations. Let's say the RX 7600 is as fast as the RTX 3090 Ti and is priced at $500. A week after its debut, demand will be so high that the card will become unavailable everywhere. Its retail price will start climbing until it costs as much as the RTX 3090 Ti. AMD gets nothing from that jump from $500 to $700; retailers get all of that difference. We have already seen it. The RX 6900 XT, RX 6800 XT and RX 6800 came out with MSRPs that were more or less logical, if not outright good. Then the crypto madness started, and the later mid- and low-end models were introduced at MSRPs that looked somewhat higher than expected when compared to the MSRPs of Nvidia's already available models.
So Nvidia can keep prices up and not fear losing 5-10% of market share, knowing that whatever losses the competition causes will be manageable. On the other hand, their profits will be much higher, even with that 5-10% less market share.
Just an opinion of course. Nothing more.
I am perhaps remembering it wrong, but I'm a deeply cynical man who expects the worst from everyone and I'm rarely disappointed. Intel have managed to achieve that.
The Intel leaks/PR/investor talks/rumour mill has been strong, but it feels like 2-3 years of me constantly complaining about the flurry of news about a nonexistent product and broken promises.
It's late here; perhaps someone else can go back through 30+ months of Intel noise and the full history of Intel article hype to find one particularly incriminating slide, but I just have this vague feeling that Intel are constantly backtracking on what they said last time. I'll have a proper hunt through the dozens/hundreds of Intel Arc articles tomorrow if workload permits.
Claim-to-claim it's all reasonable, but go back a year or so and they were promising the moon. If the A750 is out next week and matches these claims, then we're done here - Intel have delivered. Can you buy an A750 yet? No. Can you even buy an A380 in most of the world yet? No. They need to deliver a product to market (and not just to a foreign test slice of the market) before it's obsolete, and they need to deliver approximately the performance they promised. If the A750 doesn't launch worldwide for another 4-6 months, all of the promises made now are kind of worthless, because the price and performance of the current competition is in constant flux. As mentioned earlier, Intel's initial claims were comparing to 2017's Pascal cards, because that's how overdue this thing is.
This. We may need a third player in the market, but if AMD are the pro-consumer underdog compared to the incumbent market leader, Intel are so horrible (both anti-consumer and anti-competitive) that they are orders of magnitude worse than AMD and Nvidia combined. They do have a full, varied, and longstanding history of well-documented monopolisation, anti-competitive malpractice, bribery, coercion, and blackmail.
Don't take my word for it, read the news articles from the last 30 years, Wikipedia, archive.org - whatever; you need to have been living under a rock to think Intel are the good guys. PC technology would likely be close to a decade ahead of where it is now if it weren't for Intel playing dirty.
unintel is in a big hole. Applause lol
By the time they get these out, the newer GPUs will have trampled them.
They also misconfigured the Vega cards with very high voltages, so they couldn't hold their boost clocks at all. On the Vega 56 I had, I lowered the GPU voltage a bit and it could keep its clock at 1700 MHz indefinitely, plus I bumped the memory speed to around 1100 MHz. It got around 24k in Fire Strike (graphics score), which put it between the 2070 and 2080, or the 1080 and 1080 Ti. This was the $400 Vega 56, mind you.
This same guy who fucked up Vega is now working on Intel Arc, mind you.
You can set the standard voltage for all cards at 2 V and throw away 5 cards, at 2.1 V and throw away only 1 card, or at 2.2 V and sell all of them as good working cards. I think companies just go with the third option. That's why we can undervolt most of our hardware to some degree and have no stability problems with it.
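To put the same idea into toy numbers (the per-chip minimum stable voltages below are made up, just to mirror the 10-card example above):

```python
# Toy illustration of the binning trade-off: each chip has its own minimum
# stable voltage, but the vendor ships one standard voltage for all of them.
chip_min_stable_v = [1.92, 1.95, 1.97, 1.99, 2.00, 2.03, 2.05, 2.06, 2.08, 2.15]

def usable_at(standard_v, chips):
    """How many chips are stable at the chosen one-size-fits-all voltage."""
    return sum(v <= standard_v for v in chips)

for standard_v in (2.0, 2.1, 2.2):
    good = usable_at(standard_v, chip_min_stable_v)
    print(f"standard {standard_v:.1f} V: {good} usable, "
          f"{len(chip_min_stable_v) - good} thrown away")

# The higher the shared voltage, the better the yield, and the more headroom
# the better chips keep for undervolting by the end user.
```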
Keep in mind Vega 56 has the same scheduling resources as Vega 64, but fewer resources to schedule. We saw a similar sweet spot with the GTX 970 vs. the GTX 980, where the GTX 970 achieved more performance per GFLOP and was very close when run at the same clocks. That's a misconception. If you think all Vega 56s would remain 100% stable on all workloads throughout the warranty period when undervolted, then you're wrong. The stock voltage includes a safety margin to compensate for chip variance and wear; how large this margin needs to be depends on the quality and characteristics of the chip. I tend to ignore speculation about the effects of management, regardless of whether it's good or bad.
I know all the good engineering is done at the lower levels, but management and middle management still need to facilitate that, through priorities and resources.
But I think this guy is a good example of someone failing upwards.
CoD Warzone -> CoD Vanguard: +53% -> -30% (same game engine)
Battlefield V -> Battlefield 2042: -10% -> 0% (same game engine)
So it's a mixed bag to say the least.
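For reference, this is roughly how those per-game deltas fall out of the raw fps pairs quoted in the article and in this thread; the geometric mean at the end is just one common way to aggregate relative performance, not necessarily how Intel computed its averages:

```python
# fps pairs quoted above: (A750, RTX 3060)
results = {
    "Dolmen":             (263, 347),  # from the article text
    "Resident Evil VIII": (160, 133),  # from the article text
    "CoD Vanguard 1440p": (75, 107),   # from the table discussed earlier
}

ratios = []
for game, (a750, rtx3060) in results.items():
    ratio = a750 / rtx3060
    ratios.append(ratio)
    print(f"{game}: {ratio - 1:+.0%} for the A750")

# Geometric mean of the ratios (a common way to aggregate relative performance).
geomean = 1.0
for r in ratios:
    geomean *= r
geomean **= 1.0 / len(ratios)
print(f"Geomean over these {len(ratios)} games: {geomean - 1:+.0%}")
```

A handful of big outliers in either direction can move that kind of average a lot, which is exactly why the per-game table is more interesting than the headline percentages.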
Among the games viewed as favorable to ARC there are also many that are either very old or not particularly advanced graphically; to mention a few: Fortnite (Unreal), Deep Rock Galactic (Unreal), PUBG (Unreal), WoW, Arcadegeddon, Dying Light 2, Warframe, and a lot more Unreal games. So most of these are either not very advanced games (often Unreal) or ~10-year-old engines with patched-in DX12 support.
I'm not claiming any of these are bad games, or even invalid for a comparison. My point is that Tom Petersen and Ryan Shrout, in their PR tour with LTT, GN, PCWorld, etc., claimed newer and better games will do better on Intel ARC, and I don't see the numbers supporting that. To me it seems like the "lighter" (less advanced) games do better on the A750 while the "heavier" games do better on the RTX 3060, and my assessment based on that is that future games will probably scale more like the heavier ones. I would like to remind people of a historical parallel: years ago AMD loved to showcase AotS, which was demanding despite not-so-impressive graphics, and it was supposed to showcase the future of gaming. Well, did it? (No.)
What conclusions do you guys draw?