
Intel Arc A770

@W1zzard

How much better does XeSS work using the native instructions? Are there any image-quality differences, also in motion/ghosting? There shouldn't be, but you never know.
 
The A770 performs worse than the 6600 XT, draws the same power as the 6800, but costs $50 more than the former... awful. It should be $250 max, not $350.
 
Looks like there's very good potential for competitive performance/power ratio when we look at properly optimized games like Metro Exodus with raytracing.
A year from now, we could be looking at the A770 competing with the 3070 and 3070 Ti all around, which is pretty great considering Intel is a newcomer.



Why would you compare transistor amount between a fully enabled Navi 23 and cut-down G10? The A750 isn't using a bunch of those transistors anyway.
A better comparison would be the GA104 with 17B transistors, because that way we're comparing GPUs with similar hardware capabilities (dedicated tensor cores, larger RT units with more accelerated stages, etc.). In this case Intel is spending a bunch more transistors on the 16MB big L2 cache inside the chip, whereas Nvidia depends on a more expensive GDDR6X outside the chip.
The only obvious loss on Intel's side is die size (transistor density), but that just goes to show that Intel has room to grow.
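As a rough back-of-the-envelope on how much the 16 MB L2 costs in transistors (my own numbers, assuming plain 6T SRAM cells and ignoring tags, ECC and control logic):

```python
# Rough estimate of the transistor cost of a 16 MB on-die L2 cache.
# Assumes classic 6-transistor SRAM bit cells and ignores tag arrays,
# ECC and control logic, so the real figure is somewhat higher.
CACHE_BYTES = 16 * 1024 * 1024        # 16 MB of data
BITS = CACHE_BYTES * 8
TRANSISTORS_PER_CELL = 6              # 6T SRAM cell

l2_transistors = BITS * TRANSISTORS_PER_CELL
print(f"~{l2_transistors / 1e9:.2f} billion transistors for the data array alone")
# -> ~0.81 billion, i.e. a noticeable but still small slice of the die's total budget
```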
Because I'm talking about the cost of making those cards. Disabling part of the GPU doesn't make it cheaper to manufacture.
 
The VRAM is running at 2 GHz no matter the load. That's most probably where the high idle power comes from.
Many other GPUs, mainly from AMD, have a similar problem, but only when using multiple display outputs or high refresh rate displays.
Indeed, I forgot to mention that; it's an important reason for the high power figures. The higher core clock was the low-hanging fruit, and yes, it's still quite high. Remember the Nvidia GTX 470 having a 608 MHz 3D clock and effectively 704 shaders (at that clock)? Well, the Intel card idles at that clock. :laugh:

By the way, does anyone know if Arc has memory (color) compression turned on, or just Z-compression?
It would be interesting to measure. In quite a lot of the test results, despite having all those shaders, TMUs and ROPs, it just can't compete on par with the others. Kind of like how the
Radeon HD 7970 never really could match the nimble GeForce GTX 680 (unless it was a shader-heavy load) until the HD 7970 got bandwidth compression (not Z; Z was already turned on) in later drivers for Windows 7 and above ("fine wine", it was called, or something), which the GTX 680 had from the start.
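For anyone unfamiliar with the distinction, here's a toy sketch of the color/bandwidth-compression idea, purely conceptual and not any vendor's actual hardware scheme: similar neighbouring pixels can be stored as small deltas, so a tile costs less memory bandwidth to read and write.

```python
def delta_encode_tile(pixels):
    """Toy lossless delta encoding of one tile of 8-bit pixel values.

    Real GPUs do delta color compression per tile in fixed-function hardware
    with several fallback modes; this only shows the concept: store the first
    pixel plus small differences, which need fewer bits when neighbouring
    pixels are similar.
    """
    base = pixels[0]
    deltas = [pixels[i] - pixels[i - 1] for i in range(1, len(pixels))]
    # If every delta fits in a signed 4-bit value, the tile compresses roughly 2:1.
    if all(-8 <= d <= 7 for d in deltas):
        return ("compressed", base, deltas)   # ~0.5 bytes per delta
    return ("raw", pixels, None)              # fall back to uncompressed

print(delta_encode_tile([120, 121, 121, 123, 122, 120, 119, 118]))  # compresses
print(delta_encode_tile([0, 255, 3, 200, 90, 10, 250, 1]))          # stays raw
```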
 

That's a good question, and I've been wondering if some of those major software/driver-driven performance features, introduced many years ago and mostly forgotten by now, are lacking or absent. Another one (I can't recall the technical name) is a technique PowerVR had about 20 years ago that let the GPU skip rendering parts of a 3D scene the viewer is never going to see. So if there's a trash can behind a person on the screen, and you wouldn't see the trash can from your angle, the trash can doesn't get rendered. I believe AMD and Nvidia later incorporated that from their IP. Someone is going to need to do more in-depth testing to see what is up.
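As a rough illustration of the idea (object-level occlusion culling here, not PowerVR's actual tile-based hidden surface removal, and the scene/region representation is made up for the example):

```python
# Crude illustration of occlusion culling: process objects front-to-back and
# skip any object that is fully covered by something closer that has already
# been drawn. Real GPUs/engines do this per tile or per pixel (hierarchical-Z,
# PowerVR-style HSR); this is only the concept.
def visible_objects(objects):
    """objects: list of (name, depth, covered_pixels); smaller depth = closer."""
    drawn = []  # (covered_pixels, depth) of everything already rendered
    for name, depth, region in sorted(objects, key=lambda o: o[1]):
        occluded = any(region <= r and depth >= d for r, d in drawn)
        if not occluded:
            drawn.append((region, depth))
            yield name  # only these get submitted to the rest of the pipeline

scene = [("person", 1.0, {(3, 3), (3, 4)}),
         ("trash_can", 5.0, {(3, 3)}),   # fully behind the person: never rendered
         ("lamp", 5.0, {(7, 2)})]        # visible, covers different pixels
print(list(visible_objects(scene)))       # ['person', 'lamp']
```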

To look at a spec sheet, you would think that these cards would stomp all over things like the 3060 Ti or 6700XT, but we know they don't.

This is the A750 vs the 6700XT for example :

[attached screenshot: A750 vs 6700 XT comparison]
 
It's not just that Intel can't provide a faster GPU. They can't even get the drivers to function properly.
You literally have a review saying that they lack optimisation, not that they aren't working, plus there is noticeable improvement in them - all of which is reasonable for a first proper dedicated GPU series. Have you forgotten the endless issues both other teams have had, and still have from time to time? It's literally what I said - unless Intel is 146% perfect, they shouldn't bother. You won't ever get a third player that way.
 

It's kind of funny, as you were posting that I was looking at reviews of 6650XT and 6700 on newegg.

Seems like objectivity is very subjective...

A few excerpts:

5-star review:
[attached screenshot of the review]

Different 5-star review:
[attached screenshot of the review]

4-star review:
[attached screenshot of the review]

Another 5-star review:
[attached screenshot of the review]
 
Thanks for the great review @W1zzard . For me, the idle power consumption is too great for it to be an alternative. However, the performance in some cases is promising, and given the progress Intel has made with the drivers since the A380 review, the future may see it competing with a 3060 Ti/6700 XT rather than a 3060/6600 XT. Still, given the high clocks, it should have performed between the RX 6800 and 6800 XT in at least some titles, and I haven't seen any where it does that. To summarize:

Pros
  • Good ray tracing implementation, better than AMD's and close to Nvidia's
  • Low price
Cons
  • High idle power consumption
  • Poor performance in most pre-DX12 API games
  • Price too close to the 6700 XT, which is almost always faster
In its best titles, it performs very close to the 3060 Ti, and in some cases, even the 3070. Most encouragingly, this list includes one DX11 title too: The Witcher 3.

| Benchmark | Release Year | API | Resolution | A770 | 3060 | 6600 XT | Uplift over 3060 | Uplift over 6600 XT | 3060 Ti | 6700 XT | Uplift over 3060 Ti | Uplift over 6700 XT | 3070 | Loss vs 3070 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Assassin's Creed Valhalla | 2020 | DX12 | 1440p | 59.2 | 48.6 | 55.6 | 22% | 6% | 61.5 | 69.8 | -4% | -15% | 67.5 | 12% |
| Control | 2019 | DX12 | 1440p | 68.2 | 52 | 49.7 | 31% | 37% | 68.8 | 66.3 | -1% | 3% | 80.4 | 15% |
| Cyberpunk 2077 | 2020 | DX12 | 1080p | 74.2 | 59.6 | 68.5 | 24% | 8% | 77.7 | 83.9 | -5% | -12% | 88.1 | 16% |
| Days Gone | 2021 | DX12 | 1440p | 73.8 | 67.4 | 66.8 | 9% | 10% | 90.2 | 84.5 | -18% | -13% | 104.5 | 29% |
| Deathloop | 2021 | DX12 | 1440p | 79.7 | 67.7 | 63.9 | 18% | 25% | 88.9 | 88.4 | -10% | -10% | 102.1 | 22% |
| Doom Eternal | 2020 | Vulkan | 1440p | 167.3 | 134.1 | 151.3 | 25% | 11% | 155.4 | 194.5 | 8% | -14% | 194 | 14% |
| Dying Light 2 | 2022 | DX12 | 1440p | 68.8 | 48.7 | 54.9 | 41% | 25% | 65.5 | 71.8 | 5% | -4% | 75.2 | 9% |
| Elden Ring | 2022 | DX12 | 1440p | 63.1 | 53.1 | 55.1 | 19% | 15% | 68.8 | 64.4 | -8% | -2% | 75.8 | 17% |
| Far Cry 6 | 2021 | DX12 | 1440p | 75.7 | 70.3 | 79 | 8% | -4% | 90.3 | 95.2 | -16% | -20% | 100.2 | 24% |
| Forza Horizon 5 | 2021 | DX12 | 1440p | 64.6 | 60.1 | 69.1 | 7% | -7% | 78.4 | 92.3 | -18% | -30% | 87.6 | 26% |
| Hitman 3 | 2021 | DX12 | 1440p | 94.2 | 72.2 | 86.9 | 30% | 8% | 97.4 | 105.8 | -3% | -11% | 113.5 | 17% |
| Metro Exodus | 2019 | DX12 | 1440p | 113.7 | 75.7 | 79.6 | 50% | 43% | 98.5 | 101.2 | 15% | 12% | 113.9 | 0% |
| Red Dead Redemption 2 | 2019 | DX12 | 1440p | 65.3 | 46.9 | 55.6 | 39% | 17% | 59.5 | 72.4 | 10% | -10% | 65.4 | 0% |
| Resident Evil Village | 2021 | DX12 | 1440p | 125.9 | 87.7 | 112.1 | 44% | 12% | 115.4 | 145.6 | 9% | -14% | 133.3 | 6% |
| The Witcher 3 | 2015 | DX11 | 1440p | 113.0 | 86.9 | 97.5 | 30% | 16% | 117.9 | 118.1 | -4% | -4% | 135.7 | 17% |
| Watch Dogs Legion | 2020 | DX12 | 1440p | 58.7 | 46.9 | 54.7 | 25% | 7% | 62 | 70.7 | -5% | -17% | 70.3 | 17% |
| Summary (geomean) | | | | 81.3 | 64.6 | 71.5 | 26% | 14% | 84.1 | 90.9 | -3% | -11% | 96.1 | 15% |
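For anyone wondering where the summary row comes from: it's the geometric mean of the per-game FPS in each column, and the uplift is the ratio of those means. A quick check with the A770 and 3060 columns from the table above:

```python
from math import prod

def geomean(values):
    return prod(values) ** (1 / len(values))

a770 = [59.2, 68.2, 74.2, 73.8, 79.7, 167.3, 68.8, 63.1, 75.7, 64.6,
        94.2, 113.7, 65.3, 125.9, 113.0, 58.7]
rtx3060 = [48.6, 52, 59.6, 67.4, 67.7, 134.1, 48.7, 53.1, 70.3, 60.1,
           72.2, 75.7, 46.9, 87.7, 86.9, 46.9]

print(round(geomean(a770), 1))                        # ~81.3
print(round(geomean(rtx3060), 1))                     # ~64.6
print(f"{geomean(a770) / geomean(rtx3060) - 1:.0%}")  # ~26% uplift over the 3060
```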
 
Where are the AIB cards?
 
@W1zzard Given the CPU bottleneck apparent at lower resolutions for the A770 in many DX11 games, is it time for the GPU testbed to switch to the 12900k or the 7700X?
 
Can anyone explain why the performance hit is so large on so many of the 8 GB cards, while others with 8 GB are running just fine?

[attached screenshot: benchmark chart]
 

The only 8 GB card that runs fine is Intel's; clearly they did something right and have some aces up their sleeve.

This example specifically is a test with ray tracing, and Intel has been showing some very good ray tracing performance, even beating Nvidia at their own game. I don't want to just say "fine wine", but clearly the cards have much bigger silicon than the tier they're competing in and a lot of driver handicaps, so yeah, maybe it will age into a rather fine wine.
 
@W1zzard Given the CPU bottleneck apparent at lower resolutions for the A770 in many DX11 games, is it time for the GPU testbed to switch to the 12900k or the 7700X?
I doubt it'll go away with a faster CPU; the ceiling will just be higher. Yes, I have plans to upgrade the CPU for the next rebench (early winter).
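A minimal way to picture that: frame rate is capped by whichever of the CPU or GPU takes longer per frame, so a faster CPU only raises the ceiling in the CPU-bound cases (illustrative numbers only):

```python
# Minimal frame-time model of a CPU bottleneck: frame rate is capped by
# whichever side (CPU or GPU) takes longer per frame.
def fps(cpu_ms, gpu_ms):
    return 1000 / max(cpu_ms, gpu_ms)

print(fps(cpu_ms=8.0, gpu_ms=12.0))  # 83.3  -> GPU-bound, a faster CPU changes nothing
print(fps(cpu_ms=8.0, gpu_ms=5.0))   # 125.0 -> CPU-bound, capped at the CPU's 8 ms
print(fps(cpu_ms=6.0, gpu_ms=5.0))   # 166.7 -> a faster CPU raises the cap, it doesn't remove it
```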

Can anyone explain why the performance hit is so large on so many of the 8 GB cards, while others with 8 GB are running just fine?
Memory management. Looks like Intel is doing something right here.
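For anyone curious what "memory management" can mean in practice, here's a toy sketch of VRAM residency handling; purely illustrative, not Intel's (or anyone's) actual driver. The point is that evicting the right resources when the working set exceeds the 8 GB budget gives a gentle slowdown instead of a cliff.

```python
# Toy model of VRAM residency management. When the working set exceeds the
# VRAM budget, a driver that evicts rarely-used resources to system RAM sees
# a gentle slowdown; one that thrashes the wrong resources falls off a cliff.
from collections import OrderedDict

class VramManager:
    def __init__(self, budget_mb):
        self.budget = budget_mb
        self.resident = OrderedDict()   # resource -> size, ordered by last use

    def touch(self, name, size_mb):
        """Mark a resource as needed this frame; evict LRU ones if over budget."""
        self.resident.pop(name, None)
        self.resident[name] = size_mb   # move to most-recently-used position
        evicted = []
        while sum(self.resident.values()) > self.budget:
            victim, _ = self.resident.popitem(last=False)  # least recently used
            evicted.append(victim)
        return evicted   # these get re-streamed from system RAM when touched again

vram = VramManager(budget_mb=8192)
vram.touch("shadow_maps", 1024)
vram.touch("geometry", 2048)
print(vram.touch("texture_pool", 6144))   # over budget -> ['shadow_maps'] evicted
```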
 
I mean, Intel has been at ray tracing for a while and does know a bit about memory management. The card is interesting, and I'll pick one up and jam it into a NUC Extreme box for an HTPC in the living room.
 
streaming arc gaming
 
What is the maximum bandwidth of the HDMI 2.1 port? Since it's bridged from DisplayPort, I assume it can't be the full 48 Gbps.
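A back-of-the-envelope on that, assuming the port really is a DisplayPort 1.4a (HBR3) to HDMI 2.1 protocol converter; the per-lane rates and coding overheads below are the generic spec numbers, not anything Intel has published:

```python
# Link-budget comparison, assuming the HDMI 2.1 port is fed by a
# DisplayPort 1.4a (HBR3) to HDMI protocol converter (PCON).
def payload_gbps(lanes, gbps_per_lane, coding_efficiency):
    return lanes * gbps_per_lane * coding_efficiency

dp14_hbr3  = payload_gbps(4, 8.1, 8 / 10)    # DP 1.4a HBR3 uses 8b/10b coding
hdmi21_frl = payload_gbps(4, 12.0, 16 / 18)  # HDMI 2.1 FRL6 uses 16b/18b coding

print(f"DP 1.4a HBR3 payload:  {dp14_hbr3:.1f} Gbit/s")   # ~25.9
print(f"HDMI 2.1 FRL6 payload: {hdmi21_frl:.1f} Gbit/s")  # ~42.7
# So a DP-fed bridge tops out well below full 48 Gbit/s FRL unless it leans
# on DSC compression on the DisplayPort side.
```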
 
You literally have a review saying that they lack optimisation, not that they aren't working, plus there is noticeable improvement in them - all of which is reasonable for a first proper dedicated GPU series. Have you forgotten the endless issues both other teams have had, and still have from time to time? It's literally what I said - unless Intel is 146% perfect, they shouldn't bother. You won't ever get a third player that way.
This gen was known as DG2, i.e. this is actually Intel's 2nd-gen dedicated graphics.
DG1 was so embarrassing that Intel only sold it to a few OEMs and shoved it under the rug.

As for getting a 3rd player: from all the comments online about all the different GPU reviews, you see a common trend.
People don't even want a 2nd player, let alone a 3rd; all they want is Nvidia, or cheaper Nvidia cards.
Gamers actively mock and ridicule others for having "knock-off" brand cards that aren't a GeForce.
Everyone says they want competition, yet nobody wants to support the competition when it comes down to an actual purchase.
You have influencers like LTT actively pushing people to buy a product in such a state, yet you almost never see them actually use an AMD card, let alone Intel.
Sure, for an editing/rendering machine it only makes sense to use a 3090/4090 due to CUDA/OptiX and the 24 GB of VRAM being very useful in those tasks.
On the other hand, you basically never see anything except a 3090/Ti or 4090 even in a gaming build.
 
It's ironic - everybody wants cheaper Nvidia cards, but nobody lifts a damn brain cell to think about what could make Nvidia cards cheaper (competition).

I actually liked LTT's video asking gamers to consider the A750 and A770 to bring some competition into the game. On the other hand, I totally disliked Hardware Unboxed's and Gamers Nexus's videos. Even though Arc showed good performance for its price, they mocked it for doing "only" about 120 FPS in CS:GO. It's as if you buy flowers, cook and wash every day and take your girlfriend everywhere around the world, but she leaves you anyway because you left your pants on the sofa one time two years ago. I mean, wtf, really?

It's funny because I usually prefer HU or GN over LTT, but not this time.
 
HUB and GN are doing their due diligence as reviewers by pointing out all the caveats of a product, so that buyers can make their own decisions.
TBH, HUB is being more lenient with its thumbnail saying it's "not terrible", which is actually more positive than most other YouTubers with click-bait titles.

One of the worst this round is Digital Foundry; they constantly try to downplay the 6600 XT by repeatedly stating it is the "most expensive",
despite the 6650 XT regularly being found under $300 and the 3060 being hard to find at MSRP.
It's the mix of some good data with personal opinion that is most dangerous.
 

After a cursory example of a game poorly optimized for Arc (Assassin's Creed Unity on DX11), DF then proceeded to use only best-case scenarios for all the subsequent benchmarks (DX12 and Vulkan games only), which would lead the viewer to believe that Arc is almost flawless and gives a false, misleading impression of the card. The 6600 XT was also misrepresented, as you put it, since it can be found cheaper than the A770 and will easily destroy the A770 in the thousands of DX11 and DX9 titles available right now.

When a majority of the most played games on Steam are still on DX11 and DX9, this is a huge shortcoming of the review.
 
That's my gripe too. I'm more than happy to cut Intel some slack for a first attempt and I openly welcome their competition, but I don't think I've ever seen such a gaping chasm between "we picked as many of the 0.3% of titles that are DX12-only, while excluding as many of the DX9-11 titles most people are actually playing, to benchmark" and observable reality from certain tech sites. "Average FPS across 12 games" charts are utterly worthless in this case, when FPS plummets outside of a "bubble" of a couple of dozen benchmark titles. I'm also still waiting for even a single tech site to test emulated DX9 compatibility. As we've seen in the past with stuff like dgVoodoo 2, API translation layers are not without issues (increased rendering errors, glitches, etc.).
 
It's as if you buy flowers, cook and wash every day and take your girlfriend everywhere around the world, but she leaves you anyway because you left your pants on the sofa one time two years ago. I mean, wtf, really?

I understand your point, but that's not really what's happening with Arc. There are still several problems, some more frequent, some less, and the product is only competitive against inflated Nvidia options, which Intel casually shrugs off (and Linus as well, for example; some RT/AI features are more limited on AMD, but that's just Intel using the same selective benchmarks Nvidia has been using since it brought RTX hardware to market). GN compares the card against AMD, and the sales pitch from Intel gets much less appealing very fast (there's also the whole GN vibe of being overly critical of everything).

I hope RDNA3 gets better ray tracing performance and is able to set the record straight (and lights a big fire under Nvidia's pants). This near-tradition of the "big guys" shrugging off AMD (Nvidia and Intel, one on GPUs and the other on CPUs, but now also on GPUs since it can't on CPUs) needs to end.
 
This gen was known as DG2, i.e. this is actually Intel's 2nd-gen dedicated graphics.
DG1 was so embarrassing that Intel only sold it to a few OEMs and shoved it under the rug.

As for getting a 3rd player: from all the comments online about all the different GPU reviews, you see a common trend.
People don't even want a 2nd player, let alone a 3rd; all they want is Nvidia, or cheaper Nvidia cards.
Gamers actively mock and ridicule others for having "knock-off" brand cards that aren't a GeForce.
Everyone says they want competition, yet nobody wants to support the competition when it comes down to an actual purchase.
You have influencers like LTT actively pushing people to buy a product in such a state, yet you almost never see them actually use an AMD card, let alone Intel.
Sure, for an editing/rendering machine it only makes sense to use a 3090/4090 due to CUDA/OptiX and the 24 GB of VRAM being very useful in those tasks.
On the other hand, you basically never see anything except a 3090/Ti or 4090 even in a gaming build.

That's easy to understand, though. People want other people to take the pain for them. The last AMD card I owned was back when it was ATI and I had a 9700 Pro. It was great at the time; Nvidia had nothing that could compete with it. I also built a box with a 3850 for a roommate at one point.

The issue is that Nvidia's ecosystem is just better.
 
Honestly, if this had come out 12 months ago it might have made an impression. The A770 is slower than a 2080! And we'll soon be comparing it to RDNA3 and Lovelace. Competition is good, but it'll only make a dent at the lower end. I hope drivers can eke out another 10-20% of performance and that power consumption can be lowered. I still wonder if they will go all in on the development of Battlemage, though, as that needs to be a massive upgrade and at least double performance, because it'll be facing Blackwell and RDNA4.

Well, Intel's driver team was in Russia, and that got shut down when Russia invaded Ukraine. Intel had to rebuild it elsewhere.

That probably explains why Metro runs so well (it mostly ties with a 3070): the game was developed by a Ukrainian studio, so they likely had links to the Intel team in Russia (despite the war, there are many, many familial connections and so on between the two). It's probably a good example of driver optimization for a specific game.
 