
Intel Arc A770

Count von Schwalbe

Moderator
Staff member
Joined
Nov 15, 2021
Messages
3,082 (2.78/day)
Location
Knoxville, TN, USA
System Name Work Computer | Unfinished Computer
Processor Core i7-6700 | Ryzen 5 5600X
Motherboard Dell Q170 | Gigabyte Aorus Elite Wi-Fi
Cooling A fan? | Truly Custom Loop
Memory 4x4GB Crucial 2133 C17 | 4x8GB Corsair Vengeance RGB 3600 C26
Video Card(s) Dell Radeon R7 450 | RTX 2080 Ti FE
Storage Crucial BX500 2TB | TBD
Display(s) 3x LG QHD 32" GSM5B96 | TBD
Case Dell | Heavily Modified Phanteks P400
Power Supply Dell TFX Non-standard | EVGA BQ 650W
Mouse Monster No-Name $7 Gaming Mouse| TBD
Well, Intel's driver team was in Russia and that got shut down when Russia invaded Ukraine. Intel had to rebuild it elsewhere.

Probably explains why Metro runs so well (it mostly ties with a 3070): that game was developed by a Ukrainian studio, so they likely had links to the Intel team in Russia (despite the war, there are many familial connections and so on between the two countries). It's probably a good example of the driver being optimized for a specific game.
Crumbs, I reckon Intel regretted that decision...

I think a lot of the hate Intel is getting over the apparent cherry-picking comes down to this. AMD and Nvidia have had years to optimize performance for each major game release as it launched, but the ground-up development of Intel's drivers means that many games are not optimized yet. Most likely, each driver release will bring drastically improved performance in a few titles as Intel works through the backlog of popular games.
 
Joined
Apr 29, 2014
Messages
4,290 (1.11/day)
Location
Texas
System Name SnowFire / The Reinforcer
Processor i7 10700K 5.1ghz (24/7) / 2x Xeon E52650v2
Motherboard Asus Strix Z490 / Dell Dual Socket (R720)
Cooling RX 360mm + 140mm Custom Loop / Dell Stock
Memory Corsair RGB 16gb DDR4 3000 CL 16 / DDR3 128gb 16 x 8gb
Video Card(s) GTX Titan XP (2025mhz) / Asus GTX 950 (No Power Connector)
Storage Samsung 970 1tb NVME and 2tb HDD x4 RAID 5 / 300gb x8 RAID 5
Display(s) Acer XG270HU, Samsung G7 Odyssey (1440p 240hz)
Case Thermaltake Cube / Dell Poweredge R720 Rack Mount Case
Audio Device(s) Realtec ALC1150 (On board)
Power Supply Rosewill Lightning 1300Watt / Dell Stock 750 / Brick
Mouse Logitech G5
Keyboard Logitech G19S
Software Windows 11 Pro / Windows Server 2016
Excellent review! I was excitedly waiting to see your performance numbers for this card, as it's (at least to me) one of the most exciting releases in a long time, solely because it's Intel's highest-end current GPU that I'm aware of, and a newcomer in general. Unfortunately, with prices dropping on the equivalent cards in this performance category (which, based on the charts, sits somewhere around the RTX 3060 and AMD RX 6600 on average across resolutions), this card is a very hard sell, since its performance is all over the place. I really want to support them and hopefully see an even better second generation, but it's not in the range I would want for my main rig (maybe my second rig or my wife's).

Still, it's a cool midrange card, and I will say Intel has already come a long way just from its first cards!
 
Joined
Jan 27, 2015
Messages
1,715 (0.48/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage WD SN770 512GB m.2, Samsung 980 Pro m.2 2TB
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 25
Keyboard Logitech MX Keys
Software Lots
This gen was known as DG2; in other words, this is actually Intel's second generation of dedicated graphics.
DG1 was so embarrassing that Intel only sold it to a few OEMs and swept it under the rug.

As for getting a third player: from all the comments online about the various GPU reviews, you see a common trend.
People don't even want a second player, let alone a third; all they want is Nvidia, or cheaper Nvidia cards.
Gamers actively mock and ridicule others for having "knock-off" brand cards that aren't GeForces.
Everyone says they want competition, yet nobody wants to support the competition when it comes down to an actual purchase.
You have influencers like LTT actively pushing people to buy a product in this state, yet you almost never see them actually use an AMD card, let alone an Intel one.
Sure, for an editing/rendering machine it only makes sense to use a 3090/4090, since CUDA/OptiX and the 24 GB of VRAM are very useful in those tasks.
On the other hand, you basically never see anything except a 3090/Ti or 4090 even in their gaming builds.

DG1 really was never meant for games/consumers. It was likely just there to iron out driver issues related to normal office productivity work. To give a hypothetical: for me, if a game crashes, that's not a show-stopper that would make me toss a $300-$350 GPU. But if Teams or Office is unstable, it's gone.

HUB and GN are doing their due diligence as reviewers by pointing out all the caveats of a product, so that buyers can make their own decisions.
TBH, HUB is being more lenient: its thumbnail says the card is "not terrible", which is actually more positive than most other youtubers with their click-bait titles.

One of the worst this round is Digital Foundry; they constantly try to downplay the 6600 XT by repeatedly calling it the "most expensive", even though the 6650 XT can regularly be found under $300 and the 3060 is hard to find at MSRP.
It's the mix of good data with his personal opinion that is most dangerous.

HUB was fair IMO.

On video, TechYES is, I think, one of the better channels for what GN seems to be trying to do, especially for value-seeking gamers. He talks about both the new cards and the deals you can get on used cards versus new. He's very much into what gives you the best bang for your buck, and yes, he's all over the 6600 XT / 6650 XT as the best value.

GN: the more I look at their recent reviews, the more I think that guy is trash. He spends the first half of his reviews (not just this one) pontificating about his viewpoints and his opinion on some industry drama or other.

So now you're halfway through the video, and he proceeds to draw conclusions from benchmarking a whole 6-7 games. That should take like 1 hour, yet he leads you to believe they've been pulling 16-hour days to bring you that information. There are plenty of sites out there with far, far broader test data than what that guy provides, and more useful dialogue, like "console ports work great on this card, but the Frostbite engine is horrible".

This kind of data is also out there for AMD vs Nvidia; the two are not consistent with each other at all on which games run well on each platform.

Case in point, from TPU: this is part of a 6800 vs 3070 Ti comparison. I wonder which one GN would say was consistent:

[Attached image: TPU 6800 vs 3070 Ti comparison chart]
 
Joined
Jan 14, 2019
Messages
12,341 (5.75/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
HUB and GN are doing their due diligence as reviewers by pointing out all the caveats of a product, so that buyers can make their own decisions.
TBH, HUB is being more lenient: its thumbnail says the card is "not terrible", which is actually more positive than most other youtubers with their click-bait titles.
Maybe... or maybe they're trying to play it safe and not suggest that people buy a product that isn't 100% reliable. I keep forgetting that it's not only techies who watch these channels.

There's also the clickbait factor. More people watch your video if you openly take a dump on the product, or even the industry. That's why I'm trying to restrict the time I spend watching reviews, instead just reading TPU and calling it a day. TPU's game tests are the closest to the ones I play anyway.

I understand your point, but that's not really what's happening with Arc. There are still several problems, some more frequent, some less, and the product is only competitive against inflated Nvidia options, which Intel casually shrugs off (and Linus as well, for example; some RTX/AI features are more limited on AMD, but that's just Intel using the same selective benchmarks Nvidia has been using since it brought RTX hardware to market). GN compares the card against AMD, and Intel's sales pitch gets much less appealing very fast (there's also the whole GN vibe of being overly critical of everything).

I hope RDNA3 gets better ray tracing performance and is able to set the record straight (and light a big fire under Nvidia's pants); this near-tradition of the "big guys" (Nvidia and Intel, one in GPUs and the other in CPUs, though now also in GPUs since it no longer can in CPUs) shrugging off AMD needs to end.
I think JayzTwoCents' video is the best on this. He didn't do as many benchmarks as the others, but the ones he did showed a clear Nvidia win in some titles, a clear AMD win in others and a huge Intel win in some. It tells me that any of the three can be a winner depending on your needs.

Personally, I don't have too high hopes for RDNA 3, mainly because it hasn't even been announced yet. It'll be a good few months before we see any of those cards in stores, and when we do, it'll be the 7800 and 7900 series first, which compete in an entirely different price and performance range. Those are 4K cards, and I'm still on 1080p with no plans to upgrade any time soon. I prefer to keep the cost of the computational power I need low for now. The 3060 (Ti) / 6600 (XT) / A770 level is just right for me.

I agree that the tradition of shrugging off AMD needs to end... but so does the tradition of comparing everything to the competition. I mean, you (the general you) know what games you play, so why not just look at review data for those games and buy something that suits your price and performance needs? Why does everything need to be 5 FPS faster than the competition? You don't see any competition when you have one graphics card plugged into your system with nothing to compare it to. I also don't understand the purpose of averages. Nobody plays "Average Game", because it doesn't exist, and if you don't test every single game on the planet, your average value means nothing.
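
To put a number on that last point, here's a toy example (the FPS figures are invented) of how an identical average can hide exactly the per-game information you care about:

Code:
# Two made-up cards with the same average but very different behavior.
from statistics import geometric_mean

card_a = {"Game 1": 60, "Game 2": 60, "Game 3": 60}
card_b = {"Game 1": 30, "Game 2": 60, "Game 3": 120}

print(round(geometric_mean(card_a.values()), 1))  # 60.0
print(round(geometric_mean(card_b.values()), 1))  # 60.0, same "average"
# ...but if Game 1 is the one you actually play, card B is half as fast.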
 
Last edited:
Joined
Aug 20, 2007
Messages
21,469 (3.40/day)
System Name Pioneer
Processor Ryzen R9 9950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage Intel 905p Optane 960GB boot, +2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) 55" LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches + PBT DS keycaps
Software Gentoo Linux x64 / Windows 11 Enterprise IoT 2024
Mantle?! Wtf! Mantle is like ~9 years old and still only supports ~180 titles out of the tens of thousands out there. How is that even comparable to NVIDIA's tech features' prowess, which supports gazillions more on top of that?! :kookoo:

People like you need to understand that, like others here, *today's* OBSERVATION / PERSONAL EXPERIENCE (not going back to the stone age :laugh:) has nothing to do with fanaticism, etc.

Find another term because that doesn't apply here. :roll:
Mantle was influential in low-level APIs, and literally committed swaths of code to Vulkan, which is huge today.
 
Joined
Jun 10, 2014
Messages
2,987 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Driver quality is killing it.
Hardware is mostly OK; software needs a lot of improvement.
If they don't drastically improve driver efficiency and also make it scale well across CPU threads, the problem (the % of performance they lose to driver inefficiency) will be even bigger in Battlemage.
The discrepancy between synthetic performance and real-world performance is strong evidence of the exact opposite: the problem is in hardware scheduling, not software at all. These performance characteristics are very comparable to the performance issues of Polaris and Vega, only worse. If driver overhead in general were to blame, we should have seen overhead growing clearly with the more powerful Arc cards. If specific API calls were to blame for slowing everything down, they would have easily identified and fixed that by now. They've had the final GPUs in testing since early 2022, and engineering samples long before that. The issues with Alchemist are at the hardware level, and no amount of driver patchwork can solve that.
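
To make that argument concrete, here's a toy sanity check; the relative scores below are invented placeholders, not measurements:

Code:
# If a fixed CPU-side driver cost dominated, the real-world/synthetic
# ratio should fall as the GPU gets faster, since the CPU cost stays
# roughly constant per frame. A flat ratio points away from the driver.
synthetic  = {"A380": 1.0, "A750": 2.4, "A770": 2.8}   # placeholder scaling
real_world = {"A380": 1.0, "A750": 2.3, "A770": 2.6}   # placeholder scaling

for card in synthetic:
    print(card, round(real_world[card] / synthetic[card], 2))
# A steep drop on the faster cards would implicate driver overhead;
# a roughly flat line suggests the bottleneck is on the GPU itself.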

Should be called the Intel Arc A770 Beta Edition, not Limited Edition
No, it's Limited Performance Edition. ;)

You mean like how Mantle became Vulkan, and heavily influenced d3d12?
Mantle was influential in low-level APIs, and literally committed swaths of code to Vulkan, which is huge today.
Either this is some kind of joke, or you (like most people) have no idea what the word "literally" actually means. :rolleyes:

The fact police are here:
There is zero Mantle code in Vulkan, because there is zero code in the core of Vulkan. It's an API spec, after all, not a code base; the real code is in the various drivers implementing the graphics APIs. This should be basic knowledge.

And to your earlier claim: no, Mantle did not become Vulkan at all. Vulkan is based on the SPIR-V architecture from OpenCL and the efforts of the OpenGL AZDO initiative. The final spec did get some influence from the work on DirectX 12 (which in turn was inspired by Mantle), but these are still principally different APIs based on different coding paradigms, and they therefore work structurally differently.
 
Joined
Jan 27, 2015
Messages
1,715 (0.48/day)
System Name Legion
Processor i7-12700KF
Motherboard Asus Z690-Plus TUF Gaming WiFi D5
Cooling Arctic Liquid Freezer 2 240mm AIO
Memory PNY MAKO DDR5-6000 C36-36-36-76
Video Card(s) PowerColor Hellhound 6700 XT 12GB
Storage WD SN770 512GB m.2, Samsung 980 Pro m.2 2TB
Display(s) Acer K272HUL 1440p / 34" MSI MAG341CQ 3440x1440
Case Montech Air X
Power Supply Corsair CX750M
Mouse Logitech MX Anywhere 25
Keyboard Logitech MX Keys
Software Lots
The discrepancy between synthetic performance and real-world performance is strong evidence of the exact opposite: the problem is in hardware scheduling, not software at all. These performance characteristics are very comparable to the performance issues of Polaris and Vega, only worse. If driver overhead in general were to blame, we should have seen overhead growing clearly with the more powerful Arc cards. If specific API calls were to blame for slowing everything down, they would have easily identified and fixed that by now. They've had the final GPUs in testing since early 2022, and engineering samples long before that. The issues with Alchemist are at the hardware level, and no amount of driver patchwork can solve that.

No, actually they know exactly where in the API the major bottlenecks are.

[Attached image]
 

OneMoar

There is Always Moar
Joined
Apr 9, 2010
Messages
8,795 (1.64/day)
Location
Rochester area
System Name RPC MK2.5
Processor Ryzen 5800x
Motherboard Gigabyte Aorus Pro V2
Cooling Thermalright Phantom Spirit SE
Memory CL16 BL2K16G36C16U4RL 3600 1:1 micron e-die
Video Card(s) GIGABYTE RTX 3070 Ti GAMING OC
Storage Nextorage NE1N 2TB ADATA SX8200PRO NVME 512GB, Intel 545s 500GBSSD, ADATA SU800 SSD, 3TB Spinner
Display(s) LG Ultra Gear 32 1440p 165hz Dell 1440p 75hz
Case Phanteks P300 /w 300A front panel conversion
Audio Device(s) onboard
Power Supply SeaSonic Focus+ Platinum 750W
Mouse Kone burst Pro
Keyboard SteelSeries Apex 7
Software Windows 11 +startisallback
The discrepancy between synthetic performance and real-world performance is strong evidence of the exact opposite: the problem is in hardware scheduling, not software at all. These performance characteristics are very comparable to the performance issues of Polaris and Vega, only worse. If driver overhead in general were to blame, we should have seen overhead growing clearly with the more powerful Arc cards. If specific API calls were to blame for slowing everything down, they would have easily identified and fixed that by now. They've had the final GPUs in testing since early 2022, and engineering samples long before that. The issues with Alchemist are at the hardware level, and no amount of driver patchwork can solve that.


They can fix DX9/11 performance at the driver level, either through the use of a modified D3D9On12 implementation, DXVK, or some custom solution. D3D9On12 is fantastically slow because you are taking D3D9 API calls, which are already numerous and slow (the draw calls are notoriously expensive without a hardware-backed scheduler specific to DX9/11), and then converting those into a string of DX12 commands.

As for raw Vulkan / pure DX12 perf, I don't know how much more they can squeeze out of it. They really need to find another 15-20% of raw FPS uplift, in addition to fixing the problems with the D3D9On12 translation layer.
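
To illustrate why that per-call translation is so expensive, here's a toy Python model (not real D3D code; the call counts and the fan-out are invented):

Code:
# Each DX9-style call fans out into several DX12-side operations, and
# the translation tax is paid on the CPU every single frame.
def translate_call(call):
    kind = call[0]
    if kind == "set_state":
        # DX9 sets render states one at a time; DX12 wants monolithic
        # pipeline-state objects, so the layer must track states and
        # hash them to look up (or build) a matching PSO
        return ["track_state", "hash_states", "lookup_or_build_pso"]
    if kind == "draw":
        return ["validate", "bind_pso", "record_draw"]
    return []

frame = [("set_state", i % 8) for i in range(3000)] + \
        [("draw", i) for i in range(2000)]
dx12_ops = [op for call in frame for op in translate_call(call)]
print(len(frame), "DX9-style calls ->", len(dx12_ops), "DX12-side operations")
# 5000 -> 15000 with these made-up numbers; a native DX9 driver path
# would skip the remapping entirely.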
 
Joined
Jun 10, 2014
Messages
2,987 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
No, actually they know exactly where in the API the major bottlenecks are.
Bandwidth, etc. are not API bottlenecks.

They can fix DX9/11 performance at the driver level, either through the use of a modified D3D9On12 implementation, DXVK, or some custom solution. D3D9On12 is fantastically slow because you are taking D3D9 API calls, which are already numerous and slow (the draw calls are notoriously expensive without a hardware-backed scheduler specific to DX9/11), and then converting those into a string of DX12 commands.
They can fix DirectX 9 performance by actually implementing DirectX 9 in their driver instead of relying on an abstraction layer. But DirectX 9 games are not normally part of GPU reviews, so this will not skew the benchmark results in any way.

D3D9On12 is an abstraction layer made by Microsoft. It's not a matter of "optimizing" it, as it will never be a top-quality solution; they should implement DirectX 9 in their graphics driver instead. And speed isn't the biggest concern here, but rather the fact that DirectX 9 and 12 have very different state models, and there isn't a direct 1-to-1 translation between API calls, so the result will always be lousy compared to a proper API implementation.

DirectX 11 isn't going to be affected by D3D9On12, nor are driver optimizations likely to deliver a major uplift there.

As for raw Vulkan / pure DX12 perf, I don't know how much more they can squeeze out of it. They really need to find another 15-20% of raw FPS uplift, in addition to fixing the problems with the D3D9On12 translation layer.
And they're not going to find it. They've had the final hardware since early 2022, and if there were a major bottleneck in the drivers which could unleash 20% more performance across the board, they would have found it by now.

The problem is in the hardware, and requires a hardware fix.
 
Joined
Aug 20, 2007
Messages
21,469 (3.40/day)
System Name Pioneer
Processor Ryzen R9 9950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage Intel 905p Optane 960GB boot, +2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) 55" LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches + PBT DS keycaps
Software Gentoo Linux x64 / Windows 11 Enterprise IoT 2024
Either this is some kind of joke, or you (like most people) have no idea what the word "literally" actually means.
I mean literally. Do I need to dig up old news, or can you use Google? Mantle contributed its complete codebase to help get Vulkan going. It's very dead, but its influence lives on; that is the point.

Heck, it even says as much right on the Wikipedia page for Mantle, in the opening paragraph:


Of course the API has stuff to commit. Do you think an API lacks documentation, code examples, and dev tools?
 
Last edited:

OneMoar

There is Always Moar
Joined
Apr 9, 2010
Messages
8,795 (1.64/day)
Location
Rochester area
System Name RPC MK2.5
Processor Ryzen 5800x
Motherboard Gigabyte Aorus Pro V2
Cooling Thermalright Phantom Spirit SE
Memory CL16 BL2K16G36C16U4RL 3600 1:1 micron e-die
Video Card(s) GIGABYTE RTX 3070 Ti GAMING OC
Storage Nextorage NE1N 2TB ADATA SX8200PRO NVME 512GB, Intel 545s 500GBSSD, ADATA SU800 SSD, 3TB Spinner
Display(s) LG Ultra Gear 32 1440p 165hz Dell 1440p 75hz
Case Phanteks P300 /w 300A front panel conversion
Audio Device(s) onboard
Power Supply SeaSonic Focus+ Platinum 750W
Mouse Kone burst Pro
Keyboard SteelSeries Apex 7
Software Windows 11 +startisallback
And they're not going to find it. They've had the final hardware since early 2022, and if there were a major bottleneck in the drivers which could unleash 20% more performance across the board, they would have found it by now.

The problem is in the hardware, and requires a hardware fix.
Your entire argument hinges on "well, if there was 20% in the drivers to find, they would have found it by now".
Yeah, if they were looking, or cared, or weren't busy or incompetent.
This is Intel we are talking about :roll: We have seen the level of give-a-fuck from their driver team; it's not very high.
 
Joined
Jun 10, 2014
Messages
2,987 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Your entire argument hinges on "well, if there was 20% in the drivers to find, they would have found it by now".
Yeah, if they were looking, or cared, or weren't busy or incompetent.
This is Intel we are talking about :roll: We have seen the level of give-a-fuck from their driver team; it's not very high.
What a remarkable, fact-based and well-formed response! You know, with cursing you defeat any kind of logical argument ;)

Technically speaking, it's not hard to analyze and detect overhead. There are profiling tools which can pinpoint timing and resource allocation pretty precisely. It's not like developers rely on the Ballmer peak and crystal balls to optimize code; contrary to popular opinion, development work is surprisingly methodical, rational and deductive in nature.

So, if there were major driver overhead, it would be easily detectable. Not only would GPU performance be severely bottlenecked by the CPU, but we would expect this bottleneck to increase with GPU power (assuming FPS increases rather than detail settings), so we should expect an A770 to be significantly more bottlenecked by it than an A380. As I've said, Intel Arc performs poorly in real-world gaming compared to synthetic benchmarks, which points to hardware-level scheduling, not driver overhead.

I don't think you grasp how massive a driver overhead issue would have to be to hold back ~20% performance. Whether a whole API or just a few API calls were causing it, it would be very evident with a profiling tool. And remember, to unleash major gains this trend would have to persist across "every" workload, so any such trend should be easy to find; especially when a few API calls take up too much of the frame time while the GPU sits undersaturated, that sort of thing is very obvious in profiling. And they have been struggling since the early engineering samples last year to squeeze out even a tiny bit more performance, which suggests there isn't anything significant left to gain on the driver side. So it's very unlikely that Intel will suddenly stumble across something that unleashes 20% more performance, and I'm not talking about a single edge case here, but 20% across the board.
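
As a rough sketch of what that kind of profiling looks like (the entry-point names and timings below are stand-ins, not Intel's actual driver internals):

Code:
# Wrap suspect calls, accumulate wall time per call site, sort by total:
# a call eating a disproportionate slice of the frame jumps out instantly.
import time
from collections import defaultdict

timings = defaultdict(float)

def profiled(name, fn):
    t0 = time.perf_counter()
    fn()
    timings[name] += time.perf_counter() - t0

# stand-ins for driver entry points, with hypothetical costs
def set_pipeline_state(): time.sleep(0.0001)
def draw_indexed():       time.sleep(0.00002)

for _ in range(1000):  # one simulated frame's worth of calls
    profiled("SetPipelineState", set_pipeline_state)
    profiled("DrawIndexed", draw_indexed)

for name, total in sorted(timings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {total * 1000:.1f} ms")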

Then lastly, there is history:
Those who remember the launches of Polaris and Vega will remember that not only forums but also some reviewers claimed that driver optimizations would make them age like "fine wine" and turn out to be better investments than their Nvidia counterparts. Some even suggested that, e.g., the RX 480 would compete in the GTX 1070/1080 range once the drivers matured after "a few months". Well, did it happen? Not yet, but I'm sure it will happen any day now!
And there are not many examples of driver "miracles". The biggest pretty-much-across-the-board driver optimization I can recall was done by Nvidia shortly after the release of DirectX 12, when they brought most of their DirectX 12-related driver improvements to their DirectX 9/10/11 and OpenGL implementations, and achieved something like ~10% after a massive overhaul. And that was overhead they had been well aware of for years.
Another recent example is AMD's OpenGL implementation rewrite, which yielded some significant gains (and some regressions). And that was an issue OpenGL devs had known about since the early 2000s; AMD's (ATI's) OpenGL implementation was always buggy and underperforming, and it simply wasn't prioritized for over a decade.

So my point here is: we should stop making excuses for poorly performing hardware by blaming "immature" drivers. DirectX 10/11/12 are high-priority APIs, so if there were major bottlenecks in their driver implementation, they would know, no matter how "stupid" you think Intel's engineers are.
And isn't it funny that for years "immature drivers" have been the excuse whenever AMD has released an underperforming product (and now Intel), but never Nvidia? I smell bias…

I mean literally. Do I need to dig up old news, or can you use Google? Mantle contributed its complete codebase to help get Vulkan going. It's very dead, but its influence lives on; that is the point.
Contributing to something and claiming A became B are not the same thing. And since you are twisting words, I'm going to use your own words against you:
- "You mean like how Mantle became Vulkan"
- "Mantle was influential in low-level APIs, and literally committed swaths of code to Vulkan"
Both of these claims are untrue, no matter how you try to twist it or split hairs.
Khronos developed Vulkan based on input from numerous contributors, including AMD with their Mantle, and built it on top of their SPIR-V architecture and the groundwork done by the AZDO initiative for OpenGL. While there may be some surface-level similarities between Mantle and Vulkan, Vulkan is far more featured and has much more state management than Mantle ever had, so they are not the same, even though many in the tech press don't know the difference.
 
Joined
Jan 14, 2019
Messages
12,341 (5.75/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
Technically speaking, it's not hard to analyze and detect overhead. There are profiling tools which can pinpoint timing and resource allocation pretty precisely. It's not like developers rely on the Ballmer peak and crystal balls to optimize code; contrary to popular opinion, development work is surprisingly methodical, rational and deductive in nature.

So, if there were major driver overhead, it would be easily detectable. Not only would GPU performance be severely bottlenecked by the CPU, but we would expect this bottleneck to increase with GPU power (assuming FPS increases rather than detail settings), so we should expect an A770 to be significantly more bottlenecked by it than an A380. As I've said, Intel Arc performs poorly in real-world gaming compared to synthetic benchmarks, which points to hardware-level scheduling, not driver overhead.

I don't think you grasp how massive a driver overhead issue would have to be to hold back ~20% performance. Whether a whole API or just a few API calls were causing it, it would be very evident with a profiling tool. And remember, to unleash major gains this trend would have to persist across "every" workload, so any such trend should be easy to find; especially when a few API calls take up too much of the frame time while the GPU sits undersaturated, that sort of thing is very obvious in profiling. And they have been struggling since the early engineering samples last year to squeeze out even a tiny bit more performance, which suggests there isn't anything significant left to gain on the driver side. So it's very unlikely that Intel will suddenly stumble across something that unleashes 20% more performance, and I'm not talking about a single edge case here, but 20% across the board.

The only question is: did anybody actually test for CPU usage (to detect overhead symptoms) with these cards?
 
Joined
Jun 10, 2014
Messages
2,987 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
The only question is: did anybody actually test for CPU usage (to detect overhead symptoms) with these cards?
Testing "CPU usage" isn't the right way to do it, as pretty much any rendering thread or other thread waiting for something will be pegged at 100%. This is something developers do whenever there are latency concerns, because if we don't then the OS scheduler might kick another random thread in there an cause up to milliseconds of latency which can ultimately affect the game. So this is why most games have 1-3 threads at 100%, even if they have many more threads with some load.

The way to test for CPU bottlenecks is to reduce the potential bottleneck, by either using a faster CPU or a slower (or slowed-down) GPU, and check how real-world performance is affected.
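
A minimal sketch of that methodology, with placeholder numbers rather than real measurements:

Code:
# Same GPU, two CPUs: if FPS moves with the CPU, the workload is
# CPU/driver-bound; if it barely moves, it is GPU-bound.
def cpu_scaling_pct(fps_slow_cpu, fps_fast_cpu):
    """FPS gained from the faster CPU alone, in percent."""
    return (fps_fast_cpu / fps_slow_cpu - 1.0) * 100.0

print(cpu_scaling_pct(60.0, 63.0))   # 5.0  -> essentially GPU-bound
print(cpu_scaling_pct(60.0, 90.0))   # 50.0 -> heavily CPU/driver-bound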
 
Joined
Aug 15, 2016
Messages
486 (0.16/day)
Processor Intel i7 4770k
Motherboard ASUS Sabertooth Z87
Cooling BeQuiet! Shadow Rock 3
Memory Patriot Viper 3 RedD 16 GB @ 1866 MHz
Video Card(s) XFX RX 480 GTR 8GB
Storage 1x SSD Samsung EVO 250 GB 1x HDD Seagate Barracuda 3 TB 1x HDD Seagate Barracuda 4 TB
Display(s) AOC Q27G2U QHD, Dell S2415H FHD
Case Cooler Master HAF XM
Audio Device(s) Magnat LZR 980, Razer BlackShark V2, Altec Lansing 251
Power Supply Corsair AX860
Mouse Razer DeathAdder V2
Keyboard Razer Huntsman Tournament Edition
Software Windows 10 Pro x64
Vulkan is derived from and built upon components of AMD's Mantle API, which was donated by AMD to Khronos with the intent of giving Khronos a foundation on which to begin developing a low-level API that they could standardize across the industry.
Keywords: 'derived' and 'foundation'.
 
Joined
Apr 30, 2020
Messages
985 (0.59/day)
System Name S.L.I + RTX research rig
Processor Ryzen 7 5800X 3D.
Motherboard MSI MEG ACE X570
Cooling Corsair H150i Cappellx
Memory Corsair Vengeance pro RGB 3200mhz 32Gbs
Video Card(s) 2x Dell RTX 2080 Ti in S.L.I
Storage Western digital Sata 6.0 SDD 500gb + fanxiang S660 4TB PCIe 4.0 NVMe M.2
Display(s) HP X24i
Case Corsair 7000D Airflow
Power Supply EVGA G+1600watts
Mouse Corsair Scimitar
Keyboard Cosair K55 Pro RGB
As Steve of Gamers Nexus said, "Gamers are not enthusiasts".

However, this is a fundamental problem for Intel.

Enthusiasts are the people willing to try new things regardless of the problems; they like to tinker with new things. OK, cool, those people will buy Arc. However, they're a very small niche these days compared to the sheer volume of "gamers".

Gamers don't want to be testers for hardware; they don't want to try new things. They just want stuff that works with everything they use and play. This is where the problem lies, because Intel won't get the sheer volume of different setups it needs.

I still don't understand why gamers keep complaining about the prices of Nvidia GPUs.

Gamers should know by now that there are enough of them to sway the market; Nvidia would stop selling overpriced GPUs if they would just stop buying them.

I consider myself an enthusiast, but I'm looking into mGPU gaming on DX12 & Vulkan, so the top cards I'd buy are anything from an RTX 2070 Super to an RTX 3090 Ti, a 6400 XT to a 6950 XT, or maybe an A380, A750 or A770; there is no point in me ever buying an RTX 4090. I've been doing research on mGPU games and games that support it. I don't have a reason to keep up with the upcoming triple-A titles, which means I'll probably be finding older games, which usually end up on sale. What I'm doing is actually very hard; heck, even just the looking is.

Gamers would feel that what I'm doing is a big waste of time and money, but it's what I want to do, not them. Many times on here it just seems to me like gamers have a "mob mentality": it's their way or nothing.
 
Joined
Jun 10, 2014
Messages
2,987 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
As I've said in earlier threads:
My opinion is that Intel could have turned this entire Arc debacle into something positive by labeling it "Beta" and selling it at a very significant discount, like ~$150 for the A770, with a crystal-clear disclaimer that performance would be inconsistent and there could still be significant bugs.
 
Joined
Jan 20, 2019
Messages
1,560 (0.73/day)
Location
London, UK
System Name ❶ Oooh (2024) ❷ Aaaah (2021) ❸ Ahemm (2017)
Processor ❶ 5800X3D ❷ i7-9700K ❸ i7-7700K
Motherboard ❶ X570-F ❷ Z390-E ❸ Z270-E
Cooling ❶ ALFIII 360 ❷ X62 + X72 (GPU mod) ❸ X62
Memory ❶ 32-3600/16 ❷ 32-3200/16 ❸ 16-3200/16
Video Card(s) ❶ 3080 X Trio ❷ 2080TI (AIOmod) ❸ 1080TI
Storage ❶ NVME/SSD/HDD ❷ <SAME ❸ SSD/HDD
Display(s) ❶ 1440/165/IPS ❷ 1440/144/IPS ❸ 1080/144/IPS
Case ❶ BQ Silent 601 ❷ Cors 465X ❸ Frac Mesh C
Audio Device(s) ❶ HyperX C2 ❷ HyperX C2 ❸ Logi G432
Power Supply ❶ HX1200 Plat ❷ RM750X ❸ EVGA 650W G2
Mouse ❶ Logi G Pro ❷ Razer Bas V3 ❸ Logi G502
Keyboard ❶ Logi G915 TKL ❷ Anne P2 ❸ Logi G610
Software ❶ Win 11 ❷ 10 ❸ 10
Benchmark Scores I have wrestled bandwidths, Tussled with voltages, Handcuffed Overclocks, Thrown Gigahertz in Jail
As Steve of Gamers Nexus said, "Gamers are not enthusiasts".

However, this is a fundamental problem for Intel.

Enthusiasts are the people willing to try new things regardless of the problems; they like to tinker with new things. OK, cool, those people will buy Arc. However, they're a very small niche these days compared to the sheer volume of "gamers".

Gamers don't want to be testers for hardware; they don't want to try new things. They just want stuff that works with everything they use and play. This is where the problem lies, because Intel won't get the sheer volume of different setups it needs.

I'm sure that was said in context and not necessarily branding all gamers as non-hardware-enthusiasts. In general, I can't see most gamers being hardware enthusiasts, who usually come in small numbers in any workload/profession/etc... most gamers just want sufficient hardware, or, if the pockets are swell, overkill hardware for a wholesome gaming experience. Many of these guys will opt for pre-builts, consoles and handhelds. Then you've got the gamer community who fancy getting their hands dirty going the DIY route for all sorts of reasons (aesthetics, uncompromising quality parts, freedom of hardware adaptation/upgrading, just making it your own per size/preference/spec/colour contrast/features/etc.). Some of these reasons spell out "interest"/"hobby"/"hands-on motivation", which is already a level up in "enthusiasm"... so how do we then determine who is a hardware enthusiast and who isn't? I'd like to see this enthusiast-measuring yardstick.

Keep in mind, most people don't have enough money to splash around for, as you suggest, being "willing to try new things regardless of the problems", or the flip side, "gamers don't want to be testers for hardware... they just want stuff that works with everything they use and play". I'm certain the "enthusiast" has not been robbed of the intelligence or logic to avoid blindly buying into anything and everything to fill some odd urge to nurse their enthusiasm... I'm sure the majority of enthusiasts are ordinary people who just want the best hardware at the right price (also considering temps/noise/power consumption) to fulfil a desired performance target (be it gaming or other). That's why we look to reviews/benchmarks/etc. for a more informed decision before parting with our hard-earned money.

I'm a gamer and definitely a hardware enthusiast. Even when I'm not buying, I'm fixated on all things new and challenging. It gets so bad that even after upgrading to something more than adequate, if not overkill, it doesn't take long for the upgrade itch to kick in. Not because I "need" more performance, but because the fixation on newer developments, the small insignificant details and the negligible upticks in performance keeps the ball rolling; it's a battlefront for the enthused to keep up with the roughly 3-year (or 5-year, for a platform swap) upgrade plan. In short, I don't need to touch hardware on a regular basis nor invest a ton of money in all sorts of hardware, and yet I'm ENTHUSED!!

I still don't understand why gamers keep complaining about the prices of Nvidia GPUs.

If you still don't get it, I don't believe the train of cognition will be arriving any time soon; it's probably already left the station.

A little clue, though: the vast majority of gamers can't afford, or can't justify, throwing their hard-earned cash at $500+ graphics cards. I earn well and usually maintain a healthy savings account... even I highly dislike how the market dynamics have changed, and I simply don't like parting with chunks of my money to feed unregulated authoritarian greed. But what can you do... I'm a performance buff and I demand visual eye candy at its best... hence I'm compelled by the forces at play to tear another ~$800 wound in my savings to grab a next-gen high-performance card. Not just an enthusiast, but enthusiastically obsessed, if you ask me!

Gamers would feel that what I'm doing is a big waste of time and money, but it's what I want to do, not them. Many times on here it just seems to me like gamers have a "mob mentality": it's their way or nothing.

Honestly, I don't see this kind of "my way or the highway" gamer mob mentality on my screen, especially on this site. It's likely that the majority of members here are gamers, hence seeing more game-relevant material shouldn't be too surprising. You should also appreciate the large "gamer" slice of the pie... it's one of the well-fed and teased instruments that heavily influences modern consumer tech.
 
Joined
Mar 29, 2014
Messages
483 (0.12/day)
Funny how, in this TPU review, the 6650 XT is presented at the beginning as a direct competitor to the A770, yet in every single graph the 6650 XT is completely missing; only the 6600 XT is used instead. Seeing as your very own review states the 6650 XT is overall 5% better than the 6600 XT, this omission is the single reason you can claim the A770 has a price/performance lead... except it doesn't. Nice try.
 
Joined
Jul 15, 2020
Messages
1,021 (0.64/day)
System Name Dirt Sheep | Silent Sheep
Processor i5-2400 | 13900K (-0.02mV offset)
Motherboard Asus P8H67-M LE | Gigabyte AERO Z690-G, bios F29e Intel baseline
Cooling Scythe Katana Type 1 | Noctua NH-U12A chromax.black
Memory G-skill 2*8GB DDR3 | Corsair Vengeance 4*32GB DDR5 5200Mhz C40 @4000MHz
Video Card(s) Gigabyte 970GTX Mini | NV 1080TI FE (cap at 50%, 800mV)
Storage 2*SN850 1TB, 230S 4TB, 840EVO 128GB, WD green 2TB HDD, IronWolf 6TB, 2*HC550 18TB in RAID1
Display(s) LG 21` FHD W2261VP | Lenovo 27` 4K Qreator 27
Case Thermaltake V3 Black|Define 7 Solid, stock 3*14 fans+ 2*12 front&buttom+ out 1*8 (on expansion slot)
Audio Device(s) Beyerdynamic DT 990 (or the screen speakers when I'm too lazy)
Power Supply Enermax Pro82+ 525W | Corsair RM650x (2021)
Mouse Logitech Master 3
Keyboard Roccat Isku FX
VR HMD Nop.
Software WIN 10 | WIN 11
Benchmark Scores CB23 SC: i5-2400=641 | i9-13900k=2325-2281 MC: i5-2400=i9 13900k SC | i9-13900k=37240-35500
As Steve of Gamers Nexus said, "Gamers are not enthusiasts".
Generally speaking, I don't think you need expensive stuff in your computer to be an enthusiast (see my system), and vice versa, you are not an enthusiast by default just because you have expensive stuff in your computer, so we very much agree on that.
Regarding Arc, I don't think buying one is an enthusiast thing, more of an "early adopter" thing. I'm not even considering buying one until Arc's second gen, but the subject interests me a lot. Reason: the drivers are too immature for my taste, though I can definitely understand someone buying one for exactly that reason.
So an enthusiast isn't an "early adopter"; most of them aren't, imo.
 
Joined
Jan 14, 2019
Messages
12,341 (5.75/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
Generally speaking, I don't think you need expensive stuff in your computer to be an enthusiast (see my system), and vice versa, you are not an enthusiast by default just because you have expensive stuff in your computer, so we very much agree on that.
Regarding Arc, I don't think buying one is an enthusiast thing, more of an "early adopter" thing. I'm not even considering buying one until Arc's second gen, but the subject interests me a lot. Reason: the drivers are too immature for my taste, though I can definitely understand someone buying one for exactly that reason.
So an enthusiast isn't an "early adopter"; most of them aren't, imo.
Not only that. I think using low-end and outdated hardware to create something that is still usable today is more of an enthusiast thing than buying the highest-end stuff and calling it a day. Everybody knows what a 3090 is, and everybody knows it'll play any game at basically any setting. There's no challenge in building a system around it; you only need money. I'm more proud of having built my two HTPCs (in my signature) than I was of any of my high-end systems in the past. "HTPC 2" is dead silent with a passively cooled CPU as well as GPU, yet it's still capable of some light gaming. Currently, I only have Nascar Heat 5 installed on it, which runs at around 40 FPS at 1080p, which I think is impressive.
 
Joined
Sep 21, 2020
Messages
1,646 (1.08/day)
Processor 5800X3D -30 CO
Motherboard MSI B550 Tomahawk
Cooling DeepCool Assassin III
Memory 32GB G.SKILL Ripjaws V @ 3800 CL14
Video Card(s) ASRock MBA 7900XTX
Storage 1TB WD SN850X + 1TB ADATA SX8200 Pro
Display(s) Dell S2721QS 4K60
Case Cooler Master CM690 II Advanced USB 3.0
Audio Device(s) Audiotrak Prodigy Cube Black (JRC MUSES 8820D) + CAL (recabled)
Power Supply Seasonic Prime TX-750
Mouse Logitech Cordless Desktop Wave
Keyboard Logitech Cordless Desktop Wave
Software Windows 10 Pro
The more I think about it, the less reason I see behind Intel's marketing and pricing of the A750/770. It's quite obvious that by betting on DX12/RT and AV1 encoding they are targeting the younger demographic of "pro gamers" / Twitch streamers. These are typically the people who only play the newest or most popular games of the moment and would purchase any publicized title on launch day. But this group tends to be habitual Nvidia buyers, and they are not going to change their affinity simply because a third player has entered the market. They are also very likely to already own a GPU which performs on par with Intel's top offering, or better.

And I find it funny how Intel seems to completely disregard AMD's 6600 XT/6650 XT as the competitor to the A750/770. Intel promotes the cards as a cheaper and faster alternative to Nvidia's RTX 3060, but they fail even by this metric. In TPU's review, the A750 ends up marginally slower than the RTX 3060 @ 1080p and only slightly faster @ 1440p, while the A770 is a mere 4% and 12% faster. Price-wise, at least in Europe, the RTX 3060 can easily be found for under $300, and the 6600 XT even cheaper. Either of those cards offers better value for money, with more consistent performance, mature drivers, better compatibility across a wide range of games, and much better power efficiency.
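
To put the value argument in simple numbers (prices and relative performance here are illustrative, roughly in line with the figures above):

Code:
# Price per unit of relative performance: lower is better value.
cards = {
    "RTX 3060":   {"price": 300, "rel_perf": 1.00},
    "RX 6600 XT": {"price": 280, "rel_perf": 1.00},
    "Arc A770":   {"price": 350, "rel_perf": 1.04},  # ~4% faster @ 1080p
}
for name, c in cards.items():
    print(f"{name}: ${c['price'] / c['rel_perf']:.0f} per unit of performance")
# RTX 3060: $300, RX 6600 XT: $280, Arc A770: $337 with these inputs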

Don't get me wrong, I really want Intel to succeed with their dGPUs. But I can't see that happening at the suggested MSRP of $300-350. When you are a newcomer in a highly competitive market, you don't win customers by offering them something similar at a higher price. You need to give them something better at a lower price.
 
Joined
Jan 14, 2019
Messages
12,341 (5.75/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE
The more I think about it, the less reason I see behind Intel's marketing and pricing of the A750/770. It's quite obvious that by betting on DX12/RT and AV1 encoding they are targeting the younger demographic of "pro gamers" / Twitch streamers. These are typically the people who only play the newest or most popular games of the moment and would purchase any publicized title on launch day. But this group tends to be habitual Nvidia buyers, and they are not going to change their affinity simply because a third player has entered the market. They are also very likely to already own a GPU which performs on par with Intel's top offering, or better.

And I find it funny how Intel seems to completely disregard AMD's 6600 XT/6650 XT as the competitor to the A750/770. Intel promotes the cards as a cheaper and faster alternative to Nvidia's RTX 3060, but they fail even by this metric. In TPU's review, the A750 ends up marginally slower than the RTX 3060 @ 1080p and only slightly faster @ 1440p, while the A770 is a mere 4% and 12% faster. Price-wise, at least in Europe, the RTX 3060 can easily be found for under $300, and the 6600 XT even cheaper. Either of those cards offers better value for money, with more consistent performance, mature drivers, better compatibility across a wide range of games, and much better power efficiency.

Don't get me wrong, I really want Intel to succeed with their dGPUs. But I can't see that happening at the suggested MSRP of $300-350. When you are a newcomer in a highly competitive market, you don't win customers by offering them something similar at a higher price. You need to give them something better at a lower price.
I think targeting DX12 and RT is normal; that's where we need more performance. DX11 and older games already run well enough on basically anything (except for a couple of titles on Arc).

I want Intel to succeed myself, and I really want to buy an A770 to play with, but a couple of things hold me back:
1. Drivers.
2. Unexplained high idle power consumption. My PC is idle / browsing most of the time, so that really matters.
3. Low overall performance relative to chip size, theoretical capabilities and power consumption. I mean, 4096 shaders with 128 ROPs should really perform better, especially with a 225 W TDP (see the rough numbers below).
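
A back-of-envelope check of that on-paper capability (assuming the ~2.1 GHz boost clock and 2 FLOPs per shader per clock via FMA; illustrative only):

Code:
shaders = 4096
boost_ghz = 2.1                      # approximate A770 boost clock
tflops = shaders * 2 * boost_ghz / 1000.0
print(f"{tflops:.1f} TFLOPS FP32")   # ~17.2 on paper, yet the card
                                     # trades blows with the ~13 TFLOPS RTX 3060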

Also, I found a really good deal on the reference 6750 XT at a local store, which made me doubt what I want; it's 100-120 quid cheaper than the AIB 6750 XTs. I think I'll wait until the A770 appears in stores, see what its real price looks like, and go from there.
 

Count von Schwalbe

Moderator
Staff member
Joined
Nov 15, 2021
Messages
3,082 (2.78/day)
Location
Knoxville, TN, USA
System Name Work Computer | Unfinished Computer
Processor Core i7-6700 | Ryzen 5 5600X
Motherboard Dell Q170 | Gigabyte Aorus Elite Wi-Fi
Cooling A fan? | Truly Custom Loop
Memory 4x4GB Crucial 2133 C17 | 4x8GB Corsair Vengeance RGB 3600 C26
Video Card(s) Dell Radeon R7 450 | RTX 2080 Ti FE
Storage Crucial BX500 2TB | TBD
Display(s) 3x LG QHD 32" GSM5B96 | TBD
Case Dell | Heavily Modified Phanteks P400
Power Supply Dell TFX Non-standard | EVGA BQ 650W
Mouse Monster No-Name $7 Gaming Mouse| TBD
Joined
Jan 14, 2019
Messages
12,341 (5.75/day)
Location
Midlands, UK
System Name Nebulon B
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance DDR5-4800
Video Card(s) AMD Radeon RX 6750 XT 12 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Bazzite (Fedora Linux) KDE