Thursday, August 4th 2022
Intel Arc Board Partners are Reportedly Stopping Production, Encountering Quality Issues
According to sources close to Igor Wallossek of Igor's Lab, Intel's upcoming Arc Alchemist discrete graphics card lineup is in trouble. As the anonymous sources state, certain add-in board (AIB) partners are having difficulty incorporating the third GPU manufacturer into their offerings. First, AIBs are reportedly sitting on a pile of NVIDIA and AMD GPUs; this inventory is losing value daily, so it needs to be moved quickly. Second, Intel is reportedly suggesting that AIBs ship cards to OEMs and system integrators to start spreading the new Arc dGPUs through the market. This business model is inherently lower-margin than selling GPUs directly to consumers.
Last but not least, it is reported that at least one major AIB is stopping production of custom Arc GPUs due to quality concerns. What this means is yet to be uncovered, and we will have to wait and see which AIB (or AIBs) is stepping out of the game. All of this suggests that the new GPU lineup is on the verge of extinction even before it has launched. However, we are sure that the market will adapt and make a case for the third GPU maker. Of course, these predictions should be taken with a grain of salt, and we await more information to confirm these issues.
Source:
Igor's Lab
This persists throughout a broad range of games.
So great, it uses the same amount of power. It's also a stuttery GPU. The differences above are night and day. Before we praise Polaris, look at how similarly it performs to the 390X in the lows. It's just a minor update to GCN, not much else - and a good one, in a relative sense: lows and averages definitely got closer together. But Nvidia was on a completely different level by then, being almost rock solid.
See, this is the point that keeps emerging, as it always has over a few decades of watching the Nvidia/AMD battle: power is everything. And that applies broadly. You need the graphical power, but you also need to deliver it at the lowest possible TDP. You need a stack that can scale and maintain efficiency.
When the node stalled at 28 nm for a huge amount of time, that's when the efficiency battle was on in earnest: after all, how do you differentiate when you're all on the same silicon? You can't hide anything by saying 'we're behind' or 'we're ahead'. What we saw across the Kepler > Pascal generations was hot competition because architecture was everything. AMD pushed an early success with GCN's 7950 & 7970 and basically rode that train until Hawaii XT, then even rebranded the same chip and kept selling it, because it was actually still competitive and by then super cheap.
AMD stayed on that train for far too long and then accumulated two years of falling behind Nvidia. It's 2H2022 and they still haven't caught up. They might have performance parity with Nvidia, but that comes with a lacking featureset. Luckily it's a featureset that isn't a must-have yet, but it's only in the last few months that we've had a solid DLSS alternative, for example.
Now, apply this lens to Team Blue. Arc is releasing to compete with performance we already had (more than?) two years ago in the upper midrange - and note that Turing was the first generation that took twice as long to arrive as we were used to, so on the 'normal' cadence of 1-1.5 year generational upgrades the gap would have been even bigger. It does so with a grossly inefficient architecture, and the node advantage doesn't cover the gap either. It uses a good 30-50% more power than a competitor for the same workload. Or it could use equal power, like your 1060 comparison, and then perform a whole lot worse, dropping even further down the stack. On top of that, there is no history of driver support to rely on, and no full support at launch.
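To put that power gap in rough numbers, here is a back-of-the-envelope perf-per-watt sketch in Python; the figures are invented for illustration, not measured results. Drawing ~40% more power at the same frame rate works out to roughly a 30% efficiency deficit, and capping power instead gives up a similar share of performance.

```python
# Back-of-the-envelope perf/W comparison; all numbers are made up for illustration.

def perf_per_watt(fps, watts):
    """Frames per second delivered per watt of board power."""
    return fps / watts

# Same workload, same FPS, but the newcomer draws ~40% more power.
incumbent = perf_per_watt(fps=100, watts=200)   # 0.50 fps/W
newcomer  = perf_per_watt(fps=100, watts=280)   # ~0.36 fps/W
print(f"efficiency deficit at equal FPS: {1 - newcomer / incumbent:.0%}")   # ~29%

# Flip it around: cap the newcomer at the same 200 W. Assuming (simplistically)
# that performance scales linearly with power, it lands around 71 FPS,
# i.e. noticeably further down the product stack.
print(f"FPS at a matched 200 W: {100 * 200 / 280:.0f}")
```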
It's literally repeating the same mistakes we saw not very long ago, and we can't laugh in Intel's face because they're supposedly such a potential competitor? There is no potential! There never was, and many people called it - simply because they/we know how effin' hard this race really is. We've seen the two giants struggle. We've seen Raja fall into the very same holes he fell into at AMD/RTG.
I can honestly see just one saving grace for Intel: the currently exploding TDP budgets. It means Intel can ride a solid arch for longer than one generation just by sizing it up. But... you need a solid arch first, and Arc is not it by design, because the primary product is Xe. As long as there is no absolute, near-fanatical devotion to a gaming GPU line, you can safely forget it. And this isn't news: Nvidia had already spent a decade paring its featureset down towards pure gaming chips, stacking one successful year upon another.
I have to say it's stupid to form any opinion on an unreleased product. China doesn't count. When the rest of the Arc Alchemist series is released, and we can finally get our hands on one, we can see for ourselves.
As for Intel's ARC: provided Intel have a decent amount of fully working dies, Intel have the financial muscle to take a temporary loss for market penetration. If Intel have loads of dies that are not yet assembled onto PCBs, they can even choose to use cheaper VRAM and other cost-saving measures to make a budget card.
Do not insult other members.
Discuss the topic, not each other... and, be civil about it.
Post facts... not personal attacks!
My 1080 was on water, and when it went back to air I prepared for the worst... only to find out I can't hear it when gaming, and that my issues years before were just bad stock TIM.
I still think we need to wait and see what this quality control issue was, as we have no evidence it was an Intel-supplied part.
If the drivers are in that kind of state and basically need another 6-9 months in the oven, then by the time everything is ready and they have spun a new PCB revision, Intel will be competing with next-gen cards from AMD and nVidia, and ARC will be lucky to compete against even the lowest-end cards, so Intel will be forced to slash prices. I think this first iteration of ARC is DOA; it's simply too late, too expensive and too underpowered to have any meaningful impact on the market by the time it's released.
I hope that Intel fix what is wrong, concentrate on the top two or three SKUs, release them as loss-leaders to be competitive, and get their asses working on the next gen as fast as they can. I would assume that the next drivers will be based on the same foundation as the first gen's, so they should be fairly stable by then.
I feel that Intel really should stick with ARC, and it would be short-sighted to completely cancel the discrete GPU project. I don't trust Intel not to simply align with the other two vendors and price-fix the market, but I at least hope that they actually do intend to be competitive and keep low- to mid-range prices sane, as I'm sure nVidia want to move the midrange to $800+ after what happened to the 3070.
No way the RX 480 was at 1050 Ti performance; that makes no sense, I'm sorry.
I have a winter and a summer setting in Wattman. The winter one is around 15% faster than stock, with the fan at 83756857 rpm (XD), but with only 10-20 W more (as reported by GPU-Z) at 100% (HBM 1100 MHz, 1.04 V, stock frequency, +50% PL), and 20-30 W less under 90%. It's perfect since it helps heat the underside of my desk; in my room during winter I rarely see more than 19°C (upper floor with a radiator, but the heater is controlled by the lower floor, which is always 2°C hotter than my room).
And the summer one is a super-efficient one, basically the same performance as a stock, air-cooled one (1395MHz, 0.9v, HBM 1095MHz) but with 70w less (again, by GPU-Z). It’s around a 1080 in both performance and efficiency.
The only driver issue I had was 3-4 years ago: a memory leak with Forza Horizon 3 that prevented it from booting, promptly solved a week later with the next beta.
I have always used Radeon GPUs, which is why I bought the 64 over the 1080, and I'm going to buy a 7700 XT or 7800 (together with a 1440p 144 Hz 27” IPS display) when they're released.
Cyberpunk 2077 - Big title, but not known as a particularly well made game.
Control - I'm not familiar, so no comment.
Borderlands 3 - An Unreal Engine game, fairly popular but neither graphically impressive nor known to scale well.
Fortnite - Another Unreal Engine game, very popular but, like any Unreal game, not known for good graphics scaling.
So, these are the games they claimed to be best "optimized" for, when at least two of them use a "universal" game engine with just high-level rendering code written for those specific games, one is known to be badly made, and one is an outlier. I honestly think they were just grasping at straws to find any games where the A750 outperformed the RTX 3060, instead of picking the most cutting-edge ones.
What's next, are they going to showcase the best new GPU for WoW, CS:GO and The Sims 4?

I have a pretty good theory of how they managed to ship such a broken driver package, but I want to stress that it's speculation, beyond the fact that their driver was fairly stable before they added extra "gaming features" and gimmicks. My theory is that the driver was fairly good and properly QAed until they started merging in those features. It's fairly common in large code projects to have multiple teams working on separate branches and to run into problems where branch A and branch B work fine by themselves but introduce completely new bugs when combined. So whenever a new feature is merged in, the entire software package needs a new round of QA, which is the reason for having a "feature freeze" long before a planned release; this is well known among software developers. Yet many companies have team leaders or managers who think merging features last-minute is a good thing.

If you remember, back in those days the RX 480 wasn't just compared to the GTX 1060; many forum warriors claimed that the RX 480 was significantly better than the GTX 1060, that it was just a matter of some driver tweaks and voila, it would perform in the GTX 1070 ~ GTX 1080 range. They were citing the same old lies about the driver being "immature" and "games being optimized for Nvidia". So did the RX 480 ever unlock that ~30-40+% of extra performance with optimized drivers? No. And it's probably a matter of time before AMD drops driver support for it, as they have already dropped the 200/300 series.
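As a minimal illustration of the branch-merge failure mode described above, here is a purely hypothetical Python sketch (nothing to do with Intel's actual driver code): each team's change passes its own tests in isolation, yet the combination breaks.

```python
# Two teams patch a shared config independently. Each change is fine alone;
# merged, branch B strips the key that branch A's feature depends on.
# All names here are illustrative, not real driver code.

base_config = {"vsync": True, "overlay": False}

def branch_a_enable_capture(cfg):
    # Team A's feature: turn on the capture overlay.
    cfg = dict(cfg)
    cfg["overlay"] = True
    return cfg

def branch_b_cleanup(cfg):
    # Team B's "cleanup": drop keys it believes are unused -- including overlay.
    return {k: v for k, v in cfg.items() if k == "vsync"}

# Each branch's own tests pass:
assert branch_a_enable_capture(base_config)["overlay"] is True
assert "vsync" in branch_b_cleanup(base_config)

# After the merge, the combined pipeline hits a bug neither team ever saw:
merged = branch_b_cleanup(branch_a_enable_capture(base_config))
try:
    merged["overlay"]
except KeyError:
    print("bug that only appears once both features ship together")
```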
Pascal (GeForce 10 series) will probably remain one of the "best" architectures and GPU investments in terms of how long the GPU stays useful for gaming. Just look at the GTX 1060, a card which six years later can almost compete with lower mid-range cards, and those who bought a GTX 1080/1070 Ti can still game well, albeit not on the highest settings. Except for RT, Pascal has aged very well, much better than the generations before it, and likely better than Turing and Ampere will.
F1: It's a well-optimised game series that is historically known to run well on a potato without issues. At least none of my PC configurations (I change components quite often) ran the current iteration below 100 FPS.
Cyberpunk and Control: Based on the A380 review here at TPU, Arc seems to ray trace quite alright. Intel probably managed to find a combination of settings that plays to Arc's advantage with RT.
Borderlands and Fortnite: I don't play these games, but aren't these the kind of titles that run on a potato to make sure the intended audience (kids) can play it too?
There is a 9-19% performance gap (in average FPS) between the two cards in current-day benches, and there was one at launch too, as pointed out by the link earlier.
Userbenchmark agrees - and though I'm the last to take that site as simple truth, I can't defend the idea that it's coincidental that all those numbers point in the same direction.
gpu.userbenchmark.com/Compare/AMD-RX-480-vs-Nvidia-GTX-1060-6GB/3634vs3639
The best-case scenario for the RX 480 is that it gets roughly equal FPS in select titles. But that's not counting the lows. This was no different from the situation at launch: look at the huge gap between averages and 1% lows, and how it compares to any Pascal card.
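For anyone unsure how that averages-versus-lows gap is actually measured, here is a generic Python sketch (not TPU's or Userbenchmark's exact methodology) of deriving average FPS and 1% lows from per-frame render times; the frametime lists are invented for illustration.

```python
# Average FPS hides stutter; the 1% low (the FPS implied by the slowest 1% of
# frames) exposes it.

def fps_metrics(frametimes_ms):
    """Return (average FPS, 1% low FPS) for a list of per-frame times in ms."""
    avg_fps = 1000 * len(frametimes_ms) / sum(frametimes_ms)
    slowest = sorted(frametimes_ms, reverse=True)
    n = max(1, len(slowest) // 100)            # the worst 1% of frames
    one_percent_low = 1000 * n / sum(slowest[:n])
    return avg_fps, one_percent_low

smooth   = [16.0] * 99 + [17.0]                # consistent pacing
stuttery = [15.0] * 97 + [40.0] * 3            # similar average, big spikes

print(fps_metrics(smooth))     # ~62 avg, ~59 low: averages and lows sit close
print(fps_metrics(stuttery))   # ~63 avg, ~25 low: the gap is the stutter
```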
Rubbish or not, the numbers don't lie. Some tinted glasses apply here, I'm sorry to break the dream. Same applies to you.
The most interesting bit here is how this all works in our heads, I think. The numbers simply don't lie, but I recognize the sentiment. We easily tell ourselves it's just as good because the 'differences are minor'. That's all ego at work, not rational thought.
In other words, don't take this as criticism; take it as a point of reflection. I get it, and I do it myself. I always compare my GTX 1080 to anything else to determine what's good or not. It's crazy how powerful the brain is at drawing the picture for us.
Suit yourself ;) I pointed out your own review link, userbenchmark, and now here we have another 10% which is not counting the 1% lows.
I do agree, though, that the 1060 is a much better card due to its lower power consumption, which led to the wide availability of ITX versions - something the 480/580 completely lacked.
However, the gap in the 1% and 0.1% lows is absolutely huge, and it's one of the reasons Nvidia maintained its lead. It simply ran more smoothly. It also shows AMD tuned its cards for high average FPS rather than consistent FPS - whether that was inherent to GCN at the time or not, I don't know. But it echoes the whole frame pacing/microstutter episode on both brands. Nvidia clearly started sacrificing high averages for consistency at some point.
Now obviously RX480 wasn't going to equal a 1060, because it literally wasn't marketed to compete with that card. It was competing with the 970, albeit far too late - and even there it didn't quite get to a decisive victory. Pascal had the node advantage.