Thursday, August 4th 2022

Intel Arc Board Partners are Reportedly Stopping Production, Encountering Quality Issues

According to sources close to Igor Wallossek of Igor's Lab, Intel's upcoming Arc Alchemist discrete graphics card lineup is in trouble. The anonymous sources state that certain add-in board (AIB) partners are having difficulty integrating the third GPU manufacturer into their offerings. Firstly, AIBs are reportedly sitting on piles of NVIDIA and AMD GPUs that lose value daily and need to be moved quickly. Secondly, Intel is reportedly suggesting that AIBs ship cards to OEMs and system integrators to kick-start the market spread of the new Arc dGPUs, a business model with inherently lower margins than selling GPUs directly to consumers.

Last but not least, at least one major AIB has reportedly stopped production of custom Arc GPUs due to quality concerns. What this means exactly remains to be seen, and we will have to wait and see which AIB (or AIBs) is stepping out of the game. All of this suggests that the new GPU lineup is in serious trouble even before it has launched. However, we are sure that the market will adapt and make a case for the third GPU maker. Of course, these reports should be taken with a grain of salt, and we await more information to confirm these issues.
Source: Igor's Lab

133 Comments on Intel Arc Board Partners are Reportedly Stopping Production, Encountering Quality Issues

#101
AusWolf
stimpy88: I don't know anybody who thought that Intel was capable of delivering a graphics card that could take on a 3090 Ti. I've never read that on any forum or any tech site.

What people were expecting was that Intel should be capable of creating a card which would maybe match the midrange cards from nVidia, at the top end.
What people were expecting was lower prices than nVidia & AMD.
What people were expecting was better availability of all graphics cards due to having another alternative on the market.
What people were expecting was more competition for customers' money, therefore reigniting the drive for new graphical features to differentiate between offerings from the 3 vendors.
What people were expecting was that having a 3rd player in the market would drive down prices overall.
What people were expecting was that Intel should have had the experience and expertise to pull off a product launch with only minor issues, certainly not hardware ones. We all expected the drivers to be the weak point, as Intel simply does not know how to make them.
We can still expect all of this, as Arc hasn't been launched yet. Like I said - wait and see. It's still too early to consider it dead.
Posted on Reply
#102
chrcoluk
Sitting on stockpiles of AMD and Nvidia GPUs, eh? Who would have thought it, haha.
Posted on Reply
#103
Assimilator
AusWolf: Did you expect 1st gen Arc to launch and give AMD and Nvidia a thorough beating right from the start? Come on...
I've never mentioned anything about Arc beating the incumbents anywhere in any of my posts on this topic, so kindly stop it with your strawman argument.
AusWolf: 1st gen RDNA didn't beat Nvidia, and had lots of driver and heat issues, but we needed AMD to come up with something to break the monopoly. And they did, and now everyone is happy with RDNA 2.
Arc being completely and fundamentally broken and uncompetitive, as a brand-new low-end product against strong established competitors, is an entirely different scenario to AMD releasing a new product line that is competitive in all segments when its drivers work right.
stimpy88: I don't know anybody who thought that Intel was capable of delivering a graphics card that could take on a 3090 Ti. I've never read that on any forum or any tech site.

What people were expecting was that Intel should be capable of creating a card which would maybe match the midrange cards from nVidia, at the top end.
What people were expecting was lower prices than nVidia & AMD.
What people were expecting was better availability of all graphics cards due to having another alternative on the market.
What people were expecting was more competition for customers' money, therefore reigniting the drive for new graphical features to differentiate between offerings from the 3 vendors.
What people were expecting was that having a 3rd player in the market would drive down prices overall.
What people were expecting was that Intel should have had the experience and expertise to pull off a product launch with only minor issues, certainly not hardware ones. We all expected the drivers to be the weak point, as Intel simply does not know how to make them.
Thank you for demonstrating that you are also capable of logical thought.
AusWolf: We can still expect all of this, as Arc hasn't been launched yet. Like I said - wait and see. It's still too early to consider it dead.
It has launched. In China. And it's shit. Try reading the review that was literally posted on this site.
Posted on Reply
#104
eidairaman1
The Exiled Airman
Flanker: Well, this time it got further than Larrabee
Still DOA
Tropick: Right? I know a lot of people have been pissed at the launch issues, but honestly I don't know what everyone expected. This is a new industry they're launching into; there are going to be some inevitable growing pains.

The problem for me is how wishy-washy they're being post-launch. They're absolutely destroying consumer goodwill with this sheepish response to the issues DG128 has. It makes them seem like they never really cared about it that much in the first place and are being too willing to just abandon it. They either needed to be honest with their customers during the development process and make sure everybody knew this was a moonshot-type launch and to brace for impact, or get the hype train rolling (like they did) and then, when the launch didn't go the way they wanted it to, reassure everyone that they stand behind their promise and guarantee the product will improve, and then actually DO it. Not, y'know, cut the entire damn AXG division.

I basically live next door to the site where Intel is set to build their new fabs in Ohio. People used to be absolutely hyped for it and the investment it would bring. Now the general sentiment is that Intel is pulling an Amazon: all talk about this grand new project, but the delivery date might as well be 10 years out for all we know.
Um, no, it is not a new industry they are launching into: they had the i740, which flopped, then Larrabee, which was DOA, plus the iGPUs over the decades. ARC is DOA at this rate.
Posted on Reply
#105
chrcoluk
If Intel abandons ARC after this first generation, that would show extreme short-termism. They probably should have launched later, when the drivers were at least functioning properly on a basic level, and the cards should have been much cheaper, concentrating on building up a user base instead of profitability.
Posted on Reply
#106
Vayra86
Bomby569: That's not true, the 480 barely consumed more power than the 1060

www.gamersnexus.net/hwreviews/2518-nvidia-gtx-1060-review-and-benchmark-vs-rx-480/page-3
And it performed way worse at that power draw. Check your own review. 60 vs 52 FPS 1% lows in GTA V at 1080p isn't little; it's the gap between frame drops and stable FPS in that game, and overall there is a 15% perf gap.

This persists throughout a broad range of games:

So great, it uses the same amount of power. It's also a stuttery GPU. The above differences are night and day. Before we praise Polaris, look at how similarly it performs to the 390X in lows. It's just a minor update to GCN, not much else - and a good one, in a relative sense: lows and averages definitely got closer together. But Nvidia was on a completely different level by then, being almost rock solid.
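(Side note on the metric, since averages vs. lows keep coming up in this thread: "1% lows" are typically derived from a frametime log by averaging the instantaneous FPS of the slowest 1% of frames, so they expose stutter that average FPS hides. Below is a minimal sketch of that computation on made-up frametime data; exact methodology varies by reviewer.)

```python
import numpy as np

# Made-up frametime log in milliseconds, one entry per rendered frame.
rng = np.random.default_rng(0)
frametimes_ms = rng.lognormal(mean=2.8, sigma=0.25, size=10_000)

def fps_metrics(frametimes_ms):
    """Average FPS plus 1% and 0.1% lows: average the instantaneous FPS
    of the slowest 1% (0.1%) of frames. (Simplified; some reviewers use
    total frames / total time for the average instead.)"""
    fps = np.sort(1000.0 / frametimes_ms)  # ascending: slowest frames first
    avg = fps.mean()
    low_1 = fps[: max(1, fps.size // 100)].mean()
    low_01 = fps[: max(1, fps.size // 1000)].mean()
    return avg, low_1, low_01

avg, low_1, low_01 = fps_metrics(frametimes_ms)
print(f"avg {avg:.1f} FPS | 1% low {low_1:.1f} FPS | 0.1% low {low_01:.1f} FPS")
```

(A card with worse frame pacing shows a wider spread between the average and the lows even at the same average FPS, which is the gap being argued about here.)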

See, this is the point that keeps emerging, as it always has over a few decades of watching the Nvidia/AMD battle: power is everything. And that applies broadly. You need the graphical power, but you also need to run it at the lowest possible TDP. You need a stack that can scale and maintain efficiency.

As the node stalled at 28 nm for a huge amount of time, the efficiency battle was on in earnest: after all, how do you differentiate when you're all on the same silicon? You can't hide anything by saying 'we're behind' or 'we're ahead'. What we saw in the Kepler > Pascal generations was hot competition, because architecture was everything. AMD was pushing an early success with GCN's 7950 & 7970 and basically rode that train until Hawaii XT, and then rebranded even on the same chip, still selling it because it was actually still competitive and by then super cheap.

AMD stayed on that train for far too long and then accumulated two years of falling behind Nvidia. It's 2H2022 and they still haven't caught up. They might have performance parity with Nvidia, but that's with a lacking featureset. Luckily it's a featureset that isn't a must-have yet, but it's only in the last few months that we've had a solid DLSS alternative, for example.

Now, apply this lens to Team Blue. Arc is releasing to compete with product performance we've had for (more than?) two years in the upper midrange - and note: Turing was the first gen taking twice as long to release as we were used to, so in the 'normal' cadence of 1-1.5 year gen upgrades the gap would have been bigger. It does so with a grossly inefficient architecture, and the node advantage doesn't cover the gap either. It uses a good 30-50% more power for the same workload as a competitor. Or it could use equal power, like your 1060 comparison, and then perform a whole lot worse, dropping even further down the stack. On top of that, there is no history of driver support to trust in and no full support at launch.

It's literally repeating the same mistakes we've seen not very long ago, and we can't laugh in Intel's face because they're such a potential competitor? There is no potential! There never was, and many people called it - simply because they/we know how effin' hard this race really is. We've seen the two giants struggle. We've seen Raja fall into the very same holes he did at AMD/RTG.

I can honestly see just one saving grace for Intel: the currently exploding TDP budgets. It means Intel can ride a solid arch for longer than one generation just by sizing it up. But... you need a solid arch first, and Arc is not it by design, because the primary product is Xe. As long as there is no absolute, near-fanatical devotion to a gaming GPU line, you can safely forget it. And this isn't news: Nvidia was chopping down its featureset towards pure gaming chips for a decade already, stacking one successful year upon another.
Posted on Reply
#107
AusWolf
Assimilator: I've never mentioned anything about Arc beating the incumbents anywhere in any of my posts on this topic, so kindly stop it with your strawman argument.
Please watch your tone. I never swore at you, so I think it'd be fair to expect the same from you.
Assimilator: Arc being completely and fundamentally broken and uncompetitive, as a brand-new low-end product against strong established competitors, is an entirely different scenario to AMD releasing a new product line that is competitive in all segments when its drivers work right.
How do you know it's broken and uncompetitive? Do you have an A770 in your rig? Or are you basing your judgement on rumours and Gamers Nexus's clickbait video on the A380?
Assimilator: It has launched. In China. And it's shit. Try reading the review that was literally posted on this site.
I did, and it didn't seem so bad. Also, it's the A380, which is the lowest of the low end. Don't judge the whole series based on a single contender. Or would it be fair to judge RDNA 2 based on the RX 6400?

I have to say it's stupid to form any opinion on an unreleased product. China doesn't count. When the rest of the Arc Alchemist series is released, and we can finally get our hands on one, we can see for ourselves.
Posted on Reply
#108
efikkan
Vayra86: Now, apply this lens to Team Blue. Arc is releasing to compete with product performance we've had for (more than?) two years in the upper midrange - and note: Turing was the first gen taking twice as long to release as we were used to, so in the 'normal' cadence of 1-1.5 year gen upgrades the gap would have been bigger. It does so with a grossly inefficient architecture, and the node advantage doesn't cover the gap either. It uses a good 30-50% more power for the same workload as a competitor. Or it could use equal power, like your 1060 comparison, and then perform a whole lot worse, dropping even further down the stack.
When someone has an inferior product, the right pricing can still make it compelling. Unfortunately for AMD in their RX 480 vs GTX 1060 situation, their production volume was too low (the GTX 1060 typically had 10x the user base in the Steam survey back then), and combined with more VRAM and a higher TDP, this made it hard for AMD to sell them cheaper.

As for Intel's ARC, provided Intel has a decent amount of fully working dies, it has the financial muscle to take a temporary loss for market penetration. If Intel has loads of dies that are not yet assembled on PCBs, it can even choose to use cheaper VRAM and other cost-saving measures to make a budget card.
Posted on Reply
#109
95Viper
Stay on the topic.
Do not insult other members.
Discuss the topic, not each other... and, be civil about it.
Post facts... not personal attacks!
Posted on Reply
#110
Bomby569
Vayra86: And it performed way worse at that power draw. Check your own review. 60 vs 52 FPS 1% lows in GTA V at 1080p isn't little; it's the gap between frame drops and stable FPS in that game, and overall there is a 15% perf gap.

This persists throughout a broad range of games:

So great, it uses the same amount of power. It's also a stuttery GPU. The above differences are night and day. Before we praise Polaris, look at how similarly it performs to the 390X in lows. It's just a minor update to GCN, not much else - and a good one, in a relative sense: lows and averages definitely got closer together. But Nvidia was on a completely different level by then, being almost rock solid.

See, this is the point that keeps emerging, as it always has over a few decades of watching the Nvidia/AMD battle: power is everything. And that applies broadly. You need the graphical power, but you also need to run it at the lowest possible TDP. You need a stack that can scale and maintain efficiency.

As the node stalled at 28 nm for a huge amount of time, the efficiency battle was on in earnest: after all, how do you differentiate when you're all on the same silicon? You can't hide anything by saying 'we're behind' or 'we're ahead'. What we saw in the Kepler > Pascal generations was hot competition, because architecture was everything. AMD was pushing an early success with GCN's 7950 & 7970 and basically rode that train until Hawaii XT, and then rebranded even on the same chip, still selling it because it was actually still competitive and by then super cheap.

AMD stayed on that train for far too long and then accumulated two years of falling behind Nvidia. It's 2H2022 and they still haven't caught up. They might have performance parity with Nvidia, but that's with a lacking featureset. Luckily it's a featureset that isn't a must-have yet, but it's only in the last few months that we've had a solid DLSS alternative, for example.

Now, apply this lens to Team Blue. Arc is releasing to compete with product performance we've had for (more than?) two years in the upper midrange - and note: Turing was the first gen taking twice as long to release as we were used to, so in the 'normal' cadence of 1-1.5 year gen upgrades the gap would have been bigger. It does so with a grossly inefficient architecture, and the node advantage doesn't cover the gap either. It uses a good 30-50% more power for the same workload as a competitor. Or it could use equal power, like your 1060 comparison, and then perform a whole lot worse, dropping even further down the stack. On top of that, there is no history of driver support to trust in and no full support at launch.

It's literally repeating the same mistakes we've seen not very long ago, and we can't laugh in Intel's face because they're such a potential competitor? There is no potential! There never was, and many people called it - simply because they/we know how effin' hard this race really is. We've seen the two giants struggle. We've seen Raja fall into the very same holes he did at AMD/RTG.

I can honestly see just one saving grace for Intel: the currently exploding TDP budgets. It means Intel can ride a solid arch for longer than one generation just by sizing it up. But... you need a solid arch first, and Arc is not it by design, because the primary product is Xe. As long as there is no absolute, near-fanatical devotion to a gaming GPU line, you can safely forget it. And this isn't news: Nvidia was chopping down its featureset towards pure gaming chips for a decade already, stacking one successful year upon another.
I literally owned them both. That's at release, and we all know how shitty AMD is at drivers, especially at launch and most times way beyond that (the RX 5700 was a mess). That reflects power draw, which drivers don't change, but it doesn't reflect the difference between them in performance later on.
Posted on Reply
#111
Mussels
Freshwater Moderator
MikeMurphy: For what it's worth, my Vega 56 has aged like fine wine. It runs cool as a cucumber with an undervolt + overclock and has great driver support. Runs my 4K60 titles like a champ. The same can't be said of the 1070.
My 1070 Ti and 1080 both run amazingly and cold; they just needed a repaste along the way.
My 1080 was on water, and when it went back to air I prepared for the worst... only to find out I can't hear it when gaming, and my issues years before were just bad stock TIM.

I still think we need to wait and see what this quality control issue was, as we have no evidence it was an Intel-supplied part.
Posted on Reply
#112
stimpy88
chrcoluk: If Intel abandons ARC after this first generation, that would show extreme short-termism. They probably should have launched later, when the drivers were at least functioning properly on a basic level, and the cards should have been much cheaper, concentrating on building up a user base instead of profitability.
I agree, but from what I understand, the cards are already delayed by more than 7 months due to the driver team not being able to deliver. The strange thing about the drivers is that, apparently, simple things like buttons in the driver control panel don't work, or work in unintended ways. It's like nobody is actually eating the dogfood the team is putting out...

If the drivers are in that kind of state, and basically need another 6-9 months in the oven, then by the time everything is ready and they have spun a new PCB revision, Intel will be competing with next-gen cards from AMD & nVidia, and will be lucky for ARC to compete against even the lowest-end cards, so Intel will be forced to slash prices. I think this first iteration of ARC is DOA; it's simply too late, too expensive and too underpowered to have any meaningful impact on the market by the time it's released.

I hope that Intel fix what is wrong, concentrate on the top two or three SKUs, release them as loss-leaders to be competitive, and get their asses working on the next gen as fast as they can. I would assume that the drivers will be based on the same foundation as the first gen's, so they should be fairly stable by then.

I feel that Intel really should stick with ARC, and it would be short-sighted to completely cancel the discrete GPU project. I don't trust Intel not to simply align with the other two vendors and price-fix the market, but I at least hope that they actually do intend to be competitive and keep the low to mid-range prices sane, as I'm sure nVidia want to move the midrange to $800+ after what happened to the 3070.
Posted on Reply
#113
Vayra86
Bomby569: I literally owned them both. That's at release, and we all know how shitty AMD is at drivers, especially at launch and most times way beyond that (the RX 5700 was a mess). That reflects power draw, which drivers don't change, but it doesn't reflect the difference between them in performance later on.
The RX 480 is almost a tier below a 1060... even without counting power. I don't really understand the comparison. The RX 480 competed with the GTX 970, realistically, but was too late for it. Polaris drivers weren't horrible at launch, were they?
Posted on Reply
#114
Bomby569
Vayra86: The RX 480 is almost a tier below a 1060... even without counting power. I don't really understand the comparison. The RX 480 competed with the GTX 970, realistically, but was too late for it. Polaris drivers weren't horrible at launch, were they?
The RX 470 (I also owned one) was/is around 1050 Ti performance; the RX 480 was/is around 1060 6 GB performance. Excluding bad drivers or the occasional AMD/Nvidia-sponsored/optimised game.

No way was the RX 480 at 1050 Ti performance; that makes no sense, I'm sorry.
Posted on Reply
#115
TheoneandonlyMrK
Vayra86: The RX 480 is almost a tier below a 1060... even without counting power. I don't really understand the comparison. The RX 480 competed with the GTX 970, realistically, but was too late for it. Polaris drivers weren't horrible at launch, were they?
I still own both and have used both extensively; that's rubbish, they're about equal all told.
Posted on Reply
#116
fb020997
MikeMurphy: For what it's worth, my Vega 56 has aged like fine wine. It runs cool as a cucumber with an undervolt + overclock and has great driver support. Runs my 4K60 titles like a champ. The same can't be said of the 1070.
I'm still happily using my 64, bought in the last week of Sept. 2017. It runs 1080p60 or 75 (my current entry-level monitor) at ultra without any issue, even on recent games (albeit I haven't installed any 2021-22 games for now, big backlog to finish first), and often 1440p VSR. Of course it's been OCed and UVed since day... five, I believe?
I have a winter and a summer setting in Wattman. The winter one is around 15% faster than a stock card with the fan at 83756857 rpm (XD), but with only 10-20 W more (as reported by GPU-Z) at 100% (HBM 1100 MHz, 1.04 V, stock frequency, +50% PL), and 20-30 W less under 90%. It's perfect since it helps heat the underside of my desk, as in my room during winter I rarely see more than 19°C (upper floor with a radiator, but the heater is controlled by the lower floor, which is always 2°C hotter than my room).
And the summer one is super-efficient: basically the same performance as a stock, air-cooled card (1395 MHz, 0.9 V, HBM 1095 MHz) but with 70 W less (again, per GPU-Z). It's around a 1080 in both performance and efficiency.
The only driver issue I had was 3-4 years ago: a memory leak with Forza Horizon 3 that prevented it from booting, promptly solved a week later with the next beta.

I've always used Radeon GPUs, which is why I bought the 64 over the 1080, and I'm going to buy a 7700 XT/7800 (together with a 1440p144 27" IPS display) when they're released.
Posted on Reply
#117
efikkan
Has anyone else been puzzled by the games which Intel showcased as the tier-1 games performing best on Arc?


F1 2021 - Isn't this game (and its predecessors) typically a statistical outlier, scaling very differently from most other games?
Cyberpunk 2077 - Big title, but not known as a particularly well-made game.
Control - I'm not familiar with it, so no comment.
Borderlands 3 - An Unreal game, fairly popular but not graphically impressive or well-scaling.
Fortnite - Another Unreal game, very popular but not known for good graphics scaling, like any Unreal game.

So, these are the games they claimed to be best "optimized" for, when at least two of them use a "universal" game engine with just high-level rendering code written for those games, one is known to be bad, and one is an outlier. I honestly think they were just grasping at straws to find any games where the A750 outperformed the RTX 3060, instead of picking the most cutting-edge ones.
What's next, are they going to showcase the best new GPU for WoW, CS:GO and The Sims 4?
stimpy88: The strange thing about the drivers is that, apparently, simple things like buttons in the driver control panel don't work, or work in unintended ways. It's like nobody is actually eating the dogfood the team is putting out...

If the drivers are in that kind of state, and basically need another 6-9 months in the oven…
I have a very good theory of how they managed to ship such a broken driver package, but I want to stress that it's speculation, beyond the fact that their driver was fairly stable prior to adding additional "gaming features" and gimmicks. So my theory is that the driver was fairly good and QAed until they started merging in extra features and gimmicks. It's fairly common in large code projects to have multiple teams working on separate branches and to run into problems where branches A and B work fine by themselves, but introduce completely new bugs when combined. So whenever a new feature is merged in, the entire software package needs a new round of QA, which is the reason for having a "feature freeze" long before a planned release; this is well known among software developers. Yet many companies have team leaders or management who think merging features last-minute is a good thing.
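(A toy illustration of that merge failure mode - entirely hypothetical code, nothing to do with Intel's actual driver: two branches each pass their own tests and merge without textual conflict, yet the combination misbehaves, which is exactly what a feature freeze plus a full QA pass after every merge is meant to catch.)

```python
# Toy example of a "clean merge, new bug" (hypothetical, not Intel's code).

# On main, fan speed was a 0.0-1.0 fraction. Branch A refactors the helper
# to take a 0-100 percentage and updates every caller it knows about;
# branch A's test suite passes.
def set_fan_speed(percent: float) -> None:
    if not 0 <= percent <= 100:
        raise ValueError("fan speed out of range")
    print(f"fan at {percent:.0f}%")

# Branch B, written against main's old convention, adds a new caller that
# still passes a fraction; branch B's tests (run against main) also pass.
def on_overheat() -> None:
    set_fan_speed(0.9)  # meant "90%" under the old 0.0-1.0 convention

# The merge is textually clean (the two changes touch different code),
# but the combined package misbehaves in a way neither branch did:
on_overheat()  # prints "fan at 1%" instead of ramping the fan to 90%
```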
Vayra86: The RX 480 is almost a tier below a 1060... even without counting power. I don't really understand the comparison. The RX 480 competed with the GTX 970, realistically, but was too late for it.
If you remember, back in those days the RX 480 wasn't just compared to the GTX 1060; many forum warriors claimed that the RX 480 was significantly better than the GTX 1060, and that it was just a matter of some driver tweaks and voila, it would perform in the GTX 1070 ~ GTX 1080 range. They were citing the same old lies about the driver being "immature" and "games being optimized for Nvidia". So did the RX 480 ever unlock that ~30-40+% extra performance with optimized drivers? No. And it's probably a matter of time before AMD drops driver support for it, as they have already dropped the 200/300 series.

Pascal (GeForce 10 series) will probably remain one of the "best" architectures and GPU investments in terms of how long the GPU stays useful for gaming. Just look at the GTX 1060, a card which six years later can almost compete with lower mid-range cards, and those who bought a GTX 1080/1070 Ti can still game well, albeit not on the highest settings. Except for RT, Pascal has aged very well, much better than the generations before it, and likely better than Turing and Ampere will.
Posted on Reply
#118
AusWolf
efikkan: Has anyone else been puzzled by the games which Intel showcased as the tier-1 games performing best on Arc?

F1 2021 - Isn't this game (and its predecessors) typically a statistical outlier, scaling very differently from most other games?
Cyberpunk 2077 - Big title, but not known as a particularly well-made game.
Control - I'm not familiar with it, so no comment.
Borderlands 3 - An Unreal game, fairly popular but not graphically impressive or well-scaling.
Fortnite - Another Unreal game, very popular but not known for good graphics scaling, like any Unreal game.

So, these are the games they claimed to be best "optimized" for, when at least two of them use a "universal" game engine with just high-level rendering code written for those games, one is known to be bad, and one is an outlier. I honestly think they were just grasping at straws to find any games where the A750 outperformed the RTX 3060, instead of picking the most cutting-edge ones.
What's next, are they going to showcase the best new GPU for WoW, CS:GO and The Sims 4?
Interesting question. Since we're in theory-land now, my theory is this:

F1: It's a well-optimised game series that is historically known to run well on a potato without issues. At least none of my PC configurations (I change components quite often) ran the current iteration below 100 FPS.
Cyberpunk and Control: Based on the A380 review here at TPU, Arc seems to ray trace quite alright. Intel probably managed to find a combination of settings that plays to Arc's advantage with RT.
Borderlands and Fortnite: I don't play these games, but aren't they the kind of titles that run on a potato to make sure the intended audience (kids) can play them too?
Posted on Reply
#119
Red_Machine
I never thought Borderlands was a game aimed at kids. And it's always been a relatively heavy hitter in terms of graphics performance, despite how it looks.
Posted on Reply
#120
AusWolf
Red_Machine: I never thought Borderlands was a game aimed at kids. And it's always been a relatively heavy hitter in terms of graphics performance, despite how it looks.
I might have confused it with something else, then.
Posted on Reply
#121
Vayra86
TheoneandonlyMrK: I still own both and have used both extensively; that's rubbish, they're about equal all told.
Interesting, because benchmarks don't tell that story.

There is a 9-19% performance gap (in average FPS) between the two cards in current-day benches, and there was at launch, as pointed out by the link earlier.
UserBenchmark agrees - though I'm the last to take that as simple truth, I won't be able to defend that it's coincidental that all those numbers point in the same direction.
gpu.userbenchmark.com/Compare/AMD-RX-480-vs-Nvidia-GTX-1060-6GB/3634vs3639

The best-case scenario for the RX 480 is that it gets equal-ish FPS in select titles. But that's not counting the lows. This was no different from the situation at launch: look at the huge gap between averages and 1% lows and how it compares to any Pascal card.

Rubbish or not, the numbers don't lie. Some tinted glasses apply here, I'm sorry to break the dream.
Bomby569: I literally owned them both. That's at release, and we all know how shitty AMD is at drivers, especially at launch and most times way beyond that (the RX 5700 was a mess). That reflects power draw, which drivers don't change, but it doesn't reflect the difference between them in performance later on.
Same applies to you.

The most interesting bit here is how this all works in our heads, I think. The numbers simply don't lie. But I recognize the sentiment. We're easily telling ourselves this is just as good because the 'differences are minor'. That's all your ego at work, not reason.

In other words, don't take this as criticism, take it as a point of reflection. I get it, and I do it myself. I always compare my GTX 1080 to anything else to determine what's good or not. It's crazy how powerful the brain is in drawing our picture for us.
Posted on Reply
#122
Bomby569
Vayra86: Interesting, because benchmarks don't tell that story.

There is a 9-19% performance gap (in average FPS) between the two cards in current-day benches, and there was at launch, as pointed out by the link earlier.
UserBenchmark agrees - though I'm the last to take that as simple truth, I won't be able to defend that it's coincidental that all those numbers point in the same direction.
gpu.userbenchmark.com/Compare/AMD-RX-480-vs-Nvidia-GTX-1060-6GB/3634vs3639

The best-case scenario for the RX 480 is that it gets equal-ish FPS in select titles. But that's not counting the lows. This was no different from the situation at launch: look at the huge gap between averages and 1% lows and how it compares to any Pascal card.

Rubbish or not, the numbers don't lie. Some tinted glasses apply here, I'm sorry to break the dream.

Same applies to you.

The most interesting bit here is how this all works in our heads, I think. The numbers simply don't lie. But I recognize the sentiment. We're easily telling ourselves this is just as good because the 'differences are minor'. That's all your ego at work, not reason.
When you use UserBenchmark, you're already losing the argument. You're on a website that does a much better job of it, and you choose to ignore it and go with UserBenchmark. OK, I guess.
Posted on Reply
#123
Vayra86
Bomby569: When you use UserBenchmark, you're already losing the argument. You're on a website that does a much better job of it, and you choose to ignore it and go with UserBenchmark. OK, I guess.


Suit yourself ;) I pointed out your own review link and UserBenchmark, and now here we have another 10%, which is not counting the 1% lows.
Posted on Reply
#124
AusWolf
Vayra86: Suit yourself ;) I pointed out your own review link and UserBenchmark, and now here we have another 10%, which is not counting the 1% lows.
10% is basically nothing, I might add. I can't think of a situation when it matters.

Although, I agree that the 1060 is a much better card due to its lower power consumption, which led to the wide availability of ITX versions, something the 480/580 completely lacked.
Posted on Reply
#125
Vayra86
AusWolf: 10% is basically nothing, I might add. I can't think of a situation when it matters.

Although, I agree that the 1060 is a much better card due to its lower power consumption, which led to the wide availability of ITX versions, something the 480/580 completely lacked.
10% is 10%. I didn't attribute any value to it. But it's not 0%, and it's not 'margin of error' 2-3% either.

However, the gap in the 1% and 0.1% lows is absolutely huge, and it's one of the reasons Nvidia maintained the lead it had. It just ran more smoothly. It also shows AMD produced cards to get high FPS, not consistency in FPS - whether that was natural to GCN at the time or not, I don't know. But it echoes the whole episode of frame pacing/microstutter on both brands. Nvidia clearly started sacrificing high averages for consistency at some point.

Now obviously the RX 480 wasn't going to equal a 1060, because it literally wasn't marketed to compete with that card. It was competing with the 970, albeit far too late - and even there it didn't quite get a decisive victory. Pascal had the node advantage.
Posted on Reply