
Sparkle Arc A580 Orc Review


Value and Conclusion

The Sparkle Arc A580 Orc is expected to sell for $180.

Pros:
  • Sufficient performance for 1080p Full HD gaming
  • Reasonably priced at under $200
  • Support for DirectX 12 Ultimate and hardware-accelerated ray tracing
  • Better RT performance than AMD, worse than NVIDIA
  • Compact form factor
  • Good overclocking potential
  • Backplate included
  • Idle fan-stop
  • XeSS upscaling technology
  • Support for HDMI 2.1 & DisplayPort 2.0
  • Support for AV1 hardware encode and decode

Cons:
  • Used market has some alternatives with better price/performance
  • Fans are quite loud
  • High power consumption
  • Lower energy efficiency than competing cards
  • Resizable BAR required for good performance
  • No memory overclocking
  • When idle, the fan keeps jumping out of fan-stop at regular intervals
Today Intel has released their Arc A580 graphics card, which is great news, because it squashes rumors that the company has given up on their Arc line of discrete graphics cards. We first heard about this particular model back in September 2022, when it was announced officially. We're not exactly sure why it took Intel a year to bring this card to market, but we have some theories. It is highly likely that the target audience of a sub-$200 graphics card will be playing older games—DirectX 9 and DirectX 11. At launch, both these APIs ran with several performance penalties on Intel Arc, because the Alchemist architecture was designed with a focus on DirectX 12. Intel invested a lot of resources into improving the experience for DirectX 9 and DirectX 11, with excellent results. With those challenges tackled by the driver team, it makes a lot more sense to release the A580 to the market. I think what also matters is that AMD decided to price their lowest-end Radeon RX 7000 card, the RX 7600, at $270, and NVIDIA scrapped plans for a GeForce RTX 4050 and called it RTX 4060 instead, priced at $300. This leaves the market with no modern sub-$200 graphics card option—a nice opportunity for Intel to gain market share.

The Arc A580 is based on the same 6 nm ACM-G10 silicon as the Arc A750 and A770, just with fewer GPU cores enabled. While the A770 comes with all 4096 shaders active, the A750 runs with 3584, and today's Arc A580 has 3072 enabled, a 25% reduction compared to the A770. For our launch coverage of Intel's Arc A580 we've looked at the Sparkle Arc A580 Orc (this review) and the ASRock Arc A580 Challenger. A third partner, Gunnir, is also releasing an A580 model, but we haven't heard anything about sampling from them yet. Same story with Acer, who are an Intel Arc partner but don't seem to be part of this launch, or maybe they've already given up on their GPU hardware ambitions.

Averaged over our 25-game performance testing suite at 1080p Full HD, the Sparkle Arc A580 Orc sits well ahead of the GeForce RTX 3050, with an impressive 23% performance lead. Compared to the GeForce RTX 2060, the lead is around 10%. This makes the card roughly as fast as the AMD Radeon RX 6600, and 10% behind the RX 5700 XT. Intel's own Arc A750 is only 11% faster, which is surprisingly close, and the NVIDIA GeForce RTX 3060 is 12% faster. AMD's newest release, the Radeon RX 7600, is 28% faster, but also much more expensive ($240). As expected, the NVIDIA GeForce RTX 4060 ($280) beats the Arc A580 by 30%, but it's also almost 60% more expensive. With these performance numbers the A580 should be good enough for all titles at Full HD at nearly maximized settings, which is a very decent result. If you want more FPS, it should be easy to dial down settings a bit or use an upscaling technology. Intel's own XeSS upscaling is available in only a few games, but AMD FSR has good availability and works on GPUs from all vendors.
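To put these relative numbers and prices into perspective, here's a minimal back-of-the-envelope price/performance sketch in Python. The performance figures are the ones quoted in this review (Arc A580 = 100) and the prices are the street prices mentioned throughout; both will obviously shift over time.

  # Back-of-the-envelope price/performance, using the figures quoted in this
  # review (relative 1080p performance, Arc A580 = 100). Street prices vary.
  cards = {
      "Arc A580": {"price": 180, "perf": 100},
      "Arc A750": {"price": 190, "perf": 111},
      "RTX 3050": {"price": 210, "perf": 100 / 1.23},  # A580 leads by 23%
      "RX 7600":  {"price": 240, "perf": 128},
      "RTX 4060": {"price": 280, "perf": 130},
  }
  for name, c in sorted(cards.items(), key=lambda kv: kv[1]["perf"] / kv[1]["price"], reverse=True):
      print(f"{name:9s} {c['perf'] / c['price'] * 100:5.1f} performance per $100")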

While the 1080p numbers are good, the Arc A580 simply doesn't have the horsepower for AAA gaming at 1440p, unless you're really willing to dial down the details a lot, or play lighter titles, like the MOBA games that are played by millions. This doesn't just apply to the Arc A580; it's true for all competing cards in this segment. If you want 1440p you should spend at least $300-$350 on a graphics card. I still find it interesting to look at the performance scaling of various GPUs as we climb the screen resolution ladder, and Intel Arc does VERY well here. As you increase resolution, the card gains on all its competitors, because these competing cards are constrained in many ways, most notably memory bandwidth, which becomes a bottleneck at higher resolutions. For example, while the RTX 4060 is 30% faster at 1080p, at 4K the gap is only 19%. Same for the RX 7600: 26% faster at 1080p, only 8% faster at 4K. This shows that Intel is on the right track with their GPU architecture; they just need to release products targeting higher performance levels.

All these tests were run with ray tracing disabled, and that's how you should plan to game when shopping in the entry-level segment. Sprinkling RT effects on top of your game graphics comes with a serious performance hit—making little sense when you're only running around 60-80 FPS even with RT disabled. We still tested ray tracing, and I'm happy to report that Intel's ray tracing implementation is great. I didn't encounter any serious issues, crashes or rendering errors, which is quite a feat for such a new technology. Technologically, Intel is clearly ahead of AMD, due to the use of dedicated hardware units that handle a larger share of the RT processing, which can be seen in the performance benchmarks. NVIDIA's cards are still the kings of ray tracing, especially the GeForce Ada generation.

NVIDIA's biggest selling point for the GeForce 40 Series is support for DLSS 3 Frame Generation. The algorithm takes two frames, measures how things have moved between them, and calculates an intermediate frame in which these things have moved only half the distance. While this approach is definitely not problem-free, especially when pixel-peeping at stills or slowed-down video, in real-time it's nearly impossible to notice any difference. At higher FPS and resolution it becomes even more difficult, because the deltas between each frame get smaller and smaller. Being able to double your FPS is a huge capability, because it means you can enable ray tracing for free, or game at higher resolutions. Of course you are limited to games with DLSS 3 support, of which there are currently around 40, mostly AAA titles. AMD released their own FSR 3 Frame Generation technology just a few days ago, which leaves Intel as the only player without this feature. While FSR 3 runs on Intel hardware, too, DLSS 3 Frame Generation is limited to NVIDIA GeForce 40 GPUs, which makes it a desirable feature that could end up being the deciding factor if the pricing differences are small. But right now, the cheapest DLSS 3 graphics card costs $280, which is far outside the budget of buyers interested in the $180 Arc A580.
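As a rough illustration of that "half the distance" idea, here's a heavily simplified Python/NumPy sketch that warps one frame halfway along given per-pixel motion vectors. This is purely an illustration of the concept, not NVIDIA's actual algorithm, which relies on a dedicated optical flow accelerator and AI reconstruction.

  import numpy as np

  # Crude midpoint interpolation: for every pixel of the new frame, fetch the
  # color from frame A at a position half a motion vector back (a simple
  # backward warp), then average with frame B to paper over holes. Real frame
  # generation also has to handle occlusion, UI elements, and much more.
  def interpolate_midpoint(frame_a, frame_b, motion):
      # frame_a, frame_b: (H, W, 3) float arrays; motion: (H, W, 2) offsets
      # describing how content moves from frame A to frame B, in pixels.
      h, w, _ = frame_a.shape
      ys, xs = np.mgrid[0:h, 0:w]
      src_y = np.clip(np.round(ys - 0.5 * motion[..., 0]), 0, h - 1).astype(int)
      src_x = np.clip(np.round(xs - 0.5 * motion[..., 1]), 0, w - 1).astype(int)
      warped_a = frame_a[src_y, src_x]
      return 0.5 * warped_a + 0.5 * frame_b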

Sparkle's Arc A580 Orc is a custom-design model using a compact dual-slot, dual-fan cooling solution. This ensures that the card will fit into smaller cases or older computers. Unlike ASRock, Sparkle is branding their card in a refreshing Intel Blue, which looks nice, especially thanks to the subtle highlights. Unfortunately the cooling performance of the heatsink is a bit on the weak side. While temperatures are quite decent at 69°C, noise levels are pretty high at 42 dBA, which is louder than nearly all modern graphics cards. Surprisingly, the ASRock A580 Challenger does much better here and runs whisper-quiet, with lower temperatures, but it also has a bigger triple-slot cooler. I did try remounting the cooler and redoing the thermal paste, but it made no difference. Given the weak cooler, I think it would have been better to allow higher temperatures in exchange for less fan noise. What I like is that Sparkle includes a temperature-controlled lighting element on their card that gives you a quick indication of the card's temperature. This is a pure hardware solution—no software required. There are also mentions of a factory overclock, but the card ticks at the same 1700 MHz default clock that Intel mentions in their specs list, same as the ASRock A580. I guess what they mean is that the boosting behavior is improved, possibly due to an increased power limit. In our testing the ASRock A580 ran at an average clock frequency of 2354 MHz; the Sparkle Arc A580 is slightly behind that at 2312 MHz, which turns into a 2% performance difference. I also noticed that Sparkle is running their GPU at a higher voltage than ASRock (1.06 V vs 1.02 V), increasing the heat output, which pushes the cooling requirements up and makes life even harder for the small cooler. Maybe Sparkle can improve the fan behavior with a BIOS update; if that materializes I'll retest and update this review. Just like all other recent graphics card releases, the Arc A580 features fan-stop, which shuts the fans off during idle, desktop work and Internet browsing.

Intel contracted TSMC to fabricate their GPU chip, because TSMC is the only company in the world that currently has a working 6 nanometer production process for desktop-class processors. Such a small process promises great energy efficiency, yet the A580 (just like the other Arc cards) clearly falls behind the current-gen offerings from AMD and NVIDIA in terms of efficiency. Compared to the A750 and A770, the A580 is a little less energy-efficient, which is surprising—usually the lower-end models run with better efficiency. At the end of the day Intel Arc Alchemist is roughly on the same efficiency level as older cards like the RX 5700 XT and RTX 2070—a bit disappointing. On the other hand, the RTX 3050 and RTX 3060 aren't doing much better, and those are just one generation old. This shows that with the right engineering, Intel's next generation might be able to catch up to the big players. Looking at our voltage-frequency tests and V-Sync power consumption, it seems that Intel simply doesn't have all the refined power-saving technologies yet that their competitors have developed over the decades. I think that's also the reason for the very high non-gaming power consumption. At over 40 W in idle, sitting at the desktop doing nothing, power consumption in that scenario is 3x as high as on the competition—which can become a dealbreaker if you're running your PC for many hours each day. Gaming power draw is a bit on the high side at around 210 W, but nothing that any decent PSU can't handle.
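To give a feel for why idle power draw matters, here's a quick back-of-the-envelope estimate in Python. The 40 W figure comes from our measurements and the roughly 13 W competitor figure follows from the 3x comparison above; the 8 hours of daily desktop use and the $0.30/kWh electricity price are assumptions purely for illustration.

  # Rough yearly cost of the extra idle power draw. Assumptions (not from the
  # review): 8 hours of desktop use per day, electricity at $0.30 per kWh.
  # The ~13 W competitor figure follows from the "3x as high" comparison above.
  IDLE_W_ARC, IDLE_W_COMPETITOR = 40, 13
  HOURS_PER_DAY, PRICE_PER_KWH = 8, 0.30

  def yearly_cost(watts):
      kwh_per_year = watts * HOURS_PER_DAY * 365 / 1000
      return kwh_per_year * PRICE_PER_KWH

  extra = yearly_cost(IDLE_W_ARC) - yearly_cost(IDLE_W_COMPETITOR)
  print(f"Extra idle-power cost per year: ${extra:.0f}")  # about $24 under these assumptions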

Overclocking of our card worked well; we gained 11% in real-life performance from the GPU alone. Memory overclocking is not available on this generation of Intel graphics cards. We confirmed with Intel that they will add memory overclocking on future products: "For this generation we focused on enabling GPU overclocking. We will be looking at memory overclocking with the next generation." I also have to praise Intel for including their own overclocking software, unlike NVIDIA, which basically forces you to use third-party apps.

According to Intel, the Arc A580 is expected to retail for $180, which is great news for gamers looking for a decent graphics card for less than $200. The Arc A580 is able to deliver an enjoyable experience at Full HD in most games at maximum settings. In a few titles you'll have to dial down settings a bit, which is a very reasonable expectation considering the price point. Remember, gaming is about gameplay, not just about graphics. Both NVIDIA and AMD have pretty much given up on the sub-$200 segment and would rather sell you their stuff priced at $250+, which is out of reach for many gamers. Still, the Arc A580 doesn't exist in a vacuum. There are tons of graphics cards available on the used market, at very decent pricing. My favorite is the Radeon RX 5700 XT, which can be found for around $150 online in decent condition. At that price point it's one of the best cards you can buy at this time (for Full HD gaming). While it lacks support for ray tracing and modern technologies like HDMI 2.1 and AV1 video encode, those really don't matter much if you just want to spend a little bit of time with PC gaming. No doubt, more recent releases have better energy efficiency, but that doesn't matter if you play just a few hours each week, maybe even less, which is also the reason you don't want to spend a lot of money on a graphics card. Another good alternative is the Radeon RX 6600 non-XT, available used for $170. You're basically trading some performance for better energy efficiency, plus $10. On the NVIDIA side there aren't too many strong candidates. The GeForce RTX 4060 is way too expensive at $280—if you are considering the $180 A580 you don't have another $100 lying around. The GeForce RTX 3050 at $210 is considerably slower than the Arc A580, even in ray tracing, and its power efficiency isn't better either. The same is true for the RTX 2060 ($160). My favorite option from the green camp is the aging GeForce GTX 1080 Ti. You can find it online for $170, and it offers performance and power draw comparable to the Arc A580 and RX 5700 XT.

The strongest competitor for the Intel Arc A580 is Intel's own Arc A750. Right now the A750 is available on Newegg for $190, just $10 more than the A580. That's roughly a 5% price increase, and our results show that you'll be getting a performance uplift of around 10%—pretty much a no-brainer. Given these price points I suspect that we'll see the Arc A580 much closer to $150 very soon, which would make the card a good choice as an entry-level card or as an upgrade for an older system. There's still a lot that Intel can do on the software side, and it looks like they are willing to make that investment. Recently, Intel has been the fastest and most consistent GPU maker when it comes to game-ready driver support for new games, often beating AMD and even NVIDIA.

Intel Arc isn't just good for gaming; it can also be used for more professional workloads like GPU-accelerated rendering and AI, such as Stable Diffusion. Here the A580 can make a difference, thanks to having a lot of memory bandwidth available. Another use case is video encoding, especially AV1: the Arc A580 is the most affordable GPU with hardware acceleration for AV1 video encode (a quick encoding example follows after this paragraph). Both AMD and NVIDIA can still spoil Intel's party. While NVIDIA hasn't released an RTX 4050, and it doesn't fit within their overall strategy, such a release is definitely possible. Using the same AD107 GPU as on the RTX 4060, just with fewer shaders, could be an option. I'd estimate a price point of $230 for the RTX 4050, with NVIDIA's DLSS 3 Frame Generation technology as the selling point. I don't think I'd spend an extra 30% ($180 vs $230) for that, but if it was $200 vs $180 I would definitely consider it, also for the improved energy efficiency. AMD could use a cut-down Navi 33 (like on the RX 7600) to carve out a product that targets sub-$200 customers. But that doesn't align with AMD's "follow NVIDIA at minus 10% pricing" strategy, unless NVIDIA makes the first move.
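For anyone curious how to actually tap that AV1 encoder, here's a minimal sketch that drives FFmpeg's Intel Quick Sync (QSV) path from Python. It assumes a recent FFmpeg build with the av1_qsv encoder available; the file names and bitrate are placeholders, not settings we tested for this review.

  import subprocess

  # Hardware AV1 encode on Intel Arc via FFmpeg's av1_qsv encoder.
  # Assumes FFmpeg was built with Quick Sync / oneVPL support.
  subprocess.run([
      "ffmpeg",
      "-i", "input.mp4",      # source clip (placeholder name)
      "-c:v", "av1_qsv",      # AV1 encode on the Arc's media engine
      "-b:v", "6M",           # target bitrate, adjust to taste
      "-c:a", "copy",         # pass the audio through untouched
      "output_av1.mkv",
  ], check=True)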