ASRock Arc B570 Challenger OC Review

Value and Conclusion

  • According to ASRock, the B570 Challenger OC will sell for $230.

Pros:
  • Decent price/performance ratio
  • Good performance for 1080p
  • Runs extremely quietly
  • XeSS Frame Generation
  • 10 GB VRAM
  • Dual-slot design
  • Backplate included
  • Idle fan-stop
  • Good energy efficiency in gaming
  • Very low temperatures
  • Support for HDMI 2.1 & DisplayPort 2.1
  • Good video encode/decode hardware acceleration support

Cons:
  • Pricing fairly close to Intel B580 baseline price
  • High idle power consumption / ASPM required
  • PCIe 4.0 x8 interface
  • No support for DLSS (yes, I know it's an NVIDIA exclusive, but that doesn't change the fact that it's available on one option and not the others)
  • Resizable BAR required for good performance
Positioning & Architecture
Last month, in December, Intel launched their Battlemage B-Series of graphics cards. Initially only the B580 was available, but the company had already announced that the B570 would follow in January. Today is the day, and we're able to share our reviews of the Intel Arc B570 graphics cards with you. For this launch we have two cards: the Sparkle Arc B570 Guardian OC and the ASRock Arc B570 Challenger OC (this review). An Intel reference design card is not available for the B570; all units sold will come from the company's various board partners.

ASRock's B570 Challenger is a factory-overclocked custom-design version of the B570. It runs the GPU at 2600 MHz, a +4% overclock over Intel's 2500 MHz reference clock. You also get a dual-fan cooling solution that stays strictly within a dual-slot footprint, at a price of $230, a $10 increase over the Intel base MSRP.

From an architectural standpoint, Intel has implemented numerous modifications to enhance GPU efficiency on Battlemage, particularly to ensure that hardware units are utilized more effectively and don't have to wait for data from other units in the pipeline. Not only have the rasterization parts of the chip been improved; the ray tracing units also got some special love and are now much more powerful, offering up to twice the throughput. Last but not least, the XMX matrix math AI engines have been upgraded, so the new XeSS Frame Generation can yield the best results.

Performance
We upgraded our test system last month; it is now built on AMD technology with the outstanding Ryzen 7 9800X3D. We've updated to Windows 11 24H2, complete with the newest patches and updates, and have added a selection of new games. At 1080p Full HD, the new Arc B570 is faster than last generation's Arc A770 flagship, an impressive achievement. Compared to the A580 from last generation, the performance uplift is 27%. The B580 that we saw last month is around 15% ahead. Compared to NVIDIA's GeForce RTX 4060, the B570 is a little bit slower, but it depends on the game; I'd almost call it "close enough." The same is true for the AMD RX 7600, which is technically 1% slower: "close enough," too. AMD's RX 7600 XT is around 10% faster, but more expensive at the same time. If you have more money to spend, cards like the RTX 4060 Ti (+37%) or RX 7700 XT (+51%) offer higher FPS, but they cost almost $400, while the Arc B570 goes for just $230.
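As a sanity check, relative performance figures like these compound multiplicatively rather than additively. A minimal sketch using the averages quoted above (the percentages are the review's figures, not exact per-game results):

```python
def chain(*ratios: float) -> float:
    """Combine relative performance ratios multiplicatively."""
    result = 1.0
    for r in ratios:
        result *= r
    return result

# The B570 is 27% faster than the A580, and the B580 is another 15% ahead,
# so the B580 versus the A580 works out to roughly:
print(round(chain(1.27, 1.15), 2))  # ~1.46, i.e. about +46%
```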

Since we don't have a reference design card from Intel, it's hard to quantify the factory OC performance gains. Sparkle's card is rated for 2660 MHz, ASRock's card for 2600 MHz, yet they both run at almost the same speed in real workloads. On average, I measured 2684 MHz for Sparkle and 2679 MHz for ASRock. Something similar happened with the B580, so I suspect it's a peculiarity of the Intel Battlemage design, maybe even some sort of bug.

At higher resolutions, the B570 gains a few percentage points on the rest of the market, which is a good sign that the bigger Battlemage cards will be very interesting; only the B580 manages to pull away by a few percentage points. I definitely see the Arc B570 as a card for 1080p, and in some titles Ultra settings might be a bit too high, or you'd have to use upscaling. While 1440p is within reach in lighter titles or at greatly reduced settings, it isn't the optimal usage scenario for the B570.

VRAM
While both the RTX 4060 and RX 7600 have "only" 8 GB VRAM, Intel offers 10 GB on the B570, which is definitely a welcome improvement over 8 GB. Good job, Intel! However, at 1080p there's not a single title in our test suite that runs better with 10 GB than with 8 GB. There is Ratchet & Clank with RT at 1080p, which looks like it's hitting some memory limit on AMD and Intel cards with 8 GB and 10 GB, while NVIDIA's 8 GB cards are fine. I can feel a hint of VRAM pressure in Alan Wake 2 at 1080p, but it's definitely not a huge win for the B570. Nevertheless, having 10 GB is definitely better than 8 GB, and it also helps with memory bandwidth, because the bus is 160-bit instead of 128-bit. Last but not least, the psychological value is definitely there, because people are starting to worry whether 8 GB will be enough for the future.

More VRAM will not magically make every game run faster. As long as a game's VRAM usage is low enough, the extra memory won't make any difference. Future games will probably use more VRAM, but that trajectory is limited by the VRAM sizes on game consoles. Games are developed first for consoles, and it's quite rare to see developers go the extra mile for greatly improved visuals on PC. If anything, 2024 was full of poor PC ports, and considering the issues we saw, "better textures if lots of VRAM" is quite low on developers' priority lists. Once you increase resolution to 1440p, 8 GB-class cards are definitely in trouble, but at that point you're probably considering upscaling, which lowers the VRAM requirements due to the reduced render resolution.
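The bandwidth side of that argument is simple arithmetic: peak memory bandwidth is the bus width in bytes multiplied by the per-pin data rate. A quick sketch, assuming 19 Gbps GDDR6 on the B570 and a typical 17 Gbps 128-bit configuration for an 8 GB competitor (the data rates are my assumptions, not figures from this review):

```python
def mem_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# Arc B570: 160-bit bus, assuming 19 Gbps GDDR6
print(mem_bandwidth_gb_s(160, 19.0))  # 380.0
# Typical 8 GB card (e.g. RTX 4060): 128-bit, assuming 17 Gbps
print(mem_bandwidth_gb_s(128, 17.0))  # 272.0
```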

Ray Tracing
Intel's architecture has improved ray tracing support, and it definitely runs faster than what AMD offers. Compared to the NVIDIA RTX 4060, at least at 1080p, the green team still has the upper hand. Once you run at 1440p though, all the competing 8 GB cards lose steam because they are running out of VRAM, and the Arc B570 ends up breathing down the neck of NVIDIA's RTX 4060. Interestingly, the Arc B580 is considerably faster in RT than the B570 (by 30%), whereas the difference is only 15% in rasterization. I'm still not convinced that RT is worth it in this segment, especially when you're fighting to reach 60 FPS in the first place. Sure, you can turn on upscaling and frame generation, but is the loss in image quality worth the RT eye candy? On the other hand, there are lighter games that run very well with RT off, like F1 24; here, enabling RT could make sense. Still, I don't think it's the most important capability in this segment.

Upscaling & Frame Generation
NVIDIA's biggest selling point for the GeForce 40 Series is its support for DLSS 3 Frame Generation, which promises to double frame rates, and the GeForce 50 Series offers multi-frame generation. Such performance gains are a significant benefit, allowing you to enable ray tracing at no extra performance cost, game at higher resolutions, or simply reach playable FPS with a cheaper card. However, this feature isn't available universally and requires per-game support, though most high-profile AAA titles do support it. AMD's FSR 3 Frame Generation is available in many new releases, and it works on Intel hardware, too. With this generation, Intel is finally introducing their own version of frame generation, called XeSS Frame Generation. At this time, the technology is only available as a beta preview in F1 24, but I'm positive that Intel will convince all the major studios to add XeSS Frame Generation to their games. My confidence comes from the observation that XeSS upscaling has been included in almost every game that supports more than one upscaler, provided they aren't sponsored titles restricted to a single upscaling tech.

Still, at the end of the day, NVIDIA's DLSS 3 technology, for both upscaling and frame generation, is the gold standard, because it is the most mature tech, with the best image quality and game support. Based on their marketing presentations, DLSS 4 will improve upon this on all fronts: better quality, higher FPS, etc. Unfortunately, it is available for NVIDIA RTX cards only, while FSR and XeSS upscaling run on all cards. Intel's XeSS Frame Generation uses the XMX matrix cores available in Battlemage, Alchemist and Lunar Lake, so it will not run on older GPUs and iGPUs.

Physical Design, Heat & Noise
Both custom designs that we reviewed today use the Intel Arc B580 reference PCB with only small modifications: one fewer VRM phase and one fewer memory chip. Besides that, both B570 PCBs are virtually identical. The ASRock Arc B570 uses a black/gray color theme, which looks good but feels a little plasticky depending on how you look at it. With just 25 dBA, the card runs whisper-quiet, making it a fantastic option for gamers who want to avoid excessive fan noise. While the Sparkle B570 is still "quiet," its 30 dBA will be more noticeable during gaming. Our apples-to-apples cooler comparison test reveals that both cards have virtually the same raw cooling performance; Sparkle simply runs higher fan speeds to achieve lower temperatures. Considering that the difference is 60°C vs 63°C, I think ASRock did the right thing, allowing marginally higher temperatures to achieve lower fan noise.

While there was no idle fan-stop on the A-Series, Intel has made this capability standard with the B-Series—the fans will turn off in idle, desktop productivity, media playback and internet browsing. Unlike the B580, where the fans spin up every few seconds even when fully idle, both B570 cards that we tested will keep their fans stopped properly as long as there is no load on the card.

PCIE x8 and Resizable BAR
Just like last generation, Arc Battlemage is specified as "requires PCIe Resizable BAR." While there was some controversy about that requirement back in 2022, today it's a very reasonable one. Virtually every motherboard supports this configuration, and it's enabled by default, so novice users won't have to deal with manual setting changes.

While the B570 cards use a physical x16 connector, only eight lanes are connected to the GPU. This matches what AMD and NVIDIA have been doing with the RX 7600 and RTX 4060. Cutting the number of lanes in half reduces design complexity in the GPU's PCIe interface, which means a smaller die, which lowers fabrication cost. The tradeoff is around 2% in gaming performance, an acceptable compromise in my opinion.
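For context, the per-lane numbers come straight from the PCIe specification: Gen 4 runs at 16 GT/s per lane with 128b/130b line encoding. A small sketch of the resulting peak one-direction bandwidth, which also shows why a Gen 4 x8 link is no worse than the Gen 3 x16 slots in many older systems:

```python
def pcie_bandwidth_gb_s(rate_gt_s: float, lanes: int) -> float:
    """Peak one-direction PCIe bandwidth in GB/s for Gen 3/4/5 links,
    which all use 128b/130b line encoding."""
    return rate_gt_s * (128 / 130) * lanes / 8  # GT/s -> GB/s

print(round(pcie_bandwidth_gb_s(16.0, 8), 2))   # Gen 4 x8:  ~15.75
print(round(pcie_bandwidth_gb_s(16.0, 16), 2))  # Gen 4 x16: ~31.51
print(round(pcie_bandwidth_gb_s(8.0, 16), 2))   # Gen 3 x16: ~15.75, same as Gen 4 x8
```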

Power Consumption
For Battlemage, Intel has contracted TSMC to fabricate their GPU chips, using a modern 5 nanometer production process. This, paired with the improvements in GPU architecture design, helps bring power consumption down considerably. Unlike some B580 designs, the B570 is firmly a single 8-pin power connector product, which helps ensure an upgrade path, even for older computers. As mentioned before, the VRM design is identical to the Intel reference card, just some power limits were increased. We measured around 155 W for both cards in gaming—a 30 W reduction over the B580. Considering the 15% performance difference, this means that the B570 is a bit more energy efficient than the B580 and is now roughly on par with modern GPU designs from AMD and NVIDIA. GeForce RTX 4060 on the other hand is clearly more efficient, with just 128 W under load. It's still not a dealbreaker because the power consumption cost isn't big enough, and the cooler designs can handle the increased heat output very well, with minimal noise.
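The efficiency comparison follows directly from those figures. A minimal sketch, treating the B570 as the 1.0x performance baseline and assuming roughly 185 W for the B580 (the measured 155 W plus the 30 W difference noted above):

```python
def perf_per_watt(relative_perf: float, power_w: float) -> float:
    """Relative performance divided by measured board power."""
    return relative_perf / power_w

b570 = perf_per_watt(1.00, 155)  # baseline at ~155 W
b580 = perf_per_watt(1.15, 185)  # 15% faster at an assumed ~185 W
print(round(b570 / b580, 3))  # ~1.038, i.e. the B570 is about 4% more efficient
```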

What is problematic though is the high power consumption in idle and non-gaming states. We measured 29 W, which is simply too much for any graphics card in 2025. Competing products do much better here: RTX 4060: 14 W, RX 7600 XT: 4 W, RX 7600: 2 W (!). What's really disappointing is that Intel has known about this since the Arc A-Series launched in late 2022, and they didn't bother fixing it. VRAM still runs at full speed no matter whether the card is sitting idle or fully loaded; competitors have had low-clocked idle states forever. Intel's official answer is to enable ASPM power management, which is off by default on virtually all desktop systems, so users are required to go into their BIOS and change a setting they might have never heard of. On top of that, the Windows power profile has to be manually changed from the default to "Maximize energy savings," nested several levels deep in the legacy power settings. We still took the time to measure the effect of ASPM and can confirm that it lowers single-monitor idle power consumption to 7 W, which is good. Multi-monitor and media playback are still sky-high at 29 W and 31 W respectively.
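On Linux you can at least check whether an ASPM policy is active without rebooting into the BIOS, since the kernel exposes the current policy in sysfs. A small sketch (the sysfs path is standard, but BIOS-level ASPM support still has to be enabled for the links to actually enter low-power states):

```python
from pathlib import Path

ASPM_POLICY = Path("/sys/module/pcie_aspm/parameters/policy")

def active_aspm_policy(text: str) -> str:
    """The kernel lists all policies on one line with the active one in
    brackets, e.g. "default performance [powersave] powersupersave"."""
    for token in text.split():
        if token.startswith("[") and token.endswith("]"):
            return token[1:-1]
    return "unknown"

if ASPM_POLICY.exists():  # Linux only; absent if the kernel was built without ASPM
    print(active_aspm_policy(ASPM_POLICY.read_text()))
```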

Media Codecs & Display Connectivity
Arc Battlemage has excellent support for modern codecs; basically everything is there, including hardware-accelerated encode for AV1, H.264 and H.265. While that is certainly nice to have, for the vast majority of users it won't make much of a difference. All streaming services have several fallbacks in case a codec is not supported, so you're fine even with older hardware. Also, all modern iGPUs and discrete GPUs support AV1 decode; only encode support varies a bit. Unless you're a creator or want to publish gameplay, even hardware without AV1 encode acceleration will be fine, and something like H.265 or H.264 is good enough, too. The same is true for connectivity: sure, 4K at up to 360 Hz is awesome, but in this segment most people will be using 1080p Full HD or 1440p.

Overclocking
Intel has made major upgrades to Battlemage's overclocking capabilities, and built a fully-featured package. Users can now modify memory frequency, voltages and the GPU's voltage-frequency curve. The fan curve is also adjustable through an exceptionally well-designed interface that I really like. In our testing we got around 10% real-life performance gains, which is pretty decent, but I feel like there is more potential there with further improvements to the Intel software.

Drivers & Game Support
Despite being the youngest vendor of discrete graphics cards, Intel's driver team has been performing very well. They are consistently releasing game-ready drivers ahead of major and minor game releases, beating AMD regularly, sometimes even NVIDIA. This strong release cadence has been going on for well over a year, so there's no reason to assume they'll slow down with Battlemage. When I encountered some small issues in games during testing, Intel released a fixed driver within days. To me, it looks like Intel understands that drivers and software are an essential piece of the product and is allocating the proper resources to them.

Performance on weaker CPUs
It seems there is a performance regression when Arc Battlemage is running on weaker processors. This could especially be problematic for people who are upgrading just the GPU. Intel is aware of the situation, and it seems they are confident that it can be fixed. They provided the following statement: "We are aware of reports of performance sensitivity in some games when paired with older generation processors and we are investigating."

Pricing & Alternatives
Intel's MSRP for the Arc B570 is $220; both custom designs we're reviewing today sell for $230. While this is certainly an attractive price, and I applaud Intel for caring about the sub-$250 and sub-$300 (B580) segments, I feel like the strongest competitor of the B570 is the B580 itself. For just a little bit extra you get 12 GB VRAM, 15% higher performance in raster and 30% higher performance in RT. On the other hand, while the B570's performance is nearly on par with the RTX 4060, it is substantially more affordable, considering the lowest price for an RTX 4060 is $300. In return for the higher price, the RTX 4060 gets you DLSS 3 upscaling and frame generation, much better energy efficiency, and everything else NVIDIA offers that makes them the industry leader. Still, a $70-80 price increase is quite a lot of money, especially in this segment. AMD's Radeon RX 7600 non-XT is $250, more expensive than the Arc B570, and comes with a smaller 8 GB VRAM buffer and slightly lower performance, though much better idle power usage; I'm not convinced and would probably lean towards buying a B570 (or B580). AMD does offer the RX 7600 XT with 16 GB VRAM, but at $310 it's much too expensive for the minimal gains it offers, and it still can't match the B580. If money is tight, AMD's previous-generation RX 6600 XT and 6650 XT could be options, but these generally lack the performance for decent 1080p gaming at highest details. The RTX 3060 Ti is around $300 these days; I'd prefer a 4060 over that any day, simply for DLSS 3 Frame Generation. Overall, Intel's new card is in an excellent position to capture market share in this high-volume segment.

Future Releases
Last month, in my B580 reviews, I predicted that we would see some new GPU action at CES from both AMD and NVIDIA. Turns out AMD didn't officially announce anything, and their RX 9070 series is in a completely different segment. I have no doubt that they will eventually offer something in the $300 price range, but it doesn't seem to be soon, maybe not even in 2025. NVIDIA has announced four GPUs at the top of their product stack; the cheapest model is $550. I'd guess that they will announce more, cheaper SKUs in the second quarter of this year, but I doubt they will even care about the sub-$300 segment, maybe even sub-$350. This is good news for Intel though, because it gives them free rein over this part of the market; now they just need to get the volume up and establish themselves as the third alternative.
Recommended
Budget
Jan 17th, 2025 16:37 EST