
Intel Arc B580 Battlemage Unboxing & Preview

AcE

Joined
Dec 3, 2024
Messages
366 (9.89/day)
Definitely, raytracing performance and XeSS are wins for Intel and AMD would do well to learn from that.
I think AMD underestimated the importance of RT, which is why it took them so long to build proper RT cores - or they thought what they had would be enough, and while it is "enough", enough isn't good enough (lol). The other issue could have been development timelines that got in the way of building a proper RT core earlier. I think AMD concentrated for a long time on getting big raster performance first (which they then delivered, multiple times), and only then on RT; that's why they're doing a proper RT core now.

FSR, on the other hand, will get "AI" or ML improvements with version 4; that's already known. FSR 3.1 is easily good enough though, and XeSS isn't better as far as I know - only DLSS clearly is. They also need something similar to "Ray Reconstruction" to replace the noisy RT denoiser games usually use.
 
Joined
Feb 1, 2019
Messages
3,684 (1.70/day)
Location
UK, Midlands
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 4080 RTX SUPER FE 16G
Storage 1TB 980 PRO, 2TB SN850X, 2TB DC P4600, 1TB 860 EVO, 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Soundblaster AE-9
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
Fortnite is on Unreal Engine 5.5 now! I've got a 4090 and it often dips below 60 fps on max video settings at 1440p. It fluctuates between 40-65 fps, with huge dips to 5 fps in areas not loaded before or when there's too much action. (No DLSS, FSR, motion blur, or frame insertion used.)
How fast was it on the older engine?

UE seems to just keep regressing perpetually lol.
 

AcE

Joined
Dec 3, 2024
Messages
366 (9.89/day)
How fast was it on the older engine?

UE seems to just keep regressing perpetually lol.
He's not supposed to ignore half the tech the 4090 is famed for (DLSS + FG). He's playing a game on the newest engine with heavy RT and is surprised that his high-end GPU is "only" at 60 fps without any help. That's a lot; try that with a 4070.
 
Joined
Oct 2, 2020
Messages
1,025 (0.66/day)
System Name ASUS TUF F15
Processor Intel Core i7-11800H
Motherboard ASUS FX506HC
Cooling Laptop built-in cooling lol
Memory 24 GB @ 3200
Video Card(s) Intel UHD & Nvidia RTX 3050 Mobile
Storage Adata XPG SX8200 Pro 512 GB
Display(s) Laptop built-in 144 Hz FHD screen
Audio Device(s) LOGITECH 2.1-channel
Power Supply ASUS 180W PSU
Mouse Logitech G604
Keyboard SteelSeries Apex 7 TKL
Software Windows 10 Enterprise 21H2 LTSC
Ask Sapphire why they can't make a cooler that keeps their VRAM below 90°C at 22°C ambient with their fans at over 2000 rpm ;)
Well, gaming isn't like a 24/7 server, or mining :D As for a "dusty gaming GPU" being worse than a "well-chilled mining GPU" - that's nonsense. Any maintained GPU will be better than a dusty, abandoned one, whether it was used for mining or gaming. So, in conclusion, I don't care about temps - unless it crashes, like my stupid previous TUF laptop :D
 
Joined
Jul 5, 2013
Messages
28,464 (6.77/day)
I know I'm late, but this looks very promising, and it's only the lower tier card! Excited for the upper tier offerings!
 
Joined
Feb 20, 2019
Messages
8,439 (3.93/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
Definitely, raytracing performance and XeSS are wins for Intel and AMD would do well to learn from that.
Rumours (to be taken with a grain of salt) say that the 8800XT should raytrace like a 4080/4080S.

I'll believe it when I see it, ofc!
 
Joined
Jul 5, 2013
Messages
28,464 (6.77/day)
Rumours (to be taken with a grain of salt) say that the 8800XT should raytrace like a 4080/4080S.

I'll believe it when I see it, ofc!
Seems plausible. AMD's RTRT performance is quite good with the RX7000 GPUs, so the RX8000 GPUs should be very competitive.
 
Joined
Nov 26, 2021
Messages
1,717 (1.51/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
Rumours (to be taken with a grain of salt) say that the 8800XT should raytrace like a 4080/4080S.

I'll believe it when I see it, ofc!
Yeah 7900XT raster and 4080 raytracing performance sounds unlikely, but we'll see in about a month's time.
 
Joined
Feb 14, 2012
Messages
1,854 (0.39/day)
Location
Romania
Sounds more like a heatpad / TIM issue than a bad cooler. Sapphire is pretty solid in general.
Generally they are; I also have an RX 560 and a 5500 XT from Sapphire, and they are cool and quiet. But with the RX 6650 XT Pulse edition they screwed it up. And since I live in Romania, where I void the warranty if I replace the pads or TIM, I had to return it :/
 
Joined
May 29, 2024
Messages
33 (0.15/day)
Location
United States of America
Processor AMD Ryzen 7 7800X3D
Motherboard ASUS B650-F ROG STRIX GAMING WIFI ATX
Cooling DeepCool AK500 ZERO DARK
Memory TeamGroup T-Create Expert 32GB Kit (2 x 16GB) DDR5-6000 CL30
Video Card(s) Gigabyte RTX 3050 Gaming OC
Storage WD Black SN850 1TB PCIe 4.0
Display(s) ASUS ROG Swift OLED PG27AQDM
Case Fractal Design North
Audio Device(s) Topping DX3 Pro+ DAC/AMP, Byerdynamic TYGR 300R, HyperX QuadcastS
Power Supply MSI MEG Ai1000P 1000W 80+ Platinum
Mouse LAMZU Maya X
Keyboard DURGOD 65% Gateron Yellow switches
Software Windows 10 Pro
Joined
Feb 1, 2019
Messages
3,684 (1.70/day)
Location
UK, Midlands
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 4080 RTX SUPER FE 16G
Storage 1TB 980 PRO, 2TB SN850X, 2TB DC P4600, 1TB 860 EVO, 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Soundblaster AE-9
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
How fast was it on the older engine?

UE seems to just keep regressing perpetually lol.
Guess I'll dump this here; it appeared in my YT feed.

 
Joined
Mar 10, 2023
Messages
40 (0.06/day)
Fortnite is on Unreal Engine 5.5 now! I've got a 4090 and it often dips below 60 fps on max video settings at 1440p. It fluctuates between 40-65 fps, with huge dips to 5 fps in areas not loaded before or when there's too much action. (No DLSS, FSR, motion blur, or frame insertion used.)
Max settings are really only for GPU testing; you can usually turn off a lot of stuff while the game still looks the same...
 
Joined
Sep 17, 2014
Messages
22,830 (6.06/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
You can't get the sarcasm, so I'll say it for you: it's not for 4K, it's not even for 1440p, this card is too slow.
By the way, which of the "new games" in the "Game Testing" section can't you get either? Seeing how delusional you are, I'm sure you'll now cherry-pick 1-2 games from that section.
You keep saying that, but you forget to use your brain apparently when you read a review;

Intel is releasing >8GB cards with better bandwidth, so yes, they sure as hell do handle 1440p better, and they damn well should given the core performance on tap. The fact you see a 30-odd FPS number in reviews is because the settings are maxed in said games. It's actually quite an achievement seeing those numbers at that res with those settings. Dial back to high or med-high without a substantial IQ hit and poof, the A770 magically turned into a 1440p card. Similarly, a 12GB midranger is decidedly a 1440p-capable card. That's also the main criticism leveled at the 8GB 'midrange' these days: the limited capacity keeps you on 1080p more than any lack of core oomph; all of these cards, however, were always a minigame of tweaking game settings to get the best performance out of them.

It's amazing, this thing we call the PC, isn't it? You actually have to think about things for a second, and you can make almost anything run properly if you do so. If that's lost on you... buy a console, instead of calling others delusional.

I played 3440x1440 on a GTX 1080 until last January; go figure. No issues - not top-end FPS either, but perfectly doable. You might need to stop 'playing reviews' and instead game a little more on various hardware.

I have the RX 6600 combined with an i9-11900KF and 32 GB of RAM running at 3200 MHz, and I play at 1440p. The 6600 is most definitely not a card for 1440p, and is worse than the 4060. It's about 21% slower than the 4060 on TPU's relative performance charts when looking at the 4060's page.

[Attachment: TPU relative performance chart]

I usually play at mid to mostly high settings and achieve a satisfactory 70+ average fps on most games. Which you can see in my picture of the games currently on my pc.

[Attachment: screenshot of installed games and their average FPS]

Do I play extremely intensive games? No, not really. But whenever I do play an intensive game, it's not like cards such as the 6600 or 4060 will be terrible at 1440p. I understand why people say card X is for 1440p or card X is a 1080p card. But I think most people would be fine with performance like mine, or just the 4060's performance, at 1440p. It's mostly tech-savvy people who care about the specific specs and whatnot.

I also understand people not wanting to compare the B580 to the 4060. While the specs on paper and the value proposition are really good, I think it's fair to doubt Intel's drivers. They do a good job supporting their Alchemist GPUs, and they have come a long way since launch. But at the end of the day, this is only Intel's second (released) shot at the discrete GPU market, and their tech isn't really mature compared to AMD and Nvidia.

I won't be upgrading my GPU anytime soon, but I do genuinely wish Intel's GPU team the best and success. They deserve it and have worked their damned hardest to get the Alchemist GPUs to the state they are in right now.

Other than that, thanks for reading my rambling, and just know I'm not trying to offend anyone.
The 4060 is neutered hard to be what it is: a money trap that forces you to upgrade ASAP so Nvidia can earn off your wallet again. It's a typical x60 from team green: a complete POS built for obsolescence, priced only just a little over your comfort zone. It is an 8GB card with the bandwidth of a bottom-barrel 2017 GPU, and despite its cache, it will be constantly fighting over resources with itself - you're looking at 272 GB/s of bandwidth here; for reference, the A770 offers almost double that. The RX 6600 is in the same category as the 4060, but objectively worse in every way (224 GB/s even!). Pascal's 1060 6GB offered 192 GB/s, for reference - you're literally looking at a slightly upgraded set of 2017-era products here.

The B580 is most definitely not in that category, much like the A770 shouldn't have been; the cards have the specs to drive more than 1080p, and they can indeed do it, too, without being a stutterfest. Will you get fantastic FPS? No. But it will be playable, and on the 4060/6600 it won't be.
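The bandwidth numbers traded in this thread all follow from the same simple formula: bus width (in bytes) times memory data rate. A quick sketch - the per-card data rates are assumptions based on each card's typical memory spec, not values from this thread:

```python
# Theoretical memory bandwidth = (bus width in bits / 8) * data rate in Gbps.
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Bytes transferred per second across the memory bus, in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

# (bus width, assumed memory data rate in Gbps)
cards = {
    "GTX 1060 6GB": (192, 8.0),    # GDDR5
    "RX 6600":      (128, 14.0),   # GDDR6
    "RTX 4060":     (128, 17.0),   # GDDR6
    "Arc A770":     (256, 17.5),   # GDDR6
    "Arc B580":     (192, 19.0),   # GDDR6
}

for name, (bus, rate) in cards.items():
    print(f"{name}: {bandwidth_gbs(bus, rate):.0f} GB/s")
# -> 192, 224, 272, 560, 456 GB/s respectively
```

These reproduce the figures quoted above: the 4060's 272 GB/s, the 6600's 224 GB/s, the 1060's 192 GB/s, and the B580's 456 GB/s.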
 
Last edited:

AcE

Joined
Dec 3, 2024
Messages
366 (9.89/day)
The 4060 is neutered hard to be what it is: a money trap that forces you to upgrade ASAP so Nvidia can earn again on your wallet.
It is nothing special with lower midrange cards, to be honest, but your comment is only true when it comes to AAA gaming; otherwise the GPU will easily hold out for much longer. The bandwidth you mention is irrelevant: in any bandwidth argument you have to regard *effective bandwidth*, not just pure memory bandwidth, as the card is built around cushioning that "low bandwidth" with a lot of L2 bandwidth - hence the notion of "effective bandwidth", which AMD has used since RDNA2 and "Infinity Cache", and which has been proven to work since then, so I wouldn't be as cynical.
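The "effective bandwidth" idea can be expressed as a toy model: if some fraction of memory traffic is served from on-die cache (treated here as free), the VRAM bus only has to carry the misses. The hit rate below is an illustrative assumption, not a measured value:

```python
# Toy model: cache hits never touch VRAM, so the bus appears faster by
# a factor of 1 / (miss rate). Real behavior depends on workload.
def effective_bandwidth(vram_gbs: float, cache_hit_rate: float) -> float:
    """Apparent bandwidth when a fraction of accesses hit on-die cache."""
    return vram_gbs / (1.0 - cache_hit_rate)

# RTX 4060: 272 GB/s raw. At an assumed ~40% L2 hit rate it behaves
# closer to a ~453 GB/s card; at 0% hit rate, raw bandwidth is all you get.
print(effective_bandwidth(272, 0.40))
```

This is why both sides of the argument can be right: the amplification is real when the working set fits the cache, and vanishes when it doesn't.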

Intel's cards are primitive; they lack big caches, hence their need for big buses like 256-bit and a lot of traditional bandwidth to reach the same goal Nvidia and AMD reach through better means: big caches, and saving power by using only 128-bit and in general smaller buses that get the job done with advanced architectures.

The 6600, which you try to use as a bad example here, is one of the most loved cards today. People celebrate it because it's extremely efficient, and that's just because it uses a 128-bit bus with a lot of cache, which is simply better than using an outdated big bus on a lower midrange card. People maligned the 6600 when it came out, then later accepted that the card works beautifully, and since then it has been a good seller and is pretty well liked. Nvidia then copied AMD's "big cache, low power" concept with the RTX 40 generation. The 4060 is practically Nvidia's 6600, so it's pretty much a good card. What isn't great is the 4060 Ti: it's a higher-tier card with just 8 GB of VRAM; it should have a bit more, but it has the minimum viable amount.
 
Joined
Sep 17, 2014
Messages
22,830 (6.06/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
It is nothing special with lower midrange cards, to be honest, but your comment is only true when it comes to AAA gaming; otherwise the GPU will easily hold out for much longer. The bandwidth you mention is irrelevant: in any bandwidth argument you have to regard *effective bandwidth*, not just pure memory bandwidth, as the card is built around cushioning that "low bandwidth" with a lot of L2 bandwidth - hence the notion of "effective bandwidth", which AMD has used since RDNA2 and "Infinity Cache", and which has been proven to work since then, so I wouldn't be as cynical.

Intel's cards are primitive; they lack big caches, hence their need for big buses like 256-bit and a lot of traditional bandwidth to reach the same goal Nvidia and AMD reach through better means: big caches, and saving power by using only 128-bit and in general smaller buses that get the job done with advanced architectures.
Cache works, but to an extent. If it's saturated, you'll run into bandwidth constraints nonetheless. There is no 'effective' bandwidth; there's just another buffer before the VRAM. There is just a bit more cache, and there is bandwidth as it has always been. How much of an improvement you really get over the hard reported bandwidth depends entirely on the use case (not just the res and quality settings of a game, but the very way the engine provides data and how the GPU then handles it).

Nonetheless, bandwidth is still one of the two primary metrics that determine where cards land in the performance stack. Not cache - that's present everywhere, and it's also sized relative to the bandwidth/size of the GPU and VRAM. The gap between the x60/6600 bandwidth and that of higher-end cards is immense. A 7900XT, for example, has a whopping 800 GB/s and a 4090 has a full TB/s - the 4060 is literally a quarter of that; logic suggests that if the 4090 is considered 4K-capable, the x60 could never have become more than a 1080p card. Cache fixes nothing in this relative comparison.
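A quick ratio check on that gap (figures in GB/s; the 4090's "full TB/s" taken as its 1008 GB/s spec):

```python
# Relative bandwidth positions in the stack.
rtx_4090 = 1008   # GB/s
rx_7900xt = 800   # GB/s
rtx_4060 = 272    # GB/s

print(f"4060 vs 4090:   {rtx_4060 / rtx_4090:.0%}")   # ~27%, i.e. about a quarter
print(f"4060 vs 7900XT: {rtx_4060 / rx_7900xt:.0%}")  # 34%
```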

Intel's cards also utilize larger cache now:

[Attachment: Arc B580 specification table]


And still offer 456GB/s on 12GB VRAM.

We agree completely though about the AAA focus. It's silly; for this class of cards, imho, you just know they'll be in trouble right away in the newest games - it has always been the case. But that's exactly what I'm saying about 1440p. It's a resolution you can indeed play, much like the latest triple-A titles, but with compromises. Stuttery gaming, however, isn't a compromise to me; that's just plain shit and a clear sign you're asking too much of a GPU.
 
Last edited:

AcE

Joined
Dec 3, 2024
Messages
366 (9.89/day)
Cache works, but to an extent.
It works in the case of the 4060 for 1080p, and for 1440p with either reduced details or DLSS enabled, so yes, it just works. In Nvidia's case even better than with AMD's Infinity Cache, because Nvidia bumped up the faster L2 cache instead of adding an extra L3 cache, which is slower.
The gap between the x60/6600 bandwidth and that of higher end cards is immense.
The gaps are nothing special; they scale with the tier of the cards. We already saw that from the 6600 XT to the 6900 XT: the shader count more than doubled but the bandwidth did not - instead it got 4x the L3 cache, which helps cushion the missing memory bandwidth. Again, only effective bandwidth is relevant, and cards such as the 4090 and 4080 also prove that it works. Both are way stronger than their predecessors, yet have comparable bandwidth; the rest comes through L2 cache. It simply works.
Intel's cards also utilize larger cache now:
18 MB is very low for a card stronger than the RX 6600, which has 32 MB of extra cache. The card still relies mainly on a big bus for bandwidth, so the concept behind Intel's GPUs is outdated. I also expect bad efficiency here, since the bigger bus has exactly that downside of eating more power. Intel is still at least 2 generations behind.
And still offer 456GB/s on 12GB VRAM.
Yes, and that's a downside, not an upside. A bigger cache with a smaller bus is the modern way, not the other way around. Again, the RX 6600 was doubted by people, but it was later proven that the L3 cache works and the 128-bit bus was never a downside for what people really did with that GPU. Nobody plays at 4K on an RX 6600. The card is 99% used for 1080p and works beautifully there. The same is true for the 4060. And nobody needs 12 GB of VRAM on a lower midrange card, btw; every time TPU releases a new game benchmark, it shows that 12 GB is not needed at 1080p, and not even at 1440p. For a 4060 Ti, which is either a better 1080p card or a 1440p card, 12 GB of VRAM would be nice as it's "safer", but even there it is still not needed; the card works normally.
 
Last edited:
Joined
Sep 26, 2024
Messages
5 (0.05/day)
Processor AMD 5700X
Motherboard Gigabyte X570S UD
Cooling DeepCool AS500
Memory Crucial Ballistix 32GB DDR4-3600
Video Card(s) XFX Speedster RX6800
Storage Kingston KC3000 2TB
Power Supply be quiet Straight Power 11 550W
Mouse Razer DeathAdder V3
Keyboard Cherry Stream 2019
I think the consensus seems to be that the B580 is going to be a solid entry level GPU for the price on offer.
One might say... ze B580 is going to be Intel's Polaris Moment.
It looks very similar to ze RX480 back in 2016.
 
Last edited:
Joined
Feb 28, 2015
Messages
111 (0.03/day)
One might say... ze B580 is going to be Intel's Polaris Moment.
It looks very similar to ze RX480 back in 2016.
Nope - too big a GPU die on an expensive node, barely faster than GPUs with smaller dies on an older node. In my opinion they're releasing only small numbers so as not to piss off shareholders over unfulfilled promises.
 
Joined
Jul 5, 2013
Messages
28,464 (6.77/day)
Nope - too big a GPU die on an expensive node, barely faster than GPUs with smaller dies on an older node. In my opinion they're releasing only small numbers so as not to piss off shareholders over unfulfilled promises.
Well, that's an interesting opinion. It doesn't really jibe with the reality of being a major IC fab, but whatever..
 
Joined
Jul 9, 2021
Messages
80 (0.06/day)
In Europe the B580 is 330-450 EUR (19% VAT), source: https://geizhals.de/?fs=arc+b580
That puts it in the 4060 Ti price range (real store prices, not MSRP).
For 100 euro more, I guess the 4060 Ti is still the better choice because of full AI support for leveraged llama + flux... CUDA stuff, though being bottlenecked at 128-bit is nonsense.
If prices stay at 330 euro minimum, there's not much to say about it as a 2-year investment.
They should release a full 16 GB card with as much bandwidth and power as possible; anything else is wasted money today.
However, given that the RTX 4070 Super is 600-800 EUR in retail stores, Arc can be taken into consideration in the 330-450 EUR range.
If stocks are low on the old continent, this is a hit or miss.
 

AcE

Joined
Dec 3, 2024
Messages
366 (9.89/day)
In Europe the B580 is 330-450 EUR
It then competes not only with the 4060, but also with the 7600 and the 16 GB version of the 7600. It doesn't bode well for a smaller, less popular competitor to ask so much for a GPU that isn't better. Too expensive; I wouldn't buy it.
 
Joined
Jul 9, 2021
Messages
80 (0.06/day)
It then competes not only with the 4060, but also with the 7600 and the 16 GB version of the 7600. It doesn't bode well for a smaller, less popular competitor to ask so much for a GPU that isn't better. Too expensive; I wouldn't buy it.
Looks like it also competes with the 4070 Ti Super, which is 2x the price :(
 