Sunday, March 26th 2023

Intel Arc "Battlemage" to Double Shader Count, Pack Larger Caches, Use TSMC 4 nm

Intel's next-generation Arc "Battlemage" GPU is expected to double its shader count, according to a report by RedGamingTech. The largest GPU in the Arc "Battlemage" series, the "BMG-G10," aims to power SKUs that compete in the performance segment. The chip is expected to be built on a TSMC 4 nm-class EUV node, similar to NVIDIA's GeForce "Ada" GPUs, with a die size similar to that of the "AD103" silicon powering the GeForce RTX 4080.

Among the juiciest bits of this report is the claim that the top "Battlemage" chip will see its Xe Core count doubled to 64, up from 32 on the top "Alchemist" part. This would double its execution unit (EU) count to 1,024 and put its unified shader count at 8,192. Intel is expected to give the chip clock speeds in excess of 3.00 GHz. The Xe Cores themselves could see several updates, including IPC uplifts and support for new math formats. The memory sub-system is expected to see an overhaul, with a large 48 MB on-die L2 cache. While the memory bus is unchanged at 256-bit wide, the memory speed could see a significant increase from the 16-17.5 Gbps of the Arc A770. As for when customers can actually expect products, the RedGamingTech report puts the launch of the Arc "Battlemage" series at no sooner than Q2 2024. The company is expected to launch refreshed "Alchemist+" GPUs in 2023.
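For reference, the rumored numbers line up with the "Alchemist" hierarchy of 16 vector engines (EUs) per Xe Core and 8 FP32 lanes per EU. A minimal sketch of the arithmetic, assuming "Battlemage" carries that layout over (which the report does not confirm):

```python
# Back-of-the-envelope check of the rumored shader math. Assumes
# "Battlemage" keeps Alchemist's layout: 16 vector engines (EUs) per
# Xe Core, each 256 bits wide (8 FP32 lanes, i.e. 8 unified shaders).
EUS_PER_XE_CORE = 16  # Alchemist value; carrying it over is an assumption
SHADERS_PER_EU = 8    # 256-bit vector engine = 8 FP32 lanes

def shader_counts(xe_cores: int) -> tuple[int, int]:
    """Return (EU count, unified shader count) for a given Xe Core count."""
    eus = xe_cores * EUS_PER_XE_CORE
    return eus, eus * SHADERS_PER_EU

print(shader_counts(32))  # top "Alchemist" (A770): (512, 4096)
print(shader_counts(64))  # rumored top "Battlemage": (1024, 8192)
```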
Sources: RedGamingTech (YouTube), VideoCardz

44 Comments on Intel Arc "Battlemage" to Double Shader Count, Pack Larger Caches, Use TSMC 4 nm

#3
Minus Infinity
One can only hope with Raja gone, the GPU division can "quietly" achieve its goals without BS hype. AMD and Nvidia need a massive kick up their greedy arses.
#4
evernessince
So a 3-fold increase in L2, faster memory, and a doubling of shaders and RT cores.

Such a card might be around 4070 Ti level based on TPU's data (although to be fair, TPU's data has yet to account for driver updates).

Intel just needs to keep drilling away at those driver updates and push aggressive pricing. It's going to be tough to catch all the fringe issues that can pop up in rarely played games, though; hopefully they have a very robust error logging and reporting system built in.
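One rough way to sanity-check that guess is to scale the A770 by the rumored shader and clock increases, damped by an efficiency factor, since doubling shaders never doubles frame rates. A sketch under stated assumptions (the A770 boost clock and the efficiency range are ballpark figures, not data from the report):

```python
# Illustrative scaling estimate, not a benchmark. Clocks are
# assumptions: A770 boost ~2.4 GHz vs. the rumored ~3.0 GHz.
shader_scale = 8192 / 4096  # rumored doubling of unified shaders
clock_scale = 3.0 / 2.4     # rumored clocks vs. A770 boost clock

# Damp the shader scaling: extra shaders rarely convert 1:1 into fps.
for efficiency in (0.5, 0.7, 0.9):
    estimate = clock_scale * (1 + (shader_scale - 1) * efficiency)
    print(f"scaling efficiency {efficiency:.0%}: ~{estimate:.2f}x A770")
```

That lands at roughly 1.9-2.4x an A770 under those assumptions, which is in the same ballpark as the 4070 Ti guess, before accounting for drivers.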
#5
usiname
The 3060 also has almost double the shader count of the 2060, but the performance difference is only 20%.
#6
Patriot
Minus Infinity: One can only hope with Raja gone, the GPU division can "quietly" achieve its goals without BS hype. AMD and Nvidia need a massive kick up their greedy arses.
Need to get rid of ryan shrout as well.
#7
phanbuey
Patriot: Need to get rid of ryan shrout as well.
+100
#8
Jism
Minus Infinity: One can only hope with Raja gone, the GPU division can "quietly" achieve its goals without BS hype. AMD and Nvidia need a massive kick up their greedy arses.
I think his work is done, and he's been sent off. Intel takes it from here.

I mean, he didn't create terrible GPUs; Fiji, Vega, Polaris etc. were quite good cards. Even better at compute than Nvidia.

The problem was they had a limited budget to work with. They could only create a compute-based card and spin a version of that off as a consumer gaming card.

It lacked efficiency; that was thrown out of the window once the clocks were raised to high levels in order to compete with Nvidia.

RDNA and CDNA are separate divisions now; one for graphics, one for compute.
#9
evernessince
usiname: The 3060 also has almost double the shader count of the 2060, but the performance difference is only 20%.
The thing with Ampere is that Nvidia doubled the FP32 cores per SM while only modestly growing the L1 cache per SM. Those extra cores only handle FP32 and share a datapath with the INT32 units; that's how Nvidia was able to fit them.

In the case of the 3060 vs the 2060, you can see from TPU's GPU database that the 3060 actually has fewer SMs, TMUs, and RT cores, and far fewer Tensor cores than the 2060. Both have the exact same amount of L2 cache.

At the end of the day the 3060 is a significantly smaller die than the 2060, so there's only so much you can expect. If you look at performance per mm², the architecture did pretty well.
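For the curious, a rough sketch of that per-area comparison, using ballpark die sizes and a ~20% relative-performance gap as assumptions (both approximated from TPU's GPU database):

```python
# Rough perf-per-area comparison. Die sizes and the ~20% performance
# gap are ballpark assumptions, normalized to the RTX 2060 = 1.00.
cards = {
    "RTX 2060 (TU106, 12 nm)": {"die_mm2": 445, "rel_perf": 1.00},
    "RTX 3060 (GA106, 8 nm)":  {"die_mm2": 276, "rel_perf": 1.20},
}

for name, c in cards.items():
    # Relative performance per mm^2 of die area (higher is better).
    print(f"{name}: {c['rel_perf'] / c['die_mm2'] * 100:.2f} perf per 100 mm^2")
```

That works out to nearly twice the performance per unit area for GA106, though a chunk of that comes from the 12 nm to 8 nm node shrink rather than the architecture alone.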
#10
TumbleGeorge
Is it possible for BMG to use GDDR7 and be the first graphics card on the market with it?
#11
HisDivineOrder
Have you ever noticed how these amazing rumors always turn out to be so extreme they're obviously not true? A fab improvement, double the die, double the IPC, a third more speed, and magical unicorns powering it?

Let's be serious. It's going to have a slight increase in clockspeed, 10-20% more IPC, and maybe one market segment higher than the first generation.
#12
DrCR
As a Linux gamer, this may very well be my next card.

Currently running an old Kepler on an otherwise new build, and I'll continue to do so since I absolutely refuse to spend the $$$ that Nvidia and AMD now demand. I'd sooner spend more on a different hobby or medium of entertainment.
#13
ratirt
HisDivineOrder: Have you ever noticed how these amazing rumors always turn out to be so extreme they're obviously not true? A fab improvement, double the die, double the IPC, a third more speed, and magical unicorns powering it?

Let's be serious. It's going to have a slight increase in clockspeed, 10-20% more IPC, and maybe one market segment higher than the first generation.
Yeah, that was my impression as well. People should really take a step back with the rumors, since it's like a competition nowadays over who can give the most unbelievable rumor ever.
Normally they go with double everything, which in Intel's case is extremely hard to believe.
#14
evernessince
HisDivineOrder: Have you ever noticed how these amazing rumors always turn out to be so extreme they're obviously not true? A fab improvement, double the die, double the IPC, a third more speed, and magical unicorns powering it?

Let's be serious. It's going to have a slight increase in clockspeed, 10-20% more IPC, and maybe one market segment higher than the first generation.
No one is claiming double the IPC; that's nuts. You might be confusing double the IPC with double the shaders; the two are completely different.

1/3 more performance is not a heavy ask; most GPU generations deliver 40% or more.

A fab improvement to 4nm isn't exactly far fetched either given Nvidia is already on it.

Die size is not a performance statistic companies brag about. It's something an enthusiast might note to compare actual value the customer is getting or to compare a product relative to others in the stack.

Intel only getting 10-20% higher IPC along with a slight bump in clocks would mean they'd fall even further behind AMD and Nvidia as both of those companies achieved a larger gain than that this gen. You seem to be thinking of the CPU market with your figures.
#15
lemonadesoda
This is awful rumour mill rubbish that should not be on TPU.

It's all speculation based on the Q3 2022 roadmap that was "leaked" when Raja was still around and vying for more 2023 budget allocation from the Intel executives. He's since been given the boot.

Until we get info from Intel, a lid should be put on this discussion, because this speculation is just as probable as a total Arc shutdown, unless Intel makes R&D, investment, and product-line commitments to Arc.

Quotes from the "source":
"One should note that the roadmap is slightly outdated"
"the roadmap confirms Intel's plans for Battlemage in Q1 2024" - no it doesn't!
"As mentioned above, the roadmap may not be accurate anymore"


btarunr usually does a great job of posting news, but this item should be removed on the basis of being a low-quality post.
#16
Cruzy
I might be mistaken, but as far as my knowledge goes, there's no 4 nm process as such, at least not from TSMC. Ada Lovelace is produced on an optimized 5 nm process that is called 4N. I pointed this out somewhere before, but I guess it's easy to mix up.

Please fix it, because it's also wrong in the title.

BR
#17
Bwaze
HisDivineOrder: Have you ever noticed how these amazing rumors always turn out to be so extreme they're obviously not true? A fab improvement, double the die, double the IPC, a third more speed, and magical unicorns powering it?

Let's be serious. It's going to have a slight increase in clockspeed, 10-20% more IPC, and maybe one market segment higher than the first generation.
Well, this is much more down to earth than direct Intel announcements (not just rumors) about "Alchemist". Remember when it was supposed to compete with high-end Turing, then high-end Ampere, and when it came out it didn't even reach midrange? Although it was obvious from the size of the chips and the amount of memory that those cards weren't cheap to build, and Intel must be selling them almost at a loss...

Nobody needs these kinds of childish PR stunts, especially when they turned out to be completely unfounded.

#18
napata
usiname: The 3060 also has almost double the shader count of the 2060, but the performance difference is only 20%.
That's why you should look at the full specs and not just TFLOPS... The 3060 achieved this by doubling the CUDA cores per SM, but that's not the case here. The 3060 has fewer SMs than the 2060. A more valid comparison would be 3080 -> 4090.
#19
ymdhis
Jism: I mean, he didn't create terrible GPUs; Fiji, Vega, Polaris etc. were quite good cards. Even better at compute than Nvidia.
eeeh, I wouldn't say that. Fiji was a supersized Tonga and really just a test drive for HBM integration; Polaris had a snafu that made it burn up PCIe slots and never scaled high enough (they touted it as having multi-GPU performance on par with high-end cards... Polaris was really saved by its price/performance, and by 4 GB cards shipping with 8 GB); and Vega was late and had a broken-in-silicon draw-stream binning rasterizer, which made the card very fillrate-starved and underperforming (they had to release the card in a certain fiscal quarter, so they ended up cutting corners).

Polaris and Vega were both awesome at their price/performance, but that's because they knew they didn't have a hot product. In other words, Raja fucked up. It isn't a surprise to anyone that these new Intel discrete GPUs also ended up as train wrecks.

As for compute, AMD cards always had a disproportionately high amount of shaders, which made them powerful for some compute tasks, primarily hashing for password cracking and bitcoin. This went back at least as far as Evergreen. However, from what I've heard, getting compute working on them is a nightmare (the applications I used in the HD5000 days pretty much broke and had to be remade with every driver update, plus there's an infamous bug where, if you did compute + video decode at the same time, the card soft-locked; this bug kept reappearing in every single generation of cards for 10+ years).
#20
sLowEnd
Cool. It'd be great if there were serviceable FP64 performance too, so we can fold. Alchemist's inability to fold is disappointing.
#21
Metroid
HisDivineOrder: Have you ever noticed how these amazing rumors always turn out to be so extreme they're obviously not true? A fab improvement, double the die, double the IPC, a third more speed, and magical unicorns powering it?

Let's be serious. It's going to have a slight increase in clockspeed, 10-20% more IPC, and maybe one market segment higher than the first generation.
GPUs are different from CPUs: while GPUs can gain more than 50% performance every generation, CPUs only manage 10 to 20% at most. However, the claim that this will come in Q2 2024 is a lie; more like Q3 2024 or maybe Q1 2025. Intel has three attempts to get this right if they want Q1 2025; if they really want to release in Q2 2024, then Intel has just one attempt to get this right.
#22
dj-electric
Doubling and then some seems almost like a requirement to me by now. Alchemist already arrived what felt like two generations late.
#23
kondamin
Patriot: Need to get rid of ryan shrout as well.
He’s just a talking head who sits between intel and the media.
he has no input on products he just makes intel approachable by the media.

which used to be, not needed.
#24
Vayra86
Minus Infinity: One can only hope with Raja gone, the GPU division can "quietly" achieve its goals without BS hype. AMD and Nvidia need a massive kick up their greedy arses.
My wallet is ready.
#25
mama
Well, drivers have improved. Maybe they can add the hardware to match. I'll believe it when it actually launches. Other rumours suggest that Intel is dumping several upcoming releases from their proposed GPU schedule. Again, time will tell.