Sunday, December 29th 2024
Intel Rumored to Launch Arc Battlemage GPU With 24GB Memory in 2025
Intel could be working on a new Arc graphics card, according to Quantum Bits as quoted by VideoCardz. It is based on the Battlemage architecture and carries 24 GB of memory, twice as much as current models, and it appears to be aimed at professionals rather than gamers. Intel's Battlemage lineup currently consists of the Arc B580 with 12 GB of GDDR6 memory on a 192-bit bus, with the upcoming B570 offering 10 GB on a 160-bit bus. The new 24 GB model is said to use the same BMG-G21 GPU as the B580, with the extra VRAM coming from higher-capacity memory modules or a dual-sided module setup. No further technical details are available at this moment.
Intel appears to be aiming this 24 GB version at professional workloads, particularly AI tasks such as Large Language Models (LLMs) and generative AI. The card would be useful in data centers, edge deployments, education, and research, which makes sense for Intel, as the company does not yet have a high-memory GPU for the professional productivity market. Intel reportedly wants to launch this higher-memory Arc Battlemage in 2025; we guess it might be announced in late spring, or ahead of next year's Computex if there is no rush. In the meantime, Intel will keep selling its current gaming cards, as the latest Arc series was very well received, a big win for the company after all its struggles. The rumor suggests Intel is expanding its GPU plans rather than letting them fade away, a gray scenario that looked possible before the launch of Battlemage. Now it seems Intel wants to compete in the professional and AI acceleration markets as well.
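As a rough back-of-the-envelope illustration of why the extra memory matters for local LLM work, the sketch below estimates whether a few common model size and precision combinations would fit in 12 GB versus 24 GB of VRAM. The model sizes and the overhead factor are illustrative assumptions, not figures from Intel or Quantum Bits.

```python
# Rough, illustrative estimate of how much VRAM an LLM needs for inference,
# to show why a 24 GB card is interesting for local AI work. Model sizes and
# the overhead factor below are assumptions for illustration only.

def vram_needed_gb(params_billions: float, bytes_per_param: float, overhead: float = 1.2) -> float:
    """Weights-only footprint, times a rough overhead factor for KV cache and activations."""
    return params_billions * 1e9 * bytes_per_param * overhead / 1024**3

cards = {"Arc B580 (12 GB)": 12, "rumored 24 GB Battlemage": 24}
models = {"7B @ FP16": (7, 2), "13B @ FP16": (13, 2), "13B @ INT8": (13, 1), "34B @ INT4": (34, 0.5)}

for model, (size, bpp) in models.items():
    need = vram_needed_gb(size, bpp)
    fits = [name for name, gb in cards.items() if need <= gb]
    print(f"{model}: ~{need:.1f} GB -> fits on: {', '.join(fits) or 'neither'}")
```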
Sources:
Quantum Bits, VideoCardz
In 2025, Intel plans to launch a larger-memory version of the Battlemage series graphics cards, with capacity increased to 24 GB.
Going forward, the existing version will continue to serve consumer markets such as gaming, while the 24 GB larger-memory version will target the "productivity market."
The target users of the "productivity market" include data centers, edge server rooms, education and scientific research, and individual developers.
— Quantum Bits
48 Comments on Intel Rumored to Launch Arc Battlemage GPU With 24GB Memory in 2025
On the other hand, this gives me some hope that, if they release a B700 model (likely with 16 GB), then a pro model with 32 GB could become a thing. If they're priced more reasonably than a 5090 (like in the ~$1k range), I might grab a couple to upgrade from my current setup. The 6000 series is pretty much shit for AI workloads given that it lacks the WMMA support the 7000 series has.
Intel's software support is also way better than AMD's on that front; too bad they lack proper hardware to make use of that stack.
AMD is the other way around: great hardware with lacking software. Afaik Falcon Shores is Intel's upcoming platform for AI, which resembles a GPU but doesn't have much to do with the current Arc lineup.
Intel's Gaudi offerings are pretty much meant solely for cheap inferencing, and their other GPU offerings in that space haven't gotten any traction. The level of entitlement lol
Besides, RDNA2 is only 4 years old, which means it was only mined on for a couple of years, and miners nowadays are mostly good guys who don't abuse their wares. Sure, you might lose a bit of VRAM overclocking headroom, but that doesn't matter all that much. My 6700 XT is more than alive and still overclocks no worse than it did when I bought it (Feb '23). RDNA2 PCBs are overengineered. Cooling systems are also overengineered. These are built to last. You'd have to be completely paranoid to avoid them. I'm not talking connectors. I know, reading comprehension rarely exists in people nowadays, but I was talking pure PSU overload. And yes, 40 Xe2 cores at these speeds might produce power spikes of over 500 W. Not every PSU under the sun can handle that. It's also not that easy to cool in the first place. Better safe than sorry.
This could answer the riddle of why Intel didn't provide this information, as noted in the B580 review.
I want to see Intel push hard, and AMD get spooked that Intel will pass them, so that they too wake up and bring the goods.
Bring xx70/xx80-level performance with 14 GB+ for gaming and video editing.
Another issue with a lack of VRAM in a GPU is that it becomes impossible to resell at a half-decent price, further killing your ROI. But what stings most is that lacking VRAM effectively turns perfectly fine chips and boards into e-waste, something determined by the presence or absence of a single GB or a couple of GB worth of memory chips.
So yes, tech marches forward indeed, so isn't it better to get products that can withstand a bit of that? I know I wouldn't want to be double-checking every game review to see whether the VRAM usage actually crept up to the dreaded 10 GB, just ONE generation later... ;)
It's almost 4.5 years old now and still plays whatever I throw at it extremely well; it's very hard for me to be annoyed at the card over its VRAM.
From where I sit, I hear far more people who didn't buy one whinge about the 10 GB than people who did, but hey, maybe that's just my perspective. I wouldn't change a thing about what I bought then and how I've held out since.
I tried gaming with the Ryzen 7600X integrated graphics. I tried gaming with an MSI Nvidia GTX 960 4 GB (~45€, bought and sold on the second-hand market in 2023).
That is no fun.
For basic and office tasks, the Ryzen 7600X graphics worked well for several months in 2023 under Windows 11 Pro / GNU Gentoo Linux.
The Radeon 6600 is reasonably priced from my point of view. Those Intel cards may be an option for Windows gamers who do not care about driver quality. I doubt it works that way.
Please just test it yourself for several months with a 4 GB VRAM card, an 8 GB VRAM card, and a 16 GB VRAM card.
I did my share of processor, DRAM, and graphics card testing in the past few years.
8 GB -> check WQHD and the game Control
4 GB -> check WQHD and the game Subnautica -> textures pop in after a few seconds with an Nvidia GTX 960 with 4 GB VRAM in Windows at WQHD
I wanted to know which class of graphics card WQHD needs for my games: which GPU chip and how much VRAM. For testing I used those giveaway Epic Games titles.
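For the VRAM side of those tests, something like the rough sketch below is enough: it polls the GPU once a second while a game runs and keeps track of the peak value. It assumes an Nvidia card with nvidia-smi on the PATH (like the GTX 960 above); Intel or AMD cards would need a different query tool.

```python
# Minimal VRAM logger for game testing: poll the GPU once per second and
# track peak memory use. Assumes nvidia-smi is available on the PATH.
import subprocess
import time

def used_vram_mib() -> int:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader,nounits"],
        text=True,
    )
    return int(out.strip().splitlines()[0])

peak = 0
try:
    while True:
        cur = used_vram_mib()
        peak = max(peak, cur)
        print(f"current: {cur} MiB, peak: {peak} MiB")
        time.sleep(1)
except KeyboardInterrupt:
    print(f"Peak VRAM during the session: {peak} MiB")
```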
It's utilisation, and not having enough memory kills your FPS, no matter how much Nvidia fanboys with 10-12 GB may disagree.
For Ampere, this is already happening with all 8GB GPUs right now. They're either there already depending on what you play, or you'll find yourself tweaking far more than you'd like, OR you'll still find you can't eliminate stutter entirely.
There's no denying this, I've been experiencing this throughout using several GPUs in the past few decades, how they run out of juice, what you can do to prevent it, and what you can't do to eliminate the issue entirely. I've also seen what type of cards last FAR longer and return far greater value for money. They're never the midrangers with so-so VRAM. Those always go obsolete first, and when they do, their resale value is already in the shitter.
You mentioned Hogwarts. Now let's get real here. Say you bought an Ampere GPU. It was released some TWO years ago. Hogwarts is also about that age. That's not a great lifetime for a GPU to already be cutting back on settings, most notably texturing, which costs little GPU performance for the image quality it adds. It's easy to have at maximum; all you really need is VRAM, and it always helps the presentation quite a bit, which can't be said of various post effects. It's not the end of the world, no, but can you really say you've got a well-balanced GPU at that point, if it can otherwise quite easily produce enough frames?
I believe we can and should expect more of products that keep rapidly increasing in price.
Hogwarts is indeed 2 years old, but not many games actually use more VRAM than Hogwarts maxed out, even today. I literally picked one of the worst VRAM hogs, if not the worst. Also, you have to take into account that the VRAM issue occurs when using maxed-out RT on TOP of the ultra textures. And the context here is: sure, goddamn it Ngreedia, had they put 12 GB of VRAM on my 3060 Ti, I would be able to run Hogwarts maxed out with RT, while now I have to lower texture quality to high to do that. GREAT. Problem is, the alternative product (the one with more VRAM) wouldn't let me run the game with RT at all (too freaking slow).
So sure, you can judge a product in a vacuum, but isn't it better to compare it to what else is available at that price point? That's where Nvidia thrives, because sure, a lot of their products lack this, that, or the other, but they are usually better overall than their competition. In which case, it doesn't matter what they lack if they are still the better choice, does it?
E.g. it's quite fascinating that a measly 3060 Ti, with its 8 GB of VRAM choking it, can run that game with better image quality than a 6800 XT (unless you don't like RT, I guess). Fascinating, no?
With that said, I think the minimum for the new cards should be 12 GB (5060, 9060 XT, the $300 tier), but I don't think that's going to happen, lol.