Friday, January 21st 2022

Intel Arc Alchemist Xe-HPG Graphics Card with 512 EUs Outperforms NVIDIA GeForce RTX 3070 Ti

Intel's Arc Alchemist discrete lineup of graphics cards is scheduled for launch this quarter. We are getting some performance benchmarks of the DG2-512EU silicon, representing the top-end Xe-HPG configuration. Thanks to a discovery by the well-known hardware leaker TUM_APISAK, we have a benchmark entry in the SiSoftware database that shows Intel's Arc Alchemist GPU with 4096 cores and, according to the benchmark report, just 12.8 GB of GDDR6 VRAM. This is likely just an error in the report, as this GPU SKU should be paired with 16 GB of GDDR6 VRAM. The card was reportedly running at a 2.1 GHz frequency; however, we don't know whether this represents base or boost clocks.

When it comes to actual performance, the DG2-512EU GPU managed to score 9017.52 Mpix/s, while NVIDIA's GeForce RTX 3070 Ti managed 8369.51 Mpix/s in the same test group. Comparing the two cards in floating-point operations, Intel has an advantage in the half-float, double-float, and quad-float tests, while NVIDIA holds the single-float crown. Overall, this represents a roughly 7% advantage for Intel's GPU, meaning that Arc Alchemist has the potential to stand up to NVIDIA's offerings.
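For reference, the headline figures follow from simple arithmetic. A minimal sketch, assuming 8 FP32 ALUs per Xe-HPG EU (the usual figure for Intel's Xe vector engines) and using only the scores quoted above:

```c
#include <stdio.h>

int main(void) {
    /* 512 EUs at 8 ALUs ("cores") per Xe-HPG EU */
    int alus = 512 * 8;                      /* = 4096 */

    /* Aggregate SiSoftware scores quoted above, in Mpix/s */
    const double dg2_512   = 9017.52;
    const double rtx3070ti = 8369.51;
    double lead = (dg2_512 / rtx3070ti - 1.0) * 100.0;

    printf("ALUs: %d, DG2-512 lead over RTX 3070 Ti: %.1f%%\n", alus, lead);
    return 0;
}
```

Running this prints 4096 ALUs and a lead of about 7.7%, which is where the "roughly 7%" figure above comes from.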
Sources: SiSoftware Benchmark Database, @TUM_APISAK (Twitter), via VideoCardz

95 Comments on Intel Arc Alchemist Xe-HPG Graphics Card with 512 EUs Outperforms NVIDIA GeForce RTX 3070 Ti

#51
AusWolf
HD64G: A mistake there. 6nm is an enhanced 7nm node, and since it allows a bit higher clocks or lower power draw, AMD will refresh their whole RX 6x00 lineup using it. They just started with the low end because the 6500 XT/6400 core chips have been in production for at least 2-3 months now, firstly for notebooks, and the ones on desktop are the lower-binned ones.
Not to mention that it's all irrelevant to this article, as Intel will probably use their own fabs to make GPUs, just like they do for their CPUs.
#52
Vya Domus
silentbogo: OpenCL has a wa-a-a-ay more significant overhead
Not true. GPGPU APIs have very little overhead by their very design; whenever you see something being faster under CUDA vs OpenCL, it's usually because they're using libraries from Nvidia, hand-written in their proprietary assembly language so that nobody can match their performance.
#53
ModEl4
The results are essentially meaningless for extracting game performance, but they do confirm some odd design choices Raja's team made in balancing(?) the design. When he left AMD, Tom Piazza was the head of Intel's graphics group, if I remember correctly, and although their designs were let down mainly by driver/software support, the developer relations program, budget constraints, etc., they had three excellent design anchors: a media engine better than AMD's Polaris, a pixel backend better than Polaris, and a very competitive EU design from an area/power perspective; the DX feature set was also better than Polaris's.

Raja's team's design choices are strange in some regards: they have double the matrix/FP ratio versus Nvidia Ampere (though Ada Lovelace will probably match it by making both FP units FP/INT capable and also beefing up the tensor core inside the SM, tpucdn.com/gpu-specs/images/g/930-sm-diagram.jpg), and the FP64/FP128 (double and quad) to FP32 ratios are also higher than Ampere's, which is not needed for the gaming market, along with other choices that all together increase the transistor requirements without offering equivalent returns for the gaming sector.

The design is very forward-looking and more feature-rich than RDNA2 (which is inferior even to 2018's Turing...), but with RDNA2 in every console I doubt the design choices will bear fruit beyond providing a good base to build drivers/software for future designs.
#54
silentbogo
Vya Domus: Not true. GPGPU APIs have very little overhead by their very design; whenever you see something being faster under CUDA vs OpenCL, it's usually because they're using libraries from Nvidia, hand-written in their proprietary assembly language so that nobody can match their performance.
Why is it not true? OpenCL is good for portability and flexibility, but it compiles kernels at runtime or uses an intermediate form for pre-compiled kernels, which not only adds significant delay before your GPU can even start executing the code, but also makes it less optimized in general (especially if you have a fixed hardware configuration). Think of it as .NET or the JVM, but for GPGPU.
"Hand-written in proprietary assembly language" is a bit of a stretch for CUDA, but that's why it is faster than OpenCL - an API written and optimized for specific hardware. OpenCL was made for any and all hardware. But if you want to write fast code for AMD cards, there are many alternatives, like Vulkan Compute, DirectCompute, HIP, etc. If anything, HIP is probably the closest contender to CUDA in the GPGPU API wars, even though nobody talks about it.
#55
Vya Domus
silentbogo"handwritten in proprietary assembly language" is a bit of a stretch for CUDA
It's literally the truth, check this out : cnugteren.github.io/tutorial/pages/page13.html This is a guy that tried to match one of Nvidia's library for linear algebra using OpenCL and he wasn't able to, this is what he concluded :
In other words, to get more performance, we'll have to go to assembly level. This will allow us to: (1) schedule instructions for maximum ILP, (2) save precious registers to increase register tiling, (3) use 32-bit addresses, and (4) ensure that there are no register bank-conflicts. With extra registers, we can further increase the tile-sizes and get better performance. But we can't do all of this in OpenCL nor in CUDA: our optimisation story ends here. There are however community-built assemblers for the Fermi architecture and the Maxwell architecture (see below), but there is none for the Kepler architecture.
They never disclose details about the assembler, hence you can never match what Nvidia does in-house with their software; it has nothing to do with the API itself.
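For context on what is and isn't exposed: the lowest level NVIDIA documents publicly is PTX, a virtual ISA that the closed-source assembler (ptxas, or the driver's JIT) turns into the actual SASS machine code, which is where the instruction scheduling and register-bank decisions from the quote happen. A minimal sketch (the hand-written PTX and kernel name are made up for illustration) of loading hand-written PTX through the CUDA driver API, which is as close to the metal as the public toolchain goes:

```c
#include <stdio.h>
#include <cuda.h>

/* Hand-written PTX for a do-nothing kernel. Anything below this level
   (SASS) is produced only by NVIDIA's undocumented assembler. */
static const char *kPtx =
    ".version 6.0\n"
    ".target sm_70\n"
    ".address_size 64\n"
    ".visible .entry noop() { ret; }\n";

int main(void) {
    CUdevice dev;
    CUcontext ctx;
    CUmodule mod;
    CUfunction fn;

    cuInit(0);
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);

    /* The driver JIT-assembles the PTX into SASS for this GPU; how the
       resulting machine code is scheduled is out of the programmer's hands. */
    if (cuModuleLoadData(&mod, kPtx) != CUDA_SUCCESS) {
        fprintf(stderr, "PTX load failed\n");
        return 1;
    }
    cuModuleGetFunction(&fn, mod, "noop");
    cuLaunchKernel(fn, 1, 1, 1, 1, 1, 1, 0, NULL, NULL, NULL);
    cuCtxSynchronize();

    cuModuleUnload(mod);
    cuCtxDestroy(ctx);
    return 0;
}
```

The community-built assemblers mentioned in the quote (for Fermi and Maxwell) exist precisely to fill that gap below PTX.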
#56
MrMeth
Caring1: That's comparing CUDA to OpenCL.
Does the Intel GPU even have CUDA capabilities?
It can't have CUDA; that's Nvidia-only IP.
#57
aQi
Price is definitely the concern that could pull those strings; otherwise it's always Nvidia.
#58
hhumas
I will see when it's available to the common consumer. I am still in search of a 30-series Nvidia GPU and am using a 1080 Ti.
#59
chrcoluk
If it really is 3070 Ti-level performance with 16 GB of VRAM for the first-gen top-end card, I think that's a reasonable start.
#60
silentbogo
Vya Domus: It's literally the truth
I think you've completely misread the whole thing. Just because a wall of text has the word "assembler" in it does not mean CUDA or its libraries are written in assembly. That'd require an army of bearded dudes being locked up somewhere in a dark and musty bomb shelter for two decades.
What he meant is that he has no fine control over instruction scheduling; that's about it. And at the bottom there is a disclaimer, "Last updated in 2014 - information might be outdated", because it definitely is (and even in 2014 it was partially wrong).
#61
SaLaDiN666
Vayra86: LOL.

Yeah, no. Just no. You must not have looked around too much the last decade to say the above... Leaks are 70% marketing if not more and it happens at every company and product release these days.

Even in politics 'the leak' is a tried and tested tool to gauge the public response before actually finalizing an idea.
There is a significant difference between your feelings and reality.

Firstly, the leaks are being actively propagated by hardware sites, because they are the reason you visit and read them. People don't work for free.

Secondly, AMD and Intel mostly give absolutely ****** about the leaks, because the Average Joe and prosumers neither follow them nor are interested in them.

Prosumers get all the materials in advance and everything is actively consulted with them; that's where the majority of leaks come from, my friend. That's where the real money is. Anything exposed in leaks is already known to them.

The Average Joe doesn't read this site either and doesn't follow the leaks; marketing-wise, whoever has the fastest CPU influences him the most.

So no, leaks are actually a terrible marketing strategy with minimal reach.
#62
Fouquin
Vayra86: LOL.

Yeah, no. Just no. You must not have looked around too much the last decade to say the above... Leaks are 70% marketing if not more and it happens at every company and product release these days.

Even in politics 'the leak' is a tried and tested tool to gauge the public response before actually finalizing an idea.
I guess every company I've worked with in the last decade is just pretending to fire people for leaking data on upcoming products, then.
#63
dj-electric
Crackong: Since you are aware of that,
you should be aware of how awful it was when we saw "new leaks" before the 12th gen launch day.
The "leaks" did not last for a few days, or a week; they lasted for a whole month.
Every single working day within that month, a "new leak" popped up.

We all know NDAs aren't watertight,
but this?
This is Swiss cheese.

As I've mentioned before, this is either intentional, or Intel should fire the whole PR team for it.

Totally: If that's the case, then what's the point of an NDA if they aren't going to enforce it?
Leaks happen to Intel, AMD and NVIDIA all the time. If this is not your first year in the hardware world, you know that it just happens.
Are they happy about it? Sometimes yes and sometimes no. Often it's free publicity; often it's free poor publicity.

There's no need to romanticize punishments for breaking NDAs. Nobody is going to go through the effort of plugging a leak dam the size of Intel's evaluation program, with hardware sent to endless locations and in endless amounts. Eventually that hardware will reach the shelves, with or without leaks.

Now, refusing to extrapolate data from what you read online and going online on purpose just to get pissed at leaks - that's something I would never understand. TPU, as an example of a media outlet that can either share leaks or not, decides to share them, because that's what this website's staff is here for. It brings discussion, traffic, and pumps life into the website.

My best advice for someone who "can't stand" a series of leak reports: turn off the internet, or just don't browse hardware and PC websites that casually report these things.
#64
Caring1
MrMeth: It can't have CUDA; that's Nvidia-only IP.
Too slow, that's why it pays to read before you comment.
Totally: If that's the case, then what's the point of an NDA if they aren't going to enforce it?
NDAs cover the facts.
PR department "leaks" are intended to release highlights and build up hype.
#65
Crackong
dj-electric: Leaks happen to Intel, AMD and NVIDIA all the time. If this is not your first year in the hardware world, you know that it just happens.
Since you've mentioned that "leaks happen to Intel, AMD and NVIDIA all the time",
I suggest you look in the TPU main news section.
Please check when the "leaks" of detailed AMD Ryzen 6000 mobile CPU specs appeared, relative to the time of the actual announcement.
Please check when the "leaks" of detailed Intel 12th gen CPU specs appeared, relative to the time of the actual announcement.
AMD "leaks" happened within several days before the announcement.
Intel "leaks" happened months before the announcement.
That is a very significant difference.

As I've mentioned,
At this point everybody and their dog knows Intel "leaks" are not leaks but an Intel PR stunt.

By the way,
I see you are being quite offensive to others and getting personal.
I suggest you calm down and focus on the topic.
#66
dj-electric
Crackong: At this point everybody and their dog knows Intel "leaks" are not leaks but an Intel PR stunt.
You were told exactly why Intel gear leaks occur so early. Thousands of eval boards and CPUs make it to dozens of facilities around the world. Intel themselves have claimed to have 10,000 validation boards running for ADL-S when QA reached its peak, with Intel's own people claiming a large increase in all qualified sample shipments to its partners as of late.
This stuff gets into many people's hands, some of whom don't care about their employer's NDA contracts. Often, when results are uploaded anonymously online via score-checking software like GeekBench or PugetBench, they're there forever, and the user can't always keep this stuff offline.

If you want to take issue with the amount of posted news about Intel's leaks, that's w1zzard@techpowerup.com.
Otherwise, crusading online like Don Quixote to accuse Intel of self-leaking is a humble but bizarre goal to spend hours on daily. If stuff leaks - good or bad - it's going to be reported by the media.
#67
Crackong
dj-electric: You were told exactly why Intel gear leaks occur so early. Thousands of eval boards and CPUs make it to dozens of facilities around the world. Intel themselves have claimed to have 10,000 validation boards running for ADL-S when QA reached its peak, with Intel's own people claiming a large increase in all qualified sample shipments to its partners as of late.
You speak as if Intel is the only company that sends out evaluation hardware in advance.

Everybody sends out evaluation hardware months ahead of a product launch,
but the "leak" swarm only applies to Intel hardware?

Come on, it is so obvious.

Just face the facts.
#68
dj-electric
Crackong: Just face the facts.
You were told the facts. Intel is currently a much larger manufacturing operation than AMD, validating and sending samples across the globe, with its many facilities and locations on multiple continents, as much as that's hard for you to read. I'm not the one running an online account 90% of whose comments are dedicated to bashing Intel; it's really an astounding dedication on your part, with cynicism or without it.

Do expect this to repeat to a degree with Alchemist products. This stuff goes everywhere and in large quantities, even if the silicon this time isn't directly made in one of Intel's fabs.

This online tribalism is getting old. It got old a good decade ago.
#69
sillyconjunkie
This is a great thread..seriously. A lot of contention and good content (for a change).

With all the supply chain follies, I think Intel is playing this smart.. I don't think the available data, as sparse as it is, represents the top-end part. Just a hunch..

There will be stock. Not sure how much will go to scalpers but the product will be more available than what we've seen lately.
AnarchoPrimitiv: How come whenever Intel dGPUs are talked about, everyone completely fails to mention that Intel is having them produced at TSMC, which means that these video cards will do absolutely NOTHING to alleviate supply shortages, and therefore absolutely NOTHING to lower prices....
For the nth time, TSMC is not the bottleneck in the supply chain issue, and most of Intel's ish is moving back in-house.

TSMC is growing because they deliver. They continue to sign new contracts and expand facilities for individual customers and in additional countries. That doesn't happen at the rate it is happening for TSMC without producing actual product... in contracted quantities.

Look elsewhere for the source of the issue.. It's not far downstream from TSMC.
#70
Crackong
dj-electric: You were told the facts. Intel is currently a much larger manufacturing operation than AMD, validating and sending samples across the globe, with its many facilities and locations on multiple continents, as much as that's hard for you to read. I'm not the one running an online account 90% of whose comments are dedicated to bashing Intel; it's really an astounding dedication on your part, with cynicism or without it.

Do expect this to repeat to a degree with Alchemist products. This stuff goes everywhere and in large quantities, even if the silicon this time isn't directly made in one of Intel's fabs.

This online tribalism is getting old. It got old a good decade ago.
I see.
You are not going to argue with logic and reason; instead you are going the personal route. Typical.

You could have compared some "leaks" across different manufacturers and found some way to prove your point, but you didn't.
Instead you stray from the topic again and get offensive.

If you truly believe your "facts", prove them with data.
Digging through somebody else's comments won't help you prove your point.
#71
arni-gx
So, Microsoft wants to name their desktop PC GPU like this? Like what, Intel Xe HPG 1070 Ti 16 GB...??
#72
Vayra86
Fouquin: I guess every company I've worked with in the last decade is just pretending to fire people for leaking data on upcoming products, then.
How many got fired?
#73
Vya Domus
silentbogo: Just because a wall of text has the word "assembler" in it does not mean CUDA or its libraries are written in assembly.
They are; it boggles my mind why you refuse to accept this basic fact. And it's not like this is something special - every vendor provides libraries hand-tuned by themselves; the difference is that Nvidia doesn't want anybody else doing the same thing.
silentbogo: What he meant is that he has no fine control over instruction scheduling; that's about it.
That's because with Nvidia you don't have access to the assembler, so you can never match what they do.
#74
sith'ari
dj-electric: You were told the facts. Intel is currently a much larger manufacturing operation than AMD, validating and sending samples across the globe, with its many facilities and locations on multiple continents, as much as that's hard for you to read. I'm not the one running an online account 90% of whose comments are dedicated to bashing Intel; it's really an astounding dedication on your part, with cynicism or without it.

Do expect this to repeat to a degree with Alchemist products. This stuff goes everywhere and in large quantities, even if the silicon this time isn't directly made in one of Intel's fabs.

This online tribalism is getting old. It got old a good decade ago.
Well, 90% is nothing :p - my own comments are about 100% bashing Intel, because of the illegal tactics that Intel used in order to gain this dominance and money, about... a good decade ago (I liked your phrase).
That's why Intel is the only company of those three (legal fact here) that was found guilty by the European Commission of "actions which harmed millions of European consumers".
It's good to read every now and then how economically powerful Intel is, but it's even better to remember, every now and then, the means that Intel used (a good decade ago) to gain the power that they are using today against those other companies.
I'll never forget Intel's past practices; that's why I'll never be lenient with Intel, and my comments will be 100% negative towards Intel...
#75
Fouquin
Vayra86: How many got fired?
Exactly 432 and one half. Why do you expect me to have information on every employment number? Even if I were told the exact figure, I'd have no reason to provide it, name names, or otherwise. I know that it happens, and that companies have policies in place that are enforced when information leaks can be traced back to an individual or team. This isn't news to anyone in the industry. Loose lips sink ships; you signed that NDA for a reason.