Wednesday, June 22nd 2022

Intel Arc A370M Graphics Card Tested in Various Graphics Rendering Scenarios

Intel's Arc Alchemist graphics cards have launched in the laptop/mobile space, and everyone is wondering just how well the first generation of discrete graphics performs in actual, GPU-accelerated workloads. Tellusim Technologies, a software company located in San Diego, has managed to get hold of a laptop featuring an Intel Arc A370M mobile graphics card and benchmark it against competing solutions. Instead of the Vulkan API, the team decided to use the D3D12 API for the tests, as Vulkan usually produces lower results on the new 12th-generation graphics. Running driver version 30.0.101.1736, the GPU was first put through a standard geometry workload of triangles and batches. Meshlet size is set to 69/169, and the job is as big as 262K meshlets. The total amount of geometry is 20 million vertices and 40 million triangles per frame.
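As an aside, mesh-shader support on D3D12 is an optional feature tier, so any benchmark like this has to confirm it before running the meshlet paths. Below is a minimal sketch of that check, assuming an already-created ID3D12Device; the helper name is ours for illustration, not Tellusim's code:

#include <windows.h>
#include <d3d12.h>

// Illustrative helper: returns true if the adapter exposes the D3D12
// mesh-shader pipeline (Tier 1), which the meshlet tests rely on.
bool SupportsMeshShaders(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 options7 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7,
                                           &options7, sizeof(options7))))
        return false; // Query not recognized: older runtime or driver.
    return options7.MeshShaderTier >= D3D12_MESH_SHADER_TIER_1;
}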

Using tests such as Single DIP (drawing 81 instances with u32 indices without going down to the meshlet level), Mesh Indexing (mesh shader emulation), MDI/ICB (Multi-Draw Indirect or Indirect Command Buffer), Mesh Shader (mesh shaders rendering mode), and Compute Shader (compute shader rasterization), the Arc GPU produced some exciting numbers, measured in millions or billions of triangles per second. Below, you can see the results of these tests.
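For context, the MDI/ICB mode corresponds to D3D12's ExecuteIndirect path, where the GPU consumes a buffer of draw arguments instead of the CPU recording each draw. A hedged sketch of that technique follows; the helper and buffer names are our own illustration, not the benchmark's implementation, and error handling is omitted:

#include <windows.h>
#include <d3d12.h>

// Illustrative helper: records a Multi-Draw-Indirect style batch.
// 'args' holds D3D12_DRAW_INDEXED_ARGUMENTS records (possibly written
// by a GPU culling pass); 'count' holds the actual draw count.
void RecordMultiDrawIndirect(ID3D12Device* device,
                             ID3D12GraphicsCommandList* cmdList,
                             ID3D12Resource* args,
                             ID3D12Resource* count,
                             UINT maxDraws)
{
    // One indirect argument per record: an indexed draw call.
    D3D12_INDIRECT_ARGUMENT_DESC arg = {};
    arg.Type = D3D12_INDIRECT_ARGUMENT_TYPE_DRAW_INDEXED;

    D3D12_COMMAND_SIGNATURE_DESC desc = {};
    desc.ByteStride = sizeof(D3D12_DRAW_INDEXED_ARGUMENTS);
    desc.NumArgumentDescs = 1;
    desc.pArgumentDescs = &arg;

    // In real code the signature would be created once and cached.
    ID3D12CommandSignature* signature = nullptr;
    device->CreateCommandSignature(&desc, nullptr, IID_PPV_ARGS(&signature));

    // The GPU reads up to maxDraws argument records from 'args';
    // 'count' limits how many draws actually execute.
    cmdList->ExecuteIndirect(signature, maxDraws, args, 0, count, 0);
    signature->Release();
}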
Next, we have a ray tracing test with compute shader (CS) and hardware (API) rendering models. These cover the CS Static (ray tracing with 40M triangles total), CS Dynamic Fast (ray tracing with 4.2M triangles and 2.9M vertices total), and CS Dynamic Full (CS Dynamic Fast, but with a full BLAS rebuild instead of a fast BVH update) tests. The API ones include API Static, API Dynamic Fast, and API Dynamic Full, using the API-provided ray tracing techniques. The timings shown below represent BLAS update / scene tracing times.
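In DXR terms, the Fast/Full distinction is the choice between refitting an existing BLAS (the PERFORM_UPDATE flag) and rebuilding it from scratch. A minimal sketch of both paths, assuming the geometry description and buffers already exist; the helper is illustrative, not the benchmark's code:

#include <windows.h>
#include <d3d12.h>

// Illustrative helper: rebuilds or refits a bottom-level acceleration
// structure. 'fullRebuild = false' mirrors "Dynamic Fast" (BVH refit),
// 'fullRebuild = true' mirrors "Dynamic Full" (rebuild from scratch).
void BuildBlas(ID3D12GraphicsCommandList4* cmdList,
               const D3D12_RAYTRACING_GEOMETRY_DESC* geometry,
               D3D12_GPU_VIRTUAL_ADDRESS blas,
               D3D12_GPU_VIRTUAL_ADDRESS scratch,
               bool fullRebuild)
{
    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC build = {};
    build.Inputs.Type = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
    build.Inputs.DescsLayout = D3D12_ELEMENTS_LAYOUT_ARRAY;
    build.Inputs.NumDescs = 1;
    build.Inputs.pGeometryDescs = geometry;
    build.Inputs.Flags = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_ALLOW_UPDATE;

    if (!fullRebuild)
    {
        // Refit: much faster, but BVH quality degrades as geometry deforms.
        build.Inputs.Flags |= D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_PERFORM_UPDATE;
        build.SourceAccelerationStructureData = blas; // update in place
    }
    build.DestAccelerationStructureData = blas;
    build.ScratchAccelerationStructureData = scratch;

    cmdList->BuildRaytracingAccelerationStructure(&build, 0, nullptr);
}

A refit reuses the existing BVH topology, which is why "Dynamic Fast" timings are typically far lower than full rebuilds, at the cost of tracing quality for heavily deformed geometry.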
The team also tested how much memory the GPU needs for BLAS and Scratch buffers, as shown below.
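Those sizes are reported by the driver itself: before allocating anything, a DXR application queries how much result and scratch memory a given build will need. A short sketch, again with illustrative naming:

#include <windows.h>
#include <d3d12.h>

// Illustrative helper: asks the driver how many bytes it wants for the
// acceleration structure itself (BLAS) and for the temporary scratch
// buffer used during the build.
void QueryBlasSizes(ID3D12Device5* device,
                    const D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_INPUTS& inputs,
                    UINT64* blasBytes, UINT64* scratchBytes)
{
    D3D12_RAYTRACING_ACCELERATION_STRUCTURE_PREBUILD_INFO info = {};
    device->GetRaytracingAccelerationStructurePrebuildInfo(&inputs, &info);
    *blasBytes    = info.ResultDataMaxSizeInBytes; // BLAS allocation
    *scratchBytes = info.ScratchDataSizeInBytes;   // build scratch
}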
Overall, the Intel Arc GPU's performance is underwhelming, largely due to the poor state of the drivers. Tellusim's GravityMark benchmark even crashes on the D3D12 API, meaning Intel has much work to do on the software side. Numbers for the Arc A370M can be roughly scaled up to its bigger sibling, the A770M, as the larger silicon has four times the compute power, allowing for ballpark performance estimates.
Source: Tellusim Blog

12 Comments on Intel Arc A370M Graphics Card Tested in Various Graphics Rendering Scenarios

#1
phanbuey
They need to iterate faster. Maybe they'll be a competitor after Battlemage.
#2
bonehead123
"Geezywheezy, I'm soooo suh_prizeeed"

Said absolutely nObOdY, hahahaha.......

#3
Jism
Yes, the Arcs are perhaps good in synthetic or compute workloads, but they lack any decent gaming power. People say it's the drivers, but Intel has had a year to optimize these drivers. It's been two years since the announcement, and I think the whole thing is DOA. It's another Vega; originally intended as a compute-based chip. And they're releasing similar versions now for "pro" consumers: videocardz.com/newz/intel-arc-a40-pro-graphics-card-appears-at-south-korean-regulatory-office

Intel is quite some years behind here. And I doubt Raja will ever be able to come up with something that even beats a 1080 Ti.
#4
chstamos
Intel have failed to deliver on so many fronts with this, it's not even funny. Underpowered, seemingly overpriced, way too late, with unoptimized drivers. They haven't got a single redeeming factor in this release. Had they at least managed to release on schedule (that is, two whole years ago), even this crap would fly, seeing how the market was starved for GPUs.

People have kept comparing this to the i740 negatively, but as a matter of fact, the i740 was a respectable, if behind-the-times, card. It cost half as much as the flagships of the era and provided performance on a par with the immediately previous generation's flagship, the Riva 128. This trainwreck wishes it was an i740. Let's hope they do better on the second iteration. The first one is dead on arrival and garbage.
#5
Jism
The delay in release is probably due to missed performance targets, or perhaps a truckload of bugs.

The i740 was so popular because it offered "some" 3D capabilities while being affordable. Intel coasted for ages on providing simple 2D VGA graphics built into their chipsets or motherboards, which eliminated the need for external graphics cards and made most office solutions cheaper overall. They can't compete with the two giants just like that.

Overall, the i740 was discontinued entirely, since it just didn't perform as well as the competition.
#6
zlobby
Just in time for 2030 release!
#7
InVasMani
What a failure. You gotta do a bit better than AMD/NVIDIA's bottom tier that's about to be phased out by a new generation.
#8
zlobby
InVasMani: What a failure. You gotta do a bit better than AMD/NVIDIA's bottom tier that's about to be phased out by a new generation.
When it comes to raw engineering talent, and not some shady marketing and monopoly tactics, it's not as easy as many think.
chstamos: Intel have failed to deliver on so many fronts with this, it's not even funny.
I for one find it hilarious. Karma is a b*atch.
#9
phanbuey
zlobby: When it comes to raw engineering talent, and not some shady marketing and monopoly tactics, it's not as easy as many think.

I for one find it hilarious. Karma is a b*atch.
They for sure did this to themselves, but to be fair, if you're comparing this to Iris, it's a huge engineering leap.

Just a generation late.
#10
maxfly
Sad, sad, sad. Everything we've seen so far has been a big fat zero. So much for a third player in the graphics realm helping to bring prices back in line :(
#12
Flydommo
Arc cards will have to be much cheaper if Intel wants to sell any of them at all. Now we know why only two major players have dominated the graphics card market for so long. Not even a giant like Intel, with a lot of iGPU experience, can match AMD's or Nvidia's level of performance, despite years of research and development.