Wednesday, June 22nd 2022
Intel Arc A370M Graphics Card Tested in Various Graphics Rendering Scenarios
Intel's Arc Alchemist graphics cards have launched in the laptop/mobile space, and everyone is wondering just how well the first generation of discrete graphics performs in actual GPU-accelerated workloads. Tellusim Technologies, a software company based in San Diego, managed to get hold of a laptop featuring an Intel Arc A370M mobile graphics card and benchmark it against competing solutions. Instead of the Vulkan API, the team used the D3D12 API for its tests, as Vulkan usually produces lower results on the new 12th-generation graphics architecture. With driver version 30.0.101.1736, the GPU was tested primarily in standard rendering workloads such as triangles and batches. Meshlet size is set to 69/169, and the job is 262K meshlets in total. The overall amount of geometry is 20 million vertices and 40 million triangles per frame.
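For context, triangle-throughput figures of this kind are simply the per-frame triangle count multiplied by the frame rate. A minimal sketch (the 40 million triangles per frame is from the test setup above; the 60 FPS figure is purely a hypothetical example, not a measured result):

```python
# From the test setup above: 40 million triangles rendered per frame.
TRIANGLES_PER_FRAME = 40_000_000

def throughput_gtris(fps: float) -> float:
    """Triangles per second, in billions, at a given frame rate."""
    return TRIANGLES_PER_FRAME * fps / 1e9

# At a hypothetical 60 FPS, the scene amounts to 2.4 billion triangles/s.
print(throughput_gtris(60))
```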
Using tests such as Single DIP (drawing 81 instances with u32 indices without going down to the meshlet level), Mesh Indexing (Mesh Shader emulation), MDI/ICB (Multi-Draw Indirect or Indirect Command Buffer), Mesh Shader (Mesh Shader rendering mode), and Compute Shader (Compute Shader rasterization), the Arc GPU produced some interesting numbers, measured in millions or billions of triangles. Below, you can see the results of these tests.

Next, we have a ray tracing test with Compute Shader (CS) and hardware (API) rendering modes. These cover CS Static (ray tracing with 40M triangles total), CS Dynamic Fast (ray tracing with 4.2M triangles and 2.9M vertices total), and CS Dynamic Full (the same as CS Dynamic Fast, but with a full BLAS rebuild instead of a fast BVH update). The API tests include API Static, API Dynamic Fast, and API Dynamic Full, which use API-provided ray tracing. The timings shown below represent BLAS update / scene tracing times.

The team also measured how much memory the GPU needs for BLAS and scratch buffers, as shown below.

Overall, the Intel Arc GPU's performance is not great, largely due to the immature state of the driver. Tellusim's GravityMark benchmark crashes on the D3D12 API, meaning Intel still has much work to do to improve performance. Numbers for the Arc A370M can be roughly scaled up to its bigger brother, the A770M, since the larger silicon has four times the compute power.
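The four-times scaling claim above can be turned into a back-of-the-envelope estimate. This is a naive linear extrapolation, not a measurement: real scaling is limited by memory bandwidth, clocks, and drivers, and the function name and sample rate below are purely illustrative:

```python
def estimate_a770m(a370m_rate: float, scale: float = 4.0) -> float:
    """Best-case linear estimate, assuming the A770M has ~4x the
    A370M's compute power (per the article's rough scaling note)."""
    return a370m_rate * scale

# Hypothetical example: if the A370M hit 1.5 Gtris/s in some mode,
# a best-case linear estimate for the A770M would be 6.0 Gtris/s.
print(estimate_a770m(1.5))
```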
Source:
Tellusim Blog
12 Comments on Intel Arc A370M Graphics Card Tested in Various Graphics Rendering Scenarios
Said absolutely nObOdY, hahahaha.......
Intel is quite some years behind here. And I doubt Raja will ever come up with something that beats a 1080 Ti.
People have kept comparing this to the i740 negatively, but as a matter of fact, the i740 was a respectable, if behind-its-time, card. It cost half as much as the flagships of the era and provided performance on a par with the immediately previous generation's flagship, the Riva 128. This trainwreck wishes it was an i740. Let's hope they do better on the second iteration. The first one is dead on arrival and garbage.
The i740 was so popular because it offered "some" 3D capability while being affordable. Intel relied for years on providing simple 2D VGA graphics built into the chipset or motherboard, eliminating the need for an external graphics card and making things cheaper overall for most office solutions. They can't compete with the two giants just like that.
Overall, the i740 was discontinued entirely, since it simply didn't perform the way the competition did.
Just a generation late.