Thursday, April 16th 2020
Intel Gen12 Xe iGPU Could Match AMD's Vega-based iGPUs
Intel's first integrated graphics solution based on its ambitious new Xe graphics architecture could match AMD's "Vega"-based iGPU solutions, such as the one found in the company's latest Ryzen 4000 series "Renoir" processors, according to leaked 3DMark FireStrike numbers posted by @_rogame. Benchmark results of a prototype laptop based on Intel's "Tiger Lake-U" processor surfaced on the 3DMark database. This processor embeds Intel's Gen12 Xe iGPU, which is purported to offer significant performance gains over current Gen11- and Gen9.5-based iGPUs.
The prototype 2-core/4-thread "Tiger Lake-U" processor with Gen12 graphics yields a 3DMark FireStrike score of 2,196 points, with a graphics score of 2,467 and a physics score of 6,488. These scores are comparable to those of 8 CU Radeon Vega iGPU solutions. "Renoir" tops out at 8 CUs, but shores up performance to 11 CU "Picasso" levels by other means: tapping the 7 nm process to increase engine clocks, improving the boosting algorithm, and modernizing the display and multimedia engines. Beyond that, AMD's iGPU is largely based on the same three-year-old "Vega" architecture. Intel's Gen12 Xe makes its debut with the "Tiger Lake" microarchitecture slated for 2021.
Source: _rogame (Twitter)
45 Comments on Intel Gen12 Xe iGPU Could Match AMD's Vega-based iGPUs
I wonder how long x86 has left outside gaming and workstation segments?
That goes for gaming, and it goes for x86 vs. ARM. The market has become so all-encompassing that there IS no one-size-fits-all. It also echoes in MS's Windows RT attempt, for example. People want Windows for specific reasons. Windows Phone... same fate. And even within Windows on x86, the cross-compatibility just doesn't happen.
Microsoft is missing a huge chunk of the mobile and wearables pie. ARM is their re-entry trajectory for these markets, so I really don't think they have abandoned ARM.
I was all-in with MS, but I've had so many bad experiences with their hardware that I've vowed to never buy anything with their name on it that isn't a mouse or keyboard. I'll use their OS and Office, but that's it; they've shown no real commitment to anything else. I don't even trust Surface. If you look at that brand's track record, few devices have really been successful. Their App Store is a joke too. The few apps I've purchased or tried from there won't even install correctly or run after the fact. MS can't even master what other software companies have managed to do: install software on Windows!
However, Microsoft failing a few times doesn't mean they will also fail the next time they give it a try. Let's face it, the future is all mobile and MS has zero presence there, so 2+2=4? It's only a matter of time until we see their next attempt at it.
Their Android apps, like Office and the Edge browser, are pretty good imo.
One could say, yeah, it is just a bunch of 3D stages separated nicely into performance numbers; however, that would overlook the runtime of data in flight. It is the architecture that makes bandwidth possible. AMD has its shot, not by dint of its memory, but of its heterogeneous memory address space, if it can keep addressing to a minimum with the CPU serving the scalar graphics pipeline.
A GPU is engineered to function with whatever memory bandwidth is available, but to operate optimally it needs a specific minimum. There is no point in putting faster GPUs in APUs when the memory isn't getting faster, and there is no way around that: it's a hard limit.
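To put rough numbers on that hard limit, here's a minimal sketch of peak theoretical bandwidth computed from memory specs (the dual-channel DDR4-3200 and 128-bit GDDR5 configurations are illustrative assumptions, not figures from this thread):

```python
# Peak theoretical bandwidth = transfer rate (MT/s) x bus width (bytes) x channels.
def peak_bandwidth_gbs(mega_transfers_per_s, bus_width_bits, channels=1):
    return mega_transfers_per_s * 1e6 * (bus_width_bits / 8) * channels / 1e9

# Dual-channel DDR4-3200, as a "Renoir" laptop might use (shared by CPU and iGPU).
print(peak_bandwidth_gbs(3200, 64, channels=2))  # ~51.2 GB/s

# A modest discrete card: GDDR5 at 7000 MT/s on a 128-bit bus (all for the GPU).
print(peak_bandwidth_gbs(7000, 128))             # ~112.0 GB/s
```

Even a low-end discrete card has roughly double the bandwidth of the system memory an APU's iGPU has to share with the CPU.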
This is probably the last time I respond, as I have no clue what exactly you are arguing against; what you're writing is just borderline incoherent to me.
The external bandwidth has data the GPU is unaware of. That is the difference. At full speed, it takes 250MHz to read memory end to end. Every cycle, a single module of the GDDR5 system bursts just 4 bytes. It is not online memory like the registers are. Those are crazy.
I guess the correct term is,
Plus, GDDR5 is aligned. You get seriously worse performance when you introduce timings. Signals need to phase in.
Sadly, that doesn't stop me from trying. You might call this tilting at windmills, but one day (one day!) I want to have a coherent and comprehensible discussion with them.

You misunderstand the issues being raised against you. You are claiming that increasing external memory bandwidth wouldn't help iGPUs because they are limited by internal restrictions. While parts of what you say are true, the whole is not. There are of course bandwidth limitations to the internal interconnects and data paths of any piece of hardware, but these interconnects have massive bandwidth compared to any external memory interface, and these internal pathways are thus rarely a bottleneck. For an iGPU this is especially true, as the external memory bandwidth is comparatively tiny. Compounding this is the fact that architecturally the iGPUs are the same as their larger dGPU siblings, meaning they have the same internal characteristics. If what you say were true, then a Vega 64 at the same clocks as a Vega 8 iGPU would perform the same, as they would both be limited by internal bandwidth. They obviously don't, and thus aren't.
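To put back-of-the-envelope numbers on that comparison (the HBM2 specs for Vega 64 are public; the dual-channel DDR4-2400 laptop configuration for the Vega 8 is my assumption):

```python
# External memory bandwidth in GB/s: transfers/s x bytes per transfer.
vega64_hbm2 = 945e6 * 2 * (2048 / 8) / 1e9  # 945 MHz HBM2, double data rate, 2048-bit bus: ~483.8 GB/s
vega8_ddr4  = 2400e6 * (64 / 8) * 2 / 1e9   # DDR4-2400, 64-bit per channel, 2 channels: ~38.4 GB/s, shared with the CPU

print(vega64_hbm2 / vega8_ddr4)  # ~12.6x more external bandwidth for the dGPU
```

If internal pathways were really the limiting factor, that ~12.6x gap in external bandwidth wouldn't matter; it clearly does.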
Beyond this, your post is full of confused terminology and factual errors.
Simple ones first: how is supersampling (a form of anti-aliasing) related to anisotropic filtering (texture filtering)? And how does the computational cost of performing an operation like that become "bandwidth"? What you are describing is various aspects of the processing power of the GPU. Processing of course has its metrics, but bandwidth is not one of them, as bandwidth is a term for data transfer speed and not processing speed (unless used wrongly or metaphorically). Of course this could be relevant through the simple fact that no shader can compute anything without having data to process, which is dependent on external memory. You can't do anisotropic filtering on a texture that isn't available in time. But other than that, what you are saying here doesn't relate much to bandwidth.
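To make the distinction concrete, compare the two metrics side by side; the figures below are rough assumed numbers for a Vega 8 class iGPU, purely for illustration:

```python
# Processing speed and bandwidth are different quantities with different units.
compute_flops = 8 * 64 * 2 * 1.1e9  # 8 CUs x 64 lanes x 2 FLOPs (FMA) x 1.1 GHz: ~1.1 TFLOP/s
bandwidth_bps = 38.4e9              # assumed dual-channel DDR4-2400, in bytes/s

# "Machine balance": FLOPs available per byte the memory system can deliver.
print(compute_flops / bandwidth_bps)  # ~29 FLOPs/byte: plenty of math, starved for data
```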
Second: the statement "it takes 250MHz to read memory end to end" is meaningless. Hz is a measure of cycles per second. Any amount of memory can be read end to end at any rate of cycles/second if given sufficient time. Do you mean to read a specific amount of memory end to end within a specific time frame over a specific bus width? You need to specify all of these data points for that statement to make sense. Also, the point of memory bandwidth is to be able to deliver lots of data rapidly, not to read the entire memory end to end; most data in VRAM is unused at any given time. The point of increased memory bandwidth is thus not to be able to deliver the full amount of memory faster, but to keep delivering the necessary amount of data to output a frame at either a higher detail level/resolution at the same rate, or at the same detail level/resolution at a higher rate.
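As an illustration of "delivering the necessary amount of data" rather than reading memory end to end, here's a toy estimate of framebuffer traffic alone (the overdraw factor is an assumption, and real workloads add texture and geometry traffic on top):

```python
# Rough per-second framebuffer traffic: pixels x bytes/pixel x overdraw x fps.
def framebuffer_gbs(width, height, bytes_per_pixel, overdraw, fps):
    return width * height * bytes_per_pixel * overdraw * fps / 1e9

# 1080p at 60 fps, 4 bytes colour + 4 bytes depth per pixel, assumed 3x overdraw.
print(framebuffer_gbs(1920, 1080, 8, 3, 60))  # ~3.0 GB/s for colour/depth alone
```

The GPU never needs to stream all of VRAM every frame; it needs each frame's working set delivered on time.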
Also, how does 768 threads become 768Hz? A thread is not a cycle. 768 threads means 768 parallel (or sequential, though that would be rare for a GPU) threads at the given speed, for however long the threads are running. The statement you quoted seems to be saying that at a given speed (I assume this is provided in your source) 768 threads would be needed to overcome the latency of the system, as compared to the X number of threads (again, I assume provided in your source) the current system actually has. (Btw, where is that quote from? No source provided, and not enough context to know what you're quoting.) The quote certainly doesn't seem to say what you mean it to say.
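For what it's worth, thread counts like that usually fall out of a latency-hiding calculation (Little's law: concurrency needed = latency x issue rate). The numbers below are invented purely to show the shape of the math, not taken from your quote:

```python
# Little's law: in-flight work needed = latency x issue rate.
latency_cycles   = 384  # assumed memory latency, in GPU cycles
requests_per_cyc = 2    # assumed memory requests the system can accept per cycle

threads_needed = latency_cycles * requests_per_cyc
print(threads_needed)  # 768 threads in flight would keep the memory system saturated
```

So "768 threads" would be a statement about how much parallelism is needed to hide latency, not a frequency.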