Monday, February 19th 2018

Intel Unveils Discrete GPU Prototype Development

Intel is making progress in its development of a new discrete GPU architecture, following its failed "Larrabee" project, which ended up as an HPC accelerator, and far older attempts such as the i740. This comes in the wake of the company's high-profile hiring of Raja Koduri, AMD's former Radeon Technologies Group (RTG) head. At the IEEE International Solid-State Circuits Conference (ISSCC) in San Francisco, the company unveiled slides pointing to the direction in which its GPU development is headed: essentially scaling up its existing iGPU architecture and bolstering it with mechanisms to better sustain high clock speeds.

The company's first 14 nm dGPU prototype, shown as a test chip at the ISSCC, is a two-chip solution. The first chip contains two key components, the GPU itself and a system agent; the second chip is an FPGA that interfaces with the system bus. The GPU component, as it stands now, is based on Intel's Gen 9 architecture and features three execution unit (EU) clusters. Don't derive numbers from this yet, as Intel is only trying to demonstrate a proof of concept. The three clusters are wired to a sophisticated power/clock management mechanism that efficiently manages the power and clock speed of each individual EU. There's also a double-clock mechanism that doubles clock speeds (of the boost state) beyond what today's Gen 9 EUs can handle on Intel iGPUs. Once a suitable level of energy efficiency is achieved, Intel will use newer generations of EUs and scale up EU counts, taking advantage of newer fab processes, to develop bigger discrete GPUs.
More slides follow.
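
As a rough illustration of the per-EU management the slides describe, here is a minimal sketch in Python. Everything in it, the clock and power figures, the utilization thresholds, and the policy itself, is an assumption made for illustration; Intel has not disclosed its actual algorithm.

```python
# Hypothetical sketch of per-EU clock management as described in the slides:
# each EU gets its own clock state based on utilization, and a 2x "double-clock"
# boost state is granted only while the chip stays inside its power budget.
# All names, numbers, and the policy itself are illustrative assumptions,
# not Intel's actual mechanism.

from dataclasses import dataclass

BASE_CLOCK_MHZ = 400        # assumed nominal EU clock
BOOST_CLOCK_MHZ = 800       # assumed "double-clock" boost state
POWER_BUDGET_W = 10.0       # assumed package power budget
EU_POWER_BASE_W = 0.25      # assumed per-EU power at base clock
EU_POWER_BOOST_W = 0.70     # assumed per-EU power at boost clock

@dataclass
class ExecutionUnit:
    utilization: float       # 0.0 .. 1.0, measured over the last interval
    clock_mhz: int = BASE_CLOCK_MHZ
    power_gated: bool = False

def manage_clocks(eus):
    """One management interval: gate idle EUs, then hand out the boost
    clock to the busiest EUs while the power budget allows it."""
    budget = POWER_BUDGET_W
    # Power-gate EUs with (almost) no work, so they cost nothing.
    for eu in eus:
        eu.power_gated = eu.utilization < 0.05
        if not eu.power_gated:
            budget -= EU_POWER_BASE_W    # every active EU pays the base cost

    # Busiest EUs get the doubled clock first, while budget remains.
    for eu in sorted(eus, key=lambda e: e.utilization, reverse=True):
        if eu.power_gated:
            eu.clock_mhz = 0
            continue
        extra = EU_POWER_BOOST_W - EU_POWER_BASE_W
        if eu.utilization > 0.75 and budget >= extra:
            eu.clock_mhz = BOOST_CLOCK_MHZ
            budget -= extra
        else:
            eu.clock_mhz = BASE_CLOCK_MHZ

# Example: a mixed workload across one hypothetical 8-EU cluster.
cluster = [ExecutionUnit(u) for u in (0.95, 0.90, 0.80, 0.40, 0.30, 0.10, 0.02, 0.0)]
manage_clocks(cluster)
for i, eu in enumerate(cluster):
    print(f"EU{i}: util={eu.utilization:.2f} clock={eu.clock_mhz} MHz gated={eu.power_gated}")
```

The point of the exercise is only that the decision granularity is per EU rather than per GPU, which, going by the slides, is what should let the prototype hold its doubled boost clock on busy units without blowing the overall power budget.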

Source: PC Watch

65 Comments on Intel Unveils Discrete GPU Prototype Development

#26
TheoneandonlyMrK
evernessinceThis isn't Raja's first step. He just joined the company last month, no way he did these designs in a month. Raja will have a hand in their next gen architecture and current / future drivers.
This is a quick hatchet job to see what they have, hence the FPGA interface. You may be right, but in timescale terms this prototype is actually on point for Raja to have had a hand in. But yes, I guess I'd agree someone at Intel could have had this idea a little bit longer than a month ago; possibly, but then the execution isn't all that.
Posted on Reply
#27
Vya Domus
john_Assuming they can throw enough money at the problem and come out with good GPUs, of course.
Given their track record, they failed to apply that strategy numerous times. Unless the money was thrown towards bribing OEMs.
theoneandonlymrkPlay fair dude, this is likely Raja's very first step towards something
Raja isn't a one-man army. He can't do miracles; people who are knowledgeable and skilled enough in this area are incredibly rare. Most of them are already working for AMD and Nvidia, and the rest at Qualcomm and ARM. Intel is more than likely pursuing a compute-oriented GPU to compete against Nvidia in the datacenter.
Posted on Reply
#28
TheoneandonlyMrK
Vya DomusGiven their track record, they failed to apply that strategy numerous times. Unless the money was thrown towards bribing OEMs.

Raja isn't a one-man army. He can't do miracles; people who are knowledgeable and skilled enough in this area are incredibly rare. Most of them are already working for AMD and Nvidia, and the rest at Qualcomm and ARM. Intel is more than likely pursuing a compute-oriented GPU to compete against Nvidia in the datacenter.
I disagree in part, because I think calling it a compute-oriented GPU could be both right and a simplification. It's right if they go the traditional discrete GPU route, but I'm half expecting something a bit more now I've seen this.

If they leverage the experience they have putting an FPGA to work in the data center, they could have a general-purpose accelerator instead of just a graphics or compute accelerator.

After all, the ease of reconfiguring FPGAs makes them the ultimate possible accelerator of anything.

This could sidestep any graphics performance gap by increasing common adoption of FPGA APIs, obviously within DirectX.

All chip companies are both diversifying and adopting more modular, many-accelerator designs; an FPGA usurps a lot of that in one package, so it's only a matter of time before we see them in consumer land.

All opinions on possibilities, too early to know tbh.
Posted on Reply
#30
lexluthermiester
RejZoRIt would be cool if we got a 3rd GPU player on the graphics market.
I think the best Intel can hope for is mid-tier standings. Of course, that's a big market, and if they can get a GPU that provides respectable performance, people might take them a bit more seriously.

My "Yes" vote was reserved and contingent on the idea that Intel is going to be competitive in the entry and mid-tier levels, which currently they aren't.
Posted on Reply
#31
bug
Captain_TomI could see Intel really carving out a decent niche in the lower-mid to lower-enthusiast set of products.

1) Not quite as strong as Nvidia/AMD's halo products
2) Not quite as good price/perf as AMD's midrange
3) But industry-leading perf/watt.

For their time, the 10-25 W Ivy Bridge through Broadwell parts had absolutely incredible perf/watt. But they just couldn't scale them up efficiently past even ~50 W...
Those were probably never meant to scale past their current TDPs anyway. Intel has Iris Pro for those that need more juice. Almost as rare as hen's teeth as far as availability is concerned, but the option is out there.
Posted on Reply
#32
qubit
Overclocked quantum bit
I always knew Intel could get into the discrete graphics card market if it wanted to and have said so before. I’m also confident that they can give NVIDIA decent competition in time.
Posted on Reply
#33
Captain_Tom
bugThose were probably never meant to scale past their current TDPs anyway. Intel has Iris Pro for those that need more juice. Almost as rare as hen's teeth as far as availability is concerned, but the option is out there.
Well, that's probably what Intel would say in damage-control mode lol. But they would be lying....

There were side-by-side comparisons of a GTX 650 and an Iris Pro part where they said "You can't tell the difference!" Then they announced they had a GPU with double the EUs coming the next generation (clearly trying to imply that they would be in the midrange segment for the first time ever). They wanted to get to the level where they could be in most gaming laptops, but they couldn't, and so AMD is providing them with their Vega M products.


P.S. Also keep in mind that even to get to "GTX 650 levels," Intel had to include ludicrously expensive eDRAM. So sure, they beat AMD for a year by ~30%, but their product was using 14 nm instead of 28 nm, and it cost 2-3x as much to make lol. That's not sustainable....
Posted on Reply
#34
renz496
TheLostSwedeThere already are a lot more GPU makers than Nvidia and AMD, but none of them are for PC's.

ARM, Imagination Technologies, Qualcomm and Vivante all make GPUs, as well as, technically, S3/VIA...
I most likely missed some companies as well, but the problem is, none of them can keep up with Nvidia due to the amount of money they're throwing at their R&D of new graphics architectures.

Even Intel isn't going to be able to catch up any time soon. At best I'd expect something mid-level for the first 2-3 years, as it's expensive, resource-intensive and time-consuming to make GPUs.
Sometimes I don't think it's about money. Companies like Qualcomm or Intel have the money needed for the required R&D, but the problem is that Nvidia is heavily focused on something many hardware makers don't like: software.
Posted on Reply
#35
dj-electric
mastrdrverIntel cannot make a GPU and never will, because it's a CPU company first.
That is so wrong, I don't even know from which angle I could explain why. I guess none would be best.
Posted on Reply
#36
Blueberries
Intel actually has a surprisingly good R&D department, so if they roll something out it's going to have a lot more engineering and thought put into it than just "similar performance at a reduced cost."

I'll agree with other comments in this thread that I'd be hard-pressed to believe they will catch up to NVIDIA anytime soon, but if they release something, I can almost guarantee it will be interesting and different.
Posted on Reply
#37
Captain_Tom
BlueberriesIntel actually has a surprisingly good R&D department
When they aren't blowing $7 billion on McAfee, lol.

They could have used just THAT dumb expenditure to make an entire GPU line-up, or possibly even a new CPU architecture that wouldn't have been curb-stomped by Zen...
Posted on Reply
#38
mak1skav
They probably want a piece of the deep-learning pie that Nvidia seems to dominate more and more each day.
Posted on Reply
#40
LemmingOverlord
TheLostSwedeThere already are a lot more GPU makers than Nvidia and AMD, but none of them are for PC's.

ARM, Imagination Technologies, Qualcomm and Vivante all make GPUs, as well as, technically, S3/VIA...
I most likely missed some companies as well, but the problem is, none of them can keep up with Nvidia due to the amount of money they're throwing at their R&D of new graphics architectures.

Even Intel isn't going to be able to catch up any time soon. At best I'd expect something mid-level for the first 2-3 years, as it's expensive, resource-intensive and time-consuming to make GPUs.
That's not 100% correct. All GPU IP can be used on a PC; it is not "x86-specific", and we've seen examples of x86 vendors licensing GPU IP for their low-budget kit (e.g. Intel & Imagination). But each GPU developer has their own reasons not to go into the PC market, and it's not for lack of budget. Qualcomm, ARM and Imagination could afford it (and God knows Imagination needed to diversify, but they opted for the easy way out: sell to the Chinese).

1 - It doesn't serve their strategy to diversify into the discrete desktop GPU market
2 - Their development focus is on low-power, mobile devices. Shifting to desktop discrete might upset the status quo and the investment outlook.
3 - Their GPU strategy is to supplement their SoC strategy in exclusivity (i.e. Adreno)
4 - reasons...

Basically you need someone with big balls and deep pockets to take on the powers that be...
Posted on Reply
#41
FordGT90Concept
"I go fast!1!11!1!"
Two things strike me as odd in these documents:
1) Why a dual-chip solution? Why separate the host interfacing completely from the GPU? Was one chip borrowed from some other project (e.g. a Wi-Fi controller) so they saved time/money by piggybacking on the basics of existing tech? This doesn't make sense by itself.

2) They are really focused on keeping power consumption to a minimum. This point is particularly interesting because it tells us where their focus is: portable/low-power devices. My first thought was Atom+4K, which they're already doing. Second thought is that Intel is very concerned about Vega M-powered Ryzen chips. That doesn't explain why Intel is pursuing discrete graphics, though.

Then it struck me: Intel wants to divorce themselves completely from NVIDIA, which means no more Intel/NVIDIA Optimus systems. AMD sucks right now in the laptop GPU market, so, other than crawling back to NVIDIA, Intel's only option is to scale up their IGPs to match NVIDIA's in gaming/workstation laptops.

Circling back to #1, why the two-chip solution? Intel owns the CPU and chipset. They're soon going to own the GPU too. Intel likely kept them separate so they could create a new interface (replacing the generic PCIe controller) tailored to the dual-GPU configuration (IGP + IGP expansion card). In a corporate environment, IT could order 100 units of the same computer with 20 IGP expansion cards, then only add the card to computers destined for people that need it. Presumably there is no additional setup, and the operating system will only see one GPU with expanded capabilities. The IGP can enable/disable EU clusters as demand increases/decreases.

It makes perfect sense. The i7-8809G may have tested that new interface too. They could also package the MCM on a PCB to be hooked into desktop PCIe slots. I'm still thinking there has to be something unique about the PCIe implementation here: perhaps a dedicated low-latency slot that removes most of the overhead by having a dedicated and constrained PCIe controller (one that only talks to one device).
Posted on Reply
#42
bug
About that #1, it's a PoC, I wouldn't read too much into it just yet.
Posted on Reply
#43
FordGT90Concept
"I go fast!1!11!1!"
If it was entirely benign, they wouldn't have mentioned it in the IEEE documents.
Posted on Reply
#44
LemmingOverlord
FordGT90ConceptTwo things strike me as odd in these documents:
1) Why a dual-chip solution? Why separate the host interfacing completely from the GPU? Was one chip borrowed from some other project (e.g. a Wi-Fi controller) so they saved time/money by piggybacking on the basics of existing tech? This doesn't make sense by itself.

2) They are really focused on keeping power consumption to a minimum. This point is particularly interesting because it tells us where their focus is: portable/low-power devices. My first thought was Atom+4K, which they're already doing. Second thought is that Intel is very concerned about Vega M-powered Ryzen chips. That doesn't explain why Intel is pursuing discrete graphics, though.

Then it struck me: Intel wants to divorce themselves completely from NVIDIA, which means no more Intel/NVIDIA Optimus systems. AMD sucks right now in the laptop GPU market, so, other than crawling back to NVIDIA, Intel's only option is to scale up their IGPs to match NVIDIA's in gaming/workstation laptops.

Circling back to #1, why the two-chip solution? Intel owns the CPU and chipset. They're soon going to own the GPU too. Intel likely kept them separate so they could create a new interface (replacing the generic PCIe controller) tailored to the dual-GPU configuration (IGP + IGP expansion card). In a corporate environment, IT could order 100 units of the same computer with 20 IGP expansion cards, then only add the card to computers destined for people that need it. Presumably there is no additional setup, and the operating system will only see one GPU with expanded capabilities. The IGP can enable/disable EU clusters as demand increases/decreases.

It makes perfect sense. The i7-8809G may have tested that new interface too. They could also package the MCM on a PCB to be hooked into desktop PCIe slots. I'm still thinking there has to be something unique about the PCIe implementation here: perhaps a dedicated low-latency slot that removes most of the overhead by having a dedicated and constrained PCIe controller (one that only talks to one device).
I guess the dual-chip approach plays to Intel's own experience in CPU design, where they separate a number of functions from the core into what they call the "system agent" (or uncore): mostly power management and memory management.

In the old days you'd have CPU + north bridge + south bridge; then Intel moved the north bridge onto the CPU. CPUs now have their own memory controller, PCIe controller, etc. That's what Intel is used to doing.

In this case, from what they're showing here, it seems they are using a modular design for expedience to show the proof of concept. Whatever route they take down the line will determine if this will be a single piece of silicon or a bunch of chips soldered together.

However, don't cream your pants just yet. This PoC is not a new architecture but a tweaked Gen9LP chip (HD Graphics 5xx series) attached to a system agent, with integrated voltage regulators to better manage power and frequencies... they're just trying to optimize efficiency rather than create an entirely new GPU design. Apparently they have low-power cores and *cough* "high-performing" *cough* cores (EUs), much like ARM has light- and heavy-workload cores. This design doesn't change anything architecturally (i.e. the GPUs are as crappy as before), but it does bring power savings and extended battery life (where battery life is relevant).

Superimpose this on the PCIe interface and you can optimize any type of interface: EMIB, add-in card, etc...

It's a bit "meh" to be honest, but Raja probably told people they needed to do *something* for IEEE.
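
If the big.LITTLE-style reading above is right, the interesting question is how work would be steered between the two EU flavours. Here is a purely hypothetical sketch in Python of one possible policy; the pool sizes, the latency-sensitivity flag and the threshold are all invented for illustration, and nothing of the sort has been confirmed by Intel.

```python
# Hypothetical big.LITTLE-style dispatch between two EU pools, illustrating
# the "low-power EUs + high-performing EUs" idea above. Pool sizes, the
# latency-sensitivity flag, and the threshold are invented.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Workload:
    name: str
    threads: int                 # amount of parallel work
    latency_sensitive: bool      # e.g. UI composition vs. background compute

@dataclass
class EUPool:
    name: str
    eu_count: int
    queue: List[Workload] = field(default_factory=list)

def dispatch(work, little, big, big_threshold=64):
    """Send small or latency-sensitive work to the low-power pool,
    and wide throughput work to the high-performance pool."""
    if work.latency_sensitive or work.threads < big_threshold:
        little.queue.append(work)
        return little
    big.queue.append(work)
    return big

little = EUPool("low-power EUs", eu_count=8)
big = EUPool("high-performance EUs", eu_count=16)

for w in (Workload("desktop composition", 16, True),
          Workload("video decode post-process", 32, False),
          Workload("compute kernel", 512, False)):
    target = dispatch(w, little, big)
    print(f"{w.name} -> {target.name}")
```

Any real steering would live in the driver or the hardware scheduler; the point is simply that the power win would come from where the work lands, not from any architectural change to the EUs themselves, which fits the "nothing changes architecturally" observation above.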
Posted on Reply
#45
iO
It looks a lot more like a test vehicle for their iGPU in discrete form, which they might build every time they test and tune a new generation rather than fabbing the whole CPU...
Posted on Reply
#46
Gasaraki
ZoneDymoGTX1180*
It's most likely going to be called GTX2080.
Posted on Reply
#47
Casecutter
theoneandonlymrkthis is likely Raja's very first step towards something
You realize Raja is just tasked with picking up all the pieces and interesting tidbits Intel has developed and tried (and holds patents on) over the years, and will now be sorting through all that to see how it might string together into something that could be brought to market as a discrete GPU offering (without stepping on Nvidia/AMD proprietary technologies and ending up in litigation) and start making an ROI on all of Intel's time and investment. (That's a nice run-on!)
Posted on Reply
#48
GAR
Larrabee? Is that you? Back from the dead?
Posted on Reply
#49
TheoneandonlyMrK
CasecutterYou realize Raja is just tasked with picking up all the pieces and interesting tidbits Intel has developed and tried (and holds patents on) over the years, and will now be sorting through all that to see how it might string together into something that could be brought to market as a discrete GPU offering (without stepping on Nvidia/AMD proprietary technologies and ending up in litigation) and start making an ROI on all of Intel's time and investment. (That's a nice run-on!)
Read the thread, pal, and realise I know; I commented as much after that post.

I said in that post (did you read even one line?):

"Play fair dude, this is likely Raja's very first step towards something, i.e. Intel's present best iGPU with hybrid shaders and an FPGA added to make up for all that's missing if they don't fit CPU cores and, more importantly, the rest of its supporting circuitry related to specific purposes, i.e. power control. But the FPGA mainly acts as an interface between Intel's proprietary inter-chip interface type and PCIe; since it's clear the iGPU was designed without PCIe in mind, it makes sense to test what they can expect before scaling up the design. They won't actually make many of these even for themselves; it's a stepping-stone chip, clear as day."


Also

The shop-bought ones are four or five years out IMHO, and certainly won't be a dual-chip solution; maybe MCM, though.
Because that FPGA could possibly be a game changer if used to pump the GPU's core performance intelligently, and/or to directly drive a new API for mainstream use-case acceleration. Intel already do FPGAs in servers, so they know it's got legs.

Later on I said this too:

I'm half expecting something a bit more now I've seen this.

If they leverage the experience they have putting an FPGA to work in the data center, they could have a general-purpose accelerator instead of just a graphics or compute accelerator.

After all, the ease of reconfiguring FPGAs makes them the ultimate possible accelerator of anything.

This could sidestep any graphics performance gap by increasing common adoption of FPGA APIs, obviously within DirectX.

All chip companies are both diversifying and adopting more modular, many-accelerator designs; an FPGA usurps a lot of that in one package, so it's only a matter of time before we see them in consumer land.


So I clearly get that Raja grabbed whatever was near; it didn't have external connections, just inter-chip ones, so they added an FPGA, which coincidentally could be a viable co-processor itself.

Seems another news-aggro site agrees with me, but they put a name to the purpose.

Edge computing, not gaming per se; that would be a shame.
Posted on Reply
#50
Vayra86
theoneandonlymrkPlay fair dude, this is likely Raja's very first step towards something, i.e. Intel's present best iGPU with hybrid shaders and an FPGA added to make up for all that's missing if they don't fit CPU cores and, more importantly, the rest of its supporting circuitry related to specific purposes, i.e. power control. But the FPGA mainly acts as an interface between Intel's proprietary inter-chip interface type and PCIe; since it's clear the iGPU was designed without PCIe in mind, it makes sense to test what they can expect before scaling up the design. They won't actually make many of these even for themselves; it's a stepping-stone chip, clear as day.

The shop-bought ones are four or five years out IMHO, and certainly won't be a dual-chip solution; maybe MCM, though.
Because that FPGA could possibly be a game changer if used to pump the GPU's core performance intelligently, and/or to directly drive a new API for mainstream use-case acceleration. Intel already do FPGAs in servers, so they know it's got legs.
You present some solid points there, and I feel four to five years out is realistic. However, consider what other companies can and will do in five years. If anything, this relies mostly on Intel's ability to improve the GPU in a fundamental sense, combined with Raja's expertise on the rest. That is quite a gamble, and one I wouldn't put money on.
Posted on Reply