Tuesday, May 8th 2018
Intel Could Unveil its Graphics Card at 2019 CES
It looks like Intel is designing its discrete graphics processor at a breakneck pace, under a team put together by Raja Koduri. Development is moving so fast that the company could have a working product to show the world by the 2019 International CES, held in early January next year. Intel's push into discrete graphics is likely motivated by a survival instinct: not falling behind NVIDIA and AMD in building massively parallel architectures that cash in on two simultaneous tech booms, AI and blockchain computing.
A blessing in disguise for gamers is the restoration of competition. NVIDIA has been ahead of AMD in PC graphics processor performance and efficiency since 2014, with the latter only playing catch-up in the PC gaming space. AMD's architectures have proven efficient in other areas, such as blockchain computing. NVIDIA, on the other hand, has invested heavily in AI, with specialized components on its chips called "tensor cores," which accelerate building and training neural networks.
Source:
TweakTown
38 Comments on Intel Could Unveil its Graphics Card at 2019 CES
It's just that I don't think they can do it this fast. I would believe a CES 2020 paper launch, not a 2019 one.
And sure, Intel knows how to lose battles. They lose one almost every other month in some segment or another.
If a product does see the light of day in 7 months, it might have an awkward start, but this trigger needs to happen for the next ones to be a lot better. Hopefully, this is a race they can even show up for.
But as always, they will salvage some parts of the development for their CPU architecture, like the ring architecture, for example. So it won't be a complete loss if it fails.
I am afraid they will go the emulation route, mending the handicap with brute force...
Also, you have to wonder what process it will be on. 10 nm isn't working out, so is it going to be their 14 nm while others are using 7 nm?
AMD requires 484 mm² worth of Vega cores to compete with 300 mm² Pascal chips, and it isn't exactly failing, except at the high end where it can't scale any further due to cost. Intel could do the same thing initially.
They can jump right into the server market with OpenCL, with an alliance with AMD on the software stack.
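For context, this is my own illustration rather than anything Intel has shown: OpenCL is vendor-neutral, so server compute code written against it would pick up an Intel discrete GPU the same way it picks up AMD or NVIDIA devices today. A minimal C++ sketch of that enumeration, assuming an OpenCL 1.2 runtime and headers are installed:

```cpp
// Sketch only: list every GPU the vendor-neutral OpenCL API exposes.
// An Intel discrete card would simply appear as one more device here.
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_uint num_platforms = 0;
    clGetPlatformIDs(0, nullptr, &num_platforms);            // count platforms first
    if (num_platforms > 16) num_platforms = 16;
    cl_platform_id platforms[16];
    clGetPlatformIDs(num_platforms, platforms, nullptr);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char platform_name[256] = {0};
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof(platform_name), platform_name, nullptr);

        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 0, nullptr, &num_devices);
        if (num_devices == 0) continue;                       // platform has no GPUs
        if (num_devices > 16) num_devices = 16;
        cl_device_id devices[16];
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, num_devices, devices, nullptr);

        for (cl_uint d = 0; d < num_devices; ++d) {
            char device_name[256] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(device_name), device_name, nullptr);
            std::printf("%s: %s\n", platform_name, device_name);
        }
    }
    return 0;
}
```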
An Intel discrete graphics card combined with the iGPU could kick ass (running Intel's version of AMD's Hybrid CrossFire setup; see the sketch below).
That is, on mainstream platforms. Of course, us HEDT users would have to use the dGPU even at idle :)
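Purely to illustrate the hybrid idea above (my sketch, not any announced Intel mechanism): a mixed iGPU + dGPU scheme starts with the OS exposing both as separate adapters. On Windows that enumeration looks roughly like this; the vendor IDs in the comment are standard PCI IDs.

```cpp
// Sketch only: list every graphics adapter Windows exposes via DXGI.
// A hybrid iGPU + dGPU setup shows up as two adapters; an explicit
// multi-adapter renderer would assign work to each from this list.
// Build with: cl /EHsc adapters.cpp dxgi.lib
#include <dxgi.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        // PCI vendor IDs: 0x8086 = Intel, 0x1002 = AMD, 0x10DE = NVIDIA
        wprintf(L"Adapter %u: %s (vendor 0x%04X, %llu MB dedicated VRAM)\n",
                i, desc.Description, desc.VendorId,
                static_cast<unsigned long long>(desc.DedicatedVideoMemory) / (1024 * 1024));
    }
    return 0;
}
```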
As w1zzard stated, this might not be for the consumer market, but I can't see it not being a possibility. Their driver does need work, but the foundation is there; the same goes for the hardware side of the equation. As for whether they will fail, well, that's just speculation; any number of the infinite possibilities can happen. I just hope for more competition in this space, I am tired of the same two after all this time. Good luck buying out this competition, NVIDIA, this is no 3dfx. :p
Intel's earlier failures with graphics development mostly came down to the resources required to sustain long-term development cycles and to "political will" at Intel, because the "old" Intel was a CPU company. The current Intel has diversified its portfolio and can do whatever it wants (vide the asinine McAfee deal).
It all comes down to "how long a deadline and how many resources did they give Raja?" While Raja's experience has been in desktop GPUs, I wouldn't put it past Intel to repurpose a poor design just for the sake of saving the product, and end up with a crappy AI core.
I am not very impressed by Raja, tbh. His experience at AMD (even though you can't pin the Vega design entirely on him) shows he oversells his products.
More freedom and choice is great.
But not a full-blown big GPU out of nowhere that's competitive with NVIDIA.
Why they would put so much effort into making hardware for functionality that the graphics industry avoids like the plague is beyond comprehension. Even something of similar size from ARM or Qualcomm would wipe the floor with these things in pretty much every relevant category (including power efficiency, though to be fair, it would beat AMD/NVIDIA as well). Their architecture simply performs poorly and is optimized for things nobody needs, as pointed out above. I don't know what the cause of their failure to build competitive GPUs in this segment is; it's either a lack of money allocated to it or a lack of talented people. I'm tempted to believe it's the latter, and the addition of Raja won't be enough as far as I'm concerned.
edit: Not to mention, they will have refined the process used on their current integrated GPUs for the upcoming discrete ones. Plus, stuff like this wasn't thought up yesterday; they have been planning it for a few years and have already worked out some of the issues. I won't be surprised if this lands in the RX 560 - GTX 1060 category of cards. It's also not surprising that they are waiting for GDDR6 to come into full production by the time they show this.
I'm less concerned with performance, or the adoption of an Intel GPU versus the consumer/professional options from its competitors, than with how a MAJOR innovation (going from onboard to discrete) will lead to localized optimizations:
Xeon Phi - addressed the memory/thread wall (MIC), locally
3D XPoint - addresses RAM limits and storage latency (the Micron joint venture)
Intel ??? GPU - could address parallel bus latency through firmware or driver optimizations.
If the first iteration of Intel's discrete GPU doesn't scare its competition... the second or third will.
And that USP could get them selling.
No one mentions the totally rubbish rendering of 3D objects on Intel? I have seen three-way Intel/AMD/NVIDIA film and game rendering comparisons that shine a light on the rather poor game reproduction on Intel hardware. They do films fine, to be fair.
They already have a working GPU, working drivers, and all the infrastructure and knowledge in place, and don't forget they have been doing GPUs for 20 years.
It's just a matter of scaling everything to a whole new level.
From a programming perspective, going from 1 core to 2 cores is hard; however, once your software works with 20, switching to 2,000 is just a parameter change. (They have 20-40 execution units in their current integrated GPUs.)
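A toy sketch of that claim (generic C++, nothing Intel-specific): once the work is written as "split this range across N workers," N really is just a parameter, whether it is 20 or 2,000.

```cpp
// Toy illustration: the same "kernel" is spread over however many workers
// the caller asks for. Going from 20 workers to 2,000 changes one argument,
// not the structure of the code.
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

// Stand-in for a per-element compute kernel.
void kernel(std::vector<float>& data, size_t begin, size_t end) {
    for (size_t i = begin; i < end; ++i)
        data[i] = data[i] * 2.0f + 1.0f;
}

void run(std::vector<float>& data, unsigned workers) {
    std::vector<std::thread> pool;
    size_t chunk = (data.size() + workers - 1) / workers;    // ceiling division
    for (unsigned w = 0; w < workers; ++w) {
        size_t begin = static_cast<size_t>(w) * chunk;
        size_t end   = std::min(data.size(), begin + chunk);
        if (begin >= end) break;                             // fewer chunks than workers
        pool.emplace_back(kernel, std::ref(data), begin, end);
    }
    for (auto& t : pool) t.join();
}

int main() {
    std::vector<float> data(1 << 20, 1.0f);
    run(data, 20);      // "today's iGPU" worker count
    run(data, 2000);    // same code, only the parameter changed
    std::printf("first element: %f\n", data[0]);
    return 0;
}
```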
Aren't shader cores proprietary to NVIDIA and AMD only? If so, how can Intel develop a new GPU without those? Sorry, just asking...
Sure, they patent their GPUs in their entirety, along with their ISAs, but these change very often and it's not worth wasting time copying entire architectures. Not as a long-term strategy, anyway.