
Intel Starts Shipping Xe LP-based DG1 Discrete GPU to OEMs; Locks it out of Other Systems

I wonder how Xe will do at the low end relative to cost with CMAA, and whether these also include RTRT as has been hinted; for low-resolution RTRT they could be good value, given that CMAA offers good image quality relative to its performance cost, and paired with some RTRT hardware it's a nice mixture. If these can also seamlessly do mGPU with the iGPU on Intel's CPUs, they'd do better still. In fact, perhaps one could handle CMAA and other post-processing while the other handles RTRT. They don't look very impressive, but I'd hope Intel has some reasonable draws in these designs, can improve them further, and will come up with something more credible outside the low-end market in time.
 
What does locking the GPU into specific configurations accomplish? Is there a booming black market for crap, useless bottom-rung GPUs that we're not aware of?

This is truly a confusing decision on Intel's part. All this does is fan the flames of potential FUD when (if) they release an actually competitive GPU (i.e., that it won't work as well on AMD systems, that Intel might hobble it on competing CPUs, et cetera).

Not that I believe anything like this is probable, but they're not building up any good will with this kind of lock-in crap.

While not exactly a black market, all those sketchy budget Chinese/eBay GPUs are recycled bits and pieces of old, sometimes bottom-tier GPUs; some ship with more memory than the cards the cores came from, leading to weird performance. It wouldn't surprise me if some DG1 cards, after they get phased out, are recycled and made workable on all platforms courtesy of Chinese electronics chop shops, much like how they were able to bypass Intel's locks and make certain CPUs from one LGA socket generation work where they normally shouldn't (IIRC, GamersNexus was partly fond of some of those boards, and even Linus tested one or two of the Chinese boards).

That said, this locked-in ecosystem seems to line up with Intel's earlier statements about a mixed iGPU+dGPU concept they were discussing a while back: some sort of ultra-quick shifting between the iGPU for web browsing and the dGPU for heavier rendering and the like. This is likely how they'll test and eventually implement the feature on a wider scale: via part lock-ins. Still, this doesn't bode well if they hope to get a foot in the market. Even cheap Nvidia or AMD GPUs are guaranteed to just work, and can be swapped between rigs as systems are slowly retired (cannibalizing a spare part off a retired rig).
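To illustrate the concept only (this is not Intel's actual mechanism): the hand-off idea boils down to enumerating the adapters and routing work by type. A minimal sketch, assuming a Linux box with lspci available; the device-matching strings and the routing policy are invented for the example:

```python
# Toy illustration of the iGPU/dGPU hand-off idea: list the adapters,
# keep light work on the iGPU, send heavy work to the dGPU.
# The matching strings and the policy below are assumptions for the example.
import subprocess

def list_gpus():
    """Return the VGA/3D controller lines reported by lspci."""
    out = subprocess.run(["lspci"], capture_output=True, text=True).stdout
    return [line for line in out.splitlines()
            if "VGA compatible controller" in line or "3D controller" in line]

def pick_gpu(gpus, workload):
    """Toy policy: browsing/video stays integrated, rendering goes discrete."""
    integrated = next((g for g in gpus if "Intel" in g and "DG1" not in g), None)
    discrete = next((g for g in gpus if "DG1" in g), None)
    if workload in ("browsing", "video") and integrated:
        return integrated
    return discrete or integrated  # fall back if only one adapter exists

gpus = list_gpus()
print("Detected adapters:", *gpus, sep="\n  ")
print("Target for rendering:", pick_gpu(gpus, "render"))
```

In a real driver this selection happens per-process and transparently, but the shape of the decision is the same: classify the workload, pick the adapter.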
 
I don't think Intel is aiming to compete in the mid range and high end against Nvidia/AMD right now. I think they know they can't compete at that end yet, and want to offer cost advantages at the low-end side of the market. If these can work in tandem with the iGPU on the CPU as a discrete low-end GPU, and be well supported by Intel at the same time, it should be a cost-effective solution. Intel can evolve both upward over time. Eventually they could even throw MCM into the mix on both ends: the stronger of the two (or of the MCM pairs) handles rendering while the weaker handles post-processing, for a quad big.LITTLE integrated+discrete hybrid. Intel's CMAA should be good for post-processing, and I wager they can evolve it further and fold in other techniques along with upscaling. They could also throw RTRT hardware into both. Another aspect of that is big.LITTLE for the CPU itself: the weaker part of the equation runs on the weaker GPU parts and the stronger portion on the stronger hardware.

Now consider a big.LITTLE CPU plus a big.LITTLE GPU in one MCM design, and then you also have 3D stacking down the pike. The weaker computational blocks run more in the background, are probably stacked lower (in terms of 3D stacking and heat dissipation) and utilized less heavily and/or less often, while the stronger blocks are stacked higher, with better heat dissipation as convection rises, and run more heavily and/or more often. I can't wait to see the first 3D-stacked MCM CPU/GPU take hold. I expect it'll use a big.LITTLE approach, and how that's managed will play a big role in its overall perception and success. This isn't that, lol, but I still have high hopes for the technology, and a company with Intel's history gives some reason for hope.
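To make that render/post-process split concrete, here's a toy model of the big.LITTLE scheduling idea. Everything in it is hypothetical: the GPU names, throughput figures, and task costs are made up purely to show the routing, not anything Intel has announced:

```python
# Toy model of the speculated big.LITTLE split: heavy draw work lands on the
# "big" GPU, cheap post-process (e.g. a CMAA-style AA pass) on the "LITTLE" one.
# All numbers and names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Gpu:
    name: str
    throughput: float  # made-up work units per millisecond
    queue: list = field(default_factory=list)

    def time_for(self, cost):
        return cost / self.throughput

def schedule(frame_tasks, big, little):
    """Route render tasks to the big GPU, post-process to the LITTLE one."""
    for name, cost, kind in frame_tasks:
        target = little if kind == "postprocess" else big
        target.queue.append((name, target.time_for(cost)))

big = Gpu("discrete (DG1-like)", throughput=8.0)
little = Gpu("iGPU", throughput=2.0)
schedule([("geometry", 40, "render"),
          ("shading", 64, "render"),
          ("CMAA pass", 6, "postprocess"),
          ("tonemap", 4, "postprocess")],
         big, little)
for g in (big, little):
    print(f"{g.name}: " + ", ".join(f"{n} {t:.1f} ms" for n, t in g.queue))
```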
 
This is Optane all over again. And is this a poor means of pushing consumers towards its new CPUs, in closed OEM ecosystems where they're shielded from open scrutiny and competition, unlike in DIY?
 
Ugh Intel, giving the middle finger to interoperability standards, wanting to be its own little ecosystem all the time. They can keep it.
 
I believe you're right that Intel is aware they may not be able to challenge both Nvidia and AMD at this point. However, locking the GPU to specific Intel builds still doesn't make sense. They can lock it, but that doesn't mean OEMs will want to buy it, unless of course Intel is selling it really cheap, undercutting AMD and Nvidia.

Also, all the things you mentioned require software optimization to happen. If you don't allow people the opportunity to test it, by creating artificial limitations, I don't think the software side will improve anytime soon. Game developers, for example, will naturally optimize games based on whatever the common/popular system configuration is. If Intel can't even reach low single-digit market penetration, they'll be on the back burner when it comes to optimization.

These are my opinions, and I may be wrong. But if Intel persists with this sort of tactic, I'm not optimistic their graphics business will take off in the retail space. It will likely end up going the way of Optane.
 
Also, all the things you mentioned require software optimization to happen. If you don't allow people the opportunity to test it, by creating artificial limitations, I don't think the software side will improve anytime soon. Game developers, for example, will naturally optimize games based on whatever the common/popular system configuration is.
Intel may not be at this stage yet. I bet they are still focusing on hardware and firmware compatibility and optimization.
 
This is getting more and more interesting for an already redundant product.



Great, no treatment for Celeron, and I thought Celeron folks needed it most. It would make more sense to ship these with F SKUs, but it's Intel; I bet they'll ship it with K variants too.



People who buy budget machines are mostly doing basic things, and what tasks can't an iGPU handle? Funny that someone wrote it will be good for AI, bla bla... who in their right mind buys a budget machine for complex C? To be fair, with dedicated VRAM it will bring some benefit, but nothing significant, just like the GT 710 equipped with GDDR5. And there's another bit of nonsense: that this would do CF/SLI-style multi-GPU better than the other two companies have managed in decades. What?
Not people, corporations. The customer isn't the end-user, it's HP, Dell, and Lenovo.

Intel's managed to persuade OEMs to buy their ultra-basic dGPU, and since it's not powerful enough for gaming, they're likely selling it to OEMs on encode performance and 2D/video acceleration in things like Photoshop, Handbrake, Premiere, etc.

Don't forget, the cheapest GPU from Nvidia worth buying for video work is the 1650 Super. All Intel has to do to sell these things to OEMs is make the price attractive enough and show that their QuickSync encoder is comparable to NVENC in much more expensive cards.
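If you wanted to sanity-check that claim yourself, a rough approach is to push the same clip through both hardware encoders with ffmpeg and compare times and outputs. A minimal sketch, assuming an ffmpeg build with both the h264_qsv (QuickSync) and h264_nvenc encoders enabled; the input filename and bitrate are placeholders:

```python
# Rough QuickSync-vs-NVENC comparison on one clip via ffmpeg.
# Assumes ffmpeg is on PATH with both hardware encoders compiled in;
# "input.mp4" and the 6M bitrate are placeholders.
import subprocess
import time

def encode(codec, outfile):
    """Encode the same clip with the given hardware encoder and time it."""
    cmd = ["ffmpeg", "-y", "-i", "input.mp4",
           "-c:v", codec, "-b:v", "6M", outfile]
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start

print(f"QuickSync: {encode('h264_qsv', 'out_qsv.mp4'):.1f} s")
print(f"NVENC:     {encode('h264_nvenc', 'out_nvenc.mp4'):.1f} s")
```

A real comparison would also need a quality metric (VMAF or similar), since wall-clock time alone doesn't tell you which encoder produces the better-looking file at the same bitrate.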

I'm sure DG1 will suck for gaming, and I'm sure DG1 will be underwhelming, but its real purpose in this limited format isn't to make profit for Intel, but to get some DG1s out on the market to generate real-world usage data. In other words, HP, Dell, Lenovo are paying beta-testers and the likely upside for them is extremely tempting pricing from Intel, as well as perhaps some exclusivity in being one of only a few options for obtaining the new Intel dGPUs.
 

Don't forget the new GeForce GT 1010, just in time to compete with this.
 