Thursday, January 18th 2018
Intel Core i7-8705G with Vega M Obliterates 8th Gen Core + GeForce MX 150
It looks like Intel has achieved the design goals of its new Core i7-8705G multi-chip module, built in collaboration with AMD. Combining a 4-core/8-thread "Kaby Lake" CPU die with an AMD "Vega" GPU die that has its own 4 GB HBM2 memory stack, the ruthless duo puts similarly-priced discrete GPU setups to rest, such as the combination of an 8th generation Core processor and an NVIDIA GeForce MX 150. More importantly, entry-level discrete GPU combinations with high-end mobile CPUs have a power/thermal envelope similar to that of the i7-8705G MCM, but at a significantly larger PCB footprint.
Dell implemented the Core i7-8705G in one of its latest XPS 15 2-in-1 models. The device was compared to an Acer Swift 3 (SF314-51), which combines a Core i5-8250U processor with GeForce MX 150 discrete graphics; and a Dell XPS 13 9370, which implements an 8th generation Core processor with Intel's workhorse graphics core, the HD 620. The three devices squared off against each other in the "Rise of the Tomb Raider" game benchmark. The i7-8705G averaged 35 frames per second (fps), while the MX 150 barely managed 24 fps. The HD 620 ran a bored intern's PowerPoint slideshow at 9 fps.
Source:
HotHardware
50 Comments on Intel Core i7-8705G with Vega M Obliterates 8th Gen Core + GeForce MX 150
I like that AMD and Intel found common ground to collaborate. I dislike the fact that websites are talking "obliteration" when comparing shitty-specced machines to Intel/AMD's "latest and greatest". You either compare machines of similar price (where your deltas are performance, battery life and overall portability), or you compare hardware of similar generations and specs (where your deltas are price and battery life). This comparison is neither.
As for the "This is what Dell allowed users to compare against" - that's a weird claim. Did Dell have competing PCs from other brands lined up in their booth? And even so, how difficult would it be for whoever ran the tests to simply take a note of the settings and have a comparison run on whatever hardware would be suitable? It's not like Dell can police journalists' benchmarking practices, after all. The biggest challenge there would be access to comparable hardware, which I assume to be the reasoning behind the rather lopsided comparison here. Of course, the transparent and right thing to do here would be to also report detailed settings for the benchmark, which as I see it is the only real error here outside of some over-the-top, semi-clickbaity phrasing.
Going by the HotHardware source article, we know the tests were run at 1080p high, with 29.69 fps reported for 1080p very high (no word on AF or AA settings, but no reason to assume they're not stock). Assuming that NotebookCheck's RotR benchmark is the built-in one, that's a bit below the Yoga 720's result of 32 fps and the Spin 5's result of 31.2 fps.
As for "guesstimating" power draw, it's clear you didn't actually read what I posted. Every number I posted outside of Dell's claims for the XPS 15's cooling capacity isn't TDP, but rather actual measured power draw under real-world loads by a trustworthy third party. Of course, Dell is talking CPU/GPU cooling capacity, while the review numbers are for the full system, which probably adds 10-15W to that depending on the configuration. If Dell says they can cool 55W, I assume the whole laptop will draw ~70W, if not closer to 75 with the 4k display. For good measure, let's add 20W to that for peak power draw for short periods of time. More than that is extremely unlikely, unless Dell manages to bork this design beyond belief. Comparing that to real-world numbers from the Yoga 720, it peaks at almost 120W during FurMark+Prime95, and hovers just below 100W playing The Witcher 3. That's a testament to how well-built its cooling system is (even if the CPU throttles to 800MHz during the power virus test), but it also shows the real-world power draw of something that we expect KBL-G to be comparable to. Are my XPS power draw numbers guesstimates? Sure. But they're well founded, and you don't seem to say otherwise. Even if I'm 10% off, we're still seeing significantly less power draw than the Yoga 720 for performance that's almost within margin of error.
To me, it seems like KBL-G makes a pretty good middle ground between low-end KBL-R+MX150 systems and bigger, bulkier systems using the GTX 1050, while being closer to the latter in performance (and essentially the same price). If that's how a power-throttled pre-production KBL-G performs, this is looking promising.
While the comparisons in the original HotHardware article are less than ideal (the HD 620? That's ridiculous, though probably also what they had access to), the wording rather dumb, and TPU re-using that wording possibly even worse, that's not the most interesting thing here. Sure, we can talk about lack of journalistic integrity, which is a valuable topic to discuss (especially in this time of clickbait deluges and fake news). But in this specific case, I'm more interested in discussing the actual news, regardless of its packaging, as it is rather significant.
Just found this, and you seem to know what you're talking about. Maybe you could illuminate something I've been wondering about for some time.
It's the TDP.
The early benchmarks show performance around a 1050 Ti, but I'm not even convinced they'll match a 1050, or a Max-Q 1050 (if such a thing exists).
Many sources claim that KBL-G has an 'H' series processor integrated with the GPU unit. A 1050 alone has a TDP of sixty-something watts, give or take a few. The whole thing is rated at 60W.
Unless the EMIB bridge thing breaks physics and creates more power than it consumes, this can't be done.
Now, some articles do say that the 60W this thing consumes is managed by an intelligent controller that directs power to where it needs to go between the CPU and GPU.
If it is indeed the H series, that's 60W split between a 45W CPU and a 60W (or more) GPU.
How does this thing work?
I was literally holding off on buying a MBP 15 because I thought the H series would be coming out next month.
Also, an RX 550 is gfx804, which means Polaris 12; the NDA codename for Vega M is literally Polaris 22.
Firstly, for CPUs the TDP isn't very relevant for gaming loads. 45W mobile CPUs don't actually consume 45W while gaming, as gaming is rarely a consistent 100% load across all cores. As such, the performance loss of limiting the CPU to, say, 25-30W (dynamically) while the GPU is under load is probably barely noticeable. As an example, look at the measured power draw of this laptop with a 7300HQ (45W) and a 1050Ti vs this one with an 8550U (@25W) and a 1050Ti. They're essentially the same, and a 7700HQ doesn't increase that by all that much.
The GPU goes for a wide-and-slow approach too, which is always more efficient than pushing clocks high (though it has the downside of higher fab costs with larger die sizes - which Intel compensates for by pricing this kind of chip into oblivion). The KBL-G chips have 20 or 24 CUs enabled. For comparison, at regular clocks, the 16-CU RX 560 is comparable to the desktop 1050 Ti. Adding 50% more CUs and aiming for the same performance window would allow for a serious downclock - which the 0.9-1.2GHz specs corroborate. Vega at 1-1.2GHz is scary efficient, at times outpacing Pascal for perf/W in similar low-clocked scenarios. Now, the people above might be right that Vega M is actually Polaris (although that sounds unlikely to me, at least unless it's some in-between arch without Rapid Packed Math or other compute-focused functionality), but for now, I prefer to think that Vega = Vega.
Then there's GDDR5 vs HBM, which is a major advantage in small power envelopes like this. I can't remember the source or the exact numbers at the moment, but I've read an HBM power analysis that compared Vega to Pascal at ~250W (I believe it was Vega FE vs Titan X(p)). That analysis estimated a ~50W power draw from the GDDR5X on the Titan, vs. <20W for the HBM2 (Edit: it might have been this GamersNexus article, which clearly states Vega FE's memory consumes <20W). GDDR5X is more efficient at high clocks than GDDR5, so it's not unreasonable to expect the 4GB on a mobile 1050 to consume around 10-15W despite lower clocks. The GN article estimates the 8GB of 8Gbps GDDR5 on an RX 480's 256-bit bus to consume 40-50W; the 1050 Ti clocks its RAM at 7Gbps on a 128-bit bus, so it should consume less than half of what the 480's memory does - but not far less, so RAM could easily be a sizeable portion of the ~50W TDP of a mobile 1050 Ti. A single 4GB stack of lower-than-Vega-FE-clocked HBM2 should more than halve that power draw, with theoretical numbers (according to specs) for a single 4GB stack running around 3.75W. If we assume 15W vs 5W - which seems reasonable for real-world numbers to me, if not overly nice to Nvidia - that's 10W saved right there.
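To make the memory math explicit, here's a quick sketch; only the RX 480 figure comes from the GN article, the rest are my assumptions:

```python
# Memory power estimates. Only the RX 480 figure is from the GamersNexus article;
# the mobile 1050 Ti and HBM2 numbers are assumed real-world values.
rx480_gddr5 = 45            # W, midpoint of the 40-50 W estimate (8 GB, 8 Gbps, 256-bit)
gtx1050ti_gddr5 = 15        # W, assumed: 4 GB at 7 Gbps on a 128-bit bus
hbm2_4gb_spec = 3.75        # W, theoretical spec for a single 4 GB stack
hbm2_4gb_real = 5           # W, assumed real-world figure

print(f"Memory power saved by HBM2: ~{gtx1050ti_gddr5 - hbm2_4gb_real} W")
```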
So, let's say we start out with the 45W + ~70W (according to NotebookCheck) combination of a H-series i7 and a mobile 1050Ti. That's 115W. Subtract 20W for the dynamic CPU limit, another 10-15 for the RAM swap, and you're around 80-85W. Then add the efficiency of a wide-and-slow Vega setup, and you won't have to reduce performance much at all to hit a 65W goal. The 931-1101MHz clocks of the 65W KBL-G chips are what make this believable to me, as that should be an incredibly efficient clock range for Vega.
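And the whole budget walk-through in one place, with every delta being an assumption on my part:

```python
# Rough power budget for the KBL-G argument above; every delta is an assumption.
baseline = 45 + 70          # W: H-series i7 TDP + measured mobile 1050 Ti system figure
cpu_cap_savings = 20        # W saved by dynamically capping the CPU under GPU load
ram_swap_savings = 12       # W saved by HBM2 vs GDDR5 (see the memory sketch above)
after_savings = baseline - cpu_cap_savings - ram_swap_savings   # ~80-85 W

target = 65                 # W, KBL-G package TDP (GL parts)
gap = after_savings - target
print(f"~{gap} W left for wide-and-slow Vega efficiency (931-1101 MHz) to make up")
```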
Again: all of this is speculation. But I don't doubt that it's possible, given the money and resources (and lack of qualms about selling expensive chips).
For all we know, the architecture ID might be misidentified by software unable to correctly read unreleased hardware, or just a placeholder based on the closest released hardware resource-wise for drivers to work. I'd be happy to be proved wrong, though, if you can provide some sources.
AMD GPUs are basically made like LEGO structures... AMD can play with the CUs almost any way they want
if they have a Polaris CU (with 64 shaders), they can pack any number of it into a GPU, and pair it up with virtually any memory subsystem
that's why they were hired to do the console GPUs... very flexible, not like Intel's iGPUs
gfxbench.com/device.jsp?benchmark=gfx40&os=Windows&api=gl&D=AMD+694C:C0&testgroup=info
browser.geekbench.com/v4/compute/811174
It's Polaris 22; I said its architecture was 12.
Amd/comments/7ahb6z
No evidence it's Vega xD
I say it again... it's not Vega.
Polaris = gfx800+
Vega = gfx900, 901 (901 is in Ryzen 2000G series)
As for the "lego-like" nature of AMD GPUs, you're not telling anyone here anything we don't know - but you're not answering my questions either. See above. Given that GFX804 is Polaris 12, how can it have 20-24 CUs as per the official spec sheets of KBL-G? That's impossible, as Polaris 12 has 16 CUs and a far smaller die size. In other words: this is not Polaris 12. Which, again, tells us that the GFX804 Hardware ID is erroneous in some way.
Now, the GFXBench link above is the only one actually saying anything linking GFX804 with 694C/694E; the rest are just benchmarks with varying degrees of misidentification of the CPU, GPU or both, which is to be expected with pre-release hardware. If you see the others as evidence of any link between Vega M and the Polaris architecture, you'll have to spell that out for us, 'cause all the rest of the links just say "Vega M is 694C/694E", which tells us nothing at all.
As for the GFXBench thing, it's also interesting to note that only the OpenCL part identifies it as GFX804.
Now, I actually have a couple of hypotheses as to why we're seeing this weirdness, which are far friendlier with Ockham's razor than "'Vega M' is Polaris, AMD is lying". First, some base facts:
- KBL-G is a fully Intel product, with the entire driver stack provided by Intel (see launch information for confirmation of this from AMD)
- AMD then must provide Intel with base driver code to integrate into the Intel driver stack for KBL-G - Intel doesn't write drivers for AMD hardware, but probably optimizes it
- the KBL-G "pGPU" is a semi-custom chip, not entirely matching any existing amd GPU and thus requiring specific driver optimizations
- Support for new GPU designs rarely, if ever, appears in drivers until launch day or close to it
- Intel is selling this as a gaming chip, and has as of now not even hinted at pro-level aspirations for this design.
None of these things are new assumptions. Combining points 3 and 4 gives us two options for pre-release testing: either tweak existing drivers to misidentify the GPU to enable not-yet-finalized optimizations from another not-that-dissimilar chip (similar CU count/compute resources), or use not-yet-ready pre-release driver code lacking all these optimizations. They're probably doing both, but the first option would definitely be done as a step to seeing how the drivers would need to be tweaked compared to what's already released.

Then there's the fact that Intel is doing the testing, not AMD. They're on a tight schedule, requiring them to get this working quickly, while working with relatively unknown code and unknown hardware. Could this lead to the use of incorrect Hardware IDs as placeholders? I don't think that's unlikely - and certainly not as unlikely as AMD blurring the distinction between Polaris and Vega (which are separate architectures in very significant ways that don't relate to marketing whatsoever). Even though this project has been in the works for 2+ years, Intel hasn't had access to fully-functioning hardware or drivers until very recently.
Then there's the gaming-focused nature of the chip. If Intel has to pay AMD for every part of their hardware design AND software stack, and the OpenCL part is barely going to be used at all (outside of professional applications, which isn't gaming, and thus not what this chip is marketed for), why pay extra for a newer, more optimized version of the OpenCL driver stack if you don't need it, or could get an older one for cheaper? Or what if AMD is unwilling to part with it, so as not to cannibalize sales of Radeon Pro parts like those in the MBP range and the upcoming Vega Pro cards? Considering the compute-focused nature of the Vega architecture, it makes a lot of sense for AMD not to give Intel full access to the driver stack here, limiting the access to new, optimized code to the graphics/gaming parts.
In other words, this could be
- Placeholder IDs for early hardware testing, implemented to get drivers to work before significant rewriting can be done, or
- The result of AMD not giving Intel their newest OpenCL drivers, with Intel implementing the code "as-is" in pre-release testing
Neither of these hypotheses requires any new assumptions - such as AMD calling a Polaris arch Vega for ... marketing purposes? - and as such they fit well with Ockham's razor. Your hypothesis requires assuming that both AMD and Intel are lying, or at best twisting the truth, which is a far bigger leap to make, and thus requires more evidence. What you've presented so far does not hold up to my standards, sadly. Heck, outside of the GFXBench listings, there's no link there at all. If you have more, feel free to post it here, but for now I'll chalk it up to speculation with pretty shaky foundations.
In my free time I maintain the largest device ID database as well:
pci-ids.ucw.cz/read/PC/1002
pci-ids.ucw.cz/read/PC/10de
Also: where do you get this "permanent device ID" stuff from? All we've seen is pre-release/ES hardware running pre-release software. A lot of things can change between ES and retail.
The graphics core is based on Polaris via BIOS and register layout, and the IMC registers are the same as on Fiji (Fury series)... this isn't a rumor or a grain of salt.. it's fact.. but I know it's not enough for you.. so I'll end this convo with... I'm sorry I can't prove to you that it's Polaris.. but I know what it is.
As for the insider info: it might very well be true. I know for sure that I don't have anything like that. But again, I'll believe it when it can actually be corroborated :)

That's what semi-custom means, more or less. The customer goes to AMD, says "I want a chip with this part, this part, and four of these parts", and AMD makes the chip for them. Unless the customer has crazy amounts of money and very specific needs, they use "off-the-shelf" component designs, as anything else would be wildly expensive and time-consuming. I suppose a relevant question here is whether Vega was available for the development of semi-custom designs 2+ years ago when this collaboration reportedly began. Given that retail Vega launched half a year ago, I don't see that as unlikely, especially given that retail Vega was delayed. After all, there's no reason why this design had to be finalized and initial fab test runs started until after the retail launch, and reportedly both the PS4 Pro and Xbox One X have Vega elements in their GPU cores. This is also something that makes me doubt that this is a Polaris design - by now, it's simply too old. Might it be Polaris-adjacent, or Vega with features disabled to approach Polaris feature parity? Sure. That would probably be cheaper for Intel.
Another part of what makes me doubt that this could be Polaris (i.e. GCN 1.4 CUs, and not Vega NCUs, mainly) is power consumption: the Radeon RX 470D, with 28 Polaris CUs, has a Total Board Power of 150W at 1266MHz. Even after subtracting the full ~30W for the 4GB of GDDR5 on there, scaling by 24/28ths to bring it to CU parity, and removing another 5% or so for power delivery losses that are externalized by the on-package GPU, you're still pushing the power envelope of the Vega M GH (~98W, according to my way oversimplified math). Which, of course, doesn't include the CPU part attached to the Vega M. A 76MHz downclock to the 1190MHz boost spec of the i7-8709G and 8809G doesn't account for 25W+ of power, nor should a downclock to the base speed of 1063MHz. In other words: a pure Polaris part would need some serious efficiency tuning to reach these power numbers.
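Spelling out that oversimplified math:

```python
# RX 470D scaling estimate from the paragraph above (deliberately oversimplified).
rx470d_tbp = 150                    # W, Total Board Power at 1266 MHz, 28 Polaris CUs
gddr5 = 30                          # W, assumed for the 4 GB of GDDR5
gpu_only = rx470d_tbp - gddr5       # ~120 W
cu_scaled = gpu_only * 24 / 28      # scale down to Vega M GH's 24 CUs
after_vrm = cu_scaled * 0.95        # drop ~5% for externalized power delivery losses
print(round(after_vrm))             # ~98 W for the GPU alone, before adding the CPU
```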
But again: I don't have any industry sources telling me anything, and there's every possibility I'm wrong. It just doesn't seem likely AMD would call this Vega unless it was significantly Vega based.
At 1.1GHz they would be about the same.