Saturday, October 13th 2018
New PT Data: i9-9900K is 66% Pricier While Being Just 12% Faster than 2700X at Gaming
Principled Technologies (PT), the outfit Intel paid for the outrageous test results shown at its Core i9-9900K eight-core processor launch event, has revised its benchmark data after partially improving its testing methodology. The initial tests, which compared the Core i9-9900K to the Ryzen 7 2700X and the Ryzen Threadripper 2950X and 2990WX, produced false and misleading results because PT tested the AMD chip with half its cores effectively disabled, and crippled its memory controller with an extremely sub-optimal memory configuration (four dual-rank modules clocked high, leaving the motherboard to significantly loosen timings).
The original testing provided us with such gems as the i9-9900K "being up to 50 percent faster than the 2700X at gaming." In its revised testing, Principled Technologies corrected half of its rookie mistakes by running the 2700X in the default "Creator Mode," which enables all 8 cores; it did not correct the sub-optimal memory. Despite this, the new data shows the gaming performance difference between the i9-9900K and the 2700X narrowing to around 12.39 percent on average, seldom crossing 20 percent. This is a significant departure from the earlier testing, which skewed the average on the basis of >40% differences in some games, caused by half the cores being effectively disabled on the 2700X. The bottom line of PT's new data is this: the Core i9-9900K is roughly 12 percent faster than the Ryzen 7 2700X at gaming, while being a whopping 66 percent pricier ($530 vs. $319 average online prices). Even that 12.39 percent gap could narrow further to single digits if the 2700X were tested with an optimal memory configuration, such as two single-rank modules in dual-channel with timings around 14-14-14-34, even with the memory clock kept at DDR4-2933.
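As a back-of-the-envelope check, the price-to-performance arithmetic above works out as follows. A minimal sketch in Python, using the average online prices and the 12.39 percent figure quoted above:

```python
# Price/performance arithmetic from PT's revised data (figures as quoted above).
price_2700x = 319.0   # Ryzen 7 2700X average online price, USD
price_9900k = 530.0   # Core i9-9900K average online price, USD
perf_delta = 0.1239   # i9-9900K's average gaming lead over the 2700X

premium = price_9900k / price_2700x - 1
print(f"Price premium: {premium:.0%}")   # ~66%
print(f"Perf lead:     {perf_delta:.1%}")  # ~12.4%

# Relative gaming performance-per-dollar of the 9900K vs. the 2700X:
ppd = (1 + perf_delta) / (price_9900k / price_2700x)
print(f"9900K delivers {ppd:.0%} of the 2700X's perf-per-dollar")  # ~68%
```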
Intel responded to these "triumphant" new numbers with the following statement:
Given the feedback from the tech community, we are pleased that Principled Technologies ran additional tests. They've now published these results along with even more detail on the configurations used and the rationale. The results continue to show that the 9th Gen Intel Core i9-9900K is the world's best gaming processor. We are thankful for Principled Technologies' time and transparency throughout the process. We always appreciate feedback from the tech community and are looking forward to comprehensive third-party reviews coming out on October 19.

The media never disputed the possibility of the i9-9900K being faster than the 2700X. It did, however, call out the bovine defecation peddled as "performance advantage data."
The entire testing data follows:

Source: Principled Technologies (PDF)
322 Comments on New PT Data: i9-9900K is 66% Pricier While Being Just 12% Faster than 2700X at Gaming
Though, even if it won't exceed those 95 W at base clocks, it will pull pretty much exactly 230–250 W – excluding the rest of the system – at stock clocks under full load. Never mind any overclocking!
Anyway, the overall power consumption will jump quite a bit. It's still physics, isn't it?
The power draw can be extrapolated quite easily with an ordinary rule of three …
If an 8700K needs about 66.8 W for its 6 cores under an average gaming load, then a 9900K will draw about 89.07 W. Mind you, at that figure it isn't running at its stock clock of 5 GHz but 'only' at an 8700K's 4.7 GHz – so the additional consumption of the missing clock speed still comes on top of that.
Calculation:
Average power draw at stock clocks (@4.7 GHz): 66.8 W ÷ 6 cores × 8 cores ≈ 89.07 W
Average gaming load OC'd (@4.9 GHz): ≈ 95.74 W
… which makes it ~90 W under an average gaming load @4.7 GHz on 8 cores, just by the numbers alone.
So a 9900K will consume at least 90 W (in the best theoretical case) – though this won't be the actual figure, since it has 33% more cache than the 8700K (12 vs. 16 MB). Due to that larger cache alone it will draw significantly more than those 90 W, and will probably exceed the 95 W TDP.
Or in other words, the 9900K will very likely exceed its 95 W TDP already at stock clocks (as you can see from the 95.74 W it already oversteps at 4.9 GHz), especially if it runs any warmer.
If we then take another look at the 8700K under a heavy torture load like Prime, it doesn't get any better. Under Prime, an 8700K already pulls 159.5 W at stock – and by the same scaling, a 9900K will pull at least 212.67 W. Having said this, that's still not at its stock clock of 5 GHz but again 'only' at an 8700K's stock 4.7 GHz … of course without the additional power draw of the remaining +300 MHz, and without the increased consumption of its larger cache.
Calculation:
Full-load power consumption at stock clocks (@4.7 GHz): 159.5 W ÷ 6 cores × 8 cores ≈ 212.67 W
Full-load power consumption OC'd (@4.9 GHz)
As a result, a 9900K will in that (still best theoretical) case consume at least circa 212.67 W under full load – and admittedly, even that won't be the actual figure, as it still has 33% more cache than the 8700K (12 vs. 16 MB). Hence, due to its larger cache it will consume significantly more; on average, the 9900K might easily draw 230–250 W. In any case, the official fantasy TDP of just 95 W is nothing but waste paper here – business as usual for Intel's extremely misleading official TDP specifications.
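The rule-of-three extrapolation used throughout this post can be written out as a quick sketch. This is plain linear per-core scaling under the commenter's stated assumptions (same 4.7 GHz clock, larger cache and final boost bin ignored), not a measurement:

```python
# Rule-of-three extrapolation from the 8700K (6 cores) to the 9900K (8 cores),
# as described above. Assumes power scales linearly with core count at the same
# clock, and deliberately ignores the 9900K's 33% larger cache and its final
# +100..300 MHz of stock boost -- so these are best-case floors, not estimates.

def scale_power(watts: float, cores_from: int = 6, cores_to: int = 8) -> float:
    """Linearly scale a measured package power to a different core count."""
    return watts / cores_from * cores_to

gaming_47 = scale_power(66.8)    # 8700K gaming load @4.7 GHz  -> ~89.07 W
prime_47  = scale_power(159.5)   # 8700K Prime load  @4.7 GHz  -> ~212.67 W

print(f"Estimated 9900K gaming draw @4.7 GHz: {gaming_47:.2f} W")
print(f"Estimated 9900K Prime draw  @4.7 GHz: {prime_47:.2f} W")
```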
Note!
All numbers here always represent the best case (sic!) and are in fact the best possible, most favourable numbers one can assume, since we're still at 4.9 GHz in this scenario. Note in particular that in every single case the numbers reflect the actual package power consumption, and as such represent the processor's consumption in and of itself. These are not whole-system power figures – they are the CPU's values alone.
☞ Please also note that the cache, now 33% larger, will contribute significantly to the actual wattage. Furthermore, all numbers were obtained with the help of a chiller whose cooling loop was cooled down and held permanently at 20°C (which, as a side note, still couldn't keep the 8700K from running into its thermal limit).
Résumé or bottom line
- All of these are best-case values.
- All wattages and (achievable) clock frequencies under load were only made possible through the use of a chiller (compressor cooling).
- All calculations omit the remaining +100 MHz towards the 9900K's nominal clock (naturally including the corresponding extra consumption).
- The actual wattage will very likely level off at 230–250 W nominal consumption, at stock clocks under full load.
Smartcom

PS: Please forgive the potentially significant simplification of the given circumstances, for the purpose of illustration. Errors excepted.
Problem is, nobody except for me does this in reviews. Go look at any of the board reviews here and you'll see it. I provided the new board reviewer with the hardware to test this as well, so you'll continue to see actual CPU power draw in motherboard reviews here @ TPU.
I'm also happy to report that AMD's current platform TDP numbers are pretty accurate as to actual power draw as well.
You'll also have to note that Turbo clocks on both platforms, as well as CPU throttling, are controlled by this number. You can even find it in board BIOSes, where you can manipulate the maximum power drawn (and some board makers have previously used this to cheat on review benchmark numbers). By default on all current platforms, this number matches the CPU's advertised TDP.
So rather than blame AMD or Intel on this one, you gotta blame the reviewers who are reporting inaccurate information to you. It's especially revolting to me that nobody tests this way, especially considering that Zalman made meters for this that cost just $40, so you don't even need to spend a lot to measure this accurately. A decent and reliable clamp-on meter these days can be had for around $100.
Power numbers derived from meters connected to the PSU's power cord measure the entire system, as well as PSU inefficiency. Of course those numbers seem inflated – they include the board, memory, drives, mouse and keyboard, as well as the CPU, if not also the idle power draw of a video card.
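To put rough numbers on that, here is a toy sketch of why at-the-wall readings overstate CPU draw; every component figure in it is a hypothetical placeholder, not a measurement:

```python
# Why at-the-wall readings overstate CPU power draw: a toy example.
# All figures below are hypothetical placeholders for illustration only.
wall_watts = 280.0        # reading from a plug-in meter on the PSU's cord
psu_efficiency = 0.90     # assumed PSU efficiency at this load
other_components = 90.0   # assumed board, RAM, drives, idle GPU, fans...

dc_total = wall_watts * psu_efficiency       # power the PSU actually delivers
cpu_estimate = dc_total - other_components   # what's left for the CPU itself
print(f"DC-side total:   {dc_total:.0f} W")      # 252 W
print(f"Rough CPU share: {cpu_estimate:.0f} W")  # 162 W -- far below the wall figure
```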
The defensiveness gets erected whenever AMD fans tell Intel users that the difference is negligible, when there are countless benches (even a dozen of the corrected ones in this article underline it!) showing that Intel CPUs still excel at higher framerates and in single-thread-limited scenarios. And it's not just benches either, but practice – experience – from using the hardware. We're on an enthusiast forum, so there is going to be a larger-than-normal group interested in top-end performance. And when it comes to price: the performance gap in some scenarios is easily 30% – let's look at GPUs and the additional cost of a 30% faster 2080 Ti over its little brother below it. Remarkably similar. In both cases you could say "it must be cheaper," and in both cases we as consumers have influence on that, by simply not buying it.

- Don't forget your (expensive) B-die sticks
- Don't forget to clock your Intel CPU at 4 GHz
- Don't use Source as an example game engine
- Don't use a ST limited scenario
And all of a sudden, Ryzen looks almost (still missing some % though) as good as an Intel CPU! You should go work for PT! I heard they're doing a Ryzen piece next month.
Surely you can see the irony. You have just literally summed up everything that is wrong with the AMD-fan perspective on performance, and you cannot even see it, apparently. You should take this perfect example to reflect upon. Dummy... :laugh:
He's one of the few who does that, and always has – he's famous for doing exactly that.
In addition, you seem to have overlooked my note down there, where I tried to state explicitly what those numbers represent and where they come from, no? Reading that would actually make me genuinely happy …
Now shut up and take my money! Oh, and if you don't mind, let me **** **** ****! I'm actually pretty aware of the ongoing, widely used abusive methods of hiding much higher actual numbers under the rug – the all-too-common practice of benchmarking with the roof open and completely unbridled for higher numbers (cough, MCE! Unlimited power targets!) while 'determining' the »actual power consumption« afterwards with the product muzzled by the given BIOS/UEFI options and/or lowered power targets, pre-cooled cards, etcetera – thank you. I always criticise such wrongdoings as extremely and excessively misleading and deceiving. Always have, always will.
Especially since such devices and apparatuses are not only pretty affordable for today's technical editorial departments, but since every damn reviewer who considers themselves at all reputable – or at least has the personal aspiration to be taken seriously and deemed trustworthy – is nothing less than obligated to take such measurements and determine the actual and nominal power consumption.
As for using the overall system's wattage, I straight-out consider such attempts or habits a direct intent to mislead or deceive. All the more if the product is a) known to draw higher numbers in reality, and especially b) if such reviewers have already been made aware of and pointed towards those facts (namely, that using the overall system's wattage presents the product in a massively flattering light).
Smartcom
Now, many things can go wrong that can cause TDP to be exceeded, but it is NEVER supposed to happen, no matter the type of CPU load. So anyone telling you that this happens at stock is misinforming you, and isn't smart enough (IMHO) to investigate why it is taking place. Please also note the fallacy I see right away: they are using software to measure this, rather than physical hardware, as we here @ TPU do. Package Power Consumption is a SOFTWARE reading. So no, this guy is NOT doing as we do. He's reading software and assuming that things are reported accurately, when clearly they aren't. He has clearly identified a problem in his configuration, for sure, but what and where that problem is, is NOT being reported properly.
Whether this is the default configuration or not is up for debate. From what I can see in the whitepaper, technically it should not be (PL2 is 1.25× TDP and applies for up to 10 ms, with PL1 Tau at 1 second). Are motherboard manufacturers playing around with settings more than they should (in addition to MCE)?
www.intel.com/content/dam/www/public/us/en/documents/datasheets/8th-gen-core-family-datasheet-vol-1.pdf
Chapter 5: Thermal Management (Page 88)
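Taking the figures cited from the datasheet above at face value, the limits for a 95 W part would work out roughly like this (a sketch of the cited numbers, not a claim about how any particular board ships):

```python
# Intel power limits for a 95 W TDP part, using the ratios cited above.
tdp = 95.0           # PL1 defaults to the rated TDP
pl1 = tdp
pl2 = 1.25 * tdp     # short-duration limit, per the cited 1.25x ratio
tau_seconds = 1.0    # PL1 time window, per the post above

print(f"PL1 (sustained): {pl1:.2f} W")
print(f"PL2 (burst):     {pl2:.2f} W")   # 118.75 W
print(f"Tau:             {tau_seconds} s")
# If a board raises or removes PL1/PL2 (e.g. via MCE), sustained package
# power is no longer bounded by the advertised TDP at all.
```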
The other thing with Intel's power management is that AVX throws most of it straight out the window. If overclockers decide to disable the default AVX offset (-2/-3) along with the power limits, that will increase power consumption and heat by a lot. Back to talking about stock: in general, Intel has set the limits pretty well, and Turbo frequencies work reasonably well for anything that isn't AVX. A heavy AVX load, however, will quickly drop the frequencies down to base. You are measuring power with a clamp, right? Have you tested on at least a couple of different motherboards whether software readings lie, and by how much (both motherboard and CPU)?
I have been looking for a cheap clamp to test the power myself, but a $40 meter or reasonably cheap clamps do not have very good accuracy, and so far I have not been interested enough to go for something costing a couple hundred moneys. Software might not be in a different ballpark from a cheap meter, given that software readings are somewhat verified in hardware.
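The AVX offset mentioned above is simple multiplier subtraction; a minimal sketch, with the 5.0 GHz all-core figure assumed purely for illustration:

```python
# AVX offset arithmetic, as described above: the CPU drops its multiplier
# by the offset whenever AVX code runs. Base clock (BCLK) is 100 MHz.
BCLK_MHZ = 100
all_core_multiplier = 50        # hypothetical 5.0 GHz all-core overclock

for avx_offset in (0, 2, 3):    # 0 = offset disabled by the overclocker
    mult = all_core_multiplier - avx_offset
    print(f"AVX offset -{avx_offset}: {mult * BCLK_MHZ / 1000:.1f} GHz under AVX load")
```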
When you have the best and there's no competition, you can name your price.... Both the Intel 9900K and the NVIDIA 2080 Ti are 2018's best CPU and GPU, with no competition from AMD.
If you want a cheaper third place, the 2700X, now's the time to buy it. Secondly, the 9700K outperforms the 2700X in most games and OCs up to 5.5 GHz with EK.
If you're a PC gamer, the 9700K is your best choice. If you just want bragging rights in benchmarking, get the top dog 9900K.
2700X maxes out at 4.4 GHz
9900K/9700K both max out at 5.5 GHz
8700K/8086K both max out at 5.3 GHz
That's your headroom.
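Taking the clock ceilings listed above at face value (they are the poster's own best-case OC claims), the relative headroom works out like this:

```python
# Relative clock ceilings from the list above (the poster's own OC claims).
max_clock_ghz = {"2700X": 4.4, "9900K/9700K": 5.5, "8700K/8086K": 5.3}
baseline = max_clock_ghz["2700X"]

for cpu, ghz in max_clock_ghz.items():
    print(f"{cpu}: {ghz} GHz ({ghz / baseline - 1:+.0%} vs. 2700X)")
# 9900K/9700K: +25% clock ceiling; 8700K/8086K: +20% -- assuming clocks
# were the only factor, which they are not (IPC, memory, game engine...).
```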
Enjoy your blue tax for an extra 5 FPS.
So a 9900K is basically double the price of a 2700X for less than a 12% gaming performance increase. We know who the true winner is here.
CPU
>Athlon 200GE - Minimal desktop
>R3 2200G - Bare minimum gaming (dGPU optional)
>R5 2400G/i5-8400 - Consider IF on sale
>R5 2600/X - Good gaming & multithreaded work use CPUs
>i7-9700k - If pairing w/ a 2080Ti and the extra $200+ is worth ~135 FPS instead of ~120 FPS to you, despite better CPUs coming next year and requiring new boards
>R7 2700/X - Best value high-end CPU on a non-HEDT platform
>Wait for R7 3700X - Surely the best overall and not a massive disappointment like the 9900k
>Threadripper/Used Xeon - HEDT
Now, stop acting shocked, like this is new. Intel has priced its premium processors at premium prices for nearly two decades. Adjusted for inflation, those $1,000 chips cost a lot more than this one does.