Monday, December 14th 2020

Intel Core i9-11900K "Rocket Lake" Boosts Up To 5.30 GHz, Say Rumored Specs

Intel's upcoming 11th Generation Core i9-11900K processor boosts up to 5.30 GHz, according to rumored specs of various 11th Gen Core "Rocket Lake-S" desktop processors, sourced by Harukaze5719. According to this spec sheet, both the Core i9-11900K and the Core i7-11700K (i7-10700K successor) are 8-core/16-thread parts, and clock speeds appear to be the only product segmentation between the two. The i9-11900K has a maximum single-core boost frequency of 5.30 GHz and a 4.80 GHz all-core boost. The i7-11700K, on the other hand, has a 5.00 GHz single-core boost and a 4.60 GHz all-core boost. This time around, even the Core i7 part gets Thermal Velocity Boost.

11th Gen Core i5 continues to be 6-core/12-thread, with Intel allegedly readying an unlocked Core i5-11600K and a locked i5-11400. Both parts lack TVB. The i5-11600K ticks up to 4.90 GHz single-core and 4.70 GHz all-core, while the i5-11400 does 4.40 GHz single-core and 4.20 GHz all-core. The secret sauce with "Rocket Lake-S" is the introduction of the new "Cypress Cove" CPU cores, which Intel claims offer a double-digit percent IPC gain over current-gen "Comet Lake"; an improved dual-channel DDR4 memory controller with native support for DDR4-3200; a PCI-Express Gen 4 root complex; and a Gen12 Xe-LP iGPU. The "Cypress Cove" CPU cores also feature DL Boost with VNNI, which accelerates AI deep-neural-network workloads, as well as a limited set of AVX-512 instructions. The 11th Gen Core processors will also introduce a CPU-attached M.2 NVMe slot, similar to AMD Ryzen. Intel is expected to launch its first "Rocket Lake-S" processors before Q2 2021.
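For readers who want to check whether their own CPU exposes these feature bits, Linux reports them (e.g. `avx512f`, `avx512_vnni`) in `/proc/cpuinfo`. A minimal illustrative Python sketch; the helper name `has_cpu_flag` is ours, not any library's:

```python
def has_cpu_flag(cpuinfo_text: str, flag: str) -> bool:
    """Return True if a /proc/cpuinfo dump lists the given feature flag."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # The "flags" line is a space-separated list of feature names.
            return flag in line.split()
    return False

# Usage on Linux (flag names are exactly as the kernel reports them):
# with open("/proc/cpuinfo") as f:
#     print(has_cpu_flag(f.read(), "avx512_vnni"))
```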
Sources: harukaze5719 (Twitter), VideoCardz

52 Comments on Intel Core i9-11900K "Rocket Lake" Boosts Up To 5.30 GHz, Say Rumored Specs

#26
R0H1T
Hey, that was 25 MHz, though back in the day (when I wasn't even born) it was probably an entire tier of performance!
efikkanIf a hypothetical quad core beat any 8 or 12 core on the market, wouldn't you want to buy it?
Not unless the hypothetical quad core also supports DDR5-6000 or higher, so no, this thing isn't winning all MT benches vs the 10900K.
#27
ThrashZone
Hi,
Yeah, if it hits its boost for a couple seconds it's not false advertising lol
Trouble is always how long it stays at boost
The 10900K is pretty disappointing at default settings; it throttles badly even on a short test like R20, but it did hit its boost, just not for very long lol :roll:
#28
EarthDog
ThrashZone10900k is pretty disappointing at default settings throttles bad just on a short test like R20
I can't say I experienced this in any CB R20 testing over that same number of chips. It helps to have a proper motherboard and cooling, as well as a more complete understanding of how boost works. IIRC, stock Intel boost lasts 75 seconds, assuming all parameters are met. If you don't like how the CPU boosts, change it in the BIOS so it stays there (overclocking). Its nominal clock speed is what, 5/5.1 GHz anyway? For the most part, a quick blip is all that is needed to get a one- or two-threaded program up and running quicker.
R0H1THey that was 25Mhz,
On your machine. Others were up to 100 MHz off. Either way, it isn't a lot, but even at 25 MHz... if the box says X.X, I don't want X.X - 0.anything. :)
#29
RandallFlagg
efikkanThe rated 5.3 GHz boost is highly optimistic; even when the CPU doesn't hit thermal or power limits, it rarely goes above 5.1 GHz at stock.
When Intel replaces its 10-core i9-10900K with an 8-core i9-11900K, it's going to be better despite having fewer cores. People need to stop fixating on specs and focus on relevant benchmarks.
If a hypothetical quad core beat any 8 or 12 core on the market, wouldn't you want to buy it?


Then I got to ask, why?
For the mainstream market, 8 cores is going to be plenty for a long time.
Most power users, content creators, developers, etc. who actually benefit from more than 8 cores generally want other features as well, such as more PCIe lanes for SSDs etc. I would argue it would be better if both AMD and Intel capped their mainstream platforms at 100 W and 8 cores (for now) and moved their 12+ core parts to their respective HEDT platforms, instead of AMD starting their Threadrippers at 24 cores.
Gotta agree with this. I would even say that the majority of games and applications are fine at 4 fast cores and don't get any benefit from 6 cores. A lot of the perception of needing more cores is a result of those higher core count SKUs from both AMD and Intel having higher clock speeds.

I mean you need look no further than these two reviews - the 10320 at 4.6 GHz (4C/8T) and the 3300X (4C/8T), noting that the 3300X behaves more like a Zen 3 4C/8T since it does not suffer the cross-chiplet cache penalty common to higher core count Zen 2. Even in AAA titles these 4-core chips are, in aggregate, only a percent or two behind their 6-core peers. And I would bet that if they had higher clocks like the higher SKU chips (the 10320 caps at 4.6 GHz vs 5.1 and 5.3 for the 10700K and 10900K) they could well be faster than the higher core count chips in many situations, as the extra cores also result in more frequent context switching and heavier memory bus usage - which is bad for gaming.

#30
dicktracy
Zen 3 is only a tiny bit faster than outdated Skylake, and AMD's official number for the 5950X is "5%" faster, owing much of that to PCIe 4.0. Rocket Lake, with a new arch and PCIe 4.0, will make short work of it.
#31
Raendor
RandallFlaggGotta agree with this. I would even say that the majority of games and applications are fine at 4 fast cores and don't get any benefit from 6 cores. A lot of the perception of needing more cores is a result of those higher core count SKUs from both AMD and Intel having higher clock speeds.

I mean you need look no further than these two reviews - the 10320 at 4.6 GHz (4C/8T) and the 3300X (4C/8T), noting that the 3300X behaves more like a Zen 3 4C/8T since it does not suffer the cross-chiplet cache penalty common to higher core count Zen 2. Even in AAA titles these 4-core chips are, in aggregate, only a percent or two behind their 6-core peers. And I would bet that if they had higher clocks like the higher SKU chips (the 10320 caps at 4.6 GHz vs 5.1 and 5.3 for the 10700K and 10900K) they could well be faster than the higher core count chips in many situations, as the extra cores also result in more frequent context switching and heavier memory bus usage - which is bad for gaming.

That 10400F is great in all honesty. It's half the price of the 5600X, but about 90%+ of the performance when I looked at the new charts.
#32
RandallFlagg
RaendorThat 10400F is great in all honesty. It's half the price of the 5600X, but about 90%+ of the performance when I looked at the new charts.
Yeah, in the real market the 10320 is overpriced and the 3300X is both overpriced and rare as rain in the desert. That leaves the 10400 as the go-to value gaming chip. Makes me wonder about the 11400 a bit. Even if it comes in at the 10400's original MSRP of $189, it could be a real killer gaming chip.

To wit, the 10400 is sold out at my local MicroCenter, and their most popular in-stock chips right now are the 9700K, 9900K, and 10700K.
#33
ThrashZone
Hi,
lots in Houston all except 10900k & KFC
#34
Makaveli
EarthDogNot sure that is true. I've dealt with 4/5 of these chips and they all hit 5.3 Ghz without issue. In fact, I can't recall of the dozens of Intel CPUs, them not hitting their boost. This isn't an AMD chip where a BIOS fix was needed to reach boost even though all parms were met... :p
Most of the 10900K chips will hit that within PL2 at 250 watts of power, and the chip has to be under 70°C.
#35
EarthDog
MakaveliMost of the 10900K chips will hit that with PL2 and 250 watts of power and has to be under 70c.
Yeah.. no clue... they all boosted without issue for me. First I've heard of such a thing on Intel when conditions are met.
#36
Lionheart
efikkanThe rated 5.3 GHz boost is highly optimistic; even when the CPU doesn't hit thermal or power limits, it rarely goes above 5.1 GHz at stock.
When Intel replaces its 10-core i9-10900K with an 8-core i9-11900K, it's going to be better despite having fewer cores. People need to stop fixating on specs and focus on relevant benchmarks.
If a hypothetical quad core beat any 8 or 12 core on the market, wouldn't you want to buy it?


Then I got to ask, why?
For the mainstream market, 8 cores is going to be plenty for a long time.
Most power users, content creators, developers, etc. who actually benefit from more than 8 cores generally want other features as well, such as more PCIe lanes for SSDs etc. I would argue it would be better if both AMD and Intel capped their mainstream platforms at 100 W and 8 cores (for now) and moved their 12+ core parts to their respective HEDT platforms, instead of AMD starting their Threadrippers at 24 cores.
To answer your question with such ease: why not? Plus I want some competition from the blue side. Who wouldn't want a 12-core CPU @ 5.3 GHz? I know I would. Do I need it? Hell no. Do I want it? Hell yes! 6-8 core CPUs may be the mainstream, but that doesn't mean that should be the limit. Plus the naming scheme is all ruined now: 10900K = 10 cores/20 threads, and now the new 11900K is an 8-core/16-thread part? That's just silly.
dicktracyZen 3 is only a tiny bit faster than outdated Skylake, and AMD's official number for the 5950X is "5%" faster, owing much of that to PCIe 4.0. Rocket Lake, with a new arch and PCIe 4.0, will make short work of it.
Translation - AMD bad, Intel & Nvidia good.
#37
Makaveli
LionheartTranslation - AMD bad, Intel & Nvidia good.
Pretty much what I got out of that post also.
#38
DeathtoGnomes
I don't see any mention of TDP and wattage in this PR.
#39
watzupken
dicktracyZen 3 is only a tiny bit faster than outdated Skylake, and AMD's official number for the 5950X is "5%" faster, owing much of that to PCIe 4.0. Rocket Lake, with a new arch and PCIe 4.0, will make short work of it.
I don't think PCIe 4.0 contributed anything to the performance bump, to be honest. Even the fastest card now is not really PCIe bandwidth bound, if I am not mistaken.

As for 5% faster, I think it depends on what metrics you are looking at. If you are looking purely at gaming, then yeah, I agree the improvement is not great compared to Intel's Comet Lake. However, you do need to know that the Intel chip runs at a higher boost speed, so 5% faster with a clock speed disadvantage shows that the IPC gain is actually higher. In any case, I am looking forward to seeing what Rocket Lake will bring to the table, though I am also expecting an incredible amount of power draw and heat output from the high-end models.
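Rough numbers support the bandwidth point: since Gen 3, PCIe uses 128b/130b line encoding, so usable bandwidth is roughly transfer rate × lanes × 128/130 ÷ 8 bytes. A hedged back-of-the-envelope sketch in Python; the helper name is ours, and protocol/packet overhead is ignored:

```python
def pcie_bandwidth_gbs(transfer_rate_gt_s: float, lanes: int = 16) -> float:
    """Approximate usable PCIe bandwidth in GB/s.

    Assumes 128b/130b line encoding (PCIe Gen 3 and later) and ignores
    protocol overhead, so real-world throughput is somewhat lower.
    """
    return transfer_rate_gt_s * lanes * (128 / 130) / 8

# Gen 3 x16 (8 GT/s)  -> ~15.8 GB/s
# Gen 4 x16 (16 GT/s) -> ~31.5 GB/s
```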
#40
bonehead123
Soooo.....

Just exactly how many plus signs (10, 15, 20 ?) can we look forward to at the end of this pitiful pathetic semi-simulated uptick of yet anutha 14nm cpu ???????
#41
medi01
R0H1TBased on just one game? Well if you say so :ohwell:
It's even "cooler" than that, once you take into account that we are talking about a game which even newest most expensive GPUs struggle with, getting CPU limited in it takes... talent.
#42
EarthDog
bonehead123Soooo.....

Just exactly how many plus signs (10, 15, 20 ?) can we look forward to at the end of this pitiful pathetic semi-simulated uptick of yet anutha 14nm cpu ???????
What's worse: that these pathetic semi-simulated upticks of yet anutha 14 nm CPU can compete performance-wise with AMD, or that the process it's built on has been reused so many times?

Nobody GAF about the process node. The only way most people do is via power consumption... and most don't GAF about that either. ;)
#43
Am*
efikkanThe rated 5.3 GHz boost is highly optimistic; even when the CPU doesn't hit thermal or power limits, it rarely goes above 5.1 GHz at stock.
When Intel replaces its 10-core i9-10900K with an 8-core i9-11900K, it's going to be better despite having fewer cores. People need to stop fixating on specs and focus on relevant benchmarks.
If a hypothetical quad core beat any 8 or 12 core on the market, wouldn't you want to buy it?


Then I got to ask, why?
For the mainstream market, 8 cores is going to be plenty for a long time.
Most power users, content creators, developers, etc. who actually benefit from more than 8 cores generally want other features as well, such as more PCIe lanes for SSDs etc. I would argue it would be better if both AMD and Intel capped their mainstream platforms at 100 W and 8 cores (for now) and moved their 12+ core parts to their respective HEDT platforms, instead of AMD starting their Threadrippers at 24 cores.
First off, no, I wouldn't want to buy a hypothetical quad core -- even if it was 2x faster than 8/12 core CPUs (which is never going to happen anyway). At best, I would possibly look at getting it for older legacy programs that have trouble scaling with more cores, but there are barely any left at this point. We've had 8 cores as the standard in sub-$300 gaming consoles for 7 years now -- that's already the BARE MINIMUM. Nobody's buying high-end desktop CPUs for "good enough" performance today. Quad cores have been good enough for so long only because of Intel's maintained monopoly on the desktop market (which they no longer have). And this is assuming they don't get hit by another big security exploit that takes their performance advantage away post-patch (practically guaranteed at this point with their rehashed old architecture).

And what's going to be "plenty" for a long time is irrelevant. Nobody's buying enthusiast-level CPUs for "good enough" performance, let alone for their garbage throwaway GPUs, and this launch (if true) is a complete embarrassment from Intel and their useless bean-counter CEO. We could argue about whether the iGPU would be useful if Intel were at least somewhat serious about driver support -- which has been basically non-existent since forever. AMD and Nvidia provide dozens of driver releases and around 5-10 years of support for their GPUs, compared to Intel, who at best release one or two updates per year for around 2 years (with the exception of their rehashed iGPUs that stayed the same for multiple generations, since they've barely changed anything in them for around half a decade besides some minor tweaks). I honestly think it's time for this dinosaur company to go out of business, and I wouldn't miss them one bit if they did -- they have been a joke for over half a decade now and refuse to compete on either price or performance in any meaningful way, despite being massively behind their direct competitor, while outside competitors are already knocking on their door and about to blow them out of the water (and I'm not even talking about AMD -- Apple Silicon and RISC-V competitors at a fraction of their size are going to mop the floor with Intel in the next year or two in both efficiency and performance). If they don't get their act together and go through a complete management restructure, I'm 100% certain we will be looking at another has-been dinosaur patent troll (a.k.a. IBM 2.0).

And please stop giving awful suggestions to AMD or Intel about handicapping their mainstream platforms at 8 cores. Not everyone who needs more than 8 CPU cores needs the extra PCIe lanes, nor has the space for an HEDT socket (which is too large for those of us looking for a small but future-proof ITX build, myself included). The fact that AMD's 12 and 16 core parts have been selling out consistently on the mainstream platform only proves you wrong.
#44
efikkan
LionheartTo answer your question with such ease: why not? Plus I want some competition from the blue side. Who wouldn't want a 12-core CPU @ 5.3 GHz? I know I would. Do I need it? Hell no. Do I want it? Hell yes! 6-8 core CPUs may be the mainstream, but that doesn't mean that should be the limit. Plus the naming scheme is all ruined now: 10900K = 10 cores/20 threads, and now the new 11900K is an 8-core/16-thread part? That's just silly.
Well, my point is that these sockets are mainstream platforms; by pushing too many cores and PCIe lanes onto this platform they are raising the costs for most PC buyers. I would much rather they lowered the entry point for HEDT and kept mainstream at 100 W TDP and ~8 cores (for now).
Am*First off, no I wouldn't want to buy a hypothetical quad core -- even if it was 2x faster than 8/12 core CPUs (which is never going to happen anyway). At best, I would possibly look at getting it for older legacy programs that have trouble scaling with more cores, but there are barely any left at this point.
Clearly you don't know how software scales.
Even in a perfect environment, two cores at half speed would not catch up with a single core at full speed.
The more cores you divide a workload between, the more overhead you'll get. There will always be diminishing returns with multithreaded scaling.
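The diminishing-returns argument above is essentially Amdahl's law: speedup is capped by the serial fraction of the workload. A small illustrative Python sketch; the 90%-parallel figure is a made-up example value, not a measurement of any real program:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Ideal speedup on n cores when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a 90%-parallel workload gains little past 8 cores:
# amdahl_speedup(0.90, 4)  -> ~3.08
# amdahl_speedup(0.90, 8)  -> ~4.71
# amdahl_speedup(0.90, 16) -> ~6.40
```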
Am*We've had 8 cores being the standard now in sub-300 USD gaming consoles for 7 years -- that's already the BARE MINIMUM.
What kind of reasoning is this?
And your phone probably has 6-8+ cores as well. Performance matters, not cores.
Am*Nobody's buying high end desktop CPUs for "good enough" performance today.
Just a kind reminder, Rocket Lake (and AM4) are mainstream platforms. ;)
Am*Quad cores have been good enough for so long so far only because of Intel's maintained monopoly on the desktop market (which they no longer have).
Nonsense. There were 8-core prototypes of Cannon Lake (the cancelled shrink of Skylake), and planned-then-cancelled 6-core variants of Kaby Lake, long before Zen. 14 nm might be mature now, but it took a very long time to get there.
Am*And this is assuming they do not get another big security exploit which will take their performance advantage away post-patch (practically guaranteed at this point with their rehashed old architecture).
A well-known architecture is likely to have fewer problems than an untested one. The more time passes, the greater the chance of finding a large problem.
And don't pretend like most of these vulnerabilities didn't affect basically every modern CPU microarchitecture in one way or another.
Am*And what's going to be "plenty" for a long time is irrelevant. Nobody's buying enthusiast level performance CPUs for "good enough" performance
So people should buy CPUs they don't need, just in case?

The reality is, unless you're a serious content-creator, 3D-modeller, developer or someone else who runs a workload which significantly benefits from more than 8 cores, you're better off with the fastest 6 or 8 core you can find, for the most responsive and smooth user experience. Not to mention the money saved can be put into other things, including a better GPU, or future upgrades etc.
Am*I honestly think it's time for this dinosaur company to go out of business and wouldn't miss them one bit if they did -- they have been a joke for over half a decade now and are refusing to compete on either price or performance in any meaningful way, despite being massively behind their direct competitor whilst outside competitors are already knocking on their door and about to blow them out of the water…
The factual inaccuracies here are too severe to even cover in this discussion thread.
You are just biased against Intel, and don't even care about the facts.
We need more competition, not less.
Am*Apple Silicon and RISCV competitors at a fraction of their size are going to mop the floor with Intel in the next year or two in both efficiency and performance
You don't even have a faint idea about what these things are.
Am*And please stop giving any more awful suggestions to either AMD or Intel about handicapping their mainstream platforms to 8 cores. Not everyone who needs more than 8 CPU cores needs the extra PCI-E lanes, nor has the space for a HEDT socket (which is too large for those of us looking for a small but future-proof ITX build, myself included). The fact that AMD's 12 and 16 core parts have been selling out consistently on the mainstream platform only proves you wrong.
It's not about handicapping; it's about driving up costs with unnecessary features when there is already a workstation/enthusiast platform to cover this. And BTW, HEDT motherboards in mini-ITX form factor do exist; not that it makes any sense, but anyway.

And how many of those buying these 12 and 16 core models are doing it just to brag about benchmark scores? Probably a good portion.
#45
kapone32
dicktracyZen 3 is only a tiny bit faster than outdated Skylake, and AMD's official number for the 5950X is "5%" faster, owing much of that to PCIe 4.0. Rocket Lake, with a new arch and PCIe 4.0, will make short work of it.
You hope
#46
EarthDog
kapone32You hope
Do you think that will be difficult? Even on 14++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++?

Gaming performance is about the same...ST performance is slightly faster and MT performance, with SMT, is notably faster. It's not a reach at all to think they'll take the IPC and single threaded performance crown. Where I do believe they will have trouble is with MT and improving the efficiency of HT vs SMT. But clocks and IPC can overcome that difference. I fully believe RL will be as performant or more performant than Zen2/5000 series. The biggest issue will be price and power use IMO.

As I said above, nobody gives a hoot about the process node... half of PC users can't even spell it, let alone know what relevance it has to their CPU. I also believe fewer people care about power use than is assumed. In the enthusiast realm, I feel it's simply a talking point to hold over your competitor more than something with real-world consequences.
#47
kapone32
EarthDogDo you think that will be difficult? Even on 14++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++?

Gaming performance is about the same...ST performance is slightly faster and MT performance, with SMT, is notably faster. It's not a reach at all to think they'll take the IPC and single threaded performance crown. Where I do believe they will have trouble is with MT and improving the efficiency of HT vs SMT. But clocks and IPC can overcome that difference. I fully believe RL will be as performant or more performant than Zen2/5000 series. The biggest issue will be price and power use IMO.
It remains to be seen. My thought process is that since the 5600X easily OCs to 4.7+ GHz, we may indeed get XT chips at 5 GHz to respond to this chip.
#48
EarthDog
kapone32It remains to be seen. My thought process is that since the 5600X easily OCs to 4.7+ GHz, we may indeed get XT chips at 5 GHz to respond to this chip.
Considering most XTs are 100 MHz above the non-XT (one is 200 MHz), I wouldn't hold my breath for 5 GHz (i.e. 300 MHz more) from them. Maybe I'm wrong.

Just be sure to bitch about those as much as others bitch about "super" and "Ti" cards coming out in response... Goose/Gander....I find that hilarious (not that you do it... but plenty here had no issues with their precious' incremental response but did with others)... :p
#49
Am*
efikkanWell, my point is that these sockets are mainstream platforms; by pushing too many cores and PCIe lanes onto this platform they are raising the costs for most PC buyers. I would much rather they lowered the entry point for HEDT and kept mainstream at 100 W TDP and ~8 cores (for now).
Complete nonsense. You can still get 8 core CPUs for the former price of quad cores. The options with extra cores do not affect pricing in any way.
efikkanClearly you don't know how software scales.
Even in a perfect environment, two cores at half speed would not catch up with a single core at full speed.
The more cores you divide a workload between, the more overhead you'll get. There will always be diminishing returns with multithreaded scaling.
More irrelevant nonsense. Even if I compare an ancient overclocked Sandy Bridge to today's top-of-the-line Comet Lake, core vs core, we're not getting double the performance per core even 10 years later. These upcoming CPUs are not advertised as double the speed per core vs this year's -- rather, barely 10% gains. Otherwise Intel would be plastering it on every page.
efikkanWhat kind of reasoning is this?
And your phone probably have 6-8+ cores as well. Performance matters, not cores.
I never said performance doesn't matter. What I am saying is that in today's world, where development time is being cut at every point possible and more and more games are ports from consoles, 8-core CPUs are the bare minimum now and need to be the entry-level products, not enthusiast-level -- since you do not get the hardware optimizations on Windows that you would on consoles, and you need extra cores to handle the bloat and background tasks of Windows 10. Again, your argument is rendered worthless -- since Intel is not advertising performance gains anywhere close to what would be necessary to make a difference.
efikkanNonsense. There were 8-core prototypes of Cannon Lake (cancelled shrink of Skylake), and planned and cancelled 6-core variants of Kaby Lake, long before Zen. 14nm might be mature now, but it took a very long time to get there.
You're desperately reaching. At no point did I mention cancelled or engineering-sample CPUs -- we're talking about the longevity quad cores had up to now, and why that was the case due to the technological standstill and AMD massively falling behind. With AMD now back to leapfrogging performance gains every 2 years, these 8 cores will age like milk in comparison. Your argument that 8 cores is enough today is as wrong as people back in 2008 saying dual cores would be enough "for a long time"...
efikkanA well known architecture is likely to have less problems than an untested one. The more time passes by, the greater the chance of finding a large problem.
And don't pretend like most of these vulnerabilities didn't affect basically every modern CPU microarchitecture in one way or another.
Utter drivel. AMD's architecture has been stable since 2nd-gen Ryzen, and I can count on one hand the vulnerabilities that affected AMD (which they addressed in good time). Compare that to Intel's godawful architecture, which has been picked apart at every level -- from branch prediction and cache-level vulnerabilities to SGX/ME/Hyper-Threading (which itself is still considered a security risk, and they are still shipping vulnerable CPUs).
efikkanSo people should buy CPUs they don't need, just in case?
No -- they should buy enthusiast-level CPUs that have at least some room for future-proofing when the time comes, since people are keeping their hardware for longer.
efikkanThe factual inaccuracies here are too severe to even cover in this discussion thread.
You are just biased against Intel, and don't even care about the facts.
We need more competition, not less.
You seem to have trouble with basic comprehension -- considering my laptop, tablet and 2x desktops are running Intel.

And at this point, I think Zhaoxin and VIA are more likely to provide better competition to AMD than Intel.
efikkanYou don't even have a faint idea about what these things are.
Couldn't think of a more worthless reply. You sure got me there, professor...
efikkanIt's not about handicapping, it's about driving up the costs with unnecessary features when there is already a workstation/enthusiast platform to cover this. And BTW, HEDT motherboards in mini-ITX form factor does exist, not that it makes any sense, but anyway.

And how many of those buying these 12 and 16 core models are doing it just to brag about benchmark scores? Probably a good portion.
There are no unnecessary features. Suggesting that people who want more than 8 cores jump to HEDT platforms is beyond stupid. ITX-sized HEDT motherboards practically don't exist outside of the handful made by ASRock, and they are both impossible to get and cost a fortune due to the extra VRM/power components alone needed to handle higher-end CPUs. You're so clueless that you're contradicting your own point directly, since people would be paying for extra CPU PCIe lanes that they would never use. And what people's uses are -- whether for benchmarking or not -- is none of your business.
#50
InVasMani
kayjay010101So because one game has a major bug that is bound to be fixed (literally has already been fixed by the community by changing one value in the .exe file with a hex editor; I'm guessing this will be updated in a hotfix soon so it's applied to everyone, eradicating the bug) that happens to make one company's CPUs perform better, that automatically makes their CPUs better for gaming? That might be the worst attempt at excusing fanboyism I've seen in a while
You can use imagecfg on the .exe file and set it to run however you want in terms of CPU core usage. It works on most exes, except a few system ones that can't be tampered with, so to speak.
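For background, imagecfg works by writing a processor-affinity bitmask (one bit per logical CPU) into the executable's PE header. A hypothetical Python helper showing how such a mask is built; on Linux the same idea is exposed at runtime via `os.sched_setaffinity`:

```python
def affinity_mask(cores) -> int:
    """Build a CPU-affinity bitmask: bit i set means logical core i is allowed."""
    mask = 0
    for core in cores:
        mask |= 1 << core  # one bit per logical processor
    return mask

# Restricting a process to the first four logical cores:
# affinity_mask([0, 1, 2, 3]) == 0xF
```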