
Alleged Leaked Details on Intel Comet Lake-S Platform Require... You Guessed It... A New Platform

It doesn't bother me in the slightest if Intel changed the socket every month. I build a new rig around every 4 years and buy a new CPU and MOBO. What Intel does in between doesn't affect me at all.

If you want to upgrade every time Intel drops a new CPU then I guess you have reason to be displeased.
 
so, uhh, how loooonnnggggg has it been since Intel's last socket? 6 months? 9 months? I forgot what I had for lunch yesterday.
Here's a good place to look:
Socket 1151 has been going for 5 years. It had one major transition, from DDR3 to DDR4.
The longest AMD has gone between new socket introductions is 3 years (Socket A -> Socket 754).

So does anyone want to continue whining about Intel making a new advancement?
 
Yeah, because we all know that the socket format is the only thing that matters; pin-outs, CPU compatibility, features, etc. mean jack shit. The 4th-grader-level arguments continue.
 
Yeah, because we all know that the socket format is the only thing that matters; pin-outs, CPU compatibility, features, etc. mean jack shit. The 4th-grader-level arguments continue.
Are you done? Or do we need to ask the mods to jump in again?
 
Are you done? Or do we need to ask the mods to jump in again?

Did you recognize yourself in that description or what? Sorry buddy, if you thought that was the case, that's your business, not mine. Don't reply if you're done, it's as simple as that; I didn't address you in any way.
 

Let me ask you this: why are you whining about our whining?
Why are you, on the one hand, so convinced Intel does not care for its consumers (and somehow you see this as a good thing), and on the other hand trying to stop us from complaining?
If you feel it's pointless because Intel won't listen, OK... move on? You understand what a comment section is for; if you don't like it, then don't be here.
Not to be impolite, but nobody needs you here, and certainly not to come telling us what we should or should not have a problem with, acting like we don't know what we're talking about but you somehow do.

I already mentioned Intel had a long stretch with Socket 775; after that it changed all the time, and IIRC even some "same" sockets aren't really the same.

Meanwhile, AMD has made Socket AM4 and is just going forward with that.
That, we feel, is better, and again, Intel can do whatever they want; we just won't be interested in their products.
AND with your roll-eyes stuff: clearly you feel we are just a minority with no influence on the massive sales Intel will make. Then AGAIN, why are you here? Why are you so hellbent on this, with broken arguments, when you are so convinced that we, the people you are fighting so hard against, do not matter?
 
I really hope Comet Lake will regain some base clock. I believe the declining base clocks are a result of increasing core count while retaining TDP and node, and the node refinements haven't been enough to keep the base clocks up.

There have been engineering samples of Cascade Lake-X running at a 4.0 GHz base, and I do believe it would be possible for Comet Lake-S as well if the cores are realigned a little.

Personally, I don't think the mainstream market needs more than 8 cores for now, even though AMD offers 12 and soon 16 cores. Making faster cores that scale well across fewer cores is more important for most real workloads. The i9-9900K today struggles with throttling due to thermal density. With some tweaks and the TDP increased to 125 W, a sustained 4 GHz under multithreaded AVX load and a sustained ~4.7 GHz under non-AVX loads could yield a ~5-10% performance increase without going beyond 5 GHz max boost, and that's without changing the architecture.
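To put rough numbers on that estimate, here's a minimal back-of-the-envelope sketch. It assumes multithreaded performance scales roughly linearly with sustained all-core clock, and the baseline sustained clocks are my own assumptions, not figures from the post:

```python
# Minimal sketch: multithreaded performance assumed proportional to
# sustained all-core clock (ignores memory and cache effects).
# The baseline clocks below are assumptions, not figures from the post.

def gain(new_ghz: float, old_ghz: float) -> float:
    """Relative performance change from a sustained clock change."""
    return new_ghz / old_ghz - 1.0

# Assumed: a 95 W i9-9900K settles near its 3.6 GHz base under heavy
# AVX load; a 125 W part could hold the proposed 4.0 GHz instead.
print(f"AVX load:     {gain(4.0, 3.6):+.1%}")   # ~ +11%
# Assumed: ~4.4 GHz sustained non-AVX today vs. a sustained 4.7 GHz.
print(f"non-AVX load: {gain(4.7, 4.4):+.1%}")   # ~ +7%
```

Which lands right around the ~5-10% range, give or take.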

I recall there were some leaks, or sort of, earlier this year: a 4 GHz base with fewer cores and a whopping TDP. Unless Intel has managed to refine that, I highly doubt they can pull even a 3.8 GHz base on 10 cores :rolleyes:
While I partially agree on the "mainstream market needs more than 8 cores for now" part, isn't it nice to have the option? I mean, in the dark ages you needed 2 rigs for the same task; now it can be done faster in just 1 rig. 12 cores on mainstream is a godsend for us "budget professionals" :D
On the core part, I believe we've already reached peak single-core MIPS; I haven't seen any big leap since the Athlon 64 FX-53, thus adding more cores or more threads is more feasible.
 
It doesn't bother me in the slightest if Intel changed the socket every month. I build a new rig around every 4 years and buy a new CPU and MOBO. What Intel does in between doesn't affect me at all.

If you want to upgrade every time Intel drops a new CPU then I guess you have reason to be displeased.
New sockets are mainly a "problem" for the selection of motherboards. Probably only ~1% of PC builders ever upgrade to a new CPU within the same architecture, and there are very few reasons to do so.

I think the ideal would be one socket per architecture. Intel switches a little too often, but AMD switches too rarely and promises backward and forward compatibility it can't deliver on.
Almost no motherboard maker offers proper support for any motherboard beyond 2 years anyway.

While I partially agree on the "mainstream market needs more than 8 cores for now" part, isn't it nice to have the option? I mean, in the dark ages you needed 2 rigs for the same task; now it can be done faster in just 1 rig. 12 cores on mainstream is a godsend for us "budget professionals" :D
Options are always good, especially when users' needs differ :)
But the fact remains that for many non-synthetic workstation tasks a faster 8-core will often beat a 12-core. A good workstation CPU needs to strike a balance between core speed and core count.
 
Here's a good place to look:
Socket 1151 has been going for 5 years. It had one major transition, from DDR3 to DDR4.
The longest AMD has gone between new socket introductions is 3 years (Socket A -> Socket 754).

So does anyone want to continue whining about Intel making a new advancement?
You mean 2066 from 2017, right? (It's right in that chart. :p) We are counting ALL CPU/socket combos, not just PC; anything else would be biased, don't ya think?
 
You mean 2066 from 2017, right? (It's right in that chart. :p) We are counting ALL CPU/socket combos, not just PC; anything else would be biased, don't ya think?
I was implying more the Socket 1151 that Socket 1200 is replacing. In the mainstream desktop arena, it's been around since 2015, with the DDR4 revision in 2017. So it kinda depends on your perspective.
 
Stay on topic. If someone has an issue with a member, take it to PM's and keep it civil, ignore them, or bring it to a mod.
 
The newest CPUs from both companies mislead us consumers in terms of TDP. A CPU said to be 65 W consumes double that when it boosts, without any overclocking. Now, if Intel does bring a 135 W CPU and it stays within that range at stock boosts, I see no problem with it; in fact, I think that's the ethical way.
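For context, that "65 W chip drawing double" behaviour comes from Intel's PL1/PL2 power limits. Here's a minimal sketch of the idea; the PL2 and tau values below are illustrative assumptions (real ones vary by SKU and motherboard), and the real limit is a rolling average rather than the hard cutoff shown here:

```python
# Minimal sketch of PL1/PL2 behaviour: the CPU may draw up to PL2 for
# roughly the boost window tau, then falls back to PL1 (the advertised
# TDP). All three values are illustrative assumptions, and real parts
# use a rolling power average, not a hard cutoff like this.

PL1 = 65.0    # W, sustained limit ("the TDP on the box")
PL2 = 128.0   # W, short-term boost limit (assumed ~2x PL1 here)
TAU = 28.0    # s, assumed boost window

def power_limit(seconds_under_full_load: float) -> float:
    """Approximate package power allowed at a point in a full-load run."""
    return PL2 if seconds_under_full_load < TAU else PL1

for t in (5, 20, 60):
    print(f"t = {t:>2} s: ~{power_limit(t):.0f} W")
```

So both the "65 W" sticker and the ~130 W boost draw are technically within spec; the sticker just only describes the sustained state.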
 
Stay on topic. If someone has an issue with a member, take it to PM's and keep it civil, ignore them, or bring it to a mod.

I guess my earlier warning went for nothing... thread bans and permabans issued to those that have one thing in mind: to troll and bait.
 
People are so silly. 98% of you don't need PCIe 4 in any way, and people are getting angry over this.
 
[…]
My point was that Intel controls when 10nm is ready. They screwed up by announcing it long before they were ready to ship it, and they got hurt by that.

But Intel doesn't control when PCI-E 5.0 is ready. PCI-SIG does that. And if Intel announces (or leaks) that they're going to implement PCI-E 5.0 in the future, then that means they (and PCI-SIG) are both sure that PCI-E 5.0 will be ready to go at that time.

Why am I sure of that? Because there are 900 other companies all involved in the decision making here, and the announcement of PCI-E 5.0 being ready to go would have only happened after those 900 companies all agreed that PCI-E 5.0 was ready to launch and could be delivered on time.
I still don't get it, and I'm just as confused as @fynxer here. I just don't get the correlation you're trying to make.
Where's the connection between Intel announcing that it will feature PCI-E 5.0 at some point in the future and Intel releasing actual hardware supporting even at least its predecessor?

Are you implying the standard isn't there yet or wouldn't be finished? PCI-E 4.0 was already completed in '17!
And we've even had the surprising situation that PCI-E 5.0 was completed well before its actual predecessor reached the consumer market. PCI-E 4.0 has been complete and ready to manufacture for a while now, and so has PCI-E 5.0. So what are you trying to say?

All of this is true, but, again, @juiseman was implying that because Intel fucked up the engineering of 10nm, Intel would therefore be incapable of implementing PCI-E 5.0 on time.
To be honest, and to be fair given their recent history of announcements with no actual products following them, I would doubt that too.
For instance, I wouldn't bet even a single dime on Intel bringing actual hardware supporting anything beyond PCI-E 3.0 before nVidia features PCI-E 4.0 on their cards.

Would you, @theoneandonlymrk, please explain to me exactly how juiseman's point is true, bearing in mind that:

1 - designing a controller to implement an existing standard is nowhere near as complicated as building a new semiconductor manufacturing process from scratch
2 - PCI-SIG doesn't release standards that aren't ready to be implemented, whereas Intel *did* announce a 10nm technology that was nowhere near implementation.
Well, given how Intel has managed to fuck up virtually everything over the last couple of years (not just after Ryzen but even well before that)…

There are no proper competing products from them against anything Ryzen, no new approach instead of just warming up their age-old Core µArch for the next half of a decade. Like getting flexible and sporting some bright new ideas and approaches, or getting a new mask and just copying AMD's design, even as some quick-and-dirty approach. No will to change their old, filthy corporate habits and behaviour, just relying on age-old grey-ish to straight-out illegal practices to try to stay on top at all costs.

See, they've literally fucked up every entry into another lucrative market since the Sandy Bridge era. Their second attempt to enter the mobile market, to create a sound sales outlet for their low-cost Atoms (and compete against ARM), was a flaming disaster from start to finish. Their wireless approach with 3G, 4G and 5G was also a flaming disaster from start to finish. And look how much they pumped into their dedicated graphics department to establish any dominance in graphics, only to end up compulsively bundling it with other products by implementing it as embedded graphics, their iGPU (hint: no one would've bought or featured their graphics in their products unless they were forced to). Well... the thing is, you can go on and on with examples.

The thing is, they were just plain unable to bring any greater innovative, competitive products whatsoever (apart from their Core µArch; and given Meltdown and the like, even that was cheated on) to enter a new market on their own, like AMD did by bringing Threadripper, which wasn't on any roadmap nor planned just a few months before it hit the market as an actual product (and redefined the HEDT space altogether in one sweep).

So having said that, I would dare to say that chances are they will fuck that up too, yes.
The thing is, their corporate character hasn't changed a bit since the Pentium (4) days, and they (just as always) get dirty whenever some competitor comes up with an innovative product they weren't prepared for (hint: they never were and most likely never will be). They ain't innovating at all.

Smart
 
People are so silly. 98% of you don't need PCIe 4 in any way, and people are getting angry over this.
Exactly, I don't need PCIe4; I just want a few more lanes to be dedicated to things like NVMe SSDs. I'm not asking for much. Right?
 
Actually, I am :D
No need for fiddling all day, delidding a hundred-dollar CPU and voiding the warranty in the process, putting extra dollars into cooling, mumbo jumbo with "AVX offset"; just plug it in and let auto do the rest. I don't need more headaches while running virtual machines.
It just came to my mind: if the i7-8700K 6c/12t had a 3.7 GHz base and the i7-9700K 8c/8t had a 3.6 GHz base, could it be that a future i7 Comet Lake would be 10c/10t with a 3.5 GHz base? :rolleyes:



I noticed you don't post any screenshot with your 1-core boost. Go ahead, post a screenshot of the awesome "advertised" 4.75 GHz that your cherry-picked CPU can do! :D Probably cause u can't, hmmm... Did you join cause yer getting paid to flame threads?
 
Socket 1151 has been going for 5 years.
Only after hacking and modding the BIOS to circumvent Intel's artificial limitations, devised to make more money from selling new chipsets for rebranded CPUs.

Not many people are running "9th" gen on "6th" gen mobos...

And now there's a third socket for the Skylake arch...

And I'm sure Intel will come up with yet another socket when DDR5 arrives.
The only question is whether they'll achieve that great innovation by pulling one pin out or sticking one new pin in.
Or cheap out and add the incompatibility in firmware...

People are so silly. 98% of you don't need PCIe 4 in any way, and people are getting angry over this.
While it's of little use at the moment, what about three or four years from now?
I think we both agree that a good eight(+)-core CPU will easily last that long.
And I'm pretty sure that by that point GPUs, for example, will be quite a bit more advanced than they are currently.
 
Wow this post is HOT.

Personal opinion,
I don't really care about LGA1200, because the vast majority has pretty much concluded why Intel keeps changing sockets/chipsets.
Just factor that into the cost and, well, cost/performance tells you everything.

My concern is: how much thermal density does this 10-core have?
Are we finally getting a portable nuclear reactor? :roll:
 
"LGA 1200 package (with 9 more pins than the current LGA 1151)"... do you even math? 1200 minus 1151 is 49 more pins, not 9.
 
Wow this post is HOT.

Personal opinion,
I don't really care about LGA1200, because the vast majority has pretty much concluded why Intel keeps changing sockets/chipsets.
Just factor that into the cost and, well, cost/performance tells you everything.

My concern is: how much thermal density does this 10-core have?
Are we finally getting a portable nuclear reactor? :roll:

for real... a 7900x for the masses.

As long as the performance is there and it can handle 32 GB across 4 DIMMs at max OC, I'm sold tho, so...

Then again, I might wait out 10/7nm from Intel or Zen 3. With my current OC I'm sitting at 3700x performance, and I don't need 12 cores, so this 2-year-old chip in my system looks like it can hang out for another year or so.

Honestly, not a great time for CPU buying for the foreseeable future unless you're dropping in a 3900x or replacing an old Ryzen.
 
Options are always good, especially when users' needs differ :)
But the fact remains that for many non-synthetic workstation tasks a faster 8-core will often beat a 12-core. A good workstation CPU needs to strike a balance between core speed and core count.

Remember when Intel introduced hyper-threading and multi-core to counter the Athlon 64 FX-53? We've already reached the pinnacle of single-core MIPS. Bumping clocks and adding cores doesn't instantly translate to "better" performance. What I'm trying to say is 2 cores ≠ 2 × 1 core, 2 GHz ≠ 2 × 1 GHz, and on top of that, 1 core with multi-threading ≠ 2 cores. Faster cores are doable, by trimming some of the instruction set or by making a bigger L1. The former is less attractive as software development grows, more demanding instructions are needed, and legacy has to be kept for backward compatibility (but seriously, who still uses MMX or SSE these days?). The latter would be expensive, as a bigger L1 requires more pipelines, so bigger fetch and decode stages are needed. But what do I know; Intel has Jim Keller, and he hasn't shown his magic yet. It would be a great feat in the dawn of the monolithic CPU :D
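The "2 cores ≠ 2 × 1 core" point is basically Amdahl's law. A minimal sketch, with an arbitrary 10% serial fraction picked just for illustration:

```python
# Amdahl's law: any serial fraction of a workload caps multi-core
# speedup, so 2 cores never equal 2 x 1 core. The 10% serial share
# here is an arbitrary example value.

def amdahl_speedup(cores: int, parallel: float) -> float:
    """Ideal speedup on `cores` cores when `parallel` of the work scales."""
    return 1.0 / ((1.0 - parallel) + parallel / cores)

for n in (1, 2, 8, 12):
    print(f"{n:>2} cores: {amdahl_speedup(n, 0.90):.2f}x")
# 2 cores: ~1.82x, not 2.00x; 12 cores: only ~5.71x with 10% serial work.
```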

I noticed you don't post any screenshot with your 1-core boost. Go ahead, post a screenshot of the awesome "advertised" 4.75 GHz that your cherry-picked CPU can do! :D Probably cause u can't, hmmm... Did you join cause yer getting paid to flame threads?

Why should I brag about my puny CPU? You already know better, or should I say hotter and more inefficient? :D
Funny, I was gonna ask the same thing of another member who posted here :rolleyes:

========

While "majority" wouldn't mind about socket change, a gentle reminder,this is still Kaby Lake...erm...Comet Lake uArch. Intel yet implement their module core with omni path and PCH on die with Ice Lake uArch, so prepare for another socket change :p
 
for real... a 7900x for the masses.

The 7900x uses a different architecture, and its die is muccccccch larger.
The die size of the 7900x is 322 mm².
Source : https://www.anandtech.com/show/1155...-core-i9-7900x-i7-7820x-and-i7-7800x-tested/6

The die sizes of the 8700k and 9900k are 150 mm² and 174 mm², respectively.
Source : https://en.wikichip.org/wiki/intel/microarchitectures/coffee_lake

Now, if this Comet Lake 10-core is a re-re-refresh, the die size can be expected to be around 200 mm².

The 7900x is a 140 W TDP chip: 140 / 322 = 0.435 W/mm²
The 10-core Comet Lake is 135 W: 135 / 200 = 0.675 W/mm²

So in terms of heat density, the 10-core Comet Lake is 55% higher than a 7900x.

Then, thermal conduction is directly proportional to surface area.
The 10-core Comet Lake has 38% less surface area than the 7900x,
which means its rate of conduction is at least 38% lower than the 7900x's.

So this thing has 55% more heat density and 38% less area to conduct heat through than a regular old 7900x.
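A quick sanity check of the arithmetic above, using the post's own numbers (the 200 mm² Comet Lake die size is, as said, only an estimate):

```python
# Recompute the heat-density comparison with the post's numbers.
# The 200 mm^2 Comet Lake die size is an estimate, not a known figure.

tdp_7900x, area_7900x = 140.0, 322.0   # W, mm^2 (Skylake-X 10-core)
tdp_cml,   area_cml   = 135.0, 200.0   # W, mm^2 (estimated)

density_7900x = tdp_7900x / area_7900x            # ~0.435 W/mm^2
density_cml   = tdp_cml / area_cml                # ~0.675 W/mm^2
print(f"heat density: {density_cml / density_7900x - 1:+.0%}")  # ~ +55%
print(f"die area:     {area_cml / area_7900x - 1:+.0%}")        # ~ -38%
```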

Maybe change it to "Nuclear Inside". :)
 