Tuesday, July 16th 2024

Intel Planning P-core Only "Bartlett" LGA1700 Processor for 2025

In a surprising development, Intel plans to extend the longevity of its Socket LGA1700 platform even as the newer LGA1851 platform, led by the Core Ultra 200 "Arrow Lake" series, remains on track for a late-Q3/early-Q4 2024 debut. This is according to a leak by Jaykihn. Intel plans to do this with brand-new silicon for LGA1700, codenamed "Bartlett." Gamers in particular should take note of what's on offer. Imagine the "Raptor Lake-S" die, but with four additional P-cores replacing the four E-core clusters, making a 12-core pure P-core processor—that's "Bartlett." At this point we're not sure which P-core is in use—whether it's the current "Raptor Cove," or whether Intel will attempt to backport a variant of "Lion Cove" to LGA1700.

This wouldn't be the first pure P-core client processor from Intel since its pivot to heterogeneous multicore—the "Alder Lake" H0 die has six "Golden Cove" P-cores and lacks any E-core clusters. Intel is planning to launch an entirely new "generation" of processor SKUs for LGA1700 that uses Intel's newer client processor nomenclature, the Core 200-series, but without the "Ultra" brand extension. There will be SKUs under the Core 3, Core 5, Core 7, and Core 9 brand extensions. Some of these will be Hybrid, based on the rehashed "Raptor Lake-S" 8P+16E silicon or the "Alder Lake-S" 8P+8E silicon; but "Bartlett" will be distinctly branded within the series, probably using a letter next to the numerical portion of the processor model number. There will not be any Core 3 series chips based on "Bartlett"—only Core 5, Core 7, and Core 9.
The Core 5 "Bartlett" series will feature an 8-core configuration—that's 8 P-cores and no E-cores. The Core 7 "Bartlett" will be 10-core, no E-cores. The Core 9 "Bartlett" will draw the most attention, being 12-core. If Intel is using "Raptor Cove" P-cores, these should be 8-core/16-thread, 10-core/20-thread, and 12-core/24-thread, respectively. Depending on whether they are K or non-K SKUs, these chips will feature processor base power values of 125 W, 65 W, or even 45 W.

Intel is planning to launch these non-Ultra Core Socket LGA1700 processors in Q1-2025, but the "Bartlett" silicon won't arrive before Q3-2025.
Source: Jaykihn (Twitter)

127 Comments on Intel Planning P-core Only "Bartlett" LGA1700 Processor for 2025

#76
Dr. Dro
Exciting CPU for once. I'm very interested.
dj-electricWait. A 12 core P-core only CPU?
Can we reschedule that to like, next week?
Also - the timing seems a little weird to me from a product stack standpoint. A new LGA1700 series after a refresh generation? That I can accept, but managing a whole new lineup after the launch of a completely new platform, and possibly beating it in many speed & CU oriented performance metrics - seems a bit backwards.

If such products exist - I would expect to see them launched for LGA1851, not an EOLing platform.
If new products are being released - even if a different flavor of the same tech - is it an EOL platform or a "long-lived beast such as AM4"? People gotta decide... AMD hasn't even changed any of the CPUs they are releasing for AM4 post-AM5, it's just boring rebrands; this would be actually something new. Props where due - give us this, Intel.
Posted on Reply
#77
AusWolf
fevgatosIf you wanna compare efficiency, you put 2 CPUs at the same power and compare the results.
Wrong. You take the work done at default (e.g. Cinebench score with no background apps), and divide that by the total energy used during the work. Since we're arguing about technicalities...
fevgatosTbf, the K lineup is an enthusiast lineup; then you put that on a high-end Z mobo that's also an enthusiast mobo, then you let it run at default. At this point some of the blame is on the user.
No one should be blamed for running their system at default, imo. Default should always be a safe setting.
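The work-per-energy definition above is easy to put into numbers. A quick sketch - the scores, wattages, and run times below are made-up illustrative figures, not real benchmark results:

```python
# Efficiency as work done per unit of energy, per the definition above.
# All numbers are hypothetical, for illustration only.

def efficiency(score: float, avg_power_w: float, run_time_s: float) -> float:
    """Return points per watt-hour: benchmark score divided by energy used."""
    energy_wh = avg_power_w * run_time_s / 3600.0  # watt-hours over the run
    return score / energy_wh

# Two hypothetical CPUs completing the same 10-minute render at stock:
cpu_a = efficiency(score=38000, avg_power_w=250, run_time_s=600)  # higher score, big power draw
cpu_b = efficiency(score=36000, avg_power_w=120, run_time_s=600)  # slightly slower, frugal

print(f"CPU A: {cpu_a:.0f} pts/Wh, CPU B: {cpu_b:.0f} pts/Wh")
```

Even with a slightly lower score, the lower-power chip comes out nearly twice as efficient by this metric - which is exactly why measuring at default settings matters.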
Posted on Reply
#78
oxrufiioxo
Dr. DroExciting CPU for once. I'm very interested.
Excited just to see how it performs. A lot of people wanted something like this to begin with; I would have probably grabbed one day 1 if this was a launch SKU. For me to even consider buying something with E-cores, it would need to be 10-15% faster than a competing product at everything while using 120-140 W, but an all P-core CPU I would take even if it was a bit slower overall in MT tasks.

I would be more willing to deal with 200 W if all cores were identical and there were more than 8, but my office with the door closed (which I prefer) does best around 500 W total system power, and most of that is eaten up by my 4090. Beyond that it starts to get pretty toasty, with 600-700 W getting uncomfortable; even at 500 W there is a 5 degree Fahrenheit delta.

The 7800X3D/4090 combo I tested out for a week ran stupidly cool; it was at less than 400 W in most games. I miss that a little bit, but not the poor MT performance, lol...
Posted on Reply
#79
FoulOnWhite
AusWolfWrong. You take the work done at default (e.g. Cinebench score with no background apps), and divide that by the total energy used during the work. Since we're arguing about technicalities...


No one should be blamed for running their system at default, imo. Default should always be a safe setting.
I have run my 12700K mostly stock since 2021. Coming from what I upgraded from, this was a big jump, and at the time it was pretty high-end. Looking forward to trying this Bartlett though, even if it's only a secondary setup by then.
Posted on Reply
#80
fevgatos
AusWolfWrong. You take the work done at default (e.g. Cinebench score with no background apps), and divide that by the total energy used during the work. Since we're arguing about technicalities...
You are not comparing chip efficiency like that; you are just comparing the out-of-the-box settings. This allows companies to "cheat" and win efficiency comparisons just by locking their CPUs to lower power levels, e.g. the 14900T. That doesn't really make it the most efficient chip in existence; it's just that the OOB settings are better tuned for efficiency.
Posted on Reply
#81
Hecate91
fevgatosNo it's not.

No it doesn't.

Again, power draw isn't efficiency. If you wanna compare efficiency, you put 2 CPUs at the same power and compare the results. When you do that, you'll realize that AMD is lagging behind in most segments in both ST and MT efficiency. Let's not go over this again. It's just a fact; ComputerBase has done the work for us, let's just accept it and move on.
Power draw is one aspect of power efficiency; when a CPU uses less power at default settings, it is obviously more efficient than one which uses more. There are trustworthy tests here, which correlate with other reviews, showing a 7800X3D using about half the power of a 14900K in games. You are the one who keeps bringing up that an underclocked and undervolted Intel CPU will use less power, except most people are going to use their CPUs at default settings. It absolutely isn't the user's fault for leaving things at default, especially when these CPUs get used in servers or people are buying them as prebuilt systems.
tpucdn.com/review/intel-core-i9-14900k/images/power-games.png
www.techpowerup.com/review/intel-core-i9-14900k/22.html
Posted on Reply
#82
AusWolf
oxrufiioxoExcited just to see how it performs a lot of people wanted something like this to begin with I would have probably grabbed one day 1 if this was a launch sku...... For me to even consider buying something with E cores it would need to be 10-15% faster than a competing product at everything while using 120-140W but an all P core cpu I would take even if it was a bit slower overall in MT task.

I would be more willing to deal with 200W if all cores were identical and there were more than 8 but my office with the door closed which I prefer does best around 500w total system power and most of that is eaten up by my 4090.... Beyond that and it starts to get pretty toasty with 600-700w getting uncomfortable even at 500w there is a 5 degree Fahrenheit delta.

The 7800X3D/4090 combo I tested out for a week ran stupidly cool it was at less than 400w in most games miss that a little bit but not the poor MT performance lol...
I don't even care how different E- and P-cores are. I just want a homogeneous architecture that doesn't rely on software and Windows 11's scheduler to make it work properly.
fevgatosYou are not comparing chip efficiency like that, you are just comparing the out of the box settings. This allows companies to "cheat" and win efficiency just by locking their cpus to lower power levels, eg the 14900T. That doesn't really make it the most efficient chip in existence, it's just the oob settings that are better tuned for efficiency.
If you don't test CPUs at stock, then what? Who's to say this setting or that power level is what you should test at? You can tune every chip to stupid levels of efficiency, so I don't see it as an argument. Stock is what tests should be done at, as that's the intended use by the factory.
Posted on Reply
#83
oxrufiioxo
AusWolfI don't even care how different E- and P-cores are. I just want a homogeneous architecture that doesn't rely on software and Windows 11's scheduler to make it work properly.
Same, it's the reason I would take higher power draw over babysitting windows 11.
Posted on Reply
#84
fevgatos
AusWolfIf you don't test CPUs at stock, then what?
Then test them at the power you want to use them at? I mean, most products I have, I wouldn't have bought if I had to run them stock. The 4090 is a prime example; no way I'd be using a 450 W card. I bought it because it's still the fastest card at the 320 W I'm running it at.
Posted on Reply
#85
AusWolf
fevgatosThen test them at the power you want to use them at? I mean, most products I have, I wouldn't have bought if I had to run them stock. The 4090 is a prime example; no way I'd be using a 450 W card. I bought it because it's still the fastest card at the 320 W I'm running it at.
So if 100 people want to run it at 100 different power levels, then we do 100 tests? Well, umm... No. Just no. In that regard, every CPU can claim the title for the most efficient one, because they all have their highest efficiency points at different power levels. Besides, not many people go shopping for a CPU thinking "ah, this one would do so well at 80 instead of 125 Watts". There is no way you can pre-test it before buying, either.
Posted on Reply
#86
atomsymbol
AusWolfSo if 100 people want to run it at 100 different power levels, then we do 100 tests? Well, umm... No. Just no. In that regard, every CPU can claim the title for the most efficient one, because they all have their highest efficiency points at different power levels. Besides, not many people go shopping for a CPU thinking "ah, this one would do so well at 80 instead of 125 Watts". There is no way you can pre-test it before buying, either.
At least where I live, it is possible to return any CPU without providing any reason whatsoever (or simply providing a reason like "The CPU didn't meet performance expectations") within 2 weeks of purchase. I think 2 weeks is enough time for most people building a PC to determine whether the purchased CPU was a mistake or not.
Posted on Reply
#87
Minus Infinity
fevgatosNo it's not. Intel has cpus with 56 P cores cooled by a single tower noctua air cooler, you think power will be an issue for 12 of them? Lol
56 P-cores run at much lower clocks. Intel actually has to care about its customers in that market, and not run them out of spec or try to be benchmark warriors.
Posted on Reply
#88
Apocalypsee
AusWolfI don't even care how different E and P cores are. I just want a homogeneous architecture not to rely on software and Windows 11's scheduler to make it work properly.
+1 to this. I wanted to play games on my PC and wanted a streamlined OS without a thread scheduler and whatnot. I'm looking forward to seeing how this Bartlett turns out; I might go Intel if all goes well. Really, even though I love tweaking, I wanted something that is inherently low latency. I don't like the E-core/P-core BS, nor do I like AMD's multi-chip direction with the UCLK, FCLK, MCLK BS either. All this added latency. I wanted a homogeneous arch. One of the few reasons I bought an APU is the low inter-core latency.
Posted on Reply
#89
Dr. Dro
Apocalypsee+1 to this. I wanted to play games on my PC and wanted a streamlined OS without a thread scheduler and whatnot. I'm looking forward to seeing how this Bartlett turns out; I might go Intel if all goes well. Really, even though I love tweaking, I wanted something that is inherently low latency. I don't like the E-core/P-core BS, nor do I like AMD's multi-chip direction with the UCLK, FCLK, MCLK BS either. All this added latency. I wanted a homogeneous arch. One of the few reasons I bought an APU is the low inter-core latency.
My experience is that performance is adequate on both 10 and 11 with Raptor Lake; the OS isn't completely oblivious. 11 runs better, but I spent most of the time using 10 on my PC. Bartlett is interesting even this late for the same reason the 7800X3D is: it's "KISS." The fewer complicating factors, the better.
Posted on Reply
#90
Sunny and 75
btarunrAt this point we're not sure which P-core is in use—whether it's the current "Raptor Cove," or whether an attempt will be made by Intel to backport a variant of "Lion Cove" to LGA1700.
All will be revealed in Q3'25.
Posted on Reply
#91
fevgatos
AusWolfSo if 100 people want to run it at 100 different power levels, then we do 100 tests? Well, umm... No. Just no. In that regard, every CPU can claim the title for the most efficient one, because they all have their highest efficiency points at different power levels. Besides, not many people go shopping for a CPU thinking "ah, this one would do so well at 80 instead of 125 Watts". There is no way you can pre-test it before buying, either.
You don't need 100 points, man; 3-4 configurations are enough to make a plot. Both TPU and ComputerBase do this.
Posted on Reply
#92
AusWolf
fevgatosYou don't need 100 points, man; 3-4 configurations are enough to make a plot. Both TPU and ComputerBase do this.
Sure, TPU does that with flagships like the 14900K or 7950X, but even that's above and beyond what one can expect from a review site. You can't expect them to thoroughly test every CPU at multiple imaginable power targets to suit every potential buyer's needs. The other thing is that while power limited performance is useful info for a few tech savvy individuals like ourselves, you can't use it to draw any overarching conclusion about the CPU itself. Your 14900K performing awesomely with a 125 W limit does not prove how great the 14900K is. It only shows that it suits your needs under certain conditions. So my point stands.
Posted on Reply
#93
Sunny and 75
btarunr"Raptor Cove" P-cores
With more L3 Cache and DDR4 support, the BTL will be Intel's version of the 5800X3D.
Posted on Reply
#94
Wirko
I think the Bartlett chip is being designed primarily for entry level workstations (Core CPUs on W680 boards) and small servers (Xeons on boards with C-series chipsets). But of course some chips won't qualify for that purpose, so they will become Core "K" processors.
Posted on Reply
#95
P4-630
Sunny and 75With more L3 Cache and DDR4 support, the BTL will be Intel's version of the 5800X3D.
It will be much better than a 5800X3D though, with up to 12 P-cores...
Posted on Reply
#96
fevgatos
AusWolfSure, TPU does that with flagships like the 14900K or 7950X, but even that's above and beyond what one can expect from a review site. You can't expect them to thoroughly test every CPU at multiple imaginable power targets to suit every potential buyer's needs. The other thing is that while power limited performance is useful info for a few tech savvy individuals like ourselves, you can't use it to draw any overarching conclusion about the CPU itself. Your 14900K performing awesomely with a 125 W limit does not prove how great the 14900K is. It only shows that it suits your needs under certain conditions. So my point stands.
Well, again, you don't really need to see behavior in multiple configurations. You can make rough estimations just by looking at the stock power levels. Looking at the 7950X: it's less efficient than the 7800X3D in MT at stock, but it's super obvious that it will be way, way faster, and therefore more efficient, when you run both at the same power. I don't really need to do the tests to figure that out, right?

Regarding the 12p core chip, if it can somehow match the 14700k in MT performance, I'll be damn interested. If it doesn't, it will be meh.
Posted on Reply
#97
AusWolf
fevgatosWell, again, you don't really need to see behavior in multiple configurations. You can make rough estimations just by looking at the stock power levels. Looking at the 7950X: it's less efficient than the 7800X3D in MT at stock, but it's super obvious that it will be way, way faster, and therefore more efficient, when you run both at the same power. I don't really need to do the tests to figure that out, right?
That's a theory, and it remains just that until you test it.
fevgatosRegarding the 12p core chip, if it can somehow match the 14700k in MT performance, I'll be damn interested. If it doesn't, it will be meh.
As for me, if it can match the 7800X3D in gaming and cooling efficiency, I'll be interested. Having 4 extra cores without the added latency of the inter-CCD communication sounds like something that could be useful in the future.
Posted on Reply
#98
InVasMani
AusWolfThat's a theory, and it remains just that until you test it.


As for me, if it can match the 7800X3D in gaming and cooling efficiency, I'll be interested. Having 4 extra cores without the added latency of the inter-CCD communication sounds like something that could be useful in the future.
Really, 8 cores is the ideal minimum core count, as it matches the consoles, and not every title will push those limits to their knees at the same time. However, some will come closer to doing so, and having a bit of additional headroom for other multitasking is a nice benefit for a variety of reasons. I'd say in general a PC should strive to either meet console specs or have 2-4 cores above them, to allow for a bit of additional MT leeway. I'm not about to tell people what's best for them outright, but I think it's pretty reasonable and sensible guidance to consider.

If Intel bumps up the cache on these a reasonable bit, they will give the 7800X3D some real competition. It looks like a great in-socket upgrade for many on LGA1700 as well. If I had a 12600K and saw this news, I'd be really excited about it. No one was expecting it from Intel. Like I alluded to, Intel had room to do this type of thing among other options, but I wasn't anticipating them actually doing so based on their past history. It's really a sign of healthy competition in the CPU market, because if AMD was doing a poor job competing, we'd just be stuck on quad-cores with a 50 MHz bump on a new socket.
Posted on Reply
#99
AusWolf
InVasManiReally, 8 cores is the ideal minimum core count, as it matches the consoles, and not every title will push those limits to their knees at the same time. However, some will come closer to doing so, and having a bit of additional headroom for other multitasking is a nice benefit for a variety of reasons. I'd say in general a PC should strive to either meet console specs or have 2-4 cores above them, to allow for a bit of additional MT leeway. I'm not about to tell people what's best for them outright, but I think it's pretty reasonable and sensible guidance to consider.

If Intel bumps up the cache on these a reasonable bit, they will give the 7800X3D some real competition. It looks like a great in-socket upgrade for many on LGA1700 as well. If I had a 12600K and saw this news, I'd be really excited about it. No one was expecting it from Intel. Like I alluded to, Intel had room to do this type of thing among other options, but I wasn't anticipating them actually doing so based on their past history. It's really a sign of healthy competition in the CPU market, because if AMD was doing a poor job competing, we'd just be stuck on quad-cores with a 50 MHz bump on a new socket.
It's not necessarily about core count. Hardware Unboxed did a video proving that 6 faster cores can do the same work as 8 slower ones, even in gaming. Having a bit of an extra headroom is always nice, though.
Posted on Reply
#100
iameatingjam
If it fixes the degradation problems.... that might be the way.
fevgatosRegarding the 12p core chip, if it can somehow match the 14700k in MT performance, I'll be damn interested. If it doesn't, it will be meh.
Well, of course it won't. That's the whole point of E-cores: they give better multicore performance with less die space.
Posted on Reply