Thursday, August 4th 2022

Intel "Raptor Lake" Core i9-13900 De-lidded, Reveals a 23% Larger Die than Alder Lake

An Intel Core "Raptor Lake" engineering sample was de-lidded by Expreview, giving us a first look at what will be Intel's last monolithic client processor before the company switches over to chiplets with its next-generation "Meteor Lake." The chip de-lidded here is the i9-13900, which maxes out the "Raptor Lake-S" die, featuring all 8 "Raptor Cove" P-cores and 16 "Gracemont" E-cores physically present on the die, along with 36 MB of shared L3 cache and an iGPU based on the Xe-LP graphics architecture.

The "Raptor Lake-S" silicon is built on the same Intel 7 (10 nm Enhanced SuperFin) silicon fabrication node as "Alder Lake-S." The "Raptor Lake-S" (8P+16E) die measures 23.8 mm x 10.8 mm, or 257 mm² in area, which is 49 mm² more than that of the "Alder Lake-S" (8P+8E) die (around 209 mm²). The larger die area comes from not just the two additional E-core clusters, but also larger L2 caches for the E-core clusters (4 MB vs. 2 MB), and larger L2 caches for the P-cores (2 MB vs. 1.25 MB); besides the larger shared L3 cache (36 MB vs. 30 MB). The "Raptor Cove" P-core itself could be slightly larger than its "Golden Cove" predecessor.
Even with the larger die, there's plenty of vacant fiberglass substrate inside the IHS. Future client sockets such as LGA1800 are expected to have an identical package size to LGA1700, with the additional pin-count coming from shrinking the "courtyard" in the land-grid (the central empty space). This indicates that future MCM chips such as "Meteor Lake" have plenty of real-estate on the substrate, and that Intel can maintain package size and cooler compatibility across several more generations. That said, "Raptor Lake-S" will be a Socket LGA1700 processor that works with Intel 600-series and upcoming 700-series chipset motherboards, but it will likely not be compatible with future LGA1800 platforms.
Sources: Expreview (BiliBili), VideoCardz

42 Comments on Intel "Raptor Lake" Core i9-13900 De-lidded, Reveals a 23% Larger Die than Alder Lake

#1
Crackong
The '13700k' running 5.8GHz already shocked me with its 287W of power consumption.
Wonder how the 13900k would perform with the extra 8 e-cores
Maybe 350W?
#2
ir_cow
Yep. If you thought 12th gen OC was bad with thermals, good luck with this one.
#3
InVasMani
My first thoughts are yields. My second is power consumption. My third is socket lifespan being half a CPU product-launch generation. Well folks, we rolled out half a CPU generation; stay tuned for the leaks about the new socket for the other half that's a prerequisite and has the same number of pins.
#4
phanbuey
I think I will undervolt it and change that 5.8GHz single-core boost to a 5.6/5.5GHz all-core boost up to 180W, then cap it there.

Would probably end up faster overall and I would feel better about my thermal load.
#5
oxrufiioxo
phanbuey: I think I will undervolt it and change that 5.8GHz single-core boost to a 5.6/5.5GHz all-core boost up to 180W, then cap it there.

Would probably end up faster overall and I would feel better about my thermal load.
Unless there's a specially binned edition of this, the 13700K still seems like the much better option.
#6
Minus Infinity
Crackong: The '13700k' running 5.8GHz already shocked me with its 287W of power consumption.
Wonder how the 13900k would perform with the extra 8 e-cores
Maybe 350W?
I've seen figures of 420W mentioned, probably for an OC'd 13900K or if they release a 13900KS. Appalling power consumption. Pairing this with a 4090Ti will see 1.2kW+ PSU requirements with the ability to handle 2.5kW power spikes.
#7
AlwaysHope
ir_cow: Yep. If you thought 12th gen OC was bad with thermals, good luck with this one.
So not much has changed from 11th gen.
#8
trparky
Minus Infinity: I've seen figures of 420W mentioned, probably for an OC'd 13900K or if they release a 13900KS. Appalling power consumption. Pairing this with a 4090Ti will see 1.2kW+ PSU requirements with the ability to handle 2.5kW power spikes.
With power consumption like that, I'd rather buy AMD.
#9
ir_cow
Looks like the de-lidding process will be exactly the same. The SMDs haven't moved at all. I'm ready to do this on day 1. Just need a spare CPU.
#10
Ruru
Crackong: The '13700k' running 5.8GHz already shocked me with its 287W of power consumption.
Wonder how the 13900k would perform with the extra 8 e-cores
Maybe 350W?
Minus Infinity: I've seen figures of 420W mentioned, probably for an OC'd 13900K or if they release a 13900KS. Appalling power consumption. Pairing this with a 4090Ti will see 1.2kW+ PSU requirements with the ability to handle 2.5kW power spikes.
What great timing, as the price of electricity is skyrocketing... :rolleyes:
#11
tpu7887
InVasMani: My first thoughts are yields. My second is power consumption. My third is socket lifespan being half a CPU product-launch generation. Well folks, we rolled out half a CPU generation; stay tuned for the leaks about the new socket for the other half that's a prerequisite and has the same number of pins.
Why yields? Intel's 10nm process has been around for a while now and the chip isn't massive or anything...
Minus Infinity: I've seen figures of 420W mentioned, probably for an OC'd 13900K or if they release a 13900KS. Appalling power consumption. Pairing this with a 4090Ti will see 1.2kW+ PSU requirements with the ability to handle 2.5kW power spikes.
4090Ti needs 800W?

2.5kW spikes? Transients are handled by capacitors in VRMs and capacitors on the PSU output (most are capable of spikes way higher than 2.5kW)

Edit: If I had a 4090Ti which took 400W running FurMark, and I had a next gen CPU that was overclocked in a way that made it take 400W running Prime95 small FFT, the PSU I'd buy would be an 850W model. Not 1200W.
Why? Because while gaming, the system would fluctuate between 400 and 650 watts, rarely peaking above that.

1200W PSUs putting out 100W aren't efficient - even Platinum and Titanium models. Why is the trend to buy PSUs that are 50-100% larger than they need to be? It just wastes power at idle, which is a lot of the time for most PCs.

The only reason I can think of for this nonsense is that a long time ago, 400W PSUs would be able to put out 200W on 12V and 200W on 5V+3.3V.
But now most PSUs can put 95+% of their rated power output on 12V rails. So get over it! Stop buying huge power supplies that aren't needed!!!
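To put rough numbers on the sizing argument above (all figures are the hypothetical ones from this post, plus an assumed ~50W for the rest of the system):

```python
# Rough PSU-sizing sketch for the hypothetical build described above.
gpu_peak = 400   # W, GPU worst case (FurMark)
cpu_peak = 400   # W, CPU worst case (Prime95 small FFT, overclocked)
rest = 50        # W, assumed for motherboard, RAM, drives, fans

worst_case = gpu_peak + cpu_peak + rest  # both fully loaded at once: 850 W
gaming = gpu_peak + 150 + rest           # games rarely max the CPU: ~600 W

print(f"Absolute worst case: {worst_case} W -> an 850 W unit covers it")
print(f"Typical gaming load: {gaming} W -> ~70% of an 850 W unit")
# On a 1200 W unit the same gaming load sits at ~50% and idle well under
# 10%, where efficiency curves drop off -- the point about oversizing.
```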
#12
nguyen
ir_cow: Yep. If you thought 12th gen OC was bad with thermals, good luck with this one.
If the TDP is kept the same, wouldn't more surface area mean lower thermal density though?
#13
Crackong
nguyen: If the TDP is kept the same, wouldn't more surface area mean lower thermal density though?
Yes and no.

Yes - if the TDP is kept the same, more surface area would mean lower thermal density.

No -
From TPU's own E-cores-only 12900K test, 8 E-cores draw about 70W.
And it is unrealistic to assume Intel could magically squeeze 70W of headroom out of the same manufacturing process while keeping the same TDP.
So something has to be sacrificed:
per-core performance, or the actual power consumption.

I think the 13900K will lose both.
In stock, locked-TDP mode, its per-core performance suffers.
In unlocked mode, your electricity bill suffers (and your room temperature).
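Putting illustrative numbers on both sides of this (die areas from the article; 241W is the 12900K's stock power limit, 350W is the guess from post #1):

```python
# Average thermal density (W/mm^2) under the scenarios discussed above.
adl_area, rpl_area = 209.0, 257.0  # mm^2, die areas from the article

scenarios = [
    ("12900K at stock 241 W", 241, adl_area),
    ("13900K locked to 241 W", 241, rpl_area),   # nguyen's case
    ("13900K unlocked, ~350 W", 350, rpl_area),  # Crackong's guess
]
for label, power, area in scenarios:
    print(f"{label}: {power / area:.2f} W/mm^2")
# ~1.15 vs ~0.94 vs ~1.36 -- the bigger die helps only if power stays put.
```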
#14
nguyen
Crackong: In unlocked mode, your electricity bill suffers (and your room temperature).
Only when people play "Cinebench" all day :roll:
#15
Minus Infinity
tpu7887: Why yields? Intel's 10nm process has been around for a while now and the chip isn't massive or anything...

4090Ti needs 800W?

2.5kW spikes? Transients are handled by capacitors in VRMs and capacitors on the PSU output (most are capable of spikes way higher than 2.5kW)

Edit: If I had a 4090Ti which took 400W running FurMark, and I had a next gen CPU that was overclocked in a way that made it take 400W running Prime95 small FFT, the PSU I'd buy would be an 850W model. Not 1200W.
Why? Because while gaming, the system would fluctuate between 400 and 650 watts, rarely peaking above that.

1200W PSUs putting out 100W aren't efficient - even Platinum and Titanium models. Why is the trend to buy PSUs that are 50-100% larger than they need to be? It just wastes power at idle, which is a lot of the time for most PCs.

The only reason I can think of for this nonsense is that a long time ago, 400W PSUs would be able to put out 200W on 12V and 200W on 5V+3.3V.
But now most PSUs can put 95+% of their rated power output on 12V rails. So get over it! Stop buying huge power supplies that aren't needed!!!
So you're unaware of Ampere cards having power spikes of 150%, hitting 1kW from a 400W normal peak power draw? ATX 3.0 PSUs are said to be able to cope better with these huge power spikes, and the problem will be worse with Lovelace. There is talk of the 4090 Ti hitting 800W. You do the math.
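For reference, the arithmetic behind "you do the math", using the rumored and unconfirmed figures floating around this thread:

```python
# Transient-spike arithmetic from the rumored figures (unconfirmed).
gpu_power = 800  # W, talked-about 4090 Ti ceiling
spike = 1.5      # +150% transient excursions, as claimed for some Ampere cards
cpu_peak = 420   # W, the 13900K/KS figure mentioned earlier in the thread

gpu_spike = gpu_power * (1 + spike)  # ~2000 W momentary
print(f"GPU transient: ~{gpu_spike:.0f} W")
print(f"GPU transient + CPU: ~{gpu_spike + cpu_peak:.0f} W")  # ~2.4 kW
# Sustained draw (800 + 420 + the rest) is what drives the 1.2 kW+ PSU figure.
```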
#16
Crackong
nguyen: Only when people play "Cinebench" all day :roll:
To be honest,
if the user isn't doing tile-based rendering (Cinebench) or something similar (editing software), they shouldn't bother with the 13900K and should go 13700K if all they need is 8 P-cores.
Gaming and normal everyday applications will never utilize 16 E-cores.
Heavy multicore applications like virtualization (e.g. VMware) aren't friendly to this hybrid architecture anyway.

So yes, only those doing Cinebench (or something similar) as the sole purpose of the machine would actually need this CPU.
This thing just isn't made for anything else.
#17
Jimmy_
Imagine the amount of power this freaking CPU will be drawing :D ~400W
Intel be like: if we can't beat Nvidia in GPUs, we'll try to match their GPU TDP with our silicon xD

GGWP with the thermals, overclocking folks - I can feel the pain of doing an OC on a 13900K or KS already.
Also, the 13900KS might come with 6.0GHz out of the box :)
#18
nguyen
Crackong: To be honest,
if the user isn't doing tile-based rendering (Cinebench) or something similar (editing software), they shouldn't bother with the 13900K and should go 13700K if all they need is 8 P-cores.
Gaming and normal everyday applications will never utilize 16 E-cores.
Heavy multicore applications like virtualization (e.g. VMware) aren't friendly to this hybrid architecture anyway.

So yes, only those doing Cinebench (or something similar) as the sole purpose of the machine would actually need this CPU.
This thing just isn't made for anything else.
huh? The 13900K will have better-binned P-cores and a stronger IMC than the 13700K; obviously the 13900K will offer the best gaming experience out of the stack. If people care about value then the 13700F/13600F/13400F are far better choices, but not everyone cares about value. There are people buying the 5950X when all they do is gaming anyway.
#19
JustBenching
ir_cow: Yep. If you thought 12th gen OC was bad with thermals, good luck with this one.
Actually, thermals were great; the issue for some people was the flatness of some coolers (the popular Arctic AIO, for example) not playing well with 12th gen.
Minus Infinity: So you're unaware of Ampere cards having power spikes of 150%, hitting 1kW from a 400W normal peak power draw? ATX 3.0 PSUs are said to be able to cope better with these huge power spikes, and the problem will be worse with Lovelace. There is talk of the 4090 Ti hitting 800W. You do the math.
That's absolute nonsense. I had a 3090 with a 550W custom BIOS and a 10900K OC'd to hell on an 850W power supply. I should have been experiencing frequent shutdowns, but I didn't... So
trparky: With power consumption like that, I'd rather buy AMD.
The same was said about the 12900K. Meanwhile I'm playing FC6 at 25 watts while maxing out my 3090...
#20
Crackong
nguyen: huh? The 13900K will have better-binned P-cores and a stronger IMC than the 13700K; obviously the 13900K will offer the best gaming experience out of the stack. If people care about value then the 13700F/13600F/13400F are far better choices, but not everyone cares about value. There are people buying the 5950X when all they do is gaming anyway.
Yes and no.

Yes - not everyone cares about value. There are people buying the 5950X when all they do is gaming.

No -
Most of the crowd DOES care about value, and they significantly outnumber those who don't.
Everyone else DOES care about value, power consumption and heat.
And those factors heavily hinder the potential of the 13900K.
#21
TheDeeGee
ir_cow: Yep. If you thought 12th gen OC was bad with thermals, good luck with this one.
Why bother OCing? You can barely get 5% these days.

A 65-watt CPU is all you need for gaming anyway.
#22
Richards
Intel 10nm SuperFin is superior to TSMC 7nm and 5nm... Raptor Lake will take the performance crown.
#23
napata
Minus Infinity: So you're unaware of Ampere cards having power spikes of 150%, hitting 1kW from a 400W normal peak power draw? ATX 3.0 PSUs are said to be able to cope better with these huge power spikes, and the problem will be worse with Lovelace. There is talk of the 4090 Ti hitting 800W. You do the math.
And yet if you look at GN's test, you'll see that the 3090Ti, with a 100W higher TDP at 450W, has similar spikes to the 3090. Hell, in some tests it did better than the 3090, so obviously you can prevent high spikes with a better VRM.
#24
Valantar
napata: And yet if you look at GN's test, you'll see that the 3090Ti, with a 100W higher TDP at 450W, has similar spikes to the 3090. Hell, in some tests it did better than the 3090, so obviously you can prevent high spikes with a better VRM.
It's highly dependent on the board design, BIOS tuning, and even weird variables like how it interacts with specific motherboards, but the fact that some cards have less horrendous spikes doesn't really serve as an argument against this being a major problem that will only get worse if TDPs increase further.
nguyen: If the TDP is kept the same, wouldn't more surface area mean lower thermal density though?
Only if said surface-area increase actually comes from the cores themselves growing. Cache uses some power, but not that much; the iGPU is idle/power-gated on most desktops, and the interconnects are pretty much the same. And, of course, E-cores don't use much power. So the lower thermal density would only come from whatever proportion of the die-size increase comes from the P-cores growing in size.
#25
Punkenjoy
tpu7887: Why yields? Intel's 10nm process has been around for a while now and the chip isn't massive or anything...
Well, for yields: let's say Intel has a standard defect rate for 10nm by now, which is probably the case; the change in die size takes the yield from 67% and 179 good dies per wafer to 61% and 130 good dies per wafer.

It's not catastrophic, but Intel will probably need the price hike they announced to keep their margins at the same level.

But those numbers start to get really low. To put that in perspective, AMD on a process with a similar defect rate would get 87% and 706 good CCDs per wafer.

That is just the CCD, and they also need the I/O die and all that, but it clearly shows the potential of smaller chiplets for yields.

For the I/O die, they would get a 78% yield and 359 dies per wafer. So from two wafers, assuming all chips meet their clocks, Intel would get about 260 CPUs while AMD would get about 350 potential 7950X CPUs.
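These percentages line up with a simple Poisson yield model at roughly 0.19 defects/cm² on a 300 mm wafer. A minimal sketch (AMD die sizes are approximate, and per-wafer counts depend on edge exclusion and scribe lines, so the outputs land near the post's figures rather than exactly on them):

```python
import math

D = 0.19           # defects per cm^2 (assumed)
WAFER_DIA = 300.0  # wafer diameter, mm

def dies_per_wafer(area_mm2):
    """Classic gross-die estimate: wafer area over die area, minus an edge-loss term."""
    r = WAFER_DIA / 2
    return math.pi * r ** 2 / area_mm2 - math.pi * WAFER_DIA / math.sqrt(2 * area_mm2)

def poisson_yield(area_mm2):
    """Fraction of defect-free dies under a Poisson defect model."""
    return math.exp(-D * area_mm2 / 100)  # /100 converts mm^2 to cm^2

dies = [("Raptor Lake-S", 257), ("Alder Lake-S", 209),
        ("Zen 4 CCD (~70 mm^2)", 70), ("Zen 4 I/O die (~125 mm^2)", 125)]
for name, area in dies:
    y = poisson_yield(area)
    print(f"{name}: yield ~{y:.0%}, ~{dies_per_wafer(area) * y:.0f} good dies/wafer")
```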

As for power consumption: in games, where most people will use it, I think the difference between Zen 4 and RPL will be marginal.

And if you need a full all-core load, the power consumption vs. performance-time tradeoff might be worth it.

I am more concerned with GPU power, since I game most of the time on my PC and that power draw is sustained.