
Intel "Raptor Lake" Core i9-13900 De-lidded, Reveals a 23% Larger Die than Alder Lake

btarunr

An Intel Core "Raptor Lake" engineering sample was de-lidded by Expreview, giving us a first look at what will be Intel's last monolithic client silicon before the company switches over to chiplets with its next-generation "Meteor Lake." The chip de-lidded here is the i9-13900, which maxes out the "Raptor Lake-S" die, featuring all 8 "Raptor Cove" P-cores and 16 "Gracemont" E-cores physically present on the die, along with 36 MB of shared L3 cache and an iGPU based on the Xe-LP graphics architecture.

The "Raptor Lake-S" silicon is built on the same Intel 7 (10 nm Enhanced SuperFin) silicon fabrication node as "Alder Lake-S." The "Raptor Lake-S" (8P+16E) die measures 23.8 mm x 10.8 mm, or 257 mm² in area, which is 49 mm² more than that of the "Alder Lake-S" (8P+8E) die (around 209 mm²). The larger die area comes from not just the two additional E-core clusters, but also larger L2 caches for the E-core clusters (4 MB vs. 2 MB), and larger L2 caches for the P-cores (2 MB vs. 1.25 MB); besides the larger shared L3 cache (36 MB vs. 30 MB). The "Raptor Cove" P-core itself could be slightly larger than its "Golden Cove" predecessor.



Even with the larger die, there's plenty of vacant fiberglass substrate inside the IHS. Future client sockets such as LGA1800 have an identical package size to LGA1700, with the additional pin count coming from shrinking the "courtyard" in the land grid (the central empty space). This indicates that future MCM chips such as "Meteor Lake" will have plenty of real estate on the substrate, and Intel can maintain package size and cooler compatibility across several more generations. That said, "Raptor Lake-S" will be a Socket LGA1700 processor that works with Intel 600-series and upcoming 700-series chipset motherboards, but it will likely not be compatible with future LGA1800 platforms.

View at TechPowerUp Main Site | Source
 
The '13700K' running 5.8 GHz already shocked me with its 287 W of power consumption.
I wonder how the 13900K would perform with the extra 8 E-cores.
Maybe 350 W?
 
My first thought is yields. My second is power consumption. My third is socket lifespan being half a CPU product-launch generation. Well folks, we rolled out half a CPU generation; stay tuned for the leaks about the new socket for the other half, which is a prerequisite and has the same number of pins.
 
I think I will undervolt it and change that 5.8 GHz single-core boost to a 5.6/5.5 GHz all-core boost up to 180 W, then cap it there.

It would probably end up faster overall, and I would feel better about my thermal load.
 
I think I will undervolt it and change that 5.8 GHz single-core boost to a 5.6/5.5 GHz all-core boost up to 180 W, then cap it there.

It would probably end up faster overall, and I would feel better about my thermal load.

Unless there is a specially binned edition of this, the 13700K still seems like the much better option.
 
The '13700K' running 5.8 GHz already shocked me with its 287 W of power consumption.
I wonder how the 13900K would perform with the extra 8 E-cores.
Maybe 350 W?
I've seen figures of 420 W mentioned, probably for an OC'd 13900K or if they release a 13900KS. Appalling power consumption. Pairing this with a 4090 Ti will see 1.2 kW+ PSU requirements, with the ability to handle 2.5 kW power spikes.
 
I've seen figures of 420 W mentioned, probably for an OC'd 13900K or if they release a 13900KS. Appalling power consumption. Pairing this with a 4090 Ti will see 1.2 kW+ PSU requirements, with the ability to handle 2.5 kW power spikes.
With power consumption like that, I'd rather buy AMD.
 
The '13700K' running 5.8 GHz already shocked me with its 287 W of power consumption.
I wonder how the 13900K would perform with the extra 8 E-cores.
Maybe 350 W?

I've seen figures of 420 W mentioned, probably for an OC'd 13900K or if they release a 13900KS. Appalling power consumption. Pairing this with a 4090 Ti will see 1.2 kW+ PSU requirements, with the ability to handle 2.5 kW power spikes.
What great timing, as the price of electricity is skyrocketing... :rolleyes:
 
My first thought is yields. My second is power consumption. My third is socket lifespan being half a CPU product-launch generation. Well folks, we rolled out half a CPU generation; stay tuned for the leaks about the new socket for the other half, which is a prerequisite and has the same number of pins.

Why yields? Intel's 10 nm process has been around for a while now, and the chip isn't massive or anything...

I've seen figures of 420 W mentioned, probably for an OC'd 13900K or if they release a 13900KS. Appalling power consumption. Pairing this with a 4090 Ti will see 1.2 kW+ PSU requirements, with the ability to handle 2.5 kW power spikes.
4090 Ti needs 800 W?

2.5 kW spikes? Transients are handled by capacitors in the VRM and capacitors on the PSU output (most are capable of spikes way higher than 2.5 kW).

Edit: If I had a 4090 Ti which took 400 W running FurMark, and I had a next-gen CPU that was overclocked in a way that made it take 400 W running Prime95 small FFT, the PSU I'd buy would be an 850 W model. Not 1200 W.
Why? Because when gaming, the system would fluctuate between 400 and 650 watts, rarely peaking above that.

1200 W PSUs putting out 100 W aren't efficient - even Platinum and Titanium models. Why is the trend to buy PSUs that are 50-100% larger than they need to be? It just wastes power at idle, which is a lot of the time for most PCs.

The only reason I can think of for this nonsense is that a long time ago, a 400 W PSU would only be able to put out 200 W on 12 V and 200 W on 5 V + 3.3 V.
But now most PSUs can put 95+% of their rated power output on the 12 V rails. So get over it! Stop buying huge power supplies that aren't needed!!!
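To put rough numbers on the sizing argument (a minimal sketch; the 400 W GPU/CPU figures are this post's hypothetical worst case, and the idle and gaming draws are assumptions):

```python
# Rough PSU-sizing sketch for the argument above. The 400 W GPU/CPU figures
# are this post's hypothetical worst case; the idle and gaming draws are
# assumptions, and real transients depend on the specific card and PSU.
gpu_stress, cpu_stress, rest = 400, 400, 50   # W, synthetic stress loads
print(f"Synthetic worst case: {gpu_stress + cpu_stress + rest} W")  # 850 W

idle, gaming_peak = 100, 650                  # W, typical system draws
for rated in (850, 1200):
    # Efficiency falls off sharply below roughly 10-20% load, which is
    # why an oversized unit wastes power at idle.
    print(f"{rated:>4} W PSU: idle {idle / rated:.0%} load, "
          f"gaming peak {gaming_peak / rated:.0%} load")
#  850 W PSU: idle 12% load, gaming peak 76% load
# 1200 W PSU: idle  8% load, gaming peak 54% load
```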
 
Yep. If you thought 12th gen OC was bad with thermals, good luck with this one.

If the TDP is kept the same, more surface area would mean lower thermal density though?
 
If the TDP is kept the same, more surface area would mean lower thermal density though?
Yes and No

Yes - IF the TDP is kept the same, more surface area would mean lower thermal density.

No -
From TPU's own E-cores-only 12900K test, 8 E-cores draw about 70 W.
And it is unrealistic to assume Intel could magically squeeze 70 W of headroom out of the same manufacturing process while keeping the same TDP.
So something has to be sacrificed:
per-core performance, or the actual power consumption.

I think the 13900K would lose both.
In stock, locked-TDP mode, its per-core performance suffers;
in unlocked mode, your electricity bill suffers (and your room temperature).
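A quick sanity check of both halves of that "yes and no", using TPU's ~70 W E-core figure and the die areas from the article; the 241 W limit is the 12900K's stock PL2, assumed carried over, so treat all of this as estimation, not measurement:

```python
# Sanity check of the "yes and no" above, using TPU's ~70 W figure for
# 8 E-cores and the die areas from the article. Estimates, not measurements.
adl_area, rpl_area = 209, 257   # mm^2
pl2 = 241                       # W, 12900K's stock PL2, assumed carried over

# "Yes": the same power limit over a bigger die means lower average density.
print(f"ADL: {pl2 / adl_area:.2f} W/mm^2")   # ~1.15
print(f"RPL: {pl2 / rpl_area:.2f} W/mm^2")   # ~0.94

# "No": the extra 8 E-cores want ~70 W of their own, so inside a locked
# 241 W budget that power has to come out of the P-cores' share.
extra_e_power = 70
print(f"P-core budget lost at a locked PL2: ~{extra_e_power} W")
```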
 
Why yields? Intel's 10 nm process has been around for a while now, and the chip isn't massive or anything...


4090 Ti needs 800 W?

2.5 kW spikes? Transients are handled by capacitors in the VRM and capacitors on the PSU output (most are capable of spikes way higher than 2.5 kW).

Edit: If I had a 4090 Ti which took 400 W running FurMark, and I had a next-gen CPU that was overclocked in a way that made it take 400 W running Prime95 small FFT, the PSU I'd buy would be an 850 W model. Not 1200 W.
Why? Because when gaming, the system would fluctuate between 400 and 650 watts, rarely peaking above that.

1200 W PSUs putting out 100 W aren't efficient - even Platinum and Titanium models. Why is the trend to buy PSUs that are 50-100% larger than they need to be? It just wastes power at idle, which is a lot of the time for most PCs.

The only reason I can think of for this nonsense is that a long time ago, a 400 W PSU would only be able to put out 200 W on 12 V and 200 W on 5 V + 3.3 V.
But now most PSUs can put 95+% of their rated power output on the 12 V rails. So get over it! Stop buying huge power supplies that aren't needed!!!
So you are unaware of the Ampere cards having power spikes 150% above their typical draw, hitting 1 kW on a card with a 400 W normal peak? ATX 3.0 PSUs are said to cope better with these huge power spikes, and the problem will be worse with Lovelace. There is talk of the 4090 Ti hitting 800 W. You do the math.
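Spelling out that math under this post's own assumptions (the 150% overshoot and the rumored 800 W figure are claims from the post, not measurements):

```python
# The spike arithmetic this post implies; the 150% overshoot and the
# 800 W 4090 Ti figure are both this post's assumptions, not measurements.
overshoot = 1.5                       # spikes 150% above typical draw
for typical in (400, 800):            # W: Ampere example, rumored 4090 Ti
    print(f"{typical} W card -> {typical * (1 + overshoot):.0f} W spikes")
# 400 W card -> 1000 W spikes
# 800 W card -> 2000 W spikes
```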
 
Only when people play "Cinebench" all day :roll:
To be honest,
if the user isn't doing tile-based rendering (Cinebench) or something similar (editing software), they shouldn't bother with the 13900K and should go 13700K if all they need is 8 P-cores.
Gaming and normal everyday applications will never utilize 16 E-cores.
Heavy multi-core applications like virtualization (e.g. VMware) aren't friendly to this hybrid architecture anyway.

So yes, only those doing Cinebench (or something similar) as the sole purpose of the machine would actually need this CPU.
This thing just isn't made for anything else.
 
Imagine the amount of power this freaking CPU will be drawing :D ~400 W.
Intel be like: if we can't beat Nvidia in GPUs, we'll try to match their GPU TDP with our silicon xD

GGWP with the thermals, overclocking folks - I can feel the pain of doing an OC on a 13900K or KS already.
Also, the 13900KS might come at 6.0 GHz out of the box :)
 
To be honest,
if the user isn't doing tile-based rendering (Cinebench) or something similar (editing software), they shouldn't bother with the 13900K and should go 13700K if all they need is 8 P-cores.
Gaming and normal everyday applications will never utilize 16 E-cores.
Heavy multi-core applications like virtualization (e.g. VMware) aren't friendly to this hybrid architecture anyway.

So yes, only those doing Cinebench (or something similar) as the sole purpose of the machine would actually need this CPU.
This thing just isn't made for anything else.

Huh? The 13900K will have better-binned P-cores and a stronger IMC than the 13700K; obviously the 13900K will offer the best gaming experience of the stack. If people care about value, then the 13700F/13600F/13400F are far better choices, but not everyone cares about value. There are people buying a 5950X when all they do is gaming anyway.
 
Yep. If you thought 12th gen OC was bad with thermals, good luck with this one.
Actually, thermals were great; the issue for some people was the flatness of some coolers (the popular Arctic AIO, for example) not playing well with 12th gen.

So you are unaware of the Ampere cards having power spikes 150% above their typical draw, hitting 1 kW on a card with a 400 W normal peak? ATX 3.0 PSUs are said to cope better with these huge power spikes, and the problem will be worse with Lovelace. There is talk of the 4090 Ti hitting 800 W. You do the math.
That's absolute nonsense. I had a 3090 with a 550 W custom BIOS and a 10900K OCed to hell on an 850 W power supply. I should have been experiencing frequent shutdowns, but I didn't... So

With power consumption like that, I'd rather buy AMD.
The same was said about the 12900K. Meanwhile, I'm playing FC6 at 25 watts while maxing out my 3090...
 
Huh? The 13900K will have better-binned P-cores and a stronger IMC than the 13700K; obviously the 13900K will offer the best gaming experience of the stack. If people care about value, then the 13700F/13600F/13400F are far better choices, but not everyone cares about value. There are people buying a 5950X when all they do is gaming anyway.

Yes and No.

Yes - not everyone cares about value. There are people buying a 5950X when all they do is gaming.

No -
Most of the crowd DOES care about value, and they significantly outnumber those who don't.
Everyone else DOES care about value, power consumption, and heat.
And those factors heavily hinder the potential of the 13900K.
 
Yep. If you thought 12th gen OC was bad with thermals, good luck with this one.
Why bother OCing? You can barely get 5% these days.

A 65 W CPU is all you need for gaming anyway.
 
So you are unaware of the Ampere cards having power spikes 150% above their typical draw, hitting 1 kW on a card with a 400 W normal peak? ATX 3.0 PSUs are said to cope better with these huge power spikes, and the problem will be worse with Lovelace. There is talk of the 4090 Ti hitting 800 W. You do the math.
And yet if you look at GN's test, you'll see that the 3090 Ti, with a 100 W higher TDP at 450 W, has spikes similar to the 3090's. Hell, in some tests it did better than the 3090, so obviously you can prevent high spikes with a better VRM.
 
And yet if you look at GN's test, you'll see that the 3090 Ti, with a 100 W higher TDP at 450 W, has spikes similar to the 3090's. Hell, in some tests it did better than the 3090, so obviously you can prevent high spikes with a better VRM.
It's highly dependent on the board design, BIOS tuning, and even weird variables like how it interacts with specific motherboards, but the fact that some cards have less horrendous spikes doesn't really serve as an argument against this being a major problem that will only get worse if TDPs increase further.

If the TDP is kept the same, more surface area would mean lower thermal density though?
Only if said surface-area increase is actually the cores themselves growing. Cache uses some power, but not that much; the iGPU is idle/power-gated on most desktops, and the interconnects are pretty much the same. And, of course, E-cores don't use much power. So the lower thermal density would only come from whatever proportion of the die-size increase comes from the P-cores growing in size.
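A toy illustration of that last point, with invented area and power splits purely to show the shape of the argument:

```python
# Toy numbers only: the area/power split between P-cores and everything
# else is invented to illustrate the point, not taken from any die shot.
total_power = 241                # W, example package power
p_power = 170                    # W, share assumed to be burned in P-cores
adl_p_area, rpl_p_area = 56, 58  # mm^2, hypothetical P-core footprints

print(f"Average density: {total_power/209:.2f} -> {total_power/257:.2f} W/mm^2")
print(f"P-core hotspot:  {p_power/adl_p_area:.2f} -> {p_power/rpl_p_area:.2f} W/mm^2")
# Average falls ~19%, but the hotspot barely moves (~3%), because most of
# the added area is cache and E-cores that dissipate comparatively little.
```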
 