
Intel Lunar Lake Technical Deep Dive

TSMC is wrong.

N7+ is 114 MTr/mm^2
N5 is 138 MTr/mm^2
Scaling is 1.21x

N3 is 224 MTr/mm^2
Scaling is 1.62x

N7 - 91.2–96.5
N5 - 138.2 -> 1.51x
N3 - 197 -> 1.42x

That image is based on some marketing data that might even be true, but it definitely has some caveats :D

Edit: image from https://www.eetimes.com/1383768-2/ - these were TSMC projections from a year before N3 was actually in production.
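
For anyone who wants to check the arithmetic behind those scaling factors, here is a minimal Python sketch. The density figures are simply the MTr/mm² numbers quoted in this thread (a mix of marketing and third-party estimates), not official TSMC data.

```python
# Minimal sketch of the node-to-node scaling arithmetic quoted above.
# Density figures (MTr/mm^2) are the ones posted in this thread -
# treat them as rough, mixed-source estimates, not official TSMC data.
densities = {
    "N7+": 114.0,   # quoted for N7+
    "N7":  91.2,    # low end of the 91.2-96.5 range quoted for base N7
    "N5":  138.2,
    "N3":  197.0,
}

def scaling(old_node: str, new_node: str) -> float:
    """Density scaling factor going from old_node to new_node."""
    return densities[new_node] / densities[old_node]

print(f"N7  -> N5: {scaling('N7', 'N5'):.2f}x")   # ~1.52x with the low-end N7 figure
print(f"N7+ -> N5: {scaling('N7+', 'N5'):.2f}x")  # ~1.21x
print(f"N5  -> N3: {scaling('N5', 'N3'):.2f}x")   # ~1.43x
```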
 
TSMC is wrong.

N7+ is 114 MTr/mm^2
N5 is 138 MTr/mm^2
Scaling is 1.21x

N3 is 224 MTr/mm^2
Scaling is 1.62x

The density depends on the proportion of logic, SRAM, analog elements, etc.

Navi 31 GCD @ 5nm, density 150.2 MTr/mm²
Navi 33 die @ 6nm, density 65.2 MTr/mm²
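
To illustrate the composition point, here is a rough sketch of how a whole-die density figure falls out of a weighted mix of block types. The area fractions and per-block densities below are made-up round numbers for illustration, not actual Navi 31/33 data.

```python
# Illustrative only: the area fractions and per-block densities are
# made-up round numbers, not measured Navi 31/33 data. The point is that a
# whole-die MTr/mm^2 figure is a weighted average of very different blocks.

def effective_density(blocks):
    """blocks: list of (area_fraction, density_in_MTr_per_mm2) tuples."""
    assert abs(sum(frac for frac, _ in blocks) - 1.0) < 1e-9
    return sum(frac * dens for frac, dens in blocks)

# Hypothetical logic-heavy compute die vs. a die with lots of analog/IO.
compute_die  = [(0.70, 180.0), (0.25, 120.0), (0.05, 30.0)]  # logic, SRAM, analog/IO
io_heavy_die = [(0.35, 180.0), (0.25, 120.0), (0.40, 30.0)]

print(f"compute-heavy die:   ~{effective_density(compute_die):.0f} MTr/mm^2")   # ~158
print(f"IO/analog-heavy die: ~{effective_density(io_heavy_die):.0f} MTr/mm^2")  # ~105
```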



 
I honestly found the discussion here even more interesting than the deep dive itself. Not that the article wasn't great, mind you; great work, and it just goes to show that good articles lead to thought-provoking discussion.
That said, I am curious to see how the thin-n-light battle plays out with Qualcomm on the field. This probably will bring back the perennial “x86 is old and should be shot behind the shed, ARM is the way” debate. We’ll see.

TSMC is wrong.

N7+ is 114 MTr/mm^2
N5 is 138 MTr/mm^2
Scaling is 1.21x

N3 is 224 MTr/mm^2
Scaling is 1.62x
I would assume that TSMC themselves would be more in tune with what their own nodes can do compared to whatever extrapolation one can make from the raw numbers. Besides, I think they are comparing base N7 to N5, not the advanced variants.
 
The density depends on the proportion of logic, SRAM, analog elements, etc.
In addition to that, there are usually at least two library variants, optimized for area and performance respectively. The performance-optimized (read: clock speed) one comes with roughly a 30-40% density penalty.
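
A quick sketch of that trade-off, using an assumed round number for the high-density library and the 30-40% penalty quoted above:

```python
# Rough sketch of the library trade-off described above. The nominal HD density
# is an assumed round N5-class number; the 30-40% penalty is the figure quoted
# in the post, not a foundry specification.
nominal_hd_density = 138.0  # MTr/mm^2, assumed high-density library figure

for penalty in (0.30, 0.40):
    hp_density = nominal_hd_density * (1.0 - penalty)
    print(f"{penalty:.0%} penalty -> ~{hp_density:.0f} MTr/mm^2")
# 30% penalty -> ~97 MTr/mm^2
# 40% penalty -> ~83 MTr/mm^2
```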
 
Which I always find funny. The first ARM processor was released just 7 years after the Intel 8086 - 39 years vs 46 years. People act like it's the new kid on the block.
Yeah, I can at least remember ARM back on early 2000s PDAs and the like. Adequate for the job then, but it’s not like those devices were brimming with possibilities and features.
 
Yeah, I can at least remember ARM back on early 2000s PDAs and the like. Adequate for the job then, but it’s not like those devices were brimming with possibilities and features.
These devices were absolutely brimming with possibilities and features. I would argue what was missing was the environment to really facilitate a (smart)phone as we think of it today. In the very early 2000s, Wi-Fi was still new. For mobile data there was GPRS; 3G became a thing somewhere around 2003-2005 and took a while to be adopted. Resistive touchscreens were prevalent. And this is just the hardware side of it :)

Somewhere in the late 2000s it became increasingly clear that computers - including smartphones and other small, light devices - had become, or were on the brink of becoming, "fast enough" for everyday use like office work or browsing the web. This is true regardless of which ISA or architecture we are talking about.
 
Yuk yuk and more yuk.
I am not sure where this new obsession with battery life is coming from, but it's been literal years since battery life on any laptop was a concern.
Intel needs to step back and re-think what they are doing. AMD went down this road of chasing some illusory rabbit and nearly went bankrupt.
 
AI in its current form is a bunch of BS. It's the new thing Big Tech wants to use to drain even more money out of the masses. I work in the creative industry, but I can't find any use for any AI tools.

The Skymont E-core versus Zen 5c comparison will be interesting, but at the end of the day it's all about the power. Chip space on the package doesn't affect purchasing decisions, but a smaller size sometimes means a lower TDP.

But we do have some preliminary information to look at. For instance, the Epyc 9754, which uses 128 Zen 4c cores with hyperthreading, has a TDP of 360 W. The new Xeon 6700E, which uses 144 Skymont E-cores with no hyperthreading, has a TDP of 330 W.

Intel Xeon 6700E "Sierra Forest" CPUs Launched: Up To 144 E-Cores, 330W TDP, 34% More Efficient Versus AMD EPYC Bergamo (wccftech.com)

I'm not seeing a big difference here in TDP and potential performance between these two chips, and the Epyc 9754 has been on the market for a year.
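
For a quick sanity check of the power budgets, here is the per-core TDP math for the two parts as quoted above. TDP per core is obviously not a performance metric; treat it purely as a rough power comparison.

```python
# Back-of-envelope TDP-per-core comparison for the two parts quoted above.
# TDP is not performance, so treat this purely as a power-budget sanity check.
parts = {
    "EPYC 9754 (128 Zen 4c cores, SMT)": (128, 360),
    "Xeon 6700E top SKU (144 E-cores)":  (144, 330),
}

for name, (cores, tdp_w) in parts.items():
    print(f"{name}: {tdp_w / cores:.2f} W per core")
# EPYC 9754:  ~2.81 W per core
# Xeon 6700E: ~2.29 W per core
```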

So you are saying Intel is at least a generation behind AMD in server? They can only match AMD's old Epyc using their newest server CPU.
 
Yeah, but the PL2 only lasts for 50 seconds or something. When you are actually using it, it drops to 35 W. Due to the IO die, that's just impossible for AMD desktop chips.

It can be disabled, or set so it uses as much power as it can.

Yuk yuk and more yuk.
I am not sure where this new obsession with battery life is coming from, but it's been literal years since battery life on any laptop was a concern.
Intel needs to step back and re-think what they are doing. AMD went down this road of chasing some illusory rabbit and nearly went bankrupt.
my gaming laptop says otherwise
 
That's why they come with power bricks.
It's a laptop - why can't it have more than 3 hours of runtime when not running heavy programs? And my power adapter killed itself right when the warranty ended.
 
It's a laptop - why can't it have more than 3 hours of runtime when not running heavy programs? And my power adapter killed itself right when the warranty ended.
If you really want to know why, then complete the relevant physics classes at Coursera or edX, or watch MIT OpenCourseWare videos. Alternatively, seek knowledge at forums that are capable of answering such a question - the TechPowerUp forum is definitely NOT capable of answering your question.
 
AI in its current form is a bunch of BS. It's the new thing Big Tech wants to use to drain even more money out of the masses. I work in the creative industry, but I can't find any use for any AI tools.



So you are saying Intel is at least a generation behind AMD in server? They can only match AMD's old Epyc using their newest server CPU.

It's really not, but the general public at large simply doesn't care, particularly at this stage. It has to start somewhere, but I already see it as much more beneficial than RTRT is at this stage, whether or not people use either from an individual standpoint. Give it a bit of time and you'll find out a lot of people are already leveraging AI to a good degree, and it's just going to get better with further refinement. AI can get quite a bit better from where it is now even without better hardware resources.

It's less the case with RTRT, where there is only so much optimization you can do, but AI is very open-ended: resources can be pooled together and trained, so it will improve a lot over time, and pretty rapidly in many cases, which is part of the beauty and genius of it. It's going to become nearly indistinguishable whether something was done by AI or by a human, similar to how path-traced images can look deceptively close to real-life photos.

I can't wait until it can mostly perfect a lot of the more mundane problems with continued training persistence, along with additional and better sub-routines or whatever. In my eyes the floodgates have already been opened wide, and ideas are pouring out right now at an alarming rate that's rapidly progressing and propelling AI further and further ahead as time marches on. Right now AI feels like it's in free fall, but parachutes are starting to open up, littering the skies with ideas to be trained, refined, and turned into entire diverse, wide-spanning concepts.
 
If you really want to know why, then complete the relevant physics classes at Coursera or edX, or watch MIT OpenCourseWare videos. Alternatively, seek knowledge at forums that are capable of answering such a question - the TechPowerUp forum is definitely NOT capable of answering your question.
Why can't they just make CPUs more energy efficient when not heavily used? I know 100 Wh will only go so far if it's the maximum size, but they don't even try.
 