Friday, June 27th 2025

Some Intel Nova Lake CPUs Rumored to Challenge AMD's 3D V-Cache in Desktop Gaming

Looking to challenge AMD's gaming CPU supremacy, Intel is reportedly developing Nova Lake processors with enhanced cache technology that could rival the popular 3D V-Cache found in X3D chips. According to leaker @Haze2K1, Intel plans to add "bLLC" (big Last Level Cache) to at least two Nova Lake models. This enlarged L3 cache is similar to AMD's 3D V-Cache, which has made X3D chips the top pick for enthusiast gamers since 2022. The new processors with bLLC will have 8 P-cores and 4 LP E-cores. One version will include 20 E-cores, while another will have 12 E-cores. Both are expected to keep a 125 W TDP rating.

Intel's bLLC technology already exists in Clearwater Forest server processors, where local cache is integrated into the base tile positioned beneath the active tiles. This structural approach mirrors AMD's current 9000-series X3D design, where V-Cache attaches to the bottom of the CPU dies, a significant improvement over earlier generations that placed the cache on top, causing thermal issues and clock speed limitations. Intel, however, had previously ruled out a consumer product with technology similar to AMD's 3D V-Cache: in November 2024, Intel Tech Communications Manager Florian Maislinger told YouTubers der8auer and Bens Hardware that no such desktop version was planned. The Nova Lake-S family is set to hit the market in late 2026 or early 2027, with at least six desktop models using the new LGA 1954 package. The lineup will span from the top-end Core Ultra 9 485K with 52 cores and a 150 W TDP down to the basic Core Ultra 3 415K offering 12 cores at 125 W.
Sources: TechSpot, @Haze2K1 on X

140 Comments on Some Intel Nova Lake CPUs Rumored to Challenge AMD's 3D V-Cache in Desktop Gaming

#1
dir_d
Man, I dunno, will it be too late to compete with Zen 6 if it comes out in early 2027? Sounds like a very fun processor; I want to see it in action.
#2
freeagent
Good, I think V-Cache is a terrible way to segment CPUs, always have.

I hope Intel gives it to AMD over this one.
#3
TheinsanegamerN
freeagent: Good, I think V-Cache is a terrible way to segment CPUs, always have.
You think it's good that Intel is going to segment its CPUs with V-Cache because you think segmenting CPUs with V-Cache is terrible? WUT?
freeagent: I hope Intel gives it to AMD over this one.
It may help them catch up significantly. IDK if they will dethrone AMD on efficiency or outright gaming though.
#4
Chrispy_
How did Intel get so bad? 3D V-Cache is already 3 years old and it's absolutely dominant in gaming. Why is it going to take until Nova Lake (late 2026/early 2027)? Even if Intel missed the boat on getting it into Raptor Lake, they should have been throwing cache at Meteor Lake at a minimum, and it's a motherflippin' disgrace that Arrow Lake didn't have this.

I'm seeing why Intel is losing market share so fast. They're too far behind, and their agility in the face of competition is on par with an ocean-faring supertanker. Let's be very generous and very optimistic and say that this rumour is accurate AND the gaming variants with extra cache launch on time in the first wave of Nova Lake releases. If both those things are true, Intel might have an answer to AMD's X3D range a mere four years late.

To put that in perspective, four years ago Intel had just launched their back-ported Rocket Lake onto 2015's 14nm process, and the Radeon 6900XT was the most recent flagship GPU launch.
#5
Dyatlov A
As long as they keep E-cores, Intel stays a sucker.
#6
Squared
Chrispy_: I'm seeing why Intel is losing market share so fast. They're too far behind, and their agility in the face of competition is on par with an ocean-faring supertanker. Let's be very generous and very optimistic and say that this rumour is accurate AND the gaming variants with extra cache launch on time in the first wave of Nova Lake releases. If both those things are true, Intel might have an answer to AMD's X3D range a mere four years late.
The successes of Moore's Law only happened because every company believed that if it didn't match Moore's Law, the competition would. Every successful silicon company is constantly trying to understand where its competitors will be tomorrow and trying to hold a strong position next to them. So yeah, it is very sad that Intel is taking this long to answer V-Cache on desktop, especially considering Intel has been using advanced packaging since Meteor Lake and Sapphire Rapids, which means Intel is already paying for the related complexity while AMD's non-V-Cache chips are merely silicon chiplets on a substrate.

On that topic, since Intel is already paying for advanced packaging, does it really cost that much more to include bLLC? Most of the desktop market is gamers, so it's probably going to be in more demand than the non-bLLC chips, as AMD's pricing suggests ($305 for the 9700X and $472 for the 9800X3D). I imagine that statement, 'Intel plans to add "bLLC" (big Last Level Cache) to at least two Nova Lake models,' is a very conservative lower limit on the number of models.
dir_d: Man, I dunno, will it be too late to compete with Zen 6 if it comes out in early 2027? Sounds like a very fun processor; I want to see it in action.
From the last rumor I saw, Zen 6 was on N2X and Nova Lake on N2P, which would mean Nova Lake should come to market first. Although I think that rumor is unlikely; AMD is already skipping N3 entirely (except for Zen 5c servers), and also waiting for the most advanced N2 node would mean a long wait.
#7
N/A
Looks like one of the 8P+16E tiles is replaced by bLLC.
#8
dyonoctis
Chrispy_: How did Intel get so bad? 3D V-Cache is already 3 years old and it's absolutely dominant in gaming. Why is it going to take until Nova Lake (late 2026/early 2027)? Even if Intel missed the boat on getting it into Raptor Lake, they should have been throwing cache at Meteor Lake at a minimum, and it's a motherflippin' disgrace that Arrow Lake didn't have this.

I'm seeing why Intel is losing market share so fast. They're too far behind, and their agility in the face of competition is on par with an ocean-faring supertanker. Let's be very generous and very optimistic and say that this rumour is accurate AND the gaming variants with extra cache launch on time in the first wave of Nova Lake releases. If both those things are true, Intel might have an answer to AMD's X3D range a mere four years late.

To put that in perspective, four years ago Intel had just launched their back-ported Rocket Lake onto 2015's 14nm process, and the Radeon 6900XT was the most recent flagship GPU launch.
AMD also has the advantage of being able to fab X3D chiplets in bulk that can be used for either EPYC or desktop Ryzen. Meanwhile, Intel would have to make a specific fab allocation for those chips.
#10
Daven
Intel needs to get out of the chip design business. The CPU market is crowded and the GPU market is the only lucrative space currently.

Intel is sitting on a gold mine with its fabs, but all that capacity is wasted making shitty Intel CPUs that fewer and fewer people want.
#11
Pizderko
So today is the year 2025 - where are Rentable Units?

Also, according to Intel's patent:
Rentable Units reduce processing time better than the Hyper-Threading technology used since the Northwood Pentium 4.
#12
TechLurker
AMD should counter by releasing X3D as standard on both regular Zen and Zen-Compact chiplets. Then every AMD CPU could technically be a gaming CPU in a pinch (or run other specialized workloads that also enjoy 3D Cache).
Chrispy_: How did Intel get so bad?
Because for the longest time they were constrained by the limits of their own fabs, IIRC. All of their designs were built around their own fab processes, and it took time for them to get things working within those constraints. They only began using TSMC relatively recently, and have begun splitting the company into more semi-independent entities in order to improve agility again.
Daven: Intel needs to get out of the chip design business. The CPU market is crowded and the GPU market is the only lucrative space currently.

Intel is sitting on a gold mine with its fabs, but all that capacity is wasted making shitty Intel CPUs that fewer and fewer people want.
Intel has been trying to win more outside contracts for its fabs, even if it means fabricating its rivals' CPUs in the process, whether it'd be an ARM, RISC-V, or AMD design. Uptake seems to have been slow though, given the lack of good news, with the most recent reports being on-and-off rumors that Intel was going to sell the fabs entirely, the same way AMD sold off its own.
#13
SOAREVERSOR
Daven: Intel needs to get out of the chip design business. The CPU market is crowded and the GPU market is the only lucrative space currently.

Intel is sitting on a gold mine with its fabs, but all that capacity is wasted making shitty Intel CPUs that fewer and fewer people want.
Corporate customers want them. Gaming is not the end-all be-all; it's not even a productive use of a CPU, it's just masturbation. When it comes to corporate laptops and mini desktops, which is the majority of stuff sold, people want Intel. Even for a lot of servers and workstations, customers want Intel.

Intel + Intel or NVIDIA graphics is the default out there and what corporations buy for Windows. If they are going to move off Intel for laptops and mini desktops, it's going to be when Windows on ARM is good enough, as more and more applications are just done in a browser now. And it's not down to dirty tricks here either.

Stop thinking like a gamer. We are in the same situation as when it was the Athlon 64: it dominated in servers and dual-socket workstations when it came to corporations, but for desktops it was all Pentium 4 and for laptops it was all Pentium M. Except this time around, Intel's current products are actually only worse than AMD's when it comes to gaming, not for actual work. And again, gaming doesn't count as work any more than masturbation does.

When it comes to the corporate world, AMD is in servers and workstations; you won't see it in other areas. Even then, it will not be paired with an AMD GPU. And if the system is really important, it's now not going to be using x86 at all; you're talking stuff from IBM. And at the server level we often aren't talking Windows at all, but Linux or Unix.
#14
Rover4444
Great news, Intel's the only one with actual CUDIMM support and sane PCI-E layouts on their boards.
Daven: Intel needs to get out of the chip design business. The CPU market is crowded and the GPU market is the only lucrative space currently.

Intel is sitting on a gold mine with its fabs, but all that capacity is wasted making shitty Intel CPUs that fewer and fewer people want.
Get back to me when someone sells a mini PC with better transcoder support, power budget, and capability than one based on an N150 at the same price.
#15
ThomasK
TechLurker: given the lack of good news and the most recent ones being on-and-off rumors that Intel was going to sell it entirely, the same way AMD sold off its own fabs.
Let's be honest here, Intel should've gone fabless a long time ago, when it got stuck on 14 nm for about 7 years. It could afford to stand still while it had little to no competition, which then arrived way too late.

Let's see how long it'll take until the new CEO realizes the mistake.
#16
kondamin
Cool, I wonder how they will do it.
Shame they aren't pushing Nova Lake to Q4 2025 and we're getting an Arrow Lake refresh instead.
#17
Zendou
ThomasK: Let's be honest here, Intel should've gone fabless a long time ago, when it got stuck on 14 nm for about 7 years. It could afford to stand still while it had little to no competition, which then arrived way too late.

Let's see how long it'll take until the new CEO realizes the mistake.
I don't think the US Government will allow Intel to spin off its fabs; they got way too much money from the CHIPS Act to try and dump that. The whole point is to have on-shore fabs, and TSMC has already stated that its highest technology will be made in Taiwan only.
#18
bug
This improved L3 cache is similar to AMD's 3D V-Cache
Thanks to this bit, I have finally learned X3D is not L4 cache, it's a way to build a bigger L3 one. You live, you learn.
#19
Visible Noise
Daven: Intel needs to get out of the chip design business.
ThomasK: Intel should've gone fabless a long time ago
Sounds like we should have a TPU cage match.
#20
wickerman
ThomasK: Let's be honest here, Intel should've gone fabless a long time ago, when it got stuck on 14 nm for about 7 years. It could afford to stand still while it had little to no competition, which then arrived way too late.

Let's see how long it'll take until the new CEO realizes the mistake.
The culture at Intel (at least through the '90s that I was aware of) has always been fab-first. Chip design worked around what the fabs told them they could or could not do. They were cutting-edge and world-class for a very long time, basically until the late 2010s. Probably around 2014 TSMC started throwing engineers by the hundreds at cracking 10 nm, and had volume production by 2016. Intel stalled on 10 nm; we didn't see it in volume until Fab 42 came online around 2020.

The biggest struggle was probably EUV lithography. Intel stuck with multi-patterning DUV while TSMC had EUV with N7 in 2017, and N7+ later became their high-volume node (2019, with Apple buying most of the wafers, I believe?). Intel's 10 nm was solidly comparable to TSMC's 7 nm for density, and the performance was there, but if your chip takes 30 steps of mask and etch with DUV and your competitor can do it in a handful with EUV, you just don't compete. Yours is harder to make, yours has more steps that can go wrong, and every day you spend problem-solving, your competitor gets more used to their new EUV process.

The EUV fumble alone put Intel probably 5 years behind TSMC. TSMC also had a huge advantage being a foundry for hire: they could go all-in on the subtle data-driven tweaks here and there (the type of thing that turned an N7 into an N7+, or whatever optimized for lower power or higher frequency, etc.) because they always had a big enough client to foot the bill for research, and enough clients in general to profit off the highly optimized nodes that followed. Intel made chips for Intel.

But the fact that Intel is here today with EUV on the Intel 4 process, has great chiplet tech, and is bringing fancy V-Cache to the table shows they know how to get back up from such a fall from grace. The question on my mind is whether Intel can turn foundry-as-a-service into the beast that can feed chip design once again.
#21
truthsayer
wickerman: The culture at Intel (at least through the '90s that I was aware of) has always been fab-first. Chip design worked around what the fabs told them they could or could not do. They were cutting-edge and world-class for a very long time, basically until the late 2010s. Probably around 2014 TSMC started throwing engineers by the hundreds at cracking 10 nm, and had volume production by 2016. Intel stalled on 10 nm; we didn't see it in volume until Fab 42 came online around 2020.

The biggest struggle was probably EUV lithography. Intel stuck with multi-patterning DUV while TSMC had EUV with N7 in 2017, and N7+ later became their high-volume node (2019, with Apple buying most of the wafers, I believe?). Intel's 10 nm was solidly comparable to TSMC's 7 nm for density, and the performance was there, but if your chip takes 30 steps of mask and etch with DUV and your competitor can do it in a handful with EUV, you just don't compete. Yours is harder to make, yours has more steps that can go wrong, and every day you spend problem-solving, your competitor gets more used to their new EUV process.

The EUV fumble alone put Intel probably 5 years behind TSMC. TSMC also had a huge advantage being a foundry for hire: they could go all-in on the subtle data-driven tweaks here and there (the type of thing that turned an N7 into an N7+, or whatever optimized for lower power or higher frequency, etc.) because they always had a big enough client to foot the bill for research, and enough clients in general to profit off the highly optimized nodes that followed. Intel made chips for Intel.

But the fact that Intel is here today with EUV on the Intel 4 process, has great chiplet tech, and is bringing fancy V-Cache to the table shows they know how to get back up from such a fall from grace. The question on my mind is whether Intel can turn foundry-as-a-service into the beast that can feed chip design once again.
Intel has had Foveros advanced packaging since 2019. Sapphire Rapids, a server XPU with 47 chiplets (including cache chiplets) packaged using Foveros, was released in 2022.

Let's stop this myth of "Intel only failed because of 10 nm." Intel's decline is due to Intel as an organisation failing across the board; the client group and data center group have failed alongside the manufacturing group.
#22
A Computer Guy
Pizderko: So today is the year 2025 - where are Rentable Units?

Also, according to Intel's patent:
Rentable Units reduce processing time better than the Hyper-Threading technology used since the Northwood Pentium 4.
I'm surprised AMD hasn't done rentable chiplets. I really want a 64-core Threadripper for weekly compression jobs but otherwise don't need the cores. :laugh:
#23
Dr_b_
Late 2026/2027?! LOL, by then AMD will have moved on to 4D V-Cache.
#24
thesmokingman
I really hate this type of wishful marketing BS.
#25
TumbleGeorge
Dr_b_: Late 2026/2027?! LOL, by then AMD will have moved on to 4D V-Cache.
Yes, it will have cache in a non-existing dimension.