
14900k - Tuned for efficiency - Gaming power draw

More cores, at least, can be 100% utilized. Getting the 3D cache 100% utilized is, I think, a challenge AMD needs to face going forward, otherwise the manufacturing complexity and cost of it may not be worth it. This is why I was wondering whether the trend of new games benefitting from X3D cache is going up or down. If it's going down, then X3D CPUs become obsolete going into the future and lose significant value. AMD needs to ensure X3D retains its value.
Isn't it more a case of whatever you can feed the CPU from software?

There hasn't been maximum core utilization on the vast majority of applications either, but we still get more cores.
Is either one really economically viable? What is viable? I think future CPUs will come with new architectures and changes that will again be both a response to and a look toward how applications can use them. But I don't see how cache is 'more dead' than another bunch of cores that way; it's probably less dead, because cache will always accelerate certain workloads, so if you don't have it, you won't have the acceleration.
 
Can be utilised is not the same as will be utilised. Paying more for more cache isn't a bigger gamble than paying for more cores, except that bigger cache can also be used by today's games:
View attachment 326629

It's really a combination of both: today's games can use more cache or more cores, and it varies by game. The bigger looming question is how games 2-5 years from now will look in terms of how heavily they utilize, or depend on, core ST/MT performance versus cache capacity. If it were me trying to project how things will compare, I would suspect that higher ST/MT performance with broader memory/IMC performance expectations will age more gracefully long term, while the lower latency from extra cache helps more readily and immediately in more of today's game software on current high-end GPU hardware.

It's not even an enormous performance gap either. If you want a ton more MT for cases where you can leverage it, forgoing a small performance difference you might not even notice at the resolution targets you play at seems legit enough to me. Everyone's use cases are different, though. Coming from an i3-6100, I was often in CPU-bottlenecked scenarios where the CPU's MT performance especially held things back and choked performance severely, so I made absolutely sure that wasn't the case this time with my CPU.

The 14700K can handle basically anything I throw at it and barely flinch, even when I throw multiple things at it at once that used to choke my old CPU. It's a much more fluid multi-tasking experience. I'm sure the 7800X3D or 7900X3D would've been pretty great as well, but I like having all the MT available in case I ever want or need it for something, rather than being shorthanded and wishing I had more MT headroom. It's a great all-around chip in the right hands, irrespective of some small % gaming difference at resolution targets I hate to game at in 2023 on a 1440p 10-bit display.
 
The 14700K can handle basically anything I throw at it and barely flinch, even when I throw multiple things at it at once that used to choke my old CPU. It's a much more fluid multi-tasking experience. I'm sure the 7800X3D or 7900X3D would've been pretty great as well, but I like having all the MT available in case I ever want or need it for something, rather than being shorthanded and wishing I had more MT headroom. It's a great all-around chip in the right hands, irrespective of some small % gaming difference at resolution targets I hate to game at in 2023 on a 1440p 10-bit display.
Same with me and the 7800X3D. It loses a couple % on MT, but gains a couple % in gaming, which is my system's main and only focus, so I don't need anything else. Whether you opt for more cache, or more cores, is up to your personal preference, and you're not wrong with either, in my opinion. :)
 
They were originally thinking about the thick heatspreader being a vapour chamber.

I'm not fond of the IHS design. Thicker, I'm OK with; it raises the thermal gradient a little, but not badly. The notches, which reduce surface area, are my concern, because that surface is the only place you can remove the heat from.

Anyhow, on the Intel topic, I've just completed a 6 GHz run of 3DMark05, which is mostly CPU-intensive and only requires a dual core. I am easily cooling this chip on air at this frequency with 3 cores, probably around 150 W.

Have the parts for a water loop now. I had to toss the old pump, and the tubing was a little slimy. Should be able to test Cyberpunk at closer to 300 W loads. I'll use the all-copper, nickel-plated block to handle the thermals. The only thing I'm missing is fans; gotta get some new ones. I'll also use a thick 120.3 radiator.

I figure I can move up to, but not exceed, 1200-1300 BTU/hr, which will easily cover the CPU. In the past that handled 6.1 GHz with 16 threads.
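
For a quick sanity check against those ~300 W Cyberpunk loads, that converts to roughly 350-380 W of heat (1 W ≈ 3.412 BTU/hr):

Code:
# rough conversion from BTU/hr to watts
1200 / 3.412    # ~352 W
1300 / 3.412    # ~381 W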
 
Why would I buy a CPU that's 1-2% (or any %) weaker for gaming?
The answer is simple: you don't have an RTX 4090! Only with that video card do you see these small differences. When that small difference disappears (because you don't have the most powerful video card of the moment), the 7800X3D turns into a performance/price disaster.
Interesting how AMD supporters gave up on the multitasking argument when it disappeared with the advent of E-cores.

On:
Another game, just bought.
GPU: 100%
CPU: 20%
CPU Power: <40W
A little earlier I had to archive ~50 GB of data, and I played Cyberpunk the whole time; I just wasn't there to admire the evolution of the compression. Ctrl+Shift+F2 to activate the full computing power (double that of a 7800X3D), and I didn't see any stuttering in the game.
That's the beauty of these processors that get blamed only because the reviews used them "untamed": the 14700K(F) can deliver single-core/MT performance unmatched by a 7800X3D at the same power consumption, but I can also double the performance when I need it. The KF variant is even cheaper than the 7800X3D.
Thanks to these Intel options, AMD knocked $50 off the 7800X3D. I remind you that it was launched at $450, and it sold at that price for several months. Now it has a $400 recommendation, and I still consider it too expensive for what it offers.

Maybe it was better to stay with the X3Ds in your garden.

Robo OSD.png
Robo power.jpg
 
Can be utilised is not the same as will be utilised. Paying more for more cache isn't a bigger gamble than paying for more cores, except that bigger cache can also be used by today's games:
View attachment 326629

Isn't it more a case of whatever you can feed the CPU from software?

There hasn't been maximum core utilization on the vast majority of applications either, but we still get more cores.
Is either one really economically viable? What is viable? I think future CPUs will come with new architectures and changes that will again be both a response to and a look toward how applications can use them. But I don't see how cache is 'more dead' than another bunch of cores that way; it's probably less dead, because cache will always accelerate certain workloads, so if you don't have it, you won't have the acceleration.
Sorry, I wasn't very clear when I was talking about utilization. I meant it in a broad, industry-wide sense, not necessarily on a per-machine basis with someone seeing 100% usage in Task Manager. You can use cores for just about anything; however, if the X3D cache is (for the sake of argument) only 50% effective across all games and 10% effective across all applications, but on average costs 30% more to produce than a non-X3D chip, then there will conceivably be a problem down the road if your chip infrastructure moves to building only X3D-cache CPUs. The value and demand for those chips won't be there if effective utilization trends downward while costs stay higher than your competition's. We can see, at least at the moment in the consumer space, that AMD hasn't banked on making all Ryzen chips X3D-based yet. Maybe they will, maybe they won't; only time will tell. Also, you don't see Intel trading off E-cores to provide gobs of cache, at least not yet.

Also, sorry to the OP, I've gone off topic, so I will retire this sub-conversation.
 
The answer is simple: you don't have an RTX 4090! Only with that video card do you see these small differences. When that small difference disappears (because you don't have the most powerful video card of the moment), the 7800X3D turns into a performance/price disaster.
Interesting how AMD supporters gave up on the multitasking argument when it disappeared with the advent of E-cores.
I don't have a 4090, so I have to buy a CPU that's weaker for gaming, even if it's more expensive, just because it has more cores which I'm not gonna use. What kind of moronic argument is this? :kookoo:

It amazes me that after all these posts, you still fail to read what I'm saying. Let's put it in simpler terms:
  1. I am not an AMD supporter. About 80% of my systems through my life have been Intel+Nvidia, and I love(d) them equally as I love my AMD ones.
  2. I gave up on the multitasking argument after spending nearly a grand on a 5950X that didn't improve my gaming experience at all.
  3. I do not need or want mixed e/p core chips because they need Windows 11 to give their best, which I'm unwilling to upgrade to. I also do not need or want a dual-CCD AMD chip, because the extra cores are completely unnecessary for my use case, and the inter-CCD latency can even hurt performance in some odd cases.
  4. I do not recommend the 7800X3D as a mid-tier CPU option. It is not great at a price-to-performance level, and I never said it was. I bought mine only because I was curious (and I still am curious, considering that I'm still not using its full potential with a 7800 XT), which I do not recommend others with more sensible budgets to do. I'm a hobby PC builder, I buy parts for fun, not because they make monetary sense. What I do recommend however, is a 7700 non-X or a 7600 or an i5-13500. So basically, is a 7800X3D necessary for gaming? No. Or is an 11700 with its 8 cores and 4.9 GHz max boost necessary for watching films on a HTPC? Absolutely not, but I do have one because of YOLO. Got any issues with that?
  5. You're happy with the 14700K. That's great! But why does that mean that others can't be happy with what they've got? What does my happiness take away from yours? Is this some sort of personal agenda against me or against AMD buyers in general? Why do you even feel like you have to prove something?

Sorry, I wasn't very clear when I was talking about utilization. I meant it in a broad, industry-wide sense, not necessarily on a per-machine basis with someone seeing 100% usage in Task Manager. You can use cores for just about anything; however, if the X3D cache is (for the sake of argument) only 50% effective across all games and 10% effective across all applications, but on average costs 30% more to produce than a non-X3D chip, then there will conceivably be a problem down the road if your chip infrastructure moves to building only X3D-cache CPUs. The value and demand for those chips won't be there if effective utilization trends downward while costs stay higher than your competition's. We can see, at least at the moment in the consumer space, that AMD hasn't banked on making all Ryzen chips X3D-based yet. Maybe they will, maybe they won't; only time will tell. Also, you don't see Intel trading off E-cores to provide gobs of cache, at least not yet.

Also, sorry to the OP, I've gone off topic, so I will retire this sub-conversation.
I don't think we'll see a product lineup with X3D-only CPUs, ever. Some applications are not cache-sensitive, proven by AMD's investment in the smaller Zen 4c cores.
 
I don't think we'll see a product lineup with X3D-only CPUs, ever. Some applications are not cache-sensitive, proven by AMD's investment in the smaller Zen 4c cores.
Oh, they are there, and there is a need, but it's mostly enterprise systems that benefit from that much cache.
It's the EPYC chips ending in X, like the 9684X with over 1 GB of L3 cache. Things like databases, ML, etc. benefit from a larger cache, as the more data you can keep near the CPU, the better.

The reason for Zen 4c is things like AWS, etc., where they can increase VM density per U of rack space without greatly impacting performance.

But as you mentioned, the inter-CCD latency currently kills any real benefit of it for things like games.
 
My next hat trick.
Try and run 13700K with passive cooling. XD
I can do 180w passive with my 5900X and one of my coolers using Linpack Xtreme :D
 
I don't have a 4090, so I have to buy a CPU that's weaker for gaming, even if it's more expensive, just because it has more cores which I'm not gonna use. What kind of moronic argument is this? :kookoo:
I just said that, in your case, X3D does not bring you any benefit in games. At the same price, or even cheaper, you can find processors that achieve the same gaming results with a 4080, but with superior performance in many other applications. In the case of the 14700K, at the same price, the difference is colossal.

According to the TPU review, the difference between the 4080 and 4090 is:
5% in 1080p
11% in 1440p
25% in 4K
The 25% difference in 4K is demonstrated in the video below (as if they heard me and offered a helping hand).
So, under these conditions, with 25% fewer frames per second, that humble advantage the 7800X3D gets in games (1%) against cheaper processors definitely disappears. And if this small advantage disappears, I repeat, only the excessively high price of the 7800X3D remains: it is surpassed by the cheaper 7700X and effectively destroyed by the 13/14700K(F), processors sold at the same price or even lower.

Keep in mind that reviews are performed on an "empty" operating system, without any applications running in the background. This puts Intel at a disadvantage, since it has those E-cores that take care of the background. So, in real life, it is possible that the 7800X3D will lose to the processors with E-cores even when a 4090 is used.
For the other applications, we have enough data to state that the 14700K, at the same price, destroys the 7800X3D by ~20% in single-threaded and by almost 100% in CPU-intensive or heavily multithreaded applications.

Only energy efficiency remains in the discussion, but... can you get 11.5K in CPU-Z and 25K+ in Cinebench R23 without exceeding 80 W? Can you actually get these scores, even using LN2? The 14700K(F) succeeds. Without LN2, of course.

I notice that the illusion of that extra 1% (obtained only in games, in ideal conditions and with the most powerful video card) gives you plenty of energy to argue the ridiculous.

 
I just said that, in your case, X3D does not bring you any benefit in games. At the same price, or even cheaper, you can find processors that achieve the same gaming results with a 4080, but with superior performance in many other applications. In the case of the 14700K, at the same price, the difference is colossal.

According to the TPU review, the difference between the 4080 and 4090 is:
5% in 1080p
11% in 1440p
25% in 4K
The 25% difference in 4K is demonstrated in the video below (as if they heard me and offered a helping hand).
So, under these conditions, with 25% fewer frames per second, that humble advantage the 7800X3D gets in games (1%) against cheaper processors definitely disappears. And if this small advantage disappears, I repeat, only the excessively high price of the 7800X3D remains: it is surpassed by the cheaper 7700X and effectively destroyed by the 13/14700K(F), processors sold at the same price or even lower.

Keep in mind that reviews are performed on an "empty" operating system, without any applications running in the background. This puts Intel at a disadvantage, since it has those E-cores that take care of the background. So, in real life, it is possible that the 7800X3D will lose to the processors with E-cores even when a 4090 is used.
For the other applications, we have enough data to state that the 14700K, at the same price, destroys the 7800X3D by ~20% in single-threaded and by almost 100% in CPU-intensive or heavily multithreaded applications.

Only energy efficiency remains in the discussion, but... can you get 11.5K in CPU-Z and 25K+ in Cinebench R23 without exceeding 80 W? Can you actually get these scores, even using LN2? The 14700K(F) succeeds. Without LN2, of course.

I notice that the illusion of that extra 1% (obtained only in games, in ideal conditions and with the most powerful video card) gives you plenty of energy to argue the ridiculous.


Multi-Threading. The very thing the guy who opened this thread disabled is the thing that takes care of background tasks. The 7800X3D does not magically get slower because of background processes; it has no issues with those.

It's the better CPU. I know it's hard for you to admit, but it is: just a better design overall, positioned to tackle games quite well.

As for applications, just get a 7950X3D and turn the extra cores on or off depending on whether you're gaming or working. Still better than anything Intel has.

Intel is playing second fiddle right now; that might change if they get their manufacturing working correctly, as well as their modular designs.
 
Just wondering why... the whole idea of this Emergency Edition CPU is to be the fastest in some scenarios, no matter its power consumption, and it can reach 6 GHz for a blink of an eye.

When you fine-tune it, why not just get a much cooler i7 that isn't factory-overclocked to the roof and do the same to it?

Let's discuss the topic and not go off on tangents
Sorry, saw your post after I posted mine.
 
It's the better CPU. I know it's hard for you to admit, but it is: just a better design overall, positioned to tackle games quite well.
Good for what? Does it cook, clean and take the children to school?
If you can prove that it helps with a GT 1030, hats off!
For now, it excels only in catastrophic performance/price, because you pay more for less performance and a hypothetical 1% (wow!!!) in gaming, and only with an RTX 4090.

Let's be serious!

You cling, desperately, to extremes. We are not discussing AMD or Intel as companies here, but how much it helps me, the owner of a GTX 1050, RTX 2060, RTX 3060, RX 5700, RX 6700 XT... I think you get the idea: anything below a 4090 or 7900 XTX. Can you prove that there is any difference between the 7800X3D and much cheaper processors using the video cards that over 90% of gamers actually use?
It's the same blah blah blah I saw from the AMD camp at the launch of the 5800X3D at $450, I think in June 2022. We are in 2023 and we see the 5800X3D at the level of the 7600X, and only with an RTX 4090. With a weaker video card... it's not hard to imagine that there was no difference even in 2022 between the 5800X3D and the much cheaper 5700X.
Prove that it brings at least 20% extra performance in games and we'll have a topic of discussion. At that percentage, it would be worth the price.
 
I just said that, in your case, X3D does not bring you any benefit in games. At the same price, or even cheaper, you can find processors that achieve the same gaming results with a 4080, but with superior performance in many other applications. In the case of the 14700K, at the same price, the difference is colossal.

According to the TPU review, the difference between the 4080 and 4090 is:
5% in 1080p
11% in 1440p
25% in 4K
The 25% difference in 4K is demonstrated in the video below (as if they heard me and offered a helping hand).
So, under these conditions, with 25% fewer frames per second, that humble advantage the 7800X3D gets in games (1%) against cheaper processors definitely disappears. And if this small advantage disappears, I repeat, only the excessively high price of the 7800X3D remains: it is surpassed by the cheaper 7700X and effectively destroyed by the 13/14700K(F), processors sold at the same price or even lower.
You obviously didn't read my whole post:
I do not recommend the 7800X3D as a mid-tier CPU option. It is not great at a price-to-performance level, and I never said it was. I bought mine only because I was curious (and I still am curious, considering that I'm still not using its full potential with a 7800 XT), which I do not recommend others with more sensible budgets to do. I'm a hobby PC builder, I buy parts for fun, not because they make monetary sense. What I do recommend however, is a 7700 non-X or a 7600 or an i5-13500. So basically, is a 7800X3D necessary for gaming? No. Or is an 11700 with its 8 cores and 4.9 GHz max boost necessary for watching films on a HTPC? Absolutely not, but I do have one because of YOLO. Got any issues with that?
---
Keep in mind that reviews are performed on an "empty" operating system, without any applications running in the background. This puts Intel at a disadvantage, since it has those E-cores that take care of the background. So, in real life, it is possible that the 7800X3D will lose to the processors with E-cores even when a 4090 is used.
Games don't saturate 8 cores to 100%, not even with a 4090, so this is pure bollocks.

This is my last post on this. I bought what I bought because YOLO. If you don't like it, that's your problem, not mine. Now, let's get back on topic, shall we? :)
 
Low quality post by Bagerklestyne
We are in 2023 and we see the 5800X3D at the level of the 7600X, and only with an RTX 4090. With a weaker video card... it's not hard to imagine that there was no difference even in 2022 between the 5800X3D and the much cheaper 5700X.
Prove that it brings at least 20% extra performance in games and we'll have a topic of discussion. At that percentage, it would be worth the price.

You're right, in some games it's marginal, but in Far Cry and Borderlands 3 it was well over 20% compared to the 5800X (which I think we agree is going to be ahead of the 5700X on framerates) and that was with a 3080

But as for comparing it to the 7600X and 'only getting the same performance', temper that with the fact that the 5800X3D is a drop-in solution for a platform that's 7 years old, vs. a new platform with higher clocks, substantial IPC gains and DDR5 to go with it.
 
Merry Christmas! - Two reply bans given.


Despite the warning from another mod, some have continued with OT about AMD chips. This is about tuning the 14900k for efficiency. Stick to it.
 
I think the biggest thing for me with the 14700K was adjusting the max ratio of the P-cores and E-cores; depending on application usage, you can get a bit more or a bit less performance from either to target ST or MT usage.

I haven't tinkered with HT, but enabling all HT, disabling all HT, or enabling it only on select P-cores could each react a bit differently with different software, so that could be worth testing if your BIOS allows for it. I'm not sure if HT can be adjusted in software or not, but probably. I just left it on with my old CPU, but that was a dual core; there wasn't a big difference either way, and it just seemed more consistent on with that hardware. It's very different with dozens of cores, where the need for HT tends to be less dramatic, though certain software will leverage basically as many threads as you throw at it.

For the OS, with Windows 11 I like to use these command-line tweaks; they're to do with memory usage and memory compression.

Code:
# Bump the NTFS paged-pool memory limit (1 = default, 2 = increased)
fsutil behavior set memoryusage 2

# The MMAgent cmdlets below are PowerShell - run them from an elevated prompt
Enable-MMAgent -MemoryCompression

Enable-MMAgent -PageCombining

# Raise the prefetch file limit for the Operation Recorder API
Set-MMAgent -MaxOperationAPIFiles 8192
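
To check that the settings actually took, you can query them afterwards:

Code:
fsutil behavior query memoryusage
Get-MMAgent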

This is a registry option for DWM you may have some luck with. It felt like keeping the averaging period higher than the threshold percent was ideal. I also dropped the averaging period down to about 252 and kept the threshold percent somewhere between 7 and 63, in intervals of +/-7, so that it synchronizes evenly for better consistency. I think it has to do with the drawing of GDI desktop GUI elements during mouse/keyboard interaction, but I'm not 100% sure; I haven't seen it really talked about, though since it's DWM I think it's GDI-related.

Code:
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Dwm\ExtendedComposition]
"ExclusiveModeFramerateAveragingPeriodMs"
"ExclusiveModeFramerateThresholdPercent"

Overall, I don't think there is really a great deal you can tune on these chips relative to stock. You can gain a little more efficiency or performance, or both, and slightly emphasize ST or MT, but overall the chips are already pushed pretty close to their general limits. You can forgo performance to tune for efficiency to a greater extent, though you'd want to weigh that against your needs and expectations for the chip.

What I would recommend is testing how the chip reacts to pushing the P-cores or E-cores a bit more aggressively by dropping the ratio of one to boost the ratio of the other by x1. If you can do that and get it stable on your system, you can weight it a little more or less toward either one for any given application pretty easily, or keep it balanced as stock.
 
I just tested the power draw of my ECO 14900K (at 4800/4000 MHz, HT off) with an RTX 4070 using the Cyberpunk built-in benchmark; I report the power draw while the camera passes through the bar.

First, low settings at 1080p resolution; the game probably cannot utilise the CPU more than this: 94 W

low setting CPU util.png

Then ultra settings (no RT), 1440p resolution, this time GPU limited: 57 W

1440p ultra no RT CPU util.png


I normally see even lower CPU power draw in this game (typically just below 50W), because I have RT enabled.

So in this scenario (CPU at lower frequency, HT off and GPU limited), the power draw of this CPU is VERY MODEST.
 
To the previous post, which I cannot edit anymore, I could add that I was pretty disappointed when I saw the real full-gaming-load power draw (94 W), because after seeing ~50 W with my typical GPU settings, I did not expect such a "high" number.

On the other hand, in that 94 W you have 8 P-cores 100% loaded and 16 E-cores loaded to what seems to be roughly 55% on average... The total computing output should be approximately equivalent to a 13-P-core CPU. And you have an AMD 8-core with enlarged cache beating it in gaming. I already complained about Intel's lazy approach and that they could have easily built a 10-P-core CPU with enlarged cache just from already-developed elements/building blocks. What is interesting is that there seem to be no rumours about Intel developing a special gaming CPU in the future either. Is that not weird?
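
As a rough back-of-the-envelope check of that estimate (assuming an E-core is worth roughly 0.55 of a P-core at these clocks, which is just my guess):

Code:
# 8 fully loaded P-cores, plus 16 E-cores at ~55% load, each assumed ~0.55 of a P-core
8 + 16 * 0.55 * 0.55    # ~12.8 "P-core equivalents"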
 
To the previous post, which I cannot edit anymore, I could add that I was pretty disappointed when I saw the real full-gaming-load power draw (94 W), because after seeing ~50 W with my typical GPU settings, I did not expect such a "high" number.

On the other hand, in that 94 W you have 8 P-cores 100% loaded and 16 E-cores loaded to what seems to be roughly 55% on average... The total computing output should be approximately equivalent to a 13-P-core CPU. And you have an AMD 8-core with enlarged cache beating it in gaming. I already complained about Intel's lazy approach and that they could have easily built a 10-P-core CPU with enlarged cache just from already-developed elements/building blocks. What is interesting is that there seem to be no rumours about Intel developing a special gaming CPU in the future either. Is that not weird?
That's not weird at all. Rarely does throwing gobs of cache at a problem ever fix the problem.
 
That's not weird at all. Rarely does throwing gobs of cache at a problem ever fix the problem.
Well, AMD did exactly that and it worked for them; I'm not sure why enlarged cache on an Intel CPU would be less beneficial... but I'm afraid we are off topic here.

Back to topic: in the above case, while being GPU limited, perhaps I could lower the CPU frequency even more without impacting performance?

When I see that approx. 65% P-core and 25% E-core utilisation, I wonder why the game doesn't use the P-cores more?
 
Well, AMD did exactly that and it worked for them; I'm not sure why enlarged cache on an Intel CPU would be less beneficial... but I'm afraid we are off topic here.

Back to topic: in the above case, while being GPU limited, perhaps I could lower the CPU frequency even more without impacting performance?

When I see that approx. 65% P-core and 25% E-core utilisation, I wonder why the game doesn't use the P-cores more?
Regarding the cache, you really need to design the core/L3 area to be able to run the relevant vias up to 3D V-Cache-style solutions. It's not something you could just "bolt on", per se. It may be something coming in the next generation.

I suspect that the load at 1440p ultra settings means it's GPU limited, so there are times when the CPU is waiting for the GPU-busy time to clear. If you look at both runs, there is still a fair load on the E-cores, but with some cycle time now free on the P-cores, the E-cores are in turn less loaded as well. You could possibly lower the CPU speed, but how much less power draw are you actually going to achieve?

People have to remember that technically Intel is still on a 7 nm node vs. AMD's access to 5 nm via TSMC, so they are always on the back foot there.
 
Regarding the cache, you really need to design the core/L3 area to be able to run the relevant vias up to 3D V-Cache-style solutions. It's not something you could just "bolt on", per se. It may be something coming in the next generation.

I suspect that the load at 1440p ultra settings means it's GPU limited, so there are times when the CPU is waiting for the GPU-busy time to clear. If you look at both runs, there is still a fair load on the E-cores, but with some cycle time now free on the P-cores, the E-cores are in turn less loaded as well. You could possibly lower the CPU speed, but how much less power draw are you actually going to achieve?

People have to remember that technically Intel is still on a 7 nm node vs. AMD's access to 5 nm via TSMC, so they are always on the back foot there.
Isn't Intel still on 10 nm technically, just branded as 'Intel 7'? Though I guess the whole measuring-in-nm thing is getting kind of muddy either way.

Anyway, considering how far behind the node is compared to AMD's, it's quite impressive that Intel is able to get the performance they do; I guess that gap is bridged through extra power draw. Well, while at load anyway. They're still really good at power draw during idle/low load.
 
Last edited:
Regarding the cache, you really need to design the core/L3 area to be able to run the relevant vias up to 3D V-Cache-style solutions. It's not something you could just "bolt on", per se.
AMD designed a compute chiplet to be as universal as possible, with interconnections for an optional cache on top of it.

I wrote (or at least meant) that Intel could have designed a CPU with more cache already built in, on the same piece of silicon, built from exactly the same stuff as their other CPUs, just resized/rearranged.
Isn't Intel still on 10 nm technically, just branded as 'Intel 7'?
...
Anyway, considering how far behind the node is compared to AMD's, it's quite impressive that Intel is able to get the performance they do; I guess that gap is bridged through extra power draw. ...
The old process Intel makes CPUs with is actually still quite usable to this day, but IMO it's only really well suited for frequencies up to 5 GHz; there it runs pretty efficiently, the CPUs are very easy to cool, and the power draw stays at sane levels even for pretty powerful CPUs.

Intel is raping their own silicon with voltage and heat to push it further than it should go.
 
Last edited: