
AMD Ryzen 9 7950X3D

I don't want anything. What I said was that people would want an 8-P-core-only processor, and the way I see it, Intel will not release that processor. What I also said was that, of the two scenarios, E-cores-only is more probable than P-cores-only. I never said only E-cores, nor that it would have been a great idea or that Intel should do it.
The question now is: if these E-cores are so bad, why put them in the CPU? I know the answer to that, and it is definitely neither efficiency nor performance. I never liked the idea of crippled cores in a CPU for which you pay your hard-earned bucks.
A CPU without E-cores is worse in every possible way than one with E-cores. No reason for Intel to remove them.

Why?
The ecores are worse in every single way, including power efficiency.
View attachment 286035

I can't even understand any situation in which you'd want a company to release only inefficient, slow hardware, other than wanting them to go bankrupt. The only reason E-cores were added was to compete with Ryzen in multi-threaded benchmarks; they're far better off with more P-cores or a very different approach to how they use these E-cores.
No, the E-cores are not worse in every single way. They are better in performance per die area. They are even better in efficiency when comparing 4 E-cores to 1 P-core, since they occupy the same die space.
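A back-of-the-envelope sketch of that die-area claim. The 4-E-cores-per-P-core-area ratio comes from the post above; the per-core throughput ratio is a hypothetical assumption for illustration, not a measured figure:

```python
# Hypothetical back-of-the-envelope numbers, not measured data.
# The claim above: ~4 E-cores fit in the die area of 1 P-core.
P_CORE_AREA = 1.0          # normalized die area of one P-core
E_CLUSTER_AREA = 1.0       # 4 E-cores occupy roughly the same area (claim)
E_PER_P_THROUGHPUT = 0.55  # assumed MT throughput of 1 E-core vs 1 P-core

p_throughput_per_area = 1.0 / P_CORE_AREA
e_throughput_per_area = 4 * E_PER_P_THROUGHPUT / E_CLUSTER_AREA

print(f"P-core:    {p_throughput_per_area:.2f} perf/area")
print(f"4 E-cores: {e_throughput_per_area:.2f} perf/area")
# Under these assumptions the E-core cluster delivers ~2.2x the
# multi-threaded throughput per unit of die area.
```

Even if the real per-core ratio differs, the cluster only needs each E-core to exceed 25% of a P-core's throughput for the area trade to favor E-cores in MT loads.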
 
A CPU without E-cores is worse in every possible way than one with E-cores. No reason for Intel to remove them.
I'd vote for more P-cores and no E-cores. That would have been better in every possible way. The downside for some people would have been fewer cores in general, for sure.

A CPU without E-cores is worse in every possible way than one with E-cores. No reason for Intel to remove them.


No, the E-cores are not worse in every single way. They are better in performance per die area. They are even better in efficiency when comparing 4 E-cores to 1 P-core, since they occupy the same die space.
Then Intel should only use those since these are so much better.
 
I'd vote for more P-cores and no E-cores. That would have been better in every possible way. The downside for some people would have been fewer cores in general, for sure.


Then Intel should only use those since these are so much better.

Haven't we been on this merry-go-round before?

E-cores make sense in multi-threaded loads.

Ideally, a processor would be made with just P-cores and enough thermal and power headroom that they could all run at full frequency.

But they haven't built processors like that for a long time now. Because they can't - there is too much difference between workloads that need a few cores running as fast as they can (games, non-multi-threaded applications) and truly multi-threaded workloads.

So at first we got CPUs that could boost any core to the boost clock - but as additional cores were loaded, that clock was lowered so the CPU stayed inside its power and thermal limits.

Then we got CPUs with preferred cores that can boost highest, and other cores that can never reach the top frequency but are otherwise good enough when loaded as additional cores in a multi-threaded workload. Intel had that before, Ryzen has that inside a single CCD, and with two CCDs one is faster - capable of reaching the full boost clock under lightly threaded load - while the other is not.

Intel has now pushed this a step forward with its big.LITTLE-style design, with P-cores and E-cores.

Most of the downsides now lie squarely with the Windows scheduler and with software not written with faster and slower cores in mind - but that also hurts performance on previous Intel CPUs and all Ryzens: if the scheduler can't properly distribute loads between P- and E-cores, it also can't pick the fastest cores in CPUs where not all cores are equal (which is all CPUs for quite a few years now).

But that is the future, and AMD will also jump on the bandwagon. Because it makes sense.
 
Then Intel should only use those since these are so much better.
Nope, you still need the P-cores for the applications that don't scale infinitely. What E-cores allow you to do is have BIG, beefy cores for ST and lightly threaded workloads without sacrificing multithreaded performance. This is something that cannot be done with traditional one-size-fits-all cores.
 
Haven't we been on this merry-go-round before?

E-cores make sense in multi-threaded loads.

Ideally, a processor would be made with just P-cores and enough thermal and power headroom that they could all run at full frequency.

But they haven't built processors like that for a long time now. Because they can't - there is too much difference between workloads that need a few cores running as fast as they can (games, non-multi-threaded applications) and truly multi-threaded workloads.

So at first we got CPUs that could boost any core to the boost clock - but as additional cores were loaded, that clock was lowered so the CPU stayed inside its power and thermal limits.

Then we got CPUs with preferred cores that can boost highest, and other cores that can never reach the top frequency but are otherwise good enough when loaded as additional cores in a multi-threaded workload. Intel had that before, Ryzen has that inside a single CCD, and with two CCDs one is faster - capable of reaching the full boost clock under lightly threaded load - while the other is not.

Intel has now pushed this a step forward with its big.LITTLE-style design, with P-cores and E-cores.

Most of the downsides now lie squarely with the Windows scheduler and with software not written with faster and slower cores in mind - but that also hurts performance on previous Intel CPUs and all Ryzens: if the scheduler can't properly distribute loads between P- and E-cores, it also can't pick the fastest cores in CPUs where not all cores are equal (which is all CPUs for quite a few years now).

But that is the future, and AMD will also jump on the bandwagon. Because it makes sense.
I know we've been through this. I'm just replying to people asking something or commenting on my posts. I really don't care about P-cores or E-cores, to be fair.
 
This is something that cannot be done with traditional one-size-fits-all cores

*Looks at AMD Ryzen lineup*

What you actually mean is that Intel has reached the "limits" of this architecture without designing a whole new one.

Does Intel hold the ST lead? Yes, barely.
Can Intel challenge for the MT lead? Yes, with caveats (P-core/E-core design and power consumption).


What I wonder about is what would happen with a theoretical Threadripper variant with X3D to compete with Intel's new HEDT. Would they limit it to only one or certain CCDs vs all of them?
 
*Looks at AMD Ryzen lineup*

What you actually mean is that Intel has reached the "limits" of this architecture without designing a whole new one.

Does Intel hold the ST lead? Yes, barely.
Can Intel challenge for the MT lead? Yes, with caveats (P-core/E-core design and power consumption).


What I wonder about is what would happen with a theoretical Threadripper variant with X3D to compete with Intel's new HEDT. Would they limit it to only one or certain CCDs vs all of them?
Ryzen's entire lineup clearly shows the problem, actually. A new core on a lower node, and it still can't take the ST lead.

Think about the 12900K. To be that much faster in ST than the 5950X, it needed those huge, wide P-cores. But even with 10 of them, it would get absolutely stomped in MT performance. That's where the E-cores kick in: they let it compete in MT performance while sacrificing only 2 of those P-cores.
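That trade-off can be sketched numerically. The 0.55 per-core throughput ratio below is a hypothetical assumption for illustration, not a measured figure:

```python
# Rough sketch of the 12900K core-budget trade-off described above.
E_PER_P = 0.55  # assumed MT throughput of one E-core vs one P-core (hypothetical)

hybrid = 8 + 8 * E_PER_P   # 12900K layout: 8 P-cores + 8 E-cores
all_p = 10                 # hypothetical all-P-core die of similar area

print(f"8P+8E ~ {hybrid:.1f} P-core equivalents vs {all_p} for a 10P design")
# Under this assumption the hybrid layout wins MT throughput while
# keeping 8 full P-cores for ST and lightly threaded work.
```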
 
Ryzen's entire lineup clearly shows the problem, actually. A new core on a lower node, and it still can't take the ST lead.

That is true, but it is within 10% most of the time in single-threaded loads, while also having equal performance in multi-threaded loads and consuming anywhere from equal to 60% less power depending on the application.

I am just hoping Intel is looking at a new architecture in the next 2 years or so, otherwise we may end up seeing Intel go down AMD's Bulldozer path, where they struggled long term to release "competitive" designs.
 
You're right, but AMD didn't have a CPU that could compete in all workloads at the same time. Now they do.
A 7600X3D may see the light of day a year from now, when 7000-series sales have slowed.
No, they don't. The 13900K (why is the 13900KS never tested?) still outperforms it in many applications and games! All they did was add more cache than Intel. Intel has the better design. If all things were equal, same cache, the Intel chip would slaughter the AMD.
 
That is true, but it is within 10% most of the time in single-threaded loads, while also having equal performance in multi-threaded loads and consuming anywhere from equal to 60% less power depending on the application.

I am just hoping Intel is looking at a new architecture in the next 2 years or so, otherwise we may end up seeing Intel go down AMD's Bulldozer path, where they struggled long term to release "competitive" designs.
That's just a myth. "Consuming 60% more" is just la-la-land comparisons. Nobody is running a 13900K for hour-long renders at 500 W. That's just BS. If you set a sane power limit, the difference between a 7950X and a 13900K is like 10 to 15%. Yes, AMD wins in heavy MT efficiency, but not by a lot. On the other hand it actually loses, sometimes by a lot, in every task that isn't heavily multithreaded.

The 13900K's biggest problem is gaming efficiency, which is horrible compared to the 3D. But that's it. In applications and productivity it's actually fine. Great, I might add. Granted, you don't run it at 500 W, but why would you?
 
why is the 13900KS never tested?
Intel didn't sample it; just yesterday they changed their mind and offered to send one over, probably after seeing that not all hope is lost ;)
13400F review very soon from me, then the 7900 non-X, then I'll look at which other RPL non-K SKUs are interesting, and ofc the 13900KS as soon as it's here.
 
No, they don't. The 13900K (why is the 13900KS never tested?) still outperforms it in many applications and games! All they did was add more cache than Intel. Intel has the better design. If all things were equal, same cache, the Intel chip would slaughter the AMD.
This is a nonsensical thing to say.

Summary of your statement: if the Intel chip were more like the AMD chip, but still like the Intel chip, it'd be better.

Might as well say that if the AMD chip were more like the Intel chip, but still like the AMD chip, it'd be better. They both work by the same logic.
 
That's just a myth. "Consuming 60% more" is just la-la-land comparisons. Nobody is running a 13900K for hour-long renders at 500 W. That's just BS. If you set a sane power limit, the difference between a 7950X and a 13900K is like 10 to 15%. Yes, AMD wins in heavy MT efficiency, but not by a lot. On the other hand it actually loses, sometimes by a lot, in every task that isn't heavily multithreaded.

The 13900K's biggest problem is gaming efficiency, which is horrible compared to the 3D. But that's it. In applications and productivity it's actually fine. Great, I might add. Granted, you don't run it at 500 W, but why would you?
Is it really though?

AV1/H264/H265
Unreal Engine
PS3 Emulation
After Effects
and so on

Lots of places where "long renders" may actually be done regularly by average people.

Sure, you and most people here may tweak power levels/limits, but when you're buying prebuilts etc., how many SIs are going to set "sane" power limits? Following Gamers Nexus's and LTT's reviews of some of the main SIs out there, it's quite common for them to either leave everything completely stock or to set such insanely low limits that you might as well buy a CPU two tiers lower...
 
Is it really though?

AV1/H264/H265
Unreal Engine
PS3 Emulation
After Effects
and so on

Lots of places where "long renders" may actually be done regularly by average people.

Sure, you and most people here may tweak power levels/limits, but when you're buying prebuilts etc., how many SIs are going to set "sane" power limits? Following Gamers Nexus's and LTT's reviews of some of the main SIs out there, it's quite common for them to either leave everything completely stock or to set such insanely low limits that you might as well buy a CPU two tiers lower...
Most if not all prebuilts have power limits already configured by the company selling you the computer, based on the cooling used. Regardless, if you don't want to tune power limits yourself, you can buy the 13900 non-K or the 13900T. Both of them are much, much, much more efficient than an out-of-the-box 7950X.

I find it completely dumb to buy (or review) the fastest unlocked K processors with unlimited power limits in heavy multithreaded workloads and then complain that they are not efficient. Of course they are not; they are specifically configured (by the mobo manufacturer most of the time) to NOT be efficient.

There is no doubt that the 3D version of the 7950X is the only sanely configured out-of-the-box CPU of all the new ones. But that's not saying much to people who know how and are willing to set their own power limits, or to those who buy locked versions like the ones I mentioned above.
 
But then you want to have your cake & eat it too, don't you? Why bang on about ST supremacy when unlocked 13xxx are highly inefficient? How about you show us whether the more efficient (locked) RPL chips beat AMD hands down in ST/MT tasks while sipping power :rolleyes:

If you want (more) efficiency you will have to lower your clock speeds, & with that Intel loses its USP! The reason the 7950X3D is one of the most efficient chips out there does boil down to clock speeds, but also to AMD's slightly more sensible approach of not chasing/advertising that useless 6 GHz number which Intel "proudly" claimed.
 
But then you want to have your cake & eat it too, don't you? Why bang on about ST supremacy when unlocked 13xxx are highly inefficient? How about you show us whether the more efficient (locked) RPL chips beat AMD hands down in ST/MT tasks while sipping power :rolleyes:

If you want (more) efficiency you will have to lower your clock speeds, & with that Intel loses its USP! The reason the 7950X3D is one of the most efficient chips out there does boil down to clock speeds, but also to AMD's slightly more sensible approach of not chasing/advertising that useless 6 GHz number which Intel "proudly" claimed.
A 13900K locked to 35 W has the same ST performance as a 13900K with no power limits. Stop drinking the Kool-Aid.
 
TDP ≠ power consumption, remember? Show us the numbers & not just for one random benchmark!
 
TDP ≠ power consumption, remember? Show us the numbers & not just for one random benchmark!
You need benchmarks to figure out that a 13900 at 35 or 65 W is more efficient than a 7950X? Yikes, man, the AMD fanboyism is through the roof with you.
 
If you care about power you aren’t an enthusiast.
Are we gatekeeping the term "enthusiast" behind maximum performance potential and nothing else now? It's like a car enthusiast suggesting that "real car enthusiasts" don't care about fuel economy or physical vehicle size. Why would you buy a gigantic V8 truck guzzling 15L/100km over a compact performance hatch sipping 5L/100km for your daily commute from the suburbs to your downtown office tower?

Some of us enthusiasts prefer to make our part selections based on specific design goals for the build, not just blindly throwing in whatever the absolute fastest parts available happen to be to gain a bit more epeen in the benchmark charts. Maybe I'm looking to build a compact high end gaming HTPC, and that's where these massive power consumption differentials really become important. There's a whole category of slim HTPC/desktop cases like Silverstone's RVZ03, Fractal's Ridge, etc that provide sufficient GPU chamber space for even quite large high end cards, but are significantly limited in CPU cooler clearance. Plenty of low profile downdraft CPU heatsinks would handle that 44W simulated 7800X3D gaming load average absolutely fine, while a 13900K would overwhelm them almost immediately.
 
No, you need to show how a 13900K @ 35 W TDP will have the same ST scores as a 13900K at default speeds ~ your claim, or how a 13900 (non-K) with a much lower turbo will match a regular 13900K in ST or MT tasks, even if the 13900K is "TDP" limited.
www.ark.intel.com/content/www/us/en/ark/compare.html?productIds=232167,230496,230499,230498
Are you capable of reading reviews? Go read TechPowerUp's review of the 13900K and check how much wattage it consumes in ST tasks. Or nvm, here you go

https://tpucdn.com/review/intel-core-i9-13900k/images/power-singlethread.png

So it boosts just fine even with a 35 W power limit
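The underlying point is simple to model: a package power limit only throttles when the load's unconstrained draw exceeds the cap. A minimal sketch with hypothetical draw figures:

```python
# Minimal model: a package power limit only throttles when the load's
# unconstrained draw exceeds the cap. Draw numbers are hypothetical.
def effective_power(unconstrained_draw_w: float, limit_w: float) -> float:
    """Package power actually consumed under a hard power limit."""
    return min(unconstrained_draw_w, limit_w)

ST_DRAW = 30.0   # assumed single-thread package draw, below a 35 W cap
MT_DRAW = 280.0  # assumed all-core draw, far above the cap

print(effective_power(ST_DRAW, 35.0))  # ST: the cap never binds, full boost
print(effective_power(MT_DRAW, 35.0))  # MT: clamped hard to the cap
```

This is why an aggressive cap can leave ST performance untouched while gutting MT throughput: only the all-core case runs into the limit.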
 
Are you capable of reading reviews? Go read TechPowerUp's review of the 13900K and check how much wattage it consumes in ST tasks. Or nvm, here you go

https://tpucdn.com/review/intel-core-i9-13900k/images/power-singlethread.png

So it boosts just fine even with a 35 W power limit
Single-thread performance alone is useless for most pro applications and current-gen games.

[chart: power-games.png]

[chart: power-applications.png]

No, they don't. The 13900K (why is the 13900KS never tested?) still outperforms it in many applications and games! All they did was add more cache than Intel. Intel has the better design. If all things were equal, same cache, the Intel chip would slaughter the AMD.
Zen 4 has AVX-512. Cinebench R23 doesn't use AVX-512.

Blender (with GPU acceleration) is free, while Cinema 4D R25 costs USD 3,495.00 for a single-user perpetual license, or USD 719.00 billed annually plus USD 264.00 billed annually for GPU acceleration.
 
Single-thread performance alone is useless for most pro applications and current-gen games.

[chart: power-games.png]

[chart: power-applications.png]

Zen 4 has AVX-512. Cinebench R23 doesn't use AVX-512.

Blender (with GPU acceleration) is free, while Cinema 4D R25 costs USD 3,495.00 for a single-user perpetual license, or USD 719.00 billed annually plus USD 264.00 billed annually for GPU acceleration.
Why are you telling me? R0hit thinks it's important
 
Why are you telling me? R0hit thinks it's important
You: "Why bang on about ST supremacy when unlocked 13xxx are highly inefficient?"

I'd vote for more P-cores and no E-cores. That would have been better in every possible way. The downside for some people would have been fewer cores in general, for sure.


Then Intel should only use those since these are so much better.
E-Cores are useful for AVX/AVX2 128-bit use cases and benchmarks.

Current E-Cores have three 128-bit AVX2 ports in hardware, with 256-bit AVX2 software compatibility; the unofficial "AVX-128" refers to AVX's 128-bit subset.

You need benchmarks to figure out that a 13900 at 35 or 65 W is more efficient than a 7950X? Yikes, man, the AMD fanboyism is through the roof with you.
For market segments that prioritize power efficiency, refer to https://www.notebookcheck.net/AMD-s...te-much-lower-power-consumption.698349.0.html

Ryzen 9 7945HX beats Intel Core i9-13980HX despite much lower power consumption

Cinebench R23 Multi Score

Unlimited TDP
Core i9-13980HX = 33052
Ryzen 9 7945HX = 34521

100 Watts TDP
Core i9-13980HX = 26507
Ryzen 9 7945HX = 33487

55 Watts TDP
Core i9-13980HX = 19478
Ryzen 9 7945HX = 26191
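Those quoted scores can be folded into a rough points-per-watt figure at the two capped levels. Note this treats the TDP cap as the actual draw, which is an approximation; the unlimited-TDP row is omitted since real draw isn't given:

```python
# Cinebench R23 multi scores quoted above, keyed by (CPU, TDP cap in W).
scores = {
    ("Core i9-13980HX", 100): 26507,
    ("Ryzen 9 7945HX", 100): 33487,
    ("Core i9-13980HX", 55): 19478,
    ("Ryzen 9 7945HX", 55): 26191,
}

for (cpu, tdp), score in scores.items():
    # Approximate efficiency: score divided by the TDP cap.
    print(f"{cpu} @ {tdp} W: {score / tdp:.0f} pts/W")
# The 7945HX leads in points per watt at both caps, and the gap
# widens at 55 W (roughly 476 vs 354 pts/W).
```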

 
E-Cores are useful for AVX/AVX2 with 128-bit use cases and benchmarks.

Current E-Cores have three ports 128-bit AVX2 hardware with 256-bit AVX2 software compatibility. Unofficial "AVX-128" refers to AVX's 128-bit mode subset.
Useful does not mean necessary. I'm sure P-cores can do a better job in AVX applications, but that's beside the point.
 