
AMD Ryzen 9 7950X3D

Ok I see it now. With yCruncher, all the CPUs are blowing past their power targets. Is this yCruncher-specific behavior? Because I don't see that happening with AMD running Cinebench, for example. It's kind of concerning, because when you set PPT you would expect the CPU to enforce that limit.
I don't think it's yCruncher. There is probably a measuring error which explains Intel's slight discrepancy between PL2 and power draw, but the big difference on the AMD CPU is only explained by him setting a TDP instead of a PPT limit. 50% is not a measuring error.

And the graphs show AMD's still more efficient at almost every power/TDP/PPT level, vastly so.
No, not true. Unless 10 to 15% (that's the actual difference) is vast to you, then sure, whateva. I wouldn't call 10% vast, but you do you.
 
I don't think it's yCruncher. There is probably a measuring error which explains Intel's slight discrepancy between PL2 and power draw, but the big difference on the AMD CPU is only explained by him setting a TDP instead of a PPT limit. 50% is not a measuring error.
Rescanning the article, I don't see the method they used to measure the power draw, but in any case, now you are losing me a bit when you start talking about TDP again. I don't have any TDP options in my UEFI/BIOS, so I don't know of a way to configure the power limit by TDP (other than maybe ECO mode); I just know of setting PPT directly. I think if TDP options were available, what you say makes sense: possibly they are using the wrong option to configure the power limit. Too bad UEFI setup screenshots were not part of the article.
 
Rescanning the article, I don't see the method they used to measure the power draw, but in any case, now you are losing me a bit when you start talking about TDP again. I don't have any TDP options in my UEFI/BIOS, so I don't know of a way to configure the power limit by TDP (other than maybe ECO mode); I just know of setting PPT directly. I think if TDP options were available, what you say makes sense: possibly they are using the wrong option to configure the power limit. Too bad UEFI setup screenshots were not part of the article.
Yes, ECO mode sets TDP limits, right?
 
Yes, ECO mode sets TDP limits, right?
I think for me it was a simple on-or-off type config in my UEFI, but I don't have time to check now. Maybe someone can confirm if the GIGABYTE X670E Aorus Master has a TDP-specific configuration option and what Ryzen Master/HWiNFO64 reports for PPT values.
 
TPU's data seems to back up what I saw in a YouTube video; the biggest news to me about this chip is that it's way more power efficient than the non-3D version. The non-3D version, I'm guessing, is losing performance on IO waits, which are reduced with the bigger cache. The lower clock speeds probably put the 3D version at a better place on the power curve. 30%+ power savings is nothing to be sniffed at.
 
No it isn't, it absolutely isn't. You can tell by their graph on the last page. You, my friend, are absolutely wrong, sorry.

I apologize. I wasn't sure which graph you meant, so I combed through the whole page, and I now see that they were indeed - for reasons I can't really fathom - not making direct comparisons. I (correctly) understood that they were referencing PPT, not TDP, and my understanding of that was it meant a hard limit. Evidently not. Text quoted here for everyone else's benefit.

13900K:
Overall, at the 65 W mark we saw a peak load power of 71.4 W, and a peak core temperature of 39°C.

7950X:
Looking at the rest of the metrics at 65 W, we saw a peak load power of 90.3 W, and a peak core temperature of 52°C.

As I pointed out earlier, even factoring that in, the AMD processor is still showing higher efficiency - just not to the extent implied by the numbers on the chart. I apologize again for the error, and I'll edit my previous post accordingly. I still can't fathom why Anandtech would have uncovered that information and not bothered reflecting it in the charts, but then I guess there's more than one reason I don't really go there anymore.
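For what it's worth, a plausible explanation for those quoted numbers is AMD's usual TDP-to-PPT relationship: on AM5 the default package power limit (PPT) is commonly cited as roughly 1.35x the nominal TDP, so a "65 W" figure that is actually a TDP would allow close to 88 W of package power. A minimal sketch of that arithmetic (the 1.35 factor is the usual rule of thumb, not something stated in the review):

```python
# Rule-of-thumb AM5 relationship (assumption, not from the review): PPT ~= 1.35 x TDP
def ppt_from_tdp(tdp_watts: float, factor: float = 1.35) -> float:
    """Estimate the package power limit implied by a nominal TDP setting."""
    return tdp_watts * factor

print(ppt_from_tdp(65))   # ~87.8 W, in line with the 90.3 W peak quoted for the 7950X
print(ppt_from_tdp(105))  # ~141.8 W, in line with the ~145 W figure mentioned later in the thread
```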
 
Well, I have to give kudos to W1zzard because he's one of only two reviewers to have had the presence of mind to test this APU with one CCX disabled to simulate the R7-7800X3D (the other being Steve Walton).

This APU is just ridiculous. I mean, I realise that AMD made it so that they could knock the i9-13900K off of the gaming throne, and they succeeded in that, but otherwise, these R9 X3D APUs are UTTERLY POINTLESS!

Nobody who wants productivity is going to pay more money for less productivity. Nobody who wants gaming performance is going to pay WAY MORE money to buy a 12 or 16-core APU just so they can have 6 or 8 unused cores sitting there eating power for no reason while hoping that Windows doesn't screw up and use the wrong CCX.

This APU should never have been made because the R7-7800X3D is going to have even better gaming performance than it does as we have seen. Since the 3D cache mitigates the advantages of higher clocks and RAM speeds in gaming, this WILL come to pass, even if the R7-7800X3D has a lower clock speed than the R9-7950X3D.

The R7-7800X3D will be $250 less than this Frankenstein processor and is guaranteed to be a huge success because it makes sense to put the 3D cache on an APU that gamers will actually be interested in buying. OTOH, putting their 3D cache, a technology that was demonstrated on the R7-5800X3D to be beneficial for gaming but detrimental for productivity, on 12 and 16-core productivity APUs is insane. Some fools tried to defend it by asking the question "What if someone wants to do BOTH?". This question is ridiculous because prosumers ALWAYS prioritise productivity over gaming, and the R9-7950X already games as well as the i9-12900K, so it's not exactly lacking in the gaming department to begin with.

This is the stupidest decision that I've ever seen AMD make (and I've been using AMD since the Phenom II X4 940). To release two R9 X3D APUs that no gamer will buy while NOT releasing an R5 X3D APU that would guarantee AMD's undisputed dominance of PC gaming over Intel is so stupid that I cannot describe just how stupid because there are no accurate words that can be used in polite company.

This decision will haunt AMD for years to come as they have successfully blown hundreds of millions of dollars in both production and lost sales by making these R9 X3D abominations. The only one who will truly benefit from the existence of this product is Intel.
You’re right, but AMD didn’t have a CPU to compete in all workloads at the same time. Now they do.
A 7600X3D may see the light of day a year from now, when the 7000-series sales rate slows down.
 
I apologize. I wasn't sure which graph you meant, so I combed through the whole page, and I now see that they were indeed - for reasons I can't really fathom - not making direct comparisons. I (correctly) understood that they were referencing PPT, not TDP, and my understanding of that was it meant a hard limit. Evidently not. Text quoted here for everyone else's benefit.

As I pointed out earlier, even factoring that in, the AMD processor is still showing higher efficiency - just not to the extent implied by the numbers on the chart. I apologize again for the error, and I'll edit my previous post accordingly. I still can't fathom why Anandtech would have uncovered that information and not bothered reflecting it in the charts, but then I guess there's more than one reason I don't really go there anymore.
At least you understood, unlike other people who see agendas behind people caring about FACTS. Thank you
 
For Intel, PL2 = power consumption under Tau. The CPU cannot draw more power than the PL2 setting you put in the BIOS. Period.

But that's all irrelevant. The point is you can't use that graph to compare efficiency, since the CPUs are drawing vastly different amounts of watts than what the graph shows.
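As a rough illustration of how PL1/PL2/Tau are commonly described (the values below are hypothetical placeholders, not the settings used in the review): the package may draw up to PL2 for short bursts, and once the running average over the Tau window reaches PL1, the sustained limit takes over.

```python
# Hedged sketch of PL1/PL2/Tau as commonly described; wattages and window are placeholders.
PL1, PL2, TAU_SECONDS = 125.0, 253.0, 56.0

def package_power_cap(avg_power_over_tau: float) -> float:
    """Instantaneous draw is capped at PL2; once the average over the Tau
    window reaches PL1, the sustained (PL1) limit applies instead."""
    return PL2 if avg_power_over_tau < PL1 else PL1
```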
So compare AMD's 105W mode to Intel's 125W. What's the issue here?

TPU's data seems to back up what I saw in a YouTube video; the biggest news to me about this chip is that it's way more power efficient than the non-3D version. The non-3D version, I'm guessing, is losing performance on IO waits, which are reduced with the bigger cache. The lower clock speeds probably put the 3D version at a better place on the power curve. 30%+ power savings is nothing to be sniffed at.
Why would it be news to you? After all, AMD listed it as a 120W CPU months ago (vs 170W for the 7900X). It performs the same as the 7900X with 140-150W PPT:

[Chart: R9 7950X Cinebench R23 results at various PPT settings]
 
The issue here is you are comparing Intel's last gen with AMD's current gen, lol
Nice post manipulation there. Cropping out the part where I was showing 7950X performance at different PPT values vs. 7950X3D just to claim that I'm pretending it's 13900K on that chart (when I did not even mention Intel in that reply). Maybe don't be a blind fanboy next time?
 
No it isn't, it absolutely isn't. You can tell by their graph on the last page. You, my friend, are absolutely wrong, sorry.


Nope, truth and facts are the hills I want to die on.

No reason to buy Intel anymore after the 7950X3D, great CPU.

Happy? Now can we stop lying about that Anandtech graph?
Please just stop. It's over. AMD has the better tech right now. And it's only gonna get worse for Intel rather than better, as now the AMD mobile range is better than Intel's. See link below.

 
Nice post manipulation there. Cropping out the part where I was showing 7950X performance at different PPT values vs. 7950X3D just to claim that I'm pretending it's 13900K on that chart (when I did not even mention Intel in that reply). Maybe don't be a blind fanboy next time?
What? What cropping? What are you talking about? LOL

Please just stop. It's over. AMD has the better tech right now. And it's only gonna get worse for Intel rather than better, as now the AMD mobile range is better than Intel's. See link below.

And that doesn't change the fact that you were wrong about Anandtech's review. Admit it like a big boy and move on.
 
Pretending to be dumb on top of that, eh. Even better, keep going.
I literally have no idea what you are talking about. I quoted your post; the only graph you posted is the one I have quoted, which compared the 7950X to the 12900K. Maybe go check your post again, 'cause you are probably mistaken?
 
Hearing reports that the Game Bar shuts off/parks the other CCD during gameplay? Does that mean those cores are parked and the background processes are running on the V-cache CCD as well?

Seems a little sketchy.
 
I literally have no idea what you are talking about. I quoted your post; the only graph you posted is the one I have quoted, which compared the 7950X to the 12900K. Maybe go check your post again, 'cause you are probably mistaken?
You deleted the part where I posted the chart in response to chrcoluk concerning 7950X vs 7950X3D efficiency - nothing to do with Intel - making it look like I was comparing 7950X to 13900K, but pretending that 12900K is the same as 13900K making Intel look bad. That's deliberate manipulation. Stop playing dumb because I'm starting to think you are not pretending.

In response to you I was talking about the previously posted chart showing 105W 7950X using 145W, while 13900K is using 143W in 125W mode.

Not sure but at one point I didn't realize this either so just trying to be helpful.
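To make the comparison being argued here concrete: when the limits are configured differently (TDP vs PPT vs PL1), perf-per-watt has to be computed from the measured package power, not from the nominal setting. A hedged sketch with placeholder benchmark scores (only the ~145 W and ~143 W draws come from the posts above):

```python
# Placeholder scores; only the measured draws (145 W and 143 W) come from the thread.
def perf_per_watt(score: float, measured_watts: float) -> float:
    return score / measured_watts

print(perf_per_watt(30000, 145.0))  # "105 W" 7950X, compared at its actual draw
print(perf_per_watt(28000, 143.0))  # 13900K in its 125 W mode, at its actual draw
```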
The expanded quote does not change anything.


[Screenshot of the expanded quote]
 
I found this impressive: the extra cache managed to increase the iGPU's performance by up to 4x? So why the hell do they not use the cache in laptop APUs and create the best and most efficient iGPU for ultra-thins of all time?

Could you do a quick test @W1zzard ?


I'm gonna guess: Lack of laptop market share to guarantee the increased R&D cost will be offset by higher sales.

Because people wanting to game on a laptop will look for a gaming laptop (dGPU) and people wanting a do it all laptop (light gaming included) are looking for a lower priced item and likely aren't going to spend the effort cross-shopping iGPUs.

AMD would need a pretty good targeted advertising campaign to break into that space.
 
I'm gonna guess: Lack of laptop market share to guarantee the increased R&D cost will be offset by higher sales.

Because people wanting to game on a laptop will look for a gaming laptop (dGPU) and people wanting a do it all laptop (light gaming included) are looking for a lower priced item and likely aren't going to spend the effort cross-shopping iGPUs.

AMD would need a pretty good targeted advertising campaign to break into that space.
I don't think this is the reason. AMD already invests hundreds of millions to continually evolve its iGPU, which is now RDNA3. Since 3D cache is a technology that AMD already uses, it would not require so much investment. The cache itself should cost about $10.

Furthermore, the gains in energy efficiency would be marketing enough on their own; imagine a device like the Steam Deck achieving double or triple the performance while maintaining the same TDP!!
 
I don't think this is the reason. AMD already invests hundreds of millions to continually evolve its iGPU, which is now RDNA3. Since 3D cache is a technology that AMD already uses, it would not require so much investment. The cache itself should cost about $10.

Furthermore, the gains in energy efficiency would be marketing enough on their own; imagine a device like the Steam Deck achieving double or triple the performance while maintaining the same TDP!!

Don't get me wrong, I would love this. I came to PC gaming from Intel Iris Plus iGPUs in NUCs (64 and 128MB L4 cache, not too dissimilar in intent to these designs) and always wanted more in that USFF.

But AMD APUs are a different monolithic die and are pretty small, so I think adding this cache might be a larger design hurdle than the one used successfully in the CCX designs of the desktop CPUs. Maybe it won't be and would be easier, but right now the 3D cache is a separate piece of silicon, even on a different process node, which is not a feature of AMD APU designs.

However they are fools if they don't at least try this design as there are a lot of laptops, USFFs, and the Steam Deck which would see big benefits from this.
 
Yeah, but can it play Crysis *er* Hogwarts though? Seriously, I’m shocked at how well the 13900K and 13700K did in comparison in these tests! There is a $120 price difference between the two in favor of the 13900K! Is the 7950X3D $120 better? Not by the looks of these charts!
 
I only hope Intel will keep on releasing P-cores with E-cores. Maybe at some point they'll decide there is no need for P-cores anymore. It is way easier for me to imagine that scenario than the opposite: no E-cores, just P-cores.
Why?
The ecores are worse in every single way, including power efficiency.
[Attached efficiency comparison chart]


I can't even understand any situation in which you'd want a company to only release inefficient, slow hardware, other than wanting them to go bankrupt. The only reason E-cores were added was to compete in multi-threaded benchmarks vs Ryzen; they're far better off with more P-cores or an extremely different approach to how they use these E-cores.
 
Why?
The ecores are worse in every single way, including power efficiency.
[Attached efficiency comparison chart]

I can't even understand any situation in which you'd want a company to only release inefficient, slow hardware, other than wanting them to go bankrupt. The only reason E-cores were added was to compete in multi-threaded benchmarks vs Ryzen; they're far better off with more P-cores or an extremely different approach to how they use these E-cores.
I don't want anything. What I said was that people would want an 8 P-core-only processor, and the way I see it, Intel will not release that processor. What I also said was that, of the two scenarios, only E-cores is more probable than only P-cores. I never said only E-cores, or that it would have been a great idea, or that Intel should do it.
The question now is: if these E-cores are so bad, why put them in the CPU? I know the answer to that, and it is definitely not efficiency nor performance. I never liked the idea of crippled cores in a CPU for which you pay hard bucks.
 