
AMD's Reviewers Guide for the Ryzen 9 7950X3D Leaks

Yeah, but with asymmetric cores? Half of which are 15% slower? ...idk.

Yeah, it will help in some workstation tasks coded to take advantage of it, but in that setup those are mostly outliers.
Yes, in most cases, it'll be slightly slower than the 7950X. Keep in mind that the 7950X also enjoys a higher TDP.

Seems like the 3D cache was much more beneficial with DDR4. DDR5 almost doubles the bandwidth with similar latency, so the gains are much smaller.
There are still cases where the additional cache helps tremendously. F1 2021 and Watch Dogs Legion see enormous gains. Keep in mind that CPUs aren't like GPUs; they are latency engines, i.e. designed to reduce the latency of a task. For them, latency trumps bandwidth, and L3 cache's latency advantage is even greater for Zen 4 because of Zen 4's higher clocks.
 
Wow, what a waste of effort. I expect virtually nothing for productivity software. This seems to be far weaker than the 5800X3D uplifts, despite AMD's hype. They also overhyped and lied about 7900XT(X) performance.

I could care less about gaming performance with CPUs like Zen 4 and Raptor Lake; they are more than strong enough. For productivity I still think the 13700K is the sweet spot, but I will wait and see if the RL refresh is more than a tweak to clock speeds.
 
A more interesting comparison would be faster RAM on the Intel side versus what it is on the AMD side. Intel can support faster RAM while AMD can't. Tbh, it seems to be quite a lackluster performance gain.
6000C30 is practically the fastest RAM you can buy... if you buy 7200 kits, the latency is worse; you can't buy 7200C30, for example, so your comment is wrong.

Top-end hardware isn't struggling at 1080p; with the exception of Ashes of the Singularity, the lowest FPS in the guide is just below 200 for the 13900K.
Not true; try Hogwarts Legacy at 1080p with full RT on.

There are lots of games that struggle at 1080p now, but none of them are in that guide.

$247 ain't shit these days. Hell, it costs $25 for a bowl of chicken wings. I still say 7200 CL34 should be Intel's norm.
The very best RAM you can buy is the 5600C28 stuff. After that, the 6000C30 stuff. You don't get any advantage buying "faster" stuff for Intel either.
 
Not true; try Hogwarts Legacy at 1080p with full RT on.

There are lots of games that struggle at 1080p now, but none of them are in that guide.
You're right about games like Hogwarts Legacy being CPU bound at 1080p when ray tracing is selected. I hope it is among the games reviewers test when the review embargo expires in 3 days.
 
Those 3D chiplets are neutered to 5GHz, correct?

The more important numbers are the lows
 
The problem with these mainstream hardware review websites is that they never actually test these larger V-Cache CPUs on game titles that really do benefit from the larger cache.

Games like Star Citizen, Digital Combat Simulator, Escape from Tarkov, Stellaris, Rimworld, Factorio, Satisfactory, Anno 1800, etc. benefit greatly from the larger L3 cache size. Just going from the older 5800X to the 5800X3D, the X3D model gets something like 40-50+% higher FPS, or ticks per second. It's probably that these titles don't have a built-in benchmarking utility that would allow reviewers to streamline benchmarking them. There are still so many other games that benefit from it.


I would expect the same from the 7950X -> 7950X3D on those titles as well.
 
6000C30 is practically the fastest RAM you can buy... if you buy 7200 kits, the latency is worse; you can't buy 7200C30, for example, so your comment is wrong.


The very best RAM you can buy is the 5600C28 stuff. After that, the 6000C30 stuff. You don't get any advantage buying "faster" stuff for Intel either.
Everything you just said is wrong.
 
Wow, what a waste of effort. I expect virtually nothing for productivity software. This seems to be far weaker than the 5800X3D uplifts, despite AMD's hype. They also overhyped and lied about 7900XT(X) performance.

I could care less about gaming performance with CPUs like Zen 4 and Raptor Lake; they are more than strong enough. For productivity I still think the 13700K is the sweet spot, but I will wait and see if the RL refresh is more than a tweak to clock speeds.
I suspect you mean "I couldn't care less", unless you mean gaming is important to you.
 
A more interesting comparison would be faster RAM on the Intel side versus what it is on the AMD side. Intel can support faster RAM while AMD can't. Tbh, it seems to be quite a lackluster performance gain.
Which part of "apples-to-apples comparison" is unclear or confusing to you? The whole point is to isolate the performance of the CPU as far as possible, because they're not reviewing the system, they're reviewing the CPU.

Hope you're gonna disable the fast CCD to see how the 7800X3D will bench without waiting 5 more weeks.
That would be pointless, because then you're reviewing a hypothetical 7800X3D, not an actual one.

The only biased test is the one using similar RAM. AMD's stock RAM is 5200 and the max OC is 6000-6400 tops. Intel's stock is 5600, and overclocked they can easily exceed 8000. Obviously, testing both with 6000 is the very definition of a biased test. Especially considering:

1) A 6000C30 kit costs as much as a 6600C34 kit.
2) A 7200C34 kit is just 50€ more expensive than a 6000C30 kit.
3) The 7950X3D + 6000C30 is in fact more expensive than a 13900K + 7600 kit :)
See my reply to Zunexxx.

I was wondering that too, and whether it's somehow against the rules?
See my reply to Zunexxx.

It would be neat if they could stack DRAM. Imagine 128 GB of RAM stacked on the CPU.
Imagine paying a billion bucks for a CPU.
 
As with vanilla Zen 4, the lower 7800X3D will probably perform better in gaming than the 7950X3D.
No point in this CPU as far as I can see.
 
Some of you have developed a psychosis over the highest RAM frequencies. Most users run their memory at 6800, and to reach higher you need a very expensive motherboard. Speeds of 7400+ are only possible in light programs like CPU-Z or AIDA; in games there will be instability. Sure, there will be some individuals who will be able to run their systems this way, but it is not the rule. And we can also throw into the void the logic that AMD is more expensive.
 
That's it? A 6% gain (on an AMD-controlled test bench) at a ridiculous 1080p resolution with an RTX 4090, while most probably being slower in any other application?

There will be cases other than gaming where a 7950X3D will be faster than its regular counterpart. Any application that has a working set larger than 32 MB will benefit from that large cache. Even in W1zzard's review of the 5800X3D, you can see that there are tasks where it is faster than the 5800X despite the clock speed gap.

[Attachment: benchmark chart from the 5800X3D review]
The gain is minimal, and in this specific case, the software has to deal with asymmetric CCDs. I'm not expecting a 7950X3D to be any better than a 7950X in most applications, besides games.
 
The very best RAM you can buy is the 5600C28 stuff. After that, the 6000C30 stuff. You don't get any advantage buying "faster" stuff for Intel either.

It's not just about the CAS Latency, but the overall latency, which depends on timings as well as clock speeds.

Linus actually did a good video on this topic recently.

5600 CL28, 6000 CL30 and 6400 CL32 basically offer the same latency, but the extra bandwidth can be beneficial in certain situations. It was similar with DDR4 (3000 CL15, 3200 CL16, 3600 CL18).

[Attached charts: RAM latency and RAM first-word latency]
 
CS:GO is doing a bit worse than without the 3D cache; it's probably more reliant on clock speed.
Because 400FPS at 4K is for broke hobos. :rolleyes:
 
Because 400FPS at 4K is for broke hobos. :rolleyes:
That's the whole point. What are we looking for here? Are those CPUs (AMD or Intel, it's the same) really needed FOR GAMING?
The marketing machine is beyond ridiculous. AMD is advertising a 16C/32T CPU as a GAMING SOLUTION, when more than half of its resources will be totally wasted, and for professional applications a cheaper 7950X would be better.
 
CPU tests are always done at lower resolutions; it's a matter of choosing your bottleneck. It's easy for a modern high-end GPU to push countless frames at 1080p, so the limit is the CPU's capacity to push stuff to be rendered.

The opposite happens with GPU tests: the GPU takes longer and can only render a lower number of frames at higher resolutions, while the CPU has enough time to keep the pipeline fed.
Going by that logic, why not test these at 720p or 480p? What he said makes sense; those resolutions are obsolete, same as 1080p.
I doubt anyone spending that amount of money, a 7950X3D with a 4090, will be using 1080p.
These tests are downright useless and have zero REAL value. But then again, if you were shown real-case tests, the 99%-of-people tests and not the 0.0001% weirdo who will run this setup, you wouldn't even care to upgrade, because in reality the difference is minimal. That's also true for new-generation CPUs vs previous ones.
 
Which part of "apples-to-apples comparison" is unclear or confusing to you? The whole point is to isolate the performance of the CPU as far as possible, because they're not reviewing the system, they're reviewing the CPU.
I think you don't understand what apples to apples is, then. Memory speed affects the IMC speed. When you put 6000 memory on Zen 4, you are running the IMC not just overclocked, but at the actual upper limit it can run. When you put 6000 memory on Intel, you are in fact UNDERCLOCKING the IMC compared to stock, and you are way, way below the upper limit of the IMC's speed. You either run both at officially supported speeds, which are 5200 and 5600 respectively, or you run both maxed out, which is 6000-6400 for Zen 4 and 7600-8000+ for Intel.
 
Going by that logic, why not test these at 720p or 480p? What he said makes sense; those resolutions are obsolete, same as 1080p.
I doubt anyone spending that amount of money, a 7950X3D with a 4090, will be using 1080p.
These tests are downright useless and have zero REAL value. But then again, if you were shown real-case tests, the 99%-of-people tests and not the 0.0001% weirdo who will run this setup, you wouldn't even care to upgrade, because in reality the difference is minimal. That's also true for new-generation CPUs vs previous ones.
Going by that logic, why not 6p?

Many people have worked on the best way to test a CPU; they ran all the tests many suggested and still ended up here.

It's called the scientific method, and it means they're testing right and everyone doubting them is wrong.

And 1080p will maybe take a decade to die; it's not anywhere near dead yet.

What people will typically buy or use doesn't matter at all here.

It matters that the difference between cards or CPUs is shown clearly, fairly, and effectively, hence the testing you see.
 
Going by that logic, why not test these at 720p or 480p? What he said makes sense; those resolutions are obsolete, same as 1080p.
I doubt anyone spending that amount of money, a 7950X3D with a 4090, will be using 1080p.
These tests are downright useless and have zero REAL value. But then again, if you were shown real-case tests, the 99%-of-people tests and not the 0.0001% weirdo who will run this setup, you wouldn't even care to upgrade, because in reality the difference is minimal. That's also true for new-generation CPUs vs previous ones.

Your logic is flawed.
First of all, when testing CPU performance, you want to remove all other bottlenecks if possible. You want to see the maximum framerate a CPU can produce.
Second of all, there are no 720p or 480p monitors out there. But there are plenty of 1080p displays, with refresh rates including 480 Hz and higher.

You should watch the Hardware Unboxed video on this topic. Most people can't comprehend why CPUs are tested this way.

If you test a CPU with a GPU bottleneck, you have no idea what will happen when you upgrade your GPU. Your framerate might stay exactly the same, because your CPU is maxed out.
But when you test it at 1080p, you will know exactly how much headroom you have.

This is exactly why low-end GPUs like the 3050 and 3060, or even the RX 6400, are tested with the fastest CPU on the market. You don't want a CPU bottleneck to affect your results.
 
But you still rely on software to hopefully make it work properly... not too sure I like that idea. Actually, I am not too sure I like the direction this whole product line has gone.
 