
AMD's Reviewers Guide for the Ryzen 9 7950X3D Leaks

Joined
Jun 18, 2021
Messages
2,550 (2.02/day)
Going by that logic, why not test these at 720p or 480p? What he said makes sense: those resolutions are obsolete, same as 1080p.
I doubt anyone spending that amount of money (a 7950X3D with a 4090) will be using 1080p.
These tests are downright useless and have zero REAL value. But then again, if you were shown real-world tests, the tests for the 99% of people and not the 0.0001% weirdo who will actually run this setup, you wouldn't even care to upgrade, because in reality the difference is minimal. That's also true for new-generation CPUs versus previous ones.

Any good review will also test that, because it obviously matters for users. But to get good granularity when comparing the capabilities of different CPUs, you need the CPU to be the important part; screw real use cases. Cinebench isn't a real use case either, nor are most of the benchmarks out there.

It's called the scientific method, and it means they're testing correctly and everyone doubting them is wrong.

Unfortunate phrasing ;)

Some third-party tests, with a 3090. Still at 1080p, though.

View attachment 285448


I really want to see the comparison with the 5800X3D. From the AMD numbers it seems like a wash: better in some, equal in others. I'm waiting for head-to-head tests.

When is the embargo on this over?
 
Joined
Feb 18, 2005
Messages
5,847 (0.81/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) 3x AOC Q32E2N (32" 2560x1440 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G602
Keyboard Razer Pro Type Ultra
Software Windows 10 Professional x64
I think you don't understand what apples to apples is, then. Memory speed affects the IMC speed. When you put 6000 memory on Zen 4, you are running the IMC not just overclocked, but at the actual upper limit it can run. When you put 6000 memory on Intel, you are in fact UNDERCLOCKING the IMC compared to stock, and you are way, way below the upper limit of the IMC speed. You either run both at officially supported speeds, which is 5200 and 5600 respectively, or you run both maxed out, which is 6000-6400 for Zen 4 and 7600-8000+ for Intel.
> claims I don't understand what apples to apples mean
> goes on to suggest running memory at different speeds because MUH IMC

People like you are why the human race is doomed.
 
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
Any good review will also test that, because it obviously matters for users. But to get good granularity when comparing the capabilities of different CPUs, you need the CPU to be the important part; screw real use cases. Cinebench isn't a real use case either, nor are most of the benchmarks out there.



Unfortunate phrasing ;)



I really want to see the comparison with the 5800X3D. From the AMD numbers it seems like a wash: better in some, equal in others. I'm waiting for head-to-head tests.

When is the embargo on this over?
Well, at least we know they're getting tested thoroughly on TPU; third-party reviews are what matter.

Hopefully removing differences, not adding some, obviously.
 

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
17,656 (2.41/day)
Location
Sweden
System Name Overlord Mk MLI
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte X670E Aorus Master
Cooling Noctua NH-D15 SE with offsets
Memory 32GB Team T-Create Expert DDR5 6000 MHz @ CL30-34-34-68
Video Card(s) Gainward GeForce RTX 4080 Phantom GS
Storage 1TB Solidigm P44 Pro, 2 TB Corsair MP600 Pro, 2TB Kingston KC3000
Display(s) Acer XV272K LVbmiipruzx 4K@160Hz
Case Fractal Design Torrent Compact
Audio Device(s) Corsair Virtuoso SE
Power Supply be quiet! Pure Power 12 M 850 W
Mouse Logitech G502 Lightspeed
Keyboard Corsair K70 Max
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/yfsd9w
When is the embargo on this over?
Tuesday.

 
Joined
Feb 13, 2014
Messages
496 (0.13/day)
Location
Cyprus
Processor 13700KF - 5.7GHZ
Motherboard Z690 UNIFY-X
Cooling ARCTIC Liquid Freezer III 360 (NF-A12x25)
Memory 2x16 G.SKILL M-DIE (7200-34-44-44-28)
Video Card(s) XFX MERC 7900XT
Storage 1TB KINGSTON KC3000
Display(s) FI32Q
Case LIAN LI O11 DYNAMIC EVO
Audio Device(s) HD599
Power Supply RMX1000
Mouse PULSAR V2H
Keyboard KEYCHRON V3 (DUROCK T1 + MT3 GODSPEED R2)
Software Windows 11
Benchmark Scores Superposition 4k optimized - 20652
Your logic is flawed.
First of all, when testing CPU performance, you want to remove all other bottlenecks if possible. You want to see the maximum framerate a CPU can produce.
Second of all, there are no 720p or 480p monitors out there. But there are plenty of 1080p displays, with refresh rates reaching 480 Hz and higher.

You should watch the Hardware Unboxed video on this topic. Most people can't comprehend why CPUs are tested this way.

If you test a CPU with a GPU bottleneck, you have no idea what will happen when you upgrade your GPU. Your framerate might stay exactly the same, because your CPU is maxed out.
But when you test it at 1080p, you will know exactly how much headroom you have.

This is exactly why low-end GPUs like the 3050 and 3060, or even the RX 6400, are tested with the fastest CPU on the market. You don't want a CPU bottleneck to affect your results.
If you read what I said, I completely understand that. But what I am saying is that testing a 7950X3D and a 4090 at 1080p makes no sense. No one will buy these components and use them to play at 1080p. Most people with this budget will opt for a high-end 4K or 1440p display.

Removing all bottlenecks and testing a CPU at 1080p, when the most likely scenario is using it at 1440p or 4K, makes the same sense as testing it at 720p, 480p, or 6p, as that other guy said.
It literally holds zero meaning.

Also, to continue on why removing all bottlenecks to test one part is meaningless, consider synthetic benchmarks: they do just that. Why don't you go and buy your CPU based on a synthetic benchmark? Or your GPU? Because it literally doesn't matter; what you care about is your use case.

Sure, you CAN use these parts at 1080p, but what makes more sense is for the 1080p benchmark to be auxiliary to the main benchmarks run at probable scenarios, 1440p or 4K, both with and without ray tracing (and I bet on those configs the performance gains will be negligible).
 
Last edited:
Joined
Jun 14, 2020
Messages
3,474 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
> claims I don't understand what apples to apples mean
> goes on to suggest running memory at different speeds because MUH IMC

People like you are why the human race is doomed.
I'm sure people who are factually correct are the reason the human race is doomed, while people like you, bathing in falsehoods, are the road to salvation. Absolutely on point :D

Sure, underclocking one while overclocking the other is apples to apples.
 
Joined
Jun 10, 2014
Messages
2,987 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Seems like the 3D cache was much more beneficial with DDR4. DDR5 almost doubles the bandwidth with similar latency, so the gains are much smaller.
But even more so than that, Zen 4 is quite a bit faster than Zen 3, and this is primarily achieved through front-end improvements, which ultimately means it will be less cache sensitive. So we should expect it to see smaller gains from extra L3 cache (relatively speaking).

There are still cases where the additional cache helps tremendously. F1 2021 and Watch Dogs Legion see enormous gains.
Yes, those are edge cases. Cherry-picking edge cases to prove a point isn't a particularly good argument, especially if you want to extrapolate from them to general performance. The F1 game series has been known to be an outlier for years, and I find it interesting that they don't even use the latest game in the series.

Keep in mind that CPUs aren't like GPUs; they are latency engines, i.e. designed to reduce the latency of a task. For them, latency trumps bandwidth, and L3 cache's latency advantage is even greater for Zen 4 because of Zen 4's higher clocks.
Firstly, L3 doesn't work the way most people think;
L3 (in current AMD and Intel architectures) is a spillover cache for L2; you should not think of it as a faster piece of RAM or a slightly slower L2. L3 is only beneficial when you get cache hits there, and unlike L2, you don't get hits there from prefetched blocks etc., as L3 only contains blocks recently discarded from L2. L3 is an LRU-type cache, which means every cache line fetched into L2 will push another out of L3.
You get a hit in L3 when (ordered by likelihood):
- An instruction cache line has been discarded from this core (or another core).
- A data cache line has been discarded from this core, most likely due to branch misprediction.
- A data cache line has been discarded from another core, but this is exceedingly rare compared to the other cases, as data stays in L3 for a very short time, and the chance of multiple threads accessing the same data cache line within a few thousand clock cycles is minuscule.

This is the reason why only a handful of applications are sensitive to L3, as it mostly has to do with the instruction cache. For those who know low-level optimization, the reason should be immediately clear: highly optimized code is commonly known to be less sensitive to the instruction cache, which essentially means better code is less sensitive to L3. Don't get me wrong, extra cache is good. But don't assume software should be designed to "scale with L3 cache" when that's a symptom of bad code.
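
(To illustrate the spillover behaviour described above: a toy LRU victim-cache model in Python. Purely illustrative, with made-up sizes; real L3 replacement policies are more complex, but it shows why an L3 hit requires the line to have been discarded from L2 recently.)

```python
# Toy model (illustrative only, not a faithful CPU simulator): L3 holds only
# lines evicted from L2, so an access hits L3 only if the line was discarded
# from L2 recently enough not to have been pushed out of L3 in the meantime.
from collections import OrderedDict

class VictimCache:
    def __init__(self, l2_lines: int, l3_lines: int):
        self.l2 = OrderedDict()   # line -> None, kept in LRU order
        self.l3 = OrderedDict()
        self.l2_lines, self.l3_lines = l2_lines, l3_lines

    def access(self, line: int) -> str:
        if line in self.l2:
            self.l2.move_to_end(line)
            return "L2 hit"
        result = "L3 hit" if line in self.l3 else "miss (memory)"
        self.l3.pop(line, None)                   # promoted lines leave L3
        self.l2[line] = None                      # fill into L2
        if len(self.l2) > self.l2_lines:          # L2 eviction spills into L3...
            victim, _ = self.l2.popitem(last=False)
            self.l3[victim] = None
            if len(self.l3) > self.l3_lines:      # ...pushing the oldest victim out
                self.l3.popitem(last=False)
        return result

c = VictimCache(l2_lines=4, l3_lines=8)
for addr in [0, 1, 2, 3, 4, 0]:   # 0 is evicted from L2 by 4, then re-hits in L3
    print(addr, c.access(addr))
```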

Secondly, regarding latency vs. bandwidth;
Latency is always better when you look at a single instruction or a single block of data, but when looking at real-world performance you have to look at overall latency and throughput. If, for instance, a thread is stalled waiting for two or more cache lines to be fetched, then slightly higher latency doesn't matter as much as bandwidth. This essentially comes down to the balance between data and how often the pipeline stalls. More bandwidth also means the prefetcher can fetch more data in time, so it might prevent some stalls altogether. This is why CPUs overall are much faster than 20 years ago, even though latencies in general have gradually increased.
But this doesn't really apply to L3, though, as the L3 cache works very differently, as described above.
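
(A back-of-the-envelope sketch of that trade-off, with entirely made-up numbers: one outstanding fetch favours the lower-latency configuration, while a queue of cache-line fetches favours the higher-bandwidth one.)

```python
# Simplified streaming model: one latency to first data, then the remaining
# bytes transfer at full bandwidth. 1 GB/s == 1 byte/ns, so bytes/(GB/s) is ns.
LINE = 64  # bytes per cache line

def fetch_time_ns(lines: int, latency_ns: float, gbps: float) -> float:
    return latency_ns + lines * LINE / gbps

for lines in (1, 8, 64):
    low_lat = fetch_time_ns(lines, latency_ns=70, gbps=50)    # low latency, low bandwidth
    high_bw = fetch_time_ns(lines, latency_ns=85, gbps=100)   # high latency, high bandwidth
    print(f"{lines:3d} lines: {low_lat:6.1f} ns vs {high_bw:6.1f} ns")
#   1 line:   71.3 ns vs  85.6 ns  -> low latency wins
#  64 lines: 151.9 ns vs 126.0 ns  -> bandwidth wins despite worse latency
```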

Lastly, when compared to a small generational uplift, like Zen 2 -> Zen 3 or Zen 3 -> Zen 4, the gains from extra L3 are pretty small, and the large gains mostly come down to very specific applications. This is why I keep calling it mostly a gimmick. If, on the other hand, you use one of those applications where you get a 30-40% boost, then by all means go ahead and buy one; for everyone else, it's mostly something to brag about.

The Intel 13900K's officially supported memory speed is DDR5-5600: https://www.intel.com/content/www/u...-36m-cache-up-to-5-80-ghz/specifications.html. Everything above that comes down to your luck in the silicon lottery.
Not to mention that you are likely to have to downgrade that speed over time (or risk system stability).
 
Joined
Dec 12, 2012
Messages
774 (0.18/day)
Location
Poland
System Name THU
Processor Intel Core i5-13600KF
Motherboard ASUS PRIME Z790-P D4
Cooling SilentiumPC Fortis 3 v2 + Arctic Cooling MX-2
Memory Crucial Ballistix 2x16 GB DDR4-3600 CL16 (dual rank)
Video Card(s) MSI GeForce RTX 4070 Ventus 3X OC 12 GB GDDR6X (2610/21000 @ 0.91 V)
Storage Lexar NM790 2 TB + Corsair MP510 960 GB + PNY XLR8 CS3030 500 GB + Toshiba E300 3 TB
Display(s) LG OLED C8 55" + ASUS VP229Q
Case Fractal Design Define R6
Audio Device(s) Yamaha RX-V381 + Monitor Audio Bronze 6 + Bronze FX | FiiO E10K-TC + Sony MDR-7506
Power Supply Corsair RM650
Mouse Logitech M705 Marathon
Keyboard Corsair K55 RGB PRO
Software Windows 10 Home
Benchmark Scores Benchmarks in 2024?
Sure, you CAN use these parts with 1080p, but what makes more sense is that the benchmark for 1080p is auxiliary to the main benchmarks that are run on probable scenarios, 1440p or 4k, both with and without raytracing (and i bet on those configs the gains in performance will be negligible).

All you did is confirm you do not understand the process.

Example 1:

1080p - you get 60 FPS
1440p - you get 60 FPS
4K - you get 40 FPS

In this situation, upgrading your GPU would provide NO performance increase in 1440p. ZERO.
In 4K, you would only gain a maximum of 50% extra performance, even if the new GPU was twice as fast.
How would you know this without the 1080p test?

Example 2:
1080p - you get 100 FPS
1440p - you get 60 FPS
4K - you get 40 FPS

In this situation, the CPU bottleneck happens at 100 FPS, which means you can get 67% more performance at 1440p after upgrading the GPU, and 150% more at 4K.
You know the maximum framerate the CPU can achieve without a GPU bottleneck, which means you know what to expect when you upgrade your GPU.
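
(A minimal Python sketch of the model behind these examples, assuming observed FPS = min(CPU cap, GPU FPS at that resolution); the numbers are the hypothetical ones from Example 2.)

```python
# The framerate you observe is capped by whichever of CPU and GPU is slower.
def predicted_fps(cpu_cap: float, gpu_fps: float) -> float:
    """Expected framerate, given the CPU's GPU-unbound cap (the 1080p result)."""
    return min(cpu_cap, gpu_fps)

cpu_cap = 100.0  # Example 2: 100 FPS at 1080p is the CPU-bound ceiling
for res, old_gpu_fps in [("1440p", 60.0), ("4K", 40.0)]:
    new_fps = predicted_fps(cpu_cap, old_gpu_fps * 2)  # hypothetical GPU twice as fast
    print(f"{res}: {old_gpu_fps:.0f} -> {new_fps:.0f} FPS (+{new_fps / old_gpu_fps - 1:.0%})")
# 1440p: 60 -> 100 FPS (+67%)   (the new GPU could render 120, but the CPU caps it)
# 4K:    40 -> 80 FPS (+100%)   (still GPU-bound; up to +150% of headroom remains)
```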

What's important is to have this data for as many games as possible. Some games don't need a lot of CPU power, some are badly threaded, and some will utilize all 8 cores fully.

If you test in 4K, you're not testing the maximum potential of the CPU. You want to know this if you're planning on keeping your system for more than two years. Most people WILL upgrade their GPU before their CPU.
Seriously, please just go watch the Hardware Unboxed video.
 
Joined
Mar 10, 2010
Messages
11,878 (2.21/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
If you read what I said, I completely understand that. But what I am saying is that testing a 7950X3D and a 4090 at 1080p makes no sense. No one will buy these components and use them to play at 1080p. Most people with this budget will opt for a high-end 4K or 1440p display.

Removing all bottlenecks and testing a CPU at 1080p, when the most likely scenario is using it at 1440p or 4K, makes the same sense as testing it at 720p, 480p, or 6p, as that other guy said.
It literally holds zero meaning.

Also, to continue on why removing all bottlenecks to test one part is meaningless, consider synthetic benchmarks: they do just that. Why don't you go and buy your CPU based on a synthetic benchmark? Or your GPU? Because it literally doesn't matter; what you care about is your use case.

Sure, you CAN use these parts at 1080p, but what makes more sense is for the 1080p benchmark to be auxiliary to the main benchmarks run at probable scenarios, 1440p or 4K, both with and without ray tracing (and I bet on those configs the performance gains will be negligible).
I think the bit you're not getting is that reviews don't benchmark to give customers an idea of how a part will typically run on their personal 144 Hz 1440p or 4K, 4090-equipped hardware, even in a 4090 review.
It's to see how each GPU, in this case the 4090, compares to a few others, now and in the future.
A subtle difference.
 
Joined
Feb 13, 2014
Messages
496 (0.13/day)
Location
Cyprus
Processor 13700KF - 5.7GHZ
Motherboard Z690 UNIFY-X
Cooling ARCTIC Liquid Freezer III 360 (NF-A12x25)
Memory 2x16 G.SKILL M-DIE (7200-34-44-44-28)
Video Card(s) XFX MERC 7900XT
Storage 1TB KINGSTON KC3000
Display(s) FI32Q
Case LIAN LI O11 DYNAMIC EVO
Audio Device(s) HD599
Power Supply RMX1000
Mouse PULSAR V2H
Keyboard KEYCHRON V3 (DUROCK T1 + MT3 GODSPEED R2)
Software Windows 11
Benchmark Scores Superposition 4k optimized - 20652
All you did is confirm you do not understand the process.

Example 1:

1080p - you get 60 FPS
1440p - you get 60 FPS
4K - you get 40 FPS

In this situation, upgrading your GPU would provide NO performance increase in 1440p. ZERO.
In 4K, you would only gain a maximum of 50% extra performance, even if the new GPU was twice as fast.
How would you know this without the 1080p test?

Example 2:
1080p - you get 100 FPS
1440p - you get 60 FPS
4K - you get 40 FPS

In this situation, the CPU bottleneck happens at 100 FPS, which means you can get 67% more performance at 1440p after upgrading the GPU, and 150% more at 4K.
You know the maximum framerate the CPU can achieve without a GPU bottleneck, which means you know what to expect when you upgrade your GPU.

What's important is to have this data for as many games as possible. Some games don't need a lot of CPU power, some are badly threaded, and some will utilize all 8 cores fully.

If you test in 4K, you're not testing the maximum potential of the CPU. You want to know this if you're planning on keeping your system for more than two years. Most people WILL upgrade their GPU before their CPU.
Seriously, please just go watch the Hardware Unboxed video.
Mate, I have been watching those guys for years, and more advanced channels/forums as well. You still fail to understand my reasoning, but I won't continue this in the thread; feel free to message me if you need more explanation than has already been given.
 
Joined
Dec 12, 2012
Messages
774 (0.18/day)
Location
Poland
System Name THU
Processor Intel Core i5-13600KF
Motherboard ASUS PRIME Z790-P D4
Cooling SilentiumPC Fortis 3 v2 + Arctic Cooling MX-2
Memory Crucial Ballistix 2x16 GB DDR4-3600 CL16 (dual rank)
Video Card(s) MSI GeForce RTX 4070 Ventus 3X OC 12 GB GDDR6X (2610/21000 @ 0.91 V)
Storage Lexar NM790 2 TB + Corsair MP510 960 GB + PNY XLR8 CS3030 500 GB + Toshiba E300 3 TB
Display(s) LG OLED C8 55" + ASUS VP229Q
Case Fractal Design Define R6
Audio Device(s) Yamaha RX-V381 + Monitor Audio Bronze 6 + Bronze FX | FiiO E10K-TC + Sony MDR-7506
Power Supply Corsair RM650
Mouse Logitech M705 Marathon
Keyboard Corsair K55 RGB PRO
Software Windows 10 Home
Benchmark Scores Benchmarks in 2024?
There's nothing for you to explain. You need stuff explained to you, but you're unwilling to listen.

4K testing is for the "here and now"; it only shows you how current hardware behaves.
1080p testing is for both now and the future; it tells you how the CPU will behave in two years or more, when more powerful GPUs are available.
 
Joined
Jun 14, 2020
Messages
3,474 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
If you read what I said, I completely understand that. But what I am saying is that testing a 7950X3D and a 4090 at 1080p makes no sense. No one will buy these components and use them to play at 1080p.
CPUs should be tested at CPU-bound settings. The resolution is irrelevant: if you are CPU bound at 4K, then you can test them at 4K. Testing CPUs at non-CPU-bound settings is completely and utterly pointless. It gives you absolutely no information whatsoever.
 
Joined
Jan 18, 2020
Messages
822 (0.46/day)
CPUs should be tested at CPU-bound settings. The resolution is irrelevant: if you are CPU bound at 4K, then you can test them at 4K. Testing CPUs at non-CPU-bound settings is completely and utterly pointless. It gives you absolutely no information whatsoever.

It gives you the information that it doesn't matter what CPU you have at those settings. Which is the point.

It just depends on what your use case is. No one should be thinking that results at 720p and 1080p are going to translate into massive gains at the resolutions they're actually going to use in the real world.

So there's no need to drop $1k on a CPU when you can get one for $250 and not notice any difference. Save the $750 and spend it on something that will positively impact your use case instead.
 
Joined
Jun 14, 2020
Messages
3,474 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
It gives you the information that it doesn't matter what CPU you have at those settings. Which is the point.

It just depends on what your use case is. No one should be thinking that results at 720p and 1080p are going to translate into massive gains at the resolutions they're actually going to use in the real world.

So there's no need to drop $1k on a CPU when you can get one for $250 and not notice any difference. Save the $750 and spend it on something that will positively impact your use case instead.
And 1080p results tell you that as well, so why do you need to test at 4K? If I see a CPU getting 100 FPS at 1080p, then obviously it can get 100 FPS at 4K, if the card allows. So what exactly did a 4K CPU test offer me?
 
Joined
Mar 14, 2018
Messages
151 (0.06/day)
Wow, what a waste of effort. I expect virtually nothing for productivity software. This seems to be a far weaker uplift than the 5800X3D's, despite AMD's hype. They also overhyped and lied about 7900 XT(X) performance.

I couldn't care less about gaming performance with CPUs like Zen 4 and Raptor Lake; they are more than strong enough. For productivity I still think the 13700K is the sweet spot, but I'll wait and see if the Raptor Lake refresh is more than a tweak to clock speeds.
Remember that the 7000 series already doubled the L2 cache over the 5000 series. That's likely why the additional L3 doesn't make as much of an improvement.
 
Joined
Jun 10, 2014
Messages
2,987 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
CPUs should be tested at CPU-bound settings. The resolution is irrelevant: if you are CPU bound at 4K, then you can test them at 4K. Testing CPUs at non-CPU-bound settings is completely and utterly pointless. It gives you absolutely no information whatsoever.
That's sarcasm, right?
CPUs, GPUs, and other hardware should be tested at relevant settings and workloads; anything else is utterly pointless for deciding which one to purchase.

If you want to induce artificial workloads to find theoretical limits, that's fine for a technical discussion, but it should not be confused with which is the better product. How a product behaves under circumstances you won't run into is not going to affect your user experience. Far too many fools have purchased products based on specs or artificial benchmarks.
 
Joined
Oct 6, 2021
Messages
1,605 (1.40/day)
It gives you the information that it doesn't matter what CPU you have at those settings. Which is the point.

It just depends on what your use case is. No one should be thinking that results at 720p and 1080p are going to translate into massive gains at the resolutions they're actually going to use in the real world.

So there's no need to drop $1k on a CPU when you can get one for $250 and not notice any difference. Save the $750 and spend it on something that will positively impact your use case instead.
Don't you think that a person who buys a GPU for over $1,000 will just want to find out which is the best CPU for gaming, regardless of price? Also, having a very strong (the best) CPU ensures that you don't have to change it for a long time and can just upgrade the GPU. Plus, it is likely that the successor to the 4090 will be so powerful that it starts to be limited by weak CPUs even at 4K.

Anyway, I agree with the idea that the ideal is to test at 1080p, 1440p, and 4K.
 
Joined
Jun 14, 2020
Messages
3,474 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
That's sarcasm, right?
CPUs, GPUs, and other hardware should be tested at relevant settings and workloads; anything else is utterly pointless for deciding which one to purchase.

If you want to induce artificial workloads to find theoretical limits, that's fine for a technical discussion, but it should not be confused with which is the better product. How a product behaves under circumstances you won't run into is not going to affect your user experience. Far too many fools have purchased products based on specs or artificial benchmarks.
Are we at the point where it's considered sarcasm to test CPUs at... CPU-bound settings? Just wow :roll:

It's so freaking easy to demonstrate the fallacy in your logic, which begs the question of how you can't notice it yourself. CPU A and CPU B both cost 300€.

CPU A gets 100 FPS at 4K and 130 FPS at 1080p.
CPU B gets 100 FPS at 4K and 200 FPS at 1080p.

If you want a CPU just to play games, why the heck would you choose CPU A, since obviously CPU B will last you longer? And how the heck would you have known that unless you tested at CPU-bound settings? I mean, do I really have to explain things like it's kindergarten?
 
Joined
Oct 12, 2005
Messages
708 (0.10/day)
One of the main problems with CPU testing is not the resolution they test at, but the kind of games reviewers use. It's almost always FPS/third-person action games, and there are very few, if any, games in the mix that are actually CPU bound most of the time.

Just build a very large base in Valheim and you will be CPU limited at 4K. Build a large town in many colony/city-builder games and you will be CPU bound at 4K. Even today in MMORPGs, in areas with a lot of people or in raids, you can be CPU bound with a modern CPU. In some cases it's laziness, or a lack of time to produce a proper save (like building a large base in Valheim, or a large city or factory in other types of games); in others, it's just very hard to test (like MMORPGs).

But most of the time, the data extracted from non-GPU-limited scenarios can be extrapolated, up to a degree, to those kinds of games, so it's still worth doing if it's the easiest thing to do.
 
Joined
May 15, 2020
Messages
697 (0.42/day)
Location
France
System Name Home
Processor Ryzen 3600X
Motherboard MSI Tomahawk 450 MAX
Cooling Noctua NH-U14S
Memory 16GB Crucial Ballistix 3600 MHz DDR4 CAS 16
Video Card(s) MSI RX 5700XT EVOKE OC
Storage Samsung 970 PRO 512 GB
Display(s) ASUS VA326HR + MSI Optix G24C4
Case MSI - MAG Forge 100M
Power Supply Aerocool Lux RGB M 650W
Mate, I have been watching those guys for years, and more advanced channels/forums as well. You still fail to understand my reasoning, but I won't continue this in the thread; feel free to message me if you need more explanation than has already been given.
Try to understand it harder, then.

Actually, they should be testing at 480p to find out the maximum framerate the CPU can give. They don't do that because they would get even more "it's not realistic" comments. The scientific method is hard to grasp sometimes.
 
Joined
Jun 10, 2014
Messages
2,987 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Are we at the point where it's considered sarcasm to test CPUs at... CPU-bound settings? Just wow :roll:
Any sense of rationality is completely gone from your comment; just look at the previous one:

Testing CPUs at non-CPU-bound settings is completely and utterly pointless. It gives you absolutely no information whatsoever.
This is not only nonsensical, it's blatantly false. Benchmarking real workloads at realistic settings absolutely gives you a lot of information, as it tells the reader how the contending products will perform in real life.

It's so freaking easy to demonstrate the fallacy in your logic, which begs the question of how you can't notice it yourself. CPU A and CPU B both cost 300€.

CPU A gets 100 FPS at 4K and 130 FPS at 1080p.
CPU B gets 100 FPS at 4K and 200 FPS at 1080p.

If you want a CPU just to play games, why the heck would you choose CPU A, since obviously CPU B will last you longer? And how the heck would you have known that unless you tested at CPU-bound settings? I mean, do I really have to explain things like it's kindergarten?
Your example is nonsense, as you wouldn't see that kind of performance discrepancy unless you were comparing a low-end CPU to a high-end CPU across a large sample of games. And trying to use a cherry-picked game or two to showcase a "bottleneck" is ridiculous, as there is no reason to assume your selection has the characteristics of future games coming down the line.

The best approach is a large selection of games at realistic settings, looking at the overall trend and eliminating the outliers; that will give you a better prediction of what is the better long-term investment.
 
Joined
Jun 14, 2020
Messages
3,474 (2.13/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Any sense of rationality is completely gone from your comment; just look at the previous one:


This is not only nonsensical, it's blatantly false. Benchmarking real workloads at realistic settings absolutely gives you a lot of information, as it tells the reader how the contending products will perform in real life.
Any sense of rationality is completely gone from YOUR comment. A GPU-bound test gives you ABSOLUTELY no information about how the CPU performs. If you wanted to see how the GPU performs, then go watch the GPU review; why the heck are you watching the CPU one?

I'm sorry, but there's no point arguing anymore; you are just wrong. If I had followed your advice of only looking at 4K results, I would have bought a 7100 or a G4560 instead of an 8700K, since they performed exactly the same at 4K with the top GPU of the time. And then I upgraded to a 3090. If the graph below doesn't make you realize how wrong you are, nothing will, so you are just going to get ignored.

 
Joined
Dec 30, 2010
Messages
2,198 (0.43/day)
I'm not impressed with these results from a processor that costs $120+ more than the current top dog at its current retail price. Realistically, performance may actually be lower, since AMD likely cherry-picked some of those results. Regardless, I think the real performance gains will matter on the lower SKUs, as they will be cheaper and competing more directly on price. I'm looking forward to the 7800X3D in April.

The price of cache is steep, more expensive than just adding more cores to the party.

It's designed for a certain type of workload; games seem to benefit the most from additional L3 cache.

Other than that, in regular apps it won't be any better than a flagship, most likely due to the flagship's higher sustained clocks.
 

Space Lynx

Astronaut
Joined
Oct 17, 2014
Messages
17,269 (4.67/day)
Location
Kepler-186f
Processor 7800X3D -25 all core ($196)
Motherboard B650 Steel Legend ($179)
Cooling Frost Commander 140 ($42)
Memory 32gb ddr5 (2x16) cl 30 6000 ($80)
Video Card(s) Merc 310 7900 XT @3100 core $(705)
Display(s) Agon 27" QD-OLED Glossy 240hz 1440p ($399)
Case NZXT H710 (Red/Black) ($60)
I'm getting excited for W1zzard's review of the 7900X3D in a few days...

Happy Dance GIF
 
Joined
Nov 21, 2022
Messages
36 (0.05/day)
System Name Speed Demon 2.0
Processor i7 13700k
Motherboard ASUS ROG Strix Z690-E Gaming WiFi 6E
Cooling EK Supremacy Nickle, 2 x Magicool Elegant 360mm Copper Radiators
Memory Team T-Force Delta RGB 32GB DDR5 7200 (PC5 57600)
Video Card(s) MSI 3080-TI Air Cooled
Storage Corsair Gen4 MP600 + Multiple Samsung Gen 3
Display(s) Dell 32 Inch - S3222DGM & Dell G2422HS 23.8" Portrait
Case CaseLabs Mercury S8
Audio Device(s) SONOS
Power Supply Seasonic Focus PX-850 850W 80+ Platinum
Mouse MADCATZ R.A.T. 8+
Keyboard Logitech ‑ MX Mechanical
Why only 1080p?
 