
AMD Readies 16-core, 12-core, and 8-core Ryzen 7000X3D "Zen 4" Processors

Joined
Jan 20, 2019
Messages
1,550 (0.73/day)
Location
London, UK
System Name ❶ Oooh (2024) ❷ Aaaah (2021) ❸ Ahemm (2017)
Processor ❶ 5800X3D ❷ i7-9700K ❸ i7-7700K
Motherboard ❶ X570-F ❷ Z390-E ❸ Z270-E
Cooling ❶ ALFIII 360 ❷ X62 + X72 (GPU mod) ❸ X62
Memory ❶ 32-3600/16 ❷ 32-3200/16 ❸ 16-3200/16
Video Card(s) ❶ 3080 X Trio ❷ 2080TI (AIOmod) ❸ 1080TI
Storage ❶ NVME/SSD/HDD ❷ <SAME ❸ SSD/HDD
Display(s) ❶ 1440/165/IPS ❷ 1440/144/IPS ❸ 1080/144/IPS
Case ❶ BQ Silent 601 ❷ Cors 465X ❸ Frac Mesh C
Audio Device(s) ❶ HyperX C2 ❷ HyperX C2 ❸ Logi G432
Power Supply ❶ HX1200 Plat ❷ RM750X ❸ EVGA 650W G2
Mouse ❶ Logi G Pro ❷ Razer Bas V3 ❸ Logi G502
Keyboard ❶ Logi G915 TKL ❷ Anne P2 ❸ Logi G610
Software ❶ Win 11 ❷ 10 ❸ 10
Benchmark Scores I have wrestled bandwidths, Tussled with voltages, Handcuffed Overclocks, Thrown Gigahertz in Jail
But most people are thinking gaming when they think V-Cache; who buys the 7950X for gaming? AMD needs to do a really good job of highlighting V-Cache benefits outside of gaming. They should have slides prepared showing its benefits in other types of software. We mostly only have server results from Milan-X to show how it can benefit non-gaming workloads.

streaming, encoding, transcoding, rendering, etc etc + gaming, no?

I pally up with 3 mates whilst gaming... two of them (partners in crime) are weddings/events videographers + trained drone pilots (or something like that).... so a ton of video editing/rendering on the go.... both buy into these types of higher core count CPUs. They've already bought into AM5 with a 7950X for strictly work purposes.... and both of them are holding off on their gaming+work rigs for something like a 7900XD/7950XD/etc. Another friend who's not looking to upgrade is currently on a 10900K for gaming, streaming and transcoding... I wouldn't be surprised if he jumps on the same bandwagon at some point.

Plus, I regularly see people on PCPP forums requesting build lists with similar demands. In short, there's definitely a market for it. Not for me though; I keep my gaming rig separate from work systems (not that my workload demands/benefits from 16 cores or whatnot)

X3D is welcome on all Zen 4 tiers, but it's a downright shame they're gonna skip a 7600X3D... that's baffling!
 
Joined
Feb 22, 2022
Messages
106 (0.11/day)
System Name Lexx
Processor Threadripper 2950X
Motherboard Asus ROG Zenith Extreme
Cooling Custom Water
Memory 32/64GB Corsair 3200MHz
Video Card(s) Liquid Devil 6900XT
Storage 4TB Solid State PCI/NVME/M.2
Display(s) LG 34" Curved Ultrawide 160Hz
Case Thermaltake View T71
Audio Device(s) Onboard
Power Supply Corsair 1000W
Mouse Logitech G502
Keyboard Asus
VR HMD NA
Software Windows 10 Pro
Well, computers have been my hobby for nearly 40 years and I have lots of disposable income - and as I work in I.T., the tax man likes to help me with my I.T. purchases :)
 

Mussels

Freshwater Moderator
Joined
Oct 6, 2004
Messages
58,413 (7.95/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700Mhz 0.750v (375W down to 250W)
Storage 2TB WD SN850 NVME + 1TB Samsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
Even then, of course you can. The 5600 literally wipes the floor with the 3d in terms of performance per price
Abso-friggin-lutely

The 5600 is IMO, the best choice for 99% of gamers out there - you need a serious GPU to ever have it even be a limit.
We mostly only have server results from Milan-X to show how it can benefit non-gaming workloads.
That's because it helps gaming and server applications; there's very little overlap.

because the 5800x3D did run at lower clocks (overall mine runs 4.45GHz all core vs 4.6GHz in AVX workloads, soooo much slower) it'd be hard to advertise it as an all purpose product when it'd have a deficit in some commonly used setups


What'd be amazing is if AMD had the pull intel does with microsoft, and could release a CPU with one 3D stacked die and others without - 3D becomes the P cores, and the others do the higher wattage boring workloads
 
Joined
Jun 14, 2020
Messages
3,439 (2.12/day)
System Name Mean machine
Processor 12900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Abso-friggin-lutely

The 5600 is IMO, the best choice for 99% of gamers out there - you need a serious GPU to ever have it even be a limit.

That's because it helps gaming and server applications; there's very little overlap.

because the 5800x3D did run at lower clocks (overall mine runs 4.45GHz all core vs 4.6GHz in AVX workloads, soooo much slower) it'd be hard to advertise it as an all purpose product when it'd have a deficit in some commonly used setups


What'd be amazing is if AMD had the pull intel does with microsoft, and could release a CPU with one 3D stacked die and others without - 3D becomes the P cores, and the others do the higher wattage boring workloads
Even the 5700 is what, 190€ or something. Usually the low/mid range from both companies is pretty great in value; the higher-end CPUs 99% of the time are not worth the money. The 3D sadly falls into the high-end tier and is a bit overpriced even at the recent cut-down prices.
 
Joined
Jun 10, 2014
Messages
2,985 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Even then, of course you can. The 5600 literally wipes the floor with the 3d in terms of performance per price
Wow, what a sight to behold! It must be the first CPU in history which also cleans your house. :p

We are no longer in the last decade. Devs do not target a specific number of cores. They have X amount of operations, of which some amount can be sent to other threads. They can scale that out to more threads, but the more you scale out the multithreaded part, the more the single-threaded part becomes the problem.
It seems you are talking about thread pools. Those are usually relevant when you have large quantities of independent work chunks with no need for synchronization, which is exceedingly rare for interactive synchronous workloads with very strict deadlines like games. (large batch jobs are a different story)
With fast games pushing tick rates of 120 Hz(8.3ms) or higher, individual steps in the pipelined simulation can have a performance budget of 1 ms or much less, which leaves very small margins for delays caused by threading. At this fine level, splitting a task across threads may actually hurt performance, especially when it comes to stutter. It may even cause simulation glitches or crashes, as we've seen in some games. For this reason, even heavily multithreaded games usually do separate tasks on separate threads, e.g. ~2-3+ threads for GPU interaction, usually 1 for the core game simulation, 1 for audio, etc.
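To make that layout concrete, here is a minimal C++ sketch (purely illustrative, with hypothetical function names, not taken from any real engine) of the "separate tasks on separate threads" structure: the simulation and audio loops each own a fixed tick, and the main thread acts as the render thread.

```cpp
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> running{true};

// Generic fixed-tick loop: run one subsystem's step at its own rate.
void run_loop(std::chrono::microseconds tick, void (*step)()) {
    using clock = std::chrono::steady_clock;
    auto next = clock::now();
    while (running.load(std::memory_order_relaxed)) {
        step();                              // this subsystem's work for one tick
        next += tick;
        std::this_thread::sleep_until(next); // hold the tick rate
    }
}

void simulate()  { /* advance game state (~120 Hz) */ }
void mix_audio() { /* fill audio buffers */ }
void render()    { /* build and submit GPU commands */ }

int main() {
    // Dedicated threads per subsystem, each with its own deadline,
    // instead of slicing sub-millisecond tasks across a shared pool.
    std::thread sim(run_loop, std::chrono::microseconds(8333), simulate);
    std::thread audio(run_loop, std::chrono::milliseconds(10), mix_audio);
    for (int frame = 0; frame < 1000; ++frame) render(); // main = render thread
    running = false;
    sim.join();
    audio.join();
}
```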

Before modern low-level APIs, games were hard to multithread and devs had to decide what to run on other cores. In those days it was true that a game could benefit from having X cores…
Multithreading has been common in games since the early 2000s, and game engines still to this day have control over what to run on which cores.

But today that's no longer the case. Modern APIs allow devs to send a lot of operations to other threads. For example, all draw calls (commands sent to the GPU to perform an action) can now be multithreaded.
Current graphics APIs certainly can accept commands from multiple threads, but to what purpose? This just leaves the driver with the task to organize the commands, and it's shoved into a single queue of operations internally either way. The purpose of multiple threads with GPU context is to execute independent queues, e.g. multiple render passes, compute loads, asset loading, or possibly multiple viewports (e.g. split screen).
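As a rough illustration of that split (made-up command and list types, not a real graphics API), the part that parallelizes is the recording; the ordering still happens at a single submission point:

```cpp
#include <thread>
#include <vector>

// Made-up command/list types standing in for a real graphics API.
struct Command { int opcode; int payload; };
using CommandList = std::vector<Command>;

// Each thread records its own list: no shared state, so this part scales.
void record_pass(CommandList& out, int pass_id) {
    for (int i = 0; i < 1000; ++i)
        out.push_back({pass_id, i});       // e.g. the draw calls of one render pass
}

int main() {
    constexpr int kPasses = 4;             // shadow, opaque, transparent, UI
    std::vector<CommandList> lists(kPasses);

    std::vector<std::thread> workers;
    for (int p = 0; p < kPasses; ++p)
        workers.emplace_back(record_pass, std::ref(lists[p]), p);
    for (auto& w : workers) w.join();

    // Single submission point: the order between passes is decided here,
    // and the driver/GPU still ends up consuming one ordered stream.
    CommandList submission;
    for (auto& l : lists)
        submission.insert(submission.end(), l.begin(), l.end());
    // submit(submission);  // hypothetical hand-off to the driver
}
```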

Usually, when a game's rendering is CPU bottlenecked, the game is either not running the API calls effectively or using the API "incorrectly". In such cases, the solution is to move the non-rendering code out of the rendering thread instead of creating more rendering threads.

Even more so today, since all those threads have data in common that can be cached; if they are not too far apart, it can lead to very nice performance gains, since the CPU will not waste time waiting for data from main memory.
Data cache line hits in L3 from other threads are (comparatively) rare, as the entire cache is overwritten every few thousand clock cycles, so the window for a hit here is very tiny.
Instruction cache line hits from other threads in L3 are more common, however, e.g. when multiple threads execute the same code but on different data.

That brings us back to the point of 3D V-Cache. It is useful when software frequently reuses the same data and that data can fit, or close to fit, in the cache. Games really do behave like this, since they generate frames one after another. And you realise that if you want high FPS (60+) and you have to wait on memory latency, your working set needs to be quite small: 250-500 MB depending on memory and other stuff.
Actually, this misconception is why we see so little gains from 3D V-Cache. The entire L1/L2/L3 hierarchy is overwritten many times in the lifecycle of a single frame, so there is no possibility of benefits here across frames.
We only see some applications and games benefit significantly from massive L3 caches, and it's usually not the ones which are the most computationally intensive; this is because of instruction cache hits, not data. When software is sensitive to instruction cache, it's usually a sign of unoptimized or bloated code. For this reason I'm not particularly excited about extra L3 cache, as this mainly benefits "poor" code. But when the underlying technology eventually is used to build bigger cores with varying features, now that's exciting.
 
Joined
Oct 12, 2005
Messages
707 (0.10/day)
Wow, what a sight to behold! It must be the first CPU in history which also cleans your house. :p


It seems you are talking about thread pools. Those are usually relevant when you have large quantities of independent work chunks with no need for synchronization, which is exceedingly rare for interactive synchronous workloads with very strict deadlines like games. (large batch jobs are a different story)
With fast games pushing tick rates of 120 Hz(8.3ms) or higher, individual steps in the pipelined simulation can have a performance budget of 1 ms or much less, which leaves very small margins for delays caused by threading. At this fine level, splitting a task across threads may actually hurt performance, especially when it comes to stutter. It may even cause simulation glitches or crashes, as we've seen in some games. For this reason, even heavily multithreaded games usually do separate tasks on separate threads, e.g. ~2-3+ threads for GPU interaction, usually 1 for the core game simulation, 1 for audio, etc.
Thread switching is counted in nanoseconds, not milliseconds. It's not a problem at all for a CPU to run multiple threads in series if it can do them fast enough. Those threads are just a bunch of operations that need to be done, and if a CPU can complete the operations of two threads on one core faster than another CPU with two cores, that one-core CPU will be faster, and this is what we see right now.

The data proves what I say.


Multithreading has been common in games since the early 2000s, and game engines still to this day have control over what to run on which cores.
Most games today just map the topology of the processor for things like CCDs (to schedule everything on one CCD for a Ryzen 9, for example), E-cores, SMT, etc. They do not really care whether a job runs on core X or core Y. Otherwise they would have to code their game for every variation of CPU: what if there is no SMT, what if there are just 4 cores, etc.

And still, the main thread remains the bottleneck in most cases. Otherwise you would see a 20-25% gain going from 6 cores to 8 cores in a CPU-limited scenario, and that is still not the case. Also, if threads were assigned statically by the engine, games would run really poorly on newer-gen CPUs with fewer cores (like the 7600X). But still today, this CPU kicks the ass of many CPUs with more cores, because it does not matter; only raw performance matters.

Current graphics APIs certainly can accept commands from multiple threads, but to what purpose? This just leaves the driver with the task to organize the commands, and it's shoved into a single queue of operations internally either way. The purpose of multiple threads with GPU context is to execute independent queues, e.g. multiple render passes, compute loads, asset loading, or possibly multiple viewports (e.g. split screen).
The driver does not decide what to do on its own. The CPU still has to send commands to the driver, and this is the part that is now more easily multithreaded with modern APIs.

Beyond that, all drivers are more or less multithreaded. One of the main advantages Nvidia had with older APIs, where it was harder to write multithreaded code, was that their driver was more multithreaded, but it also had higher CPU overhead. AMD caught up in June with its older DirectX and OpenGL drivers. But anyway, once the driver has done its job, everything else is done by the GPU and the number of cores does not really matter.
Usually, when a game's rendering is CPU bottlenecked, the game is either not running the API calls effectively or using the API "incorrectly". In such cases, the solution is to move the non-rendering code out of the rendering thread instead of creating more rendering threads.
But is it? This is an oversimplification. It can be true in some cases, but not always. This is a line of argument I hate to read; it's just plain false, and most people do not really understand what a game is doing.

If we take your reasoning to its conclusion, it means we could have a game with perfect physics, perfect AI and photorealism with an unlimited number of assets, and if the game does not run properly, it's just because the game does not use the API "correctly".

Data cache line hits in L3 from other threads are (comparatively) rare, as the entire cache is overwritten every few thousand clock cycles, so the window for a hit here is very tiny.
Instruction cache line hits from other threads in L3 are more common, however, e.g. when multiple threads execute the same code but on different data.


Actually, this misconception is why we see so little gains from 3D V-Cache. The entire L1/L2/L3 hierarchy is overwritten many times in the lifecycle of a single frame, so there is no possibility of benefits here across frames.
We only see some applications and games benefit significantly from massive L3 caches, and it's usually not the ones which are the most computationally intensive; this is because of instruction cache hits, not data. When software is sensitive to instruction cache, it's usually a sign of unoptimized or bloated code. For this reason I'm not particularly excited about extra L3 cache, as this mainly benefits "poor" code. But when the underlying technology eventually is used to build bigger cores with varying features, now that's exciting.
It's not a misconception at all. It can be true that L1/L2 are overwritten many times in the lifecycle of a single frame, but that is what they are intended for. They need to cache data for a very small amount of time; still, in that time many cycles can happen.

The fact that caches are overwritten is not an issue, as long as they aren't overwritten when they are required. But anyway, CPUs have mechanisms to predict what needs to be in cache, what needs to stay in cache, etc. They look at the upcoming code and see what portions of memory will be needed. One misconception we may have is to think about caches holding "data": caches do not cache data, they cache memory space.

The CPU detects what memory regions it needs to continue working and prefetches them. There are other mechanisms in the CPU that decide whether some memory regions need to remain cached because they are likely to be reused.

Having more cache allows you to be more flexible and balance those things quite easily. Also, a CPU might still reuse some data hundreds of times before it gets purged from the L3 cache.

The working set for a single frame must remain small anyway. In an ideal scenario, let's say you have 60 GB/s and you want 60 FPS; that means that even if you could access all your data instantaneously, you cannot read more than 1 GB of data during that frame (unless, indeed, you cache it).

If you add the wait for every memory access on top of that (a typical access is a few bytes, and you wait 50-70 ns to get it), it's probably more like 200-300 MB per frame.
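A quick back-of-the-envelope check of the numbers above (the 60 GB/s bandwidth, 60 FPS target and 50-70 ns latency are the assumptions stated in the post):

```cpp
#include <cstdio>

int main() {
    // Assumptions from the post: ~60 GB/s of memory bandwidth, 60 FPS target.
    const double bandwidth = 60e9;          // bytes per second
    const double frame_time = 1.0 / 60.0;   // seconds per frame (~16.7 ms)

    // Best case: pure streaming at full bandwidth, no latency stalls.
    std::printf("Max unique data touched per frame: ~%.0f MB\n",
                bandwidth * frame_time / 1e6);   // ~1000 MB
    // The 200-300 MB per frame quoted above is that bound discounted for
    // the 50-70 ns latency paid on real (partially serialized) accesses.
    return 0;
}
```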

You can see that with a larger cache, you can now cache a significant portion of it.
 
Joined
Dec 12, 2012
Messages
773 (0.18/day)
Location
Poland
System Name THU
Processor Intel Core i5-13600KF
Motherboard ASUS PRIME Z790-P D4
Cooling SilentiumPC Fortis 3 v2 + Arctic Cooling MX-2
Memory Crucial Ballistix 2x16 GB DDR4-3600 CL16 (dual rank)
Video Card(s) MSI GeForce RTX 4070 Ventus 3X OC 12 GB GDDR6X (2610/21000 @ 0.91 V)
Storage Lexar NM790 2 TB + Corsair MP510 960 GB + PNY XLR8 CS3030 500 GB + Toshiba E300 3 TB
Display(s) LG OLED C8 55" + ASUS VP229Q
Case Fractal Design Define R6
Audio Device(s) Yamaha RX-V381 + Monitor Audio Bronze 6 + Bronze FX | FiiO E10K-TC + Sony MDR-7506
Power Supply Corsair RM650
Mouse Logitech M705 Marathon
Keyboard Corsair K55 RGB PRO
Software Windows 10 Home
Benchmark Scores Benchmarks in 2024?
No 6-core? I guess they are sticking with being greedy.

Really looking forward to seeing benchmarks and prices.
 
Joined
Jun 10, 2014
Messages
2,985 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Thread switching is counted in nanoseconds, not milliseconds. It's not a problem at all for a CPU to run multiple threads in series if it can do them fast enough…
SMT is done by the CPU transparently to the software, but scheduling of threads is done by the OS scheduler, which works on the millisecond time scale. (for Windows and standard Linux)

Delays may happen any time the OS scheduler kicks in one of the numerous background threads. Let's say you for example have a workload of 5 pipelined stages, each consisting of 4 work units, and a master thread will divide these to 4 worker threads. If each stage has to be completed by all threads and then synced up, then a single delay will delay the entire pipeline. This is why threadpools scale extremely well with independent work chunks and not well otherwise.
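A toy C++20 sketch of that exact situation (the timings are invented for illustration): 4 workers, 5 pipelined stages, a barrier after each stage, and one simulated scheduler delay that stalls everyone.

```cpp
#include <barrier>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    constexpr int kWorkers = 4, kStages = 5;
    std::barrier sync_point(kWorkers);

    auto worker = [&](int id) {
        for (int stage = 0; stage < kStages; ++stage) {
            // Each worker's share of the stage: ~200 us of "work".
            std::this_thread::sleep_for(std::chrono::microseconds(200));
            // Pretend the OS preempts worker 0 once for a couple of ms.
            if (id == 0 && stage == 2)
                std::this_thread::sleep_for(std::chrono::milliseconds(2));
            // No stage is done until every worker arrives; one late worker
            // stalls the whole pipeline.
            sync_point.arrive_and_wait();
        }
    };

    auto t0 = std::chrono::steady_clock::now();
    std::vector<std::thread> pool;
    for (int i = 0; i < kWorkers; ++i) pool.emplace_back(worker, i);
    for (auto& t : pool) t.join();
    double ms = std::chrono::duration<double, std::milli>(
                    std::chrono::steady_clock::now() - t0).count();
    std::printf("pipeline took %.2f ms (ideal: ~1 ms of work)\n", ms);
}
```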

Most games today just map the topology of the processor for things like CCDs (to schedule everything on one CCD for a Ryzen 9, for example), E-cores, SMT, etc…
While requesting affinity is possible, I seriously doubt most games are doing that.

And still, the main thread remains the bottleneck in most cases. Otherwise you would see a 20-25% gain going from 6 cores to 8 cores in a CPU-limited scenario, and that is still not the case.
That makes no sense. That's not how performance scaling in games works.

Also, if threads were assigned statically by the engine, games would run really poorly on newer-gen CPUs with fewer cores (like the 7600X).
My wording was incorrect; I meant to say "on which thread", not which (physical) core, which misled you.
The OS scheduler clearly decides where to run the threads.

But is it? This is an oversimplification. It can be true in some cases, but not always. This is a line of argument I hate to read; it's just plain false, and most people do not really understand what a game is doing.
The current graphics APIs are fairly low overhead. Render threads are not normally spending the majority of their CPU time with API calls. Developers will easily see this with a simple profiling tool.

If we take your reasoning to its conclusion, it means we could have a game with perfect physics, perfect AI and photorealism with an unlimited number of assets, and if the game does not run properly, it's just because the game does not use the API "correctly".
This is just silly straw man argumentation. I said no such thing. :rolleyes:
I was talking about rendering threads wasting (CPU) time on non-rendering work.

It's not a misconception at all. It can be true that L1/L2 are overwritten many times in the lifecycle of a single frame, but that is what they are intended for. They need to cache data for a very small amount of time; still, in that time many cycles can happen.

The fact that caches are overwritten is not an issue, as long as they aren't overwritten when they are required. But anyway, CPUs have mechanisms to predict what needs to be in cache, what needs to stay in cache, etc. They look at the upcoming code and see what portions of memory will be needed. One misconception we may have is to think about caches holding "data": caches do not cache data, they cache memory space.

The CPU detects what memory regions it needs to continue working and prefetches them. There are other mechanisms in the CPU that decide whether some memory regions need to remain cached because they are likely to be reused.
The L3 (in current Intel and AMD CPUs) is a spillover cache for L2, meaning anything evicted from L2 will end up there. If a CPU has 100 MB of L3 cache and the CPU reads ~100 MB of (unique) data from memory, then anything in L3 which hasn't been promoted into L2 will be overwritten after this data is read. It is a misconception that "important data" remains in cache. The prefetcher does not preserve/keep important cache lines in L3, and for anything to remain in cache it needs to be reused before it's evicted from L3 too. Every cache line the prefetcher loads into L2 (even one which is ultimately not used) will evict another cache line.
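A rough way to see the spillover/eviction effect described here (results are hardware-dependent, and the simple loop below ignores prefetcher subtleties): warm a small array, stream through a buffer larger than any current L3, then re-time the small array.

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

volatile long long sink;  // keep the sums from being optimized away

double time_sum_us(const std::vector<long long>& v) {
    auto t0 = std::chrono::steady_clock::now();
    long long s = 0;
    for (long long x : v) s += x;
    sink = s;
    return std::chrono::duration<double, std::micro>(
               std::chrono::steady_clock::now() - t0).count();
}

int main() {
    std::vector<long long> hot(1 << 20, 1);     // 8 MB "working set"
    std::vector<long long> big(1 << 25, 1);     // 256 MB, larger than any L3

    time_sum_us(hot);                            // warm the hot set into cache
    std::printf("hot re-read while cached:  %.0f us\n", time_sum_us(hot));
    time_sum_us(big);                            // streaming 256 MB evicts it
    std::printf("hot re-read after stream:  %.0f us\n", time_sum_us(hot));
}
```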

Having more cache allows you to be more flexible and balance those things quite easily. Also, a CPU might still reuse some data hundreds of times before it gets purged from the L3 cache.

You can see that with a larger cache, you can now cache a significant portion of it.
Considering the huge amount of data the CPU is churning through, even all the data it ultimately prefetches unnecessarily, the hit rate of extra L3 cache falls off very rapidly. Other threads, even low-priority background threads, also affect this.

The working set for a single frame must remain small anyway. In an ideal scenario, let's say you have 60 GB/s and you want 60 FPS…
While one or more threads are working on the rendering, other thread(s) will process events, do game simulation, etc. (plus other background threads), all of which will traverse different data and code, which will "pollute" (or rather compete over) the L3.
 
Joined
Oct 12, 2005
Messages
707 (0.10/day)
SMT is done by the CPU transparently to the software, but scheduling of threads is done by the OS scheduler, which works on the millisecond time scale. (for Windows and standard Linux)
What you refer to is not thread scheduling but thread balancing (into time slices that are, indeed, on that scale). An OS can assign a thread way faster than that if the CPU is ready, and the thread will release the CPU once it's done.

What you describe is how the OS switches between threads when resources are limited (no more cores/SMT slots available) and it needs to switch between threads to get the job done faster.

If the OS were really that slow to assign threads, everything you do on your computer would feel awfully slow. That would leave so much performance on the table.
Delays may happen any time the OS scheduler kicks in one of the numerous background threads. Let's say you for example have a workload of 5 pipelined stages, each consisting of 4 work units, and a master thread will divide these to 4 worker threads. If each stage has to be completed by all threads and then synced up, then a single delay will delay the entire pipeline. This is why threadpools scale extremely well with independent work chunks and not well otherwise.


While requesting affinity is possible, I serously doubt most games are doing that.
It's not affinity, but knowing how many threads they need to launch to get some balance.
That makes no sense. That's not how performance scaling in games works.
This indeed makes no sense for game scaling. But this is how it works for things that aren't bottlenecked by a single thread, like 3D rendering.
My wording was incorrect; I meant to say "on which thread", not which (physical) core, which misled you.
The OS scheduler clearly decides where to run the threads.
Agree!
The current graphics APIs are fairly low overhead. Render threads are not normally spending the majority of their CPU time with API calls. Developers will easily see this with a simple profiling tool.
They are lower overhead, but that does not mean zero overhead. Also, having lower overhead allows you to do more with the same.
This is just silly straw man argumentation. I said no such thing. :rolleyes:
I was talking about rendering threads wasting (CPU) time on non-rendering work.
That was unclear, but if a game does indeed waste too much time on useless things, that is indeed bad...

The L3 (in current Intel and AMD CPUs) is a spillover cache for L2, meaning anything evicted from L2 will end up there. If a CPU has 100 MB of L3 cache and the CPU reads ~100 MB of (unique) data from memory, then anything in L3 which hasn't been promoted into L2 will be overwritten after this data is read. It is a misconception that "important data" remains in cache. The prefetcher does not preserve/keep important cache lines in L3, and for anything to remain in cache it needs to be reused before it's evicted from L3 too. Every cache line the prefetcher loads into L2 (even one which is ultimately not used) will evict another cache line.


Considering the huge amount of data the CPU is churning through, even all the data it ultimately prefetches unnecessarily, the hit rate of extra L3 cache falls off very rapidly. Other threads, even low-priority background threads, also affect this.
You are right that L3 is a victim cache and will contain previously used data (which may or may not have been prefetched by the prefetcher). Having a larger L3 cache allows you to run a more aggressive prefetcher without too much penalty.

The benefits of cache will indeed not be linear. But tripling it will give a significant performance gain when the code frequently reuses the same data, and this is what games usually do. But your concern is also totally valid, because there are many other applications that just load new data all the time and barely reuse it, making the L3 cache almost useless.

But at the processor scale (not the human time frame), in the best scenario where the 6 cores read from memory at a combined 60 GB/s, it will take about 1.5 ms to flush the cache with fresh data. 60 GB/s is about what DDR4-3800 would give you in a theoretical benchmark. If we use fast DDR5 like DDR5-6000, that would be close to 90 GB/s, meaning the cache would be flushed every 1 ms.

This is indeed close to the time frame a game uses to render a frame, and this is a very theoretical scenario. Also, a read from main memory takes between 45-100 ns, whereas a hit in L3 only takes about 10 ns, and the bandwidth is also about 10 times higher. You do not need a very high hit ratio within a 16.6 ms window to get a significant gain, and every cache hit leaves the memory controller free to perform more memory accesses.
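Sanity-checking those figures (the 96 MB of stacked L3 and the 60/90 GB/s bandwidth numbers are taken from the post as assumptions):

```cpp
#include <cstdio>

int main() {
    // Assumptions from the post: 96 MB of (stacked) L3, ~60 GB/s for
    // DDR4-3800-class memory and ~90 GB/s for fast DDR5 in synthetic tests.
    const double l3_bytes = 96e6;
    const double bw_ddr4 = 60e9, bw_ddr5 = 90e9;

    std::printf("Time to stream one full L3 worth of new data:\n");
    std::printf("  at 60 GB/s: %.2f ms\n", l3_bytes / bw_ddr4 * 1e3);  // ~1.6 ms
    std::printf("  at 90 GB/s: %.2f ms\n", l3_bytes / bw_ddr5 * 1e3);  // ~1.1 ms

    // Frame times at common targets, for comparison with those flush times.
    std::printf("Frame time at 60 FPS: %.1f ms, at 144 FPS: %.1f ms\n",
                1000.0 / 60.0, 1000.0 / 144.0);
    return 0;
}
```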

Even if you are doing more tasks, background tasks, etc., you will still get cache hits at some point.
While one or more threads are working on the rendering, other thread(s) will process events, do game simulation, etc. (plus other background threads), all of which will traverse different data and code, which will "pollute" (or rather compete over) the L3.

Well, a lot of that work will reuse the same data even within a thread, making it worthwhile to have L3. And the thing is, L3 does not cache "things", it caches memory lines. Memory lines hold instructions and data. Those instructions can be reused as part of a loop many times; smaller loops will happen entirely within the registers or L1, but some take more time (and/or have branches that make them longer to run) and benefit from having L2 and L3.

Again, the data proves this point: the 5800X3D, even though it clocks slightly lower than the 5800X, beats it in almost all games.



There are two scenarios where the extra cache will not be beneficial in gaming (in a CPU-limited scenario):

1. The working set is too large. In that case, due to memory latency, you are talking about games with very low FPS that are memory-bandwidth limited.
2. Games with much smaller datasets that fit, or almost fit, in the 32 MB L3 of the 5800X, making the extra cache useless.
 

MWK

Joined
May 19, 2021
Messages
24 (0.02/day)
I am so happy that they are not releasing the 7600X3D. 6 cores in 2023? It's just silly, and the frametimes will be so choppy when you're doing anything.
 
Joined
Oct 14, 2022
Messages
33 (0.04/day)
System Name Dreki (https://pcpartpicker.com/b/ZDCLrH)
Processor AMD Ryzen 7 2700
Motherboard MSI B450 Gaming Pro Carbon AC
Cooling ID Cooling SE-224-XT Basic
Memory Corsair Vengeance LPX 4x16GB DDR4-3200 (Running @ 2133 Because of MOBO Issues - Documented on PCPP)
Video Card(s) MSI Radeon RX 6800 XT Gaming X Trio
Storage WD SN770 2TB NVME SSD
Display(s) Viewsonic VP2768 27" 1440p Calibrated Display
Case Cougar MX330
Audio Device(s) Zoom AMS-24
Power Supply Corsair RM750X (2017) 750W
Mouse Cougar Minos X3
Keyboard Cougar Attack X3
Software Windows 10 Home, Davinci Resolve Studio, ON1 Photo Raw 2023
The 7950X3D is actually what I was looking for. Similar to what other people have said, there are those of us who do workstation jobs and gaming both. I do not want two machines for separate uses. A single processor that can do both the best or close to it is perfect. I don't want to have to decide which my system will excel at. I do video production, graphics, and animation, as well as gaming. Using the abundant cores on a 7950X3D for the production work and the "X3D" for gaming seems like it could be my perfect solution. This is mostly all day-dreaming for now though...

In all reality, I will probably still wait for next gen 8950X3D just to get past the first-AM5-gen hiccups and get better ecosystem stability. My R7 2700/GTX 1070 is basically adequate for now. :twitch: I am used to the issues it has with my workflows and can hopefully get by with it until I can send it big for a whole new system (actually need to replace the 1070 soon. :cry: Not sure it can keep going too much longer). Hopefully an RX 8950 XTX is available then too and I can do a matching system of 8950's... that would make my brain happy. :laugh:

We'll see... looking forward to RDNA3 in a few days and will see how things shape up for my video business in the new year. It is a side gig right now, but it's building.
 
Joined
Jan 20, 2019
Messages
1,550 (0.73/day)
Location
London, UK
System Name ❶ Oooh (2024) ❷ Aaaah (2021) ❸ Ahemm (2017)
Processor ❶ 5800X3D ❷ i7-9700K ❸ i7-7700K
Motherboard ❶ X570-F ❷ Z390-E ❸ Z270-E
Cooling ❶ ALFIII 360 ❷ X62 + X72 (GPU mod) ❸ X62
Memory ❶ 32-3600/16 ❷ 32-3200/16 ❸ 16-3200/16
Video Card(s) ❶ 3080 X Trio ❷ 2080TI (AIOmod) ❸ 1080TI
Storage ❶ NVME/SSD/HDD ❷ <SAME ❸ SSD/HDD
Display(s) ❶ 1440/165/IPS ❷ 1440/144/IPS ❸ 1080/144/IPS
Case ❶ BQ Silent 601 ❷ Cors 465X ❸ Frac Mesh C
Audio Device(s) ❶ HyperX C2 ❷ HyperX C2 ❸ Logi G432
Power Supply ❶ HX1200 Plat ❷ RM750X ❸ EVGA 650W G2
Mouse ❶ Logi G Pro ❷ Razer Bas V3 ❸ Logi G502
Keyboard ❶ Logi G915 TKL ❷ Anne P2 ❸ Logi G610
Software ❶ Win 11 ❷ 10 ❸ 10
Benchmark Scores I have wrestled bandwidths, Tussled with voltages, Handcuffed Overclocks, Thrown Gigahertz in Jail
I am so happy that they are not releasing the 7600X3D. 6 cores in 2023? It's just silly, and the frametimes will be so choppy when you're doing anything.

6 cores is an affordable, smart entry point for gaming on the whole, and at the moment 8 cores doesn't necessarily see any significant performance gains unless we're gonna cherry-pick a couple of select titles. I'm yet to see any mainstream, half-well-optimised game max out any current-gen 6 cores/12 threads, let alone demand 8 cores/16 threads for some minimal overall performance uplift (which is largely clockspeed variance as opposed to additional cores for most titles).

I know you were specifically referencing "frametimes", but I thought the above was necessary for the less informed reader. As for frametime analysis:

7600X frametime analysis

7700X frametime analysis

I don't see anything of significance that makes 6-core gaming parts "silly" in "2023". Same goes for the previous-gen 5600X/12600K.... still fantastic for getting your feet wet with smooth gameplay. Moreover, not everyone is willing to fork out $400-$500 for a flagship gaming chip regardless of the performance feat.... a more affordable 6-core X3D would have been a nice, tempting green card onto an already expensive DDR5 AM5 platform. I get the feeling a 7600X3D is something to expect a little later, after AMD has exhausted current elevated Zen 3 sales (5600~5800X3D).
 
Joined
Jun 10, 2014
Messages
2,985 (0.78/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
What you refer to is not thread scheduling but thread balancing (into time slices that are, indeed, on that scale). An OS can assign a thread way faster than that if the CPU is ready, and the thread will release the CPU once it's done.
The OS scheduler runs at a certain interval (in Linux it's normally fixed; in Windows it's somewhat dynamic, ~0.1 ms to 20 ms). Running a different scheduler (or changing its parameters), e.g. a "low latency" kernel in Linux, enables faster thread switching, which in turn cuts down delays. I've actually done this exact thing to explore the effects of the OS scheduler on a rendering engine, so I know how it works.
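For anyone curious, a crude probe along these lines (this conflates timer resolution with scheduling granularity, so treat it as an illustration rather than a measurement of the scheduler interval):

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    double worst_ms = 0.0;
    // Ask for a 1 ms sleep repeatedly and record the worst overshoot.
    // On a default desktop kernel the overshoot tends to sit in the
    // millisecond range; a higher tick rate / "low latency" configuration
    // usually shrinks it.
    for (int i = 0; i < 200; ++i) {
        auto t0 = clock::now();
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
        double actual_ms =
            std::chrono::duration<double, std::milli>(clock::now() - t0).count();
        if (actual_ms - 1.0 > worst_ms) worst_ms = actual_ms - 1.0;
    }
    std::printf("worst overshoot on a 1 ms sleep: %.3f ms\n", worst_ms);
}
```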

What you describe is how the OS switches between threads when resources are limited (no more cores/SMT slots available) and it needs to switch between threads to get the job done faster.
You really don't understand how it works. Even your phrasing here is evidence you don't grasp the subject.

It's not affinity, but knowing how many threads they need to launch to get some balance.
Yet another argument which doesn't make sense.

And still, the main thread remains the bottleneck in most cases. Otherwise you would see a 20-25% gain going from 6 cores to 8 cores in a CPU-limited scenario, and that is still not the case. Also, if threads were assigned statically by the engine, games would run really poorly on newer-gen CPUs with fewer cores (like the 7600X). But still today, this CPU kicks the ass of many CPUs with more cores, because it does not matter; only raw performance matters.
That makes no sense. That's not how performance scaling in games works.
This indeed makes no sense for game scaling. But this is how it works for things that aren't bottlenecked by a single thread, like 3D rendering.
Well, it was your claim. :rolleyes:

The benefits of cache will indeed not be linear. But tripling it will give a significant performance gain when the code frequently reuses the same data, and this is what games usually do. But your concern is also totally valid, because there are many other applications that just load new data all the time and barely reuse it, making the L3 cache almost useless.
Then let me try to explain it from another angle to see if you get the point:
Think of the amount of data vs. the amount of code that the CPU is churning through in a given timeframe. As you hopefully know, most games have vastly less code than allocated memory, and even with system libraries etc., data is still much larger than code. On top of this, any programmer will know most algorithms execute code paths over and over, so the chance of a given cache line getting a hit is at least one order of magnitude higher if the cache line is code (vs. data), if not two. On top of that, the chance of a core getting a data cache hit from another core is very tiny, even much smaller than a data cache hit from its own evicted data cache line. This is why programmers who know low-level optimization know that L3 cache sensitivity often has to do with the instruction cache, which in turn is very closely tied to the computational density of the code (in other words, how bloated the code is). This is why so few applications get a significant boost from loads of extra L3: they are simply too efficient, which is actually a good thing, if you can grasp it.

And as for your idea of running entire datasets in L3, this is not realistic in real-world scenarios on desktop computers. There are loads of background threads which will "pollute" your L3, and even just a web browser with some idling tabs will do a lot. For your idea to "work" you need something like an embedded system or a controlled environment. Hypothetically, if we had a CPU with L3 split into instruction and data caches, now that would be a bit more interesting.
 
Joined
Sep 9, 2011
Messages
15 (0.00/day)
Actually, this misconception is why we see so little gains from 3D V-Cache. The entire L1/L2/L3 hierarchy is overwritten many times in the lifecycle of a single frame, so there is no possibility of benefits here across frames.
We only see some applications and games benefit significantly from massive L3 caches, and it's usually not the ones which are the most computationally intensive; this is because of instruction cache hits, not data. When software is sensitive to instruction cache, it's usually a sign of unoptimized or bloated code. For this reason I'm not particularly excited about extra L3 cache, as this mainly benefits "poor" code. But when the underlying technology eventually is used to build bigger cores with varying features, now that's exciting.
Yet the 5800X3D wipes the floor with the 13900K in MSFS, even though it is 1 GHz slower, uses slower memory and has lower IPC. I have even seen reports of the AMD part beating the 13900K in DCS, a notoriously old code base that mostly runs on one core. This disparity, however, is unique to flight sims; in most other mainstream games they are at parity. The press has said "the 3D cache is miraculous" or something like that. I never really understood that, and most comparisons neglect the bandwidth of the Infinity Fabric, the effect on latency and other details that might account for the difference.
 
Joined
Dec 10, 2022
Messages
486 (0.68/day)
System Name The Phantom in the Black Tower
Processor AMD Ryzen 7 5800X3D
Motherboard ASRock X570 Pro4 AM4
Cooling AMD Wraith Prism, 5 x Cooler Master Sickleflow 120mm
Memory 64GB Team Vulcan DDR4-3600 CL18 (4×16GB)
Video Card(s) ASRock Radeon RX 7900 XTX Phantom Gaming OC 24GB
Storage WDS500G3X0E (OS), WDS100T2B0C, TM8FP6002T0C101 (x2) and ~40TB of total HDD space
Display(s) Haier 55E5500U 55" 2160p60Hz
Case Ultra U12-40670 Super Tower
Audio Device(s) Logitech Z200
Power Supply EVGA 1000 G2 Supernova 1kW 80+Gold-Certified
Mouse Logitech MK320
Keyboard Logitech MK320
VR HMD None
Software Windows 10 Professional
Benchmark Scores Fire Strike Ultra: 19484 Time Spy Extreme: 11006 Port Royal: 16545 SuperPosition 4K Optimised: 23439
This is a serious mistake on the part of AMD. It has been repeatedly shown that the 3D cache has little to no positive impact on productivity tasks while often having an incredible positive impact on gaming. Why would they put the 3D cache on the R9-7900X and R9-7950X, two CPUs that specialise in productivity? Most people who buy CPUs like that buy them for productivity and wouldn't benefit much from the 3D cache. To me, that says a lot of people won't pay the extra for 3D cache on those CPUs. OTOH, the R5-7600X is primarily a gaming CPU and if AMD had put 3D cache on that, they would've ended up owning the gaming CPU market. The R7-7800X3D is the only one of the three that they're releasing that makes any sense whatsoever because some gamers do buy 8-core CPUs.

We are in the process of watching AMD stupidly snatch defeat from the jaws of victory. :kookoo:
 
Joined
Dec 12, 2012
Messages
773 (0.18/day)
Location
Poland
System Name THU
Processor Intel Core i5-13600KF
Motherboard ASUS PRIME Z790-P D4
Cooling SilentiumPC Fortis 3 v2 + Arctic Cooling MX-2
Memory Crucial Ballistix 2x16 GB DDR4-3600 CL16 (dual rank)
Video Card(s) MSI GeForce RTX 4070 Ventus 3X OC 12 GB GDDR6X (2610/21000 @ 0.91 V)
Storage Lexar NM790 2 TB + Corsair MP510 960 GB + PNY XLR8 CS3030 500 GB + Toshiba E300 3 TB
Display(s) LG OLED C8 55" + ASUS VP229Q
Case Fractal Design Define R6
Audio Device(s) Yamaha RX-V381 + Monitor Audio Bronze 6 + Bronze FX | FiiO E10K-TC + Sony MDR-7506
Power Supply Corsair RM650
Mouse Logitech M705 Marathon
Keyboard Corsair K55 RGB PRO
Software Windows 10 Home
Benchmark Scores Benchmarks in 2024?
There are people who do gaming and a lot of productivity on the same machine. But I do not think a lot of them care about getting the absolute best gaming performance. A 7950X is already the best of both worlds, I doubt many people would spend the extra $100 or whatever they ask.

But some will, and they are going to get their high margins, which is the only thing they care about.

They do not want to sell 6-core CPUs, because they are using 8-core chiplets. And I doubt many of those super tiny chiplets are defective, probably only some of the ones on the very edge of the wafer.

A 7600X3D would hinder the sales of the 8-core X3D model. They would sell millions of them, but their margins would be super low, and that goes against everything corporations believe in.

Every now and then each company will have a product with unbelievable value, but it will never become a common thing, because the big heads will not allow it.
 
Joined
Jan 20, 2019
Messages
1,550 (0.73/day)
Location
London, UK
System Name ❶ Oooh (2024) ❷ Aaaah (2021) ❸ Ahemm (2017)
Processor ❶ 5800X3D ❷ i7-9700K ❸ i7-7700K
Motherboard ❶ X570-F ❷ Z390-E ❸ Z270-E
Cooling ❶ ALFIII 360 ❷ X62 + X72 (GPU mod) ❸ X62
Memory ❶ 32-3600/16 ❷ 32-3200/16 ❸ 16-3200/16
Video Card(s) ❶ 3080 X Trio ❷ 2080TI (AIOmod) ❸ 1080TI
Storage ❶ NVME/SSD/HDD ❷ <SAME ❸ SSD/HDD
Display(s) ❶ 1440/165/IPS ❷ 1440/144/IPS ❸ 1080/144/IPS
Case ❶ BQ Silent 601 ❷ Cors 465X ❸ Frac Mesh C
Audio Device(s) ❶ HyperX C2 ❷ HyperX C2 ❸ Logi G432
Power Supply ❶ HX1200 Plat ❷ RM750X ❸ EVGA 650W G2
Mouse ❶ Logi G Pro ❷ Razer Bas V3 ❸ Logi G502
Keyboard ❶ Logi G915 TKL ❷ Anne P2 ❸ Logi G610
Software ❶ Win 11 ❷ 10 ❸ 10
Benchmark Scores I have wrestled bandwidths, Tussled with voltages, Handcuffed Overclocks, Thrown Gigahertz in Jail
This is a serious mistake on the part of AMD. It has been repeatedly shown that the 3D cache has little to no positive impact on productivity tasks while often having an incredible positive impact on gaming. Why would they put the 3D cache on the R9-7900X and R9-7950X, two CPUs that specialise in productivity? Most people who buy CPUs like that buy them for productivity and wouldn't benefit much from the 3D cache. To me, that says a lot of people won't pay the extra for 3D cache on those CPUs. OTOH, the R5-7600X is primarily a gaming CPU and if AMD had put 3D cache on that, they would've ended up owning the gaming CPU market. The R7-7800X3D is the only one of the three that they're releasing that makes any sense whatsoever because some gamers do buy 8-core CPUs.

We are in the process of watching AMD stupidly snatch defeat from the jaws of victory. :kookoo:

A serious mistake?

You're essentially suggesting anyone purchasing these chips has no intention of gaming? You can't ignore MT specialists being as human as you and I - there's a nice chunk of these workstation-stroke-gaming pundits who will definitely buy into these chips. I myself would fancy something like a 7900X3D for gaming, transcoding and the occasional video rendering.... although since these non-gaming tasks are irregular I'd save my money and grab a 7700X3D (not that I'm looking to buy into AM5 unless DDR5/MOBO prices shrink).
 

imeem1

New Member
Joined
Jun 15, 2022
Messages
17 (0.02/day)
Bought my 7950X during Black Friday, but I didn't open it yet because I was waiting for an even bigger price drop, or a 7950X3D announcement. Like others, I'm going to be doing everything and anything on my PC simultaneously, so the 7950X3D will be exactly what I'm looking for.

I saw the x3d is good in Flight Simulator.
 
Joined
Jun 2, 2017
Messages
9,127 (3.34/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP 700, Seagate 530 2Tb, Adata SX8200 2TBx2, Kingston 2 TBx2, Micron 8 TB, WD AN 1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitch Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech g7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64 Steam. GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
A serious mistake?

You're essentially suggesting anyone purchasing these chips has no intention of gaming? You can't ignore MT specialists being as human as you and I - there's a nice chunk of these workstation-stroke-gaming pundits who will definitely buy into these chips. I myself would fancy something like a 7900X3D for gaming, transcoding and the occasional video rendering.... although since these non-gaming tasks are irregular I'd save my money and grab a 7700X3D (not that I'm looking to buy into AM5 unless DDR5/MOBO prices shrink).
I just sold a workstation system yesterday. The buyer told me that he had been reviewing the parts before he came (5900X, water-cooled 6800XT, 4 TB of NVMe storage and an 850 W PSU). During the transaction I asked the buyer if he knew what Steam was, and he looked at me like I had two heads. After I had him test the PC, he looked at me and said, "What was that game where you are a car thief again?" I said, "You mean GTA5," and he said, "There is a 5th one!?" I then showed him Humble Choice for the month and he was sold. I contacted him this morning to see how his rig was going, and he told me he was waiting to go to Canada Computers to get more cables because he will be gaming on one of his screens while working sometimes. I myself am guilty of getting HEDT because I love PCIe lanes, and you would have to have had that to understand how I have built something like 9 PCs using the storage from my HEDT. If only AMD had not thought that bringing the 3950X and a 12-core HEDT part (3945X) to the masses at an affordable price would have eaten into 5950X sales - though that might not have been the case.
 
Joined
Dec 10, 2022
Messages
486 (0.68/day)
System Name The Phantom in the Black Tower
Processor AMD Ryzen 7 5800X3D
Motherboard ASRock X570 Pro4 AM4
Cooling AMD Wraith Prism, 5 x Cooler Master Sickleflow 120mm
Memory 64GB Team Vulcan DDR4-3600 CL18 (4×16GB)
Video Card(s) ASRock Radeon RX 7900 XTX Phantom Gaming OC 24GB
Storage WDS500G3X0E (OS), WDS100T2B0C, TM8FP6002T0C101 (x2) and ~40TB of total HDD space
Display(s) Haier 55E5500U 55" 2160p60Hz
Case Ultra U12-40670 Super Tower
Audio Device(s) Logitech Z200
Power Supply EVGA 1000 G2 Supernova 1kW 80+Gold-Certified
Mouse Logitech MK320
Keyboard Logitech MK320
VR HMD None
Software Windows 10 Professional
Benchmark Scores Fire Strike Ultra: 19484 Time Spy Extreme: 11006 Port Royal: 16545 SuperPosition 4K Optimised: 23439
There are people who do gaming and a lot of productivity on the same machine. But I do not think a lot of them care about getting the absolute best gaming performance. A 7950X is already the best of both worlds, I doubt many people would spend the extra $100 or whatever they ask.

But some will, and they are going to get their high margins, which is the only thing they care about.

They do not want to sell 6-core CPUs, because they are using 8-core chiplets. And I doubt many of those super tiny chiplets are defective, probably only some of the ones on the very edge of the wafer.
I would say then that they should stop producing the R5-7600X and just replace it with the R5-3600X3D. The non-X parts are the budget parts and they can be left alone.
A 7600X3D would hinder the sales of the 8-core X3D model. They would sell millions of them, but their margins would be super low, and that goes against everything corporations believe in.
It would and it wouldn't. For someone who couldn't afford the "7800X3D" to begin with, an R5-7600X3D might stop them from deciding to buy an i5-13600K. That's another person hooked on the AM5 platform which means at least one more AMD CPU purchase to come. My own AM4 "Odyssey" has been R7-1700, R5-3600X, R7-5700X and R7-5800X3D on three motherboards. (X370, X570, A320)

I have an A320 motherboard because I came across one that was so hilarious that I had to buy it. Canada Computers had the Biostar A320M on clearance for only CA$40! My first build back in 1988 had a Biostar motherboard so I have a soft spot in my heart for them. This board looked so pathetic that I just couldn't say no to it. It was actually kinda cute:

It doesn't even have an M.2 port! The thing is though, it cost CA$40, and IT WORKS! You can't go wrong there.
Every now and then each company will have a product with unbelievable value, but it will never become a common thing, because the big heads will not allow it.
Well those big heads need to be given a shake because that's how you gain an advantageous market position. It's the difference between strategy and tactics. Just look at what nVidia did with the GTX 1080 Ti. Was it a phenomenal value? Absolutely! Is it one of the major reasons why nVidia is seen as the market leader? Absolutely! The long-term benefits typically outweigh the short-term losses. This isn't some new revelation either so those "big heads" must be pretty empty if they can't comprehend this.
Bought my 7950X during Black Friday, but I didn't open it yet because I was waiting for an even bigger price drop, or a 7950X3D announcement. Like others, I'm going to be doing everything and anything on my PC simultaneously, so the 7950X3D will be exactly what I'm looking for.
I do everything and anything on my PC too. The thing is, the most hardware-hardcore tasks that I do are gaming tasks. For everything else, hell, even my old FX-8350 would probably be fast enough. It's not like Windows or Firefox have greatly increased their hardware requirements in the last ten years.
I saw the x3d is good in Flight Simulator.
Yep. From what I understand the R7-5800X3D is currently the fastest CPU for FS2020. God only knows what kind of performance a Zen4 X3D CPU would bring. At the same time, I don't think that any performance advantage over the 5800X3D would be anything of significant value because it already runs FS2020 perfectly and you can't get better than perfect.
I just sold a workstation system yesterday. The buyer told me that he had been reviewing the parts before he came (5900X, water-cooled 6800XT, 4 TB of NVMe storage and an 850 W PSU). During the transaction I asked the buyer if he knew what Steam was, and he looked at me like I had two heads. After I had him test the PC, he looked at me and said, "What was that game where you are a car thief again?" I said, "You mean GTA5," and he said, "There is a 5th one!?" I then showed him Humble Choice for the month and he was sold. I contacted him this morning to see how his rig was going, and he told me he was waiting to go to Canada Computers to get more cables because he will be gaming on one of his screens while working sometimes. I myself am guilty of getting HEDT because I love PCIe lanes, and you would have to have had that to understand how I have built something like 9 PCs using the storage from my HEDT. If only AMD had not thought that bringing the 3950X and a 12-core HEDT part (3945X) to the masses at an affordable price would have eaten into 5950X sales - though that might not have been the case.
So you opened his eyes to what is possible! See, I love reading things like this and while I'm glad that you did what you did (because his life may never be the same now, which is a good thing), it also serves to underscore just how oblivious a lot of workstation users are when it comes to gaming on a PC. I believe that most of the 12 and 16-core parts get bought by businesses for their own workstations (assuming that they don't need anything like Threadripper or EPYC) and gaming is far from their desired use. In fact, I'd be willing to bet that Zen4's IGP makes those CPUs even more attractive to businesses, because how much GPU power does an office app use?
Abso-friggin-lutely

The 5600 is IMO, the best choice for 99% of gamers out there - you need a serious GPU to ever have it even be a limit.
I agree. The R5-5600 is a tremendous value, but only in the short-term. In the long-term, the R7-5800X3D will leave it in the dust. It's like back when people were scooping up the R3-3100X faster than AMD could make them. They were an unbelievable value at the time, but, like most quad-cores, their legs proved to be pretty short over the years.
because the 5800x3D did run at lower clocks (overall mine runs 4.45GHz all core vs 4.6GHz in AVX workloads, soooo much slower) it'd be hard to advertise it as an all purpose product when it'd have a deficit in some commonly used setups
Well, here's the catch... Technically, ALL x86 CPUs are all-purpose products. There's nothing that the R9-7950X can do that my FX-8350 can't. It's just a matter of how fast and so for certain tasks, there are CPUs that are better than others. However, that doesn't change the fact that they can all do everything that any other x86 CPU can do. That does meet my definition of all-purpose.

Q: Can an R7-5800X3D run blender?
A: Yep.
Q: Can it run ARM applications?
A: Yep.
Q: Can it run Adobe Premiere?
A: Yep.
Q: Can it run a virtual machine?
A: Yep.
Q: Can it run MS-Office or LibreOffice?
A: Yep.
Q: Can it run a multimedia platform?
A: Yep.
Q: Can it run a local server?
A: Yep.
Q: Can it run 7zip?
A: Yep.
Q: Can it run DOOM?
A: Tell me one thing that CAN'T run DOOM!

And finally...
Q: CAN IT RUN CRYSIS?
A: Yep.

Ok, so yeah, it's an all-purpose CPU!
What'd be amazing is if AMD had the pull intel does with microsoft, and could release a CPU with one 3D stacked die and others without - 3D becomes the P cores, and the others do the higher wattage boring workloads
That's why I think AMD made a serious screwup by not having a hexacore 3D CPU. The reason that Intel has that much pull with MS is the fact that most Windows PCs still have an Intel CPU at their heart. If AMD managed to conquer the gamer market with an R5-7600X3D, then they too would have considerable pull with Microsoft, far more than they have now anyway.
 

Count von Schwalbe

Moderator
Staff member
Joined
Nov 15, 2021
Messages
3,065 (2.78/day)
Location
Knoxville, TN, USA
System Name Work Computer | Unfinished Computer
Processor Core i7-6700 | Ryzen 5 5600X
Motherboard Dell Q170 | Gigabyte Aorus Elite Wi-Fi
Cooling A fan? | Truly Custom Loop
Memory 4x4GB Crucial 2133 C17 | 4x8GB Corsair Vengeance RGB 3600 C26
Video Card(s) Dell Radeon R7 450 | RTX 2080 Ti FE
Storage Crucial BX500 2TB | TBD
Display(s) 3x LG QHD 32" GSM5B96 | TBD
Case Dell | Heavily Modified Phanteks P400
Power Supply Dell TFX Non-standard | EVGA BQ 650W
Mouse Monster No-Name $7 Gaming Mouse| TBD
I would say then that they should stop producing the R5-7600X and just replace it with the R5-3600X3D. The non-X parts are the budget parts and they can be left alone.
5600X3D would be a better choice. They could use an existing chiplet design, and it's not like the 3600 is particularly cheaper to produce than the 5600X - same node and fewer chiplets to attach to the substrate. Also, gaming performance is already much better with Zen 3.
 
Joined
Dec 10, 2022
Messages
486 (0.68/day)
System Name The Phantom in the Black Tower
Processor AMD Ryzen 7 5800X3D
Motherboard ASRock X570 Pro4 AM4
Cooling AMD Wraith Prism, 5 x Cooler Master Sickleflow 120mm
Memory 64GB Team Vulcan DDR4-3600 CL18 (4×16GB)
Video Card(s) ASRock Radeon RX 7900 XTX Phantom Gaming OC 24GB
Storage WDS500G3X0E (OS), WDS100T2B0C, TM8FP6002T0C101 (x2) and ~40TB of total HDD space
Display(s) Haier 55E5500U 55" 2160p60Hz
Case Ultra U12-40670 Super Tower
Audio Device(s) Logitech Z200
Power Supply EVGA 1000 G2 Supernova 1kW 80+Gold-Certified
Mouse Logitech MK320
Keyboard Logitech MK320
VR HMD None
Software Windows 10 Professional
Benchmark Scores Fire Strike Ultra: 19484 Time Spy Extreme: 11006 Port Royal: 16545 SuperPosition 4K Optimised: 23439
5600X3D would be a better choice. They could use an existing chiplet design, and it's not like the 3600 is particularly cheaper to produce than the 5600X - same node and fewer chiplets to attach to the substrate. Also, gaming performance is already much better with Zen 3.
The problem is that they're not making those anymore and they don't want to. The more AM4 upgrade CPUs they make, the fewer AM5 CPUs and motherboards they can sell. There's also the fact that they'd have to sell the 5600X3D for even less than the 7600X3D would be and no corporation wants less money.

I do however agree that an R5-5600X3D would've made more sense than the R7-5800X3D. The thing is, I gave them a pass because it was their first "kick at the can" so to speak. I won't give them the same pass with AM5 because the 5800X3D showed that they clearly got it right. It's not just that they're being greedy, it's that they're being stupid. I really don't think that the 3D cache has enough of a positive impact on productivity that the productivity users will be willing to pay extra for it and they'll just gather dust like the RTX 4080 and RX 7900 XT. That's not good for anybody.
 

Count von Schwalbe

Moderator
Staff member
Joined
Nov 15, 2021
Messages
3,065 (2.78/day)
Location
Knoxville, TN, USA
System Name Work Computer | Unfinished Computer
Processor Core i7-6700 | Ryzen 5 5600X
Motherboard Dell Q170 | Gigabyte Aorus Elite Wi-Fi
Cooling A fan? | Truly Custom Loop
Memory 4x4GB Crucial 2133 C17 | 4x8GB Corsair Vengeance RGB 3600 C26
Video Card(s) Dell Radeon R7 450 | RTX 2080 Ti FE
Storage Crucial BX500 2TB | TBD
Display(s) 3x LG QHD 32" GSM5B96 | TBD
Case Dell | Heavily Modified Phanteks P400
Power Supply Dell TFX Non-standard | EVGA BQ 650W
Mouse Monster No-Name $7 Gaming Mouse| TBD
I do however agree that an R5-5600X3D would've made more sense than the R7-5800X3D.
I was saying it would make more sense than the 3600X3D.

The trouble with a 6-core X3D, 5K or 7K series, is that the X3D bit commands a $150 price premium - would you have bought a $400 6-core? I think enough people were complaining about the low core count/MT performance of the 5800X3D.
 