
MSI Drops First Hint of AMD Increasing AM4 CPU Core Counts

Joined
Feb 18, 2005
Messages
5,847 (0.81/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) 3x AOC Q32E2N (32" 2560x1440 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G602
Keyboard Razer Pro Type Ultra
Software Windows 10 Professional x64
I reckon we'll see at most 12c/24t parts on AM4, as anything higher would cannibalise Threadripper. Plus it will make the chips even more expensive to produce.
 
Joined
Mar 21, 2016
Messages
2,508 (0.78/day)
The IPC thing with Intel has always been talked about, but never actually proven. They only gained performance from ramping up clocks; just look at the Core i7 6700 and 7700. All the performance difference came from higher clocks, not IPC.
Memory speed bumps had to have made an impact too. That's why most enthusiasts aren't running the officially supported speeds.
 
Joined
Aug 6, 2017
Messages
7,412 (2.75/day)
Location
Poland
System Name Purple rain
Processor 10.5 thousand 4.2G 1.1v
Motherboard Zee 490 Aorus Elite
Cooling Noctua D15S
Memory 16GB 4133 CL16-16-16-31 Viper Steel
Video Card(s) RTX 2070 Super Gaming X Trio
Storage SU900 128,8200Pro 1TB,850 Pro 512+256+256,860 Evo 500,XPG950 480, Skyhawk 2TB
Display(s) Acer XB241YU+Dell S2716DG
Case P600S Silent w. Alpenfohn wing boost 3 ARGBT+ fans
Audio Device(s) K612 Pro w. FiiO E10k DAC,W830BT wireless
Power Supply Superflower Leadex Gold 850W
Mouse G903 lightspeed+powerplay,G403 wireless + Steelseries DeX + Roccat rest
Keyboard HyperX Alloy SilverSpeed (w.HyperX wrist rest),Razer Deathstalker
Software Windows 10
Benchmark Scores A LOT
AMD delivered a bigger IPC increase between 1st and 2nd gen Ryzen than Intel did across its past three generations, despite Zen and Zen+ being the same chip physically. I'm hopeful.
I think it's more a result of reviewers using faster RAM in the 2018 Ryzen reviews than they did back in the 2017 Ryzen 1 reviews. That's a big improvement for Ryzen 2 over Ryzen 1 and also brings Ryzen closer to Intel's performance. Intel CPUs work on a ring bus, so there's little latency. AMD uses CCXs, which is why using 3200 CL14 memory like TPU did in the 2700X vs 8700 test usually means a slightly bigger performance improvement for AMD than for Intel. When you test both on budget 2400/2666 CL16 sticks, the gap usually grows the other way, favoring Intel.
 
Joined
Aug 13, 2010
Messages
5,482 (1.04/day)
A good improvement I can see from such a move would be a 6C/12T APU with Navi in it. That could be one hell of a 7nm powerhouse.
 
Joined
May 24, 2007
Messages
1,116 (0.17/day)
Location
Florida
System Name Blackwidow/
Processor Ryzen 5950x / Threadripper 3960x
Motherboard Asus x570 Crosshair viii impact/ Asus Zenith ii Extreme
Cooling Ek 240Aio/Custom watercooling
Memory 32gb ddr4 3600MHZ Crucial Ballistix / 32gb ddr4 3600MHZ G.Skill TridentZ Royal
Video Card(s) MSI RX 6900xt/ XFX 6800xt
Storage WD SN850 1TB boot / Samsung 970 evo+ 1tb boot, 6tb WD SN750
Display(s) Sony A80J / Dual LG 27gl850
Case Cooler Master NR200P/ 011 Dynamic XL
Audio Device(s) On board/ Soundblaster ZXR
Power Supply Corsair SF750w/ Seasonic Prime Titanium 1000w
Mouse Razer Viper Ultimate wireless/ Logitech G Pro X Superlight
Keyboard Logitech G915 TKL/ Logitech G915 Wireless
Software Win 10 Pro
A good improvement I can see from such a move would be a 6C/12T APU with Navi in it. That could be one hell of a 7nm powerhouse.
Think about the notebook/mobile segment with such a product, or a console.
 
Joined
Oct 2, 2015
Messages
3,152 (0.93/day)
Location
Argentina
System Name Ciel / Akane
Processor AMD Ryzen R5 5600X / Intel Core i3 12100F
Motherboard Asus Tuf Gaming B550 Plus / Biostar H610MHP
Cooling ID-Cooling 224-XT Basic / Stock
Memory 2x 16GB Kingston Fury 3600MHz / 2x 8GB Patriot 3200MHz
Video Card(s) Gainward Ghost RTX 3060 Ti / Dell GTX 1660 SUPER
Storage NVMe Kingston KC3000 2TB + NVMe Toshiba KBG40ZNT256G + HDD WD 4TB / NVMe WD Blue SN550 512GB
Display(s) AOC Q27G3XMN / Samsung S22F350
Case Cougar MX410 Mesh-G / Generic
Audio Device(s) Kingston HyperX Cloud Stinger Core 7.1 Wireless PC
Power Supply Aerocool KCAS-500W / Gigabyte P450B
Mouse EVGA X15 / Logitech G203
Keyboard VSG Alnilam / Dell
Software Windows 11
They need to design it so the CPU always works preferentially within a single CCX as much as possible (if they aren't already doing it). That avoids communication between separate CCXs, which is slower than within the same CCX.
I think that's the OS's fault.
 
Joined
Aug 6, 2017
Messages
7,412 (2.75/day)
Location
Poland
System Name Purple rain
Processor 10.5 thousand 4.2G 1.1v
Motherboard Zee 490 Aorus Elite
Cooling Noctua D15S
Memory 16GB 4133 CL16-16-16-31 Viper Steel
Video Card(s) RTX 2070 Super Gaming X Trio
Storage SU900 128,8200Pro 1TB,850 Pro 512+256+256,860 Evo 500,XPG950 480, Skyhawk 2TB
Display(s) Acer XB241YU+Dell S2716DG
Case P600S Silent w. Alpenfohn wing boost 3 ARGBT+ fans
Audio Device(s) K612 Pro w. FiiO E10k DAC,W830BT wireless
Power Supply Superflower Leadex Gold 850W
Mouse G903 lightspeed+powerplay,G403 wireless + Steelseries DeX + Roccat rest
Keyboard HyperX Alloy SilverSpeed (w.HyperX wrist rest),Razer Deathstalker
Software Windows 10
Benchmark Scores A LOT
Think about the notebook/mobile segment with such a product, or a console.
a ryzen apu with navi would be basically an xbox inside a pc.

I think that's the OS's fault.
Win 10 was never designed to work with CCX-based CPUs in the first place. AMD usually comes up with stuff that provides more raw performance: their GPUs have more SPs and TFLOPS, their CPUs have more cores. That performance usually gets lost in many tasks though, since for it to pay off you need compatible software. Not the fault of the OS, not the fault of AMD; it just requires adoption time because it's so different.
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Joined
Oct 2, 2004
Messages
13,791 (1.87/day)
Memory speed bumps had to have made an impact too. That's why most enthusiasts aren't running the officially supported speeds.

Isn't that always the case? On X58 it was 1333MHz if I remember correctly, and I was running 1600MHz RAM. On X99 it's 2133MHz, later bumped to 2400MHz IIRC. I'm running 2666MHz RAM. Usually we run faster memory than specified.
 
Joined
Aug 16, 2016
Messages
1,025 (0.34/day)
Location
Croatistan
System Name 1.21 gigawatts!
Processor Intel Core i7 6700K
Motherboard MSI Z170A Krait Gaming 3X
Cooling Be Quiet! Shadow Rock Slim with Arctic MX-4
Memory 16GB G.Skill Ripjaws V DDR4 3000 MHz
Video Card(s) Palit GTX 1080 Game Rock
Storage Mushkin Triactor 240GB + Toshiba X300 4TB + Team L3 EVO 480GB
Display(s) Philips 237E7QDSB/00 23" FHD AH-IPS
Case Aerocool Aero-1000 white + 4 Arctic F12 PWM Rev.2 fans
Audio Device(s) Onboard Audio Boost 3 with Nahimic Audio Enhancer
Power Supply FSP Hydro G 650W
Mouse Cougar 700M eSports white
Keyboard E-Blue Cobra II
Software Windows 8.1 Pro x64
Benchmark Scores Cinebench R15: 948 (stock) / 1044 (4,7 GHz) FarCry 5 1080p Ultra: min 100, avg 116, max 133 FPS
A better solution would be to improve IPC and slightly increase clocks (e.g. an 8C/16T Ryzen @ 3.8 GHz base / 4.5 GHz turbo). Intel has been gaining performance almost exclusively from higher clocks ever since Skylake, with practically no IPC improvement.
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
A better solution would be to improve IPC and slightly increase clocks (e.g. an 8C/16T Ryzen @ 3.8 GHz base / 4.5 GHz turbo). Intel has been gaining performance almost exclusively from higher clocks ever since Skylake, with practically no IPC improvement.
Yep. Luckily, everything AMD has said since the launch of Ryzen points towards there being noticeable IPC improvements (the "low hanging fruit" quote in particular) coming in short order, and the move away from low-power processes (GF 14nm) to high-speed ones (12nm to a certain degree, 7nm significantly more) more suited for desktop/high-performance parts should help boost clocks even beyond the 1st-to-2nd gen increase.

While I wouldn't mind pushing the maximum amount of cores on the mainstream platform even further (the option for a 12-core doesn't hurt anyone), the gains are mostly fictional at this point. My GF's TR 1920X workstation crushes my R5 1600X gaming build in Adobe Premiere, but mine is just as fast (or faster) in everyday tasks and gaming. Software (and games in particular) really needs to branch out and utilize more cores (and more CPU resources in general - games barely require more CPU power now than 10 years ago, while GPU utilization has skyrocketed), and increasing core counts on CPUs doesn't really get you anything if that increase in utilization doesn't arrive early in the 3-4-year lifespan of the average enthusiast CPU. der8auer made a good point about this in a recent video - game developers need to start looking into what they can do with the current crop of really, really powerful CPUs.
 
Joined
Jan 8, 2017
Messages
9,509 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
They need to design it so the CPU always works preferentially within a single CCX as much as possible (if they aren't already doing it).

There is no point in doing that; Zen isn't a heterogeneous architecture. Better/faster cache will sort this out. A CPU never talks directly to system memory; it goes through each cache level, and only if the instruction/data isn't found there does it access main memory.

Ryzen 2 has lower cache latency, and as a result memory I/O is improved across the board.
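As a rough illustration of that cache-level walk (a minimal sketch, not from this thread, with assumed Zen/Zen+ cache sizes in the comments and an arbitrary load count), a pointer-chasing loop shows average load latency jumping each time the working set outgrows L1, L2 and then the CCX-local L3:

```cpp
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

// Average nanoseconds per dependent load for a working set of `bytes`.
static double chase_latency_ns(std::size_t bytes, std::size_t loads = 1u << 24) {
    const std::size_t n = bytes / sizeof(std::size_t);
    std::vector<std::size_t> order(n), next(n);
    std::iota(order.begin(), order.end(), std::size_t{0});
    std::shuffle(order.begin(), order.end(), std::mt19937_64{42});
    for (std::size_t i = 0; i + 1 < n; ++i) next[order[i]] = order[i + 1];
    next[order[n - 1]] = order[0];  // one random cycle over all slots defeats the prefetchers

    std::size_t cur = order[0];
    const auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < loads; ++i) cur = next[cur];  // each load depends on the previous one
    const auto t1 = std::chrono::steady_clock::now();
    volatile std::size_t sink = cur;  // keep the chain from being optimised away
    (void)sink;
    return std::chrono::duration<double, std::nano>(t1 - t0).count() / static_cast<double>(loads);
}

int main() {
    // Rough Zen/Zen+ sizes assumed: 32 KB L1D, 512 KB L2, 8 MB L3 per CCX.
    for (std::size_t kb : {16, 256, 4096, 65536})  // ~L1, ~L2, ~L3, main memory
        std::printf("%6zu KB working set: %5.1f ns per load\n", kb, chase_latency_ns(kb * 1024));
}
```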
 
Joined
Oct 2, 2004
Messages
13,791 (1.87/day)
There is no point in doing that; Zen isn't a heterogeneous architecture. Better/faster cache will sort this out. A CPU never talks directly to system memory; it goes through each cache level, and only if the instruction/data isn't found there does it access main memory.

Ryzen 2 has lower cache latency, and as a result memory I/O is improved across the board.

I wasn't talking about system memory. I was talking about preferring communication within a single CCX whenever that is possible, so that apps/games don't use 2 cores from one CCX and 2 from another. It's best if they use all the cores from the same CCX and only spill over into another once all of the first CCX's cores are in use (currently a CCX holds 4 cores).
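For illustration, this is roughly what preferring one CCX looks like when forced explicitly from software. A minimal Windows sketch (not from this thread) follows; it assumes logical CPUs 0-7 map to the first CCX, whereas a real program would query the topology (e.g. via GetLogicalProcessorInformationEx) instead of hard-coding the mask:

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    // Assumption: logical CPUs 0-7 are the first 4-core CCX plus its SMT siblings.
    const DWORD_PTR firstCcxMask = 0xFF;
    DWORD_PTR previous = SetThreadAffinityMask(GetCurrentThread(), firstCcxMask);
    if (previous == 0) {
        std::printf("SetThreadAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    std::printf("Thread now restricted to CCX0; old mask was 0x%llx\n",
                static_cast<unsigned long long>(previous));
    // ... run latency-sensitive work here; cross-CCX hops are avoided for this thread ...
    return 0;
}
```

Explicit affinity like this is how a game or benchmark can force the behaviour described above regardless of what the scheduler decides on its own.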
 
Joined
Aug 6, 2017
Messages
7,412 (2.75/day)
Location
Poland
System Name Purple rain
Processor 10.5 thousand 4.2G 1.1v
Motherboard Zee 490 Aorus Elite
Cooling Noctua D15S
Memory 16GB 4133 CL16-16-16-31 Viper Steel
Video Card(s) RTX 2070 Super Gaming X Trio
Storage SU900 128,8200Pro 1TB,850 Pro 512+256+256,860 Evo 500,XPG950 480, Skyhawk 2TB
Display(s) Acer XB241YU+Dell S2716DG
Case P600S Silent w. Alpenfohn wing boost 3 ARGBT+ fans
Audio Device(s) K612 Pro w. FiiO E10k DAC,W830BT wireless
Power Supply Superflower Leadex Gold 850W
Mouse G903 lightspeed+powerplay,G403 wireless + Steelseries DeX + Roccat rest
Keyboard HyperX Alloy SilverSpeed (w.HyperX wrist rest),Razer Deathstalker
Software Windows 10
Benchmark Scores A LOT
The fact that it isn't heterogeneous doesn't mean that performing tasks within one CCX isn't better.
 
Joined
Jan 8, 2017
Messages
9,509 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
I wasn't talking about system memory. I was talking about preferring communication within a single CCX whenever that is possible, so that apps/games don't use 2 cores from one CCX and 2 from another. It's best if they use all the cores from the same CCX and only spill over into another once all of the first CCX's cores are in use (currently a CCX holds 4 cores).

What you are talking about has everything to do with cache and general memory I/O performance; that's why I mentioned it. Faster connections between the distinct L3 cache regions, and not using them as victim caches, would fix that deficiency. It would also be a much simpler solution than coming up with complex scheduling that may require complicated hardware blocks, which would occupy space that could otherwise be used for something else.
 

Deleted member 178884

Guest
10 cores minimum, for certain; that would probably be the 2800X. They'll drop it when the Coffee Lake refresh releases.
 
Joined
Oct 22, 2014
Messages
14,170 (3.81/day)
Location
Sunshine Coast
System Name H7 Flow 2024
Processor AMD 5800X3D
Motherboard Asus X570 Tough Gaming
Cooling Custom liquid
Memory 32 GB DDR4
Video Card(s) Intel ARC A750
Storage Crucial P5 Plus 2TB.
Display(s) AOC 24" Freesync 1m.s. 75Hz
Mouse Lenovo
Keyboard Eweadn Mechanical
Software W11 Pro 64 bit
A better solution would be to improve IPC and slightly increase clocks (e.g. an 8C/16T Ryzen @ 3.8 GHz base / 4.5 GHz turbo). Intel has been gaining performance almost exclusively from higher clocks ever since Skylake, with practically no IPC improvement.
Intel WAS gaining performance … it seems the mitigations that are required now have pared a lot of that back.
Perhaps Intel should have put in the hard yards and done real work to improve their IPC, not used underhanded tactics to make their product APPEAR faster.
 
Joined
Dec 14, 2013
Messages
2,734 (0.68/day)
Location
Alabama
Processor Ryzen 2600
Motherboard X470 Tachi Ultimate
Cooling AM3+ Wraith CPU cooler
Memory C.R.S.
Video Card(s) GTX 970
Software Linux Peppermint 10
Benchmark Scores Never high enough
Why is no one talking about how incredibly cringey the video is?!

:twitch:

Cheesy video about a board made of cheap-n-cheesy components.....
Yeah, not surprised here. :ohwell:
Next time I need a new MSI board I'll grab a jar of whiz cheese and dump it into the case.

I know some love MSI and that's fine; even Asus has dropped their fair share of crap at times, and admittedly as of late they too have been slipping.
I've still had a MUCH better user experience from an Asus than from anything I've ever had by MSI, in both what it could do and how long it lasted.
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,473 (4.08/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
Intel WAS gaining performance … it seems the mitigations that are required now have pared a lot of that back.
Perhaps Intel should have put in the hard yards and done real work to improve their IPC, not used underhanded tactics to make their product APPEAR faster.

Optimizing an architecture is nothing more than "underhanded" tricks to make the product faster. That is what branch prediction was: a great way to optimize architectures. That's why pretty much every processor maker uses it in one form or another.

The reason Intel was hit so hard by the security issues is that they relied on it the most, and that is because they have had the most time to optimize a single architecture. Because let's face it, Intel has been doing nothing but optimizing the same architecture since Sandy Bridge (arguably Nehalem).
 
Joined
Dec 28, 2012
Messages
3,956 (0.90/day)
System Name Skunkworks 3.0
Processor 5800x3d
Motherboard x570 unify
Cooling Noctua NH-U12A
Memory 32GB 3600 mhz
Video Card(s) asrock 6800xt challenger D
Storage Sabarent rocket 4.0 2TB, MX 500 2TB
Display(s) Asus 1440p144 27"
Case Old arse cooler master 932
Power Supply Corsair 1200w platinum
Mouse *squeak*
Keyboard Some old office thing
Software Manjaro
Yep. Luckily, everything AMD has said since the launch of Ryzen points towards there being noticeable IPC improvements (the "low hanging fruit" quote in particular) coming in short order, and the move away from low-power processes (GF 14nm) to high-speed ones (12nm to a certain degree, 7nm significantly more) more suited for desktop/high-performance parts should help boost clocks even beyond the 1st-to-2nd gen increase.

While I wouldn't mind pushing the maximum amount of cores on the mainstream platform even further (the option for a 12-core doesn't hurt anyone), the gains are mostly fictional at this point. My GF's TR 1920X workstation crushes my R5 1600X gaming build in Adobe Premiere, but mine is just as fast (or faster) in everyday tasks and gaming. Software (and games in particular) really needs to branch out and utilize more cores (and more CPU resources in general - games barely require more CPU power now than 10 years ago, while GPU utilization has skyrocketed), and increasing core counts on CPUs doesn't really get you anything if that increase in utilization doesn't arrive early in the 3-4-year lifespan of the average enthusiast CPU. der8auer made a good point about this in a recent video - game developers need to start looking into what they can do with the current crop of really, really powerful CPUs.
Games are already doing that. Look at Battlefield: it happily gobbles up as much CPU hardware as you throw at it... in multiplayer.

In singleplayer, the game really only loads 2 or 3 cores to any significant degree.

The problem is that games are naturally more single-thread oriented. Some parts, like multiplayer, can benefit from more cores, but if you are expecting single-player or low-player-count multiplayer games to effectively use 5+ threads, you are going to be disappointed. The reason CPU requirements haven't shot up is simple: there is no need for them. Most games are script heavy, and current CPUs are already good enough for these tasks. Graphics are much easier to push higher (and more demanding) year to year.

This is why IPC is just as important as MOAR CORES; some things simply will not be able to take advantage of 8+ cores and will need that single-core performance.
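A quick Amdahl's law calculation backs that up. The sketch below is illustrative only; the 70% parallel fraction is an assumed number, not a measurement of any real game:

```cpp
#include <cstdio>

int main() {
    const double p = 0.70;  // assumed parallelisable fraction of a frame's CPU work
    for (int cores : {2, 4, 8, 16})
        std::printf("%2d cores -> %.2fx speedup\n",
                    cores, 1.0 / ((1.0 - p) + p / cores));
    // With 70% parallel work, 8 cores give only ~2.6x and 16 cores ~2.9x:
    // the serial 30% dominates, which is why per-core speed still matters.
}
```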
 
Joined
Dec 10, 2015
Messages
545 (0.17/day)
Location
Here
System Name Skypas
Processor Intel Core i7-6700
Motherboard Asus H170 Pro Gaming
Cooling Cooler Master Hyper 212X Turbo
Memory Corsair Vengeance LPX 16GB
Video Card(s) MSI GTX 1060 Gaming X 6GB
Storage Corsair Neutron GTX 120GB + WD Blue 1TB
Display(s) LG 22EA63V
Case Corsair Carbide 400Q
Power Supply Seasonic SS-460FL2 w/ Deepcool XFan 120
Mouse Logitech B100
Keyboard Corsair Vengeance K70
Software Windows 10 Pro (to be replaced by 2025)
Not surprising if AMD actually brings 12 cores to AM4; their server division needs the core increase to offer more options, and by trickling it down they also expand the consumer product range.

It's a win-win situation.
 
Joined
May 2, 2017
Messages
7,762 (2.78/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Games are already doing that. Look at Battlefield: it happily gobbles up as much CPU hardware as you throw at it... in multiplayer.

In singleplayer, the game really only loads 2 or 3 cores to any significant degree.

The problem is that games are naturally more single-thread oriented. Some parts, like multiplayer, can benefit from more cores, but if you are expecting single-player or low-player-count multiplayer games to effectively use 5+ threads, you are going to be disappointed. The reason CPU requirements haven't shot up is simple: there is no need for them. Most games are script heavy, and current CPUs are already good enough for these tasks. Graphics are much easier to push higher (and more demanding) year to year.

This is why IPC is just as important as MOAR CORES; some things simply will not be able to take advantage of 8+ cores and will need that single-core performance.
You're not entirely wrong, but I don't completely agree with you either. What you're describing is the current state of AAA game development and the system load of the features present in these games. What I'm saying is that it's about time early development resources are reallocated from developing new ways of melting your GPU (which has been the key focus for a decade or more) to finding new uses for the abundant CPU power in modern PCs. Sure, CPUs are worse than GPUs for graphics, physics and lighting. Probably for spatial audio too. But is that really all there is? What about improving in-game AI? Making game worlds and NPCs more dynamic in various ways? Making player-to-world interactions more complex, deeper and more significant? That's just stuff I can come up with off the top of my head in two minutes. I'd bet a team of game or engine developers could find quite a lot to spend CPU power on that would tangibly improve game experiences in single-player. It's there for the taking, they just need to find interesting stuff to do with it.

Of course, this runs the risk of breaking the game for people with weak CPUs - scaling graphics is easy and generally accepted ("my GPU is crap so the game doesn't look good, but at least I can play"), scaling AI or other non-graphical features is far more challenging. "Sorry, your CPU is too slow, so now the AI is really dumb and there are all these nifty/cool/fun things you can no longer do" won't fly with a lot of gamers. Which I'm willing to bet the focus on improving graphics and little else comes from, and will continue to come from for a while still.
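As a sketch of the kind of work split being argued for (illustrative only; the Npc type and update_ai() are placeholders, not anything from der8auer or a real engine), per-NPC AI or world-simulation updates can be fanned out across all available hardware threads. A real engine would keep a persistent job system instead of spawning threads every frame, but the shape of the split is the same:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

struct Npc { float goal_x = 0, goal_y = 0; };

// Placeholder for path-finding, planning, dynamic-world logic, etc.
void update_ai(Npc& n, float dt) {
    n.goal_x += dt;
    n.goal_y += dt * 0.5f;
}

void update_all(std::vector<Npc>& npcs, float dt) {
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&, w] {
            // Strided split: each worker touches its own NPCs, no shared writes.
            for (std::size_t i = w; i < npcs.size(); i += workers)
                update_ai(npcs[i], dt);
        });
    }
    for (auto& t : pool) t.join();  // the frame waits for all AI work to finish
}

int main() {
    std::vector<Npc> npcs(10000);
    update_all(npcs, 0.016f);  // one simulated 16 ms frame
    std::printf("updated %zu NPCs on %u threads\n",
                npcs.size(), std::max(1u, std::thread::hardware_concurrency()));
}
```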
 
Joined
Jun 1, 2011
Messages
4,679 (0.94/day)
Location
in a van down by the river
Processor faster at instructions than yours
Motherboard more nurturing than yours
Cooling frostier than yours
Memory superior scheduling & haphazardly entry than yours
Video Card(s) better rasterization than yours
Storage more ample than yours
Display(s) increased pixels than yours
Case fancier than yours
Audio Device(s) further audible than yours
Power Supply additional amps x volts than yours
Mouse without as much gnawing as yours
Keyboard less clicky than yours
VR HMD not as odd looking as yours
Software extra mushier than yours
Benchmark Scores up yours
wake me when they finally break 185 points on the cinebench single thread test

 
Joined
Sep 17, 2014
Messages
22,684 (6.05/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
You're not entirely wrong, but I don't completely agree with you either. What you're describing is the current state of AAA game development and the system load of the features present in these games. What I'm saying is that it's about time early development resources are reallocated from developing new ways of melting your GPU (which has been the key focus for a decade or more) to finding new uses for the abundant CPU power in modern PCs. Sure, CPUs are worse than GPUs for graphics, physics and lighting. Probably for spatial audio too. But is that really all there is? What about improving in-game AI? Making game worlds and NPCs more dynamic in various ways? Making player-to-world interactions more complex, deeper and more significant? That's just stuff I can come up with off the top of my head in two minutes. I'd bet a team of game or engine developers could find quite a lot to spend CPU power on that would tangibly improve game experiences in single-player. It's there for the taking, they just need to find interesting stuff to do with it.

Of course, this runs the risk of breaking the game for people with weak CPUs - scaling graphics is easy and generally accepted ("my GPU is crap so the game doesn't look good, but at least I can play"), scaling AI or other non-graphical features is far more challenging. "Sorry, your CPU is too slow, so now the AI is really dumb and there are all these nifty/cool/fun things you can no longer do" won't fly with a lot of gamers. Which I'm willing to bet the focus on improving graphics and little else comes from, and will continue to come from for a while still.

You're right, and examples like Star Swarm and Ashes are early attempts at that. Not very good ones in terms of a 'game', but... nice tech demos. The APIs are there for this now. I think the main thing we're waiting for is mass adoption, because such games will run like a PITA on anything that doesn't support most feature levels of DX12 or Vulkan. There is still not a single killer app to push those APIs forward, even though they really need it, so this will easily take 2-3 more years.

As for AI: writing a good AI actually doesn't take all that much in terms of CPU. Look at UT'99 for good examples of that - those bots were insane. The main thing a good AI requires is expert knowledge and control of game mechanics, combined with knowledge of how players play and act. Ironically, the best AI that doesn't 'cheat' or completely overpower the player in every situation is one that also makes mistakes and acts on player interaction rather than pre-coded behaviour. And for that, we now have big data and deep/machine learning, but that is still at the super early adopter stage... and the fun thing about thát is that it's done on... GPU.

AMD delivered a bigger IPC increase between 1st and 2nd gen Ryzen than Intel did across its past three generations, despite Zen and Zen+ being the same chip physically. I'm hopeful.

I will be highly surprised if AMD manages to structurally surpass Intel's IPC. They already do it in specific workloads, but that is not enough. Only when they can get past Intel's IPC on all fronts will I buy the Intel bashing of 'they're just sitting on Skylake'. I'm more of a believer in the idea that all the fruit has been picked by now for x86, and any kind of improvement requires a radically different approach altogether. GPUs are currently suffering a similar fate, by the way, as the main source of improvements there is node shrinks, dedicated resources for specific tasks, clock bumps and 'going faster or wider' (HBM, GDDR6 etc.). I also view that as the main reason GPU makers are pushing things like ray tracing, VR and higher-res support: they are really scouring the land for new USPs.

Realistically, the only low-hanging fruit in CPU land right now IS adding cores.
 