Thursday, July 19th 2018

MSI Drops First Hint of AMD Increasing AM4 CPU Core Counts

With Intel frantically working on an 8-core socket LGA1151 processor to convincingly beat the 8-core AMD Ryzen 2000 series processors, AMD could be working on the next cycle of core-count increases for the mainstream-desktop platform. Motherboard maker MSI may have dropped the first hint that AMD is bringing >8 cores to the socket AM4 mainstream-desktop platform by mentioning, in a marketing video, that its upcoming motherboards based on the AMD B450 chipset support 8-core "and up" CPUs.

AMD will get its next opportunity to tinker with key aspects of its CPU micro-architecture with "Zen 2," which is being built on the 7 nm silicon fabrication process. If it decides to stick with the CCX approach to multi-core processors, the company could increase per-CCX core counts. A 50 percent core-count increase enables 12-core processors, while a 100 percent increase brings 16 cores to the AM4 platform. The MSI video confirms that these >8-core processors will be backwards-compatible with existing 400-series chipsets, even if they launch alongside newer 500-series chipsets.
The video follows.


88 Comments on MSI Drops First Hint of AMD Increasing AM4 CPU Core Counts

#26
InVasMani
RejZoRThe IPC thing with Intel has always been talked about, but never actually proven. They only gained performance from ramping up clocks; just look at the Core i7-6700 and 7700. All the performance difference came from higher clocks and not IPC.
Memory speed bumps as well had to have made an impact too. That's why most enthusiasts aren't running the speeds that are officially supported.
Posted on Reply
#27
cucker tarlson
btarunrAMD gave more IPC increase between 1st and 2nd gen Ryzen than Intel did between its past 3 generations; despite Zen and Zen+ being the same chip physically. I'm hopeful.
I think it's more a result of reviewers using faster RAM in 2018 Ryzen reviews than they used back in the 2017 Ryzen 1 reviews. This is a huge improvement for Ryzen 2 over Ryzen 1 and also brings Ryzen closer to Intel's performance. Intel CPUs work on a ring bus, so there's little latency. AMD uses CCXs, which is why using 3200 CL14 memory like TPU did in the 2700X vs 8700 test usually means a slightly better performance improvement for AMD than Intel. When you test both on budget 2400/2666 CL16 sticks, the gap usually grows the other way, favoring Intel.
Posted on Reply
#28
dj-electric
A good improvement I can see in such a move would be a 6C/12T APU with Navi in it. That could be one hell of a 7nm powerhouse.
Posted on Reply
#29
springs113
dj-electricA good improvement I can see in such a move would be a 6C/12T APU with Navi in it. That could be one hell of a 7nm powerhouse.
Think about the notebook/mobile segment with such a product, or a console.
Posted on Reply
#30
GoldenX
RejZoRThey need to design it so the CPU always works preferentially within a single CCX unit as much as possible (if they aren't already doing it), to avoid communication between separate CCX units, which is slower than within the same CCX.
I think that's the OS's fault.
Posted on Reply
#31
cucker tarlson
springs113Think about the notebook/mobile segment with such a product, or a console.
A Ryzen APU with Navi would basically be an Xbox inside a PC.
GoldenXI think that's the OS's fault.
Win 10 was never designed to work with CCX CPUs in the first place. AMD usually comes up with stuff that provides more raw performance: their GPUs have more SPs and TFLOPS, their CPUs have more cores. That performance usually gets lost in many tasks though, since in order for that to work you need compatible software. Not the fault of the OS, not the fault of AMD; it just requires adoption time as it's just very different.
Posted on Reply
#32
Valantar
Nephilim666Why is no one talking about how incredibly cringey the video is?!

:twitch:
That was my first thought as well. Wow. That was ... frightening.
Posted on Reply
#33
RejZoR
InVasManiMemory speed bumps as well had to have made an impact too. That's why most enthusiasts aren't running the speeds that are officially supported.
Isn't that always the case? On X58 it was 1333 MHz if I remember correctly, and I was running 1600 MHz RAM. On X99 it's 2133 MHz, later bumped to 2400 MHz iirc. I'm running 2666 MHz RAM. Usually we run faster memory than specified.
Posted on Reply
#34
Komshija
A better solution would be to improve IPC and to slightly increase clocks (e.g. 8C/16T Ryzen @ 3.8 GHz / 4.5 GHz turbo). Intel has been gaining performance almost exclusively from higher clocks ever since Skylake, with practically no IPC improvement.
Posted on Reply
#35
Valantar
KomshijaA better solution would be to improve IPC and to slightly increase clocks (e.g. 8C/16T Ryzen @ 3.8 GHz / 4.5 GHz turbo). Intel has been gaining performance almost exclusively from higher clocks ever since Skylake, with practically no IPC improvement.
Yep. Luckily, everything AMD has said since the launch of Ryzen points towards there being noticeable IPC improvements (the "low hanging fruit" quote in particular) coming in short order, and the move away from low-power processes (GF 14nm) to high-speed ones (12nm to a certain degree, 7nm significantly more) more suited for desktop/high-performance parts should help boost clocks even beyond the 1st-to-2nd gen increase.

While I wouldn't mind pushing the maximum amount of cores on the mainstream platform even further (the option for a 12-core doesn't hurt anyone), the gains are mostly fictional at this point. My GF's TR 1920X workstation crushes my R5 1600X gaming build in Adobe Premiere, but mine is just as fast (or faster) in everyday tasks and gaming. Software (and games in particular) really needs to branch out and utilize more cores (and more CPU resources in general - games barely require more CPU power now than 10 years ago, while GPU utilization has skyrocketed), and increasing core counts on CPUs doesn't really get you anything if that increase in utilization doesn't arrive early in the 3-4-year lifespan of the average enthusiast CPU. der8auer made a good point about this in a recent video - game developers need to start looking into what they can do with the current crop of really, really powerful CPUs.
Posted on Reply
#36
Vya Domus
RejZoRThey need to design that CPU alwas works preferential to a single CCX unit as much as possible (if they aren't already doing it).
There is no point in doing that; Zen isn't a heterogeneous architecture. Better/faster cache will sort this out: a CPU never talks directly to system memory but rather goes through each cache level, and only if the instruction/data isn't found there does it access main memory.

Ryzen 2 has lower cache latency and as a result memory I/O is improved across the board.
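
To put that cache walk into something concrete: here is a minimal pointer-chasing sketch in C (assuming a Linux/POSIX box with clock_gettime; the buffer sizes and their mapping to L1/L2/L3/DRAM are illustrative guesses, not Zen-specific figures). Average load latency steps up each time the working set spills out of a cache level:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Chase a random single-cycle permutation through a buffer of n_elems indices
 * and return the average latency per dependent load. While the buffer fits in
 * a cache level the latency stays low; once it spills further out it jumps. */
static double chase_ns(size_t n_elems)
{
    size_t *buf = malloc(n_elems * sizeof *buf);
    for (size_t i = 0; i < n_elems; i++)
        buf[i] = i;
    /* Sattolo's shuffle: produces one big cycle, which defeats the prefetcher */
    for (size_t i = n_elems - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t tmp = buf[i]; buf[i] = buf[j]; buf[j] = tmp;
    }

    const size_t steps = 5 * 1000 * 1000;
    size_t idx = 0;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t s = 0; s < steps; s++)
        idx = buf[idx];                 /* every load waits for the previous one */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    volatile size_t sink = idx;         /* keep the loop from being optimized out */
    (void)sink;
    free(buf);
    return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / steps;
}

int main(void)
{
    /* Working sets meant to land in (roughly) L1, L2, L3 and DRAM */
    size_t kib[] = { 32, 512, 8 * 1024, 128 * 1024 };
    for (int i = 0; i < 4; i++)
        printf("%6zu KiB working set: %.1f ns per access\n",
               kib[i], chase_ns(kib[i] * 1024 / sizeof(size_t)));
    return 0;
}
```

Build with something like `gcc -O2 chase.c` and the per-access latency roughly tracks the level of the hierarchy that holds the working set, which is the effect described above.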
Posted on Reply
#37
RejZoR
Vya DomusThere is no point in doing that; Zen isn't a heterogeneous architecture. Better/faster cache will sort this out: a CPU never talks directly to system memory but rather goes through each cache level, and only if the instruction/data isn't found there does it access main memory.

Ryzen 2 has lower cache latency and as a result memory I/O is improved across the board.
I wasn't talking about system memory. I was talking about preferential communication within a single CCX whenever that is possible, so that apps/games don't use 2 cores from one CCX and 2 from another. It's best if they use all cores from the same CCX and only go into another when all of that CCX's cores are in use (currently a CCX holds 4 cores).
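
For illustration only: that kind of "keep the threads on one CCX" policy can already be approximated from software today with CPU affinity. A minimal Linux sketch follows; the assumption that logical CPUs 0-3 belong to the first CCX is just that, an assumption, since the real mapping depends on the chip and SMT layout (check with `lscpu -e`):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    for (int cpu = 0; cpu < 4; cpu++)   /* logical CPUs assumed to sit on the first CCX */
        CPU_SET(cpu, &mask);

    /* pid 0 = the calling thread; the kernel will keep it on the chosen cores,
     * so this thread's cache traffic stays within one CCX. */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    puts("Pinned to logical CPUs 0-3 (assumed to be one CCX)");
    return 0;
}
```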
Posted on Reply
#38
cucker tarlson
The fact that it isn't heterogeneous doesn't mean that performing tasks within one CCX isn't better.
Posted on Reply
#39
Vya Domus
RejZoRI wasn't talking about system memory. I was talking about preferential communication within a single CCX whenever that is possible, so that apps/games don't use 2 cores from one CCX and 2 from another. It's best if they use all cores from the same CCX and only go into another when all of that CCX's cores are in use (currently a CCX holds 4 cores).
What you are talking about has everything to do with cache and general memory I/O performance; that's why I mentioned it. Faster connections between the distinct L3 cache regions and not using them as victim caches will fix that deficiency. It will also be a much simpler solution versus coming up with complex scheduling that may require complicated hardware blocks, which could occupy space that can otherwise be used for something else.
Posted on Reply
#40
Unregistered
10 cores minimum for certain; that would probably be the 2800X - they will drop it upon the Coffee Lake refresh release.
#41
Caring1
KomshijaA better solution would be to improve IPC and to slightly increase clocks (e.g. 8C/16T Ryzen @ 3.8 GHz / 4.5 GHz turbo). Intel has been gaining performance almost exclusively from higher clocks ever since Skylake, with practically no IPC improvement.
Intel WAS gaining performance … seems the mitigations that are required now have pared a lot of that back.
Perhaps Intel should have put the hard yards in and done real work to improve their IPC, not underhanded tactics to make their product APPEAR faster.
Posted on Reply
#42
Bones
Nephilim666Why is no one talking about how incredibly cringey the video is?!

:twitch:
Cheesy video about a board made of cheap-n-cheesy components.....
Yeah, not surprised here. :ohwell:
Next time I need a new MSI board I'll grab a jar of whiz cheese and dump it into the case.

I know some love MSI and that's fine; even Asus has dropped their fair share of crap at times, and admittedly as of late they too have been slipping.
I've still had a MUCH better experience with Asus than anything I've ever had from MSI, in both what the board could do and how long it lasted.
Posted on Reply
#43
newtekie1
Semi-Retired Folder
Caring1Intel WAS gaining performance … seems the mitigations that are required now have pared a lot of that back.
Perhaps Intel should have put the hard yards in and done real work to improve their IPC, not underhanded tactics to make their product APPEAR faster.
Optimizing an architecture is nothing more than "underhanded" tricks to make the product faster. That is what branch prediction was: a great way to optimize architectures. That's why pretty much every processor maker uses it in one form or another.

The reason Intel was hit so hard by the security issues is that they relied on it the most, and that is because they have had the most time to optimize a single architecture. Because let's face it, Intel has been doing nothing but optimizing the same architecture since Sandy Bridge (arguably Nehalem).
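
As a rough illustration of what branch prediction buys (a generic sketch, not tied to any particular Intel or AMD part, and the exact numbers depend heavily on compiler flags and CPU): the same loop over the same data runs much faster once the data is sorted, simply because the branch becomes predictable.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 20)

static int cmp_int(const void *a, const void *b)
{
    return (*(const int *)a > *(const int *)b) - (*(const int *)a < *(const int *)b);
}

/* Sum the "large" elements; the if() is the branch the predictor has to guess. */
static double timed_sum(const int *data, long long *out)
{
    struct timespec t0, t1;
    long long sum = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int pass = 0; pass < 50; pass++)
        for (int i = 0; i < N; i++)
            if (data[i] >= 128)     /* random data: ~50% mispredicts; sorted: almost none */
                sum += data[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    *out = sum;
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    int *data = malloc(N * sizeof *data);
    long long sum;
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;

    printf("unsorted: %.2f s\n", timed_sum(data, &sum));
    qsort(data, N, sizeof *data, cmp_int);
    printf("sorted:   %.2f s (same work, same sum: %lld)\n", timed_sum(data, &sum), sum);
    free(data);
    return 0;
}
```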
Posted on Reply
#44
TheinsanegamerN
ValantarYep. Luckily, everything AMD has said since the launch of Ryzen points towards there being noticeable IPC improvements (the "low hanging fruit" quote in particular) coming in short order, and the move away from low-power processes (GF 14nm) to high-speed ones (12nm to a certain degree, 7nm significantly more) more suited for desktop/high-performance parts should help boost clocks even beyond the 1st-to-2nd gen increase.

While I wouldn't mind pushing the maximum amount of cores on the mainstream platform even further (the option for a 12-core doesn't hurt anyone), the gains are mostly fictional at this point. My GF's TR 1920X workstation crushes my R5 1600X gaming build in Adobe Premiere, but mine is just as fast (or faster) in everyday tasks and gaming. Software (and games in particular) really needs to branch out and utilize more cores (and more CPU resources in general - games barely require more CPU power now than 10 years ago, while GPU utilization has skyrocketed), and increasing core counts on CPUs doesn't really get you anything if that increase in utilization doesn't arrive early in the 3-4-year lifespan of the average enthusiast CPU. der8auer made a good point about this in a recent video - game developers need to start looking into what they can do with the current crop of really, really powerful CPUs.
Games are already doing that. Look at Battlefield; it happily gobbles up as much CPU hardware as you throw at it... in multiplayer.

In singleplayer, the game really only loads 2 or 3 cores to any significant degree.

The problem is that games are naturally more single-thread oriented. Some can benefit from more cores, like multiplayer, but if you are expecting single-player or low-player-count multiplayer games to effectively use 5+ threads, you are going to be disappointed. The reason CPU requirements haven't shot up is simple: there is no need for them, most games are script heavy, and current CPUs are already good enough for these tasks. Graphics are much easier to push higher (and more demanding) year to year.

This is why IPC is just as important as MOAR CORES; some things simply will not be able to take advantage of 8+ cores and will need that single-core performance.
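
That trade-off is essentially Amdahl's law: if only part of a frame's work parallelizes, adding cores hits a ceiling fast. A tiny sketch with a made-up 60% parallel fraction (purely illustrative, not measured from any real game):

```c
#include <stdio.h>

/* Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
 * where p is the fraction of the work that can run in parallel. */
static double amdahl_speedup(double p, int cores)
{
    return 1.0 / ((1.0 - p) + p / cores);
}

int main(void)
{
    const double p = 0.60;   /* assume 60% of the frame time parallelizes */
    int core_counts[] = { 2, 4, 6, 8, 12, 16 };

    for (int i = 0; i < 6; i++)
        printf("%2d cores -> %.2fx speedup\n",
               core_counts[i], amdahl_speedup(p, core_counts[i]));
    /* Even with infinite cores the ceiling here is 1 / (1 - p) = 2.5x,
     * which is why per-core (IPC x clock) gains still matter so much. */
    return 0;
}
```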
Posted on Reply
#45
Manu_PT
6C for mainstream? Nice! 8 cores? Perfect! More than 8? Not needed at all. Focus on clocks and RAM compatibility; that would benefit the home user more.
Posted on Reply
#46
XiGMAKiD
Not surprising if AMD actually brings 12 cores to AM4; their server division needs the core increase to offer more options, and by trickling it down they're also increasing the consumer product range.

It's a win-win situation
Posted on Reply
#47
Valantar
TheinsanegamerNGames are already doing that. Look at Battlefield; it happily gobbles up as much CPU hardware as you throw at it... in multiplayer.

In singleplayer, the game really only loads 2 or 3 cores to any significant degree.

The problem is that games are naturally more single-thread oriented. Some can benefit from more cores, like multiplayer, but if you are expecting single-player or low-player-count multiplayer games to effectively use 5+ threads, you are going to be disappointed. The reason CPU requirements haven't shot up is simple: there is no need for them, most games are script heavy, and current CPUs are already good enough for these tasks. Graphics are much easier to push higher (and more demanding) year to year.

This is why IPC is just as important as MOAR CORES; some things simply will not be able to take advantage of 8+ cores and will need that single-core performance.
You're not entirely wrong, but I don't completely agree with you either. What you're describing is the current state of AAA game development and the system load of the features present in these games. What I'm saying is that it's about time early development resources are reallocated from developing new ways of melting your GPU (which has been the key focus for a decade or more) to finding new uses for the abundant CPU power in modern PCs. Sure, CPUs are worse than GPUs for graphics, physics and lighting. Probably for spatial audio too. But is that really all there is? What about improving in-game AI? Making game worlds and NPCs more dynamic in various ways? Making player-to-world interactions more complex, deeper and more significant? That's just stuff I can come up with off the top of my head in two minutes. I'd bet a team of game or engine developers could find quite a lot to spend CPU power on that would tangibly improve game experiences in single-player. It's there for the taking, they just need to find interesting stuff to do with it.

Of course, this runs the risk of breaking the game for people with weak CPUs - scaling graphics is easy and generally accepted ("my GPU is crap so the game doesn't look good, but at least I can play"), while scaling AI or other non-graphical features is far more challenging. "Sorry, your CPU is too slow, so now the AI is really dumb and there are all these nifty/cool/fun things you can no longer do" won't fly with a lot of gamers. That, I'm willing to bet, is where the focus on improving graphics and little else comes from, and will continue to come from for a while still.
Posted on Reply
#48
dirtyferret
wake me when they finally break 185 points on the cinebench single thread test

Posted on Reply
#49
Vayra86
ValantarYou're not entirely wrong, but I don't completely agree with you either. What you're describing is the current state of AAA game development and the system load of the features present in these games. What I'm saying is that it's about time early development resources are reallocated from developing new ways of melting your GPU (which has been the key focus for a decade or more) to finding new uses for the abundant CPU power in modern PCs. Sure, CPUs are worse than GPUs for graphics, physics and lighting. Probably for spatial audio too. But is that really all there is? What about improving in-game AI? Making game worlds and NPCs more dynamic in various ways? Making player-to-world interactions more complex, deeper and more significant? That's just stuff I can come up with off the top of my head in two minutes. I'd bet a team of game or engine developers could find quite a lot to spend CPU power on that would tangibly improve game experiences in single-player. It's there for the taking, they just need to find interesting stuff to do with it.

Of course, this runs the risk of breaking the game for people with weak CPUs - scaling graphics is easy and generally accepted ("my GPU is crap so the game doesn't look good, but at least I can play"), while scaling AI or other non-graphical features is far more challenging. "Sorry, your CPU is too slow, so now the AI is really dumb and there are all these nifty/cool/fun things you can no longer do" won't fly with a lot of gamers. That, I'm willing to bet, is where the focus on improving graphics and little else comes from, and will continue to come from for a while still.
You're right, and examples like Star Swarm and Ashes are early attempts at that. Not very good ones in terms of a 'game', but... nice tech demos. The APIs are there for this now. I think the main thing we're waiting for is mass adoption, because such games will run like a PITA on anything that doesn't use most feature levels of DX12 or Vulkan. There is still not a single killer app to push those APIs forward, while they really do need it, or this will easily take 2-3 more years.

As for AI: writing a good AI in fact doesn't take all that much in terms of CPU. Look at UT'99 for good examples of that - those bots were insane. The main thing a good AI requires is expert knowledge and control of game mechanics, combined with knowledge of how players play and act. Ironically, the best AI that doesn't 'cheat' or completely overpower the player in every situation is one that also makes mistakes and acts upon player interaction rather than pre-coded stuff. And for that, we now have big data and deep/machine learning, but that is still at the super early adopter stage... and the fun thing about that is that it's done on... the GPU.
btarunrAMD gave more IPC increase between 1st and 2nd gen Ryzen than Intel did between its past 3 generations; despite Zen and Zen+ being the same chip physically. I'm hopeful.
I will be highly surprised if AMD manages to structurally surpass Intel's IPC. They already do it in specific workloads, but that is not enough. Only when they can get past Intel's IPC on all fronts will I buy the Intel bash of 'they're just sitting on Skylake'. I'm more of a believer in the idea that all the fruit has been picked by now for x86, and any kind of improvement requires a radically different approach altogether. GPUs are currently suffering a similar fate by the way, as the main source of improvements there is found in node shrinks, dedicated resources for specific tasks, clock bumps and 'going faster or wider' (HBM, GDDR6 etc.). I also view that as the main reason GPU makers are pushing things like ray tracing, VR and higher-res support; they are really scouring the land for new USPs.

Realistically, the only low hanging fruit in CPU land right now IS adding cores.
Posted on Reply