Saturday, January 2nd 2021
AMD Ryzen Threadripper 5000 Series "Genesis Peak" Processor Lineup Could Begin with a 16-Core Model
AMD is set to introduce its next generation of Ryzen Threadripper processors in the coming weeks, and rumors suggest it may happen at this year's CES. The new Ryzen Threadripper platform is codenamed Genesis Peak. If we look at the current 3000 series "Castle Peak" Threadripper processors, they were launched at CES 2020, with availability in February. So we are assuming that the upcoming 5000 series "Genesis Peak" is going to launch at the virtual CES event, during AMD's show. Thanks to information from Yuri "1usmus" Bubliy, we found out that AMD is going to start the next-generation Threadripper lineup with a 16-core processor. "1usmus" posted a riddle on Twitter that is actually hex code translating to "GENESIS 16 CORES".
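For the curious, decoding such a riddle is trivial: hex-encoded ASCII maps every two hex digits to one character. A minimal sketch in Python (the hex string below is simply "GENESIS 16 CORES" re-encoded for illustration; it is not necessarily the exact string 1usmus posted):

# Decode a hex-encoded ASCII riddle. The string here is "GENESIS 16 CORES"
# re-encoded for illustration, not necessarily 1usmus' exact tweet.
riddle = "47454E4553495320313620434F524553"
print(bytes.fromhex(riddle).decode("ascii"))  # -> GENESIS 16 CORES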
The current generation of Threadripper Castle Peak processors starts at 24 cores and goes up to 64-core models, so it will be interesting to see where AMD positions the 16-core model in the stack and why it chose to do so. The exact specifications of this processor are unknown, so we have to wait for the announcement event. It is also unknown whether existing TRX40 motherboards will support the Zen 3 based Genesis Peak 5000 series Threadripper processors, or whether AMD will introduce a new platform for them.
Sources:
Yuri Bubliy (1usmus) on Twitter, via VideoCardz
83 Comments on AMD Ryzen Threadripper 5000 Series "Genesis Peak" Processor Lineup Could Begin with a 16-Core Model
I219-V $1.72
I225-V $2.40
While I do enjoy dual NICs on my computers, I would still prefer an extra PCIe slot, even if it were a chipset PCIe slot, for flexibility. I do believe SAS controllers are more relevant for servers and can certainly be add-in cards, perhaps even should be, in case they need replacement. IPMI for workstations, is that even relevant? Like the EPYC 7232P?
That has very low clocks.
What you call a niche market is probably the largest group of pro/semi-pro users: people who need the right balance between core speed and core count, like people doing simulations, CAD, photo editing, video editing (but not large encode jobs), development, etc. This is certainly "niche" compared to the mainstream, but there are far more of these guys than those who actually benefit from 32 or 64 core CPUs at lower clock speeds.
How about something like this instead
Or the new WRX80, but with no fans on the chipset, proper aluminum armour, and a VRM for Threadripper non-Pro CPUs that won't turn into a furnace when kept at load non-stop for a week or two?!
But yes, like you said, I also prefer a PCIe solution of my choice.
Plenty, if not most, workstations have SAS support (not part of the chipset, but still there); IME it's a much better protocol than SATA.
Of course IPMI is relevant, why wouldn't it be? There are several boards that have it.
Threadripper is ideal for those instances where you are megatasking: having, for example, one instance of Revit open, one instance of 3ds Max open, two or three instances of Maya open, rendering on 4 GPUs at full tilt, and at the same time having Photoshop or maybe Substance Alchemist open, possibly doing something in Houdini or Marvelous Designer... and you are on a Zoom or Skype call while one of your monitors has the news on.
This is what people buy dual Xeon workstations for, and what people would like to see from a TR platform... and not the usual Lenovo or Dell compromise... besides, BOXX has made a fantastic Threadripper machine... but why does one have to go to BOXX as the ONLY option and set up a single machine for the price of three?!
This is why Intel is still a better choice: the lack of decent motherboards for Threadripper, which frankly even the blind have been asking for after so many years.
Dual-socket Threadripper, even at just 8 to 24 cores?! Hell yeah!! It would sell like hot cakes to pros!
And why not a motherboard with 9 PCIe slots?! I am not going to say no to that... it would still be better than grabbing a Tyan GPGPU server for 10 GPUs.
The thing is that Threadripper has all the potential to let one do without servers for HPC.
But despite the stupidity, since it's becoming the new baseline, home networks that can't justify 10G should probably go 2.5G (if something is upgraded anyway), since it will be a nice low-cost option and in line with what a typical NAS etc. can handle. My impression is that it's more a server thing. I've not yet seen someone manage a bunch of workstations this way, but it might be just me.
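To put rough numbers on "in line with what a typical NAS can handle", here is a quick sketch in Python (the device throughputs in the comments are assumed ballpark figures, not measurements):

# Convert Ethernet line rates to MB/s for comparison with storage speeds.
link_speeds_gbps = {"1G": 1.0, "2.5G": 2.5, "5G": 5.0, "10G": 10.0}
for name, gbps in link_speeds_gbps.items():
    print(f"{name}: ~{gbps * 1000 / 8:.0f} MB/s line rate")
# Assumed sustained throughputs (ballpark, vary by model):
#   single 7200 rpm HDD ~180-250 MB/s, SATA SSD ~550 MB/s.
# So 2.5G (~312 MB/s) roughly matches an HDD-based NAS, while
# saturating 10G takes SSDs or a striped array on both ends.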
But I also suspect that most motherboard manufacturers are using Aquantia, possibly because they prefer not to depend on Intel too much because of pricing?!
Mainstream motherboards have become quite expensive... so there is a possibility that Intel is just not affordable anymore for a good return... so they basically must have said screw it, I'll go Aquantia 2.5/5 rather than Intel 10G, and see what happens.
Aquantia 10G is crap: messy drivers, and it can't get close to 10G speeds consistently. You're much better off even with Intel 2.5G then, especially if some of the machines are running Linux.
10G NICs are expensive, but switches are even worse. So if you're going to spend this kind of money, you had better buy the right stuff. The Intel X550 is the gold standard of 10G NICs; you can find them for ~$110-120 new on eBay. Switches are much harder though, unless you only need two machines at 10G, either directly connected or through something like the Asus XG-U2008. Beyond that there is mostly pricey server-grade stuff with noisy fans. And if you're considering cheap used server stuff, you had better know what you're looking for.
Nevertheless, I still believe not including an expensive NIC is the best option for motherboards, provided they have plenty of PCIe.
But I'm seriously thinking of using this to make a home "server-like machine" on some Intel or Threadripper system one of these days... with smart TVs and so many gimmicks around, you can basically get rid of all the computers at home at some point, I'd guess.
In this scenario, maybe Aquantia, if it makes motherboards cheaper, can be a welcome alternative to Intel... it is not that it has to be super mega professional grade and mega validated at all costs... it really depends on the use case scenario and the type of motherboard you need for it.
Maybe I can use a Threadripper on a professional-grade motherboard for work, and another one where I can slap in 2, 3, or 4 GPUs and make all the family happy, whether gaming, browsing, or simply using smart TV access with no computation involved... in either case 10G is a must, and this was the main reason that when X399 was introduced, after just seeing the rear I/O and just four PCIe x16 slots, I immediately discarded it and did not even look at any other of the motherboard or CPU features.
AMD got extremely lucky with X399, because it was complete garbage; there was not a single motherboard that was not a copy-paste of Intel ones, but under-featured and a generation older. It is incredible that so many bought into that crap just to run a 16-core CPU at any cost, no matter what.
I guess people got so nauseated by Intel's awful ethics, pricing, and stagnant performance that people just had to support AMD to get some hope... and apparently it just worked out very well for them and for all of us.
So yeah, since they skimped on PCIe on X570, we need something else, and they are not delivering anything.
Why is it so hard to give us PCIe?
I'm okay with X570 not having that many PCIe slots, since it's a mainstream platform and the CPUs do not support that many lanes anyway.
But like you said the option isn't there for the TR platform either, which makes no sense.
Take a look at the ASUS P9X79-E WS or X99-E WS; something like that should be widely available for TR from several mobo manufacturers.
And I also do not like NVMe drive slots taking up PCIe slots.
Also, they should refactor the SATA ports into a special compact connector where you can plug in a single cable that bifurcates off the motherboard to add as many SATA devices as you want... basically moving them completely off the motherboard layout... and use all that space for stacking NVMe drives instead, without taking up PCIe slot space.
Or alternatively, populate M.2 on the rear of the motherboard... though this can be impractical if you blow one up... but then having them under 4 or 5 GPUs would be just as impractical.
It would also be smart and useful to have connections on the rear of the motherboard (maybe the I/O shield?!) so as to be able to install a single 120 or 140 mm fan behind the CPU socket.
BTW... I managed to find an ASUS P9X79 motherboard from Hong Kong, and it should actually be delivered tomorrow... got some old 2687W Xeon around to drop in there... couldn't be happier! 10G is a must for such workstations... and Threadripper would be perfect in EEB form factor for the higher SKUs... OR even long E-ATX, you know, up to 9 slots long?!
Threadripper suffers from the fact that motherboard vendors don't seem to get what it really is. The boards appear to mainly target content creators who are streaming and editing their videos, since that's the market that's actually "hot". So you get something with NVMe slots taking the place of PCIe, and vendors assuming that mainstream dual-CPU is dead since you can get 64 cores on a single socket. Until Threadripper Pro, I don't think they understood that it was AMD's answer to the desktop Xeon, while Epyc is server-specific.
The Gigabyte board that you linked is kind of an oddity in itself. Where I live (France) that board is hard to get; all the C621 boards sold on popular websites are server motherboards with a bare-bones PCB, and it doesn't look like Asus/MSI have anything comparable for that generation...
If it's on Auto or forcibly set to PCIe 4.0 in the BIOS, I get repeated Code 43 in Device Manager on most boots; sometimes it works without issue on a cold boot.
If I manually set PCIe 3.0, I have no issues other than, of course, being on PCIe 3.0.
Most people in the know have suggested it's an issue with the CPU, oddly enough.
Back on topic. I think the reason AMD is introducing a 16-core is to offer a lower core count at the same multithreaded perf/$ as Castle Peak. So they will have the new 16-core priced the same as the current 24-core, performing the same in MT and better in ST, but with better yields/profit for AMD.
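For what it's worth, the back-of-envelope arithmetic behind that claim looks like this (a sketch in Python; the 24-core comparison point and the ~19% Zen 3 IPC uplift are assumptions, not confirmed figures for these SKUs):

# What per-core uplift a 16-core Zen 3 part would need to match
# a 24-core Zen 2 part in multithreaded workloads. All inputs assumed.
old_cores = 24           # e.g. Threadripper 3960X (Castle Peak)
new_cores = 16           # rumored Genesis Peak entry model
zen3_ipc_uplift = 1.19   # AMD's quoted ~19% average IPC gain for Zen 3

required_per_core = old_cores / new_cores            # 1.50x throughput per core
clock_ratio_needed = required_per_core / zen3_ipc_uplift

print(f"Per-core uplift needed: {required_per_core:.2f}x")
print(f"Clock ratio needed on top of IPC: {clock_ratio_needed:.2f}x")
# -> 1.50x per core, i.e. ~1.26x higher sustained clocks on top of the
#    IPC gain would be needed for full MT parity.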
I totally agree that motherboard manufacturers do not understand Threadripper; frankly, I am sure NEITHER DOES @amd. They are trying to push this to YouTubers and Twitch users, but by doing so they are losing 80% of the sales they would get, and their motherboard manufacturing costs will never go down.
It is unreal... I am forced, like most others, to buy Intel because the motherboards for AMD are pathetic... it's out of this world.
While we all hated dual-socket systems, for workstation purposes today it is the smartest choice one can make, because the AMD Threadripper package, with its sluggish motherboards, cannot by any means satisfy minimum HPC requirements.
This is why I have explained before in this thread that @amd SERIOUSLY NEEDS to get things straightened out with motherboard manufacturers.
I'm done with Dell, I'm done with Lenovo... these companies live in a past well gone and far from what reality is today, and they cannot by any means keep pace with technology... that kind of mentality was fine 10 years ago... today only an idiot would buy into those machines. (With the only exception of BOXX, if you are willing to sell a kidney for a low-end entry-level workstation.)
People today that use workstations have elegant offices with clean looks, and need a clean, elegant, quiet machine that fits in with it all.
In my small studio (I am no big shot at all) we have put a water-cooling copper tubing system under the floor, which allows all the big massive radiators to be totally decoupled from the working environment... fans run at minimum speeds, and if a machine makes any noise at all, it will be a basic 3.5-inch backup drive... and in the winter, I don't use heating at all.
We watch TV on them, we watch the news and YouTube, have Skype or Zoom conferences while rendering in one Maya instance or simulating in another in Houdini, or making some texture or shader or whatever... we even have a music library on one, linked to an amplifier, playing music pretty much all day long... on my desk I have an extra small 23-inch screen where I like to keep US or international news on, because I enjoy that while I work, and I have a crappy dedicated 8-year-old Quadro 4000 for it, which is the only noisy thing in my system at times...
One of these days...
If all of a sudden a new player came into motherboard manufacturing offering an Apple-like or even NVIDIA-like cooler-design mentality and design language in the motherboard space, with intelligent functionality and expansion, ASUS, Gigabyte, MSI and all the others would suffer a painful death pretty fast in the motherboard world.
Frankly, in 2021 I don't even understand how it is possible that we can still access a PCB at all with bare hands... by now they should have completely shrouded motherboards into a solid sealed graphene block and patented hundreds of new connectors which would be more elegant, practical, functional, and take up much less space.
A motherboard today should look like an Apple laptop without a screen and keyboard... and not like my great-grandfather's transistor radio.
Instead, motherboards look like 1980s stuff with RGB on it, but black because it hides all the antiquated mentality, with only the capacitor and VRM vendors doing any developing.
Just look at USB or SATA connectors... they are pathetic... massively big, for what?! For how many more years?!
We are still using copper LAN when everything else is fiber, the thinnest of cables ever.
When do I get to see an optic fiber LAN cable with a dedicated router from Asus?!
99% of what you see on a motherboard could be packed into a single massive SoC, and frankly, let's face it... if a little piece breaks on your motherboard today... good luck getting anyone to repair it, unless you have some good old friend who knows how to...
So this in itself defeats the purpose of building motherboards in such a modular, antique way on the grounds that it is convenient for them to get repaired. (Just give me a 5-year warranty and let's dispense with the bull.)
I know that in saying this I am making extreme points that would need a lot of work and feasibility studies done on them, but my point is that instead of glorifying them (motherboard manufacturers) every time they make a new motherboard, maybe we should just do the opposite and blast them for being so antiquated.
Stanley Kubrick should have made computers and not been a movie director... we would not be at the mercy of Taiwan and China trash today, because any motherboard in the world today is just that... trash.
It's always a delicate task to make something that is space efficient while being widely compatible and low cost at the same time. People building ITX systems are painfully aware of that. In theory we could have a deeper integration between the case and the PSU/motherboard, where the cables would be handled by the case maker to make them as short as possible, but that's more engineering on their part and the price would rise. I often dreamt of building an ITX system that would only need three cables: power, I/O, PCIe. With 2.5" and eventually 3.5" drives just getting something like a hot-swap bay. Just plug and unplug stuff without having to cut a bunch of zip ties even when you are switching motherboards.
Why not simply plug a motherboard onto a much flatter PSU acting as a motherboard tray?! Do you have any idea how many cables you would get rid of like that?! Virtually all of them.
You cannot install enough PCIe devices to take advantage of it... OR X570 should have come with at least 48 PCIe lanes for the Ryzen 9 series... so you can comfortably run a RAID 10, or use 4 GPUs... and still have RAID 10; see the rough lane math below.
And then use the TRX40 motherboard design for that socket, which is a layout that would make sense for X570 if it had 48 PCIe lanes... as opposed to TRX40 as it stands, which makes zero sense for Threadripper and is an awful feature design for that CPU.
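A rough sketch of the lane budget behind that 48-lane figure (the x8/x4 allocations are assumptions for illustration):

# Hypothetical lane budget: four GPUs at x8 plus a four-drive NVMe RAID 10.
gpu_lanes = 4 * 8          # four GPUs at x8 each = 32 lanes
nvme_raid10_lanes = 4 * 4  # four NVMe drives (RAID 10) at x4 each = 16 lanes
print(gpu_lanes + nvme_raid10_lanes)  # -> 48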
Real workstation hardware has fins that dissipate heat, not metal blobs which can only absorb short bursts of load.
(unless I misread you…) My old P9X79 WS / i7-3930K is limping these days, so I was toying with the idea of buying one of those used 10-12 core Xeons for ~$200 on eBay and picking up a Supermicro MBD-X11SRA-O on discount, as a stop-gap solution. Fantastic if it works, but I don't have time for it if it doesn't. :D Provided you have something on both ends that can utilize that bandwidth.
My Fractal Design Define 7 XL can handle SSI-EEB, so bring on them big boards!
In 2021 you would want little to no airflow to cool motherboard components, hence a more contemporary design and solution... you also need to consider the importance of ruggedness.
On top of that, today watercooling one's machine is safer than it has ever been, and while you can indeed have a relatively silent aircooled machine... because of how the new boost algorithms work with new-generation CPUs, aircooling is not the wisest of choices; ideally you will want to run on water and decouple the radiators into some other room or directly outdoors...
There are a fair number of pump choices to do this, like Iwaki or some Koolance ones.
Not that I am forcing anyone to watercool, but today's motherboard "blobs", as you call them, are pretty much insanely good, and the VRMs are extremely powerful, which implies they generate very little heat... it is possible (I didn't do the math) that if you use 16-phase 90 A VRMs with a Threadripper, you might not even need a heatsink at all... you don't need one for a 5950X overclocked on most motherboards, unless you are using LN2 for extreme overclocking.
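Since the math was left undone, here is a rough sketch of it in Python (the package power, core voltage, and power-stage efficiency are assumed round numbers, not measured values):

# Rough VRM loss estimate for a hypothetical 16-phase, 90 A-per-stage
# board feeding a 280 W Threadripper. All inputs are assumed.
cpu_power_w = 280        # package power of a 280 W TDP part
vcore_v = 1.10           # assumed average core voltage under all-core load
phases = 16
stage_efficiency = 0.95  # assumed for a good 90 A power stage at this load

total_current_a = cpu_power_w / vcore_v           # ~255 A total
current_per_phase_a = total_current_a / phases    # ~16 A of the 90 A rating
total_loss_w = cpu_power_w * (1 / stage_efficiency - 1)
loss_per_phase_w = total_loss_w / phases

print(f"Current per phase: {current_per_phase_a:.0f} A")
print(f"Conversion loss: {total_loss_w:.1f} W total, "
      f"{loss_per_phase_w:.1f} W per stage")
# ~16 A per 90 A stage and ~1 W of heat per stage: low enough that
# running without a heatsink is at least plausible, though PCB copper
# and case airflow still matter in practice.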
The chipset at load won't output more than, what, 16 watts?... It probably runs cooler than an M.2 drive at load.
On top of that, there are these new AMD CPU tuners where you can calibrate every core to its maximum efficiency, so you can further lower the power consumption by up to 20%, or use that headroom to get higher boosts.
The bottom line, again, is... motherboard manufacturers are too far behind the pace of technology... they're behind with PBO, behind with motherboard components, layouts, and so many other things we cannot even fathom... and the sad thing is they're asking twice as much for a 5-layer PCB when even DDR5 memory is not out yet... 5- or 6-layer PCBs should be the norm today, and the cost of production should have gone down as with old motherboards... not be a feature used as an excuse to hike prices...
And if it does indeed still cost them a lot because it takes them longer to manufacture, then they had better get the "Flintstones out of their manufacturing facilities" and replace them with something from this century.
Why are you so sceptical about airflow? Dust filters are very effective; just clean them whenever you clean your floor, and clean the internals once or twice per year.
Decent airflow doesn't mean it needs to sound like a jet engine taking off. Get a case with a decent airflow/noise balance and put in some Noctuas.
I'll post a link to a 7-phase design with doublers rated at 60 A... you can go to minute 32:00 to understand it better, and then you can get an idea of what kind of heat output the much beefier 90 A ones would create instead... again... a 16- or 24-core you can for sure run at ambient WITHOUT a heatsink... at full load... and possibly even a 32-core.
I use Noiseblocker, not Noctua... and keep them spinning at the minimum possible... though on my next system, if Threadripper gets decent motherboards this time around, I'm definitely planning on using only one single fan for the case and ONE single fan installed behind the motherboard tray, directly on the back of the CPU socket, both at the lowest RPM possible.
I will never go back to aircooling... it makes no sense, when you use such high-end GPUs and CPUs, to give away performance, especially now that all boost algorithms are temperature dependent... and again... all my radiators are decoupled outdoors... so I'm done with computer dust or heat in my working place.
You do that once and for all and never need to worry anymore about chassis sizes, costs, cooling, or dust.
With DDR5 memory it will be interesting to see if you need any airflow in the case at all, or can just let the heat move out passively... in that case I'll build some chassis with one hole at the bottom and one hole at the top and let mother nature do its thing.