
AMD TRX40 Chipset Not Compatible with 1st and 2nd Gen Threadrippers

I am curious: is that 30% faster performance than TR despite the AVX clocks / offset?
Or is that a Xeon-only thing?

Faster than AVX2-enabled CPUs despite the clocks or offset. I am talking about all-stock settings; overclocking is strictly prohibited on our lab HEDT computers. Purely a Skylake-X 16-core versus a 2950X or 1950X.
 
All of Intel's CPUs scale their frequency according to AVX load, unless for example the motherboard has MCE or something similar enabled that completely disregards the stock TDP.
But yeah, I am wondering if it is really as much as 30% even with the lower clocks.

Faster than AVX2-enabled CPUs despite the clocks or offset. I am talking about all-stock settings; overclocking is strictly prohibited on our lab HEDT computers. Purely a Skylake-X 16-core versus a 2950X or 1950X.
Thanks for the info.
Yes I do mean when the CPU is respecting the stock TDP.
Good to know it is still 30% faster despite the clock speed drop.
 
All of Intel's CPUs scale their frequency according to AVX load.

Threadripper's AVX2 also has internal AVX offset.

Would love to test TR3 versus Cascade Lake-X. At the same time, I am not spending my own dime on the comparison. I will wait for some richer labs to investigate first. :D

At the same time, Intel's Deep Learning Boost on Cascade Lake-X seems to support TensorFlow.

This is also great news. It is easy to justify buying an HEDT CPU for a lab workstation, but extremely hard to justify buying a gaming-grade GPU, even though those are usually the best bang-for-the-buck accelerators. Many lab TR builds ended up with either a 1030 or an RX 550 for VGA output.
 
I have to say I disagree with most people in this thread about the criticism of AMD breaking backwards compatibility. While everyone would love hardware with endless upgradability, that upgradability would only become useful after ~4-5 years, once there is a significant upgrade, and by then everything embedded on the motherboard would be outdated. If we were to have real compatibility across several generations, we would need to make motherboards simple, modular and barebones, like back in the 286/386/486 days, when motherboards were basically little more than expansion ports. That is an idea I like, at least in theory.

We have to remember that this platform is meant for workstations, which makes reliability the most important trait. Motherboard makers already barely maintain support for any motherboard beyond 2 years, and they certainly don't test each model with enough hardware before shipping a BIOS update. As we've seen with AM4, compatibility with older hardware is questionable at best. And while AMD motherboards are getting the same premium prices as some "premium" Intel boards, certain makers still fail to deliver the same quality (*ahem* MSI, Gigabyte…). If anything, AMD should focus their energy on two things: 1) firmware testing ahead of product launch, and 2) putting pressure on motherboard makers to do their best when making the BIOSes. A few hiccups after a product launch are excusable; repeated problems three months later are a deal breaker for workstation use.

My thought is that most people see this as the reverse of everything AMD has done to this point. This CPU is perceived as a workstation part, but it is actually HEDT. It started as an attempt by some AMD engineers to see if it could be done, and though it is based on Epyc (true workstation and data-center CPUs), it is more in line with Ryzen. I do agree with you on the technical limitations; the one that jumps out at me is PCIe 4.0 support for the CPU and motherboard. It would be difficult to get that working with all of the PCIe lanes tied to the CPU on current-gen boards and CPUs.
 
Finally AMD is doing the right thing and avoiding the shitfest that would have happened at launch.
And these new ones will probably last another generation, until Zen 4 and DDR5.
 
Threadripper's AVX2 also has internal AVX offset.

I am unable to find any info on that. To my knowledge it doesn't need an offset, because all AVX instructions run at half throughput anyway.
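A back-of-the-envelope sketch of why a half-rate AVX implementation needs no separate clock offset: the peak-throughput gap already reflects it. The clocks and FLOPs-per-cycle figures below are rough numbers taken from this thread, not measured values.

```python
def peak_dp_gflops(cores, clock_ghz, flops_per_cycle_per_core):
    """Theoretical peak double-precision throughput in GFLOPS."""
    return cores * clock_ghz * flops_per_cycle_per_core

# Zen 1 cracks 256-bit AVX2 ops into two 128-bit micro-ops, so a core peaks
# around 8 DP FLOPs/cycle; a Skylake-X core with two 512-bit FMA units peaks
# around 32, which is why it can afford to shed clocks under AVX-512 load.
zen1_16c = peak_dp_gflops(16, 3.5, 8)    # 1950X near its all-core AVX clock
sklx_16c = peak_dp_gflops(16, 2.3, 32)   # 16-core Skylake-X at its AVX clock
print(zen1_16c, sklx_16c)
```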
 
I am unable to find any info on that. To my knowledge it doesn't need an offset, because all AVX instructions run at half throughput anyway.

I just observed quite a large CPU frequency drop when I enable AVX2 in my pipelines: from ~3.7 GHz all-core turbo down to 3.35 GHz all-core turbo when I max out all 32 threads.
 
Threadripper's AVX2 also has internal AVX offset.
I am unable to find any info on that. To my knowledge it doesn't need an offset, because all AVX instructions run at half throughput anyway.
AMD and Intel rate their TDPs, and thus their "stock" clocks, differently.
For example, even under AVX workloads my 1950X stays above its 3.4 GHz base clock, usually around 3.5 GHz, with the all-core boost of the 1950X rated at 3.7 GHz.
For the 7980XE the base clock is given as 2.6 GHz, but under AVX2 the 7980XE drops to 2.3 GHz when respecting the stock TDP settings.
I am unable to find the AVX clock for the 9980XE.
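Intel's offset is applied in fixed frequency bins below the non-AVX ratio; a minimal sketch of the arithmetic (the 3-bin value is inferred from the 2.6 GHz to 2.3 GHz drop quoted above, not taken from a datasheet):

```python
def avx_clock_ghz(base_ghz, offset_bins, bin_mhz=100):
    # Boards typically expose the AVX offset as a count of 100 MHz bins
    # subtracted from the target ratio whenever AVX/AVX-512 code runs.
    return base_ghz - offset_bins * bin_mhz / 1000.0

# 2.6 GHz base with a 3-bin AVX2 offset lands at 2.3 GHz
print(avx_clock_ghz(2.6, 3))
```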
 
AMD and Intel rate their TDPs, and thus their "stock" clocks, differently.
For example, even under AVX workloads my 1950X stays above its 3.4 GHz base clock, usually around 3.5 GHz, with the all-core boost of the 1950X rated at 3.7 GHz.
For the 7980XE the base clock is given as 2.6 GHz, but under AVX2 the 7980XE drops to 2.3 GHz when respecting the stock TDP settings.
I am unable to find the AVX clock for the 9980XE.

I see you have a custom loop. The 1950X I am using is cooled by an NZXT X62; maybe that is why your AVX2 boost can go higher? Do you max out all 32 threads?
 
I would not say it is niche in HPC. At my university, more than half of the nodes are saturated with various forms of bioinformatics jobs: biomedical imaging data analysis and integration, DNA/RNA/protein sequencing, protein modeling, population genetics, and neural-network-based multi-omics cancer research.

It is HEDT we are talking about here. With its higher core count and good pricing, AMD has seen good adoption in research labs. What I was saying is that the now widely used AVX512, and the soon-to-be-implemented Deep Learning Boost, will slowly claw back market share and mindshare from Threadripper.

As a matter of fact, one of my friends at Oregon State University working in HPC maintenance is already seeing some "buyer's remorse". Oregon State spent a ton of money on EPYC processors for bioinformatics applications.

On mainstream desktop, nobody cares about AVX512 that much. There I agree: Ryzen 3xxx is killing it.
Bio bio bio mine me ours.

Perspective blinkers. You and yours are not the only users of HEDT, and you are in a niche; sorry, but your university is probably quite aligned with your perspective.

I think my use cases are all that matter too though, at least to me.
AMD doesn't have a specific downclock for AVX2, though the increased load and heat will push all-core clocks down.
 
I see you have a custom loop. The 1950X I am using is cooled by an NZXT X62; maybe that is why your AVX2 boost can go higher? Do you max out all 32 threads?
Yes, I do max out all 32 threads.
Ryzen is extremely temperature sensitive; with a custom loop (3x 420 mm rads) I am able to maintain a Tdie of around 60-61 °C max in an air-conditioned room.

Edit: Also, an important thing about Ryzen is that the stock TDP takes the SoC/IMC into account as well.
So faster memory, or more ranks of memory, eats into the available TDP for the cores, more so if the motherboard raises the SoC voltage.
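That shared package budget is just subtraction; a tiny sketch with hypothetical round wattages (not measured values) to show how SoC draw eats core headroom:

```python
def core_budget_w(package_limit_w, soc_w):
    # On Ryzen the stock power limit covers cores and SoC/IMC together,
    # so every watt the SoC draws is a watt the cores cannot boost with.
    return package_limit_w - soc_w

# hypothetical 180 W package limit: faster memory pushing SoC draw
# from 25 W to 35 W costs the cores 10 W of boost headroom
print(core_budget_w(180, 25), core_budget_w(180, 35))
```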
 
Bio bio bio mine me ours.

Perspective blinkers. You and yours are not the only users of HEDT, and you are in a niche; sorry, but your university is probably quite aligned with your perspective.

I think my use cases are all that matter too though, at least to me.
AMD doesn't have a specific downclock for AVX2, though the increased load and heat will push all-core clocks down.

If thinking like that makes you happy then sure~
 
Yes, I do max out all 32 threads.
Ryzen is extremely temperature sensitive; with a custom loop I am able to maintain a Tdie of around 60-61 °C in an air-conditioned room.

That is interesting (we are talking about different CPUs), as I get 4.1 GHz across all cores with my 1920X using a Noctua air cooler. One thing I have noticed is that the stock voltage is set to 1.38 V for 3700 MHz and it boosts to 4.0 GHz, but 4.1 GHz @ 1.325 V works like a charm and I don't go past 58 °C.
 
If the workloads utilize any form of AVX512, Intel nodes run circles around a TR system with the same core config. Not sure what field you work in; in my area we are starting to see widespread adoption of AVX512 in codes, which gives AVX512-capable Intel systems a huge boost in performance. The same amount of microbiome data pushed through the same pipeline finishes almost 30% faster on the Intel system compared to the AMD TR system with just AVX2.

Example of a DNA sequence alignment

A lot of bioinformatics tools benefit hugely from AVX512 implementations. See this paper discussing the benefit of AVX512 in research.
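For readers outside the field: the alignment kernels being sped up are dynamic-programming scans in the Smith-Waterman family. A minimal scalar sketch in Python (real tools vectorize the inner loop with AVX2/AVX-512, which is where the speedup comes from; the scoring values here are illustrative):

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    """Best local-alignment score between DNA strings a and b."""
    cols = len(b) + 1
    prev = [0] * cols            # previous row of the DP matrix
    best = 0
    for i in range(1, len(a) + 1):
        cur = [0] * cols
        for j in range(1, cols):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # local alignment clamps at zero instead of going negative
            cur[j] = max(0, diag, prev[j] + gap, cur[j - 1] + gap)
            best = max(best, cur[j])
        prev = cur
    return best

print(smith_waterman_score("ACGT", "ACGT"))  # 8: four matches at +2 each
```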



I wish AMD would add AVX512 support to their Zen architecture soon.
Yeah, we're not in the bioinformatics business. The CPU farm tends to get used for raytracing with a couple of different renderers at the moment. Neither uses AVX-512.

You say that AVX-512 gives your Intel machines a 30% advantage. Have you looked at price/performance and performance/Watt?

For us at the time of 1950X and 2990WX purchase, AMD had more than a 30% advantage in both price and power use. Again, I'm talking about our use case, which is obviously not the same as yours.
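The two ratios being asked about are easy to put side by side; the throughput, price, and power figures below are made-up placeholders, not benchmarks of any real SKU:

```python
def value_metrics(perf, price_usd, watts):
    # perf is any consistent throughput figure (jobs/hour, GFLOPS, ...)
    return {"perf_per_dollar": perf / price_usd,
            "perf_per_watt": perf / watts}

# hypothetical: a node that is 30% faster but costs and draws more
fast_node  = value_metrics(perf=130, price_usd=2000, watts=250)
cheap_node = value_metrics(perf=100, price_usd=900,  watts=180)
print(fast_node, cheap_node)
```

A raw 30% throughput win can still lose both ratios once price and power are in the denominator, which is the point being made here.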
 
That is interesting (we are talking about different CPUs), as I get 4.1 GHz across all cores with my 1920X using a Noctua air cooler. One thing I have noticed is that the stock voltage is set to 1.38 V for 3700 MHz and it boosts to 4.0 GHz, but 4.1 GHz @ 1.325 V works like a charm and I don't go past 58 °C.
I am talking about the CPU's stock behavior; overclocking is a whole other ball game.
My post is regarding workstation use, where no overclocking is allowed and the CPU is set to respect the actual stock TDP.
 
We already knew that AMD wants to milk every penny from enthusiasts since they announced the R9 series and disabled SMT on lower-end models.
Selling products at prices that are less than half of the competition and offering the same or better performance is milking? OK.
 
I am talking about the CPU's stock behavior; overclocking is a whole other ball game.
My post is regarding workstation use, where no overclocking is allowed and the CPU is set to respect the actual stock TDP.

Got it
 
Something doesn't seem right: Epyc CPUs will re-utilize the same socket, but TR gen 3 won't?

Had Epyc Rome forced a change in socket, one would expect TR gen 3 to do the same; but since it didn't... something doesn't add up here!
Isn't Epyc a full SoC? I never heard or read that Epyc mobos had chipsets...
 
Yeah, we're not in the bioinformatics business. The CPU farm tends to get used for raytracing with a couple of different renderers at the moment. Neither uses AVX-512.

You say that AVX-512 gives your Intel machines a 30% advantage. Have you looked at price/performance and performance/Watt?

For us at the time of 1950X and 2990WX purchase, AMD had more than a 30% advantage in both price and power use. Again, I'm talking about our use case, which is obviously not the same as yours.


A TR system was way cheaper per workstation, which is why I helped build so many of them. At that time, buying Skylake-X at that stupid pricing was just a big f*cking no.

Problem is, Intel has slashed the price of the current-gen Cascade Lake-X almost in half, which makes these new Intel HEDT parts very attractive. But yeah, I DO need to see how TR3 and Cascade Lake-X compete first.

TL;DR: TR was unchallenged in HEDT performance per dollar; now it has challengers.
 
Isn't Epyc a full SoC? I never heard or read that Epyc mobos had chipsets...

Technically speaking, all Ryzen CPUs are full SoCs, just with very limited I/O. See the AMD A300/X300 "chipset", which doesn't really exist as a chip and just relies on a Super I/O controller plus the connectivity built into the CPU/SoC.
Because Epyc has more integrated I/O, there's no need for a traditional chipset.
For Threadripper it seems AMD decided to save some cost per chip by reducing the on-die I/O, and as such a chipset was added mainly for peripheral connectivity, but also for some additional PCIe lanes.
Obviously we don't know what TRX40 and the other rumoured chipsets bring to the table, but X399 was pretty much the same as X370/X470, so I would expect TRX40 to be quite similar to X570.
This time around, as the CPU/SoC design has changed, my guess is they're doing this so they can use a more affordable I/O die for Threadripper than for Epyc, or be able to use partially functional Epyc I/O dies.
 
Not enough information at the moment.

This is to cause a knee-jerk reaction from the user base and draw us in.
 
A TR system was way cheaper per workstation, which is why I helped build so many of them. At that time, buying Skylake-X at that stupid pricing was just a big f*cking no.

Problem is, Intel has slashed the price of the current-gen Cascade Lake-X almost in half, which makes these new Intel HEDT parts very attractive. But yeah, I DO need to see how TR3 and Cascade Lake-X compete first.

TL;DR: TR was unchallenged in HEDT performance per dollar; now it has challengers.
The only place where Intel has an advantage is AVX-512, and just like with any compute solution, design to your needs and budget.
 
Not enough information at the moment.

This is to cause a knee-jerk reaction from the user base and draw us in.
Speculation, maybe even an arrow to the knee will make anyone jerk.






 