Friday, May 3rd 2019

Possible Listings of AMD Ryzen 9 3800X, Ryzen 7 3700X, Ryzen 5 3600X Surface in Online Stores

Remember to bring a hefty helping of salt to the table here, as this story calls for a good deal of it. Online stores in Vietnam and Turkey have started listing AMD's 3000-series CPUs based on the Zen 2 architecture. The lineup consists of a Ryzen 9 3800X, a Ryzen 7 3700X, and a Ryzen 5 3600X, and the specs on these are... incredible, to say the least.

The Ryzen 9 3800X is being listed with 32 threads, which implies a 16-core processor. Clock speeds are reported as 3.9 GHz base with up to 4.7 GHz boost on both a Turkish and a Vietnamese e-tailer's webpages. The Turkish store stands alone in listing AMD's Ryzen 7 3700X CPU, which is reported as having 12 cores, 24 threads, and extremely impressive 4.2 GHz base and 5.0 GHz boost clocks. Another listing on the same website, for the Ryzen 5 3600X, details the processor as having 8 physical cores and running at 4.0 GHz base and 4.8 GHz boost clocks.
Sources: TPU Forums (thread starter R0H1T), nguyencongpc.vn, ebrarbilgisayar.com

242 Comments on Possible Listings of AMD Ryzen 9 3800X, Ryzen 7 3700X, Ryzen 5 3600X Surface in Online Stores

#151
ch3w2oy
Manu_PTYou sold a 9700K @ 5.2 GHz + RTX 2080 and bought a Ryzen + Vega VII? I have no words. Imagine paying money to downgrade, and on top of that drawing even more power from the hardware while getting less performance. No comments.

And btw, 60 Hz, doesn't matter if it's 720p or 8K, is not high-end to me. If you want a CPU for 60 Hz you grab an i3-8100 or a Ryzen 3 1300X. High-end to me is 1080p 240 Hz and 1440p 165 Hz. Ryzen can't even sustain 130 fps LOCKED on most engines. Fact.

With the money you spent on the downgrade process, you could have had an i9-9900K + RTX 2080 Ti, and it would obliterate Ryzen in every possible scenario, from gaming to productivity.
Edit:
I also want to let you know that I get over 140 fps in BFV (since that's your game of choice) on multiplayer.. Very similar to the exact performance I got on my 2080 and 9700K. I actually feel I'm getting better performance. Both at 1440p with the same settings.. Joke's on you, bud. Stop watching YouTube videos for performance metrics..

Some people understand that you don't need a 9900k to get things done. I wanted a nice quiet computer. I also stated that the build is being updated to Zen 2 right away.. So it might just be better than a 9900k, we don't know yet.. But even if it's not, who cares.. And who cares about power draw when you have a Prime Ultra Titanium 1000w PSU. My entire point was that you don't need Intel and Nvidia for a high end PC.

You sound petty thinking the 9900K is the only way to go when spending lots of money. It's also apparent you haven't used an AMD card, otherwise you would realize the Wattman settings are superior to Nvidia's offerings. Who spends a lot of money and then complains about power draw..

I can honestly say for a fact, 100%, that I am so much happier with my AMD build with Ryzen (new gen soon), Radeon VII, 1 TB SX8200 Pro, 3600 C16 Samsung B-die Trident Z RGB and Titanium PSU, all in a custom loop.. Than I was with my 9700K and FTW3 2080. It's not even a comparison in my book. When I had the Intel PC I just felt like a little bitch that followed the crowd and constantly felt I was missing something.. My PC is now the exact way that I wanted it because I didn't have to waste money on the Intel and Nvidia tax..

Now, I never said that if you need the absolute best you shouldn't go Intel or Nvidia.. By all means, do what makes you happy. I'm way happier with my quiet-ass water-cooled build than I was with my Intel + 2080.. And I guarantee you I get similar performance too. My Fire Strike graphics score is almost 35k. My Superposition score is higher than any other Radeon VII score that you can find online. UserBenchmark doesn't mean much, but my VII does 173%, my 1080 Ti did 170% and my 2080 did 178%. Yeah, my 2080 pulled ahead a little further, so what.. Doesn't mean the VII isn't high-end.. My Vega 64 LC had amazing performance as well.. Better than any 1080 lol. People need to stop watching reviews on YouTube because most of them put AMD in a bad light. Gamers Nexus is like the only one that actually tries.

Can't believe people care about power consumption, like it matters.. Are you one of those people that think the TDP on the 9900K is really 95 W hahaha lol rofl lmao. Try over 200 W to reach peak turbo on all cores.

New build with Radeon VII.. pcpartpicker.com/b/ZcsZxr


Old build with 9700k and 2080.. Lame homie.. pcpartpicker.com/b/BsFtt6
Posted on Reply
#152
Vayra86
ch3w2oySome people understand that you don't need a 9900k to get things done. I wanted a nice quiet computer. I also stated that the build is being updated to Zen 2 right away.. So it might just be better than a 9900k, we don't know yet.. But even if it's not, who cares.. And who cares about power draw when you have a Prime Ultra Titanium 1000w PSU. My entire point was that you don't need Intel and Nvidia for a high end PC.

You sound petty thinking the 9900K is the only way to go when spending lots of money. It's also apparent you haven't used an AMD card, otherwise you would realize the Wattman settings are superior to Nvidia's offerings. Who spends a lot of money and then complains about power draw..

I can honestly say for a fact, 100%, that I am so much happier with my AMD build with Ryzen (new gen soon), Radeon VII, 1 TB SX8200 Pro, 3600 C16 Samsung B-die Trident Z RGB and Titanium PSU, all in a custom loop.. Than I was with my 9700K and FTW3 2080. It's not even a comparison in my book. When I had the Intel PC I just felt like a little bitch that followed the crowd and constantly felt I was missing something.. My PC is now the exact way that I wanted it because I didn't have to waste money on the Intel and Nvidia tax..

Now, I never said that if you need the absolute best you shouldn't go Intel or Nvidia.. By all means, do what makes you happy. I'm way happier with my quiet-ass water-cooled build than I was with my Intel + 2080.. And I guarantee you I get similar performance too. My Fire Strike graphics score is almost 35k. My Superposition score is higher than any other Radeon VII score that you can find online. UserBenchmark doesn't mean much, but my VII does 173%, my 1080 Ti did 170% and my 2080 did 178%. Yeah, my 2080 pulled ahead a little further, so what.. Doesn't mean the VII isn't high-end.. My Vega 64 LC had amazing performance as well.. Better than any 1080 lol. People need to stop watching reviews on YouTube because most of them put AMD in a bad light. Gamers Nexus is like the only one that actually tries.

Can't believe people care about power consumption, like it matters.. Are you one of those people that think the TDP on the 9900K is really 95 W hahaha lol rofl lmao. Try over 200 W to reach peak turbo on all cores.

New build with Radeon VII.. pcpartpicker.com/b/ZcsZxr


Old build with 9700k and 2080.. Lame homie.. pcpartpicker.com/b/BsFtt6
Mouth-watering build and lighting there, bud. Sweet. You seriously nailed it on that top pic.
Posted on Reply
#153
ch3w2oy
Vayra86Mouth watering build and lighting there bud. Sweet. You seriously nailed it on that top pic.
Thank you!
Posted on Reply
#154
kings
MetroidJust like the Core Duo was the best thing to ever happen in 2006, Ryzen 3000 is the best thing to ever happen in 2019 for the PC community as a whole.
I wouldn't go that far. Yes, it's cool to have CPUs with 12, 16 cores and all, but for maybe 95% (if not more) of people, the current CPUs, whether from Intel or AMD, are already overkill.

Some people may need many cores for specific tasks, but that is a niche market. For the majority, a Zen with 6~8 cores or a Zen 2 with 12~16 cores will amount to the same thing in the end.
Posted on Reply
#155
notb
AquinusI think that remains to be seen since it really depends on the workload(s) that would cause the CPU to run at full tilt because two different tasks can have very different demands on system memory and cache. Also, even if memory bandwidth does become more of a bottleneck, that also just means that memory speed matters. I don't necessarily think that's a bad thing... but that's all running under one big assumption: performance is the only thing that's important.
Look at 2990WX. 32 cores, 4 channels. Awful results.
Zen 2 brings the I/O die and chiplets. It's even more complicated and theoretically even more prone to the latency issues that plagued Zen 1.
Consider for a moment that the speed of the CPU could be tuned for the amount of memory performance you're expecting to have, so even if there isn't enough memory bandwidth to drive all the cores at max clocks, it would allow the CPU to distribute parallel load to more cores at lower clocks. That very well might be more efficient than using fewer cores at a higher frequency when it comes to power draw.
But with ideas like that, your CPU moves from being a system-on-chip to a cluster-on-chip. It spends a lot of time deciding what to do with instructions (checking cores/nodes, queuing). The result could be more efficient and cheaper CPUs, but it also makes them less responsive. All this is visible in Zen and will be even more pronounced in Zen 2.
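To be fair, the efficiency half of that argument is real: dynamic power scales roughly with V²f, and voltage has to rise with frequency, so power grows closer to f³. A back-of-the-envelope sketch (the configurations and coefficients below are made up for illustration, not measurements):

```python
# Toy model: dynamic power ~ f^3 once voltage has to scale with frequency.
# All numbers are illustrative, not measurements.

def relative_power(cores: int, freq_ghz: float, base_ghz: float = 3.0) -> float:
    """Power of `cores` cores at `freq_ghz`, relative to one core at `base_ghz`."""
    return cores * (freq_ghz / base_ghz) ** 3

def ideal_throughput(cores: int, freq_ghz: float) -> float:
    """Embarrassingly parallel throughput in core-GHz (best case)."""
    return cores * freq_ghz

for cores, freq in [(8, 4.5), (16, 3.0)]:
    p = relative_power(cores, freq)
    t = ideal_throughput(cores, freq)
    print(f"{cores} cores @ {freq} GHz: {t:.0f} core-GHz, {p:.1f} power units, perf/W = {t / p:.2f}")

# 8 cores @ 4.5 GHz:  36 core-GHz, 27.0 power units, perf/W = 1.33
# 16 cores @ 3.0 GHz: 48 core-GHz, 16.0 power units, perf/W = 3.00
```

On this idealized, fully parallel model, wide-and-slow wins on perf/W; the objection above is about what that costs in responsiveness when the work isn't actually that parallel.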

This is not the future of computing that I'd like to see. I'd rather see a single, fast (like 50 GHz) core with lots of cache and a rich instruction set. And I hope at least some of Intel's and AMD's R&D money is spent on that, not just on finding new ways to connect more and more cores.
Posted on Reply
#157
ch3w2oy
Manu_PTLess than 10%? How delusional..... A 2700X at 4.2 GHz on Battlefield V multiplayer can't even sustain 144 fps locked, while the 9700K/9900K fly at 180-200... 10%, yes, right...
Hey, smarty pants, I get 140+ fps consistently at 1440p with my Radeon VII on my R5 2600, and above 160 often as well. About the same, if not better, performance than my 9700K and FTW3 2080. Keep eating what the media keeps feeding little kids like you. I bet you think Apple also makes the best products in the whole world..

You're literally one of those people that fall for YouTube reviews. My Vega 64 and Radeon VII both do better than any review you can find on the internet. You need a break from life. Go on vacation or something.. Learn something.. Because as it stands, you know nothing about hardware.
Posted on Reply
#158
Metroid
kingsI wouldn't go that far. Yes, it's cool to have CPUs with 12, 16 cores and all, but for maybe 95% (if not more) of people, the current CPUs, whether from Intel or AMD, are already overkill.

Some people may need many cores for specific tasks, but that is a niche market. For the majority, a Zen with 6~8 cores or a Zen 2 with 12~16 cores will amount to the same thing in the end.
I already discussed this many times: game developers, and most general developers, are making use of cores that used to sit idle. Like many, you may not know about it; I myself am surprised by how fast multi-threading is being adopted, and I am a developer. I thought it would take longer, but look at how things are. To date, Resident Evil 2 Remake is unplayable on 4 cores or fewer; that surprised me. I could not believe my quad-core could not handle it well. I needed more than 4 cores to make it playable, and that is today. Cities: Skylines is laggy on fewer than 8 cores once the population reaches 400k, and if you make use of all 20 tiles, 16 cores is not enough; SMT helps a lot there, and 32 threads may handle it well. So it is more and more common for games, and software in general, to use more and more cores, because that is the cheapest way to get performance out of the hardware, and devs have been using this strategy for some time. If a normal game like Resident Evil 2 Remake needs 6 or more cores to render properly today, imagine in 2 years. So a Ryzen 3000 3800X with 16 cores and 32 threads is not unrealistic: it can be used today and will probably still be fine in 5 years or so.

The other thing is that people assume a person will not use all the cores of something. That may have been true in the past; nowadays, no. Like I said with Resident Evil 2 Remake: anybody can buy and play that game, and without the cores it needs the game will just be unplayable. So there is nothing wrong with buying quad cores or even hexa cores, but be realistic about the limitations you are, or will be, facing in a year or two if you decide to use your computer for something other than browsing the internet, listening to music, watching a movie, or playing a video.
Posted on Reply
#159
Aquinus
Resident Wat-man
notbBut with ideas like that, your CPU moves from being a system-on-chip to a cluster-on-chip. It spends a lot of time deciding what to do with instructions (checking cores/nodes, queuing). The result could be more efficient and cheaper CPUs, but it also makes them less responsive. All this is visible in Zen and will be even more pronounced in Zen 2.
The problem is that a lot of this isn't on AMD to get right with the hardware, it's on Microsoft to get right with the CPU scheduler in Windows and the NT kernel. The 2990WX actually performs fairly well in Linux, and a lot of people think that the 2950X is a sweet spot in Windows because Windows doesn't seem very competent at handling the additional cores, but you're right. It's not really any different from what we're seeing with Threadripper, though, other than the memory controllers not being spread around the CPU. I do think that unifying memory access and having a common last-level cache will make a difference.

Either way, you're not getting one of these CPUs if the only thing you care about is single threaded performance.
notbBut with ideas like that, your CPU moves from being a system-on-chip to a cluster-on-chip. It spends a lot of time deciding what to do with instructions (checking cores/nodes, queuing). The result could be more efficient and cheaper CPUs, but it also makes them less responsive. All this is visible in Zen and will be even more pronounced in Zen 2.
It's really not any different than what we have already seen with Threadripper though, but I do think that going to an I/O chip from spreading I/O resources out will definitely make a difference.
notbThis is not the future of computing that I'd like to see. I'd rather see a single, fast (like 50 GHz) core with lots of cache and a rich instruction set. And I hope at least some of Intel's and AMD's R&D money is spent on that, not just on finding new ways to connect more and more cores.
Even CPU design has to be measured against reality. If it were that easy to crank out really high clocks without a problem, it would have been done already. I honestly think that what we're seeing is a logical evolution of CPU design. Hoping for a single fast core "like 50 GHz", as you suggest, is wishful thinking and isn't really grounded in reality.

Edit: Honestly, we're already seeing this in mobile CPUs where single threaded boost clocks are pretty high and multi-core boost clocks are really low to keep CPUs within power limits. This is just taking that to another level.
Posted on Reply
#160
efikkan
AquinusThe problem is that a lot of this isn't on AMD to get right with the hardware, it's on Microsoft to get right with the CPU scheduler in Windows and the NT kernel. The 2990WX actually performs fairly well in Linux, and a lot of people think that the 2950X is a sweet spot in Windows because Windows doesn't seem very competent at handling the additional cores, but you're right.
Well, the NT kernel is "ancient" and has fallen behind, but that's a whole other discussion. The Threadripper 2990WX can perform better in Linux, since scheduling in Linux is better and allows some tweaking, but it is still really hit and miss.

When things get so bad, as with the 2990WX, that it has to boot in different modes depending on the application you want to run, then it's unsuitable as a workstation CPU. We shouldn't and can't start to redesign kernels and applications around the "design flaws" of specific CPUs; it should be the responsibility of the CPU maker to design products that work well. And even if the CPU needs adjustments, they should be limited to parameters for the scheduler (as it already does with core configs, boost ranges, etc.), not a redesigned scheduler for Threadripper 3, one for Ryzen 3, one for Threadripper 4, etc.
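In practice, that parameter-level tuning already exists from userspace on Linux: you can pin a latency-sensitive process to the cores of a single die so the scheduler never migrates it across the slow interconnect. A minimal sketch (Linux-only; the assumption that cores 0-7 sit on one die is illustrative and depends on the real topology):

```python
# Pin the current process to one die's cores so its threads can't be
# migrated across the interconnect (Linux-only).
import os

node0_cores = set(range(8))           # assumption: cores 0-7 are on die/node 0
os.sched_setaffinity(0, node0_cores)  # pid 0 = the calling process

print("allowed cores:", sorted(os.sched_getaffinity(0)))
```

The same effect is available from the shell via `taskset -c 0-7` or `numactl --cpunodebind=0`, which is a long way from redesigning a kernel scheduler per CPU.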
Posted on Reply
#161
TheMadDutchDude
I just wanted to chime in here with gaming results...

I have a high refresh rate monitor and often see it maxed at 144 Hz at 1440p. It is obviously game-dependent, but BFV regularly sits above 120 with everything cranked up.

Whoever was claiming that a 5.2 GHz Intel chip is faster than an AMD chip clocked 1 GHz or more lower... no shit! Clock speed absolutely can make up for a massive FPS difference.

This is a thread about AMD, and we all love the systems. Stop slamming them into the ground because your beliefs are different. I've had all-Intel systems up until Kaby and can honestly tell you that there is next to zero difference if you don't look at the numbers.

Such a fool... honestly. If you believe that AMD is more than 10% behind Intel in terms of IPC because of this, that, or the other, then you've no idea what "IPC" even means. Go and educate yourself, young padawan, and then come back later. If they aren't even that close, then explain how the upcoming 8c/16t Ryzen chips are ahead of Intel in compute workloads like R15 while drawing 30% less power...? That was compared to a 9900K running at the same clocks. You don't compare IPC at different clock speeds.
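The normalization being described is simply performance = IPC × clock: hold the clock equal, or divide it out. A toy illustration (the chip names, scores, and clocks below are invented, not benchmark results):

```python
# IPC comparison: performance = IPC x frequency, so divide the clock out.
# The scores and clocks here are invented for illustration.
chips = {
    "Chip A": {"score": 2000, "clock_ghz": 4.0},
    "Chip B": {"score": 2100, "clock_ghz": 5.0},
}

for name, c in chips.items():
    print(f"{name}: {c['score'] / c['clock_ghz']:.0f} points/GHz")

# Chip A: 500 points/GHz  <- higher IPC despite the lower absolute score
# Chip B: 420 points/GHz
```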

Enough said. Go and educate yourself before belittling more people.
Posted on Reply
#162
TheGuruStud
Gotta love blaming AMD instead of the incompetent idiots at Microsoft. So, if microshit hadn't jumped on x64, you'd blame AMD for not making hardware that works? This is so goddamn laughable. Stop talking. Just go buy an Intel quad core and be happy lol. That's all you deserve.

Design flaws? Hmmm, it works just fine on Linux. Again, the design flaw is WINDOWS (or the app that can't handle high core counts)!
JFC, every day is like I'm taking crazy pills.

What's your excuse gonna be when Intel releases chiplets? Exactly, you'll be mum or praise it (like Apple tards do with every iPhone on 5-year-old hardware).
Posted on Reply
#163
notb
AquinusThe problem is that a lot of this isn't on AMD to get right with the hardware, it's on Microsoft to get right with the CPU scheduler in Windows and the NT kernel.
Disagree. Computers are about software. Hardware is just a tool - it should follow needs, not force changes.
Why do you expect Microsoft to adjust? Why can't AMD make a PC CPU that works well with the dominant PC OS? :)

And it doesn't end there, right? Literally since Zen came out with 8 slow cores, AMD fans have been arguing that all software in the world is written wrong.

Imagine AMD were making screwdrivers and one day decided they can somehow save a lot of money by making Zen octo keys instead of hex keys. And AMD fans, instead of having a laugh, said "yay! So innovative! 8 is more! All screws are wrong!"
The 2990WX actually performs fairly well in Linux
Linux is made with high-core multi-node systems in mind. It isn't surprising that it works better with a CPU like that one. It's quite a bit better on 2P machines as well.
Either way, you're not getting one of these CPUs if the only thing you care about is single threaded performance.
Why not?
I'm not getting a TR4 platform - that's for sure (I think it's pointless - just like Intel HEDT).
But I am very interested in server CPU performance and EPYC has the exact same problems Threadripper has.
I honestly think that what we're seeing is a logical evolution of CPU design.
Just how exactly is making more cores more logical than making faster cores?

Also, computing is fundamentally single-threaded. There are relatively few situations where you really need many independent cores (to run programs at exactly the same time).
Most software, even software that seems to utilize many cores perfectly well, has to be forced to work like that. And it doesn't benefit in any way; i.e., 2 slow cores could be replaced with one core twice as fast and it would work equally well.
The opposite is rarely true. And making a program parallel - assuming it's possible in a particular case - always greatly complicates both design and coding.
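Amdahl's law makes that asymmetry concrete: doubling single-core speed helps the entire program, while doubling cores only helps the parallel fraction. A quick sketch (the 80% parallel fraction is an arbitrary example, not a measurement):

```python
# Amdahl's law: speedup on n cores when a fraction p of the work parallelizes.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

p = 0.8  # assume 80% of the runtime parallelizes (arbitrary example)
print("one core, 2x faster:", 2.0)                   # speeds up everything
print("2 cores:", round(amdahl_speedup(p, 2), 2))    # 1.67x
print("16 cores:", round(amdahl_speedup(p, 16), 2))  # 4.0x
print("infinite cores:", round(1 / (1 - p), 2))      # caps at 5.0x
```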

I'm pretty positive that if we asked every programmer, every algorithm scientist, and every system architect in the world how much money they could save by making everything single-threaded, it could easily fund an R&D budget for GaN. It's just that the world doesn't work this way. We have to get there in a more self-organized, evolutionary way.
Hoping for a single fast core "like 50Ghz" as you suggest is wishful thinking and isn't really grounded in reality.
Of course it is. But we stick to silicon and invest in 16-core gaming CPUs.
I'm sure that as we hit a node wall on silicon, PC CPUs will try different routes.
GaN CPUs are pretty much possible - just expensive.
Graphene CPUs are being developed. Expect hundreds of GHz. :)
Posted on Reply
#164
R0H1T
notbDisagree. Computers are about software. Hardware is just a tool - it should follow needs, not force changes.
Why do you expect Microsoft to adjust? Why can't AMD make a PC CPU that works well with the dominant PC OS? :)

And it doesn't end there, right? Literally since Zen came out with 8 slow cores, AMD fans have been arguing that all software in the world is written wrong.

Imagine AMD were making screwdrivers and one day decided they can somehow save a lot of money by making Zen octo keys instead of hex keys. And AMD fans, instead of having a laugh, said "yay! So innovative! 8 is more! All screws are wrong!"

Linux is made with high-core multi-node systems in mind. It isn't surprising that it works better with a CPU like that one. It's quite a bit better on 2P machines as well.

Why not?
I'm not getting a TR4 platform - that's for sure (I think it's pointless - just like Intel HEDT).
But I am very interested in server CPU performance and EPYC has the exact same problems Threadripper has.

Just how exactly is making more cores more logical than making faster cores?

Also, computing is fundamentally single-threaded. There are relatively few situations where you really need many independent cores (to run programs at exactly the same time).
Most software, even software that seems to utilize many cores perfectly well, has to be forced to work like that. And it doesn't benefit in any way; i.e., 2 slow cores could be replaced with one core twice as fast and it would work equally well.
The opposite is rarely true. And making a program parallel - assuming it's possible in a particular case - always greatly complicates both design and coding.

I'm pretty positive that if we asked every programmer, every algorithm scientist, and every system architect in the world how much money they could save by making everything single-threaded, it could easily fund an R&D budget for GaN. It's just that the world doesn't work this way. We have to get there in a more self-organized, evolutionary way.

Of course it is. But we stick to silicon and invest in 16-core gaming CPUs.
I'm sure that as we hit a node wall on silicon, PC CPUs will try different routes.
GaN CPUs are pretty much possible - just expensive.
Graphene CPUs are being developed. Expect hundreds of GHz. :)
And there you go on with your inane rant mode!

Software changes according to the hardware prevalent at the time - it's called progress. Without AMD you'd probably be gluing 10 GHz NetBursts together right now - thanks, AMD!
And that dominant PC OS has also had to change, DX12 for instance - thanks again, AMD!

Slow 8 cores, WoW :rolleyes:

Linux is made by some of the brightest minds in the world; it has nothing to do with multi-node systems! If it were just limited to that, then you wouldn't have the most dominant OS in the world running on the Linux kernel, but hey, rant away! Windows caters to the lowest common denominator; that's their problem.

Yes, that must be the reason why Intel glued 56 cores together instead of releasing their 5 GHz 28-core 1 (2?) kW chilled monstrosity.

No it's not.

That's like - I'm not sure what to say here, did you sniff Intel's glue?

You know what's cheaper? Improving the Windows scheduler.

Way more expensive than your last proposition, but probably the future.
Posted on Reply
#165
efikkan
TheGuruStudGotta love blaming AMD instead of the incompetent idiots at Microsoft. So, if microshit hadn't jumped on x64, you'd blame AMD for not making hardware that works? This is so goddamn laughable. Stop talking. Just go buy an Intel quad core and be happy lol. That's all you deserve.

Design flaws? Hmmm, it works just fine on Linux. Again, the design flaw is WINDOWS (or the app that can't handle high core counts)!
This is the same excuse that was used back in the Bulldozer days. For years, AMD fans claimed Bulldozer was superior and that it was just bad OS kernels and applications holding it back.

But that's where you are totally wrong; while we certainly should focus on efficient multithreading, SIMD and cache optimizations in software, there is a huge difference in optimizing for good design vs. optimizing for "design flaws" in hardware.

Zen's problems are luckily small compared to Bulldozer's fundamental design issues, but claiming that Threadripper's problem is a lack of proper multicore scaling in software is 100% wrong; the problems Threadripper has are tied to its own self-inflicted design limitations causing issues with latency and memory operations. It has nothing to do with core count, as evidenced by Intel not having these issues.
TheGuruStudWhat's your excuse gonna be when Intel releases chiplets? Exactly, you'll be mum or praise it (like Apple tards do with every iPhone on 5-year-old hardware).
As always, what matters is real world performance, how it's achieved is less important.
Intel is working on chip stacking, and how they choose to interconnect those chips will determine how they perform, not whether it's one chip or more.
notbHardware is just a tool - it should follow needs, not force changes.

And it doesn't end there, right? Literally since Zen came out with 8 slow cores, AMD fans have been arguing that all software in the world is written wrong.
Exactly.
It reminds me of how many engineers approach a task: redesigning the problem to fit the solution instead of designing the solution to match the problem.
Posted on Reply
#166
Vayra86
notbDisagree. Computers are about software. Hardware is just a tool - it should follow needs, not force changes.
Why do you expect Microsoft to adjust? Why can't AMD make a PC CPU that works well with the dominant PC OS? :)

And it doesn't end there, right? Literally since Zen came out with 8 slow cores, AMD fans have been arguing that all software in the world is written wrong.

Imagine AMD were making screwdrivers and one day decided they can somehow save a lot of money by making Zen octo keys instead of hex keys. And AMD fans, instead of having a laugh, said "yay! So innovative! 8 is more! All screws are wrong!"

Linux is made with high-core multi-node systems in mind. It isn't surprising that it works better with a CPU like that one. It's quite a bit better on 2P machines as well.

Why not?
I'm not getting a TR4 platform - that's for sure (I think it's pointless - just like Intel HEDT).
But I am very interested in server CPU performance and EPYC has the exact same problems Threadripper has.

Just how exactly is making more cores more logical than making faster cores?

Also, computing is fundamentally single-threaded. There are relatively few situations where you really need many independent cores (to run programs at exactly the same time).
Most software, even software that seems to utilize many cores perfectly well, has to be forced to work like that. And it doesn't benefit in any way; i.e., 2 slow cores could be replaced with one core twice as fast and it would work equally well.
The opposite is rarely true. And making a program parallel - assuming it's possible in a particular case - always greatly complicates both design and coding.

I'm pretty positive that if we asked every programmer, every algorithm scientist, and every system architect in the world how much money they could save by making everything single-threaded, it could easily fund an R&D budget for GaN. It's just that the world doesn't work this way. We have to get there in a more self-organized, evolutionary way.

Of course it is. But we stick to silicon and invest in 16-core gaming CPUs.
I'm sure that as we hit a node wall on silicon, PC CPUs will try different routes.
GaN CPUs are pretty much possible - just expensive.
Graphene CPUs are being developed. Expect hundreds of GHz. :)
Did Intel offer you that CPU engineering job yet?
Posted on Reply
#167
notb
R0H1TAnd there you go on with your inane rant mode!

Software changes according to the hardware prevalent at the time
No, it doesn't. Performance changes according to the hardware.

Software is designed to do the tasks we need it for. Sometimes it ends up being slow because that's how computers look at a given moment. We can't help that. We still need to get the job done.
Linux is made by some of the brightest minds in the world; it has nothing to do with multi-node systems!
LOL at the "brightest minds in the world". They're just programmers. Good ones, but let's not get overexcited.
The time of the "brightest minds in the world" is a bit too valuable for writing code.
Even within a single company or software team, the best or most experienced people usually spend relatively little time coding.

And yes, Linux development today is driven by enterprises that need it for high-performance systems (from big SAP servers to supercomputers). Companies like Intel, Red Hat, IBM, SUSE, Oracle, AMD, Nvidia and Mellanox are among the top contributors. The rest is focused mostly on smartphones/embedded.

The importance of Linux in PCs is very small. It's really not that hard to understand why a rebranded EPYC works better with Linux than with Windows.
But in the end you need PCs for people to actually benefit from what these powerful servers provide. And PCs need purpose-built hardware and software (including OS).
Windows caters to the lowest common denominator; that's their problem.
Windows caters to a normal user and aims at easy and smooth operation (like Mac OS). It's a different target than that of most Linux distros.
Yes, that must be the reason why Intel glued 56 cores together instead of releasing their 5 GHz 28-core 1 (2?) kW chilled monstrosity.
Intel glued 56 cores because they could. Because why not? Because it's an attractive, cost-efficient product.

As for high-core-count models, Intel offers a very wide choice of CPUs boosting upwards of 3.5 GHz. A few Xeons are past the 4.0 GHz barrier already.
It's becoming the standard today.
In the newly announced Cascade Lake-SP lineup, the majority of CPUs will be able to boost to 3.9 or 4.0 GHz.
Sadly, this is not the case with the 56-core monster. It can go "just" to 3.8 GHz.

For continuous high load there are also CPUs with high base clock, like Xeon 8168 (24C, 3.4/3.7) or 6154 (18C, 3.7/3.7).

But you would have to know how server CPUs are used to understand why this is important. :)

AMD's EPYC CPUs are very slow by comparison (left to rot by AMD), but they will most likely catch up next year.
Posted on Reply
#168
Aquinus
Resident Wat-man
notbDisagree. Computers are about software. Hardware is just a tool - it should follow needs, not force changes.
Why do you expect Microsoft to adjust? Why can't AMD make a PC CPU that works well with the dominant PC OS? :)
I expect a CPU scheduler that isn't garbage, and if you're buying 32c/64t when you don't need them (like for gaming), then you're just an idiot who likes to piss away money. It's like buying a 20-core Xeon and then whining about single-threaded performance when you opted for more cores. It's laughable.
notbLinux is made with high-core multi-node systems in mind. It isn't surprising that it works better with a CPU like that one. It's quite a bit better on 2P machines as well.
Hence why the NT kernel's scheduler is shit, but it's not a problem with the hardware if the OS can't effectively use the hardware. AMD can't fix poor design decisions in the OS, and it's even more laughable to think that they can, or that they should bend over backwards for it. That kind of mentality would have said we should never have gotten NT and should still be using DOS-based Windows.
notbWhy not?
I'm not getting a TR4 platform - that's for sure (I think it's pointless - just like Intel HEDT).
But I am very interested in server CPU performance and EPYC has the exact same problems Threadripper has.
Because normally one buys more cores to... you know... get more cores? If you're only interested in single threaded performance, you're not interested in TR4 chips, you're interested in burning a hole in your pocket. :kookoo:
notbJust how exactly is making more cores more logical than making faster cores?

Also, computing is fundamentally single-threaded. There are relatively few situations where you really need many independent cores (to run programs at exactly the same time).
Most software, even software that seems to utilize many cores perfectly well, has to be forced to work like that. And it doesn't benefit in any way; i.e., 2 slow cores could be replaced with one core twice as fast and it would work equally well.
The opposite is rarely true. And making a program parallel - assuming it's possible in a particular case - always greatly complicates both design and coding.

I'm pretty positive that if we asked every programmer, every algorithm scientist, and every system architect in the world how much money they could save by making everything single-threaded, it could easily fund an R&D budget for GaN. It's just that the world doesn't work this way. We have to get there in a more self-organized, evolutionary way.
Of course it's easier to write single-threaded code. You have fewer issues to deal with, but that doesn't mean it's the right decision for the workload. Also, I write multithreaded code all the time in the day job, and let me tell you something: I don't write any data processing job that uses a single core. I use stream abstractions and pipelines all over the place, because changing a single argument to a function call can change the amount of parallelism I get at any stage in the pipeline. It also helps to use a language that's conducive to writing multi-threaded code. Take my main language of choice, Clojure: it's a Lisp-1 with immutability through and through and a bunch of mechanisms for controlled behavior around mutable state. It's a very different animal from writing multi-threaded code in, say, Java or C#, and it's really not that difficult.

More cores make sense because they can distribute load more effectively without running at higher clocks, and for workloads where you already have a bunch of threads running (even if they're not fully taxing the system) there is an efficiency benefit; but we have boost clocks because we still care about single-threaded performance.

Also, you're running on the assumption that the time to write the application is the only cost. What about the time it takes for that application to run? Time is money. My ETL jobs would be practically useless if they took a full day to run, which is why they're set up so that concurrency is tunable in terms of both parallelism and batch size.
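The pipelines described here are in Clojure, which isn't shown, but the "concurrency as a tunable argument" idea carries over to any language. A minimal Python sketch (the workload and the worker/batch numbers are placeholders, not the actual jobs):

```python
# Parallelism and batch size as tunable arguments, in the spirit of the
# pipelines described above (Python stand-in; the workload is a placeholder).
from concurrent.futures import ThreadPoolExecutor
from itertools import islice

def batches(items, size):
    """Yield successive batches of `size` items."""
    it = iter(items)
    while batch := list(islice(it, size)):
        yield batch

def process(batch):
    return sum(x * x for x in batch)  # placeholder work

def run_job(items, workers=4, batch_size=100):
    # Changing `workers` or `batch_size` retunes the whole job
    # without touching the processing logic.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process, batches(items, batch_size)))

print(run_job(range(10_000), workers=8, batch_size=500))
```

The processing logic never changes; retuning the job for a bigger box is just a matter of passing different `workers` and `batch_size` values.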
notbOf course it is. But we stick to silicon and invest in 16-core gaming CPUs.
I'm sure that as we hit a node wall on silicon, PC CPUs will try different routes.
GaN CPUs are pretty much possible - just expensive.
Graphene CPUs are being developed. Expect hundreds of GHz. :)
You don't need a 16c CPU for a gaming machine, which is sort of my point. Also, graphene is vaporware until we actually see it in production at a price that's not outlandish; otherwise it's just a pipe dream. We can make CPUs out of a number of different materials, but that doesn't mean they're viable options. Once again, all of this needs to be measured against reality.
efikkanExactly.
It reminds me of how many engineers approach a task: redesigning the problem to fit the solution instead of designing the solution to match the problem.
That's bullshit and a terrible answer to the problem. Go back to using MS-DOS if that's how you feel. Oh wait, you like multi-core scheduling. This is like saying that everything should be built around the crappiest part of the product, which makes no sense. You fix the shitty part; you don't build around it. :kookoo:
notbAnd yes, Linux development today is driven by enterprises that need it for high-performance systems (from big SAP servers to supercomputers). Companies like Intel, Red Hat, IBM, SUSE, Oracle, AMD, Nvidia and Mellanox are among the top contributors. The rest is focused mostly on smartphones/embedded.

The importance of Linux in PCs is very small. It's really not that hard to understand why a rebranded EPYC works better with Linux than with Windows.
But in the end you need PCs for people to actually benefit from what these powerful servers provide. And PCs need purpose-built hardware and software (including OS).
You mean like how the importance for these kinds of chips for gaming is really small? :laugh:
notbWindows caters to a normal user and aims at easy and smooth operation (like Mac OS). It's a different target than that of most Linux distros.
...and normal users don't need a Threadripper, right? :slap:
notbFor continuous high load there are also CPUs with high base clock, like Xeon 8168 (24C, 3.4/3.7) or 6154 (18C, 3.7/3.7).
If you have a workload that can saturate one of those CPUs, then you're going to benefit from more cores, so I don't really see what your problem is. It's almost like you want a server CPU and a CPU that's good for gaming at the same time. Yet another pipe dream.

I'd love to get some of that stuff you're smoking though. It's gotta be a hell of a drug.
Posted on Reply
#169
EarthDog
Aquinusand normal users don't need a Threadripper, right? :slap:
Normal users don't need more than what the mainstream had to offer two generations ago as far as c/t count goes, and won't need more for another few years at least. But due to the limitations of silicon, it seems we can't get much faster clocks, and IPC gains have been a joke for the most part from both camps (outside of Zen, after nearly a decade of incremental trash from both sides).

At least in the past, clock speed and IPC increases improved everyone's computing experience. Today, buying more cores yields nothing unless you actually have software that can utilize (not just use) them, or you have a use case and are close to being maxed out. Maybe heavy multithreading is gaining traction now on the software side... it sure hadn't before, even though we had octo cores on the cheap from AMD for several years already, so I can't say I have any buy-in. IMO, it won't be this generation where the switch flips, either.

The real deal here for "normal users" is buying an appropriately sized (c/t) CPU for your needs for the next few years on the cheap. And for 95% of people, even here at a so-called enthusiast site, that is still no more than a 6c/12t or 8c/16t CPU. More than 8c/16t on the mainstream platform, right now, is absolutely ridiculous and a ploy for those not in the know to buy simply because there are more cores.
Posted on Reply
#170
Aquinus
Resident Wat-man
EarthDogThe real deal here for "normal users" is buying an appropriately sized (c/t) CPU for your needs for the next few years on the cheap. And for 95% of people, even here at a so-called enthusiast site, that is still no more than a 6c/12t or 8c/16t CPU. More than 8c/16t on the mainstream platform, right now, is absolutely ridiculous and a ploy for those not in the know to buy simply because there are more cores.
People's ignorance doesn't make these CPUs useless, though. It's like buying a huge vehicle and then being taken aback by the terrible gas mileage of its huge V8. That's not the vehicle's fault; it's the owner's fault for not understanding what they bought. Also, software for the run-of-the-mill consumer is going to be built for what developers expect to be in a mainline system. Now that quad cores are pervasive, a lot more software can take advantage of those cores. This is exactly why building hardware around arguably garbage software is dumb: hardware advancing past the platforms normal people have is what influences software design. Otherwise we'd still be using single-core CPUs and MS-DOS, because that's all DOS supported.
EarthDogAnd for 95% of people, even here at a so-called enthusiast site, that is still no more than a 6c/12t or 8c/16t CPU.
Enthusiast sometimes means people who actually use computers to do useful things, like software engineers, DBAs, or people who do things like genomics. Other times it's people who want the best hardware for gaming. Other times it's people who just have more money than brains. The reality is that an "enthusiast" when it comes to computers isn't likely the kind of person who actually needs this kind of compute power. Buying a CPU just for cores when you're a run-of-the-mill enthusiast is like buying a huge truck with a huge diesel engine because it's got a lot of displacement. It's a poor decision on the part of the enthusiast.

The reality is that most people here at TPU who call themselves enthusiasts tend to buy machines for gaming or for bragging rights (benchmarking), not because they actually need those cores.
Posted on Reply
#171
Metroid
AquinusThe reality is that most people here at TPU who call themselves enthusiasts tend to buy machines for gaming or for bragging rights (benchmarking), not because they actually need those cores.
I would not blame desktop users for wanting more cores; even if they don't use them now, they might use them later, although more cores will always cost more now than later, as core counts keep getting cheaper as we progress. So let's say it's kind of wasted money, but like I said many times, an owner nowadays might be using the cores without even knowing the cores are in use. The real bragging-rights purchase I see worldwide has a name: iPhone. Too expensive. People usually buy an iPhone, or one of those other expensive phones, for WhatsApp, Chrome, etc., although Chrome might use those cores if many tabs are open. What I see is a waste of money: they could buy an octa-core phone for $150, but they prefer to buy an iPhone. Probably they are used to Apple services and products. The brainwash is strong when something that costs 10 times less is just as useful as an iPhone.
Posted on Reply
#172
Aquinus
Resident Wat-man
MetroidThe brainwash is strong when something that costs 10 times less is just as useful as an iPhone.
That's probably a wee bit of an exaggeration, but I'd agree with the overall sentiment.
Posted on Reply
#173
TheoneandonlyMrK
notbLook at 2990WX. 32 cores, 4 channels. Awful results.
Zen 2 brings the I/O die and chiplets. It's even more complicated and theoretically even more prone to the latency issues that plagued Zen 1.

But with ideas like that, your CPU moves from being a system-on-chip to a cluster-on-chip. It spends a lot of time deciding what to do with instructions (checking cores/nodes, queuing). The result could be more efficient and cheaper CPUs, but it also makes them less responsive. All this is visible in Zen and will be even more pronounced in Zen 2.

This is not the future of computing that I'd like to see. I'd rather see a single, fast (like 50 GHz) core with lots of cache and a rich instruction set. And I hope at least some of Intel's and AMD's R&D money is spent on that, not just on finding new ways to connect more and more cores.
A fast single 50 GHz core? You're so far from possible that you're in dreamland. We are nowhere near buying optics-based transistors or any graphene version of a transistor, and I doubt they would be sold as single units/cores; that makes for a binning nightmare - it either works or it's literally in the bin. Shows what you know.

Future nodes clearly dictate that it's too expensive to make the whole chip on the cutting-edge node, so you will see Intel follow suit with chiplets, and they have already stated they will; they're busy on that now - see Foveros and Intel's many statements about a modular future with EMIB interconnects and 3D stacking.

@Aquinus I agree - more people should join WCG; let's get some research done and out of the way.
Posted on Reply
#174
EarthDog
I guess that's what an enthusiast is. It's too bad the desktop market is supported not by enthusiasts but by the mainstream. It's THOSE people, the overwhelming majority, who tend to lose out in the premature core wars.

Don't get me wrong, I understand the hardware needs to be here first, but we've had hex and octo cores for 8 and 6 years respectively, and we haven't really seen a momentum shift yet. I think it's a lot closer to reality, but still a generation or two away from really making a difference for the majority. The "use it later" argument is something of a given, as we've been hearing it for years. It just depends on the use model of the PC/user.
Posted on Reply
#175
Aquinus
Resident Wat-man
EarthDogDon't get me wrong, I understand the hardware needs to be here first, but we've had hex and octo cores for 8 and 6 years respectively, and we haven't really seen a momentum shift yet. I think it's a lot closer to reality, but still a generation or two away from really making a difference for the majority. The "use it later" argument is something of a given, as we've been hearing it for years. It just depends on the use model of the PC/user.
Sure, but you have to consider what that hardware has been in and what typical consumers are buying. The reality is that it hasn't been in laptops and the market is hungry for mobile devices. We're only now starting to see laptops with 6c/12t.
Posted on Reply