Friday, May 3rd 2019
Possible Listings of AMD Ryzen 9 3800X, Ryzen 7 3700X, Ryzen 5 3600X Surface in Online Stores
Remember to bring a good deal of salt to the table here, as this story's environment warrants it. Some online stores from Vietnam and Turkey have started listing AMD's Ryzen 3000-series CPUs based on the Zen 2 architecture. The lineup so far includes a Ryzen 9 3800X, Ryzen 7 3700X, and Ryzen 5 3600X, and the specs on these are... incredible, to say the least.
The Ryzen 9 3800X is being listed with 32 threads, implying a 16-core processor. Clock speeds are reported as 3.9 GHz base with up to 4.7 GHz Turbo on both the Turkish and Vietnamese e-tailers' webpages. The Turkish store stands alone in listing AMD's Ryzen 7 3700X, reported as a 12-core, 24-thread CPU running at an extremely impressive 4.2 GHz base and 5.0 GHz Boost. Another listing on the same website, for the Ryzen 5 3600X, details the processor as having 8 physical cores and running at 4.0 GHz base and 4.8 GHz Boost clocks.
Sources:
TPU Forums @Thread starter R0H1T, nguyencongpc.vn, ebrarbilgisayar.com
242 Comments on Possible Listings of AMD Ryzen 9 3800X, Ryzen 7 3700X, Ryzen 5 3600X Surface in Online Stores
I also want to let you know that I get over 140 FPS in BFV (since that's your game of choice) in multiplayer. Very similar to the exact performance I got on my 2080 and 9700K; I actually feel I'm getting better performance. Both at 1440p with the same settings. Joke's on you, bud. Stop watching YouTube videos for performance metrics.
Some people understand that you don't need a 9900K to get things done. I wanted a nice, quiet computer. I also stated that the build is being updated to Zen 2 right away, so it might just be better than a 9900K; we don't know yet. But even if it's not, who cares? And who cares about power draw when you have a Prime Ultra Titanium 1000 W PSU? My entire point was that you don't need Intel and Nvidia for a high-end PC.
You sound petty thinking the 9900K is the only way to go when spending lots of money. It's also apparent you haven't used an AMD card, otherwise you would realize the Wattman settings are superior to Nvidia's offerings. Who spends a lot of money and then complains about power draw?
I can honestly say, for a fact, 100%, that I am much happier with my AMD build (Ryzen, with new gen coming soon, Radeon VII, 1 TB SX8200 Pro, 3600 C16 Samsung B-die Trident Z RGB, and a Titanium PSU, all in a custom loop) than I was with my 9700K and FTW3 2080. It's not even a comparison in my book. When I had the Intel PC I just felt like I'd followed the crowd, and I constantly felt I was missing something. My PC is now exactly the way I wanted it because I didn't have to waste money on the Intel and Nvidia tax.
Now, I never said that if you need the absolute best you shouldn't go Intel or Nvidia. By all means, do what makes you happy. I'm way happier with my quiet water-cooled build than I was with my Intel + 2080, and I guarantee you I get similar performance too. My Fire Strike graphics score is almost 35k. My Superposition score is higher than any other Radeon VII score you can find online. UserBenchmark doesn't mean much, but my VII does 173%, my 1080 Ti did 170%, and my 2080 did 178%. Yeah, my 2080 pulled ahead a little; so what? That doesn't mean the VII isn't high end. My Vega 64 LC had amazing performance as well, better than any 1080, lol. People need to stop watching reviews on YouTube because most of them put AMD in a bad light. Gamers Nexus is about the only one that actually tries.
Can't believe people care about power consumption, like it matters. Are you one of those people who think the TDP of the 9900K is really 95 W? Hahaha. Try over 200 W to reach peak Turbo on all cores.
New build with Radeon VII.. pcpartpicker.com/b/ZcsZxr
Old build with 9700k and 2080.. Lame homie.. pcpartpicker.com/b/BsFtt6
Some people may genuinely need many cores for specific tasks, but that is a niche market. For the majority, a Zen chip with 6-8 cores or a Zen 2 chip with 12-16 cores will amount to the same thing in the end.
Zen 2 brings the I/O die and chiplets. It's even more complicated and, in theory, even more prone to the latency issues that plagued Zen 1. With ideas like that, your CPU moves from being a system-on-chip to a cluster-on-chip. It spends a lot of time deciding what to do with instructions (checking cores/nodes, queuing). The result could be more efficient and cheaper CPUs, but it also makes them less responsive. All of this is visible in Zen and will be even more pronounced in Zen 2.
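The responsiveness worry above is essentially an inter-core communication latency argument. As a rough illustration of how such latency is measured, here is a minimal thread ping-pong sketch in Python; note that interpreter and GIL overhead dominate here, so this only demonstrates the measurement idea, not real cross-CCX latency (a real test would pin native threads to specific cores).

```python
import threading
import time

def pingpong(iterations=10_000):
    """Average round-trip latency of two threads handing a token
    back and forth via events (method sketch only)."""
    ping, pong = threading.Event(), threading.Event()

    def responder():
        for _ in range(iterations):
            ping.wait()   # wait for the token...
            ping.clear()
            pong.set()    # ...and hand it straight back

    t = threading.Thread(target=responder)
    t.start()
    start = time.perf_counter()
    for _ in range(iterations):
        ping.set()
        pong.wait()
        pong.clear()
    elapsed = time.perf_counter() - start
    t.join()
    return elapsed / iterations  # seconds per round trip

latency = pingpong()
print(f"avg round trip: {latency * 1e6:.1f} us")
```

On a chiplet design, the same handoff between cores on different dies has to cross the I/O die, which is where the extra latency the comment describes would show up.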
This is not the future of computing that I'd like to see. I'd rather see a single, fast (like 50GHz) core with lots of cache and a rich instruction set. And I hope at least some of Intel's and AMD's R&D money is spent on that, not just on finding new ways to connect more and more cores.
www.sweclockers.com/test/25500-amd-ryzen-7-2700x-och-ryzen-5-2600x-pinnacle-ridge/28#content
You're literally one of those people that fall for YouTube reviews. My Vega 64 and Radeon VII both do better than any review you can find on the internet. You need a break at life. Go on vacation or something.. Learn something.. Because as it stands, you know nothing about hardware.
The other thing is that people assume nobody will use all the cores. That may have been true in the past, but not nowadays. Like I said, take the Resident Evil 2 remake: anybody can buy and play that game, and without the cores it needs, the game will simply be unplayable. So there's nothing wrong with buying a quad-core or even a hexa-core, but be realistic about the limitations you are facing, or will face in a year or two, if you decide to use your computer for something other than browsing the internet, listening to music, watching a movie, or playing a video.
Either way, you're not getting one of these CPUs if the only thing you care about is single-threaded performance. It's really not any different from what we have already seen with Threadripper, though I do think that moving from spread-out I/O resources to an I/O chip will definitely make a difference. Even CPU design has to be measured against reality. If it were easy to crank really high clocks without problems, it would have been done already. I honestly think what we're seeing is a logical evolution of CPU design. Hoping for a single fast core "like 50 GHz," as you suggest, is wishful thinking and isn't really grounded in reality.
Edit: Honestly, we're already seeing this in mobile CPUs where single threaded boost clocks are pretty high and multi-core boost clocks are really low to keep CPUs within power limits. This is just taking that to another level.
When things get so bad, as with the 2990WX having to boot in different modes depending on the application you want to run, it's unsuitable as a workstation CPU. We shouldn't, and can't, start redesigning kernels and applications around the "design flaws" of specific CPUs; it should be the CPU maker's responsibility to design products that work well. And even if the CPU needs adjustments, those should be limited to scheduler parameters (as already happens with core configs, boost ranges, etc.), not a redesigned scheduler for Threadripper 3, another for Ryzen 3, another for Threadripper 4, and so on.
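Parameter-style tuning of the kind described already exists at the OS level in the form of CPU affinity. A minimal sketch using the Linux-only affinity calls in Python's standard library (the function name `pin_to_cores` is just an illustrative helper, not a real API):

```python
import os

def pin_to_cores(cores):
    """Restrict this process to the given CPU cores.
    Uses Linux-only os.sched_setaffinity/getaffinity."""
    allowed = os.sched_getaffinity(0)  # cores we are permitted to use
    target = set(cores) & allowed
    if not target:
        raise ValueError("none of the requested cores are available")
    os.sched_setaffinity(0, target)
    return target

# Pin to the first available core, the way a tuned policy might keep a
# latency-sensitive process on a single die/node instead of letting the
# scheduler bounce it across chiplets.
first = min(os.sched_getaffinity(0))
pinned = pin_to_cores({first})
```

This is the parameter-driven approach the comment argues for: the kernel keeps one generic scheduler, and per-CPU quirks are expressed as affinity masks and topology hints rather than bespoke scheduler rewrites.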
I have a high-refresh-rate monitor and often see it maxed at 144 Hz at 1440p. It is obviously game dependent, but BFV regularly sits above 120 with everything cranked up.
Whoever was claiming that a 5.2 GHz Intel chip is faster than an AMD chip clocked 1 GHz or more lower... no kidding! A clock-speed gap that large can absolutely account for a massive FPS difference.
This is a thread about AMD, and we all love these systems. Stop running them into the ground because your beliefs are different. I've had all-Intel systems up until Kaby Lake and can honestly tell you that there is next to zero difference if you don't look at the numbers.
Such a fool... honestly. If you believe AMD is more than 10% behind Intel in IPC because of this, that, or the other, then you've no idea what "IPC" even means. Go and educate yourself, young padawan, and then come back later. If they aren't even close, then explain how the next 8c/16t Ryzen chips are ahead of Intel in compute workloads like R15 while drawing 30% less power. That was compared to a 9900K running at the same clocks. You don't compare IPC at different clock speeds.
Enough said. Go and educate yourself before belittling more people.
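The matched-clocks point can be made concrete with a toy normalization: dividing a benchmark score by the clock it ran at gives a rough per-clock throughput proxy, which is only comparable between chips when the benchmark and clocks match. The numbers below are purely illustrative, not real measurements:

```python
def ipc_proxy(score, clock_ghz):
    """Benchmark score per GHz -- a crude per-clock throughput proxy.
    Only meaningful when comparing runs of the SAME benchmark at the
    SAME clock; at different clocks the ratio mixes IPC and frequency."""
    return score / clock_ghz

# Hypothetical multi-core scores for two chips locked at 4.0 GHz
# (made-up values for illustration only):
chip_a = ipc_proxy(score=2050, clock_ghz=4.0)
chip_b = ipc_proxy(score=1900, clock_ghz=4.0)
advantage = (chip_a / chip_b - 1) * 100
print(f"chip A per-clock advantage: {advantage:.1f}%")  # ~7.9%
```

Comparing a 5 GHz chip's score against a 4 GHz chip's score without this normalization says nothing about IPC, which is the comment's point.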
Design flaws? Hmmm, it works just fine on Linux. Again, the design flaw is WINDOWS (or the app that can't handle high core counts)!
JFC, every day is like I'm taking crazy pills.
What's your excuse going to be when Intel releases chiplets? Exactly: you'll stay mum or praise it (like Apple fans do with every iPhone on five-year-old hardware).
Why do you expect Microsoft to adjust? Why can't AMD make a PC CPU that works well with the dominant PC OS? :)
And it doesn't end there, right? Ever since Zen came out with 8 slow cores, AMD fans have been arguing that all the software in the world is written wrong.
Imagine AMD made screwdrivers and one day decided they could save a lot of money by making octo keys instead of hex keys. And AMD fans, instead of having a laugh, said, "Yay! So innovative! 8 is more! All screws are wrong!" Linux is made with high-core, multi-node systems in mind. It isn't surprising that it works better with a CPU like this one. It's quite a bit better on 2P machines as well. Why not?
I'm not getting a TR4 platform - that's for sure (I think it's pointless - just like Intel HEDT).
But I am very interested in server CPU performance and EPYC has the exact same problems Threadripper has. Just how exactly is making more cores more logical than making faster cores?
Also, computing is fundamentally single-threaded. There are relatively few situations where you really need many independent cores (to run programs at exactly the same time).
Most software, even software that seems to utilize many cores perfectly well, has to be forced to work that way. And it doesn't benefit in any way; i.e., two slow cores could be replaced with one core twice as fast and it would work equally well.
The opposite is rarely true. And making a program parallel, assuming it's even possible in a particular case, always greatly complicates both design and coding.
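Whether two slow cores can stand in for one core twice as fast is exactly what Amdahl's law quantifies: the serial fraction of a program caps the benefit of extra cores, while a faster single core speeds up everything. A quick sketch:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: overall speedup when a fraction p of the work
    parallelizes perfectly across n cores and the rest stays serial."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_cores)

# A single core that is 2x faster speeds up everything by 2x.
# Two cores only match that if the workload is 100% parallel:
print(amdahl_speedup(1.0, 2))         # 2.0
# At 80% parallel, two cores manage only ~1.67x:
print(amdahl_speedup(0.8, 2))
# ...and even unlimited cores cap out at 1/(1-p) = 5x:
print(amdahl_speedup(0.8, 1_000_000))
```

This is why "one 2x faster core beats two slow cores" holds for most software, and why the reverse only holds for workloads that are almost entirely parallel.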
I'm pretty positive that if we asked every programmer, algorithm scientist, and system architect in the world how much effort could be saved by keeping everything single-threaded, it could easily fund an R&D budget for GaN. It's just that the world doesn't work this way. We have to get there in a more self-organized, evolutionary way. Of course it is. But we stick to silicon and invest in 16-core gaming CPUs.
I'm sure that as we hit a node wall on silicon, PC CPUs will try different routes.
GaN CPUs are pretty much possible - just expensive.
Graphene CPUs are being developed. Expect hundreds of GHz. :)
Software changes according to the hardware prevalent at the time; it's called progress. Without AMD you'd probably be gluing 10 GHz NetBursts together right now. Thanks, AMD!
And that dominant PC OS has also had to change (DX12, for instance). Thanks again, AMD!
Slow 8 cores, WoW :rolleyes:
Linux is made by some of the brightest minds in the world, and it's not limited to multi-node systems! If it were, you wouldn't have the most dominant OS in the world running on the Linux kernel. But hey, rant away! Windows caters to the lowest common denominator; that's their problem.
Yes, that must be the reason Intel glued 56 cores together instead of releasing their 5 GHz, 28-core, 1 (or 2?) kW chilled monstrosity.
No it's not.
That's like... I'm not sure what to say here. Did you sniff Intel's glue?
You know what's cheaper? Improving the Windows scheduler.
Way more expensive than your last proposition, but probably the future.
But that's where you are totally wrong: while we certainly should focus on efficient multithreading, SIMD, and cache optimizations in software, there is a huge difference between optimizing for good design and optimizing around "design flaws" in hardware.
Zen's problems are luckily small compared to Bulldozer's fundamental design issues, but claiming that Threadripper's problem is a lack of proper multicore scaling in software is 100% wrong. The problems Threadripper has are tied to its own self-inflicted design limitations, which cause issues with latency and memory operations; they have nothing to do with core count, as evidenced by Intel not having these issues. As always, what matters is real-world performance; how it's achieved is less important.
Intel is working on chip stacking, and how they choose to interconnect the chips will determine how they perform, not whether it's one chip or several. Exactly.
It reminds me of how many engineers approach a task: redesigning the problem to fit the solution instead of designing the solution to match the problem.
Software is designed to do the tasks we need it for. Sometimes it ends up being slow because that's how computers look at a given moment. We can't help that; we still need to get the job done. LOL at "brightest minds in the world." They're just programmers. Good ones, but let's not get overexcited.
The time of the "brightest minds in the world" is a bit too valuable for writing code.
Even within a single company or software team, the best or most experienced people usually spend relatively little time coding.
And yes, Linux development today is driven by enterprises that need it for high-performance systems (from big SAP servers to supercomputers). Companies like Intel, Red Hat, IBM, SUSE, Oracle, AMD, Nvidia, and Mellanox are among the top contributors. The rest of development is focused mostly on smartphones and embedded.
The importance of Linux on PCs is very small. It's really not that hard to understand why a rebranded EPYC works better with Linux than with Windows.
But in the end you need PCs for people to actually benefit from what these powerful servers provide, and PCs need purpose-built hardware and software (including the OS). Windows caters to the normal user and aims at easy, smooth operation (like macOS). It's a different target than that of most Linux distros. Intel glued 56 cores together because they could. Why not? It's an attractive, cost-efficient product.
As for high-core models, Intel offers a very wide choice of CPUs boosting upwards of 3.5 GHz. A few Xeons are past the 4.0 GHz barrier already.
It's becoming a standard today.
In the newly announced Cascade Lake-SP, the majority of CPUs will be able to boost to 3.9 or 4.0 GHz.
Sadly, this is not the case with the 56-core monster; it only goes up to 3.8 GHz.
For continuous high load there are also CPUs with high base clocks, like the Xeon 8168 (24C, 3.4/3.7 GHz) or 6154 (18C, 3.7/3.7 GHz).
But you would have to know how server CPUs are used to understand why this is important. :)
AMD's EPYC chips are very slow by comparison (left to rot by AMD), but they will most likely catch up next year.
More cores make sense because they can distribute load more effectively without running at higher clocks, and for workloads that already have a bunch of threads running (even if they're not fully taxing the system), there is an efficiency benefit there. But we have boost clocks because we still care about single-threaded performance.
Also, you're running on the assumption that the time to write the application is the only cost. What about the time it takes for that application to run? Time is money. My ETL jobs would be practically useless if they took a full day to run, which is why they're set up so that concurrency is tunable in terms of both parallelism and batch size.
You don't need a 16c CPU for a gaming machine, which is sort of my point. Also, graphene is vaporware until we actually see it in production at a price that isn't outlandish; otherwise it's just a pipe dream. We can make CPUs out of a number of different materials, but that doesn't mean they're viable options. Once again, all of this needs to be measured against reality.
That's bullshit and a terrible answer to the problem. Go back to using MS-DOS if that's how you feel. Oh wait, you like multi-core scheduling. This is like saying that everything should be built around the crappiest part of the product, which makes no sense. You fix the shitty part; you don't build around it. :kookoo:
You mean like how the importance of these kinds of chips for gaming is really small? :laugh: ...and normal users don't need a Threadripper, right? :slap: If you have a workload that can saturate one of those CPUs, then you're going to benefit from more cores, so I don't really see what your problem is. It's almost as if you want a server CPU and a CPU that's good for gaming at the same time. Yet another pipe dream.
I'd love to get some of whatever you're smoking, though. It's got to be a hell of a drug.
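The tunable ETL concurrency mentioned above can be sketched in a few lines with Python's standard thread pool. The helper name `run_batches` and the toy workload are illustrative, not any real ETL framework:

```python
from concurrent.futures import ThreadPoolExecutor

def run_batches(items, job, parallelism=4, batch_size=100):
    """Run `job` over `items` in fixed-size batches with a tunable
    worker count -- the two knobs (parallelism, batch size) the
    comment describes. `job` takes one batch (a list) and returns
    one result per batch."""
    batches = [items[i:i + batch_size]
               for i in range(0, len(items), batch_size)]
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        return list(pool.map(job, batches))

# Toy job: sum each batch. On a box with more cores you raise
# `parallelism`; under memory pressure you shrink `batch_size`.
totals = run_batches(list(range(1000)), sum, parallelism=8, batch_size=250)
print(totals)  # [31125, 93625, 156125, 218625]
```

Exposing both knobs is what lets the same job scale up on a high-core-count machine without rewriting it, which is the efficiency argument for more cores in these workloads.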
At least in the past, clock speed and IPC increases improved everyone's computing experience. Today, buying more cores yields nothing unless you actually have software that can utilize (not merely use) them, or you have a use case where you're close to maxed out. Maybe heavy multithreading is gaining traction now on the software side, but it sure hadn't before, even though we had cheap octo-cores from AMD for several years already, so I can't say I have any buy-in. IMO, it won't be this generation that flips the switch, either.
The real deal here for "normal users" is buying an appropriately sized (c/t) CPU for your needs for the next few years on the cheap. And for 95% of people, even here at a so-called enthusiast site, that is still no more than a 6c/12t or 8c/16t CPU. More than 8c/16t on the mainstream platform, right now, is absolutely ridiculous, a ploy to get those not in the know to buy simply because there are more cores.
The reality is that most people here at TPU who call themselves enthusiasts tend to buy machines for gaming or for bragging rights (benchmarking), not because they actually need those cores.
Future nodes clearly dictate that it's too expensive to make the whole chip on the cutting-edge node, so you will see Intel follow suit with chiplets. They have already stated they will and are busy on it now; see Foveros and Intel's many statements about a modular future with EMIB interconnects and 3D stacking.
@Aquinus I agree, more people should join WCG. Let's get some research done and out of the way.
Don't get me wrong, I understand the hardware needs to be there first, but we've had hexa- and octo-cores for 8/6 years already and we haven't really seen a momentum shift yet. I think it's a lot closer to reality, but still a generation or two away from really making a difference for the majority. The "use it later" argument is something of a given; we've been hearing it for years. It just depends on the use model for the PC/user.