# Intel Releasing 10-core, 20-thread i9-10900KF for $499 Very Soon... 5.2 GHz Boost



## Space Lynx (Jul 10, 2019)

Intel Core i9-10900KF - 10C/20T @ 5.2GHz for $499 on 14nm+++
					

Oh, would you look at that: Intel's next-gen 10th-gen CPU roadmap leaks.
Source: www.tweaktown.com




The Ryzen 3900X is already ancient history if that 5.2 GHz is all 10 cores with no downclocking...


----------



## Bones (Jul 10, 2019)

Meh - consider the source on this one.

I seriously doubt the 3900X will be "history" over it; it would still be a good chip even with this around performing as promised... which I'm taking with a grain of salt here.

TBH, if it does deliver, that's good.
Recent history with AMD has forced Intel to drop prices versus what it _could_ have charged, something the Intel guys should be happy about.


----------



## Divide Overflow (Jul 10, 2019)

lynx29 said:


> Intel Core i9-10900KF - 10C/20T @ 5.2GHz for $499 on 14nm+++
> 
> 
> Oh, would you look at that: Intel's next-gen 10th-gen CPU roadmap leaks.
> ...


Have you read it yourself? The maximum all-core frequency is 4.6 GHz.


----------



## StrayKAT (Jul 10, 2019)

Is it me or is that kind of cheap?


----------



## sam_86314 (Jul 10, 2019)

*yawn*

At least the market has competition again. No more $400 4-core, 8-thread chips. AMD is finally forcing Intel to innovate, even if that just means pushing clocks higher.

That'll only go so far. Remember the old Pentium 4s and the FX-9590?


----------



## ador250 (Jul 10, 2019)

It will still be slower in productivity than the 3900X. This is Skylake IPC, 12~13% slower than Zen 2. And AMD now controls pricing in the processor market; they can slash prices on the 3000 series in response to this new Intel launch.


----------



## Metroid (Jul 10, 2019)

The base frequency will be reduced to 3.4 GHz to accommodate the 200 MHz boost increase and the 105 W TDP. The furnace will be visible from miles away when those 20 threads get to work. Amazing, Intel 14nm+++.

Year 2030 and Intel will still be on 14nm+++++++: a Core i9-16900 with a 1.2 GHz base frequency, a 5.8 GHz maximum frequency, a 105 W TDP, 60 cores and 120 threads. A full load will take it to 2000 W hehe; only the first core will hit 5.8 GHz, all the other cores will run at 1.2 GHz or less.


ador250 said:


> It will still be slower in productivity than the 3900X. This is Skylake IPC, 12~13% slower than Zen 2. And AMD now controls pricing in the processor market; they can slash prices on the 3000 series in response to this new Intel launch.



There won't be any IPC gain; IPC is measured on single-thread performance, and they can't fit more transistors into 14nm+++. This is just an attempt to fight Ryzen 3xxx until 10nm or 7nm is ready to go, or even 5nm. Intel will come back from this with a huge performance improvement, and in the end their 14nm+++ era will be remembered as the nightmare stage hehe.


----------



## TheLostSwede (Jul 10, 2019)

Ah yes, new socket, new boards... for those that care.
The socket is the real deal; as for the rest, I don't really know.
It's still the same size as LGA-115x.


----------



## Deleted member 158293 (Jul 10, 2019)

If it helps drop the price of the 16-core 3950X a bit, I'll be happy; otherwise, meh for the 10-core chip itself.


----------



## ratirt (Jul 10, 2019)

OMG. I've never seen Intel so desperate, and the price, if true, is just ridiculous considering Intel's pricing over so many years. Look what AMD has done to Intel. Finally some good behavior and respect for the customers. Never thought I'd see it.


----------



## HTC (Jul 10, 2019)

As Forrest Norrod said in that Oil and Gas presentation (skip to around 16:10), new nodes have issues keeping and/or increasing clocks versus older nodes. AMD has succeeded here only because of the chiplet design; otherwise they would have the same sort of problem Intel is facing with 10nm, as far as clock speeds go. Worse still for Intel is the fact that, on top of the clock issues, they're also having yield issues, which only aggravates the problem.

As for this topic specifically: a CPU with fewer cores (same price as the 3900X but fewer cores), forcing a new board on top... it's getting worse for Intel...


----------



## ShurikN (Jul 10, 2019)

Still monolithic 
Still hot
Still power hungry 
Still 14nm
Nothing to write home about.


----------



## ratirt (Jul 10, 2019)

HTC said:


> As Forrest Norrod said in that Oil and Gas presentation (skip to around 16:10), new nodes have issues keeping and/or increasing clocks versus older nodes. AMD has succeeded here only because of the chiplet design; otherwise they would have the same sort of problem Intel is facing with 10nm, as far as clock speeds go. Worse still for Intel is the fact that, on top of the clock issues, they're also having yield issues, which only aggravates the problem.
> 
> As for this topic specifically: a CPU with fewer cores (same price as the 3900X but fewer cores), forcing a new board on top... it's getting worse for Intel...


Worse it is. I bet Intel will be losing money on 10th gen for sure, since AMD forced it to go beyond 6 cores. 10 cores from $500 for Intel? The 10-core CPUs don't have graphics, so it means the dies are going to be extremely large.


----------



## AsRock (Jul 10, 2019)

Screw buying Intel. Even if this is true and it's the better chip, AMD are the ones who forced it, and if you go and buy Intel you're stabbing AMD in the back by not buying from them. Like I said, if it weren't for them, you know the price would be higher.

Remember: if AMD goes down, there isn't any competition for Intel.

Support those who made this happen.


----------



## StrayKAT (Jul 10, 2019)

sam_86314 said:


> *yawn*
> 
> At least the market has competition again. No more $400 4 core 8 thread chips. AMD is finally forcing Intel to innovate, even if that just means pushing clocks higher.
> 
> That'll only go so far. Remember the old Pentium 4's and the FX 9590?



4-core chips haven't been $400 for a while, tbh.


----------



## londiste (Jul 10, 2019)

That slide is more than likely fake.


ShurikN said:


> Still monolithic


Monolithic is *not* a bad thing.


Metroid said:


> They can't put more transistors in there 14+++.


Why not? Of course they can. Whether they will is a different story.


HTC said:


> As Forrest Norrod said in that Oil and Gas presentation (skip to around 16:10), new nodes have issues keeping and / or increasing the clocks VS older nodes. AMD has succeeded in this only because of chiplet design or they would have the same sort of problem Intel is facing with 10nm, as far as clock speeds goes.


AMD succeeded in keeping clock speeds and even increasing them a bit (if we count 1-2 cores at 4.5-4.6 GHz at 1.5 V(!) as an increase) because 14/12nm did not clock high either. The sharp downturn in efficiency still starts around 4.1-4.3 GHz.


ratirt said:


> Worse it is. I bet intel will be losing money on 10th gen for sure since AMD forced it to go beyond 6 cores. 10 cores from 500$ for intel? The 10 core cpus don't have graphics so it means they are going to be extremely large.


Why? 4 cores are 125mm^2, 6 cores are 150mm^2, 8 cores are 175mm^2. Add two cores and change nothing else and you get a 200mm^2 chip, comparable to Zen/Zen+. That is not extremely large. Get rid of the iGPU and it's back down to 175mm^2 or less. They might take a hit on margin, but that would still be very, very far from losing money.
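The extrapolation above is simple linear arithmetic; as a sketch (using only the die-area figures quoted in the post, which are themselves estimates), it works out like this:

```python
# Linear extrapolation of the die sizes quoted above
# (125/150/175 mm^2 for 4/6/8 cores; figures from the post, not official).
known = {4: 125.0, 6: 150.0, 8: 175.0}

# Area added per core: a consistent 12.5 mm^2/core across the known points.
per_core = (known[8] - known[4]) / (8 - 4)

# Fixed overhead (uncore, iGPU, I/O) implied by the linear fit.
base = known[4] - 4 * per_core

def estimated_area(cores):
    """Linear estimate of die area in mm^2 for a given core count."""
    return base + cores * per_core

print(per_core)            # 12.5 mm^2 per core
print(estimated_area(10))  # 200.0 mm^2 for a 10-core die, as in the post
```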


----------



## Totally (Jul 10, 2019)

Dear Intel, let's talk branding. Your naming scheme is nearing its limits and needs a bit of a rethink. Just to be constructive, I'd suggest i9-X900CF, i9-A900CF (best) or simply i9-900CF as good options instead of i9-10900CF (worst).


----------



## StrayKAT (Jul 10, 2019)

Totally said:


> Dear Intel, let's talk branding. Your naming scheme is nearing its limits and needs a bit of a rethink. Just to be constructive, I'd suggest i9-X900CF, i9-A900CF (best) or simply i9-900CF as good options instead of i9-10900CF (worst).



Definitely agree there. But what is up with Ryzen taking similar-sounding chipset names (X399, X570, etc.)? It's like they're trolling, limiting Intel's naming schemes even more.


----------



## Totally (Jul 10, 2019)

londiste said:


> Monolithic is *not* a bad thing.



If they want to remain competitive, then yes it is. When yield isn't an issue, or its impact is reduced, you're right.


----------



## londiste (Jul 10, 2019)

Totally said:


> If they want to remain competitive, then yes it is. When yield isn't an issue, or its impact is reduced, you're right.


We are talking about chips under 200mm^2. Yield is not an issue at 14nm. At 10/7nm it might be, but I would not expect it to be a big one.
Bottom line: if both solutions come in at the same price with otherwise comparable properties, monolithic will almost certainly have the inherent advantage. The chiplet design is a workaround for yield (and scaling?) issues.
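The yield trade-off behind this argument can be sketched with the classic Poisson yield model. The defect density below is an assumed, illustrative figure, not anything from the thread:

```python
import math

# Classic Poisson yield model: the probability that a die of a given area
# contains zero defects, for an assumed defect density d0 (defects per mm^2).
def die_yield(area_mm2, d0):
    return math.exp(-area_mm2 * d0)

D0 = 0.002  # assumed defect density (0.2 defects/cm^2), illustrative only

mono = die_yield(200, D0)     # one monolithic 200 mm^2 die per product
chiplet = die_yield(100, D0)  # each 100 mm^2 chiplet, binned independently

# Silicon spent per sellable product: the monolithic part needs 200 mm^2 of
# attempts per good die; the chiplet part needs 2 x 100 mm^2 of attempts,
# each succeeding more often because the dies are smaller.
mono_cost = 200 / mono
chiplet_cost = 2 * 100 / chiplet

print(round(mono, 3), round(chiplet, 3))      # 0.67 0.819
print(round(mono_cost), round(chiplet_cost))  # 298 244
```

At small areas and mature nodes the gap is modest, which supports the point above; it widens quickly as area or defect density grows, which is the chiplet argument.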


----------



## StrayKAT (Jul 10, 2019)

FYI, as long as things are roughly comparable, these details don't matter in the end. I lived through the PowerPC (clearly superior) vs Pentium wars, and Intel still wiped the floor with not just one but three companies (Apple/IBM/Motorola). Perhaps more, if you want to count all the other RISC/UNIX chips they made obsolete too. It's a bit strange to think AMD is going to do better than they did. Hell, AMD has had these great moments where they came out swinging before... and Intel is still here.


----------



## londiste (Jul 10, 2019)

@StrayKAT AMD currently *is* doing better than Intel, and it is not the first time either.
Pronouncing Intel dead is premature. Intel won't go anywhere. Neither will AMD, for that matter.


----------



## StrayKAT (Jul 10, 2019)

londiste said:


> @StrayKAT AMD currently *is* doing better than Intel and it is not the first time either
> Pronouncing Intel dead is premature. Intel won't go anywhere. Neither will AMD for that matter.



I know... but I mentioned that AMD has done this before. It seems to see-saw between the two (at least on the tech front; I don't think Intel has ever been hurt financially by a competitor. In that respect, they do themselves more harm than anyone else).


----------



## racer243l (Jul 10, 2019)

Anybody notice the dollar sign after the prices? That's totally not how Intel does it. I also doubt they would brag about 14nm+++, especially since they didn't on the Coffee Lake refresh slides.
A new socket is also a shot in their own foot.
I'm sceptical that this is legit.


----------



## Metroid (Jul 10, 2019)

racer243l said:


> Anybody notice the dollar sign after the prices? That's totally not how Intel does it. I also doubt they would brag about 14nm+++, especially since they didn't on the Coffee Lake refresh slides.
> A new socket is also a shot in their own foot.
> I'm sceptical that this is legit.



I have my doubts too; Intel has been so messed up that we don't know if it's really Intel or not eheh.

Maybe the voodoo white and black mages and witches cursed Intel a few years ago. Intel needs a kick in the back to wake up.


----------



## ratirt (Jul 10, 2019)

londiste said:


> We are talking about chips under 200mm^2. Yield is not an issue at 14nm. 10/7nm, it might be but I would not expect it to be a big one.
> Bottom line, if both solutions are at the same price and with otherwise comparable properties, monolithic will almost certainly be inherently advantaged. Chiplet design is a workaround to get around the yield (and scaling?) issues.


What are you talking about? The bigger the die, the fewer you get. If you go from 14nm to 7nm, yields are better, but of course you need to give the process some time to mature a bit. The bigger the die, the lower the yields. The smaller the node, the higher the yields: it's a shrink, so you get more dies on the wafer, which means better yields.


racer243l said:


> Anybody notice the dollar sign after the prices? That's totally not how Intel does it. I also doubt they would brag about 14nm+++, especially since they didn't on the Coffee Lake refresh slides.
> A new socket is also a shot in their own foot.
> I'm sceptical that this is legit.


Hasn't Intel been doing a new socket with each generation of CPUs? If Intel doesn't go with a new socket this time, that would mean they are not going for more money from customers but rather focusing on keeping up with AMD and being more price-competitive, because what they are asking for their products is just ridiculous.
This may not be true (especially if you look at the prices listed).


----------



## phill (Jul 10, 2019)

The question I have is: is anyone surprised by this at all?? Because I'm not... Oh look, another socket as well... (@TheLostSwede, are you at all surprised by it??)

I do however like the 65, 95 and 105 W versions... I wonder where they got that from... Let's hope they solder this one...


----------



## londiste (Jul 10, 2019)

ratirt said:


> What are you talking about? The bigger the die, the fewer you get. If you go from 14nm to 7nm, yields are better, but of course you need to give the process some time to mature a bit. The bigger the die, the lower the yields. The smaller the node, the higher the yields: it's a shrink, so you get more dies on the wafer, which means better yields.


In addition to yields being lower on larger dies, yields are also lower on newer (usually smaller) process nodes.
200mm^2 is not a big die on an old, mature node. It isn't even that big of a die on 7nm apparently, judging by Navi.

My point was, AMD has been shipping ~200mm^2 dies for a couple of years now. I seriously doubt Intel would be unable to do the same, probably with better margins, as they have foundry margins to play with in addition to the CPU section of the company.

edit:
Oh, I think I misread your post. Yield generally means the percentage of dies from a wafer that are good, not the net number of dies. Whether "good" means fully intact chips or usable ones (with some disabled sections) is largely a matter of perspective.
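The distinction drawn here, yield as a fraction of good dies versus the raw die count per wafer, can be made concrete with a toy calculation (a 300 mm wafer is assumed, edge losses are ignored, and the yield figure is illustrative):

```python
import math

WAFER_DIAMETER_MM = 300  # standard wafer; the numbers below are illustrative

def dies_per_wafer(die_area_mm2):
    """Crude gross die count: wafer area / die area, ignoring edge losses."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    return int(wafer_area / die_area_mm2)

def good_dies(die_area_mm2, yield_fraction):
    """Yield is the *fraction* of gross dies that work, not a die count."""
    return int(dies_per_wafer(die_area_mm2) * yield_fraction)

# A shrink gives more gross dies per wafer even at the *same* yield
# fraction, which is the conflation being untangled above.
print(dies_per_wafer(200))   # 353 gross dies at 200 mm^2
print(good_dies(200, 0.7))   # 247 good dies at an assumed 70% yield
print(dies_per_wafer(100))   # 706 gross dies after halving the die area
```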


----------



## racer243l (Jul 10, 2019)

Metroid said:


> I have my doubts too; Intel has been so messed up that we don't know if it's really Intel or not eheh.
> 
> Maybe the voodoo white and black mages and witches cursed Intel a few years ago. Intel needs a kick in the back to wake up.



That's the thing: they change their own roadmaps at like every event.



ratirt said:


> Hasn't Intel been doing a new socket with each generation of CPUs? If Intel doesn't go with a new socket this time, that would mean they are not going for more money from customers but rather focusing on keeping up with AMD and being more price-competitive, because what they are asking for their products is just ridiculous.
> This may not be true (especially if you look at the prices listed).



Sandy and Ivy were the same, but then again that was just a refresh, like the Coffee Lake refresh, or like Kaby Lake was to Skylake. Keeping 1151 v2 would be a small plus on the consumer side. But then again, we are talking about Intel.


----------



## Tomgang (Jul 10, 2019)

Yawn. We already know what will happen. Intel wins when it comes to pure gaming, but AMD wins when it comes to workstation, video-converting loads and so on.

And thanks to Intel's 14++++++++++, these chips will have little or no IPC gain over the 9000 series, and will probably run hot and be more power hungry as well.

It's really a pain in the ass to choose a CPU right now. No matter whether I choose Intel or AMD, it feels like a compromise: I want a CPU that overclocks well and runs games at its best (Intel wins here), but I also want a good number of cores and a CPU that doesn't consume power like a maniac or run hot (AMD clearly wins here).

I will have to make a compromise, as I can't get it all in one package. It feels like choosing between the plague and cholera.


----------



## StrayKAT (Jul 10, 2019)

Tomgang said:


> Yawn. We already know what will happen. Intel wins when it comes to pure gaming, but AMD wins when it comes to workstation, video-converting loads and so on.
> 
> And thanks to Intel's 14++++++++++, these chips will have little or no IPC gain over the 9000 series, and will probably run hot and be more power hungry as well.
> 
> ...



Just get a Core-X and delid as I did. Or be happy with Coffee Lake/6 cores and delid one of those (I wouldn't recommend 9900k or any other of Intel's latest foray into soldering. It's not as good as it could be, and much more dangerous to delid if you want to improve them).


----------



## Metroid (Jul 10, 2019)

Tomgang said:


> Yawn. We already know what will happen. Intel wins when it comes to pure gaming, but AMD wins when it comes to workstation, video-converting loads and so on.
> 
> And thanks to Intel's 14++++++++++, these chips will have little or no IPC gain over the 9000 series, and will probably run hot and be more power hungry as well.
> 
> ...



AMD could have won here if they wanted to. I'm still trying to figure out why AMD played their cards like this; probably they wanted to release these CPUs fast and decided to go this route.

The main reason they are not winning in games is *latency*; the chiplets are the problem here. AMD wins in some games and loses in others.

Some may say "hey, it's frequency," but it's not frequency. Somebody did an IPC test with the 3700X, 3900X, 3600, 9700K, 9900K and 9600K at 4 GHz each, and Intel still won at 4 GHz in some games. That test was apples to apples and Intel still led in games, so I guess the only thing that remains is latency.
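A quick sketch of the iso-frequency reasoning above: when two CPUs run the same game at the same clock, the fps ratio directly estimates the per-clock (IPC and latency) advantage. The fps numbers below are invented for illustration, not benchmark results:

```python
# Iso-frequency comparison: at a fixed clock, fps differences reflect
# per-clock efficiency (IPC, cache/memory latency), not frequency.
# Both fps figures are invented for illustration, not benchmark results.

def per_clock_advantage(fps_a, fps_b):
    """Relative per-clock advantage of CPU A over CPU B at the same clock."""
    return fps_a / fps_b - 1.0

# Hypothetical results for one game with both CPUs locked to 4.0 GHz:
cpu_a_fps = 144.0
cpu_b_fps = 132.0

adv = per_clock_advantage(cpu_a_fps, cpu_b_fps)
print(f"{adv:.1%}")  # 9.1%: the clock cancels out, leaving IPC/latency
```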


----------



## white phantom (Jul 10, 2019)

Tomgang said:


> Yawn. We already know what will happen. Intel wins when it comes to pure gaming, but AMD wins when it comes to workstation, video-converting loads and so on.
> 
> And thanks to Intel's 14++++++++++, these chips will have little or no IPC gain over the 9000 series, and will probably run hot and be more power hungry as well.
> 
> ...




I'm in exactly the same boat, trying to upgrade from Sandy Bridge (anything's an upgrade), but force of habit and gaming are saying 9700K or 9900K, while my curiosity, reading and budget are telling me to try a 3700X or 3900X. Driving me nuts.


----------



## HUSKIE (Jul 10, 2019)

Well, Intel strategy, I would say. Taking money out of wallets...

I'm still sticking with my 6950X even though it's getting older and older. No need to upgrade until it dies...


----------



## Tomgang (Jul 10, 2019)

StrayKAT said:


> Just get a Core-X and delid as I did. Or be happy with Coffee Lake/6 cores and delid one of those (I wouldn't recommend 9900k or any other of Intel's latest foray into soldering. It's not as good as it could be, and much more dangerous to delid if you want to improve them).




I already have a 6-core CPU. An old one, but it's 6 cores, and I definitely want more than 6 cores this time; in fact I want a boatload of cores. So no, I will not even look at a 6- or 8-core CPU, from Intel or from AMD. I want 10 or more cores to play with. Honestly, I'm waiting for AMD's Ryzen 9 3950X 16-core CPU to see if that is any better. That will still be a compromise though.



Metroid said:


> AMD could have won here if they wanted to. I'm still trying to figure out why AMD played their cards like this; probably they wanted to release these CPUs fast and decided to go this route.
> 
> The main reason they are not winning in games is *latency*; the chiplets are the problem here. AMD wins in some games and loses in others.
> 
> Some may say "hey, it's frequency," but it's not frequency. Somebody did an IPC test with the 3700X, 3900X, 3600, 9700K, 9900K and 9600K at 4 GHz each, and Intel still won at 4 GHz in some games. That test was apples to apples and Intel still led in games, so I guess the only thing that remains is latency.



Yes, I agree on the latency, as AMD's CPUs have to communicate across several dies and that causes latency, while Intel has it all in one die.




white phantom said:


> I'm in exactly the same boat, trying to upgrade from Sandy Bridge (anything's an upgrade), but force of habit and gaming are saying 9700K or 9900K, while my curiosity, reading and budget are telling me to try a 3700X or 3900X. Driving me nuts.



Glad I'm not the only one confused about what to get. Yeah, gaming screams Intel, but these chips also run hot, as Intel is really pushing 14nm very hard and that costs in power usage; they also win the overclocking round, but not by much.

AMD runs cooler thanks to 7nm, consumes less power and has a strong lead in multithreaded loads, while losing in gaming and overclocking.

No matter what we choose, it's sadly a compromise. I will wait and see with AMD's 16-core part, as I think that CPU will be the most fun to have, though it will lose to Intel in gaming and overclocking but win in almost every other area. On the other hand, coming from an old i7 980X, either a new Intel or a new AMD CPU will be a major upgrade in gaming performance; it's just that Intel would be a bit better, and I also want a boatload of cores this time. So I think I will go with the AMD 16-core, knowing it will lose to Intel in gaming performance. As I said, no matter the choice, it's a compromise as things stand right now. It was so much easier to choose a CPU back when X58 was king of the hill.


----------



## Metroid (Jul 10, 2019)

Tomgang said:


> Yawn. We already know what will happen. Intel wins when it comes to pure gaming, but AMD wins when it comes to workstation, video-converting loads and so on.
> 
> And thanks to Intel's 14++++++++++, these chips will have little or no IPC gain over the 9000 series, and will probably run hot and be more power hungry as well.
> 
> ...




I found the video that proves my point: at 4 GHz it clearly shows Intel still leads in some games. So frequency is not the only reason AMD is losing in games.


Hardware Unboxed; they only have a YouTube channel. Very, very interesting review. One of the best reviews I have seen so far.


----------



## londiste (Jul 10, 2019)

HardwareUnboxed reviews end up at Techspot: https://www.techspot.com/review/1869-amd-ryzen-3900x-ryzen-3700x/
Sweclockers did a couple same-frequency tests: https://www.sweclockers.com/test/27760-amd-ryzen-9-3900x-och-7-3700x-matisse/28#content


----------



## Tomgang (Jul 10, 2019)

HUSKIE said:


> Well, Intel strategy, I would say. Taking money out of wallets...
> 
> I'm still sticking with my 6950X even though it's getting older and older. No need to upgrade until it dies...



If you wait for it to die, then based on my experience with X58/i7 980X, you could be in for a long wait. I have been on X58 for 10 years now and have abused my CPU several times, like putting 1.55 volts through it for benchmark runs and letting it run pretty hot as well. And yet, after all these years, the CPU as well as my ASUS motherboard give me the finger, as if to say: we are not ready to die just yet. I have also been waiting for it to die so I'd have a reason to upgrade, and that's why I abused it over the last 2-3 years. But even though I did that, I still failed to kill it. So you might be in for a long wait as well.



Metroid said:


> I found the video that proves my point: at 4 GHz it clearly shows Intel still leads in some games. So frequency is not the only reason AMD is losing in games.
> 
> 
> 
> ...



There's no doubt that Intel wins in gaming. Reviews clearly show that. Intel's CPU design is more efficient for gaming.


----------



## trog100 (Jul 10, 2019)

Gaming doesn't matter here.. both teams produce more than enough frame rates for real-world use with a decent graphics card, and only half the cores get used, so heat isn't a problem..

High core counts matter for productivity loads.. Intel has to add more just to keep up, but then again you can't keep adding cores without adding more heat.. currently, heat (and price) is Intel's main problem..

They can lower the price, but it's gonna cost them to lower the heat.. binning for low voltages on the high-core-count chips will lower usable yields by a fair amount..

trog


----------



## londiste (Jul 10, 2019)

Lack of HT is Intel's main problem when we are talking about productivity. There is no other factor that is even remotely close to that.


----------



## P4-630 (Jul 10, 2019)

I'm sold...Who else is getting a 10 core 20 thread i9-10900KF for $499 very soon!!?.....

@Knoxx29


----------



## ratirt (Jul 10, 2019)

Metroid said:


> AMD could have won here if they wanted to. I'm still trying to figure out why AMD played their cards like this; probably they wanted to release these CPUs fast and decided to go this route.
> 
> The main reason they are not winning in games is *latency*; the chiplets are the problem here. AMD wins in some games and loses in others.
> 
> Some may say "hey, it's frequency," but it's not frequency. Somebody did an IPC test with the 3700X, 3900X, 3600, 9700K, 9900K and 9600K at 4 GHz each, and Intel still won at 4 GHz in some games. That test was apples to apples and Intel still led in games, so I guess the only thing that remains is latency.





Tomgang said:


> Yes, I agree on the latency, as AMD's CPUs have to communicate across several dies and that causes latency, while Intel has it all in one die.





londiste said:


> HardwareUnboxed reviews end up at Techspot: https://www.techspot.com/review/1869-amd-ryzen-3900x-ryzen-3700x/
> Sweclockers did a couple same-frequency tests: https://www.sweclockers.com/test/27760-amd-ryzen-9-3900x-och-7-3700x-matisse/28#content


Sure, Intel is faster, probably because of the latency. On the other hand, we have to remember that when it comes to gaming, developers build their products against the current best hardware. They have been making games with Intel's chips in their hands, and that is a huge advantage if you have the software support, which Intel has to this day. Of course, there are already titles where AMD is in the lead; few, maybe even just one, but they exist. I'm not convinced this is only about latency. Maybe latency is one of the reasons, and fixing it would boost performance, but I'd point at software development more than latency alone. Even so, AMD is keeping up with Intel, and this new Ryzen is a good example. This will probably change, making AMD the most desirable CPU producer; I take that from my conclusions about the vulnerabilities Intel struggles with and the console-market support AMD has.

Also, more and more games support more than 4 cores now, and that will only increase over time, I'm sure of it. You can no longer rely on frequency to boost performance in games, and it has already been said that shrinking dies will not boost frequency; it will degrade it. So the only way forward for developers is to increase multicore support, to mitigate lower frequencies and still gain performance. We all know Intel has a problem with higher-core-count dies (due to the monolithic structure) unless they start lowering costs while still adding cores.

If Intel goes to 7nm or 10nm or whatever they are trying to do, you can kiss the 5 GHz CPUs bye-bye. Maybe that's also the reason why Intel is still manufacturing on 14nm++.
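The clocks-versus-cores trade-off described above is essentially Amdahl's law. A small sketch with assumed figures (the parallel fractions and the hypothetical 5.0 GHz 8-core versus 4.0 GHz 12-core chips are made up for illustration):

```python
# Amdahl's-law sketch of the clocks-vs-cores trade-off described above.
# The parallel fractions and chips (5.0 GHz 8-core vs 4.0 GHz 12-core)
# are assumptions for illustration, not real products.

def speedup(parallel_fraction, cores):
    """Amdahl's law: speedup over one core for a given parallel fraction."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for p in (0.80, 0.95):
    high_clock = 5.0 * speedup(p, 8)    # clock x speedup, arbitrary units
    more_cores = 4.0 * speedup(p, 12)
    print(p, round(high_clock, 2), round(more_cores, 2))

# At p=0.80 the faster clock wins (16.67 vs 15.0); at p=0.95 the extra
# cores pull ahead (29.63 vs 30.97): multicore support decides the winner.
```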



P4-630 said:


> I'm sold...Who else is getting a 10 core 20 thread i9-10900KF for $499 very soon!!?.....
> 
> @Knoxx29


I might give it a go since I live in Norway now, so it would keep me warm on lonely, cold winter nights... well, that's not happening.


----------



## Metroid (Jul 10, 2019)

ratirt said:


> Sure, Intel is faster, probably because of the latency. On the other hand, we have to remember that when it comes to gaming, developers build their products against the current best hardware. They have been making games with Intel's chips in their hands, and that is a huge advantage if you have the software support, which Intel has to this day. Of course, there are already titles where AMD is in the lead; few, maybe even just one, but they exist. I'm not convinced this is only about latency. Maybe latency is one of the reasons, and fixing it would boost performance, but I'd point at software development more than latency alone. Even so, AMD is keeping up with Intel, and this new Ryzen is a good example. This will probably change, making AMD the most desirable CPU producer; I take that from my conclusions about the vulnerabilities Intel struggles with and the console-market support AMD has.
> Also, more and more games support more than 4 cores now, and that will only increase over time, I'm sure of it. You can no longer rely on frequency to boost performance in games, and it has already been said that shrinking dies will not boost frequency; it will degrade it. So the only way forward for developers is to increase multicore support, to mitigate lower frequencies and still gain performance. We all know Intel has a problem with higher-core-count dies (due to the monolithic structure) unless they start lowering costs while still adding cores.
> If Intel goes to 7nm or 10nm or whatever they are trying to do, you can kiss the 5 GHz CPUs bye-bye. Maybe that's also the reason why Intel is still manufacturing on 14nm++.



This also has to do with instructions and code paths that only Intel processors support. For example TSX: only Intel processors support it, and it can make a huge impact when used properly.


----------



## Assimilator (Jul 10, 2019)

lynx29 said:


> Intel Core i9-10900KF - 10C/20T @ 5.2GHz for $499 on 14nm+++
> 
> 
> Oh, would you look at that: Intel's next-gen 10th-gen CPU roadmap leaks.
> ...



Man, I wish that the people who posted this stuff weren't functionally illiterate. From the damn slide:

i9-10900KF
Maximum single core turbo: 5.2GHz
Maximum all cores turbo: 4.6GHz


----------



## Aquinus (Jul 10, 2019)

StrayKAT said:


> Just get a Core-X and delid as I did. Or be happy with Coffee Lake/6 cores and delid one of those (I wouldn't recommend 9900k or any other of Intel's latest foray into soldering. It's not as good as it could be, and much more dangerous to delid if you want to improve them).


That's not a great plan considering that the 3900X keeps up with the 7960X, at least in Linux. The price difference between those two chips is so large that it makes literally zero sense to get the 7960X, and why get something lesser when you know the 3900X will outperform it?


----------



## Eskimonster (Jul 10, 2019)

Now there's a bargain; I blame the great competition, thx AMD.
I want that 10-core, 20-thread monster badly.
If this leak ain't bullsheit, ofc.


----------



## ratirt (Jul 10, 2019)

Metroid said:


> This also has to do with instructions and code paths that only Intel processors support. For example TSX: only Intel processors support it, and it can make a huge impact when used properly.


AMD proposed something equivalent, but since Intel was on the pedestal with developers, TSX was used and not AMD's ASF (if I'm not wrong). That only confirms my previous point that developer support is crucial. Besides, I thought Intel disabled TSX.


----------



## Metroid (Jul 10, 2019)

ratirt said:


> AMD proposed something equivalent, but since Intel was on the pedestal with developers, TSX was used and not AMD's ASF (if I'm not wrong). That only confirms my previous point that developer support is crucial. Besides, I thought Intel disabled TSX.



Yes, very. Intel has been sneaking, non-stop, into everything to get its instructions into games and applications for years. Intel's sneaking habit is everywhere hehe.


----------



## Mats (Jul 10, 2019)

Ok, so it's 10 cores, but it's still Skylake, and HOT RUNNING. It's faster than Matisse in many games, but I believe Intel can do better than this in terms of IPC and heat.

Buying this now is a bit like (*but obviously not the same as*) buying the last Pentium D Presler right before Core 2 Duo Conroe showed up, back in the day.
I know, Comet Lake is a lot more competitive than Presler ever was.

On the other hand, I can't blame those who can't wait anymore...  14 nm+++++++++++++++++++++++++++++++++++++++++++++++++^+


----------



## londiste (Jul 10, 2019)

ratirt said:


> AMD proposed something equivalent but since Intel was on the pedestal with development TSX was used not AMD's ASF (if I'm not wrong). So that only concludes my previous statement that developers support is very crucial. Besides, I thought Intel disables TSX.


Intel implemented TSX in Haswell (and it was broken); it is implemented in a fixed form in Skylake - and used in one of the KASLR-breaking vulnerabilities, which again prompts disabling it. AMD has the (proposed) ASF extension as a counterpart, which has not actually been implemented in anything so far.

It is not about preferring Intel's extension, but about using what the CPU manufacturer has implemented in actual products (= what can actually be used). TSX exists and works in an actual product; ASF does not.


----------



## ratirt (Jul 10, 2019)

londiste said:


> Intel implemented TSX in Haswell (and it was broken), it is implemented in a fixed way in Skylake - and used in one of the KASLR-breaking vulnerabilities which again prompts for disabling it. AMD has (proposed) ASF extension as counterpart that have not actually been implemented in anything so far.
> 
> It is not about prefering Intel's extension but using what CPU manufacturer has implemented in actual products (= can be used). TSX exists and works in an actual product, ASF does not.


As always, you have missed the conversation we'd been having here before you showed up. It is well known that Intel has been the one software developers turn to, using the features it has offered. That's why software has been developed with Intel's TSX rather than AMD's ASF: Intel has been the leader for some time and has that sort of advantage. You're repeating what I wrote, dude.
And yes, developers preferred Intel's feature, even though it didn't work and is now disabled, just like Metroid mentioned. ASF works but has not been used. Hopefully this will change soon, and I hope developers will give it a shot.


----------



## londiste (Jul 10, 2019)

There is no CPU with ASF.

Edit:
ISA Extensions are hardware features. Extensions can be used (software written to utilize them) when there is hardware that implements the extension.
I mean, I suppose theoretically you could write software to support ASF but that would be an academic exercise.

Edit2:
Not everyone thinks in terms of Intel vs AMD. Software is developed to use whatever can be used to squeeze more performance out of it. If an extension turns out to be useful and efficient enough, it (or a counterpart) will eventually be implemented by other vendors as well. Generally these counterparts are close enough to the original that they can be used by the same software with only minor additions.


----------



## ratirt (Jul 10, 2019)

londiste said:


> There is no CPU with ASF.
> 
> Edit:
> ISA Extensions are hardware features. Extensions can be used (software written to utilize them) when there is hardware that implements the extension.
> I mean, I suppose theoretically you could write software to support ASF but that would be an academic exercise.


What's the point of implementing it when no one wants to use it, when they've got TSX and have chosen to go with that one?
I hope this will change due to the vulnerabilities in TSX, and maybe developers will turn to this. It works. I didn't say it's in the processors. Besides, it's an alternative, not actual TSX. Maybe from a developer's side it's TSX or ASF?


----------



## londiste (Jul 10, 2019)

ratirt said:


> What's the point of implementing it when no one want's to use it when they've got TSX and have chosen to go with the last one?


This is not one or the other. Two extensions can be used by the same software depending on which one is available.





ratirt said:


> I hope this will change due to vulnerabilities with TSX and maybe developers will turn into this. It works. I didn't say it's in the processors. Besides it's an alternative not actual TSX. Maybe from a developer side it's TSX or ASF?


Both TSX and ASF define CPU ISA instructions, commands you give to a CPU. If there is no CPU to respond to these commands why write software for it?
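londiste's point - one program, two possible extensions, chosen at runtime depending on what the CPU reports - can be sketched schematically. The feature names and dispatch below are purely illustrative, not real CPUID handling:

```python
# Schematic runtime dispatch between ISA extensions: the same software
# can pick whichever transactional-memory extension the CPU reports,
# and fall back to an ordinary lock when neither exists.
# Feature names here are illustrative labels, not real CPUID queries.
import threading

def pick_strategy(features: set) -> str:
    """Choose an update strategy from reported CPU features."""
    if "tsx" in features:
        return "tsx"   # would use Intel's hardware transactional memory
    if "asf" in features:
        return "asf"   # would use AMD's proposed counterpart
    return "lock"      # portable fallback: plain mutual exclusion

lock = threading.Lock()
counter = 0

def locked_increment():
    """Fallback path: a conventional lock-protected update."""
    global counter
    with lock:
        counter += 1

strategy = pick_strategy({"sse2", "avx2"})  # no TM extension reported
if strategy == "lock":
    locked_increment()
print(strategy, counter)  # lock 1
```

The point of the pattern is that the dispatch lives in one binary; adding a second extension later means adding one branch, not rewriting the software.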


----------



## Vya Domus (Jul 10, 2019)

TSX is incredibly sparsely used.


----------



## ratirt (Jul 10, 2019)

londiste said:


> This is not one or the other. Two extensions can be used by the same software depending on which one is available.Both TSX and ASF define CPU ISA instructions, commands you give to a CPU. If there is no CPU to respond to these commands why write software for it?


Is that so?
Do you remember the 3DNow! feature by AMD? It is not used on any of the new CPUs; even Bulldozer didn't have it.
Can you tell why that's the case? 3DNow! was really good - even better than SSE back in the day. Yet developers chose SSE instead, and the SSE instructions expanded to SSE2, SSE3 and so on. Even though 3DNow! was sucked into Intel's SSE later on, that's beside the point. It was better.


----------



## ppn (Jul 10, 2019)

londiste said:


> ...4 cores are 125mm^2, 6 cores are 150mm^2, 8 cores are 175mm^2. Add two cores and do nothing else is a 200mm^2 chip, comparable to Zen/Zen+. This is not extremely large. Get rid of iGPU and it's back down to 175mm^2 or less. They might take a hit on the margin but it would still be very-very far from losing money.



The iGPU is around 50mm^2, so a GPU-less 10-core Comet Lake is as big as the 6-core + iGPU; 6 more cores makes the die about 9mm wider.
A 2-core group is about 3mm wide; the die can only grow in width, and the heatspreader is the limit. A 16-core on 14nm+++ looks very real.


----------



## londiste (Jul 10, 2019)

ratirt said:


> Is that so?
> Do You remember 3Dnow!!! feature by AMD? It is not being useed on any of the new CPUs. Even bulldozer didn't have it.
> Can you tell why that's the case?  3DNow!!! was really good. Even better than SSE back in the days. Yet developers chose SSE instead and the development of the SSE instructions expanded to SSE2 SSE3 and so on. Even though the 3DNow!! was sucked in by Intel to its SSE later on but that's beside the point. It was better.


3DNow! did have nice traction with developers. Not all extensions succeed, whether they get software written for them or not. If my memory serves right, the K6-2 that brought in 3DNow! went against the Pentium II, which came earlier and was probably better at the time.

Didn't 3DNow! share resources with something else while SSE didn't?

Edit: 
Registers, with MMX/x87. SSE brought in new registers for its use.

It was a time of faster change in processors, and SIMD extensions were the new thing - MMX in 1997, 3DNow! in 1998, SSE in 1999. 3DNow! was initially met with some success but was later simply overshadowed by SSE. Something to keep in mind is that new extensions are not going to get used immediately; it usually takes a year or two unless they are something fundamental. By the time 3DNow! got wide enough usage in software, SSE was already out with benefits over it. 3DNow! and Enhanced 3DNow! were definitely around during the Thunderbird era.


----------



## dirtyferret (Jul 10, 2019)

Metroid said:


> I found the video to prove my point at 4ghz it clearly shows intel still lead in some games. So frequency is not the only reason amd is losing in games.
> 
> 
> 
> ...



Hardware Unboxed is TechSpot - Steve in the video is the head review guy for TechSpot. He just makes a video while doing his website review.


----------



## white phantom (Jul 10, 2019)

P4-630 said:


> I'm sold...Who else is getting a 10 core 20 thread i9-10900KF for $499 very soon!!?.....
> 
> @Knoxx29




Yeah, when it arrives for 400 UK pounds like rumoured I'd think about it. At least I know I'll still have 400 quid laying around, because I highly doubt it's that price.


----------



## StrayKAT (Jul 10, 2019)

Aquinus said:


> That's not a great plan considering that the 3900x keeps up with the 7960x, at least in Linux. The difference in price between those two chips is so large it makes literally zero sense to get the 7960x and why get something lesser when you know the 3900x will outperform it?



I admit my ignorance on the pricing. I wouldn't do that either.


----------



## HTC (Jul 10, 2019)

londiste said:


> *AMD succeeded in keeping the clock speeds and even increasing them a bit* - if we count 1-2 cores at 4.5-4.6 at 1.5V(!) an increase - because 14/12nm did not clock high either. The sudden downturn in efficiency is still around 4.1-4.3GHz.



That only happened *precisely because of the "chiplet approach"*. It's why Intel is having so many problems with clock speeds: since their CPUs are monolithic, they can't have high enough clocks to have them "be an upgrade" VS current offerings.

Much more information regarding chiplets here: http://www.eecg.toronto.edu/~enright/Kannan_MICRO48.pdf (page two, "Background and Motivation").


----------



## TheGuruStud (Jul 10, 2019)

"very soon TM (c)" They might as well TM "Sold At A Loss" too


----------



## TheoneandonlyMrK (Jul 10, 2019)

lynx29 said:


> Intel Core i9-10900KF - 10C/20T @ 5.2GHz for $499 on 14nm+++
> 
> 
> Oh, would you look at that: Intel's next-gen 10th-gen CPU roadmap leaks.
> ...


Hype train leaving the station already.

All ten cores on 14nm+++ @ 5.2Ghz, reallyyy.
The article doesn't say that, so why are you?

Plus imagine the Tdp and heat flux off your unicorn version.

A core, perhaps 2, at up to 5.2GHz is my version of made-up nonsense to add to this PR BS.

As for yet another 14nm-based socket, WTAF Intel, you really really are taking the piss.
Note they added pins, at least, to totally f#ck over any attempt to prove they are lying arseholes when it comes to socket swaps.


----------



## Aquinus (Jul 11, 2019)

HTC said:


> That only happened *precisely because of the "chiplet approach"*. It's why Intel is having so many problems with clock speeds: since their CPUs are monolithic, they can't have high enough clocks to have them "be an upgrade" VS current offerings.
> 
> Much more information regarding chiplets here: http://www.eecg.toronto.edu/~enright/Kannan_MICRO48.pdf (page two, "Background and Motivation").


I'm glad someone said this because this is exactly why AMD is killing it with the 3900x. Intel has multi-die CPUs in their Xeon lineup for the *really* big CPUs with _The Good Glue™_, but they really have no solution to this problem in the mainstream market. Intel really needs to be taking the same approach to CPU design because you can only make dies so big. I think AMD hit a sweet spot. We'll find out when we start seeing 8c/16t chiplets on EPYC chips because a competing AMD chip with this goodness in the server market at the right price would be a gut punch for Intel and it definitely has been earned.


----------



## Space Lynx (Jul 11, 2019)

Aquinus said:


> I'm glad someone said this because this is exactly why AMD is killing it with the 3900x. Intel has multi-die CPUs in their Xeon lineup for the *really* big CPUs with _The Good Glue™_, but they really have no solution to this problem in the mainstream market. Intel really needs to be taking the same approach to CPU design because you can only make dies so big. I think AMD hit a sweet spot. We'll find out when we start seeing 8c/16t chiplets on EPYC chips because a competing AMD chip with this goodness in the server market at the right price would be a gut punch for Intel and it definitely has been earned.



killing it in regards to non-gaming applications.  new benches are showing not only is FPS slower in 3900x, but latency is actually worse than predicted as well because of the chiplet design. gamersnexus was talking about it some.  all I do is game, so eh.


----------



## Aquinus (Jul 11, 2019)

lynx29 said:


> killing it in regards to non-gaming applications.  new benches are showing not only is FPS slower in 3900x, but latency is actually worse than predicted as well because of the chiplet design. gamersnexus was talking about it some.  all I do is game, so eh.


The server market is where the real money is at though. There is definitely a price to pay, but if you *need* 12c/24t, there is a good bet that such a tradeoff is acceptable. If you need this CPU, there is also a really good bet that gaming isn't the only thing you're using it for.


----------



## Space Lynx (Jul 11, 2019)

Aquinus said:


> The server market is where the real money is at though. There is definitely a price to pay, but if you *need* 12c/24t, there is a good bet that such a tradeoff is acceptable. If you need this CPU, there is also a really good bet that gaming isn't the only thing you're using it for.



yeah its great in that regard, I agree in full.


----------



## FireFox (Jul 11, 2019)

P4-630 said:


> I'm sold...Who else is getting a 10 core 20 thread i9-10900KF for $499 very soon!!?.....
> 
> @Knoxx29



I assume it will run hot, but that is not a problem for me, so I guess I am getting one just for the sake of having it.

Edit: I am curious how much they will charge for it here in Germany/Europe


----------



## JustNiz (Nov 12, 2019)

I guess "Very Soon" here has a different meaning to what most people understand.


----------



## Space Lynx (Nov 12, 2019)

JustNiz said:


> I guess "Very Soon" here has a different meaning to what most people understand.



That's what the article said, so I don't know, if you are in this industry you already know to take that source with a grain of salt.  /shrug


----------



## R-T-B (Nov 12, 2019)

ShurikN said:


> Still monolithic



I love this part, frankly.

I'll take a small monolithic core for my needs any day...  it's simply faster than an identical mcm, but I am a dying breed.  I still don't know what to do with 8 cores.  MCM is the way to go if you want more really.



Vya Domus said:


> TSX is incredibly sparsely used.



Not in serverland.  It's like god's gift to database performance in terms of cpu overhead.


----------



## Aquinus (Nov 12, 2019)

R-T-B said:


> Not in serverland. It's like god's gift to database performance in terms of cpu overhead.


Do you have an example of a database that uses TSX? I mainly work with PostgreSQL and as far as I know, it doesn't take advantage of TSX at all. The only one I'm aware of is MS SQL Server 2016+ from what I've read online, but beyond that, I can't find anything else that uses it.


----------



## Vya Domus (Nov 12, 2019)

R-T-B said:


> Not in serverland.  It's like god's gift to database performance in terms of cpu overhead.



I've looked at many database benchmarks comparing Epyc and Xeons and I never witnessed something that I could describe as god's gift on Intel's side. Of course I have to mention that I have no idea if any of those even use TSX, I doubt they do, but I would like if you'd give me an example.

Also : https://www.zdnet.com/article/intels-cascade-lake-cpus-impacted-by-new-zombieload-v2-attack/

Certainly not a stellar feature. Ironically the only piece of software that I know which uses TSX is the RPCS3 emulator and it has nothing to do with databases.


----------



## Darmok N Jalad (Nov 12, 2019)

I shall pay for mine with hen’s teeth.


----------



## R-T-B (Nov 12, 2019)

Vya Domus said:


> I've looked at many database benchmarks comparing Epyc and Xeons and I never witnessed something that I could describe as god's gift on Intel's side. Of course I have to mention that I have no idea if any of those even use TSX, I doubt they do, but I would like if you'd give me an example



As mentioned, it's been disabled for almost three years now in nearly all software for security reasons, so short of looking at old benchmarks on old microcode and/or builds comparing Intel vs Intel or similar, you'll never see that.

Plus I'm talking cpu overhead only, not overall performance.



Vya Domus said:


> Ironically the only piece of software that I know which uses TSX is the RPCS3 emulator and it has nothing to do with databases.



Almost everything has removed support until it is fixed.  Still, its target function is databases, as even Wikipedia acknowledges in the opening paragraph:






Transactional Synchronization Extensions - Wikipedia (en.m.wikipedia.org)


----------



## John Naylor (Nov 12, 2019)

I still don't understand the back and forth about cores, die size, GHz... I don't care if you have more cores, a smaller die, or higher GHz if you can't run programs faster. Until we see those results, I don't see that anything else matters.


----------



## trog100 (Nov 12, 2019)

i run my 9900k with HT off.. it games okay and cpu benchmarks dont fry it.. 

trog


----------



## Vya Domus (Nov 12, 2019)

R-T-B said:


> it's been disabled for almost three years now in nearly all software



So my initial assessment that this feature is virtually non existent in the software that's out there was correct. I don't really understand why you felt it was necessary to contradict me.



R-T-B said:


> Still, it's target function is databases, as even wikipedia acknowledges in the opening paragraph:



If it is (or was) that useful, one would expect at least some examples left for us to see, but I can't find any and you didn't mention one either, so I basically have to take your and Wikipedia's word for it being useful.


----------



## R-T-B (Nov 13, 2019)

Vya Domus said:


> wikipedia's



Yeah, I would take that as worth something now, but meh.

My point was that darn near every database program added support for that extension (MySQL, MariaDB, PostgreSQL are ones I recall), and nearly all still have support if you are willing to compile it yourself with the right flags.  Don't, though - there are very valid reasons it's disabled.

It was met with enthusiasm by DB admins, and will likely be re-added with enthusiasm once properly fixed, by either AMD or Intel.



Vya Domus said:


> So my initial assessment that this feature is virtually non existent in the software that's out there was correct.



Yes, but it's also misleading, as it implies that this is a feature no market cares about, hence no use.  I was cautioning would-be readers away from that conclusion, nothing more.


----------



## JustNiz (Nov 14, 2019)

R-T-B said:


> I love this part, frankly.
> 
> I'll take a small monolithic core for my needs any day...  it's simply faster than an identical mcm, but I am a dying breed.  I still don't know what to do with 8 cores.  MCM is the way to go if you want more really....




Yes, very much agreed. The truth, for me at least, is that single-threaded performance is critical, and I'm hardly ever using even the 4 cores my current CPU has.
I've noticed that during MAME emulation, VR gaming, and nearly anything else CPU-intensive I do, it's nearly always the case that one core is just about pegged, maybe one other is 30% loaded at most, and the last 2 are just bouncing around idle. It's clear that even recent games are either unavoidably serial or developers are still not making enough use of available parallelism, which seems unlikely at least in a gaming context where FPS is everything. I accept that in the server world you can never have enough cores, but my contention is that a 10-core desktop CPU is a pointless product (even for hardcore gaming/VR). I wish Intel would switch back to focusing on single-core performance instead of taking the brain-dead marketing approach of just adding more cores and pretending that any and all desktop workloads will benefit from, or even could take advantage of, that.


----------



## John Naylor (Nov 14, 2019)

trog100 said:


> i run my 9900k with HT off.. it games okay and cpu benchmarks dont fry it..



I always prepare several BIOS profiles for each build.   In addition to stock, one includes the highest OC I can get stable using ONLY application-based stress tests; in another I turn HT off and push the OC a bit higher.   This lets the user run all cores at max speed when applications benefit from those cores, and run a bit higher when gaming.  To switch, just reboot, load the preferred BIOS profile, save and exit, and you are all set.

As for frying the CPU, simply avoid synthetic benchmarks, which give you nothing but bragging rights, and use an application-based benchmark like RoG RealBench.   I have had 24-hour-stable P95 OCs fail in under 2 hours under RB, and CPU temps are up to 10C lower.


----------



## moproblems99 (Nov 14, 2019)

lynx29 said:


> ryzen 3900x already ancient history if the 5.2 is all 10 cores no downclocking...



Please tell me my sarcasm detector is broken.


----------



## Space Lynx (Nov 14, 2019)

moproblems99 said:


> Please tell me my sarcasm detector is broken.



if the rumor is true and its $499... nope... cause even tho its 2 cores short, it prob will beat it in IPC pretty handily.  which translates to even beating it in multithreaded workloads... and at same price.

but again these are rumors so who knows. prob not true.  5.2 seems high for an all core.


----------



## dj-electric (Nov 14, 2019)

"Good, but not spectacular" seems to be the guiding line for Intel in 2020, according to speculation and leaks.

Intel has to think long and hard about the pricing of the upcoming 10th-gen, Skylake-v4 lineup. A 10600K that is basically an 8700K can be OK at $269, but will be much more competitive at something like $229.

It wouldn't take too much for Intel to get competitive products again, if the plan is basically allowing HT on all segments and jacking frequencies way up. You could basically get a top-notch, no-compromises, 144Hz-friendly CPU for under $300. 
It's just that pricing is key, that's all...


----------



## oxrufiioxo (Nov 14, 2019)

lynx29 said:


> if the rumor is true and its $499... nope... cause even tho its 2 cores short, it prob will beat it in IPC pretty handily.  which translates to even beating it in multithreaded workloads... and at same price.
> 
> but again these are rumors so who knows. prob not true.  5.2 seems high for an all core.




I think people confuse IPC with a clock-speed advantage. 

When comparing which CPU actually has better IPC, clocks need to be matched. Considering Ryzen typically runs 400-600MHz slower on a single core, I would say that IPC favors Ryzen 3000 by a decent margin. 

At the same time, at least for gaming specifically, even when clock speeds are matched and Ryzen 3000 has a decent IPC advantage, Intel still wins - pointing to latency, more than IPC, as what divides the two architectures.
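The IPC-versus-clock distinction above can be made concrete with a quick normalization. The scores and clocks below are hypothetical placeholders, purely to show the method:

```python
# Illustrative IPC comparison: normalize a single-thread score by clock.
# All numbers below are hypothetical placeholders, not measured results.

def ipc_index(score: float, clock_ghz: float) -> float:
    """Score per GHz - a crude proxy for instructions per clock."""
    return score / clock_ghz

# Hypothetical single-thread benchmark scores and boost clocks.
chip_a = {"score": 520.0, "clock_ghz": 5.0}   # higher clock, wins the raw bench
chip_b = {"score": 500.0, "clock_ghz": 4.4}   # lower clock

ipc_a = ipc_index(**chip_a)   # 104.0 points/GHz
ipc_b = ipc_index(**chip_b)   # ~113.6 points/GHz

# Chip A posts the higher score, but chip B does more work per clock cycle.
print(f"A: {ipc_a:.1f} pts/GHz, B: {ipc_b:.1f} pts/GHz")
```

Dividing out the clock is exactly the "clocks need to be matched" step: the raw-score winner and the IPC winner can be different chips.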


----------



## Darmok N Jalad (Nov 14, 2019)

lynx29 said:


> if the rumor is true and its $499... nope... cause even tho its 2 cores short, it prob will beat it in IPC pretty handily.  which translates to even beating it in multithreaded workloads... and at same price.
> 
> but again these are rumors so who knows. prob not true.  5.2 seems high for an all core.


But there’s nothing to keep AMD from undercutting this the day it launches. I would bet that the 3900X is much cheaper to produce, and now AMD has the 3950X and TR  products to go up against whatever Intel can manage at 14+++. Intel can still win in some tests due to raw clocks, but AMD is pushing hard on every other aspect of Intel’s business.


----------



## moproblems99 (Nov 14, 2019)

Darmok N Jalad said:


> But there’s nothing to keep AMD from undercutting this the day it launches. I would bet that the 3900X is much cheaper to produce, and now AMD has the 3950X and TR  products to go up against whatever Intel can manage at 14+++. Intel can still win in some tests due to raw clocks, but AMD is pushing hard on every other aspect of Intel’s business.



Even if the rumors are true, the 3900x is still a great chip.  It is not ancient history by any means.  It will likely still be neck and neck in anything except 1080p gaming.

Yep, total waste of money /sarcasm


----------



## EarthDog (Nov 15, 2019)

moproblems99 said:


> Please tell me my sarcasm detector is broken.





lynx29 said:


> if the rumor is true and its $499... nope... cause even tho its 2 cores short, it prob will beat it in IPC pretty handily.  which translates to even beating it in multithreaded workloads... and at same price.
> 
> but again these are rumors so who knows. prob not true.  5.2 seems high for an all core.


I'd imagine it won't do that... but be in the mid-to-upper 4GHz range, which is still faster than any Zen 2. If it's closer to 5GHz than 4.5, it's a win for the overclockers. I'll bet all-core boost is still faster than the 3900X all-core. So... AMD will lower prices and we all win.


----------



## moproblems99 (Nov 15, 2019)

EarthDog said:


> I'd imagine it wont do that... but be in the mid to upper 4 ghz range. Which is still faster than any zen 2. If it's closer to 5 ghz than 4.5, it's a win for the overclockers. I'll bet all core boost is still faster than 3900x all core. So... amd will lower prices and we all win.



I don't disagree at all.  I thought when reading the review of the 9900KS that all-core was like 4.6?  I know these aren't the same chips, but they are pretty much the same process, so I am not expecting much difference unless they got creative.


----------



## EarthDog (Nov 15, 2019)

moproblems99 said:


> I don't disagree at all.  I thought when reading the review on the 9900KS that all core was like 4.6?  I know these aren't the same chips but they are pretty much the same process so I am not suspecting much differences unless they got creative.


Depends, lol. After turbo runs out, the KS goes down to 4.7. The K is 4.6 on the box but is a little lower. Either way, overclocked, these are 5GHz CPUs, a majority of them. I just don't know what 10c/20t will look like to cool and how its turbo will work.


----------



## lewis007 (Nov 15, 2019)

StrayKAT said:


> Is it me or is that kind of cheap?


It's not just you; maybe it's because it'll melt down to the Earth's core and destroy the planet.


----------



## R-T-B (Nov 15, 2019)

lewis007 said:


> It's not just you, maybe it's because it'll melt down to the Earths core and destroy the planet.



I must've missed the lesser hole in my floor from my (only 8-core) 9900k.


----------



## ppn (Nov 15, 2019)

A 199 mm² die should be easy to cool at a reasonable voltage of ~1.15 V. If clock speed is raised by just 15%, that results in roughly 150% power: 4.4GHz at 100 watts, 5.0GHz at 150 watts.
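ppn's figures follow from the usual dynamic-power relation P ∝ f·V²; if voltage has to rise roughly in step with frequency, power grows roughly with the cube of the clock. A rough sketch of that arithmetic (the cube law is a simplification, and real chips deviate from it):

```python
# Rough dynamic-power scaling: P ∝ f * V^2.
# If voltage must rise roughly proportionally with frequency,
# power scales roughly with f^3. This is a simplification,
# not an exact model of any real CPU.

def scaled_power(p_base_w: float, f_base_ghz: float, f_new_ghz: float) -> float:
    """Estimate power at a new clock, assuming V scales with f (so P ∝ f^3)."""
    ratio = f_new_ghz / f_base_ghz
    return p_base_w * ratio ** 3

p = scaled_power(100.0, 4.4, 5.0)
print(f"4.4 GHz @ 100 W -> 5.0 GHz @ ~{p:.0f} W")  # ~147 W
```

A ~13.6% clock bump landing near 150 W from 100 W is consistent with the "15% clock, 150% power" rule of thumb in the post.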


----------



## Fangio1951 (Nov 15, 2019)

ppn said:


> 199 mm² die size should be easy to cool on reasonable voltages of 1,15. If clock speed is to be raised by just 15% this will result 150% power, 4.4GHz 100 watts 5.0GHz 150 watts.


Yeah, then @ 5.2GHz it will be = liquid nitrogen cooling


----------



## EarthDog (Nov 15, 2019)

Fangio1951 said:


> Yeah, then @ 5.2 Ghz it will be = Liqid nitrogen cooling


Better than over 4.6 ghz...


----------



## biffzinker (Nov 15, 2019)

EarthDog said:


> Better than over 4.6 ghz...


Was that a jab at AMD I just read? 
/jk


----------



## Space Lynx (Nov 15, 2019)

oxrufiioxo said:


> I think people confuse IPC with clockspeed advantage.
> 
> When comparing what CPU actually has better IPC clocks need to be matched. Considering Ryzen typically runs 4-600mhz slower on a single core I would say that ipc favors Ryzen 3000 by a decent margin.
> 
> ...



It's also important to never forget RAM... I overclocked my B-die RAM to 3800 CAS 16-17-16-16 with manual timings from the Ryzen DRAM calculator... ran Memtest, 24-hour passes, no errors; Prime95, 5 hours, no errors. Then I ran benches and had a huge leap in performance... Something tells me when DDR5 RAM comes out... and socket AM5 in 2021... Ryzen is going to be the new king... Infinity Fabric is a long-term game for Lisa Su, I think. It's amazing how much of a performance gain I got with the OC'd RAM.


----------



## cucker tarlson (Nov 15, 2019)

AsRock said:


> if you go and buy intel your *stabbing AMD in the back by not buying from them*



you guys are adorable


btw this isn't gonna be $500 even if it launches at $500,just as 3900x isn't selling at msrp.


----------



## AsRock (Nov 15, 2019)

cucker tarlson said:


> you guys are adorable



Right back at ya, with your edited quote(s).  Again, if it was not for AMD, shit would be worse for all of us; it's just a fact that prices would be higher.


----------



## Space Lynx (Nov 15, 2019)

AsRock said:


> Right back at ya, with your edited quote(s).  Again if it was not for AMD shit be worst for all of us, just a fact that prices would be higher.



It is nice having competition again. Intel would have sat on 14nm variants for 5 more years if they had no competition.


----------



## cucker tarlson (Nov 15, 2019)

AsRock said:


> Right back at ya, with your edited quote(s).  Again if it was not for AMD shit be worst for all of us, just a fact that prices would be higher.


Apparently it works both ways.
Not all of us make PC part purchases to make a statement for the corporations.
A thinking customer will buy whatever is good for them, not for either corporation.


----------



## biffzinker (Nov 15, 2019)

I should have waited for AMD's 5700/XT and Nvidia's answer. At the time I was looking for more performance than Vega offered, to replace an RX 480. I've been happy with the performance the RTX 2060 offers.


----------



## londiste (Nov 15, 2019)

Darmok N Jalad said:


> But there’s nothing to keep AMD from undercutting this the day it launches. I would bet that the 3900X is much cheaper to produce


Why exactly do you think the 3900X is much cheaper to produce?
- The 3900X consists of a 14nm I/O die (~125mm^2) and two 7nm CCD dies (76mm^2 each).
- Both Zen and Zen+ dies are around 209mm^2.
- We do not know exactly how large Intel's 10-core die is, but since Skylake and its derivatives have a long history, we know the 4-core die (7700K) is ~125mm^2, the 6-core die (8700K) is ~175mm^2, and the 8-core die (9900K) is ~200mm^2. The numbers are not exact - a few mm^2 here or there - but close enough. Adding two more cores would put a 10-core die at ~225mm^2, assuming no other major changes are made.

Edit:
I messed up the sizes here. 2 additional cores add about 25mm^2 but I somehow went +50mm^2 from 4 to 6 cores. See comment below from @ppn . He correctly estimates 10-core CPU at 200mm^2 and 12-core CPU  at 225mm^2.
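The corrected figures - ~125mm² at 4 cores plus ~25mm² per extra 2-core group - can be sanity-checked with a trivial linear extrapolation. These are rough forum estimates, not official die measurements:

```python
# Linear die-area extrapolation from the approximate figures in this post:
# ~125 mm^2 for 4 cores, with each extra pair of cores adding ~25 mm^2.
# These are rough forum estimates, not official die measurements.

BASE_CORES = 4
BASE_AREA_MM2 = 125.0
MM2_PER_CORE = 12.5   # ~25 mm^2 per 2-core group

def est_die_area(cores: int) -> float:
    """Estimated die area, extrapolating linearly from the 4-core baseline."""
    return BASE_AREA_MM2 + (cores - BASE_CORES) * MM2_PER_CORE

for n in (6, 8, 10, 12):
    print(f"{n:2d} cores: ~{est_die_area(n):.0f} mm^2")
```

This lands a hypothetical 10-core die at ~200mm² and a 12-core at ~225mm², matching the corrected estimate.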

200-250mm^2 is not a big die yet; it's reasonable in terms of yields and production.
The 3900X has smaller individual dies but more total die area, and 7nm is more expensive.

When it comes to cost, also consider that in the case of Intel CPUs, Intel is both the CPU architecture designer and the manufacturer. They can work with both design and production margins. In the case of AMD, AMD designs the CPU architecture, but the dies are manufactured by TSMC. TSMC wants its own profit and its own margins regardless of what AMD does.

I am not saying Intel would be willing to cut into its margins - recent history shows exactly the opposite - but when shit hits the fan, they can.



lynx29 said:


> it is nice having competition again. intel would have sat on 14nm variants for 5 more years if they had no comeptition


Are you serious? History quite clearly shows Intel has had no intention of sitting on an older node if they can produce on a smaller one. Sandy Bridge was at 32nm, Ivy Bridge at 22nm, Broadwell at 14nm. 
They are on 14nm because they do not have a new process node available for production, pure and simple.

Hell, Intel *has been* sitting on 14nm variants for 5 years even though they do have competition.


----------



## TheLostSwede (Nov 15, 2019)

londiste said:


> 3900X has smaller dies but more die size and 7nm is more expensive.
> 
> When it comes to cost also consider in case of Intel CPUs Intel is both CPU architecture designer as well as manufacturer. They can work with both design and production margins. In case of AMD, AMD designs the CPU architecture but dies are manufactured by TSMC. TSMC wants its own profit and its own margins regardless of what AMD does.
> 
> I am not saying Intel would be willing to cut into its margins - recent history shows exactly the opposite - but when shit hits the fan, they can.



Yes, 7nm wafers are more expensive to make, but this is also why AMD only makes the chiplets at 7nm. Assuming the yields are decent, they should get a lot of chiplets per wafer.
The only number that came out early on was a 70% yield, which has hopefully improved by now.
One main advantage of chiplets, as you can see below, is that you're likely to get a much higher yield of good dies than you would making larger chips.
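The yield argument can be sketched with a toy Poisson defect model, yield = exp(-D0 * A). The defect density (0.2 per cm^2) and the 225 mm^2 monolithic die size are illustrative assumptions, not published foundry figures:

```python
import math

D0 = 0.2  # assumed defect density, defects per cm^2 (illustrative only)

def die_yield(area_mm2: float) -> float:
    """Fraction of defect-free dies under a simple Poisson model."""
    return math.exp(-D0 * area_mm2 / 100.0)  # mm^2 -> cm^2

mono = die_yield(225.0)         # hypothetical large monolithic die
chiplet = die_yield(76.0) ** 2  # two 76 mm^2 CCDs, both must be good

print(f"225 mm^2 monolithic yield: {mono:.1%}")
print(f"two 76 mm^2 chiplets:      {chiplet:.1%}")
```

Even requiring two good chiplets, the smaller dies come out ahead (their combined area exposed to defects is smaller), and the gap widens as defect density rises.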








Chiplet - WikiChip (en.wikichip.org)
A chiplet is an integrated circuit block that has been specifically designed to work with other similar chiplets to form larger, more complex chips. In such chips, a system is subdivided into functional circuit blocks, called 'chiplets', that are often made of reusable IP blocks.
				




You're right that Intel has more control over things though, but, as with so many things, there's also internal pricing, so it's not like Intel's manufacturing costs are free of charge.
Samsung is a great example here as well, where some of its business units have outsourced chip production, either because it would've been more costly to do it in-house, or because they simply didn't have the production capacity at times to keep up with demand.

We know Intel is having similar problems, partially due to them making cellular modems for Apple, which is why we have a desktop consumer CPU shortage. If you don't have the fab space/production lines to meet demand, then something is going to suffer, and in Intel's case it has been their own products, where they're not under a contractual obligation to supply a certain number of chips per month.

On top of that, these companies need to keep shareholders happy, so they need to make a certain profit per chip, which is why we're now seeing AMD increase the cost of their products. This was kind of expected, but I doubt people are happy about it.


----------



## londiste (Nov 15, 2019)

All I am saying is that it is not clear-cut like it is often made out to be - chiplets, so AMD wins because cheap.

At these die sizes, and with 14nm being much more mature than 7nm, I seriously doubt the chiplet design has that significant a benefit. Again, AMD has been competing extremely well over the last 2.5 years with monolithic dies that are within 10% of the size of what I assume Intel's 10-core die will be.

By the way, AMD has said they only make the chiplets on 7nm because IO does not scale down well. From what I understand, it is not so much that IO is problematic as that making IO on 7nm would be a waste, because the features are large enough that they can be manufactured much more efficiently on 12/14nm.


----------



## biffzinker (Nov 15, 2019)

londiste said:


> 14nm IO die (~125mm^2)


From what I had read, the I/O die is 12nm for desktop and 14nm for Threadripper/EPYC.


----------



## TheLostSwede (Nov 15, 2019)

londiste said:


> All I am saying is that it is not clear-cut like it is often made out to be - chiplets, so AMD wins because cheap.
> 
> At these die sizes and with 14nm being much more mature than 7nm is, I seriously doubt chiplet design has that significant of a benefit. Again, AMD has been competing extremely well in the last 2.5 years with monolithic dies that are within 10% of the size what I assume Intel's 10 core will be.
> 
> By the way, AMD has said they only make chiplets on 7nm because IO does not scale down well. From what I understand it is not so much about IO being problematic but about making IO on 7nm being waste because features are large enough that they can be manufactures much more efficiently on 12/14nm.


Indeed. They have a good chance of high yields on the chiplets, but as a lot of CPUs need two or more of them, the yields better be good or they're screwed.

Well, mature doesn't mean better. TSMC seems to be pretty damn good at cranking out maximum chips per wafer, as if they weren't, the business would've gone elsewhere.
On the other hand, Intel should have as good yields as they'll ever get on 14nm, so there's nothing to improve, whereas AMD and TSMC might still have some improvements coming that will increase yields.

And this will be, or might already be, a problem for Intel with the move to 10nm. Plenty of things don't scale well, and that's why Intel is working hard on EMIB (Embedded Multi-Die Interconnect Bridge), as it'll allow them to combine dies from multiple different production nodes inside a single package. I'm sure we'll see something similar from a wide range of chip makers in the future. Glue is going to get popular...






Source: https://www.tomshardware.com/news/intel-emib-interconnect-fpga-chiplet,35316.html


----------



## biffzinker (Nov 15, 2019)

TheLostSwede said:


> TSMC seems to be pretty damn good at cranking out maximum chips per wafer, as if they weren't, the business would've gone elsewhere.


Likely why Intel moved to having Atom and the lower-end chipsets (H310) fabbed at TSMC.

It would seem Intel may have been outsourcing the production of Atom as far back as 2009.





Why Is Intel Out-Sourcing Atom? (www.electronicsweekly.com)
One of the questions Intel has undoubtedly got lots of briefing notes on for this evening's announcement of its outsourcing deal on Atom with TSMC is: Why


----------



## londiste (Nov 15, 2019)

TheLostSwede said:


> On the other hand, Intel should have as good yields as they'll ever get on 14nm, so there's nothing to improve, whereas AMD and TSMC might still have some improvements coming that will increase yields.


Pretty sure AMD is moving to N7+ and N6


----------



## TheLostSwede (Nov 15, 2019)

londiste said:


> Pretty sure AMD is moving to N7+ and N6


Which in turn, might mean worse yields, at least for a time.


----------



## londiste (Nov 15, 2019)

TheLostSwede said:


> Which in turn, might mean worse yields, at least for a time.


Yes. But it's also worth it, because small improvements are expected in both density and power/frequency. A couple hundred MHz and more transistors go a long way.


----------



## TheLostSwede (Nov 15, 2019)

Oh, and with regard to the topic of this thread: it's not going to be soon. Expect it to be announced at Computex next year.


----------



## ppn (Nov 15, 2019)

londiste said:


> Why exactly do you think 3900X is much cheaper to produce?
> - 3900X consists of 14nm IO die (~125mm^2) and two 7nm CCD dies (76mm^2).
> ...



Coffee Lake:
4-core: 126 mm² die size
6-core: ~149.6 mm² die size
8-core: ~174 mm² die size
Comet Lake 10-core: ~199 mm² die size

(theoretical) Rocket Lake, 14nm++++:
12-, 14- and 16-core: ~225 mm² die size, without any integrated graphics whatsoever; Xe GFX on a separate chip

For instance, the quad-core 32nm Sandy Bridge is a 216 mm² die, so a 225 mm² GFX-less 16-core is not that much bigger. We are still safe. It will hold the line until Ice Lake-S for desktop.
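For scale, here is the common gross-dies-per-wafer approximation applied to a few of the die sizes above (300 mm wafer, no scribe lines or edge exclusion, so ballpark figures only):

```python
import math

WAFER_DIAM_MM = 300.0

def gross_dies(area_mm2: float) -> int:
    """Common approximation: wafer area / die area, minus an edge-loss term."""
    d = WAFER_DIAM_MM
    return int(math.pi * (d / 2) ** 2 / area_mm2
               - math.pi * d / math.sqrt(2.0 * area_mm2))

for name, area in [("4-core Coffee Lake", 126.0),
                   ("10-core Comet Lake", 199.0),
                   ("16-core Rocket Lake (theoretical)", 225.0)]:
    print(f"{name} ({area:.0f} mm^2): ~{gross_dies(area)} dies per wafer")
```

The edge-loss term matters more for big dies, which is part of why die growth hurts more than the raw area ratio suggests.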


----------



## londiste (Nov 15, 2019)

@ppn thanks for the correction. I don't know how I messed that lineup up; I've now added a note about your correction to my original post. Thanks!

Intel is not really safe with this though. Die size is only one of the factors. The big one that will bite them is power, especially over many-core load.

That 5.2 GHz for the i9-10900KF in the original news blurb might be true, but it is very likely that Intel will follow AMD's lead and push high voltages into one or a few cores for these high boost speeds. The same 1.4V that AMD uses for boost clocks in the Ryzen 3000 series should allow Intel to boost to 5.2-5.3GHz, and they can go further for more. A single core at 1.5V might be doable, and that would win them another couple hundred MHz while efficiency goes out the window. Whether the die can take something like that, we simply don't know.
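The efficiency hit is easy to see from the usual first-order rule that dynamic power scales with V²·f. The voltages and frequencies below are just the figures discussed above, not measured values:

```python
def rel_power(v: float, f_ghz: float, v0: float = 1.4, f0: float = 5.2) -> float:
    """Dynamic power relative to a 1.4 V / 5.2 GHz baseline (P ~ C * V^2 * f)."""
    return (v / v0) ** 2 * (f_ghz / f0)

# ~200 MHz more for 0.1 V extra costs roughly 19% more dynamic power
print(f"5.4 GHz at 1.5 V: {rel_power(1.5, 5.4):.2f}x baseline power")
```

That quadratic voltage term is why the last few hundred MHz are so expensive on both sides.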


----------



## EarthDog (Nov 15, 2019)

londiste said:


> Intel is not really safe with this though. Die size is only one of the factors. The big one that will bite them is power, especially over many-core load.


It's weird to me that when the shoe (power) is on the other foot, it's suddenly an issue. People are fickle.


----------



## Darmok N Jalad (Nov 15, 2019)

londiste said:


> Why exactly do you think 3900X is much cheaper to produce?
> - 3900X consists of 14nm IO die (~125mm^2) and two 7nm CCD dies (76mm^2).
> - Both Zen and Zen+ dies are around 209mm^2.
> - We do not know exactly how large Intel's 10-core thing is but since Skylake and derivatives have a long history, we know that 4 core die (7700K) is ~125mm^2, 6 core die (8700K) is ~175mm^2 and 8-core die (9900K) is ~200mm^2. The numbers are not exact, a few mm^2 here or there but close enough. Adding two more cores would put a 10-core die at ~225mm^2 assuming no other major changes are made.
> ...


Simply looking at history, AMD has been able to go cheaper than Intel. AMD has tons less overhead than Intel. It's not about silicon sizes and substrates, it's margins. AMD can sell for less because they don't have to pay as many employees or keep as many buildings running. Intel has fabs scattered across the globe, and Intel depends on those to keep die sizes down to meet those margins. Being stuck at 14++ is no help. AMD can cease production with TSMC at any time with far less risk. This situation is probably close to Intel's worst nightmare: to make a large (for them) CPU on a HEDT platform and sell it for low margins. You can't just compare two dies and assume all other things are equal.


----------



## kapone32 (Nov 15, 2019)

It would seem that this release is focused on competing with AMD's 12 and 16 core offerings. The 9900K was the answer to the 1700/1800/2700, and the KS was the response to the 3700/3800. As they had nothing to answer the 3900X and 3950X, they have probably been working on this since before the official announcement of the 3900/3950. AMD is leading in the CPU space right now; at every price level they have a compelling option. The prices of the 3900/3950 have been a bit controversial, as they are more than AMD has ever charged for a desktop CPU. If Intel does do this, though, it would force down the price of the 9900K, which is already more than $499, and create greater competition for the 3700/3800 series. AMD would potentially have to lower the price of the 3900/3950 if reviews and other data show that a 10-core hyper-threaded chip brings significant gains for Intel, as the extra 2 cores help in multi-threaded applications and may be the sweet spot for upcoming games.


----------



## CrAsHnBuRnXp (Nov 15, 2019)

10 series CPU names dont roll off the tongue easily. We need a new series with new names.


----------



## ratirt (Nov 15, 2019)

kapone32 said:


> The prices of the 3900/3950 have been a bit controversial as they are more than AMD has ever sold a Desktop CPU for.


It's weird you say that, though. Just because AMD never charged that much, these prices must be controversial? These processors have much more to offer than Intel's CPUs, so the price is higher. It's not about being controversial but about whether the price is justified, and I think these are very good prices for the 3900/3950. What I'm trying to say is: look at the product and its price-to-performance, value, or whatever the processor offers, not at the company and what it used to sell its products for. That's a fool's errand to me.
We've moved from 4c to 16c in the desktop segment, into what has been server-market territory, for a fraction of the price.


----------



## EarthDog (Nov 15, 2019)

ratirt said:


> We've moved to 16c from 4c in the desktop segment to what has been server market for fraction of the price.


If only general consumers and enthusiasts could utilize more than half of that....


----------



## kapone32 (Nov 15, 2019)

ratirt said:


> It's weird you say that though. Just because AMD never charged that much these prices must be controversial? These processors have much more to offer than Intel's CPUs so the price is higher. It is not about controversial but if it is justified to price these that much. I think these are very good prices for the 3900/3950 AMD CPUs. So I'm trying to say is, look at a product and price to performance or value or anything that processors offers not a company and what it used to sell their products for. Cause that's fools errand to me.
> We've moved to 16c from 4c in the desktop segment to what has been server market for fraction of the price.



I am not one of those people who feel the prices for those CPUs are too high. I was just going based on comments I have seen here and elsewhere from people lamenting how AMD used to be the value king and now isn't. I do not disagree that if AMD had never gone past 4 or 8 cores, we would never have seen anything like this from Intel at the price being quoted.


----------



## AsRock (Nov 15, 2019)

cucker tarlson said:


> apparently it works both ways.
> not all of us make pc part purchases to make a statement for the corporations.
> a thinking customer will buy whatever is good for them,not for either corporation.



True, which is why I own mainly Intel systems, as AMD wasn't producing anything near Intel's performance back then. Clearly not the case now.


----------



## cucker tarlson (Nov 15, 2019)

AsRock said:


> True, which is why I own mainly Intel systems, as AMD wasn't producing anything near Intel's performance back then. Clearly not the case now.


I'm playing the waiting game.
I want either the 3700X to drop to the former 2700X price (around 1250 PLN, it's 1500 now), in which case I'll get it with an X570 board (800 for the Asus Prime), or the 9700K to drop to the current 3700X price (it's 1700 now), in which case I'll get it with a Z390 Gaming X (550).


----------



## londiste (Nov 15, 2019)

Darmok N Jalad said:


> Simply looking at history, AMD has been able to go cheaper than Intel. AMD has tons less overhead than Intel. It’s not about silicon sizes and substrates—it’s margins. AMD can sell for less because they don’t have to pay as many employees or keep as many buildings running. Intel has fabs scattered across the globe, and Intel depends on those to keep die sizes down to meet those margins. Being stuck at 14++ is no help. AMD can cease production with TSMC at any time with far less risk. This situation is probably close to Intel’s worst nightmare—to make a large (for them) CPU on a HDET platform and sell it for low margins. You can’t just compare two dice and assume all other things are equal.


- History shows Intel getting stupidly high margins. AMD has gone lower in price, but usually not because they wanted to.
- Intel is a larger company, but they also compete in many markets, and the foundry business is a very capital-heavy one at that.
- AMD cannot cease production with TSMC. This is their risk: if something should happen in/to TSMC that sets them back, AMD would be quite out of luck.
- The situation is Intel's nightmare alright, but why are you bringing HEDT into it? Desktop CPUs were the topic here. Intel's many-core Xeons are screwed in most places; HEDT is a niche market and has a lot more to do with marketing than profit.
- When it comes to production and cost, dies can be compared fairly well. That said, multi-die packages for CPUs are more complex, more costly, and more likely to bottleneck somewhere.



kapone32 said:


> The prices of the 3900/3950 have been a bit controversial as they are more than AMD has ever sold a Desktop CPU for.


Sure they have. The Athlon 64 FX-53 was over $700, the FX-55 was over $800, and the FX-57 was a little over $1,000.


----------



## Bones (Nov 15, 2019)

Worth noting AMD also had the Athlon 64 X2 4800+ for Socket 939, which sold for $1,000 when first introduced.

Can't blame AMD for the current price range; it's actually competitive and will give them some much-needed capital to continue with. With yields from the 7nm process improving as time goes on, it's going to get better for AMD, which in turn means better for us too.

Believe me, if I were able I'd be all over a 3900/3950 chip, but for now I'll have to settle for the 3600X I have. May not be the biggest, but it's still good.
The 2700X I'm running in this build isn't bad either and gets it done.


----------



## Darmok N Jalad (Nov 15, 2019)

londiste said:


> - History shows Intel getting stupid high margins. AMD has gone lower in price but usually not because they wanted to.
> - Intel has larger company but they also compete in many markets. Foundry is very capital-heavy one at that.
> - AMD cannot cease production with TSMC. This is their risk. If something should happen in/to TSMC that sets them back, AMD would be quite out of luck.
> - Situation is Intel's nightmare alright but what are you bringing HDET into it? Desktop CPUs were the topic here. Intel's manycore Xeons are screwed in most places, HDET is a niche market and has a lot more to do with marketing rather than profit.
> ...



This CPU we are talking about probably won't be on the consumer socket, but the HEDT socket. That is why I bring it up. 8 cores at 14nm is probably already a strain on the current consumer platform, as Intel was counting on 10nm chips being mainstream years ago. The HEDT socket is more complex and therefore more expensive. As for the rest, AMD can drop one fab for another if the current one fails or another has better advancements. Intel has to develop this themselves; historically, that's been an advantage (one that Intel has paid dearly for), until something like 10nm fails to qualify. Intel has lost tons of money investing capital in a node that has failed to reach production goals. Now, AMD obviously cannot go on without a fab contract, but they can choose any fab outside of Intel for their production needs, and that market has clearly advanced since the days of the Athlon 64. And I know AMD obviously doesn't want to go low; what company does? My point is they have shown the ability and willingness to do so. The GPU division is a prime but sad example of that.


----------



## londiste (Nov 15, 2019)

10900KF, or whatever its final name will be? It will definitely be in a desktop socket.
The original post here is a leak and details might be wrong: the socket is said to be s1159, while other roadmap slides say s1200.
The 10-core Comet Lake itself has been on roadmaps for a while, and we do know it is in the works.

There are two foundries with 7nm today, TSMC and Samsung, and Samsung is so far unproven. Intel's 10nm goal is to be at the same level.
Intel, Samsung and TSMC have all spent tens of billions of dollars on research and fabs.


----------



## ShrimpBrime (Nov 15, 2019)

Well, it's gonna be a faster chip than the previous one, with more cores. People are going to buy this product. The price isn't really that bad, and it's far lower than traditional Intel pricing.

Gotta say, I'm impressed with Intel actually. They've stretched 14nm just about as far as it can possibly go, and then.... they release yet another!!! Awesome.

H. highly
E. extended
D. dangerous
T. temperatures

I mean, high-end desktop.... sorry for the confusion.
Meaning always a new socket.. lol, forgot that part, sry.

edit: fixed typo, 10nm to 14nm


----------



## ratirt (Nov 18, 2019)

EarthDog said:


> if only a general consumer and enthusiast can utilize more than half that....


You don't need to buy it. Intel offers 8c for a lower price, so you can go with that, or with a 12c Ryzen. If this is still too much for you, go with an older 6c CPU, or just buy a PS4, or wait for the PS5 or the next Xbox and call it a day.
Besides, it is not about how much you can utilize but what you can get for your money.



kapone32 said:


> I am not one of those people that feel the price for those CDPUs are too high. I was just going based on comments I have seen here and other places about people lamenting on how AMD used to be the value king and now isn't. I do not disagree that if AMD had never gone past 4 or 8 cores we would not ever have seen anything like this from Intel for the price being quoted.


Isn't? The price is higher, but the performance is as well. I don't think you see the bigger picture here. Maybe watch some reviews of the 3900 and 3950? It's not just the price but what you get for the money. Price-to-performance is what you should be after here.


----------



## AsRock (Nov 18, 2019)

cucker tarlson said:


> I'm playing the waiting game.
> I want either the 3700X to drop to the former 2700X price (around 1250 PLN, it's 1500 now), in which case I'll get it with an X570 board (800 for the Asus Prime), or the 9700K to drop to the current 3700X price (it's 1700 now), in which case I'll get it with a Z390 Gaming X (550).



Waiting here too, even more so with AMD's new socket on its way soon; otherwise I would have opted for a 3900X.


----------



## Zach_01 (Nov 18, 2019)

AsRock said:


> Waiting here too, even more so with AMD's new socket on its way soon; otherwise I would have opted for a 3900X.


The new mainstream desktop socket (the AM4 successor, AM5???) isn't coming until 2021~22. The next Ryzen 4000 series will be AM4 as well.
IMHO, for at least the next 5 years the waiting game will have minimal ROI, because AMD will keep releasing newer and faster products almost every year, and though Intel is now between a rock and a hard place (rehashing the same things), it will eventually (in 1 or 2 years) join AMD in this game with competitive products.


----------



## ratirt (Nov 18, 2019)

Zach_01 said:


> The new mainstream desktop socket (the AM4 successor, AM5???) isn't coming until 2021~22. The next Ryzen 4000 series will be AM4 as well.
> IMHO, for at least the next 5 years the waiting game will have minimal ROI, because AMD will keep releasing newer and faster products almost every year, and though Intel is now between a rock and a hard place (rehashing the same things), it will eventually (in 1 or 2 years) join AMD in this game with competitive products.


Are you sure about this? What I've read is the new 4000 series Ryzen might be on a different socket and add DDR5 to that?


----------



## TheLostSwede (Nov 18, 2019)

ratirt said:


> Are you sure about this? What I've read is the new 4000 series Ryzen might be on a different socket and add DDR5 to that?


No, Ryzen 4000 will use the same socket and it's the last CPU to use the socket. No DDR5 for consumer PCs in 2020.


----------



## Zach_01 (Nov 18, 2019)

ratirt said:


> Are you sure about this? What I've read is the new 4000 series Ryzen might be on a different socket and add DDR5 to that?


It's way too soon for DDR5 as well. Yes, I read and watch a lot about the subject, and all the info, by inference, leads to this.


----------



## cucker tarlson (Nov 18, 2019)

AsRock said:


> Waiting here too even more so with AMD's new socket on it's way soon, other wise i would of opted for a 3900X.


Imo there's no reason for AMD to go with a new socket.
They're not Intel. I'm glad my Z97 was compatible with Haswell, the refresh, and Broadwell, but what Intel has been doing since Z170 is ridiculous.


----------



## londiste (Nov 18, 2019)

TheLostSwede said:


> No, Ryzen 4000 will use the same socket and it's the last CPU to use the socket. No DDR5 for consumer PCs in 2020.


I do not think this has been confirmed either way. We assume Ryzen 4000 will work because of "AM4 support until 2020" statement and the fact that Ryzen 4000 is expected next year.

AMD will switch socket - presumably to AM5 - when DDR5 is out. SK Hynix has said they will come out with DDR5 modules by end of 2019 and DDR5 modules should be fully available in 2020. Micron and Samsung have seconded this to some degree. By all indications, DDR5 should be a thing in 2020. Whether either Intel or AMD will embrace it remains to be seen.


----------



## ratirt (Nov 18, 2019)

Zach_01 said:


> Way too soon for DDR5 either. Yes I read a lot and watch a lot about the subject and all info by inference leads to this.


Well, I'm waiting for the new Ryzen to replace my 2700X. I was certain that the socket was changing as well, which is why I didn't go for the 3000 series: to avoid incompatibility with next year's Ryzen. Maybe I should have gone for the 3000 series after all. With all the rumors it is hard to keep up.


----------



## Zach_01 (Nov 18, 2019)

ratirt said:


> Well, I'm waiting for the new Ryzen to replace my 2700X. I was certain that the socket was changing as well, which is why I didn't go for the 3000 series: to avoid incompatibility with next year's Ryzen. Maybe I should have gone for the 3000 series after all. With all the rumors it is hard to keep up.


Your CPU/board is still very nice, performance- and feature-wise. Of course you can wait and still stick a 4000-series chip in that board; it doesn't really differ much from my 3600's setup. It'll probably be out in about ~8 months (mid 2020).

I get the feeling we've drifted pretty far from this thread's topic tho...


----------



## 95Viper (Nov 18, 2019)

Yep, keep the thread on topic.
Thank You.


----------



## EarthDog (Nov 18, 2019)

ratirt said:


> You don't need to buy it. Intel offers 8c for lower price so you can go with that or 12c Ryzen. If this is still too much for you go 6c older CPU or just buy PS4 or wait for PS5 or Xbox and call it a day.
> Besides it is not about how much you can utilize but what you can get for your money.


Wait? Those are my choices? Oh, thank you!! I was unaware...lol.......... that wasn't my point though, lol.

It's clear to me that AMD cannot compete on clocks with the Ryzen arch. They knew it and went modular/wide instead. The benefits of that, today, aren't much for the average user or even most enthusiasts. With the PS5 or the next Xbox, or whatever has AMD hardware in it, coming out late next year, perhaps we will FINALLY see width and more cores and threads utilized by most users. But that will take TIME... remember, hex cores have been in the market for at least what, 8 years, and just today 4c/8t would be considered a 'minimum' for most. More than 10c/20t on the mainstream platform, either side, is just too much. AMD blurred the lines and, speaking strictly from a core-count perspective, brought out products we don't need. The good thing about this is the cheap pricing... otherwise, yes, today, for someone buying a PC, unless you are a content creator etc., 6c/12t is the sweet spot, 8c/16t is enthusiast level, and 10c/20t is just nuts for most... I just don't like seeing that many cores/threads in mainstream when the reality is very few can use them.


----------



## ratirt (Nov 18, 2019)

EarthDog said:


> Wait? Those are my choices? Oh, thank you!! I was unaware..........wasn't my point though, lol.


It wasn't my point to tell you about your choices, but a sincere suggestion to think about them. 
It is clear to me that Intel can't compete with AMD on core count with either of its architectures. Going wide was a good move for AMD, and to me the only one: Intel can't raise clocks every new gen, and you know that, so clocks are pointless as an argument here. Unless you have a different one? At least try to understand this.
The number of cores and threads that counts as the sweet spot is your opinion, and you are clearly referring to games. So buy a console if you only want to play games, and stay away from computers, since they can do way more than run Minecraft or CS:GO.
Since you point out core and thread utilization: remember, first you need the resources, then somebody will utilize them. If you are stuck for so long with 4c in a mainstream desktop, then no sane developer will target anything beyond what is available and cripple their own product. Now that we have more resources, let's see what developers do with them.
BTW, I'm sure you have already noticed that some new titles require more than 4c, since you are so into gaming.


----------



## kapone32 (Nov 18, 2019)

ratirt said:


> You don't need to buy it. Intel offers 8c for lower price so you can go with that or 12c Ryzen. If this is still too much for you go 6c older CPU or just buy PS4 or wait for PS5 or Xbox and call it a day.
> Besides it is not about how much you can utilize but what you can get for your money.
> 
> 
> Isn't? The price is higher but the performance is as well. I don't think you see the bigger picture here. Maybe watch some reviews about the 3900 and 3950? It's not just price but what you get for the money. Price to performance is what you should be after here.



I totally understand that the 3900 and 3950 are not even marginally overpriced for what they offer (the Intel 10-core was $1,999 in every issue of CPU magazine I read). They are the best for price/performance, but I will give you this: the 2920X is $474 Canadian right now and the 1920X is as low as $300, and even though the 3900X is 10 to 15% faster, wouldn't those be the price/performance leaders among 12-core CPUs?
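A quick perf-per-dollar check with the numbers in the post: the ~12% uplift is the midpoint of the quoted 10-15%, and the 3900X CAD price here is an assumed placeholder, not a quoted figure:

```python
# CAD prices; the 3900X figure is an assumption for illustration only
prices_cad = {"1920X": 300.0, "2920X": 474.0, "3900X": 830.0}
rel_perf   = {"1920X": 1.00,  "2920X": 1.00,  "3900X": 1.12}  # 3900X ~12% faster

for cpu in prices_cad:
    ppd = rel_perf[cpu] / prices_cad[cpu] * 100.0
    print(f"{cpu}: {ppd:.3f} relative-performance points per $100 CAD")
```

On these numbers the discounted 1920X is indeed the perf-per-dollar leader, exactly as the post suggests.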


----------



## juiseman (Nov 18, 2019)

EarthDog said:


> Wait? Those are my choices? Oh, thank you!! I was unaware...lol.......... that wasn't my point though, lol.
> 
> It's clear to me that AMD cannot compete on clocks with the Ryzen arch. They knew it and went modular/wide instead. The benefits of that, today, aren't much for the average user or even most enthusiasts. With the PS5 or the next Xbox, or whatever has AMD hardware in it, coming out late next year, perhaps we will FINALLY see width and more cores and threads utilized by most users. But that will take TIME... remember, hex cores have been in the market for at least what, 8 years, and just today 4c/8t would be considered a 'minimum' for most. More than 10c/20t on the mainstream platform, either side, is just too much. AMD blurred the lines and, speaking strictly from a core-count perspective, brought out products we don't need. The good thing about this is the cheap pricing... otherwise, yes, today, for someone buying a PC, unless you are a content creator etc., 6c/12t is the sweet spot, 8c/16t is enthusiast level, and 10c/20t is just nuts for most... I just don't like seeing that many cores/threads in mainstream when the reality is very few can use them.



Well, the IPC improvements make up for most of that. The goal is to not push the clock too high; efficiency goes way down.


----------



## ratirt (Nov 18, 2019)

kapone32 said:


> I totally understand that the 3900 and 3950 are not even marginally overpriced for what they offer (the Intel 10-core was $1,999 in every issue of CPU magazine I read). They are the best for price/performance, but I will give you this: the 2920X is $474 Canadian right now and the 1920X is as low as $300, and even though the 3900X is 10 to 15% faster, wouldn't those be the price/performance leaders among 12-core CPUs?


Well, you do realize the prices for the 2920X and 1920X were cut to make room for the new Ryzens? Of course, if you want performance for the money, it's better to go with the older gen, and if you go used you can go even lower and still have decent performance. But is that really what it's about when you compare newly released products? Do you compare them to older generations with cut prices, or to the competition's new processors that are their counterparts?


----------



## kapone32 (Nov 18, 2019)

ratirt said:


> Well, you do realize the prices for the 2920X and 1920X were cut to make room for the new Ryzens? Of course, if you want performance for the money, it's better to go with the older gen, and if you go used you can go even lower and still have decent performance. But is that really what it's about when you compare newly released products? Do you compare them to older generations with cut prices, or to the competition's new processors that are their counterparts?



Of course I realize that; I have said this before and I stand by it. All of these are Ryzen processors. Yes, you could get better deals on the used market (for sure). You are expressing my thought process: there are no counterparts to those 12-core CPUs that use DDR4 except for the 1920X and 2920X (which were both more than the 3950X at launch). When Intel releases this 10-core CPU, it might be team blue's closest direct competitor to AMD's 12-core monsters.


----------



## Frick (Nov 18, 2019)

EarthDog said:


> I just don't like seeing that many cores/threads in mainstream when the reality is very few can use it.



That's part of why it's good though: raising the lowest common denominator.


----------



## EarthDog (Nov 18, 2019)

ratirt said:


> It wasn't my point to tell you about your choices but a sincere suggestion to think about them.
> It is clear to me that intel can't compete in core count with either of its arch with AMD. It is a good move and for me only one. Intel can't up clocks every new gen and you know that. So clocks are pointless in this case as an argument. Unless you have different? At least try to understand this.
> Number of cores and threads that are the sweet-spot is your opinion and you are clearly referring to games. So, buy a console if you want to play games and stay away from computers since they can do way more than run Minecraft or CS:GO.
> Since you point out cores and threads utilization. Remember, first you need resources, afterwards somebody will utilize them. If you are stuck for so long with 4c (in a mainstream like desktop) then nobody sane will develop anything beyond what is being given to cripple his own product. Now we have more resources, lets see what developers will do with it.
> BTW, I'm sure you have already noticed that some new titles require more than 4c since you are into gaming so much.


1. I have thought about them. That much should be clear.
2. Their clocks are higher... especially dual-core and all-core boost. Clocks are not pointless. Clearly.
3. Yep. I am talking about games and also made a distinction about content creation and users that can actually UTILIZE the cores and threads.
4. I pointed out that hex cores have been in the market for nearly a decade and most things, including games, don't utilize them. Adaptation is slow. But make no mistake about it, the resources have been there for several years already and devs haven't done much in that time.
5. I have noticed that a couple of new titles can run more than 4c. And you may notice I covered that point as well... I clearly stated that a MINIMUM for gaming would be 4c/8t where a sweetspot is 6c/12t today.

Intel's new CPU is really on the edge between mainstream and HEDT for me...

Let me quote myself for clarity.....


EarthDog said:


> ... remember hex cores have been in the market for at least what, 8 years, and just today, 4c/8t would be considered a 'minimum' for most.





EarthDog said:


> ... otherwise, yes, today, for someone buying a PC, unless you are a content creator etc, 6c/12 is the sweetspot while 8c/16t is enthusiast level, 10c/20t is just nuts for most... I just don't like seeing that many cores/threads in mainstream when the reality is very few can use it.


Capeesh?



juiseman said:


> Well; the IPC improvements make up for most of that. The goal is to not push the clock too high; efficiency goes way down.


They are on par with Intel or slightly ahead... that is awesome, but it has little to do with the obnoxious core count on the new Intel chip here and AMD's mainstream offerings.



Frick said:


> That's part of why it's good though: raising the lowest common denominator.


Again, we've had hex cores out for almost a decade already. So I agree with this, but to what end today, when devs have had the chance for several years already? I think the new consoles will grease the wheels... but the resources have been there, and outside of power users, few can utilize the large core counts.




In the end, we'll all have to agree to disagree. I simply wish we were NOT in a core war and that Intel hadn't followed suit... I wish there were more of a black-and-white line between mainstream and HEDT core counts instead of shoehorning 16c/32t into mainstream (or anything 10c+, honestly).


----------



## Frick (Nov 18, 2019)

EarthDog said:


> Again, we've had hex cores out for almost a decade already. So I agree with this, but to what end today when devs had a chance for several years already. I think the new consoles will grease the wheels... but the resources have been there and outside of power users, few can utilize the large core counts.



That's the same point, isn't it? Moar people have moar cores, meaning devs also have moar cores to play against. Consoles have a big impact for sure.


----------



## EarthDog (Nov 18, 2019)

Frick said:


> That's the same point, isn't it? Moar people have moar cores, meaning devs also have moar cores to play against. Consoles have a big impact for sure.


A hex-core (or greater) CPU has been out for at least 8 years (the AMD Phenom, so it was cheap and available) and we've barely gotten anywhere in gaming (again, content creators and such, yes, but those are few and far between among home users). I do believe consoles will be the lubricant, but 12-16c on mainstream is just ridiculous (the consoles are 8c/16t CPUs, right???) at this time, to me. Maybe in 2021, in my feeble little brain, would this have been better... again, since the resources have been there for a while...

It's only a name, mainstream, I get it... but the way cores are paraded around, it makes those who have no clue (99% of users) think it means more than it does for the overwhelming majority of them.


----------



## Zach_01 (Nov 18, 2019)

For me, it's somewhere in the middle...
This is my opinion and assumption, based on what I'm seeing as processes shrink past 14~12nm and what I'm hearing and reading across the net.

Clock speeds are not irrelevant today. Most mainstream and some pro software has been built, from years ago to now, mostly around the clockspeed/IPC gains of CPUs. Developers did not bother much to utilize the wider resources (cores/threads) because it's far easier to lean on clock speed.
They have been lazy the past decade because CPUs kept increasing clock speed significantly, alongside and beyond IPC. They have been sleeping on the job...
I believe that era is soon to be over. Intel is gasping the last breaths of clock increases, and I bet its next all-new arch looks more like AMD's. Clock speed cannot increase infinitely, especially on 7nm >> 5nm >> 3nm processes... it just can't happen without significant leakage (and EMI).

Developers will eventually adopt the multi-core/threaded resources or go home, because a wall is coming, and coming fast... once Intel drops the clock-speed hunt... after 2020...


----------



## ppn (Nov 18, 2019)

Can we safely predict 10 cores at 5.2 GHz = 312 watts in Prime+AVX? Only if socket 1200 can even handle that power.
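As a back-of-the-envelope sanity check, dynamic CPU power roughly scales linearly with core count and frequency and quadratically with voltage (P ∝ n·f·V²). The baseline figures below (an 8-core part at 4.7 GHz all-core, 1.20 V, ~200 W in an AVX stress test, pushed to a hypothetical 1.35 V for 5.2 GHz) are illustrative assumptions, not measurements:

```python
# Back-of-envelope CPU power scaling: dynamic power ~ cores * f * V^2.
# All baseline numbers below are illustrative assumptions, not measurements.
def scale_power(p_base, cores_base, f_base, v_base, cores, f, v):
    """Scale power linearly with core count and frequency, quadratically with voltage."""
    return p_base * (cores / cores_base) * (f / f_base) * (v / v_base) ** 2

# Assumed baseline: 8 cores, 4.7 GHz all-core, 1.20 V, ~200 W in an AVX torture test.
# Assumed target:  10 cores, 5.2 GHz all-core at a hypothetical 1.35 V.
est = scale_power(200, 8, 4.7, 1.20, cores=10, f=5.2, v=1.35)
print(round(est))  # ~350 W under these assumptions
```

Under those made-up inputs the estimate lands in the same ballpark as the 312 W guess above; the real number depends entirely on the voltage the silicon actually needs at 5.2 GHz.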


----------



## Zach_01 (Nov 18, 2019)

Not to be mistaken: I'm not ranting on devs. I would have gone the same way in their place... to be honest...


----------



## juiseman (Nov 18, 2019)

EarthDog said:


> 1. I have thought about them. That much should be clear.
> 2. Their clocks are higher... especially dual core and all core boost. Clocks are not pointless. Clearly.
> 3. Yep. I am talking about games and also made a distinction about content creation and users that can actually UTILIZE the cores and threads.
> 4. I pointed out that hex cores have been in the market for nearly a decade and most things, including games, don't utilize them. Adaptation is slow. But make no mistake about it, the resources have been there for several years already and devs haven't done much in that time.
> ...



I'm going to "not agree" rather than "disagree" also. Are you saying that you would want to pay $400 for an Intel 4-core CPU vs. an Intel 10-core CPU for $500? I don't get that mindset.
If the core war had not started due to AMD being competitive again, this is where we would still be: $350-$390 for a 4-core i7, with HEDT CPUs usually going for $500-plus from what I recall...
For people that use productivity software, these cores help a LOT!! It's nice having the extra power for multitasking too. And yes, games will start to scale better in the future, so if you're a gamer it will matter someday.

Just my 2 cents. I think ultimately this helps all of us out: cheaper parts!! I still can't get past the whole 2017-2018 mining craze... that was clearly robbery. I'm glad prices finally started to return to normal, though you'll notice they are still slightly above pre-mining-craze levels. Who knows, maybe Intel will surprise us with a good GPU; even if it's only good in price vs. performance, it may still drive the two big players to lower some prices in the low to mid-low-end GPU segments.



ppn said:


> Can we safely predict 10 cores at 5.2 GHz = 312 watts in Prime+AVX? Only if socket 1200 can even handle that power.



Yea, that is a pretty safe bet. I have wondered the same about LGA1200 (or LGA1159, or whatever they use next)... I was thinking that Intel had to add more Vcc pins to power these added cores...
But then take a look at this: https://hexus.net/tech/news/cpu/125...-analysis-shows-extra-power-pins-unnecessary/
So that's hard to say. I'm not a microprocessor design engineer... just a nerd...


----------



## EarthDog (Nov 18, 2019)

juiseman said:


> Are you saying that you would want to pay $400 for an Intel 4 core CPU vs a Intel 10 core CPU for $500? I don't get that mindset. If the core war had not started due AMD being competitive again; this is where we would still be. $350-$390 for a 4 core i7. Then the HEDT CPU went usually $500 plus from what I recall...


Nope. I never said that. You seem to think competition and lower pricing require adding more cores/threads. They don't. Both companies could have stuck to 6/8c parts in the mainstream and kept higher core count CPUs in HEDT... also priced better. Then in 2021, after the consoles do their thing on the core war (and again, those are simply 8c CPUs, right?), start making more cores available in the mainstream segment.


juiseman said:


> For people that use productivity software, these cores help a LOT!! It's nice having the extra power for multitasking too. And yes, games will start to scale better in the future, so if you're a gamer it will matter someday.


My point exactly. There are those who can utilize c/t counts above 8c/16t, but it isn't many, and most who do have stuck to the HEDT platform in the past. Now the lines are blurred. I said from the get-go that there is a small contingent using these at home as such. Otherwise, typically, offices (at least the three places I have worked, including large pharma, a large utility, and AWS) buy mainstream potatoes for the office and HEDT/Xeon/TR for workstations.

Someday... we've been waiting for that for almost a decade now, no? You don't find it odd that quad cores were released over a decade ago, hex cores over 8 years ago, and only today are 4c/8t CPUs getting long in the tooth in a few titles? And yet, now that there are 10-16c products out, suddenly things will change, and quickly? I disagree, especially with the quickly part.



juiseman said:


> Just my 2 cents. I think ultimately this helps all of us out. cheaper parts!! I still can't get past the whole mining craze 2017-2018.....that was clearly robbery...I'm glad prices finally started
> to return to normal. But if you will notice they still are slightly above the pre-mining craze. Who knows, maybe Intel will surprise with a good GPU; even if its only good in the price vs performance ratio;
> its still may drive the 2 big players to lower some prices in the low to mid low end GPU segments..


It does ultimately help all of us out, but that has little to do with the core count IMO (hence the agree to disagree, lololol). My contention with this new Intel CPU and the core wars is simply that it is premature, blurring the lines between mainstream and HEDT/workstations, and can confuse the average Joe quite easily.


----------



## John Naylor (Nov 18, 2019)

If a new CPU came out that had more cores and a smaller die size... what % of folks would you think would choose it over a competing product w/ fewer cores and a larger die size that ran their programs faster?


----------



## EarthDog (Nov 18, 2019)

I think you are confusing things people would know (core count and performance) with something 99% don't (die size), and the relevance of such a thing.


----------



## ratirt (Nov 19, 2019)

EarthDog said:


> 1. I have thought about them. That much should be clear.
> 2. Their clocks are higher... especially dual core and all core boost. Clocks are not pointless. Clearly.
> 3. Yep. I am talking about games and also made a distinction about content creation and users that can actually UTILIZE the cores and threads.
> 4. I pointed out that hex cores have been in the market for nearly a decade and most things, including games, don't utilize them. Adaptation is slow. But make no mistake about it, the resources have been there for several years already and devs haven't done much in that time.
> 5. I have noticed that a couple of new titles can run more than 4c. And you may notice I covered that point as well... I clearly stated that a MINIMUM for gaming would be 4c/8t where a sweetspot is 6c/12t today.


ad. 1: Sure.
ad. 2: Pointless unless they can go much higher; 100 or 200 MHz is not much of a difference.
ad. 3: Good for you.
ad. 4: Where, in the server market, or are you talking about Phenom? Open your eyes; nobody will build support around a processor that didn't have the performance to begin with.
ad. 5: So if you covered everything so precisely, where is this "if only a general consumer could utilize these" coming from?

BTW, AMD doesn't compete on clocks but on performance, and as you already know, Intel will not keep going 5 GHz and up with every node shrink, so this "higher clocks" argument is, in my eyes, juvenile.
"This car is better 'cause it has more BHP." C'mon, man.


----------



## cucker tarlson (Nov 19, 2019)

EarthDog said:


> Wait? Those are my choices? Oh, thank you!! I was unware...lol..........wasn't my point though, lol.
> 
> It's clear to me that AMD cannot compete with clocks using Ryzen arch. They knew it and went modular/wide instead. The benefits of that, today, aren't much for the average user and even most enthusiasts. With the PS5 ro Xbox or w/e has AMD hardware in it coming out late next year, perhaps we will FINALLY see width and more cores and threads utilized for most users. But that will take TIME... remember hex cores have been in the market for at least what, 8 years, and just today, 4c/8t would be considered a 'minimum' for most. More than 10c/20t on the mainstream platform, either side, is just too much. AMD blurred the lines, and speaking strictly from a core count perspective, brought out products we don't need. The good about this the cheap pricing... otherwise, yes, today, for someone buying a PC, unless you are a content creator etc, 6c/12 is the sweetspot while 8c/16t is enthusiast level, 10c/20t is just nuts for most... I just don't like seeing that many cores/threads in mainstream when the reality is very few can use it.


quoted for truth.


----------



## Zach_01 (Nov 19, 2019)

Zach_01 said:


> For me, it's somewhere in the middle...
> This is my opinion and assumption, based on what I'm seeing as processes shrink past 14~12nm and what I'm hearing and reading across the net.
> 
> Clock speeds are not irrelevant today. Most mainstream and some pro software has been built, from years ago to now, mostly around the clockspeed/IPC gains of CPUs. Developers did not bother much to utilize the wider resources (cores/threads) because it's far easier to lean on clock speed.
> ...





Zach_01 said:


> Not to be mistaken: I'm not ranting on devs. I would have gone the same way in their place... to be honest...



And to add something more to the whole conversation: Windows is also responsible for not properly utilizing the wider resources. The Windows scheduler in particular is dumb... though it works great with the current Intel architecture. Narrow, long, and fast pipeline...


----------



## londiste (Nov 19, 2019)

Zach_01 said:


> And to add something more to the whole conversation: Windows is also responsible for not properly utilizing the wider resources. The Windows scheduler in particular is dumb... though it works great with the current Intel architecture. Narrow, long, and fast pipeline...


Skylake is 8-issue wide and 14-19 pipeline stages. Zen is 10-issue wide and 19 pipeline stages, Zen2 should be pretty much the same.
Windows scheduler issues are primarily related to CCX/NUMA node handling with Zen's peculiar layout and fast(est) core issues with Zen2.


----------



## 1000t (Nov 19, 2019)

Zach_01 said:


> For me, it's somewhere in the middle...
> This is my opinion and assumption, based on what I'm seeing as processes shrink past 14~12nm and what I'm hearing and reading across the net.
> 
> Clock speeds are not irrelevant today. Most mainstream and some pro software has been built, from years ago to now, mostly around the clockspeed/IPC gains of CPUs. Developers did not bother much to utilize the wider resources (cores/threads) because it's far easier to lean on clock speed.
> ...


This post looks like it's a decade late (except the nm numbers). I'll explain.

The benefits of parallelism are extracted in two ways: within a core at the instruction level, by an out-of-order architecture with the help of a wide pipeline or vector instructions, and across cores via multicore/multithreading. CPUs with 4c/4t or 4c/8t have been here for well over a decade, and dual cores even longer. I think in that time practically every developer tried to make their programs multithreaded. But the faster processing that multithreading enables is not always worth the (development/maintenance) effort, because the amount of data users process is sometimes not that big.

Embarrassingly parallel tasks are the easiest place to harvest the benefits of MT. That's a given, and every such task today can utilize the resources if available (which in mainstream CPUs they are).
Tasks that are not obviously parallel are the hard part. They require a lot of thinking and can hit a limit in the number of useful threads, where adding more does not help.
Then there are algorithms that cannot be parallelized at all, or where it's not worth the effort.

Now, what tasks does an average mainstream user run? More or less the same as 10 years ago. Developers had the incentive to rewrite programs for multithreading when quad cores arrived, and they mostly did it, piece by piece.

This post is not a defense of quad cores. It illustrates the diminishing returns the everyday PC user will see from more cores.
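The diminishing returns described above can be put into numbers with Amdahl's law, speedup = 1 / ((1 − p) + p/n), for a program whose fraction p is parallelizable running on n cores. This is just the textbook formula, not a benchmark of any real program:

```python
# Amdahl's law: maximum speedup of a program whose parallel fraction p
# runs on n cores while the remaining (1 - p) stays serial.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# A task that is 80% parallel: the ceiling is 1 / (1 - 0.8) = 5x,
# and most of it is already reached by 8 cores.
for n in (2, 4, 8, 16):
    print(f"{n:2d} cores -> {amdahl_speedup(0.8, n):.2f}x")
```

A task that is 80% parallel tops out at 5x no matter how many cores you add; doubling from 8 to 16 cores only moves the speedup from about 3.3x to 4x.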


----------



## londiste (Nov 19, 2019)

Even games have had a very strong incentive to use 6-7 cores since 2013. Both the Xbox One and PlayStation 4 came with 8 (weak) cores, of which 1-2 are reserved for the OS.
Next year (supposedly), new consoles will move the goalposts further with 8c/16t Zen 2 CPUs.


----------



## 1000t (Nov 19, 2019)

londiste said:


> Even games have had very strong incentive to use 6-7 cores since 2013. Both Xbox One and Playstation 4 came with 8 (weak) cores, out of which 1-2 are reserved to OS.
> Next year (supposedly), new consoles will move these goalposts further on with 8c/16t Zen2 CPUs.


Games and their gameplay are varied and only some are big and complex enough to require that many cores. But it does not mean it's not a useful and welcome advancement.


----------



## cucker tarlson (Nov 19, 2019)

How many games use over 12 threads? Quite a lot of them.
What is the difference? Core load is lower, but the performance increase is small.
What is the premium you pay for 8 cores over 6? Usually at least 60%. 4c to 6c is 50% more cores and you end up paying about 50% more; 6c to 8c is 33% more cores and the premium is 60% or more.
End of story. Single-threaded performance is still as relevant as it's always been, no matter how you achieve it: higher frequency, better IPC, faster memory, or lower latency.
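The premium arithmetic above can be sketched quickly; the prices here are made-up round numbers chosen only to mirror the ratios in the post, not real SKU prices:

```python
# Core-count step vs. price premium, mirroring the ratios discussed above.
# Prices are made-up round numbers for illustration, not real SKUs.
tiers = [(4, 150), (6, 220), (8, 360)]  # (cores, price in USD)

for (c0, p0), (c1, p1) in zip(tiers, tiers[1:]):
    core_gain = 100 * (c1 - c0) / c0
    price_gain = 100 * (p1 - p0) / p0
    print(f"{c0}c -> {c1}c: +{core_gain:.0f}% cores, +{price_gain:.0f}% price")
```

With numbers like these, the 4c-to-6c step buys cores roughly at cost, while the 6c-to-8c step charges a premium per extra core.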


----------



## Vayra86 (Nov 19, 2019)

ratirt said:


> It wasn't my point to tell you about your choices but a sincere suggestion to think about them.
> It is clear to me that intel can't compete in core count with either of its arch with AMD. It is a good move and for me only one. Intel can't up clocks every new gen and you know that. So clocks are pointless in this case as an argument. Unless you have different? At least try to understand this.
> Number of cores and threads that are the sweet-spot is your opinion and you are clearly referring to games. So, buy a console if you want to play games and stay away from computers since they can do way more than run Minecraft or CS:GO.
> Since you point out cores and threads utilization. Remember, first you need resources, afterwards somebody will utilize them. If you are stuck for so long with 4c (in a mainstream like desktop) then nobody sane will develop anything beyond what is being given to cripple his own product. Now we have more resources, lets see what developers will do with it.
> BTW, I'm sure you have already noticed that some new titles require more than 4c since you are into gaming so much.



All those cores and resources go underused; that's the point, while higher clocks do not. It can easily take another 5-7 years before AMD's design truly pays off. A convenient amount of time for everyone else to go wider too.

You're right that we need resources, but the push to 16-core desktop parts is silly. It's also a very bad balance with dual-channel memory. Zen is first and foremost a server part, and this core count is derived from that; the high core count is not a great push to actually use it on the desktop, it's just 'possible' because the dies exist. It wasn't designed 'for us' lowly MSDT plebs, but somehow we like to convince ourselves otherwise (you, not I). There is a difference between enough and too much.


----------



## ratirt (Nov 19, 2019)

Vayra86 said:


> All those cores and resources go underused, that's the point, while the higher clocks do not. It can easily take another 5-7 years before AMD's design truly pays off. A convenient amount of time for everyone else to go wider too.
> 
> You're right we need resources, but the push to 16 core desktop parts is silly. Its also a very bad balance with dual channel memory. Zen is first and foremost a server part, and this core count is derived from that - the high core count is not a great push to actually using that on desktop, its just 'possible' because the dies exist. It wasn't designed 'for us' lowly MSDT plebs, but somehow we like to convince ourselves otherwise (you, not I). There is a difference between enough and too much.


Well, for me this is not silly, and I'm sure a lot of people will agree with me. What is silly, on the other hand, is saying that there's no need for more cores and that boosting frequency is better. Also silly is saying that we don't need more than 4 or 6c (they are being maxed out in games, or very close to reaching 100% utilization).

BTW, when you buy a PSU, do you buy a 550 W unit and draw 500 W out of it? No, you want some headroom. You get a 16c CPU with headroom, and yet you complain it is too much and you don't need it. That's just ridiculous.
You need to push tech forward and then use it. Game developers will not ask CPU manufacturers, or announce in the media, "Guys, it is time for you to start manufacturing 8c CPUs, because we software and game developers need them now". That is not how this goes. First you have the resources, and then software developers will eventually use them, or at least have a choice and balance if needed. Would you like to have a choice? Of course, and you have one now. You don't need to go 8c if you don't want to. You want high clocks? Go Intel. You want more cores? AMD (for a reasonable price). So do not bash 16c in the desktop market, or call it silly, just because you don't like it or think it is not needed, because I don't think any of us is in a position to say that.
Saying that we don't need more cores is just stupid. It is not about cores but performance; saying you don't want more cores is like saying you don't want more performance. And saying you would rather have more clock speed than cores is even more ridiculous. Clock speed is not something you will get just because you prefer it over cores. It won't happen, and neither Intel nor AMD will boost the clocks of their upcoming processors to 5.5 or 6 GHz to satisfy your liking.
I don't understand you people. You get more cores and more performance with them, and you complain because you want higher clock speed instead. There's just no pleasing you.
There is a difference between having a choice and being told what to pick.


----------



## londiste (Nov 19, 2019)

Unless you really do productivity stuff (rendering or video encoding seem to be the main ones here), 16 cores is overkill. Even with price scaling almost linearly with core count (with a slight upward trend), price/performance in consumer tasks gets out of hand for the more expensive CPUs. This is especially the case for gaming: gaming generally benefits from high clock speeds, but relatively little from additional cores past 8 (or threads past 8-10).

It all depends on how long you are planning to keep the CPU. Buying a 16-core Ryzen 3950X today for gaming or consumer use cases is overkill to the max. If you plan to keep it for 5 years, then maybe it kind of makes sense; I wouldn't hold my breath, though. The same applies to the Ryzen 3900X to a lesser degree. Keep in mind that these are $749 and $499 CPUs, respectively.


----------



## ratirt (Nov 19, 2019)

londiste said:


> Unless you really do productivity stuff - rendering or video encoding seem to be the main ones here - 16 cores is overkill. Even with the price being almost linear (with slight upwards trend) the price/performance in consumer tasks goes out of hand for more expensive CPUs. This is especially the case for gaming. Additionally, gaming generally benefits from high clock speeds but relatively little from additional cores past 8 (or threads past 8-10).
> 
> It all depends on how long are you planning to keep the CPU. Buying 16-core Ryzen 3950X today for gaming or consumer use cases is overkill to the max. If you plan to keep it for 5 years, then it maybe makes kind of sense. I wouldn't hold my breath though. Same applies to Ryzen 3900X at a lesser degree. Keep in mind that these are $749 and $499 CPUs, respectively.


That only tells me that you guys are limited and narrow-minded, with a lot of attitude. You perceive today's computing as gaming only, and thus you say a 16c CPU is not needed in the desktop market because nobody (or a marginal percentage) will benefit from it. I'm telling you, go console and your problems are over. On top of that, you can get that 16c for such a good price. You are against the advancement of technology; you just don't know what you want, and you can't appreciate what you are getting.

That's you guys:
AMD, stop with this 16c madness in the desktop market, we've had enough. 4c is all you need, so stop this. The game developers won't ever use it. 16c is only for servers and professionals; for desktops it is overkill.
NV, stop releasing new GPUs. We've got the 2080 Ti, that's enough. We can play 1080p 144Hz, who needs more than that?
AMD, don't try to catch up with NV and release the new 5000 series. We've got the 2080 Ti, that's more than enough; besides, you are always the underdog, so stop.
Why go 4K RT when we can do 60 FPS RT at 1080p, who needs more than that?
4K sucks, it is better to go 720p 1000 FPS.

It is kinda funny to think about, but that is how I see you guys. It would seem like you are just born to argue. Typical European way of living now.


----------



## juiseman (Nov 19, 2019)

16 cores may be overkill as of today, but you have to remember: if the mainstream core count increases, the trend of optimizing software to use the extra resources will continue. Can you run Windows now on a 2-core/2-thread CPU? Not very well; it's starting to choke. The mainstream CPU now has 4-6 cores with 8-16 threads and at least 8GB-16GB of RAM. That seems to make Windows 10 happy. 2-3 years ago that was not the case; some early versions of Win 10 ran just as well as Win 7. Now that is not the case. Windows has become bloated.


----------



## londiste (Nov 19, 2019)

@ratirt you really like these straw men, don't you?

Edit:
I mean, gaming is the main performance hog on my system, and it is much more GPU-heavy than CPU-heavy. Other than occasional video encoding, there is no production workload on my computer. And it will inevitably be replaced on about a 2-3 year cadence. Is it surprising that I focus on what I see? Most of the people I know have an even more casual approach to using their PCs.

Looking at TPU's 9900KS review, my 170€ CPU from over a year ago loses to the 749€ 3900X by less than a percent at 1440p and about 7% at 720p. That's a noticeable difference at 720p, but at a 4-5x price difference (550-600€ more)? The games in TPU's review are pretty well-threaded ones too, with the exception of The Witcher 3.
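Taking the post's own figures at face value (performance indexed to 100 for the expensive chip, the cheaper one ~7% behind at 720p), the value gap is easy to quantify; the index numbers are this post's, everything else is plain arithmetic:

```python
# Gaming performance per euro, using the figures quoted in this post:
# performance indexed to 100 for the expensive chip, the cheap one ~7% behind at 720p.
cheap_price, cheap_perf = 170, 93
flagship_price, flagship_perf = 749, 100

cheap_value = cheap_perf / cheap_price          # perf points per euro
flagship_value = flagship_perf / flagship_price
print(round(cheap_value / flagship_value, 1))   # the cheaper chip delivers ~4x the value
```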


----------



## Vayra86 (Nov 19, 2019)

ratirt said:


> That only tells me that you guys are limited and narrow minded with a lot of attitude. You perceive today's computing as gaming only and thus you say 16c cpu is not needed in the desktop market because nobody (or marginal percent) will benefit from it. I'm telling you, go console and your problems are over. On top of that you can get that 16c for such a good price. You are against advancement of technology you just don't know what you want and you can't appreciate what you are getting.
> 
> That's  you guys.
> AMD stop with this 16c madness in the desktop market, we had enough. 4c is all you need so stop this. The game developers wont use it ever. 16c is only for server and professionals, for desktops it is an overkill.
> ...



Right. Let's just say I'm glad you're not designing our CPUs. This is clearly way over your head. Read carefully, again, what's being said. Nobody is against progress.


----------



## kapone32 (Nov 19, 2019)

This is purely anecdotal, but if game developers have started, or are already using, Threadripper-based workstations to create games, would we not start to see more cores being utilized to run the game engine? Is AOTS a sign of the future or a flash in the pan?


----------



## juiseman (Nov 19, 2019)

I'll say this again; I love the competition back in the CPU market. That is nothing to complain about. Just because the software basically can't utilize the full potential of some new hardware is not a reason to get upset. Really, all this is good; if you're into 4-core CPUs, then you should be happy as crap. The price has dropped almost $150 on average for a 4-core.


----------



## kapone32 (Nov 19, 2019)

juiseman said:


> I'll say this again; I love the competition back in the CPU market. That is nothing to complain about. Just because the software basically can't utilize the full potential of some new hardware is not a reason to get upset. Really, all this is good; if you're into 4-core CPUs, then you should be happy as crap. The price has dropped almost $150 on average for a 4-core.



Why stop there? You can get a 1900X for $149 on Amazon.com...


----------



## juiseman (Nov 19, 2019)

I don't think it works like that; I think it comes down to the game engine and how it's written. But I'm no software expert.
I would say that most games are optimized for 1-6 threads... using more may require a rewrite, or, in a more likely scenario, the engine is ported over from another platform.


----------



## ratirt (Nov 19, 2019)

londiste said:


> @ratirt you really like these straw men, don't you?





Vayra86 said:


> Right. Let's just say I'm glad you're not designing our CPUs. This is clearly way over your head. Read carefully, again, what's being said. Nobody is against progress.


You guys are like that. Why don't you agree? You want to tell the world how things should have been done because that is what you want and think. Why don't you build one and prove your point, whatever that is?
You are against progress, and you are not in a place to say whether 16c is overkill for desktops. There's more to desktop computing than games, and 16c is very much welcome. Go console, and stop saying it is overkill. Just because you don't know how to utilize 16c doesn't mean it is overkill because you say so.


----------



## londiste (Nov 19, 2019)

juiseman said:


> 16 cores may be overkill as of today; but you have to remember, if the mainstream core count increases, the trend will continue to optimize software to use the
> extra resources. Can you run Windows now on a 2-core/2-thread CPU? Not very well; it's starting to choke. The mainstream CPU now has 4-6 cores w/ 8-16 threads,
> with at least 8GB-16GB RAM. That seems to make Windows 10 happy. 2-3 years ago that was not the case. Some early versions of Win 10 ran just as well as Win 7.
> Now that is not the case. Windows has become bloated.


Yes, you can run Windows on a 2-core/2-thread CPU. 2c/4t is much more common right now though, and I am doing that daily on both my work and my own laptop with CPU resources to spare. Storage and RAM are much more critical for running Windows. I have used Windows 10 on a 4-core Atom-class chip (Celeron J1900) for a long while and it works fine, with the exception of video playback, including web pages with videos and other active content.

Windows 10 and Windows 7 are pretty much in the same spot in terms of performance, Windows 10 tends to use a bit more RAM but doesn't necessarily require it. Operating systems in general are fairly lean at this point anyway, it's the applications and especially browsers that take a toll on CPU and RAM.



ratirt said:


> You are against progress, and you are in no place to say whether 16c is overkill for desktops. There's more to desktop computing than games, and 16 cores are very much welcome. Go console and stop calling it overkill. Just because you don't know how to utilize 16c doesn't mean it is overkill because you say so.


If you noticed, I was talking about practicality and cost with cost for the performance received being not very kind for the higher-end desktop CPUs today.
What are you doing with your computer that benefits from 16 cores and 32 threads?


----------



## ratirt (Nov 19, 2019)

londiste said:


> What are you doing with your computer that benefits from 16 cores and 32 threads?


Why do you care? You won't use it anyway because it is overkill, right? You guys have no idea what you are talking about, or what 16c at this price and performance has brought to the desktop segment. And since it is such unneeded overkill, that is why Intel is releasing the 10c/20t 10900KF for desktop at $500, joining AMD in the desktop core-count overkill club.


----------



## londiste (Nov 19, 2019)

Curiosity. You are strongly arguing that 16 cores is very useful; I am curious what exactly you use that benefits from it.

Edit:
Of course I would use a more powerful CPU if it provided a benefit that at least to some degree outweighed the cost. At this point, 3900X could provide a single-digit percentage performance increase in my use cases over much cheaper CPUs.


----------



## Vya Domus (Nov 19, 2019)

londiste said:


> Curiosity. You are strongly arguing that 16 cores is very useful; I am curious what exactly you use that benefits from it.



Just out of curiosity as well, why do you insist on trying to convince people that it's not useful?


----------



## londiste (Nov 19, 2019)

Vya Domus said:


> Just out of curiosity as well, why do you insist on trying to convince people that it's not useful?


Because most people who buy it do not seem to use the potential. I do know people who have use cases that benefit directly - rendering, video encoding, science compute stuff. Most of that is not common or not really desktop use case though.


----------



## ratirt (Nov 19, 2019)

londiste said:


> Curiosity. You are strongly arguing that 16 cores is very useful; I am curious what exactly you use that benefits from it.


Video editing, surveillance system design and security monitoring, algorithm and data crunching, and many more.
Here's a tip for all of you who don't know how to utilize 16c/32t today: whatever processor segment it may be, DON'T buy it, and zip your lip about it not being needed just because you think so; that attitude is pathetic.


----------



## londiste (Nov 19, 2019)

I do not really get why you are arguing this so selectively. From the get-go I was talking about consumer use cases, gaming more specifically, and also said there are productivity workloads that do benefit from many cores. Everything you listed is anything but consumer stuff. Yes, your use cases, along with others, do benefit from more cores, and nobody has been arguing against that.


----------



## EarthDog (Nov 19, 2019)

londiste said:


> I do not really get why you are arguing this so selectively. From the get-go I was talking about consumer use cases, gaming more specifically, and also said there are productivity workloads that do benefit from many cores. Everything you listed is anything but consumer stuff. Yes, your use cases, along with others, do benefit from more cores, and nobody has been arguing against that.


W00t! This. Me too...



EarthDog said:


> More than 10c/20t on the mainstream platform, either side, is just too much. AMD blurred the lines and, speaking strictly from a core count perspective, brought out products we don't need. The good thing about this is the cheap pricing... otherwise, yes, today, for someone buying a PC, unless you are a content creator etc., 6c/12t is the sweet spot while 8c/16t is enthusiast level; 10c/20t is just nuts for most... I just don't like seeing that many cores/threads in mainstream when the reality is very few can use it.



Tough to have a decent conversation when the goal posts move and the dude drops insults (such as juvenile and pathetic). I'm out.


----------



## ratirt (Nov 19, 2019)

londiste said:


> I do not really get it why are you reading and arguing this selectively though. From the get-go I was talking about consumer use cases and gaming more specifically and also said there are productivity workloads that do benefit from many cores. Everything you listed is anything but consumer stuff. Yes, your use cases along with others do benefit from more cores and nobody has been arguing against that.


Since when did consumer use become gaming use? Is the desktop market only gaming for you too? You are now justifying your previous statements and arguments, and EarthDog is joining in, clapping his "paws" in admiration that you've found a justification. I'm telling you guys, you'd be better off in the console segment, giving ideas and thoughts there rather than on desktop PCs.
It is very common that a living organism, even when wrong, will defend itself and try to explain and look for a way out. That is exactly what you are doing now. You are wrong, and now you twist the meaning of desktop into consumer into gaming altogether.



EarthDog said:


> Tough to have a decent conversation when the goal posts move and the dude drops insults (such as juvenile and pathetic). I'm out.


Insults? Apparently you don't know what an insult is, considering you assumed that so many brainless desktop users would not know how to use the 16c. If you don't know what you are talking about, don't say it, and nobody will call you juvenile or pathetic. What you said was definitely childish and narrow, and somebody had to take you down a peg.
Although if you'd like to continue the conversation, I'd really like to see your point of view, because I seriously don't understand a word of what you are talking about. It's like some people are so big-headed they forget which way is up.


----------



## londiste (Nov 19, 2019)

ratirt said:


> Since when the consumer use became a gaming use? Is desktop market only gaming for you too?


I do think gaming is one of the main drivers (if not the main driver) of CPU performance in the desktop market.


----------



## ratirt (Nov 19, 2019)

londiste said:


> I do think gaming is one of the main drivers (if not the main driver) of CPU performance in the desktop market.


The desktop market is big and gaming is a fraction of it. So no, desktop computers' main purpose is not gaming, but you can game on them. You have gaming rigs, right? Does that mean you can only game on them, or can you do other stuff too?
Just like phones' purpose is not gaming, but you can game on them.
I think we are going slightly off topic here.
For instance: the 10900KF is a desktop processor, and that doesn't mean you have to game on it. You confuse desktops with gaming computers, but the fact is that gaming computers are a low percentage of the desktop market in general. Besides, computers originally were not for gaming, but it didn't hurt to have the option to game on them. It is a matter of choice, as I've mentioned before.
Hope you get it.


----------



## londiste (Nov 19, 2019)

ratirt said:


> The desktop market is big and gaming is a fraction of it. So no, desktop computers' main purpose is not gaming, but you can game on them. You have gaming rigs, right? Does that mean you can only game on them, or can you do other stuff too?
> Just like phones' purpose is not gaming, but you can game on them.


True. But the majority of the desktop market needs even less CPU performance than gaming does.


----------



## cucker tarlson (Nov 19, 2019)

Vayra86 said:


> Right. Let's just say I'm glad you're not designing our CPUs. This is clearly way over your head. Read carefully, again, what's being said. Nobody is against progress.


I mean, why go with 6 cores when you have 16 available? This is so easy.


----------



## londiste (Nov 19, 2019)

cucker tarlson said:


> I mean, why go with 6 cores when you have 16 available? This is so easy.


There is a consideration of price here - 6 cores cost 200€, 16 cores cost 750€.
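Back-of-the-envelope on those quoted prices (illustrative arithmetic only; street prices vary):

```python
# Price per core at the figures quoted above.
six = 200 / 6       # ~33.3 EUR per core for the 6-core part
sixteen = 750 / 16  # ~46.9 EUR per core for the 16-core part
print(f"{six:.1f} vs {sixteen:.1f} EUR/core")
# The 16-core part costs ~40% more per core and 3.75x more overall,
# so it only pays off when a workload really scales across the cores.
```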


----------



## cucker tarlson (Nov 19, 2019)

londiste said:


> There is a consideration of price here - 6 cores cost 200€, 16 cores cost 750€.


Shut up, you're anti progress.


----------



## ratirt (Nov 19, 2019)

londiste said:


> True. But the majority of the desktop market needs even less CPU performance than gaming does.


Where did you get that conclusion from? This is why I think you have no idea what you are talking about. You keep thinking that what pushes CPU performance forward is gaming, and since, as you claim, games don't need 16 cores, the desktop market should not use that many. This alone should tell you that you are wrong: the desktop market doesn't care about gaming, otherwise the 16c desktops would not have come. That is your problem: narrow thinking, and putting gaming above everything else when perceiving the facts.
It is like considering phones gaming devices. They are for communication purposes, not gaming, although gaming on phones gives developers chances and new opportunities and a bigger range of usefulness for the device.


----------



## londiste (Nov 19, 2019)

ratirt said:


> Where did you get that conclusion from?


A considerable number of people, companies and hardware I have met or worked with/on. There are very specific use cases that actually need high-end hardware, and they are not very common. The vast majority of machines are netboxes or office PCs.


ratirt said:


> This is why I think you have no idea what you are talking about. You keep thinking that what pushes CPU performance forward is gaming, and since, as you claim, games don't need 16 cores, the desktop market should not use that many. This alone should tell you that you are wrong: the desktop market doesn't care about gaming, otherwise the 16c desktops would not have come.


I never said the desktop market should not use 16 cores, or that 16-core CPUs would not or should not come. Sure, 16 cores have their uses. I am just suggesting that people think about their use cases a little before buying the latest and greatest at a very high cost simply because it has the most cores.


----------



## ratirt (Nov 19, 2019)

londiste said:


> There is a consideration of price here - 6 cores cost 200€, 16 cores cost 750€.


This is what I've said in this thread a hundred times. Go console and play games. Don't concern yourself with how many cores are needed for gaming. Or tell people that you want to game on 6c, not 16c, because nobody cares. Or that you prefer high-frequency CPUs for gaming over more cores, because nobody cares.
What's important is that gaming is not the main reason for desktop computing, and thinking it is is juvenile and narrow.
Products don't differ because they are for gaming or not. They are different segments with certain price ranges for a variety of customers. Gaming has nothing to do with this, except that you need a CPU with certain performance to play games, depending on the game. Again, your problem is that you narrow your thinking to gaming only, treating the desktop market or segment as gaming only, and that's why we have this dispute.


cucker tarlson said:


> Shut up, you're anti progress.


If that's not juvenile then I don't know what is, and yet I'm the one being offensive. What a ruse.
Bravo to the two (so-called grown-ups) sharing thoughts who instead tuck tail and run when out of arguments, and make childish remarks. Expected more, but if that is what I get, oh well.


----------



## londiste (Nov 19, 2019)

ratirt said:


> What's important is, gaming is not the main reason for desktop computing and thinking it is is juvenile and narrow.


What do you consider the main reason for desktop computing?


----------



## Ahhzz (Nov 19, 2019)

Guys, it's really simple. A differing opinion stated politely is encouraged. Poking at each other, especially just because you don't agree with someone is going to get you a vacation from this thread. Continued such behavior will escalate the staff responses. 

The Guidelines are here, refresh your knowledge, and remember the unwritten one: Don't be a Dick.


----------



## Vya Domus (Nov 19, 2019)

londiste said:


> Because most people who buy it do not seem to use the potential.



Really? Most? How did you conclude that? Did you ask everyone? Did you do an extensive survey on what software these people use?

These conclusions always fascinate me, because we obviously know you didn't do any of that; you're just assuming that's the case based on some anecdotal evidence or gut feeling. I know it's impossible to know what people do with certainty, but the least you can do is use common sense to draw your conclusions.

For instance, these high core count CPUs are still relatively expensive, out of reach for most people. That already drastically reduces the chances that someone willing to spend the cash is unaware of the potential of the product they buy. Moreover, even if you're completely out of touch with any technical aspect of computing, if you're, say, a video editor, you're still more likely to end up buying a machine that has a lot of cores.

It's strange, it's almost as if people gravitate towards products that they have a use for. Crazy right ?



londiste said:


> Most of that is not common or not really desktop use case though.



Not really. Buying expensive CPUs is already very uncommon, so it's not strange that use cases for them are also uncommon.


----------



## londiste (Nov 19, 2019)

Vya Domus said:


> Really? Most? How did you conclude that? Did you ask everyone? Did you do an extensive survey on what software these people use?


You are probably right about that and I am wrong. Most people who buy these just use them and never talk about it.


----------



## ratirt (Nov 19, 2019)

londiste said:


> What do  you consider the main reason for desktop computing?


What I consider? Are you still trying to get me for pointing out you are wrong, or do you want to catch me off guard with what I say? You know what I use a desktop for; would that define a desktop computer's purpose? I think you are limited only by your imagination in what you can use it for. If gaming is what you use it for, great. If you think desktop computing has no use for 16 or more cores because games don't use that many (they don't now, and 2-3 years back people said 4c was the max you would need; look at it now), then join the EarthDog and Cucker Tarlson club.


----------



## londiste (Nov 19, 2019)

Wanted to see what exactly you meant by that because we seem to be talking about different things.
As I said before, I consider gaming the primary performance driver for desktop CPUs, with other common use cases being easier on CPUs and more heavy use cases being less common.


----------



## Tatty_One (Nov 19, 2019)

ratirt said:


> What I consider? Are you still trying to get me for pointing out you are wrong, or do you want to catch me off guard with what I say? You know what I use a desktop for; would that define a desktop computer's purpose? I think you are limited only by your imagination in what you can use it for. If gaming is what you use it for, great. If you think desktop computing has no use for 16 or more cores because games don't use that many (they don't now, and 2-3 years back people said 4c was the max you would need; look at it now), then *join the EarthDog and Cucker Tarlson club.*


I think I need to join that club too then, but in this debate terminology may not be people's friend. So just to be clear on my take: a few years ago I would have called myself an enthusiast, but these days I consider myself, rightly or wrongly, a mainstream user. I surf the net, watch the odd video, do the odd bit of work in MS Office Pro and do some light gaming, and therefore I consider my "needs" to not extend further than, let's say, a 3600X or an i7 9700. Even if I considered myself a gamer I would likely stick to one of those CPUs and upgrade my graphics card to something more powerful instead.

You most definitely need more cores from your explanation, and I agree there are loads of users out there with productivity scenarios similar to yours where more cores are better. But for many there will always be a point where a line has to be drawn (unless they have a degree of financial freedom) between need and productivity. The thing is this: if anyone really believes there are more high-productivity users out there than "mainstream" ones, then I really have got it wrong, as productivity would actually be the new mainstream, and I don't believe that is the case. From reading the last page or so, all I read into londiste's words is "_not everyone needs high core count desktop PCs_".

Hopefully at this point we can draw a line and get back on topic, as it seems the moderator warning a few posts back has been partly ignored. It is an interesting topic, but not for an Intel CPU release news piece, and I would rather not have to manage reply bans.


----------



## cucker tarlson (Nov 19, 2019)

ratirt said:


> If that's not juvenile then I don't know what is and yet I'm being offensive What a ruse.
> Bravo to the two (so-called grown ups) sharing thoughts but instead tuck tail and run when no arguments and make childish remarks . Expected more but if that is what I get. Oh well


No one is disputing your point; I'd also like to have more cores if they did anything for me and cost me little.
But as EarthDog pointed out, there's a sweet spot, and above it most of us are paying bigger premiums for diminishing returns. I don't really see a point in engaging in a serious discussion while you're denying the obvious.
There's little 8c/16t can do that 6c/12t can't at the moment.

I'll gladly move to 10c/20t when I feel like 10c/20t is the sweet spot for performance. Right now it's 6c/12t, and for casual gaming/home use my 4c/8t is doing just fine when I push the clocks, just like for any 6700K/7700K user.

The 3700X is 70% more expensive than the 3600. Yes, progress.


----------



## juiseman (Nov 19, 2019)

I'm glad that is settled.


----------



## kapone32 (Nov 20, 2019)

I have been reading the debate about cores and I must say that in my opinion both sides are right. It is a fact that current games are best served by 6c/12t (including streaming on the AMD side). Where 8 to 10 cores and beyond help is if you are a multitasker. If you do anything in Premiere or Vegas the extra cores make sense. If you are into any type of CAD work more cores are beneficial.

Even though most users of PCs are not hard-core gamers, the industry has put that idea in most people's heads through how they market their products with the "Gaming" moniker. Even cut-down workstation products get that designation.

The last fact though is that people buy what they can afford or want. The notion of a powerful PC is based on the user's own desires. Some of you on here do product reviews and as such get hardware that the average person does not have the same access to.

I will use myself as an example. When I first started building PCs myself I started with a 4-core 965BE. After I had used that to my satisfaction I went with a 6-core 1090T, as what I was doing (making DVDs) benefited from more cores. After I was satisfied with that I went with an 8320, 8 cores (arguably), as those extra cores helped with what I was doing. My next CPU was the 1700, 8 cores with SMT, and it was better and faster than the 8320. The next move I made was to the 2600. It only had 6 cores but was faster than the 1700 in games and most applications I use.

TR4 had been on my wishlist, then one day I saw an X399 board for $249.99, or $330 Canadian (at the time). I got a 1900X for $349.99 (I can't believe they are now $149 US or $200 Canadian on Amazon) to go with that board, and it was faster than the 2600 for what I do. I was happy with that until I started some Vegas projects and decided I needed more cores, and lo and behold the 1920X was $349.99 on Tiger Direct. I can say that the 1920X is faster than the 1900X in every regard. I was waiting for TR3 details to see what I would do next, and as they are now professional workstation products I am not looking at them (though I wish they would release a 7nm CPU for X399); my next CPU will probably be either the 2920X or 2950X. As far as gaming goes, I game at 4K, and the CPU does not matter much at that resolution.

In regards to this specific thread, it is good for everyone if Intel does release this at $499; if nothing else it should drive down the price of the rest of the stack underneath, including the "gaming-centric" CPUs (9700, 8700, 9900K and KS).


----------



## Tatty_One (Nov 20, 2019)

Seems a line cannot be drawn, so on that note, the door is closed.


----------

