Enthusiast sometimes means people who actually use computers to do useful things, like software engineers, DBAs, or people who do genomics.
Absolutely not.
An enthusiast is someone who is enthusiastic about PCs. He likes to talk about them, he likes to read reviews, he likes to spend more than necessary.
It has absolutely nothing to do with how you use a PC.
It's like buying a 20-core Xeon or something and then whining about single-threaded performance when you opted for more cores. It's laughable.
You don't know much about how servers are used, do you - mister "enthusiast"? ;-)
Because normally one buys more cores to... you know... get more cores?
Nope. Single-thread performance is improving slowly and will soon hit a wall. If one wants more processing power, one is forced to buy more cores.
But having more cores doesn't automatically mean software will run faster. Someone has to write the software to use them (assuming it's possible in the first place).
That's the main advantage of increasing single-core performance.
Of course it's easier to write single-threaded code. You have fewer issues to deal with, but that doesn't mean it's the right decision for the workload. I write multithreaded code all the time in my day job, and let me tell you something: I don't write any data processing job that uses a single core. I use stream abstractions and pipelines all over the place, because changing a single argument to a function call can change the amount of parallelism I get at any stage of the pipeline. It also helps to use a language that's conducive to writing multithreaded code. Take my main language of choice, Clojure: it's a Lisp-1 with immutability through and through, plus a bunch of mechanisms for controlled behavior around mutable state. It's a very different animal from writing multithreaded code in, say, Java or C#, and it's really not that difficult.
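A minimal sketch of the "one argument controls parallelism" idea, written in Python with `concurrent.futures` rather than Clojure (the function names and the `workers` knob are illustrative assumptions, not anyone's actual pipeline):

```python
from concurrent.futures import ThreadPoolExecutor

def transform(record):
    # Placeholder per-record work; in a real pipeline this would be
    # parsing, enrichment, I/O, etc.
    return record * 2

def run_stage(records, workers=1):
    # Changing the single `workers` argument changes how much
    # parallelism this stage of the pipeline gets.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Executor.map preserves input order in its results.
        return list(pool.map(transform, records))

print(run_stage(range(5), workers=1))  # [0, 2, 4, 6, 8]
print(run_stage(range(5), workers=4))  # same code, same result, more parallelism
```

The point is that the stage logic never changes; only the degree of concurrency does.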
Not everyone is a programmer and not everyone has the comfort you do. Data processing is by definition the best case possible for multi-threaded computing.
I know it may be difficult to accept, but you have to consider that coding today is also done by analysts, and it has to happen fast. I also write code as a day job (and as a hobby). But I also train analysts to use tools like R or VBA. They need high single-core performance. And yes, they work on Xeons.
And as usual, I have to repeat the fundamental fact: some programs are sequential, no matter how well you code or what language you use. It can't be helped.
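The "some programs are sequential" point is essentially Amdahl's law; a quick back-of-the-envelope calculation (my own illustration, not from the thread) shows why the serial part caps what extra cores can buy:

```python
def amdahl_speedup(parallel_fraction, cores):
    # Amdahl's law: the serial portion of a program limits the
    # achievable speedup no matter how many cores you add.
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even with 90% of the work parallelizable, 64 cores give well under
# a 10x speedup; the limit as cores -> infinity is 1 / 0.1 = 10.
print(round(amdahl_speedup(0.9, 64), 2))  # 8.77
```

For the mostly serial programs the analysts above are writing, a single faster core beats a pile of slower ones.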
Also, you're running on the assumption that the time to write an application is the only cost. What about the time it takes for that application to run? Time is money. My ETL jobs would be practically useless if they took a full day to run, which is why they're set up so that concurrency is tunable in terms of both parallelism and batch size.
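A sketch of what "tunable in both parallelism and batch size" might look like; the batching helper and both knobs are illustrative assumptions, not anyone's actual ETL code:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import islice

def batches(iterable, size):
    # Yield successive fixed-size chunks from any iterable.
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

def process_batch(batch):
    # Placeholder transform; real ETL work (extract/clean/load) goes here.
    return [x * x for x in batch]

def run_job(records, workers=4, batch_size=100):
    # Both knobs are plain arguments, so a slow job can be retuned
    # without touching the pipeline logic itself.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = []
        for out in pool.map(process_batch, batches(records, batch_size)):
            results.extend(out)
        return results

print(run_job(range(10), workers=2, batch_size=3))
```

Larger batches amortize per-task overhead; more workers add parallelism. Which mix is fastest depends on the workload, which is exactly why both are tunable.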
Well exactly! The balance between writing and running a program is the key issue.
Some people write ETLs and spend hours optimizing code. It's their job.
But other people have other needs. You have to open up a bit and try to understand them as well.
I did physics once; now I'm an analyst. A common fact: a lot of programs are going to be used a few times at best, often just once.
There's really no point in spending a lot of time optimizing (and often you don't have time to do it).
Also, graphene is vaporware until we actually see it in production at a price that's not outlandish; otherwise it's just a pipe dream.
I don't know what you mean. Graphene exists and PoC transistors have been made a while ago.
Yes, it is a distant future for personal computers, but that's the whole point of technological advancement - we have to plan many years ahead. And that's what makes science and engineering interesting.
When graphene CPUs arrive in laptops, they'll be very well controlled and boring.
We can make CPUs out of a number of different materials, but that doesn't mean it's a viable option. Once again, all of this needs to be measured against reality.
And the reality is that some applications would benefit from very fast cores, not dozens of them. That's all I'm saying.
A fast single 50GHz core? You're so far from possible you're in dreamland. We are nowhere near buying optical transistors or any graphene version of a transistor, and I doubt they would be sold as single units/cores anyway. That makes for a binning nightmare: either it works or it's literally in the bin. Shows what you know.
To be honest, I don't really care when such CPUs will be available for PCs. I'm interested when they'll arrive in datacenters. I always hoped it'll happen before quantum processors, but who knows?
50GHz will produce a lot of heat. In fact the whole idea of GaN in processors is that it can sustain much higher temperatures than silicon.
Quantum computers need extreme cooling solutions just to actually work.
It's always good to look back at the progress computers made in the last decade. Just how much faster have cores become? And why stop now?
Moreover, can you imagine the discussions people had in the late 1950s? Transistors already existed, and so did proof-of-concept integrated circuits. Still, many didn't believe integrated circuits would be stable enough to make the idea feasible. So it wasn't very different from the situation we have with GaN and graphene in 2019.
A few years later, integrated circuits were already mass-produced. When I was born ~30 years later, computers were as normal and omnipresent as meat mincers.
Future nodes clearly dictate that it's too expensive to make the whole chip on the cutting-edge node, so you will see Intel follow suit with chiplets. They have already stated they will, and they're busy on that now; see Foveros and Intel's many statements about a modular future with EMIB interconnects and 3D stacking.
I have absolutely nothing against MCM. It's a very good idea.
notb: Specifically for the 2990WX, an argument can be made that it's not the best TR chip out there.
Well, I'm mentioning the 2990WX precisely because it represents a scenario similar to how Zen2 will work.
And not just the 16-core variant. Every Zen2 processor will have to make an additional "hop", because all communication with memory will go through the I/O die. No direct wires.
Some of this will be mitigated by the huge cache, but in many workloads the disadvantage will be obvious. You'll see soon enough.
Also, I understand many people are waiting for Zen2 APUs (8 cores + Navi or whatever). Graphics memory will also have to be accessed via the I/O die. Good luck with that.
Intel's much better with memory latency, though that may change slightly with IF2 and Zen2.
Just don't bet your life savings on that. ;-)