Thursday, February 18th 2021

Intel Rocket Lake-S Lands on March 15th, Alder Lake-S Uses Enhanced 10 nm SuperFin Process

In the latest round of rumors, we have today received some really interesting news regarding Intel's upcoming lineup of desktop processors. Thanks to HKEPC media, we have information about the launch date of Intel's Rocket Lake-S processor lineup, along with Alder Lake-S details. Starting with Rocket Lake, Intel has not unveiled the exact availability date for these processors; however, according to HKEPC, Rocket Lake is landing in our hands on March 15th. With 500-series chipsets already launched, consumers are now waiting for the processors to arrive as well, so they can pair their new PCIe 4.0 NVMe SSDs with the latest processor generation.

When it comes to the next-generation Alder Lake-S design, Intel is reported to be using its enhanced 10 nm SuperFin process to manufacture these processors. This would mean the node is more efficient than the regular 10 nm SuperFin found in Tiger Lake processors, and improvements like better frequencies are expected. Alder Lake is expected to use a big.LITTLE core configuration, with the small cores being Gracemont designs and the big cores being Golden Cove designs. Golden Cove is expected to deliver a 20% IPC improvement over Willow Cove, which exists today in Tiger Lake designs. Paired with PCIe 5.0 and DDR5 technology, Alder Lake is looking like a compelling upgrade, arriving in December of this year. Pictured below is an LGA1700 engineering sample of an Alder Lake-S processor.
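Since single-thread throughput scales roughly as IPC × frequency, the rumored 20% IPC gain would translate directly into per-core performance if clocks hold. A back-of-the-envelope sketch in Python (the 20% figure is the rumor above; the clock speed is a purely illustrative assumption):

# Single-thread throughput ~ IPC x clock frequency.
# The 20% IPC uplift is the rumored figure; the clock is assumed, not confirmed.
willow_cove = 1.00 * 4.8   # normalized IPC x an assumed 4.8 GHz boost clock
golden_cove = 1.20 * 4.8   # rumored +20% IPC at the same assumed clock
print(f"Projected single-thread uplift: {golden_cove / willow_cove - 1:.0%}")  # 20%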
Sources: HKEPC, via VideoCardz

82 Comments on Intel Rocket Lake-S Lands on March 15th, Alder Lake-S Uses Enhanced 10 nm SuperFin Process

#51
Fouquin
CobainDidn't learn my lesson and jumped to DDR4 day 1, paying 200€ for 2x4GB 2400MHz CL18. Two years later, 3200MHz CL16 was the norm, costing 100€ for 2x4GB.
Geeze dude, I also got X99+DDR4 day one, and my Kingston HyperX Predator 4x4GB kit was 2933MHz CL16 for $270.
#52
watzupken
dgianstefaniYou're wrong. Intel 14nm can actually be better for heat despite using more power than Ryzen 7nm, due to the die being much larger and allowing easier thermal transfer to the cooler.
I think this is where you are comparing a monolithic vs a chiplet design. If Intel designed a chiplet-based product on 14nm, I don't believe it would run that cool given the size of the chiplets. And I disagree that 14nm runs cool; you can find evidence of this in reviews of Comet Lake. Most reviewers use very high-end air or water cooling, which doesn't reveal the thermal issues.
dgianstefaniBetter than doing a Samsung and calling updated "12nm" "8nm".

You lads like to shit on Intel for everything they do, but they managed to be competitive for five years on revisions of the same process, which is a testament to their engineers.

It's been proven time and again that Intel's 14nm++ is as good as other companies' 10nm, and their 10nm+ is as good as TSMC 7nm.

Calm down and understand all the "nm" numbers are just marketing numbers and don't reflect actual transistor measurements. What's important is the performance.
You seem to be cherry-picking comparisons. Nowhere did I mention Samsung in the first place. TSMC, for example, has N7 and N7P, which I feel sounds appropriate.

I don't disagree that Intel's nodes tend to be better than competitors' as you mentioned, but I think you've highlighted the issue yourself here. Intel is pitting their 14nm against TSMC's 7nm, which puts them at a disadvantage. What is important is performance, and we have already witnessed how Zen 3 beats Intel in most metrics while using around 60% of the power, and beats them soundly in multicore scenarios. These are cold, hard facts. The reality is Intel took it too easy as they grew in dominance and got caught with their pants down when AMD and ARM caught up. While I don't disagree that it is a great engineering feat to squeeze so much out of 14nm, for which I give credit to the engineers, I won't give Intel the credit because this is a band-aid solution.
#53
Sunny and 75
Alder Lake is due in Fall 2021 and Zen 3+ is expected around the same time so.
#54
ExcuseMeWtf
dgianstefaniPower usage is irrelevant on a desktop as long as temperatures are fine and the PSU can comfortably handle it. The only things that matter are component temperatures, performance and noise.
It's not irrelevant to environmentally conscious folks out there.
#55
dgianstefani
TPU Proofreader
ExcuseMeWtfIt's not irrelevant to environmentally conscious folks out there.
Environmentally conscious folks should continue living on farms and being completely self-sufficient, including energy. If their computer draws too much power, maybe they should consider improving their electricity system. No one is saying that using less power for identical performance is a bad thing; the problem is it's not identical performance. There are many other factors in play.
watzupkenI think this is where you are comparing a monolithic vs a chiplet design. If Intel designed a chiplet-based product on 14nm, I don't believe it would run that cool given the size of the chiplets. And I disagree that 14nm runs cool; you can find evidence of this in reviews of Comet Lake. Most reviewers use very high-end air or water cooling, which doesn't reveal the thermal issues.
It's irrelevant. The post I was replying to was making fun of Intel's naming scheme, as they prefer to call improved 14nm "14nm+", "14nm++" etc., rather than going the route of calling a process improvement a new node, which seems to be the fashion.
#56
ExcuseMeWtf
dgianstefaniEnvironmentally conscious folks should continue living on farms and being completely self-sufficient, including energy. If their computer draws too much power, maybe they should consider improving their electricity system. No one is saying that using less power for identical performance is a bad thing; the problem is it's not identical performance. There are many other factors in play.


It's irrelevant. The post I was replying to was making fun of Intel's naming scheme, as they prefer to call improved 14nm "14nm+", "14nm++" etc., rather than going the route of calling a process improvement a new node, which seems to be the fashion.
It's irrelevant TO YOU. You don't get to dictate if others can find that relevant.

And Ryzen happens to provide similar or better performance depending on task type anyways. "Identical" performance between two different chips doesn't exist in the first place.
#57
dgianstefani
TPU Proofreader
ExcuseMeWtfIt's irrelevant TO YOU. You don't get to dictate if others can find that relevant.

And Ryzen happens to provide similar or better performance depending on task type anyways.
I get to dictate whatever I want. It's a public forum. Your prerogative is to determine whether I am right or not.

Yeah, amazing benchmark numbers until you get to the nitty gritty and realise the AGESA is literally a beta for about a year after every new AMD CPU release, and AMD seems to find this acceptable so long as they do slightly better than Intel in whatever hot-topic measure is currently fashionable, and push out cutting-edge, improperly tested products (a boon or a curse to enthusiasts?). Read my specs, I use a 5950X. The performance is great. It's not, however, a shining example of a 100% stable platform; even at stock, I get the odd WHEA error, even after 3 months of BIOS updates. Software and correctional routines are good enough these days to absorb mild instability or poor firmware implementation without hard crashes, like bluescreens or data corruption, so for most people, 99.9% stability is fine. AMD and even Linus Torvalds recently shit on Intel for "holding back ECC implementation by making you pay more for it", which is a fair point. However, AMD isn't exactly a shining example of producing products that don't ever corrupt bits lmao.
#58
TheinsanegamerN
dgianstefaniOn one core, assuming temperature is below a threshold. All-core boost is 4.9, therefore an all-core OC of 5.4 is 500 MHz, a more-than-10% OC. Which is appreciable.
For a whopping 3-4% performance boost, as scaling begins to fall apart at that speed. Not to mention, how many 10700Ks and 10900Ks do you think can do 5.4 GHz all-core? I can't remember a single reviewer being able to hit that, and it would require a massive water cooler to run stable.
ExcuseMeWtfIt's not irrelevant to environmentally conscious folks out there.
"enviromentally conscious folks" who buya new CPU because it uses less power then their old one is the definition of virtue signalling. Just like all those eco-mentalists prattling on about how you eating a burger is destroying the world while driving their 5 kids around in their hybrid subaru. You will NEVER recoup the energy cost that went into developing and manufacturing that processor no matter how much lower its TDP is. Anyone who wants to save the enviroment should be buying and using old computers that still do the job fine, not buying new ones. Reuse is the single most important part of the reduce-reuse-recycle program, after all.
#59
dgianstefani
TPU Proofreader
TheinsanegamerNFor a whopping 3-4% performance boost, as scaling begins to fall apart at that speed. Not to mention, how many 10700Ks and 10900Ks do you think can do 5.4 GHz all-core? I can't remember a single reviewer being able to hit that, and it would require a massive water cooler to run stable.
A 10% overclock is appreciable, regardless of how scaling works. That's my point and my only point. I've had an 8700K, which wasn't binned or selected or whatever, hit 5.2 GHz all-core on air without much tuning. The silicon and process quality has only improved since then. The fact it's hard to cool 10 cores operating at those frequencies or voltages isn't the point. The point is the silicon can do those frequencies if you can keep it cool - and there are many ways to do that. The fact that most reviewers spend one or two days with a product (generally on release, prior to firmware optimisations/software support) and think they can, with their generic or AIO cooling, reach the silicon's potential in that time is laughable.
#60
TheinsanegamerN
dgianstefaniA 10% overclock is appreciable, regardless of how scaling works. That's my point and my only point. I've had an 8700K, which wasn't binned or selected or whatever, hit 5.2 GHz all-core on air without much tuning. The silicon and process quality has only improved since then. The fact it's hard to cool 10 cores operating at those frequencies or voltages isn't the point. The point is the silicon can do those frequencies if you can keep it cool - and there are many ways to do that. The fact that most reviewers spend one or two days with a product (generally on release, prior to firmware optimisations/software support) and think they can, with their generic or AIO cooling, reach the silicon's potential in that time is laughable.
A 10% overclock is only appreciable if it generates actual performance. See also AMD's construction cores or the Pentium 4. And all this trying to close a performance gap the competition can reach on the stock cooler. SMH.

This is the same line AMD apologists in the early 2010s used to excuse Bulldozer's atrocious power draw and lack of performance: "but but but muh overclock". You're still getting whipped in performance, and you have to resort to ever more exotic methods of cooling, trying desperately to squeeze another 1% out of your dinosaur of an architecture.
#61
dgianstefani
TPU Proofreader
The dinosaur of an architecture that up until late 2020 had higher per-core performance, is still close to parity on single-core with Zen 3, and was almost 100% stable on mature firmware? The dinosaur of a process that, with Rocket Lake, will once again claim first place for single-core performance? The single-core performance that just happens to be the primary driver of performance for more than 90% of typically used software? Not everyone is a video editor or 3D renderer or scientist who needs hundreds of threads. Most people use the internet, game, and do basic productivity with their computers. I say this as a student and professional who does actually take advantage of my 16c/32t 5950X, for productivity and gaming.

Don't discount something just because it's not your preferred option. In the same way that scaling works, you can make the argument that the introduction of 12- and 16-core SMT Ryzen chips on mainstream platforms did little for actual performance, because as you said, "scaling begins to fall apart". That applies to high numbers of threads, not just high frequencies; don't forget that.

AMD has had a much easier time of it because all they have to do is design the chip; actual manufacturing is outsourced to TSMC, who in turn only have to focus on manufacturing.

Intel is one of the last companies that has a fully owned chain from design to delivery of product, and yes, that can lead them to be slower with innovation (not always; they have introduced many game-changing concepts and designs to the world), but it also results in a very well integrated product. Even if you discount the technical advantages of this approach, which obviously has drawbacks too, you have to respect the benefits of that approach both economically and practically.
#62
ExcuseMeWtf
dgianstefaniI get to dictate whatever I want. It's a public forum. Your prerogative is to determine whether I am right or not.
No, Sunshine. You can believe you dictate anything and attempt to. Nobody is obliged to give a crap about what you think and they can go on doing their own thing. As they naturally do. Getting worked up about it isn't changing anything.

If you think you can somehow enforce that, you're wrong full stop.
"enviromentally conscious folks" who buya new CPU because it uses less power then their old one is the definition of virtue signalling
No. It's called "it's their own money and they can spend it as they see fit". You don't get to dictate that either. Again, you can believe and attempt to; nobody is obliged to give a crap, and they can spend it as they originally intended regardless. And once more, complaining about "virtue signalling" isn't gonna change that either.
#63
TheinsanegamerN
ExcuseMeWtfNo, Sunshine. You can believe you dictate anything and attempt to. Nobody is obliged to give a crap about what you think and they can go on doing their own thing. As they naturally do. Getting worked up about it isn't changing anything.

If you think you can somehow enforce that, you're wrong full stop.



No. It's called "it's their own money and they can spend it as they see fit". You don't get to dictate that either. Again, you can believe and attempt to; nobody is obliged to give a crap, and they can spend it as they originally intended regardless. And once more, complaining about "virtue signalling" isn't gonna change that either.
No, it's called hypocrisy. You can go full harpy shrieking at anyone who disagrees with your opinions; doesn't mean anyone is going to respect what you have to say. Also LOLcalmdown, it's an internet argument.
dgianstefaniThe dinosaur of an architecture that up until late 2020 had higher per-core performance, is still close to parity on single-core with Zen 3, and was almost 100% stable on mature firmware? The dinosaur of a process that, with Rocket Lake, will once again claim first place for single-core performance? The single-core performance that just happens to be the primary driver of performance for more than 90% of typically used software? Not everyone is a video editor or 3D renderer or scientist who needs hundreds of threads. Most people use the internet, game, and do basic productivity with their computers. I say this as a student and professional who does actually take advantage of my 16c/32t 5950X, for productivity and gaming.
Intel lost the higher performance per core years ago. AMD's IPC has been higher than Intel's since the 3000 series; the lower latency of the ring bus and higher clock speeds allowed Intel to maintain the gaming crown, but in production workloads or ANYTHING that isn't vidya Intel has been getting slaughtered by AMD, who also pulls a fraction of the power to do so.
dgianstefaniDon't discount something just because it's not your preferred option. In the same way that scaling works, you can make the argument that the introduction of 12- and 16-core SMT Ryzen chips on mainstream platforms did little for actual performance, because as you said, "scaling begins to fall apart". That applies to high numbers of threads, not just high frequencies; don't forget that.
Don't make assumptions based on your own view of the world; it makes you look like a total ignoramus. Intel was my preferred option, hence why I have a 9700K. Doesn't mean I can't slag them for being totally non-competitive outside of the gaming sphere and pushing the same lame Skylake level of performance for the last 5 years.

And no one really argues that the 12 and 16 cores did anything for gaming, as it is generally agreed that past 6 cores scaling begins to fall off, and REALLY falls off past 8 cores. However, it is accurate to say that AMD introducing 8-core mainstream CPUs kicked Intel in the pants and finally forced them to increase core counts. It is no coincidence that Intel went from pushing quad cores to 6, 8, and 10 cores as a response to the Ryzen 1700.
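Amdahl's law makes that falloff concrete. A toy Python sketch, assuming (purely for illustration) that 80% of a game's frame work parallelizes:

# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the parallel fraction.
p = 0.80                      # assumed parallel fraction - illustrative, not measured
for n in (4, 6, 8, 12, 16):   # core counts
    speedup = 1 / ((1 - p) + p / n)
    print(f"{n:2d} cores -> {speedup:.2f}x")
# 4 -> 2.50x, 6 -> 3.00x, 8 -> 3.33x, 12 -> 3.75x, 16 -> 4.00x:
# returns flatten quickly past 6-8 cores under this assumption.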
dgianstefaniAMD has had a much easier time of it because all they have to do is design the chip; actual manufacturing is outsourced to TSMC, who in turn only have to focus on manufacturing.

Intel is one of the last companies that has a fully owned chain from design to delivery of product, and yes, that can lead them to be slower with innovation (not always; they have introduced many game-changing concepts and designs to the world), but it also results in a very well integrated product. Even if you discount the technical advantages of this approach, which obviously has drawbacks too, you have to respect the benefits of that approach both economically and practically.
AMD has to design their hardware on a shoestring budget. Intel's single-quarter R&D budget was higher than AMD's entire yearly revenue stream. They hardly have it "easier", as the manufacturing and design elements of Intel are entirely different departments with different management.
#64
dgianstefani
TPU Proofreader
TheinsanegamerNIntel lost the higher performance per core years ago. AMD's IPC has been higher than Intel's since the 3000 series; the lower latency of the ring bus and higher clock speeds allowed Intel to maintain the gaming crown, but in production workloads or ANYTHING that isn't vidya Intel has been getting slaughtered by AMD, who also pulls a fraction of the power to do so.
Regardless of how they achieve the higher per-core performance (the means is almost irrelevant), they did and do.
You keep talking about gaming, but the example I used was that the vast majority of people who use computers are not gamers, and the vast majority of software those people use is not heavily multithreaded. That is true - it's simply a fact.
TheinsanegamerNDon't make assumptions based on your own view of the world; it makes you look like a total ignoramus. Intel was my preferred option, hence why I have a 9700K. Doesn't mean I can't slag them for being totally non-competitive outside of the gaming sphere and pushing the same lame Skylake level of performance for the last 5 years.
The observations I make are not based on assumption. I slag everyone; my point here is that it's better to do so from a balanced point of view with a strong foundation in fact.
TheinsanegamerNAnd no one really argues that the 12 and 16 cores did anything for gaming, as it is generally agreed that past 6 cores scaling begins to fall off, and REALLY falls off past 8 cores. However, it is accurate to say that AMD introducing 8-core mainstream CPUs kicked Intel in the pants and finally forced them to increase core counts. It is no coincidence that Intel went from pushing quad cores to 6, 8, and 10 cores as a response to the Ryzen 1700.
You keep bringing up gaming. It's something less than 25% of the world spends any real time doing on their computers, and of that, less than 10% game as the primary function. Maybe stop bringing up debates that aren't being argued. There's nothing wrong with high core counts as long as they aren't achieved at the sacrifice of single-core speed, and as long as the software being used actually takes advantage of more than, for typical example, 6 or 8 cores.
TheinsanegamerNAMD has to design their hardware on a shoestring budget. Intel's single-quarter R&D budget was higher than AMD's entire yearly revenue stream. They hardly have it "easier", as the manufacturing and design elements of Intel are entirely different departments with different management.
Intel is more ambitious with their product stack than AMD. With Zen, AMD focused on delivering an efficient, scalable and performance-competitive architecture. They achieved this; good for them and good for consumers. What they did not magically do was develop, in the same time, a rock-solid firmware implementation. As much shit as Intel gets for their security vulnerabilities, they're the biggest player and are subject to the most attacks. Apple discovered the same thing when they actually started to get some market share; Macs weren't "immune to computer viruses", they were actually rather vulnerable, as security hadn't been a focus. As I've stated before, the AGESA firmware for Zen chips, and also the RADEON drivers for GPUs, are nowhere near the featureset or maturity of Intel/Nvidia platforms. There is some good progress being made on the enterprise side of things for last-generation chips (Zen 2 EPYC), but that architecture is almost two years old at this point and still has some issues. There is a reason why EPYC and Threadripper chips lag behind the consumer-grade Zen chips by 6 months or even more than a year.
#65
londiste
TheinsanegamerNAMD has to design their hardware on a shoestring budget. Intel's single-quarter R&D budget was higher than AMD's entire yearly revenue stream. They hardly have it "easier", as the manufacturing and design elements of Intel are entirely different departments with different management.
Intel's manufacturing and design elements share the same R&D budget. Not only that, Intel is dealing with a wider range of products or areas than AMD.

No idea why you need to exaggerate. AMD's yearly revenue is 2.7x or so larger than Intel's single-quarter R&D budget.
Intel's R&D spending in 2020 - $13.5B ($3.6B in Q4)
AMD's revenue in 2020 - $9.7B

For reference:
AMD R&D spending in 2020 - $2B
TSMC R&D spending in 2019 - $3.3B (somewhat more in 2020)
Samsung R&D spending for chip-related stuff in 2020 - $5.6B
Nvidia R&D spending in 2020 - $2.8B
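A quick sanity check on that ratio, using the figures as quoted above (in billions of USD):

# Figures as quoted above, in billions of USD.
intel_q4_2020_rnd = 3.6
amd_2020_revenue = 9.7
print(f"AMD 2020 revenue / Intel Q4 R&D: {amd_2020_revenue / intel_q4_2020_rnd:.1f}x")  # ~2.7x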
TheinsanegamerNIntel lost the higher performance per core years ago. AMD's IPC has been higher than Intel's since the 3000 series; the lower latency of the ring bus and higher clock speeds allowed Intel to maintain the gaming crown, but in production workloads or ANYTHING that isn't vidya Intel has been getting slaughtered by AMD, who also pulls a fraction of the power to do so.
Again, why all the exaggerations?
Years ago? July 2019 is 1 year and 7 months back.
When you say AMD's IPC and Intel's, you mean desktop (and server). Intel does have Ice Lake and Tiger Lake in mobile, which are much closer in IPC, and Zen 2-based Ryzen 4000 is less than a year old.
Getting slaughtered? In power efficiency, yes. Otherwise, not really. Lower IPC can be and has been largely compensated for by higher clock speeds. Price? That was kind of true until Ryzen 3000. Today Intel tends to be the cheaper option.
Fraction of the power? Depends on how you want to define "fraction", but the difference even in well-threaded tests is often not as pronounced as the maximum numbers that tend to be shown in reviews. For example, TPU's 5800X review shows the power usage difference between the 5800X and 10700K in multi-threaded tests was 32 W.
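For scale, here is roughly what a 32 W delta works out to over a year; the hours of loaded use per day and the electricity rate are assumptions for illustration, not measurements:

# A 32 W difference under load, annualized.
watts = 32
hours_per_day = 4      # assumed hours of loaded use per day - illustrative
rate_per_kwh = 0.15    # assumed electricity price in $/kWh - illustrative
kwh_per_year = watts / 1000 * hours_per_day * 365
print(f"{kwh_per_year:.0f} kWh/year, about ${kwh_per_year * rate_per_kwh:.2f}/year")  # ~47 kWh, ~$7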
#66
RandallFlagg
TheinsanegamerNIntel lost the higher performance per core years ago. AMD's IPC has been higher than Intel's since the 3000 series; the lower latency of the ring bus and higher clock speeds allowed Intel to maintain the gaming crown, but in production workloads or ANYTHING that isn't vidya Intel has been getting slaughtered by AMD, who also pulls a fraction of the power to do so.
Just like to point out that the above is a demonstrably false statement with false comparisons.

Performance is not the same as IPC. RISC chips were originally meant to be lower IPC (they took more instructions to do the same thing vs CISC) but capable of higher clocks, hence higher performance, for example.

As far as Intel vs AMD, you stated : "Intel lost the higher performance per core years ago.....in production workloads or ANYTHING that isnt vidya intel has been getting slaughtered by AMD"

This is demonstrably false. Now I fully expect you to start goalpost moving but this is a simple fact - AMD never had a clean sweep and far from it with Zen 1, 1+, and 2. In fact, the only time they had an advantage was in highly intense multi-core workloads, and even then it was not a clean sweep.

See below (benchmark charts not reproduced; captions retained):
- The 8-core 9900K and 10-core 10900K solidly defeat the 12-core 3900X.
- The 10-core 10900K defeats the 12-core 3900X.
- MS Office, the most used productivity application on the planet - it's lightly threaded depending on use; a 6-core 10600K beats the 12-core 3900X.
- Photoshop... probably the single biggest image editor on the planet.
- Premiere Pro, the #1 video editor.
- OCR, a very common use in office productivity environments.
#67
Makaveli
Sunny and 75Alder Lake is due in Fall 2021 and Zen 3+ is expected around the same time so.
Alder Lake is now a Dec 2021 release and not Sept, per the latest leak. So it will be much closer to the Zen 4 Q1 2022 launch than the previous date that was given.
RandallFlaggJust like to point out that the above is a demonstrably false statement with false comparisons.

Performance is not the same as IPC. RISC chips were originally meant to be lower IPC (they took more instructions to do the same thing vs CISC) but capable of higher clocks, hence higher performance, for example.

As far as Intel vs AMD, you stated : "Intel lost the higher performance per core years ago.....in production workloads or ANYTHING that isnt vidya intel has been getting slaughtered by AMD"

This is demonstrably false. Now I fully expect you to start goalpost moving but this is a simple fact - AMD never had a clean sweep and far from it with Zen 1, 1+, and 2. In fact, the only time they had an advantage was in highly intense multi-core workloads, and even then it was not a clean sweep.

See below (benchmark charts not reproduced):
The 8-core 9900K and 10-core 10900K solidly defeat the 12-core 3900X.
Zen 2 has higher IPC at equal clock speed. However, Intel clocks higher vs Zen 2, therefore performance is higher.
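The relationship is just multiplicative, so a clock advantage can outweigh an IPC deficit. A toy Python illustration with made-up numbers (these are not measured IPC values):

# Rough model: performance ~ IPC x clock (GHz). Numbers invented for illustration.
def perf(ipc, ghz):
    return ipc * ghz

zen2  = perf(ipc=1.30, ghz=4.2)   # assumed: higher IPC, lower clock
intel = perf(ipc=1.20, ghz=5.0)   # assumed: lower IPC, higher clock
print(intel > zen2)               # True - the clock advantage wins in this toy case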
#68
RandallFlagg
MakaveliZen 2 has higher IPC at equal clock speed. However, Intel clocks higher vs Zen 2, therefore performance is higher.
Performance is what the guy I was responding to said, and he conflated it with IPC. IPC in and of itself is both highly subjective and largely irrelevant, as there is no standard as to what "instructions" you are talking about. For some reason people seem to think Cinebench defines IPC. Any time you see someone using that measure, you can be 100% certain they are clueless.
#69
Tom Sunday
dgianstefaniIntel managed to be competitive for five years on revisions of the same process, which is a testament to their engineers
You are correct all the way. Intel delivered and still delivers a much more mature product. AMD has a long way to go on product maturity, memory compatibility being one example. But I loved AMD, as I jumped on their stock at around $48 a share and had my 401K and RMD play its tune. I cashed out at $92 because the 'AMD Gold Rush' or bubble started to hover, and because how greedy can one get? It was, however, a wonderful and rewarding 10-month stock ride. The hype AMD has gotten on sites like this also helped, even though the total DIY market share is realistically less than 1%. As someone on our street once remarked: "How many people do you know that actually build their own computer!"

I have finally been itching to totally replace my 12-year-old Dell XPS 730x, but not with AMD: with Alder Lake and what it will offer for the first time out of the box. I will also splurge on a much-wanted all-NVMe storage arrangement, as I expect the new mobo generation will be fantastic. I may even go as far as carefully watching the next few Intel earnings reports (reading between the lines) and consider taking some AMD stock cash and putting it towards the Alder Lake Wall Street action in July/August this year. 2021 has already been a very good year for me. But then we all know that everyone makes their own luck. Just perhaps, when Intel reports their full 2021 earnings in January 2022, it will be another year of luck and smiles?
#70
Sunny and 75
MakaveliZen 4 Q1 2022
According to this, there will be a Zen 3+ first, so if AMD actually ends up doing that, the Zen 4 launch would be Q2 at the earliest.
#71
Unregistered
Reading this makes me wonder why people care about a few milliseconds or a few extra frames in games, as if they will even notice it. I'm perfectly happy with the performance my 3900X delivers both in gaming and productivity.
#72
dgianstefani
TPU Proofreader
AlexaReading this makes me wonder why people care about a few milliseconds or a few extra frames in games, as if they will even notice it. I'm perfectly happy with the performance my 3900X delivers both in gaming and productivity.
"I don't want or notice better performance so why would anyone else."
#73
Unregistered
dgianstefani"I don't want or notice better performance so why would anyone else."
Good luck noticing the difference between 250 and 300 fps.

And hey, if you want to pay a premium for a chip that does statistically increase performance but makes no difference in the real world (as you are focused on gaming and not the ego-inflating FPS numbers at the top left), to replace your current, perfectly fine chip, go for it. This is why I don't care that Intel had/will have better gaming chips.

I suppose if it makes like 10 seconds difference in productivity, that's something else.

Look at the 11900K for example. Sacrificing productivity by shaving cores off the 10900K for le gaming performance that literally nobody notices. But that's just me I guess. Carry on.
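In frame-time terms that 250-to-300 fps gap is well under a millisecond per frame; a quick check:

# Frame time in milliseconds at each frame rate.
for fps in (250, 300):
    print(f"{fps} fps -> {1000 / fps:.2f} ms per frame")
# 250 fps -> 4.00 ms, 300 fps -> 3.33 ms: a ~0.67 ms difference per frame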
#74
londiste
The subset of desktop users that needs even 8 cores is a small one. Midrange and sub-200 €/$/£ CPUs are where the real mass of users is happy.
#75
dgianstefani
TPU Proofreader
AlexaGood luck noticing the difference between 250 and 300 fps.

And hey, if you want to pay a premium for a chip that does statistically increase performance but makes no difference in the real world (as you are focused on gaming and not the ego-inflating FPS numbers at the top left), to replace your current, perfectly fine chip, go for it. This is why I don't care that Intel had/will have better gaming chips.

I suppose if it makes like 10 seconds difference in productivity, that's something else.

Look at the 11900K for example. Sacrificing productivity by shaving cores off the 10900K for le gaming performance that literally nobody notices. But that's just me I guess. Carry on.
Lmao. Show me a Zen 2 CPU whose 99th-percentile frame rate sits comfortably at 250 fps. 1% lows are below 100 fps most of the time.

Your real world is as limited as your opinion. Let people enjoy things.