
New Spectre Vulnerability Version Beats All Mitigations, Performance to Badly Degrade After the Fix

Nothing we haven't seen before... Superscalar, out-of-order designs with the levels of speculative execution and caching found in modern processors will always be vulnerable to this kind of attack.
Actually, there is nothing principally wrong with speculative execution and caching. It's only a matter of making sure all caches are cleaned etc., which will require redesigns of microarchitectures, not just the mitigations we've seen so far, which only make it harder. Getting rid of SMT would help a lot though.

Until this happens, we should expect a stream of new Spectre class exploits.

So physical access is required to implement any exploit?
Local access is required, as usual. These vulnerabilities are not a real problem for consumers or non-cloud servers, so software mitigations should really be opt-in. There is no reason for all of us to suffer.

This requires high-level access to execute, which traditional security measures already prevent. This is one of those "if they get this, you're already hosed" type of situations. I'd be really surprised if the mitigations are required rather than optional patches that can be applied to just the mission-critical equipment that is most likely to get hit by this.
Well, this is exactly why we do security in layers. Sooner or later you should expect a vulnerability in one layer.
The real elephant in the room is the perpetual stupidity of (public) cloud computing, where a vulnerability on any layer can potentially bypass nearly all security measures. Nothing sensitive should ever run in the public cloud, unfortunately it does.

Although IMO most of this stuff has been wildly overblown, the majority of CPU attacks require a pre-pwned system with remote administrator/BIOS access. I can see emergency patches for the remote-execution ones, but the rest should be optional IMO.
Yes. Consumers should not worry about the exploits, only about the mitigations. I wish patches were opt-in.

Perhaps someone cleverer than me can tell me why adding its signature/behavior to antivirus/antimalware wouldn't solve the issue?
Because antimalware doesn't have the ability to stop any attack, only to identify known bad software.
This is why there are endless streams of new virus variants for Windows, until the specific underlying vulnerabilities (/design faults) are resolved.

If you find a vulnerability, you can just make a script that generates thousands of small variants of the program performing the exploit, resulting in different binary signatures, and the cat-and-mouse game is on. Antimalware doesn't work the way people think; it can never fix an exploit, and it's even debatable whether it does much "good" at all. Having privileged software like this may even open up new attack vectors, and there is even some antimalware software that can be regarded as malware/spyware itself.
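As a minimal sketch of that point (my own toy illustration; FNV-1a stands in for whatever fingerprint a real AV engine might use, no real AV internals are implied): flip a single padding byte and the "signature" changes completely, even though the program's behaviour is identical.

/* Two buffers hold the same payload and differ only in one padding byte,
 * yet their fingerprints come out completely different. This is why pure
 * signature matching is a cat-and-mouse game. FNV-1a is a stand-in hash. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint64_t fnv1a(const uint8_t *buf, size_t len) {
    uint64_t h = 0xcbf29ce484222325ULL;       /* FNV-1a 64-bit offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= buf[i];
        h *= 0x100000001b3ULL;                /* FNV-1a 64-bit prime */
    }
    return h;
}

int main(void) {
    uint8_t a[64] = {0}, b[64] = {0};
    memcpy(a, "identical exploit payload", 25);
    memcpy(b, "identical exploit payload", 25);
    b[63] = 0x90;                             /* one padding byte differs */

    printf("signature A: %016llx\n", (unsigned long long)fnv1a(a, sizeof a));
    printf("signature B: %016llx\n", (unsigned long long)fnv1a(b, sizeof b));
    return 0;
}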
 
To be honest, I think they should stop finding these things, because when they do, the source gets leaked and then the new viruses come out. Stop finding holes, period, and these things would not come out. Nowadays we hear "oh, we found a new way to break a system," then weeks later the source is released to the public and hackers just suck it up.
Sticking your head in the sand doesn't improve security, it just means you don't know what you don't know. Others are looking for these holes already.
 
It should be noted that this set of vulnerabilities is even more difficult (read: near impossible) to exploit than any of the rest like it, and that it is not worth any level of worry for the common user or even most business and corporate entities. The fix can rather safely be ignored and avoided.

Remember kids, 100% of people that drink water die.
This!
 
This requires high-level access to execute, which traditional security measures already prevent.
No, this type of code timing attack can certainly be executed from userland if I understand it correctly.

It should be noted that this set of vulnerabilities is even more difficult (read: near impossible) to exploit than any of the rest like it, and that it is not worth any level of worry for the common user or even most business and corporate entities. The fix can rather safely be ignored and avoided.


This!
I'm not so sure, but my philosophy on this remains the same. Fix the software design, don't gimp the CPU globally.
 
Because antimalware doesn't have the ability to stop any attack, only to identify known bad software.
This has been the flaw of AV since time began. It still doesn't stop user stupidity.
 
Local access is required, as usual.
Where are people getting that idea? The previous Spectre was demonstrated to work via JavaScript in a browser... I have no idea why this would be different as a variant unless I missed something.

What really makes it not noteworthy is that the attack is slow and guessing memory locations is hard. It requires a lot of setup and, generally, a skilled human hacker.
But not "local access."
 
So physical access is required to implement any exploit?
More or less. I mean, I can only steal your car if I have physical access to it. This makes all cars unsafe, and we had better ban/remove all cars to remove said threat. Obviously we could patch the car by putting it in a bunker, therefore making said car less useful. But that's obviously the best choice for the current situation.
 
This also requires competent QA testing.
If your software is really dealing with sensitive data, you better fucking have QA or you deserve that lawsuit.

If it's a video game, have fun with that.

It's like reporting the news, "if it bleeds, it leads", so if it makes this kind of threat sound scarier, it's in.
No, my point is physical access is NOT required. They are being dismissive of this, which is valid in some ways, but for an invalid reason.
 
No, my point is physical access is NOT required.
Um, bro? You need to re-read that white paper. Pay careful attention to section VI. Put simply, no one is going to achieve that level of code injection and execution over a network (any network). Direct physical access is required, and even then, setting up the exploit seems to be very machine/platform-specific and will require an extensive effort to gain success.
 
YES!!!

I have been waiting for TPU to pick this up, so I can finally correct the bad reporting and the terrible assumptions users who can't read make.

Neat takeaways from this white paper:

- They only specify "Skylake" but fail to say which rendition of the arch, and it's important to note this comes after initial Skylake protections were built in at an arch level.

- They mention "Zen" testing, but not which one. Zen is old and has been around a while; they make a uOP mention with "Zen 2", but it's just an example.

- They mention ARM in the title and the text, but never actually show testing done with the ARM arch.

People are already questioning the methods used in this work, as the flaws mentioned above are a pretty big deal.

Remember kids, 100% of people that drink water die.
They mention the differences between Zen and Zen 2 but only test on Zen... and don't specify the chip; for Skylake they specified the refresh 8700T.
They are also Intel-funded, which might explain the vagueness about which other chips were used or are just theoretically vulnerable.
In general, yet another poorly done "security piece" not learning from other groups' stumbles or intentional misdirections.
No CVE, no 90 days given to architecture owners, no credibility. I don't see any proof they tested against mitigated hardware.

 
They mention the differences between Zen and Zen 2 but only test on Zen... and don't specify the chip; for Skylake they specified the refresh 8700T.
They are also Intel-funded, which might explain the vagueness about which other chips were used or are just theoretically vulnerable.
In general, yet another poorly done "security piece" not learning from other groups' stumbles or intentional misdirections.
No CVE, no 90 days given to architecture owners, no credibility. I don't see any proof they tested against mitigated hardware.


Yup, presented as peer-reviewed gospel, but it isn't. I can make a cool PDF as well for the $7 a month for Acrobat.
 
Yup, presented as peer-reviewed gospel, but it isn't. I can make a cool PDF as well for the $7 a month for Acrobat.
Trying to match University of Minnesota ethics ...
 
Actually, there is nothing principally wrong with speculative execution and caching. It's only a matter of making sure all caches are cleaned etc.
There is nothing principally wrong; it is just a method that is naturally open to side-channel attacks. Any shared resource in the system is a potential vector for a side-channel attack (I would suggest reading some of the papers on these types of attacks, e.g. Xiong and Szefer, "Leaking Information Through Cache LRU States"), because you can always get information through the timing/QoS behaviour of the resource. Are you going to flush your entire L3 every time you context switch just to stop potential malicious threads snooping on something else? You don't; the performance penalty would be too big.
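To make the timing point concrete, here is a minimal sketch (my own illustration, not anything from the paper) of the primitive that cache side channels build on: unprivileged code timing the difference between a cached and a flushed load. It assumes x86-64 with GCC/Clang intrinsics.

/* A cached load is measurably faster than one served from DRAM, and that
 * difference is visible from ordinary userland code. Compile: cc -O1 timing.c */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

static uint64_t time_read(volatile uint8_t *p) {
    unsigned aux;
    uint64_t start = __rdtscp(&aux);   /* serialising timestamp before the load */
    (void)*p;                          /* the load being timed */
    uint64_t end = __rdtscp(&aux);     /* timestamp after the load */
    return end - start;
}

int main(void) {
    static uint8_t target[64];

    (void)time_read(target);           /* first touch pulls the line into cache */
    uint64_t hit = time_read(target);  /* time a cached access */

    _mm_clflush(target);               /* evict the line from the cache hierarchy */
    _mm_mfence();
    uint64_t miss = time_read(target); /* time an access served from memory */

    printf("cached access : ~%llu cycles\n", (unsigned long long)hit);
    printf("flushed access: ~%llu cycles\n", (unsigned long long)miss);
    return 0;
}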

At the end of the day the software side needs to live up to a certain standard of security as well, if that doesn't happen you will never be able to make a computer that is both fast and versatile while being secure.

If you are completely anal about security then you will have to sacrifice either speed or versatility, and you will have to control the software and runtime environment well.
Getting rid of SMT would help a lot though.
Say bye to your performance then, because now you are going back to the early 2000s where you are 100% guaranteed that your back end is grossly underutilised.

The whole idea of resource sharing is that you can dynamically allocate resources to whatever needs them most. Sure, if you can create a runtime where you know from the get-go how many resources each task will need, there is no need for dynamic allocation, but good luck convincing any programmer to do that.

Static scheduling and partitioning have proven time and time again to go against what programmers want, otherwise IA-64 wouldn't have been left in a ditch and everything would be VLIW...
 
Um, bro? You need to re-read that white paper.. Pay careful attention to section VI. Put simply, no one is going to achieve that level of code execute injection over a network(any network). Direct physical access is required, and even then, setting up the exploit seems to be very machine/platform specific and will require a extensive effort to gain success.
You caught me. I'm at work and haven't had time to review the whitepaper in detail beyond a skim. I'm just operating on the assumption it's similar to past Spectre exploits, which were demonstrated to be usable in JavaScript.

I'll shut up until I get home and can read it properly.
 
Vaccinate that crap already. Jesus.

Where are people getting that idea? The previous Spectre was demonstrated to work via JavaScript in a browser... I have no idea why this would be different as a variant unless I missed something.

What really makes it not noteworthy is that the attack is slow and guessing memory locations is hard. It requires a lot of setup and, generally, a skilled human hacker.
But not "local access."
Ignorance is bliss, I guess? Or it's just parroting something that went around for a while. Much like how people still flash the BIOS on GPUs like they upgrade drivers. -_-

Regardless, @efikkan, you also mentioned disabling SMT. Now look at the current marketplace :D Say an enterprise disables that for its server farm. Suddenly you have a capacity problem and there is no supplier to build you another bunch of servers. And you're already under pressure because of supply issues in a normal hardware cycle. It's a rock and a hard place, and there is really never enough time as it is. Fast with some collateral is always going to win the day over slow and careful.
 
Someone at Microsoft should pick up that phone.
Microsoft's enterprise products have QA testers. They just aren't who you think.

Hint: They are... you guys!
 
Microsoft's enterprise products have QA testers. They just aren't who you think.

Hint: They are... you guys!
Wouldn't I know it...



I don't have much to complain about, though. So far, the issues I've had in the last 5 years that I could consider rather grave, I could probably count on one or two hands. Not exactly a lot.

Definitely not as much as other people say they have had.
 
similar to past Spectre exploits, which were demonstrated to be usable in JavaScript.
And that only works given a TON of assumptions and perfect circumstances, none of which are real-world possibilities. That supposed "proof of concept" was only barely so and had zero practical application.
 
Pay careful attention to section VI. Put simply, no one is going to achieve that level of code injection and execution over a network (any network). Direct physical access is required, and even then, setting up the exploit seems to be very machine/platform-specific and will require an extensive effort to gain success.
This gives me hope that patches will be opt-in. Hurting the performance of every x86 connected to the internet seems like a knee-jerk overreaction if the risk of the vulnerability is low.

The risk/effort balance has to be right, and you can't ever stop everything. If this vulnerability requires direct physical access then it's of no consequence to any consumer device. Even thinking about my servers in a colo datacenter, the amount of ID checking and paperwork to gain access to my own hardware is enough to prevent this from being a casual drive-by exploit.
 
...I could probably count on one or two hands.
Using my wrists in the up or down position, plus my 10 fingers, gives me the ability to count to 2^12 = 4096.
 
I'm not so sure, but my philosophy on this remains the same. Fix the software design, don't gimp the CPU globally.
Please elaborate, fix which software design? And what relevance does this have for a hardware bug?

What really makes it not noteworthy is that the attack is slow and guessing memory locations is hard. It requires a lot of setup and, generally, a skilled human hacker.
But not "local access."
Virtual memory address space is huge; it's not a matter of "hacking skills", but of defeating something called entropy. Don't forget memory is moved around a lot too, so extracting useful contiguous blocks of memory wouldn't be easy.
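As a back-of-the-envelope sketch of that entropy point (the leak rate below is a made-up placeholder, not a figure from the paper): even a generous side-channel bandwidth makes blindly sweeping the 47-bit x86-64 user address space hopeless without some other way to narrow the search.

/* Rough arithmetic only: 2^47 bytes of user virtual address space divided by
 * an assumed 100 kB/s leak rate. Real attacks have to narrow the search with
 * additional tricks precisely because a blind sweep takes this long. */
#include <stdio.h>

int main(void) {
    double address_space = (double)(1ULL << 47);   /* bytes of user VA space on x86-64 */
    double leak_rate     = 100e3;                  /* hypothetical 100 kB/s side-channel bandwidth */
    double seconds       = address_space / leak_rate;
    printf("full sweep at 100 kB/s: ~%.0f years\n", seconds / (365.0 * 24 * 3600));
    return 0;
}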

Are you going to flush your entire L3 every time you context switch just to stop potential malicious threads snooping on something else? You don't, the performance penalty would be too big.
There should be no need. The CPU will know if the code is privileged to read a cache line, and once these enforcements are firmly in place, the Spectre class of bugs will go away.

At the end of the day the software side needs to live up to a certain standard of security as well, if that doesn't happen you will never be able to make a computer that is both fast and versatile while being secure.
Right there you demonstrated that you don't grasp this subject.
A hardware bug must be resolved in hardware. As long as the user can run any software they want, other software can't protect against a hardware bug like this.

Getting rid of SMT would help a lot though.
Say bye to your performance then, because now you are going back to the early 2000s where you are 100% guaranteed that your back end is grossly underutilised.
You don't understand how SMT works either then. SMT is sharing a core's resources between multiple threads. The usefulness of SMT is decreasing with more efficient CPU architectures, while the complexity of all the extra safeguards throughout the pipeline to facilitate multiple threads is only growing. Back when SMT was introduced, it made a lot of sense since the pipelines were stalled much more and implementing SMT required very little die space. Right now SMT is mostly a marketing thing, with mounting security implications, and this die space would be better spent making faster cores.

Unfortunately though, I wouldn't expect SMT to go away anytime soon.

Static scheduling and partitioning have proven time and time again to go against what programmers want, otherwise IA-64 wouldn't have been left in a ditch and everything would be VLIW...
Itanium had many flaws; probably the biggest one was a very complex instruction scheme.
But as of now, the primary bottleneck for CPUs is cache misses, with branching second. If something is going to beat speculative execution for general workloads, it needs to solve/avoid these two problems.
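A quick, self-contained illustration of the branching half of that claim (my example, not the poster's; with aggressive optimisation a compiler may emit branchless code and flatten the effect): the same loop over the same data runs noticeably faster when the branch is predictable.

/* Sums all bytes >= 128, first over random data (the branch is a coin flip and
 * mispredicts constantly), then over the same data sorted (the branch becomes
 * almost perfectly predictable). Compile with: cc -O1 branch.c */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 22)

static int cmp(const void *a, const void *b) {
    return (int)*(const unsigned char *)a - (int)*(const unsigned char *)b;
}

static long long sum_big(const unsigned char *v) {
    long long s = 0;
    for (int i = 0; i < N; i++)
        if (v[i] >= 128)                      /* roughly 50/50 on random input */
            s += v[i];
    return s;
}

int main(void) {
    unsigned char *v = malloc(N);
    for (int i = 0; i < N; i++) v[i] = (unsigned char)(rand() & 0xff);

    clock_t t0 = clock();
    long long a = sum_big(v);                 /* unpredictable branch */
    clock_t t1 = clock();

    qsort(v, N, 1, cmp);                      /* same data, now sorted */

    clock_t t2 = clock();
    long long b = sum_big(v);                 /* predictable branch */
    clock_t t3 = clock();

    printf("unsorted: sum=%lld in %.3f s\n", a, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("sorted:   sum=%lld in %.3f s\n", b, (double)(t3 - t2) / CLOCKS_PER_SEC);
    free(v);
    return 0;
}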
 
Oh my God there goes my sleep tonight.
 