>Admittedly there wouldn't be any easy way to tell if you're genuine, so we'll have to assume you really are who you say you are!
Actually, you can just go to the NTBugtraq home page (www.ntbugtraq.com) and either email me at the address listed there, or call my phone...;-]
>VISTA & its version of IE7, per the URL you cited? Does counter for THIS, here:
I think my point was missed here a little. Protected Mode (PM) relies on the technologies you cited (MIC and UIPI) to provide enforcement within IE. These technologies govern what a process can do. However, when IE prompts the user asking whether they want to install an ActiveX control (provided they're a member of the local Administrators group), PM then hands the work off to objects that sit outside of PM. The tasks passed to them can then do anything the user can do, with Administrative privilege.
If this were not true it would be impossible for a user to install an ActiveX control, or modify registry/file settings that may need to be done from time to time (e.g. update an existing ActiveX control.)
This is the “hole” in the PM. I’m not suggesting it’s flawed; only that it is present. And its presence does mean the PM is not truly a sandbox (and I’m not sure I’ve seen MS refer to it as such, to their credit.)
Most malware that ends up on people's systems gets there by the user double-clicking on something (not via browser exploits), so as long as IE prompts people to take an action, they will take it. PM stops drive-by downloads and exploitation of some browser vulnerabilities (not XSS, for example), but if you consider the percentage of people who've been infected via IE versus other ways, it is, IMO, solving a very small problem.
I’ll come back to this.
>Rootkits are on the rise. Zero 1-2 years ago.
I have to assume we're having a problem with the term "rootkits." My definition is code that is completely invisible to the user through normal inspection. So it has to be covert enough not to show up in Task Manager and/or Explorer, and be invisible to AV. Otherwise, it's not a rootkit in my book.
The term has become overused to refer to anything that provides backdoors and/or covert command-and-control channels. Have a look at http://www.rootkit.com/ for a list. NT Rootkit, by Greg Hoglund, was first released in 1999. So saying there were ZERO in 2003 is just wrong.
Even before that there were discussions and Proof-of-Concept (PoC) code that exploited Alternate Data Streams (ADS) to hide themselves on disk (albeit not being able to hide the running process.) So we’ve had rootkits for a long time.
I will, again, say IMO that the number of machines infected with completely undetectable malware components is not a significantly higher percentage than it was in 2003. FWIW, my employer (Cybertrust/ICSA Labs) manages the WildList.org site, which tracks In-the-Wild malware. You can have a look at the October 2006 data (latest posted) and get an idea of what’s out there. You can then lookup the names of the malware to see what it does.
http://www.wildlist.org/WildList/200610.htm
This doesn’t mean that you can rely on AV to completely remove an infection. I agree that rebuilding after an infection is discovered is the Best Practice.
But again, we have to stop treating infection as binary, the same way we have to stop treating vulnerability as binary.
Let me take your example of walking through a party full of plague-infected people. Yes, if you do that and you're vaccinated, you are more protected than if you're not vaccinated.
However, why is it that we don't all get vaccinated for the plague? It's simple: because the vast majority of us will never come into contact with anyone who has an active case that can infect us. Is it impossible for me to become infected? No! But the threat of my being infected is near zero, hence we don't vaccinate against it. Yet the cost of being infected could be death…still we don't get vaccinated.
So, in the case of the plague and people in, say, North America:
Vulnerability Prevalence = 100%
Cost of Infection = Death (let’s call it 100%)
Threat Rate = 100 people with active infections within the U.S. (~300m people)
Risk = Vulnerability Prevalence * Cost of Infection * Threat Rate
Risk = 100% * 100% * 0.000000333 = 0.0000333%
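For what it's worth, here's that same arithmetic as a quick Python sketch (the figures are the illustrative ones above, not real epidemiological data):

# Baseline plague risk for someone in the U.S., using the illustrative
# numbers above (not real epidemiological data).
vulnerability_prevalence = 1.0     # 100% -- everyone can catch plague
cost_of_infection = 1.0            # 100% -- treat infection as total loss
active_infections = 100            # assumed active U.S. cases
population = 300_000_000           # ~300m people

threat_rate = active_infections / population
risk = vulnerability_prevalence * cost_of_infection * threat_rate

print(f"Threat rate: {threat_rate:.9f}")   # 0.000000333
print(f"Risk: {risk * 100:.7f}%")          # 0.0000333%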
This is your risk if you do nothing. Now consider what happens when you travel outside of the U.S., to a country where plague is present. The Threat Rate increases, possibly dramatically.
Take CountryX, where plague is known to be present. Let's say 1% of its population has plague, and the country has 100m people:
Threat Rate = 1m/100m = 1%
Risk = 100% * 100% * 1% = 1%
Wow, now that's a HUGE increase in risk, roughly a 3,000,000% increase in fact! But it doesn't consider all of the facts:
- What’s the chance I am going to meet one of those people?
- What’s the chance they’ll have an active infection when I do meet them?
- What’s the chance I’ll have no indications I might be getting near plague victims?
- What’s the chance my contact will actually lead to plague?
Each of these (and more) affects the final risk value, and any that are less than 100% reduce that initial 1% risk further.
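To show how those extra factors bite, here's a quick Python sketch; every one of the mitigating probabilities below is invented purely for illustration:

# CountryX risk, then the same risk with the (invented) mitigating
# probabilities from the list above multiplied in.
vulnerability_prevalence = 1.0    # 100%
cost_of_infection = 1.0           # 100%
threat_rate = 1_000_000 / 100_000_000    # 1% of CountryX's 100m people

base_risk = vulnerability_prevalence * cost_of_infection * threat_rate
print(f"CountryX risk before mitigators: {base_risk * 100:.2f}%")   # 1.00%

# Hypothetical answers to the questions above -- each a probability < 100%
p_meet_infected = 0.10            # chance I actually meet an infected person
p_active_when_met = 0.50          # chance their infection is active at the time
p_no_warning_signs = 0.25         # chance I get no indication I'm near victims
p_contact_leads_to_plague = 0.20  # chance contact actually infects me

mitigated_risk = base_risk * (p_meet_infected * p_active_when_met *
                              p_no_warning_signs * p_contact_leads_to_plague)
print(f"CountryX risk after mitigators:  {mitigated_risk * 100:.5f}%")  # 0.00250%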
Now apply this thinking to computer security and vulnerabilities:
Adobe PDFs can be used to cause Cross Site Scripting (XSS) in Firefox.
Vulnerability Prevalence = 35%?? (whatever market share value you want to give to Firefox is fine by me.)
Cost of Exploitation = Let’s say 100% again, as in being exploited means you lose all of your bank balance??
Threat Rate = 0% (We’ve had no reports of any sites hosting exploits)
Risk = 35% * 100% * 0%
Anything times 0% is 0, right?
Ok, so let’s revise the Threat Rate. Let us assume that some 10,000 sites are currently hosting PDF/XSS attacks today.
Threat Rate = 0.000093567 (10,000/106,875,138 – number of sites reported by Netcraft in January 2007)
Risk = 35% * 100% * 0.000093567 = 0.00327485%
Now this is from a world perspective. This is how we look at the risk in the world as a result of some new thing. If you ran Firefox, the number would be different:
Risk = 100% * 100% * 0.000093567 = 0.0093567%
We’re still less than 1/100 of a percent.
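The same arithmetic, as a Python sketch, using the assumed 10,000 malicious sites and the Netcraft January 2007 site count from above:

# Risk of the Adobe PDF / Firefox XSS issue, using the figures above.
malicious_sites = 10_000
total_sites = 106_875_138            # Netcraft, January 2007
threat_rate = malicious_sites / total_sites    # ~0.000093567

cost_of_exploitation = 1.0           # assume total loss of your bank balance

world_risk = 0.35 * cost_of_exploitation * threat_rate         # ~35% run Firefox
firefox_user_risk = 1.0 * cost_of_exploitation * threat_rate   # you definitely do

print(f"World risk:        {world_risk * 100:.7f}%")         # 0.0032748%
print(f"Firefox-user risk: {firefox_user_risk * 100:.7f}%")  # 0.0093567%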
So how much of your time should you spend on something that carries that much risk? And don’t forget, we haven’t even applied mitigators to this yet:
- Chances the malicious site is still up by the time I get there
- Chances the criminals actually succeed in getting all of my money, despite having my credentials
- Chances the bank isn’t going to give me all my money back
Etc…
Vulnerability-based thinking is binary. You are, or you're not. You either have something to do, or you don't. It's very easy; however, it's enormously time-consuming and wastes ridiculous amounts of resources worldwide every day.
It happens because, for most people, it's impossible to do the risk calculation to the extent they think they should. In the above example most people would be stumped on the Threat Rate: "How do I know how many criminal sites are out there exploiting the vulnerability?" But if you look at it reasonably, before I even have a 1% risk there'd have to be a million sites exploiting the vulnerability. For that to be true, roughly 1 out of every 100 web sites would have to be exploiting this vulnerability.
I would argue it's impossible to imagine that 1 out of every 100 web sites is criminal and exploiting anything. That's just way more criminal activity than has ever been seen before. So take any other browser-exploiting vulnerability you can think of, apply the above math, and you'll see that browser exploits just aren't worth worrying about.
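And the back-of-the-envelope check behind that "million sites" figure, again as a Python sketch:

# How many exploiting sites would it take for a Firefox user's risk to hit 1%?
total_sites = 106_875_138      # Netcraft, January 2007
target_risk = 0.01             # 1%

# With Vulnerability Prevalence and Cost both at 100%, Risk equals the
# Threat Rate, so the required threat rate is simply the target risk.
required_sites = target_risk * total_sites
print(f"Sites needed: {required_sites:,.0f}")                          # 1,068,751
print(f"That is 1 in every {total_sites / required_sites:.0f} sites")  # 1 in 100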
Now don't get me wrong; it's not as if we say "Oh, just go anywhere you want with your browser and do nothing to it" and call that secure. It's a question of resources, and how you should spend them.
Do we give up what Active Scripting and ActiveX provide to the average person on a site (usually a better experience) because we fear such a small risk? Or do we do a better job of educating our users to ensure they don't end up at 1 of the 10,000, or 100,000, criminal sites?
Do we take the time and resources we put into patching and apply them to better Group Policy Object definitions, or better proxy/IDS (Intrusion Detection System)/IPS (Intrusion Prevention System) filters? Do we instead focus on the few people in a company who typically, repeatedly, get infected versus the balance who never do?
There are so many ways to lower that Risk number without ever having to patch anything…honestly.
Hopefully this sheds some light on why being vulnerable is not the same as being at risk, and why an increased threat doesn't necessarily translate to increased risk either.
The concepts above are not really difficult to understand, but I do know that they are hard to believe and/or accept. Yet in my 30 years' experience in the business, they are the most effective at reducing and/or eliminating risk.
>I set up a pal's machine on XP 2 days ago; we got nearly INSTANTLY "hit" w/ a "Messenger Service" 'attack'
Well, you must have done something wrong as XP SP2 installs by default with the Windows Firewall enabled, meaning Messenger shouldn’t have been exposed!
Alternatively, had you installed while attached to one of those $50 routers, Wireless Access Points (WAPs), or cable modems, you'd have had what we call "Default Deny" in place and the attack wouldn't have got past it.
>Killbits in Internet Explorer 6.0
http://support.microsoft.com/kb/240797/en-us provides detailed instructions on how to set them. Basically, IE checks the registry to see whether it should or should not run a given control. You can take any given control and set it such that IE will not run it, but it will still run in other applications. As long as the control is registered (and virtually every DLL is), you merely have to figure out its Class ID (CLSID) and add it to the IE list, and it can no longer be invoked from within IE 6.0.
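As a sketch only (per my reading of KB240797, and using a made-up placeholder CLSID), the change amounts to setting the Compatibility Flags value to 0x400 under the ActiveX Compatibility key; something like this, using Python's winreg module as the vehicle:

# Sketch: set the IE "kill bit" for an ActiveX control by writing
# Compatibility Flags = 0x400 under the ActiveX Compatibility key,
# as described in KB240797. The CLSID below is a placeholder, not a
# real control, and writing HKLM requires Administrative privilege.
import winreg

CLSID = "{00000000-0000-0000-0000-000000000000}"   # placeholder CLSID
KILL_BIT = 0x00000400

key_path = ("SOFTWARE\\Microsoft\\Internet Explorer\\"
            "ActiveX Compatibility\\" + CLSID)

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Compatibility Flags", 0,
                      winreg.REG_DWORD, KILL_BIT)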
>CRLs
Certificate Revocation Lists. When a digital certificate is produced, it is signed by a Root Trust Authority. It has parameters that state, amongst other things, how long it should be valid for. Once a certificate has expired it should no longer be, and is not, trusted. However, what if you need to make a certificate untrustworthy for some other reason?
Imagine that your private key, the key you use for signing your software, is stolen. Since you don't know where it is, you don't know whether someone else is going to use it to leverage the trust others might have in you (via your cert). So you need to revoke the cert; you can't simply alter the expiration date.
This is where CRLs come in. The concept of PKI (Public Key Infrastructure) always included the ability to revoke a cert. When you are presented with a certificate, your system was supposed to check with a trusted authority to find out whether the cert had been revoked. For myriad reasons, this was rarely implemented (including not being supported at all in Windows.)
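Conceptually the check is nothing more than "does this certificate's serial number appear on the issuer's signed revocation list?" A toy Python sketch (the serial numbers are made up, and a real implementation would have to fetch the CRL from the certificate's distribution point and verify the CA's signature on it before trusting the list):

# Toy illustration of a CRL check: a CRL is essentially a signed list of
# revoked certificate serial numbers published by the issuing CA.
revoked_serials = {0x1A2B3C, 0x4D5E6F}   # made-up serials from a fetched CRL

def is_revoked(cert_serial: int) -> bool:
    """Return True if the certificate's serial appears on the CRL."""
    return cert_serial in revoked_serials

print(is_revoked(0x1A2B3C))   # True  -> do not trust this certificate
print(is_revoked(0x999999))   # False -> not revoked (still check expiry, chain, etc.)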
CRLs are now supported in Vista…and we just have to wait and see whether the Certificate Issuers are going to deliver them (FWIW, we at Cybertrust are a Trusted Root Certificate Authority – GTE Cybertrust Root.)
Cheers,
Russ