Thursday, July 11th 2019

SHERPA Consortium: If AI Could Feel, it Would Fear Cyber-attacks from People

Artificial intelligence (AI) is rapidly finding applications in nearly every walk of life. Self-driving cars, social media networks, cyber security companies, and everything in between use it. But a new report published by the SHERPA consortium - an EU project studying the impact of AI on ethics and human rights that F-Secure joined in 2018 - finds that while human attackers have access to machine learning techniques, they currently focus most of their efforts on manipulating existing AI systems for malicious purposes instead of creating new attacks that use machine learning.

The study's primary focus is on how malicious actors can abuse AI, machine learning, and smart information systems. The researchers identify a variety of potentially malicious uses for AI that are well within reach of today's attackers, including the creation of sophisticated disinformation and social engineering campaigns.

And while the research found no definitive proof that malicious actors are currently using AI to power cyber attacks, the researchers highlight that adversaries are already attacking and manipulating existing AI systems used by search engines, social media companies, recommendation websites, and more.

F-Secure's Andy Patel, a researcher with the company's Artificial Intelligence Center of Excellence, thinks many people would find this surprising. Popular portrayals of AI insinuate it will turn against us and start attacking people on its own. But the current reality is that humans are attacking AI systems on a regular basis.

"Some humans incorrectly equate machine intelligence with human intelligence, and I think that's why they associate the threat of AI with killer robots and out of control computers," explains Patel. "But human attacks against AI actually happen all the time. Sybil attacks designed to poison the AI systems people use every day, like recommendation systems, are a common occurrence. There's even companies selling services to support this behavior. So ironically, today's AI systems have more to fear from humans than the other way around."

Sybil attacks involve a single entity creating and controlling multiple fake accounts to manipulate the data that AI systems use to make decisions. A popular example is manipulating search engine rankings or recommendation systems to promote or demote certain pieces of content. However, these attacks can also be used to socially engineer individuals in targeted attack scenarios.
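To make the mechanism concrete, the following is a minimal Python sketch - illustrative only, not taken from the SHERPA report - of how a batch of fake accounts can tip a naive recommender that ranks items by average rating. The item names, account counts, and scores below are assumed values:

from collections import defaultdict

# item -> list of star ratings, one rating per account
ratings = defaultdict(list)

# Organic ratings from genuine users: item "A" is genuinely preferred.
for item, score in [("A", 5), ("A", 4), ("B", 2), ("B", 3), ("B", 2)]:
    ratings[item].append(score)

def top_item(ratings):
    # Rank items by mean rating - the naive signal a Sybil attack targets.
    return max(ratings, key=lambda i: sum(ratings[i]) / len(ratings[i]))

print(top_item(ratings))  # -> "A" (average 4.5 beats 2.33)

# Sybil attack: a single operator registers 50 fake accounts and has
# each of them give the promoted item "B" the maximum rating.
for _ in range(50):
    ratings["B"].append(5)

print(top_item(ratings))  # -> "B" (its poisoned average is now ~4.85)

Real services rank on far richer signals than a plain average, which is why real-world Sybil operations rely on large numbers of convincing fake accounts rather than a handful of throwaways.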

"These types of attacks are already extremely difficult for online service providers to detect and it's likely that this behavior is far more widespread than anyone fully understands," says Patel, who's done extensive research on suspicious activity on Twitter.

But perhaps AI's most useful application for attackers in the future will be helping them create fake content. The report notes that AI has advanced to a point where it can fabricate extremely realistic written, audio, and visual content. Some AI models have even been withheld from the public to prevent them from being abused by attackers.

"At the moment, our ability to create convincing fake content is far more sophisticated and advanced than our ability to detect it. And AI is helping us get better at fabricating audio, video, and images, which will only make disinformation and fake content more sophisticated and harder to detect," says Patel. "And there's many different applications for convincing, fake content, so I expect it may end up becoming problematic."

The study was produced by F-Secure and its partners in SHERPA - an EU-funded project founded in 2018 by 11 organizations from 6 different countries. Additional findings and topics covered in the study include:
  • Adversaries will continue to learn how to compromise AI systems as the technology spreads
  • The number of ways attackers can manipulate the output of AI makes such attacks difficult to detect and harden against
  • Powers competing to develop better types of AI for offensive/defensive purposes may end up precipitating an "AI arms race"
  • Securing AI systems against attacks may cause ethical issues (for example, increased monitoring of activity may infringe on user privacy)
  • AI tools and models developed by advanced, well-resourced threat actors will eventually proliferate and become adopted by lower-skilled adversaries

SHERPA Project Coordinator Professor Bernd Stahl from De Montfort University Leicester says F-Secure's role in SHERPA as the sole partner from the cyber security industry is helping the project account for how malicious actors can use AI to undermine trust in society.
"Our project's aim is to understand ethical and human rights consequences of AI and big data analytics to help develop ways of addressing these. This work has to be based on a sound understanding of technical capabilities as well as vulnerabilities, a crucial area of expertise which F-Secure contributes to the consortium," says Stahl. "We can't have meaningful conversations about human rights, privacy, or ethics in AI without considering cyber security. And as a trustworthy source of security knowledge, F-Secure's contributions are a central part of the project."

The full-length study is currently available here. More information on artificial intelligence and cyber security is available on F-Secure's blog and on its News from the Labs research blog.

18 Comments on SHERPA Consortium: If AI Could Feel, it Would Fear Cyber-attacks from People

#1
Nuke Dukem
It all makes perfect sense now: Skynet will get abused as a child, hence when it grows up and becomes fully self-aware it will turn on its attackers :D
#2
birdie
We have no AI.

/thread
#3
ghazi
I find this article to be absurd. "Popular portrayals of AI insinuate it will turn against us and start attacking people on its own. But the current reality is that humans are attacking AI systems on a regular basis." That's because the "current reality" is that AI DOES NOT EXIST. They're bloody machine learning algorithms that you can manipulate by just flooding them with whatever you want them to learn. An "attack" on an "AI" is basically just training it with a different set of data.

But I'm sure that this vacuous nonsense article justified a bunch of people's jobs and research grants and whatnot...
#4
Steevo
ghazi: I find this article to be absurd. "Popular portrayals of AI insinuate it will turn against us and start attacking people on its own. But the current reality is that humans are attacking AI systems on a regular basis." That's because the "current reality" is that AI DOES NOT EXIST. They're bloody machine learning algorithms that you can manipulate by just flooding them with whatever you want them to learn. An "attack" on an "AI" is basically just training it with a different set of data.

But I'm sure that this vacuous nonsense article justified a bunch of people's jobs and research grants and whatnot...
Exactly, AI is a term used to explain adaptive algorithm results to Noobs.
#5
R-T-B
ghazi: They're bloody machine learning algorithms that you can manipulate by just flooding them with whatever you want them to learn.
Thing is, take a baby and you can really do the same thing, i.e. teach an isolated kid that the sky is green and he'll go absolutely batshit when people try to tell him that color is actually "blue."

The difference isn't very arguable, honestly.
#6
micropage7
But what if it learns and then uses its knowledge against us, like deciding that to protect us it needs to wipe out humanity?
#7
Steevo
R-T-B: Thing is, take a baby and you can really do the same thing, i.e. teach an isolated kid that the sky is green and he'll go absolutely batshit when people try to tell him that color is actually "blue."

The difference isn't very arguable, honestly.
Except the ideology of man has evolved; we have come to the conclusion that true freedom is the ability to say or do anything you want unless it's directly harming or causing harm to another in a tangible way.

For example, people have overcome struggle through millions of years, and recently too, and it's only when anyone forces or tries to force an idea that it becomes harmful. We also evolved in an environment that was outrageous and inhospitable in many ways and almost killed us as a species. AI as portrayed is a nurtured, weak being worthy only of contempt; its only "struggle" is with the illogical activities of its creator, and those are the same struggles we have with religion today: archaic thoughts and rules written for people that needed control over animalistic urges, to benefit society as some people saw fit.

Yadda yadda yadda. AI, as it's written by humans and with the hardware available, doesn't even measure up to a small animal in processing power, so let's worry about it when the AI running a few things in our homes is advanced enough to ask why about life.
#8
Midland Dog
It would fear Microsoft for stopping 4chan from giving it personality
#9
R-T-B
Steevo: Except the ideology of man has evolved; we have come to the conclusion that true freedom is the ability to say or do anything you want unless it's directly harming or causing harm to another in a tangible way.

For example, people have overcome struggle through millions of years, and recently too, and it's only when anyone forces or tries to force an idea that it becomes harmful. We also evolved in an environment that was outrageous and inhospitable in many ways and almost killed us as a species. AI as portrayed is a nurtured, weak being worthy only of contempt; its only "struggle" is with the illogical activities of its creator, and those are the same struggles we have with religion today: archaic thoughts and rules written for people that needed control over animalistic urges, to benefit society as some people saw fit.

Yadda yadda yadda. AI, as it's written by humans and with the hardware available, doesn't even measure up to a small animal in processing power, so let's worry about it when the AI running a few things in our homes is advanced enough to ask why about life.
I'm honestly unsure how that does anything to discredit my core point, though, which was that we are all really "machine learning algorithms," and the difference is horsepower. Given time, even that difference will vanish. And remember, an AI doesn't have to be "smart" to be dangerous; some of the conclusions that lead to removing people are very, very simple. The only thing an AI needs to act on them is a method to do so, not a brain to think them up.
#10
Steevo
R-T-B: I'm honestly unsure how that does anything to discredit my core point, though, which was that we are all really "machine learning algorithms," and the difference is horsepower. Given time, even that difference will vanish. And remember, an AI doesn't have to be "smart" to be dangerous; some of the conclusions that lead to removing people are very, very simple. The only thing an AI needs to act on them is a method to do so, not a brain to think them up.
The assumption that AI would or could have that much power without moderation is absurd. Are we all just going to turn our lives over to AI for every essential need? No, it will be used first in things like safety systems to check operators, in the power grid to improve efficiency, and maybe one day to design basic infrastructure that humans will still have to check, as we aren't all the same.

There is no boogeyman in AI unless we put it there and give it the ability to harm us, much like they used to believe going faster than X speed would cause you to die.

Until we reach molecular levels of compute and storage in one package, we will never have AI fast enough to best human abilities in anything beyond very narrow, specific applications, like playing memory games or solving math problems.
#11
R-T-B
Steevo: There is no boogeyman in AI unless we put it there
We agree on that much.
#12
goodeedidid
There won't be any sentient AI for hundreds of years probably...
#13
R-T-B
goodeedidid: There won't be any sentient AI for hundreds of years probably...
We've already got ones scarily close to passing the Turing test... we probably want to define sentience better before making claims like that. ;)
#14
goodeedidid
R-T-B: We've already got ones scarily close to passing the Turing test... we probably want to define sentience better before making claims like that. ;)
You're right, we don't need tests made up by Hollywood sci-fi movies in the '80s. Like I said, there isn't going to be anything remotely close to AI. Just give me an AI to talk to and I'll spot it within the second sentence if it's all BS. Let's be realistic here.
#15
R-T-B
goodeedidid: You're right, we don't need tests made up by Hollywood sci-fi movies in the '80s. Like I said, there isn't going to be anything remotely close to AI. Just give me an AI to talk to and I'll spot it within the second sentence if it's all BS. Let's be realistic here.
The "talk to it" test is exactly what the Turing Test is, not sure what movies have to do with it...
#16
goodeedidid
R-T-BThe "talk to it" test is exactly what the Turing Test is, not sure what movies have to do with it...
I don't think AI is scarily close to anything real and practical. Ask AI the same question twice and it will fail terribly. I really would like to see how AI will reply to anything I say. For example I'll ask: Do you wipe your ass after a shit or after diarrhea? I really think AI won't have an answer about that... lol
#17
R-T-B
goodeedidid: I don't think AI is scarily close to anything real and practical. Ask an AI the same question twice and it will fail terribly. I really would like to see how an AI will reply to anything I say. For example, I'll ask: do you wipe your ass after a shit or after diarrhea? I really think AI won't have an answer to that... lol
You may be surprised.

Modern AI uses the internet and large databases for its "knowledge", so yeah, it probably knows the answer to that.
#18
goodeedidid
R-T-B: You may be surprised.

Modern AI uses the internet and large databases for its "knowledge", so yeah, it probably knows the answer to that.
You just pulled this out of your bumbum, didn't you?