Artificial intelligence (AI) is rapidly finding applications in nearly every walk of life. Self-driving cars, social media networks, cyber security companies, and everything in between use it. But a new report published by the SHERPA consortium - an EU project studying the impact of AI on ethics and human rights that F-Secure joined in 2018 - finds that while human attackers have access to machine learning techniques, they currently focus most of their efforts on manipulating existing AI systems for malicious purposes instead of creating new attacks that would use machine learning.
The study's primary focus is on how malicious actors can abuse AI, machine learning, and smart information systems. The researchers identify a variety of potentially malicious uses for AI that are well within reach of today's attackers, including the creation of sophisticated disinformation and social engineering campaigns.
And while the research found no definitive proof that malicious actors are currently using AI to power cyber attacks, the researchers highlight that adversaries are already attacking and manipulating existing AI systems used by search engines, social media companies, recommendation websites, and more.
F-Secure's Andy Patel, a researcher with the company's Artificial Intelligence Center of Excellence, thinks many people would find this surprising. Popular portrayals of AI insinuate it will turn against us and start attacking people on its own. But the current reality is that humans are attacking AI systems on a regular basis.
"Some humans incorrectly equate machine intelligence with human intelligence, and I think that's why they associate the threat of AI with killer robots and out of control computers," explains Patel. "But human attacks against AI actually happen all the time. Sybil attacks designed to poison the AI systems people use every day, like recommendation systems, are a common occurrence. There's even companies selling services to support this behavior. So ironically, today's AI systems have more to fear from humans than the other way around."
Sybil attacks involve a single entity creating and controlling multiple fake accounts in order to manipulate the data that AI uses to make decisions. A popular example of this attack is manipulating search engine rankings or recommendation systems to promote or demote certain pieces of content. However, these attacks can also be used to socially engineer individuals in targeted attack scenarios.
"These types of attacks are already extremely difficult for online service providers to detect and it's likely that this behavior is far more widespread than anyone fully understands," says Patel, who's done extensive research on suspicious activity on Twitter.
But perhaps AI's most useful application for attackers in the future will be helping them create fake content. The report notes that AI has advanced to a point where it can fabricate extremely realistic written, audio, and visual content. Some AI models have even been withheld from the public to prevent them from being abused by attackers.
"At the moment, our ability to create convincing fake content is far more sophisticated and advanced than our ability to detect it. And AI is helping us get better at fabricating audio, video, and images, which will only make disinformation and fake content more sophisticated and harder to detect," says Patel. "And there's many different applications for convincing, fake content, so I expect it may end up becoming problematic."
The study was produced by F-Secure and its partners in SHERPA - an EU-funded project founded in 2018 by 11 organizations from 6 different countries. Additional findings and topics covered in the study include:
- Adversaries will continue to learn how to compromise AI systems as the technology spreads
- The number of ways attackers can manipulate the output of AI makes such attacks difficult to detect and harden against
- Powers competing to develop better types of AI for offensive/defensive purposes may end up precipitating an "AI arms race"
- Securing AI systems against attacks may cause ethical issues (for example, increased monitoring of activity may infringe on user privacy)
- AI tools and models developed by advanced, well-resourced threat actors will eventually proliferate and become adopted by lower-skilled adversaries
SHERPA Project Coordinator Professor Bernd Stahl from De Montfort University Leicester says F-Secure's role in SHERPA as the sole partner from the cyber security industry is helping the project account for how malicious actors can use AI to undermine trust in society.
The full-length study is currently available here. More information on artificial intelligence and cyber security is available on F-Secure's blog, or F-Secure's News from the Labs research blog.
View at TechPowerUp Main Site