Monday, November 14th 2022

Intel Introduces Real-Time Deepfake Detector

As part of Intel's Responsible AI work, the company has developed FakeCatcher, a technology that can detect fake videos with a 96% accuracy rate. Intel's deepfake detection platform is the world's first real-time deepfake detector that returns results in milliseconds. "Deepfake videos are everywhere now. You have probably already seen them; videos of celebrities doing or saying things they never actually did," said Ilke Demir, senior staff research scientist in Intel Labs.

Intel's real-time deepfake detection uses Intel hardware and software, runs on a server, and interfaces through a web-based platform. On the software side, an orchestra of specialist tools forms the optimized FakeCatcher architecture. Teams used OpenVINO to run AI models for face and landmark detection. Computer vision blocks were optimized with Intel Integrated Performance Primitives (a multi-threaded software library) and OpenCV (a toolkit for processing real-time images and videos), while inference blocks were optimized with Intel Deep Learning Boost and Intel Advanced Vector Extensions 512, and media blocks were optimized with Intel Advanced Vector Extensions 2. Teams also leaned on the Open Visual Cloud project to provide an integrated software stack for the Intel Xeon Scalable processor family. On the hardware side, the real-time detection platform can run up to 72 concurrent detection streams on 3rd Gen Intel Xeon Scalable processors.
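As a rough illustration of how such building blocks could fit together (this is not Intel's FakeCatcher code), the sketch below uses OpenVINO's Python API to run a face-detection model over video frames decoded with OpenCV. The model file (face-detection-retail-0004 from the Open Model Zoo) and the input clip are assumptions made for the example.

```python
import cv2
import numpy as np
from openvino.runtime import Core  # OpenVINO 2022.x Python API

# Assumed model: face-detection-retail-0004 from the Open Model Zoo,
# in OpenVINO IR format (.xml/.bin). Not Intel's FakeCatcher model.
MODEL_XML = "face-detection-retail-0004.xml"

core = Core()
model = core.read_model(MODEL_XML)
compiled = core.compile_model(model, "CPU")
output_layer = compiled.output(0)

cap = cv2.VideoCapture("input_clip.mp4")  # hypothetical test video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # This model expects a 1x3x300x300 BGR blob.
    blob = cv2.resize(frame, (300, 300)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)
    # Output shape is [1, 1, N, 7]: image_id, label, confidence, x_min, y_min, x_max, y_max
    detections = compiled([blob])[output_layer]
    h, w = frame.shape[:2]
    for _, _, conf, x_min, y_min, x_max, y_max in detections[0][0]:
        if conf > 0.5:
            face = frame[int(y_min * h):int(y_max * h), int(x_min * w):int(x_max * w)]
            # ...hand the cropped face to the downstream analysis stage here...
cap.release()
```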
Most deep learning-based detectors look at raw data to try to find signs of inauthenticity and identify what is wrong with a video. In contrast, FakeCatcher looks for authentic clues in real videos by assessing what makes us human: subtle "blood flow" in the pixels of a video. When our hearts pump blood, our veins change color. These blood-flow signals are collected from all over the face, and algorithms translate them into spatiotemporal maps. Deep learning then instantly determines whether a video is real or fake.
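As a toy illustration of that idea (not FakeCatcher's actual algorithm), the sketch below averages the green channel over a grid of face patches in every frame and stacks the results over time, producing the kind of spatiotemporal map a deep-learning classifier could be trained on. The grid size and the green-channel heuristic are assumptions.

```python
import numpy as np

def spatiotemporal_map(face_frames, grid=(8, 8)):
    """Stack per-patch green-channel means over time.

    face_frames: iterable of HxWx3 uint8 BGR crops of the same face.
    Returns an array of shape (num_frames, grid_rows * grid_cols) in which
    the subtle periodic color change caused by blood flow shows up as a
    temporal signal per face region.
    """
    rows, cols = grid
    signals = []
    for frame in face_frames:
        h, w = frame.shape[:2]
        means = []
        for r in range(rows):
            for c in range(cols):
                patch = frame[r * h // rows:(r + 1) * h // rows,
                              c * w // cols:(c + 1) * w // cols, 1]  # green channel
                means.append(patch.mean())
        signals.append(means)
    sig = np.asarray(signals, dtype=np.float32)
    # Subtract each region's mean so only the temporal variation remains;
    # a small classifier would then decide real vs. fake from these maps.
    return sig - sig.mean(axis=0, keepdims=True)
```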

Deepfake videos are a growing threat. Companies will spend up to $188 billion on cybersecurity solutions, according to Gartner. Deepfakes are also tough to detect in real time: existing detection apps require uploading videos for analysis and then waiting hours for results.

Deception enabled by deepfakes can cause harm, with consequences such as diminished trust in media. FakeCatcher helps restore that trust by enabling users to distinguish between real and fake content.

There are several potential use cases for FakeCatcher. Social media platforms could leverage the technology to prevent users from uploading harmful deepfake videos. Global news organizations could use the detector to avoid inadvertently amplifying manipulated videos. And nonprofit organizations could employ the platform to democratize detection of deepfakes for everyone.

14 Comments on Intel Introduces Real-Time Deepfake Detector

#1
claes
Doesn't AI just find a way to outwit it again and again? Dog chasing its own tail much?
#2
Crackong
I think this is a 'Deep Fake trainer' instead of a detector.
All those deepfake developers will use this as a training tool.

Intel is actually perfecting the deepfake tech by releasing this.
#3
phanbuey
only a matter of time anyways.
#4
Leobar
Seems like a temporary fix for detecting deep fakes until the people who make deep fakes reverse engineer it and just make them more advanced. Feels like it's only a matter of time till a deep fake ruins someone's career/life, and even if it comes out that it was fake, that person's life is already ruined. Yikes.
#5
theGryphon
phanbuey: only a matter of time anyways.
Exactly what I thought. Only a matter of time to develop an AI that catches the other AI.

May the best AI win!
Crackong: I think this is a 'Deep Fake trainer' instead of a detector.
All those deepfake developers will use this as a training tool.

Intel is actually perfecting the deepfake tech by releasing this.
Without a product like this, we're left with our measly human eyes and brains, and we're bound to lose against a good deepfake. This kind of "solution" to our self-inflicted problems gives us a chance... for a small fee of...
#6
zlobby
theGryphon: Exactly what I thought. Only a matter of time to develop an AI that catches the other AI.

May the best AI win!

Without a product like this, we're left with our measly human eyes and brains, and we're bound to lose against a good deepfake. This kind of "solution" to our self-inflicted problems gives us a chance... for a small fee of...
Carbon-based lifeforms will not remain the peak of evolution on this planet for too long. AI will eventually outgrow our capabilities to control it.
Biggest problem (?) with AI is the moral and ethical bias, or the lack thereof. Not that this helped humans much anyway.

*licks eyeball and turns invisible*
#7
Valantar
Now we just need a method of Deepfaking Deepfake Detectors. And then detecting those. Who could ever doubt that AI would open new areas of business in the tech sector?
#8
Vayra86
claes: Doesn't AI just find a way to outwit it again and again? Dog chasing its own tail much?
Perfect business model isn't it?
zlobby: Carbon-based lifeforms will not remain the peak of evolution on this planet for too long. AI will eventually outgrow our capabilities to control it.
Biggest problem (?) with AI is the moral and ethical bias, or the lack thereof. Not that this helped humans much anyway.

*licks eyeball and turns invisible*
Moral and ethical bias is in fact also a product of evolution, so it might very well be possible for AI to learn that too, because the outcomes of its interactions might improve.

It's about what the definition of success really is. Is it harmony, or domination?
#9
Dirt Chip
Jokes about deepfakes of e-cores, frame generation & upscaling, and graphs in PR presentations - under here, please.
#10
DeathtoGnomes
btarunr: videos of celebrities doing or saying things they never actually did
Denial is not a river in Egypt. They did it all.
#11
Steevo
Soon we will have to rely on people we know and trust to talk to instead of talking heads telling us what to believe, as they can be made to say anything.
#12
R-T-B
claes: Doesn't AI just find a way to outwit it again and again? Dog chasing its own tail much?
I mean that's really all we can do, yeah.
DeathtoGnomes: Denial is not a river in Egypt. They did it all.
I hope you are being silly and don't actually think every deepfake is real...
#13
DeathtoGnomes
R-T-B: I hope you are being silly and don't actually think every deepfake is real...
Salt is cheap. :D
#14
R-T-B
DeathtoGnomes: Salt is cheap. :D
Fair, it's the internet's most abundant resource, pretty sure.