As part of Intel's Responsible AI work, the company has developed FakeCatcher, a technology that can detect fake videos with a 96% accuracy rate. Intel's deepfake detection platform is the world's first real-time deepfake detector that returns results in milliseconds. "Deepfake videos are everywhere now. You have probably already seen them; videos of celebrities doing or saying things they never actually did," said Ilke Demir, senior staff research scientist in Intel Labs.
Intel's real-time deepfake detection runs on Intel hardware and software, hosted on a server and accessed through a web-based platform. On the software side, an orchestra of specialist tools forms the optimized FakeCatcher architecture. Teams used OpenVINO to run the AI models for face- and landmark-detection algorithms. Computer-vision blocks were optimized with Intel Integrated Performance Primitives (a multi-threaded software library) and OpenCV (a toolkit for processing real-time images and videos), inference blocks were optimized with Intel Deep Learning Boost and Intel Advanced Vector Extensions 512, and media blocks were optimized with Intel Advanced Vector Extensions 2. Teams also leaned on the Open Visual Cloud project to provide an integrated software stack for the Intel Xeon Scalable processor family. On the hardware side, the real-time detection platform can run up to 72 different detection streams simultaneously on 3rd Gen Intel Xeon Scalable processors.
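For readers curious what that kind of pipeline roughly looks like in code, here is a minimal sketch of decoding video frames with OpenCV and running a face-detection model on the CPU through OpenVINO. The model filename, input size, and video path are placeholders for the example, not Intel's actual FakeCatcher models or pipeline.

```python
# Minimal sketch: OpenCV handles frame capture, OpenVINO runs a detection model on CPU.
import cv2
from openvino.runtime import Core

core = Core()
model = core.read_model("face-detection.xml")          # placeholder IR model
compiled = core.compile_model(model, device_name="CPU")
output_layer = compiled.output(0)

cap = cv2.VideoCapture("input_video.mp4")              # placeholder video source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize and reorder the frame into the NCHW blob layout most detection models expect.
    blob = cv2.dnn.blobFromImage(frame, size=(300, 300))
    detections = compiled([blob])[output_layer]
    # The detected face regions would then feed the landmark and signal-extraction stages.
cap.release()
```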
Most deep learning-based detectors look at raw data to try to find signs of inauthenticity and identify what is wrong with a video. In contrast, FakeCatcher looks for authentic clues in real videos by assessing what makes us human: the subtle "blood flow" in the pixels of a video. When our hearts pump blood, our veins change color. These blood-flow signals are collected from all over the face, and algorithms translate them into spatiotemporal maps. Then, using deep learning, the detector can instantly determine whether a video is real or fake.
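What the press release describes is, in effect, remote photoplethysmography: tiny, periodic color changes in the skin that track the pulse. A rough, illustrative sketch of the general idea is below; the region coordinates, the green-channel heuristic, and the map layout are assumptions for the example, not Intel's actual signal processing.

```python
# Rough sketch: track the average color of several face regions across frames
# and stack the per-region signals into a small spatiotemporal map.
import numpy as np

def spatiotemporal_map(frames, regions):
    """frames: list of HxWx3 arrays; regions: list of (y, x, h, w) face patches."""
    rows = []
    for (y, x, h, w) in regions:
        # Mean green-channel intensity per frame gives a crude pulse-like signal,
        # since blood volume changes absorb green light most strongly.
        signal = np.asarray([f[y:y + h, x:x + w, 1].mean() for f in frames])
        # Remove the DC component so only the pulse-like variation remains.
        rows.append(signal - signal.mean())
    return np.stack(rows)  # shape: (num_regions, num_frames)

# Example with dummy data: three face patches over 64 frames.
frames = [np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8) for _ in range(64)]
regions = [(30, 30, 20, 20), (30, 80, 20, 20), (80, 55, 20, 20)]
print(spatiotemporal_map(frames, regions).shape)  # (3, 64)
```

A deep-learning classifier would then label each map as real or fake depending on whether the signals look physiologically consistent across the face.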
Deepfake videos are a growing threat. Companies will spend up to $188 billion on cybersecurity solutions, according to Gartner. Deepfakes are also tough to detect in real time: existing detection apps require uploading videos for analysis and then waiting hours for results.
Deception through deepfakes can cause real harm, such as diminished trust in media. FakeCatcher helps restore that trust by enabling users to distinguish between real and fake content.
There are several potential use cases for FakeCatcher. Social media platforms could leverage the technology to prevent users from uploading harmful deepfake videos. Global news organizations could use the detector to avoid inadvertently amplifying manipulated videos. And nonprofit organizations could employ the platform to democratize detection of deepfakes for everyone.
View at TechPowerUp Main Site