Deepfake technology uses artificial intelligence (AI) and machine learning to manipulate or generate realistic video, audio, and image content.
The rapid development of generative AI has led to a proliferation of deepfake services that can generate photorealistic images of people who do not exist.
Deepfake technology poses significant risks in five key areas: fraud and financial gain, extortion, disinformation, identity theft, and manipulation of public trust.
Deepfakes are used to craft false narratives that foster distrust in official sources and institutions, as has already been seen in the U.S. presidential campaign.
Deepfake technology has introduced a new realm of cybersecurity threats, as generative AI models have made convincing deepfakes far easier to create.
Creating deepfake videos requires either face-recognition and face-swapping models or generative AI.
Generative AI has become much more accessible because it doesn’t require specialized hardware or advanced technical skills.
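To illustrate how low that barrier is, the sketch below generates a photorealistic portrait of a non-existent person in a few lines of Python. It assumes the open-source diffusers library and the runwayml/stable-diffusion-v1-5 checkpoint, which are illustrative choices rather than tools named here, and it runs fastest on a GPU:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available text-to-image model (an illustrative choice;
# any diffusion checkpoint with a compatible pipeline works the same way).
device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
)
pipe = pipe.to(device)

# A generic prompt yields a realistic face of a person who does not exist.
image = pipe("studio portrait photo of a person, natural lighting").images[0]
image.save("synthetic_portrait.png")
```

Hosted services wrap the same capability behind a web form, so even this much code is optional.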
Tools like ElevenLabs offer a voice-cloning feature that can replicate a person's voice from roughly 10 minutes of clean audio.
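The sketch below shows roughly how such a service is driven: upload reference audio to register a voice, then synthesize arbitrary text in that voice. The endpoint paths and field names follow ElevenLabs' public REST documentation at the time of writing and should be treated as assumptions; the vendor also gates cloning behind account-level terms.

```python
import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"  # issued per account
BASE = "https://api.elevenlabs.io/v1"

# Step 1: register a cloned voice from a reference recording (multipart upload).
with open("sample.mp3", "rb") as f:
    resp = requests.post(
        f"{BASE}/voices/add",
        headers={"xi-api-key": API_KEY},
        data={"name": "demo-clone"},
        files={"files": f},
    )
voice_id = resp.json()["voice_id"]

# Step 2: synthesize arbitrary speech in the cloned voice.
tts = requests.post(
    f"{BASE}/text-to-speech/{voice_id}",
    headers={"xi-api-key": API_KEY},
    json={"text": "Hello, this is a cloned voice."},
)
with open("cloned_speech.mp3", "wb") as out:
    out.write(tts.content)
```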
Detecting deepfakes and other AI-generated media is difficult, but services such as Sightengine's AI image detector can help flag them.
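A sketch of what automated screening looks like with Sightengine's REST API is shown below; the genai model name and the type.ai_generated response field come from the vendor's public documentation and may change, so treat them as assumptions.

```python
import requests

# Ask Sightengine to score an image for signs of AI generation.
params = {
    "models": "genai",
    "api_user": "YOUR_API_USER",
    "api_secret": "YOUR_API_SECRET",
    "url": "https://example.com/suspect-image.jpg",  # hypothetical image URL
}
resp = requests.get("https://api.sightengine.com/1.0/check.json", params=params)

# Score ranges from 0.0 (likely authentic) to 1.0 (likely AI-generated).
score = resp.json()["type"]["ai_generated"]
print(f"AI-generated probability: {score:.2f}")
```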
C2PA (the Coalition for Content Provenance and Authenticity), an industry initiative aimed at combating disinformation and deepfakes, is working to enable transparency around AI-generated content and to give publishers, creators, and consumers a way to verify content provenance and authenticity.
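In practice, provenance checking can be as simple as reading a file's C2PA manifest. The sketch below shells out to c2patool, the open-source CLI maintained under the Content Authenticity Initiative; its availability on PATH and its JSON output format are assumptions.

```python
import json
import subprocess

# Inspect a media file's C2PA manifest with the c2patool CLI
# (https://github.com/contentauth/c2patool). Files without a manifest
# typically cause the tool to exit with a non-zero status.
result = subprocess.run(
    ["c2patool", "suspect-image.jpg"],
    capture_output=True,
    text=True,
)
if result.returncode == 0:
    manifest = json.loads(result.stdout)
    # The active manifest records who signed the content and how it was produced.
    print(json.dumps(manifest, indent=2))
else:
    print("No C2PA manifest found; provenance cannot be verified.")
```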