The terminology "deepfake" emerged in 2017 to describe audio and video manipulated with the assistance of artificial intelligence (AI) to resemble real individuals. Initially used for entertainment purposes, deepfakes ranged from playful activities like applying social media filters resembling movie stars to superimposing faces onto videos of favorite singers. However, due to their striking similarity to reality, deepfakes are increasingly being exploited for fraudulent purposes, from deceiving on social media to financially harmful scams.
What are some cases of fraud involving deepfakes?
Face swapping is the most common form of deepfake. It first appeared as facial filters on social media for entertainment, but face swaps became a concern once they were used to shape public opinion or spread fake news. For instance, videos of speeches by Barack Obama or Joko Widodo with nonsensical content have circulated on social media as face-swap deepfakes. While seemingly amusing, they carry real dangers.
Recently, a company reported a deepfake scam impersonating its CEO. An employee received a voice message on WhatsApp that sounded exactly like the CEO. Fortunately, the employee did not trust it immediately and thwarted the scam.
Fraudsters use voice cloning deepfakes, replicating someone's voice so that the recipient believes the message and complies with its requests. Creating a voice clone is now easy thanks to user-friendly tools.
A company incurred billions in losses when one of its employees received a video call from someone appearing to be the CFO, instructing them to transfer money. The employee fell victim to a deepfake scam: the video call actually came from a fraudster using deepfake technology to reproduce someone's face in a new photo or video.
The deepfake fraud cases above typically do not occur in real time. But are there real-time deepfakes? Indeed there are: deepfake attacks now target identity verification processes during app registration. A deepfake "disguises" itself as you and then transacts freely using your personal data. These attacks fall into two categories: presentation attacks and injection attacks.
A Presentation Attack attempts to defraud biometric verification and authentication systems by presenting fake biometrics, such as photos, masks, or other disguises, to the sensor. The aim is unauthorized access to the system behind the biometric check. Deepfake technology makes this easier by generating realistic images or videos of real individuals.
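One common defense against presentation attacks is active liveness detection: the system issues a random challenge that a static photo or pre-recorded video cannot anticipate. The sketch below illustrates the idea in Python; the `detect_gesture` function is a hypothetical stand-in for a real head-pose or blink-detection model, not an actual library call.

```python
import random

# Challenges are chosen at random *after* the session starts, so an
# attacker cannot pre-record a video that matches the prompt.
CHALLENGES = ["blink twice", "turn head left", "turn head right", "smile"]

def detect_gesture(frames, expected_gesture):
    """Hypothetical stand-in: always 'detects' the gesture so the flow
    runs end to end. A real system would run a facial-landmark or
    head-pose model over the captured frames."""
    return True

def run_liveness_check(capture_video):
    challenge = random.choice(CHALLENGES)
    print(f"Challenge issued: {challenge}")
    frames = capture_video()  # frames recorded only after the prompt
    if detect_gesture(frames, challenge):
        return "liveness passed"
    return "liveness failed: possible presentation attack"

# Example run with a dummy capture function standing in for the camera.
print(run_liveness_check(lambda: [b"frame1", b"frame2"]))
```

A printed photo or a replayed video fails this check because it cannot perform an unpredictable gesture on demand.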
An Injection Attack is more sophisticated than a Presentation Attack: instead of showing fake biometrics to a sensor, the attacker injects malicious code or a fabricated media stream directly into the biometric system to gain unauthorized access and manipulate it. For example, fraudsters inject deepfake audio into a voice recognition system during verification. As with Presentation Attacks, the goal is unauthorized access to the security system. According to Gartner's report, Injection Attacks increased by 200% in 2023.
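One widely used mitigation against injected media is to sign each biometric payload inside the trusted capture component and verify the signature server-side, so a frame injected downstream (for example, via a virtual camera) arrives without a valid signature and is rejected. The Python sketch below illustrates the idea with an HMAC; the key name and payload layout are illustrative assumptions, not any particular vendor's API.

```python
import hashlib
import hmac

# Secret provisioned to the trusted capture SDK (illustrative only).
SDK_KEY = b"shared-secret-provisioned-to-the-capture-sdk"

def sign_capture(frame_bytes: bytes, session_id: str) -> str:
    """Runs inside the trusted capture SDK, right at the sensor."""
    msg = session_id.encode() + frame_bytes
    return hmac.new(SDK_KEY, msg, hashlib.sha256).hexdigest()

def verify_capture(frame_bytes: bytes, session_id: str, signature: str) -> bool:
    """Runs on the server before the frame reaches the biometric model."""
    expected = hmac.new(SDK_KEY, session_id.encode() + frame_bytes,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Genuine capture: signed at the sensor, accepted by the server.
frame = b"genuine-camera-frame"
sig = sign_capture(frame, session_id="abc123")
assert verify_capture(frame, "abc123", sig)

# Injected deepfake frame: no access to the SDK key, so it is rejected.
fake = b"injected-deepfake-frame"
assert not verify_capture(fake, "abc123", "0" * 64)
print("signed capture accepted, injected frame rejected")
```

Binding the signature to a session ID also prevents a valid capture from being replayed in a different verification session.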
As part of its data protection solutions, VIDA offers cutting-edge technology with VIDA Deepfake Shield. VIDA controls the entire process within the biometric system, enabling it to close even the smallest fraud loopholes swiftly. With cyber threats continually evolving, adopting a solution like VIDA Deepfake Shield is no longer an option but a necessity.