BLOG | VIDA DIGITAL IDENTITY

Combating Deepfakes with Technology and Regulation

Written by VIDA | Apr 18, 2024 2:00:00 AM

Deepfakes pose a new threat because they can be abused to spread false information and violate privacy. Initially used for entertainment, particularly in films, deepfakes mimic a person's image and voice using AI; the results look highly realistic but are entirely fake.

Meta, the parent company of Facebook and Instagram, announced plans to label AI-generated content starting in May. Rather than removing AI content, Meta will label it as "Made with AI" in order to preserve freedom of expression. This step is considered appropriate because it is becoming increasingly difficult to distinguish authentic content from deepfakes.

Even before this initiative was launched, several tools for detecting deepfake content were already available; a brief illustrative sketch of how such a tool's output might be used follows the list below.

Technologies for Detecting Deepfakes:

  • Microsoft Video Authenticator Tool: Launched in 2020, this tool analyzes videos and photos and generates a "confidence score" indicating the likelihood of manipulation. It focuses on visual inconsistencies, such as subtle fading or grayscale elements at the blending boundaries of manipulated content.
  • Intel's FakeCatcher: Claimed to have a 96% accuracy rate, this tool can analyze physiological aspects of faces such as blood flow and eye movements.
  • Resemble Detect: This tool specifically targets deepfake audio.
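
To make the idea of a manipulation "confidence score" concrete, here is a minimal, hypothetical sketch of how an application might act on such a score. The detect_manipulation function, the dummy score, and the 0.7 threshold are assumptions made for illustration; they are not the real API of Microsoft Video Authenticator, FakeCatcher, or Resemble Detect.

```python
# Hypothetical sketch: acting on a manipulation "confidence score".
# detect_manipulation() is a stand-in for whichever detection tool or service
# you integrate; it is NOT the actual API of any product named above.

def detect_manipulation(media_path: str) -> float:
    """Placeholder detector: return an estimated probability (0.0-1.0)
    that the file at media_path has been manipulated."""
    # A real integration would call your chosen detection tool here.
    return 0.82  # dummy value so the sketch runs end to end


def screen_media(media_path: str, threshold: float = 0.7) -> str:
    """Flag media for human review when the manipulation score is high."""
    score = detect_manipulation(media_path)
    if score >= threshold:
        return f"{media_path}: flagged for manual review (score {score:.2f})"
    return f"{media_path}: passed automated screening (score {score:.2f})"


if __name__ == "__main__":
    print(screen_media("incoming/interview_clip.mp4"))
```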

If you don't have access to the tools above, you can still perform a rough, manual check of suspect content. However, keep in mind that manual methods are not always accurate, as deepfakes are becoming increasingly difficult to distinguish from genuine content.

Deepfake Regulations in Several Countries Worldwide

1. United States

The Federal Trade Commission (FTC) has proposed new rules to address the growing threat of deepfake technology used for fraud. The proposal would expand existing regulations to combat AI-assisted impersonation and follows a wave of complaints about impersonation and fraud carried out with deepfakes. Meanwhile, at the World Economic Forum Annual Meeting in Davos in January 2024, a central theme was building trust, in part because deepfakes pose a serious threat to public trust: such fraud can undermine confidence in government, media, the justice system, and private institutions.

2. China

In 2019, the Chinese government introduced rules requiring individuals and organizations to disclose their use of deepfakes. Then, according to the official website of the Cyberspace Administration of China (CAC), since January 2023 the National Internet Information Office, the Ministry of Industry and Information Technology, and the Ministry of Public Security have jointly issued regulations governing the management of synthetic content in internet information services.

These provisions aim to address the misuse of synthesis technology by emphasizing legal compliance, data security, and user authentication. They also outline responsibilities for supervision, inspection, and legal accountability, with the goal of regulating everything from the production to the distribution of deepfake content and preventing the risks of its misuse.

3. South Korea

As reported by CNTI, in January 2024 a special parliamentary committee in South Korea passed a revision to the Public Official Election Act banning political campaign videos that use deepfakes generated by artificial intelligence (AI) during the election season.

Under the revised law, anyone who displays or distributes deepfake political campaign videos within 90 days before an election can be sentenced to up to seven years in prison or fined up to nearly 50 million won ($37,618). In addition, creators are required to inform viewers that a video contains synthetic content, even if it is posted before the 90-day period.

4. Indonesia

In Indonesia, the ban on deepfakes falls under Law Number 11 of 2008 Concerning Electronic Information and Transactions (the ITE Law). Article 35 of the ITE Law states: "Every Person who intentionally and without rights or against the law manipulates, creates, changes, removes, or damages Electronic Information and/or Electronic Documents with the aim that the Electronic Information and/or Electronic Documents are considered as authentic data."

This action is punishable by imprisonment for up to 12 years and/or a maximum fine of Rp12,000,000,000.00 (twelve billion rupiahs) according to Article 51 of the ITE Law.

Article 66 of Law Number 27 of 2022 Concerning Personal Data Protection also states that "Every Person is prohibited from creating false Personal Data or forging Personal Data with the intention of benefiting themselves or others which may result in harm to others." Then Article 68 states that "Every Person who intentionally creates false Personal Data or forges Personal Data with the intention of benefiting themselves or others as referred to in Article 66 is punished by imprisonment for a maximum of 6 (six) years and/or a fine of up to Rp6,000,000,000.00 (six billion rupiahs)."

Protect Your Business with VIDA Deepfake Shield

Deepfakes are widely used to manipulate personal identity, typically during biometric identity verification. For instance, a photo of you could be deepfaked and used to register for illegal online loan applications, making you an unwitting user of such services. That's why it's important to be cautious when sharing personal photos online. For companies, especially those handling users' personal data, it's crucial to use technology that can prevent deepfake intrusions.

VIDA Deepfake Shield is the latest security feature from VIDA, safeguarding biometric verification systems. It prevents identity forgery, including the use of fake photos, videos, and masks, to ensure verification is performed by the right person.

This technology is part of the VIDA Identity Platform, which is designed to protect against identity forgery attacks. VIDA Deepfake Shield stands out from other systems by employing multiple security layers, including API and SDK components, to defend against a range of deepfake threats, including presentation and injection attacks.

How Does VIDA Deepfake Shield Work?

VIDA Deepfake Shield feels like standard biometric verification, such as taking a selfie. When a user undergoes biometric verification, the system immediately checks the quality and authenticity of the captured image, verifying that it was taken by a real person and has not been digitally altered. If these checks pass, the user's identity is confirmed.

Combining security with user convenience, VIDA Deepfake Shield offers the following advantages (an illustrative sketch of how a client might handle these checks appears after the list):

1. Presentation Attack Detection (PAD): Detects Presentation Attacks in the verification system with Passive Liveness and Morphing Detection.

2. Injection Attack Security: Ensures no injection of malicious code or commands into the verification system.

3. Image Quality Feedback: Users receive real-time feedback on image quality during biometric verification.
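
For illustration only, the sketch below shows how a client application might act on the outcome of these three checks, failing closed whenever an attack signal is raised. All field and function names here are assumptions made for this example; they do not represent VIDA's actual SDK or API.

```python
# Hypothetical sketch of handling a biometric verification result.
# Field names are illustrative only and do not reflect VIDA's actual SDK/API.
from dataclasses import dataclass


@dataclass
class VerificationResult:
    image_quality_ok: bool        # real-time image quality feedback
    pad_passed: bool              # presentation attack detection (passive liveness, morphing)
    injection_check_passed: bool  # no injected media or commands detected


def handle_result(result: VerificationResult) -> str:
    """Map each check to a user-facing action, rejecting on any attack signal."""
    if not result.injection_check_passed or not result.pad_passed:
        return "Reject: possible deepfake, presentation, or injection attack."
    if not result.image_quality_ok:
        return "Retry: ask the user to retake the selfie in better lighting."
    return "Verified: the selfie was taken by a real, live person."


# Example: all checks pass.
print(handle_result(VerificationResult(image_quality_ok=True,
                                       pad_passed=True,
                                       injection_check_passed=True)))
```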

Download VIDA Whitepaper here