
Apr 18, 2024

Combating Deepfakes with Technology and Regulation

There are already several tools available for detecting deepfake content. What are they?

Deepfakes pose a new threat, as they can be abused to spread false information and violate privacy. Initially used for entertainment, particularly in film, deepfakes mimic a person's image and voice using AI; the results look highly realistic but are entirely fake.

Meta, the parent company of Facebook and Instagram, announced plans to label AI-generated content starting in May 2024. Instead of removing AI content, Meta will label it to preserve freedom of expression; such content will carry the label "Made with AI". This step is considered appropriate, as it is increasingly difficult to distinguish authentic content from deepfakes.

Before this initiative was launched, there were already several tools available for detecting deepfake content.

Technologies for detecting deepfakes:

  • Microsoft Video Authenticator Tool: Launched in 2020, this tool analyzes videos and photos and generates a "confidence score" indicating the likelihood of manipulation. It focuses on visual inconsistencies, especially blending boundaries and subtle fading or grayscale elements.
  • Intel's FakeCatcher: Claimed to have a 96% accuracy rate, this tool can analyze physiological aspects of faces such as blood flow and eye movements.
  • Resemble Detect: This tool specifically targets deepfakes in audio and video formats.

If you don't have the above tools, you can still perform early detection of deepfake content. However, it's important to note that this manual method may not always be accurate as deepfakes are becoming increasingly difficult to distinguish from genuine content.

Regulations addressing deepfakes in several countries worldwide

1. United States

The Federal Trade Commission (FTC) has proposed new rules to address the growing threat of deepfake technology used for fraud. The proposal would expand existing impersonation regulations to combat AI-enabled fraud, and it follows complaints about impersonation and fraud using deepfakes. Meanwhile, at the World Economic Forum Annual Meeting in Davos in January 2024, a main focus was rebuilding trust, as deepfakes pose a serious threat to public trust: such fraud can undermine confidence in government, the media, the justice system, and private institutions.

2. China

In 2019, the Chinese government implemented a law requiring individuals and organizations to disclose their use of deepfakes. Then, according to the official website of the Cyberspace Administration of China (CAC), since January 2023 the CAC, the Ministry of Industry and Information Technology, and the Ministry of Public Security have jointly issued regulations governing the management of synthetic content in internet information services.

These provisions aim to address the misuse of synthesis technology by emphasizing legal compliance, data security, and user authentication. The regulations also assign responsibilities for supervision, inspection, and legal accountability, with the goal of regulating deepfake technology from production to distribution and preventing the risks of its use.

3. South Korea

As reported by CNTI, in January 2024 a special parliamentary committee in South Korea passed a revision to the Public Official Election Act banning political campaign videos that use deepfakes generated by artificial intelligence (AI) during the election season.

Under the revised law, individuals face up to seven years in prison or a fine of up to nearly 50 million won ($37,618) for displaying or distributing deepfake political campaign videos within 90 days before an election. In addition, creators must inform viewers about synthetic content present in deepfake videos, even if a video is posted before the 90-day period.

4. Indonesia

In Indonesia, the ban on deepfakes is regulated in Law Number 11 of 2008 Concerning Electronic Information and Transactions (the ITE Law). Article 35 of the ITE Law states: "Every Person who intentionally and without rights or against the law manipulates, creates, changes, removes, or damages Electronic Information and/or Electronic Documents with the aim that the Electronic Information and/or Electronic Documents are considered authentic data."

This action is punishable by imprisonment for up to 12 years and/or a maximum fine of Rp12,000,000,000.00 (twelve billion rupiahs) according to Article 51 of the ITE Law.

Article 66 of Law Number 27 of 2022 Concerning Personal Data Protection also states that "Every Person is prohibited from creating false Personal Data or forging Personal Data with the intention of benefiting themselves or others which may result in harm to others." Then Article 68 states that "Every Person who intentionally creates false Personal Data or forges Personal Data with the intention of benefiting themselves or others as referred to in Article 66 is punished by imprisonment for a maximum of 6 (six) years and/or a fine of up to Rp6,000,000,000.00 (six billion rupiahs)."

A deepfake falsifies another person's personal data, namely biometric data in the form of facial photos. When combined with other types of personal data so that the deepfake is believed to be the genuine individual, the perpetrator violates the privacy rights of the data owner by carrying out activities in the data owner's name without any decision taken by the data owner.

Detecting deepfakes manually:

1. Unnatural Movements

Try to observe the facial expressions and movements in a suspected deepfake video. Genuine human facial and body movements are natural and fluid, while in deepfakes they can appear jerky, rigid, and unsynchronized between body parts.
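As a rough illustration of this idea (a hypothetical sketch, not any of the tools mentioned above), one can score motion smoothness from tracked landmark positions: smooth, natural motion has low frame-to-frame acceleration, while jittery synthetic motion scores higher. The function name and synthetic data below are assumptions for demonstration; real use would need an actual landmark tracker.

```python
import numpy as np

def jerkiness_score(landmarks):
    """Mean magnitude of frame-to-frame acceleration of tracked
    facial landmarks; smooth natural motion yields a low score.
    landmarks: array of shape (frames, points, 2)."""
    velocity = np.diff(landmarks, axis=0)      # per-frame displacement
    acceleration = np.diff(velocity, axis=0)   # change in displacement
    return float(np.linalg.norm(acceleration, axis=-1).mean())

# Smooth (linear) motion vs. erratic motion for one tracked point.
t = np.linspace(0, 1, 50)
smooth = np.stack([t, t], axis=-1)[:, None, :]       # straight-line path
rng = np.random.default_rng(0)
jerky = smooth + rng.normal(0, 0.05, smooth.shape)   # add jitter
print(jerkiness_score(smooth) < jerkiness_score(jerky))  # prints True
```

In practice a threshold for "too jerky" would have to be calibrated against known-genuine footage, since camera shake and compression also add motion noise.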

2. Audio and Video Out of Sync

The lack of synchronization between audio and video is another sign of a potential deepfake. Genuine videos keep visual and audio elements consistently synchronized, while deepfakes often struggle to align them, resulting in a noticeable mismatch between audio and video.
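To sketch how such a sync check might work (an illustrative assumption, not how any specific tool operates), one can cross-correlate the audio loudness envelope with a mouth-opening signal extracted from the video; a correlation peak far from zero lag suggests desynchronization. The function name and synthetic signals below are hypothetical:

```python
import numpy as np

def av_lag_frames(audio_envelope, mouth_opening):
    """Estimate audio-video lag (in frames) as the offset that maximizes
    cross-correlation between the audio loudness envelope and a
    mouth-opening signal; genuine footage should peak near lag 0."""
    a = audio_envelope - audio_envelope.mean()
    m = mouth_opening - mouth_opening.mean()
    corr = np.correlate(a, m, mode="full")
    return int(np.argmax(corr) - (len(m) - 1))

# Synthetic example: the same periodic signal, delayed by 5 frames.
t = np.arange(200)
signal = np.sin(2 * np.pi * t / 25)
print(av_lag_frames(np.roll(signal, 5), signal))  # prints 5
```

In real footage both signals are noisy, so one would typically smooth them and flag only lags larger than a frame or two.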

3. Eye Movement and Blink Detection

Since deepfakes are AI-generated, eye movements or blinks are usually unnatural. Unfortunately, increasingly sophisticated deepfakes make eye movement detection difficult. Using detection algorithms to monitor eye movements and blinks in video subjects can help identify manipulation.
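One common heuristic for blink monitoring is the eye aspect ratio (EAR), which drops toward zero when the eye closes. The sketch below is a minimal illustration that assumes six eye landmarks per frame are already available from a tracker; the function names, landmark ordering, and the 0.2 threshold are assumptions (a frequently cited rule of thumb), not a calibrated setting.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR from six eye landmarks ordered: outer corner, top-left,
    top-right, inner corner, bottom-right, bottom-left."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # first vertical distance
    v2 = np.linalg.norm(eye[2] - eye[4])   # second vertical distance
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, threshold=0.2):
    """Count downward crossings of the EAR series below the threshold."""
    below = np.asarray(ear_series) < threshold
    return int(np.count_nonzero(below[1:] & ~below[:-1]))

open_eye = np.array([[0, 0], [1, 1], [2, 1], [3, 0], [2, -1], [1, -1]], float)
closed_eye = open_eye * [1, 0.1]   # flatten the eye vertically
print(eye_aspect_ratio(open_eye) > eye_aspect_ratio(closed_eye))  # prints True
```

A suspiciously low or perfectly regular blink count over a long clip is then the signal of interest, since humans blink irregularly every few seconds.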

4. Inconsistent Colors and Shadows

Inconsistency in colors and shadows can help identify deepfake videos. This happens because AI still struggles to accurately replicate lighting conditions and shadows in the real world. Sometimes, colors and shadows are also distorted.
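As a loose illustration of checking lighting consistency (an assumption for demonstration, not a method named above), one can compare simple per-channel color statistics between two regions that should share the same lighting; a pasted-in or synthesized face often mismatches its surroundings. All names and data below are hypothetical:

```python
import numpy as np

def channel_stats_distance(region_a, region_b):
    """Distance between per-channel mean and standard deviation of two
    image regions, each an (H, W, 3) array with values in [0, 1]; a
    large distance can hint at mismatched lighting."""
    stats = lambda r: np.concatenate([r.mean(axis=(0, 1)), r.std(axis=(0, 1))])
    return float(np.abs(stats(region_a) - stats(region_b)).sum())

rng = np.random.default_rng(1)
scene = rng.uniform(0.4, 0.6, (32, 32, 3))        # evenly lit background
consistent = rng.uniform(0.4, 0.6, (32, 32, 3))   # matching lighting
pasted = rng.uniform(0.1, 0.9, (32, 32, 3))       # mismatched region
print(channel_stats_distance(scene, consistent) <
      channel_stats_distance(scene, pasted))       # prints True
```

This is deliberately crude: real forensic tools model illumination direction and shadow geometry rather than raw channel statistics.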

Deepfakes have been widely used to manipulate personal identities, typically during biometric identity verification. For example, a photo of you could be turned into a deepfake and used to register for an illegal online loan application, making you a user of that application without your knowledge. That is why it is important to be cautious when sharing your photos online. Companies, especially those handling users' personal data, should likewise use technology that can prevent deepfake intrusions.

As part of its data protection solution, VIDA offers its latest technology, VIDA Deepfake Shield. VIDA controls the entire biometric verification process, so any fraud loophole can be quickly closed.

Download VIDA Whitepaper here

VIDA - Verified Identity for All. VIDA provides a trusted digital identity platform.
