Deepfakes pose a challenge to the security and integrity of digital financial transactions. Deepfake technology, the AI-driven manipulation of image, audio, and video content, has become a new loophole for fraudsters in the world of digital financial transactions. What does the digital transaction landscape in Indonesia look like, and what deepfake threats loom over it?
Overview of Digital Bank Transactions
The value of digital bank transactions in Indonesia reached Rp58,478 trillion in 2023 and is expected to grow by 9.11% in 2024. Currently, 72% of Indonesians prefer digital methods for their daily transactions, and as of 2021 there were 47 million Indonesians with digital bank accounts. This shift in how Indonesians conduct financial transactions has also raised new demands, such as personalized services, secure online transactions, and the privacy of personal data.
Understanding Deepfake and Threats to Digital Finance
A deepfake is a fake photo, video, or audio clip reproduced from an original source using artificial intelligence (AI). The resulting content looks real and closely resembles the original. Initially, the technology was widely used in the entertainment industry. According to the Home Security Heroes report, the number of deepfake videos has increased by more than 550% since 2019, and as the technology advances these videos are becoming increasingly difficult to detect. Unfortunately, deepfake technology has now become a common tool for criminals who exploit vulnerabilities in digital systems. In the financial sector, deepfakes are used to impersonate bank representatives, falsify transaction details, and even access customer accounts.
Cases of Deepfake in Digital Transactions
1. Executive Impersonation
In 2019, an employee at a bank in the United Arab Emirates was deceived by deepfake audio that mimicked the voice of one of the company's executives and requested a transfer of $35 million.
2. Video Fraud
In 2021, a person in Singapore was targeted with a deepfake video call in which the fraudster impersonated the victim's friend and asked for $14,000 to be transferred. Fortunately, the scam was detected before the money was lost.
3. Impersonation for Money Transfer
In 2022 in Hong Kong, a bank employee was nearly deceived out of millions of dollars by a fraudster who used deepfake audio to mimic a client's voice. A similar incident occurred at a bank in India in 2023, where an employee of a private bank received a call from a deepfake scammer claiming to be their superior and asking for a sum of money to be transferred. Thankfully, both attempts were detected.
Losses from Deepfake for Financial Institutions
1. Financial Losses
In financial institutions, deepfake fraud against verification and biometric authentication processes allows personal data to be accessed without the user's permission. Fraudsters can also drain accounts, causing financial losses that affect not only users but also the institutions themselves.
2. Weakened Consumer Trust
The prevalence of deepfake fraud, and the data breaches it leads to, causes users and consumers to lose trust in digital banking and financial services.
3. Decreased Adoption of Financial Technology
As consumer trust declines, there may be reluctance to adopt innovative financial technology, such as mobile banking, digital payments, and online investment platforms. This can hinder the growth and development of the financial services industry.
4. Company Reputation Damage
A decline in user or consumer trust also damages a company's reputation. Reputation is built over the long term across every aspect of the business, so reputational damage has a significant impact on a company's business performance.
5. Operational Disruptions
When banking systems are disrupted, certain services may need to be temporarily suspended to limit further breaches and mitigate larger risks. This certainly affects user satisfaction.
Protect Your Business with Deepfake Shield
VIDA Deepfake Shield is VIDA's latest security feature that protects biometric verification systems from deepfake attacks. There are two types of deepfake attacks:
- Presentation Attack: an attempt to defraud a biometric authentication system by presenting fake biometrics to the camera or sensor. These can take the form of photos, masks, or other disguises designed to deceive the biometric system.
- Injection Attack: a more sophisticated attack that involves injecting malicious code or data into the biometric system to gain unauthorized access and manipulate it. For example, fraudsters may inject deepfake audio into the voice recognition component of a verification system.
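The difference between the two attack types can be illustrated with a small sketch. The code below is a hypothetical example, not VIDA's actual implementation: one common defense against injection attacks is to have the trusted capture SDK sign each frame, so the server can reject biometric data that was injected from outside the genuine camera channel. All names here (`sign_frame`, `verify_frame`, the shared key) are illustrative assumptions.

```python
import hashlib
import hmac

# Hypothetical shared secret provisioned to the trusted capture SDK.
# In practice this would be a device-bound key, not a hard-coded constant.
CAPTURE_KEY = b"demo-capture-channel-key"

def sign_frame(frame_bytes: bytes) -> str:
    """Capture SDK side: tag the frame so the server knows it came
    from the real camera pipeline, not from an injected source."""
    return hmac.new(CAPTURE_KEY, frame_bytes, hashlib.sha256).hexdigest()

def verify_frame(frame_bytes: bytes, tag: str) -> bool:
    """Server side: reject any frame whose tag does not match, such as
    a deepfake video injected directly into the verification API."""
    expected = hmac.new(CAPTURE_KEY, frame_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# A frame from the genuine camera passes verification:
genuine = b"raw-selfie-frame"
assert verify_frame(genuine, sign_frame(genuine))

# An injected deepfake frame, which never went through the SDK,
# carries no valid tag and is rejected:
assert not verify_frame(b"deepfake-frame", "0" * 64)
```

A presentation attack, by contrast, goes through the real camera, so this kind of channel check cannot stop it; that is why a separate liveness-based detection layer is needed.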
VIDA Deepfake Shield has several advantages:
1. Presentation Attack Detection (PAD): A feature that detects Presentation Attacks in the verification system with Passive Liveness and Morphing Detection.
2. Injection Attack Security: A system to ensure that there are no injections of malicious code or commands into the verification system.
3. Image Quality Feedback: Users receive real-time feedback on image quality when performing biometric verification.
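As an illustration of how a feature like Image Quality Feedback might behave, the toy function below checks the average brightness of a grayscale image and tells the user what to fix before the biometric image is submitted. It is a simplified sketch with assumed thresholds, not VIDA's actual logic.

```python
def image_quality_feedback(pixels: list[int]) -> str:
    """Return real-time feedback for a grayscale image given as a flat
    list of 0-255 pixel values (toy example with assumed thresholds)."""
    if not pixels:
        return "No image captured"
    mean_brightness = sum(pixels) / len(pixels)
    if mean_brightness < 60:
        return "Too dark - move to a brighter area"
    if mean_brightness > 200:
        return "Too bright - avoid direct light"
    return "Image quality OK"

print(image_quality_feedback([30] * 100))   # prints "Too dark - move to a brighter area"
print(image_quality_feedback([130] * 100))  # prints "Image quality OK"
```

In a production system the same idea would extend to blur, face pose, and occlusion checks, but the principle is identical: reject poor captures early so the liveness and matching stages receive usable input.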
Read more about deepfake and VIDA Deepfake Shield here.