Picture this: You’re getting daily video messages from George Clooney himself – charming, warm, and oh-so-convincing. That’s what one woman in Argentina thought when she connected with “Clooney” on Facebook. For six weeks, she interacted with him, believing he was the Hollywood star, amused by his warm, lifelike videos. Then, one day, came the pitch: join a fan club, pay for a special card, and unlock exclusive work opportunities. Trusting the Hollywood icon, she transferred approximately Rs 11 lakh, only to discover – after contacting the FBI – that she’d been duped by an AI-generated deepfake.
This woman’s story is a wake-up call. Scammers are weaponising deepfake technology to impersonate celebrities, executives, and even your loved ones, turning trust into a dangerous trap. These scams are exploding across social media platforms, and anyone can be a target.
What are deepfake videos, and why are they so dangerous?
Deepfakes are hyper-realistic videos or audio created using artificial intelligence (AI), machine learning, and face-swapping technology. By combining real images, videos, or voice samples, scammers create convincing fakes that show people saying or doing things that never happened.
“Deepfakes help cybercriminals build trust, creating a base for fraud like digital arrests or identity theft,” said Dr Azahar Machwe, an AI expert from the banking and financial services sector. “They might use a real video with AI-altered audio or create an entirely fake video using just a photo and a voice sample. This is then used to drive the fraud with the deepfake pretending to be a customer, a law enforcement official, or someone in a position of authority.”
Manish Mohta, founder of Learning Spiral AI, said, “These hyper-realistic videos often impersonate CEOs, relatives, or government officials, tricking victims into sending money or sharing sensitive data.”
Anuj Khurana, CEO of Anaptyss, warned of another chilling tactic: “Scammers use deepfakes for KYC fraud, creating fake video IDs to bypass identity checks. Once they gain access, they exploit accounts for money laundering or other crimes. Fake celebrity endorsements for sham investments are also a growing threat.” According to him, unlike traditional phishing emails with obvious typos or awkward phrasing, deepfakes are nearly indistinguishable from reality.
“Fraudsters exploit the seeming authenticity of deepfakes to manipulate human trust and spoof security protocols, including facial verification and voice authorization, to commit financial fraud,” said Khurana.
“While phishing emails rely on text-based deception, deepfake videos exploit the brain’s instinctive trust in facial expressions, voice tonality, and body language,” said Venky Sadayappan, cybersecurity director at Arche. He also said, “Deepfakes are difficult to detect using conventional email filters or cybersecurity tools, as the content often arrives via trusted platforms like Zoom or WhatsApp, creating a dangerous blind spot in traditional security architectures.”
How do scammers create these convincing deepfakes?
Scammers don’t need much to craft a convincing deepfake. Here’s how they gather the raw materials:
Publicly available content: Social media profiles, corporate websites, news clips, and online interviews provide ample photos and videos.
Professional appearances: Executives often feature in webinars, interviews, and presentations, providing scammers with clear visuals and speech patterns.
Voice samples: Podcasts, recorded meetings, or even short audio clips from social media are enough to mimic voices.
Minimal input: AI can generate a deepfake with just 5–10 seconds of video or audio, making almost anyone a potential target.
Spotting deepfakes: Red flags to watch out for
Dr Machwe shared key signs to identify a deepfake before it’s too late:
Look closely at tiny facial features: Things like hair strands, ears, lips, and eye movements are hard for AI to replicate accurately, especially when someone is talking.
Watch for blurring: If parts of the face look smudged or seem to “melt,” especially around those fine features, it could be a sign the video is fake.
Check overall video quality: Watch for sudden drops in clarity; if the whole video feels unnaturally fuzzy, it may be AI-generated.
Listen to the voice: AI voices often sound a bit flat or emotionless. Plus, the lip movements might not sync perfectly with the audio.
Verify the claims: Use reliable sources and online tools to verify claims made in the video.
Trust your gut: If something feels “off” or too good to be true, it probably is.
Digital hygiene tips by the experts:
Tighten your privacy settings: Use strict privacy settings so only trusted people can see what you post.
Think twice before sharing: Avoid posting clear, high-quality selfies or videos publicly. The clearer your face, the easier it is to misuse.
Go easy on hyper-realistic filters: They may seem fun, but they can give scammers more to work with.
Do a regular clean-up of your online presence: Remove old photos or videos that don’t need to be out there anymore.
Turn on two-factor authentication: Enable it on all your accounts to add an extra layer of security.
Stay updated: Keep up with new scams, and if you ever find your image or video being misused, report it immediately.
What to do if you fall prey to these deepfakes?
Sadayappan and Dr Machwe recommended the following in case you fall prey to these videos.
Take a breath, don’t panic: If you suspect you’ve been targeted with a deepfake video, it’s important not to react impulsively.
Start by informing the police: Contact your local law enforcement and check if they have a cybercrime unit that can help.
Notify key organisations: Let your bank, employer, or any other relevant institution know, especially if the video could impact your personal or professional life. Share the video with your organisation’s cybersecurity or IT team; they can help with a technical review.
Save everything: Keep a copy of the video, any messages, and details related to it. This will help with the investigation.
Inform your close circle: Keep your family and friends in the loop so they’re not caught off guard and can support you.
Make a public statement: If you’re comfortable, post on your social media to alert your wider network. If you have the deepfake video, share it with a clear warning that it’s fake; this helps stop misinformation from spreading.
Never give in to blackmail: Threats and demands usually don’t stop; don’t engage, and let the authorities handle it.
Report it: In India, you can report such incidents on the National Cybercrime Reporting Portal (https://cybercrime.gov.in/).
Consider escalation: If the incident is serious, organisations should reach out to law enforcement or CERT-In (India’s cybersecurity agency).
Act fast: The quicker you respond, the better your chances of avoiding financial loss or damage to your reputation.
Remember: you are the victim. At worst, a deepfake might cause embarrassment, but the blame lies fully with those creating or sharing it, not you.
Staying informed, alert, and proactive is your best defence against the rising threat of deepfake video financial scams.
The Safe Side:
As the world evolves, the digital landscape does too, bringing new opportunities—and new risks. Scammers are becoming more sophisticated, exploiting vulnerabilities to their advantage. In our special feature series, we delve into the latest cybercrime trends and provide practical tips to help you stay informed, secure, and vigilant online.