Can you trust what you see? How AI videos are taking over your social media


A few days ago, a video claiming to show a lion approaching a man asleep on the streets of Gujarat, sniffing him and walking away, took social media by storm. It looked like CCTV footage. The clip was dramatic and surreal, but completely fake. It was made using Artificial Intelligence (AI), but that didn’t stop it from going viral. The video was even picked up by some news outlets and reported as a real incident, without any verification. It originated on a YouTube channel, The world of beasts, which inconspicuously mentions ‘AI-assisted designs’ in its bio.

In another viral clip, a kangaroo – allegedly an emotional support animal – was seen attempting to board a flight with its human. Again, viewers were fascinated, many believing the clip to be real. The video first appeared on the Instagram account ‘Infinite Unreality,’ which openly brands itself as ‘Your daily dose of unreality.’ 

The line between fiction and reality, now more than ever, isn’t always obvious to the casual user.


From giant anacondas swimming through rivers to a cheetah saving a woman from danger, AI-generated videos are flooding platforms, blurring the line between the merely unbelievable and the outright impossible. With AI tools becoming more advanced and accessible, these creations are growing in both number and sophistication.

To understand just how widespread the problem of AI-generated videos is, and why it matters, The Indian Express spoke to experts working at the intersection of technology, media, and misinformation.

Harder to spot, easier to make

“Not just the last year, not just the last month, even in the last couple of weeks, I’ve seen the volume of such videos increase,” said Ben Colman, CEO of deepfake detection firm Reality Defender. He gave a recent example – a 30-second commercial by betting platform Kalshi that aired a couple of weeks ago, during Game 3 of the 2025 NBA Finals. The video was made using Google’s new AI video tool, Veo 3. “It’s blown past the uncanny valley, meaning it’s infinitely more believable, and more videos like this are being posted to social platforms today compared to the day prior and so on,” Colman said.

Sam Gregory, executive director of WITNESS, a non-profit that trains activists in using tech for human rights, said, “The quantity and quality of synthetic audio have rapidly increased over the past year, and now video is catching up. New tools like Veo generate photorealistic content that follows physical laws, matches visual styles like interviews or news broadcasts, and syncs with controllable audio prompts.”


Why AI videos dominate your feeds

Why do platforms like Instagram, Facebook, TikTok, and YouTube push AI-generated videos? Beyond technical novelty, the answer is not very complex: such videos grab user attention, something every platform is desperate for.

Colman said, “These videos make the user do a double‑take. Negative reactions on social media beget more engagement and longer time on site, which translates to more ads consumed.”

“Improvements in fidelity, motion, and audio have made it easier to create realistic memetic content. People are participating in meme culture using AI like never before,” said Gregory.

According to Ami Kumar, co-founder of Contrails.ai, “The amplification is extremely high. Unfortunately, platform algorithms prioritise quantity over quality, promoting videos that generate engagement regardless of their accuracy or authenticity.”


Gregory, however, said that demand also plays a role. “Once you start watching AI content, your algorithm feeds you more. ‘AI slop’ is heavily monetised,” he said.

Detection can’t keep pace

“Our own PhDs have failed to distinguish real photos or videos from deepfakes in internal tests,” Colman admitted. 

Are the big platforms prepared to put labels and checks on AI-generated content? Not yet. Colman said most services rely on “less‑than‑bare‑minimum provenance watermark checks,” which many generators ignore or can spoof. 

Gregory warned that “research increasingly shows the average person cannot distinguish between synthetic and real audio, and now, the same is becoming true for video.” 


When it comes to detection, Gregory pointed to an emerging open standard, C2PA (Coalition for Content Provenance and Authenticity), that could track the origins of images, audio and video, but it is “not yet adopted across all platforms.” Meta, he noted, has already shifted from policing the use of AI to policing only content deemed “deceptive and harmful.”
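To make the idea concrete: C2PA works by attaching a cryptographically signed “manifest” to a media file, recording how it was made and edited. Below is a minimal sketch in Python, assuming the open-source c2patool command-line utility from the C2PA project is installed; the helper name read_provenance is illustrative, and the tool’s exact output format and exit behaviour can vary between versions.

import json
import subprocess
import sys

def read_provenance(path: str):
    # Run c2patool on the file; by default it prints the manifest
    # store as JSON. Any failure (tool missing, no manifest found,
    # or a changed output format) is treated as "no provenance".
    try:
        result = subprocess.run(
            ["c2patool", path],
            capture_output=True,
            text=True,
            check=True,
        )
        return json.loads(result.stdout)
    except (FileNotFoundError, subprocess.CalledProcessError, json.JSONDecodeError):
        return None

if __name__ == "__main__":
    manifest = read_provenance(sys.argv[1])
    print("C2PA provenance found" if manifest else "No C2PA provenance found")

The catch, as Gregory’s point suggests, is that the absence of a manifest proves nothing: until the standard is widely adopted, most files, real or synthetic, will simply carry no provenance at all.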

Talking about AI-generated video detection, Kumar said, “The gap is widening. Low-quality fakes are still detectable, but the high-end ones are nearly impossible to catch without advanced AI systems like the one we’re building at Contrails.” However, he is cautiously optimistic that the regulatory tide, especially in Europe and the US, will force platforms to label AI output. “I see the scenario improving in the next couple of years, but sadly loads of damage will be done by then,” he said.

Everyone is a creator now, thanks to monetisation

A good question to ask is: who is making all these clips? The answer, it turns out, is everyone.

“My kids know how to create AI-generated videos and the same tools are used by hobbyists, agencies, and state actors,” Colman said. 


Gregory agreed. “We are all creators now,” he said. “AI influencers, too, are a thing. Every new model spawns fresh personalities.” He added that there is a growing trend of commercial actors producing AI slop: cheap, fantastical content designed to monetise attention.

Kumar estimated that while 90 per cent of such content is made for fun, the remaining 10 per cent causes real-world harm through financial, medical, or political misinformation. A case in point is United Kingdom-based activist Tommy Robinson’s viral migrant-landing video, which was falsified footage.

Creativity versus manipulation

Colman described AI as a creative aid, not a replacement, and insisted that intentional deception should be clearly separated from artistic expression. “It becomes manipulation when people’s emotions or beliefs are deliberately exploited,” he said.

Gregory pointed out one of the challenges – satire and parody can easily be misinterpreted when stripped of context.


Kumar had a pragmatic stance: “Intent and impact matter most. If either is negative, malicious, or criminal, it’s manipulation.”

The stakes leap when synthetic videos enter conflict zones and elections. Gregory recounted how AI clips have misrepresented confrontations between protesters and US troops in Los Angeles. “One fake National Guard video racked up hundreds of thousands of views,” he said. Kumar said deepfakes have become routine in wars from Ukraine to Gaza and in election cycles from India to the US.

What can be done?

Colman called for forward-looking laws: “We need proactive legislation mandating detection or prevention of AI content at the point of upload. Otherwise, we’re only penalising yesterday’s problems while today’s spiral out of control.”

Gregory advocated for tools that reveal a clip’s full “recipe” across platforms, while warning of a “detection-equity problem”: current tools often fail to catch AI content in non-English languages or compressed formats.


Kumar demanded “strict laws and heavy penalties for platforms and individuals distributing AI-generated misinformation.”

What’s at stake for truth?

“If we lose confidence in the evidence of our eyes and ears, we will distrust everything,” Gregory warned. “Real, critical content will become just another drop in a flood of AI slop. And this scepticism can be weaponised to discredit real journalism, real documentation, and real harm.” 

Synthetic content is, clearly, here to stay. Whether it becomes a tool for creativity or a weapon of mass deception will depend on the speed at which platforms, lawmakers and technologists can build, and adopt, defences that keep the signal from being drowned by the deepfake noise.
