Google Veo 3: Creative Breakthrough or Crisis for Journalism?

Launched in May 2025 at Google’s annual I/O developer conference, Google Veo 3 is the tech giant’s direct challenge to Microsoft-backed OpenAI’s video generation model, Sora. Developed by Google DeepMind, the advanced model marks a major leap in generative AI, promising high-quality, realistic video creation from text or image prompts.

But in an age flooded with misinformation and deepfakes, a tool like Veo 3—with its ability to produce lifelike video and synchronised audio—raises pressing questions for journalism. It opens new creative possibilities, yes, but also invites serious challenges around credibility, misuse, and editorial control.

What is Google Veo?

Google touts Veo 3 as a “cutting-edge tool” offering “unmatched realism, audio integration, and creative control”. It comes at a high price—$249.99/month under the AI Ultra plan—and is currently available in the US and 71 other countries, excluding India, the EU, and the UK. Ethical concerns loom, but Google pitches Veo as a powerful resource for filmmakers, marketers, and developers.

According to Google, Veo 3 can generate 4K videos with realistic physics, human expressions, and cinematic style. Unlike many competitors, it also produces synchronised audio—dialogue, ambient noise, background music—adding to the illusion of realism.


The model is designed to follow complex prompts with precision, capturing detailed scenes, moods, and camera movements. Users can specify cinematic techniques like drone shots or close-ups, and control framing, transitions, and object movement. A feature called “Ingredients” allows users to generate individual elements—like characters or props—and combine them into coherent scenes. Veo can also extend scenes beyond the frame, modify objects, and maintain visual consistency with shadows and spatial logic.
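For illustration, a prompt exercising that kind of control might read something like the following (a hypothetical example, not drawn from Google’s documentation):

```text
Slow drone shot descending over a rain-soaked city street at dusk;
cut to a close-up of a newspaper vendor's hands counting coins;
warm tungsten lighting, shallow depth of field; ambient traffic
noise with distant thunder.
```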

Google’s website features examples of Veo in action, including projects in marketing, social media, and enterprise applications. The Oscar-nominated filmmaker Darren Aronofsky has put it to work through Primordial Soup, his storytelling venture, to create short films. On social media, AI artists have released viral Veo clips like Influenders, a satire featuring influencers at the end of the world.

Veo 3 is integrated into Google’s AI filmmaking tool Flow, which allows intuitive prompting. Enterprise access is available via Vertex AI, while general users in supported countries can use it through Google’s Gemini chatbot.
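For developers, that access is programmatic rather than chat-based. The sketch below shows roughly what a Veo request might look like through Google’s google-genai Python SDK, which exposes an asynchronous video-generation call; the model identifier, polling interval, and configuration values here are illustrative assumptions, and actual availability depends on plan and region.

```python
import time

from google import genai
from google.genai import types

# Minimal sketch, assuming the google-genai SDK and an API key in the
# GOOGLE_API_KEY environment variable. The model name is illustrative;
# check Google's current documentation for the identifier on your plan.
client = genai.Client()

operation = client.models.generate_videos(
    model="veo-3.0-generate-preview",  # assumed identifier
    prompt="A slow aerial drone shot over a flooded river delta at dawn",
    config=types.GenerateVideosConfig(aspect_ratio="16:9"),
)

# Video generation is a long-running operation: poll until it completes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

# Download the first generated clip to disk.
video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("veo_clip.mp4")
```

The asynchronous pattern is worth noting editorially: generation takes minutes rather than seconds, which shapes how such tools can realistically fit into breaking-news workflows.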

The journalism dilemma

Veo’s features raise alarms about potential misuse. It could facilitate the creation of deepfakes and false narratives, further eroding trust in online content. There are also broader concerns about its economic impact on creators, legal liabilities, and the need for stronger regulation.

The risks are not theoretical. As highlighted in a June 2025 TIME article titled “Google’s Veo 3 Can Make Deepfakes of Riots, Election Fraud, Conflict”, Veo was used to generate realistic footage of fabricated events—like a mob torching a temple or an election official shredding ballots—paired with false captions designed to incite unrest. Such videos could spread rapidly, with real-world consequences.

A screen grab from a video depicting election fraud, generated by TIME using Veo 3. Realistic footage of fabricated events, paired with false captions designed to incite unrest, could spread rapidly with real-world consequences. | Photo Credit: By Special Arrangement

Cybersecurity threats—like impersonating executives to steal data—are also plausible, alongside looming copyright issues. TIME reported that Veo may have been trained on copyrighted material, exposing Google to lawsuits. Meanwhile, posts on Reddit describe personal harms, such as a student who was jailed after AI-generated images were falsely attributed to them.

There is also the threat to livelihoods. AI-generated content could displace human creators, particularly YouTubers and freelance editors, accelerating what some call the “dead internet”—a space overrun by AI-generated junk media.

To mitigate risk, Google claims that all Veo content includes an invisible SynthID watermark, with a visible one in most videos (though it can be cropped or altered). A detection tool for SynthID is in testing. Harmful or misleading prompts are blocked, but troubling content has still emerged, highlighting the limits of guardrails.
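Google has not published a programmatic detection API for SynthID video watermarks; the Python sketch below is purely hypothetical, illustrating how a newsroom ingest pipeline might gate incoming footage on such a check if one existed. Every function and type here is invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical detector result. No public SynthID video-detection API
# exists at the time of writing; this stands in for one.
@dataclass
class WatermarkResult:
    watermark_present: bool
    confidence: float

def detect_synthid_watermark(video_path: str) -> WatermarkResult:
    """Invented placeholder for a watermark check (always 'not found' here)."""
    return WatermarkResult(watermark_present=False, confidence=0.0)

def ingest_footage(video_path: str) -> str:
    """Sketch of a newsroom ingest gate built on such a check."""
    result = detect_synthid_watermark(video_path)
    if result.watermark_present:
        return "AI-generated: hold, route to editors, label clearly if used"
    # Absence of a watermark is weak evidence: visible marks can be cropped
    # and rival generators may add none, so human verification still applies.
    return "No watermark detected: proceed to standard human verification"

print(ingest_footage("upload.mp4"))
```

Note the asymmetry the comments flag: a positive watermark result is strong evidence, but a negative one proves little, which is why editorial verification cannot be automated away.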

What should newsrooms do?

Despite the risks, Veo presents compelling opportunities for journalism—particularly for data visualisation, explainer videos, recreating historical events, or reporting on under-documented stories. It can help small newsrooms produce professional-quality videos quickly and affordably, even for breaking news.

Used responsibly, Veo could improve storytelling—turning eyewitness accounts of a disaster into a visual narrative, for instance, or transforming dry data into cinematic sequences. Prototyping ideas before committing to full production becomes more feasible, especially for digital-first outlets.

But Veo’s strengths are also its dangers. Its ability to produce convincing footage of events that never happened could destabilise the information ecosystem. If deepfakes flood the news cycle, real footage may lose credibility. The visible watermark is easily removed, and Google’s SynthID Detector remains limited in scope, giving malicious actors room to operate undetected.

To maintain public trust, newsrooms must clearly disclose when content is AI-generated. Yet the temptation to pass off fabricated visuals as real—especially in competitive, high-pressure news environments—will be strong. And because AI outputs reflect their training data, biases could sneak in, requiring rigorous editorial scrutiny.

There is also the human cost. Veo’s automation could eliminate roles for video editors, animators, and field videographers, especially in resource-strapped newsrooms. Journalists may need to learn prompt engineering and AI verification just to stay afloat.


The legal landscape is murky. If an outlet publishes an AI-generated video that causes harm, accountability is unclear. Ownership of Veo-generated content also remains opaque, raising potential copyright disputes.

And then there is the burden of verification. Fact-checkers will face a deluge of synthetic content, while reporters may find their own footage treated with suspicion. As the Pew Research Center reported in 2024, three in five American adults were already uneasy about AI in the newsroom.

A critical juncture

As Veo and tools like it become cheaper and more widely available, their impact on journalism will deepen. The challenge is not simply to resist the tide but to adapt—ethically, strategically, and urgently.

According to experts, newsrooms must invest in training, transparency, and detection tools to reap the creative rewards of AI while safeguarding credibility. Innovation and trust must evolve together. If journalism is to survive this next phase of disruption, it must do so with eyes wide open, they say.

(Research by Abhinav Chakraborty)


Source: https://frontline.thehindu.com/news/google-veo-3-ai-video-generation-deepfakes-journalism/article69709205.ece
