How AI Videos Are Changing Our World – and How to Defend Against Deepfakes

July 24, 2025
How AI Videos Work
AI-generated videos are created through several key steps:
- Input interpretation – Based on a text, image, or video prompt, the system identifies what content is expected.
- Visual and narrative concept generation – It composes visuals and audio using models pre-trained on large datasets.
- Speech and sound synthesis – It generates natural-sounding voices, music, and sound effects.
- Rendering – It combines all elements into a coherent moving image (a conceptual sketch of this pipeline follows the list).
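To make these four stages concrete, here is a purely illustrative Python skeleton. Every function below is a hypothetical placeholder standing in for a large neural model; no real video generator exposes this API.

```python
# Conceptual sketch of a text-to-video pipeline.
# All functions are hypothetical placeholders, not a real API:
# in a real system each stage is backed by large neural models.

from dataclasses import dataclass

@dataclass
class VideoPlan:
    scenes: list[str]   # shot-by-shot visual descriptions
    narration: str      # script for speech synthesis

def interpret_input(prompt: str) -> VideoPlan:
    """Stage 1: parse the prompt and decide what content is expected."""
    return VideoPlan(scenes=[f"establishing shot: {prompt}"], narration=prompt)

def generate_visuals(plan: VideoPlan) -> list[bytes]:
    """Stage 2: compose frames for each scene (placeholder frame data)."""
    return [scene.encode() for scene in plan.scenes]

def synthesize_audio(plan: VideoPlan) -> bytes:
    """Stage 3: produce speech, music, and sound effects (placeholder)."""
    return plan.narration.encode()

def render(frames: list[bytes], audio: bytes) -> bytes:
    """Stage 4: combine frames and audio into one output stream (placeholder)."""
    return b"".join(frames) + audio

if __name__ == "__main__":
    plan = interpret_input("a sunrise over Budapest, drone-footage style")
    video = render(generate_visuals(plan), synthesize_audio(plan))
    print(f"rendered {len(video)} placeholder bytes")
```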
The technology relies on two core model architectures:
- GAN (Generative Adversarial Network): fast generation, but training is often unstable and results can be inconsistent.
- Diffusion models: slower, because the output is denoised over many steps, but more reliable and capable of photorealistic results (see the sampling sketch below).
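As a rough illustration of why diffusion is slower, the following toy NumPy sketch runs the iterative denoising loop of a DDPM-style sampler. The noise predictor here is a dummy stand-in for the trained network; a real video model would denoise latent frames with a large U-Net or transformer.

```python
# Toy sketch of DDPM-style reverse diffusion with NumPy.
# The point is the shape of the algorithm: many small denoising steps
# (slow but stable), versus a GAN's single generator pass (fast but less reliable).

import numpy as np

T = 1000                                    # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)          # standard linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x, t):
    """Dummy noise predictor; in practice a large trained neural network."""
    return np.zeros_like(x)

def sample(shape=(8, 8, 3), rng=np.random.default_rng(0)):
    x = rng.standard_normal(shape)          # start from pure Gaussian noise
    for t in reversed(range(T)):            # iteratively denoise, step by step
        eps = predict_noise(x, t)
        coef = (1 - alphas[t]) / np.sqrt(1 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])
        if t > 0:                           # inject a little noise except at the last step
            x += np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

frame = sample()
print(frame.shape)  # one denoised "frame"; real models repeat this per frame or latent
```

A GAN, by contrast, produces an image in a single generator pass, which is why it is faster but harder to keep stable.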
Ongoing innovations (e.g., OpenAI Sora, Runway, Google Veo) produce videos that are increasingly lifelike and difficult to distinguish from real footage.
The Deepfake Threat
1. Business Fraud
AI-generated deepfake videos are now being used in corporate fraud. One common method is “business identity compromise,” where attackers impersonate a company executive via deepfake video or voice to request urgent financial transfers.
Real-world case: In 2024, Arup’s Hong Kong office lost around 25 million USD after an employee was deceived on a video conference call in which senior colleagues were impersonated by deepfakes.
These attacks don’t target systems—they exploit human trust.
2. Political Manipulation
Recent years have seen deepfakes play a role in political interference:
- 2023: Slovak elections – misleading AI-generated campaign material, including a fabricated audio recording of a party leader.
- 2024–2025: European elections – deepfake “statements” by German, French, and Polish politicians.
- Russia–Ukraine war – fake wartime messages and AI-generated disinformation.
The goal is often not direct deception but eroding public trust and amplifying social division.
How Can We Defend Ourselves?
1. Technological Solutions
- Visual watermarking: AI-generated content can be marked with visible watermarks or embedded digital signatures (a minimal example follows this list).
- Detection algorithms: Trained to spot subtle anomalies that give deepfakes away.
- Source verification: Checking who published content and when, especially if it's controversial or emotional.
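As a minimal example of the visible-marking idea, the sketch below stamps an "AI-generated" label onto a single frame using the Pillow imaging library. The file names and label text are illustrative; invisible provenance watermarks (e.g., signed metadata) require dedicated tooling and are not shown here.

```python
# Sketch: stamping a visible "AI-generated" watermark on one frame with Pillow.
# This only illustrates visible marking; provenance/invisible watermarking
# is a separate mechanism. Paths and label text are example values.

from PIL import Image, ImageDraw

def watermark_frame(in_path: str, out_path: str, label: str = "AI-generated") -> None:
    img = Image.open(in_path).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # Semi-transparent label in the bottom-left corner of the frame.
    draw.text((10, img.height - 30), label, fill=(255, 255, 255, 160))
    Image.alpha_composite(img, overlay).convert("RGB").save(out_path)

# Example usage (file names are placeholders):
# watermark_frame("frame_0001.png", "frame_0001_marked.png")
```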
2. Organizational and Regulatory Responses
- Internal awareness training: Educating employees on deepfake risks and creating internal safety protocols.
- Multi-factor verification: For financial orders, approvals, or sensitive communications (a simple sketch follows this list).
- Regulation: Transparent, tech-neutral rules are needed at both global and local levels to govern synthetic media.
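As one possible shape for such a check, the sketch below gates a payment approval on a time-based one-time code using the pyotp library. The names and workflow are illustrative assumptions, not a prescribed procedure; the point is that a convincing deepfake call alone cannot supply the second factor.

```python
# Sketch: requiring a second, out-of-band factor before executing a transfer.
# Assumes the pyotp library (pip install pyotp); this is an illustrative flow only.

import pyotp

# In practice the shared secret is provisioned once, out of band
# (e.g., in an approver's authenticator app), never sent over email or chat.
SHARED_SECRET = pyotp.random_base32()
totp = pyotp.TOTP(SHARED_SECRET)

def approve_transfer(amount_eur: float, beneficiary: str, otp_code: str) -> bool:
    """Only release the transfer if the one-time code checks out."""
    if not totp.verify(otp_code):
        print("Rejected: invalid OTP; verify the request via a known phone number.")
        return False
    print(f"Approved: {amount_eur:.2f} EUR to {beneficiary}")
    return True

# A deepfake video call alone cannot produce a valid current code:
approve_transfer(250_000.00, "ACME Supplies Ltd.", totp.now())   # passes
approve_transfer(250_000.00, "ACME Supplies Ltd.", "000000")     # rejected
```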
AI-generated videos represent both an aesthetic revolution and a societal challenge. While the creative potential is vast, the risk of manipulation has never been higher. To harness the benefits without falling victim to the dangers, we need collective awareness, smart regulation, and technological vigilance—before the line between real and fake disappears completely.
Author:
Sándor Salamon, IT Team Lead
Sources:
Based on the study "How AI Videos Are Reshaping Our World and How to Defend Against the Deepfake Threat" by Sándor Salamon, published on LinkedIn.