Trust is slow to build and easy to break. It’s the fabric of relationships, institutions, even entire societies. And yet a single fake video can unravel it all: reputations nurtured over decades, bonds forged across generations, and faith in evidence itself.

A grandmother wires money after a call from her “grandson.” An employee approves a fraudulent payment after hearing what sounds like their CEO’s instructions. These scams don’t need millions of views, only a few seconds of belief. Now scale that into politics: in an election year, a single fabricated video doesn’t have to convince everyone, just enough people in the right places. Worse, deepfakes don’t only spread lies; they seed doubt. Every real video can be dismissed as fake, every fake defended as real. Even the courts aren’t immune. Eyewitness testimony once carried the weight of truth; later, video became the gold standard of evidence. If that too collapses, justice itself becomes negotiable, and once trust in institutions slips, it is almost impossible to rebuild.

The impact isn’t abstract. It touches the personal, the political and the societal, but whether the technology corrodes or strengthens trust depends on how we choose to govern it.

Calling for a blanket ban may sound tempting. But bans risk throwing away the medicine with the poison. The smarter path is to regulate intent and outcome, just as we do with pharmaceuticals: test, approve and monitor. The compound itself is neutral; what matters is how it’s used.

Because in the right hands, deepfakes can heal. Simulations can help trauma responders prepare for emergencies. Stroke survivors can reclaim their voices through models trained on old recordings. Teachers can bring historical figures into classrooms in ways textbooks never could. The same tool that corrodes trust can also create it.

We have seen this tension before. Plastics revolutionised packaging before choking our oceans. CRISPR gave us the power to edit genes with breathtaking precision, but also raised questions we are still struggling to answer about ethics, consent and long-term risk. Deepfakes sit in that same uneasy space: liberating for some, corrosive for others. The challenge isn’t whether the tool exists. It’s whether we are willing to design rules that distinguish enrichment from exploitation.

Denmark has chosen to answer that challenge in a radical way: by giving citizens ownership of their own faces and voices. Under the proposal, your likeness would be treated much like intellectual property. If someone created a deepfake of you without consent, you could demand it be removed, sue for damages and hold platforms accountable if they failed to act.

Imagine being able to pull down a fake video of yourself with the same authority as removing stolen music. That’s the spirit of Denmark’s proposal. It’s not an outright ban - satire and parody remain protected - but it is a statement that human dignity deserves legal weight in the digital age. Where the EU’s AI Act focuses on system-wide risk and the US response has been fragmented, Denmark’s approach goes straight to the individual. It asks a disarmingly simple question: shouldn’t you have the right to control your own identity?

The law is still making its way through consultation, and enforcement may not prove easy, but the symbolism is striking. At a moment when deepfakes threaten to dissolve trust in everything we see and hear, Denmark is saying: the first line of defence isn’t the technology, it’s the person.

Part of me hopes we will never need a law like this - but another part knows it may already be too late.

Deepfakes aren’t waiting for perfect laws. They’re already reshaping politics, corroding trust and exploiting individuals. The longer we hesitate, the harder it will be to rebuild what’s lost.

And yet, regulation is never black and white. A blanket ban risks erasing therapeutic and creative benefits. A purely case-by-case system may struggle to keep pace with misuse. Denmark’s proposal isn’t flawless, but it forces us to confront the question at the heart of this debate: do we regulate the technology itself or the way it’s used?

The danger isn’t only that deepfakes will deceive us. It’s that they’ll leave us unable to trust anything or anyone at all. Which raises a final, uncomfortable question: do we fear deepfakes themselves or the society that can’t tell the difference?
