Kristen Gyles | Deepfakes can’t be left unregulated
The explosion in the creation of deepfakes has left us in the middle of an information war. No longer can we safely consume audiovisual content without questioning whether we are watching something real or totally fabricated. For context, a deepfake is an AI-generated picture, video or audio clip depicting a real person saying or doing something they never said or did.
We are at an uncanny stage in the technological era where technological advancement is no longer measured solely by the increase in functionality and convenience that human beings can enjoy, but now by how creative idle people can get in their application of technology. Rather than solving problems, technology has reached a turning point where it has started creating them. What began as technical curiosity has quickly evolved into a social problem that challenges the very idea of truth, identity, and consent in the digital age.
Anyway, we have passed the point of no return. Deepfakes and AI-generated images, videos, and audio that convincingly mimic real people will only get more and more realistic. It is all here to stay. That is why governments should start thinking about how they will face, head-on, the risks associated with unregulated AI use (or misuse).
ACCESSIBLE
Deepfakes are no longer fringe experiments. Advances in generative AI have made deepfake content accessible, cheap to produce, and increasingly indistinguishable from authentic media. In some cases, creating a convincing fake requires only a handful of images and modest computing power. Whether people have nothing to do and just want to create mischief through the use of deepfake technology, or they are hoping to monetise their ‘work’, the creation of deepfakes in a world of dwindling trust in the media, in governments and in authority in general, is itself troubling.
Consider that once upon a time, we blamed the media for misinformation and brainwashing. Now, this democratisation of manipulation tools puts the ability to fabricate reality squarely in the hands of … literally anyone with an Internet connection.
The risk here extends far beyond embarrassment or social awkwardness. At a societal level, deepfakes erode public trust in information itself. If any video or audio clip can plausibly be fabricated, then even genuine evidence becomes suspect. News clips and recordings from our national leaders and from subject matter experts become subject to increasing scepticism since no one really knows what to believe anymore. Furthermore, people with all sorts of agendas finally have ample ammunition to dismiss real footage as fake, adding chaos to an already chaotic society.
The range of harms is widening and becoming deeply invasive too. One of the most troubling uses of deepfake technology has been the creation of non-consensual explicit content, disproportionately targeting women. We are talking about deepfake pornography. This form of digital exploitation has rightly prompted legal responses in several countries.
Denmark is leading the charge in the legal response to the growth of deepfakes. The country has proposed amendments to its Copyright Act aimed specifically at tackling deepfakes. The core idea is simple but legally radical: individuals should have enforceable rights over their own likeness, including their face and voice. In short, we should all own the rights to how we look and sound.
Under the proposed reforms, unauthorised deepfakes could become subject to compensation claims. In other cases, the individual whose likeness has been stolen could demand that the content be removed or deleted, and platforms that fail to act could face significant penalties. Importantly, the law would still preserve space for parody and satire, acknowledging the importance of free expression while drawing a line against harmful misuse.
FORWARD THINKING
Denmark’s approach reflects a refined and forward-thinking outlook on identity in the digital age. Traditionally, copyright laws protect creative works like books, music and films, as opposed to the individuals portrayed in them. Denmark’s proposal effectively treats personal identity as something akin to intellectual property.
Naturally, there are unresolved questions about how these personal identity rights would be enforced across borders, how they would interact with existing free speech protections, and whether they might inadvertently diminish legitimate creative expression. But the alternative is for us to do nothing and eventually find ourselves scrolling past videos of ourselves committing the worst of crimes.
Frankly, high-profile figures, from politicians to entertainers, are increasingly seeking their own legal protections against AI impersonation. The problem for the broader society is that this issue is no longer confined to well-known people. The generation of deepfakes is no longer so novel that it only affects celebrities and public figures. Scammers and other troublemakers have started creating deepfakes of ordinary people like you and me, and especially of those who have a social media presence. This issue is societal and should be treated that way.
Ultimately, the question is this: who owns your identity in a world where it can be perfectly replicated? Governments are beginning to experiment with new legal frameworks to address this. Jamaica ought not to be the last in the race on this. The boundaries of selfhood have to be clearly defined in an age of replication. If we fail to draw those boundaries clearly, we risk entering a future where seeing is no longer believing.
Kristen Gyles is a free-thinking public affairs opinionator. Send feedback to [email protected] and [email protected]
Syndicated from Jamaica Gleaner.