The deepfake dilemma: How it affects privacy, security & law in Aotearoa
Wed, 17th Nov 2021

On a YouTube channel called Genuine Fake, a video shows Prime Minister Jacinda Ardern as the character of Maleficent. Her husband Clarke Gayford then appears shortly afterwards. Even National Party leader Judith Collins looks a bit like a forest fairy princess.

The faces are eerily lifelike, but they're not quite right - they're too smooth, and the eyes barely blink. The voices are wrong too: Jacinda Ardern is not a Hollywood actress (as far as we know), and anyone who has seen the film will know that Angelina Jolie played the lead role of Maleficent. The editing is a bit choppy, but this video is meant for entertainment, not big-budget cinema. A quick browse through the same YouTube channel reveals more than 50 other videos poking fun at various local and international public figures. Welcome to the world of deepfakes.

'Deepfake' is a portmanteau of two words: 'deep learning' and 'fake'. A deepfake is generally described as a video clip or image in which a person's face has been replaced with someone else's. The replacement is usually generated through deep learning, a branch of artificial intelligence. Deepfakes are used for 'entertainment' or politics, such as putting words into the mouths of people like Mark Zuckerberg or Barack Obama.
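For the technically curious, the classic recipe behind many face-swap tools pairs one shared encoder, which learns pose and expression, with a separate decoder per identity - swapping a face means encoding person A and decoding as person B. The PyTorch sketch below illustrates only that general idea; the layer sizes and latent dimension are arbitrary assumptions, not the architecture of any particular app.

```python
# A minimal sketch of the shared-encoder / twin-decoder idea behind
# classic face swaps. Illustrative only: layer sizes and the latent
# dimension are assumptions, not any real tool's architecture.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent code - one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# The shared encoder learns what is common (pose, expression, lighting);
# each decoder learns to paint one person's face over that structure.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

face_of_a = torch.rand(1, 3, 64, 64)     # stand-in for a real face crop
swapped = decoder_b(encoder(face_of_a))  # A's expression, rendered as B
print(swapped.shape)                     # torch.Size([1, 3, 64, 64])
```

In training, each decoder only ever sees its own person's faces; the swap is simply routing one person's latent code through the other person's decoder.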

Deepfakes are not to be confused with CGI, however. Related is the 'uncanny valley', a term that has been around since the 1970s and describes the unease we feel when we see and hear digital characters that look uncomfortably close to human.

For better or worse, deepfakes are the next step in the evolution of image manipulation - photographers have been editing images for years, from the days of photographic plates to the dawn of digital tools like Photoshop and, now, video editing software powered by deep learning.

The 'deep learning' part of deepfakes can be slightly misleading if it conjures images of videos that demand massive amounts of processing power to replace faces frame by frame. In practice, much of the work is cloud-based, and deepfakes don't take much skill to create - there are now face swap apps that can switch faces at the touch of a button, all from a simple smartphone.

While it's fun to see New Zealand's public figures in entertaining or downright bizarre situations, these deepfakes raise serious questions about morality, legality, privacy, and online harm.

Could someone go to the effort of creating a digital version of your CEO - and replicating their voice - to announce something untrue, like the company going into liquidation? Or what if someone created a video of a crime being committed, only to swap out the offender's face for yours? In what some call the misinformation age, such a video could easily be taken as truth - with catastrophic consequences.

Last year, Microsoft launched its Video Authenticator tool to detect whether an image or video has been artificially manipulated. Google has also been involved in similar projects, such as the FaceForensics benchmark. These efforts are designed to crack down on deepfakes and separate the 'real' from the 'manipulated' or 'made up'. But what's the big fuss?
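Most published detection approaches follow a broadly similar recipe: extract face crops from video frames, score each frame with a classifier trained on real versus manipulated examples, then aggregate the per-frame scores into a verdict. The sketch below shows that general pattern only - the backbone, the untrained weights, and the simple averaging are illustrative assumptions, not the inner workings of Video Authenticator or FaceForensics.

```python
# A rough sketch of frame-level deepfake detection: score each frame,
# then average. The model choice and aggregation are assumptions for
# illustration, not Microsoft's or Google's actual pipelines.
import torch
import torch.nn as nn
from torchvision import models

# A standard image backbone with a single real-vs-fake output; in a real
# system this would be trained on a labelled dataset of manipulated media.
detector = models.resnet18(weights=None)
detector.fc = nn.Linear(detector.fc.in_features, 1)

def score_video(frames: torch.Tensor) -> float:
    """frames: (N, 3, 224, 224) face crops from one video.
    Returns the mean per-frame probability that the video is manipulated."""
    detector.eval()
    with torch.no_grad():
        logits = detector(frames).squeeze(1)   # one logit per frame
        return torch.sigmoid(logits).mean().item()

frames = torch.rand(8, 3, 224, 224)            # stand-in for extracted frames
print(f"manipulation confidence: {score_video(frames):.2f}")
```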

Up for debate: Morality, privacy, and online harm 

Deepfakes are not all about entertainment. There is a nasty side. Netsafe's chief Martin Cocker says deepfakes, like most technologies, exist in a grey area.

"Usually the benefits from new technologies outweigh the harms. It's hard to say that about deepfakes."

While true deepfakes are still reasonably rare, he notes that what he calls 'cheap fakes' are more common - often a person's face crudely added to adult images or videos.

Curtis Barnes, co-author of the report Perception inception: Preparing for deepfakes and the synthetic media of tomorrow, says, "Unfortunately, it's just too easy to misuse or abuse synthetic media (including deepfakes), and it's technically challenging to prevent or mitigate.

"The combination of synthetic media and the web as a platform makes it possible for more people to produce media that makes it look or sound like something happened when it didn't, then share it quickly to other people. It is easy to see the kinds of harm that might occur if this is done maliciously or ignorantly."

Deepfakes: Legal - and legally ambiguous

New Zealand does not have any specific rules or regulations that cover deepfakes. Still, tangential laws such as the Privacy Act, Films, Videos and Publications Classification Act, the Copyright Act, the Human Rights Act, and the Harmful Digital Communications Act offer some protection.

In 2019, the Law Foundation backed a research report into synthetic media, including deepfakes. Report co-authors Curtis Barnes and Tom Barraclough explored how New Zealand law could deal with the existence of created and manipulated forms of media.

In the report, Tom Barraclough noted, "Enforcing the existing law will be difficult enough, and it is not clear that any new law would be able to do better. Overseas attempts to draft law for deepfakes have been seriously criticised."

"It is completely legitimate to call for regulatory intervention. But the merits of any course of action cannot be assessed without specifics. What exactly is being proposed? In the case of harmful synthetic media, even if we all agreed we should ban it or regulate it, how could we realistically do that? What exactly are we looking to prevent?"

When we spoke to Curtis Barnes this year for an update, he mentioned a glaring ambiguity in the current law, particularly around synthetic media and sexual image abuse.

"For New Zealand, the key policy question is whether this kind of sexual synthetic media is (or should be treated as) an "intimate visual recording" for the purposes of section 216G of the Crimes Act.

"Much turns on the intended purpose of the existing provision, as well as whether the harms of misusing an actual intimate visual recording of a person are the same as a sexualised but 'fake' representation of them. I think there are several differences between the two phenomena. Nonetheless, sexual synthetic media abuses are still capable of causing kinds of harm that the law should seek to redress and prevent. As such, I think it would be sensible to account for them somewhere else, probably in the Crimes Act.

Barnes adds, "More important than what I think is the matter of what Parliament thinks, and at the moment they have chosen not to seriously debate the topic. They may soon, as Louisa Wall's private member's Bill on revenge pornography has several overlaps. Until Parliament debates the issue and decides one way or another, the status of sexually abusive synthetic media remains an unresolved question in New Zealand law."

Could deepfakes be the next frontier for social engineering and malware?

Security firm Malwarebytes stated in a blog post earlier this year that deepfakes could end up taking centre stage as bait for ransomware attacks. While somewhat alarmist, the post does acknowledge the dangers that deepfakes present.

"A threat actor scrapes videos and voice samples of their target from publicly-available websites to create a deepfake video—but sprinkling in certain elements inspired from ransomware, such as a countdown timer that lasts for 24-48 hours.

"Deepfake ransomware could also happen this way: A threat actor creates deepfake video of their target. Takes screenshots of this video and, pretending to be a legitimate contact of their target, sends them the screenshots and a link to the supposed video that they can watch themselves if they are in doubt."

Curtis Barnes, however, says he is not convinced that synthetic media like deepfakes pose a serious security risk, though it is easy to speculate about how they could be used.

"Most scenarios are already possible without the use of synthetic media. For this reason, most businesses and organisations have already developed systems of verification and trust to avoid being duped. However, where businesses haven't developed these systems, I see no reason to believe that they won't adapt quickly to new threats as they arise - they always do.

"It is now several years since the emergence of this technology and there are very few cases where it is clear that synthetic media has been used to commit a crime."

Barnes has a point - deepfake attacks are rare, although they have garnered the interest of various security firms and media.

Take business email compromise (BEC) scams, for example, in which attackers either hijack an executive's email account or pose as the executive. One common form of BEC scam involves a request for a money transfer or invoice payment that looks like it comes from an executive. Unbeknownst to the person who initiates the transfer, the request is fake, and the money ends up in a scammer's bank account.

Traditionally these scams relied on carefully crafted emails and stolen email signatures, but deepfakes take them to a new level. An attacker can create video or audio from stolen samples of an executive's face and voice, adding another layer of authenticity to the scam.

It seems wild, but it has happened - allegedly. In 2019, the Wall Street Journal reported on a real-life BEC scam: the CEO of a United Kingdom firm unwittingly handed over €220,000 after he thought he was talking to his boss at the firm's parent company. He was, in fact, talking to a fraudster who had used AI to spoof his boss's German accent and tone of voice.

However, Curtis Barnes says this example has never been properly verified, and it's possible that a deepfake voice was never used.

"In my opinion, a deepfake voice was probably never used. In truth, the number of false claims of deepfake-crimes far outweighs the actual number. This may hint towards a greater threat - that synthetic media provides plausible deniability for people who commit ordinary crimes, even when it is not used. But frankly, I'm not persuaded that this is likely to create intractable problems."

To Barnes' point, it's not clear how many of these deepfake or synthetic media attacks have occurred in New Zealand - CERT NZ's quarterly reports don't yet have an explicit category for deepfakes, though any incidents may well be buried in other categories.

So what's the solution?

Malwarebytes suggests that people avoid giving cybercriminals the materials they need to conduct attacks - by that, they mean your images, your videos, and your voice. Unfortunately, that's difficult advice to follow if an image, video, or voice recording of you has ever been posted publicly on social media or elsewhere on the internet.

Legally, New Zealand, like the rest of the world, has a long path to follow. Individual countries could ban the use of deepfake technologies, but as Martin Cocker says, "It is possible to regulate deepfakes – but not by specifically saying an image has to be real. So, for example, if it is an offence to send an image of a person naked, then a deepfake is as much an offence as a real image."

"Governments focus regulation on harms and harmful behaviour. So, for example, if people use technology to harm another person – that should be considered an offence.

"Companies that build and create deep fakes should ensure that outputs are 'watermarked' so they can be detected and removed. Likewise, platforms that host deepfakes should remove them when they are causing harm, just as with any other harmful content.

"Content creators should be liable for the harm that their creations cause, and people who watch deepfakes should be educated to recognise the possibility of deepfakes."

Social media platforms like Facebook and YouTube are cracking down on deepfakes by labelling them as manipulated content or tuning their algorithms to make them less visible - but that won't stop them from existing. Viewers and listeners need to be able to tell the difference. As the technology improves, will we be able to, or will we rely on external video authentication tools from the likes of Microsoft and Google to tell us what is real and what isn't?

And what happens if someone finds themselves on the receiving end of a potentially damaging deepfake? Martin Cocker says that anyone who has found online content that appears to use their likeness can contact Netsafe or the Police.

"It really depends on how the likeness is being used. It could be for a scam, or in a way that breaches the Harmful Digital Communications Act 2015. Netsafe has built a network of contacts across the international ICT industry – so we can often facilitate removal of content from major platforms. We can also provide advice on legal options."

You can report online harm incidents to Netsafe on their website or by phoning 0508 NETSAFE.