This is the first part in a series of articles examining the legal regulation of deepfake technology and deepfakes in Australia and the possible future legal consequences.

Deepfakes are a form of synthetic media, created using human image synthesis techniques, that is emerging as a regulatory challenge in the legal landscape. A deepfake is media content that has been digitally manipulated by artificial intelligence. In a typical application, the artificial intelligence seamlessly superimposes one person’s face onto another person’s body in a video, allowing the creator to produce media portraying a person saying and doing things that the individual never said or did.

While there are wide-ranging applications for deepfake technology, there is undoubtedly a significant potential for its application for nefarious purposes: political leaders can be depicted in compromising situations,[1] people can be portrayed engaging in sex acts without their consent,[2] and consumers can easily be misled by fraudsters.[3]

This article will outline the technology of deepfakes, the various legal regimes currently in place in Australia that potentially regulate deepfakes, and the possible future legal consequences arising out of the technology.


The term “deepfake” is a portmanteau of “deep learning” and “fake.”[4] It first appeared as the username of the Reddit user who initially posted this form of content online in 2017. However, the technology underpinning deepfakes has been studied in academia since the 1990s.

Deepfake content is generated through the use of artificial neural networks – a branch of machine learning.[5] In particular, deepfakes rely on a framework known as a generative adversarial network (GAN).[6] A GAN pits two algorithms against each other to generate an artificial image.

First, one algorithm generates an initial “prototype” of the deepfake based on the information fed into it: photos, videos, and sound recordings of the target of the deepfake. The second algorithm then analyses the “prototype” and compares it against the input dataset to determine whether the “prototype” is fake. Its assessment is relayed back to the first algorithm, which uses the feedback to generate a more realistic fake. This process can be repeated millions of times to produce a convincing deepfake.[7]
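This iterative “generate, critique, refine” loop can be sketched in miniature. The example below is a deliberately simplified numeric toy, not a real image-generating network: the function names, the one-dimensional “data,” and the update rule are all illustrative assumptions, standing in for the neural networks and gradient updates a real GAN would use.

```python
# Toy illustration of the adversarial loop described above: a "generator"
# proposes a fake, a "discriminator" scores how real it looks, and the
# generator refines its output in response. Every name and value here is
# a simplified assumption for illustration only.

REAL_MEAN = 5.0  # the "real" data the discriminator has learned from

def discriminator(sample):
    """Score realism: 1.0 for a sample matching the real data, lower with distance."""
    return 1.0 / (1.0 + abs(sample - REAL_MEAN))

def train_generator(start=0.0, steps=1000, step_size=0.05):
    """Iteratively nudge the generator's output toward whatever scores as more real."""
    fake = start
    for _ in range(steps):
        current = discriminator(fake)
        # Probe both directions and follow the discriminator's feedback
        # (a crude stand-in for gradient-based updates in a real GAN).
        if discriminator(fake + step_size) > current:
            fake += step_size
        elif discriminator(fake - step_size) > current:
            fake -= step_size
    return fake

fake = train_generator()
print(round(fake, 2))  # the generator's output converges near the real data
```

After enough rounds of feedback the generator’s output becomes indistinguishable (to the discriminator) from the real data, which is the intuition behind why repeated iterations yield increasingly convincing fakes.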

Machine learning with GANs is not a recent technological development. However, prior to 2018, it was mostly confined to academia. Now, with the advent of powerful consumer-grade computer chips, combined with the enormous volume of data the Internet provides, anyone with a sufficiently powerful computer can create convincing deepfakes. The democratisation and widespread accessibility of this technology inevitably means that some creators will abuse it with malicious intent, and that legal consequences will arise.


Currently in Australia, formal regulation expressly targeting deepfakes does not exist. Instead, several areas of law incidentally address some of the issues stemming from deepfakes.


The tort of defamation may provide some recourse for a victim of a deepfake.[8] A deepfake creator can maliciously produce content falsely depicting a victim in compromising situations, damaging the victim’s reputation through defamatory imputations. For instance, a vengeful former partner could create videos portraying the victim engaging in sex acts, or a political party could create videos of an opponent consuming drugs. If a defamatory deepfake is created and published, the victim may have a claim against anyone involved in its publication to compensate for the damage to the victim’s reputation.[9]

Defamation law was formulated in an era of print media dominance and historically dealt with the publication of defamatory written or spoken material. As forms of media have developed, however, the law has adapted and now recognises that images can also be defamatory.[10]

Defamation law also applies to digitally altered images. Gilbert v Nationwide News Pty Ltd [2016] NSWSC 845 concerned a newspaper article containing defamatory matter, including the phrases “Sharkwit” and “Greenies caught up in drum-line sabotage scandal.” The plaintiffs were found to have been defamed partly on the basis of a digitally altered photograph featuring the plaintiffs and a “large, aggressive-looking shark evidently attempt [sic] to board their rubber boat.”

Given that defamation law applies to digitally altered images and focuses on repairing and compensating a victim’s damaged reputation, it appears well-suited to managing some of the issues deepfakes will cause. However, as with other internet publications, the law can be slow and limited in its ability to intervene. With deepfakes, this is exacerbated by the fact that almost anyone can make a convincing fake, and that video evidence is typically perceived as highly credible. Difficulties will also arise in proving the identity of the primary author of the deepfake, which may affect the availability of defences and the quantum of damages. Moreover, a convincing and highly damaging video can be created and disseminated virtually anonymously and hosted on foreign servers, making service of proceedings and prosecution of claims additionally difficult.

In addition, while defamation law can often provide a mechanism for monetary compensation, it is not well suited to restraining the dissemination of the impugned content or securing its removal. This is because Australian courts are often reluctant to grant injunctions in defamation proceedings, especially on an interlocutory basis.[11] Interlocutory injunctions are more likely to be granted where there is no countervailing public interest, such as the implied freedom of political communication, to preserve; deepfake revenge porn is one such case.

Copyright law

Copyright law may, in some limited instances, provide a remedy for victims of a malicious deepfake. Deepfakes typically combine separate video or audio content (e.g. Nicolas Cage’s face superimposed on Miley Cyrus’ body).[12] Under the Copyright Act 1968 (Cth), copyright in the original video footage will generally be owned by the creator of the footage. This could give rise to a cause of action for copyright infringement against the person who created the deepfake by reproducing the original footage. The copyright owner could claim damages or an account of profits, and could also obtain a court injunction requiring the removal of the deepfake from online platforms.

The major limitation of relying on copyright law to address deepfakes is that only the owner of the copyright material, who is not necessarily the victim of the deepfake, has standing to bring the claim.

This means that, in revenge porn cases, for instance, the person identifiably depicted in a sex act will not be able to sue in copyright, since they do not own the original footage. The victim would instead have to rely on the owner of the footage to prosecute the claim. It is unlikely that the owner of the original footage would be interested in doing so unless the victim indemnified them for the legal costs and risks of the proceedings. This effectively renders copyright law, as it currently stands, nearly futile in regulating deepfakes.

Consumer law

Consumers misled by deepfakes created by fraudsters or other malicious actors may arguably find protection in the Australian Consumer Law (ACL). Section 18 of the ACL prohibits misleading and deceptive conduct in trade or commerce. In addition, section 29(1)(g) of the ACL provides that “a person must not, in trade or commerce, in connection with the supply or possible supply of goods or services or in connection with the promotion by any means of the supply or use of goods or services…make a false or misleading representation that goods or services have sponsorship, approval, performance characteristics, accessories, uses or benefits.”

These sections protect consumers from unscrupulous individuals who may create deepfakes depicting celebrities endorsing or purportedly affiliating with a product.

The limitation of these ACL provisions is that they only apply to conduct in “trade or commerce.” Accordingly, the ACL only protects consumers where the deepfake was made in trade or commerce, which may be a difficult element to prove. Further, the ACL does not address other, more damaging non-consumer applications of deepfakes, such as revenge porn.


This post outlined some of the current legal frameworks in Australia which may address the issues posed by deepfakes. As they stand, these frameworks are poorly adapted to addressing the most sinister applications of the technology. In the coming years, the legislature must respond by enacting new laws, and it must do so whilst preserving the freedom of political communication and acknowledging the constraints on the effectiveness of legal regulation in the Internet age.

The next post in this series on deepfakes will consider potential legislative responses by examining the approaches of other jurisdictions.

[1] E.g. a deepfake video surfaced appearing to show a political aide in Malaysia engaging in gay sex, a criminal offence there.




[5] Yisroel Mirsky and Wenke Lee, “The Creation And Detection Of Deepfakes” (2021) 54(1) ACM Computing Surveys.

[6] Ted Talas, Maggie Kearney and Ashurst, “Diving Into The Deep End: Regulating Deepfakes Online” (2019) 38(3) Communications Law Bulletin.

[7] Ted Talas, Maggie Kearney and Ashurst, “Diving Into The Deep End: Regulating Deepfakes Online” (2019) 38(3) Communications Law Bulletin.

[8] Defamation Act 2005 No 77 (NSW).

[9] Dow Jones & Co Inc v Gutnick (2002) 210 CLR 575, [25].

[10] Ettingshausen v Australian Consolidated Press Ltd (1991) 23 NSWLR 443.

[11] Ted Talas, Maggie Kearney and Ashurst, “Diving Into The Deep End: Regulating Deepfakes Online” (2019) 38(3) Communications Law Bulletin.