The tech world’s latest fascination, AI, may face legal challenges as its tendency to fabricate news articles and events clashes with defamation laws. Can an AI model, like ChatGPT, commit libel? That unprecedented question is about to be tested in upcoming legal battles.
Defamation refers to the act of publishing or uttering damaging and false statements about someone. This complex legal area varies across jurisdictions, such as the U.S., the U.K., or Australia, where the current drama is unfolding.
Generative AI has already raised unanswered legal questions, such as whether its use of copyrighted material constitutes fair use or infringement. However, just a year ago, AI models for image and text generation were not sophisticated enough to produce content that could be easily mistaken for reality, so issues of false representation were largely theoretical.
But now, the large language model powering ChatGPT and Bing Chat is capable of generating content on a massive scale. Its integration with mainstream products, including search engines, has elevated it from an experimental tool to a mass publishing platform.
So what happens when an AI-generated article claims that a government official was charged with malfeasance, or that a university professor was accused of sexual harassment? A year ago, with few integrations and unconvincing language, hardly anyone would have taken such false statements seriously. But today, these models confidently serve up answers on widely accessible consumer platforms, even when those answers are fabricated: false statements get attributed to real articles, true statements to invented ones, or everything is simply made up.
The nature of how these models work is that they do not know or care whether something is true; they only prioritize making it look true. While this may not be an issue when using AI for simple tasks like homework, it becomes problematic when the technology accuses someone of a crime they did not commit, potentially constituting libel.
Brian Hood, mayor of Hepburn Shire in Australia, has asserted that ChatGPT falsely named him as having been convicted in a bribery scandal from 20 years ago. The scandal was real, and Hood was involved, but he was never charged with a crime, his lawyers say, as Reuters reports.
The question arises: who is responsible for the false statement? Is it OpenAI, the developer of the software? Is it Microsoft, which licensed and deployed it under Bing? Is it the software itself, acting as an automated system? If so, who is liable for prompting the system to create the statement? Does making such a statement in this context constitute “publishing,” or is it more akin to a conversation between two people, potentially amounting to slander? Did OpenAI or ChatGPT “know” that the information was false, and how do we define negligence in such a case? Can an AI model exhibit malice? Does it depend on the law, the case, the judge?
These are all open questions, because the laws and precedents governing defamation were established long before this technology existed. While it may seem unusual to sue a chatbot for uttering something false, chatbots are no longer mere toys. With major companies proposing them as the next generation of information retrieval, replacing search engines, these AI tools are regularly used by millions of people.
Hood has sent a letter to OpenAI, urging them to take action, but it remains unclear what, if anything, can be done under Australian or U.S. law. In another recent case, a law professor found himself accused of sexual harassment by a chatbot citing a fictitious Washington Post article. It is likely that such false and potentially damaging statements are more prevalent than we realize, and they are only now gaining attention and being reported by those implicated.
This legal drama is just beginning, and even legal and AI experts have no idea how it will play out.