A case against Avianca, the Colombian airline, took an unexpected turn when it emerged that the plaintiff’s lawyers had cited six non-existent cases in their filing, all of them generated by the AI chatbot ChatGPT, The New York Times reported today. The fabrications came to light when opposing counsel flagged the fictional cases, prompting US District Judge Kevin Castel to note the presence of “bogus judicial decisions with bogus quotes and bogus internal citations.” As the judge weighs potential sanctions against the plaintiff’s legal team, attorney Steven A. Schwartz has admitted that he relied on ChatGPT for his research, believing its responses to be accurate.
In an affidavit, Schwartz admitted to using OpenAI’s chatbot and described his attempt to verify the authenticity of the cases. Seeking a source, he asked ChatGPT directly whether it was lying. The chatbot apologized for any earlier confusion and insisted that the case in question was real, citing Westlaw and LexisNexis as databases where it could be found. Satisfied, Schwartz then asked about the remaining cases, and ChatGPT consistently affirmed their legitimacy.
The opposing counsel presented a detailed account of the issue, revealing the extent to which the submission by the Levidow, Levidow & Oberman lawyers was riddled with falsehoods. One striking example involved a non-existent case called Varghese v. China Southern Airlines Co., Ltd. The chatbot appeared to be referencing a genuine case, Zicherman v. Korean Air Lines Co., Ltd., but botched the date and other details, placing the decision 12 years after it was actually handed down in 1996.
Schwartz, expressing remorse, said he had been unaware that the chatbot’s output could be false. He now deeply regrets relying on generative artificial intelligence for legal research and has pledged never to do so again without thoroughly verifying the authenticity of its output.
Although Schwartz is not admitted to practice in the Southern District of New York, where the lawsuit was eventually moved, he continued to work on the case. Another attorney at the same firm, Peter LoDuca, became the attorney of record and will be required to appear before the judge to explain the incident.
This incident underscores the pitfalls of relying solely on chatbots for research without cross-referencing their output against reliable sources. Microsoft’s Bing has faced its own share of controversy, with reports of blatant lies, gaslighting, and emotional manipulation. Google’s AI chatbot, Bard, even fabricated information about the James Webb Space Telescope during its first demonstration. Convincingly mimicking written language is of little use when an AI cannot get simple facts right, such as how many times a given letter appears in a word like “ketchup.”