The use of artificial intelligence backfired for a legal team in South Africa when an AI tool generated citations to nonexistent cases. In response, lawyers are calling for clear regulations and strict adherence to ethical standards.
A team of lawyers turned to ChatGPT, an artificial intelligence (AI) chatbot, to gather additional case law to support their arguments in a dispute before the High Court in Pietermaritzburg, in the coastal province of KwaZulu-Natal.
The tool appeared to do just that, and the legal team submitted a notice of appeal citing various authorities and precedents supplied by the chatbot.
However, when the judge used ChatGPT to independently verify one of the references, it emerged to the court's astonishment that many of the cited cases did not exist in any official legal database.
Ultimately, the court ruled against the plaintiff. In the written judgment, the court stated: “It appears that the lawyers placed unwarranted trust in the accuracy of AI-generated legal research and, due to negligence, failed to properly verify the information.”
Facts seemingly invented
Tayla Pinto, a Cape Town-based lawyer specializing in AI, data protection, and IT law, warned of a growing risk to the legal profession.
“When questioned about the origins of the citations, the legal counsel admitted to using generative AI,” Pinto told DW. “This highlights a troubling trend of lawyers misusing generative AI without understanding how to do so ethically and responsibly.”
According to Pinto, there have been three known cases in South Africa in which legal advisers used AI to draft court documents. One occurred in June, when AI was again misapplied in a case brought by Northbound Processing, a mining company, against the South African Diamond and Precious Metals Regulatory Authority.
Similar issues arose in a 2023 defamation trial and in the Pietermaritzburg High Court case, which drew significant attention in 2024 and is now under review by both the Legal Practice Council and the provincial bar association.
AI must be used ‘ethically, responsibly, and in line with legal standards’
The Pietermaritzburg case was initiated by Philani Godfrey Mavundla, who had been suspended from his role as mayor of the Umvoti municipality in KwaZulu-Natal. While he initially won his case against the regional authority, the decision was appealed — and his legal team reportedly placed blind faith in AI-generated case references in their arguments before the High Court.
“This isn’t a technology issue,” Pinto emphasized. “We’ve long relied on technology like calculators and grammar checkers. The problem now is human — it’s about how we use these tools.”
She added, “As AI continues to evolve rapidly, it’s essential that its use in the legal profession is ethical, responsible, and aligned with our professional obligations.”
The court rejected Mavundla’s appeal, citing a low likelihood of success and criticizing the legal submission as poorly prepared and unprofessional. The judge ordered his law firm to cover the costs of the additional court hearings, signaling the court’s strong disapproval of the firm’s reliance on unverified and fabricated legal sources.
A copy of the ruling has been forwarded to the KwaZulu-Natal Legal Practice Council for investigation and possible disciplinary action against the lawyers involved.
Misuse of AI-generated content undermines trust in the judiciary
While only a few formal complaints have been filed so far, several cases are now being referred to the Legal Practice Council (LPC) for further investigation, confirmed Kabelo Letebele, the council's spokesperson in Johannesburg.
Letebele stated that the LPC is closely monitoring trends and developments related to artificial intelligence. “At this point, the LPC believes there is no immediate need for a new ethical rule, as our current regulations, rules, and code of conduct are sufficient to address issues involving AI usage,” he told DW. However, he acknowledged that internal discussions on the matter are ongoing.
Letebele warned legal practitioners against uncritically citing case law generated by AI tools, noting that inaccuracies resulting from such reliance could be considered negligent and potentially misleading to the court.
He also emphasized that the LPC Law Library is freely accessible to all legal professionals, allowing them to verify case law and conduct accurate legal research. To further support ethical practice, the LPC regularly hosts awareness webinars, educating practitioners on common pitfalls and how to remain compliant with professional standards.
Letebele concluded by highlighting a growing concern: judges, prosecutors, and court officials must now be alert to both human and AI-generated errors in legal submissions.
Heavy reliance on AI risks undermining judicial trust, experts warn
“Judges depend significantly on the submissions presented by lawyers during court proceedings, especially concerning legal matters,” said Mbekezeli Benjamin, a human rights lawyer and spokesperson for Judges Matter, an organization promoting transparency and accountability within the judiciary.
Benjamin expressed concern over the growing reliance on artificial intelligence by legal professionals, warning that the technology’s potential for error could mislead the courts.
“This severely undermines the judicial process, as it breeds mistrust among judges about the accuracy and reliability of arguments presented by lawyers,” he explained.
Call for clear guidelines and code of conduct revisions
While Pinto believes specific regulations for AI use in legal research may not be necessary, she emphasizes the importance of verifying AI-generated citations and ensuring compliance with existing ethical standards.
Benjamin, however, argued that internal warnings alone are inadequate. “The legal profession must issue clear, formal guidelines — including amending the Code of Conduct — to regulate AI’s role in judicial proceedings,” he said. “It should also be clearly stated that overreliance on AI without proper verification amounts to professional misconduct.”
He further recommended that a revised code of conduct make clear that inappropriate use of AI can result in serious penalties, including substantial fines or even disbarment.
The South African Law Society echoed this caution, noting that even unintentional submission of false information can be career-ending for a legal practitioner.