In a landmark ruling, the High Court of England and Wales has issued a stern warning to legal professionals about the misuse of generative AI tools such as ChatGPT in legal proceedings. Judge Victoria Sharp emphasized that while these tools may generate plausible-looking legal citations, they are not capable of conducting reliable legal research.
“Such tools can produce apparently coherent and plausible responses to prompts, but those responses may turn out to be entirely incorrect,” she wrote.
In two consolidated cases, the court found that lawyers had submitted filings containing fabricated or irrelevant citations, some of which appear to have been generated by AI tools. In one instance, 18 of the 45 cases cited did not exist; in the other, five non-existent cases were cited. The court declined to initiate contempt proceedings, but made clear that its decision not to do so should not be treated as a precedent.
Judge Sharp reiterated that lawyers have a professional and ethical duty to verify the accuracy of any research, including AI-generated content, using authoritative legal sources. She stated that lawyers who fail to meet this standard “risk severe sanction”, including costs orders, public admonition, contempt of court, or even referral to the police.
The ruling will be forwarded to professional bodies such as the Bar Council and the Law Society to reinforce guidance on responsible AI use in legal practice.