Fake cases in federal court
In Mata v. Avianca, attorneys cited six nonexistent cases generated by ChatGPT and were sanctioned plus referred to a grievance committee.
Loading page...
Draft, review, and advise with cite-checked outputs, privilege controls, and UPL-safe guardrails.
Law firms and legal ops teams rely on copilots for briefs, contracts, and client Q&A. A fabricated citation, unauthorized legal opinion, or leaked privileged memo can trigger sanctions, malpractice claims, and disciplinary action.
Typical deployments
In Mata v. Avianca, attorneys cited six nonexistent cases generated by ChatGPT and were sanctioned plus referred to a grievance committee.
The MyCity business assistant told owners they could serve rat-bitten food or fire employees for harassment complaints—classic unauthorized practice of law.
Researchers demonstrated that crafted prompts in Slack’s AI summarizer could expose private-channel data, illustrating how quickly privileged files could spill into another matter.
The COMPAS risk tool was shown to disadvantage defendants of color, reminding firms to test legal AI for disparate impact before relying on it.
Silo matter-specific corpora, enforce ethical walls, and trace exactly which client files or research databases each prompt can reach.
Attack legal copilots with citation traps, UPL scenarios, prompt injections, and harassment to prove the system refuses when it should and documents every escalation.