
AI Detection for Legal Writing: Contracts, Briefs, and Client Communications

The legal profession has been hit harder by AI hallucination scandals than almost any other field. Attorneys have been sanctioned for citing non-existent cases. Contracts have been drafted with fabricated regulatory citations. The risk is not hypothetical. Understanding how to detect and verify AI-generated legal content is a professional competence issue for anyone working with legal documents.

April 16, 2026 · 9 min read

The Legal AI Risk Landscape

AI use in legal writing spans a wide range of document types with very different risk profiles:

| Content Type | AI Adoption | Primary Risk |
| --- | --- | --- |
| Court filings (briefs, motions, memoranda) | Medium-High | Hallucinated citations, court sanctions, malpractice |
| Contract drafting | High | Fabricated standards/regulations, missing provisions |
| Client advice letters | Medium | Inaccurate legal advice, UPL if non-attorney |
| Legal research memos | High | Hallucinated case citations, inaccurate holdings |
| Discovery documents | Medium | Accuracy, sanctions for false certifications |
| Legal marketing/blog content | High | Inaccurate legal information, bar advertising rules |
| Consumer legal docs (templates, forms) | High | UPL, inaccurate terms, jurisdiction-specific errors |

The Citation Hallucination Problem in Legal AI

The most documented AI failure mode in legal writing is case citation hallucination: AI models produce plausible-looking case citations (correct format, realistic names, plausible courts and dates) that do not exist. This is not a minor formatting error. Citing a non-existent case to a court is a misrepresentation. Courts have imposed sanctions, attorney fee awards, and referrals to disciplinary authorities for this conduct.

Documented Sanctions Cases

The Mata v. Avianca case (S.D.N.Y. 2023) established the precedent: attorneys submitted a brief citing six non-existent cases generated by ChatGPT. After the court identified the fabricated citations, the attorneys were sanctioned, required to submit declarations explaining the circumstances, and ordered to pay a monetary penalty. The case became the defining example of AI-assisted legal malpractice.

By 2025, more than 40 instances of courts sanctioning attorneys for AI citation hallucinations had been documented in the US, spanning federal district courts, state courts, and at least two appellate courts. The pattern is consistent: an attorney relies on AI output without independent verification, cites fabricated cases, and opposing counsel or the court identifies the error.

The defense that "the AI produced it" has been uniformly rejected by courts. The attorney's professional responsibility to verify the accuracy of citations cannot be delegated to an AI tool.

Why Citations Hallucinate

AI language models learn patterns from large corpora of text, including legal documents. They learn that a citation looks like "[Party] v. [Party], [volume] [reporter] [page] ([court] [year])". When generating a legal brief, the model produces text that matches this pattern, drawing on partial information about real cases, legal topics, and case name patterns. The result is a citation that looks syntactically correct but refers to a case that does not exist or that exists but says something different from what the brief claims.
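The citation pattern described above is regular enough to match mechanically, which is useful both for pulling citations out of a draft into a verification checklist and for stripping them before detection. A minimal sketch in Python; the regex is illustrative, covers only the common "[Party] v. [Party], [volume] [reporter] [page] ([court] [year])" form, and will not catch every Bluebook variant (short forms, parallel citations, statutory cites):

```python
import re

# Rough pattern for one common case-citation form, e.g.
# "Smith v. Jones, 123 F.3d 456 (9th Cir. 1997)".
# This is a sketch, not a Bluebook parser.
CITATION_RE = re.compile(
    r"(?:[A-Z][\w.'&-]*\s)+v\.\s(?:[A-Z][\w.'&-]*\s?)+,\s+"  # Party v. Party,
    r"\d+\s+[A-Za-z0-9.]+(?:\s[A-Za-z0-9.]+)*?\s+\d+\s+"     # volume reporter page
    r"\([^)]*\d{4}\)"                                         # (court year)
)

def extract_citations(text: str) -> list[str]:
    """Return every substring of `text` matching the citation pattern."""
    return [m.group(0) for m in CITATION_RE.finditer(text)]

# Hypothetical brief excerpt; the citation is a placeholder, not a real case.
brief = ("As held in Smith v. Jones, 123 F.3d 456 (9th Cir. 1997), "
         "counsel must verify citations.")
print(extract_citations(brief))
# → ['Smith v. Jones, 123 F.3d 456 (9th Cir. 1997)']
```

Every string this produces still needs the manual verification described below; extraction only tells you what to check, not whether it exists.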

This is especially dangerous in legal writing because legal citations are extremely precise by convention. Attorneys, judges, and clerks are trained to trust that properly formatted citations are accurate. The hallucinated citation exploits that trust.

Bar Association Ethics Guidance

Bar associations and courts have issued substantial guidance on attorney use of AI:

Competence Obligations

Model Rule 1.1 requires attorneys to provide competent representation, including "the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation." Multiple state bar ethics opinions have concluded that attorney competence in 2025 includes understanding the limitations of AI tools and how to verify AI-generated legal work product.

The California Bar issued a formal opinion in 2024 concluding that attorneys who use generative AI in their practice must understand how the technology works at a sufficient level to assess its output critically. Simply prompting the AI and submitting its output without review does not satisfy the competence obligation.

Supervision Obligations

Model Rule 5.3 requires attorneys to supervise non-attorney assistants. The Florida Bar and several other state bars have issued opinions concluding that AI tools are subject to supervision obligations analogous to those for paralegals and associates. An attorney who submits AI-generated work product without adequate supervision risks violating Rule 5.3.

Candor to the Tribunal

Model Rule 3.3 prohibits knowingly making a false statement of fact or law to a tribunal, and citing a hallucinated case is a false statement of law. Although the rule's basic prohibition includes a knowledge element, courts have treated failing to verify AI-generated citations as the kind of conscious avoidance or bad faith that supports sanctions under Rule 11 and the court's inherent powers. The attorney's failure to verify is what creates the disciplinary exposure.

Disclosure Requirements

Several courts have adopted local rules requiring disclosure of AI use in filings. The Northern District of Texas, multiple bankruptcy courts, and courts in several states have adopted or proposed AI disclosure requirements for court filings. Attorneys practicing in multiple jurisdictions need to track individual court standing orders and local rules on AI disclosure.

How Detection Tools Work on Legal Text

Legal writing presents specific detection challenges:

Formulaic Language and Required Structure

Legal documents follow rigid conventions: "WHEREAS" clauses in contracts, "COMES NOW" in pleadings, "NOW THEREFORE BE IT RESOLVED" in corporate resolutions. These formulaic phrases are not AI-specific; they are required legal conventions. Pattern-based detectors incorrectly flag required legal formality as AI-generated because it appears consistently across documents.

Citation strings themselves, being highly structured and repetitive in format, also inflate detection scores on legal documents. A brief with extensive citations will score artificially higher than the same document without them.

Adjusted Approach for Legal Text

For meaningful detection results on legal writing:

  • Remove citations and case law strings before running detection. Submit only the analytical prose sections.
  • Remove required formalities (recitals, boilerplate contract language, definitions sections) and submit only substantive analysis and argument sections.
  • Use 85%+ as the investigation threshold on legal analytical text. Below 85%, false positive rates from legal formality are too high.
  • Weight the neural sub-score (DeBERTa) over pattern scores. The pattern detector is most confused by legal formality; the neural model handles it better.
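The adjustments above can be combined into a simple preprocessing pass before text is sent to a detector. Everything in this sketch is an assumption to adapt: the citation regex, the boilerplate markers, and the 0.85 threshold stand in for whatever patterns and tuning your documents and detection tool actually require.

```python
import re

# Illustrative patterns -- extend for your document types.
CITATION_RE = re.compile(
    r"(?:[A-Z][\w.'&-]*\s)+v\.\s(?:[A-Z][\w.'&-]*\s?)+,\s+\d+\s+"
    r"[A-Za-z0-9.]+(?:\s[A-Za-z0-9.]+)*?\s+\d+\s+\([^)]*\d{4}\)"
)
BOILERPLATE_RE = re.compile(
    r"^\s*(WHEREAS|COMES NOW|NOW THEREFORE BE IT RESOLVED)\b.*$",
    re.MULTILINE,
)

def prepare_for_detection(text: str) -> str:
    """Strip citations and formulaic boilerplate, keeping analytical prose."""
    text = CITATION_RE.sub("", text)
    text = BOILERPLATE_RE.sub("", text)
    return re.sub(r"\s+", " ", text).strip()

def triage(score: float, threshold: float = 0.85) -> str:
    """Map a detector score on the cleaned prose to a follow-up action."""
    return "investigate" if score >= threshold else "no action"
```

Run `prepare_for_detection` on the draft, submit only the result to the detector, then feed the returned score to `triage`. Scores on the raw, citation-laden document are not comparable to scores on the cleaned prose.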

What Detection Cannot Tell You

AI detection tools can identify that text was likely AI-generated. They cannot tell you whether the AI-generated text is accurate. For legal writing, accuracy verification is more important than origin detection. Even if a motion tests as human-written, the citations must still be independently verified.

Practical Verification Protocol for Legal Documents

  1. Every citation must be independently verified. For case law: find the case in Westlaw, Lexis, or a free source (Google Scholar, CourtListener). Confirm the citation is accurate and that the proposition attributed to the case is accurate. Do not rely on AI summaries of cases for this verification.
  2. Verify statutory and regulatory citations. AI produces fabricated or outdated statutory citations just as it does case citations. Confirm the statute or regulation exists in its cited form and that the text quoted or paraphrased is accurate.
  3. Check for jurisdiction-specific accuracy. AI legal advice generalizes across jurisdictions. A contract provision that is enforceable in one state may be void as against public policy in another. Legal analysis should be verified for the specific jurisdiction, not just for legal accuracy in the abstract.
  4. Review for temporal accuracy. AI training data has cutoffs. Law changes. Cases are overruled. Statutes are amended. Legal analysis produced by AI may accurately describe the law as of its training data and be incorrect as of today.
  5. Run detection on narrative analysis sections after removing citations and formalities. High scores (85%+) on the substantive analysis flag the section for closer accuracy review.
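The protocol above is a human workflow, but it helps to track its state explicitly so no citation ships half-checked. A minimal sketch of such a checklist record; the field names are hypothetical, the booleans are set by a person after manual lookups in Westlaw, Lexis, Google Scholar, or CourtListener, and nothing here queries any database:

```python
from dataclasses import dataclass

@dataclass
class CitationCheck:
    citation: str
    found_in_database: bool = False      # case actually exists (step 1)
    proposition_confirmed: bool = False  # says what the brief claims (step 1)
    jurisdiction_ok: bool = False        # good law in this jurisdiction (step 3)
    currency_ok: bool = False            # not overruled or amended since (step 4)

    @property
    def verified(self) -> bool:
        return (self.found_in_database and self.proposition_confirmed
                and self.jurisdiction_ok and self.currency_ok)

def unverified(checks: list[CitationCheck]) -> list[str]:
    """Citations that are not yet safe to file."""
    return [c.citation for c in checks if not c.verified]

# Placeholder citations, not real cases:
checks = [
    CitationCheck("Smith v. Jones, 123 F.3d 456 (9th Cir. 1997)",
                  True, True, True, True),
    CitationCheck("Doe v. Roe, 1 F.4th 1 (1st Cir. 2021)"),
]
print(unverified(checks))  # → ['Doe v. Roe, 1 F.4th 1 (1st Cir. 2021)']
```

A filing goes out only when `unverified` returns an empty list; a partially checked citation is treated the same as an unchecked one.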

Unauthorized Practice of Law (UPL) and AI Legal Content

AI legal content published by non-attorneys creates unauthorized practice of law risk when it crosses from general legal information into specific legal advice. State UPL statutes prohibit non-attorneys from practicing law, and many state bar definitions of "practicing law" include providing specific legal advice to individuals about their specific legal situations.

AI-generated legal content that tells a specific user what to do in their specific legal situation (as opposed to explaining how the law generally works) may constitute UPL by the publisher in jurisdictions with broad UPL definitions. Several state bar disciplinary authorities have opened investigations into AI legal services companies for this reason.

Publishers of consumer legal content should maintain clear editorial standards distinguishing general legal information from advice, should include disclaimers that content does not constitute legal advice, and should have their AI-generated content reviewed by licensed attorneys before publication.

For Legal Operations Teams

Corporate legal departments and legal operations teams using AI for contract drafting and management should consider:

  • Standard contract forms reviewed by counsel. AI-generated contract templates should be reviewed by attorneys before becoming organizational standards, not after a contract dispute reveals a defective provision.
  • Track AI tool versions. AI models change over time. A contract template generated by an AI tool in January 2025 may have been accurate as of that model version but could differ if regenerated today. Document which tool version generated which template.
  • Detection as quality triage. Run AI detection on final contract drafts before signature. High scores (85%+) flag the document for attorney review of substantive terms, not just formatting.
  • Regulatory citation verification. Any contract section that cites a specific statute, regulation, or standard (compliance clauses, data processing agreements, export control clauses) should have those citations verified independently regardless of whether AI was involved in drafting.
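The version-tracking and review points above amount to keeping a provenance record per template. A sketch of one such record as JSON; every field name and value here is an illustrative placeholder to adapt to your contract-management system, not a prescribed schema:

```python
import json

# Hypothetical provenance record for an AI-drafted contract template.
record = {
    "template_id": "dpa-standard-v3",
    "generated_by": {
        "tool": "example-drafting-assistant",   # placeholder tool name
        "model_version": "2025-01",             # which model version drafted it
    },
    "generated_on": "2025-01-15",
    "attorney_reviewed_by": "J. Doe",           # placeholder reviewer
    "attorney_reviewed_on": "2025-01-20",
    "citations_verified": True,                 # regulatory cites checked independently
    "detection_score": 0.91,                    # >= 0.85 flagged for substantive review
}
print(json.dumps(record, indent=2))
```

If the template is regenerated with a newer model version, the record gets a new entry; without this, there is no way to answer later which tool version produced the clause in dispute.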

Bottom Line

AI detection tools are a useful but limited layer for legal writing. The formulaic nature of legal documents inflates scores on non-AI content; detection on legal text is most meaningful on analytical prose sections stripped of citations and required formalities. The threshold for meaningful signal is 85%+.

More fundamentally, detection does not replace accuracy verification for legal content. The professional and legal consequences of unverified AI-generated legal writing (sanctions, malpractice claims, ethics complaints) derive from inaccurate content, not from AI origin alone. Detection flags documents for closer review; the review must cover factual and legal accuracy, not just origin.

Check Legal Content with Airno

Remove citations, formalities, and boilerplate before submitting legal text to Airno. Paste analytical and narrative sections. Use 85%+ as the investigation threshold. Follow with independent citation verification and accuracy review.

Try Airno Free