
AI Ethics for Law Firms — A Practical Guide


AI tools are powerful, but using them without understanding the ethical implications can end your career. Several attorneys have already faced sanctions for careless AI use. Here’s what you need to know.

Confidentiality (Rule 1.6)

The most immediate risk. When you paste client information into an AI tool, you may be disclosing confidential information to a third party.

The problem: Most consumer AI tools (ChatGPT, Claude, Gemini free tiers) may use your inputs for model training. That means client information could theoretically influence future outputs.

Safe practices:

  • Never paste client names, case numbers, or identifying details into consumer AI tools
  • Use anonymized or hypothetical versions of your facts
  • Use enterprise AI tools with data processing agreements (CoCounsel, Harvey, Clio Duo)
  • Check whether your tool’s terms of service include a no-training clause
  • ChatGPT’s API and Team/Enterprise plans offer no-training guarantees. The free tier does not.

Template for anonymizing:

Instead of: “John Smith was injured at ABC Corp’s warehouse on January 15”

Use: “A plaintiff was injured at the defendant employer’s facility. The key facts are…”
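For firms that want a first-pass safeguard, this kind of substitution can be partially automated before anything is pasted into an AI tool. A minimal sketch, assuming hypothetical redaction patterns (the party names, case-number format, and placeholder labels below are illustrative, not from any real matter, and automated redaction never replaces attorney review):

```python
import re

# Hypothetical redaction rules mapping identifying details to neutral placeholders.
# A real firm would maintain its own pattern list per matter.
REDACTIONS = [
    (re.compile(r"\b\d{2}-[A-Z]{2}-\d{4,6}\b"), "[CASE NO.]"),    # e.g. 23-CV-01234
    (re.compile(r"\bJohn Smith\b"), "a plaintiff"),               # known party names
    (re.compile(r"\bABC Corp\b"), "the defendant employer"),
]

def anonymize(text: str) -> str:
    """Apply each redaction rule in order; output still requires human review."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

print(anonymize("John Smith was injured at ABC Corp's warehouse."))
# a plaintiff was injured at the defendant employer's warehouse.
```

The point of a script like this is consistency, not completeness: it catches the identifiers you anticipated, and a human still checks for the ones you didn’t.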

Competence (Rule 1.1)

You have a duty to understand the tools you use. “The AI told me” is not a defense for incorrect legal work.

What this means in practice:

  • Understand how the AI tool generates output (probabilistic text generation, not legal reasoning)
  • Know the tool’s limitations (hallucination, training data cutoffs, jurisdiction gaps)
  • Verify every piece of AI output before relying on it
  • Stay current on AI developments relevant to your practice

The Mata v. Avianca standard: An attorney was sanctioned for filing a brief with fabricated case citations generated by ChatGPT. The court found that the attorney failed in their duty of competence by not verifying the AI’s output.

Supervision (Rules 5.1, 5.3)

If your associates or staff use AI, you’re responsible for ensuring they use it properly.

Firm-level requirements:

  • Establish a written AI use policy
  • Train all attorneys and staff on approved tools and prohibited uses
  • Require human review of all AI-generated work product
  • Document AI use in work product where required

A basic AI policy should cover:

  1. Which tools are approved for use
  2. What types of information can and cannot be input
  3. Review requirements before any AI output is used
  4. Disclosure requirements to clients and courts
  5. Who to contact with questions

Disclosure

A growing number of jurisdictions require disclosure of AI use in court filings.

Current landscape:

  • Some federal judges require disclosure of AI use in all filings
  • Several state bars have issued guidance requiring disclosure
  • The trend is clearly toward more disclosure, not less

Safe approach: Disclose AI use in court filings proactively. A simple statement like “AI tools were used to assist in the preparation of this document. All legal research, analysis, and citations have been verified by the undersigned attorney” covers most requirements.

Billing

Can you bill for time spent using AI? The ethics are evolving.

Current consensus:

  • You can bill for time spent reviewing and editing AI output
  • If AI made the work significantly faster, do not bill the hours the task would have taken manually
  • Be transparent with clients about AI use if it materially affects billing
  • Consider whether AI efficiency savings should be passed to the client

Should you tell clients you’re using AI?

Best practice: Yes. Include a brief AI disclosure in your engagement letter:

“Our firm may use artificial intelligence tools to assist with certain tasks such as research, document review, and drafting. All AI-assisted work is reviewed and approved by a licensed attorney. Client confidential information is only processed through enterprise-grade tools with appropriate data security measures.”

This builds trust and protects you if AI use becomes an issue later.

Building Your Firm’s AI Policy

Start with these five questions:

  1. Which AI tools are approved? (List specific tools, not categories)
  2. What data can be input? (Define what’s allowed and what’s prohibited)
  3. What review is required? (Every output reviewed by a licensed attorney)
  4. When must AI use be disclosed? (Court filings, client communications, or both)
  5. Who maintains the policy? (Designate an AI ethics point person)

Review the policy quarterly. AI capabilities and ethical guidance are changing fast.

The lawyers who thrive with AI won’t be the ones who use it most aggressively. They’ll be the ones who use it most responsibly.