Client Confidentiality and AI — A Practical Guide for Lawyers
A partner at a mid-size firm told me something that stuck: “I know AI would save me hours every week. But the first time I opened ChatGPT and started typing a case summary, I froze. Where is this data going?”
That instinct is correct. Lawyers have an ethical obligation to protect client information, and pasting case details into a consumer AI tool raises real concerns. But “I’m worried about confidentiality” has become the default excuse for not using AI at all — and that’s throwing the baby out with the bathwater.
Here’s how to use AI responsibly without compromising your duties. It’s more straightforward than most lawyers think.
The Risk
When you type client information into a consumer AI tool like the free tier of ChatGPT:
- Your input may be stored on the provider’s servers
- Your input may be used to train future AI models
- Other users could theoretically see your data (unlikely but not impossible)
- You may have waived attorney-client privilege by disclosing information to a third party
This doesn’t mean you can’t use AI. It means you need to use it carefully.
The Safe Approach: Anonymization
The simplest solution: remove identifying information before using AI.
Instead of: “Draft a letter to John Smith at 123 Main St about his divorce case with Jane Smith. They have two children, ages 8 and 12.”
Use: “Draft a letter to a client about their divorce case. They have two children, ages 8 and 12.”
The AI doesn’t need real names, addresses, or case numbers to produce useful output. Strip them out, get the draft, then add the details back manually.
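For firms that want to make this strip-and-restore step routine rather than manual, it can be sketched in a few lines of code. This is a minimal illustration, not a vetted redaction tool: the placeholder format and the regex patterns are assumptions for the example, and a real workflow would tailor them to the matter at hand.

```python
import re

# Hypothetical identifier patterns for illustration only -- a real
# workflow would tailor these to the matter (addresses, SSNs, etc.).
PATTERNS = {
    "CASE_NO": re.compile(r"\b\d{2}-[A-Z]{2}-\d{4,6}\b"),  # e.g. 24-CV-01234
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def anonymize(text, names=()):
    """Replace listed names and pattern matches with placeholders.

    Returns (anonymized_text, mapping) so the real details can be
    restored after the AI produces its draft.
    """
    mapping = {}
    # Replace explicitly listed names first.
    for i, name in enumerate(names, 1):
        token = f"[PARTY_{i}]"
        mapping[token] = name
        text = text.replace(name, token)
    # Then replace pattern-based identifiers.
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(sorted(set(pattern.findall(text))), 1):
            token = f"[{label}_{i}]"
            mapping[token] = match
            text = text.replace(match, token)
    return text, mapping

def restore(text, mapping):
    """Put the real details back into the AI's output."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

# Example: the prompt that goes to the AI contains no client details.
prompt = "Draft a letter to John Smith about case 24-CV-01234."
safe, mapping = anonymize(prompt, names=["John Smith"])
# safe: "Draft a letter to [PARTY_1] about case [CASE_NO_1]."
```

Keeping the mapping lets you reinsert the real details into the AI's draft in one step. Note the limits: pattern matching misses context-dependent identifiers (a client described as "the orthodontist on Elm Street" is still identifiable), so a human review of the anonymized text before pasting is still required.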
Enterprise AI Tools
For firms that need to use AI with client data, enterprise tools offer contractual protections:
ChatGPT Enterprise / Team
- Data is not used for training
- SOC 2 compliant
- Admin controls for data retention
- $25-60/user/month
Claude for Business
- No training on your data
- Enterprise data agreements available
- SSO and admin controls
CoCounsel (Thomson Reuters)
- Built for legal use
- Westlaw-grade data security
- Firm-level data isolation
Microsoft Copilot for Enterprise
- Data stays within your Microsoft 365 tenant
- Inherits your existing security policies
- No data shared with Microsoft for training
What You Can Safely Use Consumer AI For
Even with free AI tools, some tasks are safe:
- Generic legal research — “What is the statute of limitations for breach of contract in California?”
- Template drafting — “Draft a standard NDA template” (no client details)
- Writing improvement — “Make this paragraph more concise” (if the paragraph contains no identifying info)
- General knowledge — “Explain the difference between Chapter 7 and Chapter 13 bankruptcy”
What Requires Enterprise Tools or Anonymization
- Anything with client names, case numbers, or identifying details
- Document review with actual client documents
- Case-specific legal analysis
- Communications containing privileged information
Creating a Firm AI Policy
Every firm needs clear rules. A basic policy should state:
- Approved tools: List which AI tools are authorized
- Anonymization requirement: All client-identifying information must be removed before using non-enterprise AI tools
- Enterprise tools: Specify which tools are approved for use with client data
- Prohibited uses: No uploading of privileged documents to consumer AI tools
- Training: All attorneys must complete AI training before using AI tools
- Incident response: What to do if client data is accidentally exposed
The Practical Reality
Most lawyers can get enormous value from AI while maintaining strict confidentiality. The key is building habits:
- Pause before pasting — does this contain client information?
- Anonymize by default — make it automatic, not an afterthought
- Use enterprise tools for sensitive work — the cost is justified
- Document your practices — if a client or bar association asks, you can show your safeguards
Confidentiality and AI aren’t incompatible. They just require intentionality. The firms that figure this out now will have a significant competitive advantage over those still debating whether to use AI at all.