A solo personal injury attorney in Tampa told me last fall that her firm books most of its new cases between 7 PM and 11 PM on weekdays. Not during business hours. Not the next morning. That evening window, while the caller is still angry about the accident and still on the phone with the first plaintiff's lawyer they Googled.
For years her answer was a contract answering service that took a message and forwarded it the next morning. The conversion rate from message to signed client was 14 percent. After switching to a custom AI intake agent that runs a real qualifying script, flags conflicts against the case management system, and warm-transfers urgent matters to her cell, that number went to 41 percent. The intake script did not change. The speed of response did.
This is the actual problem an AI receptionist solves for a law firm. It is not the receptionist part. It is the intake part - the structured, defensible, time-sensitive capture of facts during the window when a prospective client is still deciding which firm to hire.
Why Legal Intake Is Not a Receptionist Problem
Most articles about AI receptionists treat legal intake as a fancier version of a hair salon booking flow. It is not. Legal intake carries a stack of obligations that the typical answering service was never designed to handle, and that the typical AI chatbot vendor rarely understands.
A real legal intake call has to do at least the following before the call ends:
- Capture the caller's identity and contact details without creating an attorney-client relationship the firm did not intend.
- Run an initial conflict check against current and former clients, opposing parties, and related entities.
- Identify the jurisdiction, because a wage-and-hour matter in California is not the same problem as the same facts in Texas.
- Triage by practice area, because a firm that handles employment will not take a probate case and should refer it cleanly.
- Flag statute-of-limitations risk so an urgent matter does not sit in a queue until Monday.
- Avoid eliciting privileged information from someone who may already be represented.
- Log everything in a way that survives a malpractice review or a bar complaint.
A generic AI receptionist running on a five-question script will fail at least four of those items. The American Bar Association's Formal Opinion 512 on generative AI makes the supervision and confidentiality duties explicit: the firm cannot outsource judgment, and it cannot expose client information to a system without controls. That guidance applies to your receptionist just as much as to your associate.
The implication is straightforward. If you buy a tool that was designed for medspas and dental offices and then pointed at a legal use case, you have bought a liability surface, not a workflow.
The Four Failure Modes That Kill Cases
Across the law firm intake deployments we have reviewed, four patterns account for almost all of the lost cases and bar-adjacent risk. They are worth naming.
The Phantom Engagement. The AI is friendly. It says "we can definitely help with that" before any conflict check has run. The caller hangs up believing they have a lawyer. Six weeks later, they call back to ask why nothing has happened, and the firm discovers that the matter is in conflict with a current client. According to the Clio Legal Trends Report, missed and mishandled intake is the most common reason prospects rate firms poorly before they ever sign an engagement letter.
The Wrong Lane. The AI captures the matter as "employment" because the caller said "my boss." It is actually a workers' compensation claim with a 30-day notice requirement that has already started running. The handoff goes to the employment partner, who reads the notes Monday morning, by which point the prospect has already retained a comp specialist.
The Privilege Leak. The AI keeps the caller on the line and asks open-ended follow-up questions. The caller volunteers that they have already spoken to opposing counsel, or that they are mid-deposition in a related matter. That transcript now sits in a third-party vendor's database with no clear retention policy and no protection against subpoena.
The Black Box. A bar complaint or malpractice claim arrives. The firm asks the vendor for the full call log, the transcript, the prompts the model was given, and the conflict-check decisions. The vendor produces a summary screen. There is no way to reconstruct what the AI actually said to the caller. That is not an audit trail. It is a marketing dashboard.
The first three failures are intake design problems. The fourth is an architecture problem, and it is the one that determines whether you can sleep at night.
How Custom AI Intake Actually Works
A properly designed AI intake agent for a law firm is not a single chatbot. It is a small choreography of structured steps, each with a guardrail and an escape hatch to a human.
The structure most firms end up with looks like this:
- Greeting and identification. The AI opens by stating that it is an automated intake assistant for the firm, that the call is being recorded, and that no attorney-client relationship is formed until the firm confirms representation. This single disclosure prevents most phantom-engagement disputes.
- Pre-conflict screening. Before the caller describes the matter in detail, the AI captures the names of all parties, including opposing parties and any other involved attorneys. These names hit the firm's case management system in real time and return a conflict signal.
- Practice area triage. A small classifier maps the caller's stated problem to the firm's matter taxonomy. If the matter is out of scope, the AI offers a clean referral and ends the call without taking unnecessary facts.
- Jurisdiction and timing. The AI captures the state and county of the underlying facts and applies a simple lookup against statute-of-limitations and notice-of-claim deadlines. Anything inside a 30-day window flags as urgent and triggers a warm transfer or after-hours page.
- Structured fact capture. Only after the matter clears conflicts and falls in scope does the AI run the qualifying script: damages, dates, documents, prior counsel, current representation status.
- Human handoff. The transcript, the structured fields, the conflict-check result, and an audio recording all land in the case management system as a single intake record. An attorney reviews and decides whether to schedule a consultation.
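The ordering in that flow can be sketched as a small pipeline. Everything below is illustrative: the stub data, the function name, and the 30-day urgency threshold stand in for whatever the firm's case management API and deadline tables actually expose.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical stand-ins for the firm's real systems.
CURRENT_CLIENTS = {"acme corp"}                                   # conflict database stub
IN_SCOPE = {"personal_injury", "employment"}                      # firm's matter taxonomy
NOTICE_DEADLINES = {"workers_comp": 30, "personal_injury": 730}   # days, illustrative only

@dataclass
class IntakeResult:
    disposition: str              # "referral", "urgent_transfer", or "attorney_review"
    record: dict = field(default_factory=dict)

def run_intake(parties, practice_area, incident_date, facts):
    """Order matters: conflicts and scope are resolved BEFORE fact capture."""
    # 1. Pre-conflict screen on party names only -- no facts taken yet.
    if any(p.lower() in CURRENT_CLIENTS for p in parties):
        return IntakeResult("referral", {"reason": "conflict"})
    # 2. Practice-area triage: out-of-scope matters get a clean referral.
    if practice_area not in IN_SCOPE:
        return IntakeResult("referral", {"reason": "out_of_scope"})
    # 3. Deadline check: anything inside a 30-day window flags as urgent.
    days_left = NOTICE_DEADLINES.get(practice_area, 365) - (date.today() - incident_date).days
    if days_left <= 30:
        return IntakeResult("urgent_transfer", {"days_left": days_left})
    # 4. Only now does structured fact capture run, then attorney review.
    return IntakeResult("attorney_review", {"parties": parties, "facts": facts})
```

The point of the sketch is the control flow, not the stubs: a conflict or scope failure exits before any facts are captured, and a deadline inside the urgency window short-circuits straight to a transfer.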
Two things in that flow are worth pulling out, because they are where vendors most often cut corners.
The first is the pre-conflict screen before fact capture. Doing it in the right order matters. If the caller has already spilled the facts before the system knows there is a conflict, you have a problem the disclosure language cannot fully fix.
The second is the warm transfer logic. A modern voice agent built on tools like Retell or a custom stack on Anthropic or OpenAI can hand off to a live attorney mid-call with the full context already in the CRM. That is the difference between catching the personal injury case at 9 PM and reading about it on Monday in a competitor's press release. See our earlier analysis of voice agents for customer support for the underlying patterns.
Audit Trails Are the Real Product
If you remember nothing else from this article, remember this: in legal work, the artifact matters more than the call.
Every intake call produces, or should produce, a defensible record. That record is what an insurer, a bar examiner, or opposing counsel will ask for. The minimum useful audit trail for an AI intake system has six elements:
- The full audio recording, retained according to the firm's records policy.
- The verbatim transcript, with speaker labels and timestamps.
- The structured fields extracted by the AI, with the prompt and model version that produced them.
- The conflict-check inputs, the records that were searched, and the result returned.
- The disclosure language the AI actually used, captured exactly as spoken.
- The handoff event, including which human received the matter and when.
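Those six elements can be thought of as one immutable record per call. The sketch below is a shape, not a vendor schema; every field name and value is hypothetical, and a real system would write this to the case management system rather than build it in memory.

```python
from dataclasses import dataclass, asdict
from datetime import datetime

@dataclass(frozen=True)  # frozen: an audit record should never be mutated after the call
class IntakeAuditRecord:
    """The six elements a malpractice review or bar complaint will ask for."""
    audio_uri: str            # full recording, retained per the firm's records policy
    transcript: list          # [(timestamp, speaker, utterance), ...] with labels
    extracted_fields: dict    # structured facts the model produced
    prompt_version: str       # exact prompt and model version that produced them
    conflict_check: dict      # inputs, records searched, and the result returned
    disclosure_text: str      # disclosure language as actually spoken, verbatim
    handoff: dict             # which human received the matter, and when

# Entirely hypothetical example values.
record = IntakeAuditRecord(
    audio_uri="s3://firm-records/2025/call-0192.wav",
    transcript=[("00:00:03", "agent", "This is an automated intake assistant...")],
    extracted_fields={"matter_type": "personal_injury", "incident_date": "2025-09-14"},
    prompt_version="intake-prompt-v12 / model-2025-06",
    conflict_check={"parties_searched": ["Acme Corp"], "result": "clear"},
    disclosure_text="No attorney-client relationship is formed until the firm confirms representation.",
    handoff={"attorney": "J. Rivera", "at": datetime(2025, 11, 4, 21, 7).isoformat()},
)
# asdict(record) is what lands in the case management system as a single intake record.
```

Making the record frozen is a small design choice that matters here: a log you can edit after the fact is exactly the marketing dashboard problem described above.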
That list reads like overkill until the first time a prospective client claims the firm refused to help them. Then it reads like the only defense you have. Research from Thomson Reuters Institute on legal tech adoption has consistently shown that firms with structured intake records resolve disputes faster and at lower cost than firms that rely on attorney recollection.
A useful diagnostic when evaluating a vendor: ask them to show you yesterday's audit log for a call that was escalated. If they cannot produce one inside ten minutes, you do not have an audit trail. You have a sales demo.
Comparing the Four Options
Most firms end up choosing among four real options. Each one has a fair use case.
| Option | Best For | Conflict Check | Audit Trail | Typical Monthly Cost | Conversion Lift vs Voicemail |
|---|---|---|---|---|---|
| In-house human receptionist | Firms with stable 9-5 call volume and one office | Manual, attorney-dependent | Notes in CRM | $4,000 to $7,000 (loaded) | Baseline |
| Legal answering service | Solo and small firms with low after-hours volume | None, just message-taking | Call log, no transcript | $200 to $500 | Modest |
| Generic AI receptionist | Non-legal businesses, simple booking | None | Summary only | $100 to $400 | Low for legal use |
| Custom AI intake agent | Firms where intake quality determines case mix | Real-time API against case management | Full audio, transcript, structured fields, model version | $1,500 to $4,000 plus usage | Significant for after-hours leads |
The honest read on this table: if you are a two-attorney estate planning practice with a steady stream of warm referrals, a legal answering service is fine. You do not need infrastructure. If you are a contingency-fee practice in a competitive market where the first responder books the case, the custom build pays for itself on a single recovered matter.
The trap is the middle. Firms that pick a generic AI receptionist because it is cheaper than custom and faster than a human are buying the worst of all three options for a use case that punishes shortcuts. The full economics of build versus buy across AI tooling categories show the same pattern, which we worked through in our 3-year cost comparison of AI agent platforms versus custom-built AI.
What to Ask Before You Sign
The market for AI receptionists is loud, and the demos are good. The diligence questions that separate a serious vendor from a polished one are short and specific.
- Where does the call audio physically reside, and for how long? If the answer involves a US-based provider with documented retention policies, keep going. If it involves "our cloud partner," walk.
- Can the system call our case management system's conflict API in real time during the call, not afterward? If not, the conflict check is theater.
- What happens when the model is uncertain? An AI that confidently guesses on a legal matter is more dangerous than one that escalates. The escalation rate is a feature, not a bug.
- Will you sign a business associate agreement and provide SOC 2 documentation? In a profession with confidentiality duties, this is table stakes.
- Show me the audit log. Not the dashboard. The log.
Practical resources for legal technology evaluation include Bob Ambrogi's LawSites, the Stanford CodeX center, and your state bar's most recent ethics opinions on generative AI. Most state bars have now issued at least preliminary guidance, and a few, including California and Florida, have published detailed frameworks worth reading before any deployment.
How OpenNash CX Can Help
If your firm is evaluating an AI intake system and the off-the-shelf options do not map cleanly to your conflict process, your case types, or your audit requirements, this is the work OpenNash CX does. We build custom intake agents that sit on top of your existing case management system, run real conflict checks against your data rather than a generic database, and produce the kind of audit trail that holds up in a malpractice review.
The engagement typically follows four steps. We audit your current intake flow and the cases you are losing, including a sample of after-hours calls. We design the conversation flow, the conflict logic, the jurisdiction rules, and the escalation triggers with your intake team. We build the agent against your case management system and voice stack, with full audit logging. We deploy with a supervised pilot, then hand off ownership of the prompts, the workflows, and the data.
Two kinds of firms should not call us. If you are a small practice with low call volume, a good answering service is the right answer. If you want a chatbot that pretends to be a human, we will not build that. Everyone else, including firms currently on Smith.ai, Ruby, or a generic AI receptionist where the conversion math is not working, is exactly who we want to talk to.
Book a call to walk through your current intake numbers and what a custom build would change.
The next prospective client who calls your firm at 8:47 PM on a Tuesday is going to talk to someone in the next sixty seconds. The only open question is whether that someone is you, your competitor, or a system you actually trust.