AI for legal drafting in Indian law firms: what the first week actually looks like
A walkthrough of the AI drafting workflow we install in the first week at Indian law firms — including DPDP-safe configuration, the human-review loop and the metrics that prove it's actually faster.
TL;DR
The AI drafting workflow in an Indian law firm starts producing usable first drafts in week one — not as a research project, but as a daily tool. We install it on Microsoft 365 inside the firm's existing tenant, train the cohort on day five, and by Friday the firm has shipped two real notices and a reply, all reviewed by a partner. This post is the actual week-one playbook.
What "AI drafting" really means in an Indian legal context
"AI drafting" is a term that's done a lot of damage in the Indian legal market. Vendors use it to mean anything — a chatbot that answers questions about contract law, a summariser that mauls a judgment into bullet points, a marketing widget that drops boilerplate into a Word document. None of those is what we install.
What Matter Labs installs is a bounded, audited drafting workflow that produces the first 40% of a specific class of documents — Section 138 NI Act replies, Order VII plaints, NDAs, S.91 CrPC objections, S.34 Arbitration Act setting-aside applications — using the firm's own past drafts as the prompt context. The AI doesn't draft from a public corpus. It drafts from your corpus.
That distinction matters for three reasons:
- Privilege. The model only sees text the firm has already chosen to feed it. There is no "Internet drafting" — the workflow is configured to refuse external context.
- Style. The output reads like the firm's other drafts, because it draws on the firm's other drafts as prompt context. Partners stop saying "this isn't how we write."
- Defensibility. Every prompt and every output is logged with the matter number, the user and the model version. If a regulator or a client ever asks "how was this drafted?", the firm has a complete audit trail.
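To make the defensibility point concrete, here is a minimal sketch of what one audit-log record could look like. This is an illustration, not Matter Labs' actual schema: the field names and the append-only JSON-lines format are assumptions. Hashing the prompt and output lets the log prove what was drafted without duplicating privileged text inside the log itself.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(matter_number, user, model_version, prompt, output):
    """Build one audit entry for a single drafting run.

    Stores SHA-256 digests of the prompt and output rather than the
    text, so the log itself never holds privileged content.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "matter_number": matter_number,
        "user": user,
        "model_version": model_version,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }

def append_to_log(path, record):
    # One JSON object per line: append-only, easy to diff and verify.
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

When a client or regulator asks "how was this drafted?", the firm replays the log: every entry names the matter, the user, and the model version, and the digests tie each entry to a specific prompt and a specific output.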
The five-day week-one timeline
Day 1 — Diagnose
We shadow two associates and one partner for half a day each. The output is a single document: the drafting choke-point map. It lists every document type the firm produced in the past quarter, tagged by frequency, average drafting time, and current quality bar. Most firms are surprised to find that 60–70% of their associate hours go into 5–7 document types. Those become the install candidates.
Day 2 — Pick
We pick two document types for week-one install. Not three, not five. The criteria: high frequency, low strategic stakes, and an existing corpus of at least 20 past drafts that the firm is happy to use as prompt context. For most commercial litigation firms in Mumbai or Delhi, the answer is Section 138 reply notices and NDAs.
Day 3 — Spec
Both workflows get a written one-page spec by end of day. Each spec lists:
- Inputs. What does the user supply? (For S.138: the original notice as a PDF, the matter number, and a one-line context.)
- Outputs. What does the workflow return? (A Word draft with the firm's house format, citations live-linked, and a structured "what's missing" comment block.)
- Data handling. Where does the data live, who can see it, and what's retained?
- Fallback. What happens when the model is wrong? (For drafting: senior-review checkpoint, rejection log, no auto-send.)
Nothing builds until both partners and the operations head sign the spec.
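The one-page spec is small enough to capture as structured data, which also makes the sign-off gate enforceable. A sketch for the S.138 workflow — the keys and values below are illustrative, paraphrasing the four sections of the spec, not a real Matter Labs artefact:

```python
# Hypothetical machine-readable form of the one-page workflow spec.
S138_REPLY_SPEC = {
    "workflow": "s138_reply_notice",
    "inputs": ["original_notice_pdf", "matter_number", "one_line_context"],
    "outputs": [
        "word_draft_house_format",
        "live_linked_citations",
        "whats_missing_comment_block",
    ],
    "data_handling": {
        "residency": "firm_m365_tenant",
        "access": ["assigned_associate", "reviewing_partner"],
        "retention": "per_firm_policy",
    },
    "fallback": {
        "senior_review_checkpoint": True,
        "rejection_log": True,
        "auto_send": False,
    },
    "sign_offs": {"partner_1": False, "partner_2": False, "ops_head": False},
}

def ready_to_build(spec):
    """Nothing builds until every required sign-off is recorded."""
    return all(spec["sign_offs"].values())
```

The point of the gate is procedural: the build on day four reads the spec, and the spec cannot reach the build queue until both partners and the ops head have flipped their flags.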
Day 4 — Install
The two workflows go live inside Microsoft 365. There are no new logins for associates. They open Word, click a Matter Labs ribbon button, paste the inputs, and the draft lands in the document. The build takes one engineer roughly 3 hours per workflow because we deliberately keep the surface area small.
Day 5 — Train and ship
Cohort training in the morning — partners and associates together, 90 minutes, walking through the prompt structure, the rejection log, and the audit trail. By the afternoon the firm has shipped at least one real notice that came out of the workflow. That's the demo: a real client matter, drafted in the workflow, reviewed by a partner, sent.
What changes in week two
The visible change in week two is that associates stop opening blank documents. They open the workflow. The drafts come back faster, but more importantly they come back in the firm's voice — because the corpus the workflow draws from is the firm's voice.
The less visible change is that partners start trusting the output enough to delegate review. By the end of week two, the partner is reviewing maybe 8 minutes per draft instead of 45. Those 35-odd partner minutes saved per draft, multiplied across an active book of 200 matters, are what we're actually selling.
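The partner-review component of that saving is easy to put on paper. A back-of-envelope using the numbers above (illustrative figures, not a guarantee):

```python
# Partner review drops from 45 to 8 minutes per draft; assume one
# reviewed draft per matter across an active book of 200 matters.
before_min, after_min = 45, 8
active_matters = 200
saved_hours = (before_min - after_min) * active_matters / 60
print(round(saved_hours, 1))  # ~123.3 hours of partner time per cycle
```

Even with conservative assumptions, the saving is measured in partner weeks, not partner minutes.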
The metric that proves it's working
We track one number weekly: the partner-review minutes per finalised draft. It's the only metric that captures both quality and speed. When that number is dropping week-on-week and the rejection rate (drafts the partner sends back for re-prompting) is stable, the workflow is working. When the rejection rate climbs, the prompt context needs refining — and we have a structured process for that, run by the firm's ops head, not by us.
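The weekly tracking is two numbers computed from the same records. A minimal sketch, assuming each draft is logged as a dict with the partner's review minutes and a rejected flag — the record shape is an assumption, not the firm's actual log format:

```python
def weekly_drafting_metrics(drafts):
    """Compute the two week-one health numbers.

    Each draft record is a dict like:
      {"review_minutes": 12, "rejected": False}
    where review_minutes is partner time on the finalised draft and
    rejected marks drafts sent back for re-prompting.
    """
    finalised = [d for d in drafts if not d["rejected"]]
    if not finalised:
        return {"review_minutes_per_draft": None, "rejection_rate": 1.0}
    return {
        "review_minutes_per_draft":
            sum(d["review_minutes"] for d in finalised) / len(finalised),
        "rejection_rate":
            sum(d["rejected"] for d in drafts) / len(drafts),
    }
```

Read the two numbers together: falling review minutes with a stable rejection rate means the workflow is working; a climbing rejection rate is the signal to refine the prompt context.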
What we don't do in week one
For complete transparency, here's what is not in scope for week one:
- We do not touch judgment-writing or any drafting of strategy documents.
- We do not touch client-facing deliverables that go out without partner review.
- We do not connect the workflow to any vendor's general-purpose chat interface.
- We do not change the firm's existing review process. The AI is upstream of the existing process, not a replacement for it.
That's it. Week one done. By Friday the firm has a workflow installed, a cohort trained, and a metric to track. Week two builds the second-tier workflows on the same install pattern.
If your firm is at L0 or L1 on the maturity model and you want to know what the first week would actually look like for your corpus, book a teardown — we'll map your choke points and tell you which two workflows to start with.
Frequently asked
Does this work for a criminal-side practice?
Yes. The AI drafting workflow is content-agnostic — what matters is whether your firm has a corpus of past drafts to seed prompt context from. Criminal-side firms typically have richer reply-to-notice corpora than civil firms, which actually makes onboarding faster.