ADRs when your team uses LLMs
Large language models are useful for exploration and draft text; they are not a substitute for an agreed process, a merge review, or a durable record. ADRs still live in the same place as before: version control, as small Markdown documents your team owns and reviews.
Why keep ADRs in an LLM-heavy workflow
- Grounding. An ADR gives humans (and tools) a stable statement of what was decided, by whom, and under which constraints—separate from day-to-day chat or generated snippets.
- Token discipline. Pasting a short ADR (or a link to a file in the repo) into a model’s context is cheaper and clearer than re-explaining history on every request.
- Reviewability. A PR that adds or changes an ADR is reviewable the same way as code. Generated prose should still pass that bar.
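The token-discipline point above can be sketched as a small helper that loads a committed ADR file and prepends it to a prompt. This is a minimal illustration under assumptions: the repo path, the prompt wording, and the function name are all hypothetical, not part of any particular tool.

```python
from pathlib import Path

def build_prompt(adr_path: str, question: str) -> str:
    """Prepend a short, committed ADR to a model prompt so the decision
    context travels with the request. Paths and wording are hypothetical."""
    adr = Path(adr_path).read_text(encoding="utf-8")
    return (
        "Context: the following Architecture Decision Record is authoritative.\n"
        "---\n"
        f"{adr}\n"
        "---\n"
        f"Question: {question}\n"
    )

# Usage (hypothetical file name): feed the result to your model client.
# prompt = build_prompt("docs/adr/0007-use-postgres.md",
#                       "Can we add a queue table?")
```

Because the ADR is short and versioned, the same few hundred tokens of context are reproducible across requests, instead of being re-explained from memory each time.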
What to avoid
- Treating model output in a thread as a “decision” without a committed ADR in the repo.
- Letting the model invent past decisions: always verify against the actual ADR files, tickets, and owners.
- Dumping an entire monolith into a prompt instead of the narrow context (interfaces, invariants) that a decision should reference.
Practical patterns
- Draft, then human-edit. Use the generator or your template, paste a rough draft from a model if useful, and tighten for your team’s voice and scope before merge.
- One ADR per real decision. The same "small and durable" rule applies with or without LLMs; avoid mega-documents that no one rereads.
- Link out. Reference issues, SLOs, and security notes by URL or ID. Models can help draft those cross-links; humans should confirm.
- Regenerate later. Adopting a new format does not mean rewriting older ADRs for AI consumption; restate a decision in the clearer shape only when it is still relevant.
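As a concrete starting point for the "draft, then human-edit" pattern, here is a minimal Nygard-style ADR skeleton. The section names follow common convention; the title, date, and owner handles are placeholders, not real project data.

```markdown
# 12. Use server-side rendering for the docs site

Status: accepted          <!-- proposed | accepted | superseded by ADR-N -->
Date: 2024-05-02          <!-- placeholder -->
Owners: @alice, @bob      <!-- placeholder handles -->

## Context
What constraint or problem forced a decision? A few sentences at most.

## Decision
The single decision, stated actively: "We will …".

## Consequences
What becomes easier, what becomes harder, and what we agreed to revisit.
```

A model can fill a first draft of Context and Consequences from a discussion thread; the Decision line and the owner list should come from humans before merge.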
Related on this site
- What is an ADR
- Compare formats (Nygard, MADR, Y-Statement, ISO 42010-inspired)
- External resources & tooling list