Designing Project Documentation for AI Coding Agents
I joined a project where everyone was writing code. Not just developers — project managers, analysts, even a designer who had never seen a terminal in his life. They would open VS Code, fire up Google's Gemini, and ask it to move a button, change a color, add border radius, tweak spacing. And the agent would do it. The code would work. And the codebase would get a little worse every time.
Because the agent had no context. It didn't know the project used a design system. It didn't know there were shared components for exactly these things. It didn't know that some files used CSS modules, others used inline styles, and a third group used something else entirely. So every AI-generated change was technically correct and architecturally wrong. The codebase turned into a patchwork of conflicting approaches.
I spent almost two weeks writing structured project documentation specifically for the agent. Architecture, component rules, naming conventions, styling approach, the whole picture. And then I ran my own test: I told the agent, "Find 20 components that have something to fix." That's it. A dead simple, almost primitive prompt. The agent — now armed with full project context — went through the codebase, found real issues (wrong styling approach, missing design tokens, inconsistent patterns), and fixed them on the spot. It was the moment I realized: the model was never the problem. The missing context was.
Context
Most teams treat AI coding agents like extremely smart autocomplete. The agent sees the open files, maybe a README, and a prompt like "implement this feature".
That's rarely enough.
In real projects, the missing context is huge:
- Why the product exists
- Who the users are
- Architectural constraints
- Coding standards
- Testing expectations
- Future migration plans
Humans fill these gaps from experience and project history. Agents don't.
While working on enterprise projects (including work done for large companies such as Procter & Gamble and MedMe), I noticed the same pattern: agents generate better code when the project explicitly describes itself.
The solution that emerged was surprisingly simple.
Create a dedicated documentation layer designed for AI agents.
Not a README. Not a wiki.
A structured project context directory.
Core Concept
The core idea is straightforward:
AI agents need explicit, structured context about the project.
Instead of relying on scattered documentation, the project exposes its knowledge in a predictable location.
Example:
/docs
  product.md
  architecture.md
  coding-standards.md
  testing-strategy.md
  code-review.md
  design-system.md
  roadmap.md
  third-party-services.md
An index file explains what each document contains and when an agent should read it.
Example:
docs/index.md

product.md
  Short overview of the product: users, business goals, main workflows.
architecture.md
  High-level architecture of the frontend and backend systems.
coding-standards.md
  Project conventions and linting rules the agent must follow.
testing-strategy.md
  What must be tested and how tests should be written.
Then the main instruction file references this folder.
Example:
AGENT.md
When working on this project you MUST read the docs folder.
Start with:
docs/index.md
Based on the task you may need to read:
- architecture.md
- coding-standards.md
- testing-strategy.md
- design-system.md
This approach turns documentation into something closer to a context API for agents.
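To make the "context API" idea concrete, here is a minimal sketch of a loader that parses a docs/index.md in the format shown above (doc name on one line, one-line summary on the next) and returns a map an agent prompt could be assembled from. The index format and file names are assumptions based on the examples in this article, not a standard.

```typescript
// Sketch: treat docs/index.md as a machine-readable index of project context.
// Assumes each "*.md" line is followed by a one-line summary of that document.
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

function parseDocsIndex(indexText: string): Record<string, string> {
  const entries: Record<string, string> = {};
  const lines = indexText
    .split("\n")
    .map((l) => l.trim())
    .filter(Boolean);
  for (let i = 0; i + 1 < lines.length; i += 2) {
    if (lines[i].endsWith(".md")) {
      entries[lines[i]] = lines[i + 1];
    }
  }
  return entries;
}

// Demo with a throwaway docs directory.
const dir = fs.mkdtempSync(path.join(os.tmpdir(), "docs-"));
fs.writeFileSync(
  path.join(dir, "index.md"),
  "product.md\nShort overview of the product.\narchitecture.md\nHigh-level architecture.\n"
);
const index = parseDocsIndex(fs.readFileSync(path.join(dir, "index.md"), "utf8"));
console.log(index["architecture.md"]); // → High-level architecture.
```

A real agent harness would feed the relevant summaries into the system prompt and let the agent decide which full documents to read.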
Implementation Example
A simplified instruction file for an agent might look like this:
# AGENT.md

This repository contains a React + Next.js application.

Before implementing any feature:
1. Read docs/index.md
2. Identify which documents are relevant for the task
3. Follow project conventions strictly

Key constraints:
- Use TypeScript strict mode
- Follow design tokens defined in docs/design-system.md
- All components must have tests (see docs/testing-strategy.md)
- Follow code review rules defined in docs/code-review.md
Example of coding standards exposed to the agent:
# coding-standards.md

Component rules:
- Components must be colocated with tests
- Use function components only
- Avoid default exports

Folder structure:
components/
  Button/
    Button.tsx
    Button.test.tsx
    Button.styles.ts
Example of testing expectations:
# testing-strategy.md

All UI components must include:
- unit tests (Vitest)
- interaction tests
- accessibility checks where applicable

Example:
describe("Button", () => {
  it("renders label", () => {})
  it("triggers click handler", () => {})
})
Once this context exists, agents consistently generate code that is much closer to what a real team would accept.
Integrating Task Context via MCP
Documentation alone is useful, but it becomes much more powerful when combined with external context.
Using MCP integrations, an agent can retrieve additional project history:
- Jira tasks
- Git commits
- bug reports
- previous discussions
Example workflow:
- Agent works on a component
- It inspects commit history
- Extracts issue IDs
- Queries Jira via MCP
- Reads bug descriptions
This allows the agent to understand things like:
- why a component was implemented in a specific way
- what bugs previously occurred
- which constraints must not be broken
Without this, agents tend to unknowingly reintroduce old bugs.
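The "extract issue IDs" step of the workflow above can be sketched as a small helper that pulls Jira-style keys out of commit messages before querying an MCP server. The key pattern (e.g. PROJ-123) is an assumption; adjust it to your tracker's format.

```typescript
// Sketch: pull Jira-style issue keys (e.g. "PROJ-123") out of git commit
// messages so they can be passed to an MCP query for bug history.
// The regex for issue keys is an assumption about the tracker's format.
function extractIssueIds(commitMessages: string[]): string[] {
  const ids = new Set<string>();
  for (const msg of commitMessages) {
    for (const match of msg.match(/\b[A-Z][A-Z0-9]+-\d+\b/g) ?? []) {
      ids.add(match);
    }
  }
  return Array.from(ids); // deduplicated, in first-seen order
}

console.log(
  extractIssueIds([
    "fix: PROJ-101 debounce search input",
    "PROJ-101 / PROJ-204 align button with design tokens",
  ])
); // → [ 'PROJ-101', 'PROJ-204' ]
```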
Where It Breaks in Enterprise
This approach works well, but it introduces new challenges.
Documentation drift
Docs become outdated quickly if they are not maintained.
Agents will follow outdated rules very confidently.
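One cheap guard against drift is a CI check comparing modification times: if source files have changed much more recently than any document, flag the docs for review. This is a rough heuristic, not a real freshness check, and the src/docs layout and 30-day threshold below are assumptions.

```typescript
// Sketch of a drift heuristic: flag docs/ when src/ has changed far more
// recently than any doc. Directory names and threshold are assumptions.
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

function newestMtime(dir: string): number {
  let newest = 0;
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    newest = Math.max(
      newest,
      entry.isDirectory() ? newestMtime(full) : fs.statSync(full).mtimeMs
    );
  }
  return newest;
}

function docsLookStale(srcDir: string, docsDir: string, maxGapMs: number): boolean {
  return newestMtime(srcDir) - newestMtime(docsDir) > maxGapMs;
}

// Demo with a throwaway project where the only doc is 60 days behind src.
const root = fs.mkdtempSync(path.join(os.tmpdir(), "drift-"));
fs.mkdirSync(path.join(root, "src"));
fs.mkdirSync(path.join(root, "docs"));
fs.writeFileSync(path.join(root, "src", "app.ts"), "");
fs.writeFileSync(path.join(root, "docs", "architecture.md"), "");
const old = (Date.now() - 60 * 24 * 3600 * 1000) / 1000; // utimes takes seconds
fs.utimesSync(path.join(root, "docs", "architecture.md"), old, old);

const THIRTY_DAYS = 30 * 24 * 3600 * 1000;
console.log(docsLookStale(path.join(root, "src"), path.join(root, "docs"), THIRTY_DAYS)); // → true
```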
Over-documentation
Too many documents create the opposite problem: agents get lost in irrelevant context.
Keep documentation focused and scoped.
Missing architectural intent
If architecture documents only describe current implementation but not intent, agents will optimize locally and break long-term plans.
For example:
- migrating from Redux to TanStack Query
- replacing a design system
- upgrading React versions
Agents need to know about these plans.
Common Mistakes
Treating README as agent documentation
README files are written for humans. They are narrative and incomplete.
Agents work better with documentation that is:
- short
- structured
- explicit.
Documenting only code structure
Agents also need business context.
Example:
Users: pharmacists
Main workflow: reviewing prescriptions
Critical requirement: audit traceability
Without this, agents optimize code but miss product constraints.
Not documenting code review expectations
Agents can generate correct code that still fails review.
For example:
- missing tests
- wrong folder structure
- inconsistent naming
Explicit review rules prevent this.
When NOT To Use This
This approach is unnecessary for:
- very small repositories
- short-lived prototypes
- single-developer experiments
The overhead only pays off when:
- multiple developers collaborate
- AI agents are heavily used
- the project has architectural complexity.
How I Apply This in Real Projects
In projects where I use AI coding agents (primarily Claude-based tools, GPT assistants, and Gemini), I always start by creating a docs directory designed for the agent.
The process is simple:
- Write a short product overview
- Describe architecture at a high level
- Define coding and testing rules
- Document code review expectations
- Outline future architectural plans
The two-week documentation effort I described at the beginning of this article wasn't typical — that was a rescue operation for a codebase that had already accumulated months of context-free AI changes. For a new project, the initial setup takes a day or two. After that, you maintain it incrementally.
The "find 20 components to fix" trick has become my standard smoke test for new documentation. If the agent can autonomously identify and fix real issues based on the context you've written, the documentation is working. If it makes the same mistakes as before, the docs are missing something important.
Practical Recommendations
If you want to try this approach:
Start small.
Create just four documents:
docs/product.md
docs/architecture.md
docs/coding-standards.md
docs/testing-strategy.md
Add an index file and reference it from your agent instructions.
Then gradually expand documentation as real problems appear.
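If you want a one-step start, the four documents and the index can be scaffolded with a short script. This is a sketch: the file names match the list above, but the placeholder summaries are just starting points to rewrite by hand.

```typescript
// Sketch: scaffold the four starter docs plus docs/index.md in one step.
// Summaries are placeholders; existing docs are left untouched.
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

const STARTER_DOCS: Record<string, string> = {
  "product.md": "Users, business goals, main workflows.",
  "architecture.md": "High-level frontend and backend architecture.",
  "coding-standards.md": "Conventions and linting rules agents must follow.",
  "testing-strategy.md": "What must be tested and how tests are written.",
};

function scaffoldDocs(root: string): void {
  const dir = path.join(root, "docs");
  fs.mkdirSync(dir, { recursive: true });
  let index = "";
  for (const [name, summary] of Object.entries(STARTER_DOCS)) {
    index += `${name}\n  ${summary}\n`;
    const file = path.join(dir, name);
    if (!fs.existsSync(file)) {
      fs.writeFileSync(file, `# ${name}\n\nTODO: ${summary}\n`);
    }
  }
  fs.writeFileSync(path.join(dir, "index.md"), index);
}

// Demo in a throwaway directory.
const projectRoot = fs.mkdtempSync(path.join(os.tmpdir(), "scaffold-"));
scaffoldDocs(projectRoot);
console.log(fs.existsSync(path.join(projectRoot, "docs", "index.md"))); // → true
```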
Also remember:
- keep documents short
- keep them structured
- keep them updated.
Think of them as system prompts for your codebase.
Summary
The project where non-developers were writing code through AI taught me something fundamental: models are already good enough. The bottleneck isn't intelligence — it's context. An agent without project documentation will confidently produce code that breaks every convention you have. An agent with structured context will find and fix problems you didn't even ask about.
A structured docs/ directory can describe:
- product context
- architecture
- coding standards
- testing rules
- code review expectations
- future architectural plans
Combined with integrations like Jira and Git history, this gives agents enough context to generate code that aligns with real production systems.
In practice, this simple change dramatically improves the quality of AI-generated code. Two weeks of documentation saved me months of cleanup.
If you have feedback, alternative approaches, or just want to discuss this topic — feel free to reach out.