AI · Productivity · DX · Workflow

Give Your AI Agent a Memory — It Will Stop Disappointing You

Mar 26, 2026
6 min read

I was mass-scrolling LinkedIn and almost missed it. A post from Dmytro Sergiev — a Product Manager, not an engineer — about how he gives his AI agent a persistent memory using plain folders and markdown files. And something clicked.

I had been annoyed for months. Every new chat session with an AI agent — same questions, same wrong assumptions, same "what framework are you using?" dance. I would explain my project setup, my conventions, my stack. The agent would do decent work. Next day — blank slate. All of it, gone. I knew the problem was not the model itself — I understood that AI needs context to be useful. But I could not figure out a sustainable way to give it that context. The problem was amnesia.

Dmytro's idea was dead simple: give the agent a folder it can read and write between sessions. I grabbed this concept, adapted it for my frontend engineering workflow, and started rolling it out across every project I work on. It has been a few weeks now, and honestly — I cannot believe I worked without this for so long.

The Problem Is Not the Model — It Is the Amnesia

Think about what happens when you open a fresh chat and say "add a blog post to my site." The agent has no idea what framework you use. It does not know how your content is structured. It does not know you spent two hours last Tuesday figuring out that your blog posts are TypeScript files with markdown in a template literal. It does not know that you tried CSS-in-JS once and it was a disaster.

So it guesses. You correct. It guesses again. You correct again. Twenty minutes pass before any real work happens. And tomorrow? Same thing. From zero.

I always understood that the model needs context. That was never the question. The question was — how do you actually give it enough? Context is not a document you write once. It is years of decisions, conventions, mistakes, preferences — all living in your head. You cannot dump all of that into a prompt. And even if you could, it would be out of date by next week. I needed a system, not a one-time fix.

The Fix: A Folder the Agent Can Read and Write

Create a .ai-memory/ folder in your project root. Put a few markdown files in it. Tell the agent to read them at the start of every session and update them as it learns. Seriously, that is the whole idea.

```
.ai-memory/
  context.md    — What this project is, who works on it, goals, constraints
  decisions.md  — Key decisions made, with reasoning
  patterns.md   — Conventions, naming rules, recurring patterns
  lessons.md    — Mistakes, gotchas, things to avoid
  sessions/
    2026-03-25.md — What was done today, what was learned
    2026-03-24.md — Yesterday's session
```

Four files and a sessions folder. No database. No API. No npm packages. Just markdown files that a human can read too.

What Goes in Each File

context.md is the project's identity card. Who you are, what the project does, the tech stack, the constraints. For a React project it might say "Next.js 15, App Router, Tailwind, deployed to Vercel." For a marketing project it might say "Brand guidelines in /docs, tone is professional but warm, target audience is B2B SaaS." For an accounting workflow it could describe the tools, the report formats, the compliance rules.
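As a concrete sketch, a context.md for the React example above might start like this (every detail is illustrative, drawn from this post's own examples):

```markdown
# Project Context

- Personal blog and portfolio site, solo project
- Stack: Next.js 15, App Router, Tailwind, deployed to Vercel
- Blog posts are TypeScript files with markdown in a template literal, stored in src/data/posts
- Constraint: no CSS-in-JS (tried it once, it was a disaster)
```

A handful of bullets is enough; the agent reads this at the start of every session, so brevity is a feature.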

decisions.md captures the "why" behind choices. Not just "we use Tailwind" but "we use Tailwind because utility-first is fast for a small team and avoids CSS naming debates." When the agent encounters a similar decision later, it does not propose something contradictory.

patterns.md records how things are done in this specific project. File naming conventions. Component structure. How blog posts are stored. Where images go. The things a new team member would ask on their first day — except the agent finds them on its own.

lessons.md is the file that surprised me the most. I did not expect it to matter this much. Every bug, every gotcha, every "do not do this because..." entry prevents the agent from repeating the same mistake. I had a bug in one of my projects where backticks inside a template literal were breaking the build. I fixed it, moved on, and the next day the agent generated the exact same bug. After I added "backticks inside template literal content must be escaped" to lessons.md — never happened again. One line in a markdown file saved me from debugging the same issue three times.
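Entries stay useful precisely because they are this short. Using the bug above, a lessons.md might read (wording illustrative):

```markdown
# Lessons

- 2026-03-18: Backticks inside template literal content must be escaped; unescaped ones break the build.
- Do not suggest CSS modules. This project uses Tailwind (see decisions.md).
```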

sessions/ logs keep a running history. After each session the agent writes what it did, what decisions were made, what was learned. These logs feed future sessions with recent context.
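A session log can be equally terse. A sketch of one day's file, following the format described above (contents illustrative):

```markdown
# 2026-03-25

- Done: added a blog post about agent memory; fixed the template-literal escaping bug
- Decided: commit .ai-memory/ to the repo, since this is a solo project
- Learned: backticks inside template literal content must be escaped
```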

Why This Works for Everyone — Not Just Engineers

Here is what struck me about Dmytro's original post: he is not a developer. He is a Product Manager who works with data, strategy, and AI-assisted research. He does not write React components. He uses VS Code as a workspace for documents and workflows. And the memory system works just as well for him — maybe better, because his context is even harder for an agent to guess.

Think about an analyst who works with multiple data sources, specific report formats, metric definitions that differ between clients, and compliance rules that change quarterly. Every time they start a new AI session, they would have to re-explain all of that. With a memory folder, the agent already knows: "Client X uses quarterly cohort analysis, reports go in /reports/YYYY-QN/, retention is measured as 30-day active, not 30-day login." That is not code. That is just domain knowledge written in plain text. And it makes the agent dramatically more useful.

The same applies to product managers tracking sprint context, designers recording spacing conventions and color tokens, writers maintaining tone guidelines. The files are plain markdown. No code required. If you can write a bullet point, you can give your agent a memory.

The Practical Difference

Let me give you a real example. Last week I came back to a project I had not touched in six days. Old me would have spent the first 15 minutes re-explaining the setup: "This is a Next.js site, blog posts are TypeScript files, we use Tailwind, content goes in src/data/posts..." Instead, I just said "create a new blog post about agent memory." The agent read context.md and patterns.md, understood the file structure, the naming conventions, the BlogPost type — and produced a working post file on the first try. Fifteen minutes of setup reduced to zero.

Or this one: I kept correcting the agent — "we use Tailwind, not CSS modules." Every. Single. Session. After I wrote it in decisions.md, that correction disappeared permanently. A one-line entry fixed a daily annoyance.

The thing that gets me is how it compounds. After a week, the agent knows your project better than a new team member after onboarding. After a month, going back to a project without memory feels broken — like opening a codebase with no README, no comments, nothing. You realize how much invisible context you were carrying in your head and re-typing every day.

How to Set This Up in Any Project

Here is the step-by-step. Takes about five minutes.

Step 1: Create the folder structure in your project root:

```shell
mkdir .ai-memory
mkdir .ai-memory/sessions
```
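If you prefer to seed the files yourself rather than let the agent create them, a small sketch like this does the whole layout in one go (file names follow the structure above; run it from the project root):

```shell
# Bootstrap the memory structure. mkdir -p makes both levels at once.
mkdir -p .ai-memory/sessions
for f in context decisions patterns lessons; do
  # Only seed a file if it does not exist, so re-running never clobbers real memory.
  [ -f ".ai-memory/$f.md" ] || printf '# %s\n' "$f" > ".ai-memory/$f.md"
done
ls .ai-memory
```

Either way works; the agent will fill in the contents on its first session.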

Step 2: Create the instruction file. If you use VS Code with GitHub Copilot, put this in .github/copilot-instructions.md. If you use Cursor, put it in .cursorrules; Claude Code reads CLAUDE.md for the same purpose. Other tools have their own equivalent config file.

Here is the full prompt you can copy:

```markdown
# Agent Memory System

You have access to a persistent memory system stored in .ai-memory/ at the
root of this project. Use it to maintain context across sessions, avoid
asking repeated questions, and make better decisions over time.

## Memory Structure

.ai-memory/
  context.md   — Project identity, tech stack, goals, constraints
  decisions.md — Key technical decisions with reasoning
  patterns.md  — Conventions, naming patterns, project-specific rules
  lessons.md   — Mistakes, gotchas, things to remember and avoid
  sessions/
    YYYY-MM-DD.md — Session log: what was done, decided, learned

## Rules

### At the Start of Every Session

1. Read context.md, decisions.md, patterns.md, and lessons.md.
2. Use this information to inform your work. Do not ask questions already
   answered in these files.
3. If a file does not exist yet, create it when you have information worth
   recording.

### During Work

- Non-trivial decisions -> record in decisions.md
- Discovered conventions -> record in patterns.md
- Bugs, gotchas, mistakes -> record in lessons.md
- Project goals or constraints -> update context.md

### Mandatory Memory Checkpoints

You MUST update memory at these moments. Do not skip. Do not defer.

1. After fixing a bug or discovering a gotcha -> add to lessons.md immediately.
2. After completing a multi-step task -> update relevant memory files right away.
3. When the user shares a preference or constraint -> update context.md or
   patterns.md within the same response.
4. Before responding with "done" -> check if anything should be in memory first.

### After Significant Work

- Create or append to sessions/YYYY-MM-DD.md with: what was done, key
  decisions, lessons learned.

### Writing Style

- Concise: 1-3 sentences per entry
- Factual: what was decided, not how you feel
- Append-friendly: new entries at the bottom
- No duplication: update existing entries, do not contradict

### If .ai-memory/ Does Not Exist

Create the folder and initial files by scanning the project: package.json,
README, configs -> context.md and patterns.md. Leave decisions.md and
lessons.md mostly empty — they grow over time.
```

Step 3: Start a new session. The agent will read the instruction, create the initial memory files by scanning your project, and begin maintaining them automatically.

Step 4: Decide where memory lives.

If this is your personal project — commit .ai-memory/ to the repo. You are the only one working here, so your memory is the project's memory. That is how I do it on ma-x.im.

If you work in a team — even three developers — add .ai-memory/ to .gitignore. Memory is personal. Your thinking, your mistakes, your patterns — they are different from your colleague's. General project knowledge (architecture, API conventions, component rules) belongs in shared context files like .github/copilot-instructions.md. But memory is your private notebook. The instruction prompt stays in the repo so every developer's agent knows to create and maintain its own local memory — but the memory content itself stays local.

There is a possible third path: shared team memory where everyone writes to the same folder. I can see how it could work in theory, but I have not seen a strong use case for it yet. Shared context plus individual memory covers most scenarios.

Honest Limitations

This is not magic. A few things to keep in mind:

  • The agent forgets to update memory. This was my biggest surprise. I wrote clear instructions: "after fixing a bug, record it in lessons.md." The agent would fix the bug, confirm it was done, and move on without writing anything down. Hours of work, zero memory updates. The fix was adding explicit mandatory checkpoints to the instructions — not "record things when appropriate" but "you MUST update memory BEFORE telling me you are done." Passive instructions get ignored during complex multi-step work. Forceful, specific checkpoints work.
  • Files can get bloated. If lessons.md grows past 100 lines, split it or prune old entries. The agent reads these files every session — keep them focused.
  • It depends on the agent following instructions. Some AI tools respect system prompts better than others. GitHub Copilot in VS Code and Claude in Cursor both handle this well. Other tools may vary.
  • It is not a replacement for proper documentation. This is a working notebook, not a wiki. Keep real docs in real docs.
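On the bloat point: a quick check makes pruning a habit rather than a crisis. A sketch, assuming the folder layout above (the 100-line threshold is a personal heuristic, not a rule):

```shell
# Flag memory files that have grown past ~100 lines.
mkdir -p .ai-memory                 # so the glob below has a folder to scan
for f in .ai-memory/*.md; do
  [ -f "$f" ] || continue           # glob did not match: no memory files yet
  lines=$(wc -l < "$f")
  if [ "$lines" -gt 100 ]; then
    echo "$f: $lines lines, consider pruning"
  fi
done
echo "memory scan complete"
```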

Context vs Memory — Why You Need Both

This is the distinction I wish someone had explained to me earlier.

Context is what describes the project: architecture, API structure, component conventions, CSS rules, TypeScript config, deployment pipeline. It is documentation. You write it once, maintain it as the project evolves, and every developer on the team reads the same version. Context lives in shared files — README, copilot-instructions.md, architecture docs.

Memory is what describes the developer: your decisions, your mistakes, your patterns, your working style. It grows organically, session by session. What you did today, what went wrong yesterday, what you learned this week — this is what makes the agent smarter over time. Context is static knowledge. Memory is accumulated experience.

In an ideal project, you need both. Context so the agent understands the codebase. Memory so it understands you. Without context, the agent guesses about your stack. Without memory, it repeats the same mistakes and asks the same questions every day.

The real shift for me was understanding that context alone is never enough. There is always more knowledge in your head than you can write down — it forms over months and years of working on a project. You cannot capture all of it upfront. The memory approach solves this differently: instead of trying to dump everything at once, it lets knowledge accumulate naturally. Every session teaches the agent something new. After a week it knows your project better than a new hire after onboarding. After a month, working without it feels broken.

Summary

I always knew the model needed context. That was never the mystery. What I could not figure out was how to give it enough context without spending half my day writing documentation that would be outdated by Friday. The memory approach solved this — not by asking me to write everything upfront, but by letting knowledge accumulate naturally, session by session.

Five markdown files and a prompt. Five minutes of setup. The return is immediate and it compounds — every session makes the next one smarter.

The people chasing the newest model are solving the wrong problem. The leverage is not in the model. It is in what the model knows about you and your project.

If you have feedback, alternative approaches, or just want to discuss this topic — feel free to reach out.
