Why AI Gets Confused in Large Codebases — Context Windows Explained
AI coding tools work brilliantly in small files and fail mysteriously in large ones. The reason is context windows — and understanding them changes how you use AI.

The Frustrating Pattern
You're using Claude or Copilot. On a small project, it's magical — it understands your codebase, generates exactly what you need, catches your mistakes. Then you start a larger project and everything goes wrong. It generates code that conflicts with what's already there. It forgets constraints you told it about. It gets confused.
The model didn't get dumber. The context window got full.
What is a Context Window?
A context window is the amount of text an AI model can "hold in mind" at once. It's the model's working memory. Everything inside it is what the model can reason about. Everything outside it might as well not exist.
Modern models have large context windows — hundreds of thousands of tokens. But codebases grow faster than context windows. A medium-sized production codebase can easily exceed what any model can hold in full.
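To make the "codebases grow faster than context windows" claim concrete, here is a rough sketch of a budget check. It uses the common rule of thumb of roughly 4 characters per token; real tokenizers vary by language and content, and the 200,000-token window size is just an example figure, so treat the numbers as ballpark estimates rather than measurements.

```python
from pathlib import Path

CHARS_PER_TOKEN = 4              # rough heuristic, not a real tokenizer
CONTEXT_WINDOW_TOKENS = 200_000  # example window size, varies by model

def estimate_tokens(text: str) -> int:
    """Very rough token estimate from character count."""
    return len(text) // CHARS_PER_TOKEN

def codebase_token_estimate(root: str, suffixes=(".py", ".ts", ".tsx")) -> int:
    """Sum rough token estimates over source files under root."""
    total = 0
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            total += estimate_tokens(path.read_text(errors="ignore"))
    return total

def fits_in_context(root: str) -> bool:
    """Ballpark check: can the whole codebase fit in one window?"""
    return codebase_token_estimate(root) <= CONTEXT_WINDOW_TOKENS
```

By this heuristic, a codebase of around 800,000 characters of source already fills a 200,000-token window before a single word of conversation is added.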
The Real Problem Isn't Model Limitations
Engineers who work with AI daily keep arriving at the same insight: AI fails in large codebases not because the models are bad, but because the context gets diluted. Relevant code gets pushed out by irrelevant code, and the signal-to-noise ratio drops.
The model is doing its best with what's in the window. If the window contains the wrong things, the output will be wrong.
Frequent Intentional Compaction
The solution, called "frequent intentional compaction," is to actively manage what's in the context rather than letting it grow unbounded.
Instead of one endlessly growing conversation, you:
- Summarise what's been established — decisions made, constraints set, patterns agreed on
- Compact that summary into a dense, accurate description
- Start a new session with the compacted summary as context
- Repeat as the project grows
This is the AI equivalent of writing good documentation — except you're writing it for the model, not for yourself.
What Good Compaction Looks Like
At the end of a productive AI session, ask the model: "Summarise everything we've established in this conversation — the design decisions, component structure, constraints, and patterns. Be concise and precise."
Use that summary as the opening message in your next session. The model starts with a clear, relevant context and produces much more reliable output.
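As a small illustration, the practice can be reduced to two strings: the compaction prompt from above, and a template that turns the resulting summary into the next session's opening message. The template wording is an example, not a required format.

```python
# The compaction prompt, as suggested in the text.
COMPACTION_PROMPT = (
    "Summarise everything we've established in this conversation — "
    "the design decisions, component structure, constraints, and "
    "patterns. Be concise and precise."
)

def opening_message(compacted_summary: str, new_task: str) -> str:
    """Build the first message of the next session from a compacted
    summary plus the fresh task, so the model starts with clear,
    relevant context."""
    return (
        "Context from previous sessions (treat as established):\n"
        f"{compacted_summary}\n\n"
        f"New task: {new_task}"
    )
```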
Practical Implications for Designers
If you're using AI to build prototypes or UI components:
- Keep components small and focused — smaller files mean more of the relevant code fits in context
- Work in short sessions — don't let a conversation run for hours before compacting
- Be explicit about constraints at the start of each session — don't assume the model remembers previous sessions
- Use Claude Projects or similar tools — these let you attach persistent context (design system docs, component patterns) that stays in scope across sessions
The Horizon
Context windows will keep getting larger. But for the next several years, context management will remain a core skill for anyone using AI in technical workflows. The designers and developers who understand this constraint — and design their workflows around it — will get dramatically better results than those who don't.
Think of it less like a limitation and more like a new design constraint. Good designers work with constraints. This is just a new one.