More rules should mean better output. That's the intuition.
I spent weeks building a comprehensive CLAUDE.md — 200 lines covering naming conventions, security rules, error handling, architectural patterns, import ordering, type safety requirements, and more. I was proud of it. I'd thought through every scenario.
Then I scored the output.
79.0 / 100.
My carefully crafted documentation was actively making AI performance worse. Here's the experiment, what I found, and how restructuring it to 4 files (the original CLAUDE.md trimmed to ~35 lines, plus 3 topic-specific guideline files) pushed the score to 96.9.
The Experiment
Project: Next.js 14 + TypeScript + Prisma + NextAuth
What I measured: Code quality across 5 dimensions, scored by a structured rubric applied to 20 generated files (mix of API routes, Server Actions, components, utility functions). I applied the rubric manually — each file scored 0–100 per dimension, then averaged across all files. All generation tasks used the same prompts to isolate the config variable:
| Dimension | What was scored |
|---|---|
| Naming | Variable names, function names, file naming — match project conventions? |
| Security | Auth checks present, input validation, no sensitive data exposure |
| Error handling | Consistent pattern, proper types, no silent failures |
| Type safety | No `any`, proper inference, Zod schemas where needed |
| Test coverage | Unit tests generated alongside implementation files |
Baseline: 200-line monolithic CLAUDE.md containing all rules
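To make the aggregation concrete, here is a sketch of how the per-file scores roll up — my own throwaway helper, not part of any published tooling:

```typescript
// Each generated file gets a 0-100 score per dimension; dimensions are
// averaged across files, and the overall score averages those averages.
type Scores = Record<string, number>; // dimension name -> 0..100

function aggregate(files: Scores[]): { byDimension: Scores; overall: number } {
  const dims = Object.keys(files[0]);
  const byDimension: Scores = {};
  for (const dim of dims) {
    const total = files.reduce((sum, file) => sum + file[dim], 0);
    byDimension[dim] = total / files.length;
  }
  const overall = dims.reduce((sum, d) => sum + byDimension[d], 0) / dims.length;
  return { byDimension, overall };
}
```

Nothing fancy — the point is only that the same 20 prompts were scored the same way for every config.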
Why More Rules Made Things Worse
I expected a positive correlation: more rules → better adherence. I got the opposite. Here's why.
Position sensitivity
Claude reads the entire CLAUDE.md on every interaction. A 200-line file isn't huge in absolute terms, but attention is not uniformly distributed across a long document. Rules near the end — in my case, security invariants starting at line 160 — received demonstrably less weight than rules at the top.
In my tests, moving the security section from line 160 to line 20 pushed security scores from 68% to 81% — without changing a single rule. The content was identical. Position was the only variable.
Why: the user's prompt arrives at the end of the context window. Rules at the bottom of CLAUDE.md are immediately adjacent to the prompt and compete with everything above them for the model's attention during generation. The further up a rule is, the less it competes.
Rule conflicts
When high-level (L1) philosophy rules and low-level (L3) specific rules live in the same file, conflicts appear. My L1 section said "prefer explicitness." My L3 section said "keep implementations concise." When generating a Server Action, the AI had to arbitrate between these — and sometimes picked the wrong tradeoff for the context.
The result by dimension
| Dimension | 200-line CLAUDE.md |
|---|---|
| Naming | 81% |
| Security | 68% |
| Error handling | 77% |
| Type safety | 82% |
| Test coverage | 86% |
| Overall | 79.0% |
Security being the lowest made sense in retrospect. Security rules are naturally verbose and were buried at the bottom.
The Fix: Two-Tier Context
The insight: CLAUDE.md should contain only what the AI needs to know for every single interaction. Everything else should be loaded on demand.
```
CLAUDE.md                         ← Always loaded (3 directive lines + file refs)
docs/ai-dev-os/03_guidelines/
  frameworks/nextjs/              ← Next.js App Router, Server Actions, etc.
  common/security.md              ← Auth patterns, input validation
  common/validation.md            ← Zod usage
  ...                             ← Referenced from CLAUDE.md
docs/guidelines/                  ← Your project-specific rules (3 files recommended)
```
The rule of thumb: CLAUDE.md is an index file, not a rules file. Keep it to 3 lines of directives and a list of file references. The actual rules live in the referenced files.
What CLAUDE.md looks like
```markdown
# Project: [Your App Name] - Claude Code Guidelines

Please write code following the guidelines below for this project.
In case of conflicts, upper sections take priority.

## Development Guidelines

- docs/ai-dev-os/03_guidelines/frameworks/nextjs/overview.md - Overview & tech stack
- docs/ai-dev-os/03_guidelines/frameworks/nextjs/server-actions.md - Server Actions (ActionResult pattern)
- docs/ai-dev-os/03_guidelines/common/security.md - Security
- docs/ai-dev-os/03_guidelines/common/validation.md - Validation
- docs/ai-dev-os/03_guidelines/common/error-handling.md - Error handling
- ...

## Project-Specific Guidelines

- docs/guidelines/your-business-rule.md
- docs/guidelines/your-external-api.md
- docs/guidelines/your-compliance.md
```
That's it — 3 directive lines, then file references. The philosophy, coding conventions, and security invariants are all in the referenced files in docs/ai-dev-os/, which come from the rule submodule.
What the topic files contain
The submodule (ai-dev-os-rules-typescript) provides:
docs/ai-dev-os/03_guidelines/common/security.md:
- Every auth pattern with examples
- Field selection rules for Prisma queries
- Input validation requirements
- Rate limiting requirements for auth endpoints
- Logging requirements for security events
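For flavor, an excerpt of what such a file might contain — this is an illustrative fragment I wrote, not the submodule's actual text:

```markdown
## Auth checks (illustrative excerpt)

Every Server Action and API route that touches user data MUST verify the
session before querying:

- Call the project's session helper first; return an error result if absent.
- Select only the fields the caller needs in Prisma queries
  (`select: { id: true, email: true }`), never the full record.
- Log denied attempts with the route name, never with the request payload.
```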
docs/ai-dev-os/03_guidelines/frameworks/nextjs/server-actions.md:
- Server Action patterns with full examples (`ActionResult<T>`)
- Middleware structure
- Data fetching patterns (no `useEffect` for data)
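In broad strokes, the `ActionResult<T>` pattern is a discriminated union returned by every Server Action. A minimal sketch — the function name and validation here are mine, not the guideline's exact content:

```typescript
// Discriminated union: callers must check `ok` before touching `data`.
type ActionResult<T> =
  | { ok: true; data: T }
  | { ok: false; error: string };

// Simplified Server Action shape (framework imports and auth omitted).
async function updateUserName(
  name: string
): Promise<ActionResult<{ name: string }>> {
  const trimmed = name.trim();
  if (trimmed.length === 0) {
    return { ok: false, error: "Name must not be empty" };
  }
  return { ok: true, data: { name: trimmed } };
}
```

Because the union is discriminated on `ok`, skipping the error check fails at compile time rather than silently at runtime.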
docs/ai-dev-os/03_guidelines/common/naming.md:
- Import ordering
- Type naming conventions
- Error type definitions
Your project-specific rules go into docs/guidelines/ — 3 files is the recommended starting point.
Loading on demand
In Claude Code, the Skills bundled with the plugin (ai-dev-os-plugin-claude-code) handle contextual loading automatically. For manual invocation:
```
Load the Next.js guidelines, then complete the task:

@docs/ai-dev-os/03_guidelines/frameworks/nextjs/overview.md
@docs/ai-dev-os/03_guidelines/common/security.md

Task: $ARGUMENTS
```

Usage: `/nextjs Create an API route for updating user preferences`
For Cursor: the Cursor plugin (`.cursor/rules/`) provides scoped `.mdc` files with `globs` patterns — no manual setup needed after running the CLI.
For Kiro: the Kiro plugin copies steering rules to `.kiro/steering/` with appropriate include conditions — no manual configuration needed.
After the Split
| Dimension | 200-line CLAUDE.md | File-reference split |
|---|---|---|
| Naming | 81% | 98% |
| Security | 68% | 95% |
| Error handling | 77% | 97% |
| Type safety | 82% | 99% |
| Test coverage | 86% | 95% |
| Overall | 79.0% | 96.9% |
Security improved the most: from 68% to 95%. Moving security rules to a dedicated file and explicitly loading them for auth/API work meant they were in context when they mattered and not diluting attention when they didn't.
How to Refactor Your Own CLAUDE.md
If yours has grown to 100+ lines, here's the audit process I used:
Step 1: Categorize every rule
Go through your CLAUDE.md line by line and tag each rule:
- `[always]` — needs to apply to every single generated line
- `[task:X]` — only relevant when working on task type X
- `[rarely]` — edge case that doesn't need to be in persistent context
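This step can even be mechanized. A throwaway helper — hypothetical, but the kind of thing I ran — that counts how many tagged rules fall into each bucket:

```typescript
// Count tagged rule bullets in a CLAUDE.md that has been annotated with
// [always], [task:X], or [rarely] markers during the audit.
function auditTags(lines: string[]): Record<string, number> {
  const counts: Record<string, number> = { always: 0, task: 0, rarely: 0, untagged: 0 };
  for (const line of lines) {
    if (!line.trim().startsWith("-")) continue; // only rule bullets
    const m = line.match(/\[(always|task:[^\]]+|rarely)\]/);
    if (!m) counts.untagged++;
    else if (m[1].startsWith("task:")) counts.task++;
    else counts[m[1]]++;
  }
  return counts;
}
```

If `always` is more than a handful, you are still treating CLAUDE.md as a rules file rather than an index.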
Step 2: Move all rules to topic files
Rules tagged `[task:X]` and `[rarely]` move to dedicated files (`security.md`, `testing.md`, etc.) in `docs/guidelines/` or a rule submodule. Replace the inline content in CLAUDE.md with file references.
Step 3: Reduce CLAUDE.md to 3 directive lines + references
CLAUDE.md should contain:
- A project header
- "Please write code following the guidelines below."
- "In case of conflicts, upper sections take priority."
- A list of file references (not inline rules)
If CLAUDE.md still has inline rules rather than file references, you haven't finished the refactor.
Step 4: Set up loading mechanisms
- Claude Code: the `ai-dev-os-plugin-claude-code` plugin includes Skills and Agents that load the relevant guideline files contextually
- Cursor: the `ai-dev-os-plugin-cursor` plugin provides scoped `.mdc` files with `globs` patterns
- Kiro: the `ai-dev-os-plugin-kiro` plugin installs steering rules into `.kiro/steering/`
Step 5: Score before and after
Run the same 5–10 generation tasks with your old config and your new config. If you don't have a formal rubric, just note which rules are adhered to. You'll see the pattern.
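The comparison itself is mechanical. A trivial helper (mine, purely illustrative) that reports the per-dimension delta between two runs:

```typescript
// Per-dimension difference between two scoring runs (after minus before).
function scoreDelta(
  before: Record<string, number>,
  after: Record<string, number>
): Record<string, number> {
  const delta: Record<string, number> = {};
  for (const dim of Object.keys(before)) {
    delta[dim] = after[dim] - before[dim];
  }
  return delta;
}
```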
The Counter-Intuitive Takeaway
The impulse to "add more to your CLAUDE.md" when you notice rule violations is natural — and usually wrong.
If the AI isn't following a rule, the first question isn't "should I add more detail?" It's "is this rule in the right place?" If it's a convention only relevant to one type of task, moving it to a topic file and loading it contextually will do more than elaborating on it in the always-on context.
Less in CLAUDE.md. More in the right place at the right time.
Next in the series: 30-Minute Setup: AI Coding Guidelines for Your Next.js Project (link will be added on publish)
This article was originally published by DEV Community and written by yunbow.