Twelve Golden Rules
Quick, actionable rules for working effectively with AI coding tools.
1. Context First
The more context you provide, the better the results.
❌ "Write code for user authentication"
✅ "Add a POST /auth/register endpoint to our FastAPI backend.
Use JWT, bcrypt for passwords, return 201 with user_id.
Follow the Repository Pattern in src/repositories/.
Stack: FastAPI 0.109, SQLAlchemy 2.0, Pydantic v2."
Template: Context → Task → Constraints → Stack → Quality expectations
→ See Prompting Best Practices for more examples.
2. Work Iteratively
Don’t generate everything at once. Break work into reviewable steps.
- Plan — “Design a REST API for user management”
- Detail — “Write pseudocode for the /register endpoint” (sketched below)
- Implement — “Implement the register endpoint”
- Review — Run tests, read the diff, verify
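The Detail step's output might look like this; a sketch only, reusing the /register endpoint from Rule 1:

```python
# Illustrative pseudocode for the /register endpoint: the "Detail" step's
# output, which the Implement step then turns into real code.
# 1. Validate the payload (email format, password strength) with Pydantic.
# 2. Ask the user repository whether the email is taken; return 409 if so.
# 3. Hash the password with bcrypt.
# 4. Persist the new user through the repository and receive its user_id.
# 5. Return 201 with {"user_id": ...}.
```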
→ See Prompting Best Practices — Task Decomposition
3. Use Plan Mode
Many AI tools (Claude Code, Cursor) offer Plan mode — use it.
- Describe the task
- Let AI create a plan
- Review the plan
- If plan ≠ expectation → restart with a better prompt (don’t keep patching)
- If plan is good → execute
Common mistake: Endlessly correcting a bad plan instead of restarting with a clearer prompt.
→ See Modes & Context for mode details.
4. Tests First
Write tests before implementation. AI is excellent at implementing against a spec.
```python
# Assumes the FastAPI app is exposed as app.main:app; adjust the import path.
from fastapi.testclient import TestClient
from app.main import app

client = TestClient(app)

def test_user_registration():
    response = client.post("/auth/register", json={
        "email": "test@example.com",
        "password": "SecurePass123!",
    })
    assert response.status_code == 201
    assert "user_id" in response.json()
```
Then: “Implement the register endpoint to pass these tests.”
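Against that spec, a minimal endpoint might look like the sketch below. The in-memory dict is a stand-in for the real repository layer from Rule 1, and EmailStr needs the pydantic email extra (`pip install "pydantic[email]"`):

```python
# A minimal sketch that passes the test above. The in-memory store is a
# stand-in for a real repository; bcrypt and EmailStr match the Rule 1 stack.
import bcrypt
from fastapi import FastAPI, status
from pydantic import BaseModel, EmailStr

app = FastAPI()
_users: dict[str, bytes] = {}  # email -> hashed password (demo only)

class RegisterRequest(BaseModel):
    email: EmailStr
    password: str

@app.post("/auth/register", status_code=status.HTTP_201_CREATED)
async def register(payload: RegisterRequest) -> dict[str, int]:
    _users[payload.email] = bcrypt.hashpw(payload.password.encode(), bcrypt.gensalt())
    return {"user_id": len(_users)}
```

In a real project the persistence would go through src/repositories/, as the project rules require.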
Why this works:
- Tests are an unambiguous spec for the AI
- Automatic validation — you know immediately if it worked
- Fewer debugging cycles
5. Always Review
Every AI-generated line must be reviewed. No exceptions.
| Level | What | How |
|---|---|---|
| 1 | Automated | Linting, type checks, SAST, tests |
| 2 | AI-assisted | Copilot review, CodeRabbit |
| 3 | Human (required) | Logic, edge cases, security, performance |
Before approving, ask yourself:
- Do I understand every line?
- Are edge cases covered?
- Any security risks?
- Are tests present and meaningful?
6. Pick the Right Model
Different tasks need different models.
| Task | Good choice | Why |
|---|---|---|
| Quick inline completions | Copilot, Cursor Tab | Fast, low-friction |
| Complex reasoning / architecture | Claude Opus, GPT-4.1 | Deep thinking, large context |
| Multi-file refactoring | Cursor Agent, Claude Code | Codebase understanding |
| Explanation / learning | ChatGPT, Claude | Best “teacher” models |
Balance cost vs. quality:
- Prototyping → cheaper/faster models are fine
- Production code → use the best model available
- Routine tasks → Copilot-tier is sufficient
→ See Tools for full comparison with pricing.
7. Maintain Project Rules
Tell AI your conventions once, enforce them everywhere.
Files: .cursorrules, CLAUDE.md, AGENTS.md, copilot-instructions.md
What to include:
- Tech stack and versions
- Architecture patterns (e.g. “Repository Pattern, no Active Record”)
- Naming conventions
- Security requirements (e.g. “parameterized queries only”)
- Testing requirements
```text
# Example .cursorrules (keep it short!)
- FastAPI 0.109+ / SQLAlchemy 2.0 async / Pydantic v2
- Clean Architecture: Domain → Application → Infrastructure
- Repository Pattern (NO Active Record)
- All inputs validated via Pydantic
- pytest with 80%+ coverage
- Black formatter, MyPy strict
```
Tip: If AI keeps making the same mistake, add a rule for it instead of correcting every time.
→ See Agent Rules for detailed guidance.
8. Git Discipline
AI generates code fast. Without git discipline, you lose track of what changed and why.
Commit small and atomic:
git commit -m "feat: Add user registration endpoint"
git commit -m "test: Add tests for registration"
git commit -m "refactor: Extract validation logic"
# NOT:
git commit -m "AI generated stuff" # 2000 lines, 50 files
Optional: tag AI-generated code in commit messages:
```text
[AI] feat: Generate initial auth implementation
[AI-REVIEW] fix: Correct SQL injection in login
[MANUAL] refactor: Improve error handling
```
Makes it transparent which code was AI-generated.
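If you adopt the tags, a small commit-msg hook can enforce them. A sketch in Python (the accepted type prefixes are an assumption; adjust to your team's conventions):

```python
#!/usr/bin/env python3
# Sketch of a commit-msg hook enforcing the tagging convention above.
# Install as .git/hooks/commit-msg and mark it executable.
import re
import sys

PATTERN = re.compile(
    r"^(\[(AI|AI-REVIEW|MANUAL)\]\s+)?"     # optional provenance tag
    r"(feat|fix|test|refactor|docs|chore)"  # conventional-commit type (assumed list)
    r"(\([^)]+\))?: .+"                     # optional scope, then subject
)

# Git passes the path of the commit message file as the first argument.
with open(sys.argv[1], encoding="utf-8") as f:
    subject = f.readline().strip()

if not PATTERN.match(subject):
    sys.stderr.write(
        "Bad commit subject. Expected e.g. '[AI] feat: Add registration endpoint'.\n"
    )
    sys.exit(1)
```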
9. Automate Security Checks
By some measures, AI-generated code contains 1.7× more issues than human-written code. Automated checks are non-negotiable.
Minimum CI pipeline:
PR → Lint → Type check → Tests → SAST → Secret scan → Dependency scan → Human review → Merge
Key tools: CodeQL, Semgrep, Dependabot, GitHub Secret Scanning, GitLeaks
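You can mirror the same gates locally so nothing reaches the PR stage broken. A sketch; the tool commands and the src/ path are assumptions, so substitute whatever your CI actually runs:

```python
# Runs the minimum pipeline stages locally, stopping at the first failure.
# Tool choices are assumptions; mirror your real CI configuration.
import subprocess
import sys

STAGES: list[tuple[str, list[str]]] = [
    ("Lint", ["black", "--check", "."]),
    ("Type check", ["mypy", "src"]),
    ("Tests", ["pytest", "-q"]),
    ("SAST", ["semgrep", "scan", "--config", "auto", "--error"]),
    ("Secret scan", ["gitleaks", "detect"]),
    ("Dependency scan", ["pip-audit"]),
]

for name, cmd in STAGES:
    print(f"== {name}: {' '.join(cmd)}")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"{name} failed; fix it before requesting human review.")
print("All automated gates passed; ready for human review.")
```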
→ See Security & Compliance for setup guides and GitHub configuration.
10. Keep Documentation Alive
AI can draft docs. You must curate them.
AI is good at: docstrings, API examples, code comments, changelog drafts
You must write: README (architecture + getting started), ADRs for important decisions, runbooks for deployment/troubleshooting
Rule: If AI generated a feature, make sure the docs were updated too — not just the code.
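To make the split concrete: the docstring below is the kind of thing AI drafts well in seconds, while the README and ADRs explaining why the endpoint exists still need a human author (the function itself is hypothetical):

```python
def register_user(email: str, password: str) -> dict:
    """Create a user account and return its identifier.

    Args:
        email: Unique address used as the login identifier.
        password: Plaintext password; hashed with bcrypt before storage.

    Returns:
        A dict containing the new ``user_id``.

    Raises:
        ValueError: If the email is already registered.
    """
    ...
```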
11. Watch for Performance Traps
AI optimizes for “works,” not for “performs.”
Common AI performance mistakes:
- N+1 database queries
- Missing indexes
- Loading entire datasets into memory
- Inefficient algorithms hidden behind clean-looking code
- No caching where caching is obvious
Rule: For any AI-generated code that touches a database, a loop, or an external API — review the performance implications manually. Add basic monitoring (response times, error rates, query counts) from day one.
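The N+1 trap from the list above is the most common, and the fix is usually one line. A sketch in SQLAlchemy 2.0, with deliberately minimal hypothetical models:

```python
# N+1 queries vs. eager loading in SQLAlchemy 2.0. Models are hypothetical.
from sqlalchemy import ForeignKey, create_engine, select
from sqlalchemy.orm import (
    DeclarativeBase, Mapped, Session, mapped_column, relationship, selectinload,
)

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(primary_key=True)
    orders: Mapped[list["Order"]] = relationship(back_populates="user")

class Order(Base):
    __tablename__ = "orders"
    id: Mapped[int] = mapped_column(primary_key=True)
    user_id: Mapped[int] = mapped_column(ForeignKey("users.id"))
    user: Mapped["User"] = relationship(back_populates="orders")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    # N+1: one SELECT for the users, then one lazy SELECT per user.
    for user in session.scalars(select(User)).all():
        _ = user.orders

    # Fix: eager-load the relationship; the query count stays constant.
    stmt = select(User).options(selectinload(User.orders))
    for user in session.scalars(stmt).all():
        _ = user.orders
```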
12. Never Stop Learning
AI is a tool, not a replacement for knowledge.
| Stage | Focus | AI’s role |
|---|---|---|
| Fundamentals | Language, data structures, Git, testing | Not yet — learn without AI |
| AI as tutor | Concepts, best practices, debugging | AI explains, you implement |
| AI as co-pilot | Boilerplate, refactoring, testing | AI generates, you review |
| AI as multiplier | Architecture, migrations, performance | AI accelerates, you steer |
Remember: Official documentation > AI explanations. AI can teach you fast, but docs teach you right.