1. Provide Context
The more context you provide, the better the results.
Template:
Context → Task → Constraints → Stack → Quality expectations
❌
"Write code for user authentication"
✅
"Add a POST /auth/register endpoint
to our FastAPI backend.
Use JWT, bcrypt for passwords,
return 201 with user_id.
Follow the Repository Pattern in src/.
Stack: FastAPI 0.109, SQLAlchemy 2.0."
2. Work Iteratively
Don't generate everything at once.
Asking for an entire module in one prompt = 2000 lines that don’t work together.
3. Use Plan Mode
Many tools offer a planning mode (Claude Code, Cursor) in which the AI proposes a plan for you to approve before writing any code. Use it.
4. Write Tests First
Write tests before implementation. AI is excellent at implementing against a spec.
```python
def test_user_registration():
    response = client.post("/auth/register", json={
        "email": "test@example.com",
        "password": "SecurePass123!"
    })
    assert response.status_code == 201
    assert "user_id" in response.json()
```
Then: “Implement the register endpoint to pass these tests.”
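A minimal, framework-free sketch of logic that would satisfy those tests. This is illustrative only: `hashlib.pbkdf2_hmac` stands in for bcrypt, the `_USERS` dict stands in for the repository layer, and `register` returns a `(status, body)` tuple instead of real FastAPI wiring.

```python
import hashlib
import os
import uuid

_USERS: dict[str, dict] = {}  # toy in-memory store standing in for the repository


def register(email: str, password: str) -> tuple[int, dict]:
    """Return (status_code, body), mirroring the 201 + user_id contract."""
    if email in _USERS:
        return 409, {"detail": "email already registered"}
    # Hash the password with a per-user salt (bcrypt in the real stack).
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    user_id = str(uuid.uuid4())
    _USERS[email] = {"user_id": user_id, "salt": salt, "hash": digest}
    return 201, {"user_id": user_id}


status, body = register("test@example.com", "SecurePass123!")
assert status == 201 and "user_id" in body
```

The point is not the hashing details but the shape: the tests pin down the contract (201, `user_id` in the body), so any implementation the AI produces can be checked mechanically.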
Why this works: the tests are an executable spec. The AI gets a concrete target instead of a vague description, and you can verify its output simply by running them.
5. Review Everything
Every AI-generated line must be reviewed. No exceptions.
| Level | What | How |
|---|---|---|
| 1 | Automated | Linting, type checks, SAST, tests |
| 2 | AI-assisted | Copilot review, CodeRabbit |
| 3 | Human (required) | Logic, edge cases, security, performance |
Before approving, ask yourself: Do I understand every line? Are the edge cases handled? Is anything security-sensitive (input validation, auth, queries)? Would I have written it this way?
6. Choose the Right Model
| Task | Good choice |
|---|---|
| Quick inline completions | Copilot, Cursor Tab |
| Complex reasoning / architecture | Claude Opus, GPT-4.1 |
| Multi-file refactoring | Cursor Agent, Claude Code |
| Explanation / learning | ChatGPT, Claude |
Balance cost vs. quality:
Using a big model for a small task is overkill.
Using a small model for a big task is a comedy of errors.
7. Codify Your Conventions
Tell AI your conventions once. Enforce them everywhere.
Files: .cursorrules · CLAUDE.md · AGENTS.md · copilot-instructions.md
What to include:
```
# .cursorrules (keep it short!)
- FastAPI 0.109+ / SQLAlchemy 2.0 / Pydantic v2
- Clean Architecture: Domain → Application → Infrastructure
- Repository Pattern (NO Active Record)
- All inputs validated via Pydantic
- pytest with 80%+ coverage
- Black formatter, MyPy strict
```
Tip: If AI keeps making the same mistake, add a rule for it instead of correcting every time.
8. Keep Git Discipline
AI generates fast. Without git discipline you lose track.
Commit small and atomic:
```
# Good
git commit -m "feat: Add user registration endpoint"
git commit -m "test: Add tests for registration"
git commit -m "refactor: Extract validation logic"

# Bad
git commit -m "AI generated stuff"  # 2000 lines, 50 files
```
Optional: tag AI-generated code in commit messages:
```
[AI] feat: Generate initial auth implementation
[AI-REVIEW] fix: Correct SQL injection in login
[MANUAL] refactor: Improve error handling
```
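A commit-msg hook can nudge this convention. Here is a minimal POSIX sh sketch; the tag names follow the examples above, but the hook wiring (installing it as `.git/hooks/commit-msg`) is up to you:

```shell
#!/bin/sh
# Sketch of a commit-msg check for the optional provenance tags above.
check_msg() {
  case "$1" in
    "[AI]"*|"[AI-REVIEW]"*|"[MANUAL]"*) echo "tagged" ;;
    *) echo "warning: no provenance tag" ;;
  esac
}

check_msg "[AI] feat: Generate initial auth implementation"  # -> tagged
check_msg "refactor: untagged commit"                        # -> warning: no provenance tag
```

A warning (rather than a hard failure) keeps the tagging optional, as the section suggests.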
9. Automate Security Checks
AI-generated code has 1.7x more issues than human-written code. Automated checks are non-negotiable.
PR → Lint → Type check → Tests → SAST → Secret scan → Dependency scan → Human review → Merge
Tools: CodeQL, Semgrep, Dependabot, GitLeaks
10. Keep Documentation Alive
AI drafts docs. You curate them.
AI is good at: docstrings, API examples, changelog drafts.
You must write: README, ADRs, runbooks.
11. Watch for Performance Traps
AI optimizes for “works,” not “performs.”
Common traps: N+1 queries, missing indexes, loading entire datasets into memory, no caching.
Rule: Review all AI code that touches a database, loop, or external API.
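The N+1 trap above, sketched with a toy in-memory store (the data and function names are hypothetical, standing in for ORM queries):

```python
# Toy data standing in for database tables.
USERS = {1: "alice", 2: "bob", 3: "carol"}
ORDERS = [
    {"user_id": 1, "total": 10},
    {"user_id": 1, "total": 5},
    {"user_id": 2, "total": 7},
]


def fetch_orders_n_plus_1(user_ids):
    # N+1 shape: one "query" per user -- what AI often generates.
    return {uid: [o for o in ORDERS if o["user_id"] == uid] for uid in user_ids}


def fetch_orders_batched(user_ids):
    # One pass over the data, grouped in memory -- one query instead of N.
    wanted = set(user_ids)
    grouped = {uid: [] for uid in wanted}
    for order in ORDERS:
        if order["user_id"] in wanted:
            grouped[order["user_id"]].append(order)
    return grouped


# Same result, very different query count against a real database.
assert fetch_orders_n_plus_1([1, 2, 3]) == fetch_orders_batched([1, 2, 3])
```

Against an in-memory list both versions are instant, which is exactly why the trap survives review: it only hurts once each lookup is a network round trip.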
12. Never Stop Learning
| Stage | AI’s role |
|---|---|
| Fundamentals | Not yet — learn without AI |
| AI as tutor | AI explains, you implement |
| AI as co-pilot | AI generates, you review |
| AI as multiplier | AI accelerates, you steer |
If you can’t explain the code AI wrote, you’re not ready to ship it.
AI teaches you fast. Docs teach you right.