Prompting Best Practices
Prompting is a skill, not magic. Better prompts produce better code.
This page covers the craft of writing prompts — bad-to-good transformations, techniques, common mistakes, and tool-specific tips.
1. Bad Prompt → Good Prompt
The fastest way to improve: see what bad looks like, then fix it.
Vague vs. Specific
❌ "Fix the bug in the user service"
✅ "In src/services/user_service.py, the create_user function crashes
with AttributeError when email is None.
Fix the null check and add a test for this edge case."
Why: Names the file, the function, the error, and the expected deliverable.
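For instance, the fix and test this prompt asks for might look like the sketch below. `create_user`, its fields, and `ValidationError` are assumptions drawn from the prompt, not real project code:

```python
class ValidationError(Exception):
    pass

def create_user(email, password):
    # The null check the prompt asks for: fail fast with a clear error
    # instead of crashing later with AttributeError on email.lower()
    if email is None:
        raise ValidationError("email is required")
    return {"email": email.lower(), "password": password}

# The edge-case test the prompt requires as a deliverable
def test_create_user_rejects_none_email():
    try:
        create_user(None, "secret")
    except ValidationError:
        pass
    else:
        raise AssertionError("expected ValidationError")
```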
Kitchen-sink vs. Focused
❌ "Build a complete authentication system with registration, login,
password reset, 2FA, roles, sessions, and admin panel"
✅ "Add a POST /auth/register endpoint.
- Accept email + password
- Hash the password with bcrypt
- Return 201 with user_id
- Return 409 if email already exists
Stop after this endpoint. We will add login next."
Why: One deliverable, clear acceptance criteria, explicit scope boundary.
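What that prompt might produce, reduced to a dependency-free sketch: hashlib's scrypt stands in for bcrypt, a plain dict stands in for the user table, and all names are assumptions rather than real project code.

```python
import hashlib
import os

db = {}  # email -> record; stand-in for a real user table

def register(email: str, password: str) -> tuple[int, dict]:
    """Return (status_code, body) matching the acceptance criteria."""
    if email in db:
        return 409, {"error": "email already exists"}
    salt = os.urandom(16)
    # Modest cost parameters, chosen to keep the sketch fast
    hashed = hashlib.scrypt(password.encode(), salt=salt, n=2**12, r=8, p=1)
    user_id = len(db) + 1
    db[email] = {"user_id": user_id, "salt": salt, "hash": hashed}
    return 201, {"user_id": user_id}
```

Note how each acceptance criterion in the prompt maps to one visible line of the sketch, which is what makes the result easy to review.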
No constraints vs. Constrained
❌ "Write a database query for user search"
✅ "Write a SQLAlchemy 2.0 query that searches users by name.
Use the existing async session from src/db.py.
Use ILIKE for case-insensitive matching.
Return a list of UserSchema objects.
Do not use raw SQL."
Why: Names the ORM version, existing patterns, return type, and a negative constraint.
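The prompt targets SQLAlchemy, but the constraints it encodes (parameterized input, case-insensitive matching, no raw string interpolation) can be illustrated with stdlib sqlite3, whose LIKE is case-insensitive for ASCII much as ILIKE is in Postgres. A sketch, with all table and column names assumed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [("Alice",), ("alicia",), ("Bob",)])

def search_users(conn, name: str) -> list:
    # Parameterized query: the search term is bound, never interpolated.
    # SQLite's LIKE is case-insensitive for ASCII, mirroring ILIKE.
    return conn.execute(
        "SELECT id, name FROM users WHERE name LIKE ?",
        (f"%{name}%",),
    ).fetchall()
```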
No output format vs. Shaped output
❌ "Explain what this code does"
✅ "Explain what src/services/billing.py does.
Use this format:
- Purpose (1 sentence)
- Main functions (bullet list with one-line descriptions)
- Dependencies (which other modules it imports)
- Potential issues (if any)"
Why: Tells the model exactly what shape the answer should take.
Implicit context vs. Explicit context
❌ "Add error handling"
✅ "Add error handling to the create_order function in
@src/services/order_service.py.
Follow the pattern used in @src/services/user_service.py.
Use our custom AppError class from @src/errors.py.
Log errors with structlog."
Why: References concrete files the model can read, names the pattern to follow.
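A sketch of the kind of pattern that prompt points at. `AppError` and `create_order` are the hypothetical names from the prompt, and stdlib logging stands in for structlog:

```python
import logging

logger = logging.getLogger("orders")

class AppError(Exception):
    """Hypothetical custom error class the prompt refers to."""
    def __init__(self, message: str, code: str):
        super().__init__(message)
        self.code = code

def create_order(items: list) -> dict:
    # The "pattern" the prompt names: validate inputs, wrap failures
    # in AppError with a machine-readable code, log before raising.
    if not items:
        err = AppError("order must contain at least one item",
                       code="EMPTY_ORDER")
        logger.error("create_order failed", extra={"code": err.code})
        raise err
    return {"status": "created", "items": items}
```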
No stop condition vs. Gated prompt
❌ "Refactor the payment module"
✅ "Refactor the payment module in @src/payments/.
Step 1: List the files you would change and explain why.
Stop. Wait for my approval before editing anything."
Why: Prevents the model from running ahead. Keeps you in control.
2. Prompt Engineering Techniques
The Prompt Engineering Guide has an extensive list of techniques with examples and explanations. Here are some of the most useful for coding tasks:
Role Assignment
Tell the model who it is. This shapes tone, depth, and focus.
"You are a senior Python developer reviewing a junior's PR.
Focus on security, error handling, and naming.
Be specific — cite line numbers."
Chain-of-Thought
Ask the model to show its reasoning before giving an answer.
"Before suggesting a fix, walk me through:
1. What the current code does step by step
2. Where the bug likely is and why
3. Your proposed fix and why it is correct"
Few-Shot Examples
Show the model the pattern you want by providing an example, then ask it to continue.
"Here is how we write service tests in this project:
[paste one existing test as example]
Now write similar tests for the OrderService.create_order method.
Cover: success, missing fields, duplicate order ID."
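As a concrete illustration, the "paste one existing test" step might supply something like the first test below, and the model continues the pattern for the remaining cases. `OrderService` here is a minimal stand-in, and unittest substitutes for whatever framework the project actually uses:

```python
import unittest

class OrderService:
    """Hypothetical stand-in for the project's real service."""
    def __init__(self):
        self.orders = {}

    def create_order(self, order_id: str, items: list) -> dict:
        if not order_id or not items:
            raise ValueError("order_id and items are required")
        if order_id in self.orders:
            raise ValueError("duplicate order ID")
        self.orders[order_id] = items
        return {"order_id": order_id, "items": items}

class TestCreateOrder(unittest.TestCase):
    def test_success(self):
        svc = OrderService()
        self.assertEqual(svc.create_order("o1", ["book"])["order_id"], "o1")

    def test_missing_fields(self):
        with self.assertRaises(ValueError):
            OrderService().create_order("", [])

    def test_duplicate_order_id(self):
        svc = OrderService()
        svc.create_order("o1", ["book"])
        with self.assertRaises(ValueError):
            svc.create_order("o1", ["pen"])
```

The point of few-shot prompting is that the model copies the structure of the example (naming, setup, assertion style), so one pasted test buys consistency across everything it writes next.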
Negative Constraints
Tell the model what NOT to do. Models follow these surprisingly well.
"Do NOT:
- Add new dependencies
- Change the public API
- Use print() for logging (use structlog)
- Modify any migration files"
Structured Output
When you need a specific format, define it explicitly.
"Return your answer as a markdown table with columns:
| File | Change | Reason |"
3. Common Prompting Mistakes
| Mistake | What happens | Fix |
|---|---|---|
| Too vague | Model guesses wrong | Name files, functions, error messages |
| Too broad | 2000 lines of tangled code | One function, one endpoint, one test at a time |
| No stop condition | Model rewrites half the codebase | “Stop after X. Wait for confirmation.” |
| No context files | Model ignores your conventions | Use @-mentions or paste the relevant code |
| Correcting mid-stream | Chat gets confused, quality drops | Start a fresh chat with an improved prompt |
| Prompt is too long | Key instructions get buried | Lead with the most important constraint |
| Assuming correctness | Bugs slip through | Always run tests, always read the diff |
4. Task Decomposition
Big tasks need to be broken into promptable steps. Don’t ask AI to build a house — ask it to lay one brick at a time.
The Pattern
- Clarify the goal — write one sentence about what “done” looks like
- List the parts — what files, layers, or components are involved?
- Sequence the prompts — each prompt = one clear deliverable
Worked Example
Fuzzy request: “We need user notifications”
Prompt 1 (Ask mode):
"What notification patterns exist in this codebase already?
Check @src/ for any existing email, webhook, or event code."
Prompt 2 (Plan mode):
"Design a notification service that can send email and in-app
notifications when an order is completed.
List the files to create/modify. Stop after the plan."
Prompt 3 (Agent mode):
"Create the Notification model and migration based on the plan.
Add tests for the model. Do not implement the service yet."
Prompt 4 (Agent mode):
"Implement NotificationService.send() following the plan.
Use the pattern from @src/services/user_service.py.
Run all tests before finishing."
Each prompt is small, has a clear stop point, and builds on the previous result.
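For example, Prompt 3 might yield a model along these lines. The fields and channels are assumptions for illustration; a real version would be an ORM model with a migration:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Channel(Enum):
    EMAIL = "email"
    IN_APP = "in_app"

@dataclass
class Notification:
    user_id: int
    channel: Channel
    message: str
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    sent: bool = False
```

Because Prompt 3 explicitly forbade implementing the service, the reviewable surface stays this small: one model, one migration, their tests.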
5. Prompt Starter Kit
Copy-paste these for common tasks. Fill in the [brackets].
Debug an error
I'm seeing this error in @[file]:
[paste error + stack trace]
It happens when [describe the action].
Walk me through: (1) what the code currently does, (2) why this error occurs,
(3) the fix with explanation. Do not change anything else.
Add tests
Write tests for [function/class] in @[file].
Use the pattern from @[nearest existing test file].
Cover:
- Happy path
- [edge case 1]
- [edge case 2]
Do not modify the implementation. Only add test code.
Code review
Review @[file] as a senior [language] engineer.
Focus on: security, error handling, naming, and edge cases.
Format your feedback as a table: | Line | Issue | Suggestion |
Do not rewrite the file — only flag issues.
Explain unfamiliar code
Explain what [function/module] in @[file] does.
Format:
- Purpose (1 sentence)
- Inputs and outputs
- Side effects or external calls
- Anything that could go wrong
Refactor safely
Refactor [function] in @[file] to [goal — e.g. extract helper, reduce nesting].
Constraints:
- Do NOT change the public API or function signature
- Do NOT add new dependencies
- Keep existing tests passing
First: list what you plan to change and why. Stop. Wait for my approval.
Add a feature (one endpoint / one function)
Add [specific endpoint or function].
- Input: [describe]
- Output: [describe, including status codes if HTTP]
- Follow the pattern in @[reference file]
- Do not modify unrelated code
- Stop when the function + tests are done
Understand why something is slow
Profile this function in @[file] conceptually.
Identify the most expensive operations (DB calls, loops, I/O).
Suggest the top 1–2 optimizations. Do not change code yet.
6. Tool-Specific Tips
Cursor
- Use @-mentions (@src/, @tests/) to scope context
- .cursorrules defines project conventions — update it when AI keeps getting something wrong
- Use Plan mode for multi-file changes, inline edit (Cmd/Ctrl+K) for single-function fixes
- Composer can edit multiple files — review each file’s diff separately
GitHub Copilot
- Write a clear function signature + docstring first → Copilot completes the body
- Open relevant files in adjacent tabs — Copilot reads open tabs for context
- Use copilot-instructions.md in .github/ for repo-wide rules
- Copilot Chat: use /explain and /fix slash commands for common tasks
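An example of the signature-plus-docstring pattern: you write everything up to and including the docstring, and Copilot proposes a body. `parse_duration` is a hypothetical function, and the body shown is one plausible completion:

```python
def parse_duration(value: str) -> int:
    """Parse a duration string like '2h', '30m', or '45s' into seconds.

    Raises ValueError for an unknown unit or a non-numeric amount.
    """
    # A completion like the following is what the signature + docstring
    # above is steering toward:
    units = {"h": 3600, "m": 60, "s": 1}
    amount, unit = value[:-1], value[-1]
    if unit not in units or not amount.isdigit():
        raise ValueError(f"invalid duration: {value!r}")
    return int(amount) * units[unit]
```

The docstring does double duty here: it documents the function for humans and constrains the completion (accepted formats, error behavior) for the model.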
Claude Code (CLI)
- Use /plan to design before executing
- Reference files with full paths — the CLI reads them directly
- CLAUDE.md in the repo root sets project conventions
- Keep prompts short in the terminal; let the tool read files for context
ChatGPT / Web-based LLMs
- Paste only the relevant snippet, not the whole file
- State the language, framework, and version up front — the model has no project context
- Ask for one thing at a time; iterate with follow-ups
- Copy the result into your editor and run tests before trusting it
Resources
| Resource | What it covers |
|---|---|
| Prompt Engineering Guide | Comprehensive techniques with examples (CoT, few-shot, RAG, etc.) |
| Anthropic Prompt Library | Ready-made prompts for common tasks from Anthropic |
| OpenAI Cookbook | Practical recipes — many patterns apply to any LLM |
| Cursor Directory | Community .cursorrules files by stack/framework |
| GitHub Copilot Docs — Prompting | Copilot-specific prompting best practices |
Quick Reference
Before you prompt, check:
- Did I name the specific file(s) and function(s)?
- Did I state what “done” looks like?
- Did I set constraints (what NOT to do)?
- Did I add a stop condition?
- Is this prompt small enough for one reviewable change?
- Did I reference project conventions (rules files, examples)?