Prompting is a skill.
Not magic.
Better prompts produce better code.
❌ Vague
"Fix the bug in the user service"
The model has to guess which bug, which file, and what the fix should look like.
✅ Specific
"In src/services/user_service.py,
create_user crashes with
AttributeError when email is None.
Fix the null check and add a
test for this edge case."
Names the file, the function, the error, and the deliverable.
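To make the difference concrete, here is roughly the change such a prompt asks for. This is a hypothetical sketch; the real user_service.py and User model will look different:

```python
from dataclasses import dataclass

import pytest


@dataclass
class User:
    # Stand-in for the real model; the actual project defines its own
    email: str
    name: str


def create_user(email: str | None, name: str) -> User:
    # The missing null check: email.lower() previously raised
    # AttributeError when email was None.
    if email is None:
        raise ValueError("email is required")
    return User(email=email.lower(), name=name)


def test_create_user_rejects_none_email():
    # The edge case the prompt explicitly asks to cover
    with pytest.raises(ValueError):
        create_user(email=None, name="Ada")
```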
❌ Too broad
"Build a complete authentication
system with registration, login,
password reset, 2FA, roles,
sessions, and admin panel"
2000 lines of tangled code.
✅ One deliverable
"Add a POST /auth/register endpoint.
- Accept email + password
- Hash with bcrypt
- Return 201 with user_id
- Return 409 if email exists
Stop after this endpoint."
Clear scope. Clear stop condition.
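A prompt scoped like this maps to roughly one small file. A sketch assuming FastAPI and the bcrypt package; the in-memory store and response shape are placeholders, not part of the prompt:

```python
import uuid

import bcrypt
from fastapi import APIRouter, HTTPException
from pydantic import BaseModel

router = APIRouter()
_users: dict[str, bytes] = {}  # in-memory stand-in for the real user table


class RegisterRequest(BaseModel):
    email: str
    password: str


@router.post("/auth/register", status_code=201)
def register(body: RegisterRequest) -> dict[str, str]:
    if body.email in _users:
        # 409 when the email already exists
        raise HTTPException(status_code=409, detail="email already registered")
    # Hash with bcrypt; never store the plain-text password
    _users[body.email] = bcrypt.hashpw(body.password.encode(), bcrypt.gensalt())
    return {"user_id": str(uuid.uuid4())}
```

The point is not this exact code: it is that each bullet in the prompt maps to one reviewable behavior.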
❌ No constraints
"Write a database query
for user search"
The model picks the ORM, the style, and the return type at random.
✅ Constrained
"Write a SQLAlchemy 2.0 query
that searches users by name.
Use the async session from src/db.py.
Use ILIKE for case-insensitive matching.
Return a list of UserSchema objects.
Do not use raw SQL."
Names version, existing patterns, return type, and a negative constraint.
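Under those constraints, the answer is largely pinned down. A sketch assuming a `User` ORM model, a `UserSchema` Pydantic model, and an `async_session` factory in src/db.py, names the prompt implies rather than guarantees:

```python
from pydantic import BaseModel
from sqlalchemy import select

from src.db import async_session   # assumed async session factory
from src.models import User        # assumed ORM model with a `name` column


class UserSchema(BaseModel):
    model_config = {"from_attributes": True}

    id: int
    name: str


async def search_users(term: str) -> list[UserSchema]:
    async with async_session() as session:
        # ILIKE gives case-insensitive matching without dropping to raw SQL
        stmt = select(User).where(User.name.ilike(f"%{term}%"))
        result = await session.execute(stmt)
        return [UserSchema.model_validate(row) for row in result.scalars()]
```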
Role Assignment
"You are a senior Python developer
reviewing a junior's PR.
Focus on security, error handling,
and naming. Cite line numbers."
Few-Shot Examples
"Here is how we write service tests:
[paste one test]
Now write similar tests for
OrderService.create_order.
Cover: success, missing fields,
duplicate order ID."
Chain-of-Thought
"Before suggesting a fix, walk me through:
1. What the current code does
2. Where the bug likely is and why
3. Your proposed fix and why"
Negative Constraints
"Do NOT:
- Add new dependencies
- Change the public API
- Use print() — use structlog
- Modify migration files"
| Mistake | What happens | Fix |
|---|---|---|
| Too vague | Model guesses wrong | Name files, functions, error messages |
| Too broad | 2000 lines of tangled code | One function, one endpoint, one test at a time |
| No stop condition | Model rewrites half the codebase | “Stop after X. Wait for confirmation.” |
| No context files | Model ignores your conventions | Use @-mentions or paste the relevant code |
| Correcting mid-stream | Chat gets confused, quality drops | Start a fresh chat with an improved prompt |
| Prompt is too long | Key instructions get buried | Lead with the most important constraint |
| Assuming correctness | Bugs slip through | Always run tests, always read the diff |
Big tasks need to be broken into promptable steps.
Don’t ask AI to build a house — ask it to lay one brick at a time.
Fuzzy request: “We need user notifications”
Prompt 1 (Ask): "What notification patterns exist in this codebase?
Check @src/ for any existing email, webhook, or event code."
Prompt 2 (Plan): "Design a notification service for order completion.
List the files to create/modify. Stop after the plan."
Prompt 3 (Agent): "Create the Notification model and migration.
Add tests for the model. Do not implement the service yet."
Prompt 4 (Agent): "Implement NotificationService.send() following
the plan. Use the pattern from @src/services/user_service.py.
Run all tests before finishing."
Each prompt is small, has a clear stop point, and builds on the previous result.
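As an illustration, Prompt 3's deliverable might be as small as this. SQLAlchemy 2.0 style is assumed, and the column names are invented rather than specified by the prompts above:

```python
from datetime import datetime, timezone

from sqlalchemy import DateTime, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class Notification(Base):
    __tablename__ = "notifications"

    id: Mapped[int] = mapped_column(primary_key=True)
    user_id: Mapped[int] = mapped_column(index=True)
    event: Mapped[str] = mapped_column(String(64))  # e.g. "order.completed"
    created_at: Mapped[datetime] = mapped_column(
        DateTime(timezone=True),
        default=lambda: datetime.now(timezone.utc),
    )


def test_notification_event_is_stored():
    n = Notification(user_id=1, event="order.completed")
    assert n.event == "order.completed"
```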
Cursor
- @-mentions to scope context to specific files
- .cursorrules defines project conventions — update it when AI keeps getting something wrong
- Cmd/Ctrl+K for single-function fixes

GitHub Copilot
- .github/copilot-instructions.md for repo-wide rules

Claude Code
- /plan to design before executing
- CLAUDE.md sets project conventions (example below)
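These convention files (.cursorrules, copilot-instructions.md, CLAUDE.md) all do the same job: state the rules the model keeps forgetting. A minimal sketch, using rules drawn from earlier examples in this article rather than from any real project:

```
# Project conventions (CLAUDE.md / .cursorrules / copilot-instructions.md)
- SQLAlchemy 2.0 with the async session from src/db.py; no raw SQL
- Logging goes through structlog, never print()
- Do not add new dependencies or modify migration files
- Every change ships with tests; run them before finishing
```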
ChatGPT / Web LLMs

Before you prompt, check:
- Did you name the files, functions, and error messages involved?
- Is the scope one deliverable with a clear stop condition?
- Did you state the constraints: versions, existing patterns, things not to touch?
- Did you @-mention or paste the relevant context files?
Better prompts.
Better code.
Less cleanup.