AI-Assisted Development

Prompting Best Practices

Workshop — Part 5: Writing prompts that actually work

4 / 12 — Prompting

Prompting is a skill.

Not magic.

Better prompts produce better code.

Vague vs. Specific

Vague

"Fix the bug in the user service"

The model has to guess which bug, which file, and what the fix should look like.

Specific

"In src/services/user_service.py,
create_user crashes with
AttributeError when email is None.
Fix the null check and add a
test for this edge case."

Names the file, the function, the error, and the deliverable.

Kitchen-Sink vs. Focused

Too broad

"Build a complete authentication
system with registration, login,
password reset, 2FA, roles,
sessions, and admin panel"

2000 lines of tangled code.

One deliverable

"Add a POST /auth/register endpoint.
- Accept email + password
- Hash with bcrypt
- Return 201 with user_id
- Return 409 if email exists

Stop after this endpoint."

Clear scope. Clear stop condition.
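What that scoped prompt might yield, sketched framework-free: a plain function stands in for the route handler, an in-memory dict for the user table, and hashlib for bcrypt so the example has no dependencies. The real deliverable would honor the prompt's bcrypt constraint:

```python
import hashlib
import uuid

# In-memory stand-in for the user table.
_users: dict[str, str] = {}


def register(email: str, password: str) -> tuple[int, dict]:
    """Handle POST /auth/register: return (status_code, body)."""
    if email in _users:
        # Duplicate email, as the prompt specified.
        return 409, {"error": "email already registered"}
    # Real code would hash with bcrypt per the prompt's constraint.
    _users[email] = hashlib.sha256(password.encode()).hexdigest()
    return 201, {"user_id": str(uuid.uuid4())}
```

One endpoint, both status codes, nothing else: the "stop after this endpoint" line keeps the model from scaffolding login and password reset too.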

No Constraints vs. Constrained

No constraints

"Write a database query
for user search"

The model picks an ORM, a style, and a return type arbitrarily.

Constrained

"Write a SQLAlchemy 2.0 query
that searches users by name.
Use the async session from src/db.py.
Use ILIKE for case-insensitive matching.
Return a list of UserSchema objects.
Do not use raw SQL."

Names version, existing patterns, return type, and a negative constraint.

Prompt Engineering Techniques

Role Assignment

"You are a senior Python developer
reviewing a junior's PR.
Focus on security, error handling,
and naming. Cite line numbers."

Few-Shot Examples

"Here is how we write service tests:
[paste one test]

Now write similar tests for
OrderService.create_order.
Cover: success, missing fields,
duplicate order ID."

Chain-of-Thought

"Before suggesting a fix, walk me through:
1. What the current code does
2. Where the bug likely is and why
3. Your proposed fix and why"

Negative Constraints

"Do NOT:
- Add new dependencies
- Change the public API
- Use print() — use structlog
- Modify migration files"

Common Prompting Mistakes

  • Too vague → model guesses wrong. Fix: name files, functions, and error messages.
  • Too broad → 2000 lines of tangled code. Fix: one function, one endpoint, one test at a time.
  • No stop condition → model rewrites half the codebase. Fix: “Stop after X. Wait for confirmation.”
  • No context files → model ignores your conventions. Fix: use @-mentions or paste the relevant code.
  • Correcting mid-stream → chat gets confused, quality drops. Fix: start a fresh chat with an improved prompt.
  • Prompt too long → key instructions get buried. Fix: lead with the most important constraint.
  • Assuming correctness → bugs slip through. Fix: always run tests, always read the diff.

Task Decomposition

Big tasks need to be broken into promptable steps.

Don’t ask AI to build a house — ask it to lay one brick at a time.

Pattern

  1. Clarify the goal — one sentence: what does “done” look like?
  2. List the parts — which files, layers, or components are involved?
  3. Sequence the prompts — each prompt = one clear deliverable

Decomposition in Practice

Fuzzy request: “We need user notifications”

Prompt 1 (Ask): "What notification patterns exist in this codebase?
Check @src/ for any existing email, webhook, or event code."

Prompt 2 (Plan): "Design a notification service for order completion.
List the files to create/modify. Stop after the plan."

Prompt 3 (Agent): "Create the Notification model and migration.
Add tests for the model. Do not implement the service yet."

Prompt 4 (Agent): "Implement NotificationService.send() following
the plan. Use the pattern from @src/services/user_service.py.
Run all tests before finishing."

Each prompt is small, has a clear stop point, and builds on the previous result.

Tool-Specific Tips

Cursor

  • @-mentions to scope context to specific files
  • .cursorrules defines project conventions — update it when AI keeps getting something wrong
  • Plan mode for multi-file changes; Cmd/Ctrl+K for single-function fixes
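A hypothetical .cursorrules fragment of the kind the second bullet refers to; the specific rules are illustrative, not prescriptive:

```
# Project conventions
- Python 3.12, type hints on all public functions
- Tests live in tests/, pytest style, one test file per module
- Use structlog for logging; never print()
- Never modify files under migrations/
```

When the AI keeps repeating the same mistake, adding a rule here fixes it once instead of re-stating it in every prompt.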

GitHub Copilot

  • Write a clear function signature + docstring first → Copilot fills the body
  • Open relevant files in adjacent tabs — Copilot reads all open tabs
  • .github/copilot-instructions.md for repo-wide rules
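The signature-first tip in practice: you write the signature and docstring, and Copilot proposes a body. parse_duration is an invented example, and the body shown is one plausible completion that you would still review and test:

```python
import re


def parse_duration(text: str) -> int:
    """Parse a duration like '2h30m' into total seconds.

    Supports h, m, and s units. Raises ValueError on malformed input.
    """
    # Everything below the docstring is the kind of body Copilot
    # might fill in from the signature and docstring alone.
    units = {"h": 3600, "m": 60, "s": 1}
    matches = re.findall(r"(\d+)([hms])", text)
    if not matches or "".join(n + u for n, u in matches) != text:
        raise ValueError(f"malformed duration: {text!r}")
    return sum(int(n) * units[u] for n, u in matches)
```

The docstring does the prompting: it names the input format, the units, and the error behavior, so the suggestion has something concrete to satisfy.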

Claude Code

  • Plan mode to design before executing
  • CLAUDE.md sets project conventions
  • Reference files with full paths — the CLI reads them directly

ChatGPT / Web LLMs

  • Paste only the relevant snippet, not the whole file
  • State language, framework, and version up front
  • Ask for one thing at a time; iterate with follow-ups
  • Copy to your editor and run tests before trusting it

Quick Reference

Before you prompt, check:

  • Did I name the specific file(s) and function(s)?
  • Did I state what “done” looks like?
  • Did I set constraints (what NOT to do)?
  • Did I add a stop condition?
  • Is this prompt small enough for one reviewable change?
  • Did I reference project conventions (rules files, examples)?

Better prompts.

Better code.

Less cleanup.