
Workflows

Note

This document describes generic workflows for AI-assisted development.
It is tool-agnostic but uses Cursor-style concepts (chat, inline edits, agents) in examples.

1. Feature Development with AI

1.1 High-level workflow

  1. Clarify requirements
    • Write a short problem statement and 2–3 user stories.
    • Capture constraints (stack, performance, security, compliance).
  2. Design first (with AI)
    • Describe the feature and ask for architecture options.
    • Decide on endpoints, data models, and responsibilities before code.
  3. Define tests
    • Ask the assistant to propose test cases.
    • Turn them into unit/integration tests in your repo (a test sketch follows this list).
  4. Generate the implementation
    • Use AI to scaffold code (inline completions or multi-file edits).
    • Keep changes small, focused, and reviewable.
  5. Review and harden
    • Run tests, linters, and security checks.
    • Manually review all AI-generated code.
  6. Commit and document
    • Create small commits with clear messages.
    • Update README, ADRs, or rules files if you changed behavior.
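
For step 3, a minimal sketch of how AI-proposed test cases might become real pytest tests, using the "mark a todo as completed" feature from section 1.2 below as the example. Todo, TodoNotFoundError, complete_todo, and the in-memory FakeRepo are hypothetical stand-ins for your project's actual code:

import pytest

# Hypothetical names: adjust to your project's real modules.
from todo_service import Todo, TodoNotFoundError, complete_todo


class FakeRepo:
    """In-memory stand-in for the real repository."""

    def __init__(self, todos):
        self._todos = {t.id: t for t in todos}

    def get(self, todo_id):
        return self._todos.get(todo_id)

    def save(self, todo):
        self._todos[todo.id] = todo
        return todo


def test_complete_marks_todo_as_completed():
    repo = FakeRepo([Todo(id=1, title="write docs", completed=False)])
    assert complete_todo(1, repo).completed is True


def test_complete_is_idempotent():
    # Completing an already-completed todo succeeds and changes nothing.
    repo = FakeRepo([Todo(id=1, title="write docs", completed=True)])
    assert complete_todo(1, repo).completed is True


def test_complete_unknown_todo_raises():
    with pytest.raises(TodoNotFoundError):
        complete_todo(999, FakeRepo([]))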

Tip

Treat the assistant as a pair programmer: it helps with options, boilerplate, and refactors, while you own design, trade-offs, and final decisions.

1.2 Example prompt: feature design

Context: @src @tests @docs

Task: Add a feature so that users can mark a todo as completed.

Requirements:
- Keep the existing API style.
- Preserve current data model where possible.
- Ensure the operation is idempotent.

Please:
1. Propose a small design change (DB, API, service layer).
2. List 3–5 test cases we should cover.
3. Stop after the design and test list – do not write code yet.
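
Step 4 of the workflow then turns the agreed design into code. One plausible shape for the service-layer change, matching the test sketch in 1.1; the repository interface and all names are hypothetical. The early return for an already-completed todo is what makes the operation idempotent:

from dataclasses import dataclass


@dataclass
class Todo:
    id: int
    title: str
    completed: bool = False


class TodoNotFoundError(Exception):
    pass


def complete_todo(todo_id: int, repo) -> Todo:
    """Mark a todo as completed; safe to call repeatedly (idempotent)."""
    todo = repo.get(todo_id)
    if todo is None:
        raise TodoNotFoundError(todo_id)
    if todo.completed:
        return todo  # already completed: no write, same result
    todo.completed = True
    return repo.save(todo)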

2. Bug Fixing with AI

2.1 Minimal bug-fix workflow

  1. Reproduce
    • Capture logs, stack trace, and an exact “steps to reproduce”.
  2. Create a regression test
    • Write a test that currently fails with the bug (see the sketch after this list).
  3. Let AI suggest causes
    • Provide the error and relevant files, then ask “What could cause this?”
  4. Let AI propose a fix
    • Ask for one focused change, not a rewrite.
  5. Run tests & review
    • Ensure all tests (old + new) pass.
    • Manually review fix and surrounding code.
  6. Document
    • Mention the bug id/description in the commit message.
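
For step 2, the duplicate-email bug from section 2.2 below makes a good example. A regression test sketch in pytest; create_user, DuplicateEmailError, and the db fixture are hypothetical names to replace with the real ones:

import pytest

# Hypothetical names: point these at the real service under test.
from user_service import create_user, DuplicateEmailError


def test_duplicate_email_raises_clean_error(db):
    create_user(db, email="taken@example.com", name="First User")

    # Before the fix this crashed with:
    #   AttributeError: 'NoneType' object has no attribute 'id'
    with pytest.raises(DuplicateEmailError):
        create_user(db, email="taken@example.com", name="Second User")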

Note

AI is particularly useful for hypothesis generation and for navigating unfamiliar code, but the final fix must still be validated with tests and reasoning.

2.2 Example prompt: bug analysis

Bug: When creating a user with an existing email, the API crashes with
"AttributeError: 'NoneType' object has no attribute 'id'".

Context:
- Error log: [paste stack trace]
- Relevant files: @src/user_service.py @src/user_repository.py @tests/test_users.py

Tasks:
1. Identify the most likely root cause.
2. Propose a regression test that reproduces this bug.
3. Suggest a minimal fix that keeps existing behavior.
Stop after showing the test and the diff for the fix.
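
The right fix depends on the actual code, but an AttributeError like this usually means a lookup or insert returned None and the caller dereferenced it. One plausible minimal diff, assuming (hypothetically) that the repository returns None instead of raising when the email already exists:

 def create_user(email: str, name: str) -> User:
-    user = user_repository.insert(email=email, name=name)
-    send_welcome_email(user.id)  # user is None for duplicates -> crash
+    if user_repository.find_by_email(email) is not None:
+        raise DuplicateEmailError(email)
+    user = user_repository.insert(email=email, name=name)
+    send_welcome_email(user.id)
     return user

A check-then-insert like this still races under concurrency, so a real fix would typically also rely on a unique constraint on the email column.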

3. Legacy Refactoring with AI

3.1 Safe refactoring workflow

  1. Map the area
    • Use search and code navigation to find where the concept lives.
    • Write down a short description of current behavior.
  2. Define a narrow slice
    • “Refactor validation logic in UserService only” instead of “rewrite UserService”.
  3. Freeze behavior with tests
    • Add or improve tests around the slice you’re about to touch (a characterization-test sketch follows this list).
  4. Let AI propose a plan
    • Ask for a short, numbered refactoring plan.
  5. Apply changes step by step
    • After each step: run tests, review diffs, commit.
  6. Clean up
    • Remove dead code, update docs, remove feature flags if used.
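
The "freeze behavior" tests in step 3 are usually characterization tests: they assert what the code does today, not what it ideally should do. A sketch, with UserService and ValidationError as hypothetical names:

import pytest

# Hypothetical names: use your project's real service and error types.
from user_service import UserService, ValidationError


@pytest.mark.parametrize(
    "payload",
    [
        {"email": "", "name": "Ann"},              # empty email
        {"email": "not-an-email", "name": "Ann"},  # malformed email
        {"email": "a@b.com", "name": ""},          # empty name
    ],
)
def test_current_validation_rejects_bad_payloads(payload):
    # Characterization: pin down today's behavior before refactoring.
    with pytest.raises(ValidationError):
        UserService().validate(payload)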

3.2 Example prompt: refactor plan

We want to refactor the user validation logic in @src/user_service.py.

Constraints:
- Keep all public APIs and behavior the same.
- Use existing pydantic models and error types.
- Do NOT touch database migrations or routes.

Please:
1. Explain briefly how validation currently works.
2. Propose a small refactor plan in at most 4 steps.
3. Wait for my confirmation before editing any files.
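
What the plan looks like depends on the code, but a common first step is moving scattered inline checks into the pydantic models the constraints mention. A sketch of that step, in pydantic v2 syntax; UserPayload stands in for whatever model the project already has:

from pydantic import BaseModel, field_validator


class UserPayload(BaseModel):
    email: str
    name: str

    @field_validator("email")
    @classmethod
    def email_must_contain_at(cls, value: str) -> str:
        if "@" not in value:
            raise ValueError("invalid email address")
        return value


class UserService:
    def validate(self, payload: dict) -> UserPayload:
        # Before: ad-hoc if/raise checks scattered through the method.
        # After: a single entry point into the shared pydantic model.
        return UserPayload(**payload)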

4. Code Review with AI Support

4.1 Two-level review workflow

  1. AI pre-review
    • Use an assistant or code-review tool to scan for obvious issues:
      • style, missing tests, common bugs, security smells.
  2. Human review
    • Focus on business logic, architecture fit, and long-term maintainability.

Warning

AI review can highlight many issues quickly, but it cannot replace human review, especially for business logic, security, and long-term maintainability.

4.2 Example prompt: review checklist

Review this diff for:

1. Logic correctness and edge cases
2. Security issues (auth, validation, injections)
3. Performance pitfalls (N+1 queries, unnecessary loops)
4. Consistency with existing architecture (patterns in @docs and @src)
5. Testing gaps (missing unit/integration tests)

Suggest concrete, minimal improvements.

[Paste diff or use @ to reference PR branch]
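
As a concrete instance of checklist item 3, the N+1 query pattern is worth recognizing on sight. A sketch with hypothetical repository objects; the fix replaces one query per user with a single query grouped in memory:

from collections import defaultdict


def order_counts_slow(user_repo, order_repo):
    # N+1: one query for the users, then one more query per user.
    counts = {}
    for user in user_repo.all():                             # 1 query
        counts[user.id] = len(order_repo.for_user(user.id))  # N queries
    return counts


def order_counts_fast(user_repo, order_repo):
    # Fix: fetch all orders once and group them in memory.
    per_user = defaultdict(int)
    for order in order_repo.all():                           # 1 query
        per_user[order.user_id] += 1
    return {user.id: per_user[user.id] for user in user_repo.all()}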

Use this together with your normal code review habits – the AI should augment, not replace, human reviewers.