Problems
AI-assisted coding is useful, but it comes with predictable failure modes. The biggest risk is not that the model is useless — it’s that it is often plausible when it is wrong.
1. Trust & Hallucinations
AI sounds confident even when it is wrong. It doesn’t show uncertainty or tell you when it’s guessing. Worse, it may invent functions, APIs, or library features that don’t exist — and make them look idiomatic.
Example:
```python
# AI generated with 100% confidence:
def connect_database():
    import mysql.connector  # ⚠️ verify: is this library real, current, and correctly named?
    conn = mysql.connector.connect(...)
```
Warning: always double-check the following against a trusted source:
- Library names, method signatures, and versions against official docs
- Security-critical code (encryption, auth, authorization)
- Edge cases, error handling, and return values
Rule of thumb: Treat AI output as a draft. Verify before merging, prefer small commits, and write tests for the behaviour you expect.
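One cheap first check is whether the modules an AI draft imports even resolve in your environment. A minimal sketch using the standard library's `importlib.util.find_spec` (the function and module names below are illustrative, not from the original example):

```python
import importlib.util

def verify_imports(module_names):
    """Return the subset of module names that cannot be resolved locally."""
    return [name for name in module_names if importlib.util.find_spec(name) is None]

# Modules a hypothetical AI draft claims to use:
missing = verify_imports(["json", "definitely_not_a_real_module"])
print(missing)  # any names printed here were likely hallucinated or misspelled
```

This catches invented package names, but not invented functions or changed signatures inside real packages; those still need a look at the official docs.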
2. The Context Problem
AI doesn’t know your project. It doesn’t know your architecture, naming conventions, or which patterns you use — unless you tell it.
Example:
```python
# Your project uses the Repository Pattern
class UserRepository:
    def __init__(self, db_session):
        self.db = db_session

# AI suddenly suggests Active Record — inconsistent!
class Product:
    def save(self):
        db.session.add(self)
```
How to fix this:
- Use project rules files (`.cursorrules`, `AGENTS.md`, `copilot-instructions.md`) to define conventions
- Keep prompts small and specific — only give the model what it needs
- Start a fresh chat when the task changes
- If AI keeps making the same mistake, fix your instructions instead of correcting output
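As a sketch of what such a rules file might contain (the specific conventions below are hypothetical examples, not a standard):

```markdown
# AGENTS.md: project conventions (hypothetical example)

- Data access goes through repository classes (e.g. `UserRepository`);
  do not use Active Record style.
- Follow PEP 8 naming: snake_case functions, PascalCase classes.
- Every new function needs a docstring and at least one test.
- Prefer explicit dependencies (constructor injection) over module-level globals.
```

A file like this gets loaded into the model's context automatically by tools that support it, so the conventions travel with every prompt instead of being repeated by hand.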
3. The Fundamentals Trap
Some developers use AI as a hiding place instead of a tool: they generate code they don't understand.
Warning Signs:
- “The AI did this, I don’t know why it works”
- Copy-paste without reading
- Debugging by “asking AI what’s wrong” instead of trying yourself
- Accepting code you couldn’t explain in a code review
Caution: AI can accelerate your learning curve or destroy it.
“If you feel like a fraud because you genuinely don’t understand the code you’re submitting, that’s not imposter syndrome — that’s a sign you need to slow down and learn the fundamentals.”
Healthy progression:
- Fundamentals first — learn the basics without AI
- AI as tutor — let it explain concepts
- AI as co-pilot — use it for familiar patterns
- Never autopilot — never blindly accept code
4. Security & Privacy Risks
AI regularly generates code with security issues. The OWASP GenAI Security Project highlights risks like prompt injection, sensitive data disclosure, and unsafe output handling.
Common AI-generated security issues:
```python
# ❌ SQL Injection
query = f"SELECT * FROM users WHERE name = '{username}'"

# ❌ Hardcoded secret
API_KEY = "sk-proj-abc123..."

# ❌ PII in logs (GDPR risk)
logger.info(f"User {email} performed {action}")
```
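Each of these has a routine fix. A self-contained sketch of the safer variants (sqlite3 stands in for the database here, and the identifiers `API_KEY` and `user_id` are illustrative):

```python
import logging
import os
import sqlite3

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# ✅ Parameterized query: the driver escapes the value, not string formatting
username = "alice"
rows = conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()

# ✅ Secret read from the environment, never committed to the source tree
API_KEY = os.environ.get("API_KEY")

# ✅ Log an opaque identifier instead of the email address
user_id = "u-1042"  # hypothetical internal id
logger.info("User %s performed %s", user_id, "login")
```

The placeholder syntax (`?` vs `%s` vs `:name`) varies by driver, so check the documentation of the database library you actually use.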
Privacy pitfalls:
- Pasting secrets or customer data into prompts
- Using model output directly without validation
- Allowing AI tools to call actions with overly broad permissions
Warning: these checks are non-negotiable:
- SAST tools in CI/CD
- Manual security review for critical flows
- Input validation and output encoding at trust boundaries
- Secret scanning before merging
5. Missing Tests & False Confidence
AI makes code look finished before it has been tested. The model generates the happy path and forgets the cases that break production: empty arrays, null values, race conditions, malformed input.
Better workflow:
- Write the test first, let the model implement against it
- Run tests, linters, and type checks after each change
- Add at least one test for every bug the AI introduced
- Don’t trust tests that mirror the implementation instead of the requirement
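To make "write the test first" concrete, here is a minimal sketch with a hypothetical `chunk` helper: the assertions pin down the empty and invalid inputs before any implementation is accepted from the model.

```python
def chunk(items, size):
    """Split items into lists of at most `size` elements (hypothetical helper)."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [items[i:i + size] for i in range(0, len(items), size)]

# Tests written first, covering the unhappy paths too:
assert chunk([], 3) == []                                  # empty input
assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]  # uneven final chunk
try:
    chunk([1], 0)                                          # invalid size must raise
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```

Because the tests encode the requirement rather than mirroring an implementation, a regenerated version of `chunk` must still satisfy them.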
6. Cost Explosions
API-based AI tools can get expensive fast.
| Tool | Typical Cost |
|---|---|
| GitHub Copilot | $10/month (Business: $19/user) |
| Cursor | $20/month |
| Claude API | ~$15/M tokens (model dependent) |
How to keep costs down:
- Use free tiers for experiments
- Minimize context — don’t send entire repos
- Improve prompts instead of regenerating repeatedly
- Use cheaper models for simple tasks
- Remember: long chats are expensive (they carry all old context)
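The last point is easy to underestimate, because resent context compounds. A back-of-the-envelope sketch (the per-token price here is a placeholder, not any vendor's actual rate):

```python
def chat_cost(turn_tokens, price_per_million):
    """Cumulative input cost of a chat where every turn resends all prior context."""
    total_tokens = 0
    context = 0
    for tokens in turn_tokens:
        context += tokens          # the conversation keeps growing
        total_tokens += context    # each request carries the whole history
    return total_tokens * price_per_million / 1_000_000

# Five turns of 2,000 tokens each at a hypothetical $15/M tokens:
print(round(chat_cost([2000] * 5, 15.0), 4))  # 30,000 billed tokens, not 10,000
```

Five turns of 2,000 tokens bill 30,000 input tokens, triple the naive estimate; a fresh chat resets `context` to zero.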
7. Vendor Lock-in
Some tools tie you to specific platforms:
- Cursor → Cursor ecosystem
- V0 → Vercel + Supabase
- Amazon Q → AWS-optimized
Before adopting a tool, ask:
- Can I export the code and run it elsewhere?
- Does the workflow depend on a vendor-specific API?
- Can I replace the model without redesigning the app?
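The last question is easier to answer "yes" if application code talks to a thin interface rather than a vendor SDK. A minimal sketch (the `ChatModel` protocol and `FakeLocalModel` adapter are hypothetical names, not a real library):

```python
from typing import Protocol


class ChatModel(Protocol):
    """Provider-agnostic interface the application depends on."""
    def complete(self, prompt: str) -> str: ...


class FakeLocalModel:
    """Stand-in adapter; a real one would wrap a vendor's SDK behind complete()."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


def summarize(model: ChatModel, text: str) -> str:
    # Application code depends only on the interface, so swapping vendors
    # means writing one new adapter, not redesigning the app.
    return model.complete(f"Summarize: {text}")


print(summarize(FakeLocalModel(), "quarterly report"))
```

Swapping providers then means writing one new adapter class; `summarize` and everything above it stay untouched.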
8. The Maintenance Problem
AI generates code that works today but becomes expensive to maintain tomorrow — too clever, too broad, or too tightly coupled to a specific prompt.
Warning signs:
- Huge generated files with no clear boundaries
- Duplicated logic from “just add one more thing” prompts
- Code that ignores project conventions
- Features that work, but only if you never change them
Better habit: Ask for small, modular changes. Refactor while context is fresh. Prefer boring code over magical code.
Quick Checklist
Before accepting AI-generated code, ask:
- Does this match the project’s architecture and conventions?
- Can I explain what the code does and why it is correct?
- Did I verify the API/library/version against the docs?
- Are security, privacy, and compliance implications acceptable?
- Do I have tests for the important behaviour and edge cases?
- Would I still merge this if it had no author name on it?