Treat AI like a smart but untrusted junior developer — assume it will make security mistakes unless you prevent and detect them.
| AI habit | Example |
|---|---|
| Concatenates user input into queries | `f"SELECT * FROM users WHERE name = '{name}'"` |
| Hard-codes secrets | `API_KEY = "sk-proj-abc123..."` |
| Skips auth on new endpoints | Generates a route with no middleware |
| Logs PII | `logger.info(f"User {email} did {action}")` |
| Suggests outdated packages | Recommends a library abandoned two years ago |
| Invents APIs that don’t exist | Calls a method that was never part of the library |
AI-generated code often looks secure. The patterns are idiomatic, the naming is clean — but the holes are real.
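The SQL injection habit from the table above is the clearest case of code that looks clean but isn't. A minimal sketch with Python's stdlib `sqlite3`, contrasting the concatenated query with the parameterized form:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

name = "x' OR '1'='1"  # classic injection payload

# Vulnerable: user input concatenated into the query string.
# The payload turns the WHERE clause into a tautology and matches every row.
leaked = conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()
print(len(leaked))  # 1 row leaks, even though no user has that name

# Safe: the driver binds the value as data, never as SQL.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
print(len(safe))  # 0, the literal string matches nothing
```

The fix costs nothing at the call site, which is exactly why "always use parameterized queries" belongs in your instruction files.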
### Unverified dependencies
AI suggests packages from training data — which may be outdated, unmaintained, or typosquatted.
Before accepting, ask: does the package exist? When was it last updated? Are there known vulnerabilities?
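The first two questions can be answered programmatically: the PyPI JSON API (`https://pypi.org/pypi/<name>/json`) returns release metadata you can inspect before adding a dependency. A sketch of the vetting step, assuming you have already fetched the payload; the two-year threshold is an arbitrary choice for illustration:

```python
from datetime import datetime, timezone

def vet_package(payload: dict, max_age_days: int = 730) -> list[str]:
    """Return red flags found in a PyPI JSON API payload for one package."""
    flags = []
    # "releases" maps version -> list of uploaded files, each with a timestamp.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in payload.get("releases", {}).values()
        for f in files
    ]
    if not uploads:
        flags.append("no releases published")
    elif (datetime.now(timezone.utc) - max(uploads)).days > max_age_days:
        flags.append("stale: last release older than threshold")
    return flags
```

For the third question, a tool like `pip-audit` checks your dependencies against known-vulnerability databases.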
### Missing auth
AI generates endpoints and forgets your middleware, role model, or session handling.
Rule: For every new route AI creates, ask: where is the auth check?
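One way to make that question impossible to forget is to make authentication the default rather than something each handler opts into. A framework-agnostic sketch (all names hypothetical), where a route must explicitly declare itself public:

```python
ROUTES = {}

def route(path, *, public=False):
    """Register a handler; every route requires auth unless marked public."""
    def register(handler):
        ROUTES[path] = (handler, public)
        return handler
    return register

def dispatch(path, user=None):
    handler, public = ROUTES[path]
    if not public and user is None:
        return 401, "unauthorized"  # deny by default
    return 200, handler(user)

@route("/health", public=True)
def health(user):
    return "ok"

@route("/orders")  # an AI-generated route lands here: protected automatically
def orders(user):
    return f"orders for {user}"
```

With this shape, an AI that forgets auth produces a route that is locked down rather than wide open; the failure mode becomes a support ticket instead of a breach.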
### Hard-coded secrets
AI has seen thousands of tutorials with hard-coded keys. It will reproduce that pattern.
Rule: Never accept string literals that look like keys, tokens, or passwords.
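A lightweight pre-review guard is a regex pass over the diff for literals shaped like credentials. The patterns below cover a few well-known prefixes (OpenAI-style `sk-`, AWS access key IDs `AKIA…`, GitHub personal access tokens `ghp_…`); treat them as illustrative, not exhaustive, since real scanners ship hundreds of rules plus entropy checks:

```python
import re

# Illustrative patterns only; rely on a real scanner (GitHub secret
# scanning, gitleaks) for actual coverage.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9_-]{20,}"),  # OpenAI-style keys
    re.compile(r"AKIA[0-9A-Z]{16}"),       # AWS access key IDs
    re.compile(r"ghp_[A-Za-z0-9]{36}"),    # GitHub personal access tokens
]

def find_secrets(text: str) -> list[str]:
    """Return every substring of text that matches a known secret shape."""
    return [m.group(0) for p in SECRET_PATTERNS for m in p.finditer(text)]
```

Run it in a pre-commit hook so key-shaped literals are rejected before the code ever reaches review.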
### Over-collecting data
AI tends to log everything when asked to “track usage” — emails, IPs, full request bodies.
Rule: Log event types and anonymized IDs only. Never log PII without a documented legal basis.
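A common compromise is to log a salted hash of the identifier instead of the identifier itself: events stay correlatable within your system, but the raw email never reaches the log. Note this is pseudonymization rather than true anonymization, and the salt must be kept secret. A sketch (the salt value is hypothetical):

```python
import hashlib
import logging

logger = logging.getLogger("audit")
SALT = "load-from-secret-manager"  # hypothetical: never hard-code in real use

def anon_id(email: str) -> str:
    """Stable pseudonymous ID: same email -> same ID, not reversible without the salt."""
    return hashlib.sha256((SALT + email).encode()).hexdigest()[:12]

def log_event(email: str, action: str) -> None:
    # Log the event type and a pseudonymous ID, never the raw email.
    logger.info("user=%s action=%s", anon_id(email), action)
```

The hash is deterministic, so "how often did this user do X" queries still work against the logs without any PII stored in them.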
### Secret Scanning + Push Protection
AI loves hard-coded secrets — GitHub catches them.
Settings → Security → Secret scanning
### Dependabot
AI suggests packages that may be vulnerable by now.
Settings → Security → Dependabot
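Security alerts are enabled in settings, but Dependabot's version updates also need a config file committed to the repo. A minimal sketch for a Python project:

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
```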
### CodeQL (Code Scanning)
AI generates injection vulnerabilities, XSS, path traversal — CodeQL catches these statically.
```yaml
# .github/workflows/codeql.yml
name: CodeQL
on: [push, pull_request]
jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
      contents: read
    strategy:
      matrix:
        language: [javascript, python]
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: ${{ matrix.language }}
      - uses: github/codeql-action/analyze@v3
```
### Branch Protection
AI generates code fast, which is dangerous without a review gate: require pull-request review and passing status checks before anything merges.
| Never put in prompts | Why |
|---|---|
| Production secrets, API keys | May be stored or logged by the provider |
| Customer PII (names, emails, IDs) | GDPR violation — no DPA for “pasting into ChatGPT” |
| Proprietary source code | May be used for model training (check plan terms) |
| Internal architecture docs | Competitive risk |
For teams:
Encode security rules in your project’s instruction files — the AI follows them by default:
Files: `AGENTS.md` · `.cursorrules` · `copilot-instructions.md` · `CLAUDE.md`
```markdown
## Security Rules

- Never log PII or secrets
- Always use parameterized queries — no string concatenation for SQL
- All new endpoints require authentication middleware
- Do not hard-code API keys, tokens, or passwords
- Do not add dependencies without checking for known vulnerabilities
```
This won’t make AI perfect — but it significantly reduces how often you correct the same mistakes.
Before merging AI-assisted changes: