# Security & Compliance for AI-Assisted Coding
AI will happily generate insecure or non-compliant code if you let it.
Treat AI like a smart but untrusted junior developer — assume it will make these mistakes unless you prevent and detect them.
## 1. What AI Gets Wrong
These are the security mistakes AI tools make over and over again:
| AI habit | Example |
|---|---|
| Concatenates user input into queries | `f"SELECT * FROM users WHERE name = '{name}'"` |
| Hard-codes secrets | `API_KEY = "sk-proj-abc123..."` |
| Skips auth on new endpoints | Generates a route with no middleware |
| Logs PII | `logger.info(f"User {email} did {action}")` |
| Suggests outdated or vulnerable packages | Recommends a library that was abandoned 2 years ago |
| Invents APIs that don’t exist | Calls a method that looks right but was never part of the library |
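To make the first habit concrete, here is a minimal Python sketch (standard-library `sqlite3`; placeholder syntax varies by driver, e.g. `%s` in psycopg) showing the unsafe pattern next to the parameterized fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

name = "Robert'); DROP TABLE users;--"  # hostile input

# Unsafe: the pattern AI tends to produce; input becomes part of the SQL
# conn.execute(f"SELECT * FROM users WHERE name = '{name}'")

# Safe: parameterized query; the driver passes the value separately,
# so it can never be interpreted as SQL
rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```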
## 2. What to Watch For (AI-Specific)
### AI adds dependencies you didn’t ask for
AI frequently suggests packages it “remembers” from training data. These may be outdated, unmaintained, or even typosquatted.
Before accepting a new dependency, ask (see the sketch after this list):
- Does it actually exist? Check npm/PyPI/etc.
- When was it last updated?
- How many downloads / maintainers does it have?
- Is there a known vulnerability? (Check GitHub Advisories)
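Parts of this can be scripted. A rough sketch against PyPI's public JSON API (the npm equivalent registry endpoint is `https://registry.npmjs.org/<name>`); it covers existence and freshness only, not vulnerabilities:

```python
import json
import urllib.error
import urllib.request

def check_pypi_package(name: str) -> None:
    """Rough existence/freshness check against PyPI's public JSON API."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError:
        print(f"{name}: not on PyPI; possible hallucination or typosquat bait")
        return
    files = data.get("urls", [])  # files of the latest release
    uploaded = files[0]["upload_time_iso_8601"] if files else "unknown"
    print(f"{name} {data['info']['version']}, latest release uploaded {uploaded}")

check_pypi_package("requests")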
### AI doesn’t know your auth setup
AI generates endpoints and forgets your middleware, your roles-and-permissions model, or your session handling. Every new route or API it creates is a potential open door.
**Rule:** If AI generates a new endpoint, the first question is: “Where is the auth check?”
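What that check looks like depends on your stack. A Flask-flavored sketch, where `is_valid_token` stands in for whatever session or token validation you actually use:

```python
from functools import wraps

from flask import Flask, abort, request

app = Flask(__name__)

def is_valid_token(token: str) -> bool:
    """Placeholder: substitute your real session/JWT/API-key validation."""
    return False  # deny by default

def require_auth(view):
    """Run the auth check before the view; reject with 401 if it fails."""
    @wraps(view)
    def wrapper(*args, **kwargs):
        if not is_valid_token(request.headers.get("Authorization", "")):
            abort(401)
        return view(*args, **kwargs)
    return wrapper

@app.route("/admin/users")
@require_auth  # the line AI-generated routes tend to omit
def list_users():
    return {"users": []}
```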
### AI puts secrets in code
AI has seen thousands of tutorials with hard-coded keys. It will happily reproduce that pattern. It will also suggest committing `.env` files or logging tokens for “debugging.”
**Rule:** Never accept AI code that contains string literals that look like keys, tokens, or passwords.
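The mechanical fix: read secrets from the environment (or a secrets manager) and fail fast when they are missing. A minimal sketch; the variable name is illustrative:

```python
import os

# Rejected in review: API_KEY = "sk-proj-abc123..."

# Accepted: the key lives outside the codebase; a KeyError at startup
# is better than a leaked credential in git history
API_KEY = os.environ["MY_SERVICE_API_KEY"]  # illustrative name
```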
### AI over-collects data
When asked to “log user actions” or “track usage,” AI tends to log everything — emails, IPs, full request bodies. This is a GDPR problem waiting to happen.
**Rule:** Log event types and anonymized IDs. Never log PII unless you have a documented legal basis.
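A sketch of the anonymized-ID approach, using a keyed hash so raw identifiers never reach the logs (the env var name is illustrative; note that pseudonymized IDs can still count as personal data under GDPR, just at lower risk):

```python
import hashlib
import hmac
import logging
import os

logger = logging.getLogger(__name__)

def anon_id(identifier: str) -> str:
    """Stable pseudonymous ID; keyed so it cannot be brute-forced offline."""
    key = os.environ["LOG_HASH_KEY"].encode()  # illustrative env var
    return hmac.new(key, identifier.lower().encode(), hashlib.sha256).hexdigest()[:16]

email, action = "alice@example.com", "export_report"

# Instead of logger.info(f"User {email} did {action}"):
logger.info("action=%s user=%s", action, anon_id(email))
```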
## 3. Catching AI Mistakes with GitHub
GitHub has built-in tools that are especially useful when AI writes your code, because they catch exactly the mistakes AI tends to make.
### Secret Scanning + Push Protection
AI loves to hard-code secrets. GitHub secret scanning catches them.
**Settings → Security → Code security and analysis → Secret scanning**
- Scans for known secret patterns (AWS keys, GitHub tokens, etc.)
- Push protection blocks commits containing secrets before they reach the repo
### Dependabot
AI suggests packages it saw in training data — which may have known vulnerabilities by now.
**Settings → Security → Code security and analysis → Dependabot**
- **Dependabot Alerts** — flags known vulnerabilities in your dependencies
- **Security Updates** — auto-opens PRs to fix vulnerable packages
```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
```
### Code Scanning (CodeQL)
AI generates injection vulnerabilities, XSS, path traversal — CodeQL catches these statically.
```yaml
# .github/workflows/codeql.yml
name: "CodeQL"

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
      contents: read  # needed for checkout once a permissions block is set
    strategy:
      matrix:
        language: [javascript, python]
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: ${{ matrix.language }}
      - uses: github/codeql-action/analyze@v3
```
### Branch Protection
AI generates fast. That’s dangerous without a review gate.
**Settings → Branches → Add branch protection rule**
- Require PR reviews before merging
- Require status checks to pass (CI + CodeQL)
- Do not allow bypassing these settings
**Why this matters for AI:** The speed of AI-generated code makes it tempting to skip review. Branch protection ensures nothing lands on `main` without human eyes and green checks.
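If you manage repository settings as code, the same gate can be set through GitHub's REST API. A hedged sketch using the `requests` library; the owner, repo, and status-check context names are placeholders that must match your actual CI check names:

```python
import os

import requests

OWNER, REPO = "your-org", "your-repo"  # placeholders

resp = requests.put(
    f"https://api.github.com/repos/{OWNER}/{REPO}/branches/main/protection",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    json={
        # require CI + CodeQL to pass; context names must match your checks
        "required_status_checks": {"strict": True, "contexts": ["CodeQL"]},
        "enforce_admins": True,  # "do not allow bypassing these settings"
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        "restrictions": None,
    },
    timeout=10,
)
resp.raise_for_status()
```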
## 4. What Goes Into AI Prompts (Compliance)
When using third-party AI tools (Copilot, Cursor, ChatGPT, Claude), your prompts are data you’re sending to an external service.
| Never put in prompts | Why |
|---|---|
| Production secrets, API keys | They may be stored or logged by the provider |
| Customer PII (names, emails, IDs) | GDPR/DSGVO violation — you have no DPA for “pasting into ChatGPT” |
| Proprietary source code | May be used for model training (check your plan’s terms) |
| Internal architecture docs | Competitive risk |
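One lightweight safeguard is to scrub prompts before they leave your machine. A crude sketch; the regexes below catch only obvious emails and key-shaped strings, so treat this as an accident-reducer, not a compliance control:

```python
import re

PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),                 # email addresses
    re.compile(r"\b(?:sk|ghp|gho|AKIA)[A-Za-z0-9_-]{10,}"),  # common key prefixes
]

def scrub(prompt: str) -> str:
    """Replace anything that looks like PII or a secret before sending."""
    for pattern in PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

print(scrub("Fix login for alice@example.com, key is sk-proj-abc123XYZ789"))
# -> Fix login for [REDACTED], key is [REDACTED]
```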
For teams:
- Define which AI tools are approved and on which plans (enterprise vs. free)
- Prefer enterprise offerings that guarantee no training on your code
- Require Data Processing Agreements (DPAs) with AI vendors
- Consider on-prem or VPC-hosted models for sensitive environments
## 5. Teach Your AI the Rules
Encode security rules in your project’s AI instruction files so the AI follows them by default:
Files: `AGENTS.md`, `.cursorrules`, `copilot-instructions.md`, `CLAUDE.md`
```markdown
## Security Rules

- Never log PII or secrets
- Always use parameterized queries — no string concatenation for SQL
- All new endpoints require authentication middleware
- Do not hard-code API keys, tokens, or passwords
- Do not add dependencies without checking for known vulnerabilities
```
This won’t make AI perfect, but it significantly reduces how often you have to correct the same mistakes.
## Checklist
Before merging AI-assisted changes:
- [ ] No hard-coded secrets, tokens, or credentials
- [ ] New endpoints have authentication and authorization checks
- [ ] Database access uses parameterized queries
- [ ] Logs contain no PII or secrets
- [ ] New dependencies are verified (exist, maintained, no known vulnerabilities)
- [ ] GitHub secret scanning + push protection enabled
- [ ] CodeQL / SAST checks pass
- [ ] At least one human has reviewed the change with security in mind
- [ ] No customer data or secrets were pasted into AI prompts