AI-Assisted Development

Security & Compliance

Workshop — Part 9: Keeping AI-assisted code safe

9 / 12 — Security & Compliance
AI-Assisted Development

Treat AI like a smart but untrusted junior developer — assume it will make security mistakes unless you prevent and detect them.


What AI Gets Wrong — Every Time

  • Concatenates user input into queries: f"SELECT * FROM users WHERE name = '{name}'"
  • Hard-codes secrets: API_KEY = "sk-proj-abc123..."
  • Skips auth on new endpoints: generates a route with no middleware
  • Logs PII: logger.info(f"User {email} did {action}")
  • Suggests outdated packages: recommends a library abandoned 2 years ago
  • Invents APIs that don’t exist: calls a method that was never part of the library

AI-generated code often looks secure. The patterns are idiomatic, the naming is clean — but the holes are real.
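The first habit in the list is easy to demonstrate. This minimal sqlite3 sketch (an in-memory table invented for illustration) shows why the interpolated query is exploitable and what the parameterized fix looks like:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

name = "alice' OR '1'='1"  # hostile input

# AI habit: f-string interpolation. The OR clause is parsed as SQL
# and the WHERE condition becomes true for every row.
unsafe = conn.execute(
    f"SELECT * FROM users WHERE name = '{name}'"
).fetchall()

# Fix: a parameterized query binds the input as data, never as SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (name,)
).fetchall()

print(len(unsafe), len(safe))  # prints: 1 0
```

The injected query matched a row it should not have; the parameterized version matched nothing, because no user is literally named `alice' OR '1'='1`.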


Watch Out For (AI-Specific)

Unverified dependencies
AI suggests packages from training data — which may be outdated, unmaintained, or typosquatted.

Before accepting: does the package exist? When was it last updated? Any known vulnerabilities?

Missing auth
AI generates endpoints and forgets your middleware, role model, or session handling.

Rule: Every new route AI creates — “Where is the auth check?”
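One way to make that question mechanical is a decorator every route must carry. This framework-free sketch is illustrative only; `require_auth` and the dict-shaped request are assumptions, and a real app would use its framework's middleware instead:

```python
import functools

def require_auth(handler):
    """Reject any request that lacks an authenticated user (sketch)."""
    @functools.wraps(handler)
    def wrapper(request: dict):
        if not request.get("user"):  # assumed session/token check
            return {"status": 401, "body": "unauthorized"}
        return handler(request)
    return wrapper

@require_auth
def delete_account(request: dict):
    return {"status": 200, "body": "deleted"}

print(delete_account({})["status"])                 # prints: 401
print(delete_account({"user": "alice"})["status"])  # prints: 200
```

In review, the absence of the decorator on an AI-generated route is then a visible, greppable defect rather than a judgment call.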

Hard-coded secrets
AI has seen thousands of tutorials with hard-coded keys. It will reproduce that pattern.

Rule: Never accept string literals that look like keys, tokens, or passwords.
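The safe pattern is to read secrets from the environment (or a secret manager) and fail loudly when they are missing. `load_api_key` is a hypothetical helper, not a prescribed API:

```python
import os

def load_api_key(var: str = "API_KEY") -> str:
    """Fetch a secret from the environment instead of a string literal."""
    key = os.environ.get(var)
    if key is None:
        raise RuntimeError(
            f"{var} is not set; configure it via the environment or a secret manager"
        )
    return key
```

The literal never appears in source, so secret scanning has nothing to flag and rotating the key requires no code change.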

Over-collecting data
AI tends to log everything when asked to “track usage” — emails, IPs, full request bodies.

Rule: Log event types and anonymized IDs only. Never log PII without a documented legal basis.
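The rule can be enforced with a small logging wrapper. `anon_id` below is a sketch: an unsalted SHA-256 hash is shown for brevity, but production code should add a secret pepper so hashes cannot be reversed by brute-forcing known addresses.

```python
import hashlib
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("audit")

def anon_id(email: str) -> str:
    """One-way hash so events can be correlated without storing the address.
    Unsalted SHA-256 for brevity; add a secret pepper in production."""
    return hashlib.sha256(email.encode()).hexdigest()[:12]

def log_event(email: str, action: str) -> None:
    # Event type plus anonymized ID only; the raw email never reaches the logs
    logger.info("event=%s user=%s", action, anon_id(email))

log_event("alice@example.com", "login")
```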


GitHub Security Tools

Secret Scanning + Push Protection
AI loves hard-coded secrets — GitHub catches them.

Settings → Security → Secret scanning

  • Scans for known secret patterns
  • Push protection blocks secrets before they reach the repo

Dependabot
AI suggests packages that may be vulnerable by now.

Settings → Security → Dependabot

  • Alerts for known vulnerabilities
  • Auto-opens PRs to fix vulnerable packages

CodeQL (Code Scanning)
AI generates injection vulnerabilities, XSS, path traversal — CodeQL catches these statically.

# .github/workflows/codeql.yml
name: CodeQL
on: [push, pull_request]
jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
    strategy:
      matrix:
        language: [javascript, python]
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: ${{ matrix.language }}
      - uses: github/codeql-action/analyze@v3

Branch Protection
AI generates fast — dangerous without a review gate.

  • Require PR reviews before merging
  • Require CI + CodeQL to pass
  • Do not allow bypassing these settings

What NOT to Put in Prompts

  • Production secrets, API keys: may be stored or logged by the provider
  • Customer PII (names, emails, IDs): GDPR violation; there is no DPA for “pasting into ChatGPT”
  • Proprietary source code: may be used for model training (check your plan’s terms)
  • Internal architecture docs: competitive risk

For teams:

  • Define which AI tools are approved and on which plan tier
  • Prefer enterprise offerings that guarantee no training on your code
  • Require Data Processing Agreements (DPAs) with AI vendors
  • Consider on-prem or VPC-hosted models for sensitive environments

Teach AI the Rules

Encode security rules in your project’s instruction files — the AI follows them by default:

Files: AGENTS.md · .cursorrules · copilot-instructions.md · CLAUDE.md

## Security Rules

- Never log PII or secrets
- Always use parameterized queries — no string concatenation for SQL
- All new endpoints require authentication middleware
- Do not hard-code API keys, tokens, or passwords
- Do not add dependencies without checking for known vulnerabilities

This won’t make AI perfect — but it significantly reduces how often you correct the same mistakes.


Security Checklist

Before merging AI-assisted changes:

  • No hard-coded secrets, tokens, or credentials
  • New endpoints have authentication and authorization checks
  • Database access uses parameterized queries
  • Logs contain no PII or secrets
  • New dependencies verified (exist, maintained, no known CVEs)
  • GitHub secret scanning + push protection enabled
  • CodeQL / SAST checks pass
  • At least one human reviewed the change with security in mind
  • No customer data or secrets were pasted into AI prompts
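Several checklist items can be partially automated in a pre-merge hook. The patterns below are a rough sketch of what such a check looks like; real scanners (GitHub push protection, gitleaks) know hundreds of credential formats and should be used instead:

```python
import re

# Rough shapes of common credentials; illustrative, not exhaustive
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),        # AWS access key IDs
    re.compile(r"sk-[A-Za-z0-9-]{20,}"),    # "sk-..." style API keys
    re.compile(r"(?i)(api_key|token|password)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def find_secrets(text: str) -> list[str]:
    """Return every substring that looks like a hard-coded credential."""
    return [m.group(0) for p in SECRET_PATTERNS for m in p.finditer(text)]

print(find_secrets('API_KEY = "sk-proj-abc123def456ghi789"'))
```

Wiring a check like this into CI turns the first checklist item from a reviewer's memory into a failing build.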

Summary

  • AI produces security holes confidently and repeatedly — assume they exist until you verify otherwise
  • The four most common mistakes: SQL injection, hard-coded secrets, missing auth, PII in logs
  • Use GitHub’s security features (secret scanning, Dependabot, CodeQL) as your safety net
  • Never put secrets, PII, or proprietary code into AI prompts
  • Teach AI your security rules in project instruction files
  • Security is not delegable — AI makes you faster, but you are still responsible