Artificial Intelligence simulates human intelligence in machines.
It enables computers to learn, reason, and act on data.
Many sub-fields — but this workshop focuses on one: Generative AI
Machine Learning: learns from data to make predictions
Natural Language Processing: understands and processes human language
Computer Vision: interprets visual information
Reinforcement Learning: learns by trial and reward
Deep Learning: neural networks with many layers
Generative AI ✓: creates new content: text, images, code
The most relevant type for coding.
Models that generate new content based on patterns learned from existing data.
They can produce text, images, code, and more.
Trained on massive datasets, they understand context to generate coherent, relevant responses.
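As a toy illustration of "generating new content from patterns learned in existing data", here is a minimal bigram text generator. The corpus, seed, and helper name are invented for this sketch; real models learn vastly richer statistics than word-pair counts:

```python
import random
from collections import defaultdict

# Tiny made-up training corpus (illustration only).
corpus = "the cat sat on the mat and the cat ran".split()

# Learn bigram patterns: each word -> list of observed next words.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start="the", length=6, seed=0):
    """Generate new text by repeatedly sampling a learned continuation."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = transitions.get(words[-1])
        if not options:          # dead end: no learned continuation
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate())
```

The generated sentence is new (it need not appear in the corpus), yet every word transition was seen during "training", which is the core idea behind generative models.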
Language & Code
Images
Code-specialized
The architecture behind all modern AI language models.
"The bank can guarantee deposits will eventually cover
future tuition costs because it invests in [?]"
Self-attention understands "bank" = financial institution,
not a river bank — purely from context.
For each token (word/character), the model asks:
“Which other tokens in this sequence are most relevant to understanding this one?”
This happens simultaneously for all tokens — that’s what makes transformers fast and powerful.
Input tokens:
The cat sat on the mat
↕ ↕ ↕ ↕ ↕ ↕
[attention weights between every pair]
Each word “looks at” every other word to build context.
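The pairwise attention weights above can be sketched numerically. This is a minimal NumPy illustration with random stand-in embeddings; a real transformer uses learned Q, K, V projection matrices and trained token embeddings, and the dimension and seed here are arbitrary:

```python
import numpy as np

np.random.seed(0)
tokens = "The cat sat on the mat".split()
d = 8                                  # embedding dimension (arbitrary)
X = np.random.randn(len(tokens), d)    # one random vector per token

# In a real transformer Q, K, V come from learned projections of X;
# reusing X directly keeps the sketch short.
Q, K, V = X, X, X

# Scaled dot-product scores: relevance of every token pair, all at once.
scores = Q @ K.T / np.sqrt(d)

# Softmax over each row -> one attention distribution per token.
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)

# Each token's new, context-aware representation.
context = weights @ V
```

Every row of `weights` sums to 1 and holds one token's attention over all six tokens, and the whole matrix is computed in a single batched operation, which is the "simultaneously for all tokens" property noted above.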
Because the model predicts plausible output — not correct output:
Treat AI output as a draft, not a finished answer.
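To see why output is plausible rather than correct, it helps to remember that generation is sampling from a probability distribution over next tokens. The prompt, tokens, and probabilities below are made up for illustration:

```python
import random

# Hypothetical next-token distribution after a prompt like
# "The capital of Australia is". The numbers are invented: note the
# most *plausible* token here ("Sydney") is factually wrong.
probs = {"Sydney": 0.55, "Canberra": 0.35, "Melbourne": 0.10}

random.seed(1)
tokens, weights = zip(*probs.items())
choice = random.choices(tokens, weights=weights, k=1)[0]
# Whichever token is sampled is merely plausible, not verified fact.
```

Because sampling optimizes plausibility, not truth, the model can state a wrong answer fluently and confidently, which is why its output should be reviewed like a draft.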
Summary: generative AI models, built on the transformer architecture, create new content from patterns learned in training data; because they predict plausible rather than correct output, treat their results as a draft.
Deep dives
Attention Is All You Need — original paper
The Illustrated Transformer — visual walkthrough