LLM 101: Language Models
LLM 101 is a two-week, hands-on introduction to the technology reshaping every industry. No coding, no prerequisites — just clear thinking and curiosity. Students leave with a real understanding of how large language models work, why they sometimes fail, and how to build with them intentionally. This isn't a course about using AI. It's a course about understanding it.
Over four sessions, we move from concept to creation. The first week covers the fundamentals — how LLMs are trained, what tokens and context windows actually mean, and why prompting is a design skill, not a technical one. The second week goes deeper into retrieval-augmented generation (RAG), the system architecture that makes AI useful for real-world knowledge work.
The course culminates in a capstone project: each student builds their own no-code RAG chatbot powered by documents about the future of their chosen field — media, technology, business, or the arts. They present it live, explain how it works, and deliberately show where it breaks. Because understanding failure is the beginning of mastery.
How LLMs Think (and How They Don't)
- What training actually means — in plain English
- Tokens and context windows — the basic units of how LLMs read and respond
- Why models hallucinate — pattern completion vs. factual retrieval
- The pattern-completer mental model — smart but not thinking
- Prompt experiments in the browser — observe how small wording changes shift outputs dramatically
- Group discussion: where did the model get it right? Where did it fail, and why?
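For instructors who want a concrete prop, the token and context-window ideas above can be sketched in a few lines of Python. This is a toy: real models use subword tokenizers (such as BPE), not whitespace splits, so the counts here are illustrative only, and the 8-token window is an arbitrary stand-in for a real model's limit.

```python
# Illustrative sketch only: real LLMs use subword tokenizers (e.g. BPE),
# not whitespace splits, and have far larger context windows.

def tokenize(text):
    """Toy stand-in tokenizer: one 'token' per whitespace-separated word."""
    return text.split()

def fits_in_context(prompt, context_window=8):
    """A model can only attend to context_window tokens at once;
    anything beyond that limit is simply invisible to it."""
    return len(tokenize(prompt)) <= context_window

prompt = "Explain why large language models sometimes hallucinate facts"
print(len(tokenize(prompt)))    # how many toy tokens the prompt uses
print(fits_in_context(prompt))  # whether it fits the toy 8-token window
```

Even this crude version makes the point students need: the model doesn't see words or sentences, it sees a bounded sequence of tokens, and whatever falls outside the window might as well not exist.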
Prompting as Design
- Prompting is a design discipline — not a technical skill
- System prompts, personas, and constraints
- Few-shot examples and how to frame context
- Temperature and creativity as system levers
- Prompt redesign workshop — students receive a poorly written prompt and rewrite it
- Compare outputs before and after redesign; discuss what changed and why
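The "temperature as a lever" idea above has a simple mechanical core that can be demonstrated without any API. In the sketch below, the three logit scores are made-up numbers for three hypothetical candidate next tokens, not output from a real model; temperature rescales them before they become probabilities, which is (in essence) what the setting does in production systems.

```python
# Illustrative sketch: temperature reshapes next-token probabilities.
# The logits are invented for demonstration, not taken from a real model.
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores to probabilities, scaled by temperature.
    Low temperature sharpens the distribution; high temperature flattens it."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 1.0]  # hypothetical scores for three candidate tokens

cold = softmax_with_temperature(logits, 0.2)  # low temp: nearly always picks the top token
hot = softmax_with_temperature(logits, 2.0)   # high temp: flatter, more surprising choices

print([round(p, 3) for p in cold])
print([round(p, 3) for p in hot])
```

Running both settings side by side shows why temperature feels like a "creativity" dial: the model isn't getting smarter or dumber, it's just sampling from a sharper or flatter version of the same preferences.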
What RAG Actually Is
- Why prompting alone isn't enough — the hallucination problem at scale
- What "grounding" means and why it matters
- Embeddings as a way of measuring meaning — no math, just the concept of similarity
- Vector stores as organized memory
- The full RAG pipeline as a system — ingest → retrieve → generate
- Visual mapping exercise — students diagram the RAG pipeline for a real-world use case
- Group Q&A: where does retrieval fail? What kinds of questions can't it answer?
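The ingest → retrieve → generate pipeline above can be sketched end to end in plain Python. To keep it dependency-free, word overlap (Jaccard similarity) stands in for learned embeddings, a plain list stands in for a vector store, and the sample documents are invented; a real system would use an embedding model and would hand the assembled prompt to an LLM rather than printing it.

```python
# Illustrative sketch of ingest -> retrieve -> generate.
# Real RAG systems use learned embedding models and a vector store;
# word overlap is only a stand-in for "measuring meaning".
import re

def embed(text):
    """Toy 'embedding': the set of lowercase words in the text."""
    return set(re.findall(r"[a-z']+", text.lower()))

def similarity(a, b):
    """Jaccard overlap between two word sets (0 to 1)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Ingest: store each document alongside its toy embedding.
documents = [
    "Streaming platforms are reshaping how media is funded and distributed.",
    "AI tools now draft scripts, edit footage, and personalize recommendations.",
    "Print advertising revenue continues its long decline.",
]
store = [(doc, embed(doc)) for doc in documents]

def retrieve(question, k=1):
    """Retrieve: rank stored documents by similarity to the question."""
    q = embed(question)
    ranked = sorted(store, key=lambda item: similarity(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_prompt(question):
    """Generate step (prompt assembly): ground the model in retrieved text."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How is AI changing media production?"))
```

Usefully, even this toy misbehaves in an instructive way: asked about AI in media, it retrieves the streaming document rather than the AI one, because "how", "is", and "media" overlap more than "AI" does. That failure is exactly the kind of thing the group Q&A above is meant to surface — retrieval finds what looks similar, not necessarily what's relevant.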
Demo Day — Present Your Chatbot
- Each student presents their RAG chatbot (10 minutes per student)
- Explain the document set you chose and why
- Show the chatbot answering questions well
- Intentionally break it — show a question it gets wrong and explain why
- Group feedback and discussion after each demo
“Build a chatbot that can answer questions about the future of your field — using retrieval-augmented generation.”
- Select 3–5 documents focused on the future of your chosen discipline (Media, Tech, Business, or Arts)
- Documents can be articles, reports, essays, or transcripts — PDFs and text preferred
- Build your chatbot using Claude Projects or NotebookLM (no coding required)
- Test it thoroughly before Demo Day — find its limits
- Explain your document set and the question you were trying to answer
- Demonstrate at least 3 strong responses from your chatbot
- Demonstrate at least 1 failure — and explain why it failed
- Share one insight about AI or your field that building this taught you