
LLM 101: Language Models

About This Course

LLM 101 is a two-week, hands-on introduction to the technology reshaping every industry. No coding, no prerequisites — just clear thinking and curiosity. Students leave with a real understanding of how large language models work, why they sometimes fail, and how to build with them intentionally. This isn't a course about using AI. It's a course about understanding it.

Over four sessions, we move from concept to creation. The first week covers the fundamentals — how LLMs are trained, what tokens and context windows actually mean, and why prompting is a design skill, not a technical one. The second week goes deeper into retrieval-augmented generation (RAG), the system architecture that makes AI useful for real-world knowledge work.

The course culminates in a capstone project: each student builds their own no-code RAG chatbot powered by documents about the future of their chosen field — media, technology, business, or the arts. They present it live, explain how it works, and deliberately show where it breaks. Because understanding failure is the beginning of mastery.

Course Details
Duration
2 weeks, 4 sessions
Format
Live sessions (instructor-led)
Prerequisites
None — curiosity is the only requirement
Tools Used
Claude Projects or NotebookLM (no accounts needed during class)
Disciplines
Media · Technology · Business · Arts
Capstone
No-code RAG chatbot + live demo presentation
Week 1 — Understanding the Machine
Session 1

How LLMs Think (and How They Don't)

Core Concepts
  • What training actually means — in plain English
  • Tokens and context windows — the basic unit of how LLMs read and respond
  • Why models hallucinate — pattern completion vs. factual retrieval
  • The pattern-completer mental model — smart but not thinking
Hands-On Activity
  • Prompt experiments in the browser — observe how small wording changes shift outputs dramatically
  • Group discussion: where did the model get it right? Where did it fail, and why?
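For curious students, the "tokens and context window" idea from this session can be sketched in a few lines of Python. This is a deliberately toy illustration: real models use subword tokenizers (such as BPE) rather than whitespace splitting, and the window size here is an arbitrary small number, not any real model's limit.

```python
# Toy illustration only: real LLMs use subword tokenizers (e.g. BPE),
# but splitting on whitespace shows the same basic idea.
def tokenize(text):
    return text.split()

CONTEXT_WINDOW = 8  # arbitrary toy limit; real models handle thousands of tokens

def fits_in_context(prompt):
    """An LLM can only 'see' the tokens that fit inside its context window."""
    return len(tokenize(prompt)) <= CONTEXT_WINDOW

print(fits_in_context("How do large language models read text?"))                      # 7 tokens -> True
print(fits_in_context("A much longer prompt that overflows the tiny toy window here"))  # 11 tokens -> False
```

Anything past the window simply is not part of what the model reads, which is why long documents need the retrieval techniques covered in Week 2.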
Session 2

Prompting as Design

Core Concepts
  • Prompting is a design discipline — not a technical skill
  • System prompts, personas, and constraints
  • Few-shot examples and how to frame context
  • Temperature and creativity as system levers
Hands-On Activity
  • Prompt redesign workshop — students receive a poorly written prompt and rewrite it
  • Compare outputs before and after redesign; discuss what changed and why
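The "temperature as a system lever" idea can also be made concrete without any course coding. The sketch below, with made-up scores for three candidate next words, shows how temperature reshapes the probability distribution a model samples from: low temperature sharpens it (predictable output), high temperature flattens it (more varied output).

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw model scores into probabilities.
    Low temperature sharpens the distribution; high temperature flattens it."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # toy scores for three candidate next words

cold = softmax_with_temperature(logits, 0.2)  # near-deterministic: top word dominates
hot = softmax_with_temperature(logits, 2.0)   # flatter: other words get real probability

print(cold)
print(hot)
```

The same three scores, two very different behaviors: that is the whole lever.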
Week 2 — Building with Structure
Session 3

What RAG Actually Is

Core Concepts
  • Why prompting alone isn't enough — the hallucination problem at scale
  • What "grounding" means and why it matters
  • Embeddings as a way of measuring meaning — no math, just the concept of similarity
  • Vector stores as organized memory
  • The full RAG pipeline as a system — ingest → retrieve → generate
Hands-On Activity
  • Visual mapping exercise — students diagram the RAG pipeline for a real-world use case
  • Group Q&A: where does retrieval fail? What kinds of questions can't it answer?
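The ingest → retrieve → generate pipeline from this session can be sketched end to end in miniature. This toy version substitutes word-count vectors for learned embeddings and a plain list for a vector store, and "generation" just quotes the retrieved source rather than calling an LLM; the documents are invented examples.

```python
from collections import Counter
import math

# Toy RAG pipeline: ingest -> retrieve -> generate.
# Word-count vectors stand in for real embeddings here.
docs = [
    "Streaming platforms are reshaping how media is funded and distributed.",
    "Vector databases store embeddings for fast similarity search.",
    "Arts organizations are experimenting with generative tools for design.",
]

def embed(text):
    """Ingest: turn text into a (toy) vector of word counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Similarity between two vectors: 1.0 = identical direction, 0.0 = unrelated."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

index = [(embed(d), d) for d in docs]  # our stand-in "vector store"

def retrieve(question):
    """Retrieve: return the stored document most similar to the question."""
    q = embed(question)
    return max(index, key=lambda pair: cosine(q, pair[0]))[1]

def generate(question):
    """Generate: ground the answer in the retrieved document.
    (A real system would hand both to an LLM; here we just quote the source.)"""
    return f"Based on this source: {retrieve(question)}"

print(generate("How are embeddings stored for similarity search?"))
```

Note where this toy version fails, exactly as the Q&A asks: a question whose wording shares no words with any document retrieves something irrelevant, and the "answer" is confidently grounded in the wrong source.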
Session 4

Demo Day — Present Your Chatbot

Format
  • Each student presents their RAG chatbot (10 minutes per student)
  • Explain the document set you chose and why
  • Show the chatbot answering questions well
  • Intentionally break it — show a question it gets wrong and explain why
  • Group feedback and discussion after each demo
Capstone Project

Build a chatbot that can answer questions about the future of your field — using retrieval-augmented generation.

Project Requirements
  • Select 3–5 documents focused on the future of your chosen discipline (Media, Tech, Business, or Arts)
  • Documents can be articles, reports, essays, or transcripts — PDFs and text preferred
  • Build your chatbot using Claude Projects or NotebookLM (no coding required)
  • Test it thoroughly before Demo Day — find its limits
Demo Day Criteria
  • Explain your document set and the question you were trying to answer
  • Demonstrate at least 3 strong responses from your chatbot
  • Demonstrate at least 1 failure — and explain why it failed
  • Share one insight about AI or your field that building this taught you
Learning Outcomes
By the end of the course, students will be able to:
1. Explain how large language models work at a conceptual level
2. Write clear, effective prompts for a range of real-world tasks
3. Understand the RAG architecture and why it matters for knowledge work
4. Build and evaluate a no-code RAG chatbot over their own document set
5. Identify where AI systems succeed and where they fail — and why

Ready to join this course?

Blacksky Up is completely free. Apply now and start learning.
