Day 2 - Session 1: Basic Prompting Foundations
How do we populate the context window effectively?
Good Morning!
Today’s Big Question
How do we populate the context window effectively?
This Morning
- Share your annotated prompts
- Learn basic prompting theory
- Practice iterative questioning
- Begin systematic annotation
Show Me Your Prompts! (15 min)
Our First Sharing Session
- Who tried the homework prompt?
- What surprised you about the AI’s questions?
- What did you highlight in pink (didn’t work)?
- What did you highlight in blue (worked well)?
- What other prompts did you experiment with overnight?
Remember: We learn from failures as much as successes
What We’ll Learn This Session
By the end of this morning, you will be able to:
- Explain: How iterative questioning builds context
- Identify: Which AI questions advance vs. derail your work
- Track: When prompts succeed vs. fail through annotation
Basic Theory: Rules of Thumb
The Engineering Method
“Solving problems using rules of thumb that cause the best change in a poorly understood situation using available resources” - Bill Hammack
For Prompting, This Means
- We don’t know exactly why things work
- We develop local heuristics through practice
- What works depends on context
- Iteration beats perfection
Key Prompting Principles
1. Context is Everything
The AI only knows what’s in the current conversation (see the sketch after these principles)
2. Specificity Matters
Vague instructions → vague outputs
3. Iteration is Expected
First attempts are rarely perfect
4. Structure Helps
Break complex tasks into steps
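A minimal sketch of principle 1, with a hypothetical chat() stub standing in for a real model API: the model has no memory between calls, so every turn must resend the accumulated conversation.

```python
# Minimal sketch (hypothetical chat() stub, not a real API): the model
# only ever "knows" the messages passed into this single call.
def chat(messages):
    return f"(model saw {len(messages)} messages of context)"

history = []

def ask(user_text):
    history.append({"role": "user", "content": user_text})
    reply = chat(history)  # the full history is resent every turn
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("Help me figure out my goals for the week."))  # 1 message of context
print(ask("Ask me one question at a time."))             # 3 messages of context
```

Drop the history and the model starts from scratch, which is why “weakening” a prompt changes the output so much.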
Exercise: Weakening the Prompt (15 min)
Remember Yesterday’s Prompt?
Take the “ask me one question at a time” prompt from yesterday.
Now Break It
Try: “Help me figure out my goals for the week” without any of the setup.
Compare Results
What changes? Put up a green sticky when complete.
What Makes Prompts Effective?
Good Prompts
- One task at a time
- High scaffolding
- A clear intention for the conversation
- A clear register
- Easily falsifiable answers
Bad Prompts
- Generic
- Fact-based
- Poorly populated context window
- Multiple questions at once
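For a concrete contrast, compare yesterday’s stripped-down prompt with its scaffolded version:

- Weak: “Help me figure out my goals for the week.”
- Stronger: “Help me figure out my goals for the week. Ask me one question at a time, and wait for my answer before asking the next.”

The stronger version pins down the task and the pacing; the weak one leaves the model to guess both.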
Annotation Practice (20 min)
On the Conceptboard
With your conversation from the weakening exercise:
- Your prompts: Mark what instructed useful behavior
- AI responses: Note where it followed/ignored instructions
- Patterns: What words consistently trigger better responses?
The Context Window
Think of it as a bucket:
- Everything must fit inside
- New information pushes out old
- Quality matters more than quantity
- You control what goes in
Your prompts are the recipe for filling this bucket effectively.
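A minimal sketch of that bucket in code, using a whitespace word count as a rough stand-in for real tokenization (an assumption; actual tokenizers count differently): when the budget is exceeded, the oldest messages fall out first.

```python
# Toy "bucket": keep the newest messages that fit a token budget.
# Word count stands in for a real tokenizer here; actual counts differ.
def fit_to_budget(messages, budget=50):
    kept, used = [], 0
    for msg in reversed(messages):      # walk from newest to oldest
        cost = len(msg.split())
        if used + cost > budget:
            break                       # older messages get pushed out
        kept.append(msg)
        used += cost
    return list(reversed(kept))         # restore chronological order

history = [f"message {i}: " + "word " * 20 for i in range(10)]
print(fit_to_budget(history, budget=100))  # only the newest messages survive
```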
Exercise: Playing with GPT-2 (10 min)
Go visit mirror.zad-giessen.de/perplexity
As you play with it, we will talk about why the context window matters so much.
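If you want to reproduce the idea on your own machine, here is a rough sketch of computing GPT-2 perplexity with the Hugging Face transformers library (the demo page may work differently; this assumes torch and transformers are installed):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels == inputs, the model returns the mean next-token
        # cross-entropy loss; perplexity is exp(loss).
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

print(perplexity("The cat sat on the mat."))  # predictable text: low perplexity
print(perplexity("Mat the on sat cat the."))  # scrambled text: higher perplexity
```

Low perplexity means the model found the text predictable given the preceding context.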
Other concepts:
- Tokens
- Temperature
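Temperature is the easiest of these to see in a toy example: it rescales the model’s scores (logits) before the softmax, so low values sharpen the distribution toward the most likely token and high values flatten it. A minimal sketch:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    scaled = [x / temperature for x in logits]
    m = max(scaled)                     # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # made-up scores for three candidate tokens
print(softmax_with_temperature(logits, 0.5))  # peaked: near-greedy choice
print(softmax_with_temperature(logits, 1.0))  # standard softmax
print(softmax_with_temperature(logits, 2.0))  # flatter: more random sampling
```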
Thou shalt not allow an error to live.
Looking Ahead
Today: Metaprompting
- Can AI write its own prompts?
- The blank page problem
- Epistemic humility (or lack thereof)
Tomorrow: What Can We Verify?
- Working with documents
- Extracting vs. interpreting
- Model differences matter
Key Takeaways
- Prompting is engineering - rules of thumb, not laws
- Iteration is required - expect to refine
- Context accumulates - each exchange builds on the last. Do not allow errors to remain.
- Your judgment develops - through annotation and reflection
Before the Break
- Save your annotated conversations
- Add all prompts to our grimoire
- Let us know which ones worked and which ones didn’t
- Think about: What patterns are you noticing?
See you at 11:00 for metaprompting!