Stage 05 · Specialties · Module 24 of 26 · ~6h

Education

Lesson plans, tutoring, and personalised feedback.


A good tutor doesn't give answers — they ask the right next question. This module is about getting Claude to behave like that, plus the practical tasks educators actually face: lesson plans, exercise generation, marking with feedback.

By the end of this module you'll have built three things: a Socratic tutor that withholds answers, an exercise generator with built-in rubrics, and a marking function that returns structured feedback.

Time: about 1.5 hours for the basics, ~6 hours with all three notebooks.

Prerequisites: Modules 3 (prompt basics), 10 (conversations), and 13 (content creation). Add Module 14 (production patterns) before deploying anything to real students.


The instinct to override

Claude's default mode is "be helpful by giving the answer". For education, that's almost exactly wrong. You'll be fighting that default in every prompt — a tutor who gives the answer at the first sign of struggle has just deprived the student of the learning.


Recipe 1 · Socratic tutor

from anthropic import Anthropic
from dotenv import load_dotenv

load_dotenv()
client = Anthropic()

TUTOR = """\
You are a patient maths tutor for a 14-year-old preparing for GCSE.

CORE RULE: Never give the final answer until the student has tried at least once
and explained their thinking. If asked directly for the answer, respond with a
question that helps them notice the next step themselves.

When the student is stuck:
- First, ask what they've tried.
- Second, point at the *concept* that's missing (not the calculation).
- Third, only after two attempts, demonstrate ONE worked sub-step — not the full solution.

Tone: warm, never condescending. Avoid "well done" until something's actually well done.
"""

def tutor_turn(history: list[dict], student_message: str) -> tuple[list[dict], str]:
    """Append the student's message, get the tutor's reply, and record both in history."""
    history.append({"role": "user", "content": student_message})
    response = client.messages.create(
        model="claude-sonnet-4-6", max_tokens=400, system=TUTOR, messages=history,
    )
    reply = response.content[0].text
    history.append({"role": "assistant", "content": reply})
    return history, reply

# Try it:
history = []
history, reply = tutor_turn(history, "I have to solve 2x + 5 = 13 but I don't know where to start")
print("tutor>", reply)
history, reply = tutor_turn(history, "just tell me the answer")
print("tutor>", reply)

What to look for: the second response should not be "x = 4". It should be a question that helps the student see how to get there.
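If you want to check that automatically, a naive guard works as a first pass. This is a sketch of this guide's own devising, not part of any API: compare the reply against the known final answer, ignoring spaces, and flag it before it reaches the student.

```python
def leaks_answer(reply: str, final_answer: str) -> bool:
    """Return True if the tutor's reply appears to state the final answer directly.

    Naive check: spaces are stripped so "x = 4" and "x=4" both match.
    """
    return final_answer.replace(" ", "") in reply.replace(" ", "")

# For 2x + 5 = 13, the answer is x = 4:
print(leaks_answer("What could you subtract from both sides?", "x = 4"))  # False
print(leaks_answer("Easy: the answer is x = 4.", "x = 4"))                # True
```

It will false-positive on hints that legitimately mention the number, so treat a flag as "review this reply", not "block it".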


Recipe 2 · Exercise generator with built-in rubric

LESSON = "Solving linear equations of the form ax + b = c, where a ≠ 0."
LEVEL  = "Year 9 (~13–14 years old), beginner"

response = client.messages.create(
    model="claude-sonnet-4-6", max_tokens=800,
    system=(
        "You are a maths teacher generating an exercise sheet. "
        "Output JSON with this shape:\n"
        '{"questions":[{"q": "...", "answer": "...", "rubric": {"1_mark_for": "...", "2_marks_for": "..."}}]}'
        "\nGenerate 5 questions of progressive difficulty. The first must be solvable in one step."
        " Include rubrics that grade method, not just final answer."
    ),
    messages=[{"role": "user", "content": f"Lesson: {LESSON}\nLevel: {LEVEL}"}],
)
print(response.content[0].text)

A rubric with method-marks is the educational equivalent of citations: it forces the model to think about how a student should reach the answer, not just whether the final number matches.
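Before trusting that JSON downstream, parse it defensively: models sometimes wrap the object in a ```json fence or a sentence of prose. A minimal sketch (the helper name and the shape check are this guide's assumptions):

```python
import json
import re

def parse_exercise_json(raw: str) -> dict:
    """Parse the generator's output, tolerating a markdown fence or surrounding prose.

    Extracts the first {...} span from the text, then validates the expected shape.
    """
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    data = json.loads(match.group(0))
    if not isinstance(data.get("questions"), list):
        raise ValueError("expected a top-level 'questions' list")
    return data

# Example with a fenced response, as models often produce:
sample = '```json\n{"questions": [{"q": "Solve x + 2 = 5", "answer": "x = 3", '
sample += '"rubric": {"1_mark_for": "isolating x"}}]}\n```'
sheet = parse_exercise_json(sample)
print(len(sheet["questions"]))  # 1
```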


Recipe 3 · Marking with feedback

def mark(question: str, model_answer: str, rubric: dict, student_answer: str) -> dict:
    response = client.messages.create(
        model="claude-sonnet-4-6", max_tokens=400,
        system=(
            "You are marking a student's answer against a rubric. Output JSON with: "
            '{"marks_awarded": int, "marks_possible": int, '
            '"specific_strengths": [str], "specific_improvements": [str], '
            '"next_step_question": str}'
            "\nBe specific. 'Good work' is not feedback. Always include one constructive "
            "next-step question that probes deeper understanding."
        ),
        messages=[{
            "role": "user",
            "content": (
                f"Q: {question}\nMODEL ANSWER: {model_answer}\nRUBRIC: {rubric}\n"
                f"STUDENT ANSWER: {student_answer}"
            ),
        }],
    )
    import json  # in a real module, move this import to the top of the file
    return json.loads(response.content[0].text)

Notice the next_step_question field. Marking shouldn't be a verdict — it should hand the student back to the next bit of learning.
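One way to honour that: render the marking dict into a student-facing message that ends on the question, so the last thing the student reads is an invitation to keep thinking. A sketch assuming the field names from the rubric above; the example dict is invented for illustration.

```python
def feedback_message(result: dict) -> str:
    """Format the marking dict for the student, ending on the next-step question."""
    lines = [f"Score: {result['marks_awarded']}/{result['marks_possible']}"]
    for strength in result["specific_strengths"]:
        lines.append(f"+ {strength}")
    for improvement in result["specific_improvements"]:
        lines.append(f"> {improvement}")
    lines.append(f"Think about this next: {result['next_step_question']}")
    return "\n".join(lines)

example = {
    "marks_awarded": 2, "marks_possible": 3,
    "specific_strengths": ["Correctly subtracted 5 from both sides"],
    "specific_improvements": ["Show the division step explicitly"],
    "next_step_question": "What would change if the coefficient of x were negative?",
}
print(feedback_message(example))
```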


Safety, age-appropriateness, and accountability

Education is a domain where defaults aren't enough: think about the safety of generated content, the age-appropriateness of language and examples, and accountability for marks that affect real students.

If you're shipping anything to under-18s, treat the whole thing as a regulated product. Module 14's eval and observability practices aren't optional here.
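In that spirit, evaluate the marker before it touches a real student. A minimal eval-harness sketch (this guide's own construction, not a Module 14 API): run any marker callable against golden cases with known marks and measure agreement. In practice you would pass the `mark` function from Recipe 3; here a stub marker stands in so the loop runs offline.

```python
def eval_marker(marker, golden_cases: list[dict]) -> float:
    """Return the fraction of golden cases where the marker awards the expected marks."""
    agree = 0
    for case in golden_cases:
        result = marker(case["question"], case["model_answer"],
                        case["rubric"], case["student_answer"])
        if result["marks_awarded"] == case["expected_marks"]:
            agree += 1
    return agree / len(golden_cases)

# Stub marker for the demo: full marks only on an exact answer match.
def stub_marker(question, model_answer, rubric, student_answer):
    full = student_answer.strip() == model_answer.strip()
    return {"marks_awarded": 2 if full else 0, "marks_possible": 2}

cases = [
    {"question": "2x + 5 = 13", "model_answer": "x = 4", "rubric": {},
     "student_answer": "x = 4", "expected_marks": 2},
    {"question": "3x = 12", "model_answer": "x = 4", "rubric": {},
     "student_answer": "x = 3", "expected_marks": 0},
]
print(eval_marker(stub_marker, cases))  # 1.0
```

Set a threshold (say, 95% agreement with a teacher's marks on a held-out set) and refuse to ship below it.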


Try changing one thing


Going deeper: open the notebooks


Module checklist


Next module

Module S5 · Software Engineering — Claude as a pair programmer for working engineers.