Building Applications
Turn prompts into real apps with a clean structure.
So far you've been calling Claude from one-off scripts. This module is the leap into a real app: a clean structure, a single source of truth for prompts, and an entry point you can hand someone else to run.
By the end of this module you'll have
- A small Claude-backed CLI app organised into config, prompts, client, app, and entry point — five files, no framework
- The habit of putting prompts in their own file (and version-controlling them like code)
- A mental model for where to add features without breaking the structure
Time: about 1.5 hours for the basics, ~8 hours with all three notebooks.
Prerequisites: Modules 4 (API basics) and 6 (advanced prompting).
The shape of a small Claude app
my_app/
├── config.py # model, temperature, paths — one place to change them
├── prompts.py # all prompt templates as functions returning strings
├── client.py # the retry-wrapped Anthropic client
├── app.py # the actual feature: takes input, returns output
└── __main__.py # turns the package into a CLI: `python -m my_app ...`
Five files. No framework. You can copy this layout into any project and it'll fit.
Build it now
Make a folder summarizer/ next to your scripts and create five files.
summarizer/config.py
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    model: str = "claude-sonnet-4-6"
    max_tokens: int = 400
    temperature: float = 0.2

CONFIG = Config()
summarizer/prompts.py — every prompt in one place, as a pure function:
def summarize_prompt(text: str, *, sentences: int = 3) -> str:
    return (
        f"Summarise the text below in {sentences} sentences. "
        "Neutral tone. No marketing language. No bullet points.\n\n"
        f"---\n{text}\n---"
    )
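Because the prompt builder is a pure function, you can sanity-check it without touching the API at all. A minimal sketch (the function is repeated here only so the check runs standalone):

```python
# Same pure function as in summarizer/prompts.py, inlined for a standalone check.
def summarize_prompt(text: str, *, sentences: int = 3) -> str:
    return (
        f"Summarise the text below in {sentences} sentences. "
        "Neutral tone. No marketing language. No bullet points.\n\n"
        f"---\n{text}\n---"
    )

prompt = summarize_prompt("Some long article.", sentences=2)
assert "2 sentences" in prompt      # the knob made it into the instructions
assert prompt.endswith("---")       # the delimiters survived formatting
```

Checks like these run in milliseconds and cost nothing, which is exactly why prompts-as-functions beat prompts-as-string-literals scattered through the code.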
summarizer/client.py — the wrapper from Module 4 lives here:
import time, random

from anthropic import Anthropic, RateLimitError, APIConnectionError, APITimeoutError
from dotenv import load_dotenv

load_dotenv()
_client = Anthropic()
_TRANSIENT = (RateLimitError, APIConnectionError, APITimeoutError)

def call(messages, *, model: str, max_tokens: int, temperature: float, max_attempts: int = 4):
    for attempt in range(max_attempts):
        try:
            return _client.messages.create(
                model=model, max_tokens=max_tokens, temperature=temperature, messages=messages,
            )
        except _TRANSIENT:
            if attempt == max_attempts - 1:
                raise
            time.sleep((2 ** attempt) + random.random())
summarizer/app.py — the actual feature:
from .config import CONFIG
from .prompts import summarize_prompt
from .client import call

def summarize(text: str, sentences: int = 3) -> str:
    response = call(
        messages=[{"role": "user", "content": summarize_prompt(text, sentences=sentences)}],
        model=CONFIG.model, max_tokens=CONFIG.max_tokens, temperature=CONFIG.temperature,
    )
    return response.content[0].text.strip()
summarizer/__main__.py — turns the package into a runnable CLI:
import sys

from .app import summarize

if __name__ == "__main__":
    text = sys.stdin.read() if not sys.stdin.isatty() else " ".join(sys.argv[1:])
    if not text.strip():
        sys.exit("Usage: echo 'long text' | python -m summarizer (or: python -m summarizer 'text')")
    print(summarize(text))
Run it:
echo "$(cat README.md)" | python -m summarizer
python -m summarizer "Long text passed as an argument."
What this layout buys you
- **One place for every change.** Want a different model? Edit `config.py`. New prompt? Add a function to `prompts.py`. Want to retry on a new exception? `client.py`. Nothing leaks.
- **Prompts diff cleanly.** Because they're plain Python functions returning strings, your code reviewer can read a PR like "the summariser tone changed" without scrolling through API plumbing.
- **Tests stay easy.** `summarize` takes a string and returns a string. Pass a stub that fakes `call(...)` and you have a unit test. We'll do that in Module 20.
- **It scales without rewrites.** A FastAPI route, a Lambda handler, or a Streamlit UI all just import `summarize`. The structure is the same.
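The testing point deserves a sketch. In a real test you would import `summarizer.app` and patch its `call`; here the function is inlined (with `call` as an injectable parameter, an assumption for the sake of a standalone example) so the pattern runs on its own:

```python
from types import SimpleNamespace

# Stub that never touches the network; shaped like an Anthropic Messages response.
def fake_call(messages, *, model, max_tokens, temperature):
    return SimpleNamespace(content=[SimpleNamespace(text="  A short summary.  ")])

# Stand-in for summarizer.app.summarize with the client injectable, so the
# sketch is self-contained. In the real package you'd patch summarizer.app.call.
def summarize(text: str, sentences: int = 3, call=fake_call) -> str:
    response = call(
        messages=[{"role": "user", "content": f"Summarise in {sentences} sentences: {text}"}],
        model="claude-sonnet-4-6", max_tokens=400, temperature=0.2,
    )
    return response.content[0].text.strip()

assert summarize("any input") == "A short summary."
```

No API key, no network, no flakiness: the test exercises the string-in/string-out contract and nothing else.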
Where new features go
| You want to add... | Put it in |
|---|---|
| A new prompt or task | A function in prompts.py and a function in app.py |
| A different model for one feature | A second Config instance, or pass overrides into call(...) |
| Logging | Wrap call() in client.py — every feature gets it free |
| A web UI / API endpoint | New file outside the package, importing from app.py |
| User authentication, rate limiting, billing | Outside summarizer/ — that's app-shell territory, not Claude-app territory |
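The "Logging" row is worth a sketch: wrap `call()` once in `client.py` and every feature inherits it. A minimal version using the stdlib `logging` module (the wrapped function below is a stand-in for the real Anthropic call, so the sketch runs without an API key):

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("summarizer.client")

def logged(fn):
    """Log model, duration, and outcome for every wrapped API call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            result = fn(*args, **kwargs)
            log.info("call model=%s ok in %.2fs", kwargs.get("model"), time.monotonic() - start)
            return result
        except Exception:
            log.exception("call model=%s failed", kwargs.get("model"))
            raise
    return wrapper

# Stand-in for the real call() so the sketch runs offline.
@logged
def call(messages, *, model, max_tokens, temperature):
    return {"text": "stub response"}

result = call([{"role": "user", "content": "hi"}],
              model="claude-sonnet-4-6", max_tokens=400, temperature=0.2)
```

Because the decorator sits in `client.py`, `app.py` and every future feature get the logging for free, which is the whole point of the table.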
The last row matters: don't put auth or rate limiting inside this package. It's the Claude integration, not the whole product. Keep it small.
Try changing one thing
- Add a `bullet_summary_prompt(text)` to `prompts.py` and a `summarize_bullets()` to `app.py`. Notice you didn't touch `client.py` or `config.py`.
- Replace `messages.create` with `messages.stream` in `client.py` and have `app.py` yield text chunks. The CLI will start printing immediately.
- Pass `temperature=0.8` for a "creative tagline" feature. Same pipeline, different config.
- Move the prompts from `prompts.py` to a YAML file loaded at startup. Now non-engineers can edit prompts.
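The "different config" exercise doesn't need a second hand-written class: because `Config` is a frozen dataclass, `dataclasses.replace` gives you a per-feature override while the default stays untouched. A sketch reusing the `Config` from `config.py`:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Config:
    model: str = "claude-sonnet-4-6"
    max_tokens: int = 400
    temperature: float = 0.2

CONFIG = Config()
# Per-feature override: same frozen defaults, higher temperature for taglines.
TAGLINE_CONFIG = replace(CONFIG, temperature=0.8)

assert CONFIG.temperature == 0.2        # the default is untouched
assert TAGLINE_CONFIG.temperature == 0.8
assert TAGLINE_CONFIG.model == CONFIG.model
```

One pipeline, two configs: the feature code picks which `Config` instance to pass, and `client.py` never knows the difference.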
Going deeper: open the notebooks
- `notebooks/01_introduction.ipynb` — feature flags for model rollouts, capturing user feedback (~1.5–2h)
- `notebooks/02_intermediate.ipynb` — caching, background jobs, prompt versioning per deploy (~2–3h)
- `notebooks/03_advanced.ipynb` — multi-environment promotion, disaster recovery, SLAs (~1.5–2.5h)
Module checklist
- [ ] You ran `python -m summarizer` and got a real summary
- [ ] You can point to which file you'd touch to (a) change the model, (b) tweak the prompt, (c) add a new feature
- [ ] You can explain in one sentence why prompts live in their own file
- [ ] You can imagine bolting a FastAPI endpoint onto this without rewriting any of the five files
Next module
Module 8 · Tool Use — let Claude take action by calling functions you define.