Architect first, model second
I spent the day working through a take-home assignment for an AI systems role in K-12 EdTech, and it sharpened a view I've had for a while:
The most important AI decision usually isn’t which model to use. It’s when not to use one.
The exercise was simple on paper: design a practical pipeline that could ingest student artifacts across text, PDFs, audio, video, and images, then turn them into structured outputs for feedback, summaries, and search.
But once you think about that in a product used by more than a million students, the problem changes quickly.
Now you’re not just choosing models. You’re managing cost, latency, privacy, failure paths, and operational complexity.
That’s why my solution started with a simple rule: use deterministic systems first, smaller models by default, and larger models only when the task truly earns the expense.
In practice, that meant parsing PDFs before invoking vision, stripping audio from video before transcription, transforming every artifact into a single shared schema, and keeping embeddings close to application data so retrieval and tenant isolation stay simple and enforceable.
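To make the routing idea concrete, here is a minimal sketch of what "deterministic first, smaller models by default" can look like as code. Everything here is illustrative, not from the actual assignment: the `Artifact` record stands in for the single shared schema, and the step names are hypothetical pipeline stages ordered cheapest-first.

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    """One shared schema for every ingested artifact (illustrative)."""
    tenant_id: str          # kept on every record so tenant isolation stays enforceable
    mime_type: str
    text: str = ""          # normalized text content, filled in by the pipeline
    steps: list = field(default_factory=list)  # processing plan, cheapest step first

def route_artifact(a: Artifact) -> Artifact:
    """Deterministic-first routing: cheap parsers before models,
    and expensive models only as a fallback."""
    if a.mime_type == "application/pdf":
        # Try deterministic text-layer extraction first; invoke a
        # vision model only for scanned, image-only pages.
        a.steps = ["pdf_text_extract", "vision_ocr_if_empty"]
    elif a.mime_type.startswith("video/"):
        # Strip the audio track first so transcription never pays
        # for video frames it does not need.
        a.steps = ["extract_audio", "transcribe_audio"]
    elif a.mime_type.startswith("audio/"):
        a.steps = ["transcribe_audio"]
    elif a.mime_type.startswith("image/"):
        a.steps = ["vision_describe"]
    else:
        a.steps = ["plain_text_ingest"]  # plain text needs no model at all
    return a
```

The point of the sketch is that the expensive calls live behind branches, so the common path (text, PDFs with a text layer) never touches a large model.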
What stood out to me is how quickly “AI feature design” becomes architecture work.
The model matters, of course. But the more durable advantage lives in the routing logic, the data contracts, the storage design, and the guardrails that determine what happens when the model is uncertain.
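A guardrail for model uncertainty can be as simple as a confidence gate in front of the output. This is a toy sketch under assumed thresholds, not a prescribed design: low-confidence results escalate to a larger model, and very low ones go to human review rather than to a student.

```python
def gate(output: str, confidence: float,
         accept_at: float = 0.85, escalate_at: float = 0.50) -> tuple[str, str]:
    """Return (decision, payload) for a model result.

    Thresholds are illustrative assumptions; in practice they would be
    tuned per task and per model tier.
    """
    if confidence >= accept_at:
        return ("accept", output)            # small model was good enough
    if confidence >= escalate_at:
        return ("escalate_to_larger_model", output)  # pay for the big model only now
    return ("human_review", output)          # never ship an uncertain answer to a student
```

The durable part is not the thresholds but the contract: every model call has a defined path for "not confident enough," so failure modes are a design decision rather than an accident.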
That feels especially true in education, where trust matters just as much as capability.
Architect first. Model second.
Full PDF of the take-home assignment: