The Solution
Compile away the complexity before it reaches your LLM.
Write the full logic
Define your prompts with all the branching, conditions, and variations you need. Put the logic where it belongs—in the prompt file, visible to everyone.
teacher.prompt
init do
  @version: 1.0
  @major: 1

  params:
    @variation: enum[analogy, recognition]
    @depth: enum[shallow, medium, deep] = medium
    @is_question: bool = false
    @user_input: str
end init

if @is_question is true do
  Answer the user's question directly
else
  case @variation do
    analogy: Use an analogy to explain
    recognition: Ask a guiding question
  end @variation

  case @depth do
    shallow: Give a short answer.
    medium: Give a moderate answer with one example.
    deep: Give a thorough answer with multiple examples.
  end @depth
end @is_question

@user_input

response do
  {
    "response_type": "...",
    "content": "..."
  }
end response
Compile to clean output
When you call the prompt with specific parameters, the compiler resolves all branching and sends only the relevant content to the LLM.
LLM receives
Ask a guiding question.
Give a moderate answer with one example.

[user message]

Respond with this JSON:
{
  "response_type": "...",
  "content": "..."
}
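To make the resolution step concrete, here is a minimal Python sketch of what the compiler does with one concrete call. This is an illustration, not the product's actual API: the compile_prompt function, its signature, and the way parameters are passed are assumptions made for this sketch. Only the parameter names mirror the params declared in teacher.prompt above.

# Hypothetical sketch of call-time branch resolution.
# compile_prompt and its signature are illustrative, not a real API.
def compile_prompt(variation: str, depth: str = "medium",
                   is_question: bool = False, user_input: str = "") -> str:
    """Resolve the teacher.prompt branches for one concrete call."""
    parts = []
    if is_question:
        parts.append("Answer the user's question directly")
    else:
        # Only the taken branch of each case survives compilation.
        parts.append({
            "analogy": "Use an analogy to explain",
            "recognition": "Ask a guiding question.",
        }[variation])
        parts.append({
            "shallow": "Give a short answer.",
            "medium": "Give a moderate answer with one example.",
            "deep": "Give a thorough answer with multiple examples.",
        }[depth])
    parts.append(user_input)
    parts.append('Respond with this JSON:\n'
                 '{"response_type": "...", "content": "..."}')
    return "\n".join(parts)

# A call with concrete parameters yields only the relevant instructions:
print(compile_prompt(variation="recognition", user_input="[user message]"))

Running the sketch with variation="recognition" and the defaults (depth="medium", is_question=False) reproduces the payload shown above: the untaken branches never reach the LLM.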
What you get
Single source of truth
All prompt logic in .prompt files. No more scattered f-strings.
Non-technical iteration
Product managers and prompt engineers can edit prompts directly.
Contracts included
Params and response schemas are declared and versioned together.
Token efficiency
Only relevant branches are sent. No wasted tokens on untaken paths.
No branching. No logic. No dead weight. Just the instruction the LLM needs.