The Solution

Compile away the complexity before it reaches your LLM.

Write the full logic

Define your prompts with all the branching, conditions, and variations you need. Put the logic where it belongs: in the prompt file, visible to everyone.

teacher.prompt
init do
  @version: 1.0
  @major: 1

  params:
    @variation: enum[analogy, recognition]
    @depth: enum[shallow, medium, deep] = medium
    @is_question: bool = false
    @user_input: str

end init

if @is_question is true do
  Answer the user's question directly.
else

case @variation do
  analogy: Use an analogy to explain.
  recognition: Ask a guiding question.
end @variation

case @depth do
  shallow: Give a short answer.
  medium: Give a moderate answer with one example.
  deep: Give a thorough answer with multiple examples.
end @depth

end @is_question

@user_input

response do
  {
    "response_type": "...",
    "content": "..."
  }
end response

Compile to clean output

When you call the prompt with specific parameters, the compiler resolves all branching and sends only the relevant content to the LLM.

LLM receives
Ask a guiding question.

Give a moderate answer with one example.

[user message]

Respond with this JSON:
{
  "response_type": "...",
  "content": "..."
}
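The resolution step above can be sketched in a few lines of plain Python. This is an illustrative model of what the compiler does, not the product's actual API; the function and dictionary names below are made up for the example:

```python
# Branch bodies from teacher.prompt, keyed by the enum values
# declared in its params block.
VARIATION = {
    "analogy": "Use an analogy to explain.",
    "recognition": "Ask a guiding question.",
}
DEPTH = {
    "shallow": "Give a short answer.",
    "medium": "Give a moderate answer with one example.",
    "deep": "Give a thorough answer with multiple examples.",
}

def resolve_prompt(variation, depth="medium", is_question=False, user_input=""):
    """Resolve all branches up front; only the taken paths reach the LLM."""
    parts = []
    if is_question:
        parts.append("Answer the user's question directly.")
    else:
        parts.append(VARIATION[variation])
        parts.append(DEPTH[depth])
    parts.append(user_input)
    return "\n\n".join(parts)

# Reproduces the compiled output shown above.
print(resolve_prompt("recognition", user_input="[user message]"))
```

The key point is that the `if`/`case` logic runs before the LLM call, so the untaken branches never appear in the final prompt string.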

What you get

Single source of truth

All prompt logic in .prompt files. No more scattered f-strings.

Non-technical iteration

Product managers and prompt engineers can edit prompts directly.

Contracts included

Params and response schemas are declared and versioned together.

Token efficiency

Only relevant branches are sent. No wasted tokens on untaken paths.

No branching. No logic. No dead weight. Just the instruction the LLM needs.
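The "contracts included" idea can be sketched as a small response check, assuming the declared response schema is available as plain data at runtime. The contract shape and names below are illustrative; the page does not show the compiler's actual contract format:

```python
import json

# Hypothetical contract derived from teacher.prompt's init and
# response blocks; the real emitted format is not shown here.
CONTRACT = {
    "version": "1.0",
    "params": {
        "variation": {"analogy", "recognition"},
        "depth": {"shallow", "medium", "deep"},
    },
    "response_keys": {"response_type", "content"},
}

def validate_response(raw: str) -> dict:
    """Parse an LLM reply and check it against the declared response keys."""
    data = json.loads(raw)
    missing = CONTRACT["response_keys"] - data.keys()
    if missing:
        raise ValueError(f"response missing keys: {sorted(missing)}")
    return data

reply = '{"response_type": "question", "content": "What happens next?"}'
print(validate_response(reply)["response_type"])  # question
```

Because the params and response schema live in the same versioned file as the prompt text, a check like this can fail loudly when a prompt edit and its consumers drift apart.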