FAQ & Troubleshooting

Answers to common questions and solutions to frequent issues when using dot-prompt.

General Questions

What is dot-prompt?

dot-prompt is a prompt management system that lets you write, version, and compile prompts using a specialized DSL. Think of it like a compiler for your prompts - you write in the dot-prompt language, and it produces optimized prompt strings for your LLMs.

Why do I need a prompt compiler?

As your prompts grow in complexity, you face challenges like prompts scattered across files, no type safety, no version control, difficult testing, and no way to validate outputs. dot-prompt solves all of these by bringing software engineering best practices to prompt engineering.

What languages/clients are supported?

We provide official clients for TypeScript/JavaScript and Python. The API is REST-based, so you can easily create clients for other languages.

How is dot-prompt licensed?

dot-prompt is open source under the Apache 2.0 license. The TypeScript client (@dotprompt/client) and Python client (dotprompt-client) are also Apache 2.0 licensed.

Installation & Setup

Docker container won't start

Common causes and solutions (a sample run command follows the list):

  • Port already in use - Check if something else is using port 4000: lsof -i :4000
  • Permission issues - On Linux, you may need to run Docker with sudo or add your user to the docker group
  • Missing prompts directory - Create the prompts folder before starting: mkdir -p prompts
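If those all check out, a typical invocation maps port 4000 and mounts your prompts directory. The image name and container path below are illustrative assumptions - use the ones from the Installation & Setup guide:

docker run -p 4000:4000 -v "$(pwd)/prompts:/app/prompts" dotprompt/server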

The API returns 404

Make sure you're calling the correct port (4000 by default) and that your prompt file exists in the prompts directory with the .prompt extension. Also verify the container is running: docker ps

How do I run without Docker?

See the Installation & Setup guide for running with Elixir directly. You'll need Elixir 1.17+ and Erlang/OTP.

Language & Syntax

What's the difference between @ and @@?

@variable references a runtime parameter that gets passed when rendering. @@variable references a compile-time constant defined in the init block. Constants are resolved when the prompt is compiled; variables are resolved at runtime.
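A hypothetical sketch (the init-block entry, the string annotation, and the body line are illustrative assumptions - see the Language Reference for exact syntax). Here @@style is fixed at compile time, while @topic is supplied on each render:

init:
  @@style: "concise"

params:
  @topic: string

Write a @@style summary about @topic.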

How do enum parameters work?

Use enum[value1, value2, value3] in your params definition. Enum values are validated at runtime, and enums pair with case blocks for branching:

params:
  @level: enum[beginner, expert] = beginner
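A hypothetical case block branching on that parameter might look like this (the do/end block form is assumed from the vary example below):

case @level do
  beginner: Explain the concept simply, avoiding jargon.
  expert: Be terse and technical.
end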

Can I use fragments across multiple prompts?

Yes! Fragments defined in one prompt can be injected into others using the inject keyword. This enables reusable prompt components. See the Fragments documentation for details.
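As a hypothetical sketch - the fragment definition keyword and block form here are assumptions, so defer to the Fragments documentation for the real syntax:

fragment tone_guidelines do
  Keep the tone friendly, concise, and free of filler.
end

Then, in another prompt:

inject tone_guidelines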

How does versioning work?

Versioning is built into the init block with @major and @version. The server can handle multiple versions simultaneously, allowing you to migrate users gradually. See the Versioning guide for the full strategy.
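A minimal sketch, assuming the init block uses the same key: value layout as params (see the Versioning guide for the real conventions):

init:
  @major: 2
  @version: "2.1.0"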

What's the difference between case and vary?

case works with discrete values (strings, enums) - like a switch statement. vary works with numeric ranges - great for things like temperature or creativity levels. For example:

vary @temperature do
  low: ...
  medium: ...
  high: ...
end

Client Libraries

TypeScript client throws "baseUrl is required"

Make sure you're passing the baseUrl when creating the client:

const client = new DotPromptClient({
  baseUrl: 'http://localhost:4000'
});

Python client: "Context manager exited without closing"

Always use the client with a context manager to ensure proper cleanup:

with DotPromptClient() as client:
    result = client.render('prompt', params)

How do I handle errors from the API?

Both clients throw specific exceptions. In Python:

from dotprompt.exceptions import PromptNotFoundError, ValidationError

try:
    result = client.render('my-prompt', params)
except PromptNotFoundError:
    print("Prompt not found")
except ValidationError as e:
    print(f"Validation failed: {e}")

In TypeScript, use try/catch with the error types exported by the package.
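A sketch, assuming the TypeScript package exports error classes mirroring the Python names (the prompt name and params are placeholders; check @dotprompt/client's exports for the exact types):

import { DotPromptClient, PromptNotFoundError, ValidationError } from '@dotprompt/client';

const client = new DotPromptClient({ baseUrl: 'http://localhost:4000' });

try {
  const result = await client.render('my-prompt', { level: 'expert' });
  console.log(result.prompt);
} catch (err) {
  if (err instanceof PromptNotFoundError) {
    console.error('Prompt not found');
  } else if (err instanceof ValidationError) {
    console.error(`Validation failed: ${err.message}`);
  } else {
    throw err; // rethrow anything unexpected
  }
}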

Performance & Scaling

How fast is prompt compilation?

Prompt compilation is extremely fast - typically under 10ms. The compiled prompts are cached, so subsequent renders are nearly instant. The main latency comes from network calls to your LLM.

Can I run multiple prompt servers?

Yes! The servers are stateless - just point your clients to different instances. You can run multiple headless containers behind a load balancer for high availability.

How do I monitor prompt performance?

The API returns timing information in the response. You can also add logging/metrics on the client side to track render times across your application.
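For the client side, a simple timing sketch (assuming render returns a promise; swap console.log for your metrics library of choice):

import { DotPromptClient } from '@dotprompt/client';

const client = new DotPromptClient({ baseUrl: 'http://localhost:4000' });

// Wrap the render call and record the elapsed wall-clock time.
const start = performance.now();
const rendered = await client.render('my-prompt', { level: 'expert' });
console.log(`render took ${(performance.now() - start).toFixed(1)}ms`);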

Migration & Integrations

How do I migrate from inline prompts?

Start by identifying your most complex prompts - those with conditionals, multiple variables, or structured outputs. Move them to .prompt files first, then update your code to call the API instead of inline strings.

Does dot-prompt work with LangChain?

Yes! You can use dot-prompt as the prompt source for LangChain. Just render the prompt with dot-prompt, then pass the result to your LangChain chain. Here's a TypeScript example:

import { DotPromptClient } from '@dotprompt/client';
// LangChain import paths vary by version; these match the classic layout.
import { LLMChain } from 'langchain/chains';
import { PromptTemplate } from 'langchain/prompts';
import { ChatOpenAI } from 'langchain/chat_models/openai';

const dotPrompt = new DotPromptClient({ baseUrl: 'http://localhost:4000' });
// render performs an HTTP call, so await the result.
const rendered = await dotPrompt.render('my-prompt', { ... });

const chain = new LLMChain({
  // The rendered prompt is already a complete string, so no input variables remain.
  prompt: new PromptTemplate({ template: rendered.prompt, inputVariables: [] }),
  llm: new ChatOpenAI()
});

Can I use dot-prompt with OpenAI, Anthropic, etc?

Absolutely! dot-prompt is model-agnostic - it just compiles strings. The compiled prompt works with any LLM. Just render with dot-prompt, then send the result to your LLM API as usual.
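For instance, a sketch with the OpenAI Node SDK (the prompt name, params, and rendered.prompt field follow the examples above; the model name is a placeholder):

import { DotPromptClient } from '@dotprompt/client';
import OpenAI from 'openai';

const dotPrompt = new DotPromptClient({ baseUrl: 'http://localhost:4000' });
const openai = new OpenAI();

// Compile the prompt with dot-prompt, then hand the string to the LLM as usual.
const rendered = await dotPrompt.render('my-prompt', { level: 'expert' });
const completion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: rendered.prompt }],
});
console.log(completion.choices[0].message.content);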

Getting Help

If your question isn't answered here: