Prompts are the instructions that tell the LLM what to do. In Endprompt, prompts are versioned, testable, and independent of your endpoint URLs—letting you iterate safely without changing your application code.

What is a Prompt?

A prompt consists of:
  • Template — The instruction text, using Liquid templating
  • Model — Which LLM to use (GPT-4, Claude, etc.)
  • Settings — Temperature, max tokens, and other parameters
  • Status — Draft, Live, or Archived

For example, a summarization prompt template might look like this:
You are a helpful assistant that summarizes text.

## Instructions
- Read the text carefully
- Extract the main points
- Write a concise summary

## Text
{{ inputs.text }}

## Output
Return a JSON object with a "summary" field.
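
At runtime, input values from your endpoint request are injected wherever Liquid variables appear. As a rough sketch (assuming the python-liquid package and a hypothetical request payload shape), the rendering works like this:

# Minimal sketch of Liquid rendering (assumes the python-liquid package,
# installed with `pip install python-liquid`). The request payload shape
# shown here is hypothetical.
from liquid import Template

prompt_template = Template(
    "## Text\n"
    "{{ inputs.text }}\n\n"
    "## Output\n"
    'Return a JSON object with a "summary" field.'
)

# A hypothetical request body sent to the endpoint.
request_body = {"inputs": {"text": "Endprompt keeps prompts separate from endpoint URLs."}}

# The rendered string is what is actually sent to the LLM.
print(prompt_template.render(inputs=request_body["inputs"]))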

Why Prompts Are Powerful

Versioning

Every save creates a version. Roll back instantly if something breaks.

A/B Testing

Attach multiple prompts to one endpoint. Test which performs better.

Safe Iteration

Test changes in Draft status before promoting to Live.

Model Flexibility

Switch models without changing code. Try GPT-4 vs Claude instantly.

Prompt Lifecycle

Prompts follow a status workflow:
Status   | Description                   | Can be Default?
Draft    | Work in progress, for testing | No
Live     | Production-ready              | Yes
Archived | Deprecated, hidden from lists | No
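
One practical consequence of the workflow is that only Live prompts can serve as an endpoint's default. A minimal sketch of that rule (names are illustrative, not Endprompt's API):

# Sketch of the rule in the table above: only Live prompts can be set as an
# endpoint's default. The function and status strings are illustrative.
def can_be_default(status: str) -> bool:
    return status == "live"

assert can_be_default("live") is True
assert can_be_default("draft") is False
assert can_be_default("archived") is False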

Prompts vs Endpoints

Understanding the relationship:
Concept      | What It Is                 | Example
Endpoint     | Stable API URL             | /api/v1/summarize
Prompt       | LLM instruction template   | "Summarize this text…"
Relationship | One endpoint, many prompts | Endpoint has 3 prompt versions
Your integration code calls the endpoint URL. The prompt can change behind the scenes without affecting your code.
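
For example, application code might call the endpoint like the sketch below; the base URL and request payload shape are assumptions for illustration, not Endprompt's exact request format.

# Sketch of integration code calling the stable endpoint URL (assumes the
# requests package). The host and payload shape are illustrative.
import requests

ENDPOINT_URL = "https://your-endprompt-host/api/v1/summarize"  # hypothetical host

def summarize(text: str) -> dict:
    # The prompt behind this endpoint can be edited, versioned, or rolled
    # back in Endprompt without any change to this function.
    response = requests.post(ENDPOINT_URL, json={"text": text})
    response.raise_for_status()
    return response.json()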

Multiple Prompts per Endpoint

You can attach multiple prompts to a single endpoint:
  • Different models — GPT-4 for quality, GPT-3.5 for speed
  • Different approaches — Concise summary vs detailed analysis
  • Different versions — v1, v2, v3 of your prompt
  • A/B testing — Compare performance between prompts
When calling your endpoint, you can specify which prompt to use:
# Use default prompt
curl -X POST .../api/v1/summarize

# Use specific prompt
curl -X POST .../api/v1/summarize?prompt=detailed-summary-v2
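
The same selection from application code might look like the sketch below; the host and payload shape are assumed as above, and the prompt query parameter mirrors the curl example:

# Sketch: choosing a prompt with the ?prompt= query parameter, mirroring the
# curl example above. Host and payload shape are illustrative.
import requests

ENDPOINT_URL = "https://your-endprompt-host/api/v1/summarize"  # hypothetical host
payload = {"text": "Some long text to summarize."}

# Default prompt attached to the endpoint.
default_result = requests.post(ENDPOINT_URL, json=payload)

# A specific prompt, selected by its identifier.
detailed_result = requests.post(
    ENDPOINT_URL,
    params={"prompt": "detailed-summary-v2"},
    json=payload,
)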

The Prompt Editor

The prompt editor provides:
  • Syntax highlighting for Liquid templates
  • Variable autocomplete for input fields
  • Live preview of rendered output
  • Side-by-side testing so you can run prompts and see results immediately

Key Components

  • Template — The instruction text sent to the LLM. Uses Liquid templating to inject input values.
  • Model — Choose from supported providers (OpenAI, Anthropic) and models (GPT-4, Claude).
  • Temperature — Controls randomness. Lower (0.1-0.3) for consistent outputs, higher (0.7-1.0) for creativity.
  • Max tokens — Maximum length of the LLM response. Higher values allow longer outputs but cost more.
  • System prompt — Optional system-level instruction that sets the AI’s persona or behavior (for models that support it).
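
Put together, one way to picture a prompt's configuration is the sketch below. The field names are illustrative, not Endprompt's actual schema.

# Illustrative snapshot of the pieces a prompt bundles together. Field names
# are hypothetical, not Endprompt's actual schema.
prompt_config = {
    "template": "Summarize the following text:\n{{ inputs.text }}",
    "model": "gpt-4",                                   # provider/model choice
    "temperature": 0.2,                                 # low = more consistent output
    "max_tokens": 512,                                  # caps response length and cost
    "system_prompt": "You are a concise, accurate summarizer.",
    "status": "live",                                   # draft | live | archived
}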

Next Steps