This guide walks you through creating a new prompt for your endpoint.

Prerequisites

Before creating a prompt, ensure you have:
  • An endpoint created
  • Input schema defined (so you know what data you’ll receive)
  • Output schema defined (optional, but recommended)

Creating a Prompt

1. Navigate to the Endpoint: open the endpoint you want to add a prompt to.
2. Go to the Prompts Tab: click the Prompts tab in the endpoint workspace.
3. Click Create Prompt: click the Create Prompt button.
4. Configure Basic Settings: fill in the prompt details:
Field         Description
Name          Descriptive name (e.g., “Detailed Summary v2”)
Model         Select the LLM to use
Temperature   Randomness (0.0 - 1.0)
Max Tokens    Maximum response length
5. Write the Template: enter your prompt template using Liquid syntax.
6. Save: click Save to create the prompt as a Draft.
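At render time, the Liquid template from step 5 has its placeholders replaced with values from your input schema. As a rough illustration of that substitution (a deliberately simplified regex stand-in, not a real Liquid engine; the field names are made up):

```python
import re

def render_simple(template: str, inputs: dict) -> str:
    """Simplified stand-in for a Liquid engine: replaces
    {{ inputs.name }} placeholders with values from `inputs`."""
    def sub(match):
        key = match.group(1)
        return str(inputs.get(key, ""))
    return re.sub(r"\{\{\s*inputs\.(\w+)\s*\}\}", sub, template)

prompt = render_simple(
    "Summarize this {{ inputs.content_type }} in {{ inputs.word_count }} words.",
    {"content_type": "article", "word_count": 150},
)
print(prompt)  # Summarize this article in 150 words.
```

A real Liquid engine also supports filters (`default`, `truncate`, `join`) and control flow (`{% if %}`, `{% for %}`), which appear in the examples later in this guide.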

Prompt Settings Explained

Model Selection

Choose from available models:
Provider    Models                          Best For
OpenAI      GPT-4o, GPT-4, GPT-3.5-turbo    General purpose, coding, analysis
Anthropic   Claude 3 Opus, Sonnet, Haiku    Long context, nuanced responses
Start with GPT-4o or Claude 3 Sonnet for the best balance of quality and speed. Optimize later if needed.

Temperature

Controls randomness in responses:
Value       Behavior                     Use Case
0.0 - 0.3   Deterministic, consistent    Classification, extraction, facts
0.4 - 0.6   Balanced                     General tasks, summarization
0.7 - 1.0   Creative, varied             Creative writing, brainstorming
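Under the hood, temperature divides the model's logits before the softmax, so low values sharpen the token distribution and high values flatten it. A self-contained sketch of that mechanism (illustrative logits only; a temperature of exactly 0 is typically treated as greedy argmax rather than division by zero):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by temperature, then softmax. Lower temperature
    concentrates probability on the highest-logit token."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
low = softmax_with_temperature(logits, 0.2)   # near-deterministic
high = softmax_with_temperature(logits, 1.0)  # more spread out
print(round(low[0], 3), round(high[0], 3))
```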

Max Tokens

Maximum tokens in the response:
  • Short responses (100-500): Classifications, extractions
  • Medium responses (500-2000): Summaries, analyses
  • Long responses (2000+): Detailed reports, long-form content
Higher max tokens = higher costs. Set appropriate limits for your use case.
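Because output tokens are billed per token, the max-tokens cap also bounds the worst-case cost of a single call. A rough estimator (the per-1K-token rate below is a placeholder, not a real price; check your provider's pricing page):

```python
def max_cost_per_call(max_tokens: int, price_per_1k_tokens: float) -> float:
    """Upper bound on output cost for one call. Uses whatever
    per-1K-token rate you pass in; the rate here is hypothetical."""
    return max_tokens / 1000 * price_per_1k_tokens

# Hypothetical rate of $0.01 per 1K output tokens:
print(f"${max_cost_per_call(2000, 0.01):.3f}")  # $0.020
```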

System Prompt (Optional)

A system-level instruction that sets context:
You are a professional content editor with 20 years of experience. 
You always provide constructive, actionable feedback.
Not all models support system prompts. When unsupported, it’s prepended to the main prompt.
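That fallback can be sketched as a small helper: when the model accepts a system role, send two messages; otherwise prepend the system text to the user prompt. The `supports_system` flag and message shape are illustrative, not a specific provider's API:

```python
def build_messages(system: str, user: str, supports_system: bool) -> list:
    """Return a chat-style message list, prepending the system text
    to the user prompt when the model has no system role."""
    if supports_system:
        return [{"role": "system", "content": system},
                {"role": "user", "content": user}]
    return [{"role": "user", "content": f"{system}\n\n{user}"}]

msgs = build_messages("You are a professional content editor.",
                      "Review this draft.", supports_system=False)
print(len(msgs))  # 1
```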

Writing Effective Templates

Structure Your Prompt

A well-structured prompt typically includes:
  1. Role/Context — Who is the AI in this scenario
  2. Instructions — What to do, step by step
  3. Input Data — The user’s data to process
  4. Output Format — How to structure the response

Example: Complete Prompt

You are an expert content analyst specializing in {{ inputs.industry | default: "general" }} content.

## Your Task
Analyze the provided text and extract key insights.

## Instructions
1. Read the content carefully
2. Identify the {{ inputs.num_insights | default: 3 }} most important insights
3. For each insight, provide:
   - A concise title
   - A brief explanation
   - Relevance score (1-10)

## Content to Analyze
{{ inputs.content | truncate: 10000 }}

{% if inputs.focus_areas %}
## Focus Areas
Pay special attention to:
{% for area in inputs.focus_areas %}
- {{ area }}
{% endfor %}
{% endif %}

## Output Format
Return a JSON object:
{
  "insights": [
    {
      "title": "string",
      "explanation": "string",
      "relevance": number
    }
  ],
  "overall_summary": "string"
}

Return only valid JSON, no additional text.
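When a prompt demands strict JSON like the example above, it is worth validating the model's reply before using it. A minimal check with the standard library, mirroring the keys the prompt asks for (the sample reply string is fabricated for illustration):

```python
import json

def parse_insights(raw: str) -> dict:
    """Parse the model's reply and verify the structure requested in
    the prompt; raise ValueError on any mismatch."""
    data = json.loads(raw)
    if not isinstance(data.get("insights"), list):
        raise ValueError("missing 'insights' array")
    for item in data["insights"]:
        for key in ("title", "explanation", "relevance"):
            if key not in item:
                raise ValueError(f"insight missing '{key}'")
    if "overall_summary" not in data:
        raise ValueError("missing 'overall_summary'")
    return data

reply = ('{"insights": [{"title": "t", "explanation": "e", '
         '"relevance": 8}], "overall_summary": "s"}')
print(parse_insights(reply)["insights"][0]["relevance"])  # 8
```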

Common Patterns

Classification

Classify the following text into one of these categories:
{% for cat in inputs.categories %}
- {{ cat }}
{% endfor %}

Text: {{ inputs.text }}

Return JSON: {"category": "selected", "confidence": 0.0-1.0}
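A reply to this classification pattern can be checked the same way: confirm the chosen category is one you actually offered and the confidence is in range. The category names below are illustrative:

```python
import json

def check_classification(raw: str, categories: list) -> dict:
    """Validate a classification reply: known category, confidence
    in [0.0, 1.0]. Raises ValueError otherwise."""
    data = json.loads(raw)
    if data.get("category") not in categories:
        raise ValueError("category not in the allowed list")
    conf = data.get("confidence")
    if not isinstance(conf, (int, float)) or not 0.0 <= conf <= 1.0:
        raise ValueError("confidence must be between 0.0 and 1.0")
    return data

result = check_classification('{"category": "billing", "confidence": 0.92}',
                              ["billing", "support", "sales"])
print(result["category"])  # billing
```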

Extraction

Extract the following information from the text:
- Names of people
- Dates mentioned
- Key actions or events

Text: {{ inputs.text }}

Return JSON with arrays for each type.

Generation

Write a {{ inputs.tone }} {{ inputs.content_type }} about {{ inputs.topic }}.

Requirements:
- Length: approximately {{ inputs.word_count | default: 200 }} words
- Audience: {{ inputs.audience | default: "general" }}
{% if inputs.keywords %}
- Include these keywords: {{ inputs.keywords | join: ", " }}
{% endif %}

Transformation

Rewrite the following text to be more {{ inputs.style }}.

Original:
{{ inputs.text }}

Maintain the core meaning while improving {{ inputs.focus }}.

Testing Your Prompt

After saving, test immediately:
  1. Click the Test button next to your prompt
  2. Fill in the auto-generated form with sample data
  3. Click Run Test
  4. Review the response
  5. Iterate on your template as needed
Test with edge cases: empty optional fields, very long inputs, unusual values. Your prompt should handle them gracefully.
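The edge cases from the tip above can be collected into a small table-driven checklist to run against your template. The field names and cases are examples; adapt them to your own input schema:

```python
# Edge-case inputs to try against a prompt template; each entry is
# (case name, sample inputs). Field names here are illustrative.
edge_cases = [
    ("empty optional field", {"content": "Some text", "focus_areas": None}),
    ("very long input", {"content": "word " * 5000, "focus_areas": ["cost"]}),
    ("unusual values", {"content": "", "focus_areas": []}),
]

for name, inputs in edge_cases:
    # In a real test, render the template with these inputs and
    # inspect the model's response for each case.
    summary = {k: (len(v) if isinstance(v, (str, list)) else v)
               for k, v in inputs.items()}
    print(name, "->", summary)
```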

Best Practices

  • If you need JSON, say so explicitly and provide an example structure.
  • LLMs follow numbered steps more reliably than prose paragraphs.
  • Tell the model what NOT to do: “Do not include explanations outside the JSON.”
  • Few-shot examples dramatically improve consistency for complex tasks.
  • Use conditionals to handle missing data: {% if inputs.optional_field %}

Next Steps