Prerequisites
Before creating a prompt, ensure you have:
- An endpoint created
- Input schema defined (so you know what data you’ll receive)
- Output schema defined (optional, but recommended)
Creating a Prompt
Configure Basic Settings
Fill in the prompt details:
| Field | Description |
|---|---|
| Name | Descriptive name (e.g., “Detailed Summary v2”) |
| Model | Select the LLM to use |
| Temperature | Randomness (0.0 - 1.0) |
| Max Tokens | Maximum response length |
Prompt Settings Explained
Model Selection
Choose from available models:
| Provider | Models | Best For |
|---|---|---|
| OpenAI | GPT-4o, GPT-4, GPT-3.5-turbo | General purpose, coding, analysis |
| Anthropic | Claude 3 Opus, Sonnet, Haiku | Long context, nuanced responses |
Temperature
Controls randomness in responses:
| Value | Behavior | Use Case |
|---|---|---|
| 0.0 - 0.3 | Deterministic, consistent | Classification, extraction, facts |
| 0.4 - 0.6 | Balanced | General tasks, summarization |
| 0.7 - 1.0 | Creative, varied | Creative writing, brainstorming |
Max Tokens
Maximum tokens in the response:
- Short responses (100-500): Classifications, extractions
- Medium responses (500-2000): Summaries, analyses
- Long responses (2000+): Detailed reports, long-form content
System Prompt (Optional)
A system-level instruction that sets context for every request. Not all models support system prompts; when unsupported, the system prompt is prepended to the main prompt.
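For example, a system prompt for a summarization assistant might look like this (the wording is illustrative, not a required format):

```
You are a precise technical writing assistant. Follow the instructions
exactly and respond only with the requested content.
```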
Writing Effective Templates
Structure Your Prompt
A well-structured prompt typically includes:
- Role/Context: who the AI is in this scenario
- Instructions: what to do, step by step
- Input Data: the user's data to process
- Output Format: how to structure the response
Example: Complete Prompt
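A summarization template that combines all four parts might look like the sketch below; the `inputs.document` and `inputs.audience` variables are placeholders, so substitute the fields from your own input schema:

```
You are an expert technical editor.

Instructions:
1. Read the document below carefully.
2. Summarize the key points for a {{ inputs.audience }} audience.
3. Keep the summary under 200 words.

Document:
{{ inputs.document }}

Output format:
Respond with a JSON object: {"summary": "...", "key_points": ["..."]}
```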
Common Patterns
Classification
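A minimal classification template might look like this (the category names and the `inputs.ticket_text` variable are placeholders for your own schema):

```
Classify the following support ticket into exactly one category:
billing, technical, account, or other.

Ticket:
{{ inputs.ticket_text }}

Respond with only the category name, in lowercase.
```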
Extraction
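An extraction template typically names the fields to pull out and the output shape; the field names and `inputs.email_body` here are illustrative:

```
Extract the following fields from the email below:
- sender_name
- requested_action
- deadline (null if not mentioned)

Email:
{{ inputs.email_body }}

Respond with a JSON object containing exactly those three keys.
```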
Generation
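A generation template specifies the content to produce plus tone and length constraints; `inputs.product_name` and `inputs.features` are assumed fields:

```
Write a product announcement for {{ inputs.product_name }}.

Key features to highlight:
{{ inputs.features }}

Tone: friendly and concise. Length: 2-3 short paragraphs.
```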
Transformation
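A transformation template rewrites input from one form into another without adding information; `inputs.meeting_notes` is an assumed field:

```
Rewrite the raw meeting notes below as a structured status update with
the sections "Decisions", "Action Items", and "Open Questions".

Notes:
{{ inputs.meeting_notes }}

Preserve all names and dates exactly as written. Do not add information.
```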
Testing Your Prompt
After saving, test immediately:
- Click the Test button next to your prompt
- Fill in the auto-generated form with sample data
- Click Run Test
- Review the response
- Iterate on your template as needed
Best Practices
Be explicit about output format
If you need JSON, say so explicitly and provide an example structure.
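For instance, instead of "return the results as JSON", spell out the structure in the prompt (the field names here are illustrative):

```
Respond with only a JSON object in this exact shape:
{"sentiment": "positive" | "neutral" | "negative", "confidence": 0.0}
Do not wrap the JSON in markdown code fences or add commentary.
```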
Use numbered instructions
LLMs follow numbered steps more reliably than prose paragraphs.
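For example, rather than describing the task in one paragraph, break it into explicit steps:

```
1. Identify every product name mentioned in the review.
2. For each product, note whether the sentiment is positive or negative.
3. Return one line per product in the form "name: sentiment".
```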
Set clear boundaries
Tell the model what NOT to do: “Do not include explanations outside the JSON.”
Provide examples
Few-shot examples dramatically improve consistency for complex tasks.
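A short few-shot sketch for the classification pattern above (the labels, sample messages, and `inputs.message` variable are illustrative):

```
Classify each message as "bug", "feature_request", or "question".

Message: "The export button crashes the app."
Category: bug

Message: "Could you add dark mode?"
Category: feature_request

Message: {{ inputs.message }}
Category:
```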
Handle edge cases
Use conditionals to handle missing data:
```
{% if inputs.optional_field %}
Additional context: {{ inputs.optional_field }}
{% endif %}
```