The output schema defines what your endpoint returns. While optional, defining an output schema ensures consistent responses and enables validation of LLM outputs.

Why Define Output Schema?

Consistent Responses

Ensure every response follows the same structure

Documentation

OpenAPI docs show exactly what callers receive

Validation

Catch malformed LLM responses before they reach your app

Type Safety

Frontend/backend code can rely on specific fields

Adding Output Fields

Navigate to your endpoint’s Output Schema tab and click Add Field.

Field Configuration

Property     Required  Description
Name         Yes       Field name in the response JSON
Type         Yes       Data type (string, number, boolean, array, object)
Description  No        What this field contains
Output fields are never “required” in the traditional sense—they define what the LLM should return. If the LLM doesn’t include a field, it will be omitted from the response.

Field Types

Output schemas support the same types as input schemas. For example, a string field:
{
  "name": "summary",
  "type": "string",
  "description": "The summarized content"
}

Prompting for Structured Output

Your prompt must instruct the LLM to return JSON matching your schema. Here’s the pattern:
Analyze the following text for sentiment:

{{ inputs.text }}

Respond with a JSON object containing:
- sentiment: "positive", "negative", or "neutral"
- confidence: a number between 0 and 1
- keywords: an array of key terms that influenced the analysis

Return only valid JSON, no additional text.
If your prompt doesn’t ask for JSON output, the LLM may return plain text that won’t parse correctly.
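Even with an instruction like the one above, models occasionally wrap their JSON in Markdown code fences. If you ever consume raw responses yourself (for example, while testing without an output schema), a defensive parser helps. This is an illustrative sketch, not part of Endprompt:

```python
import json

def parse_llm_json(raw: str) -> dict:
    """Parse an LLM reply that should be a bare JSON object.

    Models sometimes wrap JSON in Markdown code fences even when
    told not to, so strip them before parsing.
    """
    text = raw.strip()
    if text.startswith("```"):
        # Drop the opening fence line (and optional "json" tag),
        # then drop the closing fence.
        text = text.split("\n", 1)[1]
        text = text.rsplit("```", 1)[0]
    return json.loads(text)

reply = '{"sentiment": "positive", "confidence": 0.9, "keywords": ["great"]}'
result = parse_llm_json(reply)
```

`json.loads` raises `json.JSONDecodeError` on anything that still isn't valid JSON, which is the same failure mode schema validation protects you from.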

Example Schemas

Sentiment Analysis

Output Schema:
{
  "type": "object",
  "properties": {
    "sentiment": {
      "type": "string",
      "description": "Overall sentiment"
    },
    "confidence": {
      "type": "number",
      "description": "Confidence score 0-1"
    },
    "aspects": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "aspect": { "type": "string" },
          "sentiment": { "type": "string" }
        }
      },
      "description": "Sentiment by aspect"
    }
  }
}
Example Response:
{
  "sentiment": "positive",
  "confidence": 0.92,
  "aspects": [
    { "aspect": "price", "sentiment": "neutral" },
    { "aspect": "quality", "sentiment": "positive" },
    { "aspect": "delivery", "sentiment": "positive" }
  ]
}
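Once a response like this has been validated, downstream code can rely on its shape. A minimal Python sketch of consuming the example response above (the 0.8 review threshold is an arbitrary example, not an Endprompt feature):

```python
# Mirrors the validated example response above.
response = {
    "sentiment": "positive",
    "confidence": 0.92,
    "aspects": [
        {"aspect": "price", "sentiment": "neutral"},
        {"aspect": "quality", "sentiment": "positive"},
        {"aspect": "delivery", "sentiment": "positive"},
    ],
}

# Flag low-confidence results for human review.
needs_review = response["confidence"] < 0.8

# Collect the aspects that were rated positive.
positives = [a["aspect"] for a in response["aspects"]
             if a["sentiment"] == "positive"]
```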

Content Classification

Output Schema:
{
  "type": "object",
  "properties": {
    "category": {
      "type": "string",
      "description": "Primary category"
    },
    "subcategories": {
      "type": "array",
      "items": { "type": "string" },
      "description": "Secondary categories"
    },
    "confidence_scores": {
      "type": "object",
      "description": "Confidence per category"
    }
  }
}
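Because `confidence_scores` is a free-form object keyed by category name, clients can rank categories themselves. The response below is a hypothetical reply matching the schema above:

```python
# Hypothetical validated classification response.
response = {
    "category": "technology",
    "subcategories": ["ai", "software"],
    "confidence_scores": {"technology": 0.88, "business": 0.07, "science": 0.05},
}

# Pick the highest-scoring category and sanity-check it
# against the "category" field the LLM returned.
scores = response["confidence_scores"]
top = max(scores, key=scores.get)
assert top == response["category"]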

Text Extraction

Output Schema:
{
  "type": "object",
  "properties": {
    "entities": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "text": { "type": "string" },
          "type": { "type": "string" },
          "start": { "type": "integer" },
          "end": { "type": "integer" }
        }
      }
    },
    "summary": {
      "type": "string"
    }
  }
}
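One benefit of structured extraction output is that it can be cross-checked mechanically. A Python sketch, assuming `start` and `end` are character offsets into the input text (the response below is hypothetical):

```python
text = "Ada Lovelace worked with Charles Babbage."

# Hypothetical validated response matching the extraction schema above,
# where "start"/"end" are character offsets into the input text.
response = {
    "entities": [
        {"text": "Ada Lovelace", "type": "PERSON", "start": 0, "end": 12},
        {"text": "Charles Babbage", "type": "PERSON", "start": 25, "end": 40},
    ],
    "summary": "Two historical figures are mentioned.",
}

# Cross-check each entity's offsets against the source text; LLMs often
# get spans slightly wrong, so this is a useful extra check beyond
# schema validation.
for entity in response["entities"]:
    assert text[entity["start"]:entity["end"]] == entity["text"]
```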

Response Validation

When an output schema is defined, Endprompt:
  1. Parses the LLM response as JSON
  2. Validates against your schema
  3. Returns the validated response
If validation fails, you’ll see details in the execution logs.
Start without an output schema while experimenting, then add one once you’ve stabilized the prompt’s output format.
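The three steps can be sketched in Python. This is a simplified stand-in, assuming a validator that only checks top-level property types; Endprompt's actual validation runs server-side and is more thorough:

```python
import json

# Map JSON Schema type names to Python types.
TYPE_MAP = {"string": str, "number": (int, float), "integer": int,
            "boolean": bool, "array": list, "object": dict}

def validate_response(raw: str, schema: dict) -> dict:
    data = json.loads(raw)                        # 1. Parse the LLM response as JSON
    for name, spec in schema["properties"].items():
        if name in data:                          # 2. Validate fields that are present
            if not isinstance(data[name], TYPE_MAP[spec["type"]]):
                raise TypeError(f"{name}: expected {spec['type']}")
    return data                                   # 3. Return the validated response

schema = {"type": "object",
          "properties": {"summary": {"type": "string"},
                         "confidence": {"type": "number"}}}
result = validate_response('{"summary": "ok", "confidence": 0.9}', schema)
```

Note that, matching the behavior described earlier, fields missing from the response are simply omitted rather than treated as errors.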

JSON Schema Preview

The Output Schema tab shows the complete JSON Schema:
{
  "type": "object",
  "properties": {
    "summary": {
      "type": "string",
      "description": "Concise summary of the content"
    },
    "bullet_points": {
      "type": "array",
      "items": { "type": "string" },
      "description": "Key points as bullet list"
    },
    "word_count": {
      "type": "integer",
      "description": "Number of words in summary"
    }
  }
}

Best Practices

  • Keep structures flat: flat structures are easier for LLMs to produce consistently than deeply nested ones.
  • Be explicit in your prompt: tell the LLM exactly what JSON structure you expect, including field names.
  • Use descriptive names: field names like is_spam are clearer than spam or s.
  • Describe every field: descriptions appear in your OpenAPI docs.
  • Test edge cases: try unusual inputs to see if the LLM still produces valid JSON.

Without Output Schema

If you don’t define an output schema:
  • The raw LLM response is returned
  • No JSON parsing or validation occurs
  • The response may be plain text or unstructured
This is fine for simple use cases where you just need text back.

Next Steps