Core Concepts

Before diving deeper, let’s understand the key concepts that make Endprompt work.

Endprompt Architecture

Endpoints

An Endpoint is a stable API URL that your application calls. Think of it as the “contract” between your code and the LLM.
https://yourcompany.api.endprompt.ai/api/v1/summarize
                                    └───────────────┘
                                     Your endpoint path
Endpoints define:
  • Path — The URL path (e.g., /api/v1/summarize)
  • Input Schema — What data the endpoint accepts
  • Output Schema — What data the endpoint returns
  • Default Prompt — Which prompt to use when called
Your integration code always calls the same endpoint URL. You can change prompts, models, and configurations without touching your application code.
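
For example, calling the summarize endpoint above from TypeScript might look like the sketch below. The URL, the x-api-key header, and the field names all come from the examples on this page; the ENDPROMPT_API_KEY environment variable is an assumed name for wherever you store your key.

const response = await fetch(
  "https://yourcompany.api.endprompt.ai/api/v1/summarize",
  {
    method: "POST",
    headers: {
      "x-api-key": process.env.ENDPROMPT_API_KEY!, // your tenant API key
      "Content-Type": "application/json",
    },
    // Fields match the input schema shown below: text is required,
    // max_length is optional.
    body: JSON.stringify({ text: "Long article text...", max_length: 100 }),
  }
);

const { summary, word_count } = await response.json();

Because the URL is the contract, swapping in a new prompt or model later requires no change to this code.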

Prompts

A Prompt is the actual instruction sent to the LLM. Prompts are attached to endpoints and can be versioned independently.
You are a helpful assistant. Please summarize the following text:

{{ inputs.text }}

Return your response as JSON with a "summary" field.
Key features of prompts:
  • Liquid Templating — Use {{ inputs.fieldName }} to inject request data (see the sketch after this list)
  • Model Selection — Choose which LLM to use (GPT-4, Claude, etc.)
  • Settings — Configure temperature, max tokens, and other parameters
  • Versioning — Every save creates a new version you can roll back to
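
To make the templating concrete, here is a deliberately simplified TypeScript sketch of {{ inputs.fieldName }} substitution. Endprompt’s real renderer supports full Liquid (filters, conditionals, loops); this version only handles plain field injection.

function renderTemplate(template: string, inputs: Record<string, unknown>): string {
  // Replace each {{ inputs.fieldName }} tag with the matching input value.
  return template.replace(/\{\{\s*inputs\.(\w+)\s*\}\}/g, (_match, field) =>
    String(inputs[field] ?? "")
  );
}

const rendered = renderTemplate(
  "Please summarize the following text:\n\n{{ inputs.text }}",
  { text: "Endprompt decouples prompts from application code." }
);
// rendered now contains the request text where the tag used to be.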

Prompt Status Lifecycle

Prompts follow a status workflow:
Status      Description
Draft       Work in progress. Can be tested but not used as default.
Live        Production-ready. Can be set as endpoint default.
Archived    Deprecated. Hidden from lists but preserved for history.

Input & Output Schemas

Schemas define the structure of data flowing in and out of your endpoint.
Input Schema Example:
{
  "text": {
    "type": "string",
    "required": true,
    "description": "The text to summarize"
  },
  "max_length": {
    "type": "integer",
    "required": false,
    "default": 100
  }
}
Output Schema Example:
{
  "summary": {
    "type": "string",
    "description": "The summarized text"
  },
  "word_count": {
    "type": "integer",
    "description": "Number of words in summary"
  }
}
Input validation happens automatically. If a request doesn’t match your schema, it’s rejected with a clear error message before reaching the LLM.
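
The validator itself is internal to Endprompt, but conceptually the check works something like the sketch below, run against the input schema above. The FieldSpec type and the error wording are illustrative, not Endprompt’s real error format.

type FieldSpec = {
  type: "string" | "integer";
  required?: boolean;
  default?: unknown;
  description?: string;
};

function validateInputs(
  schema: Record<string, FieldSpec>,
  body: Record<string, unknown>
): string[] {
  const errors: string[] = [];
  for (const [name, spec] of Object.entries(schema)) {
    const value = body[name] ?? spec.default; // fall back to schema default
    if (value === undefined) {
      if (spec.required) errors.push(`missing required field "${name}"`);
      continue;
    }
    const typeOk =
      spec.type === "string" ? typeof value === "string" : Number.isInteger(value);
    if (!typeOk) errors.push(`field "${name}" must be of type ${spec.type}`);
  }
  return errors;
}

// validateInputs(inputSchema, { max_length: 50 })
//   -> ['missing required field "text"']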

Request Flow

Here’s what happens when you call an Endprompt endpoint (a sketch of the full pipeline follows the list):
1. Request Received: Your application sends a POST request with JSON data to your endpoint URL.
2. Authentication: Endprompt validates your API key from the x-api-key header.
3. Input Validation: The request body is validated against the endpoint’s input schema.
4. Prompt Resolution: The default prompt (or a specified prompt) is loaded.
5. Template Rendering: The Liquid template is rendered with your input data, producing the final prompt text.
6. LLM Execution: The rendered prompt is sent to the configured LLM (OpenAI, Anthropic, etc.).
7. Response Parsing: The LLM response is parsed and validated against the output schema.
8. Logging: The entire execution is logged for observability.
9. Response Returned: The validated JSON response is returned to your application.
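
Putting the steps together, here is a hypothetical TypeScript sketch of the pipeline. It reuses renderTemplate and validateInputs from the earlier sketches; callModel stands in for the provider SDK call, and none of these names are Endprompt internals.

type Json = Record<string, unknown>;

// Stand-in for the actual provider SDK call (OpenAI, Anthropic, etc.).
declare function callModel(model: string, promptText: string): Promise<string>;

async function executeEndpoint(
  apiKey: string,
  endpoint: {
    inputSchema: Record<string, FieldSpec>;
    defaultPrompt: { template: string; model: string };
  },
  body: Json
): Promise<Json> {
  if (!apiKey) throw new Error("401: missing x-api-key");             // step 2
  const errors = validateInputs(endpoint.inputSchema, body);          // step 3
  if (errors.length > 0) throw new Error(`400: ${errors.join("; ")}`);
  const prompt = endpoint.defaultPrompt;                              // step 4
  const promptText = renderTemplate(prompt.template, body);           // step 5
  const raw = await callModel(prompt.model, promptText);              // step 6
  const output = JSON.parse(raw) as Json;                             // step 7 (output-schema check omitted)
  console.log("execution log:", { body, promptText, output });        // step 8
  return output;                                                      // step 9
}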

Multi-Tenancy

Each Endprompt account is a tenant with its own:
  • Subdomain (e.g., yourcompany.api.endprompt.ai)
  • Endpoints, prompts, and configurations
  • API keys
  • Team members
  • Usage quotas
All your data is completely isolated from other tenants. Your prompts, logs, and API keys are never visible to others.

What Makes This Different?

Stable Contracts

Your application code calls the same URL forever. Iterate on prompts without deployments.

Type Safety

Schema validation catches errors before they reach the LLM—and before they reach your users.

Version Control

Every prompt change is versioned. Test new versions, roll back bad ones, compare performance.

Observability

See every request, response, latency, and cost. Replay requests to debug issues.

Next Steps