This guide walks you through creating your first Endprompt endpoint—from signup to making your first API call.

Prerequisites

An Endprompt account; this guide picks up immediately after signup.
Step 1: Create an Endpoint

After logging in, you’ll land on your dashboard. Let’s create your first endpoint.
1. Navigate to Endpoints

Click Endpoints in the sidebar navigation.
2. Create New Endpoint

Click the Create Endpoint button in the top right.
3. Fill in the Details

Enter the following:
  • Name: Text Summarizer
  • Path: /api/v1/summarize
  • Category: Content (or create a new one)
  • Description: Summarizes long text into concise bullet points
4. Save

Click Create to save your endpoint.
You’ll be taken to the endpoint detail page with multiple tabs.

Step 2: Define Input Schema

Your endpoint needs to know what data to expect. Click the Input Schema tab.
1. Add a Text Field

Click Add Field and configure:
  • Name: text
  • Type: String
  • Required: Yes
  • Description: The text to summarize
2. Add a Length Field

Click Add Field again:
  • Name: max_bullets
  • Type: Integer
  • Required: No
  • Default: 5
  • Description: Maximum number of bullet points
3. Save Schema

Click Save to store your input schema.
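The schema you just defined is what the platform checks incoming requests against: text must be a string and is required, while max_bullets is an optional integer that falls back to 5. As a rough illustration of those rules (a hypothetical helper, not Endprompt's actual server-side code):

```python
# Hypothetical sketch of the validation implied by the input schema above.
# Endprompt enforces this server-side; this is not its actual implementation.

def validate_summarize_input(payload: dict) -> dict:
    """Check a request body against the Text Summarizer input schema."""
    if "text" not in payload or not isinstance(payload["text"], str):
        raise ValueError("'text' is required and must be a string")
    max_bullets = payload.get("max_bullets", 5)  # optional, defaults to 5
    if not isinstance(max_bullets, int):
        raise ValueError("'max_bullets' must be an integer")
    return {"text": payload["text"], "max_bullets": max_bullets}
```

A request that omits max_bullets is still valid and comes back normalized with the default of 5 filled in.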

Step 3: Create a Prompt

Now let’s write the prompt that powers your endpoint. Click the Prompts tab.
1. Create New Prompt

Click Create Prompt.
2. Configure the Prompt

  • Name: Bullet Point Summarizer v1
  • Model: Select gpt-4o (or your preferred model)
  • Temperature: 0.3 (lower = more consistent)
3. Write the Template

Enter this Liquid template:
You are a professional content summarizer. Your task is to extract the key points from the provided text.

## Instructions
- Read the text carefully
- Identify the {{ inputs.max_bullets | default: 5 }} most important points
- Write each point as a concise bullet
- Use clear, simple language

## Text to Summarize
{{ inputs.text }}

## Output Format
Return a JSON object with a "bullets" array containing the summary points:
{
  "bullets": ["point 1", "point 2", ...]
}
4. Save the Prompt

Click Save to create your prompt.
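Note the `{{ inputs.max_bullets | default: 5 }}` expression in the template: Liquid's default filter substitutes the fallback when the variable is nil, false, or an empty string, so a request that omits max_bullets still renders a concrete number. A toy Python sketch of that one substitution (illustrative only, not a real Liquid engine):

```python
# Toy illustration of Liquid's `default` filter for a single variable.
# Real Liquid engines (and Endprompt's renderer) handle far more than this.

def render_max_bullets(inputs: dict, fallback: int = 5) -> str:
    """Mimic {{ inputs.max_bullets | default: 5 }} for one lookup."""
    value = inputs.get("max_bullets")
    # Liquid's `default` applies when the value is nil, false, or empty
    # (note: 0 is kept, since it is none of those).
    if value is None or value is False or value == "":
        value = fallback
    return str(value)
```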

Step 4: Test Your Prompt

Before going live, let’s make sure it works. You should still be on the Prompts tab.
1. Open Test Runner

Click the Test button next to your prompt.
2. Enter Test Data

In the auto-generated form, enter:
  • text: Endprompt is a platform for building LLM-powered APIs. It provides versioning, schema validation, and observability. Teams use it to ship faster and iterate safely. The platform supports multiple LLM providers including OpenAI and Anthropic.
  • max_bullets: 3
3. Run Test

Click Run Test and watch the response appear.
You should see a JSON response with your bullet points!
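Because the output format asks for a "bullets" array, the response body can be consumed directly as JSON. For example, assuming a response shaped like the template's output format (the bullet text below is illustrative):

```python
import json

# Example response body in the documented shape (illustrative values).
raw = '{"bullets": ["Endprompt builds LLM-powered APIs", "It adds versioning and validation", "It supports multiple providers"]}'

summary = json.loads(raw)
for bullet in summary["bullets"]:
    print(f"- {bullet}")
```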

Step 5: Go Live

Your prompt is working. Let’s make it the default for your endpoint.
1. Promote to Live

In the prompt list, click the menu and select Promote to Live.
2. Set as Default

Click Set as Default to make this the endpoint’s default prompt.

Step 6: Get Your API Key

You need an API key to call your endpoint.
1. Navigate to API Keys

Click API Keys in the sidebar.
2. Create API Key

Click Create API Key, give it a name like Development, and click Create.
3. Copy the Key

Copy the displayed key immediately—you won’t see it again!
Store your API key securely. Never commit it to version control or expose it in client-side code.
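A common pattern is to keep the key in an environment variable and read it at runtime, so it never appears in source code. For example (ENDPROMPT_API_KEY is an example variable name, not one the platform mandates):

```python
import os

# Read the key from the environment instead of hard-coding it.
# ENDPROMPT_API_KEY is an example name, not required by the platform.
api_key = os.environ.get("ENDPROMPT_API_KEY", "")

if not api_key:
    print("Set ENDPROMPT_API_KEY before calling the API")
```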

Step 7: Call Your API

Now for the exciting part—calling your endpoint!
curl -X POST https://yourcompany.api.endprompt.ai/api/v1/summarize \
  -H "Content-Type: application/json" \
  -H "x-api-key: your-api-key-here" \
  -d '{
    "text": "Your long text to summarize goes here...",
    "max_bullets": 3
  }'
Replace yourcompany with your actual tenant subdomain, which you can find in your dashboard.
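The same call can be made from Python using only the standard library. This sketch builds the request with the placeholder URL and key from the curl example above; the final send is commented out because it needs a real key and network access:

```python
import json
import urllib.request

# Placeholders: swap in your tenant subdomain and real API key.
url = "https://yourcompany.api.endprompt.ai/api/v1/summarize"
body = json.dumps({
    "text": "Your long text to summarize goes here...",
    "max_bullets": 3,
}).encode("utf-8")

req = urllib.request.Request(
    url,
    data=body,
    headers={
        "Content-Type": "application/json",
        "x-api-key": "your-api-key-here",
    },
    method="POST",
)

# To send the request (requires a valid key and network access):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["bullets"])
```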

What’s Next?

Congratulations! You’ve built your first LLM-powered API. Here’s where to go next:

Learn Liquid Templating

Master the template syntax for dynamic prompts

Define Output Schemas

Ensure your API returns consistent JSON structures

Prompt Versioning

Learn to version, test, and rollback prompts safely

View Execution Logs

Monitor requests, debug issues, and track costs