Endprompt exposes its Admin API as an MCP (Model Context Protocol) server, allowing AI coding tools to manage your endpoints, prompts, and executions directly through natural conversation.

What is MCP?

MCP is an open protocol that lets AI assistants use external tools. When you connect Endprompt’s MCP server to your coding tool, the AI can:
  • Create and configure endpoints
  • Write and test Liquid prompt templates
  • Promote prompts to Live and set defaults
  • Execute endpoints and analyze results
  • Monitor stats and browse execution logs
All without leaving your editor.

Supported Tools

The MCP server works with any client that supports the Streamable HTTP transport:
  • VS Code with GitHub Copilot or compatible extensions
  • Claude Desktop
  • Cursor
  • Windsurf
  • Any MCP-compatible client

Setup

Prerequisites

You need an Endprompt admin API key (prefixed epa_) to authenticate the MCP server.

VS Code

Create a .vscode/mcp.json file in your project:
{
  "servers": {
    "endprompt": {
      "type": "http",
      "url": "https://api.endprompt.app/mcp",
      "headers": {
        "Authorization": "Bearer ${input:endpromptApiKey}"
      }
    }
  },
  "inputs": [
    {
      "id": "endpromptApiKey",
      "type": "promptString",
      "description": "Endprompt admin API key (epa_...)",
      "password": true
    }
  ]
}
VS Code will prompt you for the API key when the MCP server is first used. The key is stored securely per session.
The ${input:...} pattern shown above keeps the key out of the file entirely. If you hardcode the key instead, add .vscode/mcp.json to your .gitignore so it is never committed.

Claude Desktop

Add to your claude_desktop_config.json:
// ~/Library/Application Support/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "endprompt": {
      "url": "https://api.endprompt.app/mcp",
      "headers": {
        "Authorization": "Bearer epa_your_admin_key_here"
      }
    }
  }
}
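Some Claude Desktop builds only launch stdio servers and may ignore url entries. As a hedged alternative (not part of Endprompt's official setup), a stdio bridge such as the mcp-remote npm package can proxy to the Streamable HTTP endpoint:

```json
{
  "mcpServers": {
    "endprompt": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://api.endprompt.app/mcp",
        "--header",
        "Authorization: Bearer epa_your_admin_key_here"
      ]
    }
  }
}
```

If the direct url form works in your client, prefer it; the bridge adds a Node.js dependency.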

Cursor

Add an MCP server in Cursor’s settings with:
  • URL: https://api.endprompt.app/mcp
  • Header: Authorization: Bearer epa_your_admin_key_here

Available Tools

Once connected, your AI assistant has access to 25 tools organized into three categories.

Endpoint Management

  • list_endpoints: List all API endpoints for your tenant
  • get_endpoint: Get full endpoint details including field schemas
  • create_endpoint: Create a new API endpoint
  • update_endpoint: Update endpoint metadata (name, description, visibility)
  • delete_endpoint: Soft-delete an endpoint
  • add_input_field: Add a typed input field to an endpoint
  • add_output_field: Add a typed output field to an endpoint

Prompt Lifecycle

  • list_prompts: List prompts for an endpoint with status filtering
  • get_prompt: Get full prompt details including Liquid template
  • create_prompt: Create a new prompt in Draft status
  • update_prompt: Update template or config (auto-versions on change)
  • promote_prompt: Promote from Draft to Live
  • set_default_prompt: Set a Live prompt as the endpoint default
  • archive_prompt: Archive a prompt
  • unarchive_prompt: Restore an archived prompt to Draft
  • duplicate_prompt: Clone a prompt as a new Draft
  • list_prompt_versions: View version history of a prompt
  • validate_template: Check Liquid syntax and variable references
  • list_available_models: List all available LLM models

Execution & Analytics

  • execute_endpoint: Execute an endpoint with inputs, get LLM output
  • get_tenant_stats: Tenant-wide execution stats overview
  • get_endpoint_stats: Execution stats (requests, latency, tokens, costs)
  • get_prompt_stats: Compare prompt performance within an endpoint
  • list_execution_logs: Browse execution history with filtering
  • get_execution_log: Full execution details (inputs, rendered prompt, response)
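Your MCP client builds these tool invocations for you, but it can help to see the wire format. Under MCP, every tool call is a JSON-RPC 2.0 tools/call request. The sketch below constructs one for execute_endpoint; the endpoint path and the "text" input field are hypothetical and only illustrate the shape:

```python
import json

def make_tools_call(request_id: int, tool: str, arguments: dict) -> dict:
    """Build a JSON-RPC 2.0 request body for an MCP tools/call invocation."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# Hypothetical example: run a sentiment endpoint with a single "text" input.
# The real argument schema comes from the MCP server's tool definitions.
req = make_tools_call(1, "execute_endpoint", {
    "path": "sentiment",
    "inputs": {"text": "The onboarding flow was delightful."},
})
print(json.dumps(req, indent=2))
```

The same envelope carries every tool in the tables above; only params.name and params.arguments change.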

Example Conversations

“Create a sentiment analysis endpoint”

The AI assistant will:
  1. Call list_available_models to show you model options
  2. Call create_endpoint with your chosen name and path
  3. Call add_input_field to define the text input
  4. Call add_output_field for sentiment and confidence outputs
  5. Call create_prompt with a Liquid template
  6. Call validate_template to check for errors
  7. Call promote_prompt to make it Live
  8. Call set_default_prompt to activate it
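The eight steps above can be sketched as an ordered list of tool invocations. Every argument name and value here is hypothetical; the real schemas come from the MCP server's tool definitions:

```python
# Hypothetical arguments illustrating the shape of the flow, not real schemas.
workflow = [
    ("list_available_models", {}),
    ("create_endpoint", {"name": "Sentiment", "path": "sentiment"}),
    ("add_input_field", {"endpoint": "sentiment", "name": "text", "type": "string"}),
    ("add_output_field", {"endpoint": "sentiment", "name": "sentiment", "type": "string"}),
    ("create_prompt", {"endpoint": "sentiment", "template": "Classify: {{ text }}"}),
    ("validate_template", {"template": "Classify: {{ text }}"}),
    ("promote_prompt", {"prompt_id": "<draft-id>"}),
    ("set_default_prompt", {"prompt_id": "<live-id>"}),
]

for tool, args in workflow:
    print(f"{tool}({args})")
```

The assistant runs these as individual tools/call requests, using each result (for example the new prompt's ID) to fill in the next call's arguments.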

“How are my endpoints performing?”

The AI assistant will:
  1. Call get_tenant_stats for an overview
  2. Call list_endpoints to show your endpoints
  3. Call get_endpoint_stats for specific endpoints you ask about
  4. Call get_prompt_stats to compare prompt performance

“Show me recent errors”

The AI assistant will:
  1. Call list_execution_logs filtered to recent entries
  2. Call get_execution_log for any failures to show full details
  3. Analyze the rendered prompt and raw response to identify issues

Transport Details

  • Protocol: MCP (Model Context Protocol)
  • Transport: Streamable HTTP
  • URL: https://api.endprompt.app/mcp
  • Auth: Authorization: Bearer epa_...
  • Session: Managed via the Mcp-Session-Id header
The server also supports legacy SSE transport for older MCP clients.
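As a rough sketch of how a Streamable HTTP client opens a session: it POSTs a JSON-RPC initialize request, reads the Mcp-Session-Id the server assigns from the response headers, and echoes that header on subsequent requests. The header names match the table above; the protocol version string and client name below are illustrative, and no network call is made:

```python
import json

MCP_URL = "https://api.endprompt.app/mcp"

def initial_headers(api_key: str) -> dict:
    # First request: no session exists yet. Streamable HTTP clients send JSON
    # and accept either a JSON body or an SSE stream in the response.
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",
    }

def session_headers(api_key: str, session_id: str) -> dict:
    # Subsequent requests echo the Mcp-Session-Id the server assigned.
    headers = initial_headers(api_key)
    headers["Mcp-Session-Id"] = session_id
    return headers

# JSON-RPC initialize request; version and clientInfo are illustrative.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}
print(json.dumps(initialize))
```

MCP-aware clients like VS Code and Claude Desktop handle this handshake automatically; you only need it if you are writing your own client.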