Endprompt exposes its Admin API as an MCP (Model Context Protocol) server, allowing AI coding tools to manage your endpoints, prompts, and executions directly through natural conversation.
## What is MCP?
MCP is an open protocol that lets AI assistants use external tools. When you connect Endprompt’s MCP server to your coding tool, the AI can:
- Create and configure endpoints
- Write and test Liquid prompt templates
- Promote prompts to Live and set defaults
- Execute endpoints and analyze results
- Monitor stats and browse execution logs
All without you leaving your editor.
The MCP server works with any client that supports the Streamable HTTP transport:
- VS Code with GitHub Copilot or compatible extensions
- Claude Desktop
- Cursor
- Windsurf
- Any MCP-compatible client
## Setup

### Prerequisites

- An Endprompt admin API key (`epa_...`)
- An MCP client that supports the Streamable HTTP transport

### VS Code

Create a `.vscode/mcp.json` file in your project:
```json
{
  "servers": {
    "endprompt": {
      "type": "http",
      "url": "https://api.endprompt.app/mcp",
      "headers": {
        "Authorization": "Bearer ${input:endpromptApiKey}"
      }
    }
  },
  "inputs": [
    {
      "id": "endpromptApiKey",
      "type": "promptString",
      "description": "Endprompt admin API key (epa_...)",
      "password": true
    }
  ]
}
```
VS Code will prompt you for the API key when the MCP server is first used. The key is stored securely per session.

Add `.vscode/mcp.json` to your `.gitignore` to avoid committing it, or use the `${input:...}` pattern shown above to avoid hardcoding the key.
### Claude Desktop

Add to your `claude_desktop_config.json` (on macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "endprompt": {
      "url": "https://api.endprompt.app/mcp",
      "headers": {
        "Authorization": "Bearer epa_your_admin_key_here"
      }
    }
  }
}
```
### Cursor

Add an MCP server in Cursor's settings with:

- URL: `https://api.endprompt.app/mcp`
- Header: `Authorization: Bearer epa_your_admin_key_here`
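Cursor can also read MCP servers from a project-level `.cursor/mcp.json` file. Assuming Cursor's `mcpServers` config shape, an entry mirroring the settings above would look like this sketch:

```json
{
  "mcpServers": {
    "endprompt": {
      "url": "https://api.endprompt.app/mcp",
      "headers": {
        "Authorization": "Bearer epa_your_admin_key_here"
      }
    }
  }
}
```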
## Available Tools

Once connected, your AI assistant has access to 25 tools organized into three categories.
### Endpoint Management

| Tool | Description |
|---|---|
| `list_endpoints` | List all API endpoints for your tenant |
| `get_endpoint` | Get full endpoint details, including field schemas |
| `create_endpoint` | Create a new API endpoint |
| `update_endpoint` | Update endpoint metadata (name, description, visibility) |
| `delete_endpoint` | Soft-delete an endpoint |
| `add_input_field` | Add a typed input field to an endpoint |
| `add_output_field` | Add a typed output field to an endpoint |
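For instance, an `add_input_field` invocation over raw MCP is a JSON-RPC `tools/call` message. The argument names below are illustrative assumptions, not the documented schema — use `get_endpoint` to inspect the real field shapes:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "add_input_field",
    "arguments": {
      "endpoint": "sentiment-analysis",
      "name": "text",
      "type": "string",
      "required": true
    }
  }
}
```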
### Prompt Lifecycle

| Tool | Description |
|---|---|
| `list_prompts` | List prompts for an endpoint with status filtering |
| `get_prompt` | Get full prompt details, including the Liquid template |
| `create_prompt` | Create a new prompt in Draft status |
| `update_prompt` | Update template or config (auto-versions on change) |
| `promote_prompt` | Promote from Draft to Live |
| `set_default_prompt` | Set a Live prompt as the endpoint default |
| `archive_prompt` | Archive a prompt |
| `unarchive_prompt` | Restore an archived prompt to Draft |
| `duplicate_prompt` | Clone a prompt as a new Draft |
| `list_prompt_versions` | View the version history of a prompt |
| `validate_template` | Check Liquid syntax and variable references |
| `list_available_models` | List all available LLM models |
### Execution & Analytics

| Tool | Description |
|---|---|
| `execute_endpoint` | Execute an endpoint with inputs and get the LLM output |
| `get_endpoint_stats` | Execution stats for an endpoint (requests, latency, tokens, costs) |
| `get_prompt_stats` | Compare prompt performance within an endpoint |
| `get_tenant_stats` | Tenant-wide execution stats overview |
| `list_execution_logs` | Browse execution history with filtering |
| `get_execution_log` | Full execution details (inputs, rendered prompt, response) |
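As a concrete example, an `execute_endpoint` call over raw MCP is a JSON-RPC `tools/call` request. The endpoint path and input field names below are illustrative assumptions:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "execute_endpoint",
    "arguments": {
      "endpoint": "sentiment-analysis",
      "inputs": {
        "text": "The checkout flow was fast and painless."
      }
    }
  }
}
```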
## Example Conversations

### “Create a sentiment analysis endpoint”
The AI assistant will:
- Call `list_available_models` to show you model options
- Call `create_endpoint` with your chosen name and path
- Call `add_input_field` to define the text input
- Call `add_output_field` for the sentiment and confidence outputs
- Call `create_prompt` with a Liquid template
- Call `validate_template` to check for errors
- Call `promote_prompt` to make it Live
- Call `set_default_prompt` to activate it
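The `create_prompt` step in that flow might carry a Liquid template like the following `tools/call` sketch — the argument names and template text are illustrative, not the documented schema:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "create_prompt",
    "arguments": {
      "endpoint": "sentiment-analysis",
      "template": "Classify the sentiment of this text as positive, negative, or neutral, and include a confidence score:\n\n{{ text }}"
    }
  }
}
```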
### “How are my endpoints performing?”

The AI assistant will:
- Call `get_tenant_stats` for an overview
- Call `list_endpoints` to show your endpoints
- Call `get_endpoint_stats` for specific endpoints you ask about
- Call `get_prompt_stats` to compare prompt performance
### “Show me recent errors”
The AI assistant will:
- Call `list_execution_logs` filtered to recent entries
- Call `get_execution_log` for any failures to show full details
- Analyze the rendered prompt and raw response to identify issues
## Transport Details
| Setting | Value |
|---|---|
| Protocol | MCP (Model Context Protocol) |
| Transport | Streamable HTTP |
| URL | `https://api.endprompt.app/mcp` |
| Auth | `Authorization: Bearer epa_...` |
| Session | Managed via the `Mcp-Session-Id` header |
The server also supports legacy SSE transport for older MCP clients.
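For clients speaking the protocol directly, a Streamable HTTP session begins with a JSON-RPC `initialize` request POSTed to the URL above; the server's response carries the `Mcp-Session-Id` header to echo on subsequent requests. A sketch of the request body (the `protocolVersion` and `clientInfo` values here are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": {
      "name": "my-client",
      "version": "1.0.0"
    }
  }
}
```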