Endprompt includes a powerful test runner that lets you validate prompts before deploying to production. Test with single requests, bulk CSV uploads, or saved test datasets.

The Test Runner

Every prompt has a Test button that opens the test runner:
  • Auto-generated form based on your input schema (see the example below)
  • Real-time execution against the actual LLM
  • Response preview with timing and token usage
  • History of past test runs
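
For example, if your input schema defines fields like the following (the schema syntax here is illustrative, not Endprompt's exact format), the runner renders a text area for text, a number input for max_length, and a text field for tone:
{
  "text": { "type": "string", "required": true },
  "max_length": { "type": "number", "required": false },
  "tone": { "type": "string", "required": false }
}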

Single Request Testing

1. Open the Test Runner: Click Test next to any prompt in your endpoint.

2. Fill in the Form: The form is generated from your input schema. Required fields are marked.

3. Select Options:
  • Bypass Cache: Force a fresh LLM call
  • Model Override: Test with a different model

4. Run Test: Click Run Test or press Ctrl/Cmd + Enter.

5. Review Response: See the JSON response, latency, and token usage.

Test Results Panel

After running a test, you’ll see:
  • Response: The JSON output from the LLM
  • Latency: Time taken for the request
  • Tokens: Input and output token counts
  • Cost: Estimated cost of the request
  • Raw Output: Unprocessed LLM response
Toggle between Formatted and Raw views to debug JSON parsing issues.
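
For example, a successful run might report something like this (field names and values are representative, not an exact payload):
{
  "response": { "summary": "A two-sentence summary of the document." },
  "latency_ms": 1240,
  "tokens": { "input": 312, "output": 58 },
  "cost_usd": 0.0041
}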

Bulk CSV Testing

Test multiple inputs at once by uploading a CSV file:
1. Prepare Your CSV: Create a CSV with columns matching your input field names:

text,max_length,tone
"First document to summarize",100,formal
"Second document here",150,casual
"Third document",100,formal

2. Open Bulk Test: In the test runner, click Bulk Test or Import CSV.

3. Upload CSV: Select your CSV file.

4. Review Mapping: Confirm columns map to the correct input fields.

5. Run All: Click Run All Tests to execute each row.

6. Review Results: See success/failure status and responses for each row.

CSV Best Practices

  • Wrap text containing commas in double quotes: "Hello, world" (the sketch below shows one way to automate this)
  • Column headers must match your input schema field names.
  • Test with 5-10 rows first before running hundreds.
  • Include rows with empty optional fields, very long text, and special characters to cover edge cases.
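
If you generate test CSVs from code rather than by hand, a standard CSV writer handles quoting and headers for you. A minimal Python sketch, reusing the column names from the example above:

import csv

# Rows keyed by the input schema field names used earlier.
rows = [
    {"text": "First document, with a comma", "max_length": 100, "tone": "formal"},
    {"text": "Second document here", "max_length": 150, "tone": "casual"},
]

with open("bulk_test.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["text", "max_length", "tone"])
    writer.writeheader()    # headers must match your input schema field names
    writer.writerows(rows)  # fields containing commas are quoted automatically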

Saved Test Datasets

Save frequently-used test data for quick access:
1. Run a Test: Execute a test with your desired inputs.

2. Save Dataset: Click Save as Dataset and give it a name.

3. Reuse Later: Select the saved dataset from the dropdown to load those inputs.
Saved datasets are useful for:
  • Regression testing after prompt changes
  • Comparing outputs across different prompts
  • Onboarding team members with realistic examples

Comparing Prompts

Test the same input against multiple prompts:
  1. Open the endpoint’s Testing tab
  2. Select multiple prompts to compare
  3. Enter your test input
  4. Run the comparison
  5. View results side-by-side
This helps you:
  • Compare model performance (GPT-4 vs Claude)
  • Evaluate prompt variations
  • Choose the best approach before going live

Test History

All test runs are saved in your history:
  • View past test inputs and outputs
  • Re-run previous tests
  • Track how responses change over time
Test history is separate from production logs. Tests don’t count against your API usage limits.

Testing Best Practices

  • Test Edge Cases: Empty strings, very long inputs, special characters, missing optional fields (see the CSV sketch below).
  • Test Realistic Data: Use real-world examples, not just “test” and “hello world”.
  • Test Before Promoting: Always test Draft prompts before promoting to Live.
  • Save Good Datasets: Build a library of test cases you can reuse.
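
For instance, reusing the columns from the bulk testing example, an edge-case CSV might include rows like these (an empty required field, embedded quotes and commas, and a blank optional field):

text,max_length,tone
"",100,formal
"Text with ""embedded quotes"", commas, and non-ASCII: café",100,
"A deliberately plain control row",150,casual

If text is required, the first row should surface a validation error rather than a silent failure, which is exactly the behavior you want to confirm.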

Debugging Failed Tests

If your test fails or returns unexpected results:

Check Input Validation

{
  "error": "validation_error",
  "message": "Field 'text' is required"
}
Solution: Ensure all required fields are provided.

Check JSON Parsing

{
  "error": "parse_error",
  "message": "Invalid JSON in response"
}
Solution: Update your prompt to explicitly request valid JSON output.
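
For example, appending an instruction along these lines to your prompt (the wording is illustrative) usually reduces parse errors:

Return only a valid JSON object that matches the expected output format.
Do not wrap the JSON in markdown code fences or add any text before or after it.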

Check Token Limits

{
  "error": "token_limit_exceeded",
  "message": "Response truncated due to max_tokens"
}
Solution: Increase max_tokens in prompt settings or ask for shorter responses.

Check Rate Limits

{
  "error": "rate_limit_exceeded",
  "message": "Too many requests"
}
Solution: Wait and retry, or reduce test frequency.
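
If you script retries around large bulk runs, exponential backoff is the standard remedy. A minimal Python sketch; run_test and RateLimitError are placeholders, not part of an Endprompt SDK:

import time

class RateLimitError(Exception):
    """Hypothetical error raised when a request returns HTTP 429."""

def run_with_backoff(run_test, max_attempts=5):
    # run_test is a placeholder callable that executes one test
    # and raises RateLimitError when rate limited.
    for attempt in range(max_attempts):
        try:
            return run_test()
        except RateLimitError:
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, 8s, ...
    raise RuntimeError(f"Still rate limited after {max_attempts} attempts")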

Keyboard Shortcuts

  • Ctrl/Cmd + Enter: Run test
  • Ctrl/Cmd + S: Save test input
  • Escape: Close test runner
