# API Reference

## Evaluate Prompt

Operations about evaluate_prompts.

### Post Evaluate Prompt Predict

`POST /evaluate_prompt/predict`

Evaluate an AI Prompt.
#### Request Body (`application/json`)

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| `messages[role]` | array&lt;string&gt; | Yes | Roles of the messages. |
| `messages[content]` | array&lt;string&gt; | Yes | Contents of the messages. |
| `max_tokens` | integer (int32) | No | Maximum number of output tokens; at most 400. Default: `300`. |
| `temperature` | number (float) | No | How creative the response should be, between 0 and 2; lower values are less creative. |
| `system` | string | No | For Anthropic, the system prompt to use. |
| `model_kind` | string | No | Which model provider to use: `"openai"` or `"anthropic"`. Default: `"openai"`. |
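As a sketch of how a client might assemble a valid request body, the helper below builds the JSON payload and enforces the documented constraints (aligned `messages[role]`/`messages[content]` arrays, `max_tokens` ≤ 400, `temperature` in [0, 2]). The function name `build_predict_payload` and the base URL are hypothetical, not part of this API; only the field names and limits come from the reference above.

```python
import json

# Hypothetical base URL -- substitute your deployment's host.
BASE_URL = "https://api.example.com"

def build_predict_payload(roles, contents, max_tokens=300,
                          temperature=1.0, system=None, model_kind="openai"):
    """Build the JSON body for POST /evaluate_prompt/predict (helper is illustrative)."""
    if len(roles) != len(contents):
        raise ValueError("messages[role] and messages[content] must have the same length")
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be between 0 and 2")
    if max_tokens > 400:
        raise ValueError("max_tokens may be at most 400")
    if model_kind not in ("openai", "anthropic"):
        raise ValueError('model_kind must be "openai" or "anthropic"')
    payload = {
        "messages[role]": roles,
        "messages[content]": contents,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "model_kind": model_kind,
    }
    if system is not None:
        payload["system"] = system  # honored by the Anthropic provider
    return payload

body = build_predict_payload(
    ["user"], ["Summarize this prompt."],
    temperature=0.2, model_kind="anthropic",
    system="You are a strict grader.",
)
print(json.dumps(body, indent=2))
# An actual call could then be made with any HTTP client, e.g.:
# requests.post(f"{BASE_URL}/evaluate_prompt/predict", json=body)
```

The helper raises early on values the server would reject, which keeps validation errors on the client side.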