Multi-LLM Orchestrator
Use the same endpoint with different provider/model parameters:
{
  "statusCode": 200,
  "data": {
    "trace_id": "...",
    "provider": "openai",
    "model": "gpt-4o",
    "content": "The AI response text",
    "latency_ms": 1234.56,
    "tokens": { "input": 10, "output": 25 }
  }
}
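As a sketch of how a client might switch providers against the single endpoint and read the documented response shape: the request field names (a "prompt" field, and an endpoint path) are assumptions not confirmed by this document; only "provider", "model", and the response fields shown above come from the source.

```python
import json

# Hypothetical request payload: swap provider/model without changing
# the endpoint. Field names other than "provider"/"model" are assumed.
payload = {
    "provider": "anthropic",
    "model": "claude-3-5-sonnet",
    "prompt": "Hello",  # assumed field name
}

# Parsing the response shape documented above.
raw = """
{
  "statusCode": 200,
  "data": {
    "trace_id": "...",
    "provider": "openai",
    "model": "gpt-4o",
    "content": "The AI response text",
    "latency_ms": 1234.56,
    "tokens": { "input": 10, "output": 25 }
  }
}
"""
resp = json.loads(raw)
assert resp["statusCode"] == 200
data = resp["data"]
total_tokens = data["tokens"]["input"] + data["tokens"]["output"]
print(data["provider"], data["model"], total_tokens)  # openai gpt-4o 35
```

Because every provider returns the same envelope, client code only needs to branch on the payload it sends, never on the response it parses.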