The Overshoot REST API turns a live video feed into a conversation. Publish frames over WebRTC, then call an OpenAI-compatible chat completions endpoint to ask any vision-language model about your stream — over the last few seconds, the latest frame, or anywhere in the stream’s history.

## Documentation Index
Fetch the complete documentation index at: https://docs.overshoot.ai/llms.txt
Use this file to discover all available pages before exploring further.
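Indexes in the llms.txt convention are plain markdown link lists, so discovering pages is a one-regex job. A minimal sketch, using illustrative sample content rather than the real index:

```python
import re

def parse_llms_index(text: str) -> list[tuple[str, str]]:
    """Extract (title, url) pairs from llms.txt-style markdown link lines."""
    # Lines look like: - [Page title](https://docs.example.com/page): description
    return re.findall(r"- \[([^\]]+)\]\((https?://[^)\s]+)\)", text)

# Illustrative content only -- fetch https://docs.overshoot.ai/llms.txt for the real index.
sample = (
    "- [Create stream](https://docs.overshoot.ai/api/streams): open a live stream\n"
    "- [Chat completions](https://docs.overshoot.ai/api/chat): ask about a stream\n"
)
pages = parse_llms_index(sample)
```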
## Endpoints
- List models — discover the vision-language models you can target in chat completions.
- Create stream — open a live stream and get a LiveKit room URL + token to start publishing frames.
- Get stream — inspect the state of a stream: frame counts, recent FPS, lease expiry.
- Renew stream lease — keep a stream alive past the default 5-minute idle window.
- Delete stream — end a stream and release its resources.
- Chat completions — ask any vision-language model about a stream’s frames.
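The endpoints above imply a simple calling sequence: create a stream, publish frames, then ask about them. A sketch of the request shapes follows; the base URL, paths, and the stream-reference content part are placeholders and assumptions, not the documented values — consult the Base URL section and the Chat Completion guide for the real grammar:

```python
# Hypothetical base URL and paths -- placeholders, not the documented values.
BASE_URL = "https://api.example.com/v1"

def create_stream(api_key: str) -> dict:
    """Build the request that opens a stream (response carries a LiveKit room URL + token)."""
    return {
        "method": "POST",
        "url": f"{BASE_URL}/streams",
        "headers": {"Authorization": f"Bearer {api_key}"},
    }

def chat_completion(api_key: str, model: str, stream_ref: str, question: str) -> dict:
    """Build an OpenAI-compatible chat completion request about a stream's frames."""
    return {
        "method": "POST",
        "url": f"{BASE_URL}/chat/completions",
        "headers": {"Authorization": f"Bearer {api_key}"},
        "json": {
            "model": model,
            "messages": [{
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    # stream_ref would follow the URL grammar described in the
                    # Chat Completion guide; this shape is illustrative only.
                    {"type": "image_url", "image_url": {"url": stream_ref}},
                ],
            }],
        },
    }
```

Renewing the lease (see Renew stream lease) keeps the stream alive between requests; deleting it releases resources once you are done.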
## Base URL

## Authentication
Bearer tokens. Include your API key in an `Authorization: Bearer <key>` header on every authenticated request. Keys are prefixed `ovs-` and managed in the Overshoot dashboard. List models is the only unauthenticated endpoint.
Calls using your API key incur cost. Treat keys as secrets — never commit them or expose them in client-side code.
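One way to honor that warning is to load the key from the environment rather than source code. A minimal sketch; `OVERSHOOT_API_KEY` is an assumed variable name, not one mandated by the docs:

```python
import os

def auth_headers() -> dict:
    # Read the key from the environment so it never lands in source control
    # or a client-side bundle. Real keys are prefixed "ovs-".
    api_key = os.environ.get("OVERSHOOT_API_KEY", "ovs-placeholder")
    return {"Authorization": f"Bearer {api_key}"}
```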
## Errors
Non-2xx responses use a consistent error shape:

| Code | Meaning |
|---|---|
| 401, 403 | Missing or invalid API key. |
| 402 | Insufficient credits. |
| 404 | Stream not found, or not owned by your key. |
| 422 | Request validation failed. `details` lists the offending fields. |
| 503 | Service is draining or starting up — retry. |
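The table implies a simple client-side retry policy: only 503 is transient. A sketch under that reading:

```python
def should_retry(status: int) -> bool:
    # Only 503 (service draining or starting up) is safe to retry blindly;
    # auth (401/403), credit (402), lookup (404), and validation (422)
    # failures will not succeed without changing the request or account state.
    return status == 503

def explain(status: int) -> str:
    """Map a status code to the meaning given in the table above."""
    return {
        401: "Missing or invalid API key.",
        403: "Missing or invalid API key.",
        402: "Insufficient credits.",
        404: "Stream not found, or not owned by your key.",
        422: "Request validation failed; check the details field.",
        503: "Service is draining or starting up; retry.",
    }.get(status, "Unexpected error.")
```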
## Related
- The Stream — concept guide for the stream lifecycle.
- Models — overview of available vision-language models.
- Chat Completion — the URL grammar for referencing frames and segments.
- Best practices — production tips for low latency and cost.