Documentation Index

Fetch the complete documentation index at: https://docs.overshoot.ai/llms.txt

Use this file to discover all available pages before exploring further.

The Overshoot REST API turns a live video feed into a conversation. Publish frames over WebRTC, then call an OpenAI-compatible chat completions endpoint to ask any vision-language model about your stream — over the last few seconds, the latest frame, or anywhere in the stream’s history.
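Since the endpoint is OpenAI-compatible, the request body follows the familiar chat completions shape. A minimal sketch in Python, where the model id and the stream-reference URL are hypothetical placeholders (the real grammar for referencing frames and segments is covered in the Chat Completion guide):

```python
import json

# Hedged sketch: "example-vlm" and the stream-reference string are
# placeholders, not the API's actual identifiers or URL grammar.
def build_chat_payload(stream_ref: str, question: str,
                       model: str = "example-vlm") -> dict:
    """Assemble an OpenAI-compatible chat completions body that asks a
    vision-language model about frames from a live stream."""
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                # An image_url content part carries the stream reference.
                {"type": "image_url", "image_url": {"url": stream_ref}},
            ],
        }],
    }

payload = build_chat_payload("stream-ref-placeholder",
                             "What changed in the last few seconds?")
print(json.dumps(payload, indent=2))
```

Because the body is standard chat completions JSON, existing OpenAI client libraries can usually be pointed at the base URL below without modification.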

Endpoints

  • List models — discover the vision-language models you can target in chat completions.
  • Create stream — open a live stream and get a LiveKit room URL + token to start publishing frames.
  • Get stream — inspect the state of a stream: frame counts, recent FPS, lease expiry.
  • Renew stream lease — keep a stream alive past the default 5-minute idle window.
  • Delete stream — end a stream and release its resources.
  • Chat completions — ask any vision-language model about a stream’s frames.
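Because an idle stream's lease lapses after the default 5-minute window, long-running clients typically renew on a timer. A minimal keepalive sketch, assuming you supply your own `renew` callable that hits the renew-lease endpoint:

```python
import threading

LEASE_SECONDS = 300  # the default 5-minute idle window

def start_keepalive(renew, interval: float = LEASE_SECONDS / 2):
    """Invoke renew() every `interval` seconds on a background thread so the
    stream's lease never expires; returns an Event that stops the loop."""
    stop = threading.Event()

    def loop():
        # Event.wait doubles as an interruptible sleep.
        while not stop.wait(interval):
            renew()

    threading.Thread(target=loop, daemon=True).start()
    return stop
```

Renewing at half the lease interval leaves headroom to retry a failed renewal before the lease actually expires.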

Base URL

https://api.overshoot.ai/v1

Authentication

Bearer tokens. Include your API key on every authenticated request:
curl https://api.overshoot.ai/v1/streams/<id> \
  -H "Authorization: Bearer ovs-..."
Keys are prefixed with ovs- and managed in the Overshoot dashboard. List models is the only unauthenticated endpoint.
Calls using your API key incur cost. Treat keys as secrets — never commit them or expose them in client-side code.
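The same request built in Python with only the standard library; the stream id is a placeholder, and the key is read from an environment variable rather than hard-coded:

```python
import os
import urllib.request

# Keep the key out of source control: read it from the environment.
API_KEY = os.environ.get("OVERSHOOT_API_KEY", "ovs-placeholder")

def authed_request(path: str) -> urllib.request.Request:
    """Build a GET request carrying the Bearer header the API expects."""
    return urllib.request.Request(
        f"https://api.overshoot.ai/v1{path}",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )

# Placeholder stream id; nothing is sent until you pass this to urlopen().
req = authed_request("/streams/abc123")
```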

Errors

Non-2xx responses use this shape:
{ "detail": "Stream not found" }
  • 401, 403 — Missing or invalid API key.
  • 402 — Insufficient credits.
  • 404 — Stream not found, or not owned by your key.
  • 422 — Request validation failed; detail lists the offending fields.
  • 503 — Service is draining or starting up — retry.
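A small client-side sketch mapping each documented status code to a next step; the action names are illustrative, not part of the API:

```python
def next_step(status: int) -> str:
    """Map the documented response codes to a client-side action."""
    if 200 <= status < 300:
        return "ok"
    if status in (401, 403):
        return "check-api-key"     # missing or invalid key
    if status == 402:
        return "top-up-credits"
    if status == 404:
        return "verify-stream-id"  # not found, or owned by another key
    if status == 422:
        return "inspect-detail"    # detail lists the offending fields
    if status == 503:
        return "retry"             # service draining or starting up
    return "unexpected"
```

Only 503 is safely retryable as-is; the other non-2xx codes indicate something the client must change before trying again.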
Guides

  • The Stream — concept guide for the stream lifecycle.
  • Models — overview of available vision-language models.
  • Chat Completion — the URL grammar for referencing frames and segments.
  • Best practices — production tips for low latency and cost.