Configuration
The streams.create() method accepts flat parameters to configure the video source, processing mode, and inference settings.
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `source` | `SourceConfig` | required | Video source |
| `prompt` | `str` | required | Analysis prompt |
| `model` | `str` | required | Model name |
| `on_result` | `Callable` | required | Callback for results |
| `on_error` | `Callable` | `None` | Callback for errors |
| `mode` | `"clip"` or `"frame"` | `"frame"` | Processing mode |
| `output_schema` | `dict` | `None` | JSON schema for structured output |
| `max_output_tokens` | `int` | `None` | Max tokens per inference |
| `target_fps` | `int` | `6` | Clip mode: frame sampling rate (1-30) |
| `clip_length_seconds` | `float` | `0.5` | Clip mode: clip duration in seconds |
| `delay_seconds` | `float` | `0.5` | Clip mode: delay between clips in seconds |
| `interval_seconds` | `float` | `0.5` | Frame mode: capture interval in seconds |
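For structured output, `output_schema` takes a JSON Schema dictionary. A minimal sketch — the field names here are illustrative, not anything the SDK requires:

```python
# Illustrative JSON Schema for structured output; the property names
# ("people_count", "summary") are examples, not part of the SDK.
people_schema = {
    "type": "object",
    "properties": {
        "people_count": {"type": "integer"},
        "summary": {"type": "string"},
    },
    "required": ["people_count"],
}
```

Pass it alongside the other parameters (e.g. `output_schema=people_schema, max_output_tokens=128`) and parse each result as JSON inside your `on_result` callback.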
Frame Mode (Default)
Frame mode, the default, captures individual frames at a regular interval and sends each one for analysis.
```python
stream = await client.streams.create(
    source=overshoot.FileSource(path="/path/to/video.mp4"),
    prompt="Read any visible text",
    model="Qwen/Qwen3.5-9B",
    on_result=lambda r: print(r.result),
    # mode="frame" and interval_seconds=0.5 are the defaults
)
```

Clip Mode
Clip mode captures short video clips and sends them for analysis. Use this when you need temporal/motion context.
```python
stream = await client.streams.create(
    source=overshoot.FileSource(path="/path/to/video.mp4"),
    prompt="Describe what is happening",
    model="Qwen/Qwen3.5-9B",
    on_result=lambda r: print(r.result),
    mode="clip",
    target_fps=6,
    clip_length_seconds=0.5,
    delay_seconds=0.5,
)
```

See Processing Modes for conceptual details on when to use each mode.
Updating the Prompt
You can update the prompt on a running stream without restarting it:
```python
config = await stream.update_prompt("Count the number of people")
```

Stream Object
The stream object returned by streams.create() exposes:
```python
stream.stream_id  # server-assigned ID
stream.is_active  # True if running
```

Close a stream when you are done:
```python
await stream.close()
```

Low-Level ApiClient
For full control over the stream lifecycle, use overshoot.ApiClient. This client maps directly to the HTTP API with no background tasks.
```python
api = overshoot.ApiClient(api_key="ovs_...")
response = await api.create_stream(
    source=None,  # native transport
    processing=overshoot.ClipProcessingConfig(target_fps=6),
    inference=overshoot.InferenceConfig(
        prompt="Describe what you see",
        model="Qwen/Qwen3.5-9B",
    ),
    mode="clip",
)
keepalive = await api.keepalive(response.stream_id)
await api.close_stream(response.stream_id)
await api.close()
```

See Output Token Limits for details on controlling output length.
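With `ApiClient` there are no background tasks, so a long-running client typically renews the stream itself by calling `keepalive` on a timer. A minimal sketch — the loop, the helper name, and the 5-second interval are assumptions, not part of the SDK; use whatever TTL your deployment enforces:

```python
import asyncio

async def keepalive_loop(api, stream_id, interval_seconds=5.0):
    # Hypothetical helper: periodically renew the stream so the server
    # keeps it alive. The interval is an assumed placeholder.
    try:
        while True:
            await api.keepalive(stream_id)
            await asyncio.sleep(interval_seconds)
    except asyncio.CancelledError:
        pass  # expected when the task is cancelled on shutdown
```

Run it as a task (`task = asyncio.create_task(keepalive_loop(api, response.stream_id))`) and cancel it before calling `close_stream`.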