PATCH /streams/{stream_id}/config/prompt
Example response body:
{
  "id": "<string>",
  "stream_id": "<string>",
  "prompt": "<string>",
  "backend": "gemini",
  "model": "<string>",
  "output_schema_json": {},
  "created_at": "<string>",
  "updated_at": "<string>"
}

Documentation Index

Fetch the complete documentation index at: https://docs.overshoot.ai/llms.txt

Use this file to discover all available pages before exploring further.

The v0.2 API is deprecated. New integrations should use v1.

Authorizations

Authorization
string
header
required

Provide your API key in the Authorization header as 'Bearer <api_key>'

Path Parameters

stream_id
string
required

Update the inference prompt on a running stream. Takes effect on the next inference cycle.

Body

application/json

prompt
string
required

New prompt text (e.g. 'Count the number of people')

Minimum string length: 1
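The request above can be sketched with the standard library alone. This is a minimal sketch, not an official client: the base URL `https://api.overshoot.ai/v1` is an assumption (the page only gives the path), and the stream id and API key in the usage comment are placeholders.

```python
import json
import urllib.request

BASE_URL = "https://api.overshoot.ai/v1"  # assumed base URL; not stated on this page


def build_update_prompt_request(stream_id: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build the PATCH request that updates a stream's inference prompt."""
    # The body's only field, `prompt`, is required with a minimum length of 1.
    if len(prompt) < 1:
        raise ValueError("prompt must be at least 1 character")
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        url=f"{BASE_URL}/streams/{stream_id}/config/prompt",
        data=body,
        method="PATCH",
        headers={
            # API key goes in the Authorization header as 'Bearer <api_key>'.
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )


# Sending the request (needs a live stream and a valid key; ids here are placeholders):
# with urllib.request.urlopen(build_update_prompt_request("st_123", "Count the number of people", api_key)) as resp:
#     config = json.load(resp)
```

Building the `Request` separately from sending it keeps the auth and body wiring testable without network access.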

Response

Successful Response

Current stream inference configuration, returned after updates.

id
string
required

Config identifier

stream_id
string
required

Stream identifier

prompt
string
required

Active inference prompt

backend
enum<string>
required

Inference backend

Available options: gemini, overshoot

model
string
required

Model identifier

output_schema_json
object | null
required

JSON Schema for structured output, or null

created_at
string | null
required

ISO 8601 creation timestamp

updated_at
string | null
required

ISO 8601 last update timestamp
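A sketch of consuming the response, assuming the field shapes documented above. All values here are illustrative placeholders (including the model name), not real API output; the point is the enum check on `backend` and the null guards on the timestamp fields.

```python
import json
from datetime import datetime

# Illustrative response body mirroring the documented schema; values are placeholders.
raw = json.dumps({
    "id": "cfg_example",
    "stream_id": "st_example",
    "prompt": "Count the number of people",
    "backend": "gemini",
    "model": "example-model",
    "output_schema_json": None,   # JSON Schema for structured output, or null
    "created_at": "2024-01-01T00:00:00+00:00",
    "updated_at": None,
})

config = json.loads(raw)

# backend is a closed enum per the docs.
assert config["backend"] in {"gemini", "overshoot"}

# created_at / updated_at are `string | null`: guard before parsing.
created = datetime.fromisoformat(config["created_at"]) if config["created_at"] else None
updated = datetime.fromisoformat(config["updated_at"]) if config["updated_at"] else None
```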