Change the inference prompt on a running stream. Takes effect on the next inference cycle — no need to recreate the stream.
Provide your API key in the Authorization header as 'Bearer <api_key>'.
New prompt text (e.g. 'Count the number of people')
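The update call described above can be sketched as follows. This is a minimal sketch using only the Python standard library; the base URL, endpoint path, HTTP method, and the `prompt` body field name are assumptions inferred from this page, not confirmed API details — check the full reference before use.

```python
import json
import urllib.request

API_KEY = "sk-example"      # placeholder API key
STREAM_ID = "stream_123"    # placeholder stream identifier

# Hypothetical endpoint path; verify against the actual API reference.
url = f"https://api.overshoot.ai/v1/streams/{STREAM_ID}/prompt"

# Request body with the new prompt text, per the parameter described above.
body = json.dumps({"prompt": "Count the number of people"}).encode()

req = urllib.request.Request(
    url,
    data=body,
    method="PATCH",  # assumed method for an in-place update
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)

# urllib.request.urlopen(req) would send the request; omitted here so the
# sketch stays runnable without network access or real credentials.
print(req.get_method())
print(req.get_header("Authorization"))
```

Because the change takes effect on the next inference cycle, no stream restart or reconnect is needed after this call returns.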
Successful Response
Current stream inference configuration, returned after updates.
Config identifier
Stream identifier
Active inference prompt
Inference backend. Available options: gemini, overshoot
Model identifier
JSON Schema for structured output, or null
ISO 8601 creation timestamp
ISO 8601 last update timestamp
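Handling the returned configuration can be sketched as below. The JSON field names (`id`, `stream_id`, `prompt`, `backend`, `model`, `output_schema`, `created_at`, `updated_at`) and the sample values are guesses inferred from the field descriptions above, not confirmed wire names.

```python
import json
from datetime import datetime

# Hypothetical response body; field names and values are illustrative only.
response_text = """{
  "id": "cfg_123",
  "stream_id": "stream_123",
  "prompt": "Count the number of people",
  "backend": "gemini",
  "model": "gemini-example-model",
  "output_schema": null,
  "created_at": "2025-01-01T12:00:00+00:00",
  "updated_at": "2025-01-01T12:05:00+00:00"
}"""

config = json.loads(response_text)

# The timestamps are documented as ISO 8601, so they parse directly.
created = datetime.fromisoformat(config["created_at"])
updated = datetime.fromisoformat(config["updated_at"])

print(config["prompt"])
print(updated >= created)  # the update timestamp trails the creation one
```

The `output_schema` field is null here because no structured-output JSON Schema was set; when one is configured, a schema object would appear in its place.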