401 Unauthorized
404 Not Found on /v1/chat/completions
A 404 on the chat completions endpoint almost always means a duplicated `/v1` path segment in the URL. This happens when the client SDK appends `/v1` automatically and the base URL you configured already includes it. The OpenAI Python and JavaScript SDKs both append `/v1` automatically, so the base URL you pass should be the root of OpenOpen8 with no path suffix.
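A minimal sketch of the right and wrong configuration, assuming a recent OpenAI SDK that honors the `OPENAI_BASE_URL` environment variable; the hostname is a placeholder for your own deployment:

```shell
# Do this: pass the root of your OpenOpen8 deployment.
# The SDK appends /v1/chat/completions itself.
export OPENAI_BASE_URL="https://openopen8.example.com"

# Not this: a base URL that already ends in /v1 produces a doubled segment.
BAD_BASE="https://openopen8.example.com/v1"
echo "${BAD_BASE}/v1/chat/completions"
# -> https://openopen8.example.com/v1/v1/chat/completions  (404)
```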
429 Too Many Requests
A 429 response means a rate limit has been reached. OpenOpen8 applies rate limits at three levels:

- Token-level limits — configured per-token in Tokens
- User-level limits — configured per-user in User Management or Settings → Operation Settings
- Upstream provider limits — the provider’s own API returned a 429, which OpenOpen8 passes through
To resolve it:

- Wait and retry with exponential backoff. Most rate limits reset within seconds to minutes.
- If you are an end user, contact your administrator to request a higher limit.
- If you are an administrator, open Settings → Operation Settings and review the rate limit settings. You can also check the Logs page to see which token or model is hitting its ceiling.
- If the 429 is coming from the upstream provider, either reduce your request rate or add additional channels for that provider to distribute load.
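The retry-with-backoff advice above can be sketched as a small shell loop. Here `do_request` is a hypothetical stand-in for your real call (in practice, something like a `curl` invocation that prints the HTTP status code, as noted in the comment):

```shell
# Exponential-backoff retry sketch. Replace do_request with your real request,
# e.g.: curl -s -o /dev/null -w '%{http_code}' "$URL" -H "Authorization: Bearer $TOKEN"
do_request() {
  # Placeholder: pretend the first two attempts are rate limited.
  if [ "$attempt" -lt 2 ]; then echo 429; else echo 200; fi
}

attempt=0
delay=1
code=000
while [ "$attempt" -lt 5 ]; do
  code=$(do_request)
  [ "$code" != "429" ] && break
  sleep "$delay"            # back off before the next attempt
  delay=$((delay * 2))      # double the wait each time
  attempt=$((attempt + 1))
done
echo "final status: $code after $attempt retries"
# -> final status: 200 after 2 retries
```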
503 Service Unavailable — no available channels
Streaming response cuts off before completion
If streaming responses stop mid-generation without an error, the most likely cause is the `STREAMING_TIMEOUT` limit being reached. The default is 300 seconds (5 minutes); for very long reasoning or generation tasks, you may need to increase this significantly. Increase the timeout by setting the environment variable and restarting the container.

If responses cut off at inconsistent points rather than after a fixed interval, also check whether the upstream provider itself is terminating the connection. Look at the Logs page for error details on the affected requests.
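A minimal sketch of the change for a Docker deployment; `STREAMING_TIMEOUT` is the variable named above, while the image name, port, and timeout value are illustrative:

```shell
# Raise the streaming timeout to 20 minutes and restart the container.
# "openopen8/openopen8" and port 3000 are placeholders for your setup.
docker run -d --name openopen8 \
  -p 3000:3000 \
  -e STREAMING_TIMEOUT=1200 \
  openopen8/openopen8:latest
```

For docker-compose deployments, add the same variable under the service's `environment:` key and run `docker compose up -d` to recreate the container.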
Large image or base64 responses are truncated
If responses from vision or image-generation models are cut off, particularly when the model returns large base64-encoded image data inside a streaming response, the per-line buffer for the stream scanner is too small. Increase it by setting the corresponding environment variable and restarting the container. If truncation persists, keep doubling the value until responses are complete. Each MB you add is memory that the container may use per active streaming connection, so do not set it higher than necessary.
Login state is inconsistent — users are logged out randomly
If users report being logged out unpredictably, or have to log in again when their request is routed to a different instance, you have not set a shared `SESSION_SECRET`. By default, each OpenOpen8 container generates its own random session signing key at startup, so session cookies signed by one instance are rejected by all others. To fix this, set the same secret on every instance. Choose a strong random value (at least 32 characters) and store it securely; treat it like a password, since anyone with this value can forge valid session cookies.

Restart all containers after setting the variable. Existing sessions will be invalidated and users will need to log in once more, after which logins will persist correctly across instances.
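One way to generate a suitable secret, assuming `openssl` is available; how you pass it to each container (compose file, `-e` flag, secrets manager) depends on your deployment:

```shell
# Generate one strong shared secret and reuse it for every instance.
SESSION_SECRET=$(openssl rand -hex 32)   # 32 random bytes -> 64 hex characters
echo "length: ${#SESSION_SECRET}"

# Then start each container with the same value, e.g.:
#   docker run -d -e SESSION_SECRET="$SESSION_SECRET" openopen8/openopen8:latest
```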
Channel test is failing
When the Test button on a channel returns an error, work through these checks in order:
- API key — paste the key into the channel editor and confirm there are no leading or trailing spaces. Some providers issue keys with invisible characters when copied from their dashboard.
- Base URL — verify the URL is correct for the provider type. For standard OpenAI-compatible providers it should end without a trailing slash and without `/v1` (OpenOpen8 appends the path itself). For Azure, confirm the endpoint matches the format `https://your-resource.openai.azure.com`.
- Model name — the model name configured in the channel must match exactly what the provider expects. For example, Anthropic requires `claude-3-5-sonnet-20241022`, not a shortened alias. Check the provider’s documentation for the exact identifier.
- Network reachability — ensure outbound HTTPS is allowed to openopen8.ai.
- TLS issues — if the channel target uses a self-signed certificate (for example, a local Ollama instance behind a reverse proxy), you can set `TLS_INSECURE_SKIP_VERIFY=true` temporarily to test. Do not leave this enabled in production.
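The base-URL rules in the checklist above can be sanity-checked locally before retesting the channel. `check_base_url` is a hypothetical helper, not part of OpenOpen8, and the URLs are placeholders:

```shell
# Flag the two common base-URL mistakes for OpenAI-compatible channels:
# a /v1 suffix and a trailing slash.
check_base_url() {
  case "$1" in
    */v1 | */v1/) echo "remove the /v1 suffix" ;;
    */)           echo "remove the trailing slash" ;;
    *)            echo "ok" ;;
  esac
}

check_base_url "https://openopen8.example.com/v1"   # -> remove the /v1 suffix
check_base_url "https://api.example.com/"           # -> remove the trailing slash
check_base_url "https://api.example.com"            # -> ok
```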