Supported formats
OpenOpen8 accepts requests at these endpoints, each corresponding to a specific format:

| Endpoint | Format | Primary use |
|---|---|---|
| POST /v1/chat/completions | OpenAI Chat Completions | The universal default — works with any OpenAI-compatible client |
| POST /v1/completions | OpenAI legacy completions | Older OpenAI text completion clients |
| POST /v1/messages | Claude Messages | Anthropic-native clients and SDKs |
| POST /v1/responses | OpenAI Responses | Newer OpenAI Responses API format |
| POST /v1beta/models/{model}:generateContent | Gemini | Google AI clients and SDKs |
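Whichever backend a request is ultimately routed to, a client only needs to speak one of these formats. As a minimal sketch, here is a Chat Completions request body for the default endpoint; the gateway URL and model name are placeholders, not values OpenOpen8 defines:

```python
import json

# Placeholder values: substitute your own gateway URL and a model name
# that exists in your channel configuration.
BASE_URL = "http://localhost:3000"

payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "user", "content": "Hello!"},
    ],
}

# Send this as the JSON body of:  POST {BASE_URL}/v1/chat/completions
print(json.dumps(payload, indent=2))
```

The same body shape works regardless of which provider the gateway routes to; that is the point of the universal default endpoint.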
How format conversion works
When you send a request, OpenOpen8 reads the incoming format based on the endpoint you called. It then converts the request to match the format required by the backend channel it routes to, and converts the response back to the format you sent. For example, if you send a Claude Messages request to /v1/messages and OpenOpen8 routes it to an OpenAI-compatible channel, it converts your request to OpenAI format before sending it upstream, then converts the OpenAI response back to Claude Messages format before returning it to you.
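To make the request-side mapping concrete, here is a rough sketch of the OpenAI → Claude direction. This is illustrative only, not OpenOpen8's actual converter; it shows the main structural differences, namely that Claude Messages hoists the system prompt into a top-level field and requires max_tokens:

```python
# A rough sketch of the request-side mapping, not OpenOpen8's actual code.
def openai_to_claude_request(body):
    # Claude Messages carries the system prompt in a top-level "system"
    # field and requires "max_tokens", so both need special handling.
    system = [m["content"] for m in body["messages"] if m["role"] == "system"]
    out = {
        "model": body["model"],
        "max_tokens": body.get("max_tokens", 1024),
        "messages": [m for m in body["messages"] if m["role"] != "system"],
    }
    if system:
        out["system"] = "\n".join(system)
    return out

converted = openai_to_claude_request({
    "model": "claude-sonnet-4",
    "messages": [
        {"role": "system", "content": "Be terse."},
        {"role": "user", "content": "Hi"},
    ],
})
```

The reverse mapping on the response undoes the same transformations, so the client never sees the upstream format.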
Supported conversions
OpenAI ↔ Claude Messages
OpenOpen8 can translate in both directions between the OpenAI Chat Completions format and the Anthropic Claude Messages format.
- OpenAI → Claude: Send to /v1/chat/completions, route to an Anthropic channel. OpenOpen8 converts the request to Claude Messages format and the response back to OpenAI format.
- Claude → OpenAI: Send to /v1/messages, route to an OpenAI-compatible channel. OpenOpen8 converts the request to OpenAI format and the response back to Claude Messages format.
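The response side of the Claude → OpenAI direction can be sketched as follows. The field choices are illustrative shapes of the two formats, not OpenOpen8's exact output:

```python
# Sketch of the response-side mapping for the Claude -> OpenAI direction
# (illustrative, not OpenOpen8's exact output).
def claude_to_openai_response(resp):
    # Claude returns a list of content blocks; OpenAI expects one string.
    text = "".join(b["text"] for b in resp["content"] if b["type"] == "text")
    finish = {"end_turn": "stop", "max_tokens": "length"}.get(
        resp.get("stop_reason"), "stop"
    )
    return {
        "object": "chat.completion",
        "model": resp["model"],
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": text},
            "finish_reason": finish,
        }],
    }

openai_resp = claude_to_openai_response({
    "model": "claude-sonnet-4",
    "content": [{"type": "text", "text": "Hello!"}],
    "stop_reason": "end_turn",
})
```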
OpenAI → Gemini
Send a standard OpenAI Chat Completions request and route it to a Google Gemini channel. OpenOpen8 converts the request to Gemini’s generateContent format and returns an OpenAI-compatible response.
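A sketch of what that request conversion involves (illustrative, not OpenOpen8's actual code): Gemini's generateContent body uses a "contents" array of parts, the role "model" instead of "assistant", and a separate systemInstruction field:

```python
# Sketch of the OpenAI -> Gemini request mapping (illustrative only).
def openai_to_gemini_request(body):
    contents = []
    system_parts = []
    for m in body["messages"]:
        if m["role"] == "system":
            # Gemini takes system text via a separate field.
            system_parts.append({"text": m["content"]})
            continue
        role = "model" if m["role"] == "assistant" else "user"
        contents.append({"role": role, "parts": [{"text": m["content"]}]})
    out = {"contents": contents}
    if system_parts:
        out["systemInstruction"] = {"parts": system_parts}
    return out

gemini_body = openai_to_gemini_request({
    "model": "gemini-2.5-pro",
    "messages": [
        {"role": "system", "content": "Be brief."},
        {"role": "user", "content": "Hi"},
        {"role": "assistant", "content": "Hello."},
    ],
})
```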
Gemini → OpenAI
Send a Gemini-format request to /v1beta/models/{model}:generateContent and route it to an OpenAI-compatible channel. OpenOpen8 converts the request and returns a Gemini-format response.
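The reverse direction, sketched: a Gemini generateContent body reshaped into OpenAI Chat Completions messages. This is illustrative, not OpenOpen8's actual converter; note that Gemini keeps the model name in the URL path, so it is passed in separately here:

```python
# Illustrative sketch of the Gemini -> OpenAI request mapping.
def gemini_to_openai_request(body, model):
    messages = []
    if "systemInstruction" in body:
        text = "".join(p["text"] for p in body["systemInstruction"]["parts"])
        messages.append({"role": "system", "content": text})
    for c in body["contents"]:
        # Gemini's "model" role corresponds to OpenAI's "assistant".
        role = "assistant" if c.get("role") == "model" else "user"
        text = "".join(p["text"] for p in c["parts"])
        messages.append({"role": role, "content": text})
    return {"model": model, "messages": messages}

openai_body = gemini_to_openai_request(
    {"contents": [{"role": "user", "parts": [{"text": "Hi"}]}]},
    model="gpt-4o",  # placeholder: whatever the channel routes to
)
```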
OpenAI Responses format
OpenOpen8 accepts requests at POST /v1/responses using the newer OpenAI Responses API format. Conversion between the Responses format and the standard Chat Completions format is in active development.

Reasoning effort
Several models support adjustable reasoning effort. OpenOpen8 exposes this through model name suffixes — you append a suffix to the model name in your request, and OpenOpen8 translates it to the appropriate provider-specific parameter.

| Suffix | Effect |
|---|---|
| -thinking | Enable extended reasoning / thinking mode |
| -high | High reasoning effort |
| -medium | Medium reasoning effort |
| -low | Low reasoning effort |
| -nothinking | Disable thinking mode (Gemini models) |
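One way a gateway could parse these suffixes (a sketch of the idea, not OpenOpen8's actual implementation): strip the suffix off the model name and keep the requested effort for later translation into a provider parameter:

```python
# Suffix parsing sketch. Longer suffixes are checked first so that
# "-nothinking" is never mistaken for "-thinking".
SUFFIXES = ("-nothinking", "-thinking", "-high", "-medium", "-low")

def split_reasoning_suffix(model):
    for suffix in SUFFIXES:
        if model.endswith(suffix):
            return model[: -len(suffix)], suffix.lstrip("-")
    return model, None

base, effort = split_reasoning_suffix("o3-mini-high")  # ("o3-mini", "high")
```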
OpenAI reasoning models
Append -high, -medium, or -low to a supported model name:
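For instance, a request for the model name o3-mini-high might be forwarded upstream with the suffix stripped and OpenAI's reasoning_effort parameter set. The exact model names available depend on your configured channels; this is a sketch of the assumed mapping:

```python
# Assumed mapping, shown as the upstream OpenAI request body.
upstream = {
    "model": "o3-mini",          # suffix "-high" stripped from "o3-mini-high"
    "reasoning_effort": "high",  # OpenAI's reasoning-effort parameter
    "messages": [{"role": "user", "content": "Why is the sky blue?"}],
}
```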
Claude thinking models
Append -thinking to enable extended thinking:
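For instance, claude-sonnet-4-thinking might translate to Anthropic's extended-thinking request parameter; the budget value shown is illustrative, not something OpenOpen8 guarantees:

```python
# Assumed mapping, shown as the upstream Anthropic request body.
upstream = {
    "model": "claude-sonnet-4",  # suffix "-thinking" stripped
    "max_tokens": 4096,
    # Anthropic's extended-thinking parameter; the budget is illustrative.
    "thinking": {"type": "enabled", "budget_tokens": 1024},
    "messages": [{"role": "user", "content": "Think step by step."}],
}
```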
Gemini thinking models
Append -thinking to enable, -nothinking to disable, or -low/-medium/-high for effort level. You can also append a token budget: gemini-2.5-pro-thinking-128 sets a thinking budget of 128 tokens.

Related
Supported providers
See all providers available through OpenOpen8.
API reference
Full request and response schemas for each supported format.