OpenOpen8 exposes an OpenAI-compatible endpoint, which means any library or tool that works with OpenAI will work with OpenOpen8 — you only need to change the base URL and swap in your OpenOpen8 token. This guide walks you through getting a token, making your first request, and checking your usage in the dashboard.

Prerequisites

Before you start, you need:
  • An OpenOpen8 account. Sign up at openopen8.ai if you don’t have one yet.
  • Credits in your account. Top up from the dashboard under Top Up.

Steps

1. Get your API token

Log in to the OpenOpen8 dashboard and create an API token:
  1. Open openopen8.ai in a browser and sign in.
  2. Navigate to Settings → Tokens.
  3. Click Create token.
  4. Give the token a name, optionally set an expiry and quota, then click Submit.
  5. Copy the token value — it won’t be shown again.
Tokens control which models a user or application can access and how much quota they can consume. You can create separate tokens for different services, teams, or users.
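Rather than pasting the token directly into code, it's good practice to read it from the environment. A minimal sketch — the variable name OPENOPEN8_TOKEN is just a convention for this example, not something the platform requires:

```python
import os

def load_token(env=os.environ) -> str:
    """Read the OpenOpen8 API token from the environment.

    OPENOPEN8_TOKEN is a hypothetical variable name; any name works as
    long as your application and deployment agree on it.
    """
    token = env.get("OPENOPEN8_TOKEN", "")
    if not token:
        raise RuntimeError("Set OPENOPEN8_TOKEN before making requests")
    return token
```

Keeping the token out of source files also keeps it out of version control and shell history.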
2. Make your first chat completion request

Use your token and the OpenOpen8 base URL to send a chat completion request. Replace YOUR_TOKEN with the token you copied above.
curl https://openopen8.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "user",
        "content": "Say hello in one sentence."
      }
    ]
  }'
A successful response looks like this:
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! It's great to meet you."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 10,
    "total_tokens": 23
  }
}
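The reply text and token counts sit at fixed locations in this response. A short example using Python's standard library and the sample response above (the truncated "id" field is omitted here):

```python
import json

# The sample chat completion response from above.
raw = """
{
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Hello! It's great to meet you."},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 13, "completion_tokens": 10, "total_tokens": 23}
}
"""

response = json.loads(raw)
reply = response["choices"][0]["message"]["content"]  # the assistant's text
total_tokens = response["usage"]["total_tokens"]      # billed token count
```

The `usage` block is what the dashboard's per-request log and cost figures are based on, so it's worth capturing in your own logging too.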
Any client library or tool that supports the OpenAI API works with OpenOpen8 out of the box. Just set the base URL to https://openopen8.ai/v1 and use your OpenOpen8 token as the API key. This includes LangChain, LlamaIndex, Vercel AI SDK, and others.
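Under the hood, every such client simply sends HTTP requests like the curl example above. As a stdlib-only illustration, the sketch below builds the same request in Python without sending it (the helper name `build_chat_request` is ours, not part of any SDK):

```python
import json
import urllib.request

BASE_URL = "https://openopen8.ai/v1"  # OpenOpen8's OpenAI-compatible base URL

def build_chat_request(token: str, model: str, messages: list) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-compatible chat completion request."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # your OpenOpen8 token
        },
        method="POST",
    )

req = build_chat_request(
    "YOUR_TOKEN", "gpt-4o",
    [{"role": "user", "content": "Say hello in one sentence."}],
)
# urllib.request.urlopen(req) would send it and return the JSON response.
```

Whatever client you use, these are the only two knobs that change: the base URL and the bearer token.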
3. Check your usage in the dashboard

After making requests, you can view token consumption and costs in the dashboard:
  1. Log in to openopen8.ai.
  2. Navigate to Log to see a record of each request, including the model used, token counts, and cost.
  3. Navigate to Analytics for aggregated charts and usage summaries.
Usage is tracked per token, so you can monitor consumption for each application or user you’ve issued a token to.

What to try next

Use the Claude Messages format

OpenOpen8 accepts native Anthropic Claude Messages requests. Point your Claude client at https://openopen8.ai instead of api.anthropic.com.

Explore supported providers

OpenOpen8 supports 40+ providers including OpenAI, Anthropic, Gemini, DeepSeek, and more — all pre-configured and ready to use.

Set rate limits and quotas

Go to Settings → Tokens to edit your token and set per-minute rate limits and total quota limits per model.

Learn about format conversion

See how OpenOpen8 automatically translates between OpenAI, Claude, and Gemini formats in the format conversion guide.