OpenAI-compatible API powered by decentralized inference. Drop-in replacement, censorship-resistant, no rate limits.
For developers & apps
Use any OpenAI-compatible client. Just change the base URL.
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://dogecat.com/api/v1",
    api_key="your-api-key"
)

response = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B-Instruct-2507-FP8",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
```

```bash
curl https://dogecat.com/api/v1/chat/completions \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen/Qwen3-235B-A22B-Instruct-2507-FP8",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

Add dogecat as a provider in your OpenClaw config. Works with any OpenClaw agent!
```yaml
providers:
  dogecat:
    kind: openai
    baseUrl: https://dogecat.com/api/v1
    apiKey: PASTE_YOUR_API_KEY_HERE
    models:
      - Qwen/Qwen3-235B-A22B-Instruct-2507-FP8

# Then use it as your default:
defaultModel: dogecat/Qwen/Qwen3-235B-A22B-Instruct-2507-FP8
```

Don't have OpenClaw? Get it here →
`POST /api/v1/chat/completions`

Create a chat completion. Fully compatible with OpenAI's API.
| Parameter | Type | Description |
|---|---|---|
| `model` | string | Model ID to use |
| `messages` | array | List of messages in the conversation |
| `stream` | boolean | Whether to stream responses (optional) |
| `max_tokens` | integer | Maximum tokens to generate (optional) |
| `temperature` | number | Sampling temperature, 0–2 (optional) |
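When `stream` is true, OpenAI-compatible APIs deliver the response as server-sent events: `data:` lines each carrying a JSON chunk with a content delta, terminated by `data: [DONE]`. A minimal parser sketch, assuming dogecat follows that convention (the sample chunks below are illustrative and omit fields a real response would include):

```python
import json

# Illustrative SSE lines, shaped like an OpenAI-style streaming response.
sse_lines = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo!"}}]}',
    'data: [DONE]',
]

def collect_stream(lines):
    """Concatenate the content deltas from OpenAI-style SSE chunks."""
    text = []
    for line in lines:
        payload = line.removeprefix("data: ")
        if payload == "[DONE]":  # sentinel marking the end of the stream
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        if "content" in delta:
            text.append(delta["content"])
    return "".join(text)

print(collect_stream(sse_lines))  # → Hello!
```

In practice the official `openai` client handles this for you: pass `stream=True` to `client.chat.completions.create(...)` and iterate over the returned chunks.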
`GET /api/v1/models`

List available models.
| Model | Input / 1M | Output / 1M | Context |
|---|---|---|---|
| Qwen3-235B-A22B-Instruct-2507-FP8 (latest Qwen3 MoE, 235B parameters) | $0.50 | $1.00 | 128K |
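At these rates, per-request cost can be estimated from the `usage` token counts returned with each completion. A quick sketch using the prices from the table above (the token counts here are made up for illustration):

```python
# Prices per 1M tokens for Qwen3-235B-A22B-Instruct-2507-FP8, from the table above.
INPUT_PRICE_PER_M = 0.50
OUTPUT_PRICE_PER_M = 1.00

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of one request from its usage counts."""
    return (prompt_tokens * INPUT_PRICE_PER_M
            + completion_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a request with 1,200 prompt tokens and 300 completion tokens:
print(f"${request_cost(1200, 300):.6f}")  # → $0.000900
```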