
If your Allowed Headers are already set to *, you can skip this note. Otherwise, if you hit issues integrating Bifrost with OpenCode, switch Allowed Headers to * or add the specific headers your client requires. By default, Bifrost whitelists: Content-Type, Authorization, X-Requested-With, X-Stainless-Timeout, and X-Api-Key.
Setup
1. Configure OpenCode to work with Bifrost
OpenCode uses a JSON config file (opencode.json) to configure providers. Point your provider’s baseURL to Bifrost.
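For example, a provider entry pointing at Bifrost might look like the sketch below. The port, path, and schema URL are assumptions based on a default local Bifrost setup; substitute your deployment's actual endpoint.

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "openai": {
      "options": {
        "baseURL": "http://localhost:8080/openai"
      }
    }
  }
}
```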
Using OpenAI-compatible endpoint
Route OpenAI and other providers through Bifrost’s OpenAI endpoint.
Using Anthropic endpoint
Route Anthropic models through Bifrost’s Anthropic endpoint.
Virtual Keys
When Bifrost has virtual key authentication enabled, set apiKey in your provider options to your virtual key:
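A minimal sketch, assuming a virtual key issued by your Bifrost instance (the key value, provider ID, and URL below are placeholders):

```json
{
  "provider": {
    "openai": {
      "options": {
        "baseURL": "http://localhost:8080/openai",
        "apiKey": "vk-your-virtual-key"
      }
    }
  }
}
```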
Model Selection
Set your default models in opencode.json:
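For instance, a sketch pairing a capable default model with a lighter small_model (model names are illustrative; use any provider/model pair configured in Bifrost):

```json
{
  "model": "anthropic/claude-sonnet-4-5-20250929",
  "small_model": "groq/llama-3.3-70b-versatile"
}
```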

- Use powerful models like openai/gpt-5 or anthropic/claude-sonnet-4-5-20250929 for complex coding tasks
- Use fast models like groq/llama-3.3-70b-versatile for quick completions
- Set small_model to a lighter model for faster, lower-cost operations
Using Multiple Providers
Bifrost routes requests to the correct provider based on the model name. Use the provider/model-name format to access any configured provider through the single OpenAI endpoint:
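One way to sketch this is a single custom provider entry that lists models from several upstream providers; the provider ID "bifrost", the /v1 path, and the use of the @ai-sdk/openai-compatible package are assumptions, not a confirmed setup:

```json
{
  "provider": {
    "bifrost": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "http://localhost:8080/v1"
      },
      "models": {
        "openai/gpt-5": {},
        "anthropic/claude-sonnet-4-5-20250929": {},
        "groq/llama-3.3-70b-versatile": {}
      }
    }
  }
}
```

Every model goes through the same endpoint; only the provider/model-name string changes.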
Supported Providers
Bifrost supports the following providers with the provider/model-name format:
openai, azure, gemini, vertex, bedrock, mistral, groq, cerebras, cohere, perplexity, xai, ollama, openrouter, huggingface, nebius, parasail, replicate, vllm, sgl
OpenCode connects to Bifrost via a single endpoint. Bifrost handles routing to the correct provider based on the model name — no per-provider configuration needed.
Observability
All OpenCode traffic through Bifrost is logged. Monitor it at http://localhost:8080/logs, where you can filter by provider or model, or search through conversation content to track usage.
Next Steps
- Provider Configuration — Configure AI providers in Bifrost
- Virtual Keys — Set up usage limits and access control

