Multi-Provider Setup
Configure multiple providers to switch between them seamlessly. This example shows how to configure OpenAI, Anthropic, and Mistral; you can set this up in the Web UI, through the API, or in config.json (a sketch follows the Web UI steps below).

Using the Web UI:

- Go to http://localhost:8080
- Navigate to “Providers” in the sidebar
- Click “Add Provider”
- Select provider and configure keys
- Save configuration
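Using config.json instead, an equivalent three-provider setup might look like the sketch below. The `providers`/`keys`/`value`/`weight` structure mirrors the conventions used throughout this section, but treat the exact field names and model IDs as illustrative and verify them against your Bifrost config reference.

```json
{
  "providers": {
    "openai": {
      "keys": [
        { "value": "env.OPENAI_API_KEY", "models": ["gpt-4o", "gpt-4o-mini"], "weight": 1.0 }
      ]
    },
    "anthropic": {
      "keys": [
        { "value": "env.ANTHROPIC_API_KEY", "models": ["claude-3-5-sonnet-20240620"], "weight": 1.0 }
      ]
    },
    "mistral": {
      "keys": [
        { "value": "env.MISTRAL_API_KEY", "models": ["mistral-large-latest"], "weight": 1.0 }
      ]
    }
  }
}
```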
Making Requests
Once providers are configured, you can make requests to any specific provider. This example shows how to send a request directly to OpenAI’s GPT-4o Mini model; Bifrost handles the provider-specific API formatting automatically.
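As a sketch, assuming Bifrost exposes an OpenAI-compatible `/v1/chat/completions` route at the address used in the setup steps above, the request body prefixes the provider name to the model (the same `provider/model` convention used for Vertex later in this section):

```json
{
  "model": "openai/gpt-4o-mini",
  "messages": [
    { "role": "user", "content": "Hello from Bifrost!" }
  ]
}
```

The `openai/` prefix tells Bifrost which configured provider should handle the request.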
Environment Variables

Set up your API keys for the providers you want to use. Bifrost supports both direct key values and environment variable references with the `env.` prefix:

- Use `"value": "env.VARIABLE_NAME"` to reference environment variables
- Use `"value": "sk-proj-xxxxxxxxx"` to pass keys directly
- All sensitive data is automatically redacted in GET requests and UI responses for security
Advanced Configuration
Weighted Load Balancing
Distribute requests across multiple API keys or providers based on custom weights. This example splits traffic 70/30 between two OpenAI keys, which is useful for managing rate limits or costs across different accounts.

Using the Web UI:

- Navigate to “Providers” → “OpenAI”
- Click “Add Key” to add multiple keys
- Set weight values (0.7 and 0.3)
- Save configuration
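In config.json, the same 70/30 split might be expressed with the `weight` field on each key. The environment variable names here are hypothetical placeholders:

```json
{
  "providers": {
    "openai": {
      "keys": [
        { "value": "env.OPENAI_API_KEY_PRIMARY", "weight": 0.7 },
        { "value": "env.OPENAI_API_KEY_SECONDARY", "weight": 0.3 }
      ]
    }
  }
}
```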
Model-Specific Keys
Use different API keys for specific models, allowing you to manage access controls and billing separately. This example uses a premium key for advanced reasoning models (o1-preview, o1-mini) and a standard key for regular GPT models.

Using the Web UI:

- Navigate to “Providers” → “OpenAI”
- Add first key with models: `["gpt-4o", "gpt-4o-mini"]`
- Add premium key with models: `["o1-preview", "o1-mini"]`
- Save configuration
Custom Network Settings
Customize the network configuration for each provider, including custom base URLs, extra headers, and timeout settings. This example shows how to use a local OpenAI-compatible server with custom headers for user identification.

Using the Web UI:

- Navigate to “Providers” → “OpenAI” → “Advanced”
- Set Base URL: `http://localhost:8000/v1`
- Set Timeout: `30` seconds
- Save configuration
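A minimal config.json sketch for these network settings. The `network_config` wrapper, the timeout key, and the example header are assumptions based on the UI fields above, so check them against the config reference:

```json
{
  "providers": {
    "openai": {
      "network_config": {
        "base_url": "http://localhost:8000/v1",
        "extra_headers": { "X-User-ID": "team-alpha" },
        "default_request_timeout_in_seconds": 30
      }
    }
  }
}
```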
Managing Retries
Configure retry behavior for handling temporary failures and rate limits. This example sets up exponential backoff with up to 5 retries, starting with a 1ms delay and capping at 10 seconds, which is ideal for handling transient network issues.

Using the Web UI:

- Navigate to “Providers” → “OpenAI” → “Advanced”
- Set Max Retries: `5`
- Set Initial Backoff: `1ms`
- Set Max Backoff: `10000ms`
- Save configuration
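A config.json sketch of the same retry policy; the field names are assumptions derived from the UI labels above:

```json
{
  "providers": {
    "openai": {
      "network_config": {
        "max_retries": 5,
        "retry_backoff_initial_ms": 1,
        "retry_backoff_max_ms": 10000
      }
    }
  }
}
```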
Custom Concurrency and Buffer Size
Fine-tune performance by adjusting worker concurrency and queue sizes per provider (the defaults are 1000 workers and a 5000-entry queue). This example gives OpenAI higher limits than Anthropic (100 workers and a 500-entry queue) for throughput, while Anthropic gets conservative limits (25 workers, 100-entry queue) to respect its rate limits.

Using the Web UI:

- Navigate to “Providers” → Provider → “Performance”
- Set Concurrency: Worker count (100 for OpenAI, 25 for Anthropic)
- Set Buffer Size: Queue size (500 for OpenAI, 100 for Anthropic)
- Save configuration
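A config.json sketch of these per-provider limits; the `concurrency_and_buffer_size` wrapper is an assumption based on the UI fields above:

```json
{
  "providers": {
    "openai": {
      "concurrency_and_buffer_size": { "concurrency": 100, "buffer_size": 500 }
    },
    "anthropic": {
      "concurrency_and_buffer_size": { "concurrency": 25, "buffer_size": 100 }
    }
  }
}
```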
Setting Up a Proxy
Route requests through proxies for compliance, security, or geographic requirements. This example shows both an HTTP proxy for OpenAI and an authenticated SOCKS5 proxy for Anthropic, useful for corporate environments or regional access.

Using the Web UI:

- Navigate to “Providers” → Provider → “Proxy”
- Select Proxy Type: HTTP or SOCKS5
- Set Proxy URL: `http://localhost:8000`
- Add credentials if needed (username/password)
- Save configuration
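A config.json sketch of both proxy setups; the `proxy_config` wrapper, the SOCKS5 URL, and the credential placeholders are illustrative assumptions:

```json
{
  "providers": {
    "openai": {
      "proxy_config": { "type": "http", "url": "http://localhost:8000" }
    },
    "anthropic": {
      "proxy_config": {
        "type": "socks5",
        "url": "socks5://proxy.example.com:1080",
        "username": "proxy-user",
        "password": "proxy-pass"
      }
    }
  }
}
```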
Send Back Raw Response
Include the original provider response alongside Bifrost’s standardized response format. This is useful for debugging and for accessing provider-specific metadata.

Using the Web UI:

- Navigate to “Providers” → Provider → “Advanced”
- Toggle “Include Raw Response” to enabled
- Save configuration
When enabled, the original provider payload is returned in each response under `extra_fields.raw_response`.
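In config.json this is plausibly a single provider-level flag; the field name below is an assumption derived from the UI toggle, so confirm it against the config reference:

```json
{
  "providers": {
    "openai": {
      "send_back_raw_response": true
    }
  }
}
```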
Provider-Specific Authentication
Enterprise cloud providers require additional configuration beyond API keys. Configure Azure OpenAI, AWS Bedrock, and Google Vertex with platform-specific authentication details.

Azure OpenAI

Azure OpenAI requires endpoint URLs, deployment mappings, and API version configuration.

Using the Web UI:

- Navigate to “Providers” → “Azure OpenAI”
- Set API Key: Your Azure API key
- Set Endpoint: Your Azure endpoint URL
- Configure Deployments: Map model names to deployment names
- Set API Version: e.g., `2024-08-01-preview`
- Save configuration
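A config.json sketch for Azure OpenAI. The provider ID, the `azure_key_config` wrapper, and the endpoint/deployment values are illustrative assumptions:

```json
{
  "providers": {
    "azure": {
      "keys": [
        {
          "value": "env.AZURE_API_KEY",
          "azure_key_config": {
            "endpoint": "https://your-resource.openai.azure.com",
            "deployments": { "gpt-4o": "gpt-4o-deployment" },
            "api_version": "2024-08-01-preview"
          }
        }
      ]
    }
  }
}
```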
AWS Bedrock
AWS Bedrock supports both explicit credentials and IAM role authentication.

Using the Web UI:

- Navigate to “Providers” → “AWS Bedrock”
- Set API Key: AWS API Key (or leave empty if using IAM role authentication)
- Set Access Key: AWS Access Key ID (or leave empty to use IAM in environment)
- Set Secret Key: AWS Secret Access Key (or leave empty to use IAM in environment)
- Set Region: e.g., `us-east-1`
- Configure Deployments: Map model names to inference profiles
- Set ARN: Required for deployments mapping
- Save configuration
- If using API key authentication, set the `value` field to the API key; otherwise leave it empty for IAM role authentication.
- For IAM role authentication, if both `access_key` and `secret_key` are empty, Bifrost uses the IAM role from the environment.
- `arn` is required for URL formation; the `deployments` mapping is ignored without it.
- When `arn` and `deployments` are both set, Bifrost uses model profiles; otherwise it forms the request path from the incoming model name directly.
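A config.json sketch combining these fields. The provider ID, the `bedrock_key_config` wrapper, and the ARN/profile names are placeholders for illustration; the empty `value` reflects the explicit-credentials path described in the notes above:

```json
{
  "providers": {
    "bedrock": {
      "keys": [
        {
          "value": "",
          "bedrock_key_config": {
            "access_key": "env.AWS_ACCESS_KEY_ID",
            "secret_key": "env.AWS_SECRET_ACCESS_KEY",
            "region": "us-east-1",
            "arn": "arn:aws:bedrock:us-east-1:123456789012:application-inference-profile/example",
            "deployments": { "claude-3-5-sonnet": "example-inference-profile" }
          }
        }
      ]
    }
  }
}
```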
Google Vertex
Google Vertex requires project configuration and authentication credentials.

Using the Web UI:

- Navigate to “Providers” → “Google Vertex”
- Set API Key: Your Vertex API key
- Set Project ID: Your Google Cloud project ID
- Set Region: e.g., `us-central1`
- Set Auth Credentials: Service account credentials JSON
- Save configuration
- You can leave both API Key and Auth Credentials empty to use service account authentication from the environment.
- You must set Project Number in Key config if using fine-tuned models.
- API Key Authentication is only supported for Gemini and fine-tuned models.
- You can use custom fine-tuned models by passing `vertex/<your-fine-tuned-model-id>`, or `vertex/<model-deployment-alias>` if you have set the deployments in the key config.
Vertex AI support for fine-tuned models is currently in beta. Requests to non-Gemini fine-tuned models may fail, so please test and report any issues.
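A config.json sketch for Vertex. The provider ID and the `vertex_key_config` wrapper are assumptions, and `env.VERTEX_CREDENTIALS` illustrates pointing `auth_credentials` at an environment variable holding the service account JSON; per the notes above, both the key value and credentials can instead be left empty to use environment authentication:

```json
{
  "providers": {
    "vertex": {
      "keys": [
        {
          "value": "",
          "vertex_key_config": {
            "project_id": "your-project-id",
            "region": "us-central1",
            "auth_credentials": "env.VERTEX_CREDENTIALS"
          }
        }
      ]
    }
  }
}
```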
Next Steps
Now that you understand provider configuration, explore these related topics:

Essential Topics
- Streaming Responses - Real-time response generation
- Tool Calling - Enable AI to use external functions
- Multimodal AI - Process images, audio, and text
- Integrations - Drop-in compatibility with existing SDKs
Advanced Topics
- Core Features - Advanced Bifrost capabilities
- Architecture - How Bifrost works internally
- Deployment - Production setup and scaling

