Overview
Bifrost Clustering delivers production-ready high availability through a peer-to-peer network architecture with automatic service discovery. The clustering system uses gossip protocols to maintain consistent state across nodes while providing seamless scaling, automatic failover, and zero-downtime deployments.
Why Clustering Matters
Modern AI gateway deployments require robust infrastructure to handle production workloads:
| Challenge | Impact | Clustering Solution |
|---|---|---|
| Single Point of Failure | Complete service outage if gateway fails | Distributed architecture with automatic failover |
| Traffic Spikes | Performance degradation under high load | Dynamic load distribution across multiple nodes |
| Provider Rate Limits | Request throttling and service interruption | Distributed rate limit tracking across cluster |
| Regional Latency | Poor user experience in distant regions | Geographic distribution with local processing |
| Maintenance Windows | Service downtime during updates | Rolling updates with zero-downtime deployment |
| Capacity Planning | Over/under-provisioning resources | Elastic scaling based on real-time demand |
Core Features
| Feature | Description |
|---|---|
| Automatic Service Discovery | 6 discovery methods for any infrastructure (K8s, Consul, etcd, DNS, UDP, mDNS) |
| Peer-to-Peer Architecture | No single point of failure with equal node participation |
| Gossip-Based State Sync | Real-time synchronization of traffic patterns and limits |
| Automatic Failover | Seamless traffic redistribution when nodes fail |
| Zero-Downtime Updates | Rolling deployments without service interruption |
Architecture
Peer-to-Peer Network Design
Bifrost clustering uses a peer-to-peer (P2P) network where all nodes are equal participants. Each node:
- Discovers peers automatically using the configured discovery method
- Synchronizes state via gossip protocol
- Shares traffic patterns and rate limits
- Handles failover automatically
Gossip Protocol
The gossip protocol ensures all nodes maintain a consistent view of:
- Traffic Patterns: Request volume, latency metrics, error rates
- Rate Limit States: Current usage counters for each provider/model
- Node Health: CPU, memory, network status of all peers
- Configuration Changes: Provider updates, routing rules, policies
Minimum Node Requirements
| Cluster Size | Fault Tolerance | Use Case |
|---|---|---|
| 3 nodes | 1 node failure | Small production deployments |
| 5 nodes | 2 node failures | Medium production deployments |
| 7+ nodes | 3+ node failures | Large enterprise deployments |
Configuration Basics
Core Configuration Structure
The new clustering configuration uses a `cluster_config` object with integrated service discovery:
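Below is a minimal sketch of what this might look like in `config.json`. The field names come from the tables in this section; the exact nesting of `cluster_config`, `discovery`, and `gossip` shown here is illustrative rather than authoritative, so check the reference schema for your Bifrost version:

```json
{
  "cluster_config": {
    "discovery": {
      "enabled": true,
      "type": "dns",
      "service_name": "bifrost-cluster",
      "bind_port": 10101,
      "dial_timeout": "10s",
      "dns_names": ["bifrost.internal.example.com"]
    },
    "gossip": {
      "port": 10101,
      "timeout_seconds": 10,
      "success_threshold": 3,
      "failure_threshold": 3
    }
  }
}
```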
Common Discovery Configuration Fields
All discovery methods support these common fields:
| Field | Type | Required | Description |
|---|---|---|---|
| `enabled` | boolean | Yes | Enable/disable discovery |
| `type` | string | Yes | Discovery type: `kubernetes`, `consul`, `etcd`, `dns`, `udp`, `mdns` |
| `service_name` | string | Yes | Service name for discovery |
| `bind_port` | integer | No | Port for cluster communication (default: 10101) |
| `dial_timeout` | duration | No | Discovery timeout (default: 10s) |
| `allowed_address_space` | array | No | CIDR ranges to filter discovered nodes (e.g., `["10.0.0.0/8"]`) |
Gossip Configuration
| Field | Description | Default |
|---|---|---|
| `port` | Gossip protocol port | 10101 |
| `timeout_seconds` | Health check timeout (seconds) | 10 |
| `success_threshold` | Successful checks to mark a node healthy | 3 |
| `failure_threshold` | Failed checks to mark a node unhealthy | 3 |
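As a rough tuning example, on a slow or lossy network you might relax the gossip timings so nodes are not marked unhealthy too aggressively. The values and nesting here are illustrative, under the same layout assumption as the sketch above:

```json
{
  "cluster_config": {
    "gossip": {
      "port": 10101,
      "timeout_seconds": 30,
      "success_threshold": 2,
      "failure_threshold": 5
    }
  }
}
```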
Service Discovery Methods
Bifrost supports 6 service discovery methods to fit any infrastructure. Choose based on your deployment environment:
- Kubernetes
- Consul
- etcd
- DNS
- UDP Broadcast
- mDNS
Kubernetes Discovery
Best for: Kubernetes deployments with StatefulSets or Deployments
Kubernetes discovery uses the Kubernetes API to automatically discover pods based on label selectors. This is the most common method for cloud-native deployments.
How It Works
- Each Bifrost pod queries the Kubernetes API for pods matching the label selector
- Discovers pod IPs automatically as pods scale up/down
- Works seamlessly with StatefulSets, Deployments, and DaemonSets
- No external dependencies required
Configuration
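A minimal Kubernetes discovery sketch, assuming the `cluster_config`/`discovery` layout from Configuration Basics; only the field names are taken from the parameter table below, and the values and nesting are illustrative:

```json
{
  "cluster_config": {
    "discovery": {
      "enabled": true,
      "type": "kubernetes",
      "service_name": "bifrost",
      "bind_port": 10101,
      "k8s_namespace": "production",
      "k8s_label_selector": "app=bifrost"
    }
  }
}
```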
Configuration Parameters
| Parameter | Required | Description | Example |
|---|---|---|---|
| `k8s_namespace` | No | Kubernetes namespace to search | `"default"`, `"production"` |
| `k8s_label_selector` | Yes | Label selector for pod discovery | `"app=bifrost"`, `"app=bifrost,env=prod"` |
Kubernetes Deployment Example
- StatefulSet
- Deployment
Troubleshooting
Pods not discovering each other
- Verify ServiceAccount has RBAC permissions to list pods
- Check label selector matches pod labels exactly
- Ensure namespace is correct (defaults to “default”)
- Verify gossip port (10101) is not blocked by NetworkPolicies
- Check logs for “error listing pods” messages
Permission denied errors
- Create ServiceAccount for Bifrost pods
- Create Role with `get`, `list`, `watch` permissions on pods
- Create RoleBinding linking ServiceAccount to Role
- Verify RBAC is enabled in cluster
Cluster forms but nodes show as unhealthy
- Verify gossip port (10101) is accessible between pods
- Check for NetworkPolicies blocking pod-to-pod communication
- Increase `timeout_seconds` in gossip config if the network is slow
- Verify pods are in Running state with `kubectl get pods`
Consul Discovery
Best for: Consul service mesh environments, multi-datacenter deployments
Consul discovery integrates with HashiCorp Consul for service registration and discovery. Ideal for environments already using Consul for service mesh or service discovery.
How It Works
- Each Bifrost node registers itself with Consul on startup
- Nodes query Consul to discover other Bifrost instances
- Consul performs health checks on each node
- Unhealthy nodes are automatically deregistered
- Supports multi-datacenter deployments
Configuration
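A minimal Consul discovery sketch under the same assumptions about the `cluster_config`/`discovery` layout; the field name follows the table below and the values are illustrative:

```json
{
  "cluster_config": {
    "discovery": {
      "enabled": true,
      "type": "consul",
      "service_name": "bifrost",
      "bind_port": 10101,
      "consul_address": "consul.service.consul:8500"
    }
  }
}
```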
Configuration Parameters
| Parameter | Required | Description | Example |
|---|---|---|---|
| `consul_address` | No | Consul agent address (default: `localhost:8500`) | `"localhost:8500"`, `"consul.service.consul:8500"` |
Docker Compose with Consul
Troubleshooting
Failed to register with Consul
- Verify Consul agent is accessible at configured address
- Check Consul agent logs for registration errors
- Ensure Consul ACL token has write permissions if ACLs enabled
- Verify network connectivity between Bifrost and Consul
- Check firewall rules allow connections to port 8500
Services registered but not discovered
- Verify `service_name` matches across all nodes
- Check Consul service health checks are passing
- Ensure gossip port is accessible between nodes
- Verify nodes are registered in correct datacenter
- Check for DNS resolution issues if using service DNS names
Health checks failing
- Verify gossip port (10101) is accessible
- Check Consul agent can reach node’s gossip port
- Increase health check timeout in Consul if needed
- Review Bifrost logs for startup errors
- Ensure nodes have correct IP addresses registered
etcd Discovery
Best for: etcd-based distributed systems, existing etcd infrastructure
etcd discovery uses etcd’s distributed key-value store for service registration and discovery. Perfect for environments already using etcd or requiring strong consistency.
How It Works
- Each Bifrost node registers itself in etcd with a lease
- Nodes maintain lease through keepalive messages
- Nodes query etcd prefix to discover other instances
- Failed nodes’ leases expire and are automatically removed
- Provides strongly consistent service registry
Configuration
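A minimal etcd discovery sketch under the same layout assumptions; field names follow the table below, while the endpoints and values are illustrative:

```json
{
  "cluster_config": {
    "discovery": {
      "enabled": true,
      "type": "etcd",
      "service_name": "bifrost",
      "bind_port": 10101,
      "dial_timeout": "10s",
      "etcd_endpoints": ["https://etcd1:2379", "https://etcd2:2379"]
    }
  }
}
```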
Configuration Parameters
| Parameter | Required | Description | Example |
|---|---|---|---|
| `etcd_endpoints` | Yes | Array of etcd endpoint URLs | `["http://localhost:2379"]`, `["https://etcd1:2379", "https://etcd2:2379"]` |
| `dial_timeout` | No | Connection timeout | `"10s"` (default), `"30s"` |
Each node registers under the key `/services/{service_name}/{node_id}` with a 30-second TTL lease.
Docker Compose with etcd
Troubleshooting
Failed to create etcd client
- Verify etcd endpoints are accessible
- Check URL format (http:// or https://)
- Ensure etcd cluster is healthy and running
- Verify network connectivity to etcd endpoints
- Check firewall rules allow connections to port 2379
- Increase `dial_timeout` if the network is slow
Failed to register with etcd
- Verify etcd cluster is accepting writes
- Check etcd cluster has available space
- Ensure authentication credentials are provided if etcd has auth enabled
- Review etcd logs for permission or quota errors
- Verify node can resolve etcd hostnames
Lease keepalive failures
- Check network stability between nodes and etcd
- Verify etcd cluster is not overloaded
- Monitor etcd metrics for high latency
- Increase lease TTL if network has high latency
- Check for etcd leader election issues
DNS Discovery
Best for: Traditional infrastructure, static node addresses, cloud DNS services
DNS discovery uses standard DNS resolution to discover cluster nodes. Works with any DNS server and is ideal for static deployments or cloud environments with DNS integration.
How It Works
- Configure DNS A records or SRV records for cluster nodes
- Bifrost queries DNS to resolve configured names
- All returned IP addresses are treated as potential cluster members
- Supports multiple DNS names for different node groups
- Works with internal DNS, cloud DNS, or public DNS
Configuration
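A minimal DNS discovery sketch under the same layout assumptions; the DNS name is a placeholder and the nesting is illustrative:

```json
{
  "cluster_config": {
    "discovery": {
      "enabled": true,
      "type": "dns",
      "service_name": "bifrost",
      "bind_port": 10101,
      "dns_names": ["bifrost-cluster.internal.example.com"]
    }
  }
}
```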
Configuration Parameters
| Parameter | Required | Description | Example |
|---|---|---|---|
| `dns_names` | Yes | Array of DNS names to resolve | `["bifrost.local"]`, `["node1.local", "node2.local", "node3.local"]` |
| `bind_port` | No | Port appended to discovered IPs | 10101 (default) |
Setup Examples
- Cloud DNS (AWS Route53)
- Kubernetes Headless Service
- Local DNS (dnsmasq)
Troubleshooting
DNS lookup errors
- Verify DNS names are resolvable: `nslookup bifrost-cluster.local`
- Check DNS server is accessible from Bifrost nodes
- Verify `/etc/resolv.conf` has the correct nameserver
- Test DNS resolution from inside the container if using Docker
- Check for DNS caching issues (try flushing DNS cache)
No nodes discovered via DNS
- Verify DNS returns multiple A records (not CNAME)
- Check that returned IPs are correct and reachable
- Ensure `bind_port` matches the actual gossip port on nodes
- Verify nodes are listening on the returned IP addresses
- Use `dig` or `nslookup` to verify the DNS response format
Nodes discovered but can't connect
- Verify gossip port (10101) is open on all nodes
- Check firewall rules between nodes
- Ensure nodes are listening on correct network interface
- Verify IP addresses match node’s actual network addresses
- Test connectivity: `telnet <ip> 10101`
UDP Broadcast Discovery
Best for: Local network deployments, on-premise infrastructure, development clusters
UDP broadcast discovery automatically finds nodes on the same local network using broadcast packets. No external dependencies required.
How It Works
- Nodes broadcast UDP discovery beacons on configured port
- Other nodes on the same network respond with acknowledgments
- Nodes discover each other’s IP addresses automatically
- Limited to nodes on the same broadcast domain (subnet)
- Requires `allowed_address_space` for security
Configuration
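A minimal UDP broadcast discovery sketch under the same layout assumptions; the port and CIDR range are illustrative and should match your own subnet:

```json
{
  "cluster_config": {
    "discovery": {
      "enabled": true,
      "type": "udp",
      "service_name": "bifrost",
      "udp_broadcast_port": 9999,
      "allowed_address_space": ["192.168.1.0/24"],
      "dial_timeout": "10s"
    }
  }
}
```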
Configuration Parameters
| Parameter | Required | Description | Example |
|---|---|---|---|
| `udp_broadcast_port` | Yes | Port for broadcast discovery | 9999, 8888 |
| `allowed_address_space` | Yes | CIDR ranges to limit discovery scope | `["192.168.1.0/24"]`, `["10.0.0.0/8", "172.16.0.0/12"]` |
| `dial_timeout` | No | Time to wait for responses | `"10s"` (default) |
Docker Compose Example
Use `network_mode: bridge` (default) or `host` for UDP broadcast. Custom networks may not support broadcast.
Troubleshooting
No nodes discovered via UDP broadcast
- Verify `allowed_address_space` includes node IP addresses
- Check the UDP broadcast port is open (firewall/security groups)
- Ensure nodes are on same subnet/broadcast domain
- Verify broadcast is enabled on network interface
- Test with `tcpdump -i any -n udp port 9999`
- Check Docker network mode supports broadcast (use bridge or host)
Address space filtering issues
- Verify CIDR notation is correct (e.g., `192.168.1.0/24`)
- Ensure `allowed_address_space` covers all node IPs
- Check node IP addresses: `ip addr` or `ifconfig`
- Remember to use the network address, not a host address
- Test CIDR matches online or with `ipcalc`
Permission denied on UDP port
- Check if another process is using the UDP broadcast port
- Verify port number is > 1024 (non-privileged) or run as root
- Use `netstat -tulpn | grep 9999` to check port usage
- Change `udp_broadcast_port` to a different value
- Ensure the firewall isn’t blocking UDP on that port
mDNS Discovery
Best for: Local development, testing, zero-configuration setups
mDNS (Multicast DNS) provides zero-configuration service discovery on local networks. Perfect for development and testing without requiring any infrastructure setup.
How It Works
- Nodes advertise themselves via mDNS (Bonjour/Avahi)
- Other nodes browse for mDNS services
- Automatic discovery within the same local network
- No DNS server or configuration required
- Limited to local network segment
Configuration
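A minimal mDNS discovery sketch under the same layout assumptions; the mDNS-specific fields are optional, so this mostly spells out the defaults from the table below:

```json
{
  "cluster_config": {
    "discovery": {
      "enabled": true,
      "type": "mdns",
      "service_name": "bifrost",
      "mdns_service": "_bifrost._tcp",
      "dial_timeout": "10s"
    }
  }
}
```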
Configuration Parameters
| Parameter | Required | Description | Example |
|---|---|---|---|
| `mdns_service` | No | mDNS service type | `"_bifrost._tcp"` (default), `"_myapp._tcp"` |
| `dial_timeout` | No | Time to wait for mDNS responses | `"10s"` (default) |
Local Development Example
Troubleshooting
mDNS services not discovered
- Verify mDNS is enabled on network (check firewall)
- Ensure multicast is enabled on network interface
- Check nodes are on same local network segment
- Verify mDNS port 5353 is not blocked
- Test mDNS resolution: `avahi-browse -a` (Linux) or `dns-sd -B` (macOS)
- Increase `dial_timeout` if discovery is slow
Network address validation errors
- This is normal - mDNS returns network/broadcast addresses
- mDNS automatically filters invalid addresses (127.x.x.x, *.0, *.255)
- Check that nodes have valid non-loopback IP addresses
- Ensure nodes are not using 127.0.0.1 for binding
- Verify network interface has proper IP configuration
Discovery works but cluster unstable
- mDNS is eventually consistent; allow time for propagation
- Check gossip port accessibility between nodes
- Verify network doesn’t drop multicast packets
- Consider using a more robust discovery method for production
- Check for network congestion or packet loss
Deployment Patterns
Docker Compose Deployment
Complete example using Kubernetes-style discovery with a shared config store:
Kubernetes Production Deployment
Production-ready Kubernetes deployment with StatefulSet:
Bare Metal / VM Deployment
For bare metal or VM deployments using systemd:
Step 1: Install Bifrost on each node
Troubleshooting
General Clustering Issues
Cluster forms but only has 1 member
- Discovery not configured: Verify `discovery.enabled: true` and `discovery.type` is set
- Service name mismatch: Ensure all nodes have an identical `service_name`
- Gossip port blocked: Check firewall allows TCP port 10101 between nodes
- Discovery method issues: See method-specific troubleshooting above
- Network isolation: Verify nodes can reach each other on gossip port
Split brain - nodes form separate clusters
- Network partition: Check network connectivity between all nodes
- Different discovery configs: Ensure all nodes use same discovery settings
- Firewall blocking gossip: Verify bidirectional connectivity on port 10101
- Discovery scoped incorrectly: Check label selectors, DNS names, or address spaces
- Restart all nodes: Sometimes requires simultaneous restart to reform cluster
High memory usage in cluster
- Large gossip messages: Check size of gossiped data
- Too many nodes: Optimize for clusters with 3-7 nodes typically
- Message deduplication cache: This is normal, cache TTL is 2 minutes
- Increase node resources: Ensure adequate memory allocation
Cluster unstable - nodes flapping
- Network instability: Check for packet loss or high latency
- Resource constraints: Ensure nodes have adequate CPU/memory
- Timeout too aggressive: Increase `timeout_seconds` in gossip config
- Health check failures: Review liveness probe configuration
- Discovery intervals: Check discovery isn’t running too frequently
Cannot broadcast messages to cluster
- Queue not initialized: Check logs for initialization errors
- No active members: Verify cluster has multiple healthy members
- Gossip port unreachable: Test connectivity between all nodes
- Message too large: Check size of broadcast messages
Debug Logging
Enable debug logging to troubleshoot cluster issues.
Health Check Endpoints
Monitor cluster health via HTTP endpoints.
This clustering implementation ensures Bifrost can handle enterprise-scale deployments with high availability, automatic service discovery, and intelligent traffic distribution across any infrastructure.

