Overview
Firebender can integrate with your LiteLLM deployment in two ways:
- Model discovery: Firebender reads available models from LiteLLM
- Request routing: Firebender sends chat completions through LiteLLM for models marked with provider: 'lite-llm'
Recommended LiteLLM setup
1. Expose model discovery endpoints
Firebender checks LiteLLM model metadata using:
- /model/info
- /v1/models as a fallback
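The fallback behavior can be sketched as follows. This is a minimal illustration, not Firebender's actual implementation; the base URL and key are placeholders for your own deployment's values:

```python
# Sketch of model discovery with fallback. Assumes a LiteLLM proxy that
# exposes /model/info and/or the OpenAI-compatible /v1/models endpoint.
import json
import urllib.request

def discover_models(base_url: str, api_key: str) -> list:
    """Try /model/info first, then fall back to /v1/models."""
    headers = {"Authorization": f"Bearer {api_key}"}
    last_error = None
    for path in ("/model/info", "/v1/models"):
        req = urllib.request.Request(base_url + path, headers=headers)
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                # Both endpoints wrap their results in a "data" array.
                return json.load(resp)["data"]
        except Exception as exc:  # endpoint unavailable: try the next one
            last_error = exc
    raise RuntimeError(f"model discovery failed: {last_error}")
```

Both endpoints return their model list under a `data` key, so the same parsing works for either.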
2. Create a discovery/admin key
Use a LiteLLM key that can read model metadata. Firebender uses this key to discover which models your deployment exposes.
3. Create virtual keys for request auth
Firebender requires a LiteLLM virtual key for actual request execution. That means:
- the discovery/admin key alone is not enough to send requests
- users who route through LiteLLM need a valid virtual key configured in Firebender
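In practice, requests go out with the virtual key rather than the discovery key. A minimal sketch using Python's standard library follows; the base URL, key, and model name are placeholder assumptions:

```python
# Sketch: routing a chat completion through LiteLLM's OpenAI-compatible
# endpoint. The Authorization header must carry a virtual key; the
# discovery/admin key alone is not enough for request execution.
import json
import urllib.request

def chat_completion(base_url: str, virtual_key: str, model: str, prompt: str) -> str:
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    req = urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {virtual_key}",  # LiteLLM virtual key
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```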
Example LiteLLM alias
A common pattern is to expose a friendly alias from LiteLLM and map it to a real upstream provider model.
- Firebender can show bedrock-claude-sonnet-4-6 in its merged model list
- LiteLLM still forwards requests to the real upstream Bedrock Anthropic model
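One way to define such an alias in LiteLLM's config.yaml is sketched below; the upstream model ID is a placeholder, so substitute your actual Bedrock model identifier:

```yaml
model_list:
  - model_name: bedrock-claude-sonnet-4-6      # alias Firebender will display
    litellm_params:
      model: bedrock/<your-bedrock-model-id>   # real upstream model (placeholder)
```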
What Firebender expects from LiteLLM metadata
Firebender uses LiteLLM model metadata to build merged model definitions, including fields like:
- model alias / ID
- reasoning support
- vision support
- PDF input support
- token limits
- pricing metadata when available
Firebender-side configuration
For the Firebender steps, see Firebender on LiteLLM.
Troubleshooting
/model/info is unavailable
That is fine as long as /v1/models works: Firebender falls back to it automatically.