🚅 LiteLLM
Call all LLM APIs using the OpenAI format [Bedrock, Huggingface, VertexAI, TogetherAI, Azure, OpenAI, Groq etc.]
LiteLLM Proxy Server (LLM Gateway) | Hosted Proxy (Preview) | Enterprise Tier
LiteLLM manages:
- Translate inputs to the provider's `completion`, `embedding`, and `image_generation` endpoints
- Consistent output: text responses will always be available at `['choices'][0]['message']['content']`
- Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - Router
- Set budgets & rate limits per project, API key, and model - LiteLLM Proxy Server (LLM Gateway)
Jump to LiteLLM Proxy (LLM Gateway) Docs
Jump to Supported LLM Providers
🚨 Stable Release: Use docker images with the `-stable` tag. These have undergone 12-hour load tests before being published. More information about the release cycle here
Support for more providers. Missing a provider or LLM platform? Raise a feature request.
Usage (Docs)
[!IMPORTANT]
LiteLLM v1.0.0 now requires `openai>=1.0.0`. Migration guide here.
LiteLLM v1.40.14+ now requires `pydantic>=2.0.0`. No changes required.
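A minimal sketch of the basic flow, assuming `pip install litellm` has been run and an OpenAI key is set (the model name is just an example; see the docs linked above for the exact snippets):

```python
from litellm import completion
import os

# set ENV variables for the provider you want to call
os.environ["OPENAI_API_KEY"] = "your-openai-key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="openai/gpt-4o", messages=messages)
print(response)
```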
Response (OpenAI Format)
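The response follows the OpenAI chat-completion schema; an illustrative example (all field values are made up):

```json
{
    "id": "chatcmpl-1214900a-6cdd-4148-b663-b5e2f642b4de",
    "created": 1734366691,
    "model": "gpt-4o-2024-08-06",
    "object": "chat.completion",
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {
                "content": "Hello! I'm doing well, thank you for asking. How can I help you today?",
                "role": "assistant"
            }
        }
    ],
    "usage": {
        "completion_tokens": 20,
        "prompt_tokens": 13,
        "total_tokens": 33
    }
}
```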
Call any model supported by a provider, with model=<provider_name>/<model_name>. There may be provider-specific details, so refer to the provider docs for more information.
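For instance, routing the same call to a different provider only changes the model string (the model name below is illustrative; check the provider docs for the exact names and required credentials):

```python
import os
from litellm import completion

os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-key"

# "<provider_name>/<model_name>" routes the request to that provider
response = completion(
    model="anthropic/claude-3-haiku-20240307",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
```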
Async (Docs)
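A minimal async sketch using litellm's acompletion (model name is just an example):

```python
from litellm import acompletion
import asyncio

async def test_get_response():
    user_message = "Hello, how are you?"
    messages = [{"content": user_message, "role": "user"}]
    # acompletion is the async counterpart of completion
    response = await acompletion(model="openai/gpt-4o", messages=messages)
    return response

response = asyncio.run(test_get_response())
print(response)
```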
Streaming (Docs)
LiteLLM supports streaming the model response back. Pass `stream=True` to get a streaming iterator in the response.
Streaming is supported for all models (Bedrock, Huggingface, TogetherAI, Azure, OpenAI, etc.)
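A minimal streaming sketch (model name is just an example):

```python
from litellm import completion

messages = [{"content": "Hello, how are you?", "role": "user"}]

# stream=True returns an iterator of chunks instead of a single response
response = completion(model="openai/gpt-4o", messages=messages, stream=True)
for part in response:
    print(part.choices[0].delta.content or "", end="")
```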
Response chunk (OpenAI Format)
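Each chunk follows the OpenAI streaming schema; an illustrative chunk (field values are made up):

```json
{
    "id": "chatcmpl-2be06597-eb60-4c70-9ec5-8cd2ab1b4697",
    "object": "chat.completion.chunk",
    "created": 1734366925,
    "model": "gpt-4o",
    "choices": [
        {
            "index": 0,
            "delta": {"content": "Hello"},
            "finish_reason": null
        }
    ]
}
```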
Logging Observability (Docs)
LiteLLM exposes pre-defined callbacks to send data to Lunary, MLflow, Langfuse, DynamoDB, S3 buckets, Helicone, Promptlayer, Traceloop, Athina, and Slack
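A minimal sketch of wiring up callbacks, assuming credentials for the logging tools are set (the env var names below are illustrative; check each tool's docs for the exact variables):

```python
import os
import litellm
from litellm import completion

# credentials for the logging tools you use (illustrative names)
os.environ["LUNARY_PUBLIC_KEY"] = "your-lunary-public-key"
os.environ["LANGFUSE_PUBLIC_KEY"] = "your-langfuse-public-key"
os.environ["LANGFUSE_SECRET_KEY"] = "your-langfuse-secret-key"
os.environ["OPENAI_API_KEY"] = "your-openai-key"

# log successful calls to lunary and langfuse
litellm.success_callback = ["lunary", "langfuse"]

response = completion(model="openai/gpt-4o", messages=[{"role": "user", "content": "Hi"}])
```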
LiteLLM Proxy Server (LLM Gateway) - (Docs)
Track spend + Load Balance across multiple projects
The proxy provides spend tracking, budgets, and rate limits per project, API key, and model.
Proxy Endpoints - Swagger Docs
Quick Start Proxy - CLI
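Install the proxy extras first:

```shell
pip install 'litellm[proxy]'
```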
Step 1: Start litellm proxy
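For example (the model name is just an example; the proxy listens on port 4000 by default):

```shell
$ litellm --model huggingface/bigcode/starcoder

#INFO: Proxy running on http://0.0.0.0:4000
```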
Step 2: Make ChatCompletions Request to Proxy
[!IMPORTANT] 💡 Use LiteLLM Proxy with Langchain (Python, JS), OpenAI SDK (Python, JS), Anthropic SDK, Mistral SDK, LlamaIndex, Instructor, Curl
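A minimal sketch using the OpenAI Python SDK pointed at the proxy (the URL assumes the default port from Step 1):

```python
import openai  # openai v1.0.0+

# point the OpenAI client at the LiteLLM proxy
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000")

# request is routed to the model configured on the proxy
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "this is a test request, write a short poem"}],
)
print(response)
```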
Proxy Key Management (Docs)
Connect the proxy with a Postgres DB to create proxy keys
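One way to wire this up is to point the proxy at a Postgres database and set a master key before starting it (the connection string, key, and model below are placeholders; see the docs for the full setup):

```shell
# Postgres DB used to store proxy keys
export DATABASE_URL="postgresql://user:password@localhost:5432/litellm"

# master key used to authorize /key/generate and the UI
export LITELLM_MASTER_KEY="sk-1234"

litellm --model gpt-3.5-turbo
```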
UI on /ui on your proxy server
Set budgets and rate limits across multiple projects
POST /key/generate
Request
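An illustrative request (the master key, model list, and metadata are placeholders):

```shell
curl 'http://0.0.0.0:4000/key/generate' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data-raw '{"models": ["gpt-3.5-turbo", "gpt-4", "claude-2"], "duration": "20d", "metadata": {"user": "ishaan@berri.ai"}}'
```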
Expected Response
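Something along these lines (values are illustrative):

```json
{
    "key": "sk-kdEXbIqZRwEeEiHwdg7sFA",
    "expires": "2023-11-19T01:38:25.838000+00:00"
}
```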
Supported Providers (Docs)
| Provider | Completion | Streaming | Async Completion | Async Streaming | Async Embedding | Async Image Generation |
|---|---|---|---|---|---|---|
| openai | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| azure | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| AI/ML API | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| aws - sagemaker | ✅ | ✅ | ✅ | ✅ | ✅ | |
| aws - bedrock | ✅ | ✅ | ✅ | ✅ | ✅ | |
| google - vertex_ai | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| google - palm | ✅ | ✅ | ✅ | ✅ | | |
| google AI Studio - gemini | ✅ | ✅ | ✅ | ✅ | | |
| mistral ai api | ✅ | ✅ | ✅ | ✅ | ✅ | |
| cloudflare AI Workers | ✅ | ✅ | ✅ | ✅ | | |
| cohere | ✅ | ✅ | ✅ | ✅ | ✅ | |
| anthropic | ✅ | ✅ | ✅ | ✅ | | |
| empower | ✅ | ✅ | ✅ | ✅ | | |
| huggingface | ✅ | ✅ | ✅ | ✅ | ✅ | |
| replicate | ✅ | ✅ | ✅ | ✅ | | |
| together_ai | ✅ | ✅ | ✅ | ✅ | | |
| openrouter | ✅ | ✅ | ✅ | ✅ | | |
| ai21 | ✅ | ✅ | ✅ | ✅ | | |
| baseten | ✅ | ✅ | ✅ | ✅ | | |
| vllm | ✅ | ✅ | ✅ | ✅ | | |
| nlp_cloud | ✅ | ✅ | ✅ | ✅ | | |
| aleph alpha | ✅ | ✅ | ✅ | ✅ | | |
| petals | ✅ | ✅ | ✅ | ✅ | | |
| ollama | ✅ | ✅ | ✅ | ✅ | ✅ | |
| deepinfra | ✅ | ✅ | ✅ | ✅ | | |
| perplexity-ai | ✅ | ✅ | ✅ | ✅ | | |
| Groq AI | ✅ | ✅ | ✅ | ✅ | | |
| Deepseek | ✅ | ✅ | ✅ | ✅ | | |
| anyscale | ✅ | ✅ | ✅ | ✅ | | |
| IBM - watsonx.ai | ✅ | ✅ | ✅ | ✅ | ✅ | |
| voyage ai | | | | | ✅ | |
| xinference [Xorbits Inference] | | | | | ✅ | |
| FriendliAI | ✅ | ✅ | ✅ | ✅ | | |
| Galadriel | ✅ | ✅ | ✅ | ✅ | | |
Contributing
Interested in contributing? Contributions to the LiteLLM Python SDK, the Proxy Server, and LLM integrations are all accepted and highly encouraged! See our Contribution Guide for more details.
Enterprise
For companies that need better security, user management and professional support
This covers:
- ✅ Features under the LiteLLM Commercial License
- ✅ Feature Prioritization
- ✅ Custom Integrations
- ✅ Professional Support - Dedicated Discord + Slack
- ✅ Custom SLAs
- ✅ Secure access with Single Sign-On
Code Quality / Linting
LiteLLM follows the Google Python Style Guide.
We run:
- Ruff for formatting and linting checks
- Mypy + Pyright for typing
- Black for formatting
- isort for import sorting
If you have suggestions on how to improve the code quality feel free to open an issue or a PR.
Support / talk with founders
- Schedule Demo 👋
- Community Discord 💭
- Our numbers 📞 +1 (770) 8783-106 / +1 (412) 618-6238
- Our emails ✉️ ishaan@berri.ai / krrish@berri.ai
Why did we build this
- Need for simplicity: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI and Cohere.
Contributors
Run in Developer mode
Services
- Setup .env file in root
- Run dependent services: docker-compose up db prometheus
Backend
- (In root) create virtual environment: python -m venv .venv
- Activate virtual environment: source .venv/bin/activate
- Install dependencies: pip install -e ".[all]"
- Start proxy backend: uvicorn litellm.proxy.proxy_server:app --host localhost --port 4000 --reload
Frontend
- Navigate to ui/litellm-dashboard
- Install dependencies: npm install
- Run npm run dev to start the dashboard