🚅 LiteLLM
Call 100+ LLMs in OpenAI format. [Bedrock, Azure, OpenAI, VertexAI, Anthropic, Groq, etc.]
LiteLLM Proxy Server (AI Gateway) | Hosted Proxy | Enterprise Tier
Use LiteLLM for
LLMs - Call 100+ LLMs (Python SDK + AI Gateway)
All Supported Endpoints - /chat/completions, /responses, /embeddings, /images, /audio, /batches, /rerank, /a2a, /messages and more.
Python SDK
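The unified call format can be sketched as follows. This is a minimal illustration, not the full SDK surface: the model name is a placeholder, and the live call is skipped unless an `OPENAI_API_KEY` is configured in the environment.

```python
# Minimal sketch of the Python SDK's unified, OpenAI-format interface.
# Assumes `pip install litellm` and a provider key in the environment.
import os

# OpenAI-format messages work unchanged across providers.
messages = [{"role": "user", "content": "Hello, how are you?"}]

# Only attempt a live call when a key is actually configured.
if os.environ.get("OPENAI_API_KEY"):
    from litellm import completion

    # Swap the model string (e.g. "anthropic/claude-3-5-sonnet-20240620")
    # to route the same request to a different provider.
    response = completion(model="openai/gpt-4o-mini", messages=messages)
    print(response.choices[0].message.content)
```

Because the request and response follow the OpenAI schema regardless of provider, switching providers is a one-line change to the model string.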
AI Gateway (Proxy Server)
Getting Started - E2E Tutorial - Setup virtual keys, make your first request
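A minimal gateway setup revolves around a `config.yaml` listing the models the proxy should serve. The deployments and keys below are placeholders; substitute your own:

```yaml
# Sketch of a minimal proxy config (config.yaml); model names and keys
# are placeholders — substitute your own deployments.
model_list:
  - model_name: gpt-4o                # name clients will request
    litellm_params:
      model: openai/gpt-4o            # provider/model it routes to
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY
```

Start the gateway with `litellm --config config.yaml` and point any OpenAI-compatible client at `http://localhost:4000`.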
Agents - Invoke A2A Agents (Python SDK + AI Gateway)
Supported Providers - LangGraph, Vertex AI Agent Engine, Azure AI Foundry, Bedrock AgentCore, Pydantic AI
Python SDK - A2A Protocol
AI Gateway (Proxy Server)
Step 1. Add your Agent to the AI Gateway
Step 2. Call Agent via A2A SDK
MCP Tools - Connect MCP servers to any LLM (Python SDK + AI Gateway)
Python SDK - MCP Bridge
AI Gateway - MCP Gateway
Step 1. Add your MCP Server to the AI Gateway
Step 2. Call MCP tools via /chat/completions
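The two steps above amount to a plain HTTP request against the gateway's `/chat/completions` endpoint. Treat this as a hypothetical sketch: the gateway URL, the virtual key, and especially the field names inside the `tools` entry are illustrative placeholders, not LiteLLM's documented MCP schema — check the MCP Gateway docs for the real shape.

```python
# Hypothetical sketch of calling MCP tools through the gateway's
# /chat/completions endpoint. The `tools` entry's field names are
# placeholders, not LiteLLM's documented schema.
import json
import os
import urllib.request

payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "List my open tickets"}],
    # Placeholder shape for referencing an MCP server registered on the gateway.
    "tools": [{"type": "mcp", "server_name": "my_mcp_server"}],
}

# Only attempt a live call when a gateway URL is configured.
if os.environ.get("LITELLM_PROXY_URL"):
    req = urllib.request.Request(
        os.environ["LITELLM_PROXY_URL"] + "/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": "Bearer " + os.environ.get("LITELLM_API_KEY", ""),
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))
```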
Use with Cursor IDE
How to use LiteLLM
You can use LiteLLM through either the Proxy Server or the Python SDK. Both give you a unified interface to 100+ LLMs. Choose the option that best fits your needs:
| | LiteLLM AI Gateway | LiteLLM Python SDK |
|---|---|---|
| Use Case | Central service (LLM gateway) to access multiple LLMs | Use LiteLLM directly in your Python code |
| Who Uses It? | Gen AI enablement / ML platform teams | Developers building LLM projects |
| Key Features | Centralized API gateway with authentication and authorization; multi-tenant cost tracking and spend management per project/user; per-project customization (logging, guardrails, caching); virtual keys for secure access control; admin dashboard UI for monitoring and management | Direct Python library integration in your codebase; Router with retry/fallback logic across multiple deployments (e.g. Azure/OpenAI); application-level load balancing and cost tracking; exception handling with OpenAI-compatible errors; observability callbacks (Lunary, MLflow, Langfuse, etc.) |
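The SDK-side Router behavior can be sketched as follows. Deployment names, endpoints, and keys are placeholders, and the live call is skipped unless both provider keys are configured:

```python
# Sketch of the SDK Router: two deployments behind one logical model
# name, load-balanced with retries. Deployment names and endpoints
# are placeholders.
import os

model_list = [
    {
        "model_name": "gpt-4o",  # logical name callers use
        "litellm_params": {
            "model": "azure/my-gpt4o-deployment",               # placeholder
            "api_key": os.environ.get("AZURE_API_KEY", ""),
            "api_base": "https://example.openai.azure.com",     # placeholder
        },
    },
    {
        "model_name": "gpt-4o",
        "litellm_params": {
            "model": "openai/gpt-4o",
            "api_key": os.environ.get("OPENAI_API_KEY", ""),
        },
    },
]

# Only attempt a live call when both deployments have keys configured.
if os.environ.get("AZURE_API_KEY") and os.environ.get("OPENAI_API_KEY"):
    from litellm import Router

    router = Router(model_list=model_list, num_retries=2)
    resp = router.completion(
        model="gpt-4o",
        messages=[{"role": "user", "content": "ping"}],
    )
    print(resp.choices[0].message.content)
```

Callers always request the logical name (`gpt-4o`); the Router picks a deployment and falls back to the other on failure.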
LiteLLM Performance: 8ms P95 latency at 1k RPS (See benchmarks here)
Jump to LiteLLM Proxy (LLM Gateway) Docs
Jump to Supported LLM Providers
Stable Release: Use Docker images with the -stable tag. These undergo 12-hour load tests before being published. More information about the release cycle here
Support for more providers: missing a provider or LLM platform? Raise a feature request.
OSS Adopters
Netflix
Supported Providers (Website Supported Models | Docs)
Run in Developer mode
Services
- Set up a .env file in the repo root
- Run dependent services:

```shell
docker-compose up db prometheus
```
Backend
- (In root) create and activate a virtual environment:

```shell
python -m venv .venv
source .venv/bin/activate
```

- Install dependencies:

```shell
pip install -e ".[all]"
pip install prisma
prisma generate
```

- Start the proxy backend:

```shell
python litellm/proxy/proxy_cli.py
```
Frontend
- Navigate to `ui/litellm-dashboard`
- Install dependencies:

```shell
npm install
```

- Run `npm run dev` to start the dashboard
Enterprise
For companies that need better security, user management and professional support
This covers:
- ✅ Features under the LiteLLM Commercial License:
- ✅ Feature Prioritization
- ✅ Custom Integrations
- ✅ Professional Support - Dedicated discord + slack
- ✅ Custom SLAs
- ✅ Secure access with Single Sign-On
Contributing
We welcome contributions to LiteLLM! Whether you’re fixing bugs, adding features, or improving documentation, we appreciate your help.
Quick Start for Contributors
This requires poetry to be installed.
For detailed contributing guidelines, see CONTRIBUTING.md.
Code Quality / Linting
LiteLLM follows the Google Python Style Guide.
Our automated checks include:
- Black for code formatting
- Ruff for linting and code quality
- MyPy for type checking
- Circular import detection
- Import safety checks
All these checks must pass before your PR can be merged.
Support / talk with founders
- Schedule Demo 👋
- Community Discord 💭
- Community Slack 💭
- Our emails ✉️ ishaan@berri.ai / krrish@berri.ai
Why did we build this
- Need for simplicity: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI and Cohere.