# simstudioai/sim
Build and deploy AI agent workflows in minutes.
- **Build Workflows with Ease**: Design agent workflows visually on a canvas—connect agents, tools, and blocks, then run them instantly.
- **Supercharge with Copilot**: Leverage Copilot to generate nodes, fix errors, and iterate on flows directly from natural language.
- **Integrate Vector Databases**: Upload documents to a vector store and let agents answer questions grounded in your specific content.
## Quickstart
### Cloud-hosted: sim.ai
### Self-hosted: NPM Package

Run Sim locally with a single command, then open http://localhost:3000.
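A minimal sketch of the NPM quickstart; the package name `simstudio` is an assumption, and the port flag comes from the Options table below:

```shell
# Start Sim via the published CLI package (package name assumed)
npx simstudio

# Optionally pin the port with the documented flag
npx simstudio --port 3000
```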
> **Note:** Docker must be installed and running on your machine.
#### Options

| Flag | Description |
|---|---|
| `-p, --port <port>` | Port to run Sim on (default `3000`) |
| `--no-pull` | Skip pulling latest Docker images |
### Self-hosted: Docker Compose

Clone the repository, start the stack with Docker Compose, and access the application at http://localhost:3000/.
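A sketch of the typical flow; the repository URL is inferred from the project slug and the compose file name is an assumption:

```shell
# Clone the repository (URL assumed from the project slug)
git clone https://github.com/simstudioai/sim.git
cd sim

# Start the stack in the background (compose file name assumed)
docker compose -f docker-compose.prod.yml up -d
```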
### Using Local Models with Ollama
Run Sim with local AI models using Ollama, with no external APIs required.
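A sketch of bringing the stack up with local models; the Ollama-specific compose file name is an assumption:

```shell
# Start Sim together with a bundled Ollama service (file name assumed)
docker compose -f docker-compose.ollama.yml up -d
```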
Wait for the model to download, then visit http://localhost:3000. You can pull additional models into the running Ollama container at any time.
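Pulling an extra model can be done through the Ollama container; the compose file, service name, and model tag below are examples:

```shell
# Pull another model into the running Ollama service
# (file, service, and model names are assumptions)
docker compose -f docker-compose.ollama.yml exec ollama ollama pull llama3.1:8b
```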
#### Using an External Ollama Instance
If you already have Ollama running on your host machine (outside Docker), configure OLLAMA_URL to use host.docker.internal instead of localhost.
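For example, in apps/sim/.env (or your compose environment), using the default Ollama port from the Environment Variables table:

```shell
# Point Sim at the host's Ollama instance instead of the container's localhost
OLLAMA_URL=http://host.docker.internal:11434
```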
Why? When running inside Docker, localhost refers to the container itself, not your host machine. host.docker.internal is a special DNS name that resolves to the host.
For Linux users, you can either:

- Use your host machine's actual IP address (e.g., http://192.168.1.100:11434)
- Add `extra_hosts: ["host.docker.internal:host-gateway"]` to the simstudio service in your compose file
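On Linux, the second option looks like this in a compose file (the surrounding service layout is assumed):

```yaml
services:
  simstudio:
    # Maps host.docker.internal to the Linux host's gateway IP
    extra_hosts:
      - "host.docker.internal:host-gateway"
```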
### Using vLLM
Sim also supports vLLM for self-hosted models served through an OpenAI-compatible API.
When running with Docker, use host.docker.internal if vLLM is on your host machine (same as Ollama above).
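For example, in apps/sim/.env; 8000 is vLLM's default serve port, and the host.docker.internal form applies when Sim runs in Docker:

```shell
# Point Sim at a vLLM server on the host (8000 is vLLM's default port)
VLLM_BASE_URL=http://host.docker.internal:8000
```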
### Self-hosted: Dev Containers

- Open VS Code with the Remote - Containers extension
- Open the project and click "Reopen in Container" when prompted
- Run `bun run dev:full` in the terminal, or use the `sim-start` alias; this starts both the main application and the realtime socket server
### Self-hosted: Manual Setup
Requirements:
- Bun runtime
- Node.js v20+ (required for sandboxed code execution)
- PostgreSQL 12+ with pgvector extension (required for AI embeddings)
> **Note:** Sim uses vector embeddings for AI features like knowledge bases and semantic search, which requires the pgvector PostgreSQL extension.
- Clone and install dependencies.
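A sketch of this step; the repository URL is inferred from the project slug, and Bun is the project's runtime (see Tech Stack):

```shell
# Clone the repository and install workspace dependencies
git clone https://github.com/simstudioai/sim.git
cd sim
bun install
```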
- Set up PostgreSQL with pgvector:
You need PostgreSQL with the vector extension for embedding support. Choose one option:
**Option A: Using Docker (Recommended)**
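A sketch using the community pgvector image; the container name, credentials, and database name are example values:

```shell
# Start Postgres with the pgvector extension preinstalled
# (name, password, and database are example values)
docker run -d --name sim-postgres \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_DB=sim \
  -p 5432:5432 \
  pgvector/pgvector:pg16
```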
**Option B: Manual Installation**
- Install PostgreSQL 12+ and the pgvector extension
- See pgvector installation guide
- Set up environment by copying apps/sim/.env.example to apps/sim/.env.
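The Environment Variables section references apps/sim/.env.example, so this step is presumably:

```shell
cd apps/sim
cp .env.example .env
# then edit .env with your values
```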
Update your .env file with the database URL.
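For example, matching the Docker Postgres sketch above (credentials and database name are assumptions; adjust to your setup):

```shell
# Connection string for the local pgvector-enabled Postgres
DATABASE_URL="postgresql://postgres:postgres@localhost:5432/sim"
```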
- Set up the database:
First, configure the database package environment.
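Presumably this mirrors the app env setup; whether packages/db ships an example file is an assumption:

```shell
cd packages/db
cp .env.example .env   # assuming an example file is provided
```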
Update your packages/db/.env file with the database URL.
Then run the migrations.
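Sim's database layer uses Drizzle ORM (see Tech Stack), so the migration step presumably goes through drizzle-kit; the exact command is an assumption, so check the scripts in packages/db:

```shell
# From packages/db; command assumed from the Drizzle ORM toolchain
bunx drizzle-kit migrate
```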
- Start the development servers:
Recommended approach: run both servers together from the project root.
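The Dev Containers section above names this script, so the step is:

```shell
# From the project root; starts the Next.js app and the realtime socket server
bun run dev:full
```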
This starts both the main Next.js application and the realtime socket server required for full functionality.
Alternative: run the servers separately.

Start the Next.js app from the project root.
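A sketch, assuming the default dev script name:

```shell
# From the project root (script name assumed)
bun run dev
```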
Then start the realtime socket server from the apps/sim directory in a separate terminal.
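A sketch; the socket-server script name is an assumption, so check apps/sim/package.json:

```shell
cd apps/sim
bun run dev:sockets   # script name assumed
```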
## Copilot API Keys
Copilot is a Sim-managed service. To use Copilot on a self-hosted instance:
- Go to https://sim.ai → Settings → Copilot and generate a Copilot API key
- Set the `COPILOT_API_KEY` environment variable in your self-hosted apps/sim/.env file to that value
## Environment Variables
Key environment variables for self-hosted deployments (see apps/sim/.env.example for full list):
| Variable | Required | Description |
|---|---|---|
| `DATABASE_URL` | Yes | PostgreSQL connection string with pgvector |
| `BETTER_AUTH_SECRET` | Yes | Auth secret (`openssl rand -hex 32`) |
| `BETTER_AUTH_URL` | Yes | Your app URL (e.g., http://localhost:3000) |
| `NEXT_PUBLIC_APP_URL` | Yes | Public app URL (same as above) |
| `ENCRYPTION_KEY` | Yes | Encryption key (`openssl rand -hex 32`) |
| `OLLAMA_URL` | No | Ollama server URL (default: http://localhost:11434) |
| `VLLM_BASE_URL` | No | vLLM server URL for self-hosted models |
| `COPILOT_API_KEY` | No | API key from sim.ai for Copilot features |
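The two generated secrets can be produced exactly as the table notes:

```shell
# Generate values for the two required secrets
BETTER_AUTH_SECRET=$(openssl rand -hex 32)
ENCRYPTION_KEY=$(openssl rand -hex 32)
```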
## Troubleshooting
### Ollama models not showing in dropdown (Docker)
If you're running Ollama on your host machine and Sim in Docker, change OLLAMA_URL from localhost to host.docker.internal.
See Using an External Ollama Instance for details.
### Database connection issues
Ensure PostgreSQL has the pgvector extension installed. When using Docker, wait for the database to be healthy before running migrations.
### Port conflicts

If ports 3000, 3002, or 5432 are already in use, configure alternatives.
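Two sketches: the `--port` flag is documented in the Options table above, while the Postgres remap (5433 on the host) is an example workaround:

```shell
# Run Sim on an alternate port (flag from the Options table)
npx simstudio --port 3001

# Map Postgres to a free host port instead of 5432 (example values)
docker run -d --name sim-postgres -p 5433:5432 \
  -e POSTGRES_PASSWORD=postgres pgvector/pgvector:pg16
```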
## Tech Stack
- Framework: Next.js (App Router)
- Runtime: Bun
- Database: PostgreSQL with Drizzle ORM
- Authentication: Better Auth
- UI: Shadcn, Tailwind CSS
- State Management: Zustand
- Flow Editor: ReactFlow
- Docs: Fumadocs
- Monorepo: Turborepo
- Realtime: Socket.io
- Background Jobs: Trigger.dev
- Remote Code Execution: E2B
## Contributing
We welcome contributions! Please see our Contributing Guide for details.
## License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Made with ❤️ by the Sim Team