supermemoryai/supermemory
State-of-the-art memory and context engine for AI.
Docs · Quickstart · Dashboard · Discord
Supermemory is the memory and context layer for AI. #1 on LongMemEval, LoCoMo, and ConvoMem — the three major benchmarks for AI memory.
We are a research lab building the engine, plugins and tools around it.
Your AI forgets everything between conversations. Supermemory fixes that.
It automatically learns from conversations, extracts facts, builds user profiles, handles knowledge updates and contradictions, forgets expired information, and delivers the right context at the right time. Full RAG, connectors, file processing — the entire context stack, one system.
| Feature | Description |
|---|---|
| 🧠 Memory | Extracts facts from conversations. Handles temporal changes, contradictions, and automatic forgetting. |
| 👤 User Profiles | Auto-maintained user context — stable facts + recent activity. One call, ~50ms. |
| 🔍 Hybrid Search | RAG + Memory in a single query. Knowledge base docs and personalized context together. |
| 🔌 Connectors | Google Drive · Gmail · Notion · OneDrive · GitHub — auto-sync with real-time webhooks. |
| 📄 Multi-modal Extractors | PDFs, images (OCR), videos (transcription), code (AST-aware chunking). Upload and it works. |
All of this runs on a single memory structure and ontology.
Use Supermemory
| Who you are | What you get |
|---|---|
| 🧑💻 I use AI tools | Build your own personal supermemory with our app. It builds a persistent memory graph across every conversation. Your AI remembers your preferences, projects, and past discussions, and it gets smarter over time. |
| 🔧 I'm building AI products | Add memory, RAG, user profiles, and connectors to your agents and apps with a single API. No vector DB config. No embedding pipelines. No chunking strategies. |
Give your AI memory
The Supermemory app, browser extension, plugins, and MCP server give any compatible AI assistant persistent memory. One install, and your AI remembers you.
The app
You can use Supermemory without writing any code through our free consumer app.
Start at https://app.supermemory.ai
It also comes with an embedded agent, which we call Nova.
Supermemory Plugins
Supermemory ships with plugins for Claude Code, OpenCode, and OpenClaw.
These plugins are open-source implementations of the Supermemory API.
You can find them here:
- Openclaw plugin: https://github.com/supermemoryai/openclaw-supermemory
- Claude code plugin: https://github.com/supermemoryai/claude-supermemory
- OpenCode plugin: https://github.com/supermemoryai/opencode-supermemory
MCP - Quick install
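The docs have the exact one-liner for your setup; a typical shape uses the `install-mcp` helper. The URL placeholder and flags here are assumptions, so verify them against the MCP docs linked below:

```shell
# Assumed install shape; get your personal MCP URL from the docs/dashboard.
npx -y install-mcp <your-mcp-url> --client claude
```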
Replace `claude` with your client: `cursor`, `windsurf`, `vscode`, etc.
Read more about our MCP in the docs: https://supermemory.ai/docs/supermemory-mcp/mcp
What your AI gets
| Tool | What it does |
|---|---|
| `memory` | Save or forget information. Your AI calls this automatically when you share something worth remembering. |
| `recall` | Search memories by query. Returns relevant memories plus your user profile summary. |
| `context` | Injects your full profile (preferences, recent activity) into the conversation at start. In Cursor and Claude Code, just type `/context`. |
How it works
Once installed, Supermemory runs in the background:
- You talk to your AI normally. Share preferences, mention projects, discuss problems.
- Supermemory extracts and stores the important stuff. Facts, preferences, project context — not noise.
- Next conversation, your AI already knows you. It recalls what you’re working on, how you like things, what you discussed before.
Memory is scoped with projects (container tags) so you can separate work and personal context, or organize by client, repo, or anything else.
Supported clients
Claude Desktop · Cursor · Windsurf · VS Code · Claude Code · OpenCode · OpenClaw
The MCP server is open source — view the source.
Manual configuration
Add this to your MCP client config:
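A typical entry, assuming your client uses the standard `mcpServers` schema and the `mcp-remote` bridge for the OAuth flow; `<your-mcp-url>` is a placeholder for the URL from the docs or dashboard:

```json
{
  "mcpServers": {
    "supermemory": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "<your-mcp-url>"]
    }
  }
}
```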
Or use an API key instead of OAuth:
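If your client goes through `mcp-remote`, the key can usually be passed as a header. The URL and header shape here are assumptions; check the docs for your client:

```json
{
  "mcpServers": {
    "supermemory": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "<your-mcp-url>",
        "--header",
        "Authorization: Bearer <your-api-key>"
      ]
    }
  }
}
```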
Build with Supermemory (API)
If you’re building AI agents or apps, Supermemory gives you the entire context stack through one API — memory, RAG, user profiles, connectors, and file processing.
Install
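Assuming the SDK is published as the `supermemory` package on npm:

```shell
npm install supermemory
```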
Quickstart
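A minimal sketch of the store-then-recall flow. The method names mirror the API table below (`client.add()`, `client.search.memories()`), but the parameter names (`content`, `containerTag`, `q`) are assumptions, not verified signatures; in a real app you would instantiate the SDK client instead of the interface shown here.

```typescript
// Stand-in for the SDK client. In a real app:
//   import Supermemory from "supermemory";
//   const client = new Supermemory({ apiKey: process.env.SUPERMEMORY_API_KEY });
interface MemoryClient {
  add(args: { content: string; containerTag: string }): Promise<void>;
  search: {
    memories(args: { q: string; containerTag: string }): Promise<string[]>;
  };
}

async function quickstart(client: MemoryClient): Promise<string[]> {
  // Store a conversation turn; fact extraction happens server-side.
  await client.add({
    content: "I just moved from NYC to SF and I prefer dark mode.",
    containerTag: "user_123", // scopes memories per user or project
  });

  // Later: fetch personalized context for the same user.
  return client.search.memories({
    q: "Where does the user live?",
    containerTag: "user_123",
  });
}
```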
Supermemory automatically extracts memories, builds user profiles, and returns relevant context. No embedding pipelines, no vector DB config, no chunking strategies.
Framework integrations
Drop-in wrappers for every major AI framework:
Vercel AI SDK · LangChain · LangGraph · OpenAI Agents SDK · Mastra · Agno · Claude Memory Tool · n8n
Search modes
Supermemory exposes two search surfaces: `client.search.memories()` runs hybrid search across memories and documents, while `client.search.documents()` searches your knowledge base directly with metadata filters.
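A sketch contrasting the two surfaces; the method names come from the API table below, but the exact parameter shapes (`q`, `containerTag`, `filters`) are assumptions:

```typescript
// Stand-in for the SDK client's search surface.
interface SearchClient {
  search: {
    memories(args: { q: string; containerTag: string }): Promise<unknown[]>;
    documents(args: { q: string; filters?: Record<string, string> }): Promise<unknown[]>;
  };
}

async function searchBoth(client: SearchClient, userId: string) {
  // Hybrid search: personalized memories + knowledge base in one query.
  const memories = await client.search.memories({
    q: "deployment preferences",
    containerTag: userId,
  });

  // Pure document search, narrowed by metadata.
  const documents = await client.search.documents({
    q: "deployment runbook",
    filters: { source: "notion" },
  });

  return { memories, documents };
}
```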
User profiles
Traditional memory relies on search — you need to know what to ask for. Supermemory automatically maintains a profile for every user:
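A sketch of injecting the profile into a system prompt. `client.profile()` is listed in the API table below, but the response fields used here (`static`, `recent`) are illustrative assumptions:

```typescript
// Stand-in for the SDK client's profile surface.
interface ProfileClient {
  profile(args: { containerTag: string }): Promise<{
    static: string[]; // stable facts: role, preferences, stack
    recent: string[]; // recent activity: current projects, open threads
  }>;
}

async function buildSystemPrompt(client: ProfileClient, userId: string): Promise<string> {
  const profile = await client.profile({ containerTag: userId });
  return [
    "You are a helpful assistant. What you know about this user:",
    ...profile.static,
    ...profile.recent,
  ].join("\n");
}
```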
One call. ~50ms. Inject into your system prompt and your agent instantly knows who it’s talking to.
Connectors
Auto-sync external data into your knowledge base:
Google Drive · Gmail · Notion · OneDrive · GitHub · Web Crawler
Real-time webhooks. Documents automatically processed, chunked, and searchable.
API at a glance
| Method | Purpose |
|---|---|
| `client.add()` | Store content — text, conversations, URLs, HTML |
| `client.profile()` | User profile + optional search in one call |
| `client.search.memories()` | Hybrid search across memories and documents |
| `client.search.documents()` | Document search with metadata filters |
| `client.documents.uploadFile()` | Upload PDFs, images, videos, code |
| `client.documents.list()` | List and filter documents |
| `client.settings.update()` | Configure memory extraction and chunking |
Full API reference → supermemory.ai/docs
Benchmarks
Supermemory is state of the art across all major AI memory benchmarks:
| Benchmark | What it measures | Result |
|---|---|---|
| LongMemEval | Long-term memory across sessions with knowledge updates | 81.6% — #1 |
| LoCoMo | Fact recall across extended conversations (single-hop, multi-hop, temporal, adversarial) | #1 |
| ConvoMem | Personalization and preference learning | #1 |
We also built MemoryBench, an open-source framework for standardized, reproducible benchmarks of memory providers, so you can compare Supermemory, Mem0, Zep, and others head-to-head.
Benchmarking your own memory solution
We provide an Agent skill so companies can benchmark their own context and memory solutions against Supermemory. Install the skill (see the docs), then run /benchmark-context and Supermemory will do the work for you.
How memory works under the hood
Memory is not RAG. RAG retrieves document chunks — stateless, same results for everyone. Memory extracts and tracks facts about users over time. It understands that “I just moved to SF” supersedes “I live in NYC.” Supermemory runs both together by default, so you get knowledge base retrieval and personalized context in every query. Read more about this here - https://supermemory.ai/docs/concepts/memory-vs-rag
Automatic forgetting. Supermemory knows when memories become irrelevant. Temporary facts (“I have an exam tomorrow”) expire after the date passes. Contradictions are resolved automatically. Noise never becomes permanent memory.
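The supersession and expiry behavior described above can be modeled with a toy fact store. This is an illustration of the concept only, not Supermemory's actual implementation:

```typescript
// Toy model of supersession and automatic forgetting; not Supermemory internals.
interface Fact {
  attribute: string;   // e.g. "home_city"
  value: string;
  learnedAt: number;   // when the fact was learned (ms epoch)
  validUntil?: number; // optional expiry for temporary facts
}

// Later facts about the same attribute supersede earlier ones,
// and facts past their expiry are forgotten.
function currentFacts(facts: Fact[], now: number): Map<string, string> {
  const latest = new Map<string, Fact>();
  for (const f of facts) {
    if (f.validUntil !== undefined && f.validUntil < now) continue; // expired
    const prev = latest.get(f.attribute);
    if (!prev || f.learnedAt > prev.learnedAt) latest.set(f.attribute, f);
  }
  return new Map([...latest].map(([attr, f]) => [attr, f.value]));
}
```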
Links
- 📖 Documentation
- 🚀 Quickstart
- 🧪 MemoryBench
- 🔌 Integrations
- 💬 Discord
- 𝕏 Twitter
Give your AI a memory. It's about time.