NevaMind-AI/memU

memU is a memory framework built for 24/7 proactive agents. Designed for long-running use, it sharply reduces the LLM token cost of keeping an agent always online, which makes always-on, evolving agents practical in production systems. memU continuously captures and interprets user intent: even without an explicit command, the agent can anticipate what you are about to do and act on it.
🤖 OpenClaw (Moltbot, Clawdbot) Alternative
- Works out of the box and is simple to get started with.
- Builds long-term memory to understand user intent and act proactively.
- Cuts LLM token cost with smaller context.
Try now: memU bot
⭐️ Star the repository
If you find memU useful or interesting, a GitHub Star ⭐️ would be greatly appreciated.
✨ Core Features
| Capability | Description |
|---|---|
| 🤖 24/7 Proactive Agent | Always-on memory agent that works continuously in the background—never sleeps, never forgets |
| 🎯 User Intention Capture | Understands and remembers user goals, preferences, and context across sessions automatically |
| 💰 Cost Efficient | Reduces long-running token costs by caching insights and avoiding redundant LLM calls |
🔄 How Proactive Memory Works
*(Diagram: Proactive Memory Lifecycle)*
🎯 Proactive Use Cases
1. Information Recommendation
Agent monitors interests and proactively surfaces relevant content
2. Email Management
Agent learns communication patterns and handles routine correspondence
3. Trading & Financial Monitoring
Agent tracks market context and user investment behavior
…
🗂️ Hierarchical Memory Architecture
MemU’s three-layer system enables both reactive queries and proactive context loading:
| Layer | Reactive Use | Proactive Use |
|---|---|---|
| Resource | Direct access to original data | Background monitoring for new patterns |
| Item | Targeted fact retrieval | Real-time extraction from ongoing interactions |
| Category | Summary-level overview | Automatic context assembly for anticipation |
Proactive Benefits:
- Auto-categorization: New memories self-organize into topics
- Pattern Detection: System identifies recurring themes
- Context Prediction: Anticipates what information will be needed next
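To make the three layers concrete, here is a purely illustrative sketch of how one memory might be viewed across them; the field names are examples, not MemU's actual storage schema.

```python
# Illustrative only: a hypothetical view of a single memory across the three
# layers. Field names are examples, not MemU's internal schema.
memory_view = {
    "category": {                 # summary-level overview
        "name": "work_preferences",
        "summary": "Prefers async updates and weekly planning.",
    },
    "items": [                    # targeted, individually retrievable facts
        {"fact": "Prefers weekly summaries over daily digests"},
        {"fact": "Plans the week every Monday morning"},
    ],
    "resource": {                 # original data kept for direct access
        "type": "conversation",
        "raw": "full transcript of the source conversation",
    },
}
```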
🚀 Quick Start
Option 1: Cloud Version
Experience proactive memory instantly:
👉 memu.so - Hosted service with 24/7 continuous learning
For enterprise deployment with custom proactive workflows, contact info@nevamind.ai
Cloud API (v3)
| Base URL | https://api.memu.so |
|---|---|
| Auth | Authorization: Bearer YOUR_API_KEY |
| Method | Endpoint | Description |
|---|---|---|
| POST | `/api/v3/memory/memorize` | Register continuous learning task |
| GET | `/api/v3/memory/memorize/status/{task_id}` | Check real-time processing status |
| POST | `/api/v3/memory/categories` | List auto-generated categories |
| POST | `/api/v3/memory/retrieve` | Query memory (supports proactive context loading) |
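As a quick sketch, the memorize endpoint can be called with Python's `requests`; the request and response fields shown here are illustrative, so check SERVICE_API.md for the exact payload.

```python
import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.memu.so"
headers = {"Authorization": f"Bearer {API_KEY}"}

# Register a continuous learning task (payload fields are illustrative).
resp = requests.post(
    f"{BASE_URL}/api/v3/memory/memorize",
    headers=headers,
    json={"user_id": "123", "content": "User asked for weekly market digests."},
)
task = resp.json()  # assumed to include a task_id for status polling

# Check the real-time processing status of the task.
status = requests.get(
    f"{BASE_URL}/api/v3/memory/memorize/status/{task['task_id']}",
    headers=headers,
).json()
print(status)
```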
Option 2: Self-Hosted
Installation
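A typical from-source installation looks like the following; the exact package name or extras may differ by release.

```bash
# Install memU from source (a published PyPI package may also be available)
git clone https://github.com/NevaMind-AI/memU.git
cd memU
pip install -e .
```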
Basic Example
Requirements: Python 3.13+ and an OpenAI API key
Test Continuous Learning (in-memory):
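One way to run it, assuming your OpenAI key is exported (the exact invocation may vary):

```bash
# In-memory continuous-learning example; no database required
export OPENAI_API_KEY=your_key_here
python tests/test_inmemory.py   # or: pytest tests/test_inmemory.py
```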
Test with Persistent Storage (PostgreSQL):
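Similarly, with a reachable PostgreSQL instance (see the test file for how the connection is configured):

```bash
# Persistent-storage example backed by PostgreSQL
export OPENAI_API_KEY=your_key_here
python tests/test_postgres.py   # or: pytest tests/test_postgres.py
```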
Both examples demonstrate proactive memory workflows:
- Continuous Ingestion: Process multiple files sequentially
- Auto-Extraction: Immediate memory creation
- Proactive Retrieval: Context-aware memory surfacing
See tests/test_inmemory.py and tests/test_postgres.py for implementation details.
Custom LLM and Embedding Providers
MemU supports custom LLM and embedding providers beyond OpenAI. Configure them via `llm_profiles`:
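The sketch below is hypothetical; the key names and accepted provider identifiers depend on the memU version you run.

```python
# Hypothetical llm_profiles sketch; keys and provider names are illustrative.
llm_profiles = {
    "chat": {
        "provider": "openai",
        "model": "gpt-4o-mini",
        "api_key": "YOUR_LLM_API_KEY",
        "base_url": "https://api.openai.com/v1",  # point at any compatible endpoint
    },
    "embedding": {
        "provider": "openai",
        "model": "text-embedding-3-small",
        "api_key": "YOUR_EMBEDDING_API_KEY",
    },
}
```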
OpenRouter Integration
MemU supports OpenRouter as a model provider, giving you access to multiple LLM providers through a single API.
Configuration
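As a sketch under the same assumptions, an OpenRouter profile can reuse the `llm_profiles` shape and point at OpenRouter's OpenAI-compatible endpoint:

```python
import os

# Illustrative OpenRouter profile; field names follow the hypothetical
# llm_profiles sketch above, not a guaranteed MemU schema.
llm_profiles = {
    "chat": {
        "provider": "openrouter",
        "model": "openai/gpt-4o",  # any OpenRouter chat model
        "api_key": os.environ["OPENROUTER_API_KEY"],
        "base_url": "https://openrouter.ai/api/v1",
    },
    "embedding": {
        "provider": "openrouter",
        "model": "openai/text-embedding-3-small",  # OpenAI embeddings via OpenRouter
        "api_key": os.environ["OPENROUTER_API_KEY"],
        "base_url": "https://openrouter.ai/api/v1",
    },
}
```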
Environment Variables
| Variable | Description |
|---|---|
| `OPENROUTER_API_KEY` | Your OpenRouter API key from openrouter.ai/keys |
Supported Features
| Feature | Status | Notes |
|---|---|---|
| Chat Completions | Supported | Works with any OpenRouter chat model |
| Embeddings | Supported | Use OpenAI embedding models via OpenRouter |
| Vision | Supported | Use vision-capable models (e.g., openai/gpt-4o) |
Running OpenRouter Tests
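For example, with your key exported:

```bash
export OPENROUTER_API_KEY=your_key_here
python examples/example_4_openrouter_memory.py
```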
See examples/example_4_openrouter_memory.py for a complete working example.
📖 Core APIs
memorize() - Continuous Learning Pipeline
Processes inputs in real-time and immediately updates memory:
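A minimal usage sketch follows; the import path, class name, and exact `memorize()` signature are assumptions, so treat this as an illustration rather than the canonical API.

```python
# Assumed import path and constructor; adjust to your installed memU version.
from memu import MemoryService  # hypothetical entry point

memory = MemoryService(llm_profiles=llm_profiles)

# Feed a new interaction; extraction and categorization happen immediately,
# so the resulting memories are available to the very next retrieval.
memory.memorize(
    content="User mentioned they prefer weekly summaries over daily digests.",
    user_id="123",
)
```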
Proactive Features:
- Zero-delay processing—memories available immediately
- Automatic categorization without manual tagging
- Cross-reference with existing memories for pattern detection
retrieve() - Dual-Mode Intelligence
MemU supports both proactive context loading and reactive querying:
RAG-based Retrieval (method="rag")
Fast proactive context assembly using embeddings:
- ✅ Instant context: Sub-second memory surfacing
- ✅ Background monitoring: Can run continuously without LLM costs
- ✅ Similarity scoring: Identifies most relevant memories automatically
LLM-based Retrieval (method="llm")
Deep anticipatory reasoning for complex contexts:
- ✅ Intent prediction: LLM infers what user needs before they ask
- ✅ Query evolution: Automatically refines search as context develops
- ✅ Early termination: Stops when sufficient context is gathered
Comparison
| Aspect | RAG (Fast Context) | LLM (Deep Reasoning) |
|---|---|---|
| Speed | ⚡ Milliseconds | 🐢 Seconds |
| Cost | 💰 Embedding only | 💰💰 LLM inference |
| Proactive use | Continuous monitoring | Triggered context loading |
| Best for | Real-time suggestions | Complex anticipation |
Usage
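Building on the hypothetical `memory` object from the `memorize()` sketch above, both modes look roughly like this (parameter names beyond `method` and `where` are illustrative):

```python
# Fast, embedding-only context assembly (suited to continuous monitoring).
context = memory.retrieve(
    query="what should I prepare for next week?",
    method="rag",
    where={"user_id": "123"},
)

# Deeper anticipatory reasoning with LLM-guided query refinement.
answer = memory.retrieve(
    query="what should I prepare for next week?",
    method="llm",
    where={"user_id": "123"},
)
```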
Proactive Filtering: Use `where` to scope continuous monitoring:
- `where={"user_id": "123"}` - User-specific context
- `where={"agent_id__in": ["1", "2"]}` - Multi-agent coordination
- Omit `where` for global context awareness
📚 For complete API documentation, see SERVICE_API.md - includes proactive workflow patterns, pipeline configuration, and real-time update handling.
💡 Proactive Scenarios
Example 1: Always-Learning Assistant
Continuously learns from every interaction without explicit memory commands:
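A minimal sketch, again using the hypothetical `memory` object from the Core APIs section:

```python
conversation_turns = [
    "I really dislike long email threads, keep replies short.",
    "Can you schedule the review for Friday afternoon?",
]

# Every turn is memorized as it happens; no explicit "remember this" command.
for turn in conversation_turns:
    memory.memorize(content=turn, user_id="123")

# Later, relevant preferences surface automatically as context.
context = memory.retrieve(
    query="draft a reply in the user's preferred tone",
    method="rag",
    where={"user_id": "123"},
)
```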
Proactive Behavior:
- Automatically extracts preferences from casual mentions
- Builds relationship models from interaction patterns
- Surfaces relevant context in future conversations
- Adapts communication style based on learned preferences
Best for: Personal AI assistants, customer support that remembers, social chatbots
Example 2: Self-Improving Agent
Learns from execution logs and proactively suggests optimizations:
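A sketch under the same assumptions (the `agent_id` parameter is illustrative, mirroring the `where` filter shown earlier):

```python
# Memorize execution logs as the agent works; outcomes become searchable experience.
memory.memorize(
    content="Deploy failed on migration lock timeout; retry after draining the queue succeeded.",
    agent_id="deploy-bot",
)

# Before the next similar task, proactively load lessons learned.
strategy = memory.retrieve(
    query="known pitfalls when running database migrations during deploys",
    method="llm",
    where={"agent_id__in": ["deploy-bot"]},
)
```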
Proactive Behavior:
- Monitors agent actions and outcomes continuously
- Identifies patterns in successes and failures
- Auto-generates skill guides from experience
- Proactively suggests strategies for similar future tasks
Best for: DevOps automation, agent self-improvement, knowledge capture
Example 3: Multimodal Context Builder
Unifies memory across different input types for comprehensive context.
Proactive Behavior:
- Cross-references text, images, and documents automatically
- Builds unified understanding across modalities
- Surfaces visual context when discussing related topics
- Anticipates information needs by combining multiple sources
Best for: Documentation systems, learning platforms, research assistants
📊 Performance
MemU achieves 92.09% average accuracy on the LoCoMo benchmark across all reasoning tasks, demonstrating reliable proactive memory operations.
View detailed experimental data: memU-experiment
🧩 Ecosystem
| Repository | Description | Proactive Features |
|---|---|---|
| memU | Core proactive memory engine | 24/7 learning pipeline, auto-categorization |
| memU-server | Backend with continuous sync | Real-time memory updates, webhook triggers |
| memU-ui | Visual memory dashboard | Live memory evolution monitoring |
Quick Links:
🤝 Partners
🤝 How to Contribute
We welcome contributions from the community! Whether you’re fixing bugs, adding features, or improving documentation, your help is appreciated.
Getting Started
To start contributing to MemU, you’ll need to set up your development environment:
Prerequisites
- Python 3.13+
- uv (Python package manager)
- Git
Setup Development Environment
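For example, cloning the repository and bootstrapping with the provided Makefile target:

```bash
git clone https://github.com/NevaMind-AI/memU.git
cd memU
make install
```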
The `make install` command will:
- Create a virtual environment using `uv`
- Install all project dependencies
- Set up pre-commit hooks for code quality checks
Running Quality Checks
Before submitting your contribution, ensure your code passes all quality checks:
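From the repository root:

```bash
make check
```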
The `make check` command runs:
- Lock file verification: Ensures `pyproject.toml` consistency
- Pre-commit hooks: Lints code with Ruff, formats with Black
- Type checking: Runs `mypy` for static type analysis
- Dependency analysis: Uses `deptry` to find obsolete dependencies
Contributing Guidelines
For detailed contribution guidelines, code standards, and development practices, please see CONTRIBUTING.md.
Quick tips:
- Create a new branch for each feature or bug fix
- Write clear commit messages
- Add tests for new functionality
- Update documentation as needed
- Run `make check` before pushing
📄 License
🌍 Community
- GitHub Issues: Report bugs & request features
- Discord: Join the community
- X (Twitter): Follow @memU_ai
- Contact: info@nevamind.ai
⭐ Star us on GitHub to get notified about new releases!