β‘ Created & Architected by Varun Pratap Bhardwaj β‘
Solution Architect β’ Original Creator β’ 2026
Stop re-explaining your codebase every session. 100% local. Zero setup. Completely free.
Quick Start β’
Why This? β’
Features β’
vs Alternatives β’
Docs β’
Issues
Created by Varun Pratap Bhardwaj β’
π Sponsor β’
π Attribution Required
π’ Coming Soon: SuperLocalMemory V3 with
npm install -g superlocalmemoryfor even easier installation!
Using V2? You’re in the right place. V2 remains fully supported with all features.
| Version | Installation | Best For |
|---|---|---|
| V3 (Coming Soon) | npm install -g superlocalmemory |
Most users – One-command install |
| V2 (Current) | git clone + ./install.sh |
Advanced users – Manual control |
Both versions have identical features and performance. V3 adds professional npm distribution.
Every time you start a new Claude session:
You: "Remember that authentication bug we fixed last week?"
Claude: "I don't have access to previous conversations..."
You: *sighs and explains everything again*
AI assistants forget everything between sessions. You waste time re-explaining your:
- Project architecture
- Coding preferences
- Previous decisions
- Debugging history
# Install in 5 minutes
git clone https://github.com/varun369/SuperLocalMemoryV2.git && cd SuperLocalMemoryV2 && ./install.sh
# Save a memory
superlocalmemoryv2:remember "Fixed auth bug - JWT tokens were expiring too fast, increased to 24h"
# Later, in a new session...
superlocalmemoryv2:recall "auth bug"
# β Found: "Fixed auth bug - JWT tokens were expiring too fast, increased to 24h"
Your AI now remembers everything. Forever. Locally. For free.
git clone https://github.com/varun369/SuperLocalMemoryV2.git
cd SuperLocalMemoryV2
./install.sh
git clone https://github.com/varun369/SuperLocalMemoryV2.git
cd SuperLocalMemoryV2
.\install.ps1
superlocalmemoryv2:status
# β Database: OK (0 memories)
# β Graph: Ready
# β Patterns: Ready
That’s it. No Docker. No API keys. No cloud accounts. No configuration.
# Launch the interactive web UI
python3 ~/.claude-memory/ui_server.py
# Opens at http://localhost:8000
# Features: Timeline view, search explorer, graph visualization
NEW in v2.2.0: Interactive web-based dashboard for exploring your memories visually.
| Feature | Description |
|---|---|
| π Timeline View | See your memories chronologically with importance indicators |
| π Search Explorer | Real-time semantic search with score visualization |
| πΈοΈ Graph Visualization | Interactive knowledge graph with clusters and relationships |
| π Statistics Dashboard | Memory trends, tag clouds, pattern insights |
| π― Advanced Filters | Filter by tags, importance, date range, clusters |
# 1. Start dashboard
python ~/.claude-memory/dashboard.py
# 2. Navigate to http://localhost:8050
# 3. Explore your memories:
# - Timeline: See memories over time
# - Search: Find with semantic scoring
# - Graph: Visualize relationships
# - Stats: Analyze patterns
[[Complete Dashboard Guide β|Visualization-Dashboard]]
SuperLocalMemory V2.2.0 implements hybrid search combining multiple strategies for maximum accuracy.
| Strategy | Method | Best For | Speed |
|---|---|---|---|
| Semantic Search | TF-IDF vectors + cosine similarity | Conceptual queries (“authentication patterns”) | 45ms |
| Full-Text Search | SQLite FTS5 with ranking | Exact phrases (“JWT tokens expire”) | 30ms |
| Graph-Enhanced | Knowledge graph traversal | Related concepts (“show auth-related”) | 60ms |
| Hybrid Mode | All three combined | General queries | 80ms |
# Semantic: finds conceptually similar
slm recall "security best practices"
# Matches: "JWT implementation", "OAuth flow", "CSRF protection"
# Exact: finds literal text
slm recall "PostgreSQL 15"
# Matches: exactly "PostgreSQL 15"
# Graph: finds related via clusters
slm recall "authentication" --use-graph
# Matches: JWT, OAuth, sessions (via "Auth & Security" cluster)
# Hybrid: best of all worlds (default)
slm recall "API design patterns"
# Combines semantic + exact + graph for optimal results
| Memories | Semantic | FTS5 | Graph | Hybrid |
|---|---|---|---|---|
| 100 | 35ms | 25ms | 50ms | 65ms |
| 500 | 45ms | 30ms | 60ms | 80ms |
| 1,000 | 55ms | 35ms | 70ms | 95ms |
| 5,000 | 85ms | 50ms | 110ms | 150ms |
All search strategies remain sub-second even with 5,000+ memories.
| Operation | Time | Comparison | Notes |
|---|---|---|---|
| Add Memory | – | Instant indexing | |
| Search (Hybrid) | 80ms | 3.3x faster than v1 | 500 memories |
| Graph Build | – | 100 memories | |
| Pattern Learning | – | Incremental | |
| Dashboard Load | – | 1,000 memories | |
| Timeline Render | – | All memories |
| Tier | Description | Compression | Savings |
|---|---|---|---|
| Tier 1 | Active memories (0-30 days) | None | – |
| Tier 2 | Warm memories (30-90 days) | 60% | Progressive summarization |
| Tier 3 | Cold storage (90+ days) | 96% | JSON archival |
Example: 1,000 memories with mixed ages = ~15MB (vs 380MB uncompressed)
| Dataset Size | Search Time | Graph Build | RAM Usage |
|---|---|---|---|
| 100 memories | 35ms | 0.5s | |
| 500 memories | 45ms | 2s | |
| 1,000 memories | 55ms | 5s | |
| 5,000 memories | 85ms | 30s |
Tested up to 10,000 memories with linear scaling and no degradation.
SuperLocalMemory V2 is the ONLY memory system that works across ALL your tools:
| Tool | Integration | How It Works |
|---|---|---|
| Claude Code | β Native Skills | /superlocalmemoryv2:remember |
| Cursor | β MCP Integration | AI automatically uses memory tools |
| Windsurf | β MCP Integration | Native memory access |
| Claude Desktop | β MCP Integration | Built-in support |
| VS Code + Continue | β MCP + Skills | /slm-remember or AI tools |
| VS Code + Cody | β Custom Commands | /slm-remember commands |
| Aider | β Smart Wrapper | aider-smart with auto-context |
| Any Terminal | β Universal CLI | slm remember "content" |
-
MCP (Model Context Protocol) – Auto-configured for Cursor, Windsurf, Claude Desktop
- AI assistants get natural access to your memory
- No manual commands needed
- “Remember that we use FastAPI” just works
-
Skills & Commands – For Claude Code, Continue.dev, Cody
/superlocalmemoryv2:rememberin Claude Code/slm-rememberin Continue.dev and Cody- Familiar slash command interface
-
Universal CLI – Works in any terminal or script
slm remember "content"– Simple, clean syntaxslm recall "query"– Search from anywhereaider-smart– Aider with auto-context injection
All three methods use the SAME local database. No data duplication, no conflicts.
Installation automatically detects and configures:
- Existing IDEs (Cursor, Windsurf, VS Code)
- Installed tools (Aider, Continue, Cody)
- Shell environment (bash, zsh)
Zero manual configuration required. It just works.
Want to use SuperLocalMemory in ChatGPT, Perplexity, Zed, or other MCP-compatible tools?
π Complete setup guide: docs/MCP-MANUAL-SETUP.md
Covers:
- ChatGPT Desktop – Add via Settings β MCP
- Perplexity – Configure via app settings
- Zed Editor – JSON configuration
- Cody – VS Code/JetBrains setup
- Custom MCP clients – Python/HTTP integration
All tools connect to the same local database – no data duplication.
| Scenario | Without Memory | With SuperLocalMemory |
|---|---|---|
| New Claude session | Re-explain entire project | recall "project context" β instant context |
| Debugging | “We tried X last week…” starts over | Knowledge graph shows related past fixes |
| Code preferences | “I prefer React…” every time | Pattern learning knows your style |
| Multi-project | Context constantly bleeds | Separate profiles per project |
Not another simple key-value store. SuperLocalMemory implements cutting-edge memory architecture:
- PageIndex (Meta AI) β Hierarchical memory organization
- GraphRAG (Microsoft) β Knowledge graph with auto-clustering
- xMemory (Stanford) β Identity pattern learning
- A-RAG β Multi-level retrieval with context awareness
The only open-source implementation combining all four approaches.
| Solution | Free Tier Limits | Paid Price | What’s Missing |
|---|---|---|---|
| Mem0 | 10K memories, limited API | Usage-based | No pattern learning, not local |
| Zep | Limited credits | $50/month | Credit system, cloud-only |
| Supermemory | 1M tokens, 10K queries | $19-399/mo | Not local, no graphs |
| Personal.AI | β No free tier | $33/month | Cloud-only, closed ecosystem |
| Letta/MemGPT | Self-hosted (complex) | TBD | Requires significant setup |
| SuperLocalMemory V2 | Unlimited | $0 forever | Nothing. |
| Feature | Mem0 | Zep | Khoj | Letta | SuperLocalMemory V2 |
|---|---|---|---|---|---|
| Works in Cursor | Cloud Only | β | β | β | β Local |
| Works in Windsurf | Cloud Only | β | β | β | β Local |
| Works in VS Code | 3rd Party | β | Partial | β | β Native |
| Works in Claude | β | β | β | β | β |
| Works with Aider | β | β | β | β | β |
| Universal CLI | β | β | β | β | β |
| 4-Layer Architecture | β | β | β | β | β |
| Pattern Learning | β | β | β | β | β |
| Multi-Profile Support | β | β | β | Partial | β |
| Knowledge Graphs | β | β | β | β | β |
| 100% Local | β | β | Partial | Partial | β |
| Zero Setup | β | β | β | β | β |
| Progressive Compression | β | β | β | β | β |
| Completely Free | Limited | Limited | Partial | β | β |
SuperLocalMemory V2 is the ONLY solution that:
- β Works across 8+ IDEs and CLI tools
- β Remains 100% local (no cloud dependencies)
- β Completely free with unlimited memories
See full competitive analysis β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β Layer 9: VISUALIZATION (NEW v2.2.0) β
β Interactive dashboard: timeline, search, graph explorer β
β Real-time analytics and visual insights β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ€
β Layer 8: HYBRID SEARCH (NEW v2.2.0) β
β Combines: Semantic + FTS5 + Graph traversal β
β 80ms response time with maximum accuracy β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ€
β Layer 7: UNIVERSAL ACCESS β
β MCP + Skills + CLI (works everywhere) β
β 11+ IDEs with single database β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ€
β Layer 6: MCP INTEGRATION β
β Model Context Protocol: 6 tools, 4 resources, 2 prompts β
β Auto-configured for Cursor, Windsurf, Claude β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ€
β Layer 5: SKILLS LAYER β
β 6 universal slash-commands for AI assistants β
β Compatible with Claude Code, Continue, Cody β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ€
β Layer 4: PATTERN LEARNING β
β Learns: coding style, preferences, terminology β
β "You prefer React over Vue" (73% confidence) β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ€
β Layer 3: KNOWLEDGE GRAPH β
β Auto-clusters: "Auth & Tokens", "Performance", "Testing" β
β Discovers relationships you didn't know existed β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ€
β Layer 2: HIERARCHICAL INDEX β
β Tree structure for fast navigation β
β O(log n) lookups instead of O(n) scans β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ€
β Layer 1: RAW STORAGE β
β SQLite + Full-text search + TF-IDF vectors β
β Compression: 60-96% space savings β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
# Build the graph from your memories
python ~/.claude-memory/graph_engine.py build
# Output:
# β Processed 47 memories
# β Created 12 clusters:
# - "Authentication & Tokens" (8 memories)
# - "Performance Optimization" (6 memories)
# - "React Components" (11 memories)
# - "Database Queries" (5 memories)
# ...
The graph automatically discovers relationships. Ask “what relates to auth?” and get JWT, session management, token refreshβeven if you never tagged them together.
# Learn patterns from your memories
python ~/.claude-memory/pattern_learner.py update
# Get your coding identity
python ~/.claude-memory/pattern_learner.py context 0.5
# Output:
# Your Coding Identity:
# - Framework preference: React (73% confidence)
# - Style: Performance over readability (58% confidence)
# - Testing: Jest + React Testing Library (65% confidence)
# - API style: REST over GraphQL (81% confidence)
Your AI assistant can now match your preferences automatically.
# Work profile
superlocalmemoryv2:profile create work --description "Day job"
superlocalmemoryv2:profile switch work
# Personal projects
superlocalmemoryv2:profile create personal
superlocalmemoryv2:profile switch personal
# Client projects (completely isolated)
superlocalmemoryv2:profile create client-acme
Each profile has isolated memories, graphs, and patterns. No context bleeding.
# Memory Operations
superlocalmemoryv2:remember "content" --tags tag1,tag2 # Save memory
superlocalmemoryv2:recall "search query" # Search
superlocalmemoryv2:list # Recent memories
superlocalmemoryv2:status # System health
# Profile Management
superlocalmemoryv2:profile list # Show all profiles
superlocalmemoryv2:profile create name> # New profile
superlocalmemoryv2:profile switch name> # Switch context
# Knowledge Graph
python ~/.claude-memory/graph_engine.py build # Build graph
python ~/.claude-memory/graph_engine.py stats # View clusters
python ~/.claude-memory/graph_engine.py related --id 5 # Find related
# Pattern Learning
python ~/.claude-memory/pattern_learner.py update # Learn patterns
python ~/.claude-memory/pattern_learner.py context 0.5 # Get identity
# Reset (Use with caution!)
superlocalmemoryv2:reset soft # Clear memories
superlocalmemoryv2:reset hard --confirm # Nuclear option
SEO: Performance benchmarks, memory system speed, search latency, visualization dashboard performance
| Metric | Result | Notes |
|---|---|---|
| Hybrid search | 80ms | Semantic + FTS5 + Graph combined |
| Semantic search | 45ms | 3.3x faster than v1 |
| FTS5 search | 30ms | Exact phrase matching |
| Graph build (100 memories) | Leiden clustering | |
| Pattern learning | Incremental updates | |
| Dashboard load | 1,000 memories | |
| Timeline render | All memories visualized | |
| Storage compression | 60-96% reduction | Progressive tiering |
| Memory overhead | Lightweight |
Tested up to 10,000 memories with sub-second search times and linear scaling.
We welcome contributions! See CONTRIBUTING.md for guidelines.
Areas for contribution:
- Additional pattern categories
- Graph visualization UI
- Integration with more AI assistants
- Performance optimizations
- Documentation improvements
If SuperLocalMemory saves you time, consider supporting its development:
MIT License β use freely, even commercially. Just include the license.
Varun Pratap Bhardwaj β Solution Architect
Building tools that make AI actually useful for developers.
100% local. 100% private. 100% yours.