varun369/SuperLocalMemoryV2: Standalone intelligent memory system with knowledge graphs, pattern learning, and 4-layer architecture. 100% local, zero external APIs.


SuperLocalMemory V2

⚡ Created & Architected by Varun Pratap Bhardwaj ⚡
Solution Architect • Original Creator • 2026

Stop re-explaining your codebase every session. 100% local. Zero setup. Completely free.

Python 3.8+
MIT License
100% Local
5 Min Setup
Cross Platform
Wiki

Quick Start •
Why This? •
Features •
vs Alternatives •
Docs •
Issues

Created by Varun Pratap Bhardwaj •
💖 Sponsor •
📜 Attribution Required


📢 Coming Soon: SuperLocalMemory V3 with npm install -g superlocalmemory for even easier installation!
Using V2? You're in the right place. V2 remains fully supported with all features.

| Version | Installation | Best For |
|---|---|---|
| V3 (Coming Soon) | npm install -g superlocalmemory | Most users – One-command install |
| V2 (Current) | git clone + ./install.sh | Advanced users – Manual control |

Both versions have identical features and performance. V3 adds professional npm distribution.


Every time you start a new Claude session:

You: "Remember that authentication bug we fixed last week?"
Claude: "I don't have access to previous conversations..."
You: *sighs and explains everything again*

AI assistants forget everything between sessions. You waste time re-explaining your:

  • Project architecture
  • Coding preferences
  • Previous decisions
  • Debugging history
# Install in 5 minutes
git clone https://github.com/varun369/SuperLocalMemoryV2.git && cd SuperLocalMemoryV2 && ./install.sh

# Save a memory
superlocalmemoryv2:remember "Fixed auth bug - JWT tokens were expiring too fast, increased to 24h"

# Later, in a new session...
superlocalmemoryv2:recall "auth bug"
# ✓ Found: "Fixed auth bug - JWT tokens were expiring too fast, increased to 24h"

Your AI now remembers everything. Forever. Locally. For free.


# macOS / Linux
git clone https://github.com/varun369/SuperLocalMemoryV2.git
cd SuperLocalMemoryV2
./install.sh

# Windows (PowerShell)
git clone https://github.com/varun369/SuperLocalMemoryV2.git
cd SuperLocalMemoryV2
.\install.ps1

# Verify the installation
superlocalmemoryv2:status
# ✓ Database: OK (0 memories)
# ✓ Graph: Ready
# ✓ Patterns: Ready

That's it. No Docker. No API keys. No cloud accounts. No configuration.

Start the Visualization Dashboard

# Launch the interactive web UI
python3 ~/.claude-memory/ui_server.py

# Opens at http://localhost:8000
# Features: Timeline view, search explorer, graph visualization

🎨 Visualization Dashboard

NEW in v2.2.0: Interactive web-based dashboard for exploring your memories visually.

| Feature | Description |
|---|---|
| 📈 Timeline View | See your memories chronologically with importance indicators |
| 🔍 Search Explorer | Real-time semantic search with score visualization |
| 🕸️ Graph Visualization | Interactive knowledge graph with clusters and relationships |
| 📊 Statistics Dashboard | Memory trends, tag clouds, pattern insights |
| 🎯 Advanced Filters | Filter by tags, importance, date range, clusters |

# 1. Start dashboard
python ~/.claude-memory/dashboard.py

# 2. Navigate to http://localhost:8050

# 3. Explore your memories:
#    - Timeline: See memories over time
#    - Search: Find with semantic scoring
#    - Graph: Visualize relationships
#    - Stats: Analyze patterns

[[Complete Dashboard Guide →|Visualization-Dashboard]]


SuperLocalMemory V2.2.0 implements hybrid search combining multiple strategies for maximum accuracy.

| Strategy | Method | Best For | Speed |
|---|---|---|---|
| Semantic Search | TF-IDF vectors + cosine similarity | Conceptual queries ("authentication patterns") | 45ms |
| Full-Text Search | SQLite FTS5 with ranking | Exact phrases ("JWT tokens expire") | 30ms |
| Graph-Enhanced | Knowledge graph traversal | Related concepts ("show auth-related") | 60ms |
| Hybrid Mode | All three combined | General queries | 80ms |

# Semantic: finds conceptually similar
slm recall "security best practices"
# Matches: "JWT implementation", "OAuth flow", "CSRF protection"

# Exact: finds literal text
slm recall "PostgreSQL 15"
# Matches: exactly "PostgreSQL 15"

# Graph: finds related via clusters
slm recall "authentication" --use-graph
# Matches: JWT, OAuth, sessions (via "Auth & Security" cluster)

# Hybrid: best of all worlds (default)
slm recall "API design patterns"
# Combines semantic + exact + graph for optimal results
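The exact-phrase strategy maps naturally onto SQLite's FTS5 extension. Below is a minimal standalone sketch of that idea, independent of SuperLocalMemory's actual schema, and assuming your Python's sqlite3 module was compiled with FTS5 (the default in most builds):

```python
import sqlite3

# In-memory database with an FTS5 virtual table over memory content.
# (Illustrative schema only; the real table layout may differ.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE memories USING fts5(content)")
conn.executemany(
    "INSERT INTO memories (content) VALUES (?)",
    [
        ("Fixed auth bug - JWT tokens were expiring too fast",),
        ("Migrated database to PostgreSQL 15",),
        ("Refactored React components for the dashboard",),
    ],
)

# Quoted query = exact phrase match; bm25() ranks results by relevance.
rows = conn.execute(
    "SELECT content FROM memories WHERE memories MATCH ? ORDER BY bm25(memories)",
    ('"PostgreSQL 15"',),
).fetchall()
print(rows[0][0])
```

Only the row containing the literal phrase "PostgreSQL 15" matches, which is exactly the behavior the exact-search example above describes.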

Search Performance by Dataset Size

| Memories | Semantic | FTS5 | Graph | Hybrid |
|---|---|---|---|---|
| 100 | 35ms | 25ms | 50ms | 65ms |
| 500 | 45ms | 30ms | 60ms | 80ms |
| 1,000 | 55ms | 35ms | 70ms | 95ms |
| 5,000 | 85ms | 50ms | 110ms | 150ms |

All search strategies remain sub-second even with 5,000+ memories.
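Conceptually, hybrid mode is score fusion: each strategy produces a score for every memory and the final ranking uses a weighted sum. The toy sketch below illustrates the idea only; the weights and the graph_boost input are invented for the example, and the real engine's fusion logic is not shown in this README:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Tiny TF-IDF: token counts weighted by inverse document frequency."""
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vecs = [{t: c * idf[t] for t, c in Counter(toks).items()} for toks in tokenized]
    return vecs, idf

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_scores(query, docs, graph_boost, w_sem=0.5, w_exact=0.3, w_graph=0.2):
    """Weighted fusion of semantic, exact-match, and graph-derived scores."""
    vecs, idf = tfidf_vectors(docs)
    qvec = {t: c * idf.get(t, 1.0) for t, c in Counter(query.lower().split()).items()}
    scores = []
    for i, doc in enumerate(docs):
        semantic = cosine(qvec, vecs[i])
        exact = 1.0 if query.lower() in doc.lower() else 0.0
        scores.append(w_sem * semantic + w_exact * exact + w_graph * graph_boost.get(i, 0.0))
    return scores

docs = [
    "JWT tokens were expiring too fast, increased lifetime to 24h",
    "React dashboard components refactored",
    "OAuth flow added for the auth service",
]
# Pretend memory 2 shares an "Auth & Security" cluster with the query topic.
scores = hybrid_scores("JWT tokens", docs, graph_boost={2: 0.5})
best = max(range(len(docs)), key=lambda i: scores[i])
```

Memory 0 wins on both the semantic and exact signals, while memory 2 still outranks the unrelated React memory thanks to its graph boost, which is the behavior the strategy table above describes.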


| Operation | Time | Comparison | Notes |
|---|---|---|---|
| Add Memory | | | Instant indexing |
| Search (Hybrid) | 80ms | 3.3x faster than v1 | 500 memories |
| Graph Build | | | 100 memories |
| Pattern Learning | | | Incremental |
| Dashboard Load | | | 1,000 memories |
| Timeline Render | | | All memories |

| Tier | Description | Compression Savings | Method |
|---|---|---|---|
| Tier 1 | Active memories (0-30 days) | None | |
| Tier 2 | Warm memories (30-90 days) | 60% | Progressive summarization |
| Tier 3 | Cold storage (90+ days) | 96% | JSON archival |

Example: 1,000 memories with mixed ages = ~15MB (vs 380MB uncompressed)
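The tiering rule reduces to a simple age-threshold check. A sketch of the idea, using the thresholds from the table above; the savings figures are the advertised averages, not exact per-memory numbers:

```python
def storage_tier(age_days):
    """Map a memory's age to its storage tier and nominal space savings."""
    if age_days < 30:
        return 1, 0.0    # active: stored verbatim
    if age_days < 90:
        return 2, 0.60   # warm: progressive summarization, ~60% smaller
    return 3, 0.96       # cold: compact JSON archive, ~96% smaller

def estimated_size(raw_bytes, age_days):
    """Raw size scaled by the tier's nominal savings."""
    _, savings = storage_tier(age_days)
    return raw_bytes * (1.0 - savings)

# In a mixed-age corpus, older memories dominate, so the footprint collapses.
total = sum(estimated_size(380_000, age) for age in (5, 45, 120, 400))
```

Because most of a long-lived corpus sits in Tier 3, the aggregate size ends up far below the uncompressed total, which is how 1,000 memories can fit in ~15MB.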

| Dataset Size | Search Time | Graph Build | RAM Usage |
|---|---|---|---|
| 100 memories | 35ms | 0.5s | |
| 500 memories | 45ms | 2s | |
| 1,000 memories | 55ms | 5s | |
| 5,000 memories | 85ms | 30s | |

Tested up to 10,000 memories with linear scaling and no degradation.


SuperLocalMemory V2 is the ONLY memory system that works across ALL your tools:

| Tool | Integration | How It Works |
|---|---|---|
| Claude Code | ✅ Native Skills | /superlocalmemoryv2:remember |
| Cursor | ✅ MCP Integration | AI automatically uses memory tools |
| Windsurf | ✅ MCP Integration | Native memory access |
| Claude Desktop | ✅ MCP Integration | Built-in support |
| VS Code + Continue | ✅ MCP + Skills | /slm-remember or AI tools |
| VS Code + Cody | ✅ Custom Commands | /slm-remember commands |
| Aider | ✅ Smart Wrapper | aider-smart with auto-context |
| Any Terminal | ✅ Universal CLI | slm remember "content" |

  1. MCP (Model Context Protocol) – Auto-configured for Cursor, Windsurf, Claude Desktop

    • AI assistants get natural access to your memory
    • No manual commands needed
    • "Remember that we use FastAPI" just works
  2. Skills & Commands – For Claude Code, Continue.dev, Cody

    • /superlocalmemoryv2:remember in Claude Code
    • /slm-remember in Continue.dev and Cody
    • Familiar slash command interface
  3. Universal CLI – Works in any terminal or script

    • slm remember "content" – Simple, clean syntax
    • slm recall "query" – Search from anywhere
    • aider-smart – Aider with auto-context injection

All three methods use the SAME local database. No data duplication, no conflicts.

Installation automatically detects and configures:

  • Existing IDEs (Cursor, Windsurf, VS Code)
  • Installed tools (Aider, Continue, Cody)
  • Shell environment (bash, zsh)

Zero manual configuration required. It just works.

Manual Setup for Other Apps

Want to use SuperLocalMemory in ChatGPT, Perplexity, Zed, or other MCP-compatible tools?

📘 Complete setup guide: docs/MCP-MANUAL-SETUP.md

Covers:

  • ChatGPT Desktop – Add via Settings → MCP
  • Perplexity – Configure via app settings
  • Zed Editor – JSON configuration
  • Cody – VS Code/JetBrains setup
  • Custom MCP clients – Python/HTTP integration

All tools connect to the same local database – no data duplication.
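Most MCP-compatible clients register a local server with a JSON entry of roughly the shape below: a server name mapping to a command plus arguments. This is purely illustrative; the server script name (mcp_server.py) is a guess, and some clients do not expand ~, so use the exact command and path given in docs/MCP-MANUAL-SETUP.md:

```json
{
  "mcpServers": {
    "superlocalmemory": {
      "command": "python3",
      "args": ["~/.claude-memory/mcp_server.py"]
    }
  }
}
```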


💡 Why SuperLocalMemory?

For Developers Who Use AI Daily

| Scenario | Without Memory | With SuperLocalMemory |
|---|---|---|
| New Claude session | Re-explain entire project | recall "project context" → instant context |
| Debugging | "We tried X last week…" starts over | Knowledge graph shows related past fixes |
| Code preferences | "I prefer React…" every time | Pattern learning knows your style |
| Multi-project | Context constantly bleeds | Separate profiles per project |

Not another simple key-value store. SuperLocalMemory implements cutting-edge memory architecture:

  • PageIndex (Meta AI) → Hierarchical memory organization
  • GraphRAG (Microsoft) → Knowledge graph with auto-clustering
  • xMemory (Stanford) → Identity pattern learning
  • A-RAG → Multi-level retrieval with context awareness

The only open-source implementation combining all four approaches.


The Hard Truth About "Free" Tiers

| Solution | Free Tier Limits | Paid Price | What's Missing |
|---|---|---|---|
| Mem0 | 10K memories, limited API | Usage-based | No pattern learning, not local |
| Zep | Limited credits | $50/month | Credit system, cloud-only |
| Supermemory | 1M tokens, 10K queries | $19-399/mo | Not local, no graphs |
| Personal.AI | ❌ No free tier | $33/month | Cloud-only, closed ecosystem |
| Letta/MemGPT | Self-hosted (complex) | TBD | Requires significant setup |
| SuperLocalMemory V2 | Unlimited | $0 forever | Nothing. |

Feature Comparison (What Actually Matters)

| Feature | Mem0 | Zep | Khoj | Letta | SuperLocalMemory V2 |
|---|---|---|---|---|---|
| Works in Cursor | Cloud Only | ❌ | ❌ | ❌ | ✅ Local |
| Works in Windsurf | Cloud Only | ❌ | ❌ | ❌ | ✅ Local |
| Works in VS Code | 3rd Party | ❌ | Partial | ❌ | ✅ Native |
| Works in Claude | ❌ | ❌ | ❌ | ❌ | ✅ |
| Works with Aider | ❌ | ❌ | ❌ | ❌ | ✅ |
| Universal CLI | ❌ | ❌ | ❌ | ❌ | ✅ |
| 4-Layer Architecture | ❌ | ❌ | ❌ | ❌ | ✅ |
| Pattern Learning | ❌ | ❌ | ❌ | ❌ | ✅ |
| Multi-Profile Support | ❌ | ❌ | ❌ | Partial | ✅ |
| Knowledge Graphs | ✅ | ✅ | ❌ | ❌ | ✅ |
| 100% Local | ❌ | ❌ | Partial | Partial | ✅ |
| Zero Setup | ❌ | ❌ | ❌ | ❌ | ✅ |
| Progressive Compression | ❌ | ❌ | ❌ | ❌ | ✅ |
| Completely Free | Limited | Limited | Partial | ✅ | ✅ |

SuperLocalMemory V2 is the ONLY solution that:

  • ✅ Works across 8+ IDEs and CLI tools
  • ✅ Remains 100% local (no cloud dependencies)
  • ✅ Completely free with unlimited memories

See full competitive analysis →


Multi-Layer Memory Architecture

┌─────────────────────────────────────────────────────────────┐
│  Layer 9: VISUALIZATION (NEW v2.2.0)                        │
│  Interactive dashboard: timeline, search, graph explorer    │
│  Real-time analytics and visual insights                    │
├─────────────────────────────────────────────────────────────┤
│  Layer 8: HYBRID SEARCH (NEW v2.2.0)                        │
│  Combines: Semantic + FTS5 + Graph traversal                │
│  80ms response time with maximum accuracy                   │
├─────────────────────────────────────────────────────────────┤
│  Layer 7: UNIVERSAL ACCESS                                  │
│  MCP + Skills + CLI (works everywhere)                      │
│  11+ IDEs with single database                              │
├─────────────────────────────────────────────────────────────┤
│  Layer 6: MCP INTEGRATION                                   │
│  Model Context Protocol: 6 tools, 4 resources, 2 prompts    │
│  Auto-configured for Cursor, Windsurf, Claude               │
├─────────────────────────────────────────────────────────────┤
│  Layer 5: SKILLS LAYER                                      │
│  6 universal slash-commands for AI assistants               │
│  Compatible with Claude Code, Continue, Cody                │
├─────────────────────────────────────────────────────────────┤
│  Layer 4: PATTERN LEARNING                                  │
│  Learns: coding style, preferences, terminology             │
│  "You prefer React over Vue" (73% confidence)               │
├─────────────────────────────────────────────────────────────┤
│  Layer 3: KNOWLEDGE GRAPH                                   │
│  Auto-clusters: "Auth & Tokens", "Performance", "Testing"   │
│  Discovers relationships you didn't know existed            │
├─────────────────────────────────────────────────────────────┤
│  Layer 2: HIERARCHICAL INDEX                                │
│  Tree structure for fast navigation                         │
│  O(log n) lookups instead of O(n) scans                     │
├─────────────────────────────────────────────────────────────┤
│  Layer 1: RAW STORAGE                                       │
│  SQLite + Full-text search + TF-IDF vectors                 │
│  Compression: 60-96% space savings                          │
└─────────────────────────────────────────────────────────────┘

Knowledge Graph (It's Magic)

# Build the graph from your memories
python ~/.claude-memory/graph_engine.py build

# Output:
# ✓ Processed 47 memories
# ✓ Created 12 clusters:
#   - "Authentication & Tokens" (8 memories)
#   - "Performance Optimization" (6 memories)
#   - "React Components" (11 memories)
#   - "Database Queries" (5 memories)
#   ...

The graph automatically discovers relationships. Ask "what relates to auth?" and get JWT, session management, and token refresh, even if you never tagged them together.
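As a toy illustration of how such relationships can emerge without manual tags, the sketch below groups memories into connected components whenever they share enough terms. The benchmark table mentions the real engine uses Leiden clustering; this is only a conceptual stand-in:

```python
from collections import defaultdict

def cluster_by_shared_terms(memories, min_shared=2):
    """Cluster memories (union-find) when two share >= min_shared terms."""
    toks = [set(m.lower().split()) for m in memories]
    parent = list(range(len(memories)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    # Link every pair with sufficient term overlap.
    for i in range(len(memories)):
        for j in range(i + 1, len(memories)):
            if len(toks[i] & toks[j]) >= min_shared:
                parent[find(i)] = find(j)

    clusters = defaultdict(list)
    for i in range(len(memories)):
        clusters[find(i)].append(i)
    return list(clusters.values())

memories = [
    "jwt token refresh for auth service",
    "auth service session token expiry",
    "react component render performance",
]
groups = cluster_by_shared_terms(memories)
```

The two auth-related memories land in one cluster despite never being tagged, which is the effect described above; real community detection handles this at scale with weighted edges.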

Pattern Learning (It Knows You)

# Learn patterns from your memories
python ~/.claude-memory/pattern_learner.py update

# Get your coding identity
python ~/.claude-memory/pattern_learner.py context 0.5

# Output:
# Your Coding Identity:
# - Framework preference: React (73% confidence)
# - Style: Performance over readability (58% confidence)
# - Testing: Jest + React Testing Library (65% confidence)
# - API style: REST over GraphQL (81% confidence)

Your AI assistant can now match your preferences automatically.
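Confidence percentages like the ones above can come from something as simple as frequency share: how often one option appears among all observed mentions. A simplified stand-in for the pattern learner's scoring (the actual algorithm is not shown in this README; the sample data is hypothetical):

```python
from collections import Counter

def preference_confidence(observations):
    """Return the most frequent option and its share of all observations."""
    counts = Counter(observations)
    top, hits = counts.most_common(1)[0]
    return top, hits / sum(counts.values())

# Framework mentions extracted from saved memories (made-up data).
framework, conf = preference_confidence(
    ["react"] * 8 + ["vue"] * 2 + ["svelte"]
)
```

Eight React mentions out of eleven gives roughly 73% confidence, matching the flavor of the output shown above.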

# Work profile
superlocalmemoryv2:profile create work --description "Day job"
superlocalmemoryv2:profile switch work

# Personal projects
superlocalmemoryv2:profile create personal
superlocalmemoryv2:profile switch personal

# Client projects (completely isolated)
superlocalmemoryv2:profile create client-acme

Each profile has isolated memories, graphs, and patterns. No context bleeding.



# Memory Operations
superlocalmemoryv2:remember "content" --tags tag1,tag2  # Save memory
superlocalmemoryv2:recall "search query"                 # Search
superlocalmemoryv2:list                                  # Recent memories
superlocalmemoryv2:status                                # System health

# Profile Management
superlocalmemoryv2:profile list                          # Show all profiles
superlocalmemoryv2:profile create <name>                 # New profile
superlocalmemoryv2:profile switch <name>                 # Switch context

# Knowledge Graph
python ~/.claude-memory/graph_engine.py build            # Build graph
python ~/.claude-memory/graph_engine.py stats            # View clusters
python ~/.claude-memory/graph_engine.py related --id 5   # Find related

# Pattern Learning
python ~/.claude-memory/pattern_learner.py update        # Learn patterns
python ~/.claude-memory/pattern_learner.py context 0.5   # Get identity

# Reset (Use with caution!)
superlocalmemoryv2:reset soft                            # Clear memories
superlocalmemoryv2:reset hard --confirm                  # Nuclear option


| Metric | Result | Notes |
|---|---|---|
| Hybrid search | 80ms | Semantic + FTS5 + Graph combined |
| Semantic search | 45ms | 3.3x faster than v1 |
| FTS5 search | 30ms | Exact phrase matching |
| Graph build (100 memories) | | Leiden clustering |
| Pattern learning | | Incremental updates |
| Dashboard load | | 1,000 memories |
| Timeline render | | All memories visualized |
| Storage compression | 60-96% reduction | Progressive tiering |
| Memory overhead | | Lightweight |

Tested up to 10,000 memories with sub-second search times and linear scaling.


We welcome contributions! See CONTRIBUTING.md for guidelines.

Areas for contribution:

  • Additional pattern categories
  • Graph visualization UI
  • Integration with more AI assistants
  • Performance optimizations
  • Documentation improvements

💖 Support This Project

If SuperLocalMemory saves you time, consider supporting its development:


MIT License: use freely, even commercially. Just include the license.


Varun Pratap Bhardwaj, Solution Architect

GitHub

Building tools that make AI actually useful for developers.


100% local. 100% private. 100% yours.


Star on GitHub


