A self-hosted MCP server
for portable AI context.
Fork it, deploy it to Cloudflare, and every AI tool you connect instantly has access to the same confirmed facts, ongoing projects, and session continuity — without re-explaining yourself every time.
How it works
Limitless sits between your AI tools and your context. It's not a replacement for native memory — it supplements it. When an AI tool connects via MCP, it pulls confirmed context from your personal store and injects it into the session. It runs on Cloudflare's global network — meaning low latency wherever you are, with the uptime and security guarantees of edge infrastructure.
Supplement model
Works alongside native AI memory, not instead of it. Limitless handles the portable layer; each tool handles its own session context.
Confirmed flag
Entries you've explicitly confirmed (a set confirmed_at timestamp) are asserted confidently. Unconfirmed entries are surfaced with appropriate hedging.
Staleness awareness
Fact-type entries (job, location, relationships) are flagged for re-confirmation after ~12–18 months. The AI won't assert stale facts — it asks.
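The confirmed/staleness rule above can be sketched as a small decision function. This is illustrative only: the field names (confirmed_at, type), the ~15-month midpoint, and which types count as "fact-like" are assumptions, not the exact schema.

```typescript
// Sketch of the staleness rule: unconfirmed entries are hedged, fresh
// confirmations are asserted, and stale fact-type entries trigger a
// re-confirmation question instead of a confident assertion.
const STALE_AFTER_MS = 1000 * 60 * 60 * 24 * 456; // ~15 months (assumed midpoint of 12–18)

type Assertion = "assert" | "hedge" | "reconfirm";

function assertionMode(
  entry: { confirmed_at?: string; type: string },
  now: Date,
): Assertion {
  if (!entry.confirmed_at) return "hedge"; // never confirmed → hedge
  const age = now.getTime() - Date.parse(entry.confirmed_at);
  const isFact = entry.type === "identity"; // assumption: identity entries hold job/location facts
  if (isFact && age > STALE_AFTER_MS) return "reconfirm"; // ask, don't assert
  return "assert";
}
```

Re-confirming simply resets the clock: a fresh confirmed_at moves the entry back into "assert" territory.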
Encrypted at rest
All content is AES-GCM encrypted before storage. Your encryption key is derived from your identity at login — only you can decrypt your data.
Isolated execution
Limitless runs on Cloudflare Workers, which uses an isolated, per-request execution model — each invocation spins up a fresh context with no persistent memory. Plaintext is never accumulated between requests. Combined with AES-GCM encryption at rest, the hosting provider cannot read stored payloads.
Namespace scoping
Entries belong to work, personal, or shared namespaces. Set your session namespace once in the system prompt; searches and stores filter automatically. shared entries load in all sessions.
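The scoping rule is simple enough to state as a filter, sketched below with assumed field names: entries in the session's namespace are visible, plus everything in shared.

```typescript
// Illustrative namespace filter: a session sees its own namespace plus "shared".
type Namespace = "work" | "personal" | "shared";

interface Entry {
  id: string;
  namespace: Namespace;
  content: string;
}

function visibleEntries(entries: Entry[], session: Namespace): Entry[] {
  return entries.filter(e => e.namespace === session || e.namespace === "shared");
}
```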
Relationship graph
Entries are connected by typed, temporal relationships — uses_framework, priced_from, decided_by, supersedes, and more. Walk the graph with explore_context to pull everything related to a client, project, or decision.
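A sketch of what explore_context-style traversal might do over those edges: a bounded, bidirectional walk that collects everything within a few hops of a starting entry. The edge shape and depth limit are assumptions; the edge type names come from the text above.

```typescript
// Bounded breadth-first walk over typed edges, following them in either
// direction, so "acme priced_from retainer" also surfaces retainer → acme.
interface Edge {
  from: string;
  to: string;
  type: string; // e.g. "uses_framework", "priced_from", "decided_by", "supersedes"
}

function explore(start: string, edges: Edge[], depth = 2): Set<string> {
  const seen = new Set([start]);
  let frontier = [start];
  for (let d = 0; d < depth; d++) {
    const next: string[] = [];
    for (const node of frontier) {
      for (const e of edges) {
        const neighbor = e.from === node ? e.to : e.to === node ? e.from : null;
        if (neighbor && !seen.has(neighbor)) {
          seen.add(neighbor);
          next.push(neighbor);
        }
      }
    }
    frontier = next;
  }
  return seen;
}
```

Starting from a client entry, two hops is typically enough to pull in its projects and the decisions behind them.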
Progressive loading
Call bootstrap_session once at session start to load identity, rules, and active projects (~800-1500 tokens). Everything else loads on demand via search_memory or explore_context.
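The loading pattern is one cheap upfront call, then lazy retrieval. Sketched below with an assumed generic tool-call interface (the client API shape is not specified by the source; tool names are from the text):

```typescript
// Illustrative session start: one bootstrap_session call up front,
// everything else deferred to search_memory / explore_context on demand.
type ToolCall = (tool: string, args: object) => Promise<unknown>;

async function startSession(call: ToolCall, namespace: string) {
  // Loads identity, rules, and active projects (~800–1500 tokens).
  const bootstrap = await call("bootstrap_session", { namespace });
  return bootstrap; // further context loads lazily as the conversation needs it
}
```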
Using Limitless
Limitless supports two integration paths: MCP clients (Claude Desktop, Claude Code, any MCP-native tool) connect plug-and-play via the MCP config. Non-MCP tools (ChatGPT, Gemini, any web UI) use the system prompt / custom instructions snippet below — this is a fully supported path.
Connecting an MCP client
Limitless works with any MCP-compatible AI client — Claude Code, Claude Desktop, and others. Once connected, your client can read and write to your personal context store automatically. No manual copy-pasting. No re-explaining yourself every session.
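Config formats vary by client. As an illustration, a Claude Desktop entry for a remote server might look like the following — the server name, Worker URL, endpoint path, and use of the mcp-remote bridge are all placeholders, not documented values:

```json
{
  "mcpServers": {
    "limitless": {
      "command": "npx",
      "args": ["mcp-remote", "https://your-worker.example.workers.dev/mcp"]
    }
  }
}
```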
Adding context
Limitless organizes entries into 9 domain types:
- identity (who you are)
- rules (behavioral directives)
- catalog (service offerings)
- framework (methodologies)
- decision (decisions with rationale and audit chain)
- project (active work)
- handoff (cross-session tasks)
- resource (templates, URIs)
- memory (catch-all)

Decisions support a supersedes chain for explicit audit trails. Confirm an entry with confirmed_at and the AI will assert it as a current fact.
Good candidates for Limitless: your current role, location, ongoing projects, communication preferences, recurring relationships. Things that are true across sessions and worth carrying into every conversation.
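A hypothetical shape for a new entry, reflecting the domain types and fields described above; the exact tool signature and validation rules are assumptions for the sketch.

```typescript
// Illustrative entry shape and a minimal validation pass.
type DomainType =
  | "identity" | "rules" | "catalog" | "framework" | "decision"
  | "project" | "handoff" | "resource" | "memory";

interface NewEntry {
  type: DomainType;
  namespace: "work" | "personal" | "shared";
  content: string;
  confirmed_at?: string; // ISO timestamp; set it to assert as a current fact
  supersedes?: string;   // decision entries only: id of the replaced decision
}

function validateEntry(e: NewEntry): string[] {
  const errors: string[] = [];
  if (!e.content.trim()) errors.push("content must be non-empty");
  if (e.supersedes && e.type !== "decision") errors.push("supersedes is decision-only");
  return errors;
}
```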
System prompt snippets
Add one of these to your AI tool's system prompt to activate Limitless context retrieval.
Call bootstrap_session with the appropriate namespace before any task. Use limitless-mcp for all context — do not assert facts without searching. Cross-namespace writes go through handoffs, never direct mutations. If a Limitless entry conflicts with native memory, prefer the source with the newer timestamp and surface the discrepancy.
You have a Limitless memory tool. Use it at conversation start to retrieve
context about the user. Confirmed entries are facts; surface unconfirmed
entries with appropriate hedging ("I have you as X — is that still current?").
Staleness and re-confirmation
When you confirm an entry (confirmed_at is set), the AI treats it as a
current fact. Over time — typically 12–18 months for role or location entries — the AI
is instructed to ask whether the fact is still accurate rather than asserting it
confidently. Use update_entry with a fresh confirmed_at
timestamp to re-confirm. Unconfirmed entries are always surfaced with hedging.
Conflict resolution
When Limitless entries conflict with an AI tool's native memory, prefer the source with the newer timestamp. The system prompt instructs the AI to compare timestamps and surface discrepancies rather than silently picking one.
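The timestamp rule is easy to state as a tiny pure function. Field names here (value, updated_at) are assumptions; the point is that the conflict is flagged for the user, never resolved silently.

```typescript
// Newer timestamp wins; the conflict flag is surfaced, not swallowed.
interface Claim {
  source: "limitless" | "native";
  value: string;
  updated_at: string; // ISO timestamp
}

function resolve(a: Claim, b: Claim): { winner: Claim; conflict: boolean } {
  const conflict = a.value !== b.value;
  const winner = Date.parse(a.updated_at) >= Date.parse(b.updated_at) ? a : b;
  return { winner, conflict };
}
```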
Admin interface
Visit /admin on your deployed Worker to browse and manage entries without
going through an AI. Filter by namespace, pin entries, assign namespaces to existing
entries, and delete old ones. Authenticate with the same Google account.
Changelog
- bootstrap_session and explore_context tools; decision chains with supersedes; bulk import API; namespace-aware graph traversal
- Admin UI at /admin; REST API for programmatic access (/api/entries)
- Pinned context entries (get_pinned_context); resource registry (get_resource); staleness tracking
Roadmap
Limitless is in active development. Shipped items and upcoming work:
- ✓ Core MCP tools (store, search, retrieve)
- ✓ Vectorize semantic search
- ✓ Google OAuth authentication
- ✓ AES-GCM content encryption
- ✓ Confirmed flag + staleness logic
- ✓ Handoff system for session continuity
- ✓ Namespace scoping (work / personal / shared)
- ✓ Pinned context entries (get_pinned_context)
- ✓ Resource registry (get_resource)
- ✓ REST API (GET/PATCH/DELETE /api/entries)
- ✓ Admin UI at /admin (entry browser + management)
- ✓ Provider column — Google OAuth supported; schema ready for additional providers
- ✓ 9 domain types (identity, rules, catalog, framework, decision, project, handoff, resource, memory)
- ✓ Relationship graph with typed, temporal edges
- ✓ bootstrap_session + explore_context tools
- ✓ Bulk import / migration tooling (POST /api/entries/bulk)
- → Onboarding interview flow
- → Memory import prompts (Obsidian vaults, ChatGPT/Gemini export)
- → Automatic relationship discovery