One of the limitations of working with AI assistants is that they don't remember anything between sessions. Every conversation starts from scratch. I wanted to fix that, not by relying on a platform's built-in memory feature, but by building something I own and control. The idea came from Nate B. Jones and his Open Brain guide, which laid out the architecture and got me started. The result is Open Brain: a persistent memory system that lets me capture thoughts, ideas, and context from Discord and retrieve them through any MCP-compatible AI client.
What Is Open Brain?
Open Brain is a personal knowledge capture and retrieval system built on three components working together: a Supabase backend with pgvector for semantic search, a Python Discord bot for capturing thoughts on the fly, and a custom MCP (Model Context Protocol) server that exposes retrieval tools to AI clients like Claude.
The idea is simple: when I have a thought, task, or piece of context I want my AI assistant to know about, I drop it into a Discord channel. The bot picks it up, embeds it, stores it in Supabase, and from that point forward Claude can search it semantically. It bridges the gap between "things I think of throughout the day" and "context my AI actually has access to."
The Stack
Supabase (Postgres + pgvector)
Supabase serves as the persistent data layer. It provides a managed Postgres database with the pgvector extension enabled, which means each captured thought is stored alongside its vector embedding for semantic similarity search. Rather than needing to stand up and maintain a dedicated vector database, Supabase gives you relational storage and vector search in one place on a generous free tier.
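The mechanics of semantic similarity search are easy to sketch: each thought's embedding is a vector, and a search ranks stored vectors by their distance to the query's vector. pgvector does this in-database (its `<=>` operator computes cosine distance); here's a minimal pure-Python equivalent, using toy 3-dimensional vectors in place of real embedding-model output:

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """Cosine distance, the same quantity pgvector's <=> operator computes: 1 - cos(a, b)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def top_k(query: list[float], rows: list[tuple[str, list[float]]], k: int = 3) -> list[str]:
    """Return the k stored thoughts whose embeddings are closest to the query embedding."""
    ranked = sorted(rows, key=lambda row: cosine_distance(query, row[1]))
    return [text for text, _ in ranked[:k]]

# Toy vectors standing in for real embeddings.
stored = [
    ("rotate proxmox backups weekly", [0.9, 0.1, 0.0]),
    ("try a new pasta recipe", [0.0, 0.2, 0.9]),
]
print(top_k([1.0, 0.0, 0.0], stored, k=1))  # the backup note ranks first
```

In production, pgvector does this ranking inside Postgres with an index, so the thoughts never have to be pulled out and scored in application code.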
The database schema is straightforward: a thoughts table that holds the raw text, metadata like type and topic, the vector embedding, and a timestamp. Supabase also hosts Edge Functions, which handle the actual embedding generation and similarity search logic server-side.
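The shape of that table can be sketched as the record assembled before insertion. The field names below are illustrative rather than the actual column names, and the 1536-dimension size assumes an OpenAI-style embedding model:

```python
from datetime import datetime, timezone

EMBEDDING_DIM = 1536  # assumption: OpenAI-style embedding dimensionality

def build_thought_row(content, thought_type, topic, embedding):
    """Assemble a thoughts-table row: raw text, metadata, vector embedding, timestamp."""
    if len(embedding) != EMBEDDING_DIM:
        raise ValueError(f"expected a {EMBEDDING_DIM}-dim embedding, got {len(embedding)}")
    return {
        "content": content,
        "type": thought_type,            # e.g. idea / task / question
        "topic": topic,
        "embedding": embedding,          # stored in a pgvector column
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

row = build_thought_row("wire Obsidian into Open Brain", "idea", "ai-tooling", [0.0] * EMBEDDING_DIM)
```

Keeping the embedding in the same row as the text and metadata is what lets a single pgvector query return the thought itself, not just a match ID.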
Python Discord Bot (on Proxmox LXC)
The Discord bot is the capture interface. It runs in a lightweight LXC container on my Proxmox homelab server and listens for messages in a dedicated channel. When I drop a thought into that channel, the bot classifies it, calls the Supabase Edge Function to generate an embedding, and writes the record to the database.
Discord makes sense as the capture interface because it's already open on my phone and desktop throughout the day. It's lower friction than opening a notes app or a web UI. I just type a message and the bot handles the rest. The bot also supports slash commands for listing and searching stored thoughts directly from Discord.
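The bot's per-message flow (classify, embed, store) can be separated from the discord.py event wiring and sketched as a plain handler. Here `embed_fn` and `insert_fn` stand in for the Edge Function call and the Supabase write, and the keyword classifier is a stand-in for whatever the real bot does:

```python
def classify(text: str) -> str:
    """Stand-in classifier; the real bot's logic may differ."""
    lowered = text.lower()
    if lowered.startswith(("todo", "task")):
        return "task"
    if lowered.endswith("?"):
        return "question"
    return "idea"

def handle_capture(text: str, embed_fn, insert_fn) -> dict:
    """Classify a captured message, embed it, and hand the record to storage."""
    record = {"content": text, "type": classify(text), "embedding": embed_fn(text)}
    insert_fn(record)
    return record

# Exercising the handler with stubs in place of the Edge Function and database:
stored = []
row = handle_capture("todo: snapshot the LXC before upgrading", lambda t: [0.1, 0.2], stored.append)
print(row["type"])  # task
```

In the actual bot, this handler would be called from an `on_message` listener scoped to the capture channel, with the slash commands layered on top.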
Custom MCP Server
The MCP (Model Context Protocol) server is what makes the stored thoughts accessible to AI clients. MCP is an open protocol developed by Anthropic that lets AI assistants call external tools and retrieve context in a standardized way. By building a custom MCP server on top of Supabase, Claude can query my stored thoughts using semantic search during any conversation without me having to paste anything in manually.
The MCP server is hosted as a Supabase Edge Function, which keeps the whole retrieval layer serverless and co-located with the database. Once registered as a connector in Claude.ai, it shows up as a set of tools (search thoughts, list thoughts, and capture a thought) that Claude can call when relevant context might be useful.
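MCP's tool discovery and invocation ride on JSON-RPC, so the wire format is easy to illustrate. The `tools/list` and `tools/call` method names come from the MCP specification; the tool name and arguments below are assumptions mirroring the search tool described above:

```python
import json

# The client asks the server what tools it offers (MCP tool discovery).
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# The client invokes one of the discovered tools with arguments.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_thoughts",  # assumed tool name
        "arguments": {"query": "homelab backup plans", "limit": 5},
    },
}

print(json.dumps(call_request, indent=2))
```

Claude issues these calls itself once the connector is registered; the Edge Function only has to answer `tools/list` with its tool schemas and route each `tools/call` to the pgvector search.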
What Was Needed to Set It Up
The setup had a few distinct phases. On the Supabase side, the main tasks were creating the project, enabling the pgvector extension, defining the thoughts schema, and writing the Edge Functions for embedding generation and semantic search. Supabase's free tier handles all of this comfortably for personal-scale usage.
For the Discord bot, the main requirements were a Discord application and bot token from the developer portal, a Python environment with the discord.py library, and an LXC container on Proxmox to keep it running persistently. The bot itself is an event listener that calls the Supabase API when a message arrives in the designated channel.
Wiring up the MCP server required understanding the MCP specification and building an Edge Function that responds to the expected tool discovery and invocation patterns. Once deployed to Supabase, registering it with Claude.ai is just a matter of adding the Edge Function URL as a connector in settings; from there it shows up automatically as a set of available tools.
What I Learned
This project sits at the intersection of a few things I find genuinely interesting: agentic AI tooling, vector search, and self-hosted infrastructure. One of the more interesting takeaways is how low the barrier actually is once you have Supabase in place. pgvector and Edge Functions together give you most of what you'd need for a production RAG system, for free, without standing up any additional infrastructure.
The MCP layer is where things get interesting from an AI engineering perspective. Rather than relying on a platform to decide what context to inject into my conversations, I now have explicit control over what my AI assistant knows and when it retrieves it. That kind of transparency and ownership over the context layer feels like the right way to build personal AI tooling.
What's Next
A few directions I'm exploring to extend the system: ingesting my Obsidian notes as a RAG knowledge base alongside the captured thoughts, building a LangGraph-based homelab ops agent that also has access to Open Brain as a memory store, and eventually wiring it into a more general-purpose second brain alongside bookmarks and code snippets. The foundation is solid; it's mostly a matter of what gets piped in next.
