MCP data layer for shared access to memory + documents + structured data

Local, Open Source, backed by Supabase and Git.

One store. Every MCP-compatible AI tool reads and writes to it. Switch tools; the context stays.

Early release: the architecture is solid; the community is just getting started.

git clone https://github.com/FlashQuery/flashquery
Get Started
FlashQuery architecture: AI clients connect via MCP to FlashQuery, which routes to Supabase (Postgres + pgvector), a Markdown vault, and a Git repo

What it is

  • A server process you run locally, configured via YAML.
  • An MCP server. One interface to your data for any MCP-compatible tool.
  • A Supabase-backed store for relational data, memory, and vectors.
  • A Markdown vault, a folder of plain text, editable in anything.
  • A Git repo underneath. Every change versioned. git push backs it up.
  • Plugin-extensible. Each plugin brings its own schema, skills, and rules.
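To make "a server process you run locally, configured via YAML" concrete, here is a minimal sketch of what such a config might look like. Every key and value below is an assumption for illustration; the README documents the real config surface.

```shell
# Hypothetical flashquery.yaml -- key names and structure are assumptions,
# not FlashQuery's documented config format.
cat > flashquery.yaml <<'EOF'
vault:
  path: ~/vault                                # the Markdown folder FlashQuery watches
database:
  url: postgres://localhost:5432/flashquery    # local Supabase Postgres
plugins:
  - crm
  - product-brain
EOF
```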

Nothing proprietary, no vendor-controlled backends, privacy-first, and portable. If FlashQuery disappeared tomorrow, your data remains yours.

What it isn't

  • Not a SaaS. Not an agent framework. Not memory-only.
  • Not a replacement for Claude, Cursor, or any LLM. It sits between you and whichever model you use.
  • Not a note-taking app. Obsidian is still Obsidian.

Known limitations (early release): Markdown documents only for now; PDFs, Word docs, images, and other file types are on the roadmap. One vault folder hierarchy at a time, with support for multiple hierarchies planned.

What FlashQuery handles so you don't have to

The infrastructure layer is already built.

Drop a markdown doc in the vault folder on your filesystem. FlashQuery sees it, watches for changes, indexes it for search. Edit it in TextEdit, Obsidian, VS Code, Vim, Emacs, whatever you use. No import workflow or sync daemon, and no app to open. ls it, grep it, git blame it. Your folders are the system.
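The "your folders are the system" claim can be sketched in a few commands. The folder and file names below are illustrative; any layout works.

```shell
# Illustrative vault layout -- folder and file names are yours to choose.
mkdir -p vault/notes
cat > vault/notes/kickoff.md <<'EOF'
# Kickoff meeting
Decided to store documents as plain Markdown.
EOF

ls vault/notes                    # plain files, visible to any editor or tool
grep -rl "Markdown" vault/notes   # search with ordinary Unix tools
```

Nothing here is FlashQuery-specific: the same files that ordinary tools see are what FlashQuery indexes.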

Memory lives in Postgres, managed by FlashQuery. Cursor, ChatGPT, Claude, Qwen, local models — all of them can read and write memories. Switch tools or models; the context doesn't reset because it was never inside any of them. Scope memory to a conversation, a project, a person. Then query it like the database it is.
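"Query it like the database it is" might look like the sketch below. The table name, columns, and scope format are assumptions for illustration; FlashQuery's actual memory schema may differ.

```shell
# Hypothetical memory schema -- table, columns, and scope format are
# assumptions, not FlashQuery's documented schema.
cat > brief.sql <<'EOF'
SELECT content, created_at
FROM memories
WHERE scope = 'project:flashquery'
ORDER BY created_at DESC
LIMIT 5;
EOF

# Run it against the Postgres instance behind your local Supabase:
# psql "$SUPABASE_DB_URL" -f brief.sql
```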

Your vault is a Git repo. Every file added, every edit, every decision is committed to Git with a descriptive message. git log to see what changed. git diff to see exactly how. git revert to undo anything the AI touched. Your vault is an auditable record of changes.
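The audit trail is plain Git, so the workflow can be demonstrated with a throwaway repo standing in for the vault. The commit messages and identity below are illustrative.

```shell
# Throwaway repo standing in for the vault -- messages and identity are illustrative.
mkdir -p vault-demo
git -C vault-demo init -q
git -C vault-demo -c user.email=ai@example.com -c user.name=FlashQuery \
  commit --allow-empty -q -m "init vault"

echo "AI-edited note" > vault-demo/note.md
git -C vault-demo add note.md
git -C vault-demo -c user.email=ai@example.com -c user.name=FlashQuery \
  commit -q -m "Add meeting notes"

git -C vault-demo log --oneline     # what changed
git -C vault-demo show --stat HEAD  # exactly how
git -C vault-demo -c user.email=ai@example.com -c user.name=FlashQuery \
  revert --no-edit HEAD             # undo the AI's last change
```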

Section-aware document tools. AI can get a synopsis without loading the whole document, insert a paragraph without rewriting the rest. Your token budget goes to reasoning instead of parsing. The model reads only what it needs.

Claude Desktop, Claude Code, Cursor, ChatGPT Desktop, and any other MCP client — one config entry per tool, same vault, same memory, same plugins behind all of them. Add a plugin once and every connected tool sees its new capabilities automatically. No per-tool integration work to maintain.
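The "one config entry per tool" step typically means an `mcpServers` entry in the client's config file (the shape below follows the common MCP client convention). The server name, command, and arguments are assumptions; check the FlashQuery README for the real values.

```shell
# Sketch of an MCP client entry -- server name, command, and args are
# assumptions; the mcpServers shape follows the common client convention.
cat > mcp-config.json <<'EOF'
{
  "mcpServers": {
    "flashquery": {
      "command": "flashquery",
      "args": ["serve", "--config", "flashquery.yaml"]
    }
  }
}
EOF
```

The same entry, pasted into each client's config, points every tool at the same vault and memory.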

FlashQuery is an MCP server. Point a Cloudflare Tunnel (or any reverse tunnel) at it and your entire data layer — vault, memory, plugins — is reachable from anywhere over a private connection you control. Your files stay on local disk. Nothing changes about how your AI tools connect to it.
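For the Cloudflare Tunnel route, a named-tunnel ingress config might look like the sketch below. The tunnel name, credentials path, hostname, and port are all placeholders; FlashQuery's listen port comes from your own config.

```shell
# Hypothetical cloudflared ingress config -- tunnel name, credentials path,
# hostname, and port are placeholders.
cat > cloudflared-config.yml <<'EOF'
tunnel: flashquery-tunnel
credentials-file: /home/me/.cloudflared/flashquery-tunnel.json
ingress:
  - hostname: flashquery.example.com
    service: http://localhost:8337
  - service: http_status:404
EOF

# cloudflared tunnel run flashquery-tunnel
```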

Full architecture, plugin spec, and MCP tool surface: github.com/FlashQuery/flashquery

You're not the only one doing this.

Developers call it context loss: you re-explain yourself every session and copy-paste between tools. The context that should carry over doesn't.

"A glorified human clipboard. Copy, paste, repeat. Copy, paste, cry a little."

— Developer post, builder.io

"My workflow is 70% AI, 20% copy-paste, 10% panic."

— Hacker News

Developers keep building this independently

Developers have been independently building persistent-memory systems for their AI tools; at least a dozen Hacker News posts describe one. Each solves part of the problem, but none spans all of it.

FlashQuery is what happens when that layer gets built once, properly, in the open.

Plugins add behavior while FlashQuery handles the infrastructure.

A plugin brings a schema and skills. FlashQuery provides the storage, versioning, search, and MCP surface.

CRM Plugin
shipping

Contacts, businesses, opportunities, interactions, all in FlashQuery's data layer. No separate database, no login, no UI.

"I just had coffee with Sarah Chen,
 VP of Design at Meridian Labs."

FlashQuery creates a contact record in Postgres, writes a vault document at crm/contacts/sarah-chen.md with the meeting notes, and commits the change to Git. Next week: "brief me before my Meridian call" pulls the contact record, the vault doc, any logged interactions, and the opportunity status.

Product Brain Plugin
shipping

A durable, queryable knowledge layer for the product you're building: definitions, research, decisions, requirements, tracking.

"Why did we decide to store documents
 in Markdown instead of the database?"

Answer sourced to the original decision document. Structured data and documents live in the same system, so the query hits both.

Skill Server Plugin
on the roadmap

A skill registry built as a FlashQuery plugin. Your skills directory with versioning, semantic discovery, and usage tracking so you can improve them over time.


The plugin spec is open. Browse the plugins repo · Join #flashquery on AI Product Hive if you're building one.

See it work.

FlashQuery runs locally. CRM and Product Brain are the two shipping plugins. Here's what a working session looks like.

Run it locally.

FlashQuery is a single local server process. No cloud dependency, no telemetry, no account required. Connect it as a local MCP server and Claude Desktop, Claude Code, or Cursor can read and write your vault.

Self-host in the cloud.

Stand up FlashQuery on a VPS or personal Supabase project you control. Same vault, same plugins, reachable from every machine. Nothing hosted by FlashQuery.

What's coming next

PDFs, Word docs, spreadsheets, same drop-in workflow.

Drop a non-Markdown file in the vault. FlashQuery indexes it so your AI tools get structured context, not a raw binary. Same folder, more file types, no extra setup.

Plugins from the community.

The CRM and Product Brain plugins are the first two. The spec is open and designed for anyone to build against.

Describe what you want to track. FlashQuery builds the schema.

The goal: describe what you need in plain English. FlashQuery designs the Postgres schema, creates the tables, wires up the MCP tools. Export the result as a shareable plugin.

Document projections: the model reads a summary, not the whole file.

A lightweight model preprocesses vault documents into compact, skill-specific views. The model works from that view instead of the raw file. Each skill gets the view of the document it needs. Token budget spent on reasoning, not parsing.

Multiple vault folders.

Today FlashQuery manages one vault folder hierarchy at a time. Support for multiple folder hierarchies is coming — separate vaults for separate projects, all reachable through the same MCP server.

FlashQuery is on GitHub.

git clone https://github.com/FlashQuery/flashquery

Full setup, plugin spec, and architecture notes live in the repo.

View on GitHub

Works with any MCP-compatible tool. Start with the README.

Find us in #flashquery on AI Product Hive.