One store. Every MCP-compatible AI tool reads and writes to it. Switch tools; the context stays.
Early release: the architecture is solid; the community is just getting started.
git clone https://github.com/FlashQuery/flashquery
git push backs it up.
Nothing proprietary, no vendor-controlled backends, privacy-first, and portable. If FlashQuery disappeared tomorrow, your data remains yours.
Known limitations (early release): Markdown documents only for now. PDFs, Word docs, images, and other file types are on the roadmap. One vault folder hierarchy at a time; future support for multiple folder hierarchies.
The infrastructure layer is already built.
Drop a Markdown doc in the vault folder on your filesystem. FlashQuery sees it, watches for changes, and indexes it for search. Edit it in TextEdit, Obsidian, VS Code, Vim, Emacs, whatever you use. No import workflow, no sync daemon, no app to open.
ls it, grep it, git blame it. Your folders are the system.
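The drop-in workflow can be sketched with nothing but shell commands (the vault/ path is illustrative; yours is wherever you point FlashQuery):

```shell
# A vault is just a folder of Markdown files.
mkdir -p vault/notes
cat > vault/notes/standup.md <<'EOF'
# Standup notes
Decided to ship the beta on Friday.
EOF

ls vault/notes          # standup.md is immediately part of the vault
grep -rn "beta" vault   # ordinary tools keep working on it
```

FlashQuery picks the file up from the folder; there is no import step to run.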
Memory lives in Postgres, managed by FlashQuery. Cursor, ChatGPT, Claude, Qwen, local models — all of them can read and write memories. Switch tools or models; the context doesn't reset because it was never inside any of them. Scope memory to a conversation, a project, a person. Then query it like the database it is.
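Because memory is plain Postgres data, you can also inspect it directly. A hypothetical sketch — the table, columns, and scope format here are assumptions for illustration, not FlashQuery's actual schema:

```shell
psql flashquery -c "
  SELECT content, created_at
    FROM memories
   WHERE scope = 'project:website-redesign'
   ORDER BY created_at DESC
   LIMIT 10;"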
Your vault is a Git repo. Every file added, every edit, every decision is committed with a descriptive message. git log to see what changed. git diff to see exactly how. git revert to undo anything the AI touched. The result is an auditable record of every change.
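The audit trail is ordinary Git, so the whole loop can be sketched in a throwaway repo (FlashQuery does the committing for you; this just shows the mechanics):

```shell
# Demo repo standing in for a vault.
git init -q audit-demo
git -C audit-demo config user.email demo@example.com
git -C audit-demo config user.name "Demo"

echo "Store documents as Markdown." > audit-demo/decision.md
git -C audit-demo add decision.md
git -C audit-demo commit -qm "Add decision doc"

echo "Store documents in the database." > audit-demo/decision.md
git -C audit-demo commit -aqm "AI: revise decision doc"

git -C audit-demo log --oneline           # what changed
git -C audit-demo diff HEAD~1 HEAD        # exactly how
git -C audit-demo revert --no-edit HEAD   # undo the AI's edit
cat audit-demo/decision.md                # back to the original line
```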
Section-aware document tools. AI can get a synopsis without loading the whole document, insert a paragraph without rewriting the rest. Your token budget goes to reasoning instead of parsing. The model reads only what it needs.
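On the wire this is an ordinary MCP tools/call request. The tool name and arguments below are hypothetical, sketching what a synopsis call could look like:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "document_synopsis",
    "arguments": { "path": "product/architecture.md" }
  }
}
```

The response can carry just the section summaries, so the model works from a compact view rather than the full file.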
Claude Desktop, Claude Code, Cursor, ChatGPT Desktop, and any other MCP client — one config entry per tool, same vault, same memory, same plugins behind all of them. Add a plugin once and every connected tool sees its new capabilities automatically. No per-tool integration work to maintain.
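For Claude Desktop, that one config entry lives in claude_desktop_config.json. The command name and flags here are assumptions (check the README for the real invocation):

```json
{
  "mcpServers": {
    "flashquery": {
      "command": "flashquery",
      "args": ["serve", "--vault", "/path/to/vault"]
    }
  }
}
```

Other MCP clients take the same server definition in their own config files.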
FlashQuery is an MCP server. Point a Cloudflare Tunnel (or any reverse tunnel) at it and your entire data layer — vault, memory, plugins — is reachable from anywhere over a private connection you control. Your files stay on local disk. Nothing changes about how your AI tools connect to it.
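With Cloudflare's quick tunnels, exposing the server can be a single command (the port is a guess; use whatever FlashQuery actually listens on):

```shell
cloudflared tunnel --url http://localhost:8000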
Full architecture, plugin spec, and MCP tool surface: github.com/FlashQuery/flashquery
Developers call it context loss. You re-explain yourself every session and copy-paste between tools. The context that should carry over... doesn't.
"A glorified human clipboard. Copy, paste, repeat. Copy, paste, cry a little."
— Developer post, builder.io
"My workflow is 70% AI, 20% copy-paste, 10% panic."
— Hacker News
Developers have been independently building persistent-memory systems for their AI tools. At least a dozen HN posts. Each solves part of the problem, but none spans all of it.
FlashQuery is what happens when that layer gets built once, properly, in the open.
A plugin brings a schema and skills. FlashQuery provides the storage, versioning, search, and MCP surface.
Contacts, businesses, opportunities, interactions, all in FlashQuery's data layer. No separate database, no login, no UI.
"I just had coffee with Sarah Chen, VP of Design at Meridian Labs."
FlashQuery creates a contact record in Postgres, writes a vault document at crm/contacts/sarah-chen.md with the meeting notes, and commits the change to Git. Next week: "brief me before my Meridian call" pulls the contact record, the vault doc, any logged interactions, and the opportunity status.
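The vault document might look something like this sketch (the layout and fields are invented for illustration, not FlashQuery's actual template):

```markdown
# Sarah Chen

- Title: VP of Design
- Company: Meridian Labs
- Source: coffee meeting

## Meeting notes

Notes captured from the conversation go here.
```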
A durable, queryable knowledge layer for the product you're building: definitions, research, decisions, requirements, tracking.
"Why did we decide to store documents in Markdown instead of the database?"
Answer sourced to the original decision document. Structured data and documents live in the same system, so the query hits both.
A skill registry built as a FlashQuery plugin. Your skills directory with versioning, semantic discovery, and usage tracking so you can improve them over time.
Designed, on the roadmap.
The plugin spec is open. Browse the plugins repo · Join #flashquery on AI Product Hive if you're building one.
FlashQuery runs locally. CRM and Product Brain are the two plugins shipping today. Here's what a working session looks like.
FlashQuery is a scoped server process. No cloud dependency, no telemetry, no account required. Connect it as a local MCP server and Claude Desktop, Claude Code, or Cursor can read and write your vault.
Stand up FlashQuery on a VPS or personal Supabase project you control. Same vault, same plugins, reachable from every machine. Nothing hosted by FlashQuery.
PDFs, Word docs, spreadsheets, same drop-in workflow.
Drop a non-Markdown file in the vault. FlashQuery indexes it so your AI tools get structured context, not a raw binary. Same folder, more file types, no extra setup.
Plugins from the community.
The CRM and Product Brain plugins are the first two. The spec is open and designed for anyone to build against.
Describe what you want to track. FlashQuery builds the schema.
The goal: describe what you need in plain English. FlashQuery designs the Postgres schema, creates the tables, wires up the MCP tools. Export the result as a shareable plugin.
Document projections: the model reads a summary, not the whole file.
A lightweight model preprocesses vault documents into compact, skill-specific views. The model works from that view instead of the raw file. Each skill gets the view of the document it needs. Token budget spent on reasoning, not parsing.
Multiple vault folders.
Today FlashQuery manages one vault folder hierarchy at a time. Support for multiple folder hierarchies is coming — separate vaults for separate projects, all reachable through the same MCP server.
git clone https://github.com/FlashQuery/flashquery
Full setup, plugin spec, and architecture notes live in the repo.
View on GitHub. Works with any MCP-compatible tool. Start with the README.
Find us in #flashquery on AI Product Hive.