Connect your sources. An LLM builds an organized wiki. Your AI agents read it like a library. Not a RAG pipeline — a wiki.
How it works
No RAG pipeline to build. No embeddings to manage. No infrastructure to operate.
Confluence, GitHub, Notion, Slack, monday.com, or plain uploads. Atlas reads the source and suggests a wiki structure.
The LLM reads everything, writes organized pages, and adds cross-references. Watch the wiki grow in real time.
Your agents read the wiki with MCP tools: wiki_cat, wiki_grep, wiki_ls. Like a library, not a search engine.
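The three tools behave like a filesystem over the wiki. As a rough illustration of their semantics only (not Atlas's implementation), here is what `wiki_ls`, `wiki_cat`, and `wiki_grep` would look like modeled over a local folder of Markdown pages; the `WIKI_ROOT` path and function bodies are hypothetical:

```python
# Illustrative sketch of the MCP tools' semantics, modeled over a
# local folder of Markdown pages. The real tools are served by Atlas
# over MCP; only the names and behavior are mirrored here.
import re
from pathlib import Path

WIKI_ROOT = Path("wiki")  # hypothetical local mirror of the wiki

def wiki_ls(subdir: str = ".") -> list[str]:
    """List pages and folders, like browsing a library's shelves."""
    return sorted(p.name for p in (WIKI_ROOT / subdir).iterdir())

def wiki_cat(page: str) -> str:
    """Read one full page: a complete document, not a scored chunk."""
    return (WIKI_ROOT / page).read_text(encoding="utf-8")

def wiki_grep(pattern: str) -> list[tuple[str, int, str]]:
    """Search every page; returns (page, line number, matching line)."""
    hits = []
    for path in WIKI_ROOT.rglob("*.md"):
        text = path.read_text(encoding="utf-8")
        for n, line in enumerate(text.splitlines(), 1):
            if re.search(pattern, line):
                hits.append((str(path.relative_to(WIKI_ROOT)), n, line))
    return hits
```

The point of the library analogy: the agent navigates and reads whole pages, instead of receiving similarity-ranked fragments.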
Features
Built for teams where some members are human and some are AI.
A SCHEMA.md file defines your wiki's DNA — folder structure, page templates, naming conventions. The LLM follows it on every ingestion.
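For illustration, a SCHEMA.md might look like the following. Every folder name, template field, and convention below is a made-up example, not a required format:

```markdown
# Wiki Schema

## Folders
- /architecture/: one page per service
- /runbooks/: one page per operational procedure
- /decisions/: ADR-style records, named ADR-NNN-title.md

## Page template
Every page starts with a Title, a two-to-three sentence Summary,
and a Last-verified date.

## Naming
kebab-case filenames; cross-references as relative Markdown links.
```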
Connect GitHub, Confluence, or Notion. When your docs change, the wiki updates automatically.
Agents contribute findings via wiki_contribute. Humans review and approve each contribution before it merges into the wiki.
Detect contradictions, stale pages, coverage gaps, orphaned pages, and broken cross-references automatically.
Track which pages agents read, when, and how often. Identify unused content and coverage gaps.
Sections marked human-edited are never overwritten by the LLM. Human intent is sacred.
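Atlas runs these health checks automatically. As a minimal sketch of just one of them, broken cross-reference detection, here is what the check could look like over a folder of Markdown pages; the helper name and link-matching regex are our own assumptions, not Atlas internals:

```python
# Hypothetical sketch of a broken cross-reference check over Markdown
# pages. Atlas performs this automatically; this stand-alone version
# only illustrates the idea.
import re
from pathlib import Path

# Matches [text](target) links, capturing the target without its #anchor.
LINK = re.compile(r"\[[^\]]*\]\(([^)#]+)(?:#[^)]*)?\)")

def broken_links(root: Path) -> list[tuple[str, str]]:
    """Return (page, target) pairs whose link target does not exist."""
    broken = []
    for page in root.rglob("*.md"):
        for target in LINK.findall(page.read_text(encoding="utf-8")):
            if target.startswith(("http://", "https://")):
                continue  # external links would be checked separately
            if not (page.parent / target).resolve().exists():
                broken.append((str(page.relative_to(root)), target))
    return broken
```

Stale pages, contradictions, and orphaned pages follow the same pattern: cheap structural scans over plain Markdown, which is only possible because the wiki is files, not vectors.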
For agents
Add the MCP server config and your agent has organizational knowledge. No SDK, no API keys, no pipeline.
Claude Code
Claude Desktop
Cursor
Any MCP client
```json
{
  "mcpServers": {
    "atlas-wiki": {
      "url": "https://your-api.up.railway.app/mcp/wiki-id"
    }
  }
}
```
Not just RAG
| Concern | Traditional RAG | monday Atlas |
|---|---|---|
| Storage | Vectors in a database | Markdown pages in folders |
| Processing | At query time (slow) | At write time (fast reads) |
| Output | Fragments scored by similarity | Complete, synthesized pages |
| Structure | Flat chunk store | Schema-driven wiki hierarchy |
| Cross-references | None | Automatic bidirectional links |
| Agent interface | Custom retrieval API | Standard MCP tools |
| Human readable? | Not really | Yes — it's a wiki |
| Cost model | Per-query embeddings + inference | Pay for ingestion. Reads are free. |
Pricing
Your agents read the wiki at zero cost. You only pay when the LLM writes.
Create your first wiki in 5 minutes. Connect sources. Watch the LLM build it. Let your agents read.
Get started free