Every engineering team eventually faces the same problem: knowledge lives in too many places. Design decisions are buried in Slack threads. Architecture diagrams rot in a Google Drive folder no one remembers sharing. Onboarding docs get written once and never updated. The information exists, but finding it when you need it takes longer than re-deriving it from scratch.

We tried the off-the-shelf options. Confluence is slow, cluttered with Jira integration chrome, and searches like it is 2009. Notion is beautiful for personal notes but falls apart as a team knowledge base because everything is a flat block with no real hierarchy. GitBook is read-only by design, aimed at public documentation rather than internal living knowledge. None of them gave us what we actually needed: a fast, Markdown-native wiki with real hierarchy, version history, bidirectional links, and deep AI integration.

So we built Codex.

93 MCP tools · 3 seed spaces · version history · `[[Wiki-Links]]`

What Codex Does

Codex is a team wiki and knowledge base organized around spaces, pages, and wiki-links. A space is a top-level container, similar to a Confluence space or a Notion workspace. Within each space, pages form a tree: every page can have child pages, creating arbitrarily deep hierarchies that mirror how engineers actually think about systems. You do not flatten a service architecture into a list of Notion blocks. You nest it: service, then endpoints, then data models, then deployment, each at the right level of detail.

Pages are written in Markdown. Not a proprietary block format, not a WYSIWYG editor that silently mangles your formatting. Plain Markdown with a live preview, so you can paste code blocks, write tables, and format text without fighting the editor. Every page stores its content as Markdown in PostgreSQL, which means full-text search works across the entire knowledge base without an external search index.
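Serving search straight from PostgreSQL might look like the sketch below. The table and column names (`pages`, `content_md`) are assumptions, not Codex's actual schema; the point is that `to_tsvector` and `@@` give ranked full-text search with no external index:

```python
# Hedged sketch of index-free full-text search in PostgreSQL.
# In production you would back this with a GIN index on the tsvector.
SEARCH_SQL = """
SELECT id, title,
       ts_rank(to_tsvector('english', content_md),
               plainto_tsquery('english', %(q)s)) AS rank
FROM pages
WHERE to_tsvector('english', content_md) @@ plainto_tsquery('english', %(q)s)
ORDER BY rank DESC
LIMIT 20;
"""

def search_params(query: str) -> dict[str, str]:
    """Bind parameters for the query above (e.g. via psycopg)."""
    return {"q": query}
```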

Version History with Diffs

Every edit creates a new version. Codex stores the full content of each version, not just diffs, so loading any historical state is a single read rather than a chain of patch applications. The version history view shows who changed what and when, with inline diffs that highlight exactly what moved between revisions. You can restore any previous version with one click.
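Because each version is a full snapshot, producing a diff is just a comparison of two stored strings, with no patch chain to replay. A minimal sketch using Python's standard `difflib` (illustrative, not Codex's diff engine):

```python
import difflib

def inline_diff(old: str, new: str) -> list[str]:
    """Return unified-diff lines between two full version snapshots."""
    return list(difflib.unified_diff(
        old.splitlines(), new.splitlines(),
        fromfile="v1", tofile="v2", lineterm="",
    ))

v1 = "1. Drain traffic\n2. Run migrations\n3. Deploy"
v2 = "1. Drain traffic\n2. Deploy"
for line in inline_diff(v1, v2):
    print(line)
```

Restoring a version under this model is equally simple: copy the old snapshot's content into a new head version, so the restore itself is recorded in history.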

This matters more than most teams realize. Without version history, a wiki is a liability. Someone edits the deployment runbook at 2 AM during an incident, removes a critical step, and nobody notices until the next deploy fails. With Codex, you see exactly what changed, when, and by whom. The runbook is recoverable in seconds.

Wiki-Links

Any page can link to any other page using [[Page Title]] syntax. Codex resolves wiki-links at render time, so you do not need to know or maintain URLs. If someone moves a page to a different part of the tree, the link still resolves, because it matches on title rather than location. Backlinks are tracked automatically: when you view a page, you can see every other page that links to it. This turns the wiki from a collection of isolated documents into a connected knowledge graph where related information surfaces naturally.
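The render-time resolution and backlink tracking could be sketched as follows. The regex and the in-memory backlink index are illustrative assumptions, not Codex's implementation:

```python
import re

# Matches [[Page Title]]; capture group is the target title.
WIKI_LINK = re.compile(r"\[\[([^\]]+)\]\]")

def extract_links(markdown: str) -> list[str]:
    """Return the titles this page links to."""
    return WIKI_LINK.findall(markdown)

def backlinks(pages: dict[str, str]) -> dict[str, set[str]]:
    """Invert the link graph: target title -> set of linking page titles."""
    index: dict[str, set[str]] = {}
    for title, body in pages.items():
        for target in extract_links(body):
            index.setdefault(target, set()).add(title)
    return index

pages = {
    "Architecture Overview": "See the [[Data Model]] for details.",
    "Data Model": "Migrations live in the [[Migration Runbook]].",
}
links = backlinks(pages)
# links["Data Model"] == {"Architecture Overview"}
```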

Wiki-links change how people write documentation. Instead of duplicating context, you link to it. The architecture overview links to the data model page. The data model page links to the migration runbook. The migration runbook links to the deployment checklist. Each document stays focused on one thing and links to its dependencies, exactly like well-factored code.

93 MCP Tools

This is where Codex diverges most sharply from every commercial wiki. Codex exposes 93 MCP tools, making it the most AI-programmable knowledge base we are aware of. Claude can create spaces, write pages, search content, read page trees, update existing pages, manage versions, resolve wiki-links, and traverse the knowledge graph, all within a single conversation.
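On the wire, each of those operations is an MCP `tools/call` request. The JSON-RPC envelope below follows the MCP specification, but the tool name `codex_create_page` and its argument shape are assumptions for illustration:

```python
import json

# Hedged sketch of an MCP tools/call request an agent might send to
# Codex. Only the envelope (jsonrpc/method/params) is spec-defined;
# the tool name and arguments here are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "codex_create_page",
        "arguments": {
            "space": "engineering",
            "title": "Payments Incident 2024-06",
            "content": "## Findings\n\nSee [[Payments Service]].",
            "parent": "Incident Reports",
        },
    },
}
payload = json.dumps(request)
```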

The practical impact is that AI agents can use the wiki as working memory. When a Claude Code session investigates a bug, it can write its findings to a Codex page. When a new service is scaffolded, the agent can create the documentation pages as part of the same workflow. The wiki is not a place humans write for other humans to read. It is a shared knowledge layer that both humans and AI agents read from and write to continuously.

Cross-Tool Integration

Codex plugs into the rest of the Renkara tool fleet through the standard patterns: shared RS256 JWT authentication, REST API, and MCP servers.
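The shared-auth pattern means every service accepts the same RS256-signed token. The sketch below only decodes the unverified JWT header to show the structure; a real service would verify the signature against the auth-service's public key (e.g. with PyJWT), and the helper name here is an assumption:

```python
import base64
import json

def jwt_header(token: str) -> dict:
    """Decode a JWT's header segment (base64url without padding).

    Illustrative only: this does NOT verify the signature.
    """
    segment = token.split(".")[0]
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Build a toy token with a realistic RS256 header.
header = base64.urlsafe_b64encode(
    json.dumps({"alg": "RS256", "typ": "JWT"}).encode()
).decode().rstrip("=")
token = header + ".payload.signature"
# jwt_header(token)["alg"] == "RS256"
```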

Pages can be published to Narrative, our static site generator, turning internal documentation into public-facing content with a single action. Technical specs written in Codex become blog posts or product documentation without copy-pasting between systems.

Any page can spawn a Docket issue. Found a bug while documenting a system? Create the issue from the page, and the issue automatically links back to the wiki page that describes the context. The developer fixing the bug has the full technical background without asking anyone for it.

Codex pages can be sent to Herald for inclusion in newsletters. A weekly engineering update, a changelog, a knowledge digest: write it once in the wiki, distribute it through Herald to the subscribers who need it.

The StrataForge Example

When we first deployed Codex, we seeded it with StrataForge, our fictional company dataset used for generating realistic screenshots and demo data across the tool fleet. The StrataForge space contains organizational structure, employee profiles, project histories, product specifications, and operational procedures. Those hundreds of interconnected pages exercise every feature: deep hierarchies, wiki-links between pages, version history from iterative refinement, and full-text search across a non-trivial corpus.

The seeding process itself demonstrated the MCP integration. An AI agent created the entire StrataForge knowledge base programmatically through Codex's MCP tools, writing pages, establishing links, and building out the hierarchy in a single automated session. What would have taken a human days of manual wiki editing took minutes. The result was immediately useful: every tool in the fleet that needs realistic company data can pull it from Codex rather than maintaining its own fixture files.

Key Specs

| Spec | Detail |
| --- | --- |
| Frontend | React 19, TypeScript 5.6+, Vite 6 |
| Backend | FastAPI, SQLAlchemy 2.0 async, PostgreSQL |
| Auth | RS256 JWT via shared auth-service |
| Content | Markdown with live preview, full-text search |
| Versioning | Full content snapshots, inline diffs, one-click restore |
| Links | `[[Wiki-Link]]` syntax with automatic backlink tracking |
| MCP Tools | 93 tools for spaces, pages, versions, search, and graph traversal |
| Theme | Light and dark mode |

Why Build Your Own Wiki

The same reason we built every other tool in the fleet: control. When the wiki is ours, we decide how AI interacts with it. We decide how it integrates with issue tracking, content publishing, and newsletters. We decide how search works, how versioning works, and how the hierarchy is structured. No feature request to a vendor, no waiting for a roadmap update, no migration when the vendor changes their pricing.

Codex is tool number sixteen in the Renkara fleet. It took less than a day to build, runs on the same infrastructure as everything else, and has already become the default place to write things down. That is the compound effect of owning your tools: each one you build makes the next one more useful, because they all share the same platform, the same protocols, and the same AI integration layer.