Renkara Media Group operates multiple websites: marketing sites for AccelaStudy(R), a corporate site, a personal blog, and client-facing landing pages. We tried Hugo, Jekyll, and Eleventy. They all generate HTML from Markdown. None of them provision the AWS infrastructure to host it, detect AI-generated content before publishing, run semantic search over the rendered output, or let an AI agent draft and deploy articles through MCP tools. Narrative does all of that.
Multi-Site Architecture
Narrative is site-agnostic. Each website lives in its own directory with independent content, templates, assets, and deployment configuration defined in a site.yml file. The SITE_NAME environment variable selects which site to build. One CMS, many sites. When we add a new website, we create a directory, write a site.yml, add some templates and content, and build. The generator does not care what the site is about or where it deploys.
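To make the per-site configuration concrete, here is what a site.yml might look like. This is an illustrative sketch, not Narrative's actual schema: every field name, bucket, and distribution ID below is a hypothetical example.

```yaml
# Hypothetical site.yml -- field names are illustrative, not the real schema
name: corporate-site
base_url: https://example.com
content_dir: content
templates_dir: templates
assets:
  scss_entry: assets/main.scss
  image_quality: 82        # per-site quality for the original format
  webp_quality: 75         # per-site quality for WebP output
deploy:
  staging:
    s3_bucket: example-staging
    cloudfront_id: EXAMPLESTAGING1
    aws_profile: renkara-staging
    minify: false          # easier debugging on staging
    include_drafts: true
  production:
    s3_bucket: example-prod
    cloudfront_id: EXAMPLEPROD1
    aws_profile: renkara-prod
    minify: true
    include_drafts: false
```

With SITE_NAME selecting the directory, everything else the generator needs lives in this one file.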
Content types include long-form articles (architecture deep-dives, technical walkthroughs) and date-driven blog posts (shorter entries, daily leverage records). Both use Markdown with YAML frontmatter. Articles use kebab-case slugs with no date prefix. Posts use YYYY-MM-DD-slug.markdown filenames organized in year subdirectories. The system parses 19+ date formats in frontmatter, so you never fight with date strings.
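The multi-format date parsing amounts to trying known patterns in order. A minimal sketch of the idea, assuming a format list (the real parser handles 19+ formats; this shows only a few):

```python
from datetime import datetime

# Illustrative subset of accepted formats -- the real parser knows 19+.
DATE_FORMATS = [
    "%Y-%m-%d",
    "%Y-%m-%d %H:%M:%S",
    "%Y-%m-%dT%H:%M:%S",
    "%B %d, %Y",
    "%d %B %Y",
    "%m/%d/%Y",
]

def parse_frontmatter_date(raw: str) -> datetime:
    """Try each known format until one matches."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt)
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date: {raw!r}")
```

Frontmatter authors can then write `2024-05-01`, `May 1, 2024`, or `01/05/2024` interchangeably.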
Incremental Builds
The build intelligence layer is what makes Narrative practical for daily use. Full builds of a 450-page site take about 50 seconds. Incremental builds take about 1.5 seconds. The system achieves this by tracking content hashes and collection memberships in Valkey (Redis). When a post changes, the cache identifies which collections it belongs to, finds all pages that depend on those collections (index pages, tag pages, pagination pages), and rebuilds only those. Everything else is skipped.
| Scenario | What Rebuilds | Time |
|---|---|---|
| Edit a single post | Post + index + tag pages + pagination | ~1.5s |
| Add a new image | Just that image (optimize + WebP) | < 1s |
| Edit SCSS | CSS only | < 1s |
| Edit a template partial | All pages using that template | Proportional to pages affected |
| Full build (450 pages) | Everything | ~50s |
A --watch mode uses a file watcher with 1-second debounce for local development. Edit a Markdown file, save it, and the rebuilt page is ready before you can switch to the browser tab.
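The cache lookup described above can be sketched with plain dicts standing in for Valkey. The keys and data shapes here are illustrative, not Narrative's actual cache schema:

```python
import hashlib

cache: dict[str, str] = {}            # source path -> content hash
collections: dict[str, set[str]] = {  # collection -> member paths
    "posts": {"posts/2024/2024-05-01-example.markdown"},
}
dependents: dict[str, set[str]] = {   # collection -> pages that render it
    "posts": {"index.html", "tags/aws.html", "page/2.html"},
}

def pages_to_rebuild(path: str, content: bytes) -> set[str]:
    """Return the set of outputs invalidated by a content change."""
    digest = hashlib.sha256(content).hexdigest()
    if cache.get(path) == digest:
        return set()                  # unchanged: rebuild nothing
    cache[path] = digest
    dirty = {path}
    for name, members in collections.items():
        if path in members:
            # Rebuild every page that lists this collection:
            # indexes, tag pages, pagination pages.
            dirty |= dependents.get(name, set())
    return dirty
```

The second call with identical content returns an empty set, which is the whole point: most of a 450-page site is untouched by any single edit.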
Asset Pipeline
The asset pipeline handles SCSS compilation with @import resolution, image optimization with hash-based caching, and automatic WebP conversion with <picture> element output. JPEG and PNG images are optimized on first encounter and cached by content hash, so subsequent builds skip images that have not changed. The pipeline is configured per-site in site.yml with quality settings for both original format and WebP output.
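Hash-based image caching means a rename or touch never forces re-optimization; only changed bytes do. A minimal sketch of that check (the cache layout is an assumption for illustration):

```python
import hashlib
from pathlib import Path

def image_cache_key(path: Path) -> str:
    """Key by content hash, so renames don't invalidate the cache."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def needs_processing(path: Path, cache_dir: Path) -> bool:
    """Skip images whose optimized output already exists for this hash."""
    return not (cache_dir / f"{image_cache_key(path)}.webp").exists()
```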
Deployment
Each site defines multiple deployment stages (Staging, Production) with their own S3 bucket, CloudFront distribution, AWS profile, and minification settings. Staging includes drafts; production excludes them. The deploy command performs hash-based incremental S3 uploads (skipping unchanged files), triggers targeted CloudFront invalidation for 50 or fewer changed paths (or a wildcard invalidation for larger deploys), and skips invalidation entirely when zero files changed.
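The invalidation decision is a small piece of logic worth spelling out, since it covers three cases: nothing changed, a few paths changed, or many did. A sketch of that decision as described above:

```python
def invalidation_paths(changed: list[str], cutoff: int = 50) -> list[str]:
    """Decide what to send to CloudFront after an incremental upload.

    Returns [] when nothing changed (skip invalidation entirely),
    the explicit paths when 50 or fewer changed, or a single
    wildcard when the changed set is larger.
    """
    if not changed:
        return []
    if len(changed) <= cutoff:
        return changed
    return ["/*"]
```

A real deploy would pass the non-empty result to the CloudFront `create_invalidation` API; an empty list means no API call at all.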
HTML and JavaScript are minified at upload time using a built-in minifier with no external dependencies. The minification setting is per-stage, so you can skip it for staging to make debugging easier.
VelvetRope Access Control
Any deployment stage can be gated with VelvetRope, a client-side access control system. Visitors must provide a key via URL parameter or enter it on a lock screen. The key persists in localStorage for subsequent visits. This protects staging sites and stealth launches without requiring server-side authentication. It works with any static host: S3, CloudFront, Netlify, anywhere.
Infrastructure Provisioning
This is the feature that most SSGs do not even attempt. Narrative can provision complete AWS infrastructure for a new site via boto3: S3 bucket with static website hosting, ACM SSL/TLS certificate with DNS validation, CloudFront CDN distribution, and Route53 DNS records. A --dry-run flag previews what will be created. The actual provisioning is a single command. Stand up a new website with HTTPS and CDN in minutes, not hours of clicking through the AWS console.
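The --dry-run half of provisioning is easy to sketch: build the plan without touching AWS. The function below is illustrative only; a real run would issue the corresponding boto3 calls (`s3.create_bucket`, `acm.request_certificate`, `cloudfront.create_distribution`, `route53.change_resource_record_sets`):

```python
def provision_plan(domain: str, region: str = "us-east-1") -> list[str]:
    """Describe the resources a provisioning run would create.

    Dry-run only: returns human-readable plan lines instead of
    making boto3 calls.
    """
    return [
        f"S3 bucket '{domain}' with static website hosting",
        f"ACM certificate for '{domain}' (DNS validation, {region})",
        f"CloudFront distribution fronting '{domain}'",
        f"Route53 alias records for '{domain}'",
    ]
```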
AI Content Detection
When AI agents write or co-write content (and ours do, frequently), we need to know the AI detection score before publishing. Narrative integrates with the Sapling API to scan articles and optionally blog posts for AI-generated content. The scan writes an ai_score and ai_checked timestamp to each file's frontmatter. Files already checked are skipped unless modified or force-rescanned. A configurable threshold flags content that scores too high.
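The skip-unless-modified logic reduces API calls to Sapling. A sketch of the two checks, assuming epoch timestamps in frontmatter and a threshold of 0.8 (both assumptions for illustration; the real field encoding and default threshold aren't specified here):

```python
def should_scan(frontmatter: dict, file_mtime: float, force: bool = False) -> bool:
    """Skip files already checked, unless modified since or force-rescanned."""
    checked = frontmatter.get("ai_checked")  # assumed epoch timestamp
    if force or checked is None:
        return True
    return file_mtime > checked

def flag_if_high(frontmatter: dict, threshold: float = 0.8) -> bool:
    """The configurable threshold: True means 'rewrite before publishing'."""
    return frontmatter.get("ai_score", 0.0) >= threshold
```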
This is not about hiding AI involvement. It is about quality control. Content with a high AI score often reads generically. The score is a signal to rewrite, add personal experience, inject opinion, and make the piece sound like it was written by someone who has actually operated the systems being described.
Semantic Search
Narrative includes a Lambda-based semantic search pipeline. Content is chunked into embedding-sized segments, vectorized, and indexed. A Lambda function handles search requests with rate limiting. The search API returns semantically relevant results, not just keyword matches. Asking "how do I handle database failover" returns articles about RDS multi-AZ, connection pooling, and disaster recovery, even if none of them contain the exact phrase "database failover."
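Chunking into embedding-sized segments typically uses overlapping windows so that an answer straddling a boundary still lands in at least one chunk. A sketch of the idea; the window sizes are illustrative, not the pipeline's actual parameters:

```python
def chunk_for_embedding(text: str, max_words: int = 200, overlap: int = 40) -> list[str]:
    """Split text into overlapping word-window chunks for vectorization."""
    words = text.split()
    if not words:
        return []
    step = max_words - overlap
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), step)]
```

Each chunk is then embedded and indexed, and the Lambda handler embeds the query the same way to find nearest neighbors.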
MCP Integration
The MCP server exposes 8 tools for LLM integration: list_sites, build_site, deploy_site, provision_infrastructure, check_infra_status, run_ai_detection, read_content_file, and write_content_file. It also provides 3 resources (site config, content listings, build instructions) and a guided article drafting prompt. Claude Code can draft an article, save it, run AI detection, deploy to staging, and report the result without the human ever touching the CLI.
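The shape of the tool layer is a named registry of callables with JSON-serializable arguments. This is not the actual MCP SDK wiring, just a minimal sketch of the dispatch pattern, with hypothetical return values:

```python
TOOLS = {}

def tool(fn):
    """Register a callable under its own name, MCP-tool style."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def list_sites() -> list[str]:
    return ["accelastudy", "corporate", "blog"]   # illustrative site names

@tool
def build_site(site: str, incremental: bool = True) -> str:
    return f"built {site} ({'incremental' if incremental else 'full'})"

def call_tool(name: str, **kwargs):
    """What the MCP server does when an agent invokes a tool by name."""
    return TOOLS[name](**kwargs)
```

The agent never shells out; it calls these by name with structured arguments, which is what makes the draft-detect-deploy loop scriptable by an LLM.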
Shortcodes and Plugins
Narrative supports extensible shortcodes with nested support: toc for table of contents, callout for styled callout blocks, code_tabs for tabbed code examples, mermaid for diagrams, youtube and vimeo for video embeds, article_link for cross-references, and several others. Shortcodes are implemented as plugins, so adding a new one means dropping a Python file into the plugins directory.
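A shortcode plugin boils down to a registered function plus a marker the renderer expands. A minimal sketch; the `{{< name arg >}}` syntax and the plugin API here are illustrative, not necessarily Narrative's own:

```python
import re

SHORTCODES = {}

def shortcode(name):
    """Decorator a plugin file uses to register its shortcode."""
    def register(fn):
        SHORTCODES[name] = fn
        return fn
    return register

@shortcode("youtube")
def youtube(video_id: str) -> str:
    return f'<iframe src="https://www.youtube.com/embed/{video_id}"></iframe>'

def expand(text: str) -> str:
    """Replace {{< name arg >}} markers with their rendered HTML."""
    def render(m):
        name, arg = m.group(1), m.group(2)
        return SHORTCODES[name](arg)
    return re.sub(r"\{\{<\s*(\w+)\s+(\S+)\s*>\}\}", render, text)
```

Dropping a new plugin file into the plugins directory just means defining one more decorated function.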
What Makes It Better Than Hugo or Jekyll
Hugo and Jekyll are excellent static site generators. They do one thing well: turn Markdown into HTML. Narrative does that too, but it also provisions the infrastructure to host the result, detects AI-authored content, runs semantic search over it, deploys with incremental uploads and cache invalidation, gates access to staging environments, exports content as JSON for React SPAs, and exposes the entire pipeline to AI agents via MCP tools. No other SSG we are aware of covers this scope.
The tradeoff is complexity. Narrative has 662 tests across 57 files. It depends on Valkey for incremental builds, boto3 for AWS operations, and the Sapling API for AI detection. If you just need to turn Markdown into HTML, Hugo is faster and simpler. If you need a full content operations platform that an AI agent can drive end-to-end, Narrative is what we built.
Key Specs
| Spec | Detail |
|---|---|
| Language | Python 3.12+ |
| Templating | Jinja2 with shortcode and plugin support |
| Styling | SCSS compilation with @import resolution |
| Image processing | JPEG/PNG optimization + WebP conversion, hash-cached |
| Caching | Valkey (Redis) for dependency tracking and incremental builds |
| Deployment | S3 + CloudFront, hash-based incremental upload |
| AI detection | Sapling API, score written to frontmatter |
| Search | Lambda-based semantic search with vector embeddings |
| MCP tools | 8 tools + 3 resources + 1 prompt |
| Tests | 662 tests across 57 files |
| Shortcodes | 10 built-in (toc, callout, mermaid, youtube, code_tabs, etc.) |
| Theme | Per-site templates, light and dark mode support |
Integration Points
Narrative connects to the broader Renkara ecosystem in several ways. The React export feature generates JSON that feeds directly into AccelaStudy(R) web and other React SPAs. The MCP server integrates with Claude Code for autonomous article drafting and deployment. Infrastructure provisioning uses the same AWS profiles and regions as the rest of the platform. And the AI detection pipeline ensures that content quality standards are met before anything reaches production, regardless of whether a human or an AI wrote the first draft.