World Builder
World analysis
Send your world through a careful read and get back the numbers a senior reader would care about, plus a list of specific things to look at on your next building pass.
World analysis runs only on the world you point it at and produces a report you can scroll through, mark up with your verdicts, ask the AI for a second opinion on, and download as a PDF. It lives in two places: a tab inside the World Builder, and the dashboard Analysis tab with a Manuscript / World toggle. Both surfaces render the same component, so the read is identical wherever you start.
World analysis is part of the Essentials plan. Premium adds unlimited AI second opinions on findings.
What the engine looks at
The pass walks every domain and entry in your world and grades the world across 24 categories. The sidebar collapses those into ten sections you can jump between.
Premise
The one-sentence reason this world exists. Without a premise the engine and a reader can't tell whether this is grimdark fantasy, a hopepunk solarpunk city, or a hard-sci-fi mining colony. Every other category gets evaluated against it.
Your premise lives on the world's logline + description. The engine checks both for length and for whether the premise carries any tension (a world WHERE x BUT y) so the rest of the report has something to compare against.
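The engine's internals aren't published, but the check described above can be sketched in a few lines. This is an illustrative stand-in, not the real implementation: the `premise_tension` helper, the 12-word length threshold, and the keyword lists are all invented for the example.

```python
import re

def premise_tension(logline: str, description: str) -> dict:
    """Illustrative sketch: is the premise long enough, and does it
    carry a 'world WHERE x BUT y' tension? Thresholds are made up."""
    text = f"{logline} {description}".strip()
    words = len(text.split())
    # A tension-bearing premise pairs a setting clause with a turn:
    # "a world where X, but Y". Any adversative conjunction counts here.
    has_where = re.search(r"\bwhere\b", text, re.IGNORECASE) is not None
    has_turn = re.search(r"\b(but|except|until|yet)\b", text, re.IGNORECASE) is not None
    return {
        "long_enough": words >= 12,   # stand-in threshold
        "has_tension": has_where and has_turn,
    }
```

A premise like "A city where magic is legal, but memory is the only currency." passes both checks; "A big fantasy world." passes neither.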
Cohesion
Whether the domains hang together. The engine flags missing foundations (a world without geography, culture, or history reads as a stage set), and worlds where 60%+ of entries sit in one domain (top-heavy worlds tend to read like one-trick settings).
Consistency sits in this section too: duplicate entry names, entries you flagged as 'demands testing' that haven't been settled yet, and contradictions across descriptions.
Geography
Climate, terrain, regions, water, light. The physical bones of your world. The engine checks not just whether you have entries here but whether the descriptions carry the kind of verbs and processes that make a place feel real (does the climate do anything to the people who live in it, or is it just a noun?).
Society
Culture, religion, politics, social hierarchy, law and justice. Five categories grouped together because they almost always overlap in practice. The engine grades each individually but the radar shows them as a cluster so you can see at a glance whether your society reads multidimensional or single-axis.
Systems
Magic, technology, economy, trade, creatures and species. The systems that distinguish your world from the world outside the book. The engine pays special attention here to whether descriptions name costs, limits, and consequences (a magic system without a price reads as a wish-fulfillment lever rather than a constraint with stories in it).
History + Myth
Where this world has been (history), what's beyond it (cosmology), and what stories the people inside it tell themselves (mythology). The engine checks for years, wars, founding events, and named gods or legends. A world with no history reads as a snapshot; a world with no mythology reads atheistic in a way readers feel even when they can't name it.
Language + Daily life
How people speak and name things, and what an ordinary person does on a quiet Tuesday. The daily-life check is one of the most useful in the whole report: a world that doesn't describe food, work, home, family, and sleep is hard for prose to inhabit, even when every other category looks fine.
Story support
When you bind the run to a project, the engine pulls your project's open Promises and asks: does the world have entries that can pay these off? A promise about the cost of magic with no magic system entry that names a cost is an orphan promise, and the report flags it.
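How a promise gets matched to entries isn't documented; a naive keyword-overlap version shows the shape of the check. Everything here (`orphan_promises`, the stopword list, the matching rule) is a hypothetical stand-in for whatever the real detector does:

```python
def orphan_promises(promises: list[str], entries: dict[str, str]) -> list[str]:
    """Sketch of the story-support check: a promise is an orphan when no
    world entry's description shares a content word with it."""
    stop = {"the", "a", "an", "of", "to", "in", "and", "is", "that", "with"}
    orphans = []
    for promise in promises:
        keywords = {w.lower().strip(".,") for w in promise.split()} - stop
        supported = any(
            keywords & {w.lower().strip(".,") for w in desc.split()}
            for desc in entries.values()
        )
        if not supported:
            orphans.append(promise)
    return orphans
```

A promise about "the cost of magic" is satisfied by any entry that mentions magic under this toy rule; a promise about a "dragon war" with no matching entry comes back flagged.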
Conflict potential lives here too. The engine counts how many entries describe friction (rival, banned, scarce, threatened, opposed) and flags worlds where the friction ratio is so low the writer has nothing to push against.
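The friction count is simple enough to sketch directly. The word list below mirrors the examples in the paragraph above; the real detector is presumably broader, and the `friction_ratio` helper is invented for illustration:

```python
FRICTION_WORDS = {"rival", "banned", "scarce", "threatened", "opposed"}

def friction_ratio(descriptions: list[str]) -> float:
    """Sketch of the conflict-potential count: what share of entry
    descriptions mention friction at all?"""
    if not descriptions:
        return 0.0
    hits = sum(
        any(word in desc.lower() for word in FRICTION_WORDS)
        for desc in descriptions
    )
    return hits / len(descriptions)
```

Two entries, one about rival guilds and one about a quiet village, give a ratio of 0.5.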
Depth + originality
Are entries more than placeholders? Do entries have at least a paragraph of description and a couple of concept tags, or are they nouns in a list?
Originality scans for stock-shape patterns like 'dark lord', 'chosen one', 'galactic empire', 'evil AI'. Stock tropes communicate fast but they don't differentiate. The engine flags worlds that lean on more than two without owning them deliberately.
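As a rough sketch of that scan, with the phrase list and the limit of two taken from the paragraph above (the naive substring matching and the `trope_overload` name are assumptions, not the real detector):

```python
STOCK_SHAPES = ["dark lord", "chosen one", "galactic empire", "evil ai"]

def trope_overload(descriptions: list[str], limit: int = 2) -> list[str]:
    """Sketch of the originality scan: collect stock-shape phrases and
    return them only when more than `limit` distinct shapes appear."""
    text = " ".join(descriptions).lower()
    found = [shape for shape in STOCK_SHAPES if shape in text]
    return found if len(found) > limit else []
```

Two tropes pass quietly; a third tips the world into the flag.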
Usability is the third leg here: does the world have the structured artifacts (maps, timelines, articles, calendars, languages) that the engine and AI need to be useful as context? A world with 50 World Index entries but no map is harder for the prose tools to lean on than a world with 12 entries, a map, and a timeline.
Read the radar
The Overview shows a 24-spoke radar with one spoke per category. The polygon tips farther from center where the world is strong. Faded spokes mean low coverage: the engine didn’t have enough data in that category to score it, so the spoke is pulled to zero rather than defaulting to 100. A long spoke at full opacity is genuinely strong; a faded spoke just means there’s nothing there yet.
The composite score below the radar is a coverage-weighted average of every category, with severity-driven deductions for findings that demand attention. A world that hasn’t been built yet scores 0 (which is the truth). A world that has been built thoroughly and reads cohesive scores in the eighties or nineties.
Each finding has a verdict
Same as manuscript analysis: every flag the engine raises is a finding with a severity, a one-line headline, an explanation, an optional concrete suggestion, and (sometimes) an evidence object you can expand. Mark each one Agree, Disagree, or Ignore. Add a short note for any verdict to remind your future-self of the context.
Disagreeing doesn’t silence the finding next run; it just gives your future-self the context that you already considered it. Some flags are wrong for your world. That’s expected.
Ask the AI for nuance
Every finding has an “Ask AI for nuance” button. The model gets the finding, the specific entry it’s about (if any), and the world’s premise, then writes back a two-paragraph response: the underlying worldbuilding cause behind the symptom and a concrete next step you can take. The cost is bounded: each ask is cached on the finding, so a re-click returns the same response without a second billed model call.
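The caching behavior is the part worth understanding: one billed call per finding, no matter how many times you re-click. A minimal sketch, where `call_model` stands in for whatever provider call the app actually makes:

```python
class NuanceCache:
    """Sketch of the per-finding cache: the first ask calls the model,
    every re-click returns the stored response without billing again."""
    def __init__(self, call_model):
        self.call_model = call_model
        self.responses: dict[str, str] = {}

    def ask(self, finding_id: str, prompt: str) -> str:
        if finding_id not in self.responses:   # only the first ask bills
            self.responses[finding_id] = self.call_model(prompt)
        return self.responses[finding_id]
```

The cache is keyed on the finding, not the prompt, which matches the behavior described above: re-asking the same finding always returns the stored answer.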
On Essentials, you get 50 of these per month combined across manuscript and world analysis. On Premium, unlimited. The counter resets on the first.
Download a PDF report
Once a run completes, a PDF download button appears in the report header. The PDF includes the cover, headline metrics, the 24-spoke radar, the strongest and weakest categories, every finding grouped by category, and a recommended next-steps list. Hand it to a beta reader, a writing partner, or your future-self.
Frequently asked
Will an analysis change my world?
No. World analysis is read-only. It reads your domains, entries, and (when bound to a project) your project's Promise and Voice layers. It writes a report row, findings, metrics, and category scores. It never edits a single entry.
Why did my magic-rich fantasy get flagged on originality?
The originality detector counts stock-shape phrases like 'dark lord' and 'chosen one'. If the report flagged you, look at whether you're leaning on the trope by default or owning it deliberately. Mark it Disagree if you've made the choice consciously.
The radar has spokes near zero on categories I haven't built. Is that bad?
No. Low-coverage spokes mean 'nothing to evaluate yet' rather than 'this is broken'. The radar is faded on those spokes deliberately so you can see the difference. A score of 0 on Magic when your world doesn't have magic is correct.
What changes when I bind a project to the run?
Two extra detectors run: genre fit (does your world have the categories the genre tends to lean on?) and story support (do the world entries have anything to do with your project's open Promises?). Without a project lens, those checks are skipped because the engine doesn't know what story the world is supporting.
Can I run analysis on just one category?
The schema and API support category-scoped runs. The UI ships world-scope only in v1; per-category UI follows.
How does the world analysis composite score work?
Same idea as manuscript analysis but blended with category coverage. Each category contributes its score weighted by how much data was there to evaluate. Severity-driven deductions apply on top, with diminishing returns past the first three of any severity. A world that hasn't been built scores 0 (correct); a world that's been built thoroughly without major issues scores in the eighties.
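That blend can be sketched concretely. The per-severity deduction costs, severity names, and the half-price rate past the first three are invented placeholders; only the coverage weighting, the zero-coverage-scores-zero rule, and the diminishing-returns cutoff at three come from the description above:

```python
def composite(categories: list[tuple[float, float]],
              severities: list[str]) -> float:
    """Sketch of the composite score: each (score, coverage) pair
    contributes its score weighted by coverage, then severity-driven
    deductions apply with diminishing returns past the first three
    of any severity."""
    total_cov = sum(cov for _, cov in categories)
    if total_cov == 0:
        return 0.0                                  # nothing built yet
    base = sum(score * cov for score, cov in categories) / total_cov
    costs = {"critical": 8.0, "warning": 3.0, "info": 0.5}  # placeholders
    deduction = 0.0
    for sev, cost in costs.items():
        n = severities.count(sev)
        full, extra = min(n, 3), max(n - 3, 0)
        deduction += full * cost + extra * cost * 0.5  # half price past three
    return max(0.0, base - deduction)
```

Two fully covered categories at 80 and 90 average to 85; four warnings then deduct three at full cost and one at half, landing at 74.5 under these placeholder numbers.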
Tips from the team
- Run a world analysis early. The first run on a half-built world is the most useful one because it surfaces foundations you forgot to write.
- Don’t chase a perfect score. A world with one critical finding you disagreed with and a few warnings is in better shape than one with no findings and no daily life.
- Bind a project lens when you can. Genre fit and story support are the most valuable detectors and they’re skipped without it.
- Use the radar to plan, not to grade. Pick the weakest spoke with the most narrative weight in your story, and build there.

