Manuscript analysis

Send your draft through a careful read and get back the numbers an editor would care about, plus a list of specific things to look at on your next pass.

Analysis runs on your whole project and produces a report you can scroll through, mark up with your verdicts, ask the AI for a second opinion on, and download as a PDF. It lives in two places: a third tab inside the manuscript editor (next to Compose and Architect), and its own tab in the dashboard nav so you can run a pass on any of your projects without opening them first.

Analysis is part of the Essentials plan. Premium adds unlimited AI second opinions on findings.

What the engine looks at

The analysis pass walks every scene in your project and measures things that matter. The report groups the results into ten sections you can jump to from the sidebar.

Overview

The headline screen. A composite score out of 100, the count of issues by severity (critical, warning, suggestion), the top five things to look at, and a tile for each report section so you can dive in wherever you like.

The composite is generous. It starts at 100 and ticks down for each thing the engine flags, weighted by severity. A book in good shape lands in the eighties or higher.

Readability

Flesch reading ease, Flesch-Kincaid grade level, Gunning Fog, Coleman-Liau. Four established formulas that quantify how heavy your prose is to read.

The numbers are there because they correlate with how broadly accessible the prose feels. The report flags chapters that drift far from the rest of the book (a sign the voice or audience pitch shifted without you noticing) and individual scenes whose readability sits well outside what your genre tends to expect.
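As a rough illustration of what these formulas measure, here is a minimal Flesch reading ease sketch. This is not the product's implementation; the vowel-run syllable counter in particular is a deliberate simplification (real tools use dictionaries or sharper heuristics).

```python
import re

def count_syllables(word):
    # Very rough estimate: count runs of vowels, minimum one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # Flesch reading ease:
    # 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Short, plain sentences score high; long, polysyllabic ones drag the score down, which is exactly the drift the report looks for.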

Style

Adverb density, passive voice, sentence variety, and show-vs-tell density. Plus dialogue ratio with a genre-aware target band: romance manuscripts tend to land at 35-55 percent dialogue; thrillers at 20-40 percent; epic fantasy at 15-30 percent. The report flags scenes that fall well outside the band you would expect for your genre.
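A band check like this can be sketched in a few lines. Assume, purely for illustration, that dialogue is whatever sits inside double quotes; the bands below are the ones quoted above, expressed as fractions.

```python
import re

# Genre bands from above, as fractions of total words.
GENRE_DIALOGUE_BANDS = {
    "romance": (0.35, 0.55),
    "thriller": (0.20, 0.40),
    "epic fantasy": (0.15, 0.30),
}

def dialogue_ratio(text):
    # Fraction of words that sit inside double-quoted spans.
    quoted_words = sum(len(q.split()) for q in re.findall(r'"([^"]*)"', text))
    total_words = len(text.split())
    return quoted_words / total_words if total_words else 0.0

def outside_band(ratio, genre):
    low, high = GENRE_DIALOGUE_BANDS[genre]
    return ratio < low or ratio > high
```

A scene that is half dialogue sits inside the romance band but outside the epic fantasy one.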

Language

Clichés and crutch words. The engine carries a list of common clichéd phrases (like 'her heart raced' or 'at the end of the day') and flags scenes where they land. Crutch words are content words you have used far more often than a manuscript of your length can justify, which usually means you are leaning on them without noticing.
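A crutch-word detector can be as simple as a frequency count with a rate threshold. The thresholds here are illustrative assumptions, not the engine's.

```python
import re
from collections import Counter

def crutch_words(text, per_10k=15.0, min_count=5, min_len=4):
    # Flag content words (crudely: words of min_len+ letters) that occur
    # at least min_count times AND at a rate above per_10k per 10,000 words.
    all_words = re.findall(r"[A-Za-z']+", text)
    content = [w.lower() for w in all_words if len(w) >= min_len]
    total = len(all_words) or 1
    counts = Counter(content)
    return sorted(w for w, c in counts.items()
                  if c >= min_count and c / total * 10_000 > per_10k)
```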

Pacing

An energy curve across every scene in your book, plus POV and tense drift. The energy score is a blend of sentence rate, dialogue presence, and action verbs (high in fight scenes, low in expository chapters). The report flags any run of three or more low-energy scenes back to back, which is usually where readers stall.
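Once each scene has an energy score, finding a stall is a matter of scanning for runs. A sketch, assuming scores are normalized to 0-1 and anything under 0.35 counts as low energy (both numbers are illustrative):

```python
def low_energy_runs(scores, threshold=0.35, min_run=3):
    # Return (start, end) index pairs, inclusive, for every run of
    # min_run or more consecutive scenes below the energy threshold.
    runs, start = [], None
    for i, s in enumerate(scores):
        if s < threshold:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_run:
                runs.append((start, i - 1))
            start = None
    if start is not None and len(scores) - start >= min_run:
        runs.append((start, len(scores) - 1))
    return runs
```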

The POV and tense drift check compares your prose to the targets you set in your project's Voice (first person versus third, past versus present). When the prose drifts, the report names the scenes where it happens.

Characters

One row per character from your Story Index. Scenes they appear in, lines of dialogue, agency score (a rough measure of how often they take action versus react), percent of total page time, and a sentiment arc tracking the emotional weight of their scenes across the manuscript.

The point isn't to optimize the numbers; it's to notice when something looks off. A protagonist sitting at 18 percent of page time might deserve a closer look. A secondary character whose sentiment arc never changes is one you might want to give an actual journey.

Plot threads

The Chekhov check. The engine looks for objects, places, and named items that show up exactly once in the manuscript: usually a setup that never paid off, or a payoff that has no setup. Both are worth looking at on revision.
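In spirit, this check is a mention count. A sketch, assuming named items have already been extracted per scene:

```python
from collections import Counter

def single_mention_items(scenes):
    # scenes: one set of detected named items per scene.
    # Returns items that appear in exactly one scene across the book.
    counts = Counter()
    for items in scenes:
        counts.update(set(items))
    return sorted(item for item, c in counts.items() if c == 1)
```

An item that appears in exactly one scene is either a setup without a payoff or a payoff without a setup; the count alone cannot tell which, only that the item is alone.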

Beat coverage sits here too: how many beats from your active outline have a scene anchored to them, and which ones are still uncovered.

Voice

How consistent your voice is across chapters. Average sentence length, sentence-length variance, and tonal markers compared chapter by chapter. The report flags chapters that drift from the rest of the book.
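Drift detection here is essentially an outlier test. A sketch over one metric, mean sentence length per chapter, flagging chapters more than two standard deviations from the book-wide mean (the cutoff is an assumption):

```python
import statistics

def voice_drift(chapter_means, z_cutoff=2.0):
    # chapter_means: per-chapter mean sentence length (words per sentence).
    # Returns indices of chapters that drift beyond z_cutoff standard
    # deviations from the book-wide mean.
    mean = statistics.fmean(chapter_means)
    stdev = statistics.pstdev(chapter_means)
    if stdev == 0:
        return []
    return [i for i, m in enumerate(chapter_means)
            if abs(m - mean) / stdev > z_cutoff]
```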

This is the section that usually says the most about a long manuscript. Consistency in voice is one of the things readers feel without being able to name. When this section is quiet, your book reads as one author's voice; when it's loud, it doesn't.

Marketability

Word count compared to typical genre bands (a 110,000-word romance is uncommon; a 60,000-word epic fantasy is short). The opening hook check looks at the first 500 words for the kinds of signals that tend to land well in the genre you set.
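The comparison itself is a range lookup. The bands below are illustrative assumptions chosen to match the two examples above, not the product's exact figures:

```python
# Illustrative genre bands in words; not the product's exact numbers.
GENRE_WORD_BANDS = {
    "romance": (70_000, 100_000),
    "thriller": (70_000, 100_000),
    "epic fantasy": (90_000, 150_000),
}

def word_count_note(count, genre):
    low, high = GENRE_WORD_BANDS[genre]
    if count < low:
        return "short for genre"
    if count > high:
        return "long for genre"
    return "within typical band"
```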

These are not rules. They are conventions. The point is to know when you are leaning into them and when you are leaning out, on purpose.

Promise + Closure

How well your prose lines up with the story you said you were telling. The report reads from your Promise layer (the things this book has promised to deliver) and your Closure ledger (the open setups you have not yet paid off) and surfaces the gaps.

If you set up a curse on page 42 and never lift it, this section says so. If your Promise says 'this is a story about choosing love over duty' and the prose has not put a duty option on the page yet, this section says that too.

Each finding has a verdict

Every flag the engine raises is a finding. A finding has a severity (critical, warning, suggestion, note, or info), a one-line headline, an explanation in plain English, an optional concrete suggestion, and (sometimes) a JSON evidence object with the underlying numbers if you want to look.
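Laid out as data, a finding might look like this. The field names are drawn from the description above and are illustrative, not the product's exact schema:

```python
finding = {
    "severity": "warning",  # critical | warning | suggestion | note | info
    "headline": "Dialogue ratio below genre band",
    "explanation": "This scene is 8 percent dialogue; thrillers usually land at 20-40.",
    "suggestion": "Consider breaking up the exposition with an exchange.",
    # Optional evidence object carrying the underlying numbers.
    "evidence": {"dialogue_ratio": 0.08, "band": [0.20, 0.40]},
}
```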

You can mark each finding as Agree, Disagree, or Ignore. Your verdict is yours. Disagreeing doesn’t silence the finding on the next run; it just gives your future self the context that you already considered it. You can attach a short note explaining why for any verdict.

Ask the AI for nuance

Every finding has a button that says “Ask AI for nuance.” Click it and your preferred model writes a two-paragraph response: the underlying craft cause behind the symptom the engine flagged, and a concrete next step you can take in your next revision pass.

The model gets the finding, the relevant scene excerpt, and a slice of your project’s Voice and Promises, so its notes are grounded in your actual book rather than generic editing advice. Each ask is cached on the finding, so a re-click returns the same response without billing your model again. There’s a small Ask again button if you want a fresh pass.

On Essentials, you get 50 of these per month. On Premium, unlimited. The counter resets on the first of each month.

Download a PDF report

Once a run completes, a PDF download button appears in the report header. The PDF includes the cover, headline metrics, the pacing curve, every finding grouped by section, and the character page-time strip. Hand it to a writing partner, a beta reader, or your future self.

Frequently asked

How long does an analysis run take?

A typical novel takes a minute or two. The engine reads every scene, so longer manuscripts take longer. While the run is in progress, you can navigate away. Your report shows up the next time you open the Analysis tab.

Will an analysis change my manuscript?

No. Analysis is read-only. It reads your prose, your Story Index, and your Promise and Closure layers. It writes a report row, findings rows, and metrics rows. It never edits a single scene.

Why is one of my scenes flagged as low-readability?

The reading-ease formulas tend to penalize long sentences and dense vocabulary. A scene full of formal dialogue or a heavy expository passage will look harder to read. That doesn't mean it's wrong, just that it's worth knowing. Your verdict is yours.

The engine flagged a 'Chekhov' item that's actually paid off later. Why?

The plot-thread detector looks for objects and named items that appear exactly once in the manuscript. If you used a different name for the same thing later, or if the payoff is buried in a longer phrase, the simple match might miss the connection. Mark it Disagree and add a note; on revision you'll have the context.

How does the AI second opinion know about my book?

Each ask sends the finding, the scene excerpt around it, and a small slice of your project's Voice and Promises to your preferred model. Nothing more. The model never sees your full manuscript at once. The cost is bounded so the same question on the same finding never re-bills you.

Can I run analysis on a single chapter or scene?

The first version runs on the whole project. Per-chapter and per-scene scopes are coming. The schema and API already support them; the UI will follow.

What does the composite score actually mean?

It's a single readable number that summarizes how many things the engine flagged and how serious they were. It starts at 100 and deducts 6 for each critical, 3 for each warning, and 1 for each suggestion. Notes and info deduct nothing. It's not a verdict on your book; it's a summary of how your book looked to the engine on this pass. The score can go up or down between passes without anything in the prose changing; that means the engine's weighting changed, not your book.
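The arithmetic is simple enough to show. A sketch using the weights above (the floor at zero is an assumption):

```python
def composite_score(severities):
    # severities: one entry per finding, e.g. ["critical", "warning", ...].
    weights = {"critical": 6, "warning": 3, "suggestion": 1, "note": 0, "info": 0}
    return max(0, 100 - sum(weights.get(s, 0) for s in severities))
```

One critical and four warnings, for example, deduct 18, landing the book at 82.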

Tips from the team

  • Run an analysis at the end of every revision pass. The score is most useful as a delta: did the things you changed make the report better?
  • Don’t chase the score. Chase the findings that match your taste. A book with one disagreed-with critical and a few warnings is in better shape than one with no findings and a flat protagonist.
  • On the first run, set your project’s genre and Voice before you start. Genre-aware bands are more useful than the defaults, and the AI second opinion gets sharper when it has Voice to ground in.
  • If you’re stuck on a chapter, run analysis just after writing the rough draft. The engine’s findings are a fresh pair of eyes that doesn’t cost you a beta reader’s evening.