Best for
- Drafting CSR results text from TFL (tables, figures, and listings) outputs
- Converting adverse event incidence tables into safety narrative summaries
- Writing efficacy summaries from primary and secondary endpoint tables
- Preparing neutral data narratives for subgroup analyses
- Drafting Module 2 summary text from CSR-level statistical outputs
- Any task where a table of numbers needs to become a paragraph of text without interpretation
Inputs
- The statistical output, table, or figure to convert (complete, not excerpted)
- Context on the analysis population (ITT, mITT, PP) and analysis type
- Any formatting or style requirements (MedDRA terms for AEs, decimal precision, CI format)
- The section of the document where this text will appear (for context on appropriate detail level)
Steps
Identify the source output
Select the specific table, figure, or listing you need to convert. Confirm it is the final, validated output. Drafting from interim or unvalidated tables creates rework when values change.
Capture the critical variables
Before generating text, identify what the output contains: treatment arms, endpoints, effect sizes, confidence intervals, p-values, incidence rates, sample sizes. This list becomes your verification checklist.
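As a sketch, the captured variables can be held as plain strings (preserving reported precision) so each one becomes a verification item. All table values below are hypothetical:

```python
# Hypothetical values for illustration; store everything as strings so
# the reported precision is preserved for later verification.
checklist = {
    "arms": ["Drug X", "Placebo"],
    "endpoint": "objective response rate",
    "population": "ITT",
    "n": {"Drug X": "210", "Placebo": "208"},
    "results": {"Drug X": "42.0%", "Placebo": "18.0%"},
    "ci": "95% CI: 14.8 to 33.2",
    "p_value": "p<0.001",
}

# Flatten into one verification item per captured variable.
items = [f"{key}: {value}" for key, value in checklist.items()]
for item in items:
    print(item)
```

Each printed item is one line on the checklist used in the verification step below.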
Generate the draft narrative
Use AI to convert the output into neutral prose. The instruction is translation, not interpretation: the narrative should say exactly what the table says, in sentence form, without adding meaning or emphasis.
Verify every value
Check each number in the generated text against the source output. AI commonly transposes treatment arms, rounds values, omits confidence intervals, or changes “median” to “mean.” Every value must match exactly.
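A minimal sketch of this cross-check, assuming the source values are available as strings at their reported precision (the narrative and values here are invented for illustration):

```python
import re

# Values transcribed from the validated output, kept as strings.
source_values = {"42.0", "18.0", "0.683", "0.552", "0.845", "95"}

narrative = (
    "The response rate was 42.0% with Drug X and 18.0% with placebo "
    "(hazard ratio 0.683; 95% CI: 0.552 to 0.845)."
)

# Pull every number out of the generated text and compare both ways.
found = set(re.findall(r"\d+(?:\.\d+)?", narrative))
missing = source_values - found     # source values the narrative dropped
unexpected = found - source_values  # numbers with no source in the table

print("missing:", missing, "unexpected:", unexpected)
```

This catches dropped or invented numbers but not transposed arms or a swapped "median"/"mean"; those still need line-by-line human review.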
Standardise phrasing
Ensure the generated text uses consistent phrasing across sections. If the efficacy section says “a statistically significant difference was observed,” the safety section should not say “the drug showed a significant effect.” Align language with the protocol and SAP.
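One way to sketch an automated first pass, assuming you maintain a map from each protocol-defined term to its known variants (the terms and section texts here are examples only):

```python
# Protocol-defined term -> variant phrasings that should not appear.
canonical = {
    "overall survival": ["survival time", "os duration"],
    "statistically significant difference": ["significant effect"],
}

sections = {
    "efficacy": "A statistically significant difference in overall survival was observed.",
    "safety": "No effect on survival time was observed.",
}

flags = []
for name, text in sections.items():
    lowered = text.lower()
    for term, variants in canonical.items():
        for variant in variants:
            if variant in lowered:
                flags.append((name, variant, term))

print(flags)
```

Each flag names the section, the variant found, and the protocol-defined term that should replace it; a simple substring scan like this is only a first pass, not a substitute for review against the protocol and SAP.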
Output
Neutral, regulatory-style prose that reports the contents of a statistical output without interpretation. Every value in the text matches the source exactly. The narrative uses consistent terminology, appropriate precision, and language suitable for a regulatory submission.
Prompt pattern
Why this works
Converting tables to text is one of the most repetitive tasks in regulatory writing. The content is entirely determined by the source output; the writer’s job is accurate translation, not interpretation. AI handles the mechanical conversion at speed while the writer focuses on the verification work that matters most: confirming every value is correct, the language is neutral, and the narrative is consistent with the rest of the document.
Common mistakes
Transposed treatment arms
The efficacy table shows Drug X at 42% and placebo at 18%. AI writes “placebo showed a response rate of 42%.” This is the single most dangerous error in stats-to-narrative conversion. Verify each value is attributed to the correct arm.
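A naive heuristic sketch of this check, reliable only for sentences that mention a single arm (values invented): flag any sentence that pairs one arm with a value belonging to the other.

```python
import re

# Expected value per arm, taken from the source table (hypothetical).
arm_values = {"Drug X": "42%", "placebo": "18%"}

sentence = "Placebo showed a response rate of 42%."

flags = []
lowered = sentence.lower()
mentioned = [arm for arm in arm_values if arm.lower() in lowered]
if len(mentioned) == 1:  # only reliable when exactly one arm appears
    arm = mentioned[0]
    for number in re.findall(r"\d+(?:\.\d+)?%", sentence):
        if number != arm_values[arm]:
            flags.append((arm, number, arm_values[arm]))

print(flags)
```

Sentences that compare both arms need human verification; no heuristic reliably resolves which value belongs to which arm in free prose.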
Values rounded or reformatted
The source reports a hazard ratio of 0.683 (95% CI: 0.552–0.845). AI rounds to 0.68 (95% CI: 0.55–0.85). Regulatory documents must reproduce values at the precision reported in the validated output.
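The fix is to compare reported strings, not parsed numbers: a float comparison silently accepts re-rounded or trailing-zero changes. A sketch using the hazard-ratio example above:

```python
def precision_matches(source: str, draft: str) -> bool:
    """Exact string comparison enforces the reported precision."""
    return source == draft

print(precision_matches("0.683", "0.683"))  # matches at full precision
print(precision_matches("0.683", "0.68"))   # rounded value is rejected
# A numeric comparison would miss a trailing-zero change:
print(float("0.680") == float("0.68"))      # True, yet the text differs
print(precision_matches("0.680", "0.68"))   # string check catches it
```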
Interpretive language added
AI writes “Treatment X showed a clinically meaningful improvement” when the source table only presents numerical results. Remove any language that interprets, characterises, or editorialises the data.
Confidence intervals or p-values omitted
AI reports the point estimate but drops the CI or p-value. If the source provides them, the narrative should include them.
Inconsistent terminology across sections
The efficacy narrative uses “overall survival” but the safety narrative uses “survival time” for the same endpoint. Use the protocol-defined term throughout the document.
Tool stack
| Tool | Role |
|---|---|
| RefCheckr | Cross-check generated narrative against source data |
Review checklist
Human review checklist
- Every numerical value matches the source output exactly, at the reported precision
- Values are attributed to the correct treatment arm and population
- The analysis population (ITT, mITT, PP, safety) is correctly identified
- No interpretive or promotional language is present
- Confidence intervals and p-values are included where reported in the source
- MedDRA preferred terms are used correctly for adverse events
- Subgroup and post-hoc results are labelled as such
- Terminology is consistent with the protocol, SAP, and other document sections
- Cross-references to the source table or figure are correct
Next steps: Integrate the narrative into a Regulatory Document or Manuscript. Run Check Document Consistency to verify values match across sections.