Every AI-assisted output must trace to specific source materials you provided. No unsourced claims. No invented data. No extrapolation beyond what the evidence supports.
Translation, not invention.

What source grounding means

In medical writing, the source document defines what can and cannot be said — whether that is a clinical study report, a published paper, a summary of product characteristics, or a set of congress abstracts. AI does not know what the source says unless you give it the source. And even when you do, AI can:
  • Paraphrase in ways that subtly shift meaning
  • Fill gaps with plausible-sounding but unsupported statements
  • Merge findings from different sources without distinguishing them
  • Present interpretations as facts
Source grounding is the practice of keeping AI output anchored to the materials you provide — and ensuring that every claim can be traced back to its origin by a human reviewer.

How to apply source grounding in practice

1. Always provide the source material as input

Never ask AI to write about a topic from general knowledge. Always provide the specific paper, CSR section, or data source; the relevant prescribing information or SmPC; and any approved key messages or brand messaging framework. If you do not have a source, you do not have an input for an AI workflow.
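One way to enforce this step mechanically is to refuse to build a prompt at all when no source documents exist. The sketch below is illustrative only: the function name and the file layout are assumptions, not part of any specific tool.

```python
# Sketch: assembling a grounded input package before prompting.
# All names here (build_input_package, the file paths) are hypothetical.
from pathlib import Path

def build_input_package(source_paths):
    """Concatenate the provided source documents into one labelled
    input block; refuse to proceed if no sources were supplied."""
    if not source_paths:
        raise ValueError("No source material provided - no input for an AI workflow.")
    sections = []
    for path in source_paths:
        text = Path(path).read_text(encoding="utf-8")
        sections.append(f"=== SOURCE: {path} ===\n{text}")
    return "\n\n".join(sections)
```

Labelling each document makes it easier for a reviewer to trace a claim back to the file it came from.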
2. Instruct the AI to cite its sources

Every prompt pattern in this playbook includes a constraint along these lines:
“Base your output only on the provided source. Do not include information from outside the source material. Cite specific sections, tables, or figures where relevant.”
This is not a suggestion — it is a structural requirement. AI models will generate plausible content from training data if not explicitly told to stay within the provided source.
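The constraint can be baked into every request rather than retyped each time. A minimal sketch, assuming a plain-text prompt pipeline (the function name and delimiters are illustrative, not a specific tool's API):

```python
# Sketch: wrapping every request with the grounding constraint.
# GROUNDING_CONSTRAINT mirrors the wording in this playbook;
# grounded_prompt and the SOURCE delimiters are hypothetical.
GROUNDING_CONSTRAINT = (
    "Base your output only on the provided source. "
    "Do not include information from outside the source material. "
    "Cite specific sections, tables, or figures where relevant."
)

def grounded_prompt(task, source_text):
    """Return a prompt in which the task can never appear without
    the constraint and the source text attached to it."""
    return (
        f"{GROUNDING_CONSTRAINT}\n\n"
        f"--- SOURCE ---\n{source_text}\n--- END SOURCE ---\n\n"
        f"Task: {task}"
    )
```

Putting the constraint in code, not in the writer's memory, is what makes it a structural requirement rather than a suggestion.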
3. Verify every claim against the original

After receiving AI output, check each factual claim against the source document. Specifically:
  • Confirm numerical data — endpoints, p-values, confidence intervals, sample sizes — are accurately reproduced
  • Verify that findings from different study arms, timepoints, or populations have not been combined
  • Confirm that conclusions match the source’s stated conclusions, not the AI’s interpretation
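The numerical part of this check can be partially automated: any number in the draft that never appears in the source deserves a close look. The sketch below is a first-pass filter under that assumption (the function name is hypothetical); it catches transcription errors such as p=0.003 for p=0.03, but it does not replace human review of context, units, or merged populations.

```python
# Sketch: flag numbers in an AI draft that do not appear in the source.
# unverified_numbers is a hypothetical helper, not a real library call.
import re

NUMBER = re.compile(r"\d+(?:\.\d+)?")

def unverified_numbers(draft, source):
    """Return numbers in the draft that never occur in the source."""
    source_numbers = set(NUMBER.findall(source))
    return [n for n in NUMBER.findall(draft) if n not in source_numbers]
```

A flagged number is not necessarily wrong, and an unflagged one is not necessarily right (the same digits can describe a different arm or timepoint), so the output is a review queue, not a verdict.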
4. Remove or resolve unsourced content

If the AI has included any statement that cannot be traced to the provided source, remove it, source it from an appropriate reference, or flag it for expert review. Do not leave unsourced claims in a deliverable on the assumption that they are “probably correct.”
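If your drafts follow a bracketed-citation convention such as “[Table 2]” or “[Section 4.2]” (an assumption; adapt the pattern to whatever convention your team uses), sentences carrying no citation marker can be surfaced for exactly this triage:

```python
# Sketch: list sentences in a draft that carry no bracketed citation.
# Assumes citations look like "[Table 2]"; uncited_sentences is a
# hypothetical helper, not part of any real tool.
import re

def uncited_sentences(draft):
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", draft) if s.strip()]
    return [s for s in sentences if not re.search(r"\[[^\]]+\]", s)]
```

Each returned sentence is then removed, sourced, or escalated; none are left in on trust.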

Common source grounding failures

These are the patterns that recur most often in AI-assisted medical writing. Learn to spot them in review.
Failure: Hallucinated data
Example: AI inserts p=0.003 when the source reports p=0.03, or generates a confidence interval that does not appear in the paper.
Impact: Fabricated statistical evidence enters a slide deck or manuscript draft. If not caught, it propagates through all downstream deliverables.

Failure: Merged populations
Example: AI combines ITT results (n=450) with per-protocol results (n=380) into a single statement.
Impact: Misleading efficacy representation. An MLR reviewer or journal editor will catch this, but only after wasted review cycles.

Failure: Extrapolated conclusions
Example: The source reports non-inferiority (HR 0.95; 95% CI 0.82–1.10), but the AI summary states the treatment “demonstrated improved outcomes.”
Impact: Overstated claim that could become a compliance issue in promotional materials.

Failure: Invented context
Example: AI adds disease prevalence or mechanism-of-action details from training data, not from the provided source.
Impact: Unsourced claims enter the deliverable. Easy to miss because the information sounds plausible.

Failure: Omitted qualifiers
Example: The source states efficacy “in patients with moderate-to-severe disease (PASI ≥12 at baseline)”; the AI drops the qualifier.
Impact: The claim appears to apply to the full study population, broadening it beyond what the reference supports.
AI errors are fluent. A hallucinated p-value or an extrapolated conclusion reads with the same confidence as correctly reproduced data. Review against the source — do not rely on the output sounding right.

Source grounding and risk tiers

The importance of source grounding increases with the risk level of the deliverable.
Tasks: summarisation, structuring, reformatting.
Source grounding ensures the draft accurately reflects the source content. Errors at this tier are correctable in standard review.
See the full risk levels framework for review requirements at each tier.

For regulated content

In regulated contexts — promotional materials, prescribing information supplements, regulatory submissions — source grounding is not just good practice. It is a requirement.
  • Every claim must be supportable by a specific, cited reference
  • AI-assisted drafts must go through the same referencing and verification process as manually written content
  • The use of AI does not change the standard of evidence required

Tools that support source grounding

RefCheckr

Verifies whether specific claims in a document are supported by the cited references.

PosterLens

Extracts structured information from scientific posters, providing a clear source for subsequent summarisation workflows.
These tools support source grounding — they do not replace it. A human reviewer must still confirm that the source-to-claim mapping is accurate and complete.