Every AI-assisted deliverable must have a clear owner, a defined review process, and an auditable trail from source to final output. AI changes how work is produced — it does not change who is responsible for it.

Who is accountable?

The person who signs off on a deliverable is accountable for its content, regardless of how it was produced. This holds whether the content was written manually, drafted with AI assistance and reviewed, or generated using AI tools and edited. AI-assisted workflows do not create a new category of reduced accountability. They create a new set of review requirements.

What the sign-off owner is accountable for

  • Accuracy: every factual claim is supported by the cited source.
  • Completeness: no material omissions that would change the reader’s understanding.
  • Appropriateness: content is suitable for its intended audience, channel, and regulatory context.
  • Compliance: content meets applicable promotional codes, regulatory requirements, and organisational standards.
  • Transparency: AI use in the content development process is documented where required.

Review process

For every AI-assisted deliverable

1. Identify the risk tier
   Use the risk levels framework to determine the review intensity required before you start.

2. Assign a named reviewer
   Every AI-assisted output must have a specific person responsible for reviewing it. Unassigned reviews get skipped.

3. Review against sources
   Do not review AI output only for readability or flow. Verify claims, data, and interpretations against the original source materials. Fluent prose is not evidence of accuracy.

4. Use structured checklists
   Each workflow in this playbook includes a task-specific checklist. Start with the Final Human Review checklist as a baseline for all AI-assisted deliverables.

5. Document the review
   Record what was reviewed, what was changed, and who approved the final version.

Review intensity by risk tier

| Risk tier | Minimum review | Reviewer qualification |
| --- | --- | --- |
| Low | Standard review by medical writer | Experienced medical writer |
| Medium | Enhanced review with source cross-check | Senior medical writer or subject matter expert |
| High | Full expert review with formal sign-off | Medical advisor, regulatory reviewer, or compliance lead |
| Critical | Full expert review — this is the final quality gate | Qualified reviewer for the content type; the sign-off reviewer is accountable |
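In tooling terms, the tier-to-review mapping above can be encoded as a simple lookup so a project tracker can flag deliverables that lack a qualified reviewer. This is a hypothetical sketch: the tier names and review descriptions mirror the table, but the role identifiers and function are assumptions, not part of the playbook.

```python
# Hypothetical sketch: the review-intensity table as a lookup, so tooling
# can enforce the minimum reviewer qualification per risk tier.
# Role identifiers are illustrative assumptions.
REVIEW_REQUIREMENTS = {
    "low": {
        "minimum_review": "Standard review by medical writer",
        "qualified_roles": {"medical_writer"},
    },
    "medium": {
        "minimum_review": "Enhanced review with source cross-check",
        "qualified_roles": {"senior_medical_writer", "subject_matter_expert"},
    },
    "high": {
        "minimum_review": "Full expert review with formal sign-off",
        "qualified_roles": {"medical_advisor", "regulatory_reviewer", "compliance_lead"},
    },
    "critical": {
        "minimum_review": "Full expert review (final quality gate)",
        "qualified_roles": {"qualified_content_reviewer"},
    },
}

def reviewer_is_qualified(risk_tier: str, reviewer_role: str) -> bool:
    """Return True if the reviewer's role meets the tier's minimum bar."""
    requirement = REVIEW_REQUIREMENTS[risk_tier.lower()]
    return reviewer_role in requirement["qualified_roles"]
```

A check like this does not replace judgment about who is qualified; it only makes an unqualified or missing reviewer assignment visible before work starts.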

What to review in AI-assisted content

AI outputs have specific failure patterns. Use this checklist as a starting point for every review.
  • All numerical data matches the source (endpoints, p-values, confidence intervals, sample sizes)
  • Study populations are correctly described (ITT, mITT, per-protocol, subgroups)
  • Timepoints and study phases are accurately represented
  • Statistical significance and clinical significance are not conflated
  • Conclusions match the source’s stated conclusions
  • Safety data is included where relevant and not minimised
  • Limitations of the evidence are preserved
  • Relevant qualifiers (subgroup, post-hoc, exploratory) are retained
  • Comparator information is accurate and present
  • Language is suitable for the target audience
  • Claims are appropriate for the content type (promotional vs. scientific vs. educational)
  • Tone is consistent with the therapeutic area and context
  • No inappropriate certainty or hedging
  • Claims are within the approved messaging framework (if applicable)
  • References are correctly cited and support the claims made
  • No off-label implications
  • Balance of efficacy and safety information is appropriate

Audit trails

For AI-assisted workflows, maintain documentation that supports traceability from source to final output.

What to record

| Item | What to capture |
| --- | --- |
| Source materials | What was provided as input to the AI |
| Workflow applied | Which workflow card was followed |
| AI tools used | Which tools or models were used and for which steps |
| Reviewer identity | Who reviewed the AI-assisted output |
| Changes made | What was modified during review (tracked changes or documented edits) |
| Final sign-off | Who approved the final deliverable and when |
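One way to make the audit trail checkable rather than aspirational is a structured record per deliverable. The field names below follow the "What to record" table; the schema itself and the completeness check are a hypothetical sketch, not a prescribed format.

```python
# Hypothetical sketch of an audit-trail record for one AI-assisted
# deliverable. Fields mirror the "What to record" table above; the
# schema and validation rule are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class AuditRecord:
    source_materials: list[str]    # inputs provided to the AI
    workflow_applied: str          # which workflow card was followed
    ai_tools_used: dict[str, str]  # workflow step -> tool or model used
    reviewer: str                  # who reviewed the AI-assisted output
    changes_made: str              # tracked changes or documented edits
    signed_off_by: str             # who approved the final deliverable
    sign_off_date: date            # when it was approved

    def is_complete(self) -> bool:
        """Traceability holds only if every field is actually filled in."""
        return all([
            self.source_materials, self.workflow_applied, self.ai_tools_used,
            self.reviewer, self.changes_made, self.signed_off_by,
        ])
```

Whether this lives in a project tracker, a spreadsheet, or a database matters less than the completeness rule: a record with an empty reviewer or missing sources should block sign-off, not pass silently.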

Why audit trails matter

  • Support internal quality processes
  • Provide transparency for clients and stakeholders
  • Enable retrospective analysis of AI workflow effectiveness
  • Meet emerging expectations around AI transparency in regulated industries

For agencies implementing AI workflows

Integrate with existing QC processes

AI workflows should sit within your existing quality control framework, not alongside it as a separate track.
  • Add AI-specific checkpoints to your review SOPs
  • Include AI workflow documentation in project trackers
  • Train reviewers on AI-specific failure modes
  • Do not create separate, lighter review tracks for AI-assisted content

Client transparency

  • Proactively brief clients on how AI is used in their projects — do not wait for them to ask
  • Follow client-specific AI policies where they exist; some pharma companies have explicit restrictions on AI use in specific deliverable types
  • Document AI use at a level of detail that would satisfy a client audit: which deliverables, which workflow steps, which tools, who reviewed
  • Your project team must be able to answer “Was AI used in this deliverable?” immediately and specifically

Team training

Train every writer using AI workflows on the specific failure modes in this playbook — not generic “AI limitations” slides, but the actual patterns: hallucinated p-values, merged study arms, dropped qualifiers, promotional framing of non-promotional evidence.
Train reviewers to approach AI-assisted content differently from human-written content. AI errors are fluent and plausible — the review mindset is verification, not editorial polish. Run periodic calibration exercises: give reviewers an AI-generated summary with planted errors and assess detection rates. This builds the skill that matters most.

The bottom line

A client, an MLR committee, or a regulatory body does not apply a lower standard because AI was involved in production. The deliverable either meets the required standard or it does not. AI changes how content is produced. It does not change what “correct” looks like.

Related

  • Human-in-the-Loop: why every deliverable needs a named owner and what that owner is responsible for.
  • Source grounding: keeping every claim traceable to a source document.
  • Risk levels: how review intensity scales with the risk of the deliverable.