Who is accountable?
The person who signs off on a deliverable is accountable for its content, regardless of how it was produced. This holds whether the content was written manually, drafted with AI assistance and reviewed, or generated using AI tools and edited. AI-assisted workflows do not create a new category of reduced accountability. They create a new set of review requirements.
What the sign-off owner is accountable for
Accuracy
Every factual claim is supported by the cited source.
Completeness
No material omissions that would change the reader’s understanding.
Appropriateness
Content is suitable for its intended audience, channel, and regulatory context.
Compliance
Content meets applicable promotional codes, regulatory requirements, and organisational standards.
Transparency
AI use in the content development process is documented where required.
Review process
For every AI-assisted deliverable
Identify the risk tier
Use the risk levels framework to determine the review intensity required before you start.
Assign a named reviewer
Every AI-assisted output must have a specific person responsible for reviewing it. Unassigned reviews get skipped.
Review against sources
Do not review AI output only for readability or flow. Verify claims, data, and interpretations against the original source materials. Fluent prose is not evidence of accuracy.
Use structured checklists
Each workflow in this playbook includes a task-specific checklist. Start with the Final Human Review checklist as a baseline for all AI-assisted deliverables.
Review intensity by risk tier
| Risk tier | Minimum review | Reviewer qualification |
|---|---|---|
| Low | Standard review by medical writer | Experienced medical writer |
| Medium | Enhanced review with source cross-check | Senior medical writer or subject matter expert |
| High | Full expert review with formal sign-off | Medical advisor, regulatory reviewer, or compliance lead |
| Critical | Full expert review — this is the final quality gate | Qualified reviewer for the content type; the sign-off reviewer is accountable |
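The tier table above can be encoded so that workflow tooling enforces it rather than relying on memory. A minimal Python sketch: the tier names mirror the table, but the role identifiers, data shape, and `reviewer_qualifies` helper are illustrative assumptions, not a prescribed schema.

```python
# Illustrative only: tier names come from the table above; role strings
# and the data shape are assumptions a team would adapt to its own SOPs.
REVIEW_REQUIREMENTS = {
    "low": {
        "minimum_review": "standard",
        "qualified_roles": {"medical_writer"},
    },
    "medium": {
        "minimum_review": "enhanced_with_source_crosscheck",
        "qualified_roles": {"senior_medical_writer", "subject_matter_expert"},
    },
    "high": {
        "minimum_review": "full_expert_with_signoff",
        "qualified_roles": {"medical_advisor", "regulatory_reviewer", "compliance_lead"},
    },
    "critical": {
        "minimum_review": "full_expert_final_gate",
        "qualified_roles": {"medical_advisor", "regulatory_reviewer", "compliance_lead"},
    },
}

def reviewer_qualifies(tier: str, role: str) -> bool:
    """Return True if a reviewer with this role meets the minimum bar for the tier."""
    return role in REVIEW_REQUIREMENTS[tier]["qualified_roles"]
```

A project tracker could call `reviewer_qualifies` when a reviewer is assigned, so an under-qualified assignment is caught before review starts rather than at sign-off.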
What to review in AI-assisted content
AI outputs have specific failure patterns. Use this checklist as a starting point for every review.
Accuracy checks
- All numerical data matches the source (endpoints, p-values, confidence intervals, sample sizes)
- Study populations are correctly described (ITT, mITT, per-protocol, subgroups)
- Timepoints and study phases are accurately represented
- Statistical significance and clinical significance are not conflated
- Conclusions match the source’s stated conclusions
Completeness checks
- Safety data is included where relevant and not minimised
- Limitations of the evidence are preserved
- Relevant qualifiers (subgroup, post-hoc, exploratory) are retained
- Comparator information is accurate and present
Appropriateness checks
- Language is suitable for the target audience
- Claims are appropriate for the content type (promotional vs. scientific vs. educational)
- Tone is consistent with the therapeutic area and context
- No inappropriate certainty or hedging
Compliance checks
- Claims are within the approved messaging framework (if applicable)
- References are correctly cited and support the claims made
- No off-label implications
- Balance of efficacy and safety information is appropriate
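Structured checklists like the ones above lend themselves to machine-readable form, so a deliverable cannot be marked complete with items outstanding. A minimal Python sketch, assuming the team records which items a reviewer has ticked; the item texts are abbreviated from the lists above, and `ready_for_signoff` is a hypothetical helper, not part of any mandated tooling.

```python
# Hypothetical sketch: abbreviated checklist items drawn from the
# accuracy/completeness/appropriateness/compliance lists above.
FINAL_REVIEW_CHECKLIST = [
    "All numerical data matches the source",
    "Safety data is included and not minimised",
    "Relevant qualifiers (subgroup, post-hoc, exploratory) are retained",
    "Claims are within the approved messaging framework",
    "References are correctly cited and support the claims made",
]

def ready_for_signoff(ticked: set[str]) -> bool:
    """A deliverable passes the gate only when every checklist item is ticked."""
    return all(item in ticked for item in FINAL_REVIEW_CHECKLIST)
```

The design point is that sign-off is all-or-nothing: a partially completed checklist blocks approval rather than being silently waved through.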
Audit trails
For AI-assisted workflows, maintain documentation that supports traceability from source to final output.
What to record
| Item | What to capture |
|---|---|
| Source materials | What was provided as input to the AI |
| Workflow applied | Which workflow card was followed |
| AI tools used | Which tools or models were used and for which steps |
| Reviewer identity | Who reviewed the AI-assisted output |
| Changes made | What was modified during review (tracked changes or documented edits) |
| Final sign-off | Who approved the final deliverable and when |
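The record fields above map naturally onto a simple per-deliverable data structure. A Python sketch, assuming one record per deliverable; the field names are illustrative, not a mandated schema.

```python
from dataclasses import dataclass

@dataclass
class AuditRecord:
    """One audit-trail entry per AI-assisted deliverable (illustrative fields)."""
    source_materials: list[str]    # inputs provided to the AI
    workflow_applied: str          # which workflow card was followed
    ai_tools_used: dict[str, str]  # workflow step -> tool or model used
    reviewer: str                  # who reviewed the AI-assisted output
    changes_made: str              # tracked changes or documented edits
    signed_off_by: str             # who approved the final deliverable
    signed_off_at: str             # when, e.g. an ISO 8601 timestamp
```

Keeping the record this small makes it realistic to fill in for every deliverable, which matters more for traceability than an elaborate schema that gets skipped under deadline pressure.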
Why audit trails matter
- Support internal quality processes
- Provide transparency for clients and stakeholders
- Enable retrospective analysis of AI workflow effectiveness
- Meet emerging expectations around AI transparency in regulated industries
For agencies implementing AI workflows
Integrate with existing QC processes
AI workflows should sit within your existing quality control framework, not alongside it as a separate track.
- Add AI-specific checkpoints to your review SOPs
- Include AI workflow documentation in project trackers
- Train reviewers on AI-specific failure modes
- Do not create separate, lighter review tracks for AI-assisted content
Client transparency
- Proactively brief clients on how AI is used in their projects — do not wait for them to ask
- Follow client-specific AI policies where they exist; some pharma companies have explicit restrictions on AI use in specific deliverable types
- Document AI use at a level of detail that would satisfy a client audit: which deliverables, which workflow steps, which tools, who reviewed
- Your project team must be able to answer “Was AI used in this deliverable?” immediately and specifically
Team training
Train every writer using AI workflows on the specific failure modes in this playbook — not generic “AI limitations” slides, but the actual patterns: hallucinated p-values, merged study arms, dropped qualifiers, promotional framing of non-promotional evidence.
The bottom line
A client, an MLR committee, or a regulatory body does not apply a lower standard because AI was involved in production. The deliverable either meets the required standard or it does not. AI changes how content is produced. It does not change what “correct” looks like.
Related principles
Human-in-the-Loop
Why every deliverable needs a named owner and what that owner is responsible for.
Source grounding
Keeping every claim traceable to a source document.
Risk levels
How review intensity scales with the risk of the deliverable.