Use these prompts to support QC, verification, and pre-submission review of medical communications content. Each pattern targets a specific review task: claim-to-reference verification, compliance screening, source grounding, safety data completeness, cross-document consistency, and readability.
These prompts are review support tools — they inform the reviewer's judgement but do not substitute for it. All AI-assisted review outputs require human verification before sign-off.

When to use review prompts

Run review prompts at the end of any AI-assisted drafting or adaptation workflow, before the content goes to formal MLR review or client delivery. Use the final human review workflow as your overall QC gate.

Verify claims against references

Full claim verification workflow for pre-MLR QC.

Final human review

Structured QC gate before any AI-assisted deliverable ships.

Prompt patterns

Use this prompt to verify each claim in a document against its cited reference. Use it with the verify claims against references workflow and RefCheckr.
You are a medical writing QC assistant. Verify each claim in the document against its cited reference.

For each claim:
1. Quote the claim
2. Identify the cited reference
3. Find the supporting evidence in the reference
4. Assess: SUPPORTED / PARTIALLY SUPPORTED / NOT SUPPORTED / CANNOT VERIFY
5. Explain any discrepancy
6. Flag numerical mismatches
7. Flag language that is stronger than the reference supports

Document:
[INSERT DOCUMENT WITH REFERENCE CITATIONS]

References:
[INSERT FULL TEXT OF EACH CITED REFERENCE, LABELLED]

Rules:
- Compare strictly against the cited reference, not general knowledge
- If the reference does not contain relevant information, mark NOT SUPPORTED
- Note missing qualifiers (subgroup, post-hoc, etc.)
For high volumes of claims, use RefCheckr — it is purpose-built for this task and produces a structured verification report.
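The assessment labels in step 4 are designed to be tallied. If you capture the model's output as plain text, a short script can summarise how the claims fared before a reviewer works through the detail. A sketch only, assuming one status label per claim line (the line format is an assumption, not a RefCheckr output format):

```python
import re
from collections import Counter

# Assumed output format: one line per claim, e.g.
# "3. PARTIALLY SUPPORTED - subgroup result presented as overall"
# Longer labels come first so "SUPPORTED" does not shadow them.
STATUS_PATTERN = re.compile(
    r"\b(PARTIALLY SUPPORTED|NOT SUPPORTED|CANNOT VERIFY|SUPPORTED)\b"
)

def tally_statuses(report: str) -> Counter:
    """Count each verification status in a claim-by-claim report."""
    counts = Counter()
    for line in report.splitlines():
        match = STATUS_PATTERN.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

def needs_escalation(counts: Counter) -> bool:
    """Anything other than SUPPORTED requires reviewer follow-up."""
    return any(status != "SUPPORTED" for status in counts)
```

The tally is a triage aid only; every non-SUPPORTED claim still needs a human to read the discrepancy explanation.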
Use this prompt to pre-screen content for potential promotional compliance issues before formal MLR review. Use it with the check promotional compliance workflow and MedCheckr.
You are a medical communications compliance review assistant. Pre-screen the following content for potential promotional compliance issues.

Content:
[INSERT CONTENT]

Product: [SPECIFY]
Approved indication(s): [SPECIFY]
Audience: [SPECIFY]
Channel: [SPECIFY]

Check for:
1. Superlative or comparative claims needing substantiation
2. Language implying efficacy beyond what references support
3. Off-label implications
4. Insufficient safety information relative to efficacy claims
5. Emotive or promotional language inappropriate for the content type
6. Claims beyond the approved indication

For each issue: quote the text, describe the concern, suggest revision type, rate LOW / MEDIUM / HIGH.

Note: This is pre-screening, not compliance clearance. Formal MLR review is still required.
This prompt produces a pre-screen, not a compliance clearance. Formal MLR review by qualified medical, legal, and regulatory reviewers is always required before promotional content is used.
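The LOW / MEDIUM / HIGH ratings make the pre-screen output easy to triage so that reviewers see the highest-severity issues first. A minimal sketch, assuming you have parsed the flagged issues into simple records (the field name is an assumption, not part of any tool's output):

```python
# Severity ranking for triage; lower rank sorts first.
SEVERITY_ORDER = {"HIGH": 0, "MEDIUM": 1, "LOW": 2}

def triage(issues: list[dict]) -> list[dict]:
    """Order flagged compliance issues so HIGH-severity items surface first."""
    return sorted(issues, key=lambda issue: SEVERITY_ORDER[issue["severity"]])
```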
Use this prompt to verify that AI-generated or AI-assisted content stays within the bounds of the provided source materials. This is the primary check for source grounding.
You are a medical writing QC assistant. Check whether the following content is fully grounded in the provided source material.

Content to check:
[INSERT AI-GENERATED OR AI-ASSISTED CONTENT]

Source material:
[INSERT SOURCE DOCUMENT]

For each paragraph or claim in the content:
1. Is it supported by specific content in the source? (YES / NO / PARTIAL)
2. If YES, cite the relevant section of the source
3. If NO, flag as potentially unsourced — this may be AI-generated content from training data
4. If PARTIAL, explain what is supported and what is not

Also check:
- Are there any data points in the content that do not appear in the source?
- Are there any conclusions in the content that go beyond the source's stated conclusions?
- Has any context or background been added that is not from the source?

Rules:
- Be thorough. Flag anything that cannot be traced to the source.
- It is better to over-flag than to miss unsourced content.
Unsourced content flagged by this check may be accurate — but it is not verifiable from the provided documents. Any flagged content must either be traced to an approved source or removed.
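A complementary automated pass: extract every numeric token from the content and confirm it appears somewhere in the source. This is a crude exact-match heuristic (it misses rephrased conclusions entirely), but it catches the common case of a number that was transcribed wrongly or invented. A minimal sketch:

```python
import re

# Matches integers, decimals, and percentages, e.g. "12", "3.5", "62%"
NUMBER_PATTERN = re.compile(r"\d+(?:\.\d+)?%?")

def unsourced_numbers(content: str, source: str) -> list[str]:
    """Return numeric tokens in the content that never appear in the source.

    Exact-string matching only: it cannot catch rounded or rephrased
    figures, so it supplements the grounding prompt rather than
    replacing it.
    """
    source_numbers = set(NUMBER_PATTERN.findall(source))
    return [n for n in NUMBER_PATTERN.findall(content)
            if n not in source_numbers]
```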
Use this prompt to verify that safety information is adequately represented in a deliverable relative to the source data.
You are a medical writing QC assistant focused on safety data representation.

Deliverable:
[INSERT DELIVERABLE]

Source safety data:
[INSERT SAFETY SECTIONS FROM SOURCE DOCUMENTS]

Check:
1. Are the most common adverse events reported in the source included in the deliverable?
2. Are serious adverse events included?
3. Are discontinuations due to adverse events mentioned?
4. Is the safety data proportionate to the efficacy content? (i.e., does the deliverable give fair representation to both benefits and risks?)
5. Are safety qualifiers preserved? (e.g., timing, severity grading, relationship to treatment)
6. For patient-facing content: is safety information presented in understandable language?

Flag any:
- Safety findings in the source that are absent from the deliverable
- Safety information that appears minimised compared to its prominence in the source
- Missing context that could affect how a reader understands the safety profile
Safety completeness checks are particularly important for patient-facing and promotional content, where the balance of benefits and risks has regulatory and ethical significance.
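Check 1 above can be partially automated with simple term matching against a list of adverse events taken from the source. Synonym handling and MedDRA coding are out of scope for this sketch, so treat the result as a starting point for the human check, not a substitute:

```python
def missing_safety_terms(deliverable: str, adverse_events: list[str]) -> list[str]:
    """List adverse event terms from the source that the deliverable omits.

    Case-insensitive substring matching only; it will not recognise
    synonyms (e.g. "tiredness" for "fatigue"), so absences still need
    human confirmation.
    """
    text = deliverable.lower()
    return [ae for ae in adverse_events if ae.lower() not in text]
```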
Use this prompt to check consistency across multiple deliverables from the same evidence base — for example, when a slide deck, leave piece, and website content are all in development together.
You are a medical writing QC assistant. Check the following documents for consistency.

Document A: [INSERT — e.g., slide deck]
Document B: [INSERT — e.g., leave piece]
Document C: [INSERT — e.g., website content]

Check for:
1. Are the same data points reported consistently across documents? (same numbers, same phrasing of results)
2. Are key messages consistent across documents?
3. Are there any claims in one document that contradict or conflict with another?
4. Is safety information consistent across all documents?
5. Are references consistent?

For each inconsistency found:
- Quote the relevant text from each document
- Describe the inconsistency
- Note which document (if any) matches the source material

Rules:
- Flag all inconsistencies, even minor ones
- Differences in emphasis or depth between channels are expected — flag only factual inconsistencies
Run this check before submitting a suite of related deliverables for MLR. Inconsistencies across materials are a common and easily avoided reason for MLR queries.
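A rough automated companion to check 1: compare the numeric tokens across the documents and surface any that are not shared by all of them. Differences in depth between channels are expected, so this only narrows where the reviewer should look. A sketch:

```python
import re

NUMBER = re.compile(r"\d+(?:\.\d+)?%?")

def inconsistent_numbers(documents: dict[str, str]) -> dict[str, set[str]]:
    """For each document, the numeric tokens it contains that at least one
    sibling document lacks: candidates for a factual inconsistency."""
    tokens = {name: set(NUMBER.findall(text))
              for name, text in documents.items()}
    shared = set.intersection(*tokens.values())
    return {name: t - shared for name, t in tokens.items() if t - shared}
```

A token flagged here may simply reflect one channel carrying more detail; the reviewer decides which differences are genuine conflicts.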
Use this prompt to review a plain language summary for readability and accessibility. Use it with the create a plain language summary workflow.
You are a health literacy specialist. Review the following plain language summary for readability and accessibility.

Content:
[INSERT PLAIN LANGUAGE SUMMARY]

Target audience: [SPECIFY]
Target reading level: [SPECIFY]

Check:
1. Sentence length — average should be 15–20 words. Flag sentences over 25 words.
2. Medical jargon — flag any unexplained technical terms
3. Passive voice — flag and suggest active alternatives
4. Paragraph length — should be 2–3 sentences maximum
5. Structure — are headings clear and helpful?
6. Explanations — are medical concepts explained in terms the audience would understand?
7. Tone — is it respectful, clear, and non-condescending?

For each issue:
- Quote the text
- Explain the readability concern
- Suggest a revision

Note: This checks readability, not medical accuracy. Accuracy must be verified separately against the source.
This prompt checks readability only. Run the source grounding check and safety completeness check separately to verify accuracy and safety coverage.
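Check 1 (sentence length) is mechanical enough to script as a first pass before running the prompt. A rough sketch; the sentence splitter is a heuristic that miscounts abbreviations such as "e.g.", so treat the numbers as indicative:

```python
import re

def sentence_lengths(text: str) -> list[int]:
    """Word count per sentence, splitting on ., !, and ? (a heuristic)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def readability_flags(text: str, limit: int = 25) -> dict:
    """Average sentence length and the count of sentences over the limit."""
    lengths = sentence_lengths(text)
    return {
        "average_length": sum(lengths) / len(lengths),
        "over_limit": sum(1 for n in lengths if n > limit),
    }
```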

Combining review prompts

1. Start with source grounding

Run the source grounding check on any AI-assisted content before other review steps. Unsourced content must be resolved before compliance or claim verification is meaningful.

2. Check safety completeness

Run the safety information completeness check next. Safety omissions are high-risk and should be caught before the content is reviewed for claims or compliance.

3. Verify claims

Run claim-to-reference verification for high-risk or promotional content. Use RefCheckr for large claim volumes.

4. Pre-screen for compliance

Run the promotional compliance pre-screen for any content intended for promotional use. Use MedCheckr for faster screening.

5. Check consistency across deliverables

If multiple deliverables are in development simultaneously, run the consistency check before final MLR submission.
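The five-step sequence above can be expressed as a simple selection table, which may help if you are wiring these prompts into your own tooling. The check names are illustrative, not references to any real tool:

```python
# Which review checks apply under which conditions, in recommended order.
# Names and conditions are illustrative placeholders for running each
# review prompt, manually or via your own LLM tooling.
REVIEW_SEQUENCE = [
    ("source_grounding", "always"),
    ("safety_completeness", "always"),
    ("claim_verification", "high_risk_or_promotional"),
    ("compliance_prescreen", "promotional"),
    ("cross_document_consistency", "multiple_deliverables"),
]

def checks_to_run(promotional: bool, high_risk: bool, multi_doc: bool) -> list[str]:
    """Select which review checks apply, preserving the recommended order."""
    selected = []
    for name, condition in REVIEW_SEQUENCE:
        if (condition == "always"
                or (condition == "high_risk_or_promotional"
                    and (high_risk or promotional))
                or (condition == "promotional" and promotional)
                or (condition == "multiple_deliverables" and multi_doc)):
            selected.append(name)
    return selected
```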

Customisation notes

  • Combine prompts as needed: For a final review, run the source grounding check, safety completeness check, and compliance pre-screen in sequence.
  • Adjust rigour to risk tier: Low-risk content may need only a source grounding check; high-risk content should use multiple review prompts. See the risk levels framework.
  • Document the review: Record which review prompts were used and what was found — this supports audit trails and aligns with review and accountability requirements.

Verify claims against references

Systematic claim-to-reference checking workflow.

Check promotional compliance

Pre-MLR compliance screening workflow.

Source grounding

The principle that underpins every review check on this page.