Best for
- Starting a new publication or medical affairs project that needs a defined evidence base
- Background research for advisory board materials, slide decks, or training content
- Identifying key papers for a literature review or competitive landscape analysis
- Building an evidence set to support key message development or publication planning
- Screening a large set of abstracts to find the most relevant sources quickly
Inputs
- A clearly defined research question, indication, compound, or topic area
- Any known key references or authors to use as starting points
- Inclusion/exclusion criteria for the evidence you need (publication date, study type, population)
- The intended use of the evidence (informs how broad or narrow the search should be)
Steps
Define the research question
Be specific. “What is the efficacy and safety of Drug X in moderate-to-severe plaque psoriasis?” will produce better results than “Drug X psoriasis.” Specify the population, intervention, comparator, and outcomes you need (PICO framework).
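The PICO decomposition above can be sketched as a small structure that forces each field to be filled in before searching. This is an illustrative sketch, not part of any tool named in this guide; the class and field names are assumptions.

```python
# Sketch: decompose a research question into PICO components before searching.
# The drug, population, and outcome values are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class PicoQuestion:
    population: str
    intervention: str
    comparator: str
    outcomes: str

    def as_question(self) -> str:
        # Render the four components as a single, specific research question.
        return (f"In {self.population}, what is the effect of {self.intervention} "
                f"versus {self.comparator} on {self.outcomes}?")

q = PicoQuestion(
    population="adults with moderate-to-severe plaque psoriasis",
    intervention="Drug X",
    comparator="placebo",
    outcomes="efficacy (PASI 75) and safety",
)
print(q.as_question())
```

Leaving a field empty is a signal the question is not yet specific enough to search on.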
Build the search strategy
Use AI to generate candidate search terms, MeSH headings, and Boolean combinations. Review and refine these manually. A poorly constructed search returns noise; a well-constructed one saves hours of screening.
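Once the candidate terms are reviewed, the Boolean combination is mechanical and can be assembled programmatically. A minimal sketch, assuming PubMed-style field tags (`[mh]`, `[tiab]`, `[pt]`); the term lists are illustrative and should be refined manually as the step describes.

```python
# Sketch: assemble a Boolean PubMed-style query from reviewed term lists.
# OR within a concept, AND between concepts.
def or_block(terms):
    # Group synonyms for one concept into a single parenthesised OR block.
    return "(" + " OR ".join(terms) + ")"

drug_terms = ['"drug x"[tiab]', "drugx[tiab]"]           # placeholder compound
condition_terms = ['"psoriasis"[mh]', '"plaque psoriasis"[tiab]']
design_terms = ['"randomized controlled trial"[pt]']

query = " AND ".join(or_block(t) for t in [drug_terms, condition_terms, design_terms])
print(query)
```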
Run the search
Execute the search across relevant databases (PubMed, Embase, trial registries, prescribing information). Use PubCrawl for structured biomedical searches or Perplexity for quick exploratory queries with cited sources.
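For PubMed specifically, a reviewed query can be submitted through the NCBI E-utilities `esearch` endpoint. This sketch only builds the request URL (no request is sent); the endpoint and parameter names follow the public E-utilities interface, while the query and date range are placeholders.

```python
# Sketch: build a PubMed E-utilities esearch request URL for a reviewed query.
from urllib.parse import urlencode

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(query: str, mindate: str, maxdate: str, retmax: int = 100) -> str:
    params = {
        "db": "pubmed",
        "term": query,
        "datetype": "pdat",   # filter on publication date
        "mindate": mindate,
        "maxdate": maxdate,
        "retmax": retmax,
        "retmode": "json",
    }
    return f"{BASE}?{urlencode(params)}"

url = esearch_url('"plaque psoriasis"[tiab] AND "drug x"[tiab]', "2015", "2025")
print(url)
```

Logging the exact URL (or query string) used for each run feeds directly into the documented search strategy discussed under common mistakes.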
Screen results
Review titles and abstracts against your inclusion criteria. AI can help flag likely relevant results, but the decision to include or exclude a paper is yours. Pay attention to study design, population, and recency.
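The AI-assisted flagging described above can be as simple as matching abstracts against the documented criteria and surfacing likely hits for human review. A minimal sketch with illustrative criteria (date cutoff, required keywords); the include/exclude decision stays with the reviewer.

```python
# Sketch: flag abstracts likely to meet inclusion criteria for human review.
# Records and criteria are illustrative placeholders.
def flag_abstract(record, required_terms, min_year):
    text = (record["title"] + " " + record["abstract"]).lower()
    hits = [t for t in required_terms if t.lower() in text]
    # Flag only when every required term matches and the paper is recent enough.
    likely = record["year"] >= min_year and len(hits) == len(required_terms)
    return {"pmid": record["pmid"], "likely_relevant": likely, "matched": hits}

records = [
    {"pmid": "111", "year": 2022, "title": "Drug X in plaque psoriasis",
     "abstract": "Randomized trial of Drug X..."},
    {"pmid": "222", "year": 2009, "title": "Topical steroids",
     "abstract": "Retrospective review..."},
]
flags = [flag_abstract(r, ["drug x", "psoriasis"], min_year=2015) for r in records]
print(flags)
```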
Evaluate and select
Read the full text of shortlisted papers. Assess relevance, quality, and how each paper fits the project’s evidence needs. This is where editorial and scientific judgement matters most.
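Each selected paper's contribution can be captured in a small record so the final evidence set stays traceable back to the criteria. The field names are an assumption, not a prescribed schema; the citation is a placeholder, not a real reference.

```python
# Sketch: a minimal record per selected paper; field names are illustrative.
from dataclasses import dataclass

@dataclass
class EvidenceEntry:
    citation: str        # placeholder citation, not a real reference
    relevance: str       # e.g. "primary efficacy evidence"
    study_type: str
    note: str            # brief note on the paper's contribution
    met_criteria: bool = True

entry = EvidenceEntry(
    citation="Author et al. 2023 (placeholder)",
    relevance="primary efficacy evidence",
    study_type="phase 3 RCT",
    note="52-week PASI 75 data in the target population",
)
print(entry.note)
```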
Output
A curated set of 5–30 source documents (depending on project scope) organised by relevance, with a brief note on each paper’s contribution to the project. The evidence set should be traceable back to the search strategy and inclusion criteria, and sufficient to support the downstream writing workflows.
Prompt pattern
Why this works
AI generates comprehensive search strategies in minutes, surfacing MeSH terms, author names, and Boolean combinations that a manual approach might miss. The human writer retains the decisions that determine evidence quality: defining the research question, setting inclusion criteria, evaluating study relevance, and judging whether the evidence set is sufficient for the project.
Common mistakes
Starting with too broad a question
“What is known about Drug X?” returns hundreds of results across indications, populations, and study types. Narrow the question before searching. A 30-second refinement saves an hour of screening.
Relying on AI to select your sources
AI can rank and flag abstracts, but it cannot judge whether a particular study design is appropriate for your project, whether the population matches your target, or whether the journal is credible. Source selection is a human decision.
Missing key databases
PubMed does not index everything. For certain therapeutic areas, Embase, Cochrane, or trial registries (ClinicalTrials.gov, EU CTR) may contain essential evidence that PubMed misses. Match your database selection to the project requirements.
Stopping at abstracts
An abstract may suggest a paper is relevant, but the full text may reveal a different population, a post-hoc analysis, or an endpoint that does not match your needs. Always read the full text of shortlisted papers before including them in your evidence set.
No documented search strategy
If someone asks how you found your evidence, you should be able to show the search terms, databases, date range, and inclusion criteria. Undocumented evidence gathering cannot be audited or reproduced.
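A search log entry can capture exactly what an auditor would ask for. This is a sketch under assumed field names; the counts and criteria are illustrative placeholders, and the structure should be adapted to your own SOP.

```python
# Sketch: a reproducible search log entry; all values are placeholders.
import json
from datetime import date

search_log = {
    "date_run": str(date.today()),
    "database": "PubMed",
    "query": '"plaque psoriasis"[tiab] AND "drug x"[tiab]',
    "date_range": "2015-2025",
    "inclusion_criteria": ["RCT or long-term extension", "adult population"],
    "exclusion_criteria": ["case reports", "non-English"],
    "results_returned": 143,           # illustrative count
    "included_after_screening": 12,    # illustrative count
}
print(json.dumps(search_log, indent=2))
```

One such entry per database per run is enough to make the evidence gathering reproducible.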
Tool stack
| Tool | Role |
|---|---|
| PubCrawl | Structured biomedical literature search across PubMed, trial registries, and prescribing information |
| Perplexity | Quick exploratory queries with cited sources |
Review checklist
Human review checklist
- The research question is clearly defined and specific enough for the project
- Search terms and Boolean strategy are appropriate and comprehensive
- Relevant databases have been searched (not just PubMed)
- Inclusion and exclusion criteria are documented
- Abstracts have been screened against the criteria
- Full texts of shortlisted papers have been reviewed
- The evidence set is sufficient for the project’s scope and objectives
- Key papers in the therapeutic area have not been missed
- The search strategy and results are documented for audit
Next steps: Use your evidence set to Summarise a Source Paper, Prepare a Congress Summary, or Extract Study Data, then Extract Key Messages for content development.