Exploring Types of Reviews: From Systematic Review, Scoping Review, and Meta-Analysis: Terminology Used in Systematic Reviews

Navigating Literature Reviews: A Comprehensive Guide

Below is a list of common jargon and terminology used in systematic reviews:

  1. Systematic Review (SR): A comprehensive and structured synthesis of existing evidence on a specific research question or topic.

  2. Meta-analysis: A statistical technique for quantitatively combining the results of multiple studies to produce a summary effect estimate (a minimal pooling sketch appears after this list).

  3. PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses. A set of guidelines for reporting systematic reviews and meta-analyses to ensure transparency and completeness.

  4. Inclusion criteria: Criteria used to determine which studies are eligible for inclusion in the systematic review based on characteristics such as study design, participants, interventions, outcomes, etc.

  5. Exclusion criteria: Criteria used to exclude studies from the systematic review based on predefined characteristics that are not relevant to the research question.

  6. Search strategy: A detailed plan outlining how and where to search for relevant studies, including databases, search terms, and any other sources of literature.

  7. Grey literature: Literature that is not formally published or indexed in traditional databases, such as conference abstracts, theses, reports, and unpublished studies.

  8. Risk of bias: The likelihood that the results of a study are influenced by systematic errors or flaws in study design, conduct, or analysis.

  9. Publication bias: The tendency for studies with positive or statistically significant results to be more likely to be published, leading to an overestimation of the true effect size.

  10. Forest plot: A graphical representation of the results of the individual studies included in a meta-analysis, with the effect estimate and confidence interval displayed for each study (see the plotting sketch after this list).

  11. Heterogeneity: Variability or diversity among the results of the individual studies included in a meta-analysis, which may arise from differences in study populations, interventions, outcomes, or study designs; it is commonly quantified with Cochran's Q and the I² statistic (see the sketch after this list).

  12. Funnel plot: A graphical tool used to assess publication bias in meta-analyses by plotting effect size against study precision (e.g., standard error or sample size); see the sketch after this list.

  13. Quality assessment: The process of evaluating the methodological quality and risk of bias of individual studies included in a systematic review using standardized tools or criteria.

  14. Subgroup analysis: An analysis conducted to explore whether the effect of an intervention varies across different subgroups of study participants (e.g., age, gender, baseline risk).

  15. Sensitivity analysis: An analysis conducted to assess the robustness of the results of a systematic review or meta-analysis by varying key methodological decisions or inclusion criteria (a leave-one-out sketch appears after this list).

  16. Protocol: A detailed plan outlining the methods and procedures to be followed during the conduct of a systematic review. It typically includes the research question or objective, the inclusion and exclusion criteria, the search strategy, methods for data extraction and synthesis, criteria for assessing study quality, and plans for reporting the findings. Developing and registering a protocol before conducting the review promotes transparency, rigor, and consistency, and minimizes bias by committing the reviewers to a pre-defined plan that can be followed systematically.

  17. Kappa statistic: A measure of inter-rater agreement or reliability used to assess how consistently two or more raters categorize or classify the same data. In systematic reviews, kappa is often used to quantify agreement between reviewers when screening studies for inclusion, extracting data, or assessing study quality. It measures the extent to which the observed agreement exceeds the agreement expected by chance alone: κ = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e the proportion expected by chance. Kappa values range from -1 to 1: a value of 1 indicates perfect agreement, 0 indicates agreement equivalent to chance, and negative values indicate agreement worse than chance. By convention, values above 0.75 are considered excellent agreement, values between 0.40 and 0.75 fair to good, and values below 0.40 poor. Reporting kappa for screening and data-extraction steps helps demonstrate the consistency and accuracy of the review process (a worked example appears after this list).
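
The short Python sketches below illustrate several of the quantitative terms above. All numbers are hypothetical and chosen only for illustration; none come from real studies. First, a minimal fixed-effect meta-analysis (item 2): studies are pooled by inverse-variance weighting, one standard way of producing a summary effect estimate.

```python
import math

# Hypothetical per-study effect estimates (e.g., log odds ratios) and standard errors.
effects = [0.30, 0.15, 0.42, 0.08]
ses = [0.12, 0.20, 0.15, 0.10]

# Fixed-effect (inverse-variance) pooling: each study is weighted by 1 / SE^2,
# so more precise studies contribute more to the summary estimate.
weights = [1.0 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval for the summary effect.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Summary effect: {pooled:.3f} (95% CI {low:.3f} to {high:.3f})")
```

A random-effects model would additionally allow for between-study heterogeneity; the fixed-effect version is shown here only because it is the simplest illustration of pooling.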
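
A forest plot (item 10) displays the same kind of data study by study. This is a minimal matplotlib sketch with hypothetical estimates and confidence intervals; published forest plots usually also show study weights and the pooled estimate as a diamond.

```python
import matplotlib.pyplot as plt

# Hypothetical study labels, effect estimates, and 95% CI bounds.
studies = ["Study A", "Study B", "Study C", "Study D"]
effects = [0.30, 0.15, 0.42, 0.08]
ci_low = [0.06, -0.24, 0.13, -0.12]
ci_high = [0.54, 0.54, 0.71, 0.28]

ys = range(len(studies))
# Horizontal error bars show each study's confidence interval around its point estimate.
plt.errorbar(effects, ys,
             xerr=[[e - lo for e, lo in zip(effects, ci_low)],
                   [hi - e for e, hi in zip(effects, ci_high)]],
             fmt="s", color="black", capsize=3)
plt.axvline(0, linestyle="--", color="grey")  # line of no effect
plt.yticks(ys, studies)
plt.gca().invert_yaxis()  # first study at the top, as in published forest plots
plt.xlabel("Effect estimate (95% CI)")
plt.tight_layout()
plt.show()
```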
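
Heterogeneity (item 11) is commonly quantified with Cochran's Q and the I² statistic, both of which fall out of the same inverse-variance weights used for pooling:

```python
# Hypothetical data, as in the pooling sketch above.
effects = [0.30, 0.15, 0.42, 0.08]
ses = [0.12, 0.20, 0.15, 0.10]

weights = [1.0 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q sums the weighted squared deviations of each study from the pooled estimate.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1

# I^2 expresses the share of total variability attributable to heterogeneity rather than chance.
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
print(f"Q = {q:.2f} on {df} df, I^2 = {i2:.1f}%")
```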
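
A funnel plot (item 12) plots each study's effect estimate against its precision; with no publication bias the points should form a roughly symmetric inverted funnel, while asymmetry (for example, missing small studies with negative results) may suggest bias. A minimal sketch with hypothetical data:

```python
import matplotlib.pyplot as plt

# Hypothetical effect estimates and standard errors for a larger set of studies.
effects = [0.30, 0.15, 0.42, 0.08, 0.55, 0.22, 0.37, 0.12]
ses = [0.12, 0.20, 0.15, 0.10, 0.25, 0.08, 0.18, 0.06]

# Funnel plot: effect size on the x-axis, standard error on the y-axis.
plt.scatter(effects, ses, color="black")
plt.gca().invert_yaxis()  # most precise (smallest-SE) studies at the top of the funnel
plt.xlabel("Effect estimate")
plt.ylabel("Standard error")
plt.show()
```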
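
One common form of sensitivity analysis (item 15) is a leave-one-out re-analysis: the summary estimate is recomputed with each study removed in turn, to check that no single study drives the overall result. A sketch reusing the fixed-effect pooling above:

```python
# Hypothetical data, as in the pooling sketch above.
effects = [0.30, 0.15, 0.42, 0.08]
ses = [0.12, 0.20, 0.15, 0.10]

def pool(effects, ses):
    """Fixed-effect inverse-variance summary estimate."""
    weights = [1.0 / se ** 2 for se in ses]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Leave one study out at a time and re-pool the remainder.
for i in range(len(effects)):
    rest_effects = effects[:i] + effects[i + 1:]
    rest_ses = ses[:i] + ses[i + 1:]
    print(f"Without study {i + 1}: pooled effect = {pool(rest_effects, rest_ses):.3f}")
```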
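
Finally, the kappa statistic (item 17) for two reviewers making include/exclude screening decisions can be computed directly from its definition. The screening decisions below are hypothetical (1 = include, 0 = exclude):

```python
# Hypothetical screening decisions by two reviewers on the same ten records.
rater1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
rater2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

n = len(rater1)
# Observed agreement: proportion of records on which the reviewers agree.
p_o = sum(a == b for a, b in zip(rater1, rater2)) / n

# Expected chance agreement, from each reviewer's marginal include/exclude rates.
p1_yes = sum(rater1) / n
p2_yes = sum(rater2) / n
p_e = p1_yes * p2_yes + (1 - p1_yes) * (1 - p2_yes)

# Kappa: how far observed agreement exceeds chance, as a fraction of the possible excess.
kappa = (p_o - p_e) / (1 - p_e)
print(f"Observed agreement {p_o:.2f}, expected by chance {p_e:.2f}, kappa = {kappa:.2f}")
```

For real projects, scikit-learn's cohen_kappa_score computes the same two-rater statistic.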