Methods and Standards
Revised brief and accompanying webinar on HomVEE prioritization procedures
This revised brief describes the procedures used in the HomVEE project to determine which models to review. It provides hypothetical examples to illustrate the prioritization criteria and answers frequently asked questions about prioritization. An accompanying webinar provides an overview of HomVEE prioritization criteria, an explanation of how home visiting models earn study- and model-level prioritization points, an explanation of how prioritization scores are calculated, and updates to the annual review process.
HomVEE fact sheet
The HomVEE project systematically reviews the research on home visiting models that serve pregnant women or families with children up to kindergarten entry. It determines which models have enough rigorous evidence to be considered evidence-based according to criteria defined by the U.S. Department of Health and Human Services (HHS). This 3-page fact sheet describes how the HomVEE project evaluates home visiting programs and provides stakeholders with an overview of how evidence-based home visiting models are identified through a four-step evaluation process.
HomVEE updated reporting guide for study authors
The author reporting guide, which has been updated from the 2016 version, provides evaluators with guidance on how to describe randomized controlled trials and matched comparison group design studies and how to report their findings clearly so that systematic reviews can use their results. Reporting the information described in this guide is considered a best practice in general, and the information can help HomVEE reviewers assess the appropriate rating to assign to a study. The latest update clearly identifies the information that HomVEE seeks from evaluators, which will help the project assign the prioritization points that determine which home visiting models are reviewed.
Flowchart illustrating matched comparison group design standards
This flowchart shows HomVEE’s process for rating matched comparison group studies, along with definitions of key concepts the HomVEE team considers when rating studies. Users of this flowchart may also read more about producing study ratings elsewhere on the HomVEE website.
Flowchart illustrating randomized controlled trial standards
This flowchart shows HomVEE’s process for rating randomized controlled trials, along with definitions of key concepts the HomVEE team considers when rating studies. Users of this flowchart may also read more about producing study ratings elsewhere on the HomVEE website.
Addressing Attrition Bias in Randomized Controlled Trials: Considerations for Systematic Evidence Reviews
This paper focuses on attrition and, in particular, the HomVEE attrition standard. It begins by defining attrition and explaining why the bias that attrition introduces into randomized controlled trials can be problematic when interpreting study results. HomVEE uses an attrition standard adapted from the U.S. Department of Education’s What Works Clearinghouse (WWC), another systematic evidence review. HomVEE’s population of interest includes pregnant women and families with children from birth to kindergarten entry; this population differs from the school-age children whose test scores were the basis of the WWC attrition standard. This paper describes findings from testing the sensitivity of the assumptions underlying the HomVEE standard using data about parents and young children.
What Isn’t There Matters: Attrition and Randomized Controlled Trials
A randomized controlled trial (RCT) offers a highly credible way to evaluate the effect of a program. But a strong design can be offset by weaknesses in planning or execution. One common problem that weakens the conclusions of RCTs is attrition, or missing data. This brief describes what attrition is, why it matters, and how it factors into the study ratings in the HomVEE review.
On Equal Footing: The Importance of Baseline Equivalence in Measuring Program Effectiveness
To understand the effects of a program, researchers must distinguish effects caused by the program from effects caused by other factors. This effort typically involves comparing outcomes for two groups. The similarity of the two groups before program services begin is referred to as baseline equivalence. This brief explains the role of baseline equivalence when measuring a program’s effectiveness.