Publications

Methods & Standards

User-friendly brief on HomVEE prioritization procedures

February 2018

This brief describes the procedures used in the HomVEE project to determine which models to review. It provides hypothetical examples to illustrate the prioritization criteria and answers frequently asked questions about prioritization.

Download report (PDF, 164 KB)

HomVEE fact sheet

February 2018

The HomVEE project systematically reviews the research on home visiting models that serve pregnant women or families with children up to kindergarten entry. It determines which models have enough rigorous evidence to be considered evidence-based according to criteria defined by the U.S. Department of Health and Human Services (HHS). This three-page fact sheet gives stakeholders an overview of how the HomVEE project evaluates home visiting programs and identifies evidence-based models through a four-step evaluation process.

Download report (PDF, 124 KB)

HomVEE updated reporting guide for study authors

February 2018

The author reporting guide, updated from the 2016 version, provides evaluators with guidance on how to describe randomized controlled trials and matched comparison group design studies and how to report their findings clearly so that systematic reviews can use their results. Reporting the information described in this guide is considered a best practice in general, and the information can help HomVEE reviewers assign the appropriate rating to a study. The latest update clearly identifies the information that HomVEE seeks from evaluators, which will help the project assign the prioritization points that determine which home visiting models are reviewed.

Download report (PDF, 152 KB)

Flowchart illustrating matched comparison group design standards

May 2016

This flowchart shows HomVEE’s process for rating matched comparison group studies, along with definitions of key concepts the HomVEE team considers when rating studies. Users of this flowchart may also read more about producing study ratings elsewhere on the HomVEE website.

Download report (PDF, 86 KB)

Flowchart illustrating randomized controlled trial standards

May 2016

This flowchart shows HomVEE’s process for rating randomized controlled trials, along with definitions of key concepts the HomVEE team considers when rating studies. Users of this flowchart may also read more about producing study ratings elsewhere on the HomVEE website.

Download report (PDF, 159 KB)

Addressing Attrition Bias in Randomized Controlled Trials: Considerations for Systematic Evidence Reviews

July 2015

This paper focuses on attrition, and on the HomVEE attrition standard in particular. It begins by defining attrition and explaining why the bias that attrition introduces into randomized controlled trials can be problematic when interpreting study results. HomVEE uses an attrition standard adapted from the Department of Education’s What Works Clearinghouse (WWC), another systematic evidence review. HomVEE’s population of interest includes pregnant women and families with children from birth to kindergarten entry; this population differs from the school-age children whose test scores were the basis for the WWC attrition standard. The paper presents findings from tests of the sensitivity of the assumptions underlying the HomVEE standard, using data on parents and young children.

Download report (PDF, 487 KB)

What Isn’t There Matters: Attrition and Randomized Controlled Trials

August 2014

A randomized controlled trial (RCT) offers a highly credible way to evaluate the effect of a program. But a strong design can be offset by weaknesses in planning or execution. One common problem that weakens the conclusions of RCTs is attrition, or missing data. This brief describes what attrition is, why it matters, and how it factors into the study ratings in the HomVEE review.

Download report (PDF, 293 KB)
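The attrition calculations discussed in the brief above can be sketched in a few lines. This is an illustrative example only, assuming hypothetical sample counts; the function and thresholds are not HomVEE's actual standard, which is described in the reports themselves.

```python
# Illustrative sketch (not HomVEE's actual standard): computing overall and
# differential attrition for a two-arm randomized controlled trial.
# All sample counts below are hypothetical.

def attrition_rates(randomized_t, analyzed_t, randomized_c, analyzed_c):
    """Return (overall, differential) attrition as fractions of the randomized sample."""
    attr_t = (randomized_t - analyzed_t) / randomized_t  # treatment-group attrition
    attr_c = (randomized_c - analyzed_c) / randomized_c  # comparison-group attrition
    overall = ((randomized_t + randomized_c) - (analyzed_t + analyzed_c)) / (
        randomized_t + randomized_c
    )
    differential = abs(attr_t - attr_c)  # gap between the two groups' attrition
    return overall, differential

# Hypothetical trial: 200 randomized per arm; 160 and 180 remain at follow-up.
overall, differential = attrition_rates(200, 160, 200, 180)
print(f"overall attrition: {overall:.0%}")          # overall attrition: 15%
print(f"differential attrition: {differential:.0%}")  # differential attrition: 10%
```

Reviews such as HomVEE and the WWC consider both numbers together, since high differential attrition can bias estimates even when overall attrition looks modest.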

On Equal Footing: The Importance of Baseline Equivalence in Measuring Program Effectiveness

August 2014

To understand the effects of a program, researchers must distinguish effects caused by the program from effects caused by other factors. This effort typically involves comparing outcomes for two groups. The similarity of the two groups before program services begin is referred to as baseline equivalence. This brief explains the role of baseline equivalence when measuring a program’s effectiveness.

Download report (PDF, 150 KB)
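One common way to quantify the baseline similarity the brief above describes is a standardized mean difference: the gap between the groups' baseline means, expressed in pooled-standard-deviation units. The sketch below is an illustration with hypothetical data, not HomVEE's actual procedure.

```python
# Illustrative sketch: baseline equivalence as a standardized mean difference.
# The data values below are hypothetical baseline scores.
import statistics

def standardized_mean_difference(treatment, comparison):
    """Difference in baseline means, in pooled-standard-deviation units."""
    n_t, n_c = len(treatment), len(comparison)
    var_t = statistics.variance(treatment)  # sample variance, treatment group
    var_c = statistics.variance(comparison)  # sample variance, comparison group
    pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(comparison)) / pooled_sd

smd = standardized_mean_difference([52, 48, 50, 55, 49], [47, 51, 46, 50, 48])
print(f"baseline SMD: {smd:.2f}")  # baseline SMD: 0.98
```

A large baseline difference like this one signals that the groups were not on equal footing before services began, so an unadjusted comparison of their outcomes could misstate the program's effect.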