Methods & Standards

HomVEE reporting guide for study authors

May 2016

This document provides guidance on how to describe randomized controlled trials and matched comparison group design studies, and on how to report findings clearly, completely, and transparently. Reporting the information described below is considered a best practice in general, and it can also help HomVEE reviewers assess the appropriate rating to assign to a study.

Download report (PDF, 90 KB)

Addressing Attrition Bias in Randomized Controlled Trials: Considerations for Systematic Evidence Reviews

July 2015

This paper focuses on attrition and, in particular, on the HomVEE attrition standard. It begins by defining attrition and explaining why the bias that attrition introduces into randomized controlled trials can be problematic when interpreting study results. HomVEE uses an attrition standard adapted from the Department of Education's What Works Clearinghouse (WWC), another systematic evidence review. HomVEE's population of interest includes pregnant women and families with children from birth to kindergarten entry; this population differs from the school-age children whose test scores were the basis of the WWC attrition standard. The paper presents findings that test the sensitivity of the assumptions underlying the HomVEE standard, using data on parents and young children.

Download report (PDF, 487 KB)

What Isn’t There Matters: Attrition and Randomized Controlled Trials

August 2014

A randomized controlled trial (RCT) offers a highly credible way to evaluate the effect of a program, but a strong design can be undermined by weaknesses in planning or execution. One common problem that weakens the conclusions of RCTs is attrition, or missing outcome data. This brief describes what attrition is, why it matters, and how it factors into study ratings in the HomVEE review.

Download report (PDF, 293 KB)

On Equal Footing: The Importance of Baseline Equivalence in Measuring Program Effectiveness

August 2014

To understand the effects of a program, researchers must distinguish effects caused by the program from effects caused by other factors. This typically involves comparing outcomes for two groups. The similarity of the two groups before program services begin is referred to as baseline equivalence. This brief explains the role of baseline equivalence in measuring a program's effectiveness.

Download report (PDF, 150 KB)