Glossary of Terms



Absolute value. The value of a number expressed as its distance from zero, disregarding whether the number is positive or negative. For example, the absolute value of both +4 and -4 is 4.
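As a quick illustration, Python's built-in abs() returns exactly this distance-from-zero value:

```python
# Absolute value: a number's distance from zero, sign disregarded.
print(abs(4))     # 4
print(abs(-4))    # 4
print(abs(-2.5))  # 2.5
```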

Attrition. The loss of sample members from the study. Attrition typically occurs in one of three ways: (1) some sample members refuse to participate; (2) researchers are unable to locate some sample members (for example, if they have moved); or (3) researchers exclude sample members from the study (although this may negatively affect the research design). Researchers may exclude sample members for various reasons, for example, if a sample member was determined to be ineligible for the program or did not have data for all the required outcomes.



Baseline equivalence. The program and comparison groups have similar characteristics (such as race and education) at the study’s onset. For example, the sample has baseline equivalence on language if, at the beginning of the study, similar proportions of the program and comparison groups are native English speakers and non-native English speakers.

Beta. A numerical estimate from a statistical method called regression modeling, used to represent the relationship between characteristics (such as age or access to services) and an outcome (such as number of well-baby visits). The beta measures the association between two variables, holding other characteristics in the model constant. The beta can be positive, indicating that as the values of one variable increase, the other variable values also increase; or the beta can be negative, indicating that as the value of one variable increases, the other variable values decrease. In the HomVEE review, we typically present the beta for the program—that is, the difference in the outcome of interest between study participants in the program group and the comparison group, holding constant other characteristics included in the regression.
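A minimal sketch of the idea, using hypothetical data (the group indicator and visit counts below are illustrative, not from any HomVEE study): with a single binary program indicator and no other covariates, the least-squares beta reduces to the difference in mean outcomes between the program and comparison groups.

```python
# Hypothetical data: 1 = program group, 0 = comparison group.
program = [1, 1, 1, 0, 0, 0]   # group indicator
visits  = [5, 6, 7, 3, 4, 5]   # outcome: number of well-baby visits

# Least-squares slope (the "beta") of visits regressed on the indicator.
mean_x = sum(program) / len(program)
mean_y = sum(visits) / len(visits)
cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(program, visits))
var_x  = sum((x - mean_x) ** 2 for x in program)
beta = cov_xy / var_x
print(beta)  # equals the program-minus-comparison difference in means
```

In a real analysis the regression would also hold other characteristics constant, which this one-predictor sketch omits.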



Comparison group. A group with characteristics similar to those of program group members, except that those in the comparison group do not receive the services of interest. The comparison group is intended to represent what would have happened to members of the program group if they had not received the services from the model of interest. The more similar a comparison group is to the program group, the more likely it is that any difference in outcomes between the two groups can be attributed to the program.

Confidence interval. An interval surrounding a statistic (such as a mean, percentile, or correlation), within which the true statistic is believed to lie with a specified level of confidence (for example, with a 95 percent confidence level).
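A hedged sketch of a 95 percent confidence interval for a sample mean, using the normal approximation (critical value 1.96); a real analysis with a small sample would use a t critical value instead, and the data here are hypothetical.

```python
import statistics

data = [4, 5, 6, 5, 7, 4, 6, 5, 5, 6]   # hypothetical sample
mean = statistics.mean(data)
se = statistics.stdev(data) / len(data) ** 0.5   # standard error of the mean
low, high = mean - 1.96 * se, mean + 1.96 * se   # normal-approximation 95% CI
print(f"95% CI: ({low:.2f}, {high:.2f})")
```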

Confounding factor. Confounding factors occur when an aspect of the study design, other than the model of interest, is conflated with the treatment or comparison group, making it impossible to measure unbiased impact. For example, if one home visitor administers all program services, it is impossible to distinguish the effectiveness of that particular person from the effectiveness of the program. Confounding factors may also arise from systematic differences in the way data are collected from subjects in the treatment group versus the comparison group. For example, participants may report information differently to someone they know, like their home visitor, than to someone they do not know, like a research assistant. Familiarity with the data collector may change the way participants answer the questions. The presence of confounding factors can significantly impede the ability of a study to capture unbiased impacts.

Cronbach’s coefficient alpha. An estimate of internal consistency reliability, that is, how well groups of items in an assessment “hang together.” The estimate captures the extent to which the separate items on a measure move in the same direction (for example, a person who rates themselves high on one anxiety item tends to rate themselves high on the other anxiety items on the measure). The greater the similarity among items, the higher the reliability (and thus the higher the value of Cronbach’s coefficient alpha). In practice, values of the alpha typically range from 0 to 1.0, with greater values indicating stronger internal consistency.
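The standard formula is alpha = k/(k-1) × (1 − sum of item variances / variance of the total score), for k items. A minimal sketch with hypothetical item scores (rows are respondents, columns are items):

```python
import statistics

# Hypothetical scores: 4 respondents x 3 items.
scores = [
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
    [4, 4, 5],
]
k = len(scores[0])
item_vars = [statistics.variance(col) for col in zip(*scores)]  # per-item variance
totals = [sum(row) for row in scores]                           # total score per respondent
alpha = k / (k - 1) * (1 - sum(item_vars) / statistics.variance(totals))
print(round(alpha, 2))
```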



Effect size. A measure of the magnitude of the difference between the program group and the comparison group. The effect size shows the size of the impact (or the difference between the program and comparison group) relative to the standard deviation of the measure. A benefit of using the effect size is that it allows for comparisons of impacts across outcomes that may have been measured using different units. In the HomVEE review, a negative value indicates that the comparison group (which did not receive the services or program) had larger outcomes, on average, than the program group (which did receive services). A positive value indicates that the outcomes for the program group were greater than those for the comparison group. Values of 0 indicate there is no difference, on average, between the program and comparison groups.
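One common effect size is Cohen's d, the difference in group means divided by a pooled standard deviation. A sketch with hypothetical data (HomVEE studies may use other effect size formulas):

```python
import statistics

# Hypothetical outcomes for the two groups.
program    = [6, 7, 8, 7, 6]
comparison = [5, 5, 6, 4, 5]

diff = statistics.mean(program) - statistics.mean(comparison)
n1, n2 = len(program), len(comparison)
pooled_var = ((n1 - 1) * statistics.variance(program)
              + (n2 - 1) * statistics.variance(comparison)) / (n1 + n2 - 2)
d = diff / pooled_var ** 0.5   # impact in standard-deviation units
print(round(d, 2))
```

Because d is in standard-deviation units, it can be compared across outcomes measured on different scales.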

Evidence-based home visiting model. A home visiting model reviewed by HomVEE and found to meet HHS criteria for an evidence-based home visiting model.



Favorable impact. A statistically significant impact on an outcome measure in a direction that is beneficial for children and parents. The impact may be numerically positive or negative; it is classified as “favorable” based on whether the change benefits families. For example, a favorable impact could be an increase in children’s vocabulary or in daily reading to children by parents, or a reduction in harsh parenting practices or maternal depression.



Hazard ratio. The hazard (the rate at which an outcome occurs over time) in one group divided by the hazard in another group. If the outcome occurs at the same rate in both groups, the hazard ratio equals 1. Values greater than 1 indicate that the outcome occurs at a higher rate in the first group than in the second group, and values less than 1 mean that the outcome occurs at a higher rate in the second group.



Log. The exponent to which a base must be raised to equal a certain number. For example, the log of 100 (base 10) is 2 because 10 raised to the 2nd power (10², or 10 squared) equals 100. In statistics, logs are typically used to transform an outcome whose distribution lacks a desired property (for example, to reduce skewness). In the HomVEE review, the log of an outcome, such as the log of the incidence of doctor’s visits, may be used for tests of statistical significance.
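The base-10 example above, and a log transform of a skewed hypothetical outcome, in Python:

```python
import math

print(math.log10(100))   # 2.0, because 10**2 == 100
print(math.log(math.e))  # 1.0, the natural log of e

# Log transforms compress skewed outcomes before significance testing.
visits = [1, 2, 4, 8, 50]                 # hypothetical, right-skewed counts
log_visits = [math.log(v) for v in visits]
print([round(v, 2) for v in log_visits])
```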



Mean. A measure of the average value for a sample, which equals the sum of all values divided by the number of sample members.



Odds. The probability of an event occurring divided by the probability that it will not occur. Values greater than 1 indicate the event is more likely to occur than not. For example, if an event occurs 75 percent of the time, the odds are 3 (0.75/0.25), meaning the event occurs approximately three times for every time it does not occur.

Odds ratio. The odds of an event occurring in one group divided by the odds of an event occurring in another group. If the odds of an event occurring in group A equals 3 and the odds of an event occurring in group B equals 2, the odds ratio equals 1.5. In other words, there is a 1.5 times greater likelihood the event will occur in group A than in group B.
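The two definitions above can be worked through numerically (the probabilities are hypothetical):

```python
def odds(p):
    """Probability of an event divided by the probability it does not occur."""
    return p / (1 - p)

p_a, p_b = 0.75, 2 / 3        # event probabilities in groups A and B
odds_a = odds(p_a)            # 0.75 / 0.25 = 3.0
odds_b = odds(p_b)            # (2/3) / (1/3), approximately 2.0
odds_ratio = odds_a / odds_b  # odds in group A relative to group B
print(odds_a, round(odds_b, 2), round(odds_ratio, 2))
```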

Outcome domain. A group of related outcomes that measure the same or similar constructs. The HomVEE review includes eight outcome domains: maternal health; child health; child development and school readiness; reductions in child maltreatment; reductions in juvenile delinquency, family violence, and crime; positive parenting practices; family economic self-sufficiency; and linkages and referrals.



p-value. The probability that the observed finding was obtained by chance when there is no true relationship in the population. For example, a sample may show a positive mean difference, suggesting that the program group has better outcomes than the comparison group, with a p-value of 0.05. The p-value means that there is a 5 percent chance that the positive finding for the program group was obtained by chance and does not occur in the population.  
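One way to build intuition for a p-value, sketched here with a two-sided permutation test on hypothetical data: shuffle the group labels many times and count how often chance alone produces a mean difference at least as large as the observed one. (This is an illustration of the concept, not the test any particular HomVEE study used.)

```python
import random

random.seed(0)
program    = [6, 7, 8, 7, 9]   # hypothetical outcomes
comparison = [4, 5, 5, 6, 4]
observed = sum(program) / len(program) - sum(comparison) / len(comparison)

pooled = program + comparison
n_perm = 10_000
count = 0
for _ in range(n_perm):
    random.shuffle(pooled)                 # randomly reassign group labels
    a, b = pooled[:5], pooled[5:]
    diff = sum(a) / 5 - sum(b) / 5
    if abs(diff) >= abs(observed):         # as extreme as the observed difference
        count += 1
p_value = count / n_perm
print(p_value)   # the share of shuffles at least as extreme as the observed difference
```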

Primary outcome measure. For the HomVEE review, an outcome measured through direct observation, direct assessment, or administrative data; or self-reported data collected using a standardized (normed) instrument.

Program group. The sample members who receive the services or program of interest. For the HomVEE review, the services of interest are either home visiting services or the program enhancement being tested.



Quasi-experimental design. A study design in which sample members (children, parents, or families) are selected for the program and comparison conditions in a nonrandom way.



Randomized controlled trial. A study design in which sample members (children, parents, or families) are assigned to the program and comparison groups by chance.

Regression discontinuity design. A design in which a continuous scoring variable is used to assign an intervention to study units. Units with scores below a pre-set cutoff value are assigned to the treatment group and units with scores above the cutoff value are assigned to the comparison group, or vice versa. The effect of the intervention is estimated as the difference in mean outcomes between treatment and comparison group units, adjusting statistically for the relationship between the outcomes and the variable used to assign units to the intervention, typically referred to as the “forcing” variable.
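A minimal sketch of the estimation idea with hypothetical, noise-free data: fit a line to each side of the cutoff on the forcing variable, then take the jump between the two fitted lines at the cutoff as the estimated effect.

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for one predictor."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

cutoff = 50
treat_x = [40, 42, 44, 46, 48]          # below the cutoff: assigned to treatment
treat_y = [2 * x + 5 for x in treat_x]  # hypothetical outcomes with a +5 effect
comp_x  = [52, 54, 56, 58, 60]          # above the cutoff: comparison
comp_y  = [2 * x for x in comp_x]

s_t, i_t = fit_line(treat_x, treat_y)
s_c, i_c = fit_line(comp_x, comp_y)
effect = (s_t * cutoff + i_t) - (s_c * cutoff + i_c)   # jump at the cutoff
print(effect)
```

Real applications fit the adjustment on noisy data and often restrict to scores near the cutoff; this sketch only shows the assignment rule and the jump estimate.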

Replicated. For the HomVEE review, favorable impacts on at least one primary outcome measure in the same outcome domain in at least two high or moderate quality studies based on different samples.



Sample. Persons (children, parents, or families) included in the study. For the HomVEE review, sites or cohorts that are analyzed separately are counted as separate samples.

Secondary outcome measure. For the HomVEE review, most self-reported data, excluding self-reports based on a standardized (normed) instrument.

Single case design. A design that typically involves repeated, systematic measurement of a dependent variable before, during, and after the active manipulation of an independent variable (the intervention). Such designs can provide a strong basis for establishing causal inference and are widely used in applied and clinical disciplines in psychology and education.

Standard deviation. A measure of the spread or variation of values in the sample. The standard deviation approximates the average distance from the mean. Smaller standard deviations indicate that the values for individual sample members are close to the mean, whereas larger standard deviations indicate there is more variation in values.
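The mean and standard deviation definitions can be checked against a small hypothetical sample using Python's statistics module (pstdev treats the data as the full population; stdev would apply the sample correction):

```python
import statistics

values = [2, 4, 4, 4, 5, 5, 7, 9]       # hypothetical sample
mean = statistics.mean(values)           # sum of values / number of values
sd = statistics.pstdev(values)           # spread of values around the mean
print(mean, sd)
```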

Standardized (normed) instrument. An outcome measure that uses a uniform or standard set of procedures for administration and scoring. A norming sample, selected to be representative of the population of interest, was used to establish the standardized scoring system, or norms, for the measure.

Statistical significance. An indication of the probability that the observed finding was obtained by chance (when there is not a real relationship in the population). If the p-value is equal to or less than a predetermined cutoff (in the HomVEE review, 0.05), then the finding is considered statistically significant.

Sustained. For the HomVEE review, favorable impacts on primary outcome measures measured at least one year after program services ended.



Unfavorable or ambiguous impact. A statistically significant impact on an outcome measure in a direction that may indicate potential harm to children and/or parents. The impact may be numerically positive or negative; it is classified as “unfavorable or ambiguous” based on whether the change may harm families. While some outcomes are clearly unfavorable, for other outcomes it is not as clear which direction is desirable. For example, an increase in children’s behavior problems is clearly unfavorable, while an increase in the number of days mothers are hospitalized is more ambiguous. The latter may be viewed as an unfavorable impact because it indicates that mothers have more health problems, but it could also indicate that mothers have increased access to needed health care due to their participation in a home visiting program.