Evaluation Training & Technical Assistance (TA)
OAH evaluation grantees receive training and technical assistance (TA) to ensure their evaluations are designed, implemented, analyzed, and disseminated to meet the research quality standards of the HHS Teen Pregnancy Prevention Evidence Review. OAH offers evaluation TA through a variety of mechanisms including individual TA, group training, webinars, and written documents.
The FY18 Teen Pregnancy Prevention (TPP) Tier 2 grantees implementing new and innovative strategies to prevent teen pregnancy will receive Evaluation TA for conducting formative and process/implementation evaluations. Mathematica Policy Research, along with the Center for Relationship Education, will provide implementation evaluation-related coordination, training, and support to position grantees well for successful delivery of TPP services to teens and to inform related national efforts.
Evaluation TA Webinars
February 15 and 17, 2011: Introduction to the TPP Evaluation TA Website
Tier 1B Grant Implementation Study Planning (September 2017)
OAH Tier 1B grantees need well-designed implementation studies that can assess the successes and challenges of implementing the Tier 1B grant project. This brief guides grantees through the initial steps of implementation study design, including research question selection and prioritization, data source mapping, and study timeline development.
Evaluators of TPP programs implemented in schools face a difficult tradeoff in selecting the level of randomization. If schools are randomized, then the study’s statistical power is reduced by the larger standard errors that result from clustering. If students are randomized within schools, then the study’s power is potentially reduced by contamination bias: the attenuation that can occur when members of the program group date members of the control group. This brief quantifies this tradeoff to help evaluators choose the best unit of randomization. See the Appendix for information about the analytic approach, simulation findings for alternative model assumptions, and tables of descriptive statistics.
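The tradeoff the brief quantifies can be sketched with a standard design-effect calculation. This is a simplified illustration, not the brief's own simulation; the ICC, cluster size, contamination attenuation, and the 2.8 multiplier (roughly 80% power at a 5% two-sided significance level) are hypothetical assumptions chosen for the example.

```python
import math

def deff_cluster(icc, cluster_size):
    """Design effect for cluster randomization: variance inflation
    driven by the intraclass correlation (ICC)."""
    return 1 + (cluster_size - 1) * icc

def mde(n_per_arm, deff=1.0, dilution=1.0, sigma=1.0, m=2.8):
    """Minimum detectable effect for a two-arm comparison.
    m ~= 2.8 corresponds to 80% power with a 5% two-sided test.
    dilution < 1 reflects contamination attenuating the observed
    treatment-control contrast."""
    se = sigma * math.sqrt(deff * 2.0 / n_per_arm)
    return m * se / dilution

# School-level randomization: 1,000 students/arm in clusters of 50,
# assumed ICC = 0.02 (hypothetical)
mde_school = mde(n_per_arm=1000, deff=deff_cluster(0.02, 50))

# Student-level randomization: same sample, no clustering penalty,
# but an assumed 20% attenuation from cross-group contamination
mde_student = mde(n_per_arm=1000, dilution=0.8)

print(round(mde_school, 3), round(mde_student, 3))
```

Under these particular assumptions, student-level randomization detects a smaller true effect; with a higher ICC or less contamination the comparison can flip, which is exactly why the brief's quantification matters.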
Estimating Program Effects on Program Participants (September 2017)
A randomized experiment provides the opportunity to calculate an unbiased estimate of the effect of an intervention. Specifically, an intent-to-treat (ITT) analysis allows researchers to credibly estimate the effect of the offer of an intervention. However, not all participants comply with their assigned condition, and this noncompliance can lead to an underestimate of the effect of actually receiving the intervention. This brief describes analytic approaches for estimating a credible treatment-on-the-treated (TOT) estimate as a supplement to the ITT estimate and provides guidance on how to report this finding in a final report or journal article.
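One common TOT approach is the Bloom adjustment, which rescales the ITT estimate by the difference in program receipt rates between arms. The sketch below uses hypothetical outcome means and take-up rates, not figures from the brief, and assumes the brief's recommended approach may differ in detail.

```python
def itt_effect(mean_treat, mean_control):
    """Intent-to-treat effect: the contrast by assigned condition."""
    return mean_treat - mean_control

def tot_effect(itt, takeup_treat, takeup_control=0.0):
    """Bloom-style treatment-on-the-treated estimate: the ITT effect
    scaled by the difference in receipt rates between arms."""
    return itt / (takeup_treat - takeup_control)

# Hypothetical: 45% vs 40% outcome rates by assignment; 80% of the
# program group actually attended; no control-group crossover.
itt = itt_effect(0.45, 0.40)
tot = tot_effect(itt, takeup_treat=0.80)
print(round(itt, 3), round(tot, 4))
```

With 80% take-up, the 5-point ITT effect implies a 6.25-point effect of actually receiving the intervention, illustrating how noncompliance makes the ITT estimate understate the effect of receipt.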
Selecting Benchmark and Sensitivity Analyses (September 2017)
Researchers make many decisions about how to prepare and analyze their data to show program effectiveness, and these decisions can influence an evaluation’s findings. This brief highlights common situations in which TPP researchers’ decisions might influence findings (e.g., handling inconsistent or missing data, statistically adjusting for covariates or blocks), suggests approaches to use as benchmark and sensitivity analyses for each of these decision points, particularly in the context of the HHS evidence review, and offers guidance on presenting and interpreting benchmark and sensitivity results in reports or journal articles.
An Overview of Economic Evaluation Methods (October 2016)
Programs need cost data to estimate how much it costs to deliver their services, to understand the resources they use, and to answer other questions about the cost of teen pregnancy prevention programs. This brief describes several economic evaluation methodologies and discusses how to plan for and collect the cost data necessary for these analyses.
Structural Elements of an Intervention (October 2016)
Program developers want to be able to accurately describe their interventions and understand which pieces of the intervention contribute to changes in participant outcomes. This brief provides guidance on unpacking interventions—specifically, it discusses how to dissect an intervention into its structural elements (core components), measure aspects of implementation related to structural elements, and assess how those structural elements influence participant outcomes.
The choice of study design has important implications for the sample size (number of communities) needed to detect policy-relevant impacts. This brief uses a general example to show how statistical power varies across three types of quasi-experimental community-level designs.
OAH Tier 1B evaluations need data that can assess the impact of multiple strategies on a whole community or set of communities. This brief reviews general factors to consider when choosing data sources, and highlights advantages and disadvantages of four types of administrative/secondary data sources.
Evaluations of OAH Tier 1B projects are designed to assess impacts of the broad strategy at the community level. This brief provides guidance to grantees and evaluators on how to define treatment communities for evaluation purposes, and includes a basic example of how to estimate and interpret a community-level impact.
To ensure a program meets enrollment targets, it is essential to monitor the flow of enrollees through the various stages of a recruitment process. This brief provides researchers and practitioners with tools to track both recruitment and retention in TPP programs.
This brief complements the brief on “Recommendations for Successfully Recruiting and Retaining District Participation in a Teen Pregnancy Prevention Impact Evaluation.” Once school district approval is received, school recruitment can begin. This brief provides steps for securing schools’ interest and participation in a TPP impact evaluation.
Almost all school districts require approval before a program evaluation can be conducted within their schools. Permission must first come from the school district; only then can individual schools be recruited. This brief provides steps for obtaining district approval for an evaluation of a TPP program implemented in a school setting.
This brief for researchers planning or implementing an evaluation of a teen pregnancy prevention program provides an overview of the HHS Teen Pregnancy Prevention Evidence Review. It includes information about the review process and requirements for being included as part of the evidence for the effectiveness of teen pregnancy prevention programs.
A common goal of a teen pregnancy prevention impact evaluation is to show that the intervention being tested has a positive and statistically significant effect on participant behavioral outcomes. This brief provides an overview of how researchers can calculate the minimum detectable impacts (MDIs) for a given evaluation, a calculation analogous to a “power calculation.” An accompanying Excel tool allows evaluators to calculate MDIs for their own impact evaluations, and example calculations are presented in the brief.
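For a binary behavioral outcome, an MDI can be approximated from the control-group rate and the sample sizes. The sketch below is a simplified stand-in for the brief's Excel tool, with hypothetical inputs; the 2.8 multiplier (80% power, 5% two-sided test) and the variance formula are standard textbook assumptions, not taken from the brief.

```python
import math

def mdi_binary(p_control, n_treat, n_control, mfactor=2.8):
    """Approximate minimum detectable impact for a binary outcome.
    mfactor ~= 2.8 corresponds to 80% power with a 5% two-sided test;
    the outcome variance is approximated by p(1 - p) at the
    control-group rate."""
    var = p_control * (1 - p_control)
    se = math.sqrt(var * (1 / n_treat + 1 / n_control))
    return mfactor * se

# Hypothetical: 30% of control youth report the risk behavior,
# with 400 youth per research group.
print(round(mdi_binary(0.30, 400, 400), 3))
```

With these inputs, impacts smaller than about 9 percentage points would likely go undetected; doubling each group's size shrinks the MDI, which is the kind of tradeoff the brief's tool is designed to surface.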
Researchers are often apprehensive about using linear regression as an analytic approach when the outcomes being examined are binary (yes/no) responses. This brief provides researchers with a technical explanation for why the linear probability model (the linear regression methodology used for binary outcomes) is appropriate in the context of calculating impacts in an evaluation.
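Part of the intuition: with a single binary treatment indicator, the OLS slope of a linear probability model equals the difference in outcome proportions between the two groups, a directly interpretable impact in percentage points. The sketch below uses made-up data and computes that slope via group means rather than a regression library; it illustrates the equivalence, not the brief's full technical argument.

```python
def lpm_impact(outcomes, treated):
    """Linear probability model with a single 0/1 treatment regressor:
    the OLS slope equals the difference in mean outcomes
    (i.e., the difference in proportions) between groups."""
    t_mean = sum(y for y, t in zip(outcomes, treated) if t) / sum(treated)
    c_mean = sum(y for y, t in zip(outcomes, treated) if not t) / (
        len(treated) - sum(treated))
    return t_mean - c_mean

# Hypothetical binary outcome (1 = reports the protective behavior)
# for ten youth, five per research group.
y = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
t = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(round(lpm_impact(y, t), 3))  # treatment mean 0.6 minus control mean 0.4
```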
Sample Attrition in Teen Pregnancy Prevention Impact Evaluations (November 2014)
A randomized controlled trial (RCT) can produce an unbiased estimate of the effect of an intervention. However, when a small or non-representative subset of the initially assigned sample is used to show the effect of the intervention (e.g., the set of individuals who respond to a follow-up survey), the resulting estimate of program effectiveness may be biased. This brief outlines how non-response (i.e., sample attrition) affects individual- and cluster-level RCTs, how the bias from attrition can be assessed, and strategies to limit sample attrition in teen pregnancy prevention evaluations.
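Two quantities evidence reviews typically examine when assessing attrition are the overall rate (sample lost from both arms combined) and the differential rate (the gap in loss between arms). The sketch below computes both for hypothetical follow-up counts; the specific thresholds a review applies to these rates are not part of this example.

```python
def attrition_rates(n_assigned_t, n_analyzed_t, n_assigned_c, n_analyzed_c):
    """Overall and differential attrition for a two-arm RCT."""
    overall = 1 - (n_analyzed_t + n_analyzed_c) / (n_assigned_t + n_assigned_c)
    diff = abs((1 - n_analyzed_t / n_assigned_t) -
               (1 - n_analyzed_c / n_assigned_c))
    return overall, diff

# Hypothetical: 500 youth assigned per arm; 420 treatment and
# 380 control youth complete the follow-up survey.
overall, differential = attrition_rates(500, 420, 500, 380)
print(round(overall, 3), round(differential, 3))
```

Here 20% of the sample is lost overall, with an 8-percentage-point gap between arms; the differential component is the more direct threat to baseline equivalence among respondents.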
Baseline Inequivalence and Matching (November 2014)
In order for a study to provide compelling evidence of program effectiveness, the intervention and comparison groups should be equivalent on key characteristics measured at baseline. This brief discusses why baseline equivalence is important and how it can be assessed, and provides guidance on matching methods that can improve baseline equivalence and thereby provide persuasive evidence of the effect of teen pregnancy prevention interventions.
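One of the simplest matching methods is greedy 1:1 nearest-neighbor matching without replacement on a single baseline score (for example, a propensity score or a baseline risk index). The sketch below is a minimal illustration with made-up scores, not a recommendation of this method over the alternatives the brief covers.

```python
def nearest_neighbor_match(treated_scores, comparison_scores):
    """Greedy 1:1 nearest-neighbor matching without replacement on a
    single baseline score. Returns (treated_index, comparison_index)
    pairs; each comparison unit is used at most once."""
    available = dict(enumerate(comparison_scores))
    pairs = []
    for i, score in enumerate(treated_scores):
        j = min(available, key=lambda k: abs(available[k] - score))
        pairs.append((i, j))
        del available[j]  # matched without replacement
    return pairs

# Hypothetical baseline risk scores for 3 treated and 4 comparison youth
treat = [0.2, 0.5, 0.9]
comp = [0.85, 0.1, 0.55, 0.4]
print(nearest_neighbor_match(treat, comp))
```

After matching, baseline equivalence should be re-checked on the matched sample; greedy matching is order-dependent, one reason optimal-matching variants exist.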
Missing outcome data can pose a serious threat to the validity of experimental impacts. This brief provides guidance on how to manage this issue if it occurs, including strategies on how to clearly describe the problem and how to use valid statistical methods to adjust for it.
The impact of teen pregnancy prevention programs on an outcome like contraceptive use among sexually active youth is often of interest to researchers and policymakers. This brief describes a serious pitfall in estimating program impacts on such outcomes among only the subgroup of youth who are sexually active at follow-up, a strategy that likely produces biased estimates even in a study with a random assignment design. The brief illustrates the source and nature of the bias and offers alternative strategies for analyzing impacts on sexual risk behavior that produce unbiased estimates by maintaining the integrity of the random assignment design.
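The mechanism behind the bias can be seen in a stylized, deterministic example of my own construction (not from the brief): if the program delays initiation only among higher-risk youth who are also less likely to use contraception when active, then conditioning on sexual activity at follow-up compares non-equivalent subgroups and manufactures an apparent impact even when the program has no effect on contraceptive behavior.

```python
# Each hypothetical youth is a tuple:
# (active_if_control, active_if_treated, uses_contraception_when_active)
# The program delays initiation only among some high-risk youth, who
# are also the least likely to use contraception when active.
youth = (
    [(1, 1, 1)] * 40 +  # low-risk: active either way, contraceptors
    [(1, 0, 0)] * 20 +  # high-risk: program delays their initiation
    [(1, 1, 0)] * 20 +  # high-risk: active either way, non-users
    [(0, 0, 0)] * 20    # not active under either condition
)

def contraception_rate_among_active(youth, treated):
    """Contraception rate conditioned on being sexually active at
    follow-up -- the flawed subgroup analysis."""
    active = [y for y in youth if y[1 if treated else 0]]
    return sum(y[2] for y in active) / len(active)

control_rate = contraception_rate_among_active(youth, treated=False)
treat_rate = contraception_rate_among_active(youth, treated=True)
print(round(control_rate, 3), round(treat_rate, 3))
```

By construction no youth's contraceptive behavior changes, yet the treatment subgroup's rate (40/60) exceeds the control subgroup's (40/80) purely because the program removed non-users from the "active" pool: a composition effect, not a program effect.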
This brief discusses planning effectiveness evaluations that will meet HHS evidence standards, while also being useful to decision makers, and discusses approaches for clearly communicating key evaluation plan components to funders.
Evaluation updates are developed to share answers to frequently asked questions from grantees.
Technical Assistance (TA) Schedule for Research & Demonstration Projects (TPP Tier 2)
- Reporting timeline for 5-year evaluation grants
- Year 1: Revised evaluation design
- Year 2: Evaluation abstract
- Year 3: Implementation analysis plan
- Year 3: Impact analysis plan
- Year 5: Final impact evaluation report
- Year 5: Final evaluation abstract
- Twice yearly during data collection:
- Reporting templates
- Evaluation Collaboration site for Grantees:
- Grantees FY2010-2014
- Grantees FY2015-2019
- FY 2015-2019 Tier 2 Measures
Content last reviewed on December 15, 2018