
Evaluation Training & Technical Assistance (TA)

OAH evaluation grantees receive training and technical assistance (TA) to ensure their evaluations are designed, implemented, analyzed, and disseminated to meet the research quality standards of the HHS Teen Pregnancy Prevention Evidence Review. OAH offers evaluation TA through a variety of mechanisms including individual TA, group training, webinars, and written documents.

Training

Evaluation TA Webinars

September 28, 2017: Extending Your Reach: Mounting an Integrated Communications Strategy
Slides, Transcript

April 5, 2016: Technical Assistance Webinar Identifying Appropriate Data Sources for Community-Level Evaluations: Tier 1B Grantees and Evaluators
Slides, Audio, Transcript

February 4, 2016: Technical Assistance Webinar Designing Community-Level Evaluations: Tier 1B Grantees & Evaluators
Slides, Audio, Transcript

November 2014: Getting Your Message Heard: Simple and Successful Dissemination
Slides, Audio, Transcript

October 28, 2014: Final Evaluation Reporting: OAH Tier 1 C/D, Tier 2
Slides, Audio, Transcript

January 29, 2014: Upcoming Evaluation Reporting: Evaluation Progress, Implementation Analysis Plans, and an Evaluation Abstract
Presentation on YouTube, Slides, Transcript

June 23, 2011: Working Together: Program Staff’s Role in Effectiveness Evaluations
Slide Set 1, Slide Set 2, Transcript

February 15 and 17, 2011: Introduction to the TPP Evaluation TA Website
Slides

Evaluation Briefs

Tier 1B Grant Implementation Study Planning (September 2017)

OAH Tier 1B grantees need well-designed implementation studies that can assess the successes and challenges of implementing the Tier 1B grant project. This brief guides grantees through the initial steps of implementation study design, including research question selection and prioritization, data source mapping, and study timeline development.

Should Teen Pregnancy Prevention Studies Randomize Students or Schools? The Power Tradeoffs Between Contamination Bias and Clustering (September 2017)

Evaluators of TPP programs implemented in schools face a difficult tradeoff in selecting the level of randomization. If schools are randomized, then the study’s statistical power is reduced by larger standard errors resulting from clustering. If students are randomized within schools, then the study’s power is potentially reduced by attenuation bias that can occur when members of the program group date members of the control group (contamination bias). This brief quantifies this tradeoff to help evaluators choose the best unit of randomization. See the Appendix for information about the analytic approach, simulation findings for alternative model assumptions, and tables of descriptive statistics.
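The tradeoff the brief quantifies can be illustrated with back-of-the-envelope arithmetic. Below is a minimal Python sketch using standard power-calculation approximations; the function name, the ICC, the cluster size, and the contamination figure are illustrative assumptions, not values from the brief:

```python
import math

def mde(n, deff=1.0, attenuation=1.0):
    """Approximate minimum detectable effect (in standard deviation units)
    for a two-arm trial with n total individuals split evenly, inflated by
    a design effect (clustering) and deflated by contamination attenuation.
    Uses two-sided alpha = .05 and 80% power (z = 1.96 + 0.84)."""
    z = 1.96 + 0.84
    base = z * math.sqrt(4.0 / n)  # SE of a difference in means, unit variance
    return base * math.sqrt(deff) / attenuation

n = 2000                     # total students (illustrative)
icc, m = 0.02, 100           # assumed intraclass correlation and school size
deff = 1 + (m - 1) * icc     # design effect when schools are randomized

print(round(mde(n, deff=deff), 3))          # schools randomized: clustering inflates MDE
print(round(mde(n, attenuation=0.9), 3))    # students randomized: 10% assumed attenuation
```

With these made-up inputs, clustering inflates the detectable effect far more than a modest contamination attenuation does, but the comparison can flip as the assumed ICC, cluster size, or contamination rate changes, which is exactly the tradeoff the brief examines.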

Estimating Program Effects on Program Participants (September 2017)

A randomized experiment provides the opportunity to calculate an unbiased estimate of the effect of an intervention. Specifically, an intent-to-treat (ITT) analysis allows researchers to credibly estimate the effect of the offer of an intervention. However, not all participants comply with their assigned condition, and this non-compliance can lead to an underestimate of the effect of actually receiving the intervention. This brief describes analytic approaches for estimating a credible Treatment on the Treated (TOT) estimate as a supplement to the ITT estimate, and provides guidance on how to report this finding in a final report or journal article.
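One standard way to recover a TOT estimate is the Bloom adjustment, which rescales the ITT estimate by the difference in program receipt between groups. A minimal sketch, with illustrative numbers and a function name of my own (not taken from the brief):

```python
def bloom_tot(itt, takeup_treat, takeup_control=0.0):
    """Bloom-style adjustment: divide the ITT estimate by the difference
    in program receipt rates to estimate the effect of treatment on the
    treated. Assumes the offer affects outcomes only through receipt."""
    compliance = takeup_treat - takeup_control
    if compliance <= 0:
        raise ValueError("treatment group must have higher take-up")
    return itt / compliance

# Illustrative: a 3-percentage-point ITT effect, 75% program attendance
# in the treatment group, and no crossover in the control group.
print(round(bloom_tot(0.03, 0.75), 4))  # → 0.04
```

The intuition: if only 75% of the offered group actually received the program, the 3-point ITT effect is diluted, and the effect on actual participants is correspondingly larger.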

Selecting Benchmark and Sensitivity Analyses (September 2017)

Researchers make a number of decisions in preparing and analyzing their data to show program effectiveness, and these decisions can influence an evaluation's findings. This brief highlights common situations in which TPP researchers make decisions that might influence findings (e.g., handling inconsistent or missing data, statistically adjusting for covariates or blocks), suggests approaches to use as benchmark and sensitivity analyses for each of these decision points, especially in the context of the HHS evidence review, and offers guidance on presenting and interpreting benchmark and sensitivity results in reports or journal articles.

An Overview of Economic Evaluation Methods (October 2016)

Programs need cost data to estimate how much it costs to deliver a program, to understand the resources used, and to answer other questions about the cost of teen pregnancy prevention programs. This brief describes several economic evaluation methodologies and discusses how to plan for and collect the cost data necessary for these analyses.

Structural Elements of an Intervention (October 2016)

Program developers want to be able to accurately describe their interventions and understand which pieces of the intervention contribute to changes in participant outcomes. This brief provides guidance on unpacking interventions—specifically, it discusses how to dissect an intervention into its structural elements (core components), measure aspects of implementation related to structural elements, and assess how those structural elements influence participant outcomes.

TA Brief for Tier 1B Grantees: How Study Design Influences Statistical Power in Community-Level Evaluations (September 2016)

The choice of study design has important implications for the sample size (number of communities) needed to detect policy-relevant impacts. This brief uses a general example to show how statistical power varies across three types of quasi-experimental community-level designs.

TA Brief for Tier 1B Grantees: Data Sources for Community-Level Outcomes (August 2016)

OAH Tier 1B evaluations need data that can assess the impact of multiple strategies on a whole community or set of communities. This brief reviews general factors to consider when choosing data sources, and highlights advantages and disadvantages of four types of administrative/secondary data sources.

TA Brief for Tier 1B Grantees: Defining Treatment Communities and Estimating Community Impacts (June 2016)

Evaluations of OAH Tier 1B projects are designed to assess impacts of the broad strategy at the community level. This brief provides guidance to grantees and evaluators on how to define treatment communities for evaluation purposes, and includes a basic example of how to estimate and interpret a community-level impact.

Developing and Implementing Systems for Tracking Recruitment and Retention for Programs Participating in Effectiveness Evaluations (June 2016)

To ensure a program meets enrollment targets, it is essential to monitor the flow of enrollees through the various stages of a recruitment process. This brief provides researchers and practitioners with tools to track both recruitment and retention in Teen Pregnancy Prevention programs.

Recommendations for Successfully Recruiting and Retaining School Participation in a Teen Pregnancy Prevention Impact Evaluation (June 2015)

This brief complements the brief on “Recommendations for Successfully Recruiting and Retaining District Participation in a Teen Pregnancy Prevention Impact Evaluation.” Once school district approval is received, school recruitment can begin. This brief provides steps for securing schools’ interest and participation in a teen pregnancy prevention (TPP) impact evaluation.

Recommendations for Successfully Recruiting and Retaining District Participation in a Teen Pregnancy Prevention Impact Evaluation (June 2015)

Almost all school districts require approval before a program evaluation can be conducted within their schools. Permission must first come from the district; only then can individual schools be recruited. This brief provides steps for obtaining district approval for an evaluation of a teen pregnancy prevention (TPP) program implemented in a school setting.

Understanding the HHS Teen Pregnancy Prevention Evidence Review (June 2015)

This brief for researchers planning or implementing an evaluation of a teen pregnancy prevention program provides an overview of the HHS Teen Pregnancy Prevention Evidence Review. It includes information about the review process and requirements for being included as part of the evidence for the effectiveness of teen pregnancy prevention programs.

Calculating Minimum Detectable Impacts in Teen Pregnancy Prevention Impact Evaluations (December 2014)

A common goal of a teen pregnancy prevention impact evaluation is to show that the intervention being tested has a positive and statistically significant effect on participant behavioral outcomes. This brief provides an overview of how researchers can calculate minimum detectable impacts (MDIs) for a given evaluation, a calculation analogous to a "power calculation." An accompanying Excel tool allows evaluators to calculate MDIs for their own impact evaluations, and example calculations are presented in the brief.
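The core MDI arithmetic for a binary outcome in a simple two-arm, individual-level RCT can be sketched in a few lines. This is an illustrative approximation under assumed inputs (sample size, baseline rate, 80% power, two-sided 5% test), not a reproduction of the brief's Excel tool:

```python
import math

def mdi(n_per_group, p_control):
    """Approximate minimum detectable impact (in percentage points) for a
    binary outcome in a two-arm RCT with equal group sizes, assuming
    two-sided alpha = .05 and 80% power."""
    z_alpha, z_power = 1.96, 0.84
    var = p_control * (1 - p_control)            # variance of a binary outcome
    se = math.sqrt(2 * var / n_per_group)        # SE of a difference in proportions
    return (z_alpha + z_power) * se

# Illustrative: 400 youth per arm, 20% baseline rate of the risk behavior
print(round(mdi(400, 0.20), 3))  # → 0.079, i.e., about an 8-point impact
```

A real calculation would also account for covariate adjustment, clustering, and expected attrition, all of which change the detectable impact.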

Using the Linear Probability Model to Estimate Impacts on Binary Outcomes in Randomized Controlled Trials (December 2014)

Researchers are often hesitant to use linear regression as an analytic approach when the outcomes being examined are binary (yes/no) responses. This brief provides researchers with a technical explanation of why the linear probability model, the linear regression methodology for binary outcomes, is appropriate in the context of calculating impacts in an evaluation.
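A useful intuition: with a single binary treatment indicator and no covariates, the linear probability model's OLS coefficient on treatment is simply the difference in outcome rates between the two groups, which is a probability impact in natural units. A small sketch with made-up data (not from the brief):

```python
def lpm_impact(y_treat, y_control):
    """OLS coefficient on T in the model y = a + b*T when y and T are
    binary: the difference in mean outcome rates between groups."""
    return sum(y_treat) / len(y_treat) - sum(y_control) / len(y_control)

# Hypothetical binary outcomes (e.g., reported a risk behavior at follow-up)
treat   = [1, 0, 0, 1, 0, 0, 0, 1]   # 3 of 8
control = [1, 1, 0, 1, 0, 1, 0, 1]   # 5 of 8

print(lpm_impact(treat, control))  # → -0.25, a 25-point reduction
```

With covariates added, the coefficient is no longer a raw rate difference, but it retains this percentage-point interpretation, which is one reason the brief argues the model is appropriate for impact estimation.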

Sample Attrition in Teen Pregnancy Prevention Impact Evaluations (November 2014)

A randomized controlled trial (RCT) is able to produce an unbiased estimate of the effect of an intervention. However, when a small or non-representative subset of the initially assigned sample is used to show the effect of the intervention (e.g., the set of individuals who respond to a follow-up survey), the resulting estimate of program effectiveness may be biased. This brief outlines how non-response (i.e., sample attrition) affects individual- and cluster-level RCTs, how the bias from attrition can be assessed, and strategies to limit sample attrition in teen pregnancy prevention evaluations.
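Attrition is typically summarized with two numbers: overall attrition (share of the randomized sample lost to follow-up) and differential attrition (the gap in attrition between arms). A minimal sketch with hypothetical counts (the figures are illustrative, not thresholds from the evidence review):

```python
def attrition(randomized, respondents):
    """Fraction of an initially randomized group lost to follow-up."""
    return 1 - respondents / randomized

# Hypothetical: 500 randomized per arm; 430 treatment and 460 control
# youth complete the follow-up survey.
overall = attrition(500 + 500, 430 + 460)
differential = abs(attrition(500, 430) - attrition(500, 460))

print(round(overall, 2))       # → 0.11
print(round(differential, 2))  # → 0.06
```

Evidence reviews generally assess these two rates jointly, since high differential attrition is a stronger signal of potential bias than high overall attrition alone.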

Baseline Inequivalence and Matching (November 2014)

In order for a study to provide compelling evidence of program effectiveness, the intervention and comparison groups should be equivalent on key characteristics measured at baseline. This brief discusses why baseline equivalence is important and how it can be assessed, and provides guidance on matching methods that can improve baseline equivalence and thereby provide persuasive evidence of the effect of teen pregnancy prevention interventions.
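Baseline equivalence is commonly assessed with a standardized mean difference: the baseline gap between groups expressed in pooled standard deviation units. A minimal sketch with made-up numbers (the example values are assumptions, not from the brief):

```python
def std_diff(mean_treat, mean_comp, sd_pooled):
    """Standardized baseline difference between groups, in effect-size
    units; reviews often flag differences above a fixed threshold."""
    return (mean_treat - mean_comp) / sd_pooled

# Hypothetical: baseline age of 15.6 vs 15.5 years, pooled SD 1.2 years
print(round(std_diff(15.6, 15.5, 1.2), 3))  # → 0.083
```

Small standardized differences suggest the groups are comparable on that characteristic; larger ones typically trigger statistical adjustment or, as the brief discusses, matching before impacts are estimated.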

Coping with Missing Data in Randomized Controlled Trials (May 2013)

Missing outcome data can pose a serious threat to the validity of experimental impact estimates. This brief provides guidance on how to manage this issue if it occurs, including strategies for clearly describing the problem and for using valid statistical methods to adjust for it.

Estimating Program Impacts for a Subgroup Defined by Post-Intervention Behavior: Why is it a Problem? What is the Solution? (December 2012)

The impact of teenage pregnancy prevention programs on an outcome like contraceptive use among sexually active youth is often of interest to researchers and policymakers. This brief describes a serious pitfall in estimating program impacts on outcomes such as contraceptive use among only the subgroup of youth who are sexually active at follow-up, a strategy that likely produces biased estimates even in a study with a random assignment design. It illustrates the source and nature of the bias, and offers alternative strategies for analyzing impacts on sexual risk behavior that produce unbiased estimates by maintaining the integrity of the random assignment design.

Planning Evaluations Designed to Meet Scientific Standards: Communicating Key Components of the Plan for a Rigorous and Useful Evaluation of a Teenage Pregnancy Prevention Program (July 2011)

This brief discusses planning effectiveness evaluations that will meet HHS evidence standards, while also being useful to decision makers, and discusses approaches for clearly communicating key evaluation plan components to funders.

Evaluation Updates

Evaluation updates share answers to questions commonly received from grantees.

May 2017: Evaluations at a Glance

December 2016: Evaluations at a Glance

June 2015: FAQs about School Recruitment

March 2015: Evaluation Reporting at a Glance

December 2013: FAQs about the Implications of Clustering in RCTs

November 2013: Evaluations at a Glance

December 2011: FAQs about Reporting Implementation Findings

July 2011: FAQs about Evaluation Start-Up

January 2011: Evaluation Technical Assistance Update

Technical Assistance (TA) Schedule for Research & Demonstration Projects (TPP Tier 2)

 

Content created by Office of Adolescent Health
Content last reviewed on December 8, 2016