Research Design

Essential considerations for designing rigorous impact evaluations, from theoretical foundations to practical implementation strategies for randomized controlled trials and quasi-experimental studies.

Research design is a key ingredient for reliable, policy-relevant research and data science projects. It determines whether you can credibly measure and attribute observed changes to your intervention, rather than to other factors. This page provides an overview of key research design considerations and links to detailed guidance on specific topics.

Core Research Design Elements

Every research and data science project must address several fundamental questions that determine the strength and credibility of your findings:

Theory of Change and Causal Pathways

Before designing your project – whether exploratory analysis, predictive modeling, or causal inference – you need a clear understanding of your research context and of how change is expected to occur within it. A Theory of Change maps the logical sequence from program activities to intended outcomes, making explicit the assumptions that underlie each step.

Key considerations:

  • What specific changes do you expect to see?
  • What are the key assumptions about how change happens?
  • What risks could break the causal chain?

👉 Learn more: Theory of Change
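
One lightweight way to keep those assumptions and risks explicit is to record the causal chain as structured data rather than prose alone. The sketch below is an illustrative Python representation; the stages, assumptions, and risks shown are hypothetical and stand in for your own program logic.

```python
# A minimal sketch of a Theory of Change stored as structured data
# so assumptions and risks stay explicit and reviewable (contents are illustrative).
from dataclasses import dataclass, field

@dataclass
class CausalStep:
    """One link in the causal chain from activities to intended outcomes."""
    from_stage: str
    to_stage: str
    assumptions: list[str] = field(default_factory=list)
    risks: list[str] = field(default_factory=list)

# Hypothetical two-step chain for a training program
theory_of_change = [
    CausalStep(
        from_stage="Training sessions delivered",
        to_stage="Participants adopt new practices",
        assumptions=["Participants attend regularly"],
        risks=["Low attendance breaks the causal chain"],
    ),
    CausalStep(
        from_stage="Participants adopt new practices",
        to_stage="Target outcomes improve",
        assumptions=["Practices are effective in this context"],
        risks=["External shocks mask the effect"],
    ),
]

for step in theory_of_change:
    print(f"{step.from_stage} -> {step.to_stage} | assumptions: {step.assumptions}")
```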

Statistical Power and Sample Size

Your study must be adequately powered to detect meaningful effects. This requires careful consideration of the minimum effect size worth detecting, expected variability in outcomes, and practical constraints.

Key considerations:

  • What is the minimum detectable effect that would be policy-relevant?
  • What sample size do you need to achieve adequate statistical power?
  • How will attrition and non-compliance affect your power?

👉 Learn more: Sample and Power Calculations | How-to Guide to Power Calculations
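
For a concrete starting point, the sketch below shows one way to size a two-arm, individual-level trial in Python using statsmodels. The effect size, power, and attrition figures are placeholders rather than recommendations, and clustered designs need a further design-effect adjustment before these numbers apply.

```python
# A minimal power calculation sketch using statsmodels (illustrative numbers only).
import math
from statsmodels.stats.power import TTestIndPower

# Illustrative assumptions -- replace with values from your own context
mde = 0.20        # minimum detectable effect in standard-deviation units (Cohen's d)
alpha = 0.05      # significance level
power = 0.80      # desired statistical power
attrition = 0.10  # expected share of the sample lost to follow-up

# Required sample size per arm for a two-sided comparison of means
n_per_arm = TTestIndPower().solve_power(
    effect_size=mde, alpha=alpha, power=power, ratio=1.0, alternative="two-sided"
)

# Inflate the recruitment target to allow for expected attrition, rounding up
n_recruit = math.ceil(n_per_arm / (1 - attrition))

print(f"Required per arm (analysis sample): {math.ceil(n_per_arm)}")
print(f"Recruit per arm ({attrition:.0%} attrition): {n_recruit}")
```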

Randomization Strategy

The method of assigning participants to treatment and control groups is crucial for causal identification. The right approach depends on your intervention, context, and implementation constraints.

Key considerations:

  • Should you randomize at the individual or cluster level?
  • Do you need stratification to ensure balance on key characteristics?
  • How will you handle spillovers between treatment and control groups?

👉 Learn more: Randomization | How-to Guide to Randomization
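
As an illustration, here is a minimal sketch of stratified, individual-level assignment in Python with numpy and pandas. The column names (id, region) and the fixed seed are assumptions made for the example; a cluster-randomized design would assign treatment at the cluster level instead, and spillover concerns may call for a different unit of randomization altogether.

```python
# A minimal sketch of stratified random assignment (hypothetical column names).
import numpy as np
import pandas as pd

def stratified_assign(df: pd.DataFrame, stratum_col: str, seed: int = 12345) -> pd.Series:
    """Return a 0/1 treatment indicator, balanced within each stratum."""
    rng = np.random.default_rng(seed)        # fixed seed keeps the assignment reproducible
    treatment = np.zeros(len(df), dtype=int)
    for _, positions in df.groupby(stratum_col).indices.items():
        order = rng.permutation(len(positions))               # random order within the stratum
        treatment[positions] = (order % 2 == 0).astype(int)   # alternate T/C along that order
    return pd.Series(treatment, index=df.index, name="treatment")

# Illustrative sample frame
frame = pd.DataFrame({
    "id": range(1, 9),
    "region": ["north"] * 4 + ["south"] * 4,
})
frame["treatment"] = stratified_assign(frame, stratum_col="region")

# Check that arms are balanced within each stratum
print(frame.groupby(["region", "treatment"]).size())
```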

Measurement Strategy

Valid and reliable measurement of outcomes is essential for credible impact evaluation. This involves both technical considerations about instruments and practical considerations about data collection.

Key considerations:

  • Which outcomes should you measure and when?
  • How will you ensure measurement validity and reliability?
  • What baseline data do you need for analysis and balance checks?

👉 Learn more: Measurement
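
To make the baseline balance check concrete, the sketch below compares baseline covariate means across arms using Welch's t-tests in Python. The dataset and column names are simulated stand-ins for a real baseline survey, and the choice of covariates should follow your Theory of Change.

```python
# A minimal baseline balance check sketch (simulated data, illustrative only).
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)

# Simulated baseline data standing in for a real baseline survey
baseline = pd.DataFrame({
    "treatment": rng.integers(0, 2, size=200),
    "age": rng.normal(35, 10, size=200),
    "household_size": rng.poisson(4, size=200),
})

# Compare means across arms for each baseline covariate
for var in ["age", "household_size"]:
    treated = baseline.loc[baseline["treatment"] == 1, var]
    control = baseline.loc[baseline["treatment"] == 0, var]
    t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)  # Welch's t-test
    print(f"{var}: treated mean={treated.mean():.2f}, "
          f"control mean={control.mean():.2f}, p={p_value:.3f}")
```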

We recommend starting with a clear Theory of Change and then iteratively refining your design as you consider these elements. Each project is unique, so adapt these principles to fit your specific context and research questions.
