
Impact Evaluations

A new answer to why developing country firms are so small, and how cellphones solve this problem

Much of my research over the past decade or so has tried to help answer the question of why there are so many small firms in developing countries that don’t ever grow to the point of adding many workers. We’ve tried giving firms grants, loans, business training, formalization assistance, and wage subsidies, and found that, while these can increase sales and profits, none of them get many firms to grow. These interventions typically assume that firms face enough demand that if they produce...


Weekly links July 21: a 1930s RCT revisited, brain development in poor infants, Indonesian status cards, and more…

On Let’s Talk Development, Martin Kanz summarizes his new paper on understanding the demand for status-good consumption, based on credit card experiments in Indonesia – including discussion of an intervention that temporarily boosts self-esteem and thereby lowers the demand for status goods. Nature news on how brain imaging technology is being used to measure how poverty affects the brain development of infants in Bangladesh – differences in grey matter already seen at 2-3 months of...


What a new preschool study tells us about early child education – and about impact evaluation

When I talk to people about impact evaluation results, I often get two reactions: (1) “Sure, that intervention delivered great results in a well-managed pilot. But it doesn’t tell us anything about whether it would work at a larger scale.” (2) “Does this result really surprise you?” (With both positive results and null results, I often hear, “Didn’t we already know that intuitively?”) A recent paper – “Cognitive science in the field: A preschool intervention durably enhances intuitive but not...


False positives in sensitive survey questions?

This is a follow-up to my earlier blog on list experiments for sensitive questions, which, thanks to our readers, generated many responses via the comments section and emails: more reading for me – yay! More recently, my colleague Julian Jamison, who is also interested in the topic, sent me three recent papers that I had not been aware of. This short post discusses those papers and serves as a coda to the earlier post… Randomized response techniques (RRT) are used to provide more valid data than...
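For readers new to RRT, here is a minimal sketch of a forced-response estimator in the spirit of Warner’s classic design (it is not taken from the papers discussed in the post, and the probabilities are illustrative assumptions): each respondent’s answer is truthful with some known probability and forced otherwise, so the researcher can back out the population prevalence from the aggregate share of “yes” responses without knowing any individual’s true answer.

```python
import numpy as np

def rrt_prevalence(responses, p_truth=0.7, p_forced_yes=0.15):
    """Estimate the prevalence of a sensitive trait from forced-response
    RRT answers (1 = "yes", 0 = "no").

    Illustrative design: each respondent answers truthfully with
    probability p_truth, is forced to say "yes" with probability
    p_forced_yes, and is forced to say "no" otherwise.
    Then P(yes) = p_truth * pi + p_forced_yes, which we invert for pi.
    """
    responses = np.asarray(responses, dtype=float)
    n = responses.size
    lam = responses.mean()                        # observed share of "yes"
    pi_hat = (lam - p_forced_yes) / p_truth       # implied prevalence
    se = np.sqrt(lam * (1 - lam) / n) / p_truth   # delta-method std. error
    return pi_hat, se

# Example: simulate 2,000 respondents with true prevalence 0.20
rng = np.random.default_rng(0)
true_pi, n = 0.20, 2000
truthful = rng.random(n) < 0.7
forced_yes = (~truthful) & (rng.random(n) < 0.5)   # half of the rest forced "yes"
has_trait = rng.random(n) < true_pi
answers = np.where(truthful, has_trait, forced_yes).astype(int)
print(rrt_prevalence(answers))   # point estimate should land near 0.20
```

The privacy protection comes at a cost: only a fraction of responses carry signal, so the standard error is inflated by a factor of 1/p_truth relative to asking directly.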


Weekly links July 14: Sociologists versus behavioral economists, mobile phone surveys, biometrics, and more

Skelly lays out 5 lessons from two mobile phone surveys in Mozambique. Among other takeaways, providing airtime incentives increased retention of respondents, but not by very much: from 45% up to 51%. (@ FHI360) Using the example of mental accounting, three sociologists show what economists and psychologists may miss in their approach. (@ Andrew Gelman’s blog) Hanna and McIntyre explain why implementing attendance monitoring and incentives for public sector service providers was easier in...


Trouble with pre-analysis plans? Try these three weird tricks.

Pre-analysis plans increase the chances that published results are true by restricting researchers’ ability to data-mine. Unfortunately, writing a pre-analysis plan isn’t easy, nor is it without costs, as discussed in recent work by Olken and by Coffman and Niederle. Two recent working papers – “Split-Sample Strategies for Avoiding False Discoveries,” by Michael L. Anderson and Jeremy Magruder (ungated here), and “Using Split Samples to Improve Inference on Causal Effects,” by Marcel Fafchamps...
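The core idea behind split-sample approaches is to let randomness substitute for pre-commitment: explore freely in one part of the data, then run confirmatory tests only on the untouched remainder. The sketch below is a generic illustration of that workflow (not the specific procedures developed in either working paper), assuming a pandas DataFrame with a binary treatment column and several candidate outcome columns:

```python
import numpy as np
import pandas as pd
from scipy import stats

def split_sample_test(df, treat_col, outcome_cols, explore_frac=0.5, seed=123):
    """Generic split-sample workflow: screen outcomes on an exploration
    half, then run confirmatory tests only on the held-out half.
    (Illustrative only; the papers discussed above develop more refined
    procedures, including how to choose the split optimally.)"""
    rng = np.random.default_rng(seed)
    explore_mask = rng.random(len(df)) < explore_frac
    explore, confirm = df[explore_mask], df[~explore_mask]

    # Step 1: unrestricted exploration -- keep outcomes that look promising
    promising = []
    for y in outcome_cols:
        t = explore.loc[explore[treat_col] == 1, y]
        c = explore.loc[explore[treat_col] == 0, y]
        if stats.ttest_ind(t, c, equal_var=False).pvalue < 0.10:
            promising.append(y)

    # Step 2: confirmatory tests on the untouched half, Bonferroni-adjusted
    results = {}
    for y in promising:
        t = confirm.loc[confirm[treat_col] == 1, y]
        c = confirm.loc[confirm[treat_col] == 0, y]
        p = stats.ttest_ind(t, c, equal_var=False).pvalue
        results[y] = min(1.0, p * max(len(promising), 1))
    return results
```

The obvious cost is statistical power, since only part of the sample is left for the confirmatory step; the appeal is that the exploration half can be mined as aggressively as the researcher likes without invalidating the confirmatory p-values.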


What does a game-theoretic model with belief-dependent preferences teach us about how to randomize?

The June 2017 issue of the Economic Journal has a paper entitled “Assignment procedure biases in randomized policy experiments” (ungated version). The abstract summarizes the claim of the paper: “We analyse theoretically encouragement and resentful demoralisation in RCTs and show that these might be rooted in the same behavioural trait – people’s propensity to act reciprocally. When people are motivated by reciprocity, the choice of assignment procedure influences the RCTs’ findings. We show...


Weekly links July 7: Making Jakarta Traffic Worse, Patient Kids and Hungry Judges, Competing for Brides by Pushing up Home Prices, and More…

In this week’s Science, Rema Hanna, Gabriel Kreindler, and Ben Olken look at what happened when Jakarta abruptly ended HOV rules – showing how traffic got worse for everyone. Nice example of using Google traffic data – MIT news has a summary and discussion of how the research took place: “The key thing we did is to start collecting traffic data immediately,” Hanna explains. “Within 48 hours of the policy announcement, we were regularly having our computers check Google Maps every 10 minutes to...
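As a rough illustration of that kind of data collection, the loop below polls the Google Maps Directions API for current travel times on a fixed route and appends each reading to a CSV. It is only a sketch of the general approach, not the research team’s actual pipeline: the route, API key, and the exact handling of the response are placeholders, and it assumes the `googlemaps` Python client is installed.

```python
import csv
import time
from datetime import datetime

import googlemaps  # assumes the googlemaps Python client is installed

gmaps = googlemaps.Client(key="YOUR_API_KEY")             # placeholder key
ORIGIN, DESTINATION = "Blok M, Jakarta", "Kota, Jakarta"  # placeholder route

def log_travel_time(path="jakarta_travel_times.csv"):
    """Query the current driving time with traffic and append it to a CSV."""
    routes = gmaps.directions(ORIGIN, DESTINATION, mode="driving",
                              departure_time=datetime.now())
    leg = routes[0]["legs"][0]
    # "duration_in_traffic" is present when live traffic data is available;
    # fall back to the free-flow "duration" otherwise.
    seconds = leg.get("duration_in_traffic", leg["duration"])["value"]
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now().isoformat(), seconds])

while True:              # poll every 10 minutes, indefinitely
    log_travel_time()
    time.sleep(600)
```

Repeating this for many origin-destination pairs, around the clock, is what turns a map service into a panel dataset of travel times.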


Teacher Coaching: What We Know

“Teacher coaching has emerged as a promising alternative to traditional models of professional development.” In Kraft, Blazar, and Hogan’s newly updated review “The Effect of Teacher Coaching on Instruction and Achievement: A Meta-Analysis of the Causal Evidence,” they note that reviews of the literature on teacher professional development (i.e., training teachers who are already on the job) highlight a few promising characteristics of effective programs: Practice on the job...


Weekly links June 30: 7th grade development economics, the beginning at the end approach, stuff that happened a long time ago still impacts today, and more…

How to teach development economics in 20 minutes to 7th graders – Dave Evans explains his method. The “beginning at the end” approach to experimentation – written from the point of view of business start-ups, but could easily apply to policy experiment work too: “The typical approach to research is to start with a problem. In business, this often leads to identifying a lot of vague unknowns—a “broad area of ignorance” as Andreasen calls it—and leaves a loosely defined goal of simply reducing...
