A catch-up of some of the things that caught my attention over our break.
David McKenzie considers the following as important:
- The NYTimes Upshot covers an RCT of the Illinois Wellness program, where the authors found no effect, but show that if they had used non-experimental methods, they would have concluded the program was successful.
- Published in August, “many analysts, one data set” highlights how many choices are involved in even simple statistical analysis – “Twenty-nine teams involving 61 analysts used the same data set to address the same research question: whether soccer referees are more likely to give red cards to dark-skin-toned players than to light-skin-toned players. Analytic approaches varied widely across the teams, and the estimated effect sizes ranged from 0.89 to 2.93 (Mdn = 1.31) in odds-ratio units. Twenty teams (69%) found a statistically significant positive effect, and 9 teams (31%) did not observe a significant relationship. Overall, the 29 different analyses used 21 unique combinations of covariates.”
- Video of Esther Duflo’s NBER Summer Institute lecture on machine learning for empirical researchers; and of Penny Goldberg’s NBER lecture on whether trade policy can serve as competition policy.
- Martin Ravallion asks: should the randomistas continue to rule?
- ICYMI Lant Pritchett, Chris Blattman, and Karthik Muralidharan chime in with comments on my post on descriptive papers in development economics.
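To unpack the “odds-ratio units” in the many-analysts study, here is a minimal sketch of how a red-card odds ratio is computed from a 2x2 table. The counts below are made up for illustration; they are not taken from the study’s data:

```python
# Illustrative only: hypothetical counts, not the actual soccer-referee data.
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
    a = dark-skin-toned players with a red card, b = without
    c = light-skin-toned players with a red card, d = without
    """
    return (a / b) / (c / d)

# Made-up counts: 30/970 red cards in one group, 20/980 in the other.
or_est = odds_ratio(30, 970, 20, 980)
print(round(or_est, 2))  # 1.52
```

An odds ratio of 1 means equal odds in both groups; the study’s team-level estimates of 0.89 to 2.93 span “slightly lower odds” to “nearly three times the odds” from the same data.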
Elie Tamer interviews Chuck Manski for the Econometric Theory journal. Really interesting history of thought, covering a lot of different topics including:
- inspiration not to give up if your early work is not well received. Manski recalls that his job market paper on discrete choices “suffered grievously. I could not get a job on the market. Period! I mean, I got nothing!... I’d go on the market and I’d give seminars and people would say, “What’s the dependent variable?” I said, “Well, a choice.” But unless you can write it as y = Xβ + ε, people just didn’t understand. I must have given pretty bad seminars.”
- discussion of econometric theory work motivated by theoretical issues “And so, if you’re doing econometrics that isn’t going to be useful to economics, then who the hell cares?”
- where the inspiration for Manski bounds came from - a question from someone working on a study of homelessness, who had the problem of attrition in a panel study he was running, and gave Manski some funding to think seriously about it.
- His approach to generating new insights: “sometimes building on the literature does not work. There are times when it is more fruitful to step back and say: ‘Imagine I'm a baby. I don't know anything. Looking at this problem from the beginning and supposing that I know nothing, what is the essence of the problem?’”
- On robustness checks “If people only report robustness checks or Monte Carlo analysis that show that their stuff works and they don't push it to the breakdown point, then that's deception basically. I don’t expect empirical researchers to prove theorems on breakdown points. But they can still try to weaken the assumptions and do sensitivity analysis to see where their results break down. There's nothing that prevents them from doing that.”
- And amongst much else, ending with a discussion of machine learning, and whether controlling for lots of things makes identification more credible: “I don’t take that seriously.... It’s just a marketing job... For what class of real-world problems is there a credible basis for thinking that as you add more covariates you get closer to random treatment selection?... You can easily come up with counterexamples. I give counterexamples in my graduate textbook and Goldberger did in his book. In these examples, when you add more covariates you move away from random treatment selection.”
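Manski’s point about extra covariates can be seen in a simple simulation. This is a generic, textbook-style collider example of my own construction (not necessarily the counterexamples Manski or Goldberger use): the covariate C is caused both by a driver of treatment and by a driver of the outcome, so conditioning on it moves the estimate away from the truth:

```python
# Illustrative collider simulation: D <- U1 -> C <- U2 -> Y,
# with a true effect of D on Y equal to 1.
import random

random.seed(0)
n = 100_000
beta = 1.0

u1 = [random.gauss(0, 1) for _ in range(n)]
u2 = [random.gauss(0, 1) for _ in range(n)]
d = [a + random.gauss(0, 1) for a in u1]                # treatment, driven by U1
c = [a + b for a, b in zip(u1, u2)]                     # covariate: a collider
y = [beta * di + b + random.gauss(0, 1) for di, b in zip(d, u2)]  # outcome

def ols_slope(x, yv):
    """Simple-regression slope of yv on x."""
    mx = sum(x) / len(x)
    my = sum(yv) / len(yv)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, yv))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Without the covariate: D is independent of U2, so the estimate is close to 1.
print(round(ols_slope(d, y), 2))

# "Controlling" for C via Frisch-Waugh: residualize D and Y on C, then regress.
g_d = ols_slope(c, d)
g_y = ols_slope(c, y)
d_res = [di - g_d * ci for di, ci in zip(d, c)]
y_res = [yi - g_y * ci for yi, ci in zip(y, c)]
print(round(ols_slope(d_res, y_res), 2))  # well below the true effect of 1
```

In this setup the uncontrolled regression is unbiased, while adding the covariate pulls the estimate toward roughly 2/3: conditioning on C induces correlation between the otherwise independent U1 and U2, which is exactly the “moving away from random treatment selection” Manski describes.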
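Relatedly, the worst-case (Manski) bounds that grew out of the attrition question above take a very simple form when the outcome is binary; here is a sketch with made-up numbers:

```python
# Worst-case bounds on E[Y] for a binary outcome under attrition
# (illustrative numbers, not from the homelessness study).
def manski_bounds(mean_observed, response_rate):
    """Bounds on E[Y] when Y is in [0, 1] and a share of the panel attrits.
    Lower bound: assume every missing outcome is 0; upper: every one is 1."""
    lo = mean_observed * response_rate
    hi = mean_observed * response_rate + (1 - response_rate)
    return lo, hi

# E.g. 60% of the panel is observed, and 50% of those observed have Y = 1:
lo, hi = manski_bounds(0.5, 0.6)
print(round(lo, 2), round(hi, 2))  # 0.3 0.7
```

The width of the interval is exactly the attrition rate, which makes transparent how much the data alone can say before any assumption about who attrits is imposed.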