
Syllabus

17.802 Quantitative Research Methods II


Professor: Jens Hainmueller
TA: Chad Hazlett
Spring Semester 2012

Time & Room
Class: T & Th 9:30-11, E53-438
Recitation: F 11-12, E53-438

Office
Jens Hainmueller
Room: E53-459
Email: [email protected]
Office Hour: T 1:30-2:30PM

Chad Hazlett
Room: E40-446
Email: [email protected]
Office Hour: W 4:00-5:00PM or by appointment

Overview and Class Goals


This is the second course in a three-course sequence on quantitative political methodology, by which we
mean the application of statistical methods to problems in political science and public policy. The goal
of the three-course sequence is to teach you (1) to understand and (2) to confidently apply a variety of
statistical methods and research designs that are essential for political science and public policy research.

Building on the first course (17.800), which covered regression models, this second class provides a survey
of more advanced empirical tools for political science and public policy research. The focus is on statistical
methods for causal inference, i.e., methods designed to address research questions that concern the impact
of some potential cause (e.g., an intervention, a change in institutions, economic conditions, or policies) on
some outcome (e.g., vote choice, income, election results, levels of violence).

We cover a variety of causal inference designs, including experiments, matching, regression, panel methods,
difference-in-differences, synthetic control methods, instrumental variable estimation, regression discontinuity
designs, quantile regressions, and bounds. We will analyze the strengths and weaknesses of these methods.
Applications are drawn from various fields including political science, public policy, economics, and sociology.

The class is open to qualified students from other departments and undergraduates.

Prerequisites
A willingness to work hard on possibly unfamiliar material. In addition to introductory statistics and
probability, the course assumes a good knowledge of linear regression, meaning that you should have taken
at least one graduate class on this subject (such as 17.800). Students are also expected to be reasonably proficient
in the statistical software R (you may use other software packages that you are very familiar with, but we
can only support R).

Class Requirements
Reading
The syllabus lists the required readings for each week. This required reading should be completed before
that week's lectures. Students are expected to read the material very carefully; you may even find it
helpful to read the material multiple times.

Homework
This is a methodological course, developing skills in understanding and applying statistical methods. You
can only learn statistics by doing statistics, and therefore the homework for this course is extensive, including
weekly homework assignments. The assignments consist of analytical problems, computer simulations,
and data analysis. They will usually be assigned on Tuesday night and due the following Tuesday, prior
to lecture. No late homework will be accepted. All sufficiently attempted homework (i.e., a typed and well-
organized write-up with all problems attempted) will be graded on a (+, X, -) scale. We encourage students
to work together on the assignments, but you always need to write your own solutions, and we ask that you
make a solo effort at all the problems before consulting others. We also ask that you write the names of
your collaborators on your assignments.

Student Project
Students are expected to write a short empirical paper that applies methods learned in this class to a
research question of their choice. The paper should be 5-15 pages in length and focus on the research question,
data, empirical strategy, results, and conclusions. Literature reviews, background, lengthy motivations, etc.
should be omitted or may be included as an appendix. You also need to submit a copy of your analysis code.
Students are free to choose any topic they want, as long as they have a clear research question that concerns
the causal effect of some institution, intervention, policy, or event on some outcome, result, or performance.
Co-authored projects are very strongly encouraged (learning to co-author is essential because nowadays most
articles in political science are co-authored).

Students need to meet the following milestones for their project:

• By 3/15: Email the instructor and TA a short description of your proposed project (i.e., half a page).
Students are encouraged to meet with the instructor and TA during office hours to discuss their project
and progress.
• By 4/12: Email the instructor and TA a 2-3 page description of progress, analysis, and preliminary
results.
• By 5/8: Email the instructor, TA, and the entire class the first draft of your project. Everyone is
expected to read all these submissions prior to the student presentations that follow.
• On 5/10 and 5/15: Students will present their projects to the class. Presentations will be no more than
5-10 minutes in length, and will be oral with the aid of 2-5 slides that summarize the main results.
• By 5/17: Email the instructor and TA the final version of your project.

Grading
Grades will be based on
• weekly homework assignments (65% of final grade)
• student project (30% of final grade)

• participation and presentation (5% of final grade).


Recitation Sections
Weekly recitation sections will be held on Friday. Sections will review the theoretical material and provide
help with computing issues. The TA will run the sections and can give more detail.

Computation
We teach the course in R, which is an open-source computing language that is very widely used in statistics.
You can download it for free from www.r-project.org. The web provides many great tutorials and resources
to learn R. A list of these is provided at http://wiki.math.yorku.ca/index.php/R: Getting started. The two
video tutorials provided by Dan Goldstein are also a nice way to start off. R runs on a wide variety of UNIX
platforms, Windows, and MacOS. R makes programming very easy, has strong graphical capabilities, and
also contains canned functions for most commonly used estimators.

To refresh your R, you are expected to work through one of the following free tutorials unless you are already
well familiar with this material. All three tutorials cover similar ground; just pick the one you like best:
Owen. The R Guide. At: http://cran.r-project.org/doc/contrib/Owen-TheRGuide.pdf
Venables and Smith. An Introduction to R. At: http://cran.r-project.org/doc/manuals/R-intro.pdf
Verzani. Simple R. At: http://cran.r-project.org/doc/contrib/Verzani-SimpleR.pdf

If you are very familiar with another statistical software package you may use that for the course at your
own risk. We can only support R.
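
If you want a quick self-check on the level of R fluency the tutorials cover, here is a minimal sketch (base
R only, simulated data; illustrative, not part of the course materials) of the kind of workflow the homework
assumes: simulate data, fit a linear model, and inspect the output.

    set.seed(123)
    x <- rnorm(100)                 # a covariate
    y <- 1 + 2 * x + rnorm(100)     # an outcome with a known slope of 2
    fit <- lm(y ~ x)                # ordinary least squares
    summary(fit)                    # coefficients, standard errors, R-squared
    plot(x, y); abline(fit)         # quick graphical check of the fit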

Course Website
The course website is located at: http://stellar.mit.edu/S/course/17/sp12/17.802/. It provides homework
assignments, datasets, and supplementary materials.

Course Forums
The course website has a discussion board in the “forum” section. This discussion board provides an
opportunity to post questions regarding the course material and/or computing. In addition to recitations
and office hours, please use this forum on the Stellar course website when asking questions about lectures,
problem sets, and other course materials. This allows students to see other students’ questions and learn
from them. Both the TA and the instructor will regularly check the board and answer posted questions,
and everyone else is also encouraged to contribute to the discussion. You can also sign up to receive
notifications for posted questions and answers. A student’s respectful and constructive participation on the
forum will count toward his/her class participation grade. Do not email your statistical questions directly
to the instructor (unless they are of a personal nature); we will not answer them!

Schedule
Please note the following scheduling issues:

• No class on 2/21 (because Monday is Presidents Day, Tuesday 2/21 follows a Monday schedule).
• No class on 3/27 and 3/29 (Spring Break).
• No class on 4/17 (Patriots Day).

Books
Main Books
• We will read chapters from the following textbooks, which are available at the COOP and also on reserve
in the library.
– Angrist, Joshua D. and Jörn-Steffen Pischke. 2008. Mostly Harmless Econometrics: An Empiricist’s
Companion. Princeton University Press.
– Morgan, Stephen L. and Christopher Winship. 2007. Counterfactuals and Causal Inference:
Methods and Principles for Social Research. Cambridge University Press.
Useful Summary Articles
• The following papers summarize the main methods learned in this course. They are dense and detailed,
and you might not understand all of the details the first time you read through them. However, if you
plan to conduct applied empirical work that involves causal inference, you should revisit them again
and again for reference.
– Guido W. Imbens and Jeffrey Wooldridge. 2008. Recent Developments in the Econometrics of
Program Evaluation. NBER Working Paper No. 14251.
– Joshua D. Angrist and Alan B. Krueger. 1999. Empirical Strategies in Labor Economics. In
Handbook of Labor Economics, ed. O. Ashenfelter and D. Card: Elsevier Science.

Optional Books
• The following books are optional but may prove useful for additional coverage of some of the course
topics.
• Reference Book for Panel Methods
– Wooldridge, Jeffrey M. 2002. Econometric Analysis of Cross Section and Panel Data. MIT Press.
• Causal Inference
– Rosenbaum, Paul R. 2009. Design of Observational Studies. Springer Series in Statistics.
– Rosenbaum, Paul R. 2002. Observational Studies. Springer-Verlag. 2nd edition.
– Pearl, Judea. 2000. Causality: Models, Reasoning, and Inference. New York: Cambridge University
Press.
– Manski, Charles F. 1995. Identification Problems in the Social Sciences. Cambridge: Harvard
University Press.
• Matching
– Rubin, Donald. 2006. Matched Sampling for Causal Effects. Cambridge University Press.

Preliminary Schedule
The following is a preliminary schedule of course topics. Notice that required readings are marked with a (?).

1 Introduction
• Overview, Course Requirements, Course Outline

2 Review of Statistical Concepts Useful for Causal Inference


• Random Variables, Measures of Location and Dispersion
• Inference and Properties of Estimators
• Conditional mean function
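
As a small illustration of what "properties of estimators" means in practice, here is a minimal sketch (base
R, simulated data; not part of the assigned materials) of a Monte Carlo check that the sample mean is
approximately unbiased and that its sampling variance is close to 1/n.

    set.seed(5)
    sims <- replicate(2000, mean(rnorm(50, mean = 1)))  # 2000 simulated sample means, n = 50
    mean(sims)   # close to 1: the sample mean is unbiased
    var(sims)    # close to 1/50: its sampling variance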

3 The Potential Outcome Model


• Counterfactual Responses and the Fundamental Identification Problem
• Estimands and Assignment Mechanisms
• Heterogeneity and Selection
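
In standard potential-outcomes notation (a brief sketch, not taken from the assigned readings), each unit i
has two potential outcomes, and only the one corresponding to its realized treatment status is observed:

    \[ Y_i = D_i\, Y_i(1) + (1 - D_i)\, Y_i(0), \qquad \tau_{\text{ATE}} = E\left[\, Y_i(1) - Y_i(0) \,\right]. \]

Because the difference Y_i(1) - Y_i(0) is never observed for any single unit (the fundamental identification
problem), identification must come from assumptions about the assignment mechanism, such as
randomization or selection on observables.
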
Readings
• Morgan and Winship: Chapters 1-2. (?)
• Angrist and Pischke: Chapter 1. (?)
• Holland, P. W. 1986. Statistics and Causal Inference. Journal of the American Statistical Association,
Vol. 81, No. 396: 945-960.
• Sekhon, J.S. 2004. Quality Meets Quantity: Case Studies, Conditional Probability and Counterfactuals.
Perspectives on Politics, Vol. 2: 281-293. (?)

4 Randomized Experiments
• Identification of Causal Effects under Randomization

• Implementation, Estimation, Diagnostics, Blocking


• Threats to Validity
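
A minimal sketch (simulated data, base R; illustrative, not the course's own code) of the basic analysis of a
completely randomized experiment: under randomization, the difference in mean outcomes between treated
and control units is an unbiased estimate of the average treatment effect.

    set.seed(1)
    n <- 1000
    d <- rbinom(n, 1, 0.5)               # randomized treatment assignment
    y <- 1 + 2 * d + rnorm(n)            # outcome; true average effect = 2
    mean(y[d == 1]) - mean(y[d == 0])    # difference-in-means estimate
    t.test(y[d == 1], y[d == 0])         # same estimate with a confidence interval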

Readings: Theory of Experiments


• Angrist and Pischke: Chapter 2. (?)
• Rosenbaum, Paul R. 2002. Observational Studies. Springer-Verlag. 2nd edition. Chapter 2.

Readings: Application of Experiments


• Olken, Benjamin. 2007. Monitoring Corruption: Evidence from a Field Experiment in Indonesia.
Journal of Political Economy, vol. 115, No. 2: 200-249. (?)
• Gerber, A., Green, D., Larimer, C. 2008. Social Pressure and Voter Turnout: Evidence from a
Large-Scale Field Experiment. American Political Science Review. Vol. 102, No. 1: 1-48. (?)

• Wantchekon, Leonard. 2003. Clientelism and Voting Behavior: Evidence from a Field Experiment in
Benin. World Politics. Volume 55, Number 3, April: 399-422. (?)
• Chattopadhyay, R. and Duflo, E. 2004. Women as Policy Makers: Evidence from a Randomized Policy
Experiment in India. Econometrica, Vol. 72, No. 5: 1409-1443.

• Mutz, Diana C. and Byron Reeves. 2005. The New Video Malaise: Effects of Televised Incivility on
Political Trust. American Political Science Review 99 (February): 1-15.
• Gartner, Scott. 2008. The Multiple Effects of Casualties on Public Support for War: An Experimental
Approach. American Political Science Review 102(1): 95-106.

Readings: Application of Natural Experiments


• DellaVigna, Stefano, and Ethan Kaplan. 2007. The Fox News Effect: Media Bias and Voting. Quarterly
Journal of Economics 122(3): 1187-1234.
• Hyde, Susan D. 2007. The Observer Effect in International Politics: Evidence from a Natural
Experiment. World Politics 60(1): 37-63. (?)

• Ferraz, Claudio, and Federico Finan. 2008. Exposing Corrupt Politicians: The Effects of Brazil’s
Publicly Released Audits on Electoral Outcomes. Quarterly Journal of Economics 123(2): 703-45.
• Ho, Daniel E., and Kosuke Imai. 2008. Estimating Causal Effects of Ballot Order from a Randomized
Natural Experiment: The California Alphabet Lottery, 1978-2002. Public Opinion Quarterly 72(2):
216-40.
Readings: Experiments Review Articles
• Palfrey, Thomas. 2009. Laboratory Experiments in Political Economy. Annual Review of Political
Science 12: 379-88.

• Druckman, James N., Donald P. Green, James H. Kuklinski, and Arthur Lupia. 2006. The Growth
and Development of Experimental Research in Political Science. American Political Science Review
100(4): 627-35.
• de Rooij, Eline A., Donald P. Green, and Alan S. Gerber. 2009. Field Experiments on Political
Behavior and Collective Action. Annual Review of Political Science 12: 389-95. (?)

• Humphreys, Macartan, and Jeremy Weinstein. 2009. Field Experiments and the Political Economy of
Development. Annual Review of Political Science 12: 367-78.
• Harrison, Glenn and John A. List. 2004. Field Experiments. Journal of Economic Literature, XLII:
1013-1059.
• List, John A., and Steven Levitt. 2006. What Do Laboratory Experiments Tell Us About the Real
World? University of Chicago and NBER.

• Gaines, Brian J., and James H. Kuklinski. 2007. The Logic of the Survey Experiment Reexamined.
Political Analysis 15: 1-20.
Readings: Useful Methodological Guides for Experiments

• Duflo, Esther, Abhijit Banerjee, Rachel Glennerster, and Michael Kremer. 2006. Using Randomization
in Development Economics: A Toolkit. Handbook of Development Economics.
• Howard S. Bloom. 2006. The Core Analytics of Randomized Experiments for Social Research. MDRC
Working Papers on Research Methodology.
• Bruhn, Miriam and David McKenzie. 2008. In Pursuit of Balance. The World Bank Policy Research
Working Paper 4752.
• Gary King, et al. 2007. A Politically Robust Experimental Design for Public Policy Evaluation, with
Application to the Mexican Universal Health Insurance Program. Journal of Policy Analysis and
Management 26(3): 479-506.
• MIT Committee on the Use of Humans as Experimental Subjects (COUHES)
http://web.mit.edu/committees/couhes/

5 Causal Effects under Selection on Observables


5.1 Selection on Observables
• Identification under Selection on Observables
• Subclassification

Readings
• Morgan and Winship: Chapter 3. (?)
• Rubin, Donald B. 2008. For Objective Causal Inference, Design Trumps Analysis. Annals of Applied
Statistics 2(3): 808-840.(?)
• Rosenbaum, Paul R. 2002. Observational Studies. Springer-Verlag. 2nd edition. Chapter 3.
• Rosenbaum, P. R. 2005. Heterogeneity and Causality: Unit Heterogeneity and Design Sensitivity in
Observational Studies. The American Statistician, Vol. 59: 147-152.

• Acemoglu, D. 2005. Constitutions, Politics, and Economics: A Review Essay on Persson and Tabellini's
The Economic Effects of Constitutions. Journal of Economic Literature Vol. XLIII: 1025-1048. (?)
• Cochran, W. G. 1968. The Effectiveness of Adjustment by Subclassification in Removing Bias in
Observational Studies, Biometrics, vol. 24: 295-313.

5.2 Matching Methods


• Covariate Matching, Balance Checks, Properties of Matching Estimators
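
A minimal sketch (simulated data, base R; a toy illustration rather than a full-featured matching estimator)
of one-to-one nearest-neighbor matching on a single covariate, with replacement, used to estimate the effect
of treatment on the treated.

    set.seed(42)
    n <- 200
    x <- rnorm(n)                               # covariate
    d <- rbinom(n, 1, plogis(x))                # treatment depends on x (selection on observables)
    y <- 1 + 2 * d + x + rnorm(n)               # outcome; true effect = 2
    treated  <- which(d == 1)
    controls <- which(d == 0)
    matches  <- sapply(treated, function(i)     # nearest control on x for each treated unit
      controls[which.min(abs(x[controls] - x[i]))])
    mean(y[treated] - y[matches])               # matching estimate of the ATT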

Readings: Matching Theory


• Morgan and Winship: Chapter 4. (?)
• Morgan and Harding. 2006. Matching Estimators of Causal Effects: Prospects and Pitfalls in Theory
and Practice.
• Sekhon, Jasjeet S. 2009. Opiates for the Matches: Matching Methods for Causal Inference. Annual
Review of Political Science 12: 487-508.(?)
• Ho et al. 2007. Matching as Nonparametric Preprocessing for Reducing Model Dependence in
Parametric Causal Inference. Political Analysis, Vol. 15: 199-236.
• Stuart. 2009. Matching methods for causal inference: A review and a look forward.
• Rubin. 2006. Chapters 3 to 5.
• Rosenbaum, P. R., 1995. Observational Studies. New York: Springer-Verlag. Chapter 3.
• Abadie, A. and G. W. Imbens. 2006. Large Sample Properties of Matching Estimators for Average
Treatment Effects, Econometrica, vol. 74, 235-267.
• Abadie, A. and G. W. Imbens. 2007. Bias Corrected Matching Estimators for Average Treatment
Effects.
Readings: Matching Applications
• Jason Lyall. 2010. Are Co-Ethnics More Effective Counter-Insurgents? Evidence from the Second
Chechen War. American Political Science Review, 104:1 (February 2010), 1-20.
• Gordon, S. and Huber, G. 2007. The Effect of Electoral Competitiveness on Incumbent Behavior.
Quarterly Journal of Political Science, vol. 2: 107-138.
• Sekhon, J. 2004. The 2004 Florida Optical Voting Machine Controversy: A Causal Analysis Using
Matching. Manuscript. UC Berkeley.
• Eggers, A. and Hainmueller, J. 2009. MPs for Sale? Estimating Returns to Office in Post-War British
Politics. American Political Science Review. Vol. 103, No. 4 November 2009.
• Gilligan, M. and Sergenti, J. 2008. Do UN Interventions Cause Peace? Using Matching to Improve
Causal Inference. Quarterly Journal of Political Science, vol. 3: 89-122.
• Simmons, B. and Hopkins, D. 2005. The Constraining Power of International Treaties: Theory and
Methods. American Political Science Review 99(4): 623-631.

5.3 Propensity Score Methods


• Identification, Propensity Score Estimation, Matching on the Propensity Score, Weighting on the
Propensity Score, Reweighting methods
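
A minimal sketch (simulated data, base R; not the course's own code) of the weighting idea: estimate the
propensity score with a logit, then reweight treated and control units by the inverse of the estimated
probability of the treatment they actually received.

    set.seed(7)
    n <- 1000
    x <- rnorm(n)
    d <- rbinom(n, 1, plogis(0.8 * x))          # treatment probability depends on x
    y <- 1 + 2 * d + x + rnorm(n)               # outcome; true effect = 2
    ps <- fitted(glm(d ~ x, family = binomial)) # estimated propensity score
    w  <- ifelse(d == 1, 1 / ps, 1 / (1 - ps))  # inverse-probability weights
    weighted.mean(y[d == 1], w[d == 1]) -
      weighted.mean(y[d == 0], w[d == 0])       # weighted (IPW) estimate of the ATE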

Readings: Propensity Score Methods Theory


• Morgan and Winship: Chapter 3. (?)
• Rubin, D. 2006. Chapters 10, 11 and 14 (all with Paul R. Rosenbaum).
• Imbens, G. 2004. Nonparametric Estimation of Average Treatment Effects under Exogeneity: A
Review. Review of Economics and Statistics 86 (1): 4-29.

• Hainmueller, Jens. 2012. Entropy Balancing for Causal Effects: A Multivariate Reweighting Method
to Produce Balanced Samples in Observational Studies. Political Analysis 20 (1): 25-46.
Readings: Propensity Score Methods Applications
• Rubin, D. 2001. Using Propensity Scores to Help Design Observational Studies: Application to the
Tobacco Litigation. Health Services and Outcomes Research Methodology. Volume 2, Numbers 3-4:
169-188. (?)

• Blattman, C. and Annan, J. 2009. The Consequences of Child Soldiering. The Review of Economics
and Statistics (Forthcoming). (?)

5.4 Regression
• Non-parametric Regression, Identification with Regression

Readings
• Angrist and Pischke: Chapter 3. (?)
• Morgan and Winship: Chapter 5. (?)
• Chapter in Winship and Morgan on Matching vs Regression.

• Härdle, W and Linton, O. 1994. Applied Nonparametric Methods, in R. F. Engle and D. L. McFadden
eds. Handbook of Econometrics, vol. 4. New York: Elsevier Science.
• White, H. 1980. Using Least Squares to Approximate Unknown Regression Functions. International
Economic Review, vol. 21: 149-170.

5.5 Conclusion: Selection on Observables


• Can Non-Experimental Methods Recover Causal Effects?

Readings: Comparison of Experimental and Non-experimental Methods


• Dehejia, R. H. and Wahba, S. 1999. Causal Effects in Non-Experimental Studies: Re-Evaluating
the Evaluation of Training Programs, Journal of the American Statistical Association, vol. 94:
1053-1062. (?)
• Heckman, J. J., H. Ichimura and P. E. Todd (1997), Matching as an Econometric Evaluation Estimator:
Evidence from Evaluating a Job Training Programme, Review of Economic Studies, vol. 64, 605-654.
• Shadish, W., Clark, M. H., and Steiner, P. 2008. Can Nonrandomized Experiments Yield Accurate
Answers? A Randomized Experiment Comparing Random and Nonrandom Assignments. Journal of
the American Statistical Association, Vol. 103, No. 484: 1334-1344. (?)
• Arceneaux, Kevin, Alan S. Gerber, and Donald P. Green. 2006. Comparing Experimental and Matching
Methods Using a Large-Scale Voter Mobilization Experiment. Political Analysis 14: 1-36.
• John Concato, Nirav Shah, and Ralph Horwitz. 2000. Randomized, Controlled Trials, Observational
Studies, and the Hierarchy of Research Designs. New England Journal of Medicine 342(25): 1887-92.
• Benson, Kjell and Arthur J. Hartz. 2000. A Comparison of Observational Studies and Randomized,
Controlled Trials. New England Journal of Medicine 342(25): 1878-86.

6 Causal Effects under Selection on Time-Invariant Characteristics

6.1 Panel Data Methods
• Fixed Effects and Random Effects Estimation
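
A minimal sketch (simulated panel data, base R; illustrative only) of the fixed-effects idea: when selection
into treatment operates through time-invariant unit characteristics, including unit dummies (the least-squares
dummy variable form of the within estimator) removes the bias that pooled OLS suffers.

    set.seed(8)
    N <- 50; periods <- 10
    id    <- rep(1:N, each = periods)
    alpha <- rnorm(N)[id]                           # time-invariant unit effects
    d     <- rbinom(N * periods, 1, plogis(alpha))  # treatment correlated with the unit effects
    y     <- alpha + 2 * d + rnorm(N * periods)     # outcome; true effect = 2
    coef(lm(y ~ d))["d"]                            # pooled OLS is biased upward
    coef(lm(y ~ d + factor(id)))["d"]               # unit fixed effects recover the effect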

Readings: Panel Methods Theory


• Angrist and Pischke: Chapter 5.1 (?)
• Angrist and Pischke: Chapter 8 (?)
Readings: Panel Methods Applications
• Ladd, Jonathan McDonald, and Gabriel S. Lenz. 2009. Exploiting a Rare Communication Shift to
Document the Persuasive Power of the News Media. American Journal of Political Science 53(2):
394-410. (?)
• Cox, Gary W., and William Terry. 2008. Legislative Productivity in the 93d-105th Congresses.
Legislative Studies Quarterly 33(4): 603-16.
• Berrebi, C. and Klor, E. 2008. Are Voters Sensitive to Terrorism? Direct Evidence from the Israeli
Electorate. American Political Science Review (2008), 102:279-301.

6.2 Difference-in-Differences Estimators


• Identification, Estimation, Falsification tests
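
A minimal sketch (simulated two-group, two-period data, base R; not the course's own code): the
difference-in-differences estimate is the coefficient on the group-by-post interaction in a regression that also
includes group and period effects.

    set.seed(2)
    n <- 500
    group <- rbinom(n, 1, 0.5)                  # 1 = eventually-treated group
    post  <- rbinom(n, 1, 0.5)                  # 1 = post-treatment period
    y <- 1 + 0.5 * group + 0.3 * post + 2 * group * post + rnorm(n)
    coef(lm(y ~ group * post))["group:post"]    # DID estimate (true effect = 2)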

Readings: DID Theory


• Angrist and Pischke: Chapter 5.2-5.4 (?)
• Bertrand, Marianne, Esther Duflo, and Sendhil Mullainathan. 2004. How Much Should We Trust
Differences-in-Differences Estimates? Quarterly Journal of Economics 119(1): 249-75.
Readings: DID Applications
• Lyall, Jason. 2009. Does Indiscriminate Violence Incite Insurgent Attacks? Evidence from Chechnya.
Journal of Conflict Resolution 53(3): 331-62.(?)
• Card, D. (1990), The Impact of the Mariel Boatlift on the Miami Labor Market, Industrial and Labor
Relations Review, vol. 44, 245-257.
• Card, D. and A. B. Krueger (1994), Minimum Wages and Employment: A Case Study of the Fast-Food
Industry in New Jersey and Pennsylvania, American Economic Review, vol. 84, 772-793. (?)
• Bechtel, M. and Hainmueller, J. 2011. How Lasting Is Voter Gratitude? An Analysis of the Short- and
Long-Term Electoral Returns to Beneficial Policy. American Journal of Political Science, vol. 55 (4),
852-868.

6.3 Synthetic Control Methods


Readings
• Abadie, Diamond, and Hainmueller. 2010. Synthetic Control Methods for Comparative Case Studies:
Estimating the Effect of California’s Tobacco Control Program. Journal of the American Statistical
Association. (?)
• Abadie, Alberto and Javier Gardeazabal. 2003. The Economic Costs of Conflict: A Case Study of the
Basque Country. American Economic Review 93(1): 113-132.

7 Causal Effects under Selection on Time-Variant Characteristics


7.1 Instrumental Variables
• Identification: Using Exogenous Variation in Treatment Intake Given by Instruments
• Imperfect Compliance in Randomized Studies

• Wald Estimator, Local Average Treatment Effects, 2SLS
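
A minimal sketch (simulated encouragement design, base R; illustrative only): with a randomized binary
instrument and imperfect compliance, the Wald estimator divides the intent-to-treat effect on the outcome
by the effect on take-up; regressing the outcome on the fitted first-stage values gives the same 2SLS point
estimate (the standard errors from that shortcut are not valid).

    set.seed(3)
    n <- 2000
    z <- rbinom(n, 1, 0.5)                      # randomized encouragement (instrument)
    u <- rnorm(n)                               # unobserved confounder
    d <- rbinom(n, 1, plogis(-1 + 2 * z + u))   # take-up depends on z and on u
    y <- 1 + 2 * d + u + rnorm(n)               # outcome; true effect of d = 2
    itt_y <- mean(y[z == 1]) - mean(y[z == 0])  # effect of the instrument on y
    itt_d <- mean(d[z == 1]) - mean(d[z == 0])  # effect of the instrument on take-up
    itt_y / itt_d                               # Wald/IV estimate (close to 2)
    d_hat <- fitted(lm(d ~ z))                  # first stage
    coef(lm(y ~ d_hat))["d_hat"]                # 2SLS point estimate (same number)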

Readings: Instrumental Variable Theory


• Angrist and Pischke: Chapter 4 (?)
• Morgan and Winship: Chapter 7
• Angrist, Joshua D., and Alan B. Krueger. 2001. Instrumental Variables and the Search for Identification:
From Supply and Demand to Natural Experiments. Journal of Economic Perspectives 15(4): 69-85.
• Angrist, Imbens, and Rubin. 1996. Identification of Causal Effects Using Instrumental Variables.
Journal of the American Statistical Association 91(434): 444-455.
• Abadie, Alberto 2003. Semiparametric instrumental variable estimation of treatment response models.
Journal of Econometrics 113 (2003) 231-263.
Readings: Instrumental Variable Critique
• Deaton, Angus. 2009. Instruments of Development: Randomization in the Tropics, and the Search for
the Elusive Keys to Economic Development. Typescript. Princeton University. (?)

• Hernan, Miguel A., and James M. Robins. 2006. Instruments for Causal Inference: An Epidemiologist's
Dream? Epidemiology 17(4): 360-72.
• Guido Imbens. Better LATE Than Nothing: Some Comments on Deaton (2009) and Heckman and
Urzua (2009). (?)
Readings: Instrumental Variable Applications

• Kern and Hainmueller. 2009. Opium for the Masses: How Foreign Free Media Can Stabilize
Authoritarian Regimes. Political Analysis. (?)
• Angrist and Krueger. 2001. Instrumental Variables and the Search for Identification: From Supply and
Demand to Natural Experiments.

• Acemoglu, Daron, Simon Johnson, and James A. Robinson. 2001. The Colonial Origins of Comparative
Development: An Empirical Investigation. American Economic Review 91(5): 1369-1401.(?)
• Clingingsmith, David, Asim Ijaz Khwaja, and Michael Kremer. 2009. Estimating the Impact of the
Hajj: Religion and Tolerance in Islam's Global Gathering. Quarterly Journal of Economics 124(3):
1133-1170.

• Stromberg, David. 2004. Radio's Impact on Public Spending. Quarterly Journal of Economics 119(1):
189-221.
• Angrist, Joshua D. 1990. Lifetime Earnings and the Vietnam Era Draft Lottery: Evidence from Social
Security Administrative Records. American Economic Review 80(3): 313-36.

7.2 The Regression Discontinuity Design


• Sharp and Fuzzy Designs, Identification, Estimation, Falsification Checks
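
A minimal sketch (simulated data, base R; illustrative only) of a sharp design: fit a local linear regression
with separate slopes on each side of the cutoff and read the treatment effect off the estimated jump at the
threshold (the bandwidth here is chosen ad hoc; in practice it should be selected more carefully).

    set.seed(4)
    n <- 5000
    x <- runif(n, -1, 1)                        # running variable, cutoff at 0
    d <- as.numeric(x >= 0)                     # sharp assignment rule
    y <- 1 + 2 * d + 1.5 * x + rnorm(n)         # true jump at the cutoff = 2
    dat  <- data.frame(y, d, x)
    near <- dat[abs(dat$x) <= 0.25, ]           # ad hoc bandwidth of 0.25
    coef(lm(y ~ d * x, data = near))["d"]       # estimated discontinuity at x = 0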

Readings: RDD Theory


• Imbens, Guido W., and Thomas Lemieux. 2008. Regression Discontinuity Designs: A Guide to
Practice. Journal of Econometrics 142: 615-35. (Part of special issue on RDD, all of which is of
interest.) (?)
• Angrist and Pischke: Chapter 6 (?)

• Hahn, J., P. Todd and W. van der Klaauw (2001), Identification and Estimation of Treatment Effects
with a Regression Discontinuity Design, Econometrica, vol. 69: 201-209.
Readings: RDD Applications
• Ferraz, C., and F. Finan. 2008. Motivating Politicians: The Impacts of Monetary Incentives on Quality
and Performance. NBER Working Paper No. 14906 (2009).
• Lee, David S. 2008. Randomized Experiments from Non-random Selection in U.S. House Elections.
Journal of Econometrics. Volume 142, Issue 2, Pages 675-697. (?)
• Butler, Daniel M., and Matthew J. Butler. 2006. Splitting the Difference? Causal Inference and
Theories of Split-Party Delegations. Political Analysis 14(4): 439-55.
• Hainmueller, Jens, and Holger Lutz Kern. 2008. Incumbency as a Source of Spillover Effects in Mixed
Electoral Systems: Evidence from a Regression-Discontinuity Design. Electoral Studies 27: 213-27.
• Caughey, Devin and Sekhon, Jasjeet. 2010. Regression-Discontinuity Designs and Popular Elections:
Implications of Pro-Incumbent Bias in Close U.S. House Races.

7.3 Sensitivity Analysis


• Nonparametric Bounds

• Formal sensitivity tests
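
A minimal sketch (simulated binary-outcome data, base R; not the course's own code) of nonparametric,
Manski-style worst-case bounds: fill in the unobserved potential outcomes with their logical extremes of 0
and 1, which bounds the ATE without any assumptions about how units select into treatment.

    set.seed(6)
    n <- 1000
    d <- rbinom(n, 1, 0.5)
    y <- rbinom(n, 1, 0.3 + 0.2 * d)            # observed binary outcome
    p  <- mean(d)
    y1 <- mean(y[d == 1]); y0 <- mean(y[d == 0])
    lower <- y1 * p - (y0 * (1 - p) + 1 * p)    # fill in unobserved outcomes with 1 and 0
    upper <- (y1 * p + 1 * (1 - p)) - y0 * (1 - p)
    c(lower, upper)                             # bounds always have width 1 for a binary outcome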

Readings

• Guido W. Imbens. 2003. Sensitivity to Exogeneity Assumptions in Program Evaluation. The American
Economic Review 93(2): 126–32. (?)
• Morgan and Winship: Chapter 6 (?)
• Rosenbaum, Paul R. 2002. Observational Studies. Springer-Verlag. 2nd edition. Chapter 4.

• Manski, C. 1995. Identification Problems in the Social Sciences. Cambridge: Harvard University Press.
Chapter 2.
• Joseph Altonji, Todd E. Elder, and Christopher Taber. 2005. Selection on Observed and Unobserved
Variables: Assessing the Effectiveness of Catholic Schools. Journal of Political Economy Vol. 113:
151-184.

• Paul Rosenbaum and Donald Rubin. 1983. Assessing Sensitivity to an Unobserved Binary Covariate
in an Observational Study with Binary Outcome. Journal of the Royal Statistical Society. Series B
(Methodological) 45(2): 212-18.

8 Distributional Effects
8.1 Quantile Regression
Readings
• Angrist and Pischke: Chapter 7 (?)

• Roger Koenker, Kevin F. Hallock, Quantile Regression, Journal of Economic Perspectives, Vol. 15,
No. 4 (Fall 2001), pp. 143–156

8.2 Distributional Effects in Difference-in-Differences


Readings
• Athey, S. and Imbens, G. Identification and Inference in Nonlinear Difference-in-Differences Models.
Econometrica, 74 (2), March 2006, pp. 431–497.

February 2012
