Criticisms of econometrics
The integration of statistics into economic theory to develop claims of causality led to disagreements within the discipline, resulting in criticisms of econometrics. These criticisms focus on methodological shortcomings in econometric modeling practices that failed to demonstrate causality.[citation needed] Most of these criticisms have been addressed through the credibility revolution and the improved rigor of the potential outcomes framework, used today by applied economists, microeconomists, and econometricians to generate results that can be interpreted causally. While econometricians began working to improve statistical measures in the mid-1960s, the 2009 publication of Mostly Harmless Econometrics by economists Joshua D. Angrist and Jörn-Steffen Pischke summarized the improvements in econometric modeling.
Difficulties in model specification
Like other forms of statistical analysis, badly specified econometric models may show a spurious correlation, in which two variables are correlated but causally unrelated. Economist Ronald Coase is widely reported to have said, "if you torture the data long enough it will confess".[1] Deirdre McCloskey argues that in published econometric work, economists often fail to use economic reasoning when including or excluding variables, conflate statistical significance with substantive significance, and fail to report the power of their findings.[2]
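The spurious-correlation problem is easy to reproduce. The sketch below is a hypothetical simulation, not drawn from any cited study: it generates two independent random walks whose levels often appear strongly correlated even though they are causally unrelated by construction, while their underlying innovations are not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Two independent random walks: causally unrelated by construction.
x = np.cumsum(rng.normal(size=n))
y = np.cumsum(rng.normal(size=n))

# The levels of two unrelated trending series frequently show a large
# correlation coefficient (a spurious correlation)...
level_corr = np.corrcoef(x, y)[0, 1]

# ...while the underlying innovations (first differences) do not.
diff_corr = np.corrcoef(np.diff(x), np.diff(y))[0, 1]

print(f"correlation of levels:      {level_corr:.2f}")
print(f"correlation of differences: {diff_corr:.2f}")
```

Differencing the series, or otherwise modeling the trend, is one standard guard against being fooled by correlated levels.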
Economic variables are observed in reality and therefore are not readily isolated for experimental testing. Edward Leamer argued that there was no essential difference between econometric analysis and randomized or controlled trials, provided the use of statistical techniques reduces the specification bias, the effects of collinearity between the variables, to the same order as the uncertainty due to the sample size.[3] Today, this critique is no longer binding, as advances in identification are stronger. Identification today may report the average treatment effect (ATE), the average treatment effect on the treated (ATT), or the local average treatment effect (LATE).[4] Specification bias, or selection bias, can be removed through advances in sampling techniques and the ability to sample much larger populations through improved communications, data storage, and randomization techniques. Secondly, endogeneity among collinear regressors can be addressed through instrumental variables. By reporting either ATT or LATE, researchers can control for or eliminate heterogeneous error, reporting only the effects on the group as defined.
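The role of randomization in identifying the ATE can be shown with a minimal potential-outcomes sketch. This is a hypothetical simulation; the true effect of 2.0 and all variable names are illustrative assumptions, not values from the sources cited.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Potential outcomes: every unit's outcome with and without treatment.
# The true ATE is 2.0 by construction (an illustrative value).
y0 = rng.normal(size=n)         # outcome if untreated
y1 = y0 + 2.0                   # outcome if treated
d = rng.integers(0, 2, size=n)  # randomized treatment assignment

# Only one potential outcome is ever observed per unit.
y = np.where(d == 1, y1, y0)

# Under randomization, the difference in group means estimates the ATE.
ate_hat = y[d == 1].mean() - y[d == 0].mean()
print(f"estimated ATE: {ate_hat:.2f}")  # close to the true value of 2.0
```

ATT and LATE replace the simple difference in means with comparisons restricted to the treated units or to the compliers of an instrument, but the identification logic is the same.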
Economists, when using data, may have a number of explanatory variables they want to use that are highly collinear, such that researcher bias may be important in variable selection. Leamer argues that economists can mitigate this by running statistical tests with differently specified models and discarding any inferences that prove to be "fragile", concluding that "professionals ... properly withhold belief until an inference can be shown to be adequately insensitive to the choice of assumptions."[5] Today, the selective search across specifications is known as p-hacking, and it is not a failure of econometric methodology but a potential failure of a researcher who may be seeking to prove their own hypothesis.[6] P-hacking is not accepted in economics, and journals increasingly require authors to disclose their original data and the code used to perform the statistical analysis.[7] However, Sala-i-Martin[8] argued that it is possible to specify two models suggesting contrary relations between the same two variables. Robert Goldfarb labeled this the emerging contrary results phenomenon.[9] Such results should be discussed with respect to the underlying theory that the mechanism is attempting to capture, particularly where two-way causality is plausible.
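Leamer's prescription, rerunning the analysis under many specifications and discarding fragile inferences, can be sketched as follows. The data-generating process and variable names here are illustrative assumptions; the point is that the coefficient of interest is re-estimated under every subset of candidate controls and the resulting range is reported rather than a single favored specification.

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(2)
n = 500

# Illustrative data: y depends on x and z1; z2 and z3 are candidate
# controls, and z1, z2 are collinear with x.
x = rng.normal(size=n)
z1 = 0.5 * x + rng.normal(size=n)
z2 = 0.3 * x + rng.normal(size=n)
z3 = rng.normal(size=n)
y = 1.0 * x + 0.8 * z1 + rng.normal(size=n)

controls = {"z1": z1, "z2": z2, "z3": z3}

def coef_on_x(names):
    """OLS of y on a constant, x, and the chosen controls; return x's coefficient."""
    X = np.column_stack([np.ones(n), x] + [controls[k] for k in names])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Re-estimate x's coefficient under every subset of the candidate controls.
estimates = {s: coef_on_x(s) for r in range(4) for s in combinations(controls, r)}
lo, hi = min(estimates.values()), max(estimates.values())

# If the interval [lo, hi] spanned zero, Leamer would call the inference fragile.
print(f"coefficient on x ranges from {lo:.2f} to {hi:.2f} across specifications")
```

In this construction, omitting the relevant control z1 biases the estimate upward, so the range across specifications is wide even though its sign is stable.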
Kennedy (1998, pp. 1–2) reports econometricians as being accused of using sledgehammers to crack open peanuts: that is, they apply a wide range of complex statistical techniques while turning a blind eye to data deficiencies and the many questionable assumptions required for the application of those techniques.[10] Kennedy quotes the critique of practice in Stefan Valavanis's 1959 textbook Econometrics:
Econometric theory is like an exquisitely balanced French recipe, spelling out precisely with how many turns to mix the sauce, how many carats of spice to add, and for how many milliseconds to bake the mixture at exactly 474 degrees of temperature. But when the statistical cook turns to raw materials, he finds that hearts of cactus fruit are unavailable, so he substitutes chunks of cantaloupe; where the recipe calls for vermicelli he uses shredded wheat; and he substitutes green garment dye for curry, ping-pong balls for turtle's eggs, and for Chalifougnac vintage 1883, a can of turpentine. (1959, p. 83)[11]
Macroeconomic critiques
Looking primarily at macroeconomics, Lawrence Summers has criticized econometric formalism, arguing that "the empirical facts of which we are most confident and which provide the most secure basis for theory are those that require the least sophisticated statistical analysis to perceive." Summers is not critiquing the methodology itself but rather its usefulness in developing macroeconomic theory.
He examines two well-cited macroeconometric studies (Hansen & Singleton (1982, 1983) and Bernanke (1986)) and argues that while both make brilliant use of econometric methods, neither paper speaks to formal theoretical proof. Noting that in the natural sciences "investigators rush to check out the validity of claims made by rival laboratories and then build on them," Summers points out that this rarely happens in economics, which to him is a result of the fact that "the results [of econometric studies] are rarely an important input to theory creation or the evolution of professional opinion more generally." To Summers:[12]
Successful empirical research has been characterized by attempts to gauge the strength of associations rather than to estimate structural parameters, verbal characterizations of how causal relations might operate rather than explicit mathematical models, and the skillful use of carefully chosen natural experiments rather than sophisticated statistical technique to achieve identification.
Lucas critique
Robert Lucas criticised the use of overly simplistic econometric models of the macroeconomy to predict the implications of economic policy, arguing that the structural relationships observed in historical models break down if decision makers adjust their preferences to reflect policy changes. Lucas argued that policy conclusions drawn from contemporary large-scale macroeconometric models were invalid, as economic actors would change their expectations of the future and adjust their behaviour accordingly. A good macroeconometric model should therefore incorporate microfoundations to model the effects of a policy change, with equations representing representative agents responding to economic changes based on rational expectations of the future; this implies their pattern of behaviour might be quite different if economic policy changed.
Austrian School critique
The present-day Austrian School of economics typically rejects much of econometric modeling. The historical data used to build econometric models, its adherents claim, represents behavior under circumstances idiosyncratic to the past; thus econometric models capture correlational, not causal, relationships. Econometricians have addressed this criticism by adopting quasi-experimental methodologies. Austrian school economists remain skeptical of these corrected models, maintaining that statistical methods are unsuited to the social sciences.[13]
The Austrian School holds that the counterfactual must be known for a causal relationship to be established. The changes due to the counterfactual could then be subtracted from the observed changes, leaving only the changes caused by the variable. Meeting this critique is very challenging, since "there is no dependable method for ascertaining the uniquely correct counterfactual" for historical data.[14] For non-historical data, the Austrian critique is met with randomized controlled trials, which must be purposefully designed in advance, as historical data cannot be.[15] The use of randomized controlled trials is becoming more common in social science research. In the United States, for example, the Education Sciences Reform Act of 2002 made funding for education research contingent on scientific validity, defined in part as "experimental designs using random assignment, when feasible."[16] In answering questions of causation, parametric statistics addresses the Austrian critique only in randomized controlled trials.
If the data is not from a randomized controlled trial, econometricians meet the Austrian critique with quasi-experimental methodologies, including identifying and exploiting natural experiments. These methodologies attempt to extract the counterfactual post hoc so that the use of the tools of parametric statistics is justified. Since parametric inference relies on sampling distributions that are guaranteed by the central limit theorem only under a randomization methodology, tools such as the confidence interval are then used outside their specification: the amount of selection bias always remains unknown.[17]
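Why randomization matters here can be illustrated with a minimal sketch of self-selection. This is a hypothetical simulation, and the selection mechanism is an assumption chosen for illustration: when units select into treatment based on their baseline outcome, the naive difference in means is badly biased even though the true effect is zero.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Illustrative selection mechanism: the treatment has no effect at all,
# but units with higher baseline outcomes are more likely to take it up.
baseline = rng.normal(size=n)
y = baseline.copy()                      # true treatment effect is exactly 0
d = (baseline + rng.normal(size=n)) > 0  # self-selection on the baseline

# Without randomization, the naive comparison attributes the selection
# effect to the treatment.
naive = y[d].mean() - y[~d].mean()
print(f"naive difference in means: {naive:.2f} (true effect is 0)")
```

Quasi-experimental designs try to reconstruct the missing randomization after the fact; the Austrian objection is that the size of the remaining bias in any particular application cannot be verified from the data alone.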
Response to criticisms
Structural causal modeling is an emerging discipline originating with the work of Judea Pearl and the related work of economists Joshua D. Angrist and Jörn-Steffen Pischke. It attempts to formalize the limitations of quasi-experimental methods from a causality perspective, allowing experimenters to precisely quantify the risks of quasi-experimental research.[15]
Literature on criticisms of econometrics
- Kmenta, J. (2025). Econometrics: A failed science?. In International Encyclopedia of Statistical Science (pp. 773–776). Springer, Berlin, Heidelberg.
- Hendry, D. F. (2000). Econometrics: alchemy or science?: essays in econometric methodology. OUP Oxford.[18]
- Moosa, I. A. (2017). Econometrics as a con art: exposing the limitations and abuses of econometrics. Edward Elgar Publishing.
- Pinto, H. (2011). The role of econometrics in economic science: An essay about the monopolization of economic methodology by econometric methods. The Journal of Socio-Economics, 40(4), 436-443.
- Swann, G. P. (2006). Putting econometrics in its place: a new direction in applied economics. Edward Elgar Publishing.[19]
Notes
- ^ Gordon Tullock, "A Comment on Daniel Klein's 'A Plea to Economists Who Favor Liberty'", Eastern Economic Journal, Spring 2001, note 2 (Text: "As Ronald Coase says, 'if you torture the data long enough it will confess'." Note: "I have heard him say this several times. So far as I know he has never published it.")
- ^ McCloskey, D.N. (May 1985). "The Loss Function has been mislaid: the Rhetoric of Significance Tests" (PDF). American Economic Review. 75 (2): 201–205.
- ^ Leamer, Edward (March 1983). "Let's Take the Con out of Econometrics". American Economic Review. 73 (1): 31–43. JSTOR 1803924.
- ^ "2 Probability and Regression Review – Causal Inference
The Mixtape". mixtape.scunning.com. Retrieved 2025-10-26. - ^ Leamer, Edward (March 1983). "Let's Take the Con out of Econometrics". American Economic Review. 73 (1): 31–43. JSTOR 1803924.
- ^ Brodeur, Abel; Carrell, Scott; Figlio, David; Lusher, Lester (November 2023). "Unpacking P-hacking and Publication Bias". American Economic Review. 113 (11): 2974–3002. doi:10.1257/aer.20210795. ISSN 0002-8282.
- ^ "AEA Data and Code Policies and Guidance". www.aeaweb.org. Retrieved 2025-10-26.
- ^ Sala-i-Martin, Xavier X. (November 1997). "I Just Ran Four Million Regressions". NBER Working Paper Series. doi:10.3386/w6252.
- ^ Goldfarb, Robert S. (December 1997). "Now you see it, now you don't: emerging contrary results in economics". Journal of Economic Methodology. 4 (2): 221–244. doi:10.1080/13501789700000016. ISSN 1350-178X.
- ^ Kennedy, P. (1998). A Guide to Econometrics (4th ed.). Blackwell.
- ^ Valavanis, Stefan (1959). Econometrics. McGraw-Hill.
- ^ Summers, Lawrence (June 1991). "The Scientific Illusion in Empirical Macroeconomics". Scandinavian Journal of Economics. 93 (2): 129–148. doi:10.2307/3440321. JSTOR 3440321.
- ^ Garrison, Roger. "Mises and His Methods", in The Meaning of Ludwig von Mises: Contributions in Economics, Sociology, Epistemology, and Political Philosophy, ed. Herbener, pp. 102–117.
- ^ DeMartino, George F. (2021). "The specter of irreparable ignorance: counterfactuals and causality in economics". Review of Evolutionary Political Economy. 2 (2): 253–276. doi:10.1007/s43253-020-00029-w. ISSN 2662-6136. PMC 7792558.
- ^ a b Angrist, Joshua; Pischke, Jörn-Steffen (15 December 2008). Mostly Harmless Econometrics. Princeton University Press. ISBN 978-1400829828.
- ^ Education Sciences Reform Act of 2002, Pub. L. 107–279; Approved Nov. 5, 2002; 116 Stat. 1941, As Amended Through P.L. 117–286, Enacted December 27, 2022 "https://www.govinfo.gov/content/pkg/COMPS-747/pdf/COMPS-747.pdf"
- ^ Harris, Anthony D.; McGregor, Jessina C.; Perencevich, Eli N.; Furuno, Jon P.; Zhu, Jingkun; Peterson, Dan E.; Finkelstein, Joseph (2006). "The Use and Interpretation of Quasi-Experimental Studies in Medical Informatics". Journal of the American Medical Informatics Association. 13 (1): 16–23. doi:10.1197/jamia.M1749. ISSN 1067-5027. PMC 1380192. PMID 16221933.
- ^ Hansen, Bruce E. (1996). "Methodology: Alchemy or Science?". The Economic Journal. 106 (438): 1398–1413. doi:10.2307/2235531. JSTOR 2235531.
- ^ Adkisson, Richard V. (2008). "Putting Econometrics in its Place: A New Direction in Applied Economics". Review of Social Economy. 66 (1): 127–129.