Understanding the advantages and limitations of economists’ methods clarifies the value they can add to analysis of non-economic questions. Equally important, it underscores how economists’ approach can complement but never replace alternative, often qualitative methods used in other scholarly disciplines.
CAMBRIDGE – Economists have never been shy about taking on the big questions that disciplines such as history, sociology, or political science consider their own province. What have been slavery’s long-run implications for contemporary American society? Why do some communities exhibit higher levels of social trust than others? What explains the rise of right-wing populism in recent years?
In addressing these and many other non-economic issues, economists have gone well beyond their bread-and-butter preoccupation with supply and demand. This transgression of disciplinary boundaries is not always welcomed. Other scholars object (often correctly) that economists do not bother to familiarize themselves with existing work in relevant disciplines. They complain (again rightly) about an inhospitable academic culture. Replete with interruptions and aggressive questioning, economics seminars can seem to outsiders more akin to the Inquisition than a forum for colleagues to communicate results and probe new ideas.
Perhaps the most important source of tension, however, arises from the methods economists bring to their research. Economists rely on statistical tools to demonstrate that a particular underlying factor had a “causal” effect on the outcome of interest. Often misunderstood, this method can be the source of endless and unproductive conflict between economists and others.
Understanding the advantages (and limitations) of economists’ method clarifies the value they can add to analysis of non-economic questions. Equally important, it underscores how economists’ approach can complement but never replace alternative, often qualitative methods used in other scholarly disciplines.
It helps to begin with the idea of causality itself. In the sciences, we acquire knowledge about causation in one of two ways. Either we start from a cause and try to identify its effects. Or we start from the effect and try to ascertain its cause(s). The Columbia University statistician Andrew Gelman has called the first method “forward causal inference” (going from cause to possible effects) and the second “reverse causal inference” (going from effect to likely causes).
Economists are obsessed with the first of these approaches – forward causal inference. The most highly prized empirical research is that which demonstrates that an exogenous variation in some underlying cause X has a predictable and statistically significant effect on an outcome of interest Y.
In the natural sciences, causal effects are measured using lab experiments that can isolate the effect of variations in physical conditions on the outcome of interest. Economists sometimes mimic this method through randomized social experiments. For example, households might be randomly assigned to a cash grant program – with some receiving the extra income and others not – to discover the consequences of additional income.
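To see why randomization does the work here, consider a minimal sketch in Python. Everything in it is invented for illustration (the sample size, the grant effect of 50, the variable names); the point is only that, because the grant is assigned by coin flip, a simple difference in average outcomes between the two groups recovers its causal effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical households: baseline income also affects the outcome
baseline_income = rng.normal(500, 100, n)

# Random assignment: receiving the grant is independent of everything else
treated = rng.integers(0, 2, n).astype(bool)

# Simulated outcome with a true grant effect of 50 (the number we hope to recover)
spending = 0.8 * baseline_income + 50 * treated + rng.normal(0, 40, n)

# Because assignment was random, the difference in group means
# is an unbiased estimate of the grant's causal effect
effect = spending[treated].mean() - spending[~treated].mean()
print(f"Estimated effect of the grant: {effect:.1f}")  # should land near 50
```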
More often than not, history and social life do not permit lab-like conditions that allow the effects of changes in the human condition to be precisely ascertained and measured. Economists resort to imaginative statistical techniques instead.
For example, they might document a statistical association between an exogenous factor such as rainfall and the incidence of civil conflict, allowing them to infer that changes in income levels (due to fluctuations in agricultural output) are a cause of civil wars. Note the key piece of ingenuity here: because civil wars cannot influence weather patterns, the correlation between the two must reflect causality running in only one direction, from rainfall to conflict.
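The statistical technique lurking behind this kind of argument is typically an instrumental-variables (two-stage least squares) regression. The sketch below is a stylized illustration with simulated data: the coefficients, the “institutions” confounder, and the variable names are all assumptions made for the example, not estimates from any published study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Rainfall is exogenous: conflict cannot cause the weather
rainfall = rng.normal(0, 1, n)

# An unobserved confounder (think "institutions") that moves both income
# and conflict; this is what would bias a naive regression
confounder = rng.normal(0, 1, n)

income = 1.0 * rainfall - 0.5 * confounder + rng.normal(0, 1, n)
conflict = -0.3 * income + 0.8 * confounder + rng.normal(0, 1, n)  # true effect: -0.3

def slope(y, x):
    """Slope coefficient from a one-regressor least-squares fit."""
    xd = x - x.mean()
    return (xd * (y - y.mean())).sum() / (xd * xd).sum()

# Naive regression of conflict on income picks up the confounder's influence
print("naive OLS estimate:", round(slope(conflict, income), 2))         # roughly -0.48

# Two-stage least squares: keep only the rainfall-driven part of income
fitted_income = slope(income, rainfall) * (rainfall - rainfall.mean()) + income.mean()
print("IV (2SLS) estimate:", round(slope(conflict, fitted_income), 2))  # close to -0.3
```

The second stage uses only the part of income that moves with rainfall, and that part, by construction, cannot have been caused by conflict; that is what corrects the naive, confounded estimate.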
Well-done research in this style can be a beautiful thing to behold and a significant accomplishment – as reliable a causal assertion as is possible in the social sciences. Yet it might leave a historian or a political scientist cold.
This is because the economists’ method does not yield an answer to the question “what causes civil conflict” (the reverse causal inference question). It merely provides evidence on one of the causes (income fluctuations), which may not even be one of the more important factors. Worse, because economists are trained only in the forward causal-inference approach, they often present their partial answer as if it were the comprehensive one, further raising the ire of scholars from other disciplines.
There are other sleights of hand that cause economists problems. In their quest for statistical “identification” of a causal effect, economists often have to resort to techniques that answer either a narrower or a somewhat different version of the question that motivated the research.
Results from randomized social experiments carried out in particular regions of, say, India or Kenya may not apply to other regions or countries. A research design exploiting variation across space may not yield the correct answer to a question that is essentially about changes over time: what happens when a region is hit by a bad harvest? And the particular exogenous shock used in the research may not be representative; income shortfalls not caused by water scarcity, for example, can have different effects on conflict than rainfall-related shocks.
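The external-validity worry is easy to make concrete with another purely illustrative simulation: if the effect of an income shock differs across regions (the regions, effect sizes, and names below are invented), a well-identified estimate from one place can be a poor guide to another.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# Invented regions in which the same income shock has different effects,
# say because local institutions absorb shocks differently
true_effects = {"region_A": -0.6, "region_B": -0.1}

for region, true_effect in true_effects.items():
    income_shock = rng.normal(0, 1, n)
    conflict = true_effect * income_shock + rng.normal(0, 1, n)
    xd = income_shock - income_shock.mean()
    estimate = (xd * (conflict - conflict.mean())).sum() / (xd * xd).sum()
    print(region, round(estimate, 2))

# A cleanly identified estimate from region_A (about -0.6) would badly
# mispredict the consequences of the same shock in region_B (about -0.1).
```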
So, economists’ research can rarely substitute for more complete works of synthesis, which consider a multitude of causes, weigh likely effects, and address spatial and temporal variation of causal mechanisms. Work of this kind is more likely to be undertaken by historians and non-quantitatively oriented social scientists.
Judgment necessarily plays a larger role in this kind of research, which in turn leaves greater room for dispute about the validity of the conclusions. And no synthesis can produce a complete list of the causes, even if one could gauge their relative significance.
Nevertheless, such work is essential. Economists would not even know where to start without the work of historians, ethnographers, and other social scientists who provide rich narratives of phenomena and hypothesize about possible causes, but do not claim causal certainty.
Economists can be justifiably proud of the power of their statistical and analytical methods. But they need to be more self-conscious about these tools’ limitations. Ultimately, our understanding of the social world is enriched by both styles of research. Economists and other scholars should embrace the diversity of their approaches instead of dismissing or taking umbrage at work done in adjacent disciplines.