Economics imperialism in methods
Noah Smith writes that the ground has fundamentally shifted in economics – so much that the whole notion of what “economics” means is undergoing a dramatic change. In the mid-20th century, economics changed from a literary to a mathematical discipline. Now it might be changing from a deductive, philosophical field to an inductive, scientific field. The intricacies of how we imagine the world must work are taking a backseat to the evidence about what is actually happening in the world. Matthew Panhans and John Singleton write that while historians of economics have noted the turn of economic research toward applied work since the 1970s, the shift toward quasi-experimental methods is less well understood.
Matthew Panhans and John Singleton write that the missionary’s Bible is less Mas-Colell and more Mostly Harmless Econometrics. In 1984, George Stigler pondered the “imperialism” of economics. The key evangelists named by Stigler in each mission field, from Ronald Coase and Richard Posner (law) to Robert Fogel (history), Gary Becker (sociology), and James Buchanan (politics), bore University of Chicago connections. Despite the diverse subject matters, what unified the work for Stigler was the application of a common behavioral model. In other words, what made the analyses “economic” was the postulate of rational pursuit of goals. But rather than the application of a behavioral model of purposive goal-seeking, “economic” analysis is increasingly the empirical investigation of causal effects, for which the quasi-experimental toolkit is essential.
Nicola Fuchs-Schuendeln and Tarek Alexander Hassan write that, even in macroeconomics, a growing literature relies on natural experiments to establish causal effects. The “natural” in natural experiments indicates that a researcher did not consciously design the episode to be analyzed, but researchers can nevertheless use it to learn about causal relationships. Whereas the main task of a researcher carrying out a laboratory or field experiment lies in designing it in a way that allows causal inference, the main task of a researcher analyzing a natural experiment lies in arguing that the historical episode under consideration in fact resembles an experiment. Doing so hinges on identifying valid treatment and control groups, that is, on arguing that the treatment is as good as randomly assigned.
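To make that quasi-experimental logic concrete, here is a minimal difference-in-differences sketch, one standard way of exploiting a natural experiment. Everything in it (the simulated data, the variable names, the size of the effect) is illustrative and is not drawn from the papers discussed above.

```python
# Minimal difference-in-differences sketch on simulated data (illustrative only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),  # 1 if in the "treatment" group/region
    "post": rng.integers(0, 2, n),     # 1 if observed after the policy change
})
true_effect = 1.5
df["y"] = (
    2.0                                         # baseline outcome
    + 0.5 * df["treated"]                       # permanent level difference between groups
    + 0.8 * df["post"]                          # common time trend affecting everyone
    + true_effect * df["treated"] * df["post"]  # the causal effect we want to recover
    + rng.normal(0, 1, n)                       # noise
)

# The coefficient on the interaction term is the difference-in-differences estimate.
model = smf.ols("y ~ treated + post + treated:post", data=df).fit()
print(model.params["treated:post"])  # close to 1.5 in large samples
```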
Data collection, clever identification and trendy topics
Daniel S. Hamermesh writes that top journals are publishing many fewer papers that represent pure theory, regardless of subfield, somewhat less empirical work based on publicly available data sets, and many more empirical studies based on data collected by the author(s) or on laboratory or field experiments. The methodological innovations that have captivated the major journals in the past two decades – experimentation, and obtaining one’s own unusual data to examine causal effects – are unlikely to be any more permanent than was the profession’s fascination with variants of micro theory, growth theory, and publicly available data in the 1960s and 1970s.
Barry Eichengreen writes that, as recently as a couple of decades ago, empirical analysis was informed by relatively small and limited data sets. While older members of the economics establishment continue to debate the merits of competing analytical frameworks, younger economists are bringing to bear important new evidence about how the economy operates. A first approach relies on big data. A second approach relies on new data. Economists are using automated information-retrieval routines, or “bots,” to scrape bits of novel information about economic decisions from the World Wide Web. A third approach employs historical evidence. Working in dusty archives has become easier with the advent of digital photography, mechanical character recognition, and remote data-entry services.
Tyler Cowen writes that top plaudits are won by quality empirical work, but lots of people have good skills. Today, there is thus a premium on a mix of clever ideas — often identification strategies — and access to quality data. Over time, let’s say that data become less scarce, as arguably has been the case in the field of history. Lots of economics researchers might also eventually have access to “Big Data.” Clever identification strategies won’t disappear, but they might become more commonplace. We would then still need a standard for elevating some work as more important or higher quality than other work. Popularity of topic could play an increasingly large role over time, and that is how economics might become more trendy.
Noah Smith (HT Chris Blattman) writes that the biggest winners from this paradigm shift are the public and policymakers, as the results of these experiments are often easy enough for them to understand and use. Women in economics also win from this shift towards empirical economics. When theory doesn’t rely on data for confirmation, it often becomes a bullying/shouting contest in which women are often disadvantaged. But with quasi-experiments, they can use reality to smack down bullies, as in the sciences. Beyond orthodox theory, another loser from this paradigm shift is heterodox thinking, which is much more theory-dominated than the mainstream; and it wasn’t heterodox theory that eclipsed neoclassical theory, it was empirics.
Author: Jérémie Cohen-Setton is a PhD candidate in Economics at U.C. Berkeley and a summer associate intern at Goldman Sachs Global Economic Research.
Should experts in the public service follow rules, or rely on their own judgment? The answer is crucial for many areas of public policy, including criminal sentencing, immigration and education. It is also of pivotal importance to monetary policy.
Central bankers usually have discretion over how to use interest rates to achieve their goals. Yet it is easy to see the problems that result, as analysts pore over every word any central banker utters and markets see-saw in response. This tendency is becoming more frenzied as America’s Federal Reserve prepares to raise rates, issuing tantalisingly vague statements along the way. Some Republicans in Congress think it would be better if the central bank’s actions were more predictable. Jeb Hensarling, who heads the committee that oversees the Fed, frequently urges it to set interest rates using a simple formula.
The debate about rules versus discretion is an old one. In 1977 economists Finn Kydland and Edward Prescott (who went on to win the Nobel prize for their work) showed how too much tinkering with interest rates can be harmful. In a simple model of the economy, two things determine inflation: the expectations of workers, who must decide how much pay to ask for, and the interest rate. Wage contracts last for a while, but policymakers can change interest rates at any time. If policymakers prefer lower unemployment than is natural (ie, a rate that causes some inflation), they will be tempted to cut rates when wage growth is moderate. Foreseeing this, workers expect high inflation to begin with and demand higher pay, so the economy ends up with higher inflation but no lasting gain in employment. Policymakers would do better if they could credibly promise to sit on their hands. Most economists reckon this cycle helps to explain the high inflation of the 1970s.
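A stylized numerical version of this argument (a Barro-Gordon-style setup with made-up parameter values, not Kydland and Prescott’s own model) makes the inflation bias visible: iterating the policymaker’s best response to given wage expectations, inflation settles above zero under discretion, even though output ends up no higher than under a credible commitment to zero inflation.

```python
# Stylized time-inconsistency sketch (Barro-Gordon flavour); all parameters are illustrative.
# Output: y = y_nat + b*(pi - pi_e); policymaker loss: (y - y_target)^2 + a*pi^2,
# with y_target above y_nat, so there is a temptation to engineer surprise inflation.
from scipy.optimize import minimize_scalar

y_nat, y_target, a, b = 0.0, 1.0, 1.0, 1.0

def loss(pi, pi_e):
    y = y_nat + b * (pi - pi_e)
    return (y - y_target) ** 2 + a * pi ** 2

def best_response(pi_e):
    """Inflation the policymaker picks once wage expectations pi_e are locked in."""
    return minimize_scalar(loss, args=(pi_e,), bounds=(-10, 10), method="bounded").x

# Discretion: workers foresee the policymaker's response, so expectations settle
# at the fixed point pi_e = best_response(pi_e).
pi_e = 0.0
for _ in range(100):
    pi_e = best_response(pi_e)

print(f"discretion: inflation ~ {pi_e:.2f}, output gap = 0")  # inflation bias, no output gain
print("commitment: inflation ~ 0.00, output gap = 0")         # same output, lower inflation
```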
Making central banks independent helps solve the problem. Public-spirited central bankers with a clear mandate are less likely to seek to inflate the economy artificially. But some think an algorithm could do their job even better. In 1993 John Taylor of Stanford University showed that the Fed typically behaves as if it follows a simple rule anyway.
Mr Taylor’s recipe for rates is as follows. Take the long-run real interest rate, which Mr Taylor assumed to be 2%. Add inflation. Then, adjust for your economic goals. For every 1% that inflation is above target, raise rates by a further 0.5%. For every 1% that output falls short of its potential, cut rates by 0.5%.
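Written out as a short function, the recipe looks like this (a sketch of the formula as described above; actual implementations differ in how they measure inflation and the output gap):

```python
def taylor_rule(inflation, inflation_target, output_gap, real_rate=2.0):
    """Taylor's recipe as described above: suggested nominal policy rate, in percent.

    All inputs are in percentage points; output_gap is output relative to
    potential, so it is negative when the economy falls short of potential.
    """
    return (real_rate                               # long-run real interest rate
            + inflation                             # add inflation
            + 0.5 * (inflation - inflation_target)  # lean against above-target inflation
            + 0.5 * output_gap)                     # cut when output is below potential

# Example: inflation at 3% against a 2% target and output 1% below potential
# gives 2 + 3 + 0.5 - 0.5 = 5%.
print(taylor_rule(inflation=3.0, inflation_target=2.0, output_gap=-1.0))
```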
This formula is remarkably good at predicting the Fed’s behaviour. Moreover, the Fed’s few deviations from the rule have not always been a success. Mr Taylor thinks the low rates of the early 2000s, for instance, inflated America’s housing bubble.
Monetary policy based on rules has one main advantage: transparency. The policy works by changing the cost of saving and borrowing. When rates go up, people are more inclined to save; when rates fall, they are more likely to borrow. When making these decisions, people care about tomorrow’s interest rate as well as today’s. The more they can predict how the central bank will act, the better they can plan—and the more likely they are to behave in the way that the central bank wants.
The benefit of transparency explains the policy of “forward guidance”—pledges by central banks, in the aftermath of the financial crisis, to keep rates low. Yet central bankers do not like having their hands tied. The Bank of England, for instance, listed no fewer than four reasons why it might abandon its own guidance, including runaway inflation and financial instability.
The need for such get-out clauses demonstrates the pitfalls of monetary rules. It is easy to see the optimal interest rate in an economic model (especially with the benefit of hindsight). It is much harder to understand how best to react to unexpected economic conditions. In 1987 the Fed deviated from the Taylor rule when stockmarkets crashed. From 2009 it deviated because it could not cut interest rates below zero, as the rule recommended. Research also suggests that the trade-off between inflation and unemployment evolves as the economy changes. If so, then any rate-setting formula would also need to change.
A recipe for disagreement
Mr Taylor’s rule is less clear than it seems. Take the long-term real interest rate, which he assumed to be 2%. Today, many economists suspect this rate is permanently lower as a result of chronically weak demand and low productivity growth. This would mean that interest rates should be lower than the Taylor rule suggests. Yet there is no more consensus about what the long-term real interest rate is than about where the Fed should set short-term rates.
A similar problem arises with potential growth. Debate rages about the amount of slack left in Western labour markets. Economists do not agree on how much wages are constrained by part-time workers who want more hours, or on how much the labour-force participation rate, as well as unemployment, varies with demand. Estimates of slack are themselves the product of qualitative judgment. Plugging them into a rule would give a spurious impression of objectivity.
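A back-of-the-envelope check (purely illustrative numbers, reusing the formula sketched earlier) shows how far the rule’s prescription swings with these contested inputs:

```python
# Purely illustrative: how the Taylor prescription moves with contested inputs.
def taylor_rule(inflation, inflation_target, output_gap, real_rate):
    return real_rate + inflation + 0.5 * (inflation - inflation_target) + 0.5 * output_gap

inflation, target = 2.0, 2.0                # inflation exactly on target
for real_rate in (2.0, 1.0, 0.5):           # competing views of the long-run real rate
    for output_gap in (0.0, -1.0, -2.0):    # competing estimates of slack
        rate = taylor_rule(inflation, target, output_gap, real_rate)
        print(f"r* = {real_rate}%, gap = {output_gap}%  ->  prescribed rate {rate:.1f}%")
# The prescriptions range from 1.5% to 4.0%: the "rule" is only as objective
# as the judgments fed into it.
```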
Those calling for a Taylor rule acknowledge these uncertainties. They stress that it would not have to be binding: the Fed could override it. But doing so would be to court controversy. Fed chairmen might not want to stick their necks out by deviating from a rule for too long. They are therefore right to see one as an unnecessary constraint on their autonomy.
If the public—or financial markets—cannot predict interest rates, it is because setting them is difficult. There is no overcoming that. If politicians want more scrutiny of Fed policy, the opinion of a well-staffed shadow body would make a better comparator than a formula. Until the day the economy is fully understood, human judgment has a crucial role to play. Algorithms are replacing many jobs, but they should not supplant central bankers.