TLDR: I think through my complicated relationship with economics and why I’m both deeply compelled by economics as a style of thinking but deeply frustrated with economics as an existing discipline.
Before we begin, I want to note that the only formal economics training I’ve received was an intermediate microeconomics course I took in college, which I mostly slept through.1 Most of what I’ve learned has come from papers that I’ve read, econ blogs that I follow, and the occasional peek at some online courses. Given that background, take everything I say with a giant grain of salt.
Modern Economics is applied mathematical modeling
To begin with, what do I mean when I say economics? While the obvious answer is “the study of the economy”, I don’t think that’s a good description of what economics as an intellectual discipline really is. Economists have been analyzing all kinds of non-economic aspects of society for decades now (things like crime, voting, or discrimination) and complaints about “economics imperialism” are well-known.
Instead, I think economics is better defined as a particular style of thinking. As a first approximation, I’d say that economics is the application of mathematical models to study human society. This emphasis is most obvious when you look at all the equations in economics textbooks or research papers – even the most basic concepts like supply and demand are taught using formal mathematics (usually two lines meeting at a point, like the famous chart). But even when I read a general audience op-ed or blog post by an economist without any equations, it’s pretty easy (at least to me) to see the underlying “mathematical skeleton” of their argument.2 That’s a stark difference compared to someone like (say) Robert Reich who is not trained as an economist, even when they’re talking about “the economy”.
Of course, this definition isn’t perfect. I know of at least two major problems:
- This is a distinctly modern view of what economics is. Most older economists (like Adam Smith) sound much more like philosophers and usually lack any kind of formal mathematics in their arguments. I’m no expert, but I believe that the “mathematization” of economics only became dominant somewhere between 1930 and 1970 thanks to people like Paul Samuelson and Kenneth Arrow (although work had obviously begun earlier with folks like Léon Walras).
- Many social science disciplines use statistical models in their work, even if they’re just simple linear regressions. While I think simple statistical models aren’t the same as mathematical models, the line gets blurrier once the statistics gets more complicated and you start making more assumptions.
Why mathematical models?
Ultimately, I think this emphasis on mathematical models gets at the core of a lot of criticisms of economics – most other social science disciplines are significantly more humanities-oriented than economics, which creates a natural culture clash. While the advantages of mathematical models are obvious if you come from a STEM background, I want to try to justify their use to people without that indoctrination.
From my perspective, mathematical models have three main benefits. Most obviously, casting your argument as a mathematical model gives you access to a vast intellectual toolkit. This includes centuries of mathematical theory to help create your argument, well-developed techniques for evaluating your model’s fit to real data, and a ton of prerequisite knowledge you can just assume a properly trained audience will have.
Secondly, mathematical models are naturally very precise. Although you can argue about how relevant a model is, the core of the model becomes clear, with all assumptions explicitly written out and all internal reasoning reduced to objective logical steps. It’s much harder to smuggle in hidden assumptions or to accidentally slip between similar but distinct claims (e.g. from something can happen to something will happen).
Lastly, there’s the way that the very act of formalizing your argument into math pushes you to distill it into its most basic form. While in theory you could try to capture the full complexity of the world in a mathematical model, in practice people can only really reason mathematically about a handful of concepts at once. This means that formalization encourages you to really think about what’s the core logic of your argument and what’s just unnecessary detail. This can obviously go too far (there’s a difference between simple and simplistic!), but when done well you get to make your argument simpler and more general at the same time.
To try to make this concrete, let me give 3 broad roles that I think mathematical models are especially well-suited for, along with economic exemplars of each:
- The Diamond-Dybvig model of bank runs is a great example of what I call a parable. It’s a deliberately stripped-down and simplified model that makes the core logic of why bank runs happen super crisp, even if its assumptions are obviously unrealistic and untrue (e.g. the world obviously has more than two time points). And because the model is so simple, it’s also obvious how the same logic can apply to things that aren’t just banks – for instance, our understanding of the 2007 Financial Crisis is that it was basically a giant bank run on the financial system, which had managed to transmute mortgage-backed securities (a long-term asset) into things like money-market funds. The technical term for these non-banks that act like banks is a “shadow bank”. (I’ll sketch a bare-bones numerical version of this logic right after this list.)
- A really important subclass of parables is counterexamples to things that you might naively believe.
- Another general class of models is the axiomatic model, which tries to derive interesting insights from some “obviously true” axioms. The Gibbard-Satterthwaite Theorem3 is a great example of this. It basically says that for any deterministic ranked voting scheme with >2 voters and >2 outcomes, you either have a dictatorship or a system where people can sometimes get what they want by lying about their preferences (i.e. strategic voting). Because most elections (1) use some form of ranked voting,4 (2) have >2 voters and >2 outcomes, and (3) don’t include any form of randomness when counting votes, these results are immediately applicable to the real world. (I’ll sketch a toy election showing this kind of strategic voting just below.)
- Axiomatic models are most likely to be useful when studying “artificial” systems governed by human-made rules: this includes things like voting, currency exchange (e.g. the balance of payments accounting identity) or auctions.
- Finally, there are empirical models, which are models that seem to actually fit real-world data well. I don’t have a great example off the top of my head because they tend to be quite technical and situation-specific, but I’d broadly place anything from econometrics or the credibility revolution in this bucket.
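To make the parable idea extra concrete, here’s a bare-bones two-depositor version of the Diamond-Dybvig coordination logic from the first bullet above. All the numbers (and the pro-rata split when both depositors run) are my own simplifications for illustration, not anything from the original paper:

```python
# Toy two-depositor bank run: each depositor puts in 1, the bank invests in a
# long asset worth R per unit at t=2 but only l per unit if liquidated at t=1,
# and promises c1 to anyone who withdraws early. (Illustrative numbers only.)
R, l, c1 = 1.5, 0.7, 1.2

def payoffs(a1, a2):
    """Payoffs to (depositor 1, depositor 2) when each either 'run's or 'wait's."""
    runners = [a1, a2].count("run")
    if runners == 0:                 # both wait: the asset matures, split 2R
        return (R, R)
    if runners == 2:                 # both run: split the liquidation value 2*l
        return (l, l)
    # one runs: the bank liquidates c1/l units of the asset to pay them early,
    # and whatever is left matures for the depositor who waited
    leftover = max(2 - c1 / l, 0) * R
    return (c1, leftover) if a1 == "run" else (leftover, c1)

for a1 in ("run", "wait"):
    for a2 in ("run", "wait"):
        p1, p2 = payoffs(a1, a2)
        is_nash = (p1 >= max(payoffs(x, a2)[0] for x in ("run", "wait")) and
                   p2 >= max(payoffs(a1, x)[1] for x in ("run", "wait")))
        print(f"({a1}, {a2}): ({p1:.2f}, {p2:.2f})" + ("  <- equilibrium" if is_nash else ""))
```

Running this prints two equilibria: the good one where everyone waits (and gets 1.5) and the self-fulfilling run (where everyone gets 0.7), which is the multiple-equilibria punchline the parable is meant to deliver.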
Interestingly, these different roles of models push you in different directions. Parables often include obviously untrue simplifying assumptions to simplify the argument, which rules them out as axiomatic models. Axiomatic models tend to characterize broad classes of objects (e.g. all ranked voting systems), which looks very different from empirical models which need to pick particular equations to actually fit real-world data. And empirical models often require extra terms and complications to flexibly fit data that dilute the conceptual simplicity of a parable. Although I suppose it’s possible for a model to fit into multiple categories here, I’m not sure I’ve actually seen this done well in practice.
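And for the Gibbard-Satterthwaite point, here’s the toy election I promised. It uses instant-runoff voting (one common deterministic ranked scheme) with an electorate I made up for illustration, and shows a single voter getting a better outcome by misreporting their ranking:

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-runoff voting: repeatedly eliminate the candidate with the fewest
    first-place votes (ties broken alphabetically) until one candidate remains."""
    candidates = {c for ballot in ballots for c in ballot}
    while len(candidates) > 1:
        tally = Counter(next(c for c in ballot if c in candidates) for ballot in ballots)
        loser = min(candidates, key=lambda c: (tally[c], c))
        candidates.remove(loser)
    return candidates.pop()

# Made-up electorate: 8 voters rank R first, 8 rank L first, 7 rank C first.
honest = [("R", "C", "L")] * 8 + [("L", "C", "R")] * 8 + [("C", "R", "L")] * 7
print(irv_winner(honest))      # C is eliminated first, C's ballots flow to R, so R wins

# Now one L-voter (true preference L > C > R) insincerely ranks C first instead:
strategic = ([("R", "C", "L")] * 8 + [("L", "C", "R")] * 7 +
             [("C", "L", "R")] * 1 + [("C", "R", "L")] * 7)
print(irv_winner(strategic))   # now L is eliminated first and C wins,
                               # an outcome that voter prefers to R
```

This obviously doesn’t prove the theorem (which is a statement about all such voting schemes), it just makes the “strategic voting” conclusion tangible for one of them.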
So what’s wrong with economics?
Now that I’ve tried to justify why I find economics compelling, let me go over some of the ways in which I find economic thought so frustrating.
Weird Assumptions that Make No Sense
One thing I find utterly baffling is the number of assumptions that economists make without explaining why they’re making them. Economics isn’t pure math – the goal is for your models to actually be relevant to reality, not just to probe how some particular mathematical assumptions interact. That means assumptions need to be justified, ideally based on some empirical argument. It’s ok to make assumptions purely to make the math easier, but then you either need to be rigorous about showing that your model still fits real-world data ok or admit that you’re trying to provide a parable for intuition rather than a descriptively valid model.
An extreme example of this is DSGE models of the macroeconomy. I’ve never seen a convincing argument for why assuming a DSGE model makes sense – they make assumptions about the world that are obviously oversimplifications, so they can’t be axiomatically justified, and I’ve literally never seen good evidence that they actually fit real-world data well (in fact, I’d say the overwhelming evidence is that they fit empirical data very poorly). But at the same time, if DSGE models are supposed to be parables, then I have no clue what core argument they’re communicating – they seem flexible enough to convey almost any argument you’d want. As far as I can tell, they’re pretty much useless, and I don’t know why there’s so much research using them.
Bad Parables
Another thing that irks me is how unnecessarily complicated many economic parables are. Instead of spending time trying to identify the core logic of a parable, economists seem to prefer to just write out a bunch of fairly arbitrary equations and crank through some algebra. Unless you just want a proof by contradiction (where a single example is enough to make your argument), I think this is counterproductive. The algebra is usually just a tedious distraction, and the fact that you’re choosing particular equations makes it unclear whether your argument follows from general principles or just from the particular functional form you happened to pick.
To make this concrete, let’s consider the Solow model of economic growth. You can go read the Wikipedia page for the mathematics of the model; I have a couple of complaints about it:
- Why do we even introduce \(A\) and \(L\) as variables? Given the assumptions, they immediately collapse into a single variable \(A \cdot L = \tilde{L}(t) = \tilde{L}(0) e^{r t}\) (spelled out just after this list). Why bother modeling them separately?
- In fact, why bother including \(L\) at all? You basically end up just looking at \(\frac{Y}{AL}\) or “output per unit of effective labor” to eliminate that term, so why add another variable to deal with?
- Why do we assume a Cobb-Douglas production function \(Y = K^\alpha (AL)^{1 - \alpha}\)? This is another example of an assumption that seems to be really common in econ for reasons I don’t understand.
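To spell out the first complaint: under the model’s own assumptions (labor \(L\) grows at some constant rate \(n\) and technology \(A\) at some constant rate \(g\)), the two variables only ever show up as a product growing at a single combined rate,

\[ A(t) \cdot L(t) = A(0) e^{g t} \cdot L(0) e^{n t} = \underbrace{A(0) L(0)}_{\tilde{L}(0)} \, e^{(n + g) t}, \]

so you might as well start with one variable \(\tilde{L}\) growing at rate \(r = n + g\).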
If I were to formulate this model, I’d do something like:
Consider a simple production process which takes \(K > 0\) units of investment and creates \(Y(K) > 0\) units of output. Let’s assume that we always reinvest some proportion \(0 < s \le 1\) of our output into new investment but that some proportion \(0 < \delta \le 1\) of our existing investment vanishes over time (i.e. \(\frac{dK}{dt} = s Y(K) - \delta K\)).
Note that by making this formulation simpler (only 3 variables: time, investment, and output), we’ve also made it more general! It could apply to an abstract economy (where \(Y\) is GDP and \(K\) is the capital stock), but it can also apply to other situations, like a single factory where \(Y\) is revenue and \(K\) is investment, or an academic research field where \(Y\) is “hours of scholarly output” and \(K\) is “hours of total scholarly training”.
To go further, I would try to avoid writing any specific equation for \(Y(K)\) at all! Instead, I’d probe how different assumptions yield different results. For instance (assuming \(Y(K)\) is smooth for mathematical convenience):
- If you assume \(Y(K)\) has decreasing returns (i.e. it’s strictly concave), then you can prove that:
- The percentage growth in \(Y\) will always slow over time
- If you increase the savings rate \(s\), you will increase the growth in \(Y\), but only up to a limit: as long as you have any consumption (i.e. savings rate \(s < 1\)), the percentage growth in \(Y\) will always go to \(0\) over time.
- The smaller \(K\) is, the faster the percentage growth in \(Y\) is.
- If you further assume that \(\frac{dY}{dK} \to 0\) as \(K \to \infty\) (i.e. there’s some limit to the “useful” investment you can make),5 then you can further derive that:
- The absolute growth in \(Y\) will go to 0 over time, regardless of what you do.
- In particular, that means that \(Y\) and \(K\) will converge to some fixed steady state, no matter how little \(K\) you start with.
Avoiding specific equations makes it clearer which properties of \(Y\) are important for which conclusions, which IMO makes it a much more useful parable.
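To see the claimed behavior in action, here’s a quick-and-dirty simulation of this formulation. The specific choices here (\(Y(K) = \sqrt{K}\), the values of \(s\) and \(\delta\), and the starting \(K\)) are arbitrary picks of mine, just one instance of a concave \(Y\) with \(\frac{dY}{dK} \to 0\):

```python
import math

# One arbitrary concave production function with dY/dK -> 0; swap in others to experiment.
Y = math.sqrt
s, delta = 0.3, 0.1        # savings rate and depreciation rate (illustrative values)
K, dt = 0.5, 0.01          # starting investment stock and Euler step size

prev_Y = Y(K)
for year in range(1, 201):
    # dK/dt = s * Y(K) - delta * K, integrated with a crude Euler scheme
    for _ in range(int(1 / dt)):
        K += (s * Y(K) - delta * K) * dt
    growth, prev_Y = (Y(K) - prev_Y) / prev_Y, Y(K)
    if year in (1, 5, 25, 100, 200):
        print(f"year {year:3d}: K = {K:6.3f}, Y = {Y(K):5.3f}, yearly growth in Y = {growth:6.2%}")

# The steady state solves s * Y(K*) = delta * K*; for Y = sqrt that's K* = (s/delta)**2 = 9,
# so K should be flattening out near 9 by the end of the run.
print("predicted steady state K* =", (s / delta) ** 2)
```

The percentage growth in \(Y\) falls toward zero and \(K\) levels off at the steady state where \(s\,Y(K^*) = \delta K^*\), matching the bullet points above; swapping in a \(Y\) that isn’t concave (or whose slope doesn’t die out) is an easy way to see which conclusions break.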
Poor Pedagogy in Intro Econ Courses
This might be colored by my negative experience in college, but I’ve found introductory economics pedagogy to be terrible.6 Most of what you learn in an intro econ course are parables – they’re taught because their core logic is easy to understand, not because they’re necessarily descriptively accurate (they can be descriptively valid, but they aren’t always). Unfortunately, this fact is unclear to most students because intro econ courses never seem to actually analyze real-world data – at best, they throw a handful of stylized facts at students as evidence. Without going into real data, students aren’t exposed to the subtleties of how you translate a parable into an empirical model or how you actually test whether simplifying assumptions are empirically justified in specific cases.
This leaves a giant pedagogical hole where people come out of Econ 101 thinking either that economics is all bullshit or that you can just blindly apply these parables to analyze the real world. Aside from creating incredibly sloppy thinking, I think this “Econ 101-ism” is also why there’s so much anti-economics blowback (and why so much of it is such poor quality). People spend a bunch of time fighting over strawman arguments like “markets are perfectly efficient” or “people are perfectly selfish” instead of engaging with anything actually interesting in economics.
Gaps in the Literature
The last thing I’m frustrated by is the “drunk looking for their keys under the streetlight” effect, where economists sometimes over-fixate on things that are easy to mathematically model, even when those aren’t the most relevant (or interesting) parts of a problem. I think this leads to large gaps in the literature (or at least, the literature that I’m exposed to, which is admittedly a small subset of the entire field), even on some very basic questions. For instance:
- Lots of economics analyzes things at the level of individual preferences, but we know that individuals are part of multiple overlapping communities that are core to their identity. Is there some “multi-level” modeling scheme where we can incorporate these communities into our models instead of assuming people are atomic entities (e.g. like how the residency matching program was tweaked for couples who want to stay together)?
- For instance: should we use social choice theory to actually model individuals as being composed of multiple identities with different preferences that are aggregated together?
- Potentially related to the above: economic models generally take people’s preferences as fixed and try to figure out ways to shape their incentives with things like Pigovian taxes. But people’s preferences don’t arise in a vacuum: we know that their preferences are shaped by the environment that they’re in. Is there a good model of preference formation over time (and particularly the social aspects of preference formation)?
- Corporations are some of the most powerful entities in the world, with huge power to shape their political environment, but most models treat firms purely as nexuses of economic production. When models do treat the political economy of firms, they usually focus on things like regulatory capture, where firms influence politics to improve their economic position. But what’s a good model of the political aims of a firm that are separate from its economic production, like in a crony capitalist system where political power is its own goal?
- As far as I can tell, the entire course was really just “applied Lagrange multipliers” and could’ve been covered in one or two months. I pretty much stopped paying attention after the 2nd week and the only time I struggled with homework was when I had to Google all the economics-specific jargon. ↩︎
- I’d make an analogy to mathematics. While in a first-year math major course there’s usually a strong emphasis on “rigorous” arguments where you laboriously write down proofs in all their gory detail, this tends to disappear when you get more advanced. Instead, people start making “proof skeletons” where you sketch out the main arguments and intuitions, relying on the reader to fill in all the missing details. Terry Tao goes into more detail about this here for those interested. ↩︎
- While Arrow’s Impossibility Theorem is better known (and easier to prove), IMO the Gibbard-Satterthwaite Theorem is actually more relevant. ↩︎
- Actually, most elections only ask for people’s top choice, which is even more restrictive than getting their full rankings. ↩︎
- Actually, I think you only need \(\frac{dY}{dK}\) to eventually stay below some constant \(c < \frac{\delta}{s}\), but that’s much more of a mouthful. ↩︎
- I’m not super impressed with the pedagogy in upper-level econ courses either (which I’ll define as 400+ level courses), but I don’t have enough experience with them to be confident there’s a generalized problem (rather than a couple of poor personal experiences). ↩︎