May 2026

The Tragedy of the Commons

A kind of slow disaster that happens not because anyone is malicious, not because anyone is stupid, but because everyone is being perfectly reasonable. And the mathematics underneath it.

I rewatched A Beautiful Mind a few weeks ago. It is one of my favorite movies. I have seen it five or six times by now, and yet every time I find myself fully engrossed in the film. For those who haven't seen it, it's based on the life of John Nash, a Nobel laureate in Economics, and his struggles with schizophrenia. It's a really good movie. Anyway, there is a particular scene that is usually read as the Eureka moment for the Nash equilibrium: the one in the bar where Nash watches a blonde walk in with her brunette friends and works out, supposedly on the spot, why all the men in the bar should ignore her and pair off with her friends instead. Russell Crowe (the actor who portrays Nash) stares into the middle distance, scribbles on a napkin, and the music swells. That, supposedly, is the moment the Nash equilibrium is born.

Except - and this is the part I did not know until very recently (courtesy of this video) - the scene actually gets the math almost completely wrong. The strategy Nash describes in the movie isn't a Nash equilibrium of the bar game at all. If every other man ignores the blonde, your best response is to be the one guy who goes for her. So "everyone ignores the blonde" can't be stable. The actual equilibrium of the scene is messier and considerably less romantic. The movie took cinematic liberties to make the moment more appealing and provocative, and it worked on me - until now.

So I finished the movie, but one thing kept nagging at me. I could not have told you in clear sentences what a Nash equilibrium actually was (the movie itself doesn't delve into it, but I was curious). I had the vague shape of it - "everyone picks their best move given what everyone else picks" - but vague shapes do not survive a follow-up question. So I did what I usually do when something nags at me. I went and read about it. Properly. With a notebook open and tabs piling up. What I found was that this simple-sounding definition is the formal skeleton sitting underneath an enormous amount of how the world goes wrong. Price wars. Arms races. Why climate policy is so hard. Why a finite resource shared by many users tends to get used up (you don't need a big brain to understand that one :p). The shape recurs. The math underneath is surprisingly the same.

So I thought, let's write another article. It's about time for a new one. Also, I have been working my way through Bruno Simon's Three.js Journey course on the side, and I was looking for an excuse to build something with it that was not just "look, a rotating cube." So when I sat down to write up what I had learned, I figured: why not turn the lake into an actual lake. Why not build twenty little boats. Why not let people watch the equilibrium happen instead of just reading about it.

So this is what came out of that. The first half is a small parable - a lake, twenty boats, and the way things go quietly wrong when nobody is in charge of the whole. The second half is the formal version: what Nash actually proved, why it matters, and how the world's small army of mechanism designers spend their time trying to redesign games whose equilibria sit in the wrong place. I built the article the way I learned the ideas, which is also the way I think they land best. Feel it first. Then name it.

There is a particular shape that bad collective outcomes tend to take. It shows up in fisheries that empty out and forests that thin and atmospheres that warm and roads that clog and group chats that quietly fall apart. Different surfaces, same skeleton underneath. Once you've seen the skeleton you start to see it everywhere, which is either useful or depressing depending on the week.

The skeleton has a name. Garrett Hardin called it the tragedy of the commons in a 1968 essay, and the name stuck even though it's slightly misleading on both ends. It is not always tragic - some villages have lived on shared land for a thousand years without wrecking it - and it is not really about commons in the medieval sense. It is about a particular failure mode that emerges when a finite resource has many users, none of whom are individually in charge of the whole. (Which took me on a tangent - why we tend to establish centers of control in societies - but that will have to be another article.)

The mechanics are simple enough that you can build them out of a pond and some boats. Each user gets a small benefit from taking more than their share. The cost of taking more is paid by the whole pond, which means each user pays only a tiny fraction of the cost of their own choice. Multiplied across many users acting individually but in parallel, the small benefits add up faster than the small costs do, and the pond goes bad.

That is the entire idea. It is also extremely easy to misread. Most people, on first encounter, decide that the lesson is something about human nature - greed, selfishness, the inevitable tragedy. That is not the lesson. The lesson is that you can take twenty thoroughly decent people, give them the wrong structure to act inside, and they will produce a disaster nobody wanted. The villain is the structure.

A bit of vocabulary before we start, since the rest of the piece leans on it. What we are about to look at, in the language of the field, is game theory, and the specific concept it keeps coming back to is the Nash equilibrium. Both of those have full-blown definitions further down, once we have the right intuitions to hang them on. For now just hold the words; the lake will do the rest.

So let's start!

Act I: The Lake

A village, twenty boats, and the slow choreography of how a shared thing gets used up. Drag, click, watch.

I.

A still lake.

Before anyone arrives, the lake is doing one simple thing: it is feeding as many fish as it can feed. Fish are born, fish die, fish find food or don't. The population drifts up when there's room and down when there isn't, and it ends up parked at a number the lake can sustain. Ecologists call this the carrying capacity. We won't need the term again.

The important thing is that this number is a property of the lake, not the fish. A bigger lake holds more. A poorer lake holds fewer. The fish themselves do not negotiate.

The other important thing is that the lake regrows. Each year, if it isn't already full, it adds a few more fish. The fuller it is, the less it adds; an empty lake can't regrow at all, because there's nothing left to spawn. Hold this idea. It will do most of the work in the next section.

Watch the water for a moment.

II.

One boat.

A fisher arrives. They take some fish each year and the lake regrows what's left. If the catch is small enough, regrowth keeps up and the population holds. The fisher eats. The lake is fine. Forever, in principle.

The number that matters is the regrowth rate. A healthy, roughly half-full lake regrows fastest - that's the sweet spot where there are enough adults to spawn but enough room left for the young to survive. Catch under that rate and you're inside the lake's tolerance. Catch over it and you're eating into the capital, not the interest.

And here is where it stops being intuitive. Going slightly over for one year doesn't undo itself. A smaller population regrows more slowly, which leaves more room for next year's catch to outpace it, which shrinks the population further. Once you tip past the threshold, restraint later doesn't always rescue you. The collapse is not symmetric to the climb.
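To make that asymmetry concrete, here is a minimal sketch of the lake's arithmetic - logistic regrowth minus a fixed annual catch. The specific numbers (capacity 1000, growth rate 0.4, so the regrowth limit is r·K/4 = 100 fish per year) are my own illustrative choices, not the demo's actual parameters:

```python
def simulate(catch, years=200, K=1000.0, r=0.4, n0=1000.0):
    """Fish population after `years` of a fixed annual catch.

    Regrowth is logistic, r*n*(1 - n/K): zero for an empty or full lake,
    fastest at the half-full lake, where it peaks at r*K/4 per year.
    """
    n = n0
    for _ in range(years):
        n += r * n * (1 - n / K) - catch
        if n <= 0:
            return 0.0  # past the threshold, the lake empties out
    return n

# The regrowth limit for these numbers is 0.4 * 1000 / 4 = 100 fish/year.
print(simulate(catch=90))   # under the limit: settles at a stable population
print(simulate(catch=110))  # over the limit: 0.0, and restraint later won't help
```

Notice the asymmetry the text describes: at 90 fish a year the population parks itself at a healthy level indefinitely, while at 110 the same lake slides to zero and stays there.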

Drag the slider. The lake will be patient with you up to a line; cross it and watch how fast the forever gets cancelled.

[Interactive demo: a catch slider from sustainable to greedy, plotted against the fish population and your lifetime catch; the regrowth limit sits at about 25 fish per year.]
III.

A second boat.

A neighbor sees you doing well. They build a boat. Same lake, same regrowth rate, but now whatever the lake produces each year has to feed two households instead of one. The sustainable share has been quietly cut in half.

And this is where the trouble starts, because the new structure has a temptation built into it. If you restrain and your neighbor restrains, the lake holds and you both eat modestly. But if you restrain and they grab, they get a big year and you get scraps. So you start to think: maybe I should grab a little too, just to protect myself. Your neighbor is having the same thought.

This is the prisoner's dilemma, the most famous bad equilibrium in game theory. Two players, two choices each, four outcomes - and the move that is individually rational, on both sides, produces the worst of the four. Cooperation pays best when both cooperate, but defection is what protects you from being the only one who didn't. So you both defect.

There is one important escape valve, and it is worth keeping in mind: this trap holds when you play once. Played a thousand times with the same neighbor, restraint can pay - because they will remember whether you restrained, and you will remember whether they did. We will come back to this.
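Here is a quick sketch of that escape valve, using the standard dilemma payoffs (3 each for mutual cooperation, 1 each for mutual defection, 5 and 0 for unilateral defection). The strategy names and round count are illustrative:

```python
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(strat_a, strat_b, rounds=1000):
    """Total scores when two strategies meet repeatedly.
    Each strategy sees only the opponent's previous move (None in round one)."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        a, b = strat_a(last_b), strat_b(last_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        last_a, last_b = a, b
    return score_a, score_b

tit_for_tat = lambda opp_last: 'C' if opp_last in (None, 'C') else 'D'
always_defect = lambda opp_last: 'D'

print(play(tit_for_tat, tit_for_tat))      # (3000, 3000): restraint pays
print(play(always_defect, always_defect))  # (1000, 1000): one-shot logic, repeated
```

In the one-shot game defection still dominates; what changes with repetition is that today's grab is punished tomorrow, which is exactly why memory and reputation can sustain restraint.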

Click any cell to drop yourselves into that scenario.

[Interactive demo: the 2×2 payoff grid, with year-by-year catches for you and your neighbor.]
IV.

The village.

Twenty boats now. The lake hasn't gotten any bigger; the sustainable share per boat is twenty times smaller. And the prisoner's dilemma we just walked through is now playing out between every pair of boats at once. Each fisher is asking the same question - do I restrain while everyone else grabs? - and all twenty are arriving at the same answer.

The cruel mechanic is that restraint actually punishes the restrained. The boat that takes its sustainable share goes home with less than the boat that grabs. So the grabbers end up richer, year over year, while the restrained slowly fall behind. Even decent people, watching this, eventually start fishing harder. You can almost feel it as a slow tilt: the village's average behavior drifts toward whoever is being greediest, because greed is being rewarded.

Try the sliders. Push the average greed up - everyone gets a bit more aggressive - and watch the population graph dive. But the more revealing experiment is to keep the average low and make the greed more varied. A village of mostly gentle fishers with a few greedy ones still collapses, because the greedy ones grab more, get richer, and pull the rest of the village toward them. The lake's fate isn't set by the average villager. It's set by the ones willing to take the most.

Watch the income histogram and the Gini bar together while the lake dies. The Gini coefficient is a standard measure of inequality, running from 0 (everyone earns the same) to 1 (one boat earns everything). For reference, real-world countries tend to land between about 0.25 (the most equal Nordic states) and 0.6 (the most unequal). Watch the village's Gini climb as the greedy boats outearn the restrained ones. A collapsing lake and a rising Gini are usually the same story told two ways.
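For the curious, the Gini coefficient is simple to compute yourself. A minimal sketch, with made-up incomes:

```python
def gini(incomes):
    """Gini coefficient of a list of incomes: 0 means everyone earns the
    same; values near 1 mean one earner takes almost everything."""
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    if total == 0:
        return 0.0
    # Standard closed form over the sorted incomes.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(gini([10, 10, 10, 10]))  # 0.0: a perfectly equal village
print(gini([0, 0, 0, 40]))     # 0.75: one boat caught everything
```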

[Interactive demo: sliders for the village's average greed and its spread, with the lake-health graph, the income histogram from poorest to richest, and the Gini bar.]
V.

Four ways out.

The collapse isn't a law of nature. It's a property of one specific structure: a shared resource with no rules about taking. Change the structure, change the outcome. Real communities have been doing this, in messy human ways, for as long as there have been shared resources to fight over.

Four families of fix, sketched briefly, each shown in the lake when you pick it.

Privatize. Slice the lake into twenty private ponds. Each fisher now owns the consequences of their own choices, so over-fishing your own pond is just hurting yourself. The math fixes itself. But you've lost the lake - the thing that made it ecologically rich, the thing that made it a shared place - and replaced it with a grid of private holdings. Wealth concentrates where the best ponds are.

Quota. Leave the lake whole. Cap the total annual catch and split it among the boats. This works exactly as well as your enforcement. With perfect enforcement, the lake holds forever. With weak enforcement, the boats most willing to cheat win, and you've reproduced the original problem with extra steps. Quotas are a tax on attention - somebody has to count.

Tax. Don't cap the catch; just make each additional fish more expensive. Boats decide for themselves how hard to fish, but the math now pushes them toward less. Elegant, and it raises money you can redistribute - but it requires an authority strong enough to levy the tax and trusted enough to spend the proceeds without becoming the new problem.

Norms. No fences, no warden, no tax. Just twenty fishers who all know each other, watch each other, and quietly shun anyone who takes too much. This is the cheapest and most beautiful solution when it works. It works because everyone is playing the iterated game with everyone else - the escape valve from the prisoner's dilemma. It collapses when the village grows past the size at which everyone can keep track of everyone else. Move the village-size slider and watch the cooperation network thin.

Pick a rule. Watch the lake recover, or fail to. Watch the income histogram. Each fix solves the original problem and creates a new one of its own.

[Interactive demo: pick a rule (the baseline is no rules) and watch lake health, the income histogram, and the Gini bar respond.]
VI.

The pattern, not the lake.

Take a moment. Let the lake sit.

You've just spent fifteen minutes inside a particular shape - a finite resource, many users, no one in charge of the whole. You probably noticed, without me pointing it out, that this shape lives in a lot of places that don't look anything like a pond.

That shape has a name, and a body of mathematics behind it, both of them older than the metaphor we used. The rest of the article is about those.

A Few Honest Things About the Lake

None of the four interventions is a clean win. Privatization saves the lake, but it concentrates wealth and destroys the quiet, communal benefits that only a shared lake can provide. Quotas work as long as someone is funded and willing to enforce them; the moment enforcement weakens, cheating returns. Taxes shift incentives but require an authority empowered to levy them and trusted enough to spend the proceeds. Norms are powerful and almost free - and they collapse the moment the village grows past the size at which everyone can keep an eye on everyone else.

Elinor Ostrom won a Nobel for showing that real communities have been hybridizing these four approaches for centuries, often in ways that look messy from the outside and work beautifully from the inside. The clean theoretical answer is rarely the right real-world one. The right real-world one is usually some careful combination, tuned to the specific shape of the specific commons, held together by people who notice and adjust.

That is what Act I has to say, more or less. The lake had a problem; the four interventions are the four real families of answer; each comes with its own taxes paid in some other currency. So far, so concrete.

Now we are going to do something a little different. Underneath the lake is a piece of mathematics that doesn't care about lakes or fish at all. It cares about games - very small, very abstract toy problems where two or more players choose actions and get payoffs - and about a particular kind of stable point inside those games. That stable point is the answer to almost everything we just watched. It also explains a great deal else.

Act II: The Equilibrium

What Nash Showed

In 1950, a 22-year-old graduate student at Princeton named John Nash wrote a thesis introducing what we now call the Nash equilibrium, and over the next few decades it quietly reorganized the field of economics around itself. It is not an exaggeration to say it is one of the most-used ideas in modern social science. The tragedy of the commons is one specific instance of the more general thing Nash named.

The idea is almost embarrassingly simple. A Nash equilibrium is a state of a game in which no player can improve their own outcome by changing their strategy alone, given what everyone else is doing. That is the entire definition. If you are at an equilibrium and you try to deviate, you do worse. So you don't deviate. Nobody does. The game stays where it is.

Written down with the right symbols, the definition is exactly this: a strategy profile (s1*, s2*, …, sn*) is a Nash equilibrium if, for every player i and every alternative strategy si they could have played:

ui(si*, s-i*) ≥ ui(si, s-i*)

Where ui is player i's payoff function and s-i* means "what everyone else is doing in the equilibrium." In English: every player is doing what is best given what everyone else is doing. Simple to state. Surprisingly hard to escape.

When the village's twenty boats all converge on grabbing, that is a Nash equilibrium. If you are one of those twenty and you decide to be virtuous and restrain, you don't save the lake - the other nineteen are still grabbing and the population still crashes - and you, personally, end the year with fewer fish than you would have had if you'd grabbed. Your individual best response to "everyone else grabs" is to grab. So nobody restrains. The bad outcome locks itself in place. No one changes course because changing alone only makes you worse off, so everyone stays frozen where they are. Everyone is doing the locally correct thing. The locally correct thing collectively destroys the village.

Try a Few Games Yourself

Here is the canonical way game theorists actually draw a 2×2 game on paper. Each cell holds (row payoff, column payoff). The arrows between cells point in the direction each player would prefer to deviate, holding the other player's choice fixed: vertical arrows are the row player's preference, horizontal arrows are the column player's. A cell is a Nash equilibrium when no arrows point away from it - both players are content to stay. Pick any of the four classic shapes below and follow the arrows.

Two-Player Games

Pick a game. Watch where the arrows lead.
Mutual cooperation pays best - but defection is each player's dominant move. Both defect; both lose. The lake's shape exactly.
                   Them: Cooperate    Them: Defect
You: Cooperate          3, 3               0, 5
You: Defect             5, 0               1, 1

1 Nash equilibrium: mutual defection (1, 1).

The four classic shapes are worth a moment each. The prisoner's dilemma is the lake's shape: cooperation pays best, defection is each player's dominant strategy, and the equilibrium is mutual defection. The stag hunt has two equilibria - a high-trust one where both players coordinate on the better outcome, and a low-trust one where both settle for less. Which one a society lands in often has more to do with mutual confidence than with mathematics. Chicken is the geometry of brinksmanship: each player is desperately hoping the other swerves first. And coordination is the gentlest of the four - both equilibria are fine, but only one of them is good, and the trick is agreeing in advance which side of the road to drive on.

What is interesting is that all four games are mathematically tiny - two players, two choices, four outcomes - and yet the different shapes of their payoffs produce wildly different stories. The shape of the payoff structure is the shape of the social problem. Edit a number and you can turn the prisoner's dilemma into a coordination game. The equilibria move with you.
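The "no arrows point away" test is mechanical enough to automate. Here is a sketch that checks every cell of a 2×2 game; the payoff tables are the prisoner's dilemma from above and an illustrative stag hunt of my own choosing:

```python
def pure_nash(payoffs, moves=('C', 'D')):
    """Pure-strategy Nash equilibria of a 2x2 game.

    payoffs[(row_move, col_move)] = (row payoff, column payoff).
    A cell qualifies when neither player gains by switching alone.
    """
    equilibria = []
    for r in moves:
        for c in moves:
            alt_r = moves[1] if r == moves[0] else moves[0]
            alt_c = moves[1] if c == moves[0] else moves[0]
            row_happy = payoffs[(r, c)][0] >= payoffs[(alt_r, c)][0]
            col_happy = payoffs[(r, c)][1] >= payoffs[(r, alt_c)][1]
            if row_happy and col_happy:
                equilibria.append((r, c))
    return equilibria

pd = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
      ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
stag = {('C', 'C'): (4, 4), ('C', 'D'): (0, 3),
        ('D', 'C'): (3, 0), ('D', 'D'): (3, 3)}

print(pure_nash(pd))    # [('D', 'D')]: mutual defection, the lone equilibrium
print(pure_nash(stag))  # [('C', 'C'), ('D', 'D')]: two equilibria, trust decides
```

Edit a payoff - say, drop the temptation payoff in pd from 5 to 2 - and rerun: the equilibria move with you, exactly as the text says.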

The Building Block: Best Response

Underneath the equilibrium concept is a smaller idea called the best response. Given everyone else's strategy, your best response is whichever of your own strategies gives you the highest payoff. Best responses are easy to compute: hold the others fixed, run through your options, pick the winner. A Nash equilibrium, in this language, is just a profile where every player is best-responding to everyone else simultaneously. A mutual fixed point.

Best-response functions are easier to picture in a continuous game, where strategies are numbers rather than just "cooperate" or "defect." Below is a classic example, due to the French mathematician Antoine Augustin Cournot in 1838 - more than a century before Nash. Two firms in the same market each choose how much to produce. The market price drops as their combined output goes up. Each firm wants to maximize its own profit, which means balancing "more output, more sales" against "more output, lower price for everyone including me."

Plotted on a 2D grid where the axes are the two firms' output levels, each firm has a best-response curve: for any quantity the other firm produces, here is what I should produce. The Nash equilibrium is exactly where the two curves cross. Drag the dot, or hit "spiral to equilibrium" to watch the firms alternate best-responses and converge.

Two Firms, One Market

Drag the dot. Watch where the firms want to be.

Each firm chooses a production quantity. The market price drops as their combined output rises. Each firm's best response is the quantity that maximizes its own profit, given what the other produces.

[Interactive demo: a draggable point on the q₁–q₂ plane showing both firms' best-response curves, the Nash equilibrium where they cross, the collusion point for reference, and live readouts of price and each firm's profit.]
The two curves are the firms' best-response functions: each one says, "if my opponent produces this much, this is what I should produce." Where they cross is the Nash equilibrium - the single point where both firms are simultaneously best-responding to each other.
The dotted point is what they'd produce if they colluded - lower combined output, higher price, higher combined profit. They can't stay there: from the collusion point, each firm has a unilateral incentive to produce more, and the system spirals back to the Nash equilibrium.

A few things are worth noticing in the Cournot demo. First, the equilibrium isn't where the firms make the most money. If they colluded - agreed to produce less and split the profits - they would each earn more. The combined-profit number under "if they colluded" is higher than the equilibrium number, often substantially. The lake again, in another costume: the individually rational thing produces a worse collective outcome than coordination would.

Second, the equilibrium is genuinely stable. Drag the dot away from it and click "spiral." The firms keep best-responding to each other, and the trajectory pulls inevitably back to the crossing point. Try starting from any corner of the grid; you end up in the same place. This is not a coincidence - in many real games, this kind of "iterated best response" is exactly how firms or countries actually arrive at equilibrium, by watching each other and adjusting.

Third, the math behind the curve is simple. If firm 2 produces q2, then firm 1's best output (assuming linear demand P = a − b·(q1 + q2) and marginal cost c) is:

q1* = (a − c) / (2b) − q2 / 2

Linear in the other firm's output. Firm 2 has the symmetric formula. Set them both true at once - meaning each firm is best-responding to the other - and you can solve algebraically for the equilibrium quantity: q* = (a − c) / (3b). That single number is the dot the curves cross at. The whole shape of the duopoly comes out of this little system.
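The spiral is easy to reproduce. A sketch with illustrative parameters of my own (a = 12, b = 1, c = 0, for which the formula above gives q* = (a − c)/(3b) = 4):

```python
A, B, C = 12.0, 1.0, 0.0  # linear demand P = A - B*(q1 + q2), marginal cost C

def best_response(q_other):
    """Profit-maximizing output given the rival's quantity: (a-c)/(2b) - q/2."""
    return max(0.0, (A - C) / (2 * B) - q_other / 2)

# Alternate best responses from an arbitrary start and watch the spiral.
q1, q2 = 2.0, 8.0
for _ in range(50):
    q1 = best_response(q2)
    q2 = best_response(q1)

print(q1, q2)             # both converge to 4.0
print((A - C) / (3 * B))  # the algebraic equilibrium: 4.0
```

Each update halves the distance to the fixed point, which is why the trajectory in the demo pulls back to the crossing from any corner of the grid.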

Existence, Multiplicity, and Randomization

The reason the world made such a fuss about Nash's 1950 thesis was not the definition itself - that part is obvious in retrospect, and people had been informally circling it for years. What Nash proved was that an equilibrium always exists. In any finite game, no matter how many players, there is always at least one stable point. There is a catch though - sometimes it only exists if players are allowed to randomize rather than commit to a single move - but we'll get to that in a moment. The point is simple and powerful: you might not be able to find the equilibrium easily, and there might be more than one, but it is always in there somewhere.

The "more than one" case is its own subject. The stag hunt has two equilibria. So does chicken. So does coordination. When a game has multiple equilibria, the math doesn't tell you which one you'll end up in - the answer depends on history, expectations, communication, who blinked first - a lot of things that cannot be put into mathematical models. A surprising amount of real-world political and corporate strategy is essentially the problem of nudging a multi-equilibrium game toward the better equilibrium. This is one reason mere "rationality" doesn't uniquely determine outcomes. Rationality tells you what not to do, not what will happen. When several outcomes are all perfectly rational, you need something else to know which one wins. Rational play is a constraint, not a forecast.

Sometimes a game has no equilibrium in pure strategies - no single deterministic move is stable for either player. The classic case is matching pennies, or more generally any zero-sum game where one player wants to match and the other wants to mismatch. The resolution is that Nash's existence proof covers a broader notion called a mixed strategy: instead of always playing the same move, players randomize across moves with specific probabilities. The Nash equilibrium of matching pennies is "each player flips a fair coin and plays heads or tails 50/50." That's not a metaphor; that's the equilibrium. Randomness is sometimes the only stable strategy, which is why soccer goalkeepers actually flip mental coins on penalty kicks.
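You can check the coin-flip claim directly. In matching pennies the matcher earns +1 on a match and −1 otherwise; a little algebra reduces the matcher's expected payoff to (2p − 1)(2q − 1), where p and q are each player's probability of playing heads. A sketch:

```python
def matcher_value(p, q):
    """Matcher's expected payoff when the matcher plays heads with
    probability p and the mismatcher plays heads with probability q."""
    match = p * q + (1 - p) * (1 - q)      # both heads or both tails: +1
    mismatch = p * (1 - q) + (1 - p) * q   # one of each: -1
    return match - mismatch

# Against a fair coin (q = 0.5), every matcher strategy earns exactly 0:
print(matcher_value(0.0, 0.5), matcher_value(1.0, 0.5))  # 0.0 0.0
# Drift off 0.5 and a pure strategy exploits you:
print(matcher_value(1.0, 0.7) > 0)  # True: always-heads beats a heads-heavy coin
```

That indifference is the signature of a mixed equilibrium: at 50/50 neither player can gain by deviating, and any deviation hands the opponent a profitable pure response.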

Mechanism Design: Changing the Game

Once you've internalized the distinction between what is stable and what is good, you start to see equilibria that are bad for everybody all over the place. Two companies in a price war that neither wants to be in. Two countries in an arms race that neither can afford to stop. Antibiotic over-prescription, where every individual doctor's choice is reasonable and the population-level outcome is disastrous. Climate change, where every country's emissions policy is rational on its own and the atmosphere disagrees. Each of these is a Nash equilibrium that nobody wanted but nobody can unilaterally exit.

Which is what makes the four interventions in the lake sensible, and what makes them an example of something larger. None of them is a moral appeal. None of them asks the fishers to be kinder or smarter or more far-sighted. Each changes the rules of the game itself, so that the new equilibrium - the new place where unilateral deviation makes you worse off - sits closer to where everyone wants to be. Privatization rewires the cost structure, so over-fishing your own pond hurts you immediately. A quota rewrites the legal payoffs, so grabbing too many fish costs you a fine. A tax rewrites the per-fish economics, so the marginal grab stops being worth it. Norms rewrite the social payoffs, so being known as a wolf hurts you in next year's interactions. Each is a way of moving the equilibrium without moving the people.

In the language of the field, this is called mechanism design: working out what game to play so that the equilibrium of the game is what you wanted in the first place. It is not an abstract concern. Every auction was designed this way. Carbon pricing is an exercise in this. The matching system that pairs medical residents with hospitals is an exercise in this; so is the algorithm that allocates donor kidneys. The world is full of games with the wrong equilibria, and somewhere there is a small, mostly anonymous army of economists trying to redesign them.

One small but striking example. In a sealed-bid first-price auction - everyone writes down what they're willing to pay, highest bid wins and pays their bid - rational bidders shade their bids below their true valuations to avoid overpaying. The seller doesn't always learn what bidders really value, and the auction is sometimes inefficient. William Vickrey proposed, in 1961, a tiny rule change: the highest bidder still wins, but pays the second-highest bid. Suddenly the dominant strategy for every bidder is to bid their honest valuation. The mechanism extracts the truth automatically. The Vickrey auction won him a Nobel Prize and is now used in places ranging from ad slot pricing on the internet to government bond issuance. A four-line rule change, an entirely different equilibrium.
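The truth-telling claim is easy to check by simulation. A sketch with made-up numbers - one bidder who values the item at 0.8, facing three rivals with uniform random bids - comparing honest bidding against shading:

```python
import random

def second_price_payoff(value, bid, rival_bids):
    """Payoff in a sealed-bid second-price auction: if your bid is highest,
    you win and pay the second-highest bid (here, the best rival bid)."""
    top_rival = max(rival_bids)
    return value - top_rival if bid > top_rival else 0.0

random.seed(0)  # reproducible made-up rivals
truthful = shaded = 0.0
for _ in range(10_000):
    rivals = [random.random() for _ in range(3)]
    truthful += second_price_payoff(0.8, 0.8, rivals)  # bid your true value
    shaded += second_price_payoff(0.8, 0.6, rivals)    # shade below it

print(truthful > shaded)  # True
```

Shading can never help here: it changes nothing in auctions you would win or lose anyway, and it forfeits exactly the auctions where the best rival bid sits between your shaded bid and your true value - auctions you would have won at a profit.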

Nash Himself

It is worth saying again, because the man's story is worth telling, that Nash himself suffered from severe schizophrenia for decades after the original proof, and shared the 1994 Nobel Prize in Economics largely on the strength of those original twenty-eight pages. What the thesis says, stripped to its core, is this: most of the things that go wrong between rational people are equilibria, and equilibria can be redesigned.

That is also, more or less, what Act I was saying without the symbols. The lake collapsed because of an equilibrium. The four fixes worked, when they worked, by moving the equilibrium. The framework is general; the lake was just one tractable example of it. It's like the saying - there is more than one way to fry the fish!

Where This Leaves Us

I came at this from two directions. Act I was the parable - twenty boats, a body of water, the slow choreography of how shared things go quietly wrong. Act II was the parable's skeleton, exposed - the same dynamic in formal symbols, with the proof that it really does generalize past lakes. Neither half stands on its own quite right. The lake without the math feels like a fable; the math without the lake feels like a textbook. Together they say the same thing, twice.

That thing, stated as plainly as I know how, is this. Most of what looks like a moral failure between rational people is, on closer inspection, an equilibrium. Most equilibria are not the result of anybody choosing them - they are what happens when nobody chose anything. And most equilibria can be moved, if you know what game you are inside, and you can imagine a different one.

The reason this is worth knowing is mostly about where the blame goes. When you find yourself in a situation where everyone is doing the locally sensible thing and the collective result is bad - the over-fished pond, the price war, the marriage that has quietly stopped being kind to either person - it is worth pausing on the word equilibrium before reaching for the word fault. The structure does most of the work. People living inside a structure rarely see the structure. The first move, every time, is to notice it.

The second move - which is the harder, more interesting one - is to ask what a different game would look like, and whether you have any hand in writing it. Sometimes you don't. Sometimes you do. A great deal of useful adult work is figuring out which is which, and then doing the work that follows.

That, more or less, is where this leaves us. The equilibria are everywhere. They are also, much more often than we think, designable.