AI and the Tragedy of the Commons
In 1968, an ecologist named Garrett Hardin published an essay that described the perils of individual short-term thinking on shared resources:
"Picture a pasture open to all...the rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another...Each man is locked into a system that compels him to increase his herd without limit in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons. Freedom in a commons brings ruin to all."
This so-called Tragedy of the Commons is a situation in which individual actors, acting in their own rational self-interest, destroy a valuable shared resource.
As I see it today, this mental model from 1968 is a useful lens through which to consider the modern situation we’re facing with AI.
In this case, our commons is society itself: The social fabric that holds us together, our markets, and even our sense of meaning, trust, and purpose.
All of this hangs in the balance—the stakes could not be any higher.
So, let's dive in...
The Actors in Our Modern Commons
Here are the four general actors I see in this tragedy:
1. The Nations
Global superpowers jockeying for long-term supremacy in a rapidly changing geopolitical climate.
The United States and China are the two most prominent players here.
Their goals: Make sure that your country is the first to develop any major breakthrough technology (like AGI) and that your geopolitical foe isn't able to access it.
The prize for doing so is global power and security.
2. The Builders
The companies building on the frontier of the technology.
A few prominent ones that come to mind here:
- OpenAI
- Anthropic
- xAI
- Microsoft
- Meta
- Nvidia
Their goals: Develop and deploy the most transformative technology as fast as possible (and certainly before your competitors do so).
The prize for doing so is market dominance, uncapped profits, global prestige, and legacy.
3. The Companies
The companies using the newly developed technologies to improve their operations.
At this point, it's hard to imagine any modern company that isn't using AI tools in at least some portion of its workflows.
My expectation is that the percentage of workflows dramatically improved in quality and efficiency by these technologies will eventually approach 100%. It may happen gradually (particularly in more blue-collar companies), but it will happen.
Their goals: Grow revenue and improve profit margins.
The prize for doing so is more money to the shareholders.
4. The People
The employees, contractors, creators, and freelancers using the technologies to improve their individual workflows.
Their goals: To improve productivity, output, and earnings.
The prize for doing so is more money and job security.
How We Destroy the Commons
"Show me the incentive and I will show you the outcome." - Charlie Munger
Let's play out the mental model to see how each actor, acting in their own rational self-interest, contributes to the destruction of our shared resource.
The Nations seek to create conditions for their native Builders to move as fast as possible. They are apprehensive about putting in place any regulations that may slow down their country's progress relative to the progress of their main geopolitical adversaries.
The Builders move as fast as possible in this modern-day Wild West regulatory climate. The Builders compete with one another to release the most groundbreaking updates, vying for consumer and enterprise attention with splashy launches, savvy marketing campaigns, and unlimited budgets.
The technology industry has always embraced a "move fast and break things" mantra, but we've never seen it play out when "things" potentially includes society as a whole.
The Companies scale adoption of the new technologies from the top-down. Shareholders and senior executives create mandates about the use of technology to improve workflows, which permeate down through the leadership and managerial ranks. Productivity and efficiency improve.
The People automate and optimize an increasing percentage of their daily work and life. Each individual wants a "secret edge" over others, so the bottom-up adoption occurs more silently than the top-down mandates within companies. Those who work hard to stay on the forefront of the new technologies see their productivity and output improve dramatically, while those who don't are at risk of layoffs or financial struggle.
Four possible (and scary) implications I've been thinking about:
- Companies are growing, but need fewer employees to drive that growth. First, this means fewer new job postings, mostly at the entry level. Then, small rounds of layoffs. Then larger rounds of layoffs. Shareholders are thrilled by the expanding profits, but unemployment rates rise. You could see an unprecedented situation where stock markets and unemployment rates rise together, which creates the conditions for social unrest.
- Even if we naively assume all good actors, the outcomes can be dire. But as we know, bad actors exist, and a loose regulatory climate opens the door to severe security risks. All it takes is one bad actor armed with an increasingly powerful technology to change the world.
- People are more efficient and productive, but lose a sense of texture, meaning, or purpose in the work. They have unwittingly optimized the life out of their life.
- The genie escapes the bottle. I'm not technical enough to know the feasibility here, but it's not hard for me to paint a picture whereby we create a sentient technology that decides we're the biggest risk to its survival and destroys us. The counterargument is that humans control the data centers that power all this technology, but do we really? In the future, it's not hard to imagine these data centers being monitored by security systems and robots with access to AI. So, if you try to go unplug it, I think you may get stopped. But maybe I've just watched too many sci-fi movies...
My current belief: Our shared commons cannot hold if the current incentives remain.
Show me the incentive and I'll show you the outcome.
The Path Forward: Time Horizons
To be clear, I'm far from the only one voicing these concerns.
In fact, Dario Amodei, the CEO of Anthropic, an AI company whose most recent model considerably raised my concern level, recently said the following:
"We, as the producers of this technology, have a duty and an obligation to be honest about what is coming...Cancer is cured, the economy grows at 10% a year, the budget is balanced—and 20% of people don't have jobs...You should be worried about where the technology we're building is going."
We need a name for the darkly humorous irony of the people building the technology also being the ones saying, "someone should really do something about this."
The Frankenstein Dilemma seems like a fit...
And the general public is sounding the alarm as well. A recent Axios poll found that the majority of Americans across generations are feeling apprehensive about the pace of change and development:

To me, this is all a function of time horizons.
Humans are bad at long-term thinking—individually and collectively. We struggle to consider second-order effects. We fail the marshmallow test. We constantly seek instant gratification.
The harsh truth:
Everyone loves to say they're long-term until it's time to do long-term s***.
But long-term thinking is the only way to avoid our modern Tragedy of the Commons.
So as we consider the path forward, my view is that it all comes down to a simple question:
How can we lower the cost of long-term thinking?
This means making it easier, more rewarding, or less risky for the actors in our tragedy to make decisions that benefit the long-term future of our commons, not just the short-term horizon.
It's not about stopping AI or its development. It's not about how we regulate it into paralysis. It's not about halting technological progress.
It's about how we make it rational for each actor to act with the longer-term future of our shared commons in mind. About how we make it rational to work towards a future of human flourishing.
In my view, it's the only question that matters for our future:
How can we lower the cost of long-term thinking?