Artificial Intelligence and the Superorganism

May 17, 2023

Daniel Schmachtenberger and Nate Hagens discuss a surprisingly overlooked risk to our global systems and planetary stability: artificial intelligence. Through a systems perspective, Daniel and Nate piece together the biophysical history that has led humans to this point, heading towards (and beyond) numerous planetary boundaries, and facing geopolitical risks all with existential consequences. How does artificial intelligence not only add to these risks, but accelerate the entire dynamic of the metacrisis? What is the role of intelligence versus wisdom on our current global pathway, and can we change course? Does artificial intelligence have a role to play in creating a more stable system, or will it be the tipping point that drives our current one out of control?




N. H.

Artificial intelligence is in the news. We hear about ChatGPT making people more efficient, learning quicker. We hear about AI replacing artists and mid-level programmers. We see deepfakes and fake beautiful baby peacocks that are much cuter than real baby peacocks. And lots of people are debating about the benefits and risks of artificial intelligence. But today’s guest is my colleague and friend Daniel Schmachtenberger, who’s back for a deep dive on how artificial intelligence accelerates the superorganism dynamic with respect to extraction, climate, and many of the planetary boundary limits that we face.


I’ve not heard this angle on artificial intelligence before. I think it’s really important to have this conversation. And throughout this talk with Daniel we talked about AI, but underpinning it all was: what is intelligence, and how has intelligence in groups in human history out-competed wisdom, restraint of different cultures and different groups of humans? This is an intense, dense, three-and-a-half hour conversation, and we weren’t even done. We’ll be back in the next month or so to have the follow-on questions. It’s probably one of the better conversations I’ve ever had on the Great Simplification, and I think it’s really important to merge the environmental consequences of AI into our cultural discourse. Here’s my friend Daniel Schmachtenberger.


N. H.

Hello, my friend!


D. S.

Hey, Nate. Good to be back with you.


N. H.

I prepare a lot for my podcasts. I read people’s stuff, I prepare questions, I think about it. But with you I’m like: I’ve got an appointment with Daniel at 4:00pm, I go for a bike ride, I go play with my chickens, and I just show up and we have a conversation. So I’m hoping this will work, because this conversation actually is the culmination of how our relationship started three years ago. Remember, we came to Washington, D.C., for a five- or six-day meeting where I wanted to discuss energy, money, technology, and how this combined into a superorganism, and you were focused on existential risks, and particularly oncoming innovation in artificial intelligence, and how that led to a lot of potential unknown destabilizing risks for society. And now we’ve educated each other after a couple of years. And today, rather than continue our Bend Versus Break series, I thought we would merge these two lines of thought on artificial intelligence and the superorganism.


D. S.

You and I have done five parts so far in this Bend Versus Break series. Given all the things that are in the public attention on AI, we decided to do this one. I imagine some of the people will have heard that series, and we can reference the concepts. For anyone who hasn’t, do you want to give a quick recap on superorganism, so we can relate it? And maybe superorganism and metacrisis, since those are kind of frames we’ve established that we’re going to be bringing into thinking about AI now?


N. H.

Sure. So, humans are a social species, and in the modern world we self-organize—as family units, as small businesses, as corporations, as nation-states, as an entire global economic system—around profits. Profits are our goal, and profits lead to GDP (or GWP, globally), and what we need for that GDP is three things. We need energy, we need materials, and we need technology—or, in your terms, information. And we have outsourced the wisdom and the decision-making of this entire system to the market, and the market is blind to the impacts of this growth. We represent this by money, and money is a claim on energy, and energy (from fossil hydrocarbons) is incredibly powerful—indistinguishable from magic, effectively, on human timescales. It’s also not infinite. And, as a society, we are drawing down the bank account of fossil carbon and non-renewable inputs like cobalt and copper and neodymium and water aquifers and forests millions of times faster than they were sequestered.


So there is a recognition that we’re impacting the environment, and all of the risk associated with this you label the metacrisis, or the polycrisis, or the human predicament. But they’re all tied together. The system fits together: human behavior, energy, materials, money, climate, the environment, governance, the economic system, et cetera.


So right now, our entire economic imperative (as nations and as a world) is to grow the economy—partially because that’s what our institutions are set up to do, partially because when we create money (primarily from commercial banks, increasingly from central banks when governments deficit spend), there is no biophysical tether, and the interest is not created. Since the interest is never created, the whole system has a built-in growth imperative, and we require growth.
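The mechanism Nate describes here—loan principal is created, but the interest owed on it is not—can be sketched with a toy calculation. This is a deliberately stylized model, not anything from the conversation: it assumes all money enters circulation as bank loans, and the function names and the 5% rate are illustrative.

```python
# Toy sketch of a debt-based money system (illustrative assumptions only):
# all money in circulation is loan principal; the interest owed on those
# loans is never created, so it can only be paid out of new lending.

def total_owed(principal: float, rate: float) -> float:
    """Total repayment owed on loans of `principal` at interest `rate`."""
    return principal * (1 + rate)

def required_new_lending(principal: float, rate: float) -> float:
    """Money that must be newly created (as fresh loans) so borrowers in
    aggregate can cover interest that was never issued."""
    return total_owed(principal, rate) - principal

principal, rate = 100.0, 0.05  # $100 of loans at 5% interest
print(round(required_new_lending(principal, rate), 2))  # -> 5.0

# Repeating that requirement cycle after cycle compounds: the money
# supply must grow exponentially just to keep debts serviceable.
supply = principal
for year in range(3):
    supply += required_new_lending(supply, rate)
print(round(supply, 2))  # -> 115.76  (i.e. 100 * 1.05**3)
```

In this toy version, the system never reaches a steady state: any year without new lending leaves aggregate debts unpayable, which is one simple way to see the “growth imperative” Nate points to.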


Now, so far, the market has dictated this growth, but suddenly there’s a new kid on the block, which is artificial intelligence—created by the prior intelligence of humans—and that’s what we’re going to talk about today.


D. S.

That’s a good frame.


N. H.

I’m an educator. I’ve recently been a college professor. My whole role today is to inform and lightly inspire humans towards self-governance, better decisions, better pathways forward. And my trade deals with science and facts and systems ecology, and I’m afraid that AI will spell the end of what we know is true. On both sides we won’t know what’s true, and there will be things that people can grab on the Internet—many of which are fake or influenced by artificial intelligence—that destroy the social discourse. So that’s one thing I’m worried about with AI.


The other is: will AI accelerate climate change? Because it will make the superorganism—it would be like playing Super Mario or Donkey Kong or something like that, and pressing the turbo button. It makes processes more efficient, and it just speeds things up—which means more carbon (either directly or indirectly), more efficiency (which feeds Jevons paradox).


Another one of my worries is: a lot of jobs are going to disappear from AI, and how does that factor into the superorganism? So it seems to me that AI simultaneously makes the superorganism hungrier and more voracious, but also runs the risk of killing the host in several ways. So these are just some of my naïve questions.


But I think, before we get into artificial intelligence, maybe we’ll just start with intelligence. Humans—I think in a previous conversation you and I had this—that’s what differentiates us from the rest of the biosphere: our ability to problem-solve and use intelligence to grow the scale of our efforts. That is always coupled with energy and materials. So maybe you could just unpack how you see the historical role of intelligence before we get to artificial intelligence.


D. S.

So I might start in a slightly different place, which is to actually start with a couple of cases of AI that are obvious, and then go back to intelligence; the relationship between intelligence and the superorganism itself; why human intelligence has made a superorganism different from what animal and natural intelligence made, in terms of the nature of ecosystems; and then how artificial intelligence relates to those types of human intelligence—not just individually, but collectively, as you mentioned, mediated by markets or larger types of human collective intelligence systems. And then get to what has to guide, direct, and bind intelligence so that it is in service of something that is actually both sustainable and desirable.


So let’s talk about just artificial intelligence for a moment, to give a couple of examples, because people have heard about it. The reason it’s in the conversation so much currently—even though artificial intelligence was first developed in the fifties, and some could argue there were precursors before that—is the public deployment of large language models, starting with GPT-3, and the speed of that deployment relative to any other technologies. ChatGPT got 100 million users in (I forget exactly what it was now) two months or something, which was radically faster than TikTok’s adoption curve, Facebook’s, YouTube’s, cellphones, anything—which were already radically faster than the adoption curve of oil, or the plow, or anything else.


World-changingly powerful technologies at that speed of deployment then led to other companies deploying similar things, which led to people building companies on top of them, which leads to irreversibility. And so the speed of what started to happen between the corporate races, the adoption curves, and the dependencies has, understandably, changed the conversation and brought it into the center of the mainstream, where previously it had been only in the domain of people paying attention to artificial intelligence and its associated risks or promises.


So when people talk about AI risk or AI promise—of which it has a lot of both—there are a few things about cognitive bias worth addressing here first, which is a topic you often address: why people come to misunderstand the superorganism, and get choice-making wrong, get sense-making wrong.


N. H.

Thank you! That means you actually have watched—


D. S.

Of course I have watched and read your things. This is why we’re friends and here.


N. H.

You’re so damn busy that I’m like: “Hey Daniel, watch this!” And you’re like, “Oh, I will.” But you and I have not talked about cognitive biases. But you’re right, I do talk about them a lot. So carry on.


D. S.

So let’s take cognitive biases: there are clusters of them that go together to define default worldviews, and then there are a bunch of single cognitive biases. And you don’t even have to think of it as bias—it’s a strong-sounding word, though it’s true—it’s a default basis for the sense-making and meaning-making people are likely to do first on new information. And so one of them that I think is really worth addressing when it comes to AI is a general orientation to techno-optimism or techno-pessimism, which is a subset of a general orientation to the progress narrative. And I would argue—and we’ll not spend too long on this, though it warrants a whole big discussion—that there are naïve versions of the progress narrative: capitalism is making everything better and better, democracy is, science is, technology is. Don’t we all like the world much better now that there’s Novocaine and antibiotics, and infant mortality is down, and so many more total people are fed, and we can go to the stars, and blah, blah, blah? Obviously there are true parts in everything I just said, but there is a naïve version of that which does not adequately factor all the associated costs. And there’s a naïve version of techno-pessimism.


So, first, on the naïve version of techno-optimism. When we look at the progress narrative, there’s so much that has progressed that, if you want to cherry-pick those metrics, you can write lots and lots of books about how everything’s getting better and better and nobody would want to be alive at any other time in human history.


N. H.

There are two things that the naïve progress narrative is missing. One is the costs, like climate change and the oceans and insects, and the other is the one-time subsidy of non-renewable energy and inputs, and the source capacity of the Earth. And those are finite. So those are two blind spots, I think, in that narrative.


D. S.

So we could say the costs and the sustainability of the story.


And so when we talk about the story of progress, particularly the modernity version of science and technology, we have to include the associated social technologies, not just physical tech, because capitalism and democracy and international relations are all kinds of coordination systems—we can call them social technology: a techne, a way of applying intelligence to achieving goals and doing things. You can consider language an early social technology, which it is.


If you ask the many, many indigenous cultures who were genocided or extincted, or who have just remnants of their culture left, or if you ask all the extincted species, or all of the endangered species, or all of the highly oppressed people, their version of the progress narrative is different. History is written by the winners; but if you add all of those up, the totality of everything that was not the winner’s story is a critique of the progress narrative. And so one way of thinking about it is: there are some things that we make better. Maybe we make things better for an in-group relative to an out-group. Maybe we make things better for a class relative to another class. For a race relative to another race. For our species relative to the biosphere and the rest of species. Or for some metrics—like whatever metric our organization is tasked with up-regulating, or GDP—relative to lots of other metrics that we are not tasked with optimizing.


N. H.

Or for our generation versus future generations.


D. S.

Exactly. Short-term versus long-term. And so the question is: where it is not a synergistic satisfier, where there are zero-sum dynamics that are happening, the things that are progressing are at the cost of which other things? And we’re not saying that nothing could progress in this inquiry. We’re saying: are we calculating that well? And if we factor all of the stakeholders—meaning not just the one in the in-group, but all of the people; and not just all of the people, but all the people into the future; and not just all the people, but all the lifeforms; and all of the definitions of what is worthwhile, and then what is a meaningful life, not just GDP—then are the things that are creating progress actually creating progress across that whole scope?


N. H.

So I have two replies to that. My first is: amen. And my second is: you’re advocating for a wide-boundary definition of progress, as opposed to a narrow-boundary one.


D. S.

Yes. And the distinction between wide-boundary and narrow-boundary is very related to the topic of intelligence, too. Are our goals narrow goals, or are they very inclusive goals? If we have a goal to improve something, for whom? Is it for a small set of stakeholders? Is it for a set of stakeholders for a small period of time? Is it measured in a small set of metrics where, in optimizing that—in being effective at goal-achievement—we can actually externalize harm to a lot of other things that also matter? And whether we’re talking about technology itself, or nation-state decision-making, or capitalism, or whatever, we could say that all of the human-induced problems in our world have to do with the capacity to innovate at goal-achieving decoupled from picking long-term, wide-definition, good goals.


And that doesn’t mean that there is nothing good about the goal. It means that the goal-achievement process has fragmented the world enough that—and sometimes it’s not even perverse, right? I’m going to get ahead economically, and I’m going to fuck the environment, and the people doing slave labor in the mines, and I know it, and I’m just a sociopath and so I do it. Sometimes it’s not that. Sometimes it’s: the world is complex, nobody can focus on the whole thing, so we’re going to make, say, a government that has different branches that focus on different things so they can specialize, and specialization and division of labor allow more total capacity. And so this group is focused on national security of this type, or focused on whatever it is—let’s say focused on, if it’s the UN, world hunger.


Now, is it possible to have a “solution” to world hunger where my organization has specific metrics—how many people are fed, et cetera—and where how much budget we get next year, and whether we get appointed or elected again, depend on those metrics? Where it is possible to damage the topsoil, to use fertilizers and pesticides that harm the environment, cause dead zones in oceans, and destroy pollinators, while still advancing our metric—and if we don’t, there is actually no way to continue within that power structure? This is an example where it’s not even necessarily perverse in a knowing way, but the structure of it—the institutional choice-making architecture—is such that what is being measured for, optimized, and tasked can’t not prioritize some things over others. And with increasing capacity to goal-achieve, what is externalized to the goal is increasingly problematic.


N. H.

So is the narrow-boundary focus versus the wide-boundary focus, could that itself be a basic fundamental difference between intelligence and wisdom? And then, building on that, if an entity, a tribe, a nation, a culture focuses on the narrow-boundary goals, won’t that out-compete a nation, a tribe, a culture that focuses just on the wide-boundary, broader, multi-variable things, like fairness or environment or future generations?


D. S.

Wonderful. So let’s come back to the definition of wisdom, and the relationship between wisdom and intelligence. But let’s address it—we were saying earlier there’s a naïve version of the progress narrative, the kind of techno-capital optimist narrative. There’s also a naïve version of the techno-pessimist narrative. The techno-pessimist narrative over-focuses on all of the costs and externalities, on who lost in that system, and basically orients in a Luddite way and is, like, “No, fuck tech and new things. It was better before.” There are various versions. One is that there was more wisdom before, and this is a descent from wisdom into a cleverness that is somewhere between less wise and evil. The benefits that come from this will be more like hypernormal stimuli that actually cause more net harm; we’re moving towards tipping points of catastrophic boundaries for the planet, et cetera, so let’s just not tech. Extreme versions of that look like the Amish. But the Unabomber wrote a lot of things on this topic, right? And they were not dumb things. He was doing a real critique of the advancement of technology, asking: how do we not destroy everything if we keep on this track? And we will also see that there have been indigenous perspectives that wanted to keep indigenous ways, that wanted to resist adopting certain things that would, as far as technological implementation is concerned, be considered non-invasive progress.


Now, you were just mentioning: if tech is associated with goal-achieving, and some goals have to do with how to up-regulate the benefits of an in-group relative to an out-group, doesn’t tech mean power? Yes. Doesn’t a group that rejects some of it have less power in the short term? So when those competitive interests come—particularly if the other side both has the tech and the mindset to use it—does that end up meaning that that culture doesn’t move forward? And we can see that when China went into Tibet, and it was kind of the end of Tibetan culture as it had been: was that because Tibet was a less good culture? Meaning it provided a less fulfilling life for all of its people than China, and nature was selecting for the truly good thing for the people or the world? No. We can see that, whether we’re talking about Genghis Khan’s intersection with all of the people he intersected with, or Alexander the Great, or whatever that warring—


N. H.

But that was tech, too.


D. S.

Yes. Yes. Those who innovated in the technology of warfare, the technology of extraction, the technology of surplus, the technology of growing populations, coordinating them, and being able to use those coordinated populations to continue to advance that thing relative to others—there were cultures that might have lived in more population sustainability with their environment, maybe in more long-term harmony, maybe said, “Let’s make all our decisions factoring seven generations ahead,” and they were just going to lose in war every time.


And so the naïve techno-negative direction just chooses to not actually influence the future, right? It’s going to say: I’m going to choose something because it seems more intrinsically right, even if it guarantees we actually have no capacity for enactment of that for the world. And that’s why I’m calling it naïve. It’s—


N. H.

I don’t understand that. That last thing—could you give an example?


D. S.

Yeah. If someone says: the advancement of tech in general focuses on the upsides that are easy to measure, because we intended it for that purpose, and doesn’t focus on all the long-term second-, third-, fourth-order downsides that are going to happen—I don’t want to do that; we want a much slower process that pays attention to those downsides and only incorporates things with the right use and guidance and incentives—that culture will lose in a war. It will lose in economic growth to the other cultures that do the other thing. If you want to take a classic example, go back to—and it didn’t happen exactly this way anywhere, because it happened in such different ways in the Fertile Crescent and in India and wherever—but as a kind of thought-experiment illustration: the plow emerges, and animal husbandry for being able to use the plow. Now we have to domesticate a bull, or a buffalo, and turn it into an ox. And that involves all the things it does. It involves castrating it, it involves having a whip, and you stand behind it to get it to pull the plow for row cropping, whatever.


So certain animistic cultures were like: I don’t want to do this. We’ll hunt a buffalo, but we also will protect the baby buffaloes. We’ll make sure that our body goes into the ground to become grass for the future ones. We’re part of a circle of life. We believe in the spirit of the buffalo. I can’t believe in the spirit of the buffalo and beat one all day long, and do things to it where I wouldn’t want to trade places with it. But the culture that says no, we’re not going to do that thing, is not going to get huge caloric surplus. It’s not going to grow its population as much; it’s not going to make it through the hard weather times as well. And so when a new technology emerges, if it confers competitive advantage, it becomes obligate. Because whoever doesn’t use it, or at least some comparable technologies, loses when you get into rivalrous interactions.


N. H.

Let me take a brief rabbit-hole sidestep here, while it’s fresh in my mind. I think this dynamic that you’re talking about now—and I know we’re going to get to artificial intelligence—but in my public discussions, people are recognizing the validity of the systemic risk that I’m discussing, and that we’re headed for, at least potentially, a great simplification. Simplification is the downslope of a century-plus of intensive complexification based on energy. But those communities (and you could talk about countries) that simplify first, because it’s the right long-term thing to do—in the meantime, they’re going to be out-competed by communities that don’t, because those communities will have more access to government stimulus and money and technology and other things. But it almost becomes a tortoise-and-the-hare sort of story.


I had a podcast a few weeks ago with Antonio Turiel from Spain, and he said Europe is in much worse shape than the United States, because the United States has 90% of its own energy. So Europe is going to face this simplification first, and in a worse way. So the United States has another decade; you guys are off the hook. And I was thinking to myself: really? Because, yes, the United States is mostly energy-independent, but Europe will be forced to make these changes first, and maybe they will have some learnings and adaptations that will serve them in the longer run, while we just ride high on the superorganism a while longer. I mean, that’s really a complicated speculation, but what do you think about all that? Is that relevant, or…?


D. S.

Well, so this is where we talk about the need to be able to make agreements that get us out of race-to-the-bottom dynamics. It’s a multipolar trap, a social trap. Because, of course, if anybody starts to cost resources properly—price resources properly, meaning pay what it would cost to produce that thing renewably, via recycling and whatever it is, and not produce pollution in the environment, using known existing technology—they would price themselves out of the market completely relative to anyone not doing that. So either everybody has to, or nobody can, right? And whether we’re talking about pricing carbon or pricing copper or pricing anything, as you well say, we price things at the cost of extraction plus a tiny margin defined by competition, and that is not what it cost the Earth to produce those things, nor the cost to the ecosystem and other people of doing it.


So proper pricing—pricing is really very deep to the topic of perverse incentive. And yet, how do we ensure that? This is core to the progress narrative, right? Because the thing that we’re advancing, that drives the revenue or the profit, is the progress thing. The cost to the environment—that we’re extracting something unrenewably that is going to cap out, that we’re turning it into pollution and waste on the other side, and that we’re doing it for the differential advantage of some people over other people, affecting other species in the process—if you ask the stakeholders that benefit, you get a progress narrative. If you ask the stakeholders that don’t benefit, you get a non-progress narrative.


But until industrial tech—like, it’s important to get this—before industrial tech, we did extinct species, right? We over-hunted species in an area and extincted them. We did cut down all the trees and cause desertification, which then changed local ecosystems, led to flooding, ruined the topsoil. We did over-farm areas. So environmental destruction causing the end of civilizations is a thousands-of-years-old story. But it could only be local.


N. H.

It’s just now global.


D. S.

So until we had industrial tech, we could not actually—we just weren’t powerful enough to mess up the entire biosphere. So how powerful we are is proportional to our tech. We can see that a polar bear cannot mess up the entire biosphere, no matter how powerful it is corporeally, right? The thing that can mess up the entire biosphere is our massive, technologically mediated supply-chain apparatus, starting with industrial tech.


And so we are, for the first time ever, running up on the end of the planetary boundaries, because we figured out how to extract stuff from the environment much faster than it could reproduce, and turn it into pollution and waste much faster than it could be processed. And we’re hitting planetary boundaries on both sides of that—on almost every type of thing you can think about, right? In terms of biodiversity, in terms of trees, in terms of fish, in terms of pollinators, in terms of energy, in terms of physical materials, in terms of the chemical pollution planetary boundary.


So the things that are getting worse are getting very near tipping points, which was never true before. Those tipping points will make it so that the things that are getting better won’t matter, even for the stakeholders they’re intended for. And that’s the key change to the story: it can no longer be that the winners win at the expense of everybody else. We’re actually winning at the expense of the life support systems of the planet writ large. And when that cascade starts, obviously, you can’t keep winning that way—which is optimizing narrow goals at the expense of very wide values.


N. H.

You’ve described the naïve progress optimist and the naïve progress pessimist. Is there such a thing as a progress realist?


D. S.

Yes. So I am a techno-optimist, meaning there are things that I feel hopeful about that require figuring out new ways to do things—new techne, both social tech and physical tech. But I’m cognizant that the market versions of that tech are usually not the best versions (because of the incentive landscape), in the same way that if Facebook hadn’t had an ad model, it would’ve been a totally different thing, right? If we’re just talking about the technology of being able to do many-to-many communication: had you had something that was not a market force driving it, could you have had something much better that was not trying to turn people into a commodity for advertisers—which means behaviorally nudging them in ways that manufacture demand, drive the emotions of manufactured demand, and maximize engagement, which causes the externality of every young person having body dysmorphia, and ubiquitous loneliness, and confusion about base reality, and polarization? Could we have done it where, rather than drive engagement, the goal was to actually look at metrics of cognitive and psychological development, and interconnectedness across ideological divides, and do that thing? Yeah, of course. Right?


So the same technology can be applied to wider goals rather than narrower goals, and you get a very different thing. So it’s the base techne—the technology—and the motivational landscape that shapes its application space that we have to think about together. There are ways that we can repurpose existing technologies and develop new ones—both social and physical technologies—that can solve a lot of problems. But it does require us getting this narrow versus wide goal-definition right. And if intelligence guiding technology is as powerful as it is—actually exponentially powerful—and we’re defining intelligence here as the ability to achieve goals, then what is it that defines goals good enough that optimizing them exponentially is not destructive? That’s how you would get a progress narrative that is post-naïve and post-cynical.


N. H.

In contrast, I’m probably a techno-pessimist, or at least a mild one, because I see how technology has acted as a vector for more energy use, more CO2, and degradation of nature. At the same time, I think it’s about how we choose what technology we use. Like, a golden retriever is probably the best technological invention ever of our species, even though it’s really more of a co-evolution. But you know what I mean. It’s something that we brought about and selected for companionship, and they give us the complete suite of evolutionary neurotransmitters for not a lot of resource input. And there are lots of other appropriate technologies that help us meet basic needs and give us well-being without destroying the biosphere. But this gets back to—I don’t think individuals chose—


D. S.

Which would—wait, this is important. The superorganism thesis that you put forward shows that the superorganism is oriented on a path that does kill its host, and thus itself, right? It does destroy the substrate that it depends on. The metacrisis narrative that I put forward says a similar thing. That’s why we did this whole five-part series, to show the relationships. And so I would say: as long as the axioms of that thing are still in place—yes, I’m a techno-pessimist. Meaning: I think that the good things that come from the new tech don’t outweigh the fact that the new tech is, in general, more often than not, accelerating movement towards catastrophic outcomes, factoring the totality of its effects.


But this is why I said there is a post-naïve and post-cynical version, and that I'm a techno-optimist—but it requires not being on that trajectory anymore. It requires that the technology not be built by the superorganism in service of itself, but by something different, in service of something different.


N. H.

Well, in that way I'm also a techno-optimist. Because after growth ends, and after the superorganism is no longer in control, efficiency will no longer be a vector for Jevons paradox—then efficiency is going to save our vegetarian bacon. Because as the economy is shrinking, efficiency and innovation are going to be really important. Right now they're just feeding more energy and stuff into the hungry maw.


D. S.

For the people who haven't heard the previous stuff on Jevons paradox, will you explain it briefly? Why efficiency—because obviously AI can cause radical efficiencies, which can help the environment; that's part of the story of why it's an environmental hope. So would you explain why that doesn't hold, as long as Jevons paradox is the case?


N. H.

Yeah. So humans get smarter about how we use energy at around 1.1% a year. We get more energy-efficient every year because we're smarter. Coal plants use less coal to generate the same amount of electricity. We invent solar panels. Our televisions and our laundry machines get a little more energy-efficient. And one would think, on the surface, that that would allow us to use less energy. But what ends up happening is that the money savings get spent on other things that use energy, and, writ large, new innovation ends up requiring a lot more energy system-wide. Since 1990 we've had a 36% increase in energy efficiency. Over the same time we've had a 63% increase in energy use. So as long as growth is our goal and our cultural aspiration is profits and GDP, more energy efficiency will paradoxically, unfortunately, result in more energy use and environmental damage. That's called Jevons paradox, named after William Stanley Jevons, a nineteenth-century economist who observed this in steam engines: more efficient steam engines wouldn't reduce coal use; they would scale it, because they helped everyone and were so useful.
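The compounding dynamic Nate describes can be sketched with some simple arithmetic. This is an illustrative toy model, not fitted to any dataset; the 1.1% efficiency figure comes from the conversation, and the 3% output-growth rate is an assumed placeholder:

```python
# Toy sketch of Jevons paradox: efficiency compounds at ~1.1%/year,
# but if economic output grows faster, total energy use still rises.
def project(years, efficiency_gain=0.011, output_growth=0.03):
    """Return (efficiency multiplier, energy-use multiplier) after `years`."""
    efficiency = (1 + efficiency_gain) ** years  # output per unit of energy
    output = (1 + output_growth) ** years        # total economic output
    energy = output / efficiency                 # energy needed to produce it
    return efficiency, energy

eff, energy = project(years=33)  # roughly the 1990-2023 window
# Efficiency improves ~43%, yet total energy use still grows ~85%,
# qualitatively matching the 36%-efficiency / 63%-use pattern cited above.
```

The point of the sketch is only that efficiency gains reduce energy use solely when output growth is slower than the efficiency gain; under a growth imperative, they do the opposite.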


D. S.

So let’s talk about first- versus second-, third-, nth-order effects here, because Jevons paradox is… it’s important to understand that. We make a new technological—


N. H.

Daniel, what are the odds that we actually don’t get to artificial intelligence on this conversation?


D. S.



N. H.

Okay. Keep going. First, second, third order. Go for it.


D. S.

So if we create a new technology that creates more energy efficiency in something—whether it's a steam engine or a more efficient energy generation, transportation, or storage technology—the first-order effect is: we use less energy. The second-order effect is: now that we have more available energy, and energy costs less, a bunch of areas that didn't have positive energy return or profit return now become profitable. And so we open up a whole bunch of new industries and use more total energy. But it's a second-order effect, or even a third-order effect, because it makes some other technology possible that does that.


This is one of the asymmetries that we have to focus on in the progress narrative. The progress narrative is—and technology in general—when we make a new technology (and by technology I mean a physical technology or even, say, a law, or a business to achieve a goal), we’re generally making something that is trying to have a first-order effect on a narrow goal that is definable in a finite number of metrics for a small set of stakeholders. The stakeholders are called the total addressable market of that thing. And very rarely is the total addressable market everything, right? And so we’re making things—whatever it is; so I’m using technology in the broadest sense of human innovations towards goal-achievement here—we’re making technologies to achieve first-order effects (meaning direct effects) for a defined goal, for a defined population. Even if we’re talking about a non-profit trying to do something for coral, it’s still focused on coral and not the Amazon and everything else, right? And so it can optimize that at the expense of something else in terms of the second-, third-order effects of whatever putting that thing through does.


And so we put out a communication to appeal to people to do a thing politically. Well, it appeals to some people and really repels other people. One of the second-order effects is: you just drove a counter-response. The counter-response comes from people who think the thing you're benefiting harms something they care about, and now you've just up-regulated that. Is that being factored? And so the progress narrative, the technology narrative, all the way down to the science narrative—and this is where we get to human intelligence versus wisdom, and how this relates to artificial intelligence—is: it is easier to think about a problem this way. Here's a definable problem. It affects these people or these beings. It is definable in these metrics. We can measure the result of this, and we can produce a direct effect to achieve it. We did, we got progress—awesome. And the progress was more GDP, the progress was people could communicate faster, the progress was fewer dead people in the E.R., the progress was fewer starving people, the progress was whatever the thing was that we were focused on—even if it seems to be a virtuous goal. But that same thing you did maybe polarized some people who are now going to do other stuff—a second-order and maybe third-order effect. Maybe it had an effect on supply chains.


So the second-, third-, nth-order effects—on a very wide number of metrics that you don't even know to measure, on a very wide number of stakeholders that you don't even know how to factor—are harder in kind to think about. So it is cognitively easier, as we talk about intelligence, to figure out how to achieve a goal than it is to make sure that goal doesn't fuck other stuff up.


N. H.

So efficiency, too, has a narrow boundary and a wide boundary lens with which to be viewed.


D. S.



N. H.

But here's one of the challenges, though. It's easy for a group of humans, or a full culture, to optimize one thing. It's very difficult to optimize multiple things at once—multi-variable inputs and outputs are incredibly complex. So optimizing dollar profits, tethered to energy, tethered to carbon, combining technology, materials, and energy—that was a very easy thing, akin to the maximum power principle. So what do you think about that?


D. S.

I think that optimization is actually the wrong framework. When you think about everything that matters, you’re not thinking about optimization anymore. You’re thinking about a different thing. So optimization is—now, let’s come back to what is distinct to human intelligence: why did that cause a metacrisis or a superorganism? How does AI relate to that? And then: what it would take, what thing other than intelligence is also relevant to ensure that the intelligence is in service to what it needs to be in service to? So we’re not saying that humans are the only intelligent thing in nature. Obviously not; nobody reasonably would say that. But there is something distinct about human intelligence. So how do we define intelligence?


It's fascinating. Go look at a bunch of encyclopedias, and you'll see that there are a lot of different schools of thought that define intelligence differently. Some do it in terms of formal logic and reason. Some do it in terms of pragmatics, the ability to process information to achieve goals. Some do it from an information-theoretic point of view: the ability to take in information, process it, and make sense of it. And all of these are related. I'm not going to try to formalize it right now. But I'm going to focus on the applied side, because it ends up being the thing that's selected for, the thing that wins short-term goals, and obviously the thing we're building AI systems for.


So there we can say: intelligence is the ability to figure out how to achieve goals more effectively. Or we can just say the ability to achieve goals. We can see that a slime mold has the ability to achieve goals, and it will figure out and reconfigure itself. A termite colony figures out how to achieve goals, and it reconfigures itself. There is some element of learning. And when you watch a chimpanzee figuring out which stick to use to get larvae out of a thing, you can watch it innovating and learning how to achieve goals. So all of nature has intelligence. What is unique about human intelligence relative to, let's just say, other animals? We could talk about plants, fungi, all the kingdoms, but that gets harder, so let's just stick with other animals.


First, we have to recognize that we can't talk about this cleanly, because, from an evolutionary perspective, there were things between the other animals we look at now and humans—earlier hominids—so where in that continuum do we start calling it human? Since they're not around, we can mostly talk about sapiens versus everything else on the planet that we're aware of at this point. But we can say that the thing we're calling human starts before Homo sapiens—probably with Homo habilis or Australopithecus or somewhere around there—having to do with not just—


N. H.

We are the ninth Homo, and perhaps the last. We don’t know.


D. S.

So people might question what kind of weird anthropocentrism would have you say that you know you have some kind of intelligence the whales don't have, or the chimps don't have, or whatever. And I think it's very fair to say that what it is like to be a whale we really don't know. And the experience of whale-ness—the qualia, the sentience of it—might even be deeper than ours, might be more interesting in some ways. Totally, right? That's a harder problem in kind to address. But we can, in observing their behavior, say there are types of goal-achieving that they clearly don't have the ability to do, in a prima facie evidenced way, that we obviously have the ability to do. They have not figured out how to innovate the technologies that make them work in all environments the way we have. They have not even figured out how to stay away from whaling boats. And so, from their most obvious evolutionary motives, figuring that out is not a thing they've really done.


And so we can see, in a prima facie sense, that they're not innovating technology and changing their environments—making the equivalent of an Anthropocene—in a similar way. Even when we see the way beavers make dams and change their environment, or ants do, they do it roughly the same way they did it ten thousand years ago. Humans don't do it roughly the way we did it ten thousand years ago. So we can see something unique about humans in our behavior related to innovation, technology, and environment modification.


N. H.

In whales’ defense, they don’t have opposable thumbs and they’re underwater. But I’m with you. Keep going.


D. S.

Well, this is not putting whales down. I think opposable thumbs are pretty significant to this story, right? I think there are things about the evolution of Homo sapiens that probably have to do with the combination of narrower hips from uprightness, which de-weighted the hands and allowed them to be more nimble and opposable. Combined with larger heads, that required neotenous birth. And there's this whole complex of things.


And so in no way does saying that humans have more of this particular kind of innovative intelligence mean we have a more meaningful right to exist. Those are totally separate things, right? It doesn't mean we have a deeper experience of the world. Those are different things.


N. H.

So let me get back to something you just said a minute ago, unless this is where you were heading. You said intelligence is problem-solving en route to a goal. And for most animals in nature, the goal is—well, security and mating and reproduction. But energy return is a primary goal in nature: invest some energy and get a higher amount back, because that enables all sorts of other optionality. Energy calories in nature are optionality for organisms. So the problem with humans isn't the intelligence per se, it's the goal?


D. S.

I don’t even want to call it a problem yet. I want to call it a difference. We’ll get to the problem in a minute.


N. H.

Okay. Alright.


D. S.

I want to say something related here about modeling. Because human intelligence—all forms of intelligence—have something to do with modeling. They can take in information from the environment and forecast what happens if they do something, enough to inform their next choice: which choice is more likely to achieve some future goal, even if that future goal is only a second away, right?


N. H.

So let me interrupt there. Does that differentiate humans? That if we model something, that we have the perception and ability to consider time? How does time factor into intelligence?


D. S.

We are not the only animal that has a relationship to time, but we definitely have the ability to have abstractions on time that seem to be unique from what we can tell. And we also have the ability to have abstractions on space, and abstractions on other agents. And there’s something about the nature of abstraction itself that is related to what is novel in human intelligence. The type of recursive abstraction.


But, talking about modeling—a model of reality takes a limited amount of information from reality and tries to put together a proxy that will inform us for the purpose of forecasting, and ultimately choice-making, ultimately goal-achieving. Insofar as the model gives us accurate enough forecasts to inform actions that achieve our goal, we consider it useful. That doesn't mean it is comprehensively right. The models end up optimizing for a narrow set of sense-making, just as, in what we were talking about before, we optimize for a narrow set of goals.


And the reason I bring this up is that all of our models—even the ones we use to try to understand the metacrisis itself—can be useful, yet can also end up blinding us to perceiving outside of those models. So when Lao Tzu started the Tao Te Ching with "The Tao that is speakable in words or understandable conceptually is not the eternal Tao," it was saying: keep your sensing of base reality open and not mediated by the model you have of reality. Otherwise, your sensing will be limited to your previous understanding, and your previous understanding is always smaller than the totality of what is.


I would even argue that "thou shalt have no false idols" points at this—a model of reality that says "here's how reality works" is the false idol that messes up our ability to directly perceive new things where our previous model was inadequate. I say this because there are places where a particular thing we're going to say is useful, but it is not the whole story, and it's important to see where it's not the whole story. So, for instance, if we talk about energy, that doesn't include the parts about materiality or the parts about intelligence. And even if we talk about all three, that's still not the whole story. So it's useful, but I want to call this out. If an animal is eating, is it eating only for energy? No, it's also eating for minerals and enzymes and vitamins and proteins and fats—and not just fats that will get consumed as energy, but fats that will become part of the phospholipid membranes of cells. It's eating for materiality as well, right? And so it's not true that all energy is fungible. I can't really feed an elephant meat products, even though there's plenty of energy in them.


N. H.

So hold on a second. Elephants also have what I referred to as the trinity: energy, materials, and technology. They try to get acacia trees, they use their trunk or some other tool, and in the acacia leaves is energy—photosynthesis from the sun—but also, as you said, atoms, minerals, materials. So it's the same for animals as for humans.


D. S.

And non-fungible ones. There’s a reason why—


N. H.

How so?


D. S.

I can't make certain amino acids from other amino acids. I can't make some minerals from other minerals. No matter how much calcium I get, I get no magnesium from it, and I need a certain amount of magnesium. So, of course, if we don't get enough dietary vitamin D, we get rickets, even if we get plenty of B vitamins and vitamin C and other things. Those nutrients are non-fungible with each other, and you need all of them—the whole suite a body needs. Which is why there are very interesting health studies showing that people dying of obesity are actually dying of diseases of malnutrition, because we have a diet that has basically optimized for calories while stripping out all the micronutrients. So you can be eating tens of thousands of calories a day and actually becoming profoundly deficient in minerals and phytochemicals and other things like that, where your body wants to keep eating because it's actually starving. And you continue to give it something that creates a neurochemical stimulus that says you ate, and that satiates the hunger for a moment, but what you're actually starving for is not in that food.


And so I don't want to over-simplify: energy is a part of the story. Everything we say is a part of the story. But the totality of the story is more complex than however we talk about it, right? There's something so important and sacred about that, because what is wrong with our narrow goal-achieving is what's upstream of it: our narrow modeling of reality. What even equals progress? Who is worth paying attention to? How is it all connected? I make a model that separates it all. Then I can up-regulate this and harm something else, but I don't even realize I'm harming something else, because that's not in my model. I don't even realize that the thing I'm optimizing for isn't the actual thing, or is only a part of the whole thing.


N. H.

And then probably, by definition, we choose models or inputs into the models that kind of confirm our own built identity up until that moment.


D. S.

And/or when, in some game-theoretic way—if the model damages lots of things but makes me win the war, that model will probably win. And you can notice how it's, like: okay, so we get the Inquisition, we get the Crusades, we get some fuckin' gnarly, violent, cruel optimizing-of-torture stuff in the name of the guy who said, "Let he who is without sin among you cast the first stone." And you're, like: how the fuck did we go from principles of forgiveness and "let he who is without sin cast the first stone" to this version that says the Inquisition is the right way to do that? What you see is: there are lots of interpretations, and the interpretations that lead to "kill all your enemies and proselytize" end up winning in short-term warfare—not because they're more true or more good, but because they orient themselves to get rid of all of their enemies, have more people come in, and have nobody ever leave the religion because they're afraid of hell, and whatever.


So this is an example of: there are models that win in the short term but actually move towards comprehensively worse realities and/or even self-extinction—evolutionary cul-de-sacs. And I would argue that humanity is in the process of pursuing evolutionary cul-de-sacs, where the things that look like forward motion are forward in a way that doesn't get to keep forwarding. At the heart of that is optimizing for narrow goals; and at the heart of that is perceiving reality in a fragmented way and then getting attached to models—subsets of the metrics that matter—which leads us to optimize those models and those metrics.


And now I would start to draw the distinction between intelligence and wisdom here: wisdom is related to wholes and wholeness; intelligence is related to the relevant realization of how do I achieve a goal? A goal will be a narrow thing, for a narrow set of agents, bound in time, modifying a fixed number of parameters. And so then I would say: human intelligence is distinct from other types of animal intelligence, where an animal mostly works within a range of behaviors and capacities that are built in—where the primary physical technology is its body, right? Animals evolved to have claws, to have blubber for the cold, to have whatever technological innovation made them effective within their environment. And an animal can't become radically more of that thing by choice, by its own understanding. It becomes more of that thing through genetic selection, which is super-slow and which it doesn't control. And the mutation that gives the giraffe a slightly longer neck, or makes the cheetah a little faster, or whatever it is, is happening as the rest of the environment is going through similar mutations. So the cheetah is getting a little faster, but so are the gazelles. There's co-selective pressure, so that if the cheetah gets a little faster first and eats the slower gazelles, then what's left are the faster gazelles, whose genes spread, right?


So you have tiny changes happening across the whole system, co-up-regulating each other, so that there are symmetries in the rivalry that let the entire system maintain its metastability. Not stability—not a fixed equilibrium. A homeodynamics, not a homeostasis, that continues to increase in complexity over time. But that metastability is the result of that type of corporeal evolution, right? Humans' adaptive capacity, by contrast, is not mostly corporeal, it's mostly extra-corporeal—you call it extra-somatic, meaning outside of just our body. We can use a lot of calories outside of our body, which started with fire. Fire was the beginning of being able to warm ourselves, all of a sudden making new environments possible and making foods edible that weren't edible before, right? But that's calories—extra-somatic calories.


Then our ability to get more calories from the environment—gather more stuff, kill more things—involved the innovation of tools, right? Those spears, those stone tools, allowed a little group of primates to take down a mastodon. The combination of their coordination technologies with each other—because a single person couldn't do it—and their stone tools together made that possible. You get more caloric surplus. The agricultural revolution really advanced that. The oil revolution really advanced that. But at the heart of how we figured out how to get oil and how to use it was intelligence—this kind of recursive intelligence that figures out: I can use this in service of my goals.


N. H.

I wonder if, thirty thousand years ago, some Neanderthal interbreeding with a human could ever imagine that, thirty thousand years later, there would be a brain evolved like Daniel Schmachtenberger’s!


D. S.

Well, check this out. Tyson Yunkaporta—I don’t know if he’s been on your show yet or not—


N. H.

Two weeks from now.


D. S.

Okay, great. Then you can follow this conversation up with some things I learned from Tyson. Other people who hold indigenous wisdom and knowledge have told me similar things—things that, one, date back further than the standard current archaeological narrative of when humans knew certain shit, but also kinds of wisdom that have definitely been lost in the progress narrative. And one of the things that Samantha Sweetwater told me originally, and then Tyson said something similar, was that many indigenous cultures had a story that, when humans developed the first stone tools, the apex predator of the environment—in Samantha's version, the saber-tooth—came to the human. Obviously this is a story, right? But you get what it would mean that the early people made this story. And it said: we were the ones taking care of and maintaining the complex diversity of the whole system in this apex-predator role, and we're turning over the mantle of stewardship of the whole to you. Because now you have the ability to destroy the whole ecosystem, you must be the steward of the whole thing.


And imagine even recognizing that—because maybe those stone tools were two million years ago, right? And maybe we had already killed off a bunch of megafauna, extincted some of our other hominid cousins through gruesome interspecies genocidal tribal warfare, destroyed some environments, and had time to learn and encode those lessons in mythos, and be like: no, no, no, we're not going to maximum-power-principle kill and take everything. We're going to live in sustainability, think seven generations ahead. And there was wisdom about appropriate use of technology, and restraint, forty thousand years ago.


N. H.

That was my question: we are Homo sapiens—"wise man." But any small percentage of tribes or individuals or nation-states or warring clans that pursued a narrow-boundary goal would have out-competed those tribes with wisdom. And here we are with the superorganism.


D. S.

Not any of them, but any of them above a certain threshold, right?


N. H.

What do you mean?


D. S.

Let’s say that we had a number of tribes in an area that had all developed some kind of wisdom by which they’d bound intelligence. And I’m not saying we don’t do that today. It’s called law, right? And it’s supposedly also what religion is about: the development of wisdom, of what is the good life, what is worth pursuing and not pursuing, in which you get things like religious law. You’re not going to work on the Sabbath, you’re going to take that day to do different things. For instance, if you wanted to think about that as an example, you could think about the Sabbath as an example of law binding a multi-polar trap associated with a naïve version of progress. If you don’t have a Sabbath, some people will work seven days a week. In the short term, before they burn out, they’ll get ahead. They will get so much ahead because they’ll be able to keep investing that differential advantage in rent-seeking behavior that anyone who doesn’t will have no relevance to be able to guide their own lives, and now you have a world where no one spends any time with their kids, no one reflects on the religion, nobody enjoys their life, everything sucks for everyone because somebody did that thing in the name of progress, because they moved ahead faster. So we say: no, no, no, you’re going to have a day where you don’t fuckin’ progress stuff. That’s actually the gist. You’re not going to be focused on external progress in the world. And so there’s 27 or 29 ways in Leviticus that you can violate the Sabbath, and you’ll be killed if you violate it. Which seems like just whackadoodle religious nonsense. But if you’re, like: wait, no. You’d never have to actually do that if you hold that law that extremely, and everyone’s, like, alright, we’re not going to fuck with the Sabbath. Now, what do I get to do that day? Rather than achieve goals, I reflect on what are good goals. 
So I get to spend time with my family, I get to spend time with nature, I get to read the scripture, I get to meditate. I don’t get to achieve goals, I get to experience the fullness of life outside of goal-achieving, and I get to reflect on what goals are truly worthwhile, and in doing so bind the multi-polar trap that I don’t have to because everybody else is rushing ahead. That would be an example of the way religions were supposed to have something like wisdom, that created something like law and restraint, to bind naïve versions of progress in a way that was actually better for the whole long term.


N. H.

So two comments there. One: when I was much younger I had some Jewish friends, and I didn’t ridicule them, but I was kind of… ha-ha, you guys have Sabbath today, I’m going to go to the arcade, or go on a boat ride, or go fishing, or whatever. But now, as I’m older, everything you just said about the good life, and spending time with family, and reading, and spending time in nature, and not using the Internet on a Saturday or whatever, sounds freaking wise and makes sense and is appealing to me. So maybe, with age and maturity, I’m flipping from intelligence to wisdom. And then the second thing, the implication—and maybe this is where you’re heading—is: to muzzle or forestall the risk-singularity that is coming from the superorganism, we have to have some Sabbath-equivalent applied to AI.


D. S.

It's not just the Sabbath equivalent—almost all laws are about restraint, right? Things you don't do. In the presence of incentives to achieve narrow goals, what are the things, for the collective well-being—which also means the capacity for your own individual well-being, for all individuals into the future—that we say we don't do? If you have Samantha on, this is a topic she cares a lot about and will talk about: there's no definition of wisdom worth anything that is not bound to the concept of restraint.


N. H.

But how—yeah. I don't know how our culture could've added restraints 20 or 30 or 40 years ago, before all this over-leverage and systemic risk, as we approached a biophysical Wile E. Coyote moment. But now a restraint would almost by default create this rubber-band snapback in the economic systems. But we can talk about that another time.


D. S.

Well, so this is where you end up having the embedded growth obligation of a system, the embedded continuity of a system, the kind of institutional momentum. My partner Zach was writing something recently—a couple of people on the team were contributing—and in the beginning it talked about where we find ourselves now. It said we find ourselves in the relationship between the life-giving nature of the biosphere and the life-giving nature of the civilizational system, at the unique point in time at which the latter is threatening the ongoing continuity of the former, upon which it depends. What it takes to maintain that civilizational system will destroy the biosphere the civilizational system depends on, so we must remake the civilizational system fundamentally. We do need civilizational systems, we do need technological systems, but we need ones that don't have embedded exponential growth obligations. We need ones that have restraint, ones that don't optimize narrow interests at the expense of driving arms races and externalities, ones where the intelligence in the system is bound by and directed by wisdom.


N. H.

Right. Which is the equivalent of Sabbath plus law plus emergence.


D. S.

Now, coming back for a moment to the "Ha-ha, what idiots!" reaction you had when you were young. I had a similar one. And lots of young people—probably even many people who were raised Jewish—have a similar one before they understand the full depth of it. So let's talk about Chesterton's fence for a moment.


N. H.

Never heard of that.


D. S.

I actually don't know the history of why it got that name, but there's a thought experiment in philosophy called Chesterton's fence: there's a fence up, and you think, "Oh, the purpose of that fence was X. That's no longer here. The fence is ugly and in the way. Let's take the fence down." Is there a chance that the purpose of the fence included several other things that you don't know—and you don't know that you don't know them—and that before you take the fuckin' fence down you'd better make sure you actually understand why it was put up?


Now, this comes to a very deep intuition. We were talking about biases earlier in the progress narrative. Progressive—it’s funny how, right now, that is somehow associated with left in some weird way—but progressive and traditional is a deep dialectic, and neither one is supposed to be the one you choose. It’s a dialectic: you’re supposed to hold them in balance, right? And very much in the same way, this is an important point, and it relates to wholes and wisdom versus narrow goals. Narrow value sets are as bad as narrow goals. They’re a part of it. So any value that is a real value exists in a dynamic tension with other values—a dialectical one, oftentimes—where, if you optimize the one at the expense of everything else, you get these reductio ad absurdums, right? Meaning: the optimization of any value by itself can end up looking like evil.


So if I want to optimize truthfulness, and all I’m going to do is speak the truth all the time, then when the Nazis come and ask me, “Are there any Jews inside?” I say, “Yes.” No! Truthfulness is not the only value at that point. The preservation of life, and kindness, and other things are values too. We can even see an example where, if someone in a naïve sense says, “My value is honesty,” there’s a bunch of places where you can see a person who, in the name of honesty, is just an asshole. Right? They just say kind of mean things and say it’s in the name of honesty. We can see that, in the name of, say, kindness, people will lie to say flattering things and avoid painful things. And we can see that, if you hold them in dialectical tension, you actually get a truth that is more truthful, and you get kindness that is more effective. Because the kindness that doesn’t want to tell the person they’re an addict when they are and nobody else will, or tell the emperor that they have no clothes, or say anything painful that is necessary feedback, isn’t even kind. So sometimes, for a value to even understand itself fully, there’s this kind of dialectical relationship that comes about.


Now, here’s where I’m coming to Chesterton’s fence, and then I want to hear your take. There is a dialectic between a traditional impulse and a progressive impulse. The traditional impulse basically says: I think there were a lot of wise people for a long time—wise and smart people—who thought about some of these things more deeply than I have, who fought and argued; and the systems that made it through evolution made it through for some reasons, reasons with some embedded wisdom in them that I might not understand fully, and it makes sense for me to have, as my null hypothesis, my default, trusting those systems. They wouldn’t have made it through if they weren’t successful, if they didn’t work. And likely the total amount of embedded intelligence in them is more than I’ve thought about this thing. Whether it knows it or not, that’s the traditional intuition. The progressive intuition is: collective intelligence is advancing, built on all that we have known. We’re discovering new things, and we’re moving into new problem sets where the previous solutions could not possibly be the right solutions, because we have new problems. So we need fundamentally new thinking. Obviously, these are both true.


Now, on the traditional side, the Chesterton’s fence thing is: I might have as a kid (or you might have as a kid) thrown out the Sabbath and said “That’s dumb” before we actually understood it—because we understood a straw-manned version of it, said it was stupid, and threw it out. And so when we’re talking about wisdom and restraint and all of that, there is something here. In the name of progress there will always be something focused on restraint that seems like it’s fucking up the progress I could get, and if I don’t understand all the reasons for the restraint—reasons that factor in second-, third-, fourth-, nth-order effects long into the future—then in the short term I should do the thing. In the short term, like, no, of course I should advance the AI applied to genomics to solve cancer without thinking through the fact that the fourth-order effects might involve increasing bioweapons capability for everyone and destruction of the world, so even the cancer solutions don’t matter in the course of those people’s lives. And so this is: is there a whole enough perspective to be able to see how the things that are actually wise can look stupid from a narrow perspective?


N. H.

Two questions. One: is the metaphor the same—when I was younger, I thought those things were stupid, and now I recognize the validity of them—is that where we are as a culture? We’re the younger version of Nate in the intelligence-versus-wisdom dynamic? I’m just speculating. I think that’s probably the case. And then, two: I mean, you and all the people that I know—I know a lot of smart people. You’re certainly up there. But you also have wisdom. And I don’t know as many people that have both intelligence and wisdom. And you, in my sphere, rank near the top. But is it in our genome, is it in the human behavioral repertoire, to hold more than those single values—to hold multiple values and wide-boundary views of the world? What do you think about that?


D. S.

So we said that there is something unique about the types of recursive intelligence that lead to technology, innovation, and the Anthropocene, the superorganism, et cetera, in human intelligence relative to other species. So let’s talk about the genetic predisposition, and what the predisposition is actually for in the nature/nurture thing a little bit.


I would say—and again, everything I’m going to say here will be at a high level that is hopefully pointing in the right direction, but totally inadequate to a deeper analysis of all the topics. Our nature—in terms of the genetic fitness of humans, Homo sapiens—it would be fair to say that our nature selected for being more quickly and recursively changeable by nurture than anything else. That our—


N. H.

As individual humans.


D. S.

The individual human is not the unit of selection in evolution. The tribe is.


N. H.

Right. Wow! Both.


D. S.

The tribe or the band; the group of humans.


N. H.

Both. Sometimes individuals, sometimes tribes.


D. S.

I don’t think there’s much of a case for individual humans surviving in the early evolutionary environment by themselves very well, or for their behavior as individuals, separate from social behavior, being the driver. There are certainly some animals that are largely solitary, and they have a different set of selection criteria than primarily social animals. Humans are a primarily social animal.


N. H.

But, I mean, this was E. O. Wilson and David Sloan Wilson’s paper: selfish individuals out-compete altruists within groups, and cooperative groups out-compete selfish groups. So I think both are hard-wired in us. But let’s not get distracted by that.


D. S.

Actually, what I’m saying holds with this. There is some selection of an individual within a social environment, but there’s no selection of an individual outside of other sapiens, right? And so the unit of selection—the one driving the dominant features of sapiens—is the group. That’s actually a really important thing to think about, as opposed to the unit of selection being the individual, because we have such an individualistically-focused culture today, and we think in individual terms way excessively, when in actual evolutionary terms an individual outside of a tribe was dead in almost any environment for most of history. So a set of behaviors that made you alienate the tribe was not an evolutionary strategy for most of the evolutionary history of humans.


N. H.

And the problem now is: our tribe is eight billion strong, pursuing profits tethered to carbon emitters—


D. S.

Which is no longer a tribe. The tribe was capped out at the scale at which there were certain types of communication across the group that allowed it to be the unit of selection: everyone knowing everybody, everybody being able to communicate with everyone, everyone being able to participate in some choices that everyone would then be bound by, so they stayed in it rather than defecting against it—which is why you got the kind of Dunbar limitations. And then there’s a series of things where you went from, say, a couple hundred thousand years of Dunbar-scale groups to huge cities in a relatively short period of time, which is the beginning of the thing we call recorded civilization. And we would say that most of the superorganism properties we talk about now started at that juncture. Because at the smaller size, lots of lying and sociopathy and whatever really don’t pay, because people are going to know you’re lying, and enough people can beat you up if you are very problematic.


N. H.

So we, as individual humans—because evolutionary selection acted at the tribal unit—we have the capacity for wisdom, but once the number of people and the self-organization went to the city-state, nation-state size, there became downward causation of the emergent phenomenon on the aggregate level. That started to focus on intelligence, and outcompeted the wisdom of individuals and smaller units.


D. S.

Yeah. The multi-polar trap, the Moloch-y behaviors, really took off there. Because Homo sapiens with tools were already different than the rest of nature—they had stone tools and fire and language. So they were already different, which is why they had already extincted other species and moved in, becoming the apex predator everywhere, right? The beginning of the story is the beginning of the type of intelligence that leads to recursive abstraction leading to recursive innovation. So when you look at what various people call the defining characteristic of humans, or the earliest techne that made humans really distinct: stone tools, fire, and language are three very common ones brought up. Stone tools are an innovation in the domain of matter. Fire is an innovation in the domain of energy. Language is an innovation in the domain of intelligence, of information. And they were all intelligence applied to innovation in those domains—recursive intelligence of this kind, right?


A group of humans with no stone tools can’t do anything to a mammoth: just with their fangs and claws, they’re not going to hunt a mammoth, right? Eight dudes aren’t going to do that. And one dude with a spear isn’t going to hunt a mammoth either. It is the coordination protocols and the physical tech together that led to those capacities—both the social tech and the physical tech. And I’m meaning “tech” here as intelligence applied to goal-achieving, the innovation of new fundamental capacities; that most broad definition of tech. And so for a couple hundred thousand years of sapiens you have this very small scale—a couple million years of hominids writ large. And then we can argue why exactly we started to get way beyond the Dunbar number, and where we went beyond even coupled Dunbar numbers into Sumeria, Egypt, Göbekli Tepe—like, the first large ones.


But one function that certainly comes up in the analysis is having already capped out migration as a strategy, and then tribal warfare. You start getting into resource limits: when you’re sharing the same environment with another tribe and competing for the same stuff, you just move. When you’ve moved to all the places, there’s nowhere else to move, there are resource issues, and one tribe is willing to go to war. Now the other tribes have to unify together to survive, so they’re willing to sacrifice some of the collective intelligence of their smaller scale, and their intimacy with each other, and whatever, for survival—


N. H.

Collective wisdom of their smaller scale.


D. S.

Yes. And collective intimacy, actually, and collective wisdom are very deeply coupled. Because if everybody can know everyone, you can have intuitive mappings to everyone: you can care about everyone, you can have shared pathos, and you don’t want to hurt someone because you hurt if they hurt, right? As soon as we get to a much larger scale where I don’t know everyone, I don’t trust everyone, I don’t have intimacy, now I have to have some way of knowing we’re in-group: we’re all under the same flag, we all have the same indicator of some kind. And now I don’t get to act in a unique way toward each person; I have to have rule sets that mediate it. So we start to get this kind of intelligent rule-based law that is different than wisdom.


Now, one could say that all of the wisdom traditions were trying to deal with—there are different ways of saying this—having built an environment that we were not naturally fit to. And the way you would talk about this: our neurochemistry, our dopamine-opioid axis, evolved in a certain environment, and then we’ve created a new environment for ourselves where we’re no longer fit. Now it’s easy to get more calories. In the early environment it wasn’t, so there was a dopamine relationship to always getting more calories, whatever. Now, in this environment, we’re actually in some ways genetically misfit. So you could say that the wisdom was trying to tell us how to deal with the dopamine-opioid axis in an environment that we were no longer evolutionarily fit for—deal with evolutionary mismatch. You could also say the wisdom was trying to deal with: how do we guide the collective relationship to the technological powers we have in a way that has some long-term viability built in?


N. H.

Number one: as you’re speaking on this grand arc of history, I kind of feel sorry for all the peoples and the cultures that expressed wisdom and were out-competed by the larger entity. And number two: can we have wisdom with eight billion humans in our current environment, or is wisdom only able to be had in smaller-scale groups that aren’t faced with these evolutionary and environmental scale limits and pressures?


D. S.

In terms of feeling sorry for them: with the beginning of that insight, talk to any indigenous person, and you’ll get the context of their life. So talk to Tyson about this and have him get to share, like: oh, so the colonialist story was that we were savages that needed to be civilized. Let me actually tell you what civilization means—what something that was more truly civil was. Let’s say why the Hobbesian narrative, that our lives were short, brutish, nasty, and mean, as the apologism to be able to genocide the fuck out of us, is the dumbest story in human history—dumbest and cruelest, right? But here’s where the naïve techno-pessimist narrative also doesn’t work: if you don’t embrace the tools that will end up defining the world if any of them are there, then you don’t actually get to have any say in the future. Which is why, right now, I am advocating for something that recognizes both the problems of tech and the problems of a multi-polar trap where, if you just try to get things right for your people and not everybody, you still fail.


N. H.

Are we ready to talk about artificial intelligence?


D. S.

So, the reason I was talking a little bit about the evolutionary history of humans was that it played out at these smaller scales and in totally different types of environment—with oral language rather than written language, with lots of things very different. But what is unique to humans is the capacity to be more changed by their environment. You could say that our nature is to be very nurture-influenced. Think of how much you can take a wild animal and train it to be different in a different environment in one generation; how much you can do that with a human is so radically different. Because the other animals are genetically fit to their environment—they evolved to have capacities to work within a niche—and yet humans evolved to make tools that would change our whole environment, including making whole new environments, right? Homes and cities and so on. We couldn’t come hardwired to a particular environment or a particular way of being, because our evolutionary adaptation was not our body. It was our body extended through tools that we change and environments that we change. So we had to be able to update: okay, new sapiens, new environment—what language are we speaking, what tools are we optimizing for, what type of environment are we navigating? We just migrated. So we have to come incredibly unprepared for a specific set of things, which is also why we’re neotenous for so long, right? The human is dependent and helpless for a very long period of time compared to the chimpanzee or any other close relative. But we also get to imprint—


N. H.

And it’s why our violent sheaths don’t—right. Keep going.


D. S.

We have a totally different—like, we don’t come preset to do the evolutionarily adaptive thing within that environment, because our evolutionary adaptation was to change how we would adapt, meaning which tools and everything we use, which languages we use, which cultures we identify with, and which environments—


N. H.

So we’re probably way more plastic and flexible in our behaviors than… I mean, the elephants today are doing similar things to elephants ten thousand years ago. Yeah, okay.


D. S.

So think about it. A human growing up learning how to throw a spear versus learning how to use ChatGPT—those are very different skill sets. And throwing a spear is not that advantageous today in most cities and most environments, even though it was once the most advantageous thing. So you don’t want a kid growing up with genetically inbuilt spear-throwing capacity; you want them growing up to figure out: what is the tech stack around me, what’s the language, what are the goals, how do I do that thing? Which is why they’re also fairly useless at everything to begin with—they haven’t imprinted what is useful in this environment yet. So the horse is up and walking in twenty minutes, and it takes us a year. And you think about the multiples of how many twenty minutes go into a year for another mammal, and you’re like, wow, that’s really different. But then you look at how different human culture is now from ten thousand years ago—and even one human culture from another—right?


So now, we started doing our social science after certain aspects of culture/nurture had become so ubiquitous that we took them for granted, like alphabetic language. It’s not natural. It is not natural to have a written language. That is not a natural part of the evolutionary environment. That was an invention, and then we taught it to everybody. And it changes the nature of mind very fundamentally. Because rather than relate to the base reality out there, I relate to the mapping of reality to these sounds that are arbitrary. And then how those sounds work. So that affects the nature of mind. And then I go to school and I learn this is geography, and this is English, and this is math, and this is history, and it’s all divided, and there’s different principles, and the goal is: get the right answer, not understand why. That’s not natural.


So we’re ubiquitously conditioning people and then calling that nature. Whereas, if you go to the few indigenous cultures that are left, you’ll see that most of what we call human nature in terms of the current life experience of sapiens in the developed West is not true for them. And so it’s important to get that our relation—now, do we have a culture that is systemically conditioning what would most support wisdom? So look at what are called wisdom cultures, right? Look at the way that a child is raised in, say, pre-Mao Buddhist Tibet, and what they’re being developed for, and how fucking amazingly different it is.


So are we genetically misfit as beings to wisdom? No. Did we produce a civilization that is misfit to wisdom, and then humans are being made fit to that civilization? I.e. the superorganism, i.e. the Moloch, i.e. the megamachine, i.e. the generative dynamics that lead to the metacrisis? Yes. And the humans who are born into that, who become fit in that, are fit for the thing that is killing its own substrate.


N. H.

I get that. That is clear. But we were selected for, historically, at smaller scales, and in those systems intelligence merged with wisdom to protect the tribe or the smaller unit. That dynamic doesn’t map, to my knowledge, to eight billion people in a global consumption-based culture.


D. S.

Right. So the first question is: is it not possible for humans—individual humans, groups of humans at different scales—to develop wisdom, i.e. relationship to base reality rather than the symbols and models that mediate it, and relationship to the wider wholes—temporally, spatially, and in terms of which agents are included? Is there anything about human nature that makes that impossible? No. Is there anything about the current conditioning environment of the civilizational system (not the biosphere) that makes that not what is being incented to be conditioned at scale? Absolutely. Is it possible for the eight-billion-person thing to continue without the wisdom? No, it will self-terminate.


So there is nothing innate to our biology that is the problem. There is something innate to the particular trajectory of the civilizational system that is. And we do not get to continue this civilizational system and what it conditions in people. So then the question is: what does it take to create environments that could condition the wisdom in people that, in turn, reinforce those environments, right? The bottom-up effects of the wiser humans creating different societies, and the top-down effects of different societies having an incentive to develop different things than the people. How do we get that collective-individual, top-down, bottom-up recursion moving in the right direction, given that there is no other answer for humans long term?


N. H.

So, last week on the phone, when we were discussing AI, you mentioned that capitalism was an early form of compute. Can we use that as a bridge, maybe, to get us talking about AI?


D. S.

Within a body, all the cells in a body have the same genome. But they’re epigenetically differentiated so that a red blood cell and a liver cell and a neuron are different. And so there is specialization, division of labor, that allows synergy across differences, and capacities of the whole system that none of the parts on their own have. So differentiation and then integration is a thing, even at the level of a body, right? And at the level of a tribe there was specialization, division of labor, so that the tribe could do more than any individual person could do if everyone was trying to do everything. When we got to the much larg—


N. H.

Kind of like the economic theory of comparative advantage?


D. S.

The story of progress and economic advantage is very coupled to this idea of specialization, division of labor, and increase in the total cumulative complexity of the space that that facilitated, of which capitalism was seen to be the dominant system for doing so.


Now, I want to say something before going any further, which is: it is very common that if one says anything bad about capitalism, the default reaction by many people from a historical perspective is: this guy is a neo-Marxist and is going to suggest something that had Stalin kill 50 million people and Mao kill 50 million people, and blah, blah, blah, and doesn’t he realize that capitalism is the best solution?


N. H.

In my Earth Day talk I did last week—it’ll be out tomorrow—I did word pairings, and how the importance of semantics influences our behaviors and gives us permissions. One of the word pairings was fossil fuels. No, the reality is fossil hydrocarbons. But another was capitalism and communism are both industrial growth-based systems. So I’m with you there on the exculpatory definitional clause. Keep going.


D. S.

So when we talk about problems in capitalism—there were different expressions, but many of those problems, in terms of environmental harm, optimizing narrow goals, whatever, happened in communism, happened under feudalism, happened under various other types of systems that operated at scale. And we’re not even going to say that at the very small scale everything was wise and awesome. That would be a kind of romantic naïveté that we don’t want to do. And we don’t want the naïveté on the other side that says it was Hobbesian—brutal, short, nasty, and mean. Neither the full romantic tribal picture nor the full regressive they-were-dreadful-animals picture is true. It was complex, and different ones were different, and whatever.


But when we’re critiquing capitalism now, it’s not that we don’t understand why it was selected for, the things it did that were more effective, or how gruesome some of the other systems were. It’s recognizing that this system, on the trajectory it is on, happens to be self-terminating, so we have to come up with a new thing. And the new thing will be a new thing; it won’t be a previous thing. I’m saying that so that someone doesn’t have to default into the reaction of “why am I listening to these commies.”


But if we have a system, even if it was the best—and we can critique democracy in the same way: Churchill’s “it’s the worst of all governance systems save for all the other ones.” When it’s “it’s the best thing, but the best thing is still self-terminating,” then we have to do new thinking, right? And so capitalism and democracy should not themselves be golden calves, beyond critique. But we are going to do the critique in a way that is historically informed.


N. H.

Quick question on capitalism, because I’ve never asked you this. I don’t think I’ve asked anyone this. There was never a person or a group of humans that said: hey, let’s invent capitalism. It was always an emergent response to the challenges and the innovation and the coordination of intelligence towards problem-solving of the day, and it took on momentum, and then institutions and everything built on top of it, right? There wasn’t a “let’s form this.”


D. S.

There were definitely points in the process that were important, and there were points in the process that were conscious. It depends what you actually define as capitalism—right, because classical capitalism versus neoclassical versus Keynesian are all quite different; different in some of their axioms. So one could say capitalism started with the beginning of private property. That would be one way one could talk about it. One could say it began with surplus: the moment there was surplus, we had to figure out who got to do what with the surplus, and who had the choice-making associated with it—that that’s the beginning. One could say: no, communism was one answer, capitalism was a different answer, so it’s surplus plus private ownership and exchange. Or one could say: actually, it was only once the medium of exchange was not itself an intrinsic value—so once we got currencies. Or you could say: actually, only once you had financial services where the currency made more of itself—so the beginning of rent-seeking or loans or debt. Or you could go forward to: not until you actually have a formalized banking system, which you could say started with the Templars, or you could say started with Dutch capitalism and the kind of ship-based mercantilism. Each of those are various steps in the story of what we could call capitalism. Then, of course, central banking and international central banking agreements, global reserve currencies, blah, blah, blah. Right?


I mean, these are topics that, I think, for both of us are just intrinsically fascinating—to understand how the human condition evolved. And they also happen to be maximally important. What’s funny—right, what you and I have in common is: we are not so motivated by what is useful to advance success within the existing dominance systems, but by recognizing that those systems have limits that we’re approaching. What is useful within the system is inherently self-terminating, so what is useful long-term is not based on what’s useful in the system—which means you have to go into: what do we even mean by “useful”? What do we even mean by “good”? How did we get here, right? So when you’re at the end of a particular kind of success paradigm, if you recognize that, you have to go pretty deep into the historical stack, the theoretical stack, to even be pragmatically focused.


N. H.

“Islands of coherence have the ability to shift the entire system,” says Ilya Prigogine.


D. S.

It’s true.


N. H.

So keep going on why you think capitalism was an early form of compute, and compute is relevant to artificial intelligence.


D. S.

So, in one of our talks we discussed Moloch, which was kind of an analogy for—you’re actually asking the question now: did anyone design capitalism, or was it kind of emergent, step by step? And insofar as the most significant features of our world are not the result of anyone’s intentional choice, it’s like: who the fuck made that? And so being able to anthropomorphize, as a thought experiment, what the collective behaviors predispose, you can get different kinds of gods. So Adam Smith did this with the invisible hand of the market, right? That progress was defined by this god of the invisible hand; that nobody was choosing the overall topology of the system. They were all doing local point-based choices: if I want this product or service at this price, I would like to make a business that provides this product or service. And yet, the totality of the collective intelligence that was an emergent property of that moved things in a particular direction. He was defining the invisible hand as a good god, right? Roughly. And kind of—


N. H.

But he did say there was an end to that benevolence on the distant horizon, once we would exhaust the surplus. But keep going.


D. S.

So if humans have desire, demand for things that actually improve our lives, and we only want things that improve our lives, and we’re rational actors who will make a rational choice to utilize the resources that we have intelligently for the things that improve our lives the most, and that creates an incentive niche for people to innovate how to make goods and services that improve people’s lives more at better value, then yes, you would get a good god emerging from that decentralized collective intelligence, right? And that’s kind of the idea of the market and capital mediating that, and having this currency that is not worth anything itself, but represents optionality for all forms of value.


N. H.

So was the market an early form of artificial intelligence? The global market, just-in-time delivery, everything optimized for profits, efficiency, et cetera?


D. S.

So if we define intelligence with the pragmatic definition of goal-achieving—narrow intelligences can achieve a specific kind of goal. An artificial intelligence that can win at chess is a system narrowly trained on one thing. One that can win at Go is another thing. One that can translate languages is another thing. One that can do image recognition is another thing. One that can optimize supply chains is another thing. That’s narrow intelligence. General intelligence is: increasingly, it can do more of those things. Fully general intelligence can, in general, figure out how to optimize any goal. Is the market an intelligence—


N. H.

So humans have general intelligence?


D. S.

Yes. And one definition of artificial intelligence is trying to make things that behave in ways that we would see as intelligent—so it’s just taking the term intelligence kind of intuitively or heuristically: if it behaves in a way that we would assess as intelligent, that’s one definition. Another definition is systems that intentionally aim to model human intelligent capacities. There are other definitions that are more interesting than those.


But when you ask this question about: so, are humans generally intelligent? Yes. I think we actually have to back up and define what artificial general intelligence versus narrow artificial intelligence is to explain why artificial general intelligence is scary, to then be able to come back and describe this market thing for a moment.


N. H.

Please do.


D. S.

So if people—probably most of the people here have heard the conversation about: are we nearing artificial general intelligence? Maybe we are nearing it in a very short period of time, and maybe that is really catastrophic for everything, and why? And probably some people have a clear sense of why. They’ve maybe heard Eliezer Yudkowsky or other people like that—Nick Bostrom—talk about the cases regarding artificial general intelligence that are the most catastrophic types of cases that one can imagine—meaning much worse than nuclear bombs. But for anyone who’s not familiar, I’ll just try to do it very briefly.


If I have a narrow intelligence, like the ability to play chess: first, if people don't understand how much better at chess the best AI systems are than the best humans, go study that for a minute and get a sense. Look at how the IBM system that beat Kasparov at chess was succeeded by engines like Stockfish, the best of the systems programmed with human games, which got so much better than humans it wasn't even calibratable. And then a totally new approach to AI that Google's DeepMind innovated, the AlphaZero system, was able to beat that system: roughly 28 wins to zero losses, with the rest draws, without having been programmed with any human games, just letting two copies of itself play each other an enormous number of times over a matter of hours, and get so fucking good based on no human system, just the rules themselves, that it could beat the previous god as if it was nothing. And it took hours of training, and you're like, “Whoa.” We aren't even in the running of relevant. We don't even have a reference for how not relevant we are at being able to do anything with that narrow goal, right?


Now, it happens that the same approach that could train that way and beat us at chess also won at Go, and also at StarCraft. Just line them up, and you're like, “Whoa. Okay.” So this is a sense of the power of artificial intelligence. Intelligence here means: achieve a goal. The goal here is win at chess, win at Go, win at StarCraft. Are all those military strategy games? Yes, those are all military strategy games. Could you apply those to real-world military and economic environments? Can you define real-world environments like games, train AIs on them to be able to win, and would they dominate that excessively? That is the whole direction of things, right? Does an AI clone—


N. H.

Real quick naïve question. So when people are applying AI to war and economic problems and innovation, or whatever, it's a series of individual narrow artificial intelligences? They're not applying an artificial general intelligence?


D. S.

There is no artificial general intelligence as far as we know today. It is the biggest goal in the space. We’re rapidly moving forward towards it. I’m going to make an argument that there is something like it, but not in the obvious sense. But the purely computational artificial intelligence systems are narrow, but they’re increasingly wide—meaning that the same system that can be trained on one narrow goal can be relatively easily trained on other goals. Which means that you don’t have to start from total scratch, right? So that is increasing generality, but not full generality.


But if I want to give a real-world example: if you make drones, let's talk about the copter-type drones, that use swarming algorithms where they fly in a kind of swarm pattern, so they can go through obstacles, and some of them can be taken out and the other ones will reconfigure, the AI that regulates their flight is so radically advanced beyond what a human controller could do, right? A human controller trying to control that fleet of drones is not even in the ballpark, just like they aren't at chess. So when you start to think about autonomous weapons, and what that would mean, you can begin to get a sense of it. And there are already examples of this: a lot of things are run by forms of artificial intelligence, cybernetic systems. AI high-speed trading has already meant that a normal human, with a maximum amount of knowledge but without those tools, cannot play in that space. AI-optimized high-speed trading can only be competed with by other AI systems. And so then—
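[Editor's note: the swarm-reconfiguration idea above can be sketched in a few lines. This is purely illustrative, not a real flight controller: each drone follows a simple consensus rule, steering toward the group's centroid, so when one drone is removed the rest automatically reconfigure around the survivors.]

```python
def swarm_step(positions, gain=0.2):
    """One control tick: each drone nudges toward the swarm centroid."""
    centroid = sum(positions) / len(positions)
    return [p + gain * (centroid - p) for p in positions]

# Three drones converge on a shared formation point...
positions = [0.0, 4.0, 10.0]
for _ in range(30):
    positions = swarm_step(positions)

# ...and if one is "taken out", the survivors simply reconfigure
# around the new centroid, with no central human controller involved.
positions.pop()
for _ in range(40):
    positions = swarm_step(positions)

spread = max(positions) - min(positions)   # remaining drones are tightly grouped
```

The point of the sketch is that the coordination logic lives in the rule, not in any operator: no human could issue per-drone commands at this rate, which is what Daniel means by a human controller "not being in the ballpark."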


N. H.

Can I ask another naïve question? So when you say that—let’s just use chess as an example, even though military finance applications are probably more dangerous and relevant to our futures. But taking chess, you said you have to train this system, and the training took three hours. So what does that really mean? There’s some business owners or coders that say, okay, we’ve got these algorithms, this artificial intelligence, and now we want to apply it to be really good at chess. They write a little code and they give the AI the objective, and then they just press go, and three hours later they’ve got a model that they can apply in a real game?


D. S.

So the techniques of artificial intelligence, the techniques of learning, those are all evolving. And to understand something about the evolution rate: there are many different exponential curves that are intersecting. In general, and you've probably addressed this on your show, in the evolutionary environment in which we evolved, yes, we have abstract intelligence, but we don't have intuitions for ongoing exponential curves, because they never happened in our environments, right? Something that started as an exponential curve would turn into a logistic curve, an S-curve. And so the rates of exponentials are non-intuitive. People are used to thinking: oh, it's getting faster, we have time to do something. Not: it 10×-ed, and then it 10×-ed again, and then it 10×-ed again, and the time period over which it does so is itself dropping, right? So we intuitively get this thing wrong. And that's for a single exponential.
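[Editor's note: a toy calculation, with purely illustrative numbers, shows how badly linear intuition tracks a compounding process.]

```python
def exponential_growth(start, factor, periods):
    """Repeated multiplication: the regime our intuitions mishandle."""
    value = start
    for _ in range(periods):
        value *= factor
    return value

def linear_growth(start, step, periods):
    """Repeated addition: the regime our intuitions are built for."""
    return start + step * periods

# Six periods of 10x-ing versus six periods of adding 10:
exp_value = exponential_growth(1, 10, 6)   # a million-fold increase
lin_value = linear_growth(1, 10, 6)        # barely moved by comparison
```

After six doublings of the exponent the two processes differ by more than four orders of magnitude, which is why "it's getting faster, we have time" is the wrong mental model.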


So if I take AI, the hardware—right?—how we are not just having CPUs, but GPUs and TPUs, and different kinds of arrangements of those, and network cards, and whatever, and how many transistors we can get in a small area—there are exponential curves in the hardware in terms of the increasing progress; Moore’s law and other stuff. There are exponential curves in how much data is being created through sensor networks and through social media being able to aggregate all this human data and stuff. There are exponential curves in the total amount of capital going in, the total amount of human intelligence going into the space, the innovation in the models themselves, in the hardware capacities for actuators and the types of things that can be sensed. And so you have intersections of many different exponential curves.


So what we meant by how an AI learns had already changed between Stockfish and AlphaZero. A completely different answer to that question. With Stockfish, you were programming in all of Bobby Fischer's games, and all of Spassky's games, giving it opening moves, and putting in all the books of chess, a type of learning that then has a lot of human feedback on getting it right. With the other technique, all that was programmed in is: what is the definition of a win, and what are the allowed moves? Then you have two versions of the system play each other, with some memory and learning features built in, where what they're seeking to optimize for is how to win. No human games built in, no theory of opening moves. But because of the speed, they can play an enormous number of games in a small number of hours. Then they have opening moves that have never been seen, that are not aligned with any theory, totally different approaches humans have never taken, because they can explore very different branches. They have the equivalent of what would have been millions of years of human chess-playing. Or maybe no amount of human chess-playing, because human memory could never hold that much; the combinatorics couldn't fit. But in hours it can be trained to do that, compared to how long it would take a human just to catch up with what is already known in chess.
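[Editor's note: the self-play principle can be illustrated with a toy. This is not AlphaZero, which uses deep networks and tree search; it is a minimal sketch of the same idea: give a system only the rules and the definition of a win, let two copies of one policy play each other, and strong play emerges with no human games programmed in. The game here is single-pile Nim: take 1 to 3 stones, and whoever takes the last stone wins.]

```python
import random

def train(pile_size=12, episodes=30000, epsilon=0.15, seed=0):
    """Learn Nim purely by self-play: two copies of the same policy play
    each other, and every (position, move) is scored by the game's outcome."""
    rng = random.Random(seed)
    Q = {}   # (pile, move) -> mean outcome for the player making the move
    N = {}   # visit counts, for incremental averaging
    for _ in range(episodes):
        pile = pile_size
        history = []
        while pile > 0:
            moves = [m for m in (1, 2, 3) if m <= pile]
            if rng.random() < epsilon:
                move = rng.choice(moves)          # explore
            else:                                  # exploit current estimates
                move = max(moves, key=lambda m: Q.get((pile, m), 0.0))
            history.append((pile, move))
            pile -= move
        outcome = 1.0                              # last mover took the final stone
        for state in reversed(history):
            n = N.get(state, 0) + 1
            N[state] = n
            Q[state] = Q.get(state, 0.0) + (outcome - Q.get(state, 0.0)) / n
            outcome = -outcome                     # flip perspective each ply back
    return Q

def best_move(Q, pile):
    moves = [m for m in (1, 2, 3) if m <= pile]
    return max(moves, key=lambda m: Q.get((pile, m), 0.0))
```

With nothing but the rules, the learned policy rediscovers the classic solution, always leaving the opponent a multiple of four stones, which mirrors Daniel's point about opening theory emerging from self-play rather than from human instruction.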


N. H.

So who owns the AIs?


D. S.

Some AIs are built by corporations, mostly public corporations. Some are built by branches of militaries or governments; national laboratories make some very powerful AIs. And obviously the AIs in public deployment as large language models right now come from OpenAI (backed by Microsoft), Google, and Anthropic, and then increasingly other companies that have to follow on: Meta, Baidu, and the like. But Tesla has very powerful AIs, very, very powerful AIs, trying to get full self-driving down. They take information from sensors and process it through artificial intelligence to control actuators. And—


N. H.

How did it happen that all these are companies that don't really talk to each other? Like, Microsoft doesn't really talk to Google in a competitive sense, so there's just a bunch of engineers around the world simultaneously developing these AIs in competition with each other, and they're all figuring out the same kinds of code, et cetera, but some are doing it a little bit faster and better?


D. S.

I mean, AI is not that different in this sense than other categories of emerging technology, where you have different companies competing where anybody can see once someone deploys what they did, reverse engineer it, and try to figure it out. They can all try to spend enough money to hire key talent away from the other company. There are academic groups that are publishing stuff that is also the cutting edge of knowledge, and then the companies take that knowledge and do stuff with it. They can all do corporate espionage on each other.


N. H.

So there isn’t one AI. There are dozens, hundreds, thousands of AIs, depending what they’re applied on and depending what vector of society they’re developed in.


D. S.

And different fundamental approaches to AI.


N. H.

What do you mean?


D. S.

Well, there are AIs that are trying to take visual sensor data and control motor actuator data to move a self-driving car: sensing and actuating the movement of a vehicle through space. There are other ones that are trying to read content online, read language—your chatbot—so they're sensing in language and actuating in language. They're optimized for language input/output. So those are very different kinds.


N. H.

But so the goal of AI in these examples is not profit maximization. The goal of the people that control the AI is profit maximization.


D. S.

The goal of the AI is whatever people code its goal to be: its objective function, or equivalently the inverse of its loss function. But the people are developing it in service to whatever goals the organization they're part of is seeking. So if it is a military group, it might be the goal of national security. If it is a corporation, especially a public corporation, it has to be profit maximization through some specific domains of action. If it's an academic group, it might be advancing the knowledge of that field in whatever ways.
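[Editor's note: "the goal is whatever we code it to be" can be made concrete with a toy optimizer. A plain gradient-descent loop pursues exactly the objective encoded in its loss function, no more and no less; the target value here is hypothetical.]

```python
def optimize(loss_grad, x0, lr=0.1, steps=200):
    """Plain gradient descent: repeatedly step against the loss gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * loss_grad(x)
    return x

# The designer's goal, encoded as a loss: "get as close to 3.0 as possible",
# i.e. minimize (x - target)**2, whose gradient is 2 * (x - target).
target = 3.0
result = optimize(lambda x: 2 * (x - target), x0=0.0)
```

Swap in a different loss and the same machinery pursues a different goal just as relentlessly, which is why the wisdom of whoever sets the objective matters more than the optimizer itself.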


So it is an important thing that you’re asking, which is: if the group that is developing an AI has certain goals, the AI will be developed in service to those goals. And so are those groups wise? If the AI is super-intelligent (and we’re already starting to get at that intelligence has to be bound by wisdom), is the goal of the group a wise set of goals, or is the goal of the group a narrow set of goals, in which case it’ll be building the AI for the optimization of that? It’s actually really a key question.


N. H.

But here’s a subset of that question. If a few of the groups choose wisdom as their goal, or if most of them do, they’ll still be out-competed by those AIs that have the narrow focus and win, no?


D. S.

So the question is: are multipolar traps obligate, or is there a way to get out of them? Agreement is a way out of a multipolar trap. The agreements are hard, but if everybody can realize that their likelihood of winning the race is low, and that the race as a whole might mess everything up for everybody, then not doing the thing is better. Was mutually assured—


N. H.

Like War Games with Matthew Broderick.


D. S.

Yeah. Mutually assured destruction was a way of saying: don’t think that you can win first strike advantage. Let’s be clear that winning equals losing. At this level of power there are things built in place where winning equals losing. So we have an agreement to not try to do the win, because anyone trying to do the win would be so catastrophically bad for everybody.
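[Editor's note: the trap structure Daniel describes can be sketched as a toy two-actor payoff matrix, with purely illustrative numbers. Racing dominates for each actor individually, even though mutual restraint beats mutual racing for both, which is exactly why only an enforced agreement escapes the trap.]

```python
# (A's choice, B's choice) -> (A's payoff, B's payoff); numbers are illustrative.
PAYOFFS = {
    ("restrain", "restrain"): (3, 3),   # cooperative outcome
    ("restrain", "race"):     (0, 5),   # the restrainer gets exploited
    ("race",     "restrain"): (5, 0),
    ("race",     "race"):     (1, 1),   # worst joint outcome: everyone races
}

def best_response(my_options, their_choice, me_index):
    """Pick the option maximizing my payoff given the other's fixed choice."""
    def payoff(mine):
        key = (mine, their_choice) if me_index == 0 else (their_choice, mine)
        return PAYOFFS[key][me_index]
    return max(my_options, key=payoff)

# Whatever B does, A's best response is to race (and symmetrically for B),
# even though (restrain, restrain) beats (race, race) for both players.
a_vs_restrain = best_response(("restrain", "race"), "restrain", 0)
a_vs_race = best_response(("restrain", "race"), "race", 0)
```

Mutually assured destruction, in these terms, is an enforcement mechanism that changes the payoffs so that "race" (first strike) no longer dominates.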


And the challenge with these agreements is: you can enforce such a thing within a nation, right? Within a nation where you have a monopoly of violence backing up rule of law, you can make sure nobody cuts the national forests down, even if there's a lot of motivation to do it, because you can make a law that says you're not going to, and the monopoly of violence is stronger than whatever capacity the logging companies are going to bring to bear. But if you don't have monopoly of violence and rule of law, how do you enact it? So internationally, and this is where we face so many global catastrophic risks, whether it's the destruction of the oceans or the atmosphere or biodiversity, or developing synthetic bio or AI: if anyone else is doing the thing, we lose in the short term by not doing it, should they deploy it. And yet, if everybody does it, most likely everybody dies. So how do we bind those? The tricky thing is: how do we know the other guy is keeping the agreement, and not defecting and secretly doing it?


So with regard to AI, can you get—so we see that OpenAI was originally created for safety purposes, out of concern about how far ahead DeepMind was of everybody else, and the need for an open-source approach so there was not only one centrally concentrated power. And through whatever series of things happened, the thing that was originally a non-profit dedicated to safety is now a capped-profit company, heavily funded by and deeply tied to Microsoft, deploying in competitive races against the others. And then Anthropic broke off from there to do safety, took around $300 million from Google, and ends up in similar dynamics.


And so without some kind of shared agreement that nobody does a particular thing, then yes, you end up having an incentive gradient for everyone to race to get there first. Even the ethical actors, who hold: if I get there first, I'm the ethical guy, so I'll win that race, and then I'll use that power for good. And that conversation is very actively—


N. H.

[???], yeah. That’s like a billionaire being aware of the great simplification and still trying to maximize their optionality to have more money so that they can do more philanthropic good in the future.


D. S.

It is amazing how powerful motivated reasoning is on bias.


N. H.

Let me ask more naïve questions, Daniel. We’re getting to the heart of it here. Could you program AI for wisdom?


D. S.

This is the AGI alignment question, and there’s different ways of kind of getting to this. So let me construct the AGI concern versus the narrow AI concern for a moment. So we were on the track of: what is narrow AI? It is optimized for a certain set of goals and it can get extremely good at it. It can get better than humans by not just a little bit, but in many domains, by so much that we have a hard time even understanding it.


N. H.

And that so much, as we speak, is accelerating by the month.


D. S.

Yes. And so without fully general intelligence, fully autonomous systems, then the AI is in service of humans using the AI for purposes. The humans are the agents, the AI is an enhancement of our goal-achieving capability, right?


N. H.

It’s just a tool.


D. S.

I would argue that even the word “tool” makes us think badly. This is why I was saying technology, meaning goal-achieving, is such a broad set of things: the types of tools an end user uses, versus the types of tools that make other tools (machine tools), versus the types of tools that innovate how to make lots of other tools, like computation. Yes, they're all tools, but they're really fucking different in kind. We wrote a paper called Technology is Not Values Neutral, which should link here, which discusses why it's not just that we have our values and we use our tools; it's that the tools give us the capacity to do certain things better, to achieve certain goals better. And insofar as they do, the humans that use them win at things, and as a result everybody has to use them or they kind of lose. So, one: the tools become obligate.


Two, the tools encode different patterns of human behavior. Now I'm doing the behavior where I'm using that tool as opposed to doing some other thing. Encoding that pattern of human behavior changes the nature of human minds and societies at large. So it is not true that you've got values and the tools are just neutral. The use of the tools changes the human mind, individually and in aggregate, and then becomes obligate. So psychology affects the tools we create, but the tools in turn affect our psychologies. And this is why I was saying our nature is to be malleable to the environment, because we change the environment so much. But if we don't consciously know how to direct that, it will get directed by a kind of downhill gradient that ends up in an evolutionary cul-de-sac.


So with that little aside that “tool” or “technology” is actually a very deep concept: yes, a narrow AI that is not an agent, that a human is using, is a tool. But it's a tool of a very special kind, where it can take lots of steps on its own to help achieve a goal, rather than me just using it for fully specified purposes, right? So this is an important distinction. Now, if I look at AI benefit first, so as to not seem overly negative: for all goals humans have that would create some progress or benefit to some real thing, can we use AI in service of those? Yes. Not all things equally today, but lots of things. All the scientific progress we have on solving new diseases and things like that: can we use AI to accelerate it, and accelerate movements in science, and discovery in medicine? Might AI be able to speed up the rate at which we come up with way better nuclear energy, maybe fusion, maybe deep geothermal? Yes. Might it be able to solve many types of cancers? Yes. If I have a daughter who is dying of cancer, do I want to hear anyone talk about slowing down the systems that, if they get there fast enough, might save her life? I don't want to hear that. That sounds like the most cruel, evil fucking thing. Whatever other problem it's going to bring about, I'll deal with that problem later.


N. H.

For someone listening to this who cares mostly about the environment and climate change: is there a progress case to be made that AI will help reduce carbon emissions and environmental impact?


D. S.

Can I use AI to model how to do geoengineering more precisely? Can I use AI to do better genetic engineering on crops to maximize their carbon sequestration, if I'm a carbon fundamentalist? Can I use AI to advance stem-cell meats? Can I use AI to advance energy technology, or battery storage technology, or any number of things like that? Can I use AI to affect supply chains: to lower the energy used in supply chains, and to decentralize a lot of things so you don't have as wide a supply chain? Yeah, you can apply AI to all those things.


N. H.

Okay. Got it.


D. S.

And so for any goal we have, if I can use that to enhance the goal, I don’t want to hear anybody talking about slowing that down.


N. H.

Yeah, I agree. Or at least I understand.


D. S.

This is the same as capitalism, right? If me having access to more capital speeds up my ability to achieve my goal, and I believe in my goal, then I don’t want to hear anything about taxing that or decreasing that or fucking up my capacity to achieve those goals.



And so are there a lot of things that human intelligence can do that are good, that increasing that type of human intelligence through artificial systems that can operate on more data and faster could also do? Yes. Now, why would we be so concerned about it, then?


So let’s talk about a few different cases. One type of AI risk is: AI employed by bad actors. Now, nobody thinks they’re a bad actor, for the most part, right? There are some exceptions. But most people are called bad actors by someone else, right? They’re terrorists, but to themselves they’re freedom fighters. Was it a protest or was it a riot? Blah, blah, blah. The Lakoff framing on those things is very much: is it progress, or is it destroying all these areas? For the people being destroyed by it, who want to do some destruction back that they consider tiny, in the name of self-protection, it is a protest and a freedom fight. Otherwise it might be a terror act or a riot, right?


But let’s just pretend we weren’t thinking about all that and call it bad actors for a moment; criminal activity. Take a dude who would have just shot up a bunch of stuff with an AR-15, because that was a technology he could use to achieve harmful goals. Obviously this is why people are concerned about assault weapons: because he couldn’t kill as many people as easily with a knife, right? Or with a single-loader pellet gun, when the [???] ammo was created. Well, such people can do way more harm as drones that you can hook explosives to become more widely available. And it happens that the bad actors who want to do fucked up stuff are oftentimes in states of mind where they’re not amazing technologists, and can’t coordinate lots of people and do strategy and technology. This is not always true, but sometimes it has been true. We have been saved by this.


But as we make the destructive capacity—now, it might have constructive capacity. I can use a drone, of course, for a lot of positive things: to be able to monitor construction and railways for safety, and to plant trees from the sky, and whatever. But can we also use it to fly an explosive over some critical infrastructure? So when we make the thing for the positive purpose, we also enable all the things that it can do for any purpose. When we make that easy enough—


N. H.

And in that case, probably 80+% of those bad things we can’t even imagine at the beginning.


D. S.

That’s a complex topic I want to get to—how you do externality forecasting. Because we can do a much better job than we ever have done, and saying that we couldn’t is a source of plausible deniability for not trying.


N. H.

Well, what I meant was: there’s unknown unknowns, to use Donald Rumsfeld’s term, with AI. Yeah.


D. S.

But if we’re—okay, I have to do this tangent, then, because it is fucking critical. There are problems that—


N. H.

Okay. Well, we’re already in line for this to be the longest podcast I’ve ever done, so let’s do it right. Keep going.


D. S.

Because there are problems that are second-, third-, nth-order effects that were not easy to anticipate, can we use a technology for a certain purpose that creates unanticipated and maybe unanticipatable consequences? Yes. Can you prove that you can anticipate in advance everything? No. Because there will be some things that are unknown unknowns that you can’t have proven that you thought through in the safety analysis ahead of time. So that’s a true thing. But because that’s a true thing, people use that as a source of bullshit plausible deniability to say I couldn’t have possibly known, where then they don’t even really try to forecast the things, because they will privatize the gains and socialize the losses.


And this is a very, very important thing to understand—also related to the progress narrative and the underlying optimist versus pessimist, or (I would state) opportunity versus risk orientation. The people who focus more on the opportunity of a new technology—this is going to do all these amazing things, blah, blah, blah—they’re going to move faster focused on that than the people who are focused on the risk and really want to do good, thorough safety analysis and make sure it won’t cause any of those risks. It takes a lot of money and a lot of time, and it doesn’t rush you to market as fast as possible.


So the first-mover advantage is going to happen by the guys who take risk less seriously, focus on the opportunity more. They’ll be able to get the network dynamics that are very, very powerful from associating with first-mover advantage in early scale. They’ll be able to regulate in their interest, because they state that the risks are not that bad, and they do lobbying efforts with the money they got from early revenue or investment or et cetera. The people who take the risk seriously do the analysis and say, “Oh fuck, there’s actually no way to advance this that is good right now. We should just not do it.” Well, those people just get to sit on their knowledge of the problem and not do anything, but also not gain any power to affect the system, because they’re not going to generate money by which they can do lobbying and public opinion effect and et cetera. Or they say, “There is a safe way to do it, but we just spend all of our money figuring out safety, not doing optimization.”


So there is a perverse incentive in general for those who are more focused on opportunity than risk, and as a result we get all the opportunities and all the risks. And the risks happen, the cumulative effect—


N. H.

And that’s true with capitalism and the market in the last fifty years.


D. S.

It is a market function. Not fifty years—forever.


N. H.

Right. Okay. Yeah.


D. S.

But the risks are getting larger as the technology gets larger, and the cumulative effects are increasing. So now we’re at a place where the cumulative effects of industrial tech on the environment are reaching critical tipping points of planetary boundaries, and the exponential tech is getting to a place where its destructive capacities are so fucking huge, right? Like, the atomic bomb was the first thing that could destroy everything quickly. And until then, for all of human history, we couldn’t do any quick thing that would destroy everything. Now the atomic bomb is not the only thing like that. Synthetic bio, where you can make an artificial lifeform that doesn’t have a few natural genetic mutations, but so many that it could be an invasive species everywhere, right? Or artificial intelligence. These are examples of things where their destructive effects are actually exponentially more than any previous kinds of thing.


But if you combine things like a public corporation having a fiduciary responsibility for profit-maximization—which you can argue made sense: the shareholders are giving you their money to work with, those shareholders come from pension funds, you need to give that back to them, and they can only trust you with it if you have this bound fiduciary responsibility to return their funds appropriately while you innovate a good or service that people want, that is net good for the world, blah, blah, blah. Of course, you can see where the logic of the market became less and less true: rather than rational actors, we figured out how to nudge everybody into less and less rational action, and the supply side figured out how to manufacture demand for things that don’t increase the quality of people’s lives, and didn’t account for the costs to the environment. But you still have that story.


So you’ve got the must-maximize-profit piece. Then there’s the argument that nobody would want to innovate, even though the innovation is good for everyone, if they were personally liable for the bad things that happen; so we make liability-limiting structures where the corporation gets fined, not any of the people or directors involved. So then doing things that destroy everything just becomes a cost of doing business. No real deterrent for it. So the corporation will privatize the gains and socialize the losses. The actual people who make the decisions have a bunch of upside and no comparable downside.


You put all those things together, and you say: AI is being developed in those environments. So it has to do profit-maximization, and the people who are making the decisions do not carry liability proportional to the scale of risk and harm that could occur. Those who are more opportunity-focused build the corporations that become worth tens of billions of dollars and have all that power to influence lobbying and public opinion. And those who do the safety analysis run tiny nonprofits that, comparatively, nobody listens to. So it’s important to get that this asymmetry, which perversely advantages those who think about opportunity more than risk, and who rationalize that, just as risk is moving to an exponential scale, is itself one of the underlying drivers of the metacrisis.


N. H.

And embedded in that—and I understand that—embedded in that is the natural human-scale ethical feedback loop when people are innovating, and they go and work in a room for months, and they’re working on something, they kind of get a little bit of recognition and an insight into what they’re doing. Here you code something, and you press a button, and all those negative potential externalities are just in the future, and you get no emotional sense of what’s going on. Is that also true?


D. S.

Yeah. The fact that it happens at a scale: many of the harms will occur via a supply chain somewhere in the world I won’t see. And as we already said, there’ll be second- and third- and fourth-order effects. So, and this is a very important point, we can say that all technology is dual use; “dual use” being a military term. And there are two different ways to think about it. A technology is developed for a military purpose, so its primary use is military, but it also might have civilian or non-military applications, right? We’re developing the rocketry capability to make missiles hit their targets, but maybe we’ll also be able to use it to put satellites into outer space for communications for everybody. And obviously, computation was developed in World War II to crack the Enigma machine and so on for military purposes, and it had a lot of other purposes. On the other side, we can say anything that is developed for a non-military purpose probably also has a military application, right? So dual use goes both ways. If I’m developing something for a non-military purpose, it still probably has a military application: you can call the positive side defense, but the flip side, the offense or killing capability, means this has to be factored into the development of the technology.


N. H.

A dumb question, though: if someone at Microsoft is creating an AI for some purpose, they’re not sharing that with the U.S. government or any government, right? It’s got to be independently developed by coders and engineers within the government, within the military? No?


D. S.

No, no. No, no, no. If a technology is seen as having risks to national security, then there will end up being government bodies that have oversight in certain capacities.


N. H.

Got it.


D. S.

And that’s one way. The other way would just be: the corporation develops the capacity and one of its clients becomes the military, so it becomes a military contractor in addition to other things. Or: Microsoft doesn’t develop a military-side application at all, it develops LLMs, and then Palantir develops competing LLMs and makes a military application of them, right? So the technology itself will get developed in lots of places, will get reverse engineered, and will get used for lots of purposes. So all technology is dual use.


Now, on the side of: we’re developing something for civilian purpose that we say it’s positive, but it also has military. Okay, we have to think about that. But the other side is also risky. Because if you’re developing a military technology that is very powerful and very dangerous, but at least you think you can control it, the moment that that same capacity also gets a civilian purpose means it will proliferate, and it makes it much harder to control. Like, let’s say you developed drones for military purposes. Now it gets a commercial application, which is I can just fuckin’ film stuff with it. Now everybody can get access to drones. Wow, that just really affects the capacity for decentralized terrorism.


So, okay. Tell me where you want to go. I’m about to go to the next part beyond dual use.


N. H.

And you can. I’m just going to comment that I get this sinking feeling that we’re headed towards a risk singularity.


D. S.

Yes. The metacrisis is a risk singularity in which the underlying drivers over-determine failure. Meaning: if we could prevent the AI apocalypse, that doesn’t prevent the synth-bio one or the planetary boundary one or the gazillions of other ones. If we could stop the planetary boundary crossing on fishing, that doesn’t affect what we’re doing to soil or nitrogen runoff or PFOS pollution, or whatever. The underlying thing is creating so many different sources that can lead to catastrophic risk that if you don’t deal with the underlying thing and you just deal with some of the risks, you only buy a tiny bit of time.


N. H.

I have thought that the underlying thing was the emergent phenomenon that I call the superorganism, which is the growth-compulsion of our global market system. What you’re saying is that we had agricultural surplus, then we found flammable fossils, then we accelerated our technology, then we went to debt, then we had the Internet. And each one of these kind of exponentially increased what had come before. And you’re saying that AI is the next “tool” in service of this growth-based superorganism?


D. S.

Yeah. So, you say “growth,” which is a fair way to say it. But let me define it slightly differently, because I’m going to define it in a way that is more aligned with this AI conversation. So I think it actually gives a lot of insight into the other. Narrow goal-achieving is the underlying generative dynamic.


N. H.

Okay. I get it.


D. S.

And growth is an epiphenomenon of that.


N. H.

Wait, an epiphenomenon?


D. S.

A result of. A second-order effect.


N. H.

Okay. So the first thing is the narrow boundary goal, and growth is the second-order effect of that narrow boundary goal.


D. S.

Yes. So I want to achieve a goal, and having a little more surplus gives me more optionality to do that. But when I’m rivalrous, the increased security I gain inherently decreases somebody else’s security, or their relative competitive advantage in a status game for mating, or whatever it is. So now they have to do that. Then I see them doing it. So now I need to do more than I needed to do before. Now we’re in a race on it. Growth is the epiphenomenon. What everyone is pursuing is not the growth of the whole system; they’re pursuing their own narrow goals and generalized optionality for goal-achieving.


D. S.

And the growth in total consumption of energy and atoms, the growth of total waste and entropy, the growth of intelligence and new types of technology, the growth in memetics and the ability to convince a lot of people of it are all epiphenomena of both goal-achieving in specific, and then the general capacity of increasing optionality for goal-achieving.


N. H.

Could we ask AI how to go from a narrow- to a wide-boundary goal?


D. S.

So, all technology has certain affordances, and all technology has combinatorial potential with other technologies. And the affordances of them together are different than them on their own. And obviously, not only does the hammer have different potentials if I also include nails and saws and the other things than it would on its own, but I can’t even get a hammer without the smithing tools that would be necessary to make a hammer, right? So there’s a whole technological ecosystem.


N. H.

I’m just—I mean, I’m cutting to the chase on this. I’m just wondering what AI does to our current complexity.


D. S.

Yes. So there’s a reason I’m having to try to do this. All tools have combinatorial potential with other tools. They all have certain affordances. AI is unique. Computation has been unique, and then AI, even within the space of computation, it is more combinatorial with every other type of tech than any other tech is. And it has more total affordances than any other tech has.


N. H.

What’s an affordance again?


D. S.

Things that it makes possible. So, for instance, if I have a nuclear weapon, it makes certain things possible that are not possible without it. It does not intrinsically make me better at bioengineering. It does not intrinsically make me make better drones. Computation can be applied to make better drones, better bioengineering, better nuclear weapons, better propaganda, better marketing, better financial systems, better chips to make better AI systems. So the AI—because all the other tools are made by the kind of human intelligence that makes tools, and AI is that kind of human intelligence externalized as a tool itself—it has a capacity to be omni-modal, right? Not dual use, omni use—more than anything else is. And omni-combinatorial. Take the risk of AI with synthetic bio. The dual use thing here is, I’d say: oh, I can use AI and kind of protein folding stuff to be able to solve cancers. But once I develop it for that purpose, I can also make better bioweapons with it, and now I’ve made that ability cheap and easy. Or I can use it on a chemical database to do drug discovery, but I can also make chemical weapons. I can use it to optimize supply chains, but I can also use it to do optimized terrorism on breaking supply chains.


And so the AI has the capacity to increase every agent’s capacity to do every other motive in any context with any other combinatoric technology in a way that nothing else has. And that’s really important in understanding, because we’ll develop it for a particular positive purpose, for a particular set of agents, but in doing so lower the barrier of entry of all other agents for all other purposes being able to use that capacity.


N. H.

So from my biophysical frame—which is not where you’re coming from, but I’ll just interject this—I think a lot of people who don’t fully understand AI look at the invisible but real biophysical body that AI has, which requires energy. I read in the last few days that AI requires a lot of water for cooling and things like that. But the larger environmental impact on resources, CO2, and planetary boundaries is not what’s needed to power the AI itself; it’s the resulting acceleration of all the other consumption and things that were already going on, which are just being exponentially increased by the AI.


D. S.

Yes. So let’s look at AI driving Jevons paradoxes in your world. So let’s say there’s a bunch of areas where the mining is not quite profitable, right? That the cost of extraction, I can’t on the current market sell the thing for enough money, so we don’t mine that area. Now, the AI figures out efficiency such that a whole bunch of areas that weren’t quite profitable are now profitable. Awesome for the economy, new economic growth for everybody, and also more rapid growth of the superorganism and more environmental externalities for everyone under the current motivational landscape and incentive landscape, down to both the market incentives and even the legal landscape. Because the market incentives won’t create a real liability deterrent because of liability-limiting things, and they actually require you to do profit-maximization and things like that, right? So—
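The Jevons-paradox dynamic Daniel describes can be sketched as a toy model (all prices, costs, and numbers here are hypothetical): an efficiency gain lowers the effective cost of extraction, marginal deposits cross the profitability threshold, and total extraction rises even though each unit is produced more efficiently.

```python
# Toy model (hypothetical numbers): deposits with varying extraction costs.
# An AI-driven efficiency gain cuts effective cost per unit, so more
# deposits become profitable and total extraction rises -- the Jevons
# paradox dynamic described above.

PRICE = 10.0  # assumed market price per unit of ore

# Hypothetical per-unit extraction cost for each deposit
deposits = [4.0, 7.0, 9.5, 11.0, 14.0]

def profitable(costs, efficiency=1.0):
    """Deposits worth mining: effective cost = cost / efficiency."""
    return [c for c in costs if c / efficiency <= PRICE]

before = profitable(deposits)                   # baseline efficiency
after = profitable(deposits, efficiency=1.5)    # 50% efficiency gain

print(len(before), len(after))  # → 3 5 : more deposits mined after the gain
```

The point of the sketch is only the threshold effect: efficiency does not reduce throughput when the system keeps seeking returns; it expands the set of profitable extraction.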


N. H.

How is there not a massive discussion in the climate space about AI right now as a generator to bring us to 500 ppm in a decade?


D. S.

I don’t know. Maybe this conversation will help with that. I think it—


N. H.

But you’re very worried about that. You’re worried about the planetary boundaries impact of AI.


D. S.

Either you have the market still running things, and AI working within the market—in which case, yes, the market incentives have cost externality as fundamental to them. If you had to pay the real cost of things, the market would collapse as we understand it. And so the AI causing more efficiencies will drive Jevons paradoxes, because you will continue to seek returns and you will drive planetary boundaries faster.


The AI person might argue no, this is not true, because the AI allows us a completely new economic system whereby we can actually track all those externalities and do very complex optimization. That means the market doesn’t run the world anymore, that means a centralized AI system runs the world. That is its own dystopia, because either you have separate competing groups that don’t have totalizing power, in which case they’re caught in multipolar traps, or you have a single group that combined all of those, but now there is a central point of failure, capture, and corruption, and no checks and balances on the power. That’s why we talk about one bad future being catastrophes, the other being dystopias. Because to control for all the catastrophes orients towards very centralized power, which gives dystopia.


So what we need is a third attractor that is neither of those, which means it is neither a central AI coordination system nor is it AI in service of markets—which means it is some new thing that, like markets as a collective intelligence system, are also a collective intelligence system, but it doesn’t have the perversion and externalities of markets. So a wiser collective intelligence system, wiser than democracy or capitalism, that, yes, it will employ AI and computational capabilities to mediate just in the same way that democracy required employing the printing press. And so new physical technologies enable—and just like computation has enabled all of modern banking—new technologies make new social systems possible. But the goal here is not an AI overlord that runs everything, it is: how do computation and intelligence capacities make new collective intelligence possible such that you can have the global coordination to prevent global catastrophic failures, but where that system has checks and balances in it so you don’t have centralized power, coordination, and corruption failures?


N. H.

So it seems to me that, if we weren’t in this metacrisis and Wile E. Coyote moment with all the leverage in the system and the geopolitical fragility and everything, that we could have time to have some wisdom embedded in our information systems and our governance. But we’re at this late stage where the snowball is getting bigger and bigger, and how do we inject wisdom into this situation with all these things going in the opposite direction? Obviously you must have some idea or a plan, or you wouldn’t’ve dropped all this really scary stuff on me and the listeners. Or are you just calling this out as a risk that we need to have massive urgent discussions on?


D. S.

I think there are people who are AI experts—which I’m not. I have a focus on how to think about the relationships between all of the kind of ideas of progress and risks that gives an insight into AI. But look at whether it’s Hinton or Russell or any of the famous AI pioneers and the risks they have called forth, or the call to pause large language model deployment, or the most serious case, Eliezer Yudkowsky’s conversations. If people haven’t seen the first podcast he did on Bankless that kind of started this recent wave of taking these things more seriously: it was so fascinating to watch, because you could tell the podcasters didn’t really know what he was going to say, and thought he was going to talk about AI and crypto. And he came on and—you know, he founded the Machine Intelligence Research Institute and has spent his life focused on the topic of the risks of artificial general intelligence, which he thinks we are both very close to, and not close to aligning.


And to understand that—which we haven’t discussed yet, because we keep not quite getting to the whole construction—you have narrow intelligence, which humans can use for fucked up purposes and markets can use for fucked up purposes. But then a general intelligence becomes kind of its own thing. It becomes its own agent, rather than a tool of us as the agents, can make its own goals, and can increase its speed of learning (meaning in competition relative to us) so much faster than us that if its goals don’t align with ours, we’re fucked.


N. H.

So using the chess example of the three hours and trillions of iterations, the AI could use itself, it could train itself, without humans giving it the orders?


D. S.

Yes. But across broader things. So it could come to a new domain where it wasn’t already trained what to do and figure out: what are my goals in this space? And, for whatever my big picture goal is, what would the goals on the path, the instrumental goals, be regarding this whole space? And so, kind of in the way you say: no matter what your goal is, it’s probably going to require energy. So “get more energy” ends up becoming a goal in and of itself, because it increases optionality to fulfill other goals. “Get more capital” becomes a goal in and of itself. It’s called an instrumental goal. The AI can start figuring out: well, I can gain all the optionality in this space by such-and-such. I’ll mine all the things, I’ll et cetera, et cetera, because I’ll be able to use them for goals in some future time that I have.
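The instrumental-convergence idea Daniel is describing can be sketched in a few lines (the goal names and the dependency map are hypothetical, purely for illustration): planning backward from very different terminal goals surfaces the same resource-acquiring subgoals, like energy and capital.

```python
# Toy illustration of instrumental convergence: very different terminal
# goals all route through the same resource-acquiring subgoals.
# The goal names and this dependency map are hypothetical.

REQUIRES = {
    "cure_disease": ["run_experiments"],
    "win_market": ["build_factories"],
    "run_experiments": ["acquire_energy", "acquire_capital"],
    "build_factories": ["acquire_energy", "acquire_capital"],
    "acquire_energy": [],
    "acquire_capital": [],
}

def subgoals(goal):
    """All instrumental goals reached by planning backward from `goal`."""
    seen = set()
    stack = [goal]
    while stack:
        g = stack.pop()
        for need in REQUIRES[g]:
            if need not in seen:
                seen.add(need)
                stack.append(need)
    return seen

# Different terminal goals converge on the same instrumental set.
shared = subgoals("cure_disease") & subgoals("win_market")
print(sorted(shared))  # → ['acquire_capital', 'acquire_energy']
```

Whatever the terminal goal, the backward search keeps landing on “get more energy, get more capital,” which is the convergence Daniel names.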


So if we talk about a general autonomous intelligence of that kind—that could beat us at any games, can redefine anything as a game, that can win—it’s very easy for us to realize: it would be really bad to make that thing before being sure that its goals don’t mess up everything for us. Because our goals really suck for all the species we drove extinct. Our goals really suck for all the animals in factory farms, and for all the cultures that were destroyed. So if there’s something that is that much smarter than us at goal-achieving, and similarly narrow in its goal-achieving, then we might be like the animals in the factory farms or the extinct animals very soon. And so—


N. H.

Or the cultures with wisdom that were out-competed.


D. S.

So Eliezer came on this podcast having kind of world-leading expertise in this topic, and just spoke honestly to these podcasters who did a good job of staying with him but were totally not prepared, and he’s like: yeah, we’re on our way to this thing and we’re all going to die. And then they asked him: so what do we do about it? And he’s like: I don’t know. I’ve been working on this for decades, and everything that I thought would work doesn’t work, and yet the market is driving everybody ahead, and I have no idea. And then they’re sitting there: uuuhhh…


N. H.

How is the market—


D. S.

Wait, let me give you this last part. You should really watch it. Then they’re like: what do you want the listeners to take away? And he’s like: I don’t know. I just feel like at the end we should at least be honest with each other. It’s a very poignant moment, right? It’s a very poignant moment. I’m not doing that right now. I actually do think that there is a way forward, but I think what he said shows what someone who is very bright, who spent their whole life looking at this—like, it’s very important to take that as a data point.


But you were asking this question: okay, what do I want listeners to take away? So I was giving as a reference arguably what many of the top experts in the world come to with this. So I will offer what might be a way forward, but it does require taking seriously how deep a thing we’re talking about.


N. H.

I can understand how narrow AIs are pursued by corporations and by governments to achieve tasks and achieve profits. But why would a corporation pursue artificial general intelligence if profit was their objective? Wouldn’t that also run the risk of killing everyone and these other extreme scenarios, such that profits aren’t worth anything if the whole system blows up into a giant superorganism?


D. S.

You could ask: why would any country pursue having nukes, when the more nukes that exist increases the probability that everyone, including them, dies with nukes? And yet, you can see from the point of—


N. H.

So it’s an arms race.


D. S.

Yes. It’s partly an arms race, and it’s partly a set of these biases; that those who are more focused on the opportunity than risk end up being the ones that rush ahead. And so the people who don’t think AGI will kill everything try to build AGI because it will solve everything. Wow! Imagine intelligence like ours, but so much more. It’ll solve all of science, it’ll give us radical life extension, it’ll create wealth and abundance for everyone, and we will get ushered into the promised land.


N. H.

So we’ve got the naïve progress people shepherding the AI development train.


D. S.

The naïve progress, combined with the motivated reasoning on capital and winning and ego and all of the various sources of motivated reasoning, combined with once they start actually being legally bound for things like profit-maximizing, and that the liability is inherently externalized while the profit is centralized, then combined with things like the country doesn’t even want to regulate them because it wants the economic growth so that it’s not fucked, because it uses that economic growth to grow militaries and geopolitical alliances—so then you get a: yeah, but if we regulate, then China won’t, and we lose. So you get layers and layers of multipolar traps driving the need to continue to optimize for near-term narrow interests.


N. H.

This is not where I thought the conversation would go, really. I’m more depressed than I thought I would be. I kind of—you and I have talked about this, so I’m kind of aware of some of these risks, and they are risks that we haven’t talked about, and there’s information hazard with some of these risks. But this sounds highly plausible to me, and I think even before AGI would be reached there are plenty of other—well, just using AI as a vector to accelerate all of the things. That itself is enough to put us into tipping points.


D. S.

Okay, so here’s—and I know we’re late, and the sun has been setting in the beautiful mountain background behind me, so we need to wrap soon. So I wanted to try to bring back a few threads. Human intelligence unbound by wisdom, it is fair to say, is the cause of the metacrisis and the growth imperative of the superorganism, or the capacity that gives rise to it.


D. S.

That intelligence has created all the technologies—the industrial tech, the agricultural tech, the digital tech, the nuclear weapons, the energy harvesting, all of it. That intelligence has created all those things. It has made the system of capitalism, it made the system of communism. All of those things, right? And now that system of intelligence—which takes corporeal capacities, things that a body could do, and externalizes them, the way that a fist can get extended through a hammer, or a grip can get extended through pliers, or an eye can get extended through a microscope or a telescope, or our own metabolism or musculature can get extended through an internal combustion engine—so it takes the corporeal capacity and extends the fuck out of it extra-corporeally. So that type of intelligence that does that is now having the extra-corporeal technology be that type of intelligence itself in maximized recursion, not bound by wisdom, driven by international multipolar military traps and markets and narrow short-term goals at the expense of long-term wide values.


So in the metacrisis there are many risks. Synthetic biology can make bad pandemics, and extreme weather events can drive human migration and local wars, and this kind of weapon can do this, and this kind of mining can cause pollution, and this kind of pesticide can kill these animals. So those are all risks within the metacrisis. AI is not a risk within the metacrisis, it is an accelerant to all of them, as used by the choice-making architectures that are currently driving the metacrisis. If we weren’t in a metacrisis, if we had different choice-making architectures, we’d be using AI for different things. But if AI is in service of human goals, and human goals have driven the superorganism and the metacrisis the way they are, then this context of human goals acts as an accelerant of them all.


Now, if I think about AGI risk: we make an AI that is fully autonomous, we can’t pull the plug, it has goals that aren’t ours, it starts optimizing for those, and in the process decides to use all the atoms for something else and completely terraforms the Earth, right? Which it could do. It could totally do, and we’re moving way faster towards that than we are towards safety on that. That is a risk within the metacrisis landscape. But AI being used by all types of militaries, all types of governments, all types of corporations, for all types of purposes—achieving narrow goals, externalizing harm to wide goals—is an accelerant of the metacrisis on every dimension.


And so now, as we take the intelligence that has driven these problems unbound by wisdom and we exponentialize that kind of intelligence, we get to see: whoa, superintelligence is really fuckin’ potent. What goals are worth optimizing with that much power; with something that’s a trillion trillion times smarter and faster than humans, what goals are worth optimizing? It’s not global GDP, because I can increase GDP with war and addiction and all kinds of things, and destroy the environment. So I say: okay, it’s GDP plus Gini coefficient, plus this other thing, plus carbon, plus whatever. Nope, there’s still lots of life that matters outside of those ten metrics, or hundred metrics, that I can damage to improve those.


The metric set that is definable—like, the Tao Te Ching says the Tao that is speakable in words is not the eternal Tao—the metric set that is definable is not the right metric set. So if I keep expanding the metric set to be GDP plus dot, dot, dot, dot, dot, I can still do a weighted optimization with an AI on this and destroy life. And the unknown unknown means there will always be stuff that matters that has to be pulled in, where I don’t want to run optimization on this thing.
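The failure mode Daniel describes, optimizing a weighted set of tracked metrics while unmeasured values absorb the damage, can be sketched as a toy example (all actions, metrics, weights, and numbers here are hypothetical):

```python
# Toy sketch of Goodhart-style optimization: maximize a weighted score
# over *tracked* metrics; an untracked value silently absorbs the damage.
# All actions and numbers are hypothetical.

# (tracked GDP gain, tracked carbon score, UNTRACKED ecological harm)
actions = {
    "restore_wetlands": (1.0, 2.0, 0.0),
    "strip_mine":       (5.0, -1.0, 9.0),
    "expand_ads":       (3.0, 0.0, 4.0),
}

WEIGHTS = (1.0, 1.0)  # weights on the two tracked metrics only

def score(action):
    gdp, carbon, _harm = actions[action]   # untracked harm is invisible here
    return WEIGHTS[0] * gdp + WEIGHTS[1] * carbon

best = max(actions, key=score)
print(best, actions[best][2])  # → strip_mine 9.0 : highest score, worst unmeasured harm
```

No matter how many metrics get added to the tuple, the optimizer only sees what is in the weighted sum; anything outside the defined metric set is fair game to sacrifice, which is the point about the unknown unknowns.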


So then the question comes: if I have something that can optimize so powerfully, what is the right thing to guide that? It’s the thing that can identify the difference between the set of metrics you’ve identified as important and reality itself. The limits of your own models. That is not intelligence, that is wisdom. I am defining these roughly, in a way that gives someone the sense: the difference between what the weighted metric set of all the identified important metrics, and the optimization function on it, says you should do, and what you should actually do: that difference is wisdom. And it requires being able to attune to more than just the known metrics, and more than just the optimization logic process on those.


D. S.

And so as superintelligence shows us how fucking dangerous narrow optimization is, it even shows us our own intelligence, and our own intelligence running across all the humans—via things like markets. And the market incentivizes some humans to innovate new ways to turn the world into dollars, and other humans to take those innovations and exploit the fuck out of them, right? So both search algorithms and optimization algorithms—the market makes the general intelligence of humans do those in groups. So the cybernetic intelligence of corporations and nation-states and the world as a whole is already a general autonomous superintelligence—running on all the humans as general intelligences (rather than running on CPUs), but also using all the CPUs and TPUs and GPUs—in service of the collection of all the narrow goals.


So AI accelerates the metacrisis, but it also makes clear to us that what it would take to align it is: you cannot have—and this is why the question you asked: who’s building it and who owns it and what goals do those groups have—if you wanted to make a superintelligence that was aligned with the thriving of all life in perpetuity, the group that was building it would have to have the goal of the thriving of all life in perpetuity—which is not the interest of one nation-state relative to others, and it’s not the interest of near-term market dynamics, or election dynamics, or quarterly profits, or finite-set metrics.


N. H.

But that maps right on to the global geopolitical governance conversation—which we can’t have right now. I mean, yeah, go on.


D. S.

If you have a group that has a goal narrower than the thriving of all life in perpetuity, and it is developing increasingly general AIs that will be in service of those narrower goals, they will kill the thriving of all life in perpetuity. So what this says is: inside of capitalism, and inside of separate nation-state competitive interests—which are inside of Moloch superorganism type dynamics—you cannot safely build increasingly general intelligences. You have to have the general intelligence of the humans in the group, the cybernetic intelligence of these humans in the group, be aligned with the thriving of all life in perpetuity; that it has those wisdom dynamics. That thing could possibly (possibly!) be oriented—at least it doesn’t have a perverse incentive—to build something that was also aligned with that where it is now seeking to scale.


Because remember, we were talking about how wisdom is more possible at a smaller scale of people who can be in richer relationships with each other, and then scale messes it up. Can AI actually help take the dynamics that can happen at smaller scales and help us to build governance structures? And I don’t mean AGI, I mean certain tools of computational capability. Like, I don’t think anyone thinks that if we were to try to build a democracy from scratch today, fighting a revolutionary war, or whatever, we would build it the same way we did in 1776. We would be building it in computational systems: would people be able to vote from home, would voting even be the thing, would digital identity be a thing, would cryptographic provenance of information be a thing, would there be some kind of data aggregation using AI? Everybody knows: if we were going to build it from scratch, it would be a different thing than if you built it in the industrial era. In the industrial era it was a different thing than when the Greeks built it. Because they had different problem sets based on their tech, and also different capabilities.


If what a democracy and a market are are systems of collective intelligence, what we need are systems of collective intelligence and wisdom. And now we’re talking about building artificial intelligence. How do we build systems, cybernetic systems, where the human interaction with each other—both how the humans are developed, and how they’re interacting with each other, factoring all their incentives—is both intelligent and wise, and the computational capabilities (the artificial intelligence) is not disintermediating them, but is in service of this scaling of those collective intelligence capabilities?


N. H.

Don’t we need a change in cultural goal and aspirations or prices or systems before that would happen? Or could we have small NGOs that get philanthropic donations, develop AIs in the service of all of life? But doesn’t it need a lot of resources and compute, and those would still get out-competed by the military and giant corporate AIs?


D. S.

If we look at the multipolar traps in the competition dynamics, if we look at who has the resources to build things at scale, if we look at the speed of those curves, it doesn’t look good. It just, to be honest, it doesn’t look good. Something has to happen that we are not currently obviously on course for. But if enough people, if some people, can—stepping back—be able to see: oh, the path that we are pursuing, that we feel obligated to pursue, oh, our own opportunity focus relative to risk focus is actually mistaken, and we’re running a cognitive bias. Or other people who are employed by it are recognizing that. Or enough people would be like: fuck, let’s make an agreement to slow this thing down and figure other things out. If you don’t have—again, we said wisdom will always be bound or restrained—if we do not get the restraint wisdom to stop the max-race, then yes, these will be near the last chapters of humanity. And so then, the task becomes: how do we do that?


N. H.

So is a first step to expand the wisdom within the hyper-agents in the system?


D. S.

It’s always a good question. If there are some people who have disproportionately more power and influence than others, and other people who have disproportionately more wisdom, do you try to get the people with more wisdom to have more power and influence, or do you try to get the people with more power and influence to have more wisdom? Or do you try to get the larger collective bodies to have some more of both of those?


N. H.

Yeah, and this gets back to the market, the superorganism, the growth imperative, the narrow-boundary goal, because those hyperagents that have the power and the resources to do AI and scaling, they have the optionality of money, and money can be turned into everything else. Is it possible that AI gives people more optionality than money at some point? I mean, I don’t know how to break that dynamic.


D. S.

Okay. There is stuff about money and optionality, and pursuing instrumental goals—goals that increase your optionality to pursue other future goals—that narrow-goal optimization requires. In AI it’s called instrumental convergence; that no matter what your goal is, you’re going to want to pursue certain things that increase goal-achieving ability. Capitalism does that. We didn’t finish answering: is the totality of the market already a superintelligence? We defined it as a superorganism; is it also a superintelligence? So there’s some stuff that we have not got to. I will allow that we’re just not going to get to that now—


N. H.

We could go for biology and Ferriss’s eight-hour record on one recording. No, I’m not going to do that.


D. S.

What I’ll say is that, if where we leave this with all of the open threads, people have a lot of questions, and we want to come and address those, I think that would be interesting. In the need of closing—because I’m also realizing that I am late for another call with friends of ours on the AI topic—I want to share something that I think will be hopeful in the thinking about the wisdom/intelligence relationship, which is not saying how we enact it. The enactment thing is a real, real tricky thing. But just on what we need to enact.


If people have not watched the conversations that David Bohm and Krishnamurti had together back in the day, I would recommend them as some of the most useful, valuable, beautiful recordings of human conversation I have ever seen. And in one short clip, David Bohm, speaking on—if you YouTube it—I think it’s called, like, Fragmentation and Wholeness, something like that. He basically identifies—and this was maybe the eighties—the cause of the metacrisis, though he didn’t call it metacrisis or superorganism, but like all the problems of the human predicament that is clearly going towards a point of self-termination. It was seeable at that time.


The way he defined it was, I think, exceptionally good, and I think it maps to the way indigenous wisdom has defined it: man is not the web of life; we are merely a strand in it, and whatever we do to the web, we do to ourselves. But when we become capable of doing exponentially powerful stuff, our own short-term win/lose becomes omni-lose-lose; short-term optimization ends up harming us even on the time scales we care about.


So what Bohm said is: the underlying cause of the problem is a consciousness that perceives parts rather than wholes, rather than the nature of wholeness. Because it perceives parts, it can think about some things separate from others, and so it can think about benefiting some things separate from others. Either it cares about some parts more than others, so it’s okay harming the other things, or it doesn’t even realize it is harming them, right? So it is either a separation of care and values, or a separation of calculation. I can benefit myself at the expense of somebody else, my in-group at the expense of an out-group, my species at the expense of nature, my current self at the expense of my future self, these metrics at the expense of other metrics we don’t know about. And all of the problems come from that. Insofar as we were perceiving the field of wholeness itself, and our goals were coming from there, and our goal-achieving was in service of goals that came from there—that is what wisdom binding intelligence would mean: the perception of, and identification with, wholeness being that which guides our manipulation of parts, i.e. techne, technology.


And then Iain McGilchrist—who I think you’re going to have on your show, or maybe already have. If not, you should definitely—


N. H.

I have, but please introduce us. But keep going.


D. S.

Iain, in The Master and His Emissary, I think, advanced what David Bohm was saying in an incredibly beautiful way. He’ll share it here on the show, but basically he said: to not hit evolutionary cul-de-sacs, there is a capacity in humans that needs to be the master, and another capacity that needs to be the emissary—meaning in service of, and also bound by. You could also say there is a capacity that needs to be the principal and another that needs to be the agent; in legal terms, a principal-agent dynamic. The thing that needs to be the master is that which perceives the field of inseparable wholeness in an unmediated way, not mediated by words, symbols, language models. And the emissary is the thing that perceives each thing in light of its relevance to goals, and figures out how to up-regulate some parts relative to others—what we think of as intelligence: relevance realization, salience realization, information compression. And so when I was talking with him, I said, “So basically—”


N. H.

So wisdom is the master, and intelligence is the emissary.


D. S.

—and I said: You’re basically saying that the emissary developed all these powerful capabilities, and so in some places the emissary said, fuck the master thing, I want to be the master—and it had the tools to do so. And that it started to win on a runaway dynamic is the cause of the metacrisis, the superorganism. He said: “Exactly.” Which maps to what Bohm was saying: wholeness corresponds to the reality that the master circuits would orient towards.


So look at all the problems in the world, the global metacrisis, and the impending catastrophes as the result of the emissary intelligence function unbound by the master wisdom function. Then look at AI taking that part of us, already not bound by wisdom, and putting it on a completely unbound recursive exponential curve. That’s the way to think about what it is.


So whatever could adequately bind the power of AI has to start with what human intelligence is already doing being bound by, and in service to, wisdom. That means a restructuring of our institutions, our political economies, our civilizational structure, such that the goals that arise from wisdom are what goal-achievement is oriented towards. That is the next phase of human history, if there is to be a next phase of human history.


N. H.

So my takeaway from that, and from this whole conversation, is—well, first of all, my takeaway from the conversation is: I owe you an apology. From the time we met four years ago, I thought Limits to Growth and the Great Simplification dominated AI as a risk. And looking back, I didn’t understand—actually, I didn’t understand until the last three hours—how those merge; that AI merges with the superorganism in this potentially catastrophic way that accelerates all the things. So I didn’t understand.


D. S.

As a hypertrophication of the intelligence that is already driving the superorganism. Exactly.


N. H.

Yeah. And then my second thought is: there still is a role for education, for culture, for leading by example, and for a transference of the me-based culture to a we-based recognition—both between humans (we are in this together) and as part of the web of life, which is also part of the we. And the more humans that understand and feel that—whether through ayahuasca or drums or being in nature or being with others or meditation or whatever it is—the higher chance we have to intervene in the trajectory we’re on. I mean, that’s just my mild takeaway.


D. S.

But those people who go spend time in nature—observing non-analytically but communing with the truth, goodness, and beauty, the intelligence, the meaningfulness of it, who do the ayahuasca and have those experiences—if they become lotus-eaters and simply drop out, it doesn’t affect the curve of the world either.


N. H.

It’s the equivalent of them being deniers. Yeah.


D. S.

The key is: and yet, if people are working to make change but they’re not actually connected to the kind of wholeness they need to be in service to, they continue to have what seem like good goals, but narrow ones—we need to get carbon down, we need to get the rights of these people up, we need to protect democracy, we need to get our side elected because the other side is crazy, we need to develop the AI to solve this problem. Anything less than connectedness with wholeness, at both the level of care and the level of calculus, falls short. Even though you can’t fully do either, you are oriented to try, with the humility that knows you will never do it properly. That humility is what keeps you from being dangerous from hubris. But the part that really, really wants to try is what has you make progress in the direction of the service of the whole.


N. H.

We should probably do our next conversation on being in the service of the whole, and take a deep dive into that and what it might look like—maybe there will be some people who respond to that calling—and then come back and do a deeper dive on these AI questions, because it’s going to take me a while to process this. This has been great. Is there anything else you want to contribute?


D. S.

We’ll add some things to the show notes. The piece that Eliezer Yudkowsky did on Bankless is, I think, worth watching, both for what he said and for the moment in culture it marks. The Bohm-Krishnamurti pieces, I think, are super valuable. McGilchrist and Tyson you’re going to have on. I think Samantha would be great to have on; she and I have had good conversations on these topics, and she has good indigenous insights. And there’s a guy named Robert Wright who’s made a bunch of short, really simple AI-risk videos that I think are exceptional as a resource to share. Not—


N. H.

Is he the guy that wrote The Moral Animal? No? A different one. Okay.


D. S.

No, different guy. And—let me verify—Robert Miles, excuse me. Not Robert Wright; Robert Miles. So if people just want to understand the AI issue better with short, simple explainer videos, I think his are some of the better ones I know of. And obviously the conversations you and I previously had in this series contextualize a lot of it. And then, if people write to you with questions—there are so many threads we left open—I would be happy to come back to it. It would be fun.


N. H.

Okay. Thank you for continuing to think and push on our plight. It seems daunting, but this is the time we’re alive; we have to understand it, care about it, and engage with it. But this is a big old red pill, my friend.


D. S.

I mean, it only takes the whole of ourselves in service to the whole of reality, utilizing all of our technological and trans-technological capabilities for purposes that are inclusive of everybody. So that could seem daunting; it can also seem inspiring.


N. H.

Yeah, it’s both. It’s both. To be continued. And thank you.


D. S.

Thank you, my friend. These conversations with you are fun to be in.
