Technium Unbound

November 12, 2014

What comes after the Internet? What is bigger than the web? What will produce more wealth than all the startups to date? The answer is a planetary superorganism comprised of 4 billion mobile phones, 80 quintillion transistor chips, a million miles of fiber optic cables, and 6 billion human minds all wired together. The whole thing acts like a single organism, with its own behavior and character—but at a scale we have little experience with. This is more than just a metaphor. Kevin Kelly takes the idea of a global superorganism seriously by describing what we know about it so far, how it is growing, where its boundaries are, and what it will mean for us as individuals and collectively.

Part of the Long Now’s Seminars on Long-Term Thinking.

10:24

Kelly

Yeah. That is indeed. I think today humanity’s a little bit bigger than it was yesterday, with the landing of the probe. That’s sort of what the technium is about. What I wanted to do this evening was to take an idea that has been around for a long time, that’s sort of been in the realm of poetry, and to try to treat it seriously. That idea is the idea of a planetary technological superorganism. It’s a terrible set of words, but we don’t have very many good words for that. And what I’m going to try and do is to add some words to it. It’s an idea that has many different names and has been around for a long time, and what I’m trying to do is to move it into the realm of science, to try to talk about it [???], and to actually take the idea seriously.

11:28

So superorganism is a—we think we know what that means. It could mean something like a bee hive, a bee colony. Wilson wrote a book about the superorganism of ants and bees. Bees are interesting because they cannot regulate the temperature of their own bodies, but they can regulate the temperature of a hive. They live for only six weeks, therefore their memory only lasts for six weeks. But the memory of a bee hive can last for years. We know about swarms, like a murmuration of starlings, which can act almost as a single shape as it moves through the air. How do they do that with no leader? It’s an emergent phenomenon. We know about termites, another social insect that can create this architecture that is, again, thermo-regulating; it actually has air-conditioning ductwork. And of course the scale of this thing is way beyond what a termite can perceive, and yet they can build these things with great accuracy. That’s a superorganism. And a reef, which is an interesting superorganism because it’s many species, not just a single species. The reef itself is being protected by and cultivated by the individual separate species, who are of course interested in their own survival, but are actually doing things to maintain this very large structure. And that’s another example of a superorganism.

13:07

I wanted a more technical definition of superorganism, and even Ed Wilson’s book on the superorganism doesn’t give us one. So I went to Wikipedia, and I found what was a pretty good definition: “A collection of agents acting in concert to produce phenomena governed by the collective.” The only trouble is: I wrote that. They were quoting me, which really dampened my confidence in Wikipedia! It’s like, “This is supposed to be smarter than me!” So we don’t really have—I mean, we may have an idea of a superorganism. So I’m using this word, and I’m admitting the fact that it’s actually kind of fuzzy, it’s actually kind of vague. I wrote a book, Out of Control, which was sort of trying to look at the phenomenon of emergent behavior that comes from collections of agents, and that’s where this quote comes from.

14:09

But there is something about these organizations. They exhibit these four behaviors. They tend to regulate something, to kind of bring things back into some kind of range, so there’s a sense in which they’re self-governing. They also exhibit power-law scaling, which I will mention a bit later, which is this idea that there is an association between the different scales, so there’s a sense of things operating at multiple levels. There’s also persistent disequilibrium, which is a fancy way to say that the superorganism is never at rest. It’s always slightly out of kilter, and it’s always catching itself. So there’s a dynamism about it; it isn’t static. And lastly, of course, there are emergent behaviors, which means that there are things that it does that cannot be found in any of the parts. So, famously, you could look at the behavior of a single bee forever and never really see the behavior of a hive in it. Those are some of the qualities of a superorganism.

15:37

And I want to apply that now to technology. This is a slide inspired by Matt Ridley, who has spoken in this seminar series before. If you take these two objects, which are roughly the same size: the one on the left you could probably make in a good weekend with some instruction. The one on the right you not only could not make, but even everybody here, as smart as we all are, could not make together, because it requires a network of other technologies to support it. So that technology, the mouse, is made up of hundreds of technologies, and they themselves all require hundreds of other technologies to support them. And so the entire thing becomes saturated with different dependencies of technologies supporting other technologies.

16:37

And I’m really a fan of this art project done by a guy named Thomas Thwaites—I think that’s how you pronounce it. It’s called the Toaster Project. He decided to try and make a toaster from scratch. From scratch—meaning from the ore, from the oil. And that’s his suitcase of copper ore. And it turns out that this is really difficult to do. Smelting copper is incredibly toxic. Even getting the ore took a whole year of getting permission to move it. So he got this ore, and he did smelt it in his backyard, using some other technologies, but keeping them to a minimum. And he was actually able to cast this framework. This was the basic mechanism—which was a big deal. Believe me, this was a huge project. And then he actually got some crude oil, which he did not get himself, but purchased, which was okay. Just a little bit. And he started casting the plastic case of it. He was carving these wood blocks, and he was pouring it in, and there’s his case. And he had made other parts. Eventually, this is the toaster that he made from scratch. He’s in England, so it’s a three-prong plug. It’s 220 volts. And it worked for thirty seconds. Okay? But, for me, the great reveal he did was this: he put it in a Best Buy-like store, at the real cost. So these other toasters are about thirty dollars, and his was two thousand dollars.

18:48

And the point of this was—I don’t know what the point of it is, but what I take the point of it to be is to show you the depth of the support, the subsidy, that we have with technology. So when you hear “maker,” people talk about how they’re going to make something: they’re going to make a little bit of a something, they’re going to add the last little bit. They’re going to buy screws, or they’re going to buy some copper, or they’re going to buy plastic, which has a whole deep network of technologies that support it. And of course the point of these technologies is that they form networks, in the sense that they’re all self-supporting. To make a saw you need a hammer to hammer the sawblade. And to have a hammer you need the saw to cut the wood. So these things are self-supporting. And the more complex the technology, the more self-supporting and self-dependent it is. You might think of any complex invention as a network of dependent and self-sustaining technologies.

19:52

So we have networks of these. Nothing is really standalone except for maybe that stone hammer. Anything complex in today’s tech world is a network of things. And that’s also true for ideas. Ideas are not standalone ideas. Ideas are a network of other ideas that support them and are required to make them happen. So usually a new idea is just the last little bit that’s added to a network that already exists. And that’s why simultaneous independent invention is the norm. It happens again and again because any new idea is going to be resting on a whole network of other inventions.

20:33

So if we take that idea of the toaster as being dependent on lots of other technologies, and we keep looking further and further back, we come to this huge web of all the technologies together, from concrete to plumbing to the electrical wires running through this place. All these things together, in the furthest abstraction. And that’s what I call the technium. That’s not technology in the singular. That collective, that superorganism, so to speak, of all these things is what I call the technium—to give it a word, so we can use it and talk about it. So the technium is that web of all these things together that require each other to continue.

21:24

And just as in any very complex thing there are many parts, and the parts themselves are smaller than the whole—this is a diagram of a cell—so the cell has thousands and thousands of parts, and thousands and thousands of metabolic pathways, which is what this diagram shows. And none of those parts are living, and yet the cell itself has life. Okay? So even the most complex systems we know are made up of parts that don’t exhibit any of the evidence, or any of the behaviors, of the whole. And so a switch is not alive in any sense of the word, any more than a metabolic compound in a cell is. But I’m suggesting that the entire superorganism of all the technology, the technium itself, can in some ways exhibit lifelike behaviors. And that technological superorganism is this thing that we are in the middle of. We’re surrounded by it right now. Our lives depend on it, most of our jobs are involved in it, it’s most of what we care about these days to some extent, and that is what we’re in. And I’m trying to take this idea seriously. And I’m calling it the technium. That is my shorthand for it.

23:06

But what I’m not talking about is Gaia. Gaia is another idea very much like this, started by James Lovelock and Lynn Margulis, which suggests that the natural habitat of the planet is also a superorganism; it also exhibits its own kind of emergent behavior. It acts as if it were its own organism. In a sense, it is self-regulating. And the claims of Gaia are very strong. It’s not just that it’s all acting together, but in fact that this system of life on this planet is actually trying to bend the entire planet to be more conducive to more life. It’s actually having an effect, or has had an effect, on the geological structure of the Earth: the four billion years of life have actually affected the landscape, the mountains, the weather, continental drift. All those things actually have been affected, in some ways influenced, by life on this Earth, by Gaia, as it forms a system. So that’s the strong version of Gaia. And that’s not quite what I’m talking about right now, though. I’ll come back to it.

24:23

I’m also not talking about the World Brain, which was H. G. Wells’ term for that contraption, that large thing, that we seemed to be making even when he wrote this book almost a century ago. And what he had in mind was something that we would now recognize as closer to the web. But again, he was talking mostly about just the mechanical aspects of what we’re making. And I’m not quite talking about that in this idea.

25:01

And I’m not talking about the noosphere, which is a term by Le Roy and Pierre Teilhard de Chardin, the French priest and anthropologist, which is this idea of what we might think of today as more like a human collective consciousness, or global consciousness. It’s about the seven billion humans and our collective presence together. That’s maybe one version of the noosphere. So I’m not talking about just that, either.

25:38

And I’m not talking about the singularity exactly, although I will come back to this because there is actually an element of the singularity to it—but I don’t mean it in the way that Singularity University, say, might think about it. There’s a little bit to that, and so I will turn to that.

25:59

So it’s something in between all these things. It’s this very large machine that’s made up of humans, that is actually made up of a lot of technology, but it also has an influence and a role and is tied to the natural world as well. So I’m trying to say: let’s imagine this was a new critter, a new organism. Let’s treat it as if it were an organism, and I’m going to try to describe it as if I were a biologist finding this in a petri dish, trying to say: what can we say about it? And the external task I give myself is imagining I’m from another planet, going through and trying to catalog all the civilizations in our galaxy. How would I make a taxonomy, and how would I describe what we have? Could we do it as if I were a biologist on a field trip?

26:57

So the first question is: are there planetary civilization classifications? And actually there are a bunch, and I collected all the ones that I knew about. There are a lot of them. The classic example is the Kardashev Scale, which is about the energy use of different planetary civilizations. Roughly, if a civilization could control all the solar energy from its sun that came onto the planet, that was a 1. If it could control all the solar energy given off by that star, that was a 2. If it could harness all the energy in a galaxy, that was a 3. And Carl Sagan kind of turned that into a little formula, and he estimated that the Earth today is at about 0.74. That’s just a single dimension.
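
To make that single dimension concrete, here is a minimal sketch of Sagan’s interpolation formula for the Kardashev scale, K = (log10 P − 6) / 10 with P in watts; the power figure used for humanity below is an assumed round number, not one given in the talk.

```python
import math

def kardashev(power_watts: float) -> float:
    """Carl Sagan's interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts.
    Type 1 corresponds to ~1e16 W, Type 2 to ~1e26 W, Type 3 to ~1e36 W."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's total power use is very roughly 2e13 W (an assumed figure,
# not from the talk), which lands near the ~0.7 value cited here.
print(round(kardashev(2e13), 2))  # -> 0.73
```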

27:44

And I collected all the other known planetary classification systems I could find. The most elaborate one was from Star Trek, and they had a lot of different ones in their imaginary worlds. But all of them were on single dimensions, and of course it’s very obvious in a taxonomy of planetary civilizations that you wouldn’t have just a single axis, you’d have many, many different dimensions, and so it’d be a matrix. But we haven’t gotten that far. So we don’t really have very good classification schemes.

28:18

But in my search for trying to describe what we have, I was asking myself: what can we say about this planetary system in a global sense? And it turns out that whenever you ask what we have globally—take X, and try to find the global dimensions of it—there’s actually a very uniform answer, which is: we have no idea. We don’t know. The one metric that we’ve measured with some reliability is the human population, and even that, as we see—because we’re revising these numbers all the time—we don’t really know. But how many telephones are there on the planet? How many miles of road? How many schools? How much fresh water? The answer is that we don’t really have any idea.

29:07

And I know that a little bit because I was involved in a project with Stewart and Ryan to try to catalog all the living species on this planet. It seemed a very reasonable thing to do. If we found life on another planet—say, Mars—one of the first things we would do is send probes back and try to do a systematic survey of all the life on that planet. But we haven’t done that for our own home planet. We think we know about five percent: there are 1.8 million species that we have discovered, and we think that’s maybe about 5 percent of what’s out there. There are some methods by which we can make that estimate. But we basically have no idea. We certainly don’t know what’s in the deep oceans. And these are mostly small things—I mean, very few new birds are being discovered. But new insects, and smaller than that, we have no idea. That, unfortunately, is the answer to most of the questions we have about the planet as a whole.

30:10

And the fact that we don’t even know the species is really important, because very early on—about 10,000 years ago, and even earlier—we humans basically eliminated, for instance, all the megafauna. The megafauna that used to be in Mexico were basically wiped out about 10,000 years ago by human hunters. That had a huge effect on our continent, its shape, and its ecology. That’s not just a biological consequence; it actually has geological consequences. And while we know about global change right now, and rising CO2—while it’s very, very drastic right now—it actually began a long time ago. It began even at the beginning of agriculture. That slide is actually supposed to read years B.C. So even in pre-industrial ages, our inventions, our technologies, were beginning to affect the planet in a geological way.

31:26

So we know about five thousand natural minerals, and only about a hundred of those minerals are very, very common. The rest of them are actually very rare. We’ve made—manufactured—a thousand artificial minerals, and some of them in very large quantities, like concrete, which is actually more abundant than many of the natural minerals. And so, in that sense, we are again having geological consequences with our inventions. It’s a geological force, not just a biological one.

32:08

So, returning to this machine that we’re making with the extension of our minds into matter: what can we say about it? Well, one thing is that the most rapidly expanding quantity on this planet is nothing biological, nothing concrete—it’s actually information. Information is expanding at 66.5 percent a year. Almost anything else we make, over the span of decades, is only increasing at 7 percent. There’s no biological phenomenon we see where things are increasing at that rate. And that information is in the realm right now of zettas. If we keep going, there’ll be a million zettas by 2050. I’ll tell you about zettas in a minute.
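
As a rough check on what those growth rates imply, here is a minimal sketch of the compound-growth arithmetic, using only the 66.5 percent and 7 percent figures quoted above.

```python
import math

def doubling_time_years(annual_growth_rate: float) -> float:
    """Years for a quantity to double at a constant compound annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# Information growing at 66.5%/year doubles in about 1.4 years (~16 months);
# physical production growing at ~7%/year takes about a decade to double.
print(round(doubling_time_years(0.665), 2))  # ~1.36
print(round(doubling_time_years(0.07), 1))   # ~10.2
```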

33:00

So the storage: if we counted up all the storage on the entire planet right now and mapped it out, that’s the graph of what it would look like. And that is a graph of an explosion, basically, right? If you take the surface area per second that we are manufacturing, it’s an explosion. In fact, we were laying cable at the speed of sound for a while. Okay? I mean, we were laying 340 meters of fiber optic cable per second on the planet for a while. That’s how fast it was. And this digital world that’s exploding—again, if you calculate the volume of stuff that we’re making per second, it actually is like a nuclear explosion. That’s the velocity at which we’re creating. But unlike a nuclear explosion, which is over in seconds, fractions of a second, this is continuing year after year after year. So that’s the scale at which we’re effecting this increase in the world.

34:15

And again, if you just look at the number of transistors that we’re making, it’s phenomenal. And we’re making this into one very large machine, because it’s all being connected. So you can think about all the transistors in the world now being connected together—in the same way that the transistors in your computer or your phone are all connected—we’re connecting all the devices together. And so [???] calculation of the total number of transistors in all the devices that are connected, and it’s some crazy number. These numbers are changing; they don’t really mean anything. What’s a quintillion anyway, right? There’s a lot of them. There’s a lot of links, there’s a lot of—the speed at which this is happening, you can almost hear it. I mean, literally, if you actually tried to listen to it. And the total amount of memory is really huge, and the number of clicks that people are making—these little synapses.

35:13

They’re actually at the same scale as a human brain: the number of links and the number of clicks on the web are roughly comparable to the number of neurons and synapses in the human brain. And we consider the human brain the most complicated, complex thing that we know about. So that’s what the technium is. That’s the scale right now. The only thing is, your brain’s not doubling every eighteen months, okay? And that’s what we’re doing here.

35:43

So by 2040, this machine will exceed—in terms of the mechanical count of its transistors and synapses—all the capacity, all the metrics, of the humans on Earth. And that’s why people like Ray Kurzweil will say, “Well, then we’ll have a new brain.” But there’s a whole hierarchy of the way things work in terms of autonomy. So right now you can have a manufactured superorganism that doesn’t do very much. It’s just a very large machine that has its own emergent behaviors. It doesn’t necessarily have to be autonomous. But it could be—that would be another phase, where you have something that shows some autonomy. Maybe like a starfish level: it’s not smart, but it’s autonomous. Or a cricket, or a mushroom. You could also have a superorganism that had some kind of intelligence—maybe like a mouse: it’s not conscious, but it’s intelligent. Or of course you could have a superorganism that was conscious. Just because you have a superorganism doesn’t mean that it has to have consciousness, is what I’m saying. And I think that what we’ve made so far today is somewhere just above the manufactured level. It may have some autonomy, but it doesn’t necessarily have to. It’s sort of like a technological mushroom: a planet-sized mushroom that isn’t necessarily smart, but it does have maybe some emerging autonomy.

37:28

So there are other ways in which this has some lifelike attributes. One is that, if you take a plot of the metabolism of most biological organisms, it’s ten watts per kilogram or less—one to ten. And a data server, these days, is actually almost at that same rate. So you can almost see that these data farms have sort of a biological-like metabolism. In fact, this is where the scaling laws come in, because you can plot all the animals on a power law. Basically, per unit weight, they have a very, very strict adherence to this law of metabolism. And even plants do, too. In fact, you can even scale up for lots of organized matter. This is Geoffrey West’s work at the Santa Fe Institute.

38:27

So cities—again, an extended type of technology—also exhibit this same power-law scaling, in both the good and the bad: crime rates as well as wealth, how fast people walk as well as the cost of their houses. This kind of scaling law is very, very particular to complex systems like an organism. And Geoffrey West actually noticed a very key thing, which is that while they all follow a power law—which is basically a straight line on a log-log scale—things that scaled just slightly above one and things that scaled just slightly below one had a huge difference in their behavior, even though they were very close to a first approximation. That little difference made a great difference. And that was: if the exponent was greater than one, there was unbounded growth, and if it was less than one, there’d be growth and then collapse. Okay? And so cities, generally, were superlinear, and organizations and companies were generally below one—sublinear.
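
Here is a minimal sketch of the scaling relation being described, Y = Y0 · size^β, where the whole point is whether the exponent β sits above or below one; the specific β values below are illustrative placeholders, not numbers from the talk.

```python
def scaled_quantity(y0: float, size: float, beta: float) -> float:
    """Power-law scaling Y = y0 * size**beta.
    Per Geoffrey West's work: beta slightly > 1 (cities) means output grows
    faster than size (superlinear, open-ended growth); beta slightly < 1
    (companies, organisms) means returns diminish as size grows (sublinear)."""
    return y0 * size ** beta

# Illustrative only: what doubling the size of a sublinear vs. superlinear
# system does to its output.
for beta in (0.85, 1.15):
    ratio = scaled_quantity(1.0, 2.0, beta) / scaled_quantity(1.0, 1.0, beta)
    print(f"beta={beta}: doubling size multiplies output by {ratio:.2f}x")
```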

39:52

And the question really is: well, what is the technium as a whole? And we don’t know the answer to that. Cities certainly obey this idea of unbounded growth. Very few cities have ever gone extinct completely. They do, but they’re very slow to. But we don’t know what the technium as a whole is. We haven’t been able to do that calculation. But I wanted to give you a picture of what this looks like. This is what the technium looks like on the planet. It is not this, which is from Star Wars. That is a planet that was engineered. That’s not what we see. We see something much more biological than that.

40:37

And to give you an example of where this may come from: there is, again, a kind of an [???] project called the Degree Confluence Project. It takes the arbitrary grid that we’ve applied to the planet with round-number latitudes and longitudes, and where they cross forms a kind of grid for random sampling. So you actually have people whose hobby it is to find a latitude and longitude that are round, even numbers—zero, zero—and to go visit where they cross and take a picture. Okay? And that’s a map of the ones that have been completed so far. And if you take China, the most populous country in the world, and you take some of the degree confluences that have been photographed, you will see that the most populous country in the world is not very developed. It’s some agriculture and a lot of wilderness or scrub, and very few buildings. This is a random sampling of what China looks like. And if you take the Earth, it’s even more wild. The average place on Earth is not developed.

41:59

But cities are, and they are forming these kinds of nodes, like neurons, which are connected by communication threads—roads and other communication channels. And so if we plot all the cities of a million people, you can see they form a very uneven, kind of biological-looking pattern. Or even the megacities that are coming: this is a map of what that pattern of human civilization will look like in 2050. Here’s another view. You can see how spiky it is. It’s not an even, engineered coat around the planet. Human density, again, is uneven. This is mobile coverage—uneven. 3G—uneven. Here are the oceanic cables: these are the neurons connecting the continents. There are a thousand floating sensors in the ocean right now. There’s this whole network of ocean sensors, transportation routes, airport traffic looking like neurons, flight paths, air traffic, air connections, Twitter, social media mapped on top of the city and across the country, here are Facebook connections, social connections, the electrical grid. You can see these are the thousand communication satellites around the planet in a halo. These are like a nervous system. That’s the pattern that we have. And there are a million eyes that we’ve added to that nervous system. A billion eyes, actually. A trillion eyes! They’re everywhere. They’re coming. Okay? So we’re actually cloaking this thing with senses now, giving it microphones and cameras, accelerometers, moisture sensors—everything. Our own biometrics are feeding into this thing as well—the quantified self, Gary Wolf and myself trying to capture everything that we do with our own bodies. And the Internet of Things, of course: we’re connecting everything we make. Everything we manufacture, we’re putting a tiny chip into it and connecting it. And that’s the thing that we’re making.

44:27

So we have this super-machine, and right now the technium consumes three quarters of the energy that we use. It’s not really for our benefit directly, you know—we’re heating our garages for our cars. It’s in that sense that the technium itself is for the benefit of other technologies. Most of the energy that we consume is for the benefit of the technium. In fact, five percent of the electricity we use is just running the Internet, and that will continue to grow. So this technium, this thing we’re making, is the longest-running machine we’ve ever made. Most machines of any complexity can only run for maybe hours, days, weeks, or years—this has been going on for decades. This is our longest-running machine.

45:20

But then, another definition of a superorganism is that the parts die when the whole dies. And so we are dependent on this machine. If this machine did not work, we would not continue to live. So we are in a symbiotic relationship with this thing, and I want to come back, then, to describing it a little bit more. Because, as I mentioned, it’s obvious that we’re kind of making a world brain. But it’s not just that; we’re also making a noosphere. We’re taking all the humans on the planet and connecting them together, and we’re creating some kind of collective thought. And we also have Gaia, and we’re interfacing with Gaia, and what we’re making is affecting Gaia. So there is a sense in which those three spheres are part of this. And I would say: yes, we have the technium, we have humanity, and we have Gaia. And those three spheres of influence, those three spheres of activity and action, are actually three faces of the big thing—whatever that thing is. Okay? So there are three corners to it: technology, humans, and nature—Gaia. And that is forming this planetary thing. It’s not just the technology, it’s not just us and our minds, it’s not just Gaia. It’s all three together. And, interestingly to me—taking a page from Stewart’s thinking—they all run at different rates. Gaia is sort of operating on the scale of eons, humanity on centuries, and the technium monthly or daily or something—it’s just really, really fast. So one way to distinguish these three things is actually by talking about the rates at which they operate.

47:30

So that thing doesn’t have a name. It needs a name. I tried thinking about a name. My first draft suggestion was calling it the holosphere. The holosphere, unfortunately, is something in Star Wars—it’s a little device you hold that has 3D in it—but I think this is a better use for the name. We can call it maybe holos. So the holos is all of this: it’s all the seven billion, nine billion humans thinking, connected together; it’s all the stuff that we’ve made with our minds, connected together; and it’s all interfacing with Gaia, which itself is a superorganism. So I’m calling that holos, just for the time being.

48:21

And I think there are seven frontiers in this holos. One is new math. Okay? It’s big. It’s really big. The kinds of numbers that we’re talking about, even with data, are huge—10^30. We don’t know how to operate on that in real time. We don’t know how to do that kind of math. We don’t have algorithms anywhere near capable of managing all this complexity. It’s way beyond anything that we have. And, in fact, we don’t even have really good terms. Right now we’re talking about exa and zetta. Well, we don’t even have words that come after yotta. There are no terms. That’s how blank this is. There’s lotta and hella, but…! Yeah, yeah…. So I think we could take, like, a mol, right? Avogadro’s number, 10^23. We call 10^23 bytes one mol. So then we’ve got megamols, then gigamols, and petamols or examols. And if that’s not big enough, then we can take a planck, which is 10^34, and we can talk about kiloplancks and megaplancks. I mean, we’re going to be operating in this realm very soon. And this is a huge opportunity for anybody who’s into math or computer science, because we really don’t know how to manipulate things at this scale. I call it zillionics, right? There’s zillions of stuff. So we’re going into zillionics.
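
A toy converter for the units floated here, treating 10^23 bytes as one “mol” (rounding Avogadro’s number) and 10^34 bytes as one “planck”; these are the talk’s proposed coinages, not standard prefixes, and the byte totals below are only illustrative.

```python
MOL_BYTES = 1e23      # proposed "mol" of data: roughly Avogadro's number of bytes
PLANCK_BYTES = 1e34   # proposed "planck" of data
ZETTABYTE = 1e21

def to_mols(n_bytes: float) -> float:
    return n_bytes / MOL_BYTES

def to_plancks(n_bytes: float) -> float:
    return n_bytes / PLANCK_BYTES

# "A million zettas" (1e27 bytes) works out to 10,000 mols, or about 1e-7 plancks.
total = 1e6 * ZETTABYTE
print(to_mols(total), to_plancks(total))
```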

49:54

There’s new economics, alright? There’s one economy. There are not national economies; there’s one economy. This is the beat of all the different major stock markets behaving as if there’s one market. There’s only one market. We have flash crashes where things—this was a famous one a couple of years ago where it just dipped. Nobody knew why; no one has ever come up with any explanation. It just went down on its own as an emergent phenomenon. Because there’s one thing, and it hiccuped.

50:32

So we have slow earthquakes, which can move through the Earth without us really knowing it. And I think we’re going to have thoughts happening in the superorganism that we’re not even going to be aware of, because they’re operating at such a long, slow frequency that we can hardly even feel them. Some of these ideas will happen at such a long frequency that we will have trouble detecting them.

50:57

There’s new biology. There are so many things that could go wrong. We know about flus and cancer and poisonings, and we’ll have their equivalents in a superorganism. If you have a superorganism, it will get ill, it will have diseases, it will have phobias, compulsions, oscillations. That’s how these things operate. If you have a large system, you have illnesses in that system. And so we’re going to be confronting new kinds of illnesses.

51:36

What are we doing about it? Well, there’s a great idea of building an Internet immunology, where you take some of the advantages the immune system has and import them right into this system. So again, you take a biological idea and try to implement something that we have in our own bodies, that other living bodies have—which is a very sophisticated system. There’s no such thing as zero tolerance in the immune system. It’s all about tolerance at different levels. But you need some very sophisticated tools in order to make something like that happen.

52:12

This, by the way, is an attempt to apply an immune idea to spam, where you’re not looking at the content, you’re actually looking at the behavior of malware as it goes through the system—the flows of it. On the left was the ordinary flow, and on the right was the malware flow. And you could actually—without, again, looking at the content, just looking at how it behaves—discern it in an immunological way.
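
The system on that slide isn’t named, so here is only a minimal, hypothetical sketch of the general idea: judging traffic by its behavior (here, connection fan-out) rather than its content, and flagging whatever deviates sharply from the baseline, the way an immune system reacts to anomaly.

```python
from statistics import mean, stdev

def flag_anomalous_flows(baseline_fanout: list[float],
                         observed: dict[str, float],
                         threshold: float = 3.0) -> list[str]:
    """Flag hosts whose connection fan-out is a large outlier vs. the baseline.
    This looks only at behavior (flow statistics), never at payload content."""
    mu, sigma = mean(baseline_fanout), stdev(baseline_fanout)
    return [host for host, fanout in observed.items()
            if sigma > 0 and (fanout - mu) / sigma > threshold]

baseline = [3, 4, 5, 4, 3, 5, 4]       # typical outbound connections per minute
live = {"host-a": 4, "host-b": 120}    # host-b fans out like a worm
print(flag_anomalous_flows(baseline, live))  # ['host-b']
```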

52:39

We need new minds. I don’t like to think about AI; I think about utilitarian intelligence, or artificial smartness. This is kind of the idea of a web-based service: we get it over the cloud. And because it’s based in the cloud, the more people who use it, the smarter it gets; and the smarter it gets, the more people use it. Recently we saw this fabulous demonstration, where they took the stills from about a million different YouTube clips and fed them to the AI at Google, and they said: what do you see? And it painted this picture, which is a cat. It didn’t say “cat,” it just said, “I see this.” And basically it’s saying, “I see cats.” On the internet. No one asked it. No one said anything about cats. But: “I see cats.”

53:39

So AI is very powerful. And here’s where I want to talk about the singularity. I think there are two kinds of singularities: strong and weak. The strong kind is the Ray Kurzweil kind, where we’re going to make this supermachine, and it’s going to make another supermachine faster. And the idea is, if you look at these exponential curves, they’re saying: look, in the near future they’re going up. But the problem with an exponential curve is that it’s always going to go up, anywhere along that entire path. It’s going to go up, and it’ll still be going up. So it’s always near, and it has always been near for the past hundred years. So exponential curves should not really inform this idea of a singularity at all.

54:30

But Ray’s idea is that there’s step one: you make a smarter-than-us AI. And then step two: you have immortality. It’s like… there’s a lot in between there. There’s a lot in between there. And his idea is: if you have something really smart, it will solve all the problems of health and immortality. But that’s not the way things work, because biological phenomena take time, and you have to do experiments. You could take all the superintelligence in the world, and it could read all the medical literature there is right now, and you’re not going to be able to figure out how to cure cancer just by thinking about it. I call that thinkism. A lot of these guys are very smart. They think the answer to everything is thinking about it. But you need data, you need to do experiments. And so that idea of the singularity happening just because we make something smarter than us—I don’t think it’s going to work.

55:21

But I do think there’s one aspect of the singularity which is true, which is that when we make something at a larger scale, we can’t really see what’s going to happen; we can’t really imagine what something would be like at another level of organization. And I think that sense of singularity is going to be true. If we have auto-enhancing AI, I think that’s a possibility. I don’t think it’s going to make much difference when it’s by itself, but if it’s on the cloud, if it’s the superorganism that’s making the AI, I think that’s much more interesting. And I think that’s actually much more likely: AI is not going to come from a standalone machine or a standalone company; it’ll occur at the level of the superorganism.

56:08

So that superintelligence—it’s not “super” meaning it’s better than us; I think it means that it’s just going to operate at a different level. Because what we want, actually, is to make as many different kinds of minds as possible. And as I will talk about in just a second, I think it’s really important that we actually have a different level of thinking. And I think most of the new startups in the next hundred years are going to follow this form: take X, add AI. That’s going to be the way to do it. We know about SETI—the search for extraterrestrial intelligence—but David Eagleman and I are actually trying to do something called the search for internet intelligence. What would the evidence be that we would need to see to actually be convinced that there was some kind of superintelligence? What would convince a skeptic? We don’t have an answer, but I think it’s a good question.

57:03

We need new governance. And this is the last point I want to make. Planetary challenges require planetary thinking—and I use the word “challenge” because I mean both the problems and the opportunities of holos require holos thinking. Which means we need to think at the level of holos, and we need to have a holos kind of thinking—I mean that superintelligence. We can’t solve planetary problems without taking the perspective, and taking it seriously, that there is this superorganism that we’re dealing with.

57:41

I think we need—climate shock and climate change are going to require geoengineering; they’re going to require planetary-scale solutions. Stewart Brand was instrumental in giving us a view of the whole Earth. And every astronaut who’s come back—if they’re not a globalist, they’re certainly thinking more globally than before they went, because they understood that this is a system of three different types.

58:11

Paul Otlet was one of the earliest inventors of hypertext, and he devised this idea of the world encyclopedia, the universal library, and, not coincidentally, he was instrumental in this idea of the world city, the League of Nations, world government. One world government. Man, if you want to know about conspiracy and quacks, just delve into this for a second. It used to be on late-night AM; it’s now moved to daytime talk radio. But any twelve-year-old anywhere in the world will tell you that one world government is inevitable. The question is: what kind of world government do we want? There are lots of people trying crazy stuff. You can actually get a world government passport. And sometimes the border guards are uninformed enough that they’ll let you through; other times they’ll arrest you. So it’s really not a useful thing to have. But the idea—there are many organizations, and the people who are critics of world government have one thing right, which is that there’s not an ounce of democracy in any of those. The World Bank has never asked me to vote on anything. The U.N. has never asked me to vote. But at the same time they’re doing something in the right direction. And ICANN is not really in the same league as these others—I mention it because it’s something that’s trying to govern the internet. And there’s a hope that, through trying to govern that in a global way, we might learn something about trying to do it in other ways.

59:54

But there’s a recursive dilemma, which is: who decides how this is formed, and who decides who decides? Even if you can have a vote, no voting system is fair. They’re all unfair in different ways. But who decides what system there’s going to be, and who decides who those people are? So there’s no way out of that. You basically start unfairly, and you try to work towards fairness. But we actually don’t know how to make a representative democracy for nine billion people. We have no idea, really, how that would work. And there’s a group of science fiction authors—Neal Stephenson and others—called Hieroglyph, who are trying to promote this idea of working on big problems instead of little problems. Neal, I think, famously said the best minds of our generation are trying to get people to click on ads. Why not have them work on a big problem? Here’s a big problem: how do we have representative democracy for nine billion people? Let’s take this idea of a global superorganism for real.

1:01:09

There are lots of challenges. I’m really concerned about the fact that we don’t have any rules for cyberwar. Americans are very sensitive to the fact that the Chinese are breaking in. The Americans are breaking into the Chinese; we know that as well. But we have no rules about what’s civil, what’s honorable, what’s acceptable. Nothing at all. We don’t know. And we don’t have a backup. If this goes down—even if it’s just a sickness and it goes down for a little while—we have no alternative. Everything’s connected together. So we should at least have a backup that would allow us to reboot.

1:01:52

And we need holos thinking. We need to have a mind that’s actually bigger than ourselves. We need to have as many different minds as possible, because some of the problems that we’re engaged with cannot be solved by our minds alone. That’s one of the reasons why we want AI. It’s not so much that they’re smarter than us; it’s that they think differently than us. So we need to have different kinds of thinking—planetary thinking—in order to solve some of the problems we have.

1:02:17

This is just the beginning of the beginning. I feel nothing has really happened yet. It’s 2014; everything that has happened before us doesn’t really count. It’s all going to happen afterwards. All the most important inventions of the next twenty, thirty years have not been invented yet. So I’m very optimistic. I think we can take this idea seriously, try to make a global holos, and try to do our best to make it benefit us, all the species on this planet, and all the robots. Thank you!

1:03:06

And that’s my book, Cool Tools, which I think there are copies of out there. It’s self-published, and you should get one for your young friend.

1:03:14

Brand

Great! Have a seat. You say there were seven frontiers. You only showed us five.

1:03:21

Kelly

That was a mistake, sorry. That was a typo. Sorry! It was a typo!

1:03:32

Brand

I think, you know, why can’t you come up with two more? Several versions of this question came in, according to Alexander. David Kemholz asks: “Humans evolved in a context in which social groups were quite small. Even today, most decisions are made at a fairly limited scale. Can anyone naturally control the technium? Should we even try?” This was somewhat what you were getting at in the end. Is this a democratic thing we’re talking about, or what?

1:04:08

Kelly

Yeah. No, I think there’s kind of a Dunbar’s number, where there’s a natural size to the number of human relationships that you can deal with. I think that is natural. But I think that right now we expect that we will understand this, and I don’t think that we will understand it. I don’t think we understand nature. We don’t know how nature works, but we can still use it. So you can use things that you don’t understand, even though you’re always trying to understand more. I don’t think that we will necessarily be able to understand or control it completely. So I think that’s going to be a big step: to acknowledge the fact that your child will no longer be under your control. And I think—you know, there’s a famous book by Arthur C. Clarke, Childhood’s End. I think that’s where we’re at: we may not have total control of this thing. But it doesn’t mean that we aren’t going to try to do the best we can to set it off in the right direction.

1:05:24

Brand

Darwin noticed that there was a pattern to how uncontrolled life was—a pattern nevertheless—and how it emerged. And Geoffrey West spends time looking not only at cities, but at how these scaling laws apply in natural systems. Does that kind of understanding count?

1:05:46

Kelly

Yeah. I think the technium, the superorganism—we’re going to approach it like a second nature, where we will go to it in constant awe about its complexity, and we’re constantly going to learn about it, to try to make it useful, to try to bend it to our purposes. But, like nature, we’ll never successfully be able to master it. I think—again, Gaia is beyond our understanding at some level, and certainly beyond our control. And we think of that as kind of a good thing, because it’s had four billion years of learning, so it’s very wise. The technium does not have that—you know, it has two hundred years, maybe. It’s not very wise. But we can teach it. We can embed into it some of the principles that we want in our offspring. We’re going to teach robots ethics. And by teaching robots ethics, we are going to become better people. Because we do things—we drive down a road and we actually don’t even know what we believe. If we have to face the paradox of who you hit, we don’t have good answers. But when we have to teach our robots how to drive, we’re forced to work through our own beliefs and our own ethics. It’s like having children—they often force you to become better people.

1:07:31

Brand

It’s curious how, with the technium in mind—trying to understand the technium and explicate the human mind—you find yourself responding to the daily course of news. So, John Markoff had a piece in the New York Times today about how Asimov’s first law of robotics is being routinely disobeyed by these AI-driven missiles now that are fire-and-forget and are pretty smart, and they will select the tank rather than another kind of vehicle and hit it, and kill whoever’s inside and, in Asimov’s terms, cause harm to humans. When items like that come by you, what do you do with them?

1:08:18

Kelly

Yeah. So, in this case, I think if we’re giving decisions to these machines, and they’re making decisions that we don’t like, then we have to teach them better. We have to keep improving them until they make decisions that we like. It’s not like we do this once and then that’s the end. I think it’s like: oh, that was not a good decision. No, we didn’t like that. Okay, we have to do better. And so, again, it’s like children. They make a mistake and you kind of keep reiterating, and you’re often forced to think about yourself. And I think that’s the process we’re going to have with, say, AI and these robots that we’re giving some decisions to: are these decisions that we approve of? Do we like that collectively as a society? If not, let’s change the algorithm.

1:09:19

Brand

I’m curious about the sort of capacity for health, for healing, that this kind of system has. Whenever there’s something that looks like it’s increasingly accelerating and going exponential—[???] came into paying attention and responsibility at a time when people thought that exponential human population growth was absolutely catastrophic and could go nothing but up until everything broke. And various ideas were put forward of [???] and the [???] and things like this that would somehow get ahead of that. And yet the system turned out to be just another S-curve, which is in fact now tapering off, theoretically, at the nine billion we keep talking about—probably the maximum number of humans on Earth that we’ll ever see. We didn’t do that. That sort of happened as a byproduct of people moving into cities for other good reasons that had nothing to do with worrying about too many humans. Are systems of this kind of complexity and self-connectedness in some sense self-healing in that respect?

1:10:37

Kelly

Yeah, that’s a good question. I don’t know. I mean, I do know that television had probably more effect on birth rates than anything else, and it had to do with the fact that women could see role models of the women they wanted their daughters to grow up to be. And we have a very clear map of that in India, where, in places where TV was introduced, the birth rates fell very fast. So that was not deliberate. Nobody was planning that. That was not conscious. Is that in some ways a system effect? Would we see that on other planets? That’s the question I always ask myself: if we went to visit another planetary civilization in the galaxy, would it also exhibit that kind of a pattern? I don’t know. We’re stuck, because—

1:11:24

Brand

Do you have an opinion about them? Are there planetary civilizations similar or radically—

1:11:31

Kelly

Yeah, that’s a good question. We really have no idea. And, I mean, part of the reason why Star Trek and others are interesting is that they let us ask that question. How much of this stuff is baked into a system? How much is going to be common? We don’t really know. But we can at least begin to—

1:11:58

Brand

Certainly look into that. Two questions here. David Lang asks: what fact or axiom causes the most problems for your technium theory? What still doesn’t fit? And Kevin Kelly—I don’t know how he did this—asked: what would be evidence that there is no superorganism? What’s wrong with this? So presumably you do ask yourself this from time to time. What if this is all completely wrong?

1:12:26

Kelly

Yeah, and as I said in the beginning, I would like to try to make this scientific rather than poetic. And the way you do that is you have something that’s falsifiable. What could you say about this that was falsifiable, that we could actually show did not exist?

1:12:41

And I think the reason why it’s a good question is that I don’t have a very good answer to it. What I would like to do is work towards having something that’s falsifiable: a statement about this thing where we could say, well, if we discover this, then it proves it is not happening, or there’s evidence that it is not happening. And I think it might look like this: when we have X number of things connected for X amount of time, everything was explained—I mean, we never saw any kind of behavior that we did not expect. And so that would mean trying to write down what kinds of things we would expect. I don’t know. It’s a very hard thing, because there is a little bit of that singularity, and it’s really hard for us to see beyond, into that level of organization. So how you prove the negative is tough.

1:13:45

Brand

Well, some of them you can break down into mechanisms. So Gaia theory is challenged basically on the mechanisms Lovelock proposed. You know, are there really these self-enhancing and self-limiting mechanisms that Jim Lovelock imagined? Does hydrogen sulfide make clouds over the ocean because of warming, and things like this? And some of those have held up and some have not. Likewise, the singularity is pretty much based on the idea of a set of connected things all accelerating at a certain rate. And if they don’t all keep accelerating at the rate that is drawn on the charts, then you would say, well, that particular version of thinking about the singularity isn’t right, because the numbers don’t hold up. Are there places like that in your technium theory where you could say: aha, see, it’s really happening—or, hmm, it’s not?

1:14:35

Kelly

Another way to say that is: can we make some predictions about something that might happen? So I might make a prediction that there’ll be a one-million-person flash crowd in the next ten years. But is that enough to convince anybody? I don’t know.

1:14:50

Brand

A physical one or—

1:14:51

Kelly

Yeah, a physical one. It’ll be like a Woodstock, where one million people will show up somewhere and everybody will be amazed. But afterwards it’ll seem like: oh, obviously, yes, of course. So I think that would be one good way: to make some predictions about behavior that we might expect to see. I don’t have those, but I would like to work towards that.

1:15:19

Brand

Tyler Willis asks: does technology ever evolve far enough to limit or negate humanity’s role in holos? Do we fall out of the equation?

1:15:29

Kelly

Yeah. I think that’s a very common fear these days, because we see things happening fast. Sometimes there’s a sense that the speed is so great that it leaves us behind. Sometimes it’s the scale. And the reason I guess I’m not as worried as many people are is because I’ve sort of seen what we’ve done with other species. I think there was a period of time when we maybe didn’t care whether other species survived or not. But then we recognized that in every species is a sort of wealth, a bank, of learning. Whatever species you can find on Earth has gone through an equal amount of evolution—almost four billion years of evolution in it—and that information, that knowledge, actually can be useful to us, and we need it. And our lives are also improved, bettered, by having multiple species around us. So we’re no longer interested in getting rid of species; we’re actually interested in reviving species. And I think—

1:16:46

Brand

We’re really fitting the technique. Gabriel asks a question about that.

1:16:49

Kelly

I think what happens, as I’m saying, is that we want to have as many different kinds of thinking, as many different kinds of devices, as we possibly can, because there are going to be problems we encounter where we need to have other beings or other kinds of thinking. See, the only reason to make an AI is not to make one like humans, but to make one different from humans. And that is where this power comes from. We need, collectively, to have other kinds of thinking, other kinds of beings, other kinds of existence, in order to solve the problems that we have made.

1:17:29

Brand

So there’s one kind of romantic argument against letting languages go extinct, which is that when a language goes extinct, a world disappears. And it’s not entirely romantic. Often, when it’s a native language, they have a whole bunch of knowledge about the other organisms in their system, and what they’re good for, and how they’ve heard of them, and where they fit into their stories, and so on. And it sounds like you’re saying that in letting other organisms go, you also lose a world in time, and thereby impoverish the one that we share.

1:17:59

Kelly

And I would say: by inventing other kinds of beings, we actually are creating those new worlds. We’re creating more of our world. By surrounding ourselves with many different—I call them beings—many different kinds of thinking, many different perspectives, many different creatures, we actually better our world and better ourselves. Because right now we’re in a century-long identity crisis. We don’t know what humans are good for. We don’t know why we’re here. We don’t know why we’re different from anything else. And so we need all these things to help us answer the question: what is a human, and what’s a human for?

1:18:44

Brand

Well, this is sort of Danny Hillis’s story, and it’s part of the founding of the Long Now Foundation. He had the feeling that what had been humanity’s story for a while—essentially the control of nature—is, with the arrival of the Anthropocene, kind of complete. Or at least the completion is in sight. And we’re now in this tangled process of trying to figure out what the next story is. Do you think the next story is in some sense the technium?

1:19:17

Kelly

Yeah, I think the next story is going to be another level of our existence. And again, I think it’s maybe leaving behind the last remnants of our tribal nature, which is nationalism. I mean, nationalism is a terrible disease that we need to cure ourselves of. And I think that next story will be more of a planetary story, and it’s going to be more of an all-species story. And it’s going to be a multiple-minds story, where we are making minds that will have certain dimensions that are better than ours in thinking. And together, with all these different kinds of minds thinking, we can solve problems that we can’t solve right now. I mean, there may be parts of physics that are simply going to be unsolvable by human minds alone. And we may need to make these other kinds of minds to help us even understand quantum gravity.

1:20:22

Brand

A question we avoid at Revive and Restore: is it okay to bring the Neanderthals back? This sounds like you wouldn’t mind a bit.

1:20:29

Kelly

No, I would love to have Neanderthals. Who wouldn’t? I mean, why not? I think one of the sources of our arrogance is the fact that we don’t have any other competing intelligences. We’ve basically, probably, murdered most of them. And by having something else that’s similar to us, it will force us to be better, in the sense of being more discerning about what we’re doing, more careful about how we do it. It’s like having competition, in the sense that we have someone else to reflect us back to ourselves. We can see ourselves a little better by having something else similar to us. And I think the Neanderthal would be a tremendous gift toward realizing more about who we are.

1:21:27

Brand

Well, speaking of space, here comes a question from Australia. Nick Hodges asks: given that we’ve just landed on a comet, what impact do you think the technium will have on the solar system and—oh, what the hell—the galaxy?

1:21:42

Kelly

Yeah. I’m sure that we’ll, you know, diffuse into space. We’ve already begun with comets, and we’ll have one-way missions to Mars as a reality TV program, I’m sure.

1:21:58

Brand

You buy into one-way missions?

1:22:01

Kelly

Well, yeah. That’s the current plan right now: a one-way mission to Mars, financed by being a reality TV program. The entire world will watch to see how long they live. And there’s no end of people who would volunteer for it, and we’d learn a lot.

1:22:21

Brand

So Branson’s, you know, suborbital space-tourist vehicle crash that killed somebody the other day—you see that as not slowing anything down? It makes it more interesting?

1:22:31

Kelly

No, absolutely not. Yeah, there will still be people who go. And I think that that’s pretty inevitable—even though I think it’s really a bad idea to send meat into space, I think we will. You know, it’s—

1:22:52

Brand

So it sounds like in that case you’re a fan of downloading us into non-meat things.

1:22:57

Kelly

Yeah, I think with virtual reality and those kinds of things we could—

1:23:04

Brand

The lag time is horrible!

1:23:05

Kelly

It is. Yeah. Move over! Yeah. It’s terrible, but I think with AIs and stuff like that we can do a lot of it. I don’t think it will prevent people who want to risk their lives from going out there and doing everything they can. But I think most of the exploration will not be done by our bodies. It will be done by our extended bodies. And there will still be some, you know, the equivalent of the Amish in space, who want to be there in their bodies.

1:23:43

Brand

Danny Hillis—I mean, he was talking to Brian Eno earlier this year—said that if he were a grad student now, he would not be studying computer science, he’d be studying synthetic biology. And this was in the context of Brian saying that Mars is really boring. Why would anybody go there? And Danny basically saying, you know: we can make versions of humans, biologically, that would be a version of meat that’s welcome in space and would very much enjoy the low sunlight and low gravity and so on and so forth.

1:24:22

Kelly

Right, exactly. Danny’s idea is to make people really, really small. You’d make humans this big, because they’d do better in space.

1:24:29

Brand

This was Rusty Schweickart’s comment: the first thing you discover in space is that there’s no use for legs whatsoever. They just get in the way. All they’re good for is a sort of hook. And, you know, we probably know the genes that make legs, so we could deal with that pretty quickly. Or turn them into more arms.

1:24:53

Kelly

Well, I think one of the central questions in the long term—in the very, very long term—is going to be whether we remain one species or many, and whether we remain of one mind or many. And I think—

1:25:05

Brand

The technium is obviously going to be one, it sounds like.

1:25:08

Kelly

Yes, but I think we will speciate. And I think there’ll be naturals, people who will say that under no circumstances will I or any of my descendants ever have our genes touched. And then there’ll be other people who will say, like, you know: tomorrow, yeah, sign me up, do anything you want, transform me. And I think that will lead to a natural forking in us. And so I think we will have both.

1:25:34

Brand

You have a sense of pace of that? Is that this century? No? Not this century?

1:25:39

Kelly

No, no. Biology is so difficult to move. My wife was here; I mean, she works with living things, and it’s so, so hard to change things, because it’s four billion years of accumulated trial and error, and it works. And it’s so hard to move that. I think that’s a very slow process. I’m not saying it’s impossible, but I think it’s really slow.

1:26:09

Brand

Yeah, well, in code terms, it’s a classic ball of mud, and you’re trying to reverse-engineer what was never engineered.

1:26:16

Kelly

Yeah. I mean, I think we will do it piecemeal, but I think it’s a long-term project.

1:26:21

Brand

So is it the case that, you know, the Holos is another big ball of mud; that it’s, in a sense, not programmable?

1:26:30

Kelly

Yeah, I think there are going to be aspects of it that are not programmable by us. But the point is, there’s probably a lot that is. It’s like world government. Okay, world government is inevitable—but the question is: what kind of world government? There are still lots of choices to make. And I think the Holos is the same thing. There’s a huge amount that’s just beyond anything we could do or program. But there’s still so much that we can do, and we can focus on the parts that we have some influence and power over.

1:27:02

Brand

And for the rest, you hope for the best. Don Means asks: what does Holos want?

1:27:09

Kelly

What does Holos want? I think it wants what many systems want. One: it wants to survive and prosper, and to alter its environment to be more conducive to more of itself. So it will take all the money, energy, and brainpower it can possibly get, to try to make itself more complex, more energy-efficient, more diverse, more mutual. I mean, it’ll do the same thing that evolution has done. It’s basically an extension of evolutionary forces, of self-organization. It’s going to do what all self-organized entities have done, which is to bend whatever it can to make its environment more conducive, so it can deepen. So I think what it wants—the Holos wants to be more Holos. It wants to be more of itself. And I think that’s sort of what we’re all working on.

1:28:12

Brand

Holos, the way you describe it, is in a sense a kind of engineering. In what sense is it science?

1:28:22

Kelly

It’s not science right now. It’s still on the edge of poetry, but it needs to be taken scientifically. We need to treat it and describe it; this is my first attempt at that. We should try to describe it. We should test it, in the sense of making predictions, and try to understand it as much as we can, to—

1:28:49

Brand

What experiments come to mind here? Do you have a sense of some aspects of Holos that are currently unknown but would be nice to know?

1:29:00

Kelly

Yeah. What we don’t know is so vast. But I think it starts with having global logs of the traffic, global measurements of its rhythms in energy use, its metabolism; looking at that metabolism as a whole. So I think the first step is observation: collecting data about the thing. Then we can begin to try to do some experiments, make some predictions, and collect more data. So right now, what I’m suggesting is that we’re at the stage of: let’s just observe and collect as much data as we can about it. And then, if people treat it seriously, let’s make some hypotheses, some predictions.

1:29:51

Brand

And then what?

1:29:54

Kelly

Then I think, like anything, if we even just had a sense that it was real, we could begin to pay attention to it. And in the areas where it’s possible, we could train it. We could bring into it the things that we want it to be. We could guide it in terms of the places where it’s growing. If we want it to have ethics, if we want it to be friendly, then we want to bring all those things into it. Then we want to engage with it. So I’m a big believer that the way you deal with technology is that you engage with it. And you engage with it by using it. So we want to use this, and to use it we have to understand that it’s there. What we don’t want to do is prohibit it, or prevent it, or pretend that it’s not there. We engage with it and say: okay, this is it. What can we do? Let’s observe it and then interact with it.

1:31:05

Brand

Is there any comfort in doing that? Does the bee abide in the hive in some…?

1:31:15

Kelly

Is a bee comforted by the hive? Is Holos comforting? That’s an interesting question.

1:31:27

Brand

You’re not terrified by it. Many are.

1:31:28

Kelly

No, I’m not terrified by it. I haven’t thought about being comforted by it. I think it’s a fair question. Would people ever sing a song for it? Would we ever write an ode to it? I believe we might. And actually, I think that as the technium continues to complexify—even to stages where it really is beyond our understanding—we might come to it like a cathedral and feel an awe in it, or like a stand of redwoods, in the sense that we would really find it inspiring in some way. So I think eventually, yes, we could come to it that way. Maybe we would even feel proud of it: oh yeah, my Holos, that’s my Holos. It’s possible. I mean, I think that would be better than talking about your nation. And could we be proud of it? Well, that’s maybe a good thing to aim for. I’d like to make a Holos that we would be proud of.

1:32:45

Brand

And it would probably be proud to have you do that. Thank you!

Kevin Kelly
