The Integrated Information Theory of Consciousness

September 23, 2014

The science of consciousness has made great strides by focusing on the behavioral and neuronal correlates of experience. However, such correlates are not enough if we are to understand even basic facts. Moreover, correlates are of little help in many instances where we would like to know if consciousness is present: patients with a few remaining islands of functioning cortex, pre-term infants, non-mammalian species, and machines that are rapidly outperforming people at driving, recognizing faces and objects, and answering difficult questions. To address these issues, we need a theory of consciousness: one that says what experience is and what type of physical systems can have it.

00:03

Thank you, dear Doktorvater, for the kind introduction. That’s the term we use in German—Doktorvater, “doctor father,” for one’s doctoral advisor.


Yeah, so I’ll be talking about “Consciousness: Here, There, but not Everywhere.” Unlike panpsychism. And for background reference, Giulio and I wrote an arXiv manuscript that you can find. The technical stuff is in this PLOS Computational Biology paper, and a more general account is in my book Consciousness: Confessions of a Romantic Reductionist.

00:34

So without further ado. For many years, when I started talking about consciousness, I used to have a ten-slide preamble explaining why, as a natural scientist—as a physicist and a neurobiologist—one can reasonably talk about consciousness. In fact, why, as a scientist, one has to talk about consciousness if we want to take science seriously—in particular, the claim that science can ultimately explain all of reality. Because the one aspect of reality that I have close acquaintance with—in fact, to adopt the language of philosophers, the only aspect of the world that I have direct acquaintance with—is my own consciousness. The only thing I really know about the world, the only thing I have direct knowledge of, is my sensations: the fact that I see, I hear, I feel, I can be angry, I can be sad—those are all different conscious states.

01:28

So the most famous deduction in Western philosophical thought comes from René Descartes, more than three centuries ago: Je pense, donc je suis, later translated as cogito, ergo sum—in modern language we would say, “I am conscious, therefore I am.” The only way I know about me, about the world, about you guys, about science, is because I have a movie in my head. And so, if science is ultimately to explain everything—including dark matter and dark energy, and viruses, and neurons—surely it has to explain the phenomenon that’s at the center of each of our existences, namely consciousness. And I think that in order to do that, in order to successfully bridge what some philosophers today call the hard problem, one has to start out with experience.

02:12

So rather than giving you a long definition—hm, is there a way to dim the lights here for this movie, please?


So, in lieu of giving you lengthy definitions—that typically only happens in a science once you’re at the textbook-writing stage—I’m showing you one of many illusions I could show you. And if you—is it moving? Okay. So keep your eyes steady: fixate, for example, on the central cross, or on the cross at the bottom, and then—well, you tell me. What do you see? I mean, just tell me. It’s not…

Audience

A rotating ball.

Koch

Alright. What else do you see?

Audience

A direction of rotation.

Koch

Alright. Yeah, that’s true. But what else do you see? I mean, it should be pretty obvious.

Audience

The yellow squares disappear.

03:11

Koch

Alright. Thank you. The yellow squares disappeared. If you don’t see that, you should come see me afterwards.


So here we have a very simple phenomenon: the yellow squares disappear. If you really keep your eyes steady, both of them can disappear. Once you move your eyes, they reappear. In fact, it’s counter-intuitive: the more salient you make the yellow squares, the more likely they are to disappear.

03:38

Alright, so this is a simple illusion—it came out in Nature more than a decade ago. And the thing about it that Francis Crick and I were always interested in: where is the difference in the brain between when you see the yellow square and when you don’t? When you see it, you have a particular feeling associated with it—it feels like yellow, it reminds you of lots of other yellow things you’ve seen before. And when you don’t, the photons are still there, they still strike your retina, they still evoke firing in retinal ganglion cells, but you don’t perceive them anymore.

04:06

And the claim—mine, and many other people’s—is that once we understand simple forms of consciousness, like visual consciousness, we’re well on the way to understanding all of consciousness, since the higher elaborations of consciousness—self-consciousness, et cetera—are exactly that: elaborations upon something that’s probably much more basic.

04:26

So what is it that we can say today for certain about consciousness? There are many things we can already say. Every day I get manuscripts from people who purport to explain consciousness. But consciousness is now a little bit like nuclear physics: there’s a large body of data that any one theory has to explain. You can’t just start from scratch. So, for instance, we know that consciousness is associated with certain types of complex, adaptive, biological systems—but not all of them. The enteric nervous system, roughly 100 to 200 million neurons down here in the gut, doesn’t seem to be associated with consciousness. We don’t really know why. If you do have feelings down there—if you feel nauseated, say—they’re typically mediated through activity in the insula; they’re caused by neurons up in the cortex. We know this from brain stimulation, et cetera.

05:18

You have an immune system—in fact, you have several immune systems: an acquired one and an innate one. They respond in very complex ways. They have memory: once you form antibodies, you can think of that as a memory. I came from Seattle yesterday; I may well be exposed to some bug here in the Cambridge area, and my immune system is busy fighting it off. But I have no conscious access to that. I don’t know whether my immune system is active right now. Yet it’s performing a very complicated task. So we need to ask why the immune system works in this unconscious mode. We don’t know.

05:53

We know consciousness doesn’t require behavior—certainly not in fully grown people like us. We know this because every night we go to sleep, and sometimes we wake up in the privacy of our sleep and have so-called dreams, which are another form of conscious state. Yet there’s a central paralysis imposed by the brain—otherwise we would act out our dreams, which wouldn’t be a good idea for our bedmates; and, of course, that occasionally happens. There’s also cataplexy, and other clinical conditions in which people are unable to move. There were, for example, the MPTP patients, the “frozen addicts,” who were unable to move yet were fully conscious.

06:31

Similarly, from the clinic, we know that consciousness doesn’t require emotions—at least not strong emotions. You can, for example, talk to veterans coming back from Iraq or Afghanistan. Let’s say a leg has been blown off and they have sustained brain damage from an improvised explosive device. If you talk to them, they’re clearly conscious; they can describe their state to you, they can describe how they’re feeling, but there’s this totally flat affect. They’re not concerned about their future, not concerned that their life has changed radically. So strong emotions, certainly, don’t seem to be necessary to support consciousness.

07:06

We know, once again from the clinic and from fMRI experiments and others, that consciousness requires neither language nor even self-consciousness. Self-consciousness is a very elaborate form of consciousness. It’s particularly well expressed in academics—particularly certain types of academics who like to write books and introspect endlessly; it’s probably counter-productive to a certain extent. Yet through most of my daily life there’s very little of it. I bike to work every day. When you’re going at high speed through traffic, when you’re climbing, when you’re making love, when you’re watching an engaging movie, when you’re reading an engaging book—in all those cases you’re out there in the world, engaging with the world. Yet there’s very little self-awareness, self-consciousness. You simply don’t have time to reflect upon yourself when you’re out there engaged with the world. And there’s really no evidence to suggest that aphasic people—or children who can’t speak yet—are not conscious.

08:14

We know from lots of patients—in particular, a striking one: a conductor (the BBC made a film about him) who had a viral infection that knocked out his entire medial temporal lobe, leaving a very dense amnesia. You can track him over ten years; he has no long-term memory. He is still in love with the wife he married two weeks before the infection. Ten years later, he still thinks of her as newly married. It’s very endearing. Yet he doesn’t consciously remember anything. But if you talk to him, he can tell you all about his feelings of love, how he feels vis-à-vis his wife, whom he rediscovers every minute. It’s very striking. So, clearly, consciousness doesn’t require long-term memory.

09:08

We know from the split-brain experiments done by Roger Sperry that consciousness can occur in either hemisphere—the linguistically competent one as well as the other—if they are disconnected by cutting the corpus callosum. And we know from 150 years of clinical neurology that destruction of localized brain regions interferes with specific contents of consciousness. You can lose specific parts of the cortex, primarily, and then you lose specific content: you may be unable to see color, unable to see motion; you may feel your wife has been exchanged for an alien because you’ve lost the feeling of familiarity. That’s all due to specific parts of the brain helping mediate specific content. So we know there is this very local association.

09:53

What about attention and consciousness? This is an area where I’ve been quite active. For the past century and a half or two, most scientists who study it—psychologists—have assumed that what you attend to is identical to what you’re conscious of. Early on, when I talked about consciousness, people said, “Well, you shouldn’t really be talking about consciousness, you should strictly be talking about attention. The only reason you talk about consciousness is because it gets you into the press.” Already at the time, Francis and I disagreed. In the meantime, we—by “we” I mean the community—have a lot of beautiful papers, probably 80 of them over the last six to seven years, including one by Nancy Kanwisher here, very nicely dissociating selective visual attention from visual awareness. And I’ll show you one or two. They’re really different neuronal processes with distinct functions. Yes, very often, under laboratory conditions and possibly even in life, what you attend to is what you’re conscious of. But there’s lots of evidence now—and I think this is no longer controversial among visual psychologists—that you can attend to things that are completely invisible, that you’re totally unconscious of. What remains more controversial is the extent to which you can be conscious of things without attending to them; experimentally, that’s more difficult to manipulate.

11:17

One big advance in this area has been the development of a technique by a student of mine, who is a professor now—Tsuchiya. It’s called CFS: Continuous Flash Suppression. It works very powerfully. Let’s say you have a dominant eye—the right eye, for the sake of illustration. In the left eye you put a constant, low-contrast, angry face—clearly a very powerful biological stimulus. In the right eye I put these flashing Mondrians, changing at, let’s say, 10 Hertz. What you’ll see, typically for a minute or two, is just the Mondrians, and at some point the face breaks through. It’ll be there for two, three, four, five seconds, and then it disappears again. It’s related to binocular rivalry—not the same, but related—except that the suppression lasts much longer: rivalry suppression periods are on the order of 5–10 seconds, while this can last a minute or two. So you can now hide all sorts of things, and people have done lots and lots of variants.
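
To make the stimulus concrete, here is a minimal sketch of how one might generate the flashing-Mondrian stream for the suppressing eye. Only the roughly-10-Hz flash rate comes from the description above; the frame size, rectangle count, and colors are illustrative assumptions, and NumPy is assumed to be available.

```python
import numpy as np

def mondrian(height=256, width=256, n_rects=40, rng=None):
    """Return one random 'Mondrian' mask as an RGB (height, width, 3) array."""
    rng = rng if rng is not None else np.random.default_rng()
    frame = np.zeros((height, width, 3), dtype=np.uint8)
    for _ in range(n_rects):
        y = int(rng.integers(0, height))
        x = int(rng.integers(0, width))
        h = int(rng.integers(10, 80))
        w = int(rng.integers(10, 80))
        frame[y:y + h, x:x + w] = rng.integers(0, 256, size=3)  # random color
    return frame

def cfs_stream(duration_s=60.0, flash_hz=10.0):
    """Yield (onset_time_s, frame) pairs for the suppressing (dominant) eye;
    the other eye views the static low-contrast image throughout."""
    for i in range(int(duration_s * flash_hz)):
        yield i / flash_hz, mondrian()
```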

12:18

One of the more interesting variants involves sex, as it always does. On the left side you put a picture of a naked person, either a man or a woman. On the right side you cut the picture up into pieces. Then you hide it using CFS, and you leave it on for 800 milliseconds. If you naively ask people, “What do you see?” they tell you all they see is flashing colored squares. If you’re a distrustful psychologist, you ask them, “Well, tell me: is the nude on the left or the right?” And people are at chance, 50%. Now you add an ISI (interstimulus interval) and then an objective test: you put up a faint grating, oriented slightly to the left or slightly to the right, and the task is to say which. You can do standard signal detection and get a d′—a measure of how well you do the task. And then you compare: how well do you do when the grating is on the same side as the invisible nude versus the opposite side? What you find, in heterosexual subjects—ten straight men and ten straight women; these are the individual subjects, this is the average, this is the d′—is that straight men perform the task significantly better (at p = 0.01) when the target is on the side of the invisible naked woman, and worse when it’s on the side of the invisible naked man. In other words, their attention is attracted by the invisible female nude and repelled by the invisible male nude. I mean, biologically this makes perfect sense: if there’s a potential naked mate out there, your brain has mechanisms to detect that. Women are the opposite—the performance increase is on the side of the naked male—but they’re not repelled by invisible naked females. And that’s the interesting thing: it’s all invisible.
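
Since the result is reported as a d′, here is how that measure is computed from a subject’s responses: a minimal sketch using only the Python standard library. The trial counts are made up purely for illustration.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a standard 0.5
    correction to keep both rates strictly between 0 and 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return z(hit_rate) - z(fa_rate)

# Made-up counts: grating on the same side as the invisible nude vs. opposite.
same = d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38)
opposite = d_prime(hits=33, misses=17, false_alarms=15, correct_rejections=35)
print(f"d' same side: {same:.2f}   d' opposite side: {opposite:.2f}")
```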

14:41

Oh yeah, that’s right—I have this paper by Nancy. This came out a couple of years ago; she uses the same technique. She essentially does pop-out, and she studies to what extent pop-out depends on conscious seeing. It’s a very similar paradigm: you measure performance either at the same place where the invisible pop-out was, or at the opposite place. And you can add an attentional task to show that attentional allocation is what matters—if you don’t allocate attention to the invisible pop-out, you cannot perform the task. So there are lots of variants showing that you can attend to invisible things. You don’t need to see things in order to preferentially attend to them.

15:26

So what some people are doing now: you really have to do a two-by-two design. Whether you study this in a monkey—as David Leopold has done—or in a human, or in a mouse, as we want to do, you need to separately manipulate selective visual attention and visual awareness. And you can do that. Awareness, or consciousness, you manipulate via visibility—using masking, or continuous flash suppression, or any of the many tricks psychologists have developed over the last hundred years—and you use attentional manipulations to independently manipulate attention; see the sketch below. Certain things you can do without attention and without awareness, some things depend on both attention and consciousness, and some you can do in one or the other quadrant. That seems to support the idea that consciousness and attention are separate processes—at least partially separated, if not fully. They have different functions, and they are subserved by different biological mechanisms.
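
A sketch of what such a two-by-two design looks like as a trial list. The condition names and the 50-trials-per-cell count are illustrative assumptions, not taken from any particular study.

```python
import itertools
import random

attention = ("attended", "unattended")   # e.g. cue toward vs. away from target
awareness = ("visible", "invisible")     # e.g. unmasked vs. masked / CFS

# Build 50 trials per cell and interleave the four conditions at random.
trials = [{"attention": a, "awareness": w}
          for a, w in itertools.product(attention, awareness)
          for _ in range(50)]
random.shuffle(trials)
print(len(trials), "trials across", len(attention) * len(awareness), "cells")
```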

16:31

And so we are back to this dilemma: there are many things the brain can do—I’ve just listed some of them here. Open the pages of any psychology journal and you’ll see a very large number of things we can do without being aware of them, without consciousness. Francis Crick and I called these zombie systems. So, if you think about the neural correlates, you have to ask: where is the difference, at the neuronal level, between all those tasks you can do without seeing? For example—the Simon Thorpe experiment that many of you, I know, are familiar with—you’re briefly shown an image and, as quickly as you can, you have to say: does it contain a face or not, an animal or not an animal? Some of these things you can do perfectly well when the images are masked, so you don’t even see them, yet you still perform above chance. And so you have to ask, at the neuronal level: where is the difference between the tasks that require consciousness and those that don’t?

17:29

Alright. And so, ultimately, you can come up with what we call Behavioral Correlates of Consciousness. You can ask at the behavioral level, in adults—“people” here typically means “undergraduates,” of course, because they’re the vast majority of subjects—but you can also ask to what extent it’s true in patients, in pre-verbal children, in babies, and in animals that you train, like mice or monkeys. So these are some of the behaviors that, in people, we associate with consciousness. That is, when I ask you, “What did you do last night?” and you tell me what you did, I assume you’re conscious. In the emergency room they have coma scales: they ask whether you can move your eyes, whether you know what year it is, who the president is, and things like that, to assess clinical impairment. In animals—particularly in mice, which I’m interested in—any non-stereotyped, temporally delayed sensory-motor behavior can serve as a behavioral assay of consciousness.

18:39

Okay. So that’s the behavioral side. But the project over the last 30 years has been to take the mind-body problem out of the domain of pure behavior and psychology and into the neuronal domain. Ultimately the aim is to look for what Francis Crick and I called the Neuronal Correlates of Consciousness, abbreviated NCC: what are the minimal neuronal mechanisms that are necessary for any one conscious percept? Whether it’s the yellow squares, or me feeling upset, or having a toothache—for those three different conscious sensations there will, in each case, be a minimal neuronal mechanism necessary to give rise to it. If I remove that mechanism—inactivating it using halorhodopsin, or TMS, or a lesion—the sensation should be gone. And if I artificially activate the neural correlate—using channelrhodopsin, or TMS, or some other technique—the feeling should be there. There should be a one-to-one correspondence at the individual, trial-by-trial level. And for any such conscious percept there will be a neural correlate of consciousness. In some sense this is trivial, right? If the mind supervenes on the brain, as most of us believe, then anything the mind does has to have a brain correlate. The question is: is there something common to all those correlates? Maybe they all involve Layer 5 pyramidal cells. Maybe they all involve oscillations, or a high degree of synchrony, or activity in the right anterior insula. Maybe they all involve long-range projection neurons in dorsolateral prefrontal cortex, à la Global Workspace. These are all different possibilities people are studying.

20:20

So if we think about vision, we can ask—for example, when I see you—to what extent is the activity of my retinal ganglion cells a neural correlate of consciousness? Certainly, right now, were you to record from my eyes, that activity would correlate with what I see. Yet the eye itself—the receptive field properties of retinal ganglion cells—is too different from my conscious perception. For instance, there’s a hole in my eye, the blind spot; it doesn’t show up in my vision. There are almost no cones—very few color receptors—in the periphery, yet my entire visual field looks colored. I move my eyes three to four times a second, yet my percept is very stable. From things like that we can infer—as was inferred already in the nineteenth century—that the retina is not the place where consciousness actually happens, not where neural mechanisms give rise, in a causal way, to consciousness. That has to be in a higher part of the brain. Furthermore, I can close my eyes and still imagine things. And I tend to dream a lot, and to remember my dreams. Last night I was visiting this bloke in Kazakhstan; I have no idea how I knew him, but there I was in Kazakhstan—a very vivid visual memory, a picture in my head, and I can tell you all about it. But clearly I was sleeping in the dark and my eyes were closed. So, clearly, I can see things without my eyes being active.

21:50

Let’s look at some other parts of the brain. Of your 86 billion neurons, 69 billion—more than two thirds—are in your cerebellum: the granule cells. In fact, more than two out of three of your neurons are these little cells with four stubby little dendrites in the cerebellum. Yet, if you lose them or never had them—so this just came out: a patient, discovered recently in China. She’s 24 years old and slightly mentally impaired—just a little bit. She moves in a clumsy way and has a slight speech impairment, but you can apparently converse with her perfectly well. It took her until six years of age to learn how to walk and to run. And when people scanned her, they found this—they did DTI; it’s quite a nice paper if you want to look it up—a complete absence of the cerebellum. This is one of the few, rare cases of agenesis of the cerebellum. No cerebellum whatsoever. She lacks 69 billion neurons. Yet she’s clearly—the doctors describe this—fully conversant, and she can clearly talk about internal states. So you don’t, apparently, need the cerebellum. Now, there’s no such case for the cortex—no case where you have no cortex whatsoever and are still a conscious person. So that seems to tell us the cortex is much more essential for consciousness than the cerebellum. And we have to ask, from a neurological point of view—but more interestingly, from a conceptual, theoretical point of view—what is it about the cerebellum that it fails to give rise to conscious sensation? It has beautiful neurons—Purkinje cells, sort of the mother of all neurons, with beautiful dendritic spikes, complex spikes, simple spikes, everything in glorious complexity. There are lots of neurons, they have action potentials—everything you expect in a real brain. Yet you remove it, and patients don’t complain of loss of consciousness. When there’s a stroke, or a viral infection, or a gunshot wound there, people have ataxia, they have motor deficits, but they never complain about any loss of consciousness. So we have to ask why.

24:02

So, Francis Crick and I famously made this prediction in a Nature article, gee, almost 20 years ago now. We said that the neural correlates of consciousness don’t reside in the primary visual cortex. Yes, much perceptual activity correlates with V1, but that’s not where visual conscious sensations arise. There’s lots of evidence for and against it; let me just show you the latest. It comes from the Logothetis and Tanaka labs—it’s human fMRI, although David Leopold has done a similar experiment in monkeys. It’s one of those two-by-two dissociations I mentioned before. This is an artistic rendition: the center here was always a low-contrast grating, moving, I think, left or right, and at some point you had to say which way it was moving. It was always there, but sometimes you saw it and sometimes you didn’t, because of this manipulation. And sometimes you had to attend to the letters, or you had to attend to the grating. So here you manipulate visibility, and here you manipulate whether you’re attending here or there. It’s a two-by-two design.

25:28

And then they look at the fMRI signal in the part of primary visual cortex that corresponds to that central area. This is two subjects; the paper had four. Here are the two traces when you have high attention, with or without visibility of the central grating. And here, low attention to the grating—in other words, you attended to the periphery—whether you saw it or didn’t. Same thing here. In other words, what V1 seems to care about is whether or not you attend to the central grating. Whether or not you saw it—here or here, those two curves totally overlap—makes no difference. Now, of course, this is fMRI, not single neurons, although David Leopold has something similar at the single-neuron level. But this gets at the techniques people use to untangle consciousness from attention and related processes. So, in terms of the cortex, we have pretty good evidence that consciousness doesn’t seem to involve primary visual, primary auditory, or primary somatosensory cortex; it seems to primarily involve higher-order cortex: parietal cortex, temporal cortex, and prefrontal cortex.

26:39

How many people have heard about this part of the brain? Do you all know it? Okay, you should. Remember where you were when somebody first mentioned the claustrum. The claustrum—as implied by its name—is a hidden structure. In us it’s yay big—like this. It’s roughly here, under the insula; you have one on each side. You can see it in all mammals. Mice definitely have it, and in fact there are a few genes that are uniquely expressed there. Here you can see it—these are pictures from Nikos Logothetis. It’s a sheet-like structure lying underneath the insula and above the basal ganglia, embedded in white matter, between the external and the extreme capsule. It’s a thin layer of cells—in humans maybe between 0.5 and 2 millimeters thick—and, as I said, elongated. There are few patients with lesions there, because it’s supplied by two separate arteries, and if you wanted to lesion it chemically or pharmacologically, you’d have to do multiple injections because it’s so elongated.

27:45

Now, this is a recent paper, but it’s known from the rodent literature as well as the cat, the monkey, and the human literature. This is a fancy version of multi-spectral DTI. The claustrum connects with all the different cortical areas in this very nice topographic manner: there’s a visual part, a somatosensory part, a motor part, and a prefrontal part. And there are a few interesting asymmetries—for instance, it gets input from both ipsi- and contralateral cortex but only projects ipsilaterally. So, like the thalamus, it’s highly interconnected with the cortex. But unlike the thalamus, it doesn’t seem to be organized into 45 different, separate nuclei; it seems to be a single, continuous tissue.

28:36

So Francis Crick and I, based on this, made a structure-function argument, similar to the much more famous one he made with Jim Watson. He first wrote about this in his book, and later we wrote this paper. You have this unique anatomical structure in the brain, and you ask: what is its function? It seems to integrate information from all the different cortical regions. So we thought at the time that it was associated with consciousness: it binds the information from the different sensory, non-sensory, motor, and planning areas together into one coherent percept—a little bit like the conductor of the cerebral symphony. All these different players—the different cortical areas—both project to and get input from the claustrum, so one obvious function it could serve would be to coordinate all of that.

29:30

This was in fact the very last paper Francis worked on. Two days before he went into the hospital, on June 28, 2004, he told me not to worry; he would continue to work on the paper. Here is the manuscript. And on the day he passed away—Odile, his wife, told me—he was still dictating corrections to this manuscript in the morning, and in the afternoon, two hours before he died, he was hallucinating a discussion with me about the claustrum. A scientist to the bitter end.

30:08

Okay, so this paper appeared in 2005. And then nothing happened for ten years—well, no, there were a bunch of pharmacological and molecular studies. But then this paper came out, and it’s a pretty cool paper. It’s a single patient, alright? So I have to warn you: there are all sorts of problems with single patients. But it’s an interesting anecdote that suggests experiments one could easily do, for instance, in rodents. Here you have an epileptic patient, and as part of the epilepsy workup, electrodes are placed in the brain to see which areas are eloquent and which are not. This is a common procedure. Typically—and we know this now from 120 years of direct micro-stimulation of human cortex—nothing happens. Less frequently, the patient has a discrete sensation: hears something, sees something; sometimes there’s motor activity, sometimes a vague, body-centered feeling that’s very difficult to express in words. In this case, at one electrode, the patient—the easiest way to describe it—turned into a zombie. Every time they stimulated, the patient would stare ahead and stop, for as long as the current was on, between one and ten seconds. If the patient had been doing something simple, like this, the patient would stare ahead and continue doing it. If the patient had been saying something very simple—a word: “two, two, two”—the patient would continue saying it while staring ahead. The patient had no recollection of these episodes. And the electrode was just below the claustrum—that’s the location of the electrode here, just underneath the claustrum. Once again, it’s a single patient, so it’s very difficult to know what to make of it. But it’s certainly challenging, and interesting enough to motivate further experimentation in animals. You obviously can’t repeat this in a patient.

32:18

Alright. So there are lots and lots of people looking for the neuronal correlates of consciousness, in all sorts of different, typically cortical, structures. Of course, we have to ask: what about consciousness in other mammals? Here you see two female mammals: my daughter and her guard dog, her beloved German shepherd, Tosca. Now, we think—not only because I’m very fond of dogs; biologists at least believe this—that most mammals share most essential things, except language, with humans. And we say that because their brains are very similar. If I give you a little cubic millimeter of human cortex, of dog cortex, of mouse cortex, only an expert armed with a microscope can really tell them apart. The genes are roughly the same, the neurons are roughly the same, the layering is roughly the same. It’s all basically the same—there’s just more of it in us: we have roughly a thousand times more cortex than a mouse, and it’s thicker. Of course, we don’t have the biggest brain; that honor goes to elephants and whales. So, for reasons of evolutionary and structural continuity, I think there’s no reason to deny that animals like dogs can be happy and can be sad. And if you’re around a cat, there’s no question about it: it can be lonely, it can have other states that we have—maybe less complex, but they certainly also share the gift of consciousness with us. Right now, experimentally, it’s very difficult to address to what extent this is true of animals that are very different from us. For instance, cephalopods, which are very complex, which show imitation learning and other very complicated behaviors. Or bees, which have the dance and very complicated behaviors—and whose brain, I’d like to remind you, has in the mushroom body a neuronal density roughly ten times higher than that of the cortex. It’s very difficult right now to know to what extent a bee actually feels something when it’s laden with nectar in the golden sun. We don’t know. With mammals it’s easier, because you can do tests very similar to those we do in humans. But ultimately you’re left with a number of hard questions that you cannot really address without a theory. And that’s what I want to talk about in the second part.

34:23

So if it’s true that the claustrum is involved—let’s just assume that—we really want a deep theoretical reason: why the claustrum? Why not the thalamus? Why not some other structure? Why not the cerebellum? I was just telling you that, empirically, the cerebellum does not seem to be involved in consciousness. Well, why? It’s curious: it has lots of neurons and everything else. What’s the theoretical reason? Why not afferent pathways? Why not cortex during deep sleep? If you record from a single neuron in a sleeping animal, it’s not easy to tell whether the animal is asleep or not. What’s different? Why does cortex in deep sleep not give rise to consciousness? And if you think synchrony is important—well, your brain is highly synchronized during a grand mal seizure, but of course that’s when you lose consciousness. So why is that?

35:09

There are more hard questions that are very difficult to answer without a theory. There are patients like this—this is one of Nick Schiff’s patients—in whom everything is dysfunctional except one isolated island of cortical activity. And he says one thing, again and again and again: “Oh shit. Oh shit. Oh shit.” That’s what he says, eight hours a day—like a tape recorder that’s stuck. Is this person experiencing something, a little bit, maybe? Right now, that’s very difficult to answer.

35:35

What about pre-linguistic children—a newborn, or a pre-term infant like this one, at 28 weeks? Or what about a fetus? At what point, if ever, does a fetus make the transition from feeling nothing (clearly being alive, but not feeling anything, as we don’t in deep sleep) to feeling something? Right now we have heated arguments about abortion based on legal and political reasoning, but from a scientific point of view we really don’t know how to answer this question.

36:11

What about anesthesia? For example, ketamine anesthesia, when your brain is highly active? And, of course, there are lots of cases of awareness under anesthesia.


What about sleepwalking, where an individual with open eyes can do complicated things, including driving? To what extent is this person conscious?


As I mentioned: what about animals that are very different from us, that don’t have a cerebral cortex and a thalamus but a very different architecture, yet are capable of highly sophisticated behavior—like an octopus, or a bee, or a fly, or C. elegans?

36:45

And then, lastly, what about things like this? We live now in a world where—particularly here, at MIT, at its center—we’re more and more confronted with creatures that, if a human were to exhibit their abilities, nobody would doubt were conscious. If a severely brain-injured patient could play chess, or play Jeopardy, or drive a car—all things computers can do—there would be no question in anybody’s mind that this person is fully conscious. So on what grounds do we grant or deny consciousness to these guys—a more advanced version of Siri, let’s say? Remember the movie Her? Samantha? How do we know? We need a theory that tells us whether Samantha is actually conscious or not. Right now, we don’t have such a theory. So, beyond studying the behavioral correlates and the neuronal correlates of consciousness—which is what I do, and what lots of other labs now do—we need a theory that tells us, in principle, when a system is conscious. Is this one conscious? And I want a rigorous explanation for why it is or isn’t. What about these? What about these?

37:54

Alright. So we need a theory that takes us from this—from conscious experience—to mechanisms and to the brain. This, incidentally, also bypasses the hard problem, which Leibniz first raised with his famous example of walking inside a mill, and which William James and, more recently, David Chalmers have discussed. It is probably true that trying to take a brain and somehow wring consciousness out of it is truly a hard problem—although one has to be extremely skeptical when philosophy says something is hard and science can’t do it; historically, philosophers don’t have a very good track record of predicting such things. But I think it’s much easier if we start where any investigation of the world has to start, namely with the most central fact of our existence: my own feelings, my own phenomenology.

38:46

So now I come to the theory of Giulio Tononi—a psychiatrist and neuroscientist at the University of Wisconsin–Madison, a very good friend and close colleague; disclosure: we’ve published many papers together—who has this Integrated Information Theory, which he’s worked on with many people but which is really his theory. There are various versions of it; for the latest, if you’re interested, I urge you to go to this PLOS Computational Biology paper.

39:21

So here, just as in modern mathematics, you start out with an axiomatic approach. The idea is that you formulate five axioms based on your phenomenological experience—the experience of how the world appears to you. These axioms should do what any axiomatic system does: they should be independent, not derivable from each other, and together they should describe everything there is to describe about the phenomenon. Then, from these axioms, you go to a calculus that implements them—the actual meat of the theory. And then you test this Integrated Information Theory with various experiments you can do in the clinic, in animals, and in people.

40:08

Alright. So there are five axioms here, and the axioms themselves, I think, are relatively straightforward to understand. The first is the axiom of existence: in order for anything to exist, it has to make a difference. This is also known as Alexander’s Dictum: if nothing makes a difference to you and you don’t make a difference to anything, you may as well not exist. And I remind you, this principle is used in physics—for example, in the discussion of the aether. Physicists certainly know it: the aether was this notion, current around 1900, of a medium that fills all of space, infinitely rigid yet also infinitely flexible, invoked to explain a number of puzzling facts about the cosmos at large. And Einstein didn’t need the aether to explain anything. Now, the aether could still exist, but it has no causal role: nothing makes a difference to it, and it doesn’t make a difference to anything, and therefore physicists don’t talk about it anymore. So I think it’s a deep principle that we use, whether we know it or not.

41:11

So, the axiom of existence says: experience exists intrinsically. This is very important: it’s not observer-dependent. My consciousness exists totally independently of anything else in the world. It doesn’t depend on another brain looking down on me, or on anything else observing me. It just exists, intrinsically. Second: experience is structured—it has many aspects. This is a famous drawing by Ernst Mach. If you look at what it actually is: it’s him, trying to describe what he sees looking out of his one eye. He can see his mustache, the bridge of his nose, and he looks out at the world. And the world has all sorts of elements in it: objects, “left,” “right,” “up,” “down”—it’s incredibly rich, with all these concepts next to each other. The books are to the right of the window, which is above the floor, et cetera. Each conscious perception is very rich.

42:08

Which brings me to the third axiom: each experience is the way it is because it’s differentiated from a gazillion other possible percepts you could have. If you go back to the scholastics, they actually thought a lot about this—some of them, I think, were really better than the analytic philosophers. People like Albertus Magnus called this the essence, the Sosein: experience is differentiated, one out of many. Imagine everything I see right now out of my left eye—this one unique percept I’ll never see again in the history of the universe—compared to everything else I could see: every frame of every movie that’s ever been made or will ever be made, plus all smells and all tastes and all emotional experiences. It’s incredibly rich, both in what you see and in what you don’t see. Even if you wake up disoriented—you’re jetlagged, you’ve traveled nine hours, you wake up at three in the morning in your hotel room—all you know is: it’s black. But that black is not just a simple bit, because that black is different from anything else that you might see and that you have ever seen. Even that black is incredibly rich, differentiated from all other possible experiences.

43:31

Next: philosophers have much remarked upon this; they call it holistic, or integrated. Each experience is highly integrated. It is one. For example, you don’t see the left half of the room separately from the right half. Of course you can do that, but then you’re having different experiences. Whatever I apprehend, I apprehend as one: a unitary, integrated, holistic percept. It’s like when I look at the word “honeymoon”: I don’t see “honey” and “moon” and then some higher-order “honeymoon”; I just see it as “honeymoon”—you know, what people do once they get married.

44:05

Alright. And lastly, experience is unique. At any one point in time, I have only one experience. I am not a superposition—unlike in quantum mechanics—of different conscious percepts. The Christof, my narrative self, the one that looks out at the world and sees all of you, that sees this inner movie: there is only one experience, not different experiences in my left brain and my right brain—unless I’m in a dissociative state, which sometimes happens with split-brain patients. But in a normal brain it’s integrated: I have one experience, at one level of granularity—whatever that may be: neurons, sub-neurons, supra-neurons, columns—and at one timescale. It doesn’t flow at infinitely many timescales with me a superposition of all of them. I’m only one.

44:54

Alright. So now it gets a little bit tricky, because now we have to move from the axioms to the postulates. It’s all very well having these axioms, and most people find a lot in them that resonates—although some say, “Well, maybe we need to postulate an additional axiom.” Maybe yes, maybe no. But the bigger challenge is to move to mechanisms. Because we are scientists, not just philosophers, it’s not enough to speculate. You want to speculate in a realm where you can ultimately make predictions about what is and is not conscious, where you can make predictions about the neuronal correlates, and about whether machines ever will or won’t be conscious.

45:34

So existence—the first postulate, and in some sense the most difficult one to get across—says: for experience, you have a mechanism, like a brain or a set of transistor gates, in a particular state: some neurons are firing, some neurons are not. We work with simple systems, actually, because IIT is very difficult to compute—the number of possibilities explodes very quickly. So we have this simple system here: five elements, three of which are an XOR, an OR, and an AND gate, each either on or off. Here they are off, and yellow means they are on. So you have these neurons—gates, if you want—and some are on and some are off. Alright?

46:17

Now, what’s really important is that experience is generated not only by a set of mechanisms in a particular state—like a brain where some neurons fire and some don’t—but by its cause-effect repertoire; I’ll come back to what that means in a couple of slides, in terms of causation. This state has to come from somewhere, and it’s going to go somewhere. Remember, I said you can only exist if you make a difference. And because consciousness does not depend on an external observer, you only exist if you make a difference to yourself. In other words, something within your system has to be able to cause you, and you have to be able to cause things within your system.

46:59

So, when you’re in this state—this one off, this one off, and this one on; three neurons corresponding to A, B, and C—you can say: given that I find myself in this state, these are the various states I could have come from in the immediate past (assuming a discrete system), and these are the different states I could go to. So it has a cause repertoire—it was caused by some of these states, with these probabilities—and it has a number of possible ways to go into the future, depending on its input. And if I were to do an experiment where, by [???], I eliminate some of these, that would change the consciousness of the observer in a predictable way, even though I may not change the state. I’ll come back to that in a second. Alright: experience is generated by any mechanism, in a particular state, that has a cause-effect repertoire.
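
To make cause and effect repertoires concrete, here is a toy sketch of a three-unit discrete system in the spirit of the OR/AND/XOR example above; the exact wiring is an assumption for illustration. Given a current state, it lists which past states could have produced it under a uniform prior (the cause repertoire) and the state it deterministically goes to next (the effect).

```python
from itertools import product

def step(state):
    """One synchronous update of the toy network:
    A <- OR(B, C), B <- AND(A, C), C <- XOR(A, B)."""
    a, b, c = state
    return (b | c, a & c, a ^ b)

STATES = list(product((0, 1), repeat=3))   # all 8 states of (A, B, C)

current = (1, 0, 0)                        # A on, B and C off
causes = [s for s in STATES if step(s) == current]
effect = step(current)

print("cause repertoire of", current, "->", causes,
      "(each with probability 1/%d under a uniform prior)" % len(causes))
print("effect of", current, "->", effect)
```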

48:01

So, composition: experience is structured, it has many aspects. In this case, there are many subcomponents. I can look at the entire system as a whole, or I can look at each of the subcomponents—this pair, this n-tuple, that neuron and that neuron and this neuron. In principle, I have to look at the power set of all these different mechanisms.

48:30

Experience is differentiated: it is one out of many. It is what it is because it differs in particular ways from all the other experiences. So, once again, you have these mechanisms and all these different subcomponents, and each one has a particular cause repertoire and a particular effect repertoire. Ultimately, this structure lives in a space—so-called qualia space—that has as many dimensions as there are states in your past and in your future. Here you have three neurons, so in principle you have eight states in the past and eight states in the future: the structure lives in a 16-dimensional space.

49:23

It has to be integrated: experience is unified. Here you compute a measure of how integrated the system is, essentially by computing the difference between probability distributions—using something like the Kullback-Leibler divergence or, as they actually do, a proper distance measure called the EMD, the earth-mover’s distance. You look at the states of all the elementary mechanisms and ask to what extent they could exist by themselves. If you have a system like a split brain, consisting of two independent brains, the joint distribution is just the product of the individual distributions, so the system as a whole doesn’t have its own autonomous existence. You only exist if you’re irreducible; if you’re reducible to simpler systems, you don’t exist—only the simpler systems do. So you compute the extent to which the system is irreducible by looking, essentially, at all possible cuts—all possible bipartitions, all possible tripartitions; this is where the theory gets practically very difficult to compute—and you take the cut that minimizes the information exchanged between the partitions. If you had a brain that was cut by a surgeon, you would essentially have two independent systems. They exist, but there isn’t anything it is like to be that brain as a whole, because the whole doesn’t exist at its own level. Only the subcomponents exist.
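
A crude sketch of this integration step, on the same toy wiring as the previous sketch: compare the intact system’s next-state distribution with the distribution after a bipartition in which inputs crossing the cut are replaced by uniform noise, then take the weakest cut. Real IIT uses the earth-mover’s distance over full cause-effect structures; the total-variation distance over next-state distributions used here is a simplification, purely to make “cut, compare, minimize” concrete.

```python
from itertools import product, combinations

NODES = (0, 1, 2)
STATES = list(product((0, 1), repeat=3))

def step(state):
    a, b, c = state                     # same OR / AND / XOR toy system
    return (b | c, a & c, a ^ b)

def whole_effect_dist(state):
    """Deterministic system: the next state gets all the probability."""
    dist = dict.fromkeys(STATES, 0.0)
    dist[step(state)] = 1.0
    return dist

def cut_effect_dist(state, part1):
    """Next-state distribution when inputs crossing the bipartition
    (part1 vs. the rest) are replaced by uniform noise."""
    part1 = set(part1)
    dist = dict.fromkeys(STATES, 0.0)
    p = 1.0 / (len(STATES) ** 2)
    for noise1, noise2 in product(STATES, STATES):
        seen1 = tuple(state[i] if i in part1 else noise1[i] for i in NODES)
        seen2 = tuple(state[i] if i not in part1 else noise2[i] for i in NODES)
        nxt = tuple(step(seen1)[i] if i in part1 else step(seen2)[i]
                    for i in NODES)
        dist[nxt] += p
    return dist

def total_variation(p, q):
    return 0.5 * sum(abs(p[s] - q[s]) for s in STATES)

def toy_phi(state):
    """Irreducibility of the effect side: distance to the weakest cut."""
    whole = whole_effect_dist(state)
    return min(total_variation(whole, cut_effect_dist(state, cut))
               for cut in combinations(NODES, 1))   # all 3 bipartitions

print("toy phi of (1, 0, 0):", toy_phi((1, 0, 0)))  # > 0: irreducible
```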

50:59

And lastly, exclusion: in any given system, you pick only the one subsystem that maximizes this irreducibility—this number called phi (Φ). That’s what phi is about: in some sense, it’s a measure of the extent to which a system is irreducible. You look at all possible subsystems at all possible levels of granularity—small grains and large grains, different spatial granularities, different temporal granularities—and, like an extremum principle in physics, you pick the one that maximizes phi. That is the system that has consciousness associated with it.

51:46

And then you come to the central identity of the theory. If you have a mechanism in a particular state with a particular cause-effect repertoire, the central identity posits that the structure of the system in this high-dimensional qualia space—the space spanned by all the different states it could take in the past and in the future, given where it is right now—is what experience is. In a sense, it’s the Pythagorean program run to its completion, because ultimately it says: experience is this mathematical structure, this shape, in a very high-dimensional space. And there are two things associated with that. The structure itself gives you the quality of the experience: whether you’re seeing red, or it’s the agony of a cancer patient, or the dream of a lotus eater—each is what it is as a structure in these trillion-dimensional spaces. That’s what they are. The voice inside your head, the picture inside your skull—they are what they are because of this mathematical structure. And the quantity of the structure is measured by this number called phi. So the theory says that if you look at a system, there’s a main complex—the component that’s most irreducible, with the highest phi—and that, to switch over to the language of neurology, is the neural correlate of consciousness. Now, I know there’s no way in hell you can convey the complexity of this theory in ten minutes, so for now let’s just go with that; you can ask me questions afterwards. All the mathematics is spelled out in the papers.

53:35

The theory makes a number of predictions, some of which are compatible with the very ancient philosophical belief called panpsychism. I first encountered panpsychism as an undergraduate, in Plato, of course. It’s been a recurring theme among Western philosophers, including Schopenhauer; Strawson is probably the contemporary philosopher most closely associated with it. And then, of course, the Dalai Lama—it’s a very powerful strand of Buddhism. But there are also places where the theory makes strikingly different predictions, particularly when it comes to computers.

54:10

So the theory says consciousness can be graded. You have a system like this: three times five, fifteen “neurons”—really, simple switches—interconnected in this way, and you can now compute phi. Whether or not you think the theory is relevant, whether or not it explains consciousness, it’s a well-defined theory that takes any such mechanism, in a particular state, and assigns a number to it. In this case the number—it’s dimensionless—is 10.56. It tells you how irreducible the system is; in some sense, how much this system exists. The larger the number, the more irreducible the system is, and (in some real ontological sense) the more it exists.

54:56

Now you add noise to these connections. You leave all the connections there but add more and more noise to them, and you can see that the overall phi goes down: there’s less integration, because you’ve injected entropy into the system. You can also compute the phi of these little guys, because, once again, in principle you compute phi for all possible configurations of elements and pick the maximum. Here the little guys still have a very low phi, lower than the whole. But then you add so much noise that the system disintegrates—into five separate conscious systems, each of which is separately conscious at a very low level, because the whole now—sorry, these numbers are switched around; it should be the other way—the little guys now have more phi than the big guy. So the theory says consciousness is graded. This, of course, reflects our own day-to-day experience: your consciousness waxes and wanes. As a baby it’s different from a fully grown adult; even as a teenager you don’t have a lot of insight into your own behavior—you do certain things without knowing why. And, of course, if you become old and demented, your consciousness declines. Even during the day—when you’ve slept well, when you haven’t slept in a day or two, when you’re totally hungover—your consciousness waxes and wanes. The theory very much reflects that: consciousness is graded; it’s not an all-or-none thing.

56:30

A very interesting prediction: this theory predicts that any feed-forward system has Φ = 0. The reason, essentially, is that the first axiom, existence, says a system has to make a difference to itself; in other words, it has to feed back to itself and get input from itself. A strictly feed-forward system does not do this. Interestingly, if you look at machine-learning algorithms, at standard convolutional nets, they're all feed-forward. So what this theory says is: yes, you can have a complicated neural network that does complicated things, like detecting whether a cat is present or a face is present; it can do anything you can do with standard machine learning; yet this system will not be conscious, because it doesn't have the right cause-effect structure, the right causal structure. This also means there isn't any Turing test for consciousness. There can, of course, be Turing tests for intelligence, but a Turing test for consciousness doesn't work, because consciousness is not an input-output manipulation. It's not that you manipulate the input and look at the output, because you can clearly do that with a strictly feed-forward network. And the theory (whether you believe it relates to consciousness is a different matter) quite clearly says: the phi associated with any feed-forward network will be zero.
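
A small illustration of that claim, again with PyPhi. The network below is my toy example: a three-element chain of COPY gates wired strictly forward. Under IIT, any such feed-forward wiring should come out at Φ = 0.

```python
import numpy as np
import pyphi

# A -> B -> C chain of COPY gates: strictly feed-forward, no recurrence.
# Rows are current states of (A, B, C), ordered little-endian.
tpm_ff = np.array([
    [0, 0, 0],  # (0,0,0)
    [0, 1, 0],  # (1,0,0): B copies A=1
    [0, 0, 1],  # (0,1,0): C copies B=1
    [0, 1, 1],  # (1,1,0)
    [0, 0, 0],  # (0,0,1)
    [0, 1, 0],  # (1,0,1)
    [0, 0, 1],  # (0,1,1)
    [0, 1, 1],  # (1,1,1)
])
cm_ff = np.array([
    [0, 1, 0],  # A feeds B
    [0, 0, 1],  # B feeds C
    [0, 0, 0],  # C feeds nothing
])
net_ff = pyphi.Network(tpm_ff, cm=cm_ff)
sub_ff = pyphi.Subsystem(net_ff, (0, 0, 0), (0, 1, 2))
print(pyphi.compute.phi(sub_ff))  # expected: 0.0 for feed-forward wiring
```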

57:50

Which also means you can have two separate networks: a complicated, heavily recurrent feedback network, and its equivalent. For a finite number of time steps you can take any complicated feedback network, unfold it, and turn it into a much more complicated, purely feed-forward network. Both systems will do exactly the same thing; they're isomorphic in terms of input-output behavior. Yet the one, because of its causal structure, so the theory says, will be conscious, and the other one not.
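
Here is a small numpy sketch of that unfolding for a toy recurrent net. The weights and inputs are arbitrary; the point is only that the unrolled, strictly feed-forward version reproduces the recurrent network's output for a fixed number of steps, while no unit in it ever feeds back to itself. IIT's claim is that the two nonetheless differ in phi.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 4, 5
W = rng.normal(size=(n, n))   # recurrent weights (feedback)
U = rng.normal(size=(n, n))   # input weights
xs = rng.normal(size=(T, n))  # T input frames

# Recurrent network: one set of units whose state feeds back into itself.
h = np.zeros(n)
for x in xs:
    h = np.tanh(W @ h + U @ x)

# Unrolled network: T layers, each a fresh copy of the units, wired
# strictly forward in depth. Same weights, same arithmetic, same output,
# but no element ever feeds back to itself.
layers = [(W.copy(), U.copy()) for _ in range(T)]
h_ff = np.zeros(n)
for (W_t, U_t), x in zip(layers, xs):
    h_ff = np.tanh(W_t @ h_ff + U_t @ x)

assert np.allclose(h, h_ff)  # identical input-output behavior for T steps
```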

58:21

So let's look at some experimental predictions. The neuronal correlate of consciousness is identical to the main complex, the one that maximizes phi, in a particular state with its own particular cause-effect repertoire. I emphasize that because of some experiments we can begin to do now. First of all, there was this paper from the Tononi lab ten years ago; the method is called "zap and zip." What they did: they took volunteers, healthy undergraduates, and sleep-deprived them for a day, so that they would then sleep in a lab equipped with 128 EEG channels and a TMS device. With the TMS you use sub-threshold pulses, so the person doesn't wake up. And then, essentially, you tap the brain, and you look at the reverberation in the EEG. You use [???], an EEG source-localization device, and then you compute the complexity of the resulting brain wave. Think of it a little bit like a bell, like the Liberty Bell: you ring it with a hammer and then you can hear it resonate, and if it's a really good bell it resonates for a long time. That's the metaphor. So this is the awake state. The timescale here is something like 300 milliseconds. You give the TMS pulse here and then you can see this reverberation. You do it here, over the precuneus, and you can see it travel contralaterally; it reverberates around the cortex. And here is the underlying source localization: by this measure it's well integrated. That's what the cortex does. So what they're trying to do is derive a simple empirical measure you can use in the clinic to decide whether the patient in front of you, who may be severely impaired and unable to speak, is actually conscious or not.

1:00:12

In that paper they did this for wakefulness and then for deep sleep. If you have subjects in deep sleep, you get a local response that's in fact even bigger, depending on up and down states, et cetera. But it's less structured, the complexity is much lower, and it very quickly stops. It doesn't travel nearly as far. The brain is much more disconnected.

1:00:33

What they've now done, in a large clinical study of a couple hundred subjects, patients in Italy, and they're now trying it at a bunch of different clinics, is to work at the single-subject level, not the group level, because for it to be a clinically useful device it has to work at the level of individual people. You do this in normal subjects, awake and asleep. You do it in volunteers who get anesthetized with three different types of anesthesia, to test what the measure does under anesthesia. You do it in patients in vegetative states, in minimally conscious states, and in locked-in syndrome. From the clinic we know that locked-in patients are conscious, that minimally conscious patients are sometimes conscious, and that patients in a persistent vegetative state don't appear to be conscious. So what they do: they zap the cortex, they record the underlying cortical activity pattern, and then they compress it using Lempel-Ziv (hence "zap and zip") to get a single number, the PCI: the Perturbational Complexity Index. It's a scalar, and when it's high the patients tend to be conscious; these are the conscious subjects. When it's low, the other clinical measures agree that the patient isn't conscious. So it segregates very nicely. It gave a discrepant answer in two severely impaired patients: in both of them it predicted they would be conscious, and indeed, two days later, they "woke up" by other clinical criteria. They shifted from a vegetative state, where people are non-responsive to spoken commands and other stimuli, into a minimally conscious state.
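
For flavor, here is the "zip" step in isolation: the classic Lempel-Ziv (1976) phrase-counting algorithm, in the well-known formulation of Kaspar and Schuster, applied to a binarized activity matrix. This is only the compression step; the real PCI pipeline (TMS triggering, source localization, statistical binarization, normalization) is in the published papers, and the toy "awake" and "sleep" matrices below are my stand-ins.

```python
import numpy as np

def lz76(s: str) -> int:
    """Count phrases in the Lempel-Ziv (1976) parsing of a binary string."""
    n = len(s)
    i, k, l = 0, 1, 1
    c, k_max = 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:            # no earlier match: a new phrase starts
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

def complexity(binary_matrix: np.ndarray) -> float:
    """Normalized LZ complexity of a binarized (sources x time) response."""
    s = ''.join(binary_matrix.astype(int).astype(str).ravel())
    return lz76(s) * np.log2(len(s)) / len(s)

rng = np.random.default_rng(0)
awake_like = rng.integers(0, 2, size=(8, 200))               # rich response
sleep_like = np.tile(rng.integers(0, 2, size=(8, 1)), 200)   # stereotyped
print(complexity(awake_like), ">", complexity(sleep_like))
```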

1:02:17

So that's pretty cool. That's really very exciting, because it could be the first time we have a clinically useful device that tells you whether the patient in front of you is actually conscious or not. And in the U.S. alone there are roughly 10,000 people in states like the persistent vegetative state. Some of you will remember Terri Schiavo; she was an example of that.

1:02:42

Alright. So you can try to explain some other observations. Most famously: why is the cerebellum not involved in consciousness? The main hypothesis is this. If you do very simple computer modeling of networks, of connectivity with respect to phi, you notice that the cerebellum is really organized as a bunch of two-dimensional sheets: you have the Purkinje cells and you have the parallel fibers. It doesn't have three-dimensional network connectivity in the sense that, for example, the cortex has, which is heavily interconnected with small-world connectivity. If you have very regular, almost crystalline arrays of these two-dimensional slabs, you get a very low phi; whereas with a cortex-like network, with heterogeneous elements of different cell types interconnected in a small-world way, you much more easily get very high values of phi.
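
A sketch of the topological contrast only, not of phi itself (exact phi is exponential in system size, so nobody computes it for realistic networks). This compares a regular ring lattice, as a stand-in for the cerebellum's crystalline sheets, with a small-world rewiring of the same graph; node counts and parameters are arbitrary.

```python
import networkx as nx

n, k = 16, 4
# Cerebellum-like: a regular ring lattice (p = 0 means no rewiring).
lattice = nx.watts_strogatz_graph(n, k, p=0.0)
# Cortex-like: same nodes and degree, rewired into a small world.
cortexish = nx.connected_watts_strogatz_graph(n, k, p=0.3, seed=1)

for name, g in [("lattice", lattice), ("small world", cortexish)]:
    print(f"{name}: clustering={nx.average_clustering(g):.2f}, "
          f"path length={nx.average_shortest_path_length(g):.2f}")
```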

1:03:30

Alright, to come to an end, let's look at some cool predictions. Let's do this. We're thinking about channelrhodopsin and halorhodopsin here, in humans; you can do this in mice and in monkeys also. First, you're looking at a gray apple. You have activity in your favorite color area, let's say down here, V4, and it goes up to LIP, where it combines with the spatial information. You're conscious of a gray apple, and you say "gray apple." Now you make the following experiment. You inject these neurons here with halo, so the halo is expressed throughout the neurons, particularly in the synaptic terminals. Now you shine green light on them and you turn off the synaptic terminals. And nothing changes. This is counter-intuitive, because nothing changes in the activity here: in both cases, the neurons in your color area are not firing, both here and here. So if I just look at the firing, I see that in both cases the neurons are silent, and I might guess that in both cases you'll say "the apple is gray." But here they're not firing although they could have fired; they didn't fire because there wasn't any color. And here they're not firing because they've been blocked, prevented by my experimental manipulation. So in this second situation I've reduced the cause-effect repertoire: I've dramatically eliminated the effect side. These neurons, even if they were to fire, could not have any effect downstream anymore. And the theory quite clearly predicts that, although the firing is the same, the consciousness will be different. Here you'll probably get something closer to anosognosia, what people call anosognosia with achromatopsia. In other words, the patient will say, "It's not that I see gray" (because gray is a color, of course), "I see no color at all." Or he'll say, "Well, I know apples are red, so they're probably red." And there are patients like this. What's counter-intuitive here to most physiologists (and I'll show you a second case) is that in both cases the neurons are not firing, yet you get a different conscious state. Now, this doesn't violate physicalism. The mental is still totally supervenient on the physical; the difference is that here, if you want, your synaptic output weights have been set to zero by my experimental manipulation. And what this also shows you is that consciousness is not about sending spikes somewhere; it isn't a message being passed along with the spikes. It's the difference the system makes to itself. And here, the ability of the system to make a difference to itself has been dramatically reduced by the manipulation.
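
In IIT terms the manipulation leaves the firing state untouched but rewrites the cause-effect structure. A toy version of that, reusing the three-node PyPhi network from above: clamp element A's outputs (my stand-in for the silenced terminals; the remapping below simply makes B and C read A as 0) and recompute phi for the very same state.

```python
import pyphi

def sever_outputs_of_A(tpm):
    """B and C now read A as if clamped to 0: A still has a state, but it
    can no longer make a difference to anything downstream."""
    cut = tpm.copy()
    for i in range(8):                  # states ordered little-endian (A,B,C)
        b, c = (i >> 1) & 1, (i >> 2) & 1
        j = (b << 1) | (c << 2)         # same state, but with A read as 0
        cut[i, 1] = tpm[j, 1]           # B ignores the real A
        cut[i, 2] = tpm[j, 2]           # C ignores the real A
    return cut

cm_cut = cm.copy()
cm_cut[0, :] = 0                        # A's outgoing edges are gone

net_cut = pyphi.Network(sever_outputs_of_A(tpm), cm=cm_cut)
sub_cut = pyphi.Subsystem(net_cut, state, (0, 1, 2))
print(pyphi.compute.phi(sub_cut))       # same activity (1, 0, 0), different
                                        # cause-effect structure, different phi
```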

1:06:32

Here's a second experiment. In fact, Dan Dennett talks about it as the perfect experiment; it's actually not, it's quite imperfect. Here you have the opposite case. You have a red apple, and now your neurons here are firing, and their firing symbolizes red. It goes up here, you're conscious of red, and you say, "I see a red apple." Now you do the same manipulation: you introduce halo into these neurons. What halo does, when you shine the right light on it, is activate a chloride pump that effectively shunts the neuron, so these neurons cannot influence their post-synaptic targets anymore. They're firing just as much as before, but the theory says that in this case you will not see anything. Again you'll get the symptoms of seeing no color, anosognosia with achromatopsia, while here you see the color red. So that's a prediction you can test, either this way or by using TMS. It's a little like the Sherlock Holmes story "Silver Blaze." Remember, the inspector (who is, as usual, clueless) asks, "Well, what was the critical clue?" And Sherlock Holmes says: "The dog." "Why the dog?" "Well, the dog didn't bark in the night." That's the critical clue. And what that revealed to Sherlock Holmes was that the intruder was known to the dog. The dog could have barked, but it didn't. Which is different from a dog that had been poisoned, for instance, because that dog couldn't have barked; then the meaning of the silent dog would have been quite different. So the important point is to see that consciousness is not in the sending of messages. It's the difference a system makes, by generating spikes, to itself.

1:08:38

Alright. Let me come to an end. A question, particularly here at the Center for Intelligence, is: what difference does consciousness make? Could it have been evolutionarily selected for? Under this reading, consciousness is a property intrinsic to organized matter, a property like charge or spin. We find ourselves in a universe that has space and time, mass and energy; we also find ourselves in a universe where organized systems with a value of phi different from zero have experiences. It's just the way it is. We can ask, "Could we imagine another universe?" I could. I can also imagine physicists occupied with the question, "Can you imagine a universe in which quantum mechanics doesn't hold?" Maybe yes, maybe no. But a priori no physicist goes around asking, "What's the function of charge, or of mass?" It just is. We live in a universe where certain things have a positive or negative charge.

1:09:37

But now we find ourselves in a universe where there are highly conscious creatures. So the question is: how did that come to be selected for? And the answer is that integrated information is evolutionarily advantageous: rather than having separate streams of information, say auditory, visual, and memory, it's much better if you can integrate that information, because then you're much more easily able to find coincidences and make an informed judgment on the whole. You can show that in simple evolution experiments; I'm not going to go into great depth here. We have simple creatures that have a genome, and we do a sort of artificial evolution. They're like Braitenberg vehicles, except they have a genome. Early on they don't know anything. They have three visual sensors, side door sensors (a one-bit memory... no, sorry, no memory here), and then motors: they can move left, move right, or go straight ahead. You put them down here and send them through these mazes, and over 60,000 generations you select the top ten percent in terms of how far they've gotten through the labyrinth. You take the best ones, mutate them using various point mutations, send them in again, and do this over and over and over again.
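
The skeleton of that selection loop, as a sketch. The fitness function below is a placeholder; in the actual animat experiments from the Tononi group, the genome is decoded into a network of logic gates and scored by maze performance.

```python
import numpy as np

rng = np.random.default_rng(0)

POP, GENS, GENOME = 100, 60_000, 64           # shrink GENS for a quick run
pop = rng.integers(0, 2, size=(POP, GENOME))  # random genomes: know nothing

def fitness(genome):
    """Placeholder score for 'how far through the maze did it get'.
    The real experiments decode the genome into an animat brain and run it."""
    return genome.sum()                       # stand-in objective

for _ in range(GENS):
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[-POP // 10:]]        # keep the top 10%
    pop = elite[rng.integers(0, len(elite), size=POP)]  # resample parents
    flips = rng.random(pop.shape) < 0.005               # point mutations
    pop = np.where(flips, 1 - pop, pop)
```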

1:10:51

And what you see, if you do this long enough, is the animats adapting to their environment under our particular selection function. And you see this nice relationship between how adapted the animats are and their measure of integration, the minimal phi (there's a large degree of redundancy in these networks, which is why the minimal phi is the relevant measure). Here, 100% is the animat that makes the optimal decision at every single point in the labyrinth. So this is a simple toy experiment, and you can make more of them, to show why it pays for an organism to be highly integrated. It suggests that the driver for why we are highly conscious creatures is that consciousness makes us much more effective at making decisions.

1:11:44

Now lastly, particularly at a school like MIT, let me come to the point that's probably most controversial, the one many of you are going to reject. Which systems are not conscious, or only minimally conscious? First of all, IIT solves a long-standing problem with consciousness that Leibniz talks about, that William James talks about, and that John Searle also talks about: the problem of aggregates. There are, what, a hundred people in this room. Is there a superconsciousness, an Übermind? Many people believe that. Well, there's not, and the theory says there is not, because phi is taken at its maximum over all subsets and all spatio-temporal grains. The idea is this: there's a local maximum here, within my brain, and a local maximum within Tommy's brain, but there's no Über-entity, no Christof/Tommy. Now, you could do interesting thought experiments that may become possible in the future. You could, for example, connect my brain and Tommy's brain with some sort of direct brain-to-brain interface that keeps enhancing the bandwidth between them. At some point, the theory says, our brains would become so interconnected that the phi of the two of us as a whole exceeds the phi of each one of us. At that point, abruptly, my consciousness and his consciousness would disappear, and there would be this new Übermind. But it requires a causal mechanism. And likewise, you can run it backwards: take a normal brain and slowly, axon by axon, poison or block the corpus callosum, the 200 million fibers that connect the left and right hemispheres. The theory says you start with a single, integrated consciousness; but as you block more and more, at some point the local phi will exceed the phi of the whole. At that point the big phi abruptly disappears, because of the fifth axiom, exclusion: you pick the maximum. And two consciousnesses appear.
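
That "pick the maximum" step is the part you can actually run. PyPhi exposes the search over candidate subsystems directly (called major_complex in recent versions, main_complex in older ones); a one-liner, reusing `network` and `state` from the three-node example above:

```python
import pyphi

# Exclusion in practice: evaluate candidate subsystems and keep only the
# one that is a maximum of integrated information.
major = pyphi.compute.major_complex(network, state)
print(major.subsystem, major.phi)
```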

1:13:36

Feed-forward systems have no phi. And most interestingly, think about computer simulations of brains, say Henry Markram's system. Fast-forward fifty years and suppose we have a perfect computer model with all the dendrites, all the synapses, the NMDA spikes, the calcium channels, the potassium channels, the genes, whatever is involved in consciousness, and this computer reproduces my behavior, input-output, and even at the level of the simulated mechanisms. People would say, "Well, clearly it's conscious." The theory says no. You have to look not at what it's simulating, but at its cause-effect power at the relevant hardware level, and the relevant hardware level is the CPU. So you have to look at the cause-effect structure of individual transistors. And we know a lot about them, because we build them: in the ALU, typically one transistor gets input from three to five transistors and talks to three to five others in the logic. So its cause-effect power is very simple, very much reduced. And so the theory says quite clearly: this thing will not be conscious, although the simulation replicates all the behavior. This really argues against functionalism. Although the behavior is the same, even at the level of simulated neurons, the underlying cause-effect repertoire is not. It's like simulating a black hole: I can do that in great detail, but spacetime will never bend around the computer. Or like a weather simulation: it never actually gets wet inside the computer. You can simulate it, but the simulation is not the thing. You get the simulated input-output, but the machine itself will not be conscious. To create a conscious being doesn't require magic; it requires replicating the actual cause-effect structure. So you'd want to do it neuromorphically: actually replicate the bilipid membranes, the synapses, the large fan-in and fan-out, in copper, or in silicon, or in light, or whatever. Not simulate it, but actually build it. Then you would get human-level consciousness. So, of these systems, only the upper-left one would be conscious.

1:16:03

So this is the way we know the world. I only know the world because I am this flame; that's how I experience the world, and it's the only thing I know about it. And, of course, objectively speaking, the world is more like this: there are many, many flames, other people and other conscious entities. I don't think IIT is any final theory, but I think it's by far the best theory to have come out of the last twenty years. It makes nice predictions, it's computational, it makes predictions about the neural correlates, it's axiomatic; it's everything you want from a scientific theory, in particular, it has predictive power in non-intuitive places. Just as Einstein's theory early on predicted things like black holes, which are totally non-intuitive and were finally borne out by observation, this theory predicts that you can find consciousness in very unusual places, maybe in very small animals, and that you may not find it in places where you think it is.


Thank you very much.
