
Where Shall We Meet
Explorations of topics about society, culture, arts, technology and science with your hosts Natascha McElhone and Omid Ashtari.
The spirit of this podcast is to interview people from all walks of life on different subjects. Our hope is to talk about ideas, divorced from our identities - listening, learning and maybe meeting somewhere in the middle. The perfect audio diet for shallow polymaths!
Natascha McElhone is an actor and producer.
Omid Ashtari is a tech entrepreneur and angel investor.
Where Shall We Meet
On Consciousness with Anil Seth
Questions, suggestions, or feedback? Send us a message!
Our guest today is Anil Seth, Professor of Cognitive and Computational Neuroscience at the University of Sussex, where he is also Director of the Sussex Centre for Consciousness Science. He was the founding Editor-in-Chief of Neuroscience of Consciousness (Oxford University Press).
Anil is a Clarivate Highly Cited Researcher (2019-2024), which recognizes the top 0.1% of scientists in the world by the impact of their publications.
In 2023, he was awarded the Royal Society’s Michael Faraday Prize, which is ‘awarded annually to the scientist or engineer whose expertise in communicating scientific ideas in lay terms is exemplary’.
His 2021 book Being You: A New Science of Consciousness was a Sunday Times Top 10 Bestseller, and was Economist, Guardian and FT Science Book of the Year. Anil edited and co-authored the best-selling 30 Second Brain, and also writes the blog NeuroBanter.
We talk about:
- How to define consciousness
- What it feels like to be a bat
- Are we at the mercy of our brain chemistry
- The concept of interoception
- The white and gold OR the blue and black dress
- We predict ourselves into existence
- Does consciousness need a body
Let’s get our neurons firing!
Web: www.whereshallwemeet.xyz
Twitter: @whrshallwemeet
Instagram: @whrshallwemeet
Hi, this is Omid Ashtari and Natascha McElhone. Our guest today is Anil Seth, Professor of Cognitive and Computational Neuroscience at the University of Sussex, where he's also Director of the Sussex Centre for Consciousness Science. He was the founding Editor-in-Chief of Neuroscience of Consciousness, Oxford University Press. Anil is a Clarivate Highly Cited Researcher 2019-2024, which recognizes the top 0.1% of scientists in the world by the impact of their publications.
Speaker 2:In 2023, he was awarded the Royal Society's Michael Faraday Prize, which is awarded annually to the scientist or engineer whose expertise in communicating scientific ideas in lay terms is exemplary. His 2021 book Being You: A New Science of Consciousness was a Sunday Times top 10 bestseller and was Economist, Guardian and FT Science Book of the Year. Anil edited and co-authored the bestselling book 30 Second Brain and also writes the blog NeuroBanter.
Speaker 2:We talk about: how to define consciousness; what it feels like to be a bat; are we at the mercy of our brain chemistry?
Speaker 1:The concept of interoception; the white and gold or the blue and black dress; how we predict ourselves into existence; does consciousness need a body?
Speaker 2:Let's get our neurons firing.
Speaker 1:Hi, this is Omid Ashtari.
Speaker 2:And Natascha McElhone, and with us today we have...
Speaker 3:I'm Anil Seth. I'm a professor of neuroscience at the University of Sussex.
Speaker 2:Welcome. Thank you so much for talking to us.
Speaker 1:Yeah, it's a pleasure, Anil. I loved your book and I'm quite excited about having this conversation with you. We figured that when we want to talk about your book, we need to start by doing the boring thing and talking about the definition of this really weird word that we call consciousness. So why don't you take us away?
Speaker 3:It is a weird word, isn't it? But I think you're right; otherwise we end up talking about completely different things. So consciousness, the way I define it, is really in two ways. The first is: it's what goes away under very deep sleep or, even more so, under something like general anesthesia. General anesthesia turns you into an object, and then, when it wears off, you turn back into a person. And it's kind of unlike normal sleep, because in general anesthesia, once it kicks in, you're gone, and when you come back no time seems to have passed, even though time actually has passed. So you're really not there, in a much more profound way than you're not there during sleep. So that's one definition.
Speaker 3:The other definition, I kind of like the one from a philosopher called Thomas Nagel, who, 50 years ago now, put it like this: for a conscious organism, there is something it is like to be that organism. It feels like something to be me, it feels like something to be either of you, but it doesn't feel like anything to be a table or a chair. These are just objects; there's no inner life for any of these objects. So consciousness, this way, is something we're totally familiar with. It's any kind of experience: the redness of red, the paininess of pain, the thought that goes through our mind, the memory, anything that has a sort of inner aspect to it. That's part of consciousness.
Speaker 1:Right, makes sense. And I think the things that you're describing there have an implicit delineation built into them. So, for instance, you made a distinction between living and non-living things. There may also be a distinction between intelligent and non-intelligent things, and we always bring these words together in the same conversation: there's life, there's consciousness, there's intelligence. Maybe we can draw some lines here between them, to again make the landscape a little bit more traversable.
Speaker 3:Yeah, I think that's great. I think it's important to recognize that sometimes we bundle all these things together because they come together in us as humans. That doesn't mean they come together in general. We're all alive, we're all conscious, unless we're under anesthesia, and we think we're intelligent too. We are, in some kind of species-specific way; we're cognitively sophisticated in various ways. So we sometimes put these together, but they're not the same thing.
Speaker 3:There may be plenty of other animals who are alive but may not be particularly intelligent, at least by our questionable human standards, and then it's an open question whether everything that is alive is also conscious. What about single-celled bacteria? What about a slime mold? What about a honeybee or an ant? So I think that's an open question. And of course there may be other things, and we'll probably come to this later, but we're increasingly influenced and surrounded by artificial intelligence now. So here we have a possible category of things that are intelligent in some way, if not in a specifically human way, but are definitely not alive.
Speaker 3:And, you know, are they conscious? Well, there are a lot of conflicting opinions about that. So if we are trying to understand consciousness, it's absolutely critical not to confound ourselves and conflate the idea of consciousness with these other ideas, to mix it up with intelligence or language or life, or with an explicit sense of self: when I'm conscious, I experience being Anil Seth, with my memories and personal history and plans. But it may be perfectly possible to be conscious without that kind of very elaborate sense of self too. So it's just wise not to take the human case as the only case.
Speaker 2:I remember one of my sons, when he was little. I asked him what he thought consciousness was and he said: you have to have blood. Which I thought was as good a definition as any.
Speaker 3:I like that. Yeah, blood is necessary, but is it sufficient? That's what you should have followed up with.
Speaker 2:So, on that subject, and this is going to sound very floaty and it's not meant to, I mean it as a serious question: how much have you looked into animal consciousness?
Speaker 3:I mean, in my group at Sussex we don't work with animals directly, so all my understanding about animal consciousness is secondhand, from talking to people who do, from reading the papers, from visiting the labs too. But I'm deeply and fundamentally interested in this question, not only the question of which other animals are conscious, but what their conscious world might be like.
Speaker 2:AI may give us an insight into that at some point, right? When we're able to maybe categorize smells, or...
Speaker 3:I mean, maybe there is one way in which AI might transform our understanding of what it is like to be a bat, that famous title of Thomas Nagel's paper, where his definition comes from, and that's the use of AI for interspecies communication.
Speaker 3:And this to me is a very exciting new frontier in applied machine learning, and it's one of those things that, actually, the more you look into it, the more plausible it becomes. But there's going to be this huge problem of interpretation. With language in general, I mean, Wittgenstein pointed this out long, long ago: if I'm holding a mug in front of me and I say 'mug', how do you know that I'm referring to the same thing? I might be referring to this particular shape, or I might be referring just to a part of the mug, or something like that. So there's this kind of background shared understanding that allows us to translate between human languages. So is that going to be there for human-to-whale interactions, let's say?
Speaker 3:Yeah, it's kind of not clear.
Speaker 1:That already goes wrong when it comes to human to human, let alone from human to whale.
Speaker 2:But I suppose what it's trying to get at... It's not so much communication between us and animals that I was interested in, it's more the idea that if we would at some point have an understanding of the range that we don't have at the moment, that would give us an insight into what it was to be conscious as a dog, I suppose. I mean, like a kind of VR headset that you can put on.
Speaker 3:With the little sprays of things.
Speaker 3:Actually, to go back to Thomas Nagel's paper again: one of the points he was making in it was posed by the question, what is it like to be a bat? Firstly, to suggest that, well, there will be something it is like for the bat to be a bat, but that fundamentally we humans can never really know what it's like. Only a bat can have the experience of being a bat. Bats famously have echolocation, which seems quite different from any of the ways of perceiving the world that we have. It's not the same as sight; it's kind of a mixture of sight, hearing and touch in some way.
Speaker 2:But it's experiential, then, fundamentally. Is that what you're saying?
Speaker 3:Well, that's the assumption, if bats are conscious, and I've no reason to think they're not. Bats are mammals, they have the same kind of basic brain architecture that we do, and so to deny conscious experiences to bats would, I think, be a little bit unprincipled. You don't know, but the inference to the best explanation, the best guess in this case, I think, is that bats have experiences. And then the question is: what are they like, and what presumably makes up their experiential world? There's a beautiful word for this from the ethologist Jakob von Uexküll. He called it the umwelt of an animal: the experiential world from that animal's perspective.
Speaker 3:So for a dog, the idea is indeed that it would be replete with smells of all different kinds, which may be much more spatially localized than smells are for us, and for bats it would be defined to some extent by echolocation. And so we can kind of get some handle on it, I think, by analogy, by understanding the mechanisms better, but it's not the kind of embodied, first-person, immediate knowledge of what it would be like for the bat itself to be a bat. We can imagine what it's like for a human being to imagine being a bat; that's a very different thing.
Speaker 1:Yeah, I think what's really core here in what you're saying, which I very much relate to, is that, by the way, when I go on holiday and I see a person, I wonder: what does it feel like to grow up here? I can never really grasp that, even when the hardware and the wetware are the same. But if the hardware and the wetware are fundamentally different, then the perception of the world is very different, and I think that's where we get to something really interesting.
A lot of the reality of our consciousness is totally bound up with what this perception is. That is, the real world does not have color, the real world does not have sound. It's just that our hardware, when it encounters this electromagnetic wave, creates red, right? But what exists is the electromagnetic wave, not the color red, and the bat may perceive it as something totally different. Therefore we're continuously constructing, as you call it, a controlled hallucination of what is going on around us, which is somehow trying to triangulate what this world really is, just so that it maximizes survival, or whatever it is that we're maximizing for. It would be really good at this point to talk a little bit about how we construct reality and how that influences our inner states.
Speaker 3:Absolutely. And, by the way, on the point you mentioned: it's not only that we have different experiences from other species, you're absolutely right that we have different experiences one person to another too.
Even if we share the same objective reality and have more or less the same wetware inside our skulls, we all differ on the outside a little bit, in skin color, body shape and so on, and we will have slightly different brains. So we will all experience the world in somewhat different ways. And I think we really are likely to underestimate this difference. And that's partly because of the second thing you said, which is that when we open our eyes and look around, it doesn't seem that our experience of the world has anything to do with what's inside our heads. It just seems that it's there in all its subjective, multicolor, multisensory glory. Things are red or things are green, or whatever they might be, and it just seems as though we see things exactly as they are. But the novelist Anaïs Nin, I think, put it best when she said: we don't see things as they are, we see them as we are. And we're all slightly different.
So I think there's this really interesting thing, I call it perceptual diversity, that might really be an enriching way to contemplate humanity at large. And one of our more recent projects is exactly about mapping out this perceptual diversity and understanding how different our unique experiences of the world and the self really are. But it all comes down to, indeed, this idea of how perception works in the first place: a basic understanding of what's going on under the hood when we experience anything is the foundation for trying to answer any of these questions we're talking about, whether it's how a bat experiences the world or how you or I do. And the basic idea is simple, but it's kind of massively counterintuitive at the same time.
So the common-sense view of how perception works might go something like this: you open your eyes, information comes into the brain through the senses, and the brain kind of reads out this information in this outside-in direction and gradually builds up a picture of the way the world is. But I don't think that's what's going on. Building on well over 100 years of other people's work, the idea I prefer runs something like this: the brain is locked inside the bony cavern of the skull. It has no direct access to things out there in the world. All it gets are electrical signals which, as you said, are only indirectly related to stuff that's out there. There's no red out there, there's just electromagnetic radiation.
Speaker 3:And so the brain is always in the business of trying to make sense of this fundamentally unlabeled and quite noisy and ambiguous sensory information.
Speaker 3:And the idea is the way it does this is by continually making predictions about what's out there in the world or in the body and then using the sensory signals not to sort of read out what the world is, but to update and calibrate these predictions to keep them tied to the world. And, in this view, what we experience isn't a readout of sensation, it's the top-down, inside-out active best guess that's controlled by sensory signals, which is why I like the term controlled hallucination by sensory signals, which is why I like the term controlled hallucination. Now, hallucination because we typically think of hallucinations as experiences coming from within, but controlled because they are controlled by the world and the body through sensory signals. So they're useful. It doesn't mean the world doesn't exist or our perception is just completely made up. No, not at all. Quite the opposite. The way we experience the world has been tuned by evolution, by development, by day-to-day life, so that we perceive the world, by and large, as it's useful for us to do so.
Speaker 1:And so you also introduced the concept of interoception as part of this, which is perception from within, which I find really fascinating. Because I was having this conversation with Natascha: when you haven't slept enough, you're cranky. You make really different decisions than when you have had enough sleep, and you're mostly not aware of that, or many people are not aware of that. And that has something to do with your inner state essentially sending certain signals to your brain that totally change your behavior. So there is that level as well that's influencing our perception.
Speaker 3:That's right, and I think that level is actually so much more important than we realize. If you ask most people what they experience, what they perceive, we'll often hear sight, sound, smell, taste, touch: the classic five senses. But if you think about what brains are fundamentally in the business of doing, it's keeping the body alive. The first duty of any brain is to keep the body, and therefore itself, going, and the body has all these organs doing various things: blood circulating around the body, the heart beating, all of these kinds of things. The brain has no direct access to the interior of the body either. Again, it's locked up there in the skull. It's the same game: it has to infer what's going on inside the body on the basis of sensory signals that come from within the body and its predictions about what's happening.
Speaker 3:So, broadly, interoception is this whole category of perception that tells the brain about things like how the heart is doing, blood pressure and so on. And the claim that I have about this is that when the brain is making these kinds of predictions about the body from the inside, what we experience is something like an emotion or a mood. That's the equivalent of the brain, or rather of us, seeing a cup on a table when the brain's making a prediction about some visual input. And they're totally overlapping too. So you're right to point out that the way the body is on the inside can very much affect how we perceive other things around us on the outside. They're not compartmentalized. I mean, I think there's this longstanding error, Antonio Damasio called it Descartes' error: the idea that our rational mind floats freely of our bodily concerns, or should do, when firstly it just doesn't, and secondly it wouldn't even work that way; it shouldn't work that way. To make good decisions we need the input from our bodies.
Speaker 2:So it feels like there's a messaging system in addition. I mean, obviously there are so many operating systems going on, but there's a messaging system as well as perception, as well as decision-making.
Speaker 3:And attention as well.
Speaker 2:Yeah, I was going to say attention or sort of prediction that you talk a lot about. I know we're going to get on to that. In fact, we probably interrupted you. You're probably about to talk about predictive models and things. Sorry.
Speaker 3:No, but I think we already are talking about that, because that's the idea: that's the fundamental mechanism under the hood that's involved in all of these things, whether it's the emotions that we feel when the brain is making predictions about the state of the body on the inside, whether it's the sight of a coffee cup on the table or the experience of intending to do something, the experience of free will. The basic idea, certainly the idea I explore in the book, is that these are all kinds of brain-based predictions, and the brain has this deeply embedded operating principle that it can generate predictions about what it's getting. So if you step outside for a second, the problem the brain is trying to solve is to infer the most likely state of the world and the body given some uncertain data. In mathematics and statistics this is a problem known as Bayesian inference, and it's really hard to solve.
Speaker 3:So it looks like evolution has hit on a trick to approximate it, and evolution will always get away with a good enough approximation; that's one of the beautiful things about evolution. And this approximation is precisely making predictions about what sensory data it's going to get and then updating these predictions on the basis of the sensory data that it does get. And you can show, and I think it's a beautiful result actually, that if the brain just follows the simple rule of updating its predictions so as to minimize the so-called prediction error, the difference between what it gets and what it expects, then it will approximate this Bayesian inference. It will be approximately doing the best thing it could possibly do, but it's doing it in a way that you can actually understand something like a brain as being capable of doing. It's a great way for a biological mechanism, or indeed an AI, to solve a problem like that.
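To make that last point concrete, here is a minimal sketch of the prediction-error idea in the simplest possible Gaussian setting; the numbers, variable names and learning rate are illustrative choices for this toy example, not anything from the episode. A single best guess that is repeatedly nudged to shrink precision-weighted prediction errors ends up at exactly the value that textbook Bayesian inference prescribes.

```python
# Toy sketch (illustrative only): for a Gaussian prior and Gaussian sensory noise,
# nudging a "best guess" so as to reduce precision-weighted prediction errors
# converges on the exact Bayesian posterior mean.

prior_mean, prior_var = 0.0, 1.0   # what is expected before any input arrives
sensory_var = 0.5                  # assumed noise on the incoming sensory signal
observation = 2.0                  # the sensory sample actually received

# Exact Bayesian answer for this simple case: a precision-weighted average
posterior_mean = (prior_mean / prior_var + observation / sensory_var) / (
    1.0 / prior_var + 1.0 / sensory_var
)

# Prediction-error scheme: small repeated updates that shrink both error terms
mu = prior_mean                    # current best guess about the hidden cause
step = 0.05                        # size of each nudge
for _ in range(2000):
    sensory_error = (observation - mu) / sensory_var  # what was received vs predicted
    prior_error = (mu - prior_mean) / prior_var       # how far the guess drifted from the prior
    mu += step * (sensory_error - prior_error)        # nudge the guess to reduce both

print(f"exact Bayesian posterior mean: {posterior_mean:.4f}")
print(f"prediction-error estimate:     {mu:.4f}")     # the two agree closely
```

Richer predictive-processing models repeat this same predict-compare-nudge loop across many variables and levels at once, but the core move is the one shown here.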
Speaker 1:Yeah, and we actually had a conversation before we came on to talk to you, and Natascha was pointing out, which I thought was an interesting way of thinking, that there are 8 billion people and somehow this Bayesian inference seems to work very similarly across all of them. It's weird that the conclusions we draw from all this perceptual information fall within such a tight variance. Still, you were pointing out there's probably more diversity than I assumed, and Natascha assumed, before we got on this call, but it's maybe worth talking a little bit about that.
Speaker 3:I mean, firstly, you're right, it's an empirical question. We don't know how similar it is between people in general. I mean, every time you look you find actually there is difference. I don't know if you remember 10 years ago there was this photo of the dress.
Speaker 1:Yes, the blue dress or the gold dress. Exactly, exactly.
Speaker 2:And that never surprised me. To me, what's surprising is that that isn't the case all the time.
Speaker 3:But maybe it is. How would you know if it is?
Speaker 2:Well, I think it depends, if you tune in and listen. And kids are always a big window into that stuff, maybe before all the kind of priors, or whatever you want to call them, are set; you do notice that there's a curiosity and an urge to investigate. I mean, just that little thing about consciousness: well, it has to be blood. Where did that come from? Well, things that move bleed when I stamp on them. I mean, I'm not suggesting...
Speaker 3:You know, he's actually a Jainist. How are you bringing up this child?
Speaker 1:Interesting, interesting, exactly.
Speaker 2:But these assumptions get made with limited data, right, and then they get built upon, and then they get disproven, or in fact the opposite, they get confirmed, and then our bias is set, and on and on and on we go, until someone comes along and smashes it, or a life event comes along and smashes it. Yeah, so the blue dress, golden dress. Or was it gold?
Speaker 3:Yeah, it was white and gold, or blue and black. And I think your intuition, Tash, is kind of along the right lines, because something like this probably is happening quite often. Again, I think we really underappreciate how different individual experiences might be, but the dress hit a particular sweet spot where the difference was so big that people started using different words, and once they start using different words, it becomes very apparent that people are having different experiences.
Speaker 2:And what was striking to me was how hard it was for anyone who saw it one way to accept another way of seeing it. Is that because you think there's less variety in the visual cortex, in what we receive or what our brains do with that, as opposed to, let's say, taste? Because we know we all taste things incredibly differently.
Speaker 3:That's right, that's right, of course. And you know, we go to an art gallery and we're going to have very different experiences of seeing anything.
Speaker 2:But the taste thing feels more primal in some way, because it should be more objective. And yeah, I don't know.
Speaker 3:It's a super good question. It's actually really much harder to study; how different taste experiences are is not something I have any expertise in, but it's much harder to study than vision, where you can really control what people are looking at and measure it much more directly. But yeah, go on.
Speaker 2:To go back to your point, sorry, about the five senses: I thought this was really interesting in your book, when you asked who ever decided there were only five senses, and then you expanded on that.
Speaker 3:Yeah, it's one of those things we've inherited, I think, right, this idea that we have five senses. And when I'm trying to label them, I usually end up missing one out, I'm not sure why, but there's sight, there's sound, there's taste, there's touch and there's smell.
Speaker 1:Right, yes, you've got smell. Yes, you got five. Well done, you got them all.
Speaker 3:Thank you. I think probably there'll be something back in Greek literature or something like that which explains why this was settled on. I mean, they are the five main ways we have of sensing the external world, but even then it's a bit confusing. Taste, we were just talking about taste; it's not quite an external sensation, is it? You taste something when you put something in your mouth, but it feels internal too, and to some extent smell can be like that.
So it's only really sight and sound that are truly sensing the world at a distance. And then there are a whole bunch of other senses that not very much reflection will reveal we of course have. If I close my eyes, I know where my hand is without seeing it, without touching it with another hand; I can touch my nose with my finger. We can all do this. We have a sense of where our bodies are in space and how they're moving, a direct perception of that: that's called proprioception for body position and kinesthesia for body movement. We have vestibular sensation; we kind of know which way is up. And it's the conflict between vision and our vestibular sense that makes us carsick when we're scrolling through our email in a car.
Speaker 2:So there are all these senses, the sense of orientation, for instance. But that stuff is learnt, isn't it? I mean, that's not innate. Babies famously bite themselves or put their feet in their mouths or find the edges of their bodies, right? They spend a long time rolling from side to side, and all of those sorts of tests that they do.
Speaker 3:But probably all senses are learned to a particular degree. One of the so-called fathers of psychology, William James, back in the late 19th century, famously hypothesized, because babies can't tell you exactly what they're experiencing, that the experiential world of a baby, the baby's umwelt, would be this blooming, buzzing confusion, where even modalities like vision and sound wouldn't have the separation that they seem to have for us.
And yes, some ways of perceiving the world come online or mature more fully than others, but any way of sensing the world needs some period of development or learning, and I think that's quite an appealing idea; I think there's some evidence for it. And also, our senses don't ever get completely separate. That's, I think, one of the other myths: we can be encouraged to assume that we have, like, a vision module in the brain that's connected to the eyes, that does vision and that's all it does, and then we have a hearing module and a smell module and so on and so on. But there's a lot of mixing between the senses, and there are people, actually quite a large number of people, with synesthesia, which is when you get a so-called mixing of the senses.
So letters might have colors, shapes might have distinctive tastes, and we're all familiar with this in some way. I mean, this is why, in language, we have metaphor. We already talk of certain tastes as being sharp, or a certain sound as being sharp, and if you have a bright light and a dim light, we'll put the bright light higher than the dim light. There are associations that we all make, and we make them really quite naturally, but I think they reveal that our way of encountering the world is this really intricate weave of separation into different senses, but also integration. And that integration, I think, is much more pervasive than we often give credit for.
Speaker 1:Yeah, so at the core, and I love that you say this in the book as well, we predict ourselves into existence. What we're talking about is very much that the brain in the skull, in complete darkness, is taking all these stimuli, trying to make sense of the world and build a world model, and it needs to predict to maximize survival. What happens if you don't have a brain, and why did this brain come about to begin with? If we want to go back to the question of whether a grasshopper is really conscious, we need to understand how much brain you need to be conscious. Does it really feel like something to be a grasshopper? It seems to me it might, because they seem to do stuff, right. But where is that line? And let's talk a little bit about the kind of evolutionary reason why this brain came about. Did it give us some advantages?
Speaker 3:Sure. I think it's inconceivable that the brain doesn't give very significant advantages, and this is partly because it's so expensive. I forget what the figure is, but I think it's maybe 20 or 30%.
Speaker 3:Something in that order of your energy budget is taken up by the brain, and of course evolution just would not stand for that unless brains, and large brains too in our case, were extremely worth having. But we see brains of some sort in many animals. Then you get to creatures like jellyfish and so on: they don't seem to have a centralized brain, but they still have neurons, they still have brain cells, connected in some kind of loose mesh-like arrangement. Then you have octopuses, who have a central brain but actually more neurons in their arms than in their central brain.
Speaker 3:So there are just so many interesting questions here. Why did brains evolve at all is one question. And another question is why did consciousness evolve? If we assume, and again many philosophers might take issue with this, but if we assume that consciousness is a property of brains, and only of some brains, then at some point consciousness also evolved. And so what's its function? What does it allow us to do that we couldn't do if we just had, I don't know, complex brains, but without anything it is like to be you or me? And no one knows the answers to these questions, right? Because looking back in time is hard in biology, and it's really hard when we're looking at something like the brain, because it doesn't fossilize, and also, obviously, things like thoughts and experiences leave no fossil trace. So it's always a bit of speculation about the deep history of ourselves.
Speaker 2:We lost the thread, which was that on some level you thought, evolutionarily, it was more cost-efficient or energy-efficient to use predictive models.
Speaker 3:Yeah, no, that's right. So brains are expensive to run; there's a huge metabolic, energetic budget that they impose, and so there's going to be a lot of pressure to make sure that whatever brains do, they do it maximally efficiently. And one of the other benefits of this whole view of the brain as a prediction machine is that it's quite an efficient way to do things, because, effectively, the brain is trying to anticipate what sensory information it will get, and if it learns to do that better and better, then it needs to update its best guess less and less. So it's quite a computationally, although I hesitate to use that word because I don't think the brain is a computer, quite an energetically efficient way for brains to do things.
Speaker 3:Having said that, brains are just remarkably efficient. Even though they use a lot of our energy budget, it's not a lot of energy compared to the other things we have around us in our houses. It's just amazing how energy-hungry computers are, especially the AI systems that we have now, compared to the human brain. And to me this suggests that there are many other tricks that we've yet to learn from the biology of the brain, which AI and engineering in general haven't got hold of yet, because if they had, AI would be so much more energy efficient than it currently is.
Speaker 2:Why do you think that we can't decide how to use our brains?
Speaker 3:Well, we can to some extent. But anyway, who is the 'we' who would be making this decision? It's not separate from your brain, surely? So we have cognitive control, right? If there's something tempting in the kitchen, you know, I have to exert a lot of cognitive effort in order not to eat a whole bag of it before I've made dinner. Sometimes I succeed, sometimes I don't. We can exert cognitive control, but I think this whole idea that there's a sort of separate self that should be able to go in and control the brain...
Speaker 2:Yeah, but that's impulse control, or whatever. You talked in your book about that inattentional blindness thing, or you used examples of, you know, looking for a set of keys on a messy desk, etc. You are being intentional, you have made a decision that you want to find your keys, but, for whatever reason, your brain isn't receiving the information that's right in front of it. So we're not in control of how we're using our brains. And why is it that certain thoughts will... I mean, if we're conscious, and obviously this happens in an extreme way when you're asleep, and I guess that's your subconscious, but when you're conscious, your conscious mind will have this little film that's playing across your mind's eye, if you like, even though you're focused on something else, that keeps interrupting or keeps coming up, and sometimes you can repress it, sometimes you can't.
Speaker 2:We talk about focus. I mean, that word is overused, but what is it really? Is it actually advantageous to have lots and lots of different thoughts all at once? I just feel that a lot of the time we really don't have control over how we use our brains. They present to us, and we don't really necessarily select what is being presented.
Speaker 3:I'd still resist putting it in those terms. I don't think the brain presents stuff to us, as if we were standing separate from the things that are being presented. We are part of the presentation. If the experience of the world is a movie, then we are not watching the movie, we're part of the movie, and part of that movie is the experience of having control or not having control.
Speaker 3:But I don't want to pretend there are no interesting distinctions here at all; there are some interesting ones. So you're quite right that some aspects of what our brains do are not open to voluntary control. I can't look at a mug and decide to see it as a completely different color from the color I currently see it in. Maybe I can if someone's hypnotizing me or something, but in general I can't. So there are things that I experience that are not open to being changed by other things that my brain is doing. But other things are open to change. I can decide to focus on a particular thought, or pay attention to a particular thing in the world, or move my arm. Usually, not always. Some of these things are open to control, but maybe some of the time and not all of the time.
Speaker 2:But what about the relationship between your brain and your organs that you were referring to earlier on? We're not necessarily aware of how our organs are functioning until we maybe do a biology class; our brain isn't giving us data on them. Okay, you said it could be expressed through emotion, through mood, through feeling unwell, through a spike in temperature, or these other things, I guess, is how it comes through. But we're not able to sort of program in and say: I don't want to feel anxious.
Speaker 1:Give me my vitamins right now.
Speaker 3:I think the answer is yes and no. So there are things about the way our brains are wired up that are very difficult to change, whether in the moment or over the long term. But the brain has a surprisingly high degree of changeability, of what people in the game call plasticity. There's changeability, there's potential for change there. And so, when it comes to something like an emotion, you may not be able to change the fact that if your heart is beating fast, then some signals will come up to the brain signaling that, and that will typically lead to some heightened state of physiological arousal. But then does that need to develop into the experience of, let's say, high anxiety? Not necessarily.
Speaker 3:Here I think there is some flexibility, and a lot of mindfulness training for things like anxiety, meditative training, is all about trying to get into that gap. So you still experience what's happening in your body, but now you frame it in a different way, and you don't assume that it's the way things are and then allow that experience of anxiety to put you into a vicious circle.
Speaker 2:I guess I'm thinking about more invisible things, like, let's say, cell activity that we can't perceive if we are developing cancer, often until it's too late. Let's say that certain things are getting duplicated in one part of our body and we're totally unaware of it. It's that kind of thing.
Speaker 3:Yeah, okay, then you're quite right. So there'll be things, even in vision there are things, that will affect our eyes that we will never be conscious of. Psychologists spend a long time studying so-called unconscious perception and arguing about where the boundary is, and, of course, we're never aware of the process by which our brains reach these perceptual best guesses. Even Helmholtz, back in the day, called his theory a theory of unconscious inference, to emphasize that what's going on is going on under the hood and we're only aware of the outcome. No one has ever directly experienced, or very few people have, unless you've had a brain injury, the fact that you have a brain.
Speaker 3:I mean, the brain is perhaps the most opaque of all our organs to our everyday experience, precisely because it is the organ of experience. Its job is to create experiences of other things, and not of itself. There are no pain receptors in the brain, right; you can do brain surgery without anesthesia, as often happens, to make sure you don't damage anything critical. And when it comes to this point about the body, yeah, for sure, there'll be things that happen in our bodies that we don't perceive. And you can ask why, and the usual answer would be that, well, it may be possible in principle for brains to perceive something like the early stages of cancer, but, frankly, there was no selection pressure for that. The things our brains have evolved to perceive are things that mattered to our ancestors over hundreds of thousands of years, when nobody lived beyond the age of 25 or something anyway, so they weren't getting cancer.
Speaker 3:I mean, we can't perceive x-rays, even though x-rays are dangerous for us, because there was not strong enough selection pressure to push brains to spend the resources necessary for detecting things like that.
Speaker 1:Yeah, I think by continuing to triangulate it, we get to a sense that brains, evolutionarily, made sense to develop because they're good at regulating, and there's a predictive element to them that allows for maximal survival, etc. There may also be something to be said about the neocortex and simulating potential futures, which is cheaper than actually doing those things, right? Imagining something instead of burning calories to actually do it is very efficient, and you might also stay alive longer. Exactly, like this whole idea of dreams being some kind of threat simulator.
Speaker 3:So, it's better to die in your dreams than to die in the real world.
Speaker 1:Exactly. But why do you need consciousness? Why did consciousness come about?
Speaker 3:You know, this is the billion-dollar question, isn't it? Because for a lot of these things, you can imagine building some kind of robot system that could do all this stuff and not have any conscious experience whatsoever. Just a complicated system of pulleys and motors and cameras and so on. So what is it about consciousness?
Speaker 3:And so here we do enter the realm of speculation a bit. My feeling about this is that, if we look at what every conscious experience brings to the table, no matter what it is, it's not just one thing. If I say I'm looking at this cup, I'm not only having the experience of the cup: there's the sound of things going on around me, there's a feeling of my emotional state, the chair behind me, my plans for the future, and so on. Every conscious experience brings together a huge amount of information that is relevant for my survival prospects, in a way that immediately reveals its relevance for what happens next and what I might do. I experience the mug partly as something I can pick up and drink from.
Speaker 3:It has affordances, as Gibson would say. So I think that's a pretty unique thing. It's not just these divided sensory channels; every experience has this. It's a very efficient format for an organism that has a certain level of complexity, a certain number of degrees of freedom. The mug in front of me, again: I can drink from it, I can throw it across the room, I can ignore it. There are many things I could do. I could assemble a collection of mugs over months. The time horizons can be different too.
Speaker 3:Consciousness allows this great flexibility of response to an environment where there's a lot of information from different senses, different modalities, put together, in a way that's very much centered on the survival prospects of the organism. Every experience that we have feels good or bad in some way or another; it's all valenced in that sense. Now, there may be ways of building systems that do something like this without being conscious. You can imagine some sophisticated robot doing it, but I think biology has hit on conscious brains as its solution.
Speaker 1:Yeah, so essentially, and I don't know if this is the right language, I have a model of the world, but I also have a model of what it feels like to be in that world and what certain world states would make me feel like. And as I'm doing that, I am conscious of what that could be, and of seeing myself in that world, right? And that might not be...
Speaker 3:You know, it's not that these are conscious thoughts, right? That is just the nature of experience. Like, if you cross the road and there are cars, you know, I was in India a couple of weeks ago, and I feel this huge sense of being in imminent danger of dying whenever I cross the road in Delhi. And I don't have to think that; it's just there in the nature of the experience itself.
Speaker 1:Yes, exactly. I mean, that sounded very rational and logical, but it's more an experiential thing. So, in a way, though, I feel that there's a lot of computation happening here, even though you don't like to use the word, right? There's a lot of activity happening in this brain. But do you feel that a body is required? Do you feel that emotions are part of the package? You know where I'm going with this question, obviously, right?
Speaker 3:Where are you going with this question?
Speaker 1:Is it substrate-independent, what we're experiencing as consciousness?
Speaker 3:Right, right. So, I've got nothing against computers. I think computers are brilliant, I have several myself, and I like them very much. And I also think it makes sense to describe the brain using the language of computation. That language, that metaphor, has been incredibly powerful since the 1950s. It's developed our understanding of the brain massively, and it's also powered a huge amount of technology. All the modern AI stuff we have is based on neural networks, which is the view of the brain you get if you treat it as some kind of computer. But I think we must always bear in mind that metaphors are in the end just metaphors, and if we confuse the metaphor with the thing itself, then we're likely to get into trouble.
Speaker 3:And the symptoms of that kind of confusion are when people say things like: well, of course it's a computer, what else could it be? Of course it's just computation, information processing, what else could it be? The metaphor has become so deeply embedded that it doesn't even appear as a metaphor anymore, but it is one. Computers are wonderful, but they're kind of specific kinds of objects. And one of the things that is pretty definitional about computers as we have them is, you mentioned it, this idea of substrate dependence or independence. Computers are so useful because the same program that's running on my Mac will run on yours, or will run on a different computer; it will even run on a Windows machine or something, and I can run many different programs on the same computer. There's an independence of the software from the hardware, and in the brain that doesn't make quite so much sense. If you just look at a brain, there is no clean separation between its mindware and its wetware; it's intertwined at many levels of description. Even single neurons are very complex and intricate biological systems that are preserving their own integrity too, not just the integrity of the organism.
Speaker 2:But even if you take the example that we were talking about earlier on, so much of the real estate is spent fixing and monitoring the rest of this biological substrate, which obviously a computer doesn't do.
Speaker 2:It has an objective function and it fulfills that, and it doesn't get distracted. I know there's talk about hallucinations at the moment with AI, but essentially it doesn't have the same interruptions, if you like, which is really, I suppose, where my question about how much agency we have over our brains comes from. Yes, of course we can control the way we respond to things, and we can exercise our brains and change them, and they're plastic, et cetera. But there's also all this stuff going on that we just don't yet understand and we're not aware of, and yet we're entirely dependent upon it. And intuitively, I always think the predictive idea seems like such a waste of energy, until I really think about it on the sort of third tier, if you like, of the predictive model. You just think: why would you spend all that time trying to imagine X, Y, Z if it's never going to happen?
Speaker 1:So Anil is actually suggesting, and maybe I'm wrong, but you can correct me, that everything under the hood that you just described is exactly what is required to make something conscious: being intertwined with that substrate, the brain doing the work of the body, and being confused in some shape or form, sometimes here and there, is required to create this conscious state.
Speaker 3:That's my hypothesis. I think it's likely. It's certainly not yet demonstrated; it's even difficult to think how you might demonstrate it. But I think it's likely that being conscious depends on being alive, depends on being a living organism, with life being necessary but not sufficient for consciousness. If this is true, then it means that AI is not conscious, even though it might give a good, convincing appearance of being so, because it can speak to us and it can make stuff up and it's trained on all of human language. So if you ask it a question about consciousness, it will give you a good answer. But if I'm on the right track here, then AI will no more be conscious than a computer model of the weather whirring away in the Met Office somewhere will actually get wet or windy. It will only ever be a model or a simulation, a mimic of that process, just abstract information manipulation with nothing that breathes the fire into the equations. And, you know, is this right? Well, it's really, really hard to say.
Speaker 3:And it's pretty much against the mainstream view, which is to think of the brain as a computer. If you literally think the brain is a computer, and if you think that everything brains do is a form of computation, then of course a computer is sufficient, whether it's made out of carbon or made out of silicon. So it is pushing against the mainstream, and I think the two ways that push gets its grip are, first, to recognize the ways in which the computer is still just a metaphor. The more you look into a brain and literally dig into it, and also just ask, well, what does computation mean, what do brains actually do, the more you realize that you're looking at the brain through what one of my colleagues, Johannes Kleiner, beautifully called Turing lenses, after Alan Turing. You know, Turing goggles.
Speaker 3:We see everything through the lens of a Turing machine.
Speaker 3:And on the other hand, I think there are some good reasons for associating consciousness with living systems.
Speaker 3:And this really is about taking this whole idea of the brain as a prediction machine and unspooling this idea or pulling this thread as far as you can.
Speaker 3:And when I do this I get to this point where there's a recognition that, fundamentally, the mechanisms that underpin, let's say, a visual experience of a cup on a table, which you can simulate in a computer, in a brain go right down into this fundamental imperative to stay alive, even for single cells, to maintain their own identity, integrity, autonomy and so on. So there might be this necessary connection between life and consciousness here, which, at the level of our everyday experience, makes a little bit of sense, because it does seem to me that all of our experiences have, at their very most fundamental level, this very hard to describe, inchoate, basal experience of just being alive. The feeling of being alive seems to me to underpin pretty much every other experience. Now, I might be totally wrong about this, but I can be wrong about that and still be right about the idea that the brain isn't a computer. So this is why I tend to separate these two things. You can choose which part of this story most appeals.
Speaker 1:No, I think that makes a lot of sense. And I know you're aware of this, but we're very anthropocentric in our, I would say, arrogance, saying we're so special by being the only things that are conscious, and also anthropomorphic at the same time, where we see an LLM and we feel like it's conscious, even though we don't really give two hoots about the chicken that I just ate for dinner or something like that, right, while I feel like that's probably a lot more conscious than that LLM.
But you know, it is important to find where the line is, to avoid any potential suffering or being too anthropocentric to begin with. It seems clear to me that current LLMs don't really have a world model; they don't have a model of themselves in the world, so obviously they cannot be conscious. But it seems to me that there is a way you could have something, maybe even in a virtual world, right, that has stakes, that understands stakes, that understands what it is to be embodied in this virtual world and to roam around and to learn and to figure things out. That could maybe get quite close, and obviously this is a philosophical question, but should we then turn off the program? Black Mirror season seven is coming out soon; I feel like there are some really good episodes that try to go into that subject.
Speaker 2:It just needs to have blood. It needs to have blood.
Speaker 1:No, you're absolutely right.
Speaker 3:I mean, I think science fiction has done a much better job of laying out these issues, the ethical issues especially, than has been done in academia in general.
Speaker 2:I feel like they are predictive models, aren't they? This is us humans testing, by writing so many books and making so many... Yeah, exactly.
Speaker 3:But I think with these technologies, like LLMs, I mean, some people would say they're conscious. From my point of view that's a huge stretch, though I can't say for 100% sure. But I think we can easily account for the reasons we think they are in terms of these psychological biases, like anthropomorphism, and that's the more parsimonious explanation for people's feelings that some LLMs are conscious, rather than them actually being conscious. But I might be wrong. And there is, I think, a very important ethical perspective here, which is that it's a really bad idea to try to build conscious AI. Sometimes you hear it described as the Holy Grail, that this is what we should be doing. No, we shouldn't be doing it. If we were to succeed, whether on purpose or by accident, we would have a huge ethical catastrophe unfolding: massive potential for new forms of suffering we might not even recognize.
Speaker 3:But quite what it takes, you know... So on the idea of LLMs: you're right to say that currently they're trained in a very passive way. They're just exposed to lots of data, and it's all disembodied, abstract; it's just word tokens in the various orders they come in in text. But you can imagine training a foundation model, one of the big models that underpins language models, in a robot that's interacting with a world. What would that lead to?
Speaker 3:And one possibility here, which I don't think has received a lot of attention, is that it might be true to say that a system with that kind of language model, even if it was based on silicon in a robot or something, would truly understand in a way that current LLMs don't; they just sort of mimic understanding, perhaps. Because the tokens it is manipulating would be grounded in physical interactions with a body in a world, it could truly understand that 'mug' means mug, and doesn't just mean parts of a mug or this perspective on a mug, back to the Wittgenstein point that we started with. But it might understand without any consciousness at all. And this is, you know, I hadn't thought about this before, but just as we have to be careful of assuming that consciousness and intelligence go together, perhaps even understanding and consciousness might actually come apart.
Speaker 2:That's really cool, yeah, yeah.
Speaker 1:Nice, I like that we made conceptual progress here. Tell us maybe briefly: what are you working on in terms of your science, and what is next?
Speaker 3:I'm doing far too many different things for my own peace of mind at the moment. One is really delving into this question of consciousness, AI and computation, and trying not to give people definitive answers, because I don't think that's possible, but trying to clarify the landscape and trying to make as good a case as I can for, as your son would put it, blood mattering. I think life matters. So that's one thing I'm doing, and then trying to figure out ways to test this. And then in another project we're using computational models.
Speaker 3:I'm still very happy to use all the powerful tools that computers give us. We're using computational models of these kinds of predictive processes in brains to try to understand how different forms of visual experience happen. So we're comparing different kinds of hallucinations, different kinds of visual experiences, different kinds of visual perception, and trying to really map them onto different predictive mechanisms in the brain, to really ground the story of perception working in this way. We're also looking at individual differences a lot. So we have this big Perception Census project, where we have about 40,000 people taking part, to try to get a snapshot of how different our inner worlds actually are.
Speaker 2:Oh yeah.
Speaker 1:Does it still make sense to send people there?
Speaker 2:It doesn't at the moment.
Speaker 3:These projects take a lot of time, so we will open it up again, but at the moment we're in a sort of analysis stage.
Speaker 1:I did contribute back in the day.
Speaker 3:Oh, thank you. Thank you. And then, you know, we're also working on some new mathematics that allows us to understand and make sense of what people say when they say something emerges from something. Right, often people say consciousness emerges from the brain. Like, what do you mean? And so we want to try and make that into some sensible statistical, mathematical tool, in the same way that if you see the starlings that flock above the West Pier here in Brighton in the winter, when they flock they're doing something.
Speaker 3:There's an emergent flockiness there. So we're trying to do that, because I think neurons might also flock, in some space of brain activity rather than in the sky. And if we can figure out a way of measuring how starlings flock, or seem to flock, then we can generalize that and look at neurons flocking.
Speaker 1:That goes back to the memory potentials we were talking about. Yeah, that's a lot.
Speaker 3:And there's a whole other branch of work where we're using stroboscopic light to give people visual hallucinations.
Speaker 1:Oh, wow, what's going on there? I'm happy to participate.
Speaker 3:Yeah, it's fun to participate in that one.
Speaker 1:Nice, okay, cool. Thanks so much for taking the time. It was a pleasure to talk to you about consciousness and all this interesting science that you're doing. Thank you for writing your book as well. It was amazing, really great. It totally had a massive impact on me, so thanks for that.
Speaker 3:Well, thank you so much for reading it, thanks for the kind words, and thanks for the conversation. I've really enjoyed it too.