
Where Shall We Meet
Explorations of topics about society, culture, arts, technology and science with your hosts Natascha McElhone and Omid Ashtari.
The spirit of this podcast is to interview people from all walks of life on different subjects. Our hope is to talk about ideas, divorced from our identities - listening, learning and maybe meeting somewhere in the middle. The perfect audio diet for shallow polymaths!
Natascha McElhone is an actor and producer.
Omid Ashtari is a tech entrepreneur and angel investor.
On Technology with Sophie Hackford
Questions, suggestions, or feedback? Send us a message!
In this episode we talk to Sophie Hackford about technology. Sophie is a futurist, and has given 220+ provocative talks to boards and exec teams on novel science and tech. Sophie is an advisor to John Deere & Co, on the future of food, climate, and agriculture. Sophie is also an advisor to New Lab in Brooklyn. Sophie co-founded and chaired 1715Labs: a spinout from Oxford University’s Astrophysics Department, labelling data to train algorithms. She previously worked at WIRED Magazine, Singularity University on the NASA Research Park in Silicon Valley, and the Oxford Martin School at Oxford University, where she raised $120m for frontier-bending research.
Our conversation covers:
- the merger of biological and silicon-based systems
- whether technological advancement is actually progress for humanity
- the hidden power of "dark compute"
- interspecies communication
- the power of narratives to inspire and drive positive change
- innovative solutions in environmental monitoring and conservation
If you want to help make science more relevant, representative and connected, consider checking out The British Science Association.
If you want to channel your inner citizen scientist, how about classifying some galaxies on the Galaxy Zoo page?
Web: www.whereshallwemeet.xyz
Twitter: @whrshallwemeet
Instagram: @whrshallwemeet
Hi, this is Omid Ashtari and Natascha McElhone. Welcome to Where Shall We Meet. Today's episode is pretty trippy. We'll take you on a helter-skelter ride into Sophie Hackford's futuristically oriented mind.
Speaker 1:We'll delve into her thesis that the world is a computer, and that the latent capacity of your household devices can be harnessed by AI to run tasks that they were never designed for.
Speaker 2:And that's not just Alexa, but maybe your fridge, microwave and perhaps even the old pregnancy test.
Speaker 1:We also talk about learning from and communicating with other species. We talk chips and the intersection of biological and silicon-based networks.
Speaker 2:And we discuss our responsibility as citizens participating in the adoption of these new technologies.
Speaker 1:A bit about Sophie Hackford. She's a futurist and has given more than 220 provocative talks to boards and exec teams on novel science and tech. Sophie is an advisor to John Deere and Co on the future of food, climate and agriculture. Sophie is also an advisor to New Lab in Brooklyn.
Speaker 2:She co-founded and chaired 1715 Labs, a spin-out from Oxford University's astrophysics department, labelling data to train algorithms. She previously worked at WIRED Magazine, Singularity University on the NASA Research Park in Silicon Valley, and the Oxford Martin School at Oxford University, where she raised $120 million for frontier-bending research.
Speaker 1:There are still some creaks in the recording of this episode, but we're getting there, and while this one is a little tech-heavy, next week we will be talking about legacy and social media.
Speaker 2:You love tech. What are you talking about? It's not tech-heavy, it's fabulous and fantastical. It was like disappearing into a rabbit hole in Alice in Wonderland. It's magical and wild and wonderful. You're going to love it, even the non-techies amongst you. So, without further ado, let's get into Sophie's mind. Hello, my name is Natascha McElhone, and I'm Omid Ashtari, and with us today we have Sophie Hackford, who is a futurist. Hi, Sophie. Hi there.
Speaker 1:I'm quite excited to have you on. We spoke about many topics before we first recorded, but I think one that really struck a nerve is the idea that the world is somehow a computer at large, currently run mostly by natural phenomena. But as we're building more technology and plugging ourselves into it, a symbiosis of technology and biology is happening. You have thoughts on this? We'd love to hear them.
Speaker 3:I do, and a lot of this has been presaged, I guess, by science fiction. Douglas Adams is an obvious one, with The Hitchhiker's Guide to the Galaxy: the Earth is a computer, sitting there calculating, and we're just the computational bits knocking around in it. But I love this idea that we are building a kind of intelligent planet. We're plugging silicon into much more biological processes, both to understand them (that's eavesdropping, I guess) and to think about how we can use some of those biological processes: to build big, leafy supercomputers, or plug ourselves in with fungal architecture, or build buildings made of natural products that might be able to compute on our behalf. I think there's so much we could really explore in all of that.
Speaker 1:Yeah. So maybe let's start with the very basics. Before, say, the Industrial Revolution, we really had no way of continuously tracking any of these phenomena, I would say, right? And so we started building technology that actually monitors the world. How far have we come on that front?
Speaker 3:It's a great question, because we've been sensing the planet for quite a long time. We started listening to whale song, I think, in the 1950s, and it led to an explosion in whale research, but it also helped save the whales. This ability to sense the world helps us to understand what we're perhaps losing. Some of the sensing we've done pretty well at up until now: optics, and the ability to do machine vision and to see the planet in real time from space, or indeed from the ground, is amazing. But the next generation of that, I think, is going to be audio, is going to be sonics. There might even be smell and other senses that we haven't digitized yet.
Speaker 3:And the sonics thing I think is so interesting. The ability to listen is actually sometimes better than vision: you can hear a lot of animals over a very long distance using audio recording technology, for example. There are so many interesting cutting-edge things that we're starting to develop there.
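The machine listening Sophie describes can be sketched in a few lines: scan a recording window by window and flag the windows with unusual energy at the frequency a species calls at. This is only an illustrative toy, not a real bioacoustics pipeline; the sample rate, call frequency and threshold below are all invented for the example.

```python
import math
import random

def tone_energy(chunk, sample_rate, freq_hz):
    """Energy of `chunk` at a single frequency (one DFT bin),
    computed directly so no FFT library is needed."""
    w = 2 * math.pi * freq_hz / sample_rate
    re = sum(x * math.cos(w * n) for n, x in enumerate(chunk))
    im = sum(x * math.sin(w * n) for n, x in enumerate(chunk))
    return re * re + im * im

def detect_calls(audio, sample_rate, freq_hz, threshold=25.0):
    """Flag one-second windows whose energy at `freq_hz` exceeds
    `threshold` times the median window's energy."""
    win = sample_rate  # one-second windows
    energies = [tone_energy(audio[i * win:(i + 1) * win], sample_rate, freq_hz)
                for i in range(len(audio) // win)]
    cutoff = threshold * sorted(energies)[len(energies) // 2]
    return [i for i, e in enumerate(energies) if e > cutoff]

# Synthetic example: 10 seconds of noise, with a 300 Hz 'call' in second 4
rate = 2000
rng = random.Random(0)
audio = [rng.gauss(0.0, 0.1) for _ in range(rate * 10)]
for n in range(4 * rate, 5 * rate):
    audio[n] += 0.5 * math.sin(2 * math.pi * 300 * (n / rate))
print(detect_calls(audio, rate, 300.0))  # flags second 4
```

Real systems use full spectrograms and learned classifiers rather than a single frequency bin, but the shape of the problem, turning raw audio into detections over long distances and long recordings, is the same.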
Speaker 2:Is there not a truth to the idea that we were once very plugged in to the natural world? The obvious example is indigenous communities.
Speaker 3:Maybe we did have a better understanding, in the sense of Janine Benyus's Biomimicry 3.8. I think it's really important for us to understand that for the last, I don't know, 150 years we've been living in a world built essentially on silicon. We've been shipping a lot of chips out into the world: into the environment, into our homes and our buildings and our offices. And actually the irony that we're pointing out here is that once we stretch that technology beyond the silicon infrastructure we're used to, we start to re-tap into an understanding of the natural world and our place in it, in a way that would probably make a great deal of sense to our ancient ancestors, who would say, well, of course I can listen to the fish in the river, as a lot of anthropologists have shown us over the last hundred years or so. We're going to be able to listen to the fish in the river, but in a very different, silicon-enabled way. That, I think, is super fascinating.
Speaker 1:Yeah, I think the premise of your question is a bit judgmental of the path that we have taken as humanity, which I understand. There are some who argue, I think even in Sapiens, or maybe it was some other book, that we should have just stayed nomads. But we picked up the first tool, and since then this Pandora's box has been opened, you know.
Speaker 2:Oh, and the speed of change and all of those things is extraordinary. I just can't help looking at it as part of a continuum. There have been peaks and troughs, and we're now getting back to something, but, as you say, in a very amplified way, with all of our intel.
Speaker 1:Exactly. Amplified, I think, is the right word here, because previously any insight you had came from a local cluster. Now what we're actually saying is we can get a global view of ecosystem intelligence, not just my neighborhood or my little woods.
Speaker 3:You can ask questions like: what does Australia sound like? Questions that are utterly... I mean, I don't even know what I'd do with the answer to that question, to be honest, but, you know, fascinating. And if you're a biologist in that space right now, it must be the most exciting time to be alive.
Speaker 1:Tell us a little bit more about this sonic thing. This is new to me; I've not heard much about it.
Speaker 3:Well, I think, as people in the space would probably say, we haven't digitized a new sense for quite a long time. This is the new frontier of sensing, the new way of plugging ourselves in. To your point, Natascha, we have spent probably the last hundred or so years with technology in opposition to nature, or in opposition to the so-called real world. Now we're seeing this sort of bridging happen, and inevitably that's going to go through each of our senses, one at a time. So we've kind of nailed optics, but we're moving into those other senses now, thinking about how we can digitize them and make maps of them, or find a way for computers not just to understand what we can hear, but to come up with entirely new sounds and smells that we are not even able to perceive. And that's when it gets both quite trippy and super interesting. If we are able to monitor the decline of certain populations, because of industrialization or whatever, in the Australian outback, that is a phenomenal tool if you're interested in saving the planet, which presumably most of us are. That is something that is almost impossible to do just visually, I would argue. We're going to need all of our senses to help with this fight, but also to plug ourselves in. As I said, if intelligence is becoming a planetary phenomenon, both silicon-based intelligence and the natural world plugging in as well, that becomes something very powerful but also, I think, quite humbling for us as the, what do they call it, apex cogitator: the top thinker on the planet.
Speaker 3:Well, are we? We don't know. Elephants have got a dictionary now. Honeybee dictionaries are coming out. You know, elephants can hear over a very, very long distance. I didn't know that. I didn't know that either. You know, an elephant somewhere else has said that an elephant has died, or a honeybee has said there's a good spot here, we should all come and cluster. All that stuff, we had no idea that that's what was happening. When we start to translate that kind of stuff, I think it's really interesting. The next step on from that, though, which I think is really mind-blowing, is if we can then intervene. There's a professor called Tim Landgraf in Berlin who has recreated the honeybee waggle dance, so he can talk to the bees. Wow. He's basically broken the interspecies barrier, and this is Vint Cerf's great project, the Interspecies Internet.
Speaker 3:He's the godfather of the real internet, and so, you know, this is the next frontier. And gosh, isn't that exciting?
Speaker 1:There are things that I've been reading recently about AI being very helpful in translating old languages and old manuscripts that we couldn't figure out. Now, if you apply that to interspecies communication, I think that becomes very interesting, especially because now we're really tapping in and collecting the data, in a way that means we could build big models of this data that you then let AI run wild on, to understand it acutely.
Speaker 1:Okay, that's really interesting. So now all these sensors are coming online. What do you think are the predominant ones? I guess satellite imagery; a lot of the satellite stuff is going on, obviously now propelled by Elon Musk and other commercial space ventures. What is happening on that front? That's a lot of visual bandwidth coming online.
Speaker 3:Absolutely.
Speaker 3:And you know, it's the same price-performance curve that we're all used to across all technologies. Everything's becoming more powerful and cheaper; we've heard it all before, and that's the case across pretty much any sensing technology. You can get cameras the size of a grain of salt. That means the whole of the back of your phone can become a camera: you don't need just two cameras on the back of your phone, the whole surface can be one. Whole buildings could become cameras.
Speaker 3:This idea that we're instrumenting the planet is something that's very compelling to me. It's what Professor Bratton calls the megastructure that we're building above our heads and underneath our feet; that is the computer that we live in. We're probably a bit blind if we think that computers are little boxes that sit on our desks. It's a very different future if we're actually living inside the machine, if we are freeing computers from their boxes, as an MIT professor puts it. If that's the case, that's the new normal, and it's a way of breaking down our mental model of what a computer is in the first place. It isn't a thing; it's the world in which we live, and that world itself is computing.
Speaker 1:It's funny hearing you speak about the world becoming a computer at a moment when, as far as I can see, all the GPUs are going into big data centers owned by big corporations. So how are we going to bridge the gap between that vision of the world becoming a computer and the centralized infrastructure that we're seeing right now?
Speaker 3:Yeah, and obviously data centers and the big supercomputers are key pieces of infrastructure for this machine Earth that we're building. But what's been very interesting for me over the last few months has been researching this concept of dark compute: the fact that there might be two, three, four, who knows how many times more compute available in the fans and smoke alarms and pregnancy tests and fridges and ATMs and everything that we've been consuming over the last 20 or 30 years, which have way overpowered chips inside them. It's the same opportunity-cost logic as dark fibre, which was a phenomenon at the end of the '90s: once you've dug the trench, you may as well throw loads of fibre optic cables into it, because it doesn't cost you that much more to put a few more in there.
Speaker 3:It's the same with the chips powering our everyday objects. They tend to be very overpowered and are actually idle 99% of the time. There's a meme on the internet of engineers trying to get Doom, the very old-school computer game, to run on everyday objects. You can do it on fridges, as I said; pregnancy tests are one of the most famous ones.
Speaker 1:I didn't know that.
Speaker 3:I think you can even do it with bacteria and stuff; it's gotten out of control. But the point is not to say everyone wants to play Doom on their household objects. It's to say that the latent capacity sitting dark and idle in our homes, our cities, our doctor's offices, our airports, everywhere, is actually something that could be taken advantage of.
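The dark compute argument is easy to make concrete as a back-of-envelope sum. Every number below (the device list, per-chip MIPS, idle fractions) is a made-up placeholder; the point is only the shape of the calculation.

```python
# Hypothetical inventory of embedded chips in one household.
# All counts, MIPS figures and idle fractions are invented for illustration.
devices = {
    # name: (count, millions of instructions per second, idle fraction)
    "smart speaker": (2, 4000, 0.95),
    "fridge":        (1, 400,  0.99),
    "smoke alarm":   (5, 50,   0.999),
    "router":        (1, 2000, 0.80),
    "tv":            (1, 8000, 0.90),
}

# Idle ('dark') throughput vs. total installed throughput
dark_mips = sum(count * mips * idle for count, mips, idle in devices.values())
total_mips = sum(count * mips for count, mips, _ in devices.values())

print(f"installed: {total_mips:,.0f} MIPS")
print(f"idle ('dark'): {dark_mips:,.0f} MIPS "
      f"({100 * dark_mips / total_mips:.0f}% of capacity)")
```

Even with toy numbers, the pattern Sophie describes falls out: the overwhelming majority of the installed instruction throughput is sitting idle at any given moment.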
Speaker 1:So AI is the thing that's going to unlock that dark compute at the edge, where the internet meets the real world? Because right now we use very supply-constrained GPUs to do the most important calculations and the most important compute, so I wonder whether a pregnancy-test chip is really going to be able to do much. But I guess you're saying AI could find a way.
Speaker 3:Absolutely. And actually, probably most things don't need a massive data center. Maybe they do today, because we've got incredibly hungry large language models out there that need a lot of power, but they're going to get more efficient.
Speaker 1:They better get more efficient, Otherwise we're going to need 25 planets to power them.
Speaker 3:You know, they will inevitably get more efficient. You're not going to have to mine the entire internet to get an answer to every question. When we're asking our fridge what we're going to have for dinner tonight, it will probably query only a very limited slice of either the internet or itself to tell us the answer. So I don't think we're going to require these massive data centers just to answer trivial questions. That's not to say that dark compute is going to power everything, but the point is it's sitting there, like dark fibre did. And dark fibre, the fact that we threw a whole bunch of fibre optic cables into the ground just because we could, came before Zoom and Netflix and everything; it's part of what enabled us to move to the next generation of what we understand as the internet today.
Speaker 3:Those things didn't exist in 1999. My question, and it's not my question actually, it's Pete Warden's question (he's one of the founders of TensorFlow at Google, and Useful Sensors is his company now), is: what's going to happen? What delightful things, in his words, could this dead silicon that's just lounging about in our homes do for us?
Speaker 1:Yeah, interesting. Okay, that's indeed the world becoming a computer, I guess, through all the devices that we have out there. Let's talk a little bit more about the organic world then, though, because obviously there are interesting computations happening there too. There's biological intelligence, as you were also referring to, Natascha, that we were a bit more connected to, I guess, in our tribal era. How have we made sense of that, apart from understanding what whales are saying, with all this stuff that has come online?
Speaker 3:To me, that's where the interesting frontier of sensing and compute is: when we can start thinking about these intelligent biological systems as part of this machine that we live inside. And I'm not saying, by the way, that any of this is particularly desirable. I just think it's a realistic way of looking at the world today, let alone what's coming down the track, and I don't believe you can really make decisions about what we need to do about all of these challenges that we face, climate and otherwise, unless you consider that we're living in this sort of computer, because otherwise I think you're a bit misguided about the kind of infrastructure that's wrapped around us. But plugging ourselves into these other types of compute, which so far we haven't really understood, or rather haven't been terribly humble about as a species, is going to be incredibly revealing. It's going to be revealing about us. It's a bit like artificial intelligence has been: a strange mirror to look at, when you realize that, you know, octopuses are very sophisticated creatures that can make all kinds of decisions and are very sensitive.
Speaker 3:I mean, oysters, not something any of us, I should think, have given a great deal of thought to, are incredible ocean sensors. They tend to be able to tell that there's an oil spill before silicon-based sensing can. So you can whack a couple of sensors on the outside of the shell; it's a non-invasive process, for the oyster lovers out there. Being able to monitor the oceans using a sort of biological computer (there's a company called Molluscan doing this at the moment), all the way up to off-Earth satellites, and literally every stage in between: to me that's an incredible barometer, I suppose, of global health. Perhaps a barometer for pandemic management. It could be a barometer of a whole bunch of things that, frankly, we're going to need in our toolbox in the systemically, incredibly risky world that we live in today.
Speaker 1:The trend there is the merger of biological substrate with silicon substrate. There's something a little bit dystopian about that, that you need to plug a chip into mollusks so that you can track oil spills. It makes you wonder: do we have to put chips into humans? Do we have to put chips into cows? Do we have to put chips into everything, so that we actually get to Lovelock's Gaia hypothesis, where we're all part of one system, sensing the ecosystem?
Speaker 2:But is there also a version where, the more you understand of those natural processes, the more you can mimic them, rather than hack into them and interrupt them? What is it about the mollusk or the oyster that is able to detect this, and can we mimic it? I mean, going back to the biomimicry thing. I'm sure when we look at how people built their accommodation thousands of years ago, it would often be in line with nature, obviously because those were the only materials they had, or copying some kind of nest, all of those sorts of things, and it would be more resilient to climate or whatever else.
Speaker 2:Yeah, and biodegradable. I wonder if there's a version where, as our artificial intelligence becomes more and more sophisticated, it intuits what all these other life forms are doing, rather than just tapping into our version of intelligence. Particularly with the sensory thing that you were talking about, isn't there then an ability to become a lot more 360-degree?
Speaker 3:I hope so. To me, that would be the dream; that would be the best possible use of technology. It's supposed to be invisible, that's the promise. It's not supposed to be in our fridges and our ATMs and everything else. It's supposed to blend into the background in a way that doesn't interfere with us fulfilling ourselves.
Speaker 2:For sure. Things do need some kind of human intervention, but that would be the optimal version.
Speaker 1:I think the intermediary step, unfortunately, is the one that requires chips, because our computation is silicon-based. We haven't really found a better way to plug into the natural intelligence and the ecosystem computation that is currently occurring. We went down the silicon track, and now what we're trying to do is integrate the silicon back into the biology, to track it, understand it, interface with it in many ways, because we have no other way. Maybe we've lost some ways; you may be right. But I think if you want to do it at scale, there is no other way now. Maybe with AI we can find some.
Speaker 2:That's what I mean, I understand. With mycelial networks or whatever, you still need to plug into them somehow, right?
Speaker 1:So, what's your interface to it? That's my question.
Speaker 2:I mean, I guess that hasn't been discovered yet.
Speaker 1:So I want to go back to the question I originally posed, and that is that there's something dystopian about the fact that, if we want to tap into the natural intelligence and the bio-intelligence that exists, it feels a little bit crude that we need silicon interfaces. But there is no way around it, because we've built our whole technology stack on the silicon substrate. That said, how are we doing on these interfaces? And what human or biological interfaces are starting to emerge in this realm that we may not know of yet? Because you're glancing into the future, beyond the usual headline that Neuralink is doing XYZ.
Speaker 3:Yeah, I mean, the science fiction community, and those inspired by science fiction, who tend to be a lot of the big tech gods, are definitely interested in trying to understand how to plug our brains in, in some crude or otherwise way. And part of that is driven, as I said, by a science-fiction vision of a somehow wonderful future where you can, you know, download memories or whatever. But it's also the frontier of neuroscience. There's so much we don't understand about the brain, which is so ironic, because it's between our ears, it's so close to us in every way, and yet so much of it is still a mystery. And I think that's the same with the biological world. Yes, it is not an elegant interface to strap some sensors onto an oyster, but at the end of the day we're trying, I assume from a bench-science perspective, to understand more. That's the frontier of scientific endeavour, as it were, and the brain is obviously part of that.
Speaker 3:We talked about space as well. Each of these I see as new frontiers: hugely exciting, and tending to be incredibly expensive in the first instance, which is why a lot of the tech folk are the ones with the resources to do this kind of experimentation. But these sort of alien interfaces, I think, are absolutely fascinating, and off-Earth is a beautiful example of that. How do we, even if it's not us physically, remotely run robotic experiments on whatever comes after the International Space Station? That is really fascinating to me. Could we automate laboratories in space? Can we push the frontiers, as we were talking about earlier? If bacteria spread quicker in low-gravity environments, if cancer cells grow quicker in low-gravity environments, you can accelerate the speed of your experiments. Why wouldn't you want to do that in a zero-gravity environment?
Speaker 3:There's so much frontier-type stuff happening. As I said, if you're a scientist in any of these fields, it's a tremendous time to be alive. You have incredible tools at your fingertips. They will look incredibly crude in 50 years' time, but what a great time to be a neuroscientist, what a great time to be an astrophysicist.
Speaker 1:Yeah, I find some of these things sound quite out there. I remember reading the James Bridle book, Ways of Being, and there was a simple thing he described in it, about how we're not recognizing the intelligences around us. Right, okay.
Speaker 1:You said this already; you referred to it as us not being very humble about other intelligences on this planet, and that's the premise of the book. It was a very simple thing: they strapped a GPS tracker on a wolf, and just from that they understood the migration patterns of these wolves better than ever. They understood that there are certain migration paths that the bigger herds take, and that those should be protected corridors. A very simple thing, and that's just GPS chips; we're not even going that far. So I think, as these chips get cheaper and these observation methods become more prevalent, we are actually plugging into things.
Speaker 3:Oh yeah, and, you know, forgive me, the purists out there, but as we release machine learning and algorithms into this sensor network, even on a very cheap sensor strapped to the leg of a wolf, you could imagine plugging that into the system. If you can start doing compute on that sensor, something really exciting could emerge. And it's not even just wolves; people are doing it with toddlers, understanding that these are not remote objects to be studied from afar, these are everyday processes. We still don't understand how toddlers learn language, that kind of stuff. So yeah, this concept of the edge, which is a really goofy word for chips that are out in the world with us, is fascinating to me, because it's so latent, so full of potential, once we have the intelligence in it to start giving us answers rather than just data points.
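The shift from streaming data points to computing answers at the edge can be illustrated with a toy collar-tracker sketch: instead of transmitting every GPS fix, the chip on the animal summarizes movement locally and reports only significant displacement. The coordinates, the reporting threshold and the distance helper below are all invented for the example.

```python
import math

def summarize_on_collar(fixes, min_km=5.0):
    """Edge-side summarization: emit a report only when the animal has
    moved more than `min_km` from the last reported position, instead
    of streaming every raw (lat, lon) GPS fix over the radio."""
    def km(a, b):
        # Equirectangular approximation; fine at wildlife-tracking scales
        dlat = math.radians(b[0] - a[0])
        dlon = math.radians(b[1] - a[1]) * math.cos(math.radians(a[0]))
        return 6371 * math.hypot(dlat, dlon)

    reports = [fixes[0]]
    for fix in fixes[1:]:
        if km(reports[-1], fix) > min_km:
            reports.append(fix)
    return reports

# A day of hourly fixes: mostly resting, one long move (coords invented)
track = ([(52.00, 13.00)] * 10
         + [(52.05, 13.02), (52.30, 13.10)]
         + [(52.30, 13.10)] * 12)
reports = summarize_on_collar(track)
print(f"{len(track)} raw fixes -> {len(reports)} reports")
```

The same pattern, filter and summarize on the chip, transmit only conclusions, is what makes battery-powered sensors on wolves (or oysters) viable at all: the radio, not the processor, is usually the power budget's bottleneck.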
Speaker 1:And I guess the moment is a pertinent one, because for the longest time we were creating a lot more data than we could compute on, but now we have these AIs that can integrate a lot of information and actually discover patterns better than before. So it feels like now is the time to get a lot more sensors online, because now we can actually cut through the information. If you think about SETI, and how slow it was to comb through all those alien signals on people's laptops, now you can do this trivially, very quickly.
Speaker 3:Well, that's why, you know, I'm obsessed with astrophysicists and cosmologists and anyone in that realm, because the skills they have are the skills we need today. They are very, very used to using machines as tools to find needles in the haystack, even when they don't know what the needle is. I mean, no one knows what an alien is, or looks like, or sounds like, or anything, and yet here they are, out there trying to listen for them. I used to work next door to the SETI offices, and it was just extraordinary.
Speaker 3:I couldn't get my head around how amazing and weird and mad that was as a goal. To listen for something when you don't even know what you're listening for is just insane.
Speaker 3:And these are the skills. It's the same with the huge radio telescopes that we're building today, the Square Kilometre Array in Australia and South Africa and various others: these are the skills that we need today. Not just the silicon infrastructure or the biological infrastructure, but the sort of human cognitive infrastructure that we need to manage and regulate and understand and talk about these technologies.
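The astrophysicist's trick of finding needles without knowing what a needle looks like is, at its core, anomaly detection: model the background haystack and flag whatever doesn't fit it. A minimal sketch, using a robust z-score with a threshold invented for the example:

```python
import random
import statistics

def find_anomalies(samples, threshold=6.0):
    """Flag samples that sit far from the bulk of the data, measured in
    robust (median/MAD-based) standard deviations. No model of the
    'needle' is needed, only a model of the background."""
    med = statistics.median(samples)
    mad = statistics.median(abs(x - med) for x in samples)
    scale = 1.4826 * mad or 1.0  # MAD -> stddev under Gaussian noise
    return [i for i, x in enumerate(samples)
            if abs(x - med) / scale > threshold]

# Background noise with one injected 'signal' at index 500
rng = random.Random(1)
stream = [rng.gauss(0.0, 1.0) for _ in range(1000)]
stream[500] += 50.0
print(find_anomalies(stream))  # the detector recovers index 500
```

Real survey pipelines are far more elaborate, but the principle is the one Sophie describes: you never define the alien, you define the noise, and whatever the noise model cannot explain gets handed to a human.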
Speaker 2:We're going to have to skill ourselves up pretty quickly, I suppose. Or you just opt out and things get done to you: you know, social media is having a terrible influence on your life, and yet you opted in in the first place, and you don't have any agency around it. I remember you mentioning once that GPS was becoming increasingly unreliable or risky, and that we were going to move into some sort of quantum space. Can you just expand on that for a second?
Speaker 3:Absolutely. I mean GPS will continue to be extremely useful. In fact, I don't know what would happen if GPS turned off globally.
Speaker 3:Yeah, exactly. But we're going to have to come up with alternatives, and in fact a lot of militaries are spending a lot of money on research on this question at the moment: what alternatives can you run, not necessarily to replace GPS, but in parallel? And that's for civilian aircraft as much as for non-civilian purposes. I mean, location is a deal-breaker for this century. There is no big company out there that isn't worried about knowing where things are, inside warehouses or manufacturing facilities or whatever else. Autonomous robots need to know where they are, and they need to know where the humans are, and where each other are, and all the rest of it. And outside too, whether it's autonomous vehicles driving us around, or autonomous tractors in fields, or whatever else.
Speaker 3:Location is everything. So we're going to have to come up with several different types of sensor, not just quantum. Is quantum useful? I can't answer any questions really on quantum, because I don't understand it at all.
Speaker 2:I don't think anybody does.
Speaker 3:I don't trust anyone who says they do. But a quantum sensor basically uses the Earth's magnetic field, which is quite difficult to jam, and it's constant. Solar weather, all that kind of stuff, is quite useful too, but it's very much at the early stages of understanding even how to take advantage of that in a stable way that you can deploy, in the same way that GPS has been such a success. And GPS is a hundred-and-whatever-it-is success, predicated on Einstein's work on gravitation. It's an insane R&D process, if you think about it, to get to where we are today.
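The relativity point can be made concrete with a back-of-the-envelope sketch. Nothing below comes from the conversation: the constants are standard textbook values, and the code simply estimates the two competing relativistic clock effects on a GPS satellite and the ranging error that would accumulate in a day if they went uncorrected.

```python
# Back-of-the-envelope: why GPS must correct for relativity.
# Standard physical constants and approximate GPS orbital parameters.
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8      # speed of light, m/s
R_earth = 6.371e6     # mean Earth radius, m
r_orbit = 2.6571e7    # GPS orbital radius (~20,200 km altitude), m

# Special relativity: the satellite's orbital speed slows its clock.
v = (GM / r_orbit) ** 0.5                 # orbital speed, ~3.87 km/s
sr_rate = -v**2 / (2 * c**2)              # fractional clock-rate offset

# General relativity: weaker gravity at altitude speeds the clock up.
gr_rate = (GM / R_earth - GM / r_orbit) / c**2

seconds_per_day = 86400.0
net_us_per_day = (sr_rate + gr_rate) * seconds_per_day * 1e6  # microseconds

# Uncorrected, ranging error grows at roughly c * (clock error).
error_km_per_day = c * net_us_per_day * 1e-6 / 1000.0

print(f"net clock drift: {net_us_per_day:.1f} us/day")   # roughly +38
print(f"ranging error:   {error_km_per_day:.1f} km/day") # roughly 11
```

The gravitational term dominates: left uncorrected, the clocks would run fast by roughly 38 microseconds a day, worth kilometres of position error within hours, which is why the satellite clocks are deliberately tuned before launch.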
Speaker 1:And so it's interesting to see what's happening with quantum sensors. They won't be the only answer.
Speaker 3:I'm sure, but we're certainly going to have to, because GPS interference happens with civilian aircraft, as I said, and it happens with agricultural equipment. It happens a lot, either from non-malicious or malicious actors, and that is not something that's tolerable. Well, just fill in the gap.
Speaker 1:Okay, let's go back to the point that you raised, Natascha. At this juncture we're seeing that a lot of the AI, and a lot of these tools that are coming online, are slowly encroaching on our ability to do a lot of things, and there's a question of personal responsibility in that. You know, a lot of people are afraid of unemployment and whether they're going to be displaced.
Speaker 2:Or being able to remember anything.
Speaker 1:Well, that's already been a problem.
Speaker 1:I can't do anything without my Reflect or Evernote. So, given all this progress that we've had on these different fronts, I start thinking about what our role is as humans going forward. We see that we're going to be ever more connected, and AI is going to take over a lot of these things. There's so much data out there that a human couldn't compute even one second of the stream over a whole lifetime. There's this H.G. Wells quote saying human history becomes more and more a race between education and catastrophe. What should we do on the education side? I like your idea about the astrophysicists and the fact that they know how to find needles in haystacks when they don't even know what those needles look like. What should we be doing here, and what is our personal responsibility, if we maybe take issue with some of the conversation that we've had so far, to drive it in a different direction?
Speaker 2:You also added to that quote, about regulation.
Speaker 1:Yes, yeah, and what is maybe also the responsibility of the state and the regulators out there.
Speaker 3:There are so many thoughts in my head now, from demography and the challenges of an aging workforce, all the way through to social media and everything else. It's a difficult question to get one's head around. If we start with demography, I think it's very, very interesting. We are facing a major demographic crunch in a lot of countries around the world. It is no accident that a lot of those countries are world leaders in robotics. Take China or Japan or South Korea, all with desperately aging populations, increasingly reliant on robots of whatever description, software or otherwise, to do tasks that ordinarily humans would have done. But those people haven't been born. So the teachers and plumbers and electricians and whatever else that simply don't exist are, I think, going to be replaced by robots. So there's a lot of nuance to the "are the robots going to take our jobs?" argument. It's a very grey question. It'll take some people's jobs at some times across a century. Will we, net net, be okay by the end of the century? Possibly, but it's going to be a very bumpy ride, and it's not going to be very even, and it's certainly not going to be very equal.
Speaker 3:The second thing, and these are in no particular order, is that we talk about, well, I talk about, anyway, a sort of global form of intelligence, a global machine. But actually most of the algorithms are trained on the English language. And not just English: a subset of English that we're probably very familiar with and lots of other people aren't. So it really isn't a global intelligence that we are creating. It's an incredibly one-sided, very northern-hemisphere kind of intelligence that's missing an enormous number of people and concepts and cultures and ideas. So "global artificial intelligence" is a difficult thing to say, because there's so much challenge involved in it. Education and skills are absolutely critical to all of this, certainly from a cybersecurity perspective. I'm super nervous there.
Speaker 3:There's a beautiful quote, actually, from during the pandemic. Reed Hastings, the founder of Netflix, said it could very easily have been a cyber pandemic, as it were, a cyber attack, and everyone would have been out on the streets, because there'd have been nothing to do at home, no one would have watched Netflix, and his company could have fallen off a cliff.
Speaker 3:And I love that because it's a very visual quote, isn't it? It really makes you think about the fact that we face a tremendous number of systemic risks, and cyber is a huge one of those, malicious or otherwise. There's also something called the Kessler syndrome, which is about satellites knocking into each other the more we send up there, and Elon is very busy pumping our low Earth orbit full of very useful satellites. Don't get me wrong, I used to be a very avid Starlink user, and it's a wonderful service. But there is non-malicious collapse that is very easy to imagine, and, as I said, malicious as well, and the training to deal with that needs to happen yesterday, frankly.
Speaker 3:So I think this question of lifelong education is a huge one: the race, as H.G. Wells said, between education and catastrophe. I'm going to paraphrase a lot of academics here; I used to work with demographers. We probably can't have all of our young people in full-time education until they're 24. We're probably going to need to use their able bodies and their taxpaying efficiency from 16 onwards rather than 24.
Speaker 1:Crazy to think that it is.
Speaker 3:And will they be in and out of training throughout their life? I hope so.
Speaker 3:A new model of education is clearly necessary for today, let alone for what's coming down the track. Are we prepared for the kind of world we live in today? No, we're not. We have a very unresilient system. We don't have any circuit breakers, so that when the butterfly flaps its wings in Mexico, or whatever the metaphor is, we all catch the cold. We're all so tightly interconnected, and we don't value redundancy. There is no dollar value, really, that you can put on spare socks for the army, or spare PPE, or whatever else.
Speaker 3:You sweat your assets. You do what the business schools told you to do in the 80s and 90s. That's the reality of the world we live in. It means you shut off one of those big nodes, a port or a canal, as we're seeing a lot, and you end up with a very sticky situation. Climate change feeds into that as well. When the Ukraine war started, part of it coincided with summer in Europe. A lot of the rivers ran dry, and the cargo ships on routes replacing the ones they might have taken pre-war couldn't get down the rivers in Germany and elsewhere, because the water level was too low. These are all just examples of second, third, fourth, fifth, sixth order consequences that you would never be able to foresee. Then the question comes, and I will stop talking: can we simulate a lot of that stuff? Can we create digital twins of systems?
Speaker 3:Can we pressure test, can we war game this stuff in silico, so that we don't have to rely on second-guessing what it might mean until it's too late and something happens? What exactly happens if there's a cyber attack? Can we war game this stuff in advance, and try and put some redundancy in place where we think we might need it?
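A toy version of that war-gaming idea fits in a few lines. The sketch below is purely illustrative: the routes, capacities, and failure probabilities are all invented, and a real digital twin would model far richer dynamics. It just shows how exactly enumerating failure combinations puts a number on the value of a redundant route.

```python
from itertools import product

def shortfall_probability(routes, demand):
    """Probability that surviving route capacity falls below demand.

    routes: list of (capacity, independent failure probability).
    Enumerates all 2^n up/down combinations exactly.
    """
    total = 0.0
    for states in product([True, False], repeat=len(routes)):
        p = 1.0
        capacity = 0.0
        for (cap, p_fail), up in zip(routes, states):
            p *= (1 - p_fail) if up else p_fail
            capacity += cap if up else 0.0
        if capacity < demand:
            total += p
    return total

# Invented numbers: a canal, a rail corridor, and an optional river route.
canal = (100, 0.05)   # capacity units, chance of closure per season
rail  = (60, 0.10)
river = (50, 0.20)    # drought-prone "spare socks" redundancy

lean = shortfall_probability([canal, rail], demand=100)
redundant = shortfall_probability([canal, rail, river], demand=100)

print(f"lean network shortfall risk: {lean:.3f}")       # 0.050
print(f"with redundant route:        {redundant:.3f}")  # 0.014
assert redundant < lean  # redundancy buys resilience at a carrying cost
```

With these invented numbers the lean two-route network fails to meet demand about 5% of the time, while adding the drought-prone spare route cuts that to 1.4%. That is the "spare socks" argument in miniature: redundancy looks like dead cost until you price the tail risk.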
Speaker 1:It's tough to simulate things that are not in the previous data set, though.
Speaker 2:But isn't that the superpower of AI, being predictive?
Speaker 1:Well, yes, but it predicts on the basis of the data that you've trained it with, for the most part, right? So, for instance, an AI that existed before LLMs came around could not have predicted that LLMs would emerge, because it doesn't know what the future holds. It doesn't know who's going to invent what tomorrow, and that could be a very disruptive force for the whole planet.
Speaker 3:And that's the role of futurists.
Speaker 1:Exactly there you go.
Speaker 2:That's why we have you here. But isn't there something to be said for us having our priors set and our biases, and relying on what's already happened, so we can't possibly know what's about to come in terms of invention and innovation? And yet, in one respect, we've already been surprised, whether it's AlphaGo or something. Those moves weren't things that we even knew.
Speaker 1:And yet, I think for narrow cases you're right, probably for now. It's easier, for sure, for it to show us new stuff there. For something this broad, well, I don't know. But what are your thoughts?
Speaker 2:That's my big hope: that it will be able to predict and work things out. I don't understand anything about protein folding, but I remember seeing a talk about it, just seeing how many possibilities could be crunched through, in a way that's too exhaustive for us to even get to with our predictive kinds of models, whereas it's not for AI.
Speaker 1:I think you're absolutely right, but you're giving narrow examples, and I think a world model is very different. There are black swan events and we just don't see them. The unknown unknowns are very hard to derive from a data set that doesn't have the unknown unknowns in it to begin with.
Speaker 3:Yeah, but I think maybe the power of some of these systems, and I'm being slightly hopeful here rather than anything else, is that they can take into account other data sets that wouldn't necessarily be part of another model. So can you look at innovations in origami, or can you look at innovations in, like, banana genetics or whatever? Are there things that you can put into your model that we clearly couldn't begin to run in our heads? I have no idea, really, but I think the power will come from being able to fold that sort of stuff in.
Speaker 3:And frankly, that's why people talk to futurists. I get told this all the time: tell us what we don't know that we need to know. You know, we're very good at institutional banking, or we're very good at, fill in the gap, but what we don't know is what's happening in the private space industry, and maybe that's going to impact our business tomorrow, and we're going to think, gosh, we wish we'd called a futurist.
Speaker 1:So we need to just create a model of all the futurists' brains, and then we sync that up with the world model and see what it spits out.
Speaker 2:So many black swans. Going back to James Bridle's Ways of Being, or, you know, one story, one intelligence. Presumably, if we do manage to get a version of what the natural world is doing and how it communicates, and all of these wonderful documentaries that David Attenborough has been doing around tiny, tiny creatures with presumably not a ton of brain power, yeah, like slime mould, exactly, and also rewarding one another for favours given, or warning each other.
Speaker 2:They've all got their systems and their ways of communicating, their ways of collaborating, and if an artificial intelligence, as you said, can tune into that along with everything else, then imagine. We can't do that, so I'm not so worried about there not having been models of it before. I just think, when you combine a combinatorial way of approaching things, then anything is possible.
Speaker 1:I think you need to capture those data sources.
Speaker 2:And space too. That means a lot more silicon being out and looking back, yeah.
Speaker 1:Let me ask you: with all this technology that we're talking about, there's a trend we see, especially when you look at VR and AR, where a lot of this stuff is disappearing a little bit. There's the Her movie thing, where he just has the thing in his ear and that's basically his interface to technology. We already see all these ideas of a dystopian future where we're all just wearing AR glasses and they're pretty much the only technology we need going forward. Let's talk a little bit about this dematerialisation trend. What are you seeing there on the horizon?
Speaker 3:You make the obvious point, which is that our devices should disappear, and I kind of broadly agree with that. I think we all feel, from a chiropractic basis and otherwise, that this constant stooping over devices is not a good idea. We kind of sense that intuitively, and therefore charging 15 things is just not fun.
Speaker 3:But what is the alternative, and are we content with that? Are we pleased, actually, that there's a device, because we can put it down? We can put it away, or, in my case, just not buy them. I have a phone, but that's pretty much it. I'm not really happy with other kinds of what I would term surveillance devices in my house, until I feel like there's some kind of regulatory or other body that I could get recourse from if I need to, and I don't feel like that's the case right now. It definitely feels like the Wild West. So there is a utility, I think, in devices, in that you can put them down. It's more difficult if you've fallen in love with it and popped it in your pocket, which I think is what Joaquin Phoenix did in the movie Her. That's a difficult thing to separate yourself from.
Speaker 2:But what about implants?
Speaker 3:We already, of course, implant ourselves with certain things, diabetes monitors or pacemakers and all that kind of stuff, and it gets very grey very quickly. There was a very famous court case a few years ago, which I can't remember the exact details of, where someone, a programmer, who had a pacemaker wanted to get his own data about his own heart rate, and was denied access to it because he was told that it belonged to the pacemaker manufacturer. Wow. And there was a big furore, of course, about this. Who owns the data? Who owns the photographs that we take on our phones of our children? All this sort of stuff. There's a lot of grey there that most of us are, and should be, very uncomfortable with. So the more invisible they get, the less chance, I suppose, we have to point at them and say: this is what's happening.
Speaker 2:suppose we have to point at them and say, interesting, that is mine happening exactly. Don't you see that as an inevitable next step, that it goes into us as I mean? Yeah, I mean, it depends which day of the week you ask me.
Speaker 3:I mean, I think I really push against the inevitability. Uh, and there have been books written by silicon valley folk with words like inevitable in the title. Like you know, is the? Has the internet? Is the internet inevitable once you've created computers like is? Is there just a technology? Is not the weather, it's not something that we are subjected to, it's not an act of god, it's not you know but I think that's such a good analogy because I think for most people they feel it is like this is why I like to say we have a lot of agency.
Speaker 3:We still design the internet. It's still in our purview. It is not something that we have to accept. We don't have to get a new upgrade of our phone. To a certain degree we do, because they, well, I can't think of the word. Sunsetting, is that it?
Speaker 1:Yeah, exactly.
Speaker 3:Exactly. You get booted out of the system eventually, which is deeply unfair on lots of different levels. To be very cynical just briefly, and I'm generally an optimist, but to be very cynical: we're fed this inevitability dialogue by people trying to sell us those same objects. Of course your new car sitting on the forecourt is an inevitable purchase, says the used car dealer. Of course the two-for-one deal is inevitable. If someone's trying to tell you something and tell you a story about it, one hopes that you think critically about that, and I don't think we do think critically about this. But where's the space to think critically?
Speaker 2:Where are people allowing themselves that space? And this is to your point about?
Speaker 3:responsibility is. I think we all do need to take advantage of our age, still having agency in this, before it gets plugged into our brains, which makes me sound very alarmist and I don't mean to be, but we do need to have dialogues about this. I hear all the time about this great rate of staggering rate of progress of AI. I always think I don't understand that, because progress to me anyway and you call me naive, but it's usually towards something that we've kind of agreed is something we want I don't know where we're going, so I don't know how we can be making progress if we don't to the everyday person.
Speaker 2:I think there'll be engineering progress.
Speaker 3:There'll be gates and milestones from an engineering perspective, but "progress" is a very odd word to use, for me, because it isn't just about Moore's law. It's not just following Moore's law as though Moore's law were the law. I spent a lot of time in Silicon Valley, drank a lot of the Kool-Aid, and probably still have an enormous hangover from it. And I love so much of that, really I do, and I think it'll solve some of our greatest challenges. But we've got to be careful, of course, because there is inevitably an underbelly, and we've got to remember we still do have agency.
Speaker 2:I'm really interested in that idea, because I feel, and this is very anecdotal, from parents of teenagers in my little ecosphere, that there is no choice. There is a sense of inevitability, of not being able to put in any constraints. That day has passed, and the choice is now just social annihilation or redundancy. You know, kids saying: I'm going to kill myself if you don't give me back my phone.
Speaker 3:What we know is that this sort of megastructure that we've built around us, as I said, above our heads and under our feet, the sort of silicon house in which we live, has been built on our attention, and for commercial purposes. I hope we all know and understand that; one of the reasons I do my job is that I hope more people know and understand that's the case. That's not the megastructure that we should all be desiring. The one we should be desiring is the one that's really helping us to manage the systemic risks that we face as a species, whether that's pandemic management or climate change or whatever else. We don't need targeted ads. Targeted ads are not an inevitability. They're not, as I said, something that just comes down from the sky, with nothing we can do about it. But that is people's emotional response.
Speaker 2:Yeah, and that comes back to education.
Speaker 3:It comes back to education, lifelong skilling. It comes back to us understanding what all of this is and who is selling it to us and selling it on from us, and the supply chain that goes out of my house.
Speaker 1:And regulators right.
Speaker 3:For sure, definitely.
Speaker 1:They failed us with the social media situation.
Speaker 2:So would that be the biggest area of progress? Then regulation right now.
Speaker 1:I think it's twofold, as you're saying, Sophie. It is definitely education and individual responsibility, but it's regulators as well. And I guess what you're saying is that in the AI field there's a lot more regulation occurring up front, because large swaths of the GDP could be shifted to the private sector, and that's why there are immediate repercussions. There are major geopolitical issues in there as well, of course. And it's about being clever and, mobile is not quite the right word, but I can't think of the right one, about regulation.
Speaker 3:Nimble, that's the one I was going for. There are moments where you can regulate, and then that moment passes. So at the moment we can regulate large language models over a certain size, because not everyone can build one. It's quite expensive.
Speaker 3:Chips are hard to come by, as we know, and there's a sort of opportunity moment to grab: to ship, or not to ship, those chips overseas to other countries who may be perceived to be a threat. So there are ways you can do that, but they disappear quite quickly. You have to have a super nimble regulatory framework. You can say, right, we're going to stop exporting this thing to country X, but soon country X will have that capability themselves and it will be irrelevant.
Speaker 2:Is this like the 3D printing of guns or something?
Speaker 3:Yeah, exactly. So you have to be super nimble.
Speaker 1:The gun escapes the lab at some point.
Speaker 3:Yeah, exactly. And the open source community would say that is how you regulate. The open source biotech community says that's how you regulate: you have the community self-regulate, in a way. But then the question, I think, becomes: what needs to happen on an island surrounded by sharks? What's the kind of AI that needs to stay there, and what's the AI that's okay to be in a kind of Wikipedia-style environment, where it isn't something that can have a huge release, a wet market moment? And it's basically impossible to draw the line between those two things. It's constantly shifting. I don't envy the regulators; it's not an easy time. But I think there are moments you can seize, and that only works if they're educated on the technologies.
Speaker 1:This is the problem: the people educating them are the very people building it, of course, giving them self-serving narratives around it. I think it's a good point you're making: basically, what's the equivalent of the level-four biolab for AI, which we have to just lock away? Then there is the Meta approach. Yann LeCun and, obviously, also Zuck are pushing out this Llama 2 model, probably Llama 3 soon, and they're open sourcing it. To explain the contrast: OpenAI and Google have a very closed-off approach.
Speaker 1:They're not sharing any of their spoils with the wider world, while Facebook, trying to redeem itself in the view of the public, is in this moment pushing the open source agenda, taking some of the models that they've trained for billions and giving them, quote-unquote, away for free. So these are two different paradigms operating side by side, like the iPhone-and-Android debate that we had in the mobile era. What is your take? Do you think this open-sourcing Meta approach is a dangerous one, or do you feel like it is an empowering one?
Speaker 2:And what about the Jaron Lanier thing, that you have to pay for access, or that there's a moment, even if it's not payment, that there's a moment?
Speaker 3:Or a behavioral moment.
Speaker 1:Well, you have to pay either way if you want to deploy these models, right? You can use the weights and things, but if you actually want to compute something, you have to pay Amazon, or you're on Azure paying Microsoft, or you're paying Google, because you're hosting this stuff somewhere to compute. It's not quite free yet, so to say.
Speaker 2:This stuff is quite expensive, actually, to run, so there is a threshold. But each time it becomes more affordable. And by the time it gets to us, and this is really what I meant, we don't even know where it emanates from.
Speaker 1:We've built a system that's way too complex for us to even keep track of anymore, which is why I keep going back to AI as a solution.
Speaker 2:I know you said the very people that are making this, the regulators or the people that understand it, are likely going to want to find reasons to continue to use it, and for us to continue to plug into it. But I wonder if there's going to be a sort of adversary, a kind of AI that is built to check it or to self-regulate in some way. Is that possible? I don't know, really. Some maverick, yeah.
Speaker 3:A maverick coder, exactly. I mean, I think we're going to see a lot of that, aren't we? We've just got to hope that the maverick coder is for humanity. And that has always been the case, hasn't it?
Speaker 3:There have been so many conscientious objectors, yeah, but five minutes after the start of the internet there were cyber attacks. There's always the underbelly right there. It's just us, isn't it? It's still us at the end of the day. And have we got the checks and balances in place? To a degree, yes. At the moment it's very expensive to do this stuff, so it's quite easy to track if people are doing something, because of the cost. The sheer cost of doing it means you can trace that a bit. But obviously, again, if that changes, it won't be the case anymore. You're right.
Speaker 1:That's the problem, yeah. And the underbelly of image generation is already problematic. The underbelly of creating AI girlfriends that look like celebrities is already problematic, and it all already exists out there. Just not a lot of people see it; if you go and seek it out, you find everything. And this is already now.
Speaker 1:So, I think we're jumping around a little bit, but one point you made earlier which I liked is to highlight the agency that we still have over this stuff. And I feel there is this narrative, and you may say it is the narrative of the Kool-Aid brigade, that the price-performance increase in all this stuff has been very smooth, famously purported by Ray Kurzweil in many of his presentations. Basically, what he says is that technological progress, when it comes to price and computational power, has been going on for hundreds of years and has been very smooth. Even through world wars it progressed in a very steady fashion, and it's exponential, and it feels like we don't have control over that. How do you feel about this being a natural law, as such?
Speaker 3:Well, I promised myself when I left Silicon Valley more than 10 years ago that I'd never say the word "exponential" again, so I might have to break that rule for today.
Speaker 2:You did already, yeah.
Speaker 3:But yeah, the challenge with exponentials is, of course, that they're very, very unintuitive for most people, and what doesn't necessarily look exponential at the early stages is exponential. That's just really, really hard for regular folk, me included, to get your head around. And Gartner's famous hype cycle is a beautiful example of us: you can track so many technologies through it. We get so overexcited about 3D printing, oh, we're all going to 3D print our jetpacks in our sitting rooms, and then everyone's like, 3D what? And, of course, additive manufacturing makes up most manufacturing processes in the developed world today. It hasn't gone anywhere. It just hasn't arrived in people's sitting rooms.
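That early-flatness point is easy to demonstrate with a few lines of arithmetic. A minimal sketch, assuming a stylised doubling every two years over fifty years; the doubling period is an illustrative Moore's-law-style figure, not a claim about any particular technology:

```python
# How flat an exponential looks early: doubling every 2 years for 50 years.
years = 50
doubling_period = 2

capability = [2 ** (t / doubling_period) for t in range(years + 1)]
final = capability[-1]  # 2^25, about 33.5 million times the starting level

# Halfway through the half-century, how far along do we appear to be?
halfway_fraction = capability[years // 2] / final

print(f"growth over {years} years: {final:,.0f}x")
print(f"fraction reached at year {years // 2}: {halfway_fraction:.6%}")
```

Halfway through the fifty years, the curve has covered under two hundredths of one percent of its eventual height, which is exactly why exponentials get written off as flat right up until they are not.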
Speaker 3:And so we're just spectacularly bad at knowing a lot of these trends that shape our everyday, and spectacularly bad at knowing what to do with them, despite the fact that they're quite repeatable across lots of different domains, and quite predictable, in a way. I don't like to make predictions, it's not really my bag, but plenty of people do, Ray included, of course, famously. And the helpfulness of that, to be kind to prediction makers, is that it fleshes out some kind of longer-term future so that we can make better short-term decisions. Whether AGI is in 2027 or 2031, if it gets people talking about AGI, I'm happy.
Speaker 2:I agree completely. And I think, also, you do see it elsewhere. You know, genetic sequencing is doing this.
Speaker 3:It's actually when all of those things splosh together, and we have a genetically engineered nanorobot that's making its way around my body using AI. That is something. Creepy, but sort of interesting. And it's the convergence of those that's very difficult for people to understand, even people working in those specific fields. That's why I think it would be much more interesting to smash together lots of those experts. If we can duplicate them, there we go: we can create avatars of them and hold conferences of those avatars to come up with interdisciplinary ideas to try and solve some of our major challenges. Because, generally speaking, challenges don't exist within academic disciplines or areas either. Pandemics famously cross many different areas, from zoology to biotech and everything in between. We need to create the intellectual resources to come back at that in a very interdisciplinary way. The convergence argument, as I was saying, is one that I really took with me. But then we mustn't become too interdisciplinary either. We mustn't lose discipline. It's a real balance between, you know, universities.
Speaker 2:Specialization yeah.
Speaker 3:Disciplinary excellence, someone getting really, really, really excited about the interaction between two molecules, or whatever it is that drives their PhD.
Speaker 3:You know, thank goodness, because otherwise we'd all have died of smallpox a long time ago. So how do we create venues to be able to do that? Is AI a tool to help? Could language models help us? I think it must be quite stressful, although a wonderful time, to be an academic today. Quite stressful when goodness knows how many papers are coming out every day in your field. How do you keep track of that? Can you use a second, synthetic brain to help you filter what's going on, and not just in your area?
Speaker 3:but in other domains that may be interesting to you or you've always been curious about. Or should I get a postdoc in this area, and how do we make that something that valuable without losing the disciplinary excellence that universities are?
Speaker 3:known for Before we finish, I just wanted to you mentioned nanotechnology. Talk a little bit about that in terms of going into us. Well, obviously we've got to be careful we don't create the new asbestosis. So you know, there's a lot, presumably of R&D and work to happen before we release fleets of autonomous robots into our body, but there is an inevitability.
Speaker 3:I think to even though I said I don't like that word to you know medical things getting smaller and being more interior, and you know a lot of that is the benefit of the frontiers of sensing and others.
Speaker 3:If you can make cameras so tiny, I was reading yesterday about edible robots, the ability to make things navigate their way around your body and not have to forgive the word excrete them afterwards, can we think about new ways to understand the body? A lot of this is driven by just this curiosity of like, what's going on in our body and how can we intervene in a way that isn't just blasting us with radiation if we get cancer or whatever it might be? And so broadly, yes, of course I'm supportive of a lot of the frontiers of that, but clearly there's a lot of experimentation that needs to happen first in silicon or in animal models, before we get too carried away and its promise obviously is, as you say, the ultimate sensing of everything that's going on in our body and therefore fulfilling the promise of proactive medicine rather than just going once the machine is broken.
Speaker 2:Right, exactly. How much would be saved, and how many resources.
Speaker 3:Yeah, and I think it was Feynman who said there's plenty of room at the bottom. There's just, again, so much we don't understand down there, to be sort of basic about it. And that's also on the cusp of quantum stuff too. We don't understand how photosynthesis works. We would not be here without photosynthesis; everything we eat is driven by a process that we don't really understand terribly well, because it's a quantum process. If we can understand it and observe it, we can interfere with it in some way, and we can optimise it, perhaps, for plants, and we're going to have to grow more on less land in the decades to come.
Speaker 1:Yeah, making that more efficient.
Speaker 3:Genetic modification, like it or not, is an inevitable part, I think, of stabilizing food security over the coming, I'm sure rocky, decades, with a growing, although it will level off soon, global population. How we do all of that will be through understanding what's happening at the very tiniest scale. I read a beautiful quote the other day: that we're using the most powerful tools in the world today to look at both the smallest things in the universe and the biggest.
Speaker 2:I thought that was rather wonderful.
Speaker 3:You know that we can really just take advantage of some of this stuff in a really insane way.
Speaker 1:The fact that we're stuck in the middle plane somehow, and we look at the very big and the very small. Just on the topic of nanotechnology, just to throw the other one out there: there's the promise of self-replicating nanobots that could, you know, grow things. The idea is that when we create a cup today, we're just smashing atoms together in a very imperfect way. If you could assemble the cup atom by atom, you could obviously be a lot more efficient and build something much more perfect.
Speaker 1:Quote unquote, yeah. The idea of that, and growing buildings, growing everything, is the Star Trek idea, or whatever it is. How far are we away from that? I know you don't like making predictions.
Speaker 3:I'm fascinated by the frontiers of manufacturing. I think it's so exciting and again you know, to natasha's point earlier a lot of that innovation is happening in space as well, where the constraints are much tighter. If you have to build buildings on the moon or on Mars or whatever else, you're probably going to build them more efficiently because you have a limited amount of resources. Everything's incredibly expensive to take up there with you and that is the kind of R&D lab that gets a lot of people very excited.
Speaker 3:It isn't just about atom-by-atom assembly here on Earth. It's about jamming together different ways of putting materials together; 3D printing, or additive manufacturing, has been going for a very long time, but it's the innovations that come with that, the fact that it could be lighter and stronger and weirder. That will also be the case with us, if we're getting prosthetics in some sort of future, whether they're necessary prosthetics because you've lost an arm or a leg, or prosthetics that you choose willingly because they make you run faster, jump higher, whatever it might be.
Speaker 3:Those are not going to be manufactured in ways that we would understand and recognize today. There's some incredible stuff that can happen there.
Speaker 1:Yeah, that gets me very excited too. I guess the final question is: how do we take everybody along for the ride here? Natasha's been saying, and I think she's right, that a lot of people have given up and kind of just accept it all. But when we talk about some of these edge things we just discussed, the frontier things that you're excited about and I'm excited about, I have these conversations and people just aren't engaged. As a futurist, when you encounter those moments, how do you deal with that? How do you bring people along further?
Speaker 3:the reason I'm an optimist, uh, is not just because I feel like I have to be um, although I think there's part of that in there is that you know, it's what's the point of living otherwise sort of thing if we aren't excited about what's coming. But the more I suppose realistic part of that is that I'm so lucky in my job to hang out with people who are building interesting things. They are building the sensors and the infrastructure and the chips and the novel manufacturing techniques and the rockets and everything else, and it is impossible not to get infected with the excitement that they have in the very specific domain that they're working in. And that causes me to be optimistic, because it's very rare that people will say I'm building this, you know, because I want to sell more things to more people.
Speaker 3:It's usually because it'll have some incredible impact: novel building materials using lunar soil, or an intervention to help prolong our lives, or whatever it might be. So my optimism comes from hanging around those people. It's a bit like when we watch TED and feel kind of good about the world; certainly in the early days, I felt like it was just this happy place where you could download lots of people's excitement about the future. As I said, I'm very lucky, and I realize that's not something that's available to everyone, but a lot of this is available on the internet. I read a lot of science journals.
Speaker 2:There is a lot available to everyone, more than ever before, I think the optimistic part of it, and yet we're checking out of it more than ever before, I guess. Yes, yes, kim Kardashian is more interesting.
Speaker 3:And I think that there is so much on the cutting edge of science that that's really heartening and exciting. You know, and these are people who are not paid terribly well, it's not like you're reading Goldman Sachs size bonuses for people studying Beatles in a Delta somewhere, but their life is their passion and that's very exciting to read about that. And you know, sensing and video and everything citizen science projects. They used to spend quite a lot of time looking at the Zooniverse, this galaxy zoo that Professor Chris Lintott set up, of time looking at the Zooniverse, this galaxy zoo that Professor Chris Lintott set up, which really gives people a feeling that they can contribute to cutting edge science, and I think that's extraordinary.
Speaker 3:That's how you get people along with you, I think, because these tools are now available to all of us, but you have to seek them out, and it's probably not through the major news sites and others. That's not to say we should ignore those. And I'm not some sort of blind Pollyanna-type figure; I've spent a lot of time with people like that, and that's definitely not me.
Speaker 1:Of course, yeah, you don't like that, but neither am I a cynic?
Speaker 3:I don't think about what's coming, but we have to be very, very open.
Speaker 2:Maybe it becomes obligatory. It's sort of mandatory participation like voting in Australia. It becomes something that you just have to partake in, even if it's only for a short period of your life.
Speaker 3:Just that awareness would be Like a national service.
Speaker 2:Yeah, brilliant.
Speaker 1:Yeah, I think we should celebrate less, I guess, athletes and celebrities and more scientists and all that right. And it is shifting. I would argue already.
Speaker 3:I've been to some International Women's Day events today. It was super inspiring, learning about people's lives and what they've achieved, and it's really heartening. It makes you want to get out of bed in the morning, and I think that's what we should seek out on this incredible internet and set of technologies that we have at our fingertips but not blindly, you know we shouldn't sleepwalk into something either but use our tools and our intelligence and our intellect and our education systems and join things. I mean, this is what I learned.
Speaker 3:Harari did an interview or a panel yesterday with the Center of Existential Risk in Cambridge and one of the outcomes of the sort of panel that came afterwards was that we should join things. Like, don't just sort of sit back and let you know things happen to us, but join, do it, start local. That's easy in a way. I mean it's not easy easy, but it's easier than trying to change politics or whatever else, because it is about narratives. It's about the narrative that we tell ourselves about the future, and I hope that trying to give a balanced narrative of that to people is is really critical to me. It's not to say everything is going to hell in a handcart, but that we can do something about it, and here are some people really trying to shift the game on as whale conservation or whatever it might be is.
Speaker 1:Yeah, there's something really special about that that's a beautiful point to end it on, I think, quite hopeful. Thanks so much for your time my pleasure, thank you so much.