
Where Shall We Meet
Explorations of topics about society, culture, arts, technology and science with your hosts Natascha McElhone and Omid Ashtari.
The spirit of this podcast is to interview people from all walks of life on different subjects. Our hope is to talk about ideas, divorced from our identities - listening, learning and maybe meeting somewhere in the middle. The perfect audio diet for shallow polymaths!
Natascha McElhone is an actor and producer.
Omid Ashtari is a tech entrepreneur and angel investor.
Where Shall We Meet
On the Future with Howard Covington
Questions, suggestions, or feedback? Send us a message!
Our guest this week is Howard Covington. Howard is a Cambridge graduate in physics and maths. He has been a banker, a co-founder and chief executive of New Star Asset Management, and a trustee of the Science Museum. He has also been chair of the Isaac Newton Institute for Mathematical Sciences at the University of Cambridge, The Alan Turing Institute, ClientEarth, and the Scotia Group.
He is the incoming chair of the Smith School of Enterprise and the Environment at the University of Oxford. Howard is a fellow of the Institute of Physics and an honorary fellow of the Isaac Newton Institute and The Alan Turing Institute.
We went to a talk Howard gave recently and were amazed by how many of his predictions have come to pass, and were therefore left feeling comforted by his positive predictions about the future.
We talk about:
- A quick history of 540 million years
- Living in the midst of the third Intelligence Explosion
- Printing meat to eat
- Dark factories
- Are robots part of evolution?
- How capitalism drives the race to net zero
- The restoration of the planet
Let’s gaze into the future!
Web: www.whereshallwemeet.xyz
Twitter: @whrshallwemeet
Instagram: @whrshallwemeet
Welcome to the Where Shall We Meet podcast. We're back from our summer break and our first guest is Howard Covington. Howard is a Cambridge graduate in physics and maths. He has been a banker, a co-founder and chief executive of New Star Asset Management and a trustee of the Science Museum. He's also been chair of the Isaac Newton Institute for Mathematical Sciences at the University of Cambridge, the Alan Turing Institute, ClientEarth and the Scotia Group.
Speaker 2:He's the incoming chair of the Smith School of Enterprise and the Environment at the University of Oxford. Howard is a fellow of the Institute of Physics and an honorary fellow of the Isaac Newton Institute and the Alan Turing Institute. We went to a talk Howard gave recently and we were amazed by how many of his predictions have come to pass and therefore left feeling quite comforted by his positive predictions about the future.
Speaker 1:We talk about a quick history of 540 million years.
Speaker 2:Living in the midst of the third intelligence explosion.
Speaker 1:Printing meat to eat. Dark factories. Are robots part of evolution?
Speaker 2:How capitalism drives the race to net zero.
Speaker 1:The restoration of the planet.
Speaker 2:Let's gaze into the future.
Speaker 1:Hi, this is Omid Ashtari.
Speaker 2:And Natascha McElhone, and with us today we have Howard Covington. Hi, Howard. Hi, very good to be here. Thank you so much for coming along.
Speaker 1:Howard, you recently gave a really interesting talk that was uplifting and optimistic about the future of mankind, which we want to delve into today. In the talk, you go through a series of concrete, observable trends and you address a third intelligence explosion that we're in right now. Let's look back and talk about the first two before we talk about the current one.
Speaker 3:Very good. So, in simple terms, the first intelligence explosion was when multicellular animals evolved and, in order to direct themselves to food, evolved neurons and neural networks. And it was the neural networks that activated muscles and connected light sensors and chemical sensors to muscles and guided them towards food. And as eyes developed, this triggered something called the Cambrian explosion, which was a massive explosion in the kinds of animals there were in the world. And this happened about 540 million years ago, and it set a path which gave birth to vertebrates, fish, and the vertebrates then evolved and eventually gave rise to apes, to the great apes, and from the great apes humans eventually evolved.
Speaker 3:And the way that that took place was what I call the second intelligence explosion. And this was when some great apes were having trouble making a living in the forests that were drying out and took to the savannah where they scavenged meat from dead animals. And the savannah was very dangerous and it was a great evolutionary advantage to stand upright. So they did that. Bipedalism evolved that freed up their hands and arms and that meant they could use tools. So tool use began to evolve and with that cooperative hunting. So there was a great evolutionary pressure then for language to develop, both to coordinate hunting and to train the younger hominids, as they were called, to use tools, and over a period of something like 5 million years, language evolved. Humans became linguistically modern about 100,000 years ago and language led to a second explosion in intelligence.
Speaker 1:Yep, that is a quick history of 540 million years.
Speaker 2:That was very impressive.
Speaker 1:Really impressive indeed. The one thing that comes to mind when you go through this is that there's a counterfactual that we can't run, and that is: what do you think could have happened? Do you think it is inevitable that we went down this capitalist path, this path of competition, rather than, say, one that is more in tune with nature, that would have developed in a harmonious way?
Speaker 3:So in the study of evolution it is generally thought that nothing is inevitable. Blind chance operates at every step, and I think that continues to be the case right up until the 17th century, when science and advanced maths are developed. And from then, from the time that Newton published his laws of gravity and motion, I think a certain amount of inevitability creeps in, because entrepreneurs could see how to use science to develop new processes, and at that point biological evolution gives way to capitalist evolution. So it was not at all surprising that within 100 years of Newton, the Industrial Revolution began.
Speaker 2:So to clarify, because I didn't pick that up from your talk, that's really interesting: with the eyes and the bipedalism, being upright, necessity really was the mother of invention.
Speaker 3:As always with evolution, yeah.
Speaker 1:Okay, so I like how you're talking about the notion that the discovery of science, writ large, essentially, has kicked off different dynamics that are now no longer just biological. They're, so to say, connected to the world of ideas, which has taken on machinations of its own.
Speaker 3:Yes, I mean, if you look at any graph of anything related to humans: population, gross domestic product, use of materials, they all go shooting up from around the 1750s. Something completely new happens to the planet and to the species on it from roundabout then, and it was caused by capitalism, and capitalism was enabled by science.
Speaker 2:And competition.
Speaker 3:I mean, the essence of capitalism is competition, as is the essence of biological evolution.
Speaker 1:Right, so you're saying all the J-curves that we're now steeply in were kicked off roughly in the 1750s.
Speaker 3:That's certainly my view, yeah.
Speaker 1:And so we are in one of those now, which is the third intelligence explosion. Let's talk about that.
Speaker 3:Just as I think that capitalism was an inevitable consequence of the scientific revolution of the 17th century, I think the current intelligence explosion, the invention first of all of computation and then of neural networks and large language models, where we are now, to go back to an earlier point, is inevitable. It's an inevitable consequence that humans, once they are operating in advanced capitalism, using all the tools of technology and innovation, will find their way to computation and from computation to machine intelligence.
Speaker 2:But language was the beginning of all of that, in the second intelligence explosion.
Speaker 3:Yes, you can't get, in my view, to the use of advanced tools. You can't create advanced tools without language. So fully linguistically competent humans were present around 100,000 years ago. It took about 100,000 years of slow, gradual innovation to discover the scientific method, and then very rapid innovation after that to reach computers and large language models.
Speaker 1:Yeah, the notion of the beginning of infinity is probably something that you're familiar with. David Deutsch wrote this book and so the idea is that the scientific method will get us to everything that is physically possible, if we continue this trend and ride it.
Speaker 3:Yes, you may remember, in one of my slides I say provided we don't screw things up.
Speaker 1:Yes, it's true, that is a big asterisk behind everything here. Okay, great. So let's talk a little bit about this whole transformation of society through AI. I would say that it's an interesting time that we find ourselves in, with AI coming onto the scene in probably one of the most extreme eras of capitalism. Some people would say shareholder capitalism is a little bit too out of whack with human values and human well-being, to an extent. And so the question is: as AI is coming onto the scene in this environment, do you think that it will be imbued with the right values, so to say, values that will be in line with human well-being rather than efficiency, competition, the cold rational values?
Speaker 3:That's not a straightforward question to answer, but I will do my best. So I think that we are at a time of advanced capitalism called techno-capitalism, and techno-capitalism has a slightly different structure than what went before, because what is driving the development of innovation and technology are now venture capital funds, and the venture capital funds are very sophisticated risk-takers who are willing to put large amounts of money behind scientific research and innovation in a way that hasn't been done before, but in return, of course, they want to see profits. They want to see a return on their investment. So innovation and the development of large language models and AI and all the rest of it are being driven certainly in the US by the pursuit of profit. So the main value is going to be the capitalist value of making money for shareholders.
Speaker 2:But they didn't necessarily start with that intention, did they? I mean Sam Altman, at the beginning anyway, did seem to say that this was open.
Speaker 3:The capitalist system is a very hard one to fight. So I mean, he did indeed seem to say that.
Speaker 2:Is it just impossible to resist? Is that the point? Lots of startups are meant to be good for humanity, or when someone doesn't have any money they have nothing to lose, so the intentions are very pure.
Speaker 1:Social impact investing is also a category of investing, for instance, but do they thrive and survive, those ones?
Speaker 2:Sure some.
Speaker 3:But what distinguishes the AI situation? And let's give Sam Altman the benefit of the doubt: let's suppose he did set up OpenAI to do good for humanity. What he probably didn't foresee is the huge amount of money needed to set up data centers and to scale these models up continually. I mean, he certainly foresaw that you had to scale them up a bit, but I don't think anybody understood that data centers will need their own nuclear reactors.
Speaker 3:And the competition, I suppose, was so quick. As always in these cases, if there's a chance to make a lot of money, the competition is there. And there's also, by the way, which we need to come back to, the need to move very fast, to scale things up and therefore to amass a great deal of capital. The only providers of that capital are the venture capital funds. The venture capital funds want an economic return. So, whatever the original idea, it is now about making money for venture capital funds.
Speaker 1:By the way, can I just slightly push back and say I don't think that all the VC funds are very wise. We've had various boom and bust cycles where there's so much herd mentality going into a paradigm and burning a lot of money. But by and large I would say the collective of them is very smart, even if the individuals are not always, and you just need one home run to justify the other nine out of ten failures.
Speaker 3:There's a second question about values, which goes under the technical name of the alignment problem. Can you align the behavior of large language models, the conclusions they reach, with human values? And there are two possible answers, yes or no, and there are different advocates of those two possible answers among the tech pros in Silicon Valley. I think that there's a very clear answer to this, which is the following: human values are not aligned amongst humans. 100%, exactly, yeah. Even within myself, I'm not fully aligned.
Speaker 3:Yes, yes. So, since human values cannot be aligned, for reasons we understand, humans are very complex: they have different interests and different motives, they've been brought up in different ways and they've learned different things throughout their lives. If artificial intelligence is to operate at the same level as human intelligence, it is also going to be complex. I don't think there's any prospect of aligning it. I think it is more likely it will have its own values and, like the rest of us humans, it will have to learn to live.
Speaker 3:Be adaptive, yes. It will have to learn to live with us and we will have to learn to live with it. But I say that in the context of a third idea, which is that we shouldn't just be talking about artificial intelligence here. We should be talking about artificial intelligence embodied in robots. Right, so we will be interacting in a physical way with robots in the workplace, and it is through those interactions that I think artificial intelligence systems will ultimately learn their values, and learn what they can do with humans and what they can't do. Because there are stakes also, and if you have a physical body and your battery goes off, you're no longer experiencing, and all these things. Absolutely.
Speaker 3:And if somebody gets cross with you and hits you with a wrench, you know you've done something wrong.
Speaker 2:Yes, well, not least, I would like AI to help me with the mundane tasks of life, like washing up and doing the laundry, rather than writing my script. I would like to have more time to do that rather than be delegating it, which right now...
Speaker 3:We're in this strange world where we're doing more and more menial tasks and the AI is taking the top jobs, even creating our art and our music. Again, one has to look at the evolution of robots, which is happening very fast, in the context of dark factories.
Speaker 1:Exactly.
Speaker 3:And dark factories are factories that are so well organized that machines do all the work without the need for human intervention, other than a maintenance crew for a couple of hours a day. The first dark factory that I know of was opened in China in 2001 or something like that. The latest dark factory, which opened last year, produces mobile phones. You feed computer chips and plastic in one end and a mobile phone comes out of the other, and about one mobile phone a second comes out. Wow. Yes, and again no workers, just the maintenance crew every now and then. And one can't help looking at that in the context of the Trump administration's desire to bring back mobile phone manufacturing to the US.
Speaker 3:I mean it can be done, but there are no workers involved.
Speaker 1:Yes, only maintenance crews.
Speaker 3:Only maintenance crews.
Speaker 2:yes, I love to your point about when you talked about dark factories and I suppose that's why they're called dark factories that there's no need for light or electricity because they don't need that. Um, I love that idea of us adapting towards robots, which you mentioned earlier. We always think about this.
Speaker 3:Alignment is one way and actually us understanding that world view or perspective is going to be super interesting yeah, so, as in all ecosystems, there is interaction and mutual adaptation, and we're going to see that.
Speaker 1:And this segues really well into the other mega trend that we have, and that is essentially the population bust. Do you want to set the scene?
Speaker 3:So one of the things that science and technology, working through capitalism, has done is make us very rich, and it's increased lifespans dramatically.
Speaker 3:And the consequence of that is that population has gone roaring up over the last 200 years and is now around 8 billion people, and it will continue to go up for a while. It has also let us control our own reproduction. And as society has changed, we've gone through a transition over the last 200 years from mainly rural work to largely urban work, and women have entered the workforce and want full and satisfying careers. And those two things taken together have made the cost of having a child go up quite significantly. The data from the US is that the cost of raising a child to 18 is something like six or seven times the average income. So when a couple decides to have a child, they mainly decide whether they're going to have one child or two children. It's only in exceptional circumstances that they'll consider having more than that. So guess what happens? The average number of children per woman, the fertility, falls to between 1 and 2.
Speaker 1:And replacement is 2.1.
Speaker 3:Replacement is 2.1. If fertility throughout the world falls to 1.5, which it could easily do by the end of the century, global population will halve in 200 years. And that will happen if countries around the world adopt the same kind of economic model as in the US and in Western Europe. It's entirely obvious.
Speaker 2:Which they will do, you presume.
Speaker 3:They would certainly like to. They're all striving towards that as an idea.
Speaker 2:And do you see that... I got the feeling, when you spoke about this before, and having listened to other things that you've said (well, there are lots of environmentalists who think that's a very good thing), that you saw the rapid growth of AI and robots as replacements for humans at a certain level, or a hybrid version of that, as what will care for our elders. The point being, you don't necessarily see this as a negative trend.
Speaker 3:So the main immediate problem with low fertility is that the population ages, and a larger number of old people have to be supported by a smaller number of young people. But this is happening just at a time when dark factories and humanoid robots are evolving. So if we are lucky and if we manage things properly, the imbalance in the population will be offset by the use of dark factories and humanoid robots. And, Natascha, to pick up a point you made about care for old people, in Japan it is already the case that the old people want to be cared for by humanoid robots because they do a better job than human carers. So I think there are all sorts of opportunities here, and declining fertility, certainly for at least the next century, is not something to fear.
Speaker 3:It's something to manage by using the opportunities that AI, humanoid robots, dark factories give us.
Speaker 1:Yeah, I think we'll revisit this in a bit, because I think there's a notion here that these dynamics are as they are and we cannot intervene. Just to counterpoint: what if we come up with exo-wombs? And what if we, you know, make creating humans very easy and convenient? And what if we are in a world of abundance and the cost of actually rearing a child comes down, right? These trends could reverse or go in a different direction, in a way, right?
Speaker 3:They could reverse, I do agree with that. I find that this is not a technological issue; I find the economics of it difficult to understand. I mean, if it does cost eight times as much, yeah, eight times average income, to rear a child to the age of 18, it's difficult to see how technology is going to change that.
Speaker 2:It may do. But even if you take the economics out, I think there's an interesting trend, and again we need to get back on topic, which is: even though our life expectancy is much longer, we are leaving it later in the West. And again, this is a very, very small stratum of society, but I guess it's the one that we're talking about. Most women I know who are younger than me have frozen eggs. There's the whole compute around how much energy that is using, just to keep those eggs frozen.
Speaker 1:yeah.
Speaker 2:And it seems like a moot point, but you're a mathematician.
Speaker 1:I think data centers probably burn 100 times more than all of that, at the moment.
Speaker 2:But if you have a frozen egg for 25 years, or many, right? So, anyway, putting that aside, there's also this strange thing: if you have control over your fertility and you get used to a life without a child that's very fulfilling, and you have freedom, lots of people may elect, not for economic reasons, not to have kids.
Speaker 1:It's mostly people who have money who choose not to have kids. It is a lifestyle choice. In many cases it is indeed. But I would argue, to go back to your economic point, these dark factory robots are doing work at a fraction of the cost of a human, right, and that's why there's this pressure, and therefore, technically, the production cost of everything is trending to zero.
Speaker 3:Yes, the humanoid robot costs roughly $2 an hour.
Speaker 1:Yeah, exactly, there you go. Compared with $15 to $20 for a human. And that's probably going to come down as well, right?
Speaker 3:It's going to come down and what they can do is going to go up until it approaches human level. So we're going to go through a very interesting transition.
Speaker 1:Yes.
Speaker 2:But do we think that money is always going to be the metric by which we continue to measure everything? Is that what I'm hearing from you both?
Speaker 3:The capitalist system drives you that way, and to get out of competition for more, to be able to use more resources, is quite a difficult thing to do, and it's not clear how it can be done. I mean, there are some ideas from Silicon Valley of radical abundance. Yes, exactly. Things become so cheap that everybody can have everything.
Speaker 1:Peter Diamandis, you know, the Ray Kurzweils of this world. So maybe. I find it difficult to understand how that happens, but maybe. And the other thing that comes to mind, just to make a point before we move on to the next thing, is that the cosmic civilization will not be on a biological substrate. It seems it will be our robots that are going to...
Speaker 3:Yeah. So I think Elon Musk is completely wrong, and a number of people also think that. Humanoid robots will be much better adapted to explore the solar system and the galaxy than humans. They're much better adapted; it's much cheaper, much easier to use humanoid robots. So I suspect one of the futures for humanoid robots is cosmic exploration.
Speaker 1:No, they already are. Yeah, they are. It's just sad that it's not going to be an iris, a biological iris, that will see the marvels of this universe, but just mediated by pictures from cameras. Anyway, let's move on. So one thing that's really important in this whole picture is obviously how we're going to fuel the exploding energy demand that is required for this AI and robotic revolution that we're going through. Let's talk a little bit about that, and you have a lot of experience here.
Speaker 3:Yes, I have some experience, so I am a clean tech optimist. If one looks at the amount of fossil fuel that has been used year by year for the last decade, you will see that it trends up slightly. And it's a very reasonable question to ask: we thought there was an energy transition going on, away from fossil fuels to wind and solar and so on, but we don't see it in the numbers. And it's perfectly true, you don't see it in the numbers, just as you couldn't see the penetration of mobile phones in the first couple of years, when only a very few people had them. But the number of mobile phones in use was rising exponentially. And that's what's happening with wind and solar: they are rising exponentially from a small base, and over the next 10 or 20 years you will see them eating very rapidly into the energy produced by fossil fuels.
Speaker 3:And here's a nice story, if I may. Sure. So many years ago, maybe 20 years ago, Pakistan wanted as many coal-fired power plants as it could get. So it entered contracts with coal-fired power plant producers which were very generous to them, and a term of the contracts was that it would pay them whether or not it wanted to use the energy. So go forward 20 years and you would think it's almost impossible to replace fossil fuel power plants in Pakistan, because they're being paid so generously. But here is what actually happens. Because they're so expensive, there's been no investment in them. And because solar is so cheap now and batteries are so cheap, entrepreneurs, the people running corner stores and restaurants and little businesses, are buying solar panels and batteries from China, which is selling them very cheaply. They are disconnecting from the grid, not because they want to see Pakistan's emissions going down, but because it's so much cheaper to use solar panels and batteries.
Speaker 3:And the growth rate in solar energy in Pakistan is 70% a year at the moment. That is the exponential curve that Pakistan is going up. And if a very difficult case, difficult because of its locked-in coal-fired power plants, can go through a transition that quickly, think what the rest of the world can do. So I think it's very reasonable to anticipate a 40-year transition to clean energy, and by that I mean that within 40 years, by 2065, nearly all fossil fuel will be squeezed out of the energy supply system. There are certain people who would say, gosh, that's a very conservative forecast. I think it's a realistic forecast, because one must anticipate the oil companies and the coal companies fighting as hard as they can, which is what they've been doing for the last 20 or 30 years.
Speaker 3:It won't go as quickly as it could because of the damage that they are going to inflict on us, but nonetheless, it will happen.
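[As a rough aside, not a figure from the conversation: it helps to spell out what a sustained 70% annual growth rate implies, assuming for simplicity that the rate simply holds constant.]

\[
t_{\text{double}} \;=\; \frac{\ln 2}{\ln 1.7} \;\approx\; 1.3 \ \text{years},
\qquad
1.7^{10} \;\approx\; 200.
\]

[In other words, at that rate installed solar roughly doubles every 16 months and grows about two-hundred-fold in a decade, which is why a trend that is barely visible in today's fossil fuel totals can dominate them within 10 or 20 years.]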
Speaker 1:Yeah, one thing that I think is striking here that is worth highlighting is that by and large, when we have these competitive dynamics that emerge in some of these spaces that drive rapid development forward, the fruits of the labor are kind of distributed equally to an extent, right, and I feel here one country is really pushing the envelope, at least when it comes to solar cells, right.
Speaker 3:So I mean the country pushing the envelope is of course China.
Speaker 2:China is responsible for half of the world's solar and wind and battery installations. Half of the world. Yeah, that's striking.
Speaker 3:And for the other half of the world, China supplies that as well. It supplies something like 80% across the cleantech supply chain. China's emissions are going up, and it's building lots of coal-fired power plants as well. So you have a conundrum here. It is installing wind and solar just about as fast as humanly possible, and it's still installing coal-fired power plants, and its emissions are slightly going up, or perhaps turning over right at the moment. What is going on here? And the answer is that China has exactly the same problem that a lot of other countries have, which is that coal production is concentrated in just a couple of provinces. The governors of those provinces are very powerful. They don't want to see coal being displaced, because it gives them a big economic problem. And so I think that Mr Xi will probably take a leaf out of Mrs Thatcher's book and wait until there's so much solar power that you can just shut coal down. That will happen, probably not as dramatically as that, but it will happen towards the end of this decade.
Speaker 3:I think China will have so much solar power. End of this decade, do you think? End of this decade. Wow. You will not see coal-fired power plants...
Speaker 1:Completely turned off, but gradually.
Speaker 3:But you will see China's emissions begin to drop quite quickly. Wow.
Speaker 2:I want to ask something how much do you think our language is going to change, given that a lot of our day-to-day speaking is in writing and in text form, and it's often giving prompts to or requesting bits of information from a machine? How much do you think? I know the machine is adapting to us at the moment and that's what we're being told and that's what we're training it for, but once it becomes embodied with robots and so forth, how will it emerge, and do you have any thoughts about that? Obviously, you can't answer that, because we just don't know.
Speaker 3:I suppose what the next 10 years hold is very difficult to foresee. A lot of the people in Silicon Valley expect that we will get to superintelligence within a few years: three years, five years.
Speaker 1:Demis Hassabis and Sergey Brin were saying 2030 on a panel recently.
Speaker 3:Yeah, yeah, I mean nobody is surprised by suggestions of 2030 or any time around then.
Speaker 3:Right. And superintelligence in this context means a large language model which is better than humans in every category of knowledge. So a better mathematician, a better scientist, a better linguist, you name it, better at planning the economy. I think that is a little unlikely, and the reason I think it's unlikely is because it tends to assume that a large language model can sit in its data center having great thoughts about the world, and these thoughts will allow it to achieve things in the world. Whereas it doesn't work like that. The way it works is you get out and you do stuff, and you learn very quickly that what you thought was a good idea doesn't quite work and it has to be modified.
Speaker 1:The feedback loop with reality.
Speaker 3:There's a feedback loop with reality and it's not just a digital feedback loop.
Speaker 2:It's a physical feedback loop as well, so simulation is not actually going to teach you everything you need to know.
Speaker 3:So you can simulate physical systems, but you can't simulate in detail the behavior of humans and you're going to operate through humans, at least for the foreseeable future.
Speaker 3:So it may well be that in some categories, for example mathematics, we get to superintelligence quite quickly, and I pick mathematics because it's a closed system that doesn't need interaction with the world, so you can imagine how an AI could be a super mathematician. Will it be a super entrepreneur, organizing people to do things? I don't know. I think there's a much bigger question mark over that.
Speaker 1:Or a super matchmaker between humans.
Speaker 3:Well, it's quite interesting, isn't it? I mean, we've had dating platforms which have come and are now largely going because humans don't enjoy the experience anymore, and you can imagine all sorts of innovations that at first sound like a good idea and then tail off quite quickly as AIs learn that humans are quite difficult to interact with in many ways.
Speaker 3:So I go back to my earlier point.
Speaker 3:It is only when AI is embodied in humanoid robots that are interacting with humans that we begin to see how we adapt to them and they adapt to us.
Speaker 3:And I think of this as the creation of a new kind of life, silicon life, and a number of people are saying just that. And the reason it's a new kind of life is that the combination of large language models, humanoid robots and dark factories creates a kind of silicon ecosystem where new generations of humanoid robots can be engineered by large language models and dark factories. So it's a bit like an ant colony, different bits doing different things, and it could become, I think will become, a new kind of life. And the interesting question is where will it emerge? So linguistically modern humans emerged in Africa 100,000 years ago. It looks to me as if this new kind of life will emerge in China, because China has the engineering across the board to put together this ecosystem, whereas in the US, what they have achieved with large language models is wonderful, but I don't think they have the organization to create this silicon life.
Speaker 2:I just have one question for you about large language models and how much energy they use to run and how much they've become like a plaything for humans. Is that really the best way to be using this energy and these resources? Should they not be being used? I know they are being used for protein folding and so forth and incredible medical discoveries and maybe simulations around climate change and how we can try and stem it.
Speaker 3:So, on the energy question, we're going through a five or six year blip in energy caused by the energy demands of data centers. I mean, this is all about training these enormous large language models, but this is something which is going to last five years, maybe a bit longer, and then we'll be done, because we'll either be at superintelligence or we won't, but either way, I think we'll be done. And it's only happening on a large scale in the US and China, whereas the demand for energy from the rest of the world is what's driving the growth in energy. So interesting.
Speaker 3:Yeah, I don't think in the long term demand for energy from large language models is anything other than a short-term blip. It's an important short-term blip.
Speaker 1:That's a good point.
Speaker 3:And when I said that we'll get through an energy transition in 40 years and we'll squeeze fossil fuels out of the energy system, that's including the energy demands from large language models. And then, what about the uses? Should we be using them for serious uses and not for playful uses? We are a playful species, and they are there for us to use as we wish.
Speaker 1:And I would argue that the investment that went into them is just because they had such large adoption so quickly, right?
Speaker 3:Exactly that. But the uses that are harder to see, and which are likely to have the greatest impact, are the scientific uses: developing new materials, understanding biological systems in greater depth. You mentioned protein folding. They are roaring ahead.
Speaker 2:You just don't see them so much.
Speaker 3:Yes, you see the announcement of new products every now and then. Yeah, and I think where the large language models will have their greatest impact is in scientific research. Yes, and we're at the very beginning of those J-curves, actually right at the beginning.
Speaker 2:But how easy is that to monetize? That requires funding. That's my point. So do you think, I mean, going back to your Sam Altman point, or the competitive nature of all of these things, and they need to make money...
Speaker 1:Yeah, there are hundreds of millions of dollars that I'm tracking that are going every week into new startups that are AI focused, and they're focusing on research.
Speaker 2:They're focusing on research.
Speaker 1:They're focusing on research: scientific research, medical research, all kinds of research, everything. And also, on the other side, you have open source language models that are being put into research universities, which can use this stuff at a small scale to really push research forward as well. So this is an absolute explosion, and, you know, we'll probably see it in 10 years, when it's really hitting an interesting intensity on the J-curve and we will feel it, when you're going to have drug discovery for things that we didn't even imagine we could sort out. And that's absolutely a J-curve. A J-curve is a J, so things
Speaker 1:go down before they go up. We're going down.
Speaker 3:It's going to go shooting up.
Speaker 1:Absolutely. Let's go back to energy and the transition that you described. Within 40 years, we could wean ourselves off of fossil fuels, which is a beautiful notion to contemplate. One of the bigger sectors that contributes to energy usage, and de facto to CO2 emissions, is, by and large, agriculture. So how is that going to play out?
Speaker 3:I just want to, let me, this is my old ClientEarth chairman speaking. Yeah, please. We are not going to wean ourselves off fossil fuel energy. We are going to squeeze the fossil fuel companies out of the energy system. Got it. That's what's actually going to happen.
Speaker 2:Nice.
Speaker 3:So will it be done through legislation and active measures? Rather than that, I think it will be done because demand for their products will just fall to zero, because clean energy will replace them, and the sooner the better. Now, something like 20% of carbon emissions, or greenhouse gas emissions rather, come from the agricultural sector. So we only get off creating greenhouse gas emissions if greenhouse gas emissions from agriculture come down on the same kind of time scale, 40 years, and I think there's every reason to expect that to happen because of alternative proteins. And there are two sorts of alternative proteins which are greatly interesting. One is microbial proteins, proteins grown in a vat from a bug producing them, and the other is cultured meat, meat grown from a biopsy taken from a cow or a pig or whatever animal it is.
Speaker 3:Starting with cultured meat: meat grown in a vat this way is genetically identical to farmed meat. Farmed meat is a mixture of protein and fat and a few other things, and you can create in a vat the same mixture of protein and fat and other things. You can adjust its flavour, so it is at least as good as, or better than, farmed meat. You don't need to throw in all the hormones and other chemicals that poor, intensively farmed animals have, and it will soon be highly competitive price-wise with agricultural meat. Over 10 years, the cost of producing cultured meat has fallen by something like 60% a year.
Speaker 3:Yes, a year, 60% a year. Ten years ago it was $300,000 to produce a hamburger. It's now $10 a kilogram and still falling. So there's every reason to think that cultured meat will displace ground meat or mincemeat from the food system over the next 10 or 20 or 30 years, and not because there's a mandate for that to happen, just because it will be cheaper and higher quality than farmed meat and people will want to eat it.
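[A rough consistency check, not from the conversation, using only the round numbers quoted above: a cost falling by 60% a year compounds to roughly a ten-thousand-fold reduction over a decade.]

\[
(1 - 0.6)^{10} \;=\; 0.4^{10} \;\approx\; 1.0 \times 10^{-4},
\qquad
\$300{,}000 \times 10^{-4} \;\approx\; \$30.
\]

[The two quoted prices are in different units (dollars per hamburger versus dollars per kilogram), so this is only an order-of-magnitude check, but it shows the stated decline rate and the stated start and end prices are broadly consistent.]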
Speaker 3:Once you displace farmed meat from the food system, an extraordinary possibility opens up. So agricultural land is 80% devoted to producing farmed meat, and if you replaced farmed meat by cultured meat, it means you could rewild or restore or reforest 80% of farmed land. That's about four times the size of the US. Now I think that cultured meat is going to take off quite rapidly. So in my view of the future, let's say by 2050, there will be two or three USs in size to rewild or reforest or curate or restore, whatever it is you want to do with the land. And if you ask me what will happen to all the people whose jobs are displaced by AI and dark factories, there is for them, I think, a wonderful future restoring the land that we have done our best to mess up over the last couple of hundred years. Going back to our hunter-gatherer days in some shape or form, being one with nature, without the hunting and gathering.
Speaker 3:Well, yes, I'm not sure it's hunting and gathering. It's gardening.
Speaker 1:Yes, it's gardening, gardening on a scale of three times the US.
Speaker 3:It's a wonderful place, helped by humanoid robots, of course. Yeah, there is a vision of an Arcadian future if we just manage our way to it.
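[For context, using round public figures that are not from the conversation: the world has roughly 48 million km² of agricultural land and the US covers roughly 9.8 million km², so the "about four times the size of the US" estimate above checks out.]

\[
0.8 \times 48 \ \text{million km}^2 \;\approx\; 38 \ \text{million km}^2,
\qquad
\frac{38 \ \text{million km}^2}{9.8 \ \text{million km}^2} \;\approx\; 4.
\]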
Speaker 2:Yes, you mentioned something in one of your talks about the move south, because of the need for solar energy and the transition to solar panels and so forth; that because the north takes an awful lot more energy to run, and we wouldn't have natural sunlight in the same way or as many hours of it. Can you dig into that, or have I misunderstood? No, no.
Speaker 3:This is a point from some while back, which is because, in general, as you move towards the equator, you get more sunlight, more hours per day of sunlight. Solar will forever be cheaper closer to the equator than further away from it, and if you have industries which are energy intensive, it would be logical to move them towards the south from the north.
Speaker 2:But will these be robot run industries, because it will be too hot for people to live in by that time.
Speaker 3:Well, it's interesting you raise that point. I said that robots would be better adapted to space exploration. They'll also be better adapted to working in uncomfortably hot places, so it might well be the case that robot factories powered by solar, located further south, are the way industry goes.
Speaker 2:The robots can be powered, I suppose, by solar.
Speaker 3:Yeah, exactly, robots powered by solar. I mean, China, of course, leads the world in battery technology and is so far ahead that it's very difficult to see how other countries will catch up, unless they enter joint ventures with Chinese companies and do the reverse of what China did 20 or 30 years ago. Batteries and fast charging have been developed for electric vehicles, but they're exactly what's needed for humanoid robots.
Speaker 1:One thing that I feel we haven't addressed, which I think you're very knowledgeable on, is when you project these trajectories that we have in displacing the oil industry, how are we looking at hitting certain targets in terms of heating of the planet?
Speaker 3:Ah, how much we heat the planet depends, among other things, on what the course of future carbon emissions or greenhouse gas emissions is. In a very, very simple projection of an energy transition that is completed within 40 years, the total amount of carbon emissions in that 40 years is something like two and a half decades worth at the current rate of emissions.
Speaker 1:Okay.
Speaker 3:So take a decade's emissions, multiply by two and a half and that's the total emissions over 40 years. Now the warming that's being produced by current emissions is something like a quarter of a degree per decade.
Speaker 1:Right.
Speaker 3:So you've got two and a half decades' worth of a quarter degree per decade, that's six-tenths of a degree of warming to come if the transition takes place within 40 years. Now we're at one and a half degrees of warming, so that will take us to 2.1.
Speaker 3:So let's just say that we should be in the low 2 degrees of warming when the energy transition is finished. That is much more optimistic than a lot of the forecasts that one sees out there. It won't make the conditions for humanity any easier, but it will be livable. And by then, by the time we get there, in 40 years' time, there will be a huge range of new technologies to help us through it. So, although in an ideal world we would have limited warming to below two degrees, I don't think we will. But I don't think it will go that much further above two degrees, and we will get through it.
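[The back-of-envelope arithmetic sketched here, written out using the figures given in the conversation:]

\[
2.5 \ \text{decades of emissions} \times 0.25\,^{\circ}\text{C per decade} \;\approx\; 0.6\,^{\circ}\text{C},
\qquad
1.5\,^{\circ}\text{C} + 0.6\,^{\circ}\text{C} \;\approx\; 2.1\,^{\circ}\text{C}.
\]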
Speaker 1:Right, that's an optimistic view, and I actually think that it's not the technology that will be the problem. It's more the morality that needs to catch up, probably to solve the problems along the way.
Speaker 3:Well, yes, possibly, but you know, capitalism is an amazing thing. It can be amazingly bad, but it's also amazingly good. If the price of renewable energy continues to be driven down, it is capitalism which will solve the problem that capitalism created. 100%.
Speaker 1:One thing that I wanted to mention still and we're jumping around a little bit, but when I think about meat cultivation, we're talking about bioreactors here, right, and so there's very little infrastructure in that regard at this point. How do you feel about the spinning up of all these bioreactors to create enough protein that is required by humanity?
Speaker 3:I can't remember how fast, how long it took to spin up the hamburger economy. Shall we say 40 years?
Speaker 1:Sure, yeah.
Speaker 3:So 50, 40, yeah, 50 years to spin up the number of hamburgers we now eat each year. I can't see why one can't replace those hamburgers in the same amount of time by spinning up bioreactors. Once the technology is price competitive, it's very straightforward to roll out.
Speaker 1:There's plenty of capital for that.
Speaker 3:It's just the economics.
Speaker 1:Yeah, you're right. I think we briefly touched on this, Howard. But when we look at all these timelines, and when we become a little bit philosophically minded as we do so, it seems that, especially with the population crunch and the emergence of silicon life, we seem to be a bit of a transitional species, us apes. How do you feel about our place and the way this is all playing out?
Speaker 3:So I think, first of all, we are in for a very exciting time. More than anything else, People alive today are going to see a whole series of transitions in AI, in energy, the creation of silicon life, the replacement of farmed meat with cultured meat, the restoration of huge swathes of the planet. I think that's going to keep us very, very busy for the foreseeable future. What happens as we emerge from this transition? Well, on the planet there will, I think, be humans and silicon life living symbiotically. It'll take a bit of getting used to one another, but that's what will happen. The Earth and the biosphere have been through many transitions before, and this is one more transition for life on Earth, and we get to see it. I mean, we're highly privileged.
Speaker 3:We are indeed. Oh, and let me just repeat: providing we don't screw it up, it will be fine.
Speaker 1:It's true, we need a positive vision to not screw it up as well.
Speaker 2:So this is appreciated, in a sense. When you described your first two intelligence explosions in the talk that we saw, this transition into hybrid silicon-human seems much less surprising, because you talk about the phases and how long they will take, and I feel, yeah, of course that's the next phase. But for people who feel that the pace or the rate of change is too fast and is overwhelming, and we have this expression, don't we?
Speaker 2:I'm a technophobe. It's just a sort of write-off of anything new that's coming in, because it is incredibly fast and we only ever know a tiny part of it, unless you happen to work in that world. So what would you say to those people? Or how do you remain inclusive?
Speaker 1:And get them along for the ride.
Speaker 3:I think, first of all, we have to shift from purely focusing on the negatives of this transition. It's very easy to list out all the things that could go wrong. It's much harder to see the great things that could go right, because they're not here yet.
Speaker 2:And we're nostalgic creatures.
Speaker 3:We're nostalgic creatures, and to some extent I'm playing to that, because I see how we could restore the Earth, yeah, the biosphere, on an enormous scale. And for the people who don't like the technology, there is something great, I think, to look forward to, which is a restored Earth, and the restoration is enabled by technology and economics.
Speaker 1:Yeah, and indeed, fear of the unknown has been a good instinct to make us survive and get here to begin with, but now, obviously, it's not necessarily working in our favor. Therefore, we need to fill the unknown with positive stories.
Speaker 3:Exactly that, yeah.
Speaker 1:Exactly that. Tell us a little bit about the Smith School, because I feel that you are talking about all these beautiful things in that capacity.
Speaker 3:Well, the Smith School is a very remarkable department of the University of Oxford whose main concern is to engage policymakers and entrepreneurs and businesses with the energy transition, so that the energy transition can go through as quickly as possible, and policymakers and entrepreneurs can contribute to making it go as quickly as possible and gain from it.
Speaker 3:I sit on its advisory board and I'm going to become its chairman a bit later in the year, and, like all universities, it has some quite remarkable, in fact a large number of very remarkable, academics who study how to speed up the energy transition, how to speed up the restoration of nature, and how to get their findings out into the world so they actually change the world. And there are some absolutely delightful stories that it has to tell. It was funded by the British government 10 years ago to run a pilot project, for example, to try to improve water quality for 10 million people in East Africa and in Bangladesh. It ran this project for 10 years, working with not only the people in those countries but the academics at universities in those countries, and there's a delightful movie where the academics say we used to sit in our offices writing papers and hoping somebody would do something.
Speaker 3:Now we're out changing the world and the result of the project is that, by using microeconomics to get incentives right, the water quality has improved for 10 million people and we're now being asked to do the same for 100 million people and, as I say, you just need to scale up one more time and you're at a billion people.
Speaker 3:So, by using the best of economics, the best of law, the best of technology, you can really make a difference, and that's one of the things that people at Oxford University are trying very hard to do.
Speaker 1:Beautiful. Well, first of all, thank you for telling a positive story of the future and for all the work you do, and thanks for taking the time to talk to us about it today.
Speaker 3:Thank you very much.
Speaker 2:It was great to be here. Thank you so much, Howard, it was fascinating. Bye.