Where Shall We Meet
Explorations of topics about society, culture, arts, technology and science with your hosts Natascha McElhone and Omid Ashtari.
The spirit of this podcast is to interview people from all walks of life on different subjects. Our hope is to talk about ideas, divorced from our identities - listening, learning and maybe meeting somewhere in the middle. The perfect audio diet for shallow polymaths!
Natascha McElhone is an actor and producer.
Omid Ashtari is a tech entrepreneur and angel investor.
Where Shall We Meet
On Trust with Jimmy Wales
Questions, suggestions, or feedback? Send us a message!
Welcome to the Where Shall We Meet podcast. Our guest this week is Jimmy Wales. He is an internet entrepreneur best known as the co-founder of Wikipedia, the free online encyclopedia that launched in 2001, creating a radically collaborative model that allows anyone to contribute to and edit what has become the world’s largest free encyclopedia. Born in Huntsville, Alabama, in 1966, Wales studied finance before working in Chicago as a trader.
Beyond Wikipedia itself, Wales also founded the Wikimedia Foundation in 2003, the nonprofit organization that supports Wikipedia and its sister projects, and later co-founded Wikia, now known as Fandom, a commercial wiki platform built around fan communities.
Over the years, he has become both an advocate and a symbol of the broader idea that knowledge can be created collaboratively and made freely accessible at global scale. His role has often been less that of a traditional executive and more that of a public steward for a radically open model of information.
Wikipedia is not just a website; it is a living experiment in whether strangers can cooperate, disagree, revise one another, and still produce something of enormous public value. That question of trust is central to his 2025 book The Seven Rules of Trust, in which he reflects on how trust can be built and sustained in institutions and communities.
We talk about:
- His seven rules of trust
- Trusting people to contribute wisely
- The mechanics of Wikipedia
- How a crisis led to an innovation
- News shouldn’t be entertainment
- How collective knowledge negotiates truth
- Can we find consensus as a society
- Crisis of trust in politics
Let’s search!
Web: www.whereshallwemeet.xyz
Twitter: @whrshallwemeet
Instagram: @whrshallwemeet
Welcome And Who Jimmy Wales Is
SPEAKER_00Welcome to the Where Shall We Meet podcast. Our guest this week is Jimmy Wales. He's an internet entrepreneur, best known as the co-founder of Wikipedia, the free online encyclopedia that launched in 2001, creating a radically collaborative model that allows anyone to contribute to and edit what has become the world's largest free encyclopedia. Born in Huntsville, Alabama, in 1966, Wales studied finance before working in Chicago as a trader.
SPEAKER_01Beyond Wikipedia itself, Wales also founded the Wikimedia Foundation in 2003, the nonprofit organization that supports Wikipedia and its sister projects, and later co-founded Wikia, now known as Fandom, a commercial wiki platform built around fan communities.
SPEAKER_00Over the years, he's become both an advocate for and a symbol of the broader idea that knowledge can be created collaboratively and made freely accessible at a global scale. His role has often been less that of a traditional executive and more that of a public steward for the open model of information.
SPEAKER_01It is a living experiment in whether strangers can cooperate, disagree, revise one another, and still produce something of enormous public value. That question of trust is central to his 2025 book, The Seven Rules of Trust, in which he reflects on how trust can be built and sustained in institutions and communities.
SPEAKER_00We talk about his seven rules of trust.
SPEAKER_01Trusting people to contribute wisely.
SPEAKER_00News shouldn't be entertainment.
SPEAKER_01How collective knowledge negotiates truth.
SPEAKER_00Let's search.
SPEAKER_01Hi, this is Omid Ashtari.
SPEAKER_00And Natascha McElhone, and with us today we have Jimmy Wales.
SPEAKER_01Hey Jimmy, how are you? I'm very good, thank you. Thanks for taking the time and joining us. It's good to be here. We want to talk to you today about your book on the topic of trust. But we thought we'd start with a little bit of a history of Wikipedia. Before Wikipedia existed, there was really no reason to believe that anonymous strangers on the internet would do anything other than destroy an open encyclopedia. What made you trust people with something this precious?
SPEAKER_02Yeah, it's true. And it's an interesting way that you put that, because these days people sometimes have a really rosy, romanticized view of the early days of the internet. Like, oh, the early days of the internet, everybody was so cooperative and it was all academics and this and that. I'm like, no, it was actually a cesspool always. And when we criticize social media, which I do quite a lot, for algorithmically promoting divisiveness and so forth, I say, but keep in mind humans can be awful to each other without algorithms. So there's no super easy solution. So actually, in the very early days, just before Wikipedia, I had a project called Nupedia, which was a top-down, seven-stage review process to get anything published. I didn't think of it that way at the time, but recently somebody said to me, oh, so that was the seven rules of mistrust. And I was like, oh yes, it really was. Because you had to fax in your CV to prove you were qualified. Every step of the way just oozed a lack of trust. It's like, oh, now we're gonna send it to copy editing because we think you probably can't spell, and that sort of thing. And so that was failing. That didn't work. In almost two years, we had finished almost two dozen articles. It was really slow and it was not progressing. And I knew it wasn't gonna work because I tried to write an article. My background, I was in finance. I did all the coursework, but not the dissertation, for a PhD in finance. And at that time, Robert Merton had recently won the Nobel Prize in Economics for his work on option pricing theory, which was my specialty. I'd read all of his academic work, I had published a paper about option pricing theory, and I thought, oh, I can write this. And I sat down to write, and it was just not fun. It was so intimidating, right?
Because they were gonna take my little biography of this guy and send it to the most prestigious academics they could find to review. It was like being back in grad school; it was very oppressive and not fun. And that's when I realized, okay, this isn't gonna be fun. So it was really that kind of moment of desperation, like, this isn't gonna work. There's a story in the book about my daughter being born and being very ill in the hospital, and it sort of doubled down my feeling that we really need this to work, and also a feeling of, whatever, we just have to rip this up. I was being too cautious, so I just launched the wiki. And the truth is, I didn't know at that time. I wasn't sure we would be able to trust people. I actually assumed that we would pretty quickly have to add more features to the software to sort of lock things down. Moderate, yeah. Yeah. And at that time, the way you imagine this sort of thing is, oh, well, we're gonna need an editor-in-chief of the American history section and an editor of the... but it doesn't work like that at all. It turns out none of that was going to be necessary. And in fact, the very, very earliest version of the software was called UseModWiki, which I installed because it was written in Perl, a programming language I knew, and it was a single script to install. Very, very simple. It was just a quick way to get something up and running. And that wiki software didn't even have real user accounts. You could set your username, but there were no passwords, so anybody could pretend to be you. And no email verification, right? Nothing. Literally, you could come and say, oh, I'm logging in, I'm Jimmy Wales, and then the next day somebody else could say, oh, I'm Jimmy Wales, and it would just accept that. There was no login.
I mean, it was crazy. Yeah. So I was like, okay, clearly that's not gonna work. So I did realize, okay, we need some sort of something. But it was that great pivot from a system that was very not trusting to saying, actually, let's just be as open as possible for as long as possible. Because clearly that very controlled, tight, academic, old-school approach isn't working. So let's be super open and not try to solve a problem before we actually have it. And as it turns out, Wikipedia is still incredibly open today. You can edit 99-plus percent of the pages without even logging in. It's kind of amazing. For the pages that are locked down, the most common form is what we call semi-protected. And the requirement there is you have to have had an account for four days, and you have to have made at least 10 edits without getting yourself banned. That's a pretty low barrier to entry to edit something like, you know, Donald Trump or something like that. But that became necessary just because otherwise there's a lot of drive-by vandalism and people just come and write doo-doo head or whatever. It's a waste of time.
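For the curious, the semi-protection rule Wales describes here, an account at least four days old with at least ten edits, roughly MediaWiki's "autoconfirmed" status, can be sketched as a simple eligibility check. This is an illustrative sketch, not MediaWiki's actual code; all names are made up for this example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Account:
    created: datetime   # when the account was registered
    edit_count: int     # total edits made so far
    banned: bool = False

def can_edit_semi_protected(account: Account, now: datetime) -> bool:
    """Roughly the rule described above: 4 days old, 10+ edits, not banned."""
    old_enough = now - account.created >= timedelta(days=4)
    experienced = account.edit_count >= 10
    return old_enough and experienced and not account.banned
```

A reader with no account at all would simply fail the check for those few protected pages, while the other 99-plus percent of pages stay editable with no check whatsoever.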
SPEAKER_00Yeah.
SPEAKER_02But it's still incredibly open.
SPEAKER_00And on that topic, sorry to jump in, what are the actual mechanics of the review process?
How Wikipedia Catches Vandalism
SPEAKER_02Yeah, so when you go to Wikipedia and you click edit and you make a change, that change is immediately shown on a recent changes page, and there are people called recent changes patrollers who are constantly watching that. There are also a few bots watching. These are not Wikimedia Foundation software; these are community-approved bots run by community members. They're doing things like: if you radically change the length of a page from 6,000 words to two words, and the two words happen to be a curse, probably that's vandalism, and the bot will revert it and leave a message, and then a human will check. That's just for speed. But the people doing recent changes patrol are constantly looking for vandalism and things like that. They're not necessarily fact-checking everything; they're really just that first line of defense. Then there are things like, pretty much all active editors have their watch list turned on. So anything you've ever edited in the past is on your watch list, and when you go to Wikipedia, you can look at your watch list and see what's changed recently. So that's the next level, of people saying, oh, hold on, in the last 24 hours this got added? Is that good? Is that not? Then there are other, more long-term things like what we call a WikiProject. So there's WikiProject Bridges, one of my favorites, which is all about bridges, suspension bridges, and this, that, and the other. And these are just a bunch of bridge geeks. One of my favorite stories there: I had dinner in San Francisco once with a guy who's an architect who, as I understand it, his professional work wasn't about bridges.
So he wasn't an architect of bridges, but he loved bridges and he was an architect, and he was one of the leaders of WikiProject Bridges. And he was very, very excited and very proud to tell me. He's like, there are three published lists of the longest suspension bridges. One is from Architectural Digest magazine about three years ago, one was, I don't know where, some authority. He's like, our list, I can guarantee you, is better than those two. Because we went through every single entry of their lists, we corrected some errors, we updated with new information. This is a great list. I'm like, thank you. Thank you. I'm glad. I'm very happy. If I go to that list, you bunch of bridge geeks have been looking after me, and there we are. And that's the phenomenon throughout Wikipedia. So actually, when we think about all the noise of the world today, all the noise of social media and claims of bias and this and that and the other, that's stuff we have to grapple with. But honestly, that is not the bulk of what goes on in Wikipedia. The bulk of what goes on in Wikipedia, just by sheer number of edits, is people who are really passionate about some period of history, or really passionate about Bollywood movies, or whatever it might be, who are just constantly working and editing and updating in areas that are pretty obscure to the rest of us, and generally not that controversial, although I have learned that humans can have a fight about almost anything. So there's always some controversy. Yeah, so that process has a lot of layers, a lot of features.
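The kind of bot heuristic Wales describes, reverting an edit that collapses a 6,000-word page into a two-word curse, might look something like the toy sketch below. Real anti-vandalism bots on Wikipedia are far more sophisticated (machine-learned classifiers rather than hardcoded rules), and every name here is illustrative.

```python
# Illustrative word list only; real bots use trained classifiers, not hardcoded lists.
PROFANITY = {"doo-doo", "doodoo"}

def looks_like_vandalism(old_text: str, new_text: str) -> bool:
    """Flag edits that blank most of a long page or replace it with profanity."""
    old_words = old_text.split()
    new_words = new_text.split()
    # A long page shrinking to almost nothing is suspicious on its own.
    drastic_blanking = len(old_words) >= 1000 and len(new_words) < len(old_words) // 100
    # A tiny replacement containing profanity is even more so.
    profane = any(w.lower().strip(".,!?") in PROFANITY for w in new_words)
    return drastic_blanking or (profane and len(new_words) < 10)

def patrol(old_text: str, new_text: str) -> str:
    """Mimic the flow described above: the bot reverts suspect edits; a human checks later."""
    if looks_like_vandalism(old_text, new_text):
        return old_text  # revert to the previous revision
    return new_text      # otherwise the edit stands
```

The point of the design, as described in the interview, is that the bot only handles the fast, obvious cases; ambiguous edits fall through to the human patrollers and watch lists.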
SPEAKER_01So the correcting force of geekdom is the arbiter of truth here.
SPEAKER_02Oh, yeah, definitely. So every year we have an annual conference. We do it in a different city every year, with almost an Olympic-style process: different groups put together their bid and do their pitch. I don't get involved in it; I just wait until they tell me where I have to go. But one year we had it in Alexandria, Egypt, which was really fantastic. And at our big closing dinner, a friend of mine who's not really a Wikipedian had come by just to visit. And we sat down to dinner with, by random chance, the majority of the English-language arbitration committee. The arbitration committee is like the Supreme Court of Wikipedia. And we had dinner, and these are super nerds, really geeky people, and we're discussing all the interesting stuff that's going on in Wikipedia, this problem or that problem, and a lot of funny things that happen, and so on. And when we left from dinner, I was like, you know, just reflect for a second: you just had dinner with some of the most powerful people in English-language media. And aren't we really glad that it isn't Rupert Murdoch? It's these nerds, right? And they're really, really passionate about getting it right and about neutrality and about all that good stuff. So yeah, it's quite something.
SPEAKER_00And how did that Supreme Court get selected?
SPEAKER_02So they are elected by the community and technically appointed by me, but that's sort of a default these days; I don't actually do anything there. And they serve one-year terms, and then they can get elected again, and this, that, and the other. People sometimes serve for several years. And then there are also administrators. Administrators are a lower level, and they have certain powers in the software. They are also elected by the community. It's actually quite hard to become an admin, and that's something we think is a problem, but nobody can quite agree how to fix it, so it just goes on being a problem for years. But what's interesting about how it works is that the admins are not judge, jury, and executioner. It's all very transparent, and we can talk about how this differs from, say, social media. As an admin, there are a lot of rules about how you conduct yourself. It's really closer to a police role than a judge or jury. And if you're not doing a good job as an admin, you can lose your admin privileges. One of the things admins do, if a big fight breaks out and it's just not productive, is that an admin can lock the page temporarily. Well, one of the rules is you shouldn't lock a page where you have been actively editing.
SPEAKER_03Right.
SPEAKER_02Because then you would have an unfair advantage in a matter of discourse and debate. You should steer clear of it. And indeed, best practice is to steer clear of any area that you're involved in. It's like, okay, you mainly edit in Bollywood movies; that's not an excuse to lock a page about Bollywood movies that you didn't happen to edit, if it's a wider conflict about, well, whatever the fights are about with Bollywood movies, there will be something. And it's very different. In social media, the moderators are employees, and it's very top-down. If somebody deletes your post and blocks you from Facebook, you have no idea who it is. It's really hard to figure out what happened, it's hard to appeal, and so on; it's all very not transparent. In Wikipedia, that's all very open: it's clear who did what and why, and so on and so forth, and you can argue about it and all of that. And actually, hearkening back to the history, it's one of the interesting historical circumstances of Wikipedia. I think it's brilliant, it's worked out very, very well that we put it together this way. But it also is because Wikipedia is a child of the dot-com crash. There was that moment in time. So originally, I was the only person who could block people from editing. Remember, the software was so simple it didn't even have any kind of block. In fact, all I could do was block IP addresses at the server level. I could stop people from editing or stop them from accessing the site if they were really bad, but it wasn't great. Later we could block accounts, though you can always just create a new account, and all of those kinds of things happened. But if we had had the ability to raise money... and Wikipedia was growing very quickly at a time when basically Silicon Valley venture capital was shut down. Right. They weren't investing in anything; it was a complete bloodbath.
Well, it would have been a natural thing to say, oh, okay, actually now we're starting to scale. We need money, we're gonna have to raise money because we're obviously gonna have to hire some moderators.
SPEAKER_03Right.
Admins Arbitration And Transparent Power
SPEAKER_02And had we done that, we would have ended up with a very top-down system, very much like any social platform, YouTube or whatever, where you just hire staff to manage all the community management. We had zero money, we had zero ability to raise money. So that forced us to figure it out, to say, okay, how can we devolve responsibility into the community? It sort of felt easy and natural at that time because it's a collaborative effort. That's different from a lot of social media, which is performative. If you're a YouTuber and you're making great videos, that doesn't mean you have any ability whatsoever to moderate YouTube. Those are completely different. Whereas if you're editing and you're discussing facts and information with people, you probably can do the moderation piece. So it was quite easy, and these are all volunteers. I remember in the very, very earliest times, I would wake up in the middle of the night and go downstairs and check to see, has somebody trashed the whole site? Because it was small enough that you probably could have, in an hour's time, made a mess. And I quickly realized, oh, hold on. When I'm asleep, there's this guy in Australia, and he's up and he'll be editing Wikipedia, and that's brilliant. I trust him; he's looking after things. That's great. And so it was pretty easy, once I added the admin feature to the software, to start handing out administrator bits and say, okay, you can be an admin. Here's what you do, here's the rules. I will take it away from you if you're not behaving yourself. And then slowly we built the institutions and came up with, oh, well, how should you become an admin? Because again, it doesn't scale for me to decide who's an admin or not.
That really wouldn't have been very effective, although I would have made a lot more people admins in the last couple of years, because we need more. Instead, my role has always been a bit of a safety valve, sort of like there's an understanding that I could call elections, for example. Much like the Queen here, or King now, I guess. Because when we first instituted the arbitration committee, there was a significant concern in the community: what if the arbitration committee becomes ideological or sort of tyrannical or whatever? And I was like, well, don't worry about that. You trust me, more or less; I mean, I'm here, you don't have a choice, I made it. I will retain the ability to say, if there's sufficient outcry in the community, I will call new elections. And that's never happened, of course. But that's kind of like the checks and balances approach that we've taken.
SPEAKER_01Right. So a benevolent dictator in the background to pull the brake if something goes wrong. I love the story about constraint breeding creativity here: you couldn't raise money, and therefore you had to figure it out. That's actually really exciting.
SPEAKER_02Yeah, I think it's very interesting. And I think there is definitely a truth to it, but I also think we should be careful about it as a broad general principle, because another thing that breeds creativity is freedom and luxury. You had the ability to take a risk, and in a certain way I did, so I was able to do this; I had the time and the financial capacity to just go, let's try some things. So there's a bit of both, right? But I do think, for a lot of hyperscaling businesses, a lack of discipline can lead them down a very dark path, because they just throw money at any problem and they haven't actually solved the problem.
SPEAKER_00Right. One of your seven rules of trust is a clear purpose, right? Yeah. And from all that you've just said, there's this swarm of the geeks towards this feeding ground where they suddenly get to flex and express their particular passion with such fervor and such commitment. There's such a clear sense of purpose there, isn't there? The idea of coming to corrupt it just seems counterintuitive. This is a place where you're sharing knowledge, your hard-earned knowledge, with the whole world.
SPEAKER_02Yeah, it's absolutely crucial, and it has helped a lot with our decision-making, say at the Wikimedia Foundation level. So again, early days: we're older than Gmail, so when Gmail launched, a couple of people said, oh, hey, maybe we should start offering webmail accounts, because people are using their personal email addresses to do Wikipedia work. And I was like, you know what? Our purpose is to build an encyclopedia. Yes, that might be popular. Yes, it might attract new people just to use email. But it's not our purpose. And because we weren't a commercial business with ad revenue and all of that, it was just, no, let's not do that. Let's just stick to that simple purpose, which we always have. In fact, it was a very, very long time before anybody had an official email address. Eventually it became kind of important and necessary.
SPEAKER_01Briefly, because you brought one of them up, let me highlight the seven rules of trust as outlined in your book. The first one is make it personal. The second is be positive about people, the third is create a clear purpose, as you mentioned, then be trusting, be civil, be independent, and be transparent. Let me focus a little bit on that last one, be transparent. I heard you talking on a podcast about the political slant The Guardian has towards electric cars, while the Telegraph has a different one, right? Yeah. Every time we have, I'd say, expert knowledge about a specific topic, the authoritative edifice of some of these mainstream media institutions starts crumbling a little bit, because you realize, oh yeah, they have a certain opinion that they're trying to push on people here. How can you do this better, be more transparent, when it comes to these opinionated things? Is there a way this can be done, maybe in a decentralized way? You tried Wikinews before, right? Why did that not work out? And is there something to be learned from that experience?
Purpose And The Seven Rules
SPEAKER_02Yeah, yeah. So there's a lot there. One of the things that I argue for in the book is that the news media is currently at very close to all-time lows in terms of public trust. And there are reasons why they got to this place, having to do with the financial pressures on journalism and so forth, but they have become more opinionated and therefore less trustworthy. To go into that in a little more detail, there is research showing that when newspapers make a political endorsement, they lose trust, not just from the people who don't agree with the endorsement; even the people who agree with the endorsement trust them less. And the reason is: yeah, okay, I support this politician, but I don't want just propaganda from them. I can read their press releases, right? I actually want fair news coverage. If they're doing something wrong, I need to know about that. Or if they're doing something that could be criticized, or maybe their policy isn't as strong as I would have hoped. I actually want to know all that, and I don't want to just be fed news that tells me what I already believe. That isn't actually helping me. So that matters, and I do think the media needs to really strive for more neutrality. At the same time, I'm not saying every news outlet needs to adopt the exact same kind of BBC-style passion for absolute neutrality, which of course the BBC doesn't always achieve, but it is part of their DNA and it's part of their remit. I would argue they're pretty good at that, though they screw up from time to time. So I don't mean that exactly, but I do think, if you say, oh, we are a center-left newspaper, that would be The Guardian; we are a center-right newspaper, that's The Telegraph. Okay, that's fine.
You've got a market position, it's a competitive marketplace, you want to cultivate a certain kind of audience; that's fine. But you want to be pretty transparent about it. And even then, you also want to be respectful of the minds of your readers: they want to read a center-left newspaper, but they don't want to be fed left-wing propaganda all the time. They want to understand things in a balanced way, even though, we understand, they're gonna agree with us on these issues and so on and so forth. So I think it's a delicate matter in many cases, but I also think oftentimes it's just pretty bad when the media isn't upfront about it. I actually think the Telegraph has gotten worse in the last few years, being a little bit over the top, so maybe not quite as center-right, just right. But also it's about the style. It's like, oh, it's fine: this paper stands for these things, and you've been transparent about that. That's great. I still don't want you to hide information from me or misreport facts to try to make your case, because now I'm reading your campaigning as opposed to just reading something with a bit of a slant.
SPEAKER_01Yeah.
Why News Trust Collapsed
SPEAKER_02And so, the Wikinews effort. We just, after quite a long time, shut down Wikinews very recently. It had been not very successful for a very, very long time. I actually personally tried WikiTribune because I had an idea: okay, that's not working, I've got another idea of how to do it, combining both professional journalists and community. That also didn't work. And there are a few reasons for that. One of the things is that I think a lot of journalists have the attitude, which I think is mistaken, that we're the only people who can explain the world to you. And it's like, really? People are actually quite smart, and journalists can be quite smart, but you're probably not an expert in everything in the world. So that ability to explain the world to people is not as rare as you think. However, what is hard is having the resources to be there and to pursue a story. That is a full-time job. So when we think about community members, volunteers, trying to do journalism, even if they would be quite good at it, most people don't have the time. It's operationally expensive. You need a professional to do that. And there are other elements as well. For example, if you want to cover local politics, you probably need to build some relationships. You need to actually know the local council members, and that has its dangers and downsides, of course, but it's actually important. You need access, you need to be able to get the nuance of who's going to vote which way and things like that, which a lot of people don't have time for. And unfortunately, if it is purely volunteers, you can imagine: a classic local news type of issue is about development. Should we put in this new road or should we not?
And the people who are available as volunteers, who are not paid by the news organization, might either be activists who are against development or developers who are in favor of it. And that's fine. Those are voices in society that need to be heard, but they are probably not the journalists, right? A good journalist should be there. And I think people can be good journalists even if they're not being paid to, but who's gonna bother to turn up to the city council meeting and all of that? Maybe it's only gonna be the activist. So I think there are a lot of issues with citizen journalism that, having explored it quite thoroughly, I think are really hard to solve. I'd say the biggest one that we had is just, who's there? I remember this sort of fun little example that occurred, I would say, probably in the first year of Wikinews. And I should have said, oh, this is never gonna work, what am I doing? But okay, we let it go on for a long time. Near my house, there was a truck, some sort of maintenance truck or whatever, and there was a hole, like a sinkhole in the road, and it fell in, with one wheel up in the air. It was quite a picture. I took a picture; I'm like, this is amazing. It looked like no one had been injured, but it was the kind of cute local news story where I was like, oh, I could report on this, but all I've got is a picture. And if you asked me to write up what happened, I'm like, a truck kind of fell in a hole. I don't actually have any idea. I would have liked to call the city department and call the owner of the truck and all that, but who's got time? I went to work instead. I'm like, okay, well, I'll take a picture. That was pre-social media, so now I would probably just post it somewhere and go, ooh, look, a truck. But that's all I know.
So it's hard for people to do citizen journalism in that sense, because a lot of journalism depends on actually having the time. It wouldn't have taken long, but who's got two hours to ring up the truck company and say, what happened to your truck? Why was it halfway in the hole?
SPEAKER_00How much do you think it has to do with media, the blurring of entertainment and news and politics, the angertainment, the amplification? I just feel like the fourth estate, or whatever it was meant to be originally, has slightly been lost. It's got swept up in getting eyeballs, people's attention. Yeah. So something like WikiNews is quite dry, it's quite fact-based, and it's very trustworthy, and people don't actually want that. You were saying that we want an as-objective-as-possible news source. I disagree. I don't think people do want that. They want their own opinion confirmed first.
SPEAKER_02I'm somewhat sympathetic, but I think they do. And to give my evidence: I think people want both. People can want both. So BBC News is incredibly popular, and they do strive for neutrality. They don't always hit it, of course, but, you know, as well as they can. And the big quality newspapers, to make a really rough generalization: the financial newspapers tend to be higher quality, they tend to be quite neutral, and they tend to report the facts straight. I think there are a couple of reasons for that. One, they largely have a subscription model, right? So they're not chasing clicks. Two, if you're reporting on business, and I know this because I used to be a trader, a futures and options trader, in making business and investment decisions, I don't need you to blow smoke. I actually need to know the facts. Say I'm a huge fan of, I don't know, let's say NVIDIA, right? And I'm gonna invest in NVIDIA because I love them. Does that mean I only want positively biased NVIDIA news? No, probably not. I actually want to know what the problems are, because there's money at stake. So I think people do want it. And I also think ordinary people don't really want to be completely propagandized to all the time. Now, at the same time, in a media environment that involves algorithms and clicks and business models attuned to that, there is this natural tendency that the screaming headline gets more attention. And that's, you know, annoying.
I mean, one of the things for me, what I used to do is I always would use Google News, but I would occasionally delete my cookies, because I actually do like getting a nice mix of news sources. But when the algorithm gets to know me, it knows what I'm gonna click on, and then it shows me more of that. And what I click on in a moment is often, like, a story about the royal family. Do I really want to read stories about the royal family? I honestly don't, but, you know, I click it. And so I kind of wanted to say: could you please wipe that from time to time? So it's both again. As a human being, I have my personal interests, things I'm drawn to in a more short-termist kind of way. And then there's: what do I really want to read? How do I really want to spend my time? And with both of those competing, it isn't as simple as saying, oh, if chasing clicks works through bias, then it's because people want bias. Well, they want bias, but they also want everything else too.
SPEAKER_00Well, it was more about entertainment, you know. The president of the United States is an entertainer. He was an entertainer. Correct, yeah. And he's a sort of clownish figure, and he's very entertaining, but he's now the president. Those things have become conflated.
SPEAKER_02They definitely have. And, you know, that is very unhealthy. I mean, I think it's hard to say it's in any way healthy. And the truth is, we've always had that to some extent. We've always had tabloid headlines, we've always had politicians who are grandstanding populists and so forth. But we have a particularly acute problem with it these days.
SPEAKER_01I think the main point I'd like to put to you is this: we've realized that the landscape of technology, through algorithms, as you pointed out rightfully, amplifies things that are either exceptional or extreme. And extreme is the easier path, because being exceptional is not common. Yeah. And therefore a lot of people tend to do the extreme thing, even the news media that shouldn't, because they need the clicks, they need them to survive from a commercial point of view. Now, is this actually also changing humans and what humans want? Because we could say this is just a tool, and the tool exists in its own isolated fashion, without a feedback loop with society and the neural networks of the people interacting with it. But I worry that it does have one, and that it has really changed the way we think. Because you're right, strongmen have always existed, all this stuff has always existed, but is there a qualitative difference here? Is something happening? Well, actually, I don't think so.
Why Wikinews Could Not Work
SPEAKER_02I'm happy you don't think so. But it's a good question. I mean, certainly with teenagers and young people, a very common complaint is: oh, they're just flicking through their TikToks and their YouTube Shorts with an attention span of thirty seconds, and so on. And I'm like, well, hold on a minute. I remember when MTV first launched. Music television, which, for the young people listening, used to show music videos. I don't even know if it exists anymore; I think it must. And adults were like, oh my god, you're watching a TV show that's literally three minutes long, and then it switches to something random and completely different. You have no attention span, man. What is wrong with you? And when I talk about kids today, I'm like, okay, these same kids who, yeah, do love a little short-form video, also binge-watch eight straight hours of super complicated TV shows, right? They're watching Stranger Things, which has a huge ensemble cast and many interlocking storylines. Game of Thrones, all this stuff. So again, we can say, oh, social media is ruining kids' attention spans, but these are the same kids who have a massive attention span and will sit with a really complex piece of art for a very long period of time. And that's good news. So I don't know. I just think it's easy to be too simplistic about these things. At the same time, I do think we have a political environment that is just so ridiculous these days, and so toxic, that there are a lot of conversations that could be had in a much more productive and constructive way. One that I'm even hesitant to bring up as a topic, without taking any position on it: the whole trans issue is so unbelievably toxic, right?
That, you know, people are having a really hard time actually having thoughtful conversations about it, certainly in the media. In private life, people probably aren't raging at each other. And now I'm gonna take a position that may be controversial, but I think it's what most people think, which is: everybody should be able to live their life with respect, and people should live their lives as they see fit. And, I'm not so sure about this in sports, with this swimmer who's recently transitioned and is beating all the women. I don't know about that. And I'm not so sure about using drugs and surgery on teenagers, on the very young. I don't know, something about that. But that is not a person who wants to kill trans people. That is actually where I think most people are: oh, this is a hard issue, there's a lot of nuance here that we could explore and think about. What is the best way we can come to some solutions to these issues? There are no easy answers and all that. But instead, what we get in social media is just raging at each other, and it's not helping.
SPEAKER_00Pulling on a couple of those threads: it's a coincidence that, I think, Audrey Tang is trans, or identifies as trans, and is someone that you mention in your book. A fantastic interview with Audrey. Yeah, I love that whole segment about finding consensus and realizing that actually we have so much more in common than we have that pulls us apart. And the template of a wiki, there's a crossover with that form of government, and, what was it, Polis, the platform that they used to gather opinion.
SPEAKER_02Just to explain, for the listeners who haven't yet read my book but are gonna rush out after they listen to us: Audrey was the first digital minister of Taiwan. And they did this project of bringing citizens together, using a digital platform called Polis, to try and work through various decisions that the government needed to make. And it was really well done, in the sense of, so, I remember when I first moved to the UK, they had just done a sort of public consultation. I remember going to see Rohan Silva at Number 10 about this. They had asked people: send in your examples of government waste and what might be done about it. And they got like 30,000 submissions, and then they were like, uh-oh. Now we've got 30,000 comments and we have no idea what to do with them or how to process them. I actually think AI might be useful for exactly that these days, but back then there was no way of doing it. This was more like a citizens' assembly, discourse, dialogue. They would bring citizens together and chew on the issues, and basically what they found is that ordinary people weren't super polarized about things you might think they would be. Actually, people are like, oh, I see your point, but we also have to accommodate this. Okay, yeah, so maybe we can do that. And you get to what you think is the ideal in society, which is: we all live together on the planet, we've got to get some things done. You know, questions like, I just saw Tony Blair said something about maybe we need to start drilling for North Sea oil or whatever. Is that because he hates the environment? Probably not. Is it because he's concerned about energy security? Definitely. Is he right? Is he wrong? People may have different views on that, but could we have a constructive conversation about it? That would be fantastic.
And that's the kind of thing where, again, I think most people would say: oh, yeah, actually, being so dependent on imported energy could be quite a bad idea. And yet we don't really want to start drilling for oil, so how are we gonna thread the needle here? And we can have discourse about that. So what they do is...
SPEAKER_00If this makes policy, we're somehow much more invested, aren't we?
SPEAKER_02Yeah, yeah, definitely. And there have been multiple times where I've had someone disagreeing with me on social media and I've just pinged them privately and said: look, let's have an email exchange about this thing we disagree on, in private. Because if I'm being performative and you're being performative, we're never gonna get anywhere. And particularly if we're at the level that the media is gonna notice what we're doing, then suddenly it becomes really hard to just have a proper chat. So, yes, that performative element is not good. And, you know, I think a lot of politicians struggle with this, right? They need to be able to discuss issues. But one of the problems we see is that the US just went through the longest government shutdown in history because it's become so partisan. I'm just like, well, don't they have dinner parties in Washington? Couldn't these people get together in private, with trust in each other, right? And that's where trust comes into it: to say, actually, we're just gonna have a private conversation, we're gonna hash some of this out, we're gonna figure out what you need, what I need, and it isn't gonna be performative. And I trust you enough that if I say I might be able to give you this, you're not gonna announce that I've given you that. But, I mean, if you've got somebody like Donald Trump in the White House, I don't think he's a good-faith negotiator.
SPEAKER_00But on that point, what I thought was really interesting about what you said about Tang is that a lot of these discussions were on video, they were live-streamed.
SPEAKER_02Yeah, yeah, yeah.
SPEAKER_00There was nothing private, they were totally transparent.
SPEAKER_02And I think a lot of that, though, was that the people involved were not trying to be social media influencers, they were not politicians, they were just ordinary citizens. So they didn't feel a need to be performative.
SPEAKER_00Because there was the trust that their opinion might actually matter, might actually contribute. That's what I'm trying to drill into: what you highlighted there about trust, and therefore rising to the occasion and the request, which is: participate in government, participate in democracy. Yeah. Rather than just stand on the sidelines, disempowered, criticizing, waiting four years for the next election cycle.
Algorithms Anger And Performative Politics
SPEAKER_02Well, one of the sad consequences of the crisis of trust in politics is that when offered definitive proof that a politician lied and behaved in a way you don't approve of, too often people are willing to overlook it, because they think all politicians are criminals. And that isn't actually true. They aren't all criminals, right? Some of them are, you know. And it's like: would you rather have somebody who is lying and has absolutely no principles, or somebody who has principles and policies, even if I don't quite agree with all their policies? I'm going for the second one every time. Now, if you ask me, what about somebody who's quite extreme? Oh, it gets hard, right? Because actually, I don't want to elect an extremist. So if my choice is between somebody who's really an extremist that I very much disagree with on policy, and a person who I don't think is very trustworthy, that's a sad choice. Hopefully we don't have that kind of choice too often. Because in many cases, among mainstream politicians, the differences on policy are actually pretty minimal. And this is true even in the US. There are certainly some extremist politicians in the US on all sides, but the vast majority of your centrist Democrats and centrist Republicans, yeah, they've got policy differences, but they aren't going to collapse the country one way or the other. And they ought to be able to come together and say: actually, a little give and take. This is the one I think is more important; this I can give you. Or even, and this is very ambitious, the level of trust to say: my party's in power now, but we're going to preserve some of the rules that disadvantage the party in power, because I know I'm not always going to be in power.
And so, like the 60-vote rule in the Senate, which has been steadily eroded. It's like, okay, you're in power with a 51 to 49 majority. If you blow that up, just remember it's going to be 51-49 in the other direction soon, and they're going to blow it back up at you. And maybe that 60-vote rule says: for certain things, like Supreme Court nominations, it's important enough that we actually want to get closer to consensus, not just ram through our person.
SPEAKER_01Yeah.
SPEAKER_02I mean, that's easier.
SPEAKER_01Said than done. The sensible thing is not always the easy thing. You brought AI up briefly there. On that topic: after I listened to you on that podcast, I actually asked an AI about electric cars, and I thought it did well. And, you know, I also heard your criticism of it, using your litmus test, which is asking about your wife. Yes. I think with people queries it's really bad. And sometimes people will criticize even Wikipedia and say, hey, my entry is not ideal. People are always tricky. But if you ask about electric cars, I think AI actually does a really good job of giving you both sides and not being too extreme. And some studies have shown that people actually take AI advice as much more neutral, and take it more seriously, than advice and guidance from a human being. How does that sit with you? How do you think about all this?
SPEAKER_02It's a moving target. Obviously, with AI, one of the problems is hallucination, which is still quite bad, but there's also the sycophancy. If you say to an AI, and we should probably test this, but we don't have time while having a chat, if you say: hey, I'm really, really a fan of electric cars, but I've heard that because they're heavier, the tires scrub more and there are more particles going up. Is that a real problem? It's very likely to say: that's a very insightful question. I'm like, oh, come on. It's just a question. You don't have to suck up to me. But it is also very likely to say: well, here are the facts. And actually it can be quite helpful and quite good. And I do think a lot of people are using it for advice, career advice, this, that, and the other, and it's probably not awful. Actually, I was with some psychotherapists at a dinner party. One of them said they had a patient who said: oh, I've been talking to AI to get advice about my issue. And alarm bells went off. It's like, well, hold on, can we look at that chat together? Because people have had experiences of going down a really weird and dark path because of its sycophancy and so on. And the patient agreed, and they looked at the chat together, and it was pretty textbook good advice. It had clearly read the leading research in the field, and for this particular issue, the advice it gave was actually quite good. And they were like: oh, so that's probably okay, even though there should be that caution, right? Particularly for people with mental health issues, there are some interesting bad consequences that can happen.
That will have to do with how you use it, but obviously a person who has already got some mental health issues may be very tempted to use it in a way that confirms their delusions. You know, if you are someone who's having trouble sticking to reality, and I'm not a psychologist, I don't know the proper technical terms, but say you're delusional, and you're struggling, and you're like: I'm very concerned, because I think aliens are leaving me messages in my coffee grounds, right? I would be concerned about what it says. It probably will say: probably not. But if you keep talking to it, explaining why, it will say: oh, well, you have a good point there, because it will say that. And I can see you could probably talk it into a really weird place, because truthfully, it's not an intelligence, not really. And it can be very problematic. So, yeah, I don't know where I come down on all that. It can be useful and give you good advice, but it can also go very far astray.
SPEAKER_00Do you think a lot of your wiki entries are now AI generated?
Taiwan POLIS And Finding Consensus
SPEAKER_02I don't, but it is a very interesting question. So the community, and I would say different language versions have a different attitude at the moment, though it's only a matter of degree, English Wikipedia is getting a reputation as being quite anti-AI. Is that fair or not? I don't know, but that's a thing people are noticing. Interesting. There are definitely people in the community who are quite eager and excited about how it might help us, and there are people who are like: it's nonsense. And certainly, if you ask an AI to just write a Wikipedia entry, it will be very bad. You mentioned this anecdote, which I should repeat because it's not in the book. My wife is Kate Garvey. I ask every new AI that I test: who is Kate Garvey? And it's always wrong, and it's always amusing to me and to her, and it's usually quite plausible, and that is what is dangerous about how it gets things wrong, because it predicts, you know, a likely next token. So it says things. And she's kind of the perfect example, because she's not a famous person, but she is known to some extent. She worked for Tony Blair for ten years, and she's done work promoting the global goals, the Sustainable Development Goals, so she's been in the press a little bit for that. Our wedding was in the press, which is important for the second part of this joke. It's not a joke, it's actually true. So it once said that she set up a nonprofit to promote women's empowerment in the workplace. Definitely sounds like Kate; also not true. And it said she set it up with Miriam Gonzalez, who's Nick Clegg's wife. Also very plausible. We know Miriam and Nick, and our kids go to the same school. Obviously I know Nick from his time at Meta and so on. So great, okay, weird, completely false. But then I always ask: and who did she marry?
And this is always very amusing, because usually it's a UK politician or a UK political journalist, people we know in our social circle here in London. But my favorite was when it said Peter Mandelson. And I was like: oh, isn't Peter Mandelson quite famously gay? And then it got very woke with me and said: it's not really appropriate to speculate about people's personal sex lives, and gay marriage is legal in the UK. And I'm like, not really my point. Anyway, so you can't use it to write a Wikipedia entry, certainly not that way. At the same time, I think there is some interest and excitement about using it as a tool in a much more limited way. Just one example: I was talking with someone this week, and I'm actually maybe gonna code this up later this week. I just do this because I'm a bad programmer and I like playing around. Every Wikipedia entry has references and footnotes. So you could easily take a paragraph, take the footnote at the end, go and get the link, and ask: does this link support that paragraph or not? Generally it's gonna say yes, because, like many people, I use Wikipedia a lot, and I often read the sources and I'm like, yeah, okay. But there will be cases where it could say: actually, that goes further than the source. Or it might say: no, that source doesn't say that at all, this is something completely different. So I'm wondering if you could make a tool. The way I would start is very, very, very cautiously: only tell me if it looks egregious, if that source absolutely does not support what's in Wikipedia. And then post a note on the talk page to say: oh, hi, I'm a friendly robot. I know I'm not good enough to edit Wikipedia, but I did notice this. And I'm like, oh, could that be useful?
Could it? Because a lot of Wikipedians come to Wikipedia as a hobby. They didn't come with an axe to grind; they just came to write about whatever. Sometimes they just like to click "random article" and see if there's something they can improve. Often it's in their area of interest. I see a book here about Japanese knives, so maybe you're a Japanese knife geek, and I'm sure we've got a number of articles about knives. So they might say: oh, actually, if a bot was going around all of Wikipedia, and on one of the pages I've been working on it points out, the article says this type of knife was developed in 1642, but the source says 1648. Oh, okay. That's interesting. I might never have noticed that. It might have been an error in Wikipedia for a long time, and a bot could probably recognize that. And that was not possible to do twenty years ago at all.
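The citation-check bot described above can be sketched in a few lines of Python. This is a minimal, hypothetical sketch, not anything Wikipedia runs: the `toy_judge` word-overlap scorer stands in for the real "does this source support this paragraph?" LLM call, and the function names and the 0.2 threshold are illustrative assumptions.

```python
def toy_judge(paragraph: str, source_text: str) -> float:
    """Crude support score: fraction of the paragraph's words that also
    appear in the source. A real bot would ask an LLM instead."""
    para = {w.lower().strip(".,!?") for w in paragraph.split()}
    src = {w.lower().strip(".,!?") for w in source_text.split()}
    if not para:
        return 1.0
    return len(para & src) / len(para)


def flag_egregious(pairs, judge=toy_judge, threshold=0.2):
    """Err heavily on the side of silence: draft a talk-page note only when
    the source clearly does not support the paragraph at all."""
    notes = []
    for paragraph, source_text in pairs:
        if judge(paragraph, source_text) < threshold:
            notes.append(f"Friendly robot here: this source may not support: {paragraph!r}")
    return notes
```

The design choice matches what Wales describes: the bot never edits, it only posts a note, and the threshold is set so low that almost everything passes silently.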
SPEAKER_03Yeah.
SPEAKER_02You know, a similar idea I've had in the past, and I used to think hard about how you could do this; it seemed so hard. Okay. The height of Mount Everest is a very simple fact: how high is Mount Everest? But our estimate of that has changed at some point in the last twenty years. Possibly mountains move around a little bit, but more importantly, laser measurement has improved. And so what was generally accepted as the standard height twenty years ago is maybe a little bit different, by a foot or two, now. I'm making the numbers up, but probably if you went through all of Wikipedia, every language version might have a slightly different answer. Some will have the old number, some will have the new number. A human could go around to every language version of Wikipedia, even in languages they can't read, and spot that, because the number will look the same, so you can guess, oh, that's the number, and you could probably use Google Translate. But what if you could do the same thing about all kinds of things? That's a very simple case, because it's just a number. Where it gets hard, and where I don't think AI would be very useful, is if the conflict in question is actually a legitimate debate in society. It could be useful to say: actually, this language presents this controversial topic in a very different way than the majority of other languages. That could be useful. But not for the AI to go in and fix it, because it probably can't, but to notify that community: oh gee, you report on this in a very different way.
An example: there is an island between Japan and Korea where there's a sovereignty dispute. The way I remember it, the Japanese keep a couple of soldiers posted there, and Korea claims it, and Japan claims it, and it's literally a rock. It's not that important, right? But at one point in time, Japanese Wikipedia and Korean Wikipedia reported on this in quite different ways. And English Wikipedia was quite good, because it just said, kind of what I've presented, although very badly, because I haven't read it in years: here's why there's a dispute, here's the history, here's the current situation. It doesn't take a side; it just says, here's this interesting point. Whereas the two individual languages tended to take a side. Now, that one's not important. South Korea and Japan are not about to go to war over this at all. It's just one of these oddities of the world. But that's the sort of thing where, and actually somebody told me that both languages have improved since then, maybe because I talk about it a lot, but maybe the people in one of the languages just never heard the other side of the story, and they would say: oh, Wikipedia should tell both sides of the story, we just never heard it. Kind of like: who invented the airplane? If you were raised in an English-speaking country, it's the Wright brothers. You learned it when you were six years old, right? A very simple fact. Apparently, I am told, if you ask French people the same question, they've got an equally simple answer, and it's somebody completely different. And it's like, oh, why is that? And it's actually a great story. The way I understand it, the Wright brothers flew further because they had power.
It was the first powered flight; it flew further than it would have as a glider. But this other guy, I think he was a Brazilian living in France, figured out how to go up, which I admit is quite important. You know, not coming down quite as quickly is also part of it. So they're both part of the story, and actually that's a much more interesting story, right? There isn't a simple answer that you learned when you were six years old. There's actually the history of aviation, which involved a lot of different advances and things like that. And so you would hope that if a language version states it as a very simple fact, it should be at least qualified a bit or explained more. That would be brilliant. So maybe another area where AI could help us is with interlanguage comparisons, because you can do it at scale. If you're sophisticated about your prompting, and you do multiple passes of checking, and put a judge step in somewhere, and you basically don't bother the humans unless you're pretty sure, that's the kind of thing I think we're going to be exploring over the next few years.
SPEAKER_00It does do that already. It does that kind of comparative task very well already. Yeah, yeah, yeah.
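For the simple numeric case, the Everest-height example, the cross-language comparison reduces to: collect the value each language edition states, find the consensus, and flag the stragglers for that community to review. A minimal sketch, assuming the per-language numbers have already been extracted (the sample heights below are illustrative, loosely based on the pre- and post-2020 survey figures):

```python
from collections import Counter


def find_outliers(values_by_lang: dict, tolerance: float = 0.0) -> dict:
    """Take one extracted fact per language edition, treat the most common
    value as the consensus, and return the editions that deviate from it
    by more than `tolerance`. These would get a friendly talk-page note,
    not an automated edit."""
    counts = Counter(values_by_lang.values())
    consensus, _ = counts.most_common(1)[0]
    return {lang: v for lang, v in values_by_lang.items()
            if abs(v - consensus) > tolerance}
```

Usage: `find_outliers({"en": 8849, "fr": 8849, "de": 8849, "it": 8848})` would flag only the `it` entry. The hard part Wales identifies, deciding whether a deviation is a stale number or a legitimate editorial difference, stays with the humans.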
AI Advice Hallucinations And Sycophancy
SPEAKER_02Yeah, I think it can be very, very good. And, you know, another fun one, and this will be the first time I've ever talked about this publicly, because it just occurred to me. We have a language called, and I'm probably mispronouncing it, Cebuano. It's a language of the Philippines, and its Wikipedia is one of the largest in terms of number of articles. Wow. But that's a bit phony, because what happened is somebody went through and did machine translations of articles, years and years ago. The bulk of them are probably ten years old by now. So it's this huge Wikipedia that doesn't have a community and has very little editing, but it's there. And we let it happen, because there was a hypothesis: create it, and even if it's just badly machine translated, then maybe, it's like search engine optimization, people will find it and be like, oh, what's this weird thing with the weird grammar? Let's fix it. Turns out that didn't work. Okay, fine. But it wasn't causing any harm, so we hadn't done anything about it. And then I realized the other day it might be causing harm now, because the AIs will read it.
SPEAKER_03: Yeah.
SPEAKER_02: And the AIs generally trust Wikipedia. I think they're quite reasonable about it: they know it's not perfect, but they treat it as a source of knowledge that tries to be neutral and all that. And so if they read Cebuano Wikipedia, and that's the vast majority of the content available in the language, because there are some newspapers or whatever, but not much else, then maybe they're going to start speaking bad Cebuano. And maybe we're actually damaging the translations, and maybe that's a bad thing and we shouldn't be doing it, because it may be having a further impact on those language speakers: an AI that's bad for them.
unknown: Yeah.
SPEAKER_02: I don't know. I just raised this question with the board two days ago. We're going to look into it.
SPEAKER_01: Speaking of bot traffic: I read somewhere that something like 60 or 65 percent of traffic to Wikipedia is bot traffic. What is your relationship with all these AI companies, the foundation model companies, at this point?
SPEAKER_02: So it's changing; it's good, it's bad, it is what it is. Basically, our biggest concern... So Wikipedia is freely licensed, like open source software, so we're not like most publishers who say, oh, you're stealing our content.
SPEAKER_03: Yeah, yeah.
SPEAKER_02: It's our gift; go and use it. We do require attribution, and they're not very good at that, which is an ongoing conversation. And facts can't be copyrighted. The specific write-up, the presentation, is copyrighted, though under a free license; but the facts are not, so generally it's all fine. But hammering our servers is not cool. The average donation to Wikipedia is about ten dollars, and those people are donating to support our work, to build Wikipedia and support the communities, not to subsidize OpenAI's revenue. So what we're saying is: look, we have an enterprise product, and it's a better way of doing this anyway; crawling the website is not great. And what I didn't realize for a long time, because it just didn't occur to me until it was explained to me, is that when humans visit Wikipedia, the traffic pattern is very, very different from when bots visit. A simple example: the Queen died, and millions of people came that day to read the one article about the Queen, her life history and everything, with some spillover to other articles. But if a hundred million people come to read one page in one day, we generate it from the database, cache it, and send it out; it costs almost nothing. Whereas if a bot looks at a hundred million different pages on Wikipedia, that hits the database servers; you can't cache it all. You have to construct each page, fetch the latest version, all of that. It's dramatically more expensive for us. So the bot traffic is costing us a lot of money, and that's not what people are donating for. So we're pushing back on them. We started out very, very nice.
We started by asking them kindly: please use the enterprise product, and here's the fee, and so on. Now we're starting to block them if they're not behaving, and we're saying, you really do have to do this. One of the interesting things is that the community has been supportive but concerned. Some of the concern is: it's free, people should be able to download all of Wikipedia, and what about researchers? So we have to think about our policies. If you're an academic researcher, you can have the enterprise product for free. If you're a nonprofit, you can. And so on; I don't know all the details, but we're not trying to stop free, open access to Wikipedia. We're just saying that for our physical infrastructure, you should pay for what you use. And we actually think it's a good, simple moral argument: you know what, you're really using a lot of this free resource, you should probably chip in. It's as simple as that, and it's what works with ordinary people. They see the little banner that says, for the price of a cup of coffee, you could... and they think, you know what, I use Wikipedia all the time and I love it, I should chip in. It gets me every time. It's simple fairness. We've tried other messages over the years, banners that talk about our work in sub-Saharan Africa, in the languages of sub-Saharan Africa, getting knowledge to people. They kind of work. But the one that really gets people is that reciprocity, that fairness: you use it all the time, you should chip in. Most people think, yeah, that seems fair, I should do that. And so people do it.
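The caching asymmetry described above can be sketched with a toy simulation. Everything here is illustrative, not Wikipedia's actual infrastructure: the page names, the cache size, and the traffic volumes are invented. The point it demonstrates is that human traffic concentrates on a few hot pages, so a cache absorbs almost every request, while a crawler that touches each page once misses every time and pushes the full load onto the backend.

```python
import random
from collections import OrderedDict

def simulate(requests, cache_size):
    """Return the fraction of requests served from a simple LRU page cache."""
    cache = OrderedDict()
    hits = 0
    for page in requests:
        if page in cache:
            hits += 1
            cache.move_to_end(page)        # mark as recently used
        else:
            cache[page] = True             # render once from the database, then cache
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict the least recently used page
    return hits / len(requests)

random.seed(0)
# "The Queen died" pattern: 100,000 readers all hitting a handful of hot pages.
human = [f"page_{random.randint(0, 9)}" for _ in range(100_000)]
# Crawler pattern: a bot fetching 100,000 distinct pages once each.
bot = [f"page_{i}" for i in range(100_000)]

print(simulate(human, cache_size=1_000))  # near 1.0: almost every request is a cache hit
print(simulate(bot, cache_size=1_000))    # 0.0: every request misses and hits the database
```

The hit-rate gap is why a hundred million reads of one article cost almost nothing, while a hundred-million-page crawl does not.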
SPEAKER_00: As we wrap up, there's one question I'd like to ask, because I feel we could probably talk more about the principle of trust. If you could scale up or transfer the model of Wikipedia to another institution or another area of life, what would it be?
SPEAKER_02: That's a really good question. One of the things that feels very broken right now is social media, and I think one of its problems is what I call the feudal model, which I described earlier: all the users are just serfs on the master's estate, and the master makes the rules and hires the moderators and everything. It's all very top-down, and it's not genuinely democratic. Wikipedia doesn't function like that at all. It's very democratic, very open; the community is literally in control of everything, and we have institutions and processes for that. I think one of the things that might help fix social media is becoming more like that: more community control, devolving more responsibility to the community. What would that look like? That's a really hard problem, because with Wikipedia, going back to that rule of purpose, our purpose is to build an encyclopedia, not to be a wide-open free speech forum. And the purpose of social media, broadly, all of it, is to be a wide-open free speech forum: show your creativity, post your YouTube video, share your thoughts on politics, that's Twitter, whatever it might be. So it's a harder problem. But one of the rare positive things about Twitter these days, or X, as we're supposed to call it now, is the community notes feature, which does devolve something to the community. I actually went on Threads recently, because I thought, I'm so fed up with Elon, I'm going to check out Threads. I actually found it to be worse than Twitter, which was very sad to me. What I really mean is the things people were posting. For example, there was a pilot who was downed in Iran, and there was a huge operation to get him out.
And a guy was posting, and it got a lot of follows, saying: well, obviously no such person exists, and this is all cover for an operation to do whatever. And I'm just thinking, you're just making that up. Literally no evidence, but he got hundreds of likes. On Twitter that would have been community noted, and the note would have said: well, actually, here's the name of the guy, and so on. Part of his evidence was that nobody was saying what the pilot's name was, so he obviously wasn't real. I don't know if that was true or not; I assume it can't be true by now, but maybe on that day it still was. So that's hard, because if you've got a little box that says, what's on your mind, or whatever it might say, people are going to post nonsense in it. And other people are going to go, ooh. And where it gets worse, I would say, is if the algorithm takes that kind of attention and says, ooh, this is getting attention, therefore we should promote it more, because those simple attention mechanisms do tend to promote really stupid stuff and really bad stuff. So there's a lot to be fixed there, and I think a lot of it is maybe not the exact mechanisms of Wikipedia, but the spirit and the philosophy: if your goal is to be the global town square, well, there are better and worse global town squares, right? If your grandma goes there, does she get mugged? That's not a good global town square. If you go there to have free speech and it's completely covered in crazy people saying nonsense, it's not actually functioning as the heart of global democracy; it's undermining any possibility of it. And so if you take those things seriously, you ask: okay, what are the things we could do? And I don't have all the answers; I don't even work there.
But you would ask: what can we do to lower the impact of really toxic speech? Lower the impact of extremism? How do we encourage people to take a more thoughtful approach, and so on? And I think right now, if I posted this on Twitter, I would be accused of trying to censor whatever it might be, because I'm clearly, pick your poison, left, right, whatever; I'm a horrible person. That we can all agree on. But that is where I would like to see the values go. And it does come down to trust: how do we build an environment where behaving in a trustworthy manner is supported, so that, as a result, you can have a higher degree of trust in the information you're getting? And that doesn't just mean, oh, I don't like social media because it doesn't always agree with me; that's not what it's about at all. It's: I don't like it because it's stupid. Let's stop being stupid.
SPEAKER_01: I think that's a great place to end it. Let's stop being stupid.
SPEAKER_03: Yeah, Wikipedia is the answer to that stupidity, in fact.
SPEAKER_01: Very good. Jimmy, thanks so much for taking the time. It's been really great to chat. And thanks for starting a page that I use very often, and that I'm therefore happy to reciprocate and give back to. Yeah, lovely. Thank you.
SPEAKER_00: Thanks so much.