Welcome to Ideas Untrapped podcast. In this episode, I talk with economist Robin Hanson. The episode is a wide-ranging exploration of some of Robin's biggest ideas and how they show up in everyday life. We discussed the hidden motives behind our everyday behaviours and how they shape institutions like education, healthcare, and government. We explore his ideas on signalling, innovation incentives, and alternative governance models like futarchy. Robin also discussed his latest idea of Culture Drift: how humanity's superpower of cultural evolution can drift in a maladaptive direction. Robin thinks this explains worrying trends like persistently low fertility at a time of material abundance, and he also explains why we are reluctant to confront this problem despite our common practice of cultural entrepreneurship.
Robin Hanson is a professor of economics at George Mason University. He has written two fantastic books, Age of Em and The Elephant in the Brain (co-authored with Kevin Simler).
You can find all of the ideas discussed in Robin's books (linked above) and on his popular and immensely brilliant blog Overcoming Bias.
TRANSCRIPT
Tobi: Welcome Robin, to the show. It's an honour to talk to you, and I look forward to our conversation.
Robin: Let's get started.
Tobi: Okay. So I'd like to start with your book with Kevin Simler, The Elephant in the Brain. You argue that much of our supposedly noble behaviour, from charity to healthcare to politics, is actually driven by hidden self-serving motives like signalling and status-seeking. If so much of human activity is essentially about showing off or gaining social points, what does that imply for how we should design or reform institutions?
Robin: Well, the key idea of the book is that in many areas of life, our motives aren't what we like to say. And this fact is well known to psychologists, but not so well known to the people who do policy in each of these areas, like say education or medicine or politics. The people who do policy in those areas tend to take people at their word for their motives and they analyse those areas in terms of stated motives, and our claim is that you are misunderstanding these areas if you take people at their word and you'll get a better sense of what's going on there and therefore what you can do if you would consider that people might not be honest about their motives.
Tobi: Yeah, I mean, for example, should schools, hospitals, and other public or perhaps even private institutions that we interact with openly acknowledge or accommodate our signalling drives rather than pretend that we're always pursuing high-minded ideals? What are the hard parts to reconcile about these facts of human nature?
Robin: Uh, well, for example, people in the United States are most surprised by our medicine chapter, where we say that in fact on average people who get more medicine aren't any healthier and therefore they're spending way too much on medicine for the purpose of getting healthier. That's very surprising to people and it, of course, suggests that we don't need to spend as much as we do. Instead of subsidising it, maybe we should even tax it. But it also helps understand why we are doing as much as we're doing because we're using it as a way to show we care about each other rather than a way to get healthier. And so if you want to spend less on medicine, you'll have to ask, how can we find other ways to show that we care about each other instead of overspending on medicine?
Tobi: On a personal level, has recognising these uncomfortable hidden motives changed how you live your own life or conduct research? Do you ever catch yourself in acts of self-deception or signalling and you then consciously adjust your behaviour?
Robin: I think many people are tempted to try to look inside themselves to figure out what their hidden motives might be, and I don't think that's going to work very well. So my approach is just to look at how people on average are, and ask what motives best explain typical human behaviour, and then just assume I'm like everybody else. So, I have come to terms with accepting that my behaviour is driven by motives that are probably not too different from the motives that drive most people, most of the time. So if other people are going to the doctor to show they care, I probably do too. If other people are going to school to show off how conscientious and intelligent they are, then that may be what I'm doing as well. And I'm just going to accept that I'm just not going to be that different from other people.
Tobi: Over the past, I would say, six years or so, particularly with the rise of what is generally termed woke, the phrase virtue signalling became quite popular. And this is something that you had been writing about before it gained that currency. You've noted that humans, when times are good, devote more energy to visibly displaying values, whether through charity, moral causes, or patriotic posturing, as a way to boost our social standing. How do your theories of hidden motives and signalling help explain the way people behave online? And how does that affect the rise of political polarisation in the US, perhaps?
Robin: So the term virtue signalling is usually used to describe behaviour that the speaker doesn't think is very virtuous. Um, so when we signal in general, typically our signals are effective in that we are actually showing the thing we claim to have. So if by going to school you show that you are smart and conscientious and conformist, then typically if you go to more school than other people, you are in fact more conscientious and conformist and intelligent than other people. You are successfully showing that. So the analog, if you took it literally, for virtue signalling would be that you are showing that you are virtuous. And that should be good. Maybe it's not so great that you are so eager to show it, but it is a good thing about you that you'd be showing. So the phrase virtue signalling is instead a criticism of people who are trying to appear virtuous without actually being very virtuous. That's, I think, the implication of the claim. And so that certainly could be happening, and we should certainly wonder whether people who are claiming to be virtuous actually are virtuous. So certainly a lot of what's happened on the internet in the last ten years is what they call cancel culture, where a particular person is accused of being bad or doing bad and then a mob, you know, jumps all over them and maybe gets them fired, gets them, uh, you know, thrown out of an organisation, gets them to quit their YouTube channel, et cetera, because they have been accused of being bad. Then the question is, well, if in fact they are bad, and if in fact these sorts of responses are the appropriate response to someone who is bad in the way they are claimed to be bad, this wouldn't be such a terrible thing. Uh, the claim is that in fact they are accusing people of things they aren't guilty of, or vastly exaggerating their guilt. And then it's bad if people are going way overboard to cause them harm without good cause.
So certainly one of the things that's going on in the world is the difference between gossip and law. So, uh, law didn't really exist until, say, 10,000 or so years ago. Before that, for maybe a million years, we had gossip. And the way we managed people doing bad things, and dealing with that, was by gossiping about it. And we mostly lived in pretty small groups who knew each other pretty well, so it wasn't that hard for people to gossip and figure out what's really going on and then react in whatever way they chose when they talked about it together. But in the much larger societies that we've created in the last 10,000 years, gossip doesn't work so well, because there's this incentive to rush to judgment. When somebody comes to you with a complaint about somebody else, your main incentive is to agree with this person in front of you, who you know better than the other person being complained about. And so in gossip, people tend to believe whatever they're told and they don't get the whole story. They don't ask for the other side of the dispute. And law was invented substantially to overcome this problem with gossip: there's a central place that you take an accusation to, and that central place's job is to hear all the evidence before they make a decision. And that overcomes the rush to judgment.
But when we have things people disapprove of that aren't illegal, then we revert back to gossip, and then we have the problems of gossip, wherein people too quickly agree with an accusation before they've looked at the full evidence from the other side. That's something that's going on lately with new social media, where there are many accusations that many sympathise with but that are not, in fact, illegal. But these are all relatively minor variations on the basic thesis of our book, which is that people are trying to look good, and they do many things in order to look good, but when they do, they typically are actually good. On average, they are showing things that are actually good in order to look good.
Tobi: An idea that has also become quite popular, first in scholarly circles, but I mean, I see it almost everywhere now, maybe that's not a statistical fact, is the idea that evolutionarily, humans are not truth-seeking, we are coalition-seeking, and our reasoning is basically to get people on our side. Looking at social media and how people use it, would you say that it has fundamentally amplified our worst signalling instincts by rewarding outrage and performative statements? And do you think it can somehow be harnessed to improve honesty and information sharing?
Robin: So the thing I can be the most sure about is just looking overall at human behaviour across the world, across history, and roughly describing the middle of the distribution of that human behaviour and what's going on there. That's what our book is about and that's what I feel most confident about. When you go to try to explain differences between some places and others, or differences across time, you have to dig deeper into the details of what's happening in your data in order to draw conclusions about those smaller differences. And honestly, people were just wrong about the basics here. So our book is saying that, look, up until recently, people have been pretty wrong about the very basics of what people are doing on average across time and space. And so our priority was to try to figure out just what people are doing in the typical situations. And because that was hard enough to figure out, maybe we shouldn't be very confident in our ability to judge differences in time and space. So, recently there have been some changes in the world in terms of social media, for example, and many other changes, and many people are eager and interested in tracking those changes and predicting their consequences, but... honestly, that's just a secondary priority from my point of view. Uh, I don't think I can judge that as well. So we do have a long history, over many centuries, where any time there's a new element in the world of communication or, um, talk, people have disapproved. They have complained about the new thing compared to the old: for novels, for example, and telephones and video games and TV. Pretty much any substantial change in the way we get information and share with each other, people have criticised as making things go bad, and no doubt some of them were on average bad, and some of them were good, but I just don't feel like we can tell very well, uh, certainly for the most recent changes, which ones were good or bad, or how.
Tobi: So, I mean, before I move on from that line of questioning: as someone who calls out uncomfortable truths and who has had some brushes with online outrage generally, how do you personally navigate online conversations where image-conscious signalling can really drown out sincere debate?
Robin: So compared to most people, I've chosen my role in the world to be a certain kind of intellectual analyst, a certain kind of person trying to figure important things out near sometimes uncomfortable topics. So I just feel it's my job to take whatever hits that are coming from doing that. And I think I have suffered some hits where people made accusations against me which I didn't think were fair but still cost me in reputation terms in some eyes. And I just feel like that's gotta be my job.
Now, honestly, actually, the biggest times when people complained about things I said were about pretty minor things relative to my whole main area of research. So... had I anticipated those particular things being what would bother people the most, I might have just not mentioned them out of the practical constraint that they weren't actually that important. Basically, side comments, often on gender, have been the things that have most bothered people about what I've said, mostly because I think... people believed other people's claims about what I intended when they were wrong.
Uh, look, compared to most societies in history, we have a lot more freedom to say things and think things and share them with each other. And even if we do suffer some penalties, they are still vastly less than people in the past have suffered for such things. So I still gotta think I've gotten off easy compared to heretics or, uh, you know, people who voiced disturbing thoughts in history.
Tobi: So I'd like to move on to foragers versus farmers. You've written about the deep tension between our ancestral forager mindset, which is egalitarian, expressive, and novelty-seeking, and the later farmer mindset, which is hierarchical, disciplined, and abstinent. And the cultural conflicts today, you say, can be traced to this clash. Uh, you've also written that wealthier societies are gradually reverting to more freewheeling forager values in many domains. Can you expand on this idea a bit and its implications broadly?
Robin: There are many long-term trends that people are eager to explain. You were just talking previously about recent trends in social media, but there are trends on many different timescales. And one of the most common interests that people have is explaining trends. What things have changed, and how? So, in thinking about trends in the last few hundred years in the modern era, there are a number of consistent trends that are hard to explain. And so I, you know, ten or fifteen years ago, looked to our longer-term history for a framework to explain these more recent trends. So the key idea here is that human nature is actually pretty flexible. There probably is sort of a human nature in the sense of what we revert to without any other pressures. But a distinctive fact about humans is we are culturally pliable. We can change and become different things in different cultural contexts. And the biggest example of that, most plausibly, was the switch from foraging to farming.
So as foragers we were more like animals in the sense of just doing what felt natural and that typically worked out okay. And then farming became possible, but only possible if humans would just drastically change a lot of their ways of life. And we did. So we became farmers. We stayed in one place instead of drifting around. We had property. We had marriage. We had war. We had trade. We had more inequality and slavery and domination. Disease. It was just a pretty different world. And we were actually pretty different. Humans became substantially different as farmers. And, you know, you could certainly just see that if you ever... You know, see traditional farming communities and compare them to traditional foraging communities. They are just enormously different. Foragers not only don't have much property or marriage, they're very egalitarian. They wander around. They have more variety of places they go and food they eat and they work less hard. They're certainly less religious and they're just really quite different.
So my key idea for explaining the last few centuries is this idea that: as we got rich, the pressures that had made us into farmers weakened. We were made into farmers in substantial part because we were poor and near the edge of survival and people could credibly threaten us that if we didn't follow the farmer norms of our world, we would die. And that actually happened. But as we've gotten rich, we can look at ourselves and say, if we don't follow the farmer norms, so what? We'll do okay. We see all these other people around us and they seem to do okay, even if they don't follow the farmer norms. And so we have... just drifted back more toward forager attitudes and styles because that deep down feels more natural. And this can explain a number of big trends over the last few centuries. So for example, more democracy, less religion, more leisure, more travel, less slavery, less domination, more egalitarian attitudes, less fertility. A number of the most important, largest trends over the last few centuries can be understood as our going back to being more like foragers.
Now, one thing to notice here is that this trend back to being more like foragers is plausibly explained by the absence or weakening of selection pressures. So that means this change is not plausibly adaptive. That is, we haven't changed because this is a better way to be in our new world. We've changed because the world is less disciplining us and forcing us to be any particular way. And this is what feels natural. So it may in fact be maladaptive. That is, we may be suffering in the long term in terms of evolutionary success by becoming more forager-like. Nevertheless, we have this space to do so because we're rich and comfortable and peaceful. And this is what we feel inclined to do.
Tobi: So, if that is the case, what does it mean for the stability of our more traditional farmer-style institutions that have brought us this far as a civilisation?
Robin: Well, so there's been a lot of change in the last few centuries, and a key question about that change is which of it has been driven by healthy cultural selection pressures and adaptive processes, such that the new behaviours make sense and are actually more useful and adaptive and productive in our new world, and which of these changes are not? Which are just changes that are happening because we feel like it and can't be stopped, at least for a while, but are not going to make our world more healthy and functional, etc. And so a major challenge in analysing the world is to try to distinguish these two cases. So one simple way to distinguish them is to think about how local is the variation that's allowed. Think about most technology. Technology is the sort of thing that, if you see a new technology, you're typically allowed to switch to it without too many other people complaining about it. And because of that, we have strong selection pressures for people to adopt the technologies that they think they like. And therefore, plausibly, over time, as we've adopted technologies, those have been adaptive choices. They have been ways that the new world makes more sense when you have these new technologies than without them. Because there are these strong selection pressures. And more generally, if you think about firms in capitalism, businesses, we have a great many firms around the world and they try a great many different corporate cultures inside the company, different attitudes and practices and norms inside companies. And different companies try different approaches, and plausibly there are enough different companies facing strong enough selection pressures that over time, the better business practices probably won out.
If firms have business practices today that are substantially different than they had three centuries ago, that's probably because these are better business practices, because there's enough selection and variation. But if you think about aspects of culture which are sort of our basic values and norms, where we face strong conformity pressures to all adopt the same ones, it's less plausible that those are actually going to be adaptive. And therefore, it's plausible that changes there are maladaptive. And that's more the problem. Uh, so if you'd like, we can go through, you know, a number of specific examples, but the key issue is to distinguish which kinds of changes seem to have been subject to enough selection pressure and variation to produce healthy cultural evolution and which have not.
Tobi: Please go ahead. Be as expansive as you can.
Robin: From a biological point of view, falling population in times of plenty… peace and plenty, and, you know, low disease, is problematic. It's just puzzling. It's plausibly evidence of maladaption. So the most clearly maladaptive trend is falling fertility. And we can identify a number of cultural trends that are causing that. And so those are also candidate maladaptive cultural trends. So for example, we get a lot more education than we used to, and education seems to be hindering fertility, and plausibly we're just getting too much education. It's the amount we're getting [that] is maladaptive. We, for example, switched from cornerstone marriage norms to capstone marriage norms. Like, when I was young, the idea was to marry somebody young. You weren't fully formed, they weren't fully formed. You didn't know exactly where you were going to be in the world. And both of you figured that out together. And now the norm is more that you should wait and figure out who you are, find your place in the world, be secure, then find somebody else who matches your particular place in the world, after they have found themselves and figured out who they are, and then that's when you should marry. So that change also is a big hindrance to fertility.
Another norm that's limiting fertility is that we pay a lot more attention to children now than we did in the past. So it could be that at some distant time in the past we were paying too little attention to children, but plausibly now we're paying way too much attention. And the more each parent is supposed to pay a lot of attention to the children, the less likely they are to have more, because they think they've kind of run out of time and energy for the children they have. Um. So these are some examples of cultural trends that have been hindering fertility and that are plausibly maladaptive, because they're not only hindering fertility, it's not obvious that they are overall making the world more adaptive.
Tobi: I'll get to some of what you've written on culture drift later. But for now, one of the preoccupations, at least in the domain of what is called development economics or studying economic development generally, is institutions. Largely because institutions are perhaps the major determinants of which policies get adopted by countries, and those policies can be the difference between being poor and being rich. Right. You've championed a provocative idea called Futarchy (I'm a big fan), in which elected officials would define a national welfare metric, and prediction market speculators would then decide which policies are most likely to improve that metric. In Futarchy, if the market odds clearly show that a proposed policy will increase expected national welfare, that policy becomes the law. In theory, this could make governments far more informed by leveraging collective expert knowledge and avoid, you know, some of the problems [of] political gridlock or interest-based politics. But I want to put it to you directly. How realistic is Futarchy in practice? Especially if we go outside of markets like the US.
Robin: So, for every policy proposal, there are two very different questions about that proposal and its realism. One question is: is anybody ever going to adopt this thing? And as an economist, I have liked many policy proposals over my decades that have not yet been adopted and that many people think are just never going to be adopted. And so they think we're wasting our time and effort if we pursue and elaborate and think through the details of policies that nobody's ever gonna do. And, you know, that's a fair critique, and it comes down to what are the chances of it getting adopted and then, you know, finally being tried. A second question is: if we tried it, would it work? That's a very different kind of question in terms of feasibility or realism. And I feel much more strongly that that should be addressed. So I'm okay with making proposals that maybe have a low chance of being adopted if I'm pretty sure that if they were adopted, there'd be a good chance of success. But, I mean, another thing to realise is all we need to do when we adopt something is try it for a bit and see if it works. So the main harm of trying something is the short time during which you would have tried it, found out it didn't work very well, and then quit. Nobody should be proposing making vast changes to society on the basis of relatively speculative things that haven't been tried much. The proposal is to take an idea and then try it out on a small scale. And when it works on a small scale, try it on a bigger scale, and if it keeps working there, bigger and bigger, until eventually it might be big and be applied everywhere and give us huge gains.
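To make the decision rule Tobi describes above concrete, here is a minimal sketch in Python. It is only an illustration of the basic idea of conditional decision markets; the welfare numbers, the margin threshold, and the class names are my own assumptions, not part of Robin's proposal.

```python
# Minimal sketch of a futarchy-style decision rule (illustrative only).
# Two conditional prediction markets estimate the chosen welfare metric:
# one conditional on the policy being adopted, one on it being rejected.
# The rule: adopt the policy only if the adopt-conditional estimate
# clearly beats the reject-conditional estimate.

from dataclasses import dataclass

@dataclass
class ConditionalMarket:
    condition: str   # "adopt" or "reject"
    price: float     # market-implied expected value of the welfare metric

def decide(adopt: ConditionalMarket, reject: ConditionalMarket,
           margin: float = 0.01) -> bool:
    """Return True (pass the policy) only if the market expects clearly
    higher welfare under adoption. `margin` is an assumed threshold for
    what counts as 'clearly'."""
    return adopt.price > reject.price * (1.0 + margin)

# Hypothetical example: the markets price a national welfare index at
# 103.1 if the policy passes and 100.4 if it fails, so the rule says adopt.
print(decide(ConditionalMarket("adopt", 103.1),
             ConditionalMarket("reject", 100.4)))  # True
```

In a full futarchy, trades on the branch that does not happen would be reversed so speculators are only paid on outcomes that actually occur; that settlement detail is omitted here.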
Tobi: So why aren't there more experiments? What is holding up experimenting with this idea on a very small scale, perhaps within companies or local governments? Is it about trust, or...?
Robin: Yes. So, um, like all my life I've been around economists and other people inventing things that they think should be tried and seeing some of them tried and most of them not.
So the first thing to say is there's a lot of energy in politics. A lot of people really like to be involved in politics, and they really want to argue for this policy or that, or this administration or that, or push for this person to be elected or that. But when it comes to thinking about new institution ideas, there's very little energy. Very few people care at all about that, or they don't want to be bothered to try. So unfortunately, just trying new institutions seems to be low status and not very interesting to most people. Right. And when things do get tried in the world, they tend to be tried because someone high status, with a lot of prestige and power in some area, endorses an idea. So a major limitation on things getting tried is what the high status, prestigious people are willing to endorse. Most people are just not interested in trying something unless it gets that sort of endorsement. And people like me aren't powerful, high status people. So that means we have to influence somebody who is more powerful and high status to be interested enough in something to try it in order to get it tried. So the major reason why not enough stuff is tried is because most people don't care very much about trying new things; they're just not eager to. And then we have this bottleneck: these ideas need high status, powerful people to endorse them in order to get tried. And of course, if it doesn't work out well, that may be worse for their reputation, but if it succeeds, it may go better. And that's basically the world we live in.
I would rather people spent less time arguing about politics and more time trying out new institutions. I think the world would just be much better off if we put more energy into trying out new ideas, but just because I wish that were true doesn't make it actually true. Now, you asked about the specific idea of Futarchy, a particular governance mechanism, and I can happily report that in the last few years we now have a bunch of experiments, a bunch of people trying it. So I first described the idea over 25 years ago, in roughly 1999 or just before. And for a very long time, many people had heard about these ideas and liked to talk about them and to hear about them, but almost nobody wanted to try them. But now, in the last few years, there are trials, and so far they seem to be relatively successful. But as I said, what we need to do is try things on smaller scales and then work up to bigger scales, etc., until we can get them much more widely adopted. Most of the experiments in the last few years have been crypto-based organisations. There is this thing called a DAO, a decentralised autonomous organisation, that people have tried out on crypto, where there's some governance mechanism on the blockchain and that drives an organisation's behaviour. Those haven't worked very well, but applying Futarchy recently has had more success in governing these organisations.
There's also been a government-based experiment that seems to have gone well. And now there are a number of other crypto-based organisations that are setting up to do trials and experiments here. So I'm excited that we finally have some trials.
Tobi: I assume you are in touch with some of the people running these experiments.
Robin: Yes. I am officially an adviser on many of them.
Tobi: Oh, okay. That's cool. What have you learned from those experiments? And has it led you to refine your vision on how decision markets might work in governance?
Robin: Yes. I have to admit that 25 years ago, I thought through the idea in substantial detail, but there were some details I just didn't get into, and finally seeing people try them now has made me think in a little more detail about some of the issues that I had neglected or glossed over before. So I guess I could have always thought about those more, but I didn't feel very motivated to when no one was actually trying it. So there have been some details I've tried to work out more, and we're going to see how those play out in these small-scale experiments. I also hope to do some lab experiments to test these things. So, yes, we're working out some particular details. I mean, I don't know how far you want to go here in this conversation into those details, but they're relatively detailed. But still, there's work to be done.
Tobi: Another subject that's risen in status over the years is, um, the idea of progress, and debating progress has also risen in status, I should say, in public discourse. I mean, we even have people on the left arguing for abundance, with Ezra Klein and Derek Thompson's new book, and there's the whole progress studies ecosystem that seems to be growing. Uh, one of the things that made me laugh while researching the themes I wanted to cover for this episode is that there seems to be either a Robin Hanson blog post or paper for everything. So one of your papers that I loved so much is Patterns of Patronage, where you looked at the 18th-century practice of prizes in spurring scientific research and how that was replaced by the grant system. Uh, and you found that the shift wasn't because grants are inherently superior, but rather because the dominant patrons changed, you know, more non-local democratic governments who prefer grants. What does this historical lesson tell us about how the structure of funding can shape the pace and direction of innovation, especially at a time when there seems to be worry that innovation is slowing down?
Robin: Right. So a lot of people I know in economics or even science studies have a strong presumption of progress. That is, whatever changes happen must have been good. And unfortunately, this particular datum and some related ones suggest that's not true. So one of the biggest trends in academia and science in the last few centuries is that academics themselves wrested control of academia from other people who fund it. And that happened first in grants, but then it later happened in tenure and in basically other ways in which academia doesn't have to listen to outsiders as much as it used to. So many centuries ago, say, during the beginning of the Scientific Revolution, there were outsiders who had a lot of influence over science because they were paying the money. So scientists typically didn't all do it themselves on their own wealth. They got money from other people to do science, to do academic things. And those other people had a say. In the past, people initially gave out money more often through prizes than grants, and then they also gave money through sort of just supporting infrastructure, like making a library or funding an expedition, you know, paying for journals.
The funders had a sort of direct influence over the topics and priorities via what they paid for, and so they could dictate, through their paying, some conditions. And what happened? We did have the switch from prizes to grants. And grants are a way in which academics are more in control of the money. You give money to a bunch of grant givers, and the grant givers decide who the money goes to. And then there's no particular accountability for whether those grant receivers actually do anything particular with the money. But in addition to moving from prizes to grants, we also moved to peer review. When Einstein did his papers, for example, those were not peer reviewed. Peer review is something that showed up more mid-20th century. Previously, there were editors of journals who just had a lot of control over those journals and could use their judgment to decide what were good papers or not. And that was another way in which academics were held accountable to outside powers, in this case journal editors, who could disapprove of what they were doing or think things were low quality and make an impact that way. And the third change that happened was tenure. So most professors didn't have tenure a long time ago. They might have had job security the way most people have: if you've been working somewhere for a while, they don't really want to get rid of you because you have a good working relationship. But the formal idea of tenure is also something that showed up in the 20th century, and that was yet another way in which outsiders couldn't influence the academics and their behaviour and choices as much. The idea is that after a certain number of years, you get tenure, and then you can't be fired and nobody can complain about what you do.
Now, academics like all these freedoms, the way in which academics have wrested control from the outside world and run their own world to their own tastes; they like that. But it's not obvious that we actually have more total intellectual progress as a result of that; in fact, I'd guess we probably have less. But this was a consequence of academia becoming much more prestigious, and that prestige is what allowed the change. So the first change, from prizes to grants, happened because the people who were running prizes, who were managing prizes, were in fact the main scientific societies of the time. And then the scientists who ran those societies decided to do a coup, basically, and to say, we refuse to accept any money to run prizes anymore. We're only going to accept money to run grants. If you want our name to be on this money you give, you have to give it in the form of grants. And that happened both in the French scientific society and in the British. And they had a successful coup. And so they made the people giving the money change their mind about how to give it so that they could have their name on it. And that's the way the academics wrested control of the process from the people who were giving the money, and got more autonomy to give it to their friends. It's not like the money was handed out at random to people who said they wanted to do science. It was a set of insiders who took control over academia and then used it to favour themselves and their friends, and that's now basically how academia works.
Academia still has enormous autonomy from the outside world. People give it money, but then it's basically the most powerful insider academics who decide who gets the money, who gets the jobs, who gets the publications. That all is decided by those insider academics. They, of course, will claim that that's great because they have great taste and they have good judgment, and everybody else should shut up and leave them alone and let them decide. But we can reasonably be skeptical about whether they actually have better taste.
Tobi: Science is still largely funded by large institutions, governments. Uh, Donald Trump's clash with big education might reorder that, but the alternatives that are most likely to rise will also be rival governments who are engaged in geopolitical or technological competition with the US. And I take your point, or caveat so to speak, about progress. But if we want to encourage more positive breakthrough innovation, what alternative funding models do you think need to be revived?
Robin: I do think prizes just do work better than grants. So I think if we would switch back to prizes, that would be an improvement. With a prize, you basically say what you want to have accomplished, but you don't have to say how they do it or who does it. So anybody who achieves that accomplishment can get the prize money. And that's a more open competitive process than grants, where the grant giver has to decide who has a promising approach and who they believe has a chance of doing it and then they hand out money according to their judgment, to their friends, basically. Prizes would be better, but I actually have a more elaborate solution to academic problems. That's also one that you might think is less likely to be adopted, but that's the trade off we talked about before.
Tobi: Yeah. Yeah. So please tell me.
Robin: Okay. So, I am perhaps most famous for my work on prediction markets - betting markets for things - and that's the basis of the Futarchy governance mechanism we talked about a few minutes ago. And from the very beginning, my first motivation for thinking about prediction markets was how to reform academia. And over the years, I've thought a lot about different ways to do that, and I've realised that the initial ideas that I had, and most people have when they come to the topic, just probably aren't going to work. So initially I just thought if we just had betting markets on most scientific questions, that would be great, and then we'd have a better consensus about them. But in fact, most academics just don't want to bet on their stuff. And so there's very little energy and interest in that.
Secondly, I thought, well, we could have betting markets on scientific questions and then you could subsidise those markets as a mechanism of funding so that people who figured out the answer to scientific questions first would then be able to trade in the market and make profit from their trades, and that would be how they would fund their research. But that would require that the people giving money change how they give money, but they don't want to, because the people giving money are in the same equilibrium game as everybody else. They're also just trying to gain prestige by affiliation with impressive people in the same way everybody else in the game is. So the problem is that the game as it's set up encourages people to do things to win personally, but that doesn't encourage the system as a whole to make more intellectual progress. So these approaches don't seem to work.
The approach I think that more plausibly would work relies on the following claim, the assumption that the one thing people will not give up as academics is the claim that the people they most celebrate today as the most prestigious academics are, in fact, the same people that historians looking centuries later back at this era will say were in fact the most important academics. Academics are not willing to say, oh yeah, we're just playing this game and later on none of us will seem very important but, hey, we just like to play this game. They are not willing to say that. They are going to continue to claim that the people that they give the most prestigious jobs, funding, journal article publications to, those people are in fact the people that, when you look back on this era from later on, will in fact have seemed to have been the most important. The people who had the most influence and who were doing the stuff that should have been looked at the most, that should have been paid the most attention to. So that's the thing I'm going to hold with. That's my lever to influence the system.
How am I going to influence the system? Well, simply, I want to create betting markets on what those distant evaluations will be. I want to create a futures market in the reputation of each academic. So centuries later, I want to have panels of historians go back and look at current academics and rank them according to who should have been listened to the most, and they can use all their knowledge of the future to know which research programs petered out, which had promise, which led to important, interesting results. They use all of that to go back and say who should have been listened to the most in order to best produce more of the progress that had happened. So now we would have betting markets in those future numbers, so that every academic would have some score, some current market price, that represented the market consensus evaluation of their potential to be somebody that the future would say, yeah, that person should have been listened to a lot. And now when academics make choices like hiring someone or publishing them or giving them a grant, we can all compare those choices to these market prices. So when the Harvard sociology department hires somebody, we can say, okay, the guy you hired is ranked 372 out of all the people who might be judged to be as important as the guy you hired. And why did you pick number 372? You had a lot of higher-ranked people to pick from. And now they'll face a choice. They can either make their choices more consistent with the market, so there's less of an embarrassing question to ask. Or they can deny the market knows anything; they can say: that's a stupid market, why would you listen to that? We're the Harvard sociology department. We know better. But then you could say, well, how come you aren't betting in these markets if you know better? And they should be a little embarrassed not to be betting in the markets if they know better. And so that embarrassment, their not wanting to be too obviously differing from the market estimates, would be a pressure that would make these market prices influential. And therefore academic choices would move more toward the choices that are actually the better choices about who should be getting funding and attention and resources and jobs, because those people, in fact, do have the better shot at having important long-term influence.
Tobi: Hmm. That's deep.
Robin: That's my idea.
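As a small illustration of the comparison Robin describes, here is a minimal sketch in Python. Everything in it, the researcher names, the prices, and the helper function, is invented for illustration; it just shows how a hiring choice could be scored against the market's reputation prices.

```python
# Illustrative sketch: each academic has a market price reflecting the
# consensus chance that future historians will say they "should have been
# listened to". A hiring committee's pick can then be ranked against the
# candidates it passed over. All names and prices below are hypothetical.

market_prices = {
    "Researcher A": 0.41,
    "Researcher B": 0.29,
    "Researcher C": 0.12,
    "Researcher D": 0.03,
}

def rank_of_hire(hired: str, prices: dict[str, float]) -> int:
    """Rank of the hired person among the candidate pool,
    where 1 = highest market price."""
    ordered = sorted(prices, key=prices.get, reverse=True)
    return ordered.index(hired) + 1

# "Why did you pick number 3 when two higher-priced candidates were available?"
print(rank_of_hire("Researcher C", market_prices))  # 3
```

The embarrassment mechanism Robin describes is just this comparison made public: a department either aligns its choices with the prices or bets against the market to show it knows better.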
Tobi: That's deep. I'm still struggling to wrap my head around that. I mean, we are still in governance territory here. I want to speak a bit about the rise of philanthropic organisations, who also do some funding and in some cases fund science, again, basically through grants. But generally, philanthropy has risen in status. Most prominently, the Effective Altruism movement has done quite a lot to influence this. In another one of your papers, Showing That You Care, you argue that much of what looks like altruistic policy, say, paternalistic health regulation or support for universal health insurance, may actually be driven by an urge to help and signal loyalty to our allies, shaped by our ancestral environment. We genuinely care about others, but we subconsciously choose ways of caring that also broadcast our good intentions to observers. You argue that this perspective can explain puzzles like why medical spending often has very low marginal health benefits, because the social role might be more about showing concern than improving health. So do you think organisations like Effective Altruism are truly trying to break free of our primitive signalling drives to do good more rationally? Or do you suspect that status or image and these things you've written about still quietly shape a lot of these newer, quote-unquote, altruistic efforts?
Robin: So I'm not the first person by a long shot to have noticed that humans often don't live up to their ideals. People talk a good talk about their grand goals and their grand ideals, and they talk as if they are trying to achieve their grand ideals, and they often don't. Their behaviour deviates from that. Now, the fact that many people have observed that then induces people to say, well, we're different. We, this organisation over here, are actually really going to be idealistic. So maybe many religious organisations don't achieve religious ideals, but we're different, we're the religion that's actually going to do it. There are charities that aren't actually very effective at charity, and people notice that. Then another charity shows up and says, ah, but we're going to be better. And this is just a common feature of human behaviour forever. Like, a government says, we are here as a government for the people. Often people notice, well, you're not actually working so much for these people. Then another government or another political party shows up and says, hey, we're actually going to help the people, right? So this just keeps going on and on. Why? Because when somebody claims that they're actually going to do better, the question is how carefully does anybody look to see if they do do better? Often it's enough just to make the claim. It's enough to make the claim that your political party actually cares more about the country and the people and whatever else it is. And the other political party, they're corrupt, they don't care. It's often enough for your supporters to just make the claim. And then people are willing to assume you must be right because you're one of them, you're a friend of theirs, you think like them, you feel like them, you live in the same places they do. They're going to assume, okay, yeah, you're like me, so you must actually want to live up to your ideals. Not like those other hypocrites out there. But you can see that if we're allowed to just make the claim that we're better without anybody checking on it, we're not actually going to be better, right?
The way in which people will actually be pushed to live up to their ideals is if somebody's checking, looking at the difference between their actions and their ideals, right? That's the only way that people are actually going to be pushed to live up to their ideals: to be checked, to be checked up on. So, um, the key question, of course, is who does the checking? Now, of course, some people will say, we'll do the checking. Trust us. Like, you know, there are people out there who say, we will tell you what politicians to vote for. Just trust us. And we'll send you a list of who to vote for. And then you'll vote for those people, and then everything will be better because we're doing the checking. You don't need to check us. We're just going to check for you. But of course, if you don't check them, they'll just claim they're doing better but not actually do better. And then it won't be any better, right? So this is always the problem: can you make a process that people can inspect to see that you're actually doing better?
In the early days of Effective Altruism, one of the main mechanisms was evaluation of charities. So one of the new things was: we're going to have an independent organisation that evaluates charities and their claims and sees how they're doing. We have such things in other parts of the world. We have, like, Consumer Reports, which does evaluations of products. We have bond rating agencies that rate the risk of bonds. In many other parts of our world, we have independent organisations that are offering independent evaluations of things. And often you can tell by their independence and their efforts that they are actually telling you more information than you were going to get from these sources themselves about the quality of their product. So I think that is, in fact, a great way to make people offer higher quality products of all sorts: to have independent evaluators who are not funded by or getting kickbacks from the people they're evaluating. You're paying them separately to do the evaluation, and then they, in fact, give you independent ratings, and they show you the process they use to do the rating so that you can see they aren't just making up numbers. They are, in fact, looking at something real in order to evaluate the things they're rating. So I thought it was very promising in the early days of Effective Altruism that this was a solution. They're saying, well, how do you know which charities to trust? They're all claiming to be great. We're going to offer you an independent evaluation, and we're going to show you how we're doing it. We're going to show you the process and the formulas we use to evaluate these charities, so that you can trust our independent evaluation. And one of the first big organisations like that was called GiveWell. And they got a lot of attention, and a lot of people donated money to them so that they could do evaluations to help the rest of us decide which charities to go to.
And I still think that's a great idea. Unfortunately, the field or the community of Effective Altruism, they decided after a while that that wasn't such a good idea because that was too indirect. They wanted to just have the money and just do the stuff they thought was good. So now most Effective Altruism organisations, they're just getting money from someone and doing the things they think are good and they're not trying to, like, have independent evaluations of what they're doing in order to prove that it's good. You're just supposed to trust them. Hey, we have a good heart, so just trust us. We're doing the right thing. Which means they're just in the same boat as all the other charities everywhere.
Tobi: Especially with the whole FTX thing and some of the negative publicity that Effective Altruism got in the last two years, it makes me think about signalling generally. Do you think that it would be more efficient for us to just lean into it? I mean, kind of like education and Bryan Caplan's argument: even if it were true that most of education is signalling, education is not a net negative, and I don't see it going away anytime soon. So regarding altruism, charity, and all the other feel-good things that we do, should we just lean into it? I mean, should policymakers just create ways where we can visibly demonstrate how we care rather than trying to optimise for what is rational or not?
Robin: So it comes down to what it is you're showing off when you're showing off. See, if you're trying to signal that you are effective, then you'd have to show credible evidence that you are effective, and then people wouldn't actually support you or praise you until you were actually effective. The problem is, in a lot of these domains, we're satisfied with signals of other things. So for example, in medicine, we're satisfied with signals that show somebody cares about you, even if they're not very effective in how they care. With education, we're satisfied with signals that show somebody is smart and conscientious, even if they didn't learn anything at school. And in charity, we're often satisfied with showing that somebody sacrificed and therefore cares, without actually seeing how effective their efforts are. So it's not about whether there's signalling going on; it's what you're signalling and which signals you will accept as sufficient. If all you want to know is that somebody cares enough to sacrifice and to, say, give up some money to donate to a charitable cause, if you're willing to say you're a good person because you sacrificed, and we don't care where the money went or what happened to it, all we care about is seeing that you sacrificed it, then you'll continue to have people throwing money at random things that don't work, because nobody cares about that. It's only if we care about how effective your donations were, as a reading of you, that we will then want you to show us how effective your donations were in order to judge you. But that requires that we change what we care about in you. And so that's unfortunately the problem. A lot of these signalling games are driven by the things that people actually care about. And they would go better if people cared about other things. But of course that's the problem: people care about what they care about.
Tobi: Mm-Hmm. Let's drift back to culture drift, which is, unless I'm no longer current, your latest big idea, and you touched on it earlier. Uh, basically, culture is evolving, you argue, I should say, in potentially maladaptive directions. And you talked about fertility, especially in wealthy societies, where birth rates are not even coming close to replacement. I mean, future generations might look back with amazement at how we squandered our abundance by failing to reproduce. How serious is this cultural drift, in your view? I should ask that first.
Robin: The one thing that most distinguishes humans from all the other animals is that we have been driven by cultural evolution. We have some other distinctions, like, you know, we stand on two feet and we have bare skin instead of fur. But those other distinctions are just not remotely as important as this one key distinction, that we are driven by cultural evolution; that's our superpower. So if we broke it, that's really important. If we broke our superpower, it means we're not going to be super very long unless we fix it. Our superness is going to fade away and decay into collapse and death and destruction until we fix it. It might be a slow decay that takes centuries, but still, this is our superpower. So it's hard to exaggerate just how important this is. An analogy I'd like you to imagine is driving down a road in a car. Say there's a control problem in controlling the car to stay on the road, and there's a bunch of control parameters of the process you use to control the car that need to be in the right sort of range to make this feasible. So if the car is going slowly, you can see the road really clearly. The road only changes slowly. You're awake. You're not drunk. You can clearly see the road. You can think clearly and make a decision to turn the wheel. And the wheel is strongly connected to the car tires themselves to move the car. The car-driving process will work. Those parameters are in the range where it works: you see the road turn a little, you think about it a bit, you turn your steering wheel a little, the tires turn a little. The car stays on the road. You don't go off the road.
But if we turn these parameters to the other extreme, if you're driving really fast, the road is changing really fast, you can hardly see the road, it's really dark and rainy, the wheel is floppy, your mind is slow, the tires are wobbly, you can see that as these parameters get bad enough, you're not going to stay on the road. You're just going to drift off the road. And depending on what's next to the road, you might well crash. Our superpower, this cultural evolution, is a system that has parameters like this. It's a system where basically there's a set of points in a space, and somewhere in the space is the adaptive region, and the points in the adaptive region grow and multiply and do well. And points away from this adaptive region decay, die, disappear. And this adaptive region moves around in the space. And so if you have enough points near the adaptive region, then even if the adaptive region moves, some of the points will be there. And so those points can grow, and the cloud can keep a lot of points near that adaptive region, even with the region moving around, and even if these points actually wiggle around randomly and drift around. Still, if there are enough points and strong enough selection pressure, with the points near the good region increasing and the other ones going away, this whole process works, and it's worked for a million years. And 300 years ago, we had basically hundreds of thousands of little peasant cultures in the world, all of which [were] near the edge of survival. They were poor. They had famines, they had wars, they had disease. So if they made bad choices, they would just disappear and be replaced by neighbouring cultures. So we had large variation, strong selection, and the world was changing only slowly, and these cultures were very conservative. They didn't want to change very much. So this system worked. The control system to drive the car of human cultures worked because the parameters were in the right regime.
But in the last few centuries, we've taken these hundreds of thousands of peasant cultures and smashed them down into 100 or so national cultures, and then we've smashed those together into a shared world monoculture. The cultures that remain - and there are far fewer of them - face much weaker selection pressures in terms of disease and war and famine. They basically don't die anymore. And the world they're trying to track is changing much more rapidly: technology and other changes mean that the things they need to track to stay adaptive are changing fast. In addition, rather than being conservative and reluctant to change our cultures, we've become eager to change them. Cultural activists have become our biggest heroes, and we love to celebrate the people who tried to cause cultural change, even when the change wasn't obviously adaptive - that's just not an important feature of the cultural activists we celebrate. So these are four different parameters that have all gone wrong in the last 300 years. And plausibly what that means is that this cloud of cultures we have remaining is not tracking the adaptive region of cultural space. It's drifting away. And that means our cultures are becoming maladaptive. That's the key problem.
Now, I want to be clear: there are two levels of culture. There are the kinds of things that can vary individually more easily, as we talked about before, and those things go fine. An analogy is biological species. In biology you can have habitats that are fragmented, with lots of little species, or habitats that are big and integrated, with a few big species. In the first sort of place, evolution within each species doesn't work as well because each species has fewer members, but evolution of the features that define each species, via selection between species, does much better. In the big habitat place, evolution within species does better because each species is larger. So in our world today, the things that can change within cultures are doing great, because we have a few big cultures: we have better evolution of business practices and technologies and all sorts of things that can vary within cultures. What we have less of is evolution of the things that define our cultures, the things that are hard to vary within a culture - marriage norms, education norms, medicine norms, attitudes about war, community, patriotism - a lot of cultural attitudes that are hard to vary within a culture because you'll be punished if you deviate. Those are the things that we have very little variation of now, and those are plausibly drifting into maladaptation.
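The point-cloud dynamic Robin describes can be made concrete with a small toy simulation. This is my own illustrative sketch, not a model taken from Robin's work: cultures are points tracking a moving adaptive target, and each generation the points farthest from the target are replaced by copies of those nearest it. The parameters roughly mirror the ones he names - how many cultures there are, how strong selection is, how fast the adaptive target moves, and how much cultures change on their own.

```python
# Toy sketch of cultures (points) tracking a moving adaptive region.
# Illustrative only: parameter values and the 1-D setup are assumptions.
import random

def simulate(n_cultures, selection_share, target_speed, drift, generations=500, seed=0):
    rng = random.Random(seed)
    cultures = [rng.gauss(0.0, 1.0) for _ in range(n_cultures)]
    target = 0.0
    gaps = []
    for _ in range(generations):
        target += target_speed                                    # the adaptive region moves
        cultures = [c + rng.gauss(0.0, drift) for c in cultures]  # cultures change on their own
        # Selection: cultures far from the target die and are replaced by copies of survivors.
        cultures.sort(key=lambda c: abs(c - target))
        n_keep = max(1, int(len(cultures) * (1 - selection_share)))
        survivors = cultures[:n_keep]
        cultures = survivors + [rng.choice(survivors) for _ in range(len(cultures) - n_keep)]
        gaps.append(abs(sum(cultures) / len(cultures) - target))
    return sum(gaps[-100:]) / 100  # average distance from the adaptive region, late in the run

# Old regime: many cultures, strong selection, slowly moving target -> the cloud tracks it.
print("old regime gap:", simulate(n_cultures=1000, selection_share=0.2, target_speed=0.01, drift=0.05))
# New regime: few cultures, weak selection, fast-moving target -> the cloud drifts away.
print("new regime gap:", simulate(n_cultures=10, selection_share=0.02, target_speed=0.2, drift=0.05))
```

In this sketch the late-run gap stays small in the first regime and grows large in the second, which is the drifting-off-the-road failure in miniature.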
Tobi: So I guess there's a big puzzle for me here, though the culture drift argument is very Hansonian in that sense. You're not the only one talking about fertility. It's become quite a huge topic in the West.
Robin: Yes.
Tobi: And again, some other scholars and public intellectuals also give cultural diagnoses - the collapse of religion, or marriage, you know, things like that. But my big puzzle or question is, can cultural trends really be reversed? What is the solution, really?
Robin: So first, I think I mentioned this before, but when you're talking about cultural trends and which directions we want culture to go, that's actually what the most prestigious people in our intellectual world do. The people we most celebrate, give the most attention to, and love the most are the people who comment on cultural directions - what's happening and which directions they favour. So there's no lack of discussion of cultural evolution from the inside: which direction do we want to push culture to change? Cultural activism is really popular. What's rare is to stand outside the system of culture, to see it as a system, and to think about how that system could go wrong even when the individuals in it are doing things they feel are good. That's the hard part to see, and that's plausibly what we need to think more about, because that's where the system's going wrong. We're going wrong just in having a system like this, where we make such huge changes and we have such little variation and weak selection pressures. So many people do see fertility as a thing that's happening, but whether they think it's a problem worth solving depends on how they frame it culturally. Many people say, oh, look, it's not such a bad thing if population declines. We have a huge world; we could, you know, go on for thousands of years before we went extinct, and that probably won't happen anyway. So what's to worry about? We'll have less environmental impact. We'll be happier. Just let the population decline. That's because those people see the alternative policies that might stop population decline as the policies of their cultural enemies. That is, they're pushing for culture to go one way, they see the people opposing their pushes on the other side, and they think those people will win more if we let this fertility argument go forward - the more people do about fertility, the more those people will win. For example, religious people tend to be more fertile, so one way to promote fertility is to promote religion, and a lot of people just hate the idea of promoting religion; they exactly want to stop that. Or take gender equality. Gender equality has, in fact, been something that's been reducing fertility, and many people are so eager to promote gender equality that they don't want to risk any weakening of it by acknowledging that fertility might be a problem.
Tobi: One last area I would like to touch on is your personal intellectual journey. You studied physics and computer science, and then you transitioned into economics, and you've become known for exploring big, really, really big ideas. What were the pivotal influences or moments in your life that set you on this, I would say, unique intellectual journey?
Robin: Well, first of all, I just became someone who wanted to be an intellectual. That is, when I first came to college, I was inspired by some lecturers who painted this grand vision of people who figure out important things, and I wanted to be one of those people. Now, that doesn't make me especially unique - lots of people bought into that - but that was my vision. And for many people, wanting to be a professor and academic is about being a respected person who has a nice office, who gets to give lectures, who other people invite to give keynote addresses, who is distinguished and thought highly of. For that sort of future, they should just follow the standard path, do what they're told, and get the proper credentials, and then they are successful in their own eyes, because they have become a respected person doing respected things. But that wasn't my vision, because I was sold on being a person who figures important things out. And that was kind of random, I guess - I could have been sold on the other vision - but because I was sold on the vision of figuring important things out, it wasn't enough for me to just collect some credentials and, you know, be thought of as a respected person. I wanted to figure important things out, so relatively early on I tried to dig into everything that I could and ask: what's fundamental here? What's the deeper underlying thing here? What are the key questions? How can we figure them out? So that's the first part of my life, I guess: picking that as my framing.
So everybody, in some sense, early in life picks their image of status, their ideal, and they pursue that. And they often assume everybody else must have the same ideal. But they don't. And people often don't realise the choice they're making about what to set up as their ideal. But that's what I fell into. I fell into this ideal: I'm going to be the person who figures out big, important things. That was the game to me. That was the whole point of everything. And then through school, I kept thinking about things and figuring things out, and it was a long time before I realised, oh, I should, like, have a career plan or something. I was just so focused on reading things and thinking about them and figuring them out, and taking whatever excuse I could to do that, that I didn't actually realise, oh, well, if I want to be one of these big-thinking, important people, I'll have to, like, get a job somewhere and I'll have to have credentials to support that. So it took a long time for me to realise that I should have a plan like that.
And basically, I'd been out of school and I had some things I thought were interesting ideas, including prediction markets. And then I thought, okay, how am I going to do this? Oh, I need to go back to school and get a degree. And at that point I was more focused on, okay, I'm going to have to make compromises here. I can't just study what I'm interested in; I'm going to have to also accommodate what other people are interested in, in order to get credentials and, you know, institutional backing, so that I could continue in this world. And so I returned to school at the old age of 34 to start my PhD, and I had at that point two kids, aged zero and two. So it was a big disruption to my family to go back to school and get my PhD. And then for the duration of my PhD, and then a postdoc, and the first few years of a tenure-track position, I did lots of fun stuff on the side that I thought was interesting, but I constrained my main intellectual activity to be close enough to what the establishment wanted that I could, you know, get a PhD, get a tenure-track job, and eventually get tenure. And then, when I finally got tenure, I could go wild again, ignoring what everybody else wanted and just doing what I thought was interesting and important.
Tobi: You've explored brilliant, sometimes even eccentric ideas, from prediction markets to ancestral psychology to far-future brain emulations. You've also had to play the contrarian at times in academia and public discourse. How have you handled scepticism or controversy around your ideas? Have you ever had a moment of doubt or personal crisis, so to speak?
Robin: So here's how I frame this: I see the whole point of everything here as figuring out big, important things. And what we're trying to find is news - things that are surprising. A lot of academic work actually tries to show that the usual conventional view is exactly right, and people find that prestigious and important and give big accolades for it. But it's not really news if you thought something was true, and then you check, and it's still true. To me, the thing you're looking for is the news: things that you didn't expect to see, that are surprising. And so you should expect good work to be surprising, i.e. a priori unexpected. That is, people should not believe you until they see your arguments for your conclusions. That's what news and surprising stuff is, right? So I'm completely happy having news - having results where people, upon hearing the claims I'm making, say that can't be right. That's what news is. Unfortunately, many people, upon hearing what you say and thinking that can't be right, stop right there. They're not willing to actually listen to the arguments and evidence you've collected for this surprising view. If we're going to be looking for surprises, you've got to be ready to hear something that's claimed to be a surprise and then consider whether it actually is a surprise.
And then there's a separate effect, which is that I've basically been focused on looking for the most important neglected problems where I can find an angle. And it turns out humans just have a bunch of blind spots, and so there are just more important neglected things near the blind spots than elsewhere. That made me interested in those blind spots. That is, wherever other people have been neglecting stuff that's important and that I could make progress on, then yeah, I want to go there. So, for example, I was initially in physics and computer science, and I would say in those areas, if you find something that's unusual, there have usually been a lot of other people near there looking for the same sort of stuff. And so it's hard to find a big advance in computer science or physics, exactly because so many people have been looking around for advances.
Tobi: Yeah.
Robin: You're competing with all these other people who would love to find a new, important neglected thing because the world is really eager to find all those things. The world's really eager for new physical materials or physical processes, or new computer science algorithms or devices. The world loves that stuff, and they're eager for it. They pay a lot for it. So there's a lot of people eagerly looking for all that stuff.
And then I started to dabble in social science, and it seemed to me that in social science it was much easier to come up with surprising results. And so that's why I switched into social science. Wow, I thought, I must be really good at this - but not so much. The problem is, in social science, people just have a bunch of opinions about what they want to believe about the social world, and they're not so open to hearing that they're wrong. Humans don't actually fundamentally care about computer science or physics; they have an induced interest in those areas because they're practically useful, but they don't fundamentally care. So there's not so much resistance to a new idea in those areas, because we don't really care so much which particular theories we believe - we care more about what we can do with them. But in social science, people care about particular opinions on medicine or education or politics, etc. And because they have pre-existing opinions that they are working to support, that's an obstacle to them thinking clearly, which means they often leave important neglected problems unattended because of those obstacles.
So I went, okay, great, there are these important and neglected questions. But that's also an obstacle to convincing people that you've found an answer to something. That is, you can go past the obstacles that block other people because of their pre-existing opinions - you can think about it fresh, you can figure out new answers, you can get evidence and arguments for those answers, you can bring them back to people - and then they just won't listen. What I didn't realise about social science is that the reason it's so much easier to find new, important stuff there is that the world's not listening. The world's not so eager to get that stuff. And so, you know, by the time I figured that out, it was kind of too late; I'd made my commitment to social science. I'm still happy that I'm able to make a lot of surprising, important insights and advances. But I see the problem here: the reason it's so easy is that other people just aren't trying that hard. There's not so much demand, because if you find something, the world shrugs and goes, nah, we don't believe you.
Tobi: Final question for you. This is a bit of a tradition on the podcast. What's the one idea that you would like to see gain more status and influence, and be more widely held, adopted even? You're not allowed to say culture drift, by the way... [Laughs]
Robin: So can I say futarchy? Look, if my intellectual strategy is to find important, neglected things, I'm likely to think that the things I've found are in fact important and neglected. If you ask me what the important neglected things are, I'm going to say: well, the ones I've been working on, of course - that was my whole point in picking them.
Tobi: Yeah, I agree, I agree, but I mean, can you give me something new?
Robin: I mean, for example, there are institutional ideas that I didn't invent that I still think have a lot of potential. We could change democracy in big ways to make voters more likely to be informed, either through tests or random selection or other incentives. We could switch from first-past-the-post to proportional representation. We could do Harberger taxes - self-assessed property taxes - which are a better solution than eminent domain: we wouldn't need government to overrule individual property rights if we had Harberger taxes as a way to assemble property for large projects. There is a world full of interesting institutional ideas that I didn't invent, that the world should be more eager to try. So if you want me to generalise, I think I said this before: if we just gave more status and attention to trying out new ideas, we would make a lot more progress in the world than by arguing more about politics. People are eager for new ideas and technology in physics or computer science; they're just not very eager for new ideas in institutional arrangements. They mostly want to fight over who's in control of institutions, not about the structure of institutions. So if we could just try more variations on the structures of institutions and see what works well, we could make a lot more progress on institutional change, and that would be a huge value to the world.
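As a side note on the mechanism, here is a minimal sketch of how a Harberger tax works, using illustrative names and numbers of my own rather than anything from the conversation: an owner self-assesses a value, pays tax on that declared value, and must sell to anyone willing to pay it, which is what lets a big project assemble property without eminent domain.

```python
# Minimal sketch of the Harberger (self-assessed) tax mechanism.
# Names, tax rate, and prices are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Parcel:
    owner: str
    declared_value: float   # owner's self-assessed value, publicly posted

    def annual_tax(self, tax_rate: float) -> float:
        # Tax is charged on the owner's own declared value.
        return self.declared_value * tax_rate

    def force_buy(self, buyer: str, new_declared_value: float) -> None:
        # Anyone may take the parcel by paying the declared value,
        # so a large project can assemble parcels without eminent domain.
        print(f"{buyer} buys from {self.owner} at {self.declared_value:,.0f}")
        self.owner = buyer
        self.declared_value = new_declared_value

parcel = Parcel(owner="Ada", declared_value=100_000)
print("Ada's annual tax at 2%:", parcel.annual_tax(0.02))
parcel.force_buy(buyer="RailCo", new_declared_value=150_000)
print("New owner:", parcel.owner, "new tax:", parcel.annual_tax(0.02))
```

Declaring too low a value invites a buyout, while declaring too high raises the tax bill, which is the incentive that keeps self-assessments roughly honest.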
Tobi: I've been doing these podcasts for close to six years. I think that's the most meta answer I've gotten on that question.
Robin: I like to be meta.
Tobi: Yeah, that's very Robin. Thank you so much. You're listening to Ideas Untrapped Podcast, and my guest today is Robin Hanson, economics professor and an all-around intellectual giant. Thank you so much.
Robin: Books.
Tobi: Author of two books, actually: The Age of Em and The Elephant in the Brain. I'll put up links to his books and some of his most important essays and articles in the show notes. Thank you so much, Robin, for joining me.
Robin: Thank you for talking to me, Tobi.