
SCIENCE, SKEPTICISM, AND TRUTH

A Conversation with Oliver Beige

Hello everyone, and welcome to Ideas Untrapped podcast. My guest for this episode is decision scientist Oliver Beige, who is returning to the podcast for the third time. Oliver is not just a multidisciplinary expert, he is one of my favourite people in the world. In this episode, we talk about scientific expertise, the norms of academia, peer review, and how it all relates to academic claims about finding the truth. Oliver emphasized the importance of understanding the imperfections in academia, and how moral panics can be used to silence skeptics. I began the conversation with a confession about my arrogant belief in science, and closed with my gripe about ‘‘lockdown triumphalism’’. I thoroughly enjoyed this conversation, and I am grateful to Oliver for doing it with me. I hope you all find it useful as well. Thank you for always listening. The full transcript is available below.

Transcript

Tobi;

I mean, it's good to talk to you again, Oliver.

Oliver; 

Tobi, again.

Tobi;

This conversation is going to be a little bit different from our previous… well, not so much different, but I guess this time around I have a few things I want to get off my chest as well. And where I would start is with a brief story. I've forgotten precisely when the book came out, but it was Thinking, Fast and Slow by the Nobel laureate Daniel Kahneman. I had this brief exchange with my partner about it. She was quite sceptical in her reading of some of the studies that were cited in that book.

And I recall that the attitude was, “I mean, how can a lot of this possibly be true?” And I recall - not that I ever told her this - the sort of assured arrogance with which I dismissed some of her arguments and concerns at the time by saying, oh yeah, these are peer-reviewed academic studies and they are more likely right than you are. So before you question them, you need to come up with something more than "this doesn't feel right" or "it doesn't sound right". And, what do you know? Two or three years after that particular exchange, almost that entire subfield imploded in what is now the reproducibility or replication crisis, where a lot of these studies didn't replicate, a lot of them were done with very shoddy analysis and methodologies, and Daniel Kahneman himself had to come out and retract parts of the book because of it.

So I'm sort of using this to set the background of how I have approached knowledge over my adult life - as someone who has put a lot of faith, naively, I would say, in science, in academia and its norms as something that is optimized for finding the truth. So to my surprise, and sometimes shock, over different stages of my life, and recently in my interrogation of the field of development economics and the people who work in global development, the amount of politics, partisanship, bias, and even sometimes sheer status games that academics play, and how it affects the production of knowledge, gave me a kind of deep personal crisis. So that's the background from which I'm approaching this conversation with you.

So where I'll start is from the perspective of simple truth finding, and I know that a lot of people, not just me, think of academia in this way. These are people who are paid to think and research and tell us the truth about the world and about how things work, right? And they are properly incentivized to do that by the norms and institutional arrangements that shaped their workflows, and so many other things we have known academia and educational institutions to be. What is wrong with that view - academia as a discipline dedicated simply to truth finding?

Oliver;

There's many things. The starting point is that it was not only Daniel Kahneman; behavioral economics has had multiple crises, also with falsified work. Not only with wrong predictions - wrong predictions are bad but acceptable. This is part of doing science, part of knowledge production. But falsification is, of course, a bigger problem, and they had quite a few scandals in that. The way I approach it always is with a metaphor from baseball. There's something called the Mendoza Line in baseball, which is a hitter with a .200 batting average. This is like the lowest end of baseball. If you go below .200, then you're usually dropped from the team. And on the upper end you have really good hitters that hit an average of like .300 or something. If you have a constant .300 average, you usually get million-dollar contracts, right? We can translate this to science in a lot of ways. Of course, there is a lot of effort involved in going from a .200 average to a .300 average - from a 20% average of being right to a 30% average of being right. But still, if you're at the .300 level, you're wrong 70% of the time.

And so the conversations I observe are between people who are not specialists in a field, trying to figure out who is right in a certain conversation. Talking about conversations in a scientific field, we basically try to use simple pointers, right? One of the pointers is, of course, a paper that has gone through peer review. You see these conversations of like, okay, this paper has not been peer reviewed, this paper has been peer reviewed. But peer review does not create truth. It reduces the likelihood of being wrong somewhat, but it doesn't give us any indicator that this is true. The underlying mechanism of peer review usually cannot detect outright fraud. This has happened quite a few times. And also, peer review is usually about how close the submitted paper is to what the reviewers want to read. There is a quality aspect to it, but ultimately it changes the direction of the paper much more than it changes quality. So academia overall is a very imperfect truth-finding mechanism. The goal has to be [that] the money we spend on academic research allows us to get a better grasp of so far undiscovered things - undiscovered relationships, correlations, causal mechanisms - and ultimately, it has to give us a better grasp of the future and of what we should do in order to create better futures. And this all basically comes down to predicting the future, or things that were in the past but are yet to be discovered.
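To put rough numbers on this point about peer review, here is a minimal Bayes-rule sketch. All the figures in it (the base rate and the reviewer pass rates) are made-up assumptions chosen purely for illustration, not empirical estimates:

```python
# Illustrative Bayes update: what does "passed peer review" buy you?
# Every number here is an assumption chosen for illustration only.

base_rate = 0.20        # prior: fraction of submitted claims that are true
p_pass_if_true = 0.80   # review passes most methodologically sound papers...
p_pass_if_false = 0.40  # ...but also passes many unsound ones

# P(true | passed review) via Bayes' rule
p_pass = base_rate * p_pass_if_true + (1 - base_rate) * p_pass_if_false
p_true_given_pass = base_rate * p_pass_if_true / p_pass

print(f"P(claim is true | passed review) = {p_true_given_pass:.2f}")  # ~0.33
```

Under these assumed numbers, passing review moves a claim from 20% to about 33% likely true - better than nothing, but nowhere near a certificate of truth, which is exactly the gap being described.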

Evolution tends to be a science that is focused on the past, looking at things in the past. But there are still things we have to discover, connections we still have to discover. And this is what academia is about. And the money, the social investment we put into academia, has to create a social return in the way that we are better off doing the things we need to do to create a better future for everyone. And its [academia's] track record in that regard has been quite mixed. That's true.

Tobi;

So let's talk a little bit about incentives here. Someone who has also written quite a lot about some of these issues - I think he's more focused on methods - is Andrew Gelman, the statistician. I read his blog quite a lot, and there's something he consistently alludes to, and I just want to check with you how much you think it influences a lot of the things that we see in academia that are not so good, which is the popularity contest - the number of Twitter followers you have; whether you are blue-checked or not; bestselling books; TED talks that then lead to people making simplistic claims. There's the issue of scientific fraud, right, some of which you alluded to also in behavioral economics, behavioral science generally. There was recently the case of Dan Ariely, who also wrote a very popular book, Predictably Irrational, but who was recently found to have used falsified data. And I recall that you also persistently criticized a lot of people during the pandemic, even till date - a lot of people who made outright wrong predictions with terrible real-life consequences, because policymakers and politicians were acting under the influence of the “expert” advice of some of these people, who will never come out to admit they are wrong and are even less likely to correct their mistakes. So how are the incentives misaligned?

Oliver; 

Okay, many questions at once. How does academia work? As I always like to say, academic truth finding, or whatever you want to call it, is not too far away from how gossip networks work. The underlying thing is, of course, that any kind of communication network is basically sending signals - in this case, snippets of information, claims, hypotheses - and the receiver has to make a decision on how credible this information is. You have the two extreme versions, which are basically saying, yeah, I just read this paper and I think this paper makes a good claim and is methodologically sound, or, I just read this paper and this paper is crap, everything about it is wrong. So you basically start with a factual claim and an evaluation. This happens on science Twitter in the same way a gossip network communicates typically good or bad news about the community. A gossip network also communicates hazards within the community, sending warnings, which is what academics have been doing quite a bit over the last two and a half years. And they also have this tendency to exaggerate or downplay claims, and to create opposing camps, because very few middling signals are being retransmitted.

I've been watching the funeral of the Queen. I have no strong opinion about British royalty in either direction, so if I post something on Twitter about it, nobody will retweet it. But, of course, the two extreme ends will be retweeted. This is how Twitter works, but it's also how science usually works. You'll see that strong claims in either direction are being transmitted much more frequently than middling, moderate claims. So the bifurcation of opinions is inherent in both of them. Then there is this element of credibility - that you assign credibility based on how someone else reflects your own beliefs. Your own prior beliefs, really. This is the core mechanism: [if] I read something that confirms my prior beliefs, I'm much more likely to retransmit it with a positive note that "I really like this and I think it's methodologically sound." And if it's something that contradicts my prior beliefs, I'm much more inclined to question its methodology. And I think we've seen this to an extreme over the last two and a half years, because we had situations where the discussion was very polarized. And the really bad thing to observe in scientific discourse in general, but also in the amplified scientific discourse on Twitter, is the absolute lack of quality control when something confirms one's own prior beliefs. This is what a scientist has to do: if I get something that confirms my beliefs, I still have to do a minimum of quality control [to check] if it's actually methodologically sound. And this clearly did not happen. People were just passing on anything that confirmed their beliefs and basically expected someone else to do the quality control. The first job any academic has is to subject everything - even what confirms your beliefs, even what you think is true - to quality control. And clearly this rarely ever happens. This is why academia is supposed to run on confrontation: basically, the other camp does it. But if you bring academia together with Twitter, which is [an] amplification network that runs on social engagement, likes and retweets, then you have a very toxic mix. And this is the situation we had over the last two and a half years - how scientific communities can coalesce around things that are just not empirically sustainable.
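A toy simulation of this retransmission mechanism. The amplification rule below is a hypothetical stand-in, not a model of any real platform's algorithm: takes are passed on with probability proportional to how extreme they are, so the visible conversation ends up far more polarized than the underlying opinions.

```python
import random

random.seed(42)

# Underlying opinions: bell-shaped around 0, i.e. mostly moderate takes.
opinions = [random.gauss(0, 1) for _ in range(100_000)]

# Assumed retransmission rule: the more extreme a take, the likelier it
# is to be passed on; middling takes are mostly ignored.
def retransmitted(x):
    return random.random() < min(1.0, abs(x) / 3)

visible = [x for x in opinions if retransmitted(x)]

def extreme_share(xs):
    """Fraction of takes more than 1.5 'standard deviations' from moderate."""
    return sum(abs(x) > 1.5 for x in xs) / len(xs)

print(f"extreme share among underlying opinions: {extreme_share(opinions):.2f}")
print(f"extreme share among retransmitted takes: {extreme_share(visible):.2f}")
```

With these assumed numbers, roughly 13% of underlying opinions are "extreme", but around a third of what gets retransmitted is - the middling signals simply disappear from view, which is the bifurcation described above.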

Tobi;

Now pardon my language, but there's a way that academics, whether they are scientists or social scientists (I know economists are particularly notorious in this arena), completely fuck with your mind when you're a skeptic. So I'll give you an example. I opened with the replication crisis in psychology; two days ago, I read a Substack post by someone who is presumably a psychologist, basically complaining that, “oh yeah, after the replication crisis, a lot of them in academia who were doing PhDs were also having their own crisis of confidence, because then you have to confront a public who thinks they know everything.” So, like, you describe your study, or you say you found something, and someone says, "oh, but the field didn't replicate." The whole thing just sounded like some weak apologia that didn't make any sense. I recall that sometime a little after the financial crisis [of] '07-'08, if I recall correctly, Paul Krugman was dismissing something Nassim Nicholas Taleb wrote by saying - I'm not quoting him verbatim, I'm paraphrasing - that if you think you found something that a whole community of academic experts missed, then you are most likely wrong.

So it brings me to the question of skepticism and how to approach it, because at the other extreme end of this is to say… and certainly there are people like that in the world today who think that no scientific knowledge is true, who question even proven medicine, and there are also conspiracy theorists who say outright false things for their own motives, no doubt. So, like, how does one deal with skepticism? Especially if you have conspiracy theorists and outright ignorant people on one side, and on the other side you have academic confusion, or experts who, out of their own biases or some of the institutional and social problems you have described, cannot really come out and admit, "we botched this and this, and this is what we are doing to correct our errors." How do you handle skepticism in such a milieu?

Oliver;

The first thing - and it's also the reason why I like the baseball metaphor - is that if you are [an] academic, an expert in a field, you spend far more time studying this field than others, you're communicating with other experts in the field, so you can get this feeling, a probably justified feeling, that because you put more effort into it you should get more reward in the form of more recognition and more credibility. But you should also come to the realization that in any field you're in, and that includes economics and all other fields, there are so many things that are still undiscovered, so many things that are undiscoverable, that we have to build axiomatic constructs around them in order to actually help us move forward. And if you're able as an academic to move from 20% right over many years to 30% right, you're still 70% wrong. These are not empirical numbers, but I think they get the point across. And if you don't get that, then you're doing something wrong as an academic in general, right?

And we've seen this arrogance that was not supported by empirical superiority quite a bit over the last years. Especially Paul Krugman, who got some of these things very wrong. Just recently he came out and admitted that most macroeconomists have been dead wrong about inflation for over a year. And then he claimed that nobody could have foreseen that. This is doubly wrong. You can be arrogant or you can be incompetent, but you cannot be both at the same time. Basically, academia is also a competition for attention. This is an attention industry, and exaggerated claims get more attention than moderate claims. So that in itself is not the problem. The problem I see in the discussion is the complete absence of understanding of what the scientific method entails. Clearly, a lot of academics become specialists in a particular subsection of the scientific method but don't have an understanding of how the whole thing works.

Which is interesting, especially in economics, because economics has this very strong claim that it underwent an empirical revolution over the last 20 years, which is certainly true - econometrics has gotten a much bigger role over the last 20 years. But they also claimed that because they underwent an empirical revolution, they also underwent a credibility revolution, that their results are much more credible, and this is a much bigger claim. And this is not a claim that recent events have validated; recent economic performance has not been up to par to support it. But the key thing [is that] the scientific method starts out from a theory, which does not have to be expressed formally, but you have to have an overarching idea of how things are connected, how some things cause other things. And from this, you have to be able to create predictions - basically, foresee future discoveries. And you do this in a number of steps. The first step is usually formalization. You try to come up with a formal model. There are lots of discussions about, okay, how formal does a model have to be? Usually, formalization is a self-discipline device. It means that you don't come up with ad hoc predictions; the predictions are based on a clear mechanism that should work under a variety of conditions. Once you have a formal model - and we've seen a lot of people trying to build formal models over the last few years, and a lot of them have gotten more attention than they deserved or expected - then you come up with a hypothesis. A hypothesis usually means you are comparing your own view of the world to competing views of the world. You try to find the positions where they diverge the most or where the divergence becomes visible. And then you do empirical tests, experiments - or in economics, you try to do a natural experiment or controlled trials - in order to show that your overarching theory, your model, is closer to the truth than the competition. But the key, and this is remarkably what a lot of people have simply missed out on, is replicability: the move away from a subjective view of the world to an objective view of the world, so that your claims can be refuted or replicated by others.
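A minimal sketch of that final step - testing two competing theories where their predictions diverge - using a made-up data-generating process and two toy hypotheses; nothing below comes from any actual economic model:

```python
import random

random.seed(1)

# Made-up "world": the true mechanism is y = 2x + noise.
xs = [random.uniform(0, 10) for _ in range(200)]
world = [(x, 2 * x + random.gauss(0, 1)) for x in xs]

# Two competing theories, formalized as predictive models.
def theory_a(x):  # claims a linear mechanism
    return 2.1 * x

def theory_b(x):  # claims a saturating mechanism
    return 15 * x / (x + 2)

# Test where the theories diverge: mean squared prediction error on the data.
def mse(theory):
    return sum((theory(x) - y) ** 2 for x, y in world) / len(world)

print(f"theory A error: {mse(theory_a):.2f}")  # small: close to the mechanism
print(f"theory B error: {mse(theory_b):.2f}")  # large: diverges at high x
```

The theory with systematically smaller prediction error is provisionally closer to the truth, and, crucially, the test is replicable by anyone who reruns it on fresh data.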

And this also means that people who are opposed to your viewpoint have to admit that your view of the world was better than theirs. And this has almost completely broken down. Macroeconomics in particular has been dead wrong, especially about inflation, which is really one of the core predictive elements of macroeconomics, and it has been dead wrong for an extended period of time for the very simple reason that it did not want to acknowledge it. And this is a problem, right? So then you start obfuscating about where you went wrong, and you try to play political games - pretending that being wrong was just an unexpected change in the economic or social environment, when being dead wrong was basically caused by your model being fundamentally wrong.

Very clearly, economics should be in a crisis. The crisis should be acknowledged within the field, and the less the field itself owns up to this crisis, the more the outside world [should] pressure the field to come clean about its wrong predictions, because the costs of getting these things wrong are staggering.

Tobi;

True. So I have three questions, but I'll ask them differently. Towards the end of your answer, you talked about political games, which is something that also gets me really angry and sometimes confused. A related issue I have found is in development economics, but that will take us into the second question. So let's talk about the politics here. Take a field like economics, which is highly partisan. You have some people who are called neoliberal economists. Some people are socialists, some are heterodox, some are capitalists. I know within the field of macroeconomics itself, they have all these other labels - New Keynesian, monetarist, you know, whatever.

But what I'm getting at is the role of partisanship, because you always have rival camps accusing each other of partisanship. One story I related to, which I'm sure you must have come across too: I saw a story on Twitter a couple of weeks before the Chilean constitutional referendum that Mariana Mazzucato, Gabriel Zucman and Thomas Piketty - who are all economists, who are all leftists, who mix their research with political preferences and policy advocacy - planned to travel to Chile to celebrate the new draft constitution, because it's a win for justice, it's a win for this or that, it's the final rejection of the Pinochet dictatorship and the neoliberal imposition that came with it. I did not encounter in that particular discourse chain anybody asking what is good for Chile and Chileans, and even more relevantly, how Chileans feel about this.

And, I mean, what do you know? The referendum happened and 60% of the voters rejected the new draft. And I know that partisanship and political games, like you said, play out not just in economics; it happens in other fields as well. So I'm curious - is this okay? And how exactly have, should I say, scholars, particularly in social science, people who have been able to make extraordinary contributions to our body of knowledge, managed to keep their personal politics away from their work? Or is it just that everything used to be easier before we had Twitter?

Oliver;

Politics and economics have been intermixed since long before Twitter. So this is not particularly new, and the mechanism itself is also not new. But your starting point is basically, as I said, very simplified: the role of academia is to predict the future and to design strategies to reach good futures. In that situation, it's not surprising that academics take political positions. The problem comes in, of course, when the ideological mix in academia, the ideological mix in the overall population, and the ideological mix in the ruling elites don't line up. This is a tricky situation, but I have been close enough to the highest echelons of power for long enough to observe what happens. If you have a change in the administration in Washington DC, then usually the new administration brings in economic experts from its favourite schools. And then if the administration loses to the other party, the other party brings in their favourite economists. So in that regard, if you have this semi-constant exchange of viewpoints - an economic viewpoint gets discredited, it gets replaced via the political process with other people - this is usually how you get closer to the truth. I used to call it the drunk unicyclist: you're not really moving forward in a straight path, you're wobbling left and right, and you just try to avoid falling into a ditch. And this is what we observe. No political process is perfect. And as long as the political interests of the academics and the political interests of the elite are aligned with those of the population, this is as good as we can get it.

I generally have a problem with ideology in economics, but it's inevitable. And my standard is that I should be able to read and appreciate writers from the left end of the spectrum and the right end of the spectrum. I usually deduct points for ideological bent, but good thinkers can make good points even if they are driven by ideology. The problem also comes in when there is essentially no penalty for being wrong in academia. Being wrong, even being catastrophically wrong, externalizes the damage to others. So in the worst scenario, if you're tenured faculty - sort of what I call the endowed-chair blue check, a tenured faculty member with a wide reach on social media - you can be dead wrong, you can be persistently wrong, completely unwilling to own up to being wrong, and there's no real penalty for it. This is the major problem we're facing right now.

Tobi;

So that then brings me to the question of niches, or what I'll call cottage industries, in academic research generally. I know I recently asked you what you think about the EA [Effective Altruism] movement. I'm not talking about them specifically, but for descriptive purposes, we see the behavior of that group - the adherents, the critics - and how much commitment adherents in particular display to their tribe. I see a lot of that too in academic research. One group I am very familiar with is in development economics, where everything now is about field experiments and randomized controlled trials. And one of the fundamental ways it biases research, in my opinion, and also has negative real-life consequences, is this: if you do a field experiment, a randomized controlled trial on cash transfers, say in a Kenyan village over a period of time, and you measure your results and they are positive, you say, oh yeah, cash transfers work.

But the real questions that policymakers - whether local, central, or regional governments - deal with every day are sometimes bigger than that. For example, if you have to choose between building a power station for that particular village at $1 million versus scaling up your cash transfer program, what you'll find is that development economists in the current paradigm would most likely go for the cash transfer plan: let's scale it up, we have tested this, it works. Essentially, they are biased toward what they can measure - we don't know the spillover benefits of electrification, it would be difficult to design a study, there are so many externalities. So basically they reduce real-life situations to the parameters of their methods and their limitations. And such behaviour is very, very similar to what you see with other social groups, whether it is the Effective Altruism movement or… I was briefly involved with the Charter City people, for whom, for every problem they can see, the solution is to build a charter city.

That movement was actually inspired by your dear friend, Paul Romer. So there is this almost blind commitment and loyalty to their method, to their cottage industry. And sometimes I see it as just drumming up support for their tribe, as opposed to a commitment to the truth and finding what works. So, again, pardon my big question, what's going on here?
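To make the measurability bias in Tobi's example concrete, here is a toy cost-benefit sketch; every figure in it is an invented assumption for illustration, not an estimate from any study or from the episode.

```python
# Toy comparison: cash transfers vs. a power station for the same budget.
# Every number below is an invented assumption, not data from any study.

BUDGET = 1_000_000  # $1M to allocate

# Cash transfers: the benefit is directly measurable in an RCT.
cash_return_per_dollar = 1.3
cash_total = BUDGET * cash_return_per_dollar

# Power station: the directly measurable return looks worse...
power_direct_per_dollar = 0.9
# ...but spillovers (new businesses, refrigeration, study hours at night)
# are real yet hard to capture in a short field experiment.
power_spillover_per_dollar = 0.8  # pure assumption

power_total = BUDGET * (power_direct_per_dollar + power_spillover_per_dollar)

print(f"cash transfers, measured benefit only:  ${cash_total:,.0f}")
print(f"power station, incl. assumed spillover: ${power_total:,.0f}")
```

Counting only what an RCT can measure, cash wins ($1.3M vs $0.9M); once plausible spillovers are priced in, the ranking flips ($1.3M vs $1.7M). The decision hinges entirely on benefits the method cannot see.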

Oliver; 

Okay, two things. On the starting point about tribes within academia: one of my favourite sayings is that tribalism is the shared belief in counterfactuals, a counterfactual being everything that is unknown. And the less we know, the more unknowns there are, the more we tend to flock together with our own tribes. So this is something you see everywhere in academia. That's what we call thought collectives. Ludwig Fleck, one of the guys who influenced Thomas Kuhn, came up with this term, thought collectives, to describe the idea that people who share the same idea of causal mechanisms tend to come together and confirm each other, and create this thought collective. And this is, of course, what we see here, especially in academia. Economics has an additional problem. I think it's not nearly as strong in development economics as in other fields, but it's also visible there. This is very much the way economists are recruited. Economics, especially US- and UK-centric economics, is extremely mathematized. Mathematical skills are basically number one, two, and three on the priority list.

And so you have basically a situation where real-world understanding has almost no role in getting accepted into PhD programs or getting promoted within the system. It used to be theory knowledge, formal theory knowledge; now it's econometrics knowledge that gets you promoted. And this is very far away from the qualifications needed to solve real-world problems. And of course, people are impressed by mathematical skills, so this is something you can play as a trump card. And this is what happens in the field. The field is closing itself off from all kinds of outside knowledge because of that, especially in the social sciences. In my world, I use people with mathematical skills, but only for very, very clearly defined tasks. I have my own mathematical skill set, but I also understand what its limitations are, and I think that's a major problem: if everyone around you came up in this system that promotes mathematical skills over real-world skills, then you believe that this is the only thing you need. And it's been very clear that basically every ten years, economics has a major crisis about being completely wrong in its predictions. And this intellectual monopoly is a major part of the problem.

Tobi;

My third question in that line then pertains to the philosophy of science.

Oliver;

Yes.

Tobi;

So there are people who argue that a lot of these problems arise because modern science, or the methodology of science today, is divorced from some kind of philosophical foundation. I'm familiar, relatively mildly, with three philosophical approaches to science and, let's just say, truth finding.

Thomas Kuhn basically puts everything down to competing paradigms - like my last question, you know, competing tribes. And it's the tribe that wins at the moment that sort of has the monopoly on truth; not strictly, but socially. Then there's Karl Popper, who is also quite popular: for anything to be valid as science, it has to be falsifiable. And we've seen this play out so much in particle physics, with things like string theory and the many-worlds interpretation, where the critics are saying, you guys are basically making claims that are not falsifiable, that cannot be tested, and what you are doing is not science. That has been going on more or less for about three decades now, right? And, of course, there's the Lakatos approach, which sort of fits your own view - correct me if I'm wrong - which is that science has to make novel claims; it has to be predictive, it has to make predictions about the world. So my question then is: academia, science, the truth-finding industry, so to speak, or the knowledge production industry - is it having a philosophical crisis?

Oliver;

I think it has more of a structural crisis. I'm not that deep into the philosophy of science; I'm much more interested in the process itself. But one of the things that matters to me is Milton Friedman's claim that there are no wrong assumptions, but whatever assumptions you make about the world have to generate correct predictions. A theory is evaluated by its ability to produce predictions that survive attempts at falsification, right? Predictions that turn out to be true even if others don't believe them. This is something you see in the arts as well, and actually in religion as well: this mechanism of belief propagation that starts with one person believing something and ends - over time, and that can be many decades - with it being accepted as true by everyone.

So everyone starts believing in it. Basically, a social contagion mechanism. I've always been interested in this. One arena where this happens, or should be happening, is science, right? This is, of course, a process, and the process happens via the academic mechanisms of peer-reviewed publications, getting tenure based on publication records, and so on. And these are all very, very imperfect mechanisms. The two extreme versions of that [are] the American system, which is extremely stratified, and the German system, which is the opposite, non-stratified, [and] produces a massive amount of mediocrity. So neither of them is an optimal mechanism to create truth. And we've seen over the last two and a half years that political posturing took precedence over truth finding.
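A minimal sketch of this belief-propagation mechanism, assuming an arbitrary random contact network and a made-up adoption probability (neither comes from the episode or from any study):

```python
import random

random.seed(7)

# One initial believer in a population of undecideds; network size and
# adoption probability are arbitrary illustrative choices.
N, K, P_ADOPT, ROUNDS = 500, 6, 0.15, 40

# Random contact network: each person knows a few random others.
contacts = {i: random.sample([j for j in range(N) if j != i], K)
            for i in range(N)}

believers = {0}  # it starts with a single person
history = []
for _ in range(ROUNDS):
    newly_convinced = set()
    for b in believers:
        for c in contacts[b]:
            if c not in believers and random.random() < P_ADOPT:
                newly_convinced.add(c)
    believers |= newly_convinced
    history.append(len(believers))

print("believers after every 5th round:", history[4::5])
```

The output traces the familiar S-curve: a long quiet start, a fast middle, then saturation - which is why a claim can look fringe for years and then quite suddenly become what "everyone" believes.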

Is it in a crisis? I think, yes, very clearly. We have had two and a half years in which very wrong, easily debunkable claims were propagated and not retracted, even after they had been proven wrong. And ultimately, we're in a situation where an economic crisis was very clearly caused by misjudgment from people whom we support and pay to be less wrong than the overall population. And that just simply did not work.

Tobi;

One last thing I'd like to get off my chest, and then I'll hand over to you: specifically, following from our last two podcast episodes, I'm a bit frustrated that there is a bit of lockdown triumphalism - that the people who vigorously and vehemently used their academic or expert pedigree…

Oliver;

Credentials.

Tobi;

Yeah…to advocate for lockdowns are also taking a sort of victory lap. The pandemic is over, everything is back to normal, we did the right thing even though the whole world was against us. That frustrates me a little. I was watching a clip on YouTube recently - because you get an even more sensible take from everyday people, people who are experiencing these things, than from people who are building models and tweeting - of one person somewhere here in southwest Nigeria complaining during the pandemic that the government had decided it was better for us to die at home of hunger than to die from the pandemic. Because: this pandemic, we don't know what it is, we don't know how it spreads, but without giving us any information, you basically confined us to our homes with no means of livelihood and nothing to depend on. That makes me sad, because here in Nigeria and in many parts of Africa today, a lot of what we are seeing and calling the food crisis, the cost of living crisis, whatever you want to call it, did not necessarily start with, but was aggravated or exacerbated by, that approach to the pandemic. And it makes me sad that the people who are culpable can take a victory lap. So that's me. Over to you. What would you like to get off your chest about everything we have discussed today?

Oliver; 

Number one: epidemiological modelling was clearly an empirical debacle. The predicted epidemic wave that would last five to six months and wipe out large parts of the population never happened. And we have, I don't know how many, thousands of waves in our databases now; they all run for about eight weeks. They start declining - the acceleration starts declining - very early on. And we now have enough scenarios where simply no measures were taken at any time during the wave. The key moment in that case was, I think, when Paul Krugman complained that Denmark was removing all restrictions at the height of the epidemic wave, and basically the very next day, the Danish wave dropped. Not a lot of people saw it, but it was extremely embarrassing for him. I've been in very much the same situation before, because I was living in the United States in the early 2000s, and it was very clear to me from the very beginning of the Iraq war that Saddam Hussein did not have bioweapons. So the whole invasion was built on an untruth. And the United States and the UK back then also knew that.

Back then there was a strong moral panic, especially in the United States, against anyone who spoke against the rationale for going to war. Now, 20 years later, almost nobody is willing to admit that they were speaking up in favour of the invasion back then. This is like a one-generation thing. And we'll see the same thing with the epidemic, this is very clear. The young people who had to carry most of the restrictions - up till now, in Germany, they are still forced to wear masks at school - will have a very different view of what happened than the politicians in power. These are things that will evolve over many, many years, so I expect the same thing to happen. The interesting thing is that back then it was more the right end of the political spectrum that drove the moral panic; now it has moved over to the left end of the political spectrum. Why these moral panics map onto the ideological spectrum the way they do is something that still has to be investigated. But it might be an interesting topic for the next call.

Tobi;

True.
