Is it possible to develop a unified science of catastrophic risk? How can we convince policymakers to take risks to human existence seriously enough? How can we improve our foresight so that we can spot the next big disaster before it hits us?
Dr Clarissa Rios Rojas discusses these questions with Toby Wardman of SAPEA. We also discuss how to start a difficult conversation with a politician, whether future doomsday manuals should be stuffed into envelopes or just put online, and why being a scientist is cool.
The transcript below was generated automatically and may contain inaccuracies.
Toby: Hello. Welcome to the Science for Policy podcast. My name is Toby, as it always is. And today, I'm delighted to be speaking to Dr. Clarissa Rios Rojas. Clarissa has a background in molecular biology, and she now works on science advice for policy at the Centre for the Study of Existential Risk at the University of Cambridge. The centre's rather glamorous mission is to evaluate and reduce the risk of human extinction or the collapse of civilisation. Clarissa is also a member of the Global Young Academy, where she was the lead on the science advice working group. And before moving to the UK, where she now lives, she has lived and worked in her native Peru, Finland, Sweden, Germany, Australia, Italy, Peru again, and Switzerland. Clarissa, that's a long list of countries.
Clarissa: Thank you so much. Yes, I had the opportunity to be in different places. And I guess that goes together with my nomad style of life and really enjoying meeting new cultures and new people.
Toby: So it's an itchy feet thing. Has it always been a plan to travel, or are you just following where your career takes you?
Clarissa: I think both. But yeah, it has been where the opportunities open up and where I also like to go.
Toby: I guess I shouldn't be so cheeky as to ask you if you think you're settled now in the UK.
Clarissa: Well, for the next three years, for sure, I'll be in the UK, working at the University of Cambridge.
Toby: Right, at the Centre for the Study of Existential Risk. There seem to be a few of these kinds of hybrid research centres and think tanks springing up, with very funky, almost science-fiction names. I spoke with Peter Gluckman a few episodes back, and he's just founded the Centre for Informed Futures in New Zealand.
Toby: I know Oxford has a Future of Humanity Institute, and Cambridge is studying existential risk. It sounds like a very broad ambition. What's it all about?
Clarissa: Yes, so at the centre, as you mentioned, what we are trying to do is dedicate our research to the mitigation of risks that could lead to human extinction or civilisation collapse. And we do it in a very transdisciplinary way. My colleagues are philosophers, molecular biologists, economists, lawyers, and we are all together trying to mitigate these risks. And our focus is on technological risk, biological risk, and everything that is associated with climate change and artificial intelligence.
Toby: Right, it sounds fascinating. And your background is molecular biology. Do you get to do any of that now?
Clarissa: Well, it depends. Sometimes we have projects that are related to bioweapons or engineered pandemics, for example, and our expertise is required for that. But through the years, I have been learning a lot about other fields like artificial intelligence, climate change, and, recently, circular economies. So it's a very broad range of topics.
Toby: Yeah, broad topics, yes. But it also strikes me that there's an equally broad range of solutions. I mean, if you want to protect against a rogue AI taking over the world, I don't know. Presumably, you have to regulate AI or get smarter about how you design things or something. Meanwhile, if you want to stop a terrorist from creating a superbug that kills everyone, I guess you need to try and control access to certain material-- I mean, I don't know, right? I have no idea. But my point is, does the broad class of existential risk really have enough in common that we can usefully have a centre for studying it all together? Can there be such a thing as a single unified science of catastrophic risk?
Clarissa: Well, that's actually very interesting for me, because I'm now part of a project called A Science of Global Risk. And what we are trying to do is find the answers to the question that you just asked me. For that, we have decided to focus on three different things. The first will be: what are the methods that allow us to think better about the future risks of different events? We are trying to do that using foresight methods like future scenarios and horizon scanning and so on. The second strand will be the co-creation of policy: engaging with policymakers, with citizens, and with industry from the beginning, in order to have a broader view and find solutions that can be built by all the different partners that make up the society we live in. And the third strand will be how we communicate risk. So I guess that by combining these three topics, we are going to try to find and push for an agenda that really can encompass all of this, as you mentioned, and create a cohesive and concise science of global risk.
Toby: OK, and the hope is, or the theory is that this approach can still be helpful even if it's neutral about the source of the risk, about the actual thing that causes the catastrophe?
Clarissa: Yes, so we are thinking that, for example, when an event happens, it could be that it has different sources, but what's going to probably happen is that different systems are going to collapse in a relatively similar way.
Toby: I see.
Clarissa: For example, with the COVID-19 pandemic, we're seeing how the different systems started collapsing one by one, and then we have to think about policy solutions. So with the science of global risk, what we want is to be ready, to be prepared, to know what may happen, and to have different ways we can navigate through a risk in order to manage it.
Toby: So the source of the catastrophe could be anything from a mad AI to an asteroid smashing into the Earth, but the effects are going to have things in common.
Clarissa: That's the idea. Yes.
Toby: Well, so resources are finite, attention is finite, political will and political capital are finite, at least for an elected government. How can you make politicians take these big ideas seriously and dedicate their time and brain space to thinking about global catastrophic risk?
Clarissa: It reminds me of some advice that Jonathan Forman, who was the science advisor for the Organisation for the Prohibition of Chemical Weapons, gave me. He said the best diplomat, or the best way to engage with policymakers, is to really put yourself in their shoes and think about what they are passionate about, what their concerns are. So if they are very worried about the health system, let's say, then that will be the way in: I will talk about bioweapons or bioengineered pandemics and how they will impact public health and so on. If it's someone working more in agriculture, then I will probably talk about volcanic eruptions or the use of AI to improve certain systems. So I think it really depends on whom you engage with. If it's someone who is already interested in future generations and in thinking about the next steps that parliament has to take, then it's much easier to cover a wider variety of topics and go more in depth. One of the things we are doing now, for example, is creating a graphic comic from one of our papers. The paper is about bioengineering issues that are going to be relevant in 5, 10, or 15 years. We have hired a comic artist and also someone who is going to write the script. That's one of the approaches we want to test, to see if a comic can make these issues more digestible for citizens and more appealing for policymakers. And then this opens a window that allows us to go and talk about all these different issues. In this paper, which is "Emerging Issues in Biotechnology", we talk about different things: not just, for example, CRISPR or genetic editing, but also neuronal probes for expanding new sensory capabilities, the use of synthetic biology for the bioremediation of rivers and lakes, or enhancing carbon sequestration. So these are issues that could be relevant for different things: for the economy, for agriculture, and for health.

So it really depends on whom you talk to. And then I will quickly think in my mind, like, OK, I will talk about this event that may be a catastrophic event in the future.
Toby: Yeah, so as a shout out to posterity, we're recording this right in the middle of the pandemic of 2020. I feel like maybe when it comes to giving science advice on these topics, perhaps politicians and policymakers might be a bit more receptive to these kinds of conversations now than they were a year ago.
Clarissa: Yes, I think that a year ago, talking about these types of risks was a bit problematic, because they're not on the priority agenda of many policymakers. But what COVID-19 has shown us is exactly why these types of studies and policies have to be in place in order to prevent what we are seeing now. It seems that we are not prepared. It seems that we are not thinking about how the economy, transport, and health have been or will be impacted, and how we can present solutions that are not "let's try it and see how it goes", but are backed by enough scientific evidence to tell us that this solution may be the most adequate for this type of risk, for example. And yes, COVID-19 has definitely been really catastrophic, let's say. But it has opened an opportunity for everyone working in these fields, and also for citizens, to understand how important it is to be thinking about the future. And not only 200 years from now, but maybe five or ten years from now, when we might have another pandemic. Or we have a volcanic eruption, and how is that going to affect the infrastructure we have now? And how do we have to think about the future of cities and agricultural food systems, et cetera?
Toby: Well, so having invited you to be optimistic in that way, I'm also a bit worried about the optimism. I mean, I can see that COVID-19 has helped with the elevator pitch. Sure, nobody now is going to say, oh, no, we don't need to worry about existential risk in the future. But will people still be saying that when this period of intense focus is behind us in a year or two or 10 or whatever it takes? I mean, how do you keep the conversation going? How do you persuade policymakers to keep one eye on the existential risk ball, as it were, even when we're all going back to worrying about more day-to-day things in society?
Clarissa: I think that as a society, we have to take a holistic approach. One way could be to ask ourselves: what are we learning from COVID-19, and what are we going to do about it? What we can do, for example, is push for our governments to create an office that is focused just on thinking about future events or global risks, or, if such offices already exist, push for more funding to be allocated to them. It could also be that we think about the curricula of higher education. For example, in universities, can we incorporate global risks within philosophy, economics, and biology, and see how this also shapes our future policymakers and advisors? And it's also important to educate citizens, because they are the ones electing the people who currently hold very key positions in government. And if they understand what global risks are and how we can manage them, I guess they will also push for these things with the people they elect. So I think we need to do a lot of work at different levels, and do it as a society altogether: not just scientists, academia, industry, or policymakers, but citizens as well.
Toby: All right. So let's say you've got a politician in front of you and you've done that initial work, you've persuaded them to listen, they accept this is something they should be taking seriously. What do you do now? What do you offer to help them navigate this challenge?
Clarissa: I think that a lot of evidence has already been created by different centres, like the ones you mentioned with the very funky names, for example the Centre for the Future of Intelligence. They have already produced evidence about, for example, why we need to apply ethics to artificial intelligence in times of crisis, or how we can use artificial intelligence to prevent the collapse of food systems during a crisis. So I guess the first step will be to put that evidence together and, of course, train scientists to deliver messages that are concise and to the point, and to have an engaging, bi-directional conversation. And what I would also suggest is to include citizens in this conversation.
Toby: All right. So this is quite an ambitious end goal. At least it seems that way to me. What's the process here? What steps do you need to go through to make it happen?
Clarissa: Well, my job right now is at the local level, let's say the internal level at CSER where I work, which is to create a report based on all the policy engagements that we have had, with recommendations on how to do successful policy engagement before, during, and after, whether with policymakers, with industry, or with citizens. After that, my plan is to create a scoring system that can help the researchers in our centre to think, from the beginning, about what the policy impact is going to look like for the specific research they are doing. Then at an international level, what I'm planning is to run co-creation workshops where I bring together policymakers, different stakeholders, industry, and citizens, and just put them together to talk about what we need to create a framework for a science of global risk. Is it that we need to educate citizens? Is it that we need to reform the United Nations? We don't know yet. If it's a political problem, maybe policymakers understand what the risk is, but it's political suicide to spend all their time on it because they are not going to be re-elected. So what we want is to really find the answers, and then, based on those, create some recommendations on how to build this framework. And in the second workshop, what I want to do with all our partners, which are, for example, the International Network for Government Science Advice, the International Science Council, the Biological Weapons Convention, and the Inter-American Institute for Global Change Research, is to say: OK, we have these nice recommendations, how do we push for this agenda from our fronts? And how do we make sure that we test all these policy recommendations? Maybe we cannot do it at the global level, but maybe we have ten countries that are interested in taking these recommendations and testing them. And once they are tested, we can really see whether they work or not and how we need to tweak them.
Toby: Yeah. Maybe you don't know the answer to this yet, but what's your instinct about the appropriate level to do this work? Is this-- are you targeting mostly national level governments, the international community? Is it more kind of regional? What do you think?
Clarissa: Well, for example, what COVID-19 has taught us is that everything and everyone is connected, especially in this globalised world. And I believe that what we need is a global effort. For example, maybe what we find out in this project is that we need to incorporate it within the Sendai Framework, which works on risk and has all these countries producing reports on how they are acting on its recommendations. Or maybe it's another situation. But I think it has to be a global effort. Then, at the moment we really want to test whether it works or not, it should be at the local level. That's why I mentioned that maybe we have one or two countries try it, and then we see what really happens and how we can make it happen in a global way by learning from these experiences.
Toby: OK. And when you say test it, I mean, how do you test this kind of stuff? Presumably, you're not going to engineer some catastrophe. You're not going to produce a deadly virus just to see if the plans you put in place are the right ones.
Clarissa: Yes, that's true. But that's why I mentioned that one of the strands of this project is foresight methods. With foresight, we can really play around with future scenarios and present different crises that could happen. Then together we examine them and think about the different paths we could take in order to have different outcomes, and we can evaluate those outcomes. And then we can create a protocol that shows us the priorities: if the food system is the one that is hit first, maybe we should follow this protocol; if it's the transport system, then maybe we could go and follow that one.
Toby: OK, I see. So the aim is to have a set of well-thought through hypotheticals. And then when a real crisis hits, maybe you have, I don't know, an instruction sheet and a pile of envelopes or something, and the sheet says, OK, is the financial system under threat? Then open envelope 17. Or is the food system under threat? Open envelope 9. And then inside the envelope is a little handbook which says, OK, we've thought about this. Here's what we reckon you need to do, A, B, and C.
Clarissa: Yes, but maybe we can even make it more interactive. So it's not a book but a platform, online, and you can just navigate through it by clicking and see all the recommendations that have been given in which year, based on which evidence or which events from the past, things like that. That's more or less what we envision: to have something like a doomsday manual where you can just go and find solutions, because you need solutions fast. That's what we are seeing now. You need to take decisions, and-- yeah, you need to have all this variety of choices at hand.
Toby: OK. Then that leads me on to a practical question. This is something that crosses my mind because of an observation that Peter Gluckman from INGSA made when I spoke to him a few weeks ago. He and his colleagues have been looking at how science advice has been used during the COVID-19 pandemic and how that has affected public health outcomes for each country. And one observation he made is that there are countries that, in theory at least, had some quite sophisticated risk evaluations and pandemic action plans and infrastructure and so on in place. So countries that had done at least some of the kind of homework in advance that you're talking about. And then there are other countries that had not done that. And his observation, at least from what he's seen so far, as I understand it, is that it doesn't seem to have actually made much difference in terms of the success of the response. And the reason for that is that when the pandemic actually hit, most of those plans and preparations simply weren't used. Countries either forgot they had them at all, or threw them out of the window and started engaging directly with what was happening right there and then. So that's an anecdotal observation, I think, rather than a solid conclusion; he's still collecting evidence, for sure. But nonetheless, does that problem, that risk of panic, worry you? Or to put it another way, what can you do to make sure that the tools you are developing, which a country might well sign up to with good intentions, actually get used when the proverbial hits the fan?
Clarissa: Well, I think there are different things I can observe. One could be that the whole government needs a reform, if it's true that there was-- I mean, it is true. We know that these protocols were already in place and they didn't use them. Maybe they didn't know they had them. So then we have a problem of organisation and knowledge management. And the other thing that comes to my mind is that there could be political reasons. It depends on who is in power, what they decide to do, and where their beliefs are. Do they embrace scientific evidence or not? We have seen how some countries have just withdrawn from the climate change pact, for example. Those are political decisions. They have nothing to do with what has to be done at this moment in time and what's most beneficial for citizens around the world. So those two observations are maybe the things we need to prioritise: to think about who we elect, and how good the knowledge management within our governments is.

Toby: That makes sense. Although I do kind of think we are always going to have to live with these kinds of issues in democracies.
Toby: I also wonder if it's something to do with finding ways to improve our foresight. I mean, I wonder if one of the issues with what happened with COVID-19 in the early days is that we simply cottoned on to what was happening a bit too late. By we, I mean at least some governments, some scientists. By the time we realised what was happening, the dominoes had already started falling. That could also explain why the carefully laid pandemic plans weren't so carefully used in the end. I mean, foresight has to be a big part of it, right?
Clarissa: Part of the foresight method is to think about weak signals: signals that look weak now, but later become very important and serious. And in this case, we can use the word catastrophic again. That's why foresight is such an important part of our work: to think about all these methodologies, use different examples, and get people working in key places to think about this and about the future. And again, what they can do from their fronts in order to prevent it, or to be better prepared to manage it later. I would also like to mention that foresight is not widespread in the scientific community. That's something that worries me at the moment, and maybe we should think about different mechanisms for how to do it better. And Science20, which advises the G20 summit that is happening in two months, is very focused on foresight and on policy recommendations for how to incorporate foresight into different topics like the circular economy, health, the digital revolution, and so on. We are going to see some of their work in the G20 book that they are producing soon. CSER is part of it, so that's why I know about this at the moment.
Toby: What's the current state of your project? Where are you up to? What's the next step? What gaps are there?
Clarissa: The workshops that I'm planning are probably going to take place in April next year. That will be the first one. And I'm looking for policymakers who are interested in these topics to invite to the workshops. So if someone wants to come, please just drop me an email and I can explain more about it.
Toby: Right, well, you heard it here first.
Clarissa: Yeah. But not only policymakers, also different experts who are interested in this question of what a science of global risk could be. So please feel free to contact me.

Toby: And how might they do that?

Clarissa: Well, they could go to LinkedIn: Clarissa Rios Rojas. I usually publish everything that is happening with the project and with our centre there. On Twitter, I'm a little bit more active, so I share more articles related to my expertise and to things that are close to global risks. And I also created an Instagram account, which is more for inspiring future generations of scientists interested in policy, science diplomacy, and government science advice. It's called Being a Scientist is Cool. I could not think of a better name. My target audience there is younger people, so you can also find me there.
Toby: Well, Clarissa, from our conversation today, I think you're a perfect example of why being a scientist is cool. I've very much enjoyed talking to you. I will follow your project with great interest. Perfect.
Clarissa: Thank you so much. An excellent podcast. And I'm also really looking forward to hearing the other experts and learning from them.