Lena Höglund-Isaksson, Behnam Zakeri and Zuelclady Araujo on modelling

Science for Policy podcast episode

About this episode

Broadcast on

25 March 2024


Show notes

How do scientific models inform policymakers? How can they keep countries honest in international climate negotiations? When is uncertainty not so much of a problem? And how much does it matter if policymakers don't instantly grasp the ins and outs of a model which takes six months for scientists to learn?

Join our Toby Wardman on a deep dive into what happens when scientific models meet international politics.


The transcript below was generated automatically and may contain inaccuracies.

Toby: Hello, welcome to the Science for Policy podcast. My name’s Toby, as always, and today I’m joined by Lena Höglund-Isaksson, Behnam Zakeri, and Zuelclady Araujo, who all work in various capacities as researchers at the International Institute for Applied Systems Analysis, or IIASA. Lena is a senior researcher in the pollution management group. She models the emission of greenhouse gases other than CO2, and also mitigation scenarios, and her work is used by the European Commission to assess the impact of EU climate policies and policy proposals. She also advises the UN, and her work on methane in particular has been used, for instance, to underpin the Global Methane Pledge put forward by the EU and the US at COP26 in 2021. Behnam Zakeri is a senior research scholar with IIASA’s Energy, Environment, and Climate Programme, and also an assistant professor at Vienna University of Economics and Business. He focuses on modelling energy systems, so he’s working on modelling the energy transition at both national and global level. He’s also worked with the UN and the Green Climate Fund to help countries to draft their own national energy and greenhouse gas emissions policies. And Zuelclady has worked across Central and South America, as well as in Indonesia and Spain. She’s now based in the Biodiversity and Natural Resources Programme at IIASA. She’s mainly focused on land use policies, and she has a background in biology, economics and public policy. So that’s enough of hearing my own voice. Lena, Behnam and Zu, welcome to the podcast.

Lena: Thank you very much.

Behnam: Thank you so much, Toby.

Toby: Now, as it’s been explained to me, the connection between the three of you, other than all being based at IIASA, I suppose, and therefore being focused, more or less, on sustainability research and climate change, is that you all work, to some extent, with scientific modeling. And modeling is a topic we’ve discussed once or twice before on this podcast, but I have to admit, it’s one where I still don’t feel I have a super strong grasp of the details. This is not a reflection on past guests; I realise it’s a me problem. I think it’s just, I don’t know, one step too far away from my own personal background, out of my comfort zone. So, just in case there are other listeners in the same boat as me, or if there aren’t, then just for my own benefit, I suppose, before we get into talking about the role of models in science advice and what policymakers can do with them and so on, I’d like to ask each of you to expand a little bit on the basic science. I’ve just read out some brief biographies, but could you tell me about your particular area of work, which models you interact with, and basically what they’re for?

Zuelclady: Well, I can start, I think. I work with the team that is working with GLOBIOM. GLOBIOM is a partial equilibrium model, mainly focused on land use and agriculture. And the focus of that model is to integrate spatial variables of land use change, agriculture and crops, and also the economic part. Also trade, because it is a worldwide model. So the interaction between the different countries, the greenhouse gas emissions, the production that they have. So I really like it in the end, because even if you are not, like, a pure scientist, the outcomes of the model, which could be prices, demand, amounts of crops, are easy to understand. But in the end, it's a model based on a lot of scientific assumptions. It's a robust and solid model that you can dig into a lot, and model different types of things, as specific as you want. But also, for me, I think it is the link between the policy and the models. So how we can use the different outcomes of the models, and how we can define which model we are going to use to show the climate change commitments that the countries are facing nowadays under the climate change convention.

Toby: Perfect. Well, good. So hold that thought because that’s what we need to get into, of course, the connection between the modeling and the policymaking process. Lena.

Lena: So I work with a slightly different model. It's called GAINS. And the GAINS model is a bottom-up model which models the different technological structures, you can say, underlying emissions to the air. So we focus very much on understanding how emissions to air are driven by different underlying structures, both in terms of technology and other geological structures, and above all what people can actually do to reduce emissions. So it's not an economic model; we rely on input from other models, like for example the GLOBIOM model, for our drivers. So we take drivers as given, for example energy use and energy production. These kinds of scenarios we take from other models as input, and also for the agricultural sector. So the focus is really on understanding how emissions happen, given these assumptions on the drivers, and also how emissions can be reduced in the future.

Toby: Thanks. And you mentioned it's a bottom-up model. What does that mean exactly?

Lena: Bottom-up means that we collect a lot of information, for example at the country level. We collect information about different technology structures and different other assumptions. For example, if we take the agricultural sector, we model the emissions from cows. Then we collect information about excretion rates, milk yield, all kinds of different temperatures, underlying assumptions for what actually causes, in this case, methane emissions. Because there is also a lot of difference depending on what kind of emissions you are estimating. For example, CO2 emissions from the energy sector are relatively easy, because there is quite a clear relationship between how much carbon you have in the fuel that goes into the combustion and the CO2 emissions that you get out. But for other emissions, like the non-CO2 greenhouse gases, so methane, nitrous oxide, and F-gases, as well as NOx and SO2, there are very many other processes that influence emissions. So there is not a direct link between the activity and the emissions, and then you need a lot more detail in understanding the processes driving these emissions. So these are the kinds of emissions that we focus on in the GAINS model, the ones that are more tricky and more uncertain, and for policy purposes we try to find consistent methodologies for estimating these emissions bottom-up, so that we get something that is comparable across countries.
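The activity-data-times-emission-factor logic Lena describes can be sketched in a few lines of code. This is a toy illustration only: the source names and numbers below are invented, not GAINS data, and a real bottom-up model builds each emission factor from far more detail.

```python
# Toy bottom-up emission estimate (illustrative only, not real GAINS data).
# Each source pairs activity data (e.g. number of dairy cows) with a
# source-specific emission factor (e.g. kg CH4 per cow per year).

def bottom_up_emissions(sources):
    """Sum activity * emission_factor over all sources (kg CH4 per year)."""
    return sum(s["activity"] * s["emission_factor"] for s in sources)

# Hypothetical country-level inputs (invented numbers):
sources = [
    {"name": "dairy cattle", "activity": 1_500_000, "emission_factor": 120.0},
    {"name": "landfill waste (kg)", "activity": 2_000_000, "emission_factor": 0.05},
]

total = bottom_up_emissions(sources)
print(f"Estimated methane emissions: {total:,.0f} kg CH4/year")
```

In a real bottom-up model, each emission factor would itself be derived from the kind of detail Lena mentions (excretion rates, milk yield, temperature), which is where most of the data collection effort goes.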

Toby: Great, thank you. And finally, Behnam.

Behnam: Yeah, great. I work on yet another kind of model: energy models, or energy system models. My background is energy economics and energy policy analysis. And I use these energy models to represent how energy is produced, transported or distributed, and used. So these types of models are usually used to represent, for example, the electricity sector, but not only that, including also how energy is supplied to industry, or how the demand for fuels in transportation can be met. And these types of energy system models are usually used at the national level, at the macro level, or at the regional level, like for example the EU, or even globally. And the insights from these models can be fed into policymaking, for example to design different scenarios for the energy transition, namely: what would be the implications of a certain target, whether it is a renewable energy target or an emission target, for the energy system? For example, do we need more electric vehicles, or do we need to invest in more wind and solar, or use another technology to meet that target? How much energy or fuel should be imported from neighboring regions? So these types of questions, and the implications of these transitions for the economy and the environment, are the kinds of topics that can be evaluated by energy models. I'm also a member of the MESSAGEix model development team, which is an open-source modeling framework that has been used by different institutions nationally and globally.
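As a very rough sketch of the kind of question Behnam describes (which technology mix meets demand under an emission target, at least cost), here is a brute-force toy. Real frameworks such as MESSAGEix solve far richer optimisation problems; every demand figure, cost, and emission factor below is invented.

```python
# Toy least-cost capacity mix under an emission cap (all numbers invented).
# Demand of 100 energy units must be met by a mix of wind and gas.

DEMAND = 100          # energy units to supply
EMISSION_CAP = 20.0   # allowed emissions

COST = {"wind": 5.0, "gas": 3.0}        # cost per unit generated
EMISSIONS = {"wind": 0.0, "gas": 0.5}   # emissions per unit generated

best = None
for gas in range(DEMAND + 1):           # brute-force search over the mix
    wind = DEMAND - gas
    emissions = gas * EMISSIONS["gas"] + wind * EMISSIONS["wind"]
    if emissions > EMISSION_CAP:        # infeasible under the cap
        continue
    cost = gas * COST["gas"] + wind * COST["wind"]
    if best is None or cost < best["cost"]:
        best = {"wind": wind, "gas": gas, "cost": cost, "emissions": emissions}

print(best)
```

The interesting policy output is usually not the single "optimal" number but how the mix shifts as you vary the cap or the costs, which is the scenario exploration Behnam mentions.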

Toby: And when you say a modeling framework, at the risk of spinning off into a conversation I'm never going to be able to follow, what do you mean by that? This is broader, presumably, than just a single model.

Behnam: Good question. Modeling framework here means that it's not only a mathematical model, but it comes with different tools, databases, and database management system. So it equips the modeler with a toolkit that can be used for both representing that phenomenon, but also handling the day-to-day data management work and database related issues. So in that sense, it is a framework. It's not only a model. It comes with all the tools that you need to do your modeling work.

Toby: Okay, so that's great. But then I realized that I should have asked a much more fundamental question before asking you about model frameworks, which is what really is a model? Like, conceptually, what's the point of it? How is a model different from a scientific theory, say?

Behnam: In my view, a model is a simplified representation of a physical phenomenon. You know, where it is not possible to test or to make a laboratory representation of that phenomenon, or it is costly or risky or dangerous, you can use models. And here by model, I mean mostly computer software-based models that can be used to understand different interactions, different relationships between the components of that physical phenomenon or that system. In short, these models help us to understand a complex problem by representing it through simplified relationships and subsystems.

Lena: Yeah, I agree with Behnam. I mean, reality is very complex, and it's not possible in models to represent all the kinds of different external shocks that we know happen in reality. What we are doing with a model is to represent, or try to simulate, our reality under certain conditions. And I think that, for me, that's the most important point of a model: that you represent certain circumstances and you make certain assumptions in that model. So it could be an energy model, but all the different energy models represent different types of assumptions, and that happens also with the land use models. It depends on two things: the data that you put in, but also the assumptions that the models are based on. And from a policy point of view, I think it's very important to recognize that we are not able to predict the future. What we are doing is projections given certain circumstances. These projections are at the same time extremely useful for policymakers, because they allow them to have some kind of map that they can navigate the future with. And that map is then consistent in terms of assumptions. In particular, when it comes to setting country targets, for example for emission reductions in the future, it's important that the countries trust that they are treated reasonably fairly, and that the data underlying the targets they have been given are consistent, not depending on some kind of political influence from different countries, for example. And that's what the scientific models can do: present something that is independently and objectively developed, without influence from policy or from other kinds of vested interests.
But of course, it also requires that policymakers, national experts, and scientists agree that this is the role of the models: to come in with scientifically sound results that sometimes may contrast with what each country says itself, which doesn't always add up. For example, if you're modeling the energy systems, you would typically find that all countries have very rosy projections of how much electricity they are going to produce. They are all going to be net exporters of electricity, for example, and that will not work. So then a model has to come in and make something that is more realistic, and provide this to policymakers so that they get a more realistic projection of what is actually possible in the future.

Zuelclady: And sometimes we tend to expect that the model gives us the solution, the absolute solution or the absolute answer to a question or a projection. And I think it's key to understand that it's just a representation of the reality.

Toby: Yes, which makes sense. So, Behnam, you mentioned earlier about using your model to make policy decisions. Like, should we have more electric vehicles? Which measures should we take? So the idea, if I understand it, is that you can input hypothetical changes and then the model will tell you the outcomes of those changes. And that helps you decide which hypothetical change you want. And that then informs your policy decision to try and bring about that change. So then, the work the model does is based on certain assumptions about how the world works. If I understand right, the assumptions are essentially what the model is. They define how the inputs, those hypotheticals, are processed and turned into outputs. So then, the assumptions themselves aren't tested by the model, they're given, I guess, hence the name assumptions.

Lena: Depends, actually.

Toby: OK, go on.

Lena: I mean, when it comes to the non-CO2 greenhouse gases, for example, it is true for the kind of modelling that we do, bottom-up modelling of the non-CO2 greenhouse gases, so methane, nitrous oxide and F-gases, that I've been working on. But we also collaborate with other types of models. For example, there is a type of model called inverse models, which link bottom-up model results, which are very detailed at the sector level, with what is actually measured in the atmosphere in terms of the concentrations of methane, nitrous oxide and so on. And at that level, of course, you cannot say where these molecules that you are finding in the atmosphere are actually coming from. But you can say on a regional scale, say Europe, maybe also Northern or Southern Europe, and so on, for methane at least. So the bottom-up models can then feed into another type of model, the inverse models. And they then use the bottom-up models and also link them to the top-down. And they can then see how well these actually compare. They usually take bottom-up results from a few different models, and then they can see that, okay, this model produces reasonable results for this region, for other regions maybe not so good. And in the end, hopefully, we can get to some kind of agreement between the different models.

Toby: Yeah, so you're testing your model against past realities. You have a model and you say, okay, this model kind of encodes our theory about how the various inputs result in more or fewer non-CO2 emissions or whatever. But you can test that. You can take a past year. You know what the emissions were that year, so you know the output. And you run that through the model, as it were, backwards and see if your model generates the inputs from that year. And if not, then you know something's wrong with your assumptions somewhere.

Lena: Yeah, because if we are not able to represent what has happened in the past, we have to learn from that to understand what our models can actually do for the future. So this is one way to validate, if you want, these kinds of bottom-up models that we are working on.
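The validation step Toby and Lena just described, running the model over a historical period and comparing with what was observed, can be sketched as a simple hindcast check. The observations, model values, and tolerance below are all invented for illustration.

```python
# Hindcast check: compare modelled historical emissions with observations.
# All values are invented for illustration (units: Mt CH4 per year).

observed = {2015: 10.2, 2016: 10.0, 2017: 9.7, 2018: 9.5}
modelled = {2015: 10.0, 2016: 10.1, 2017: 9.9, 2018: 9.4}

def mean_relative_error(obs, mod):
    """Average of |model - observation| / observation across years."""
    errors = [abs(mod[year] - obs[year]) / obs[year] for year in obs]
    return sum(errors) / len(errors)

mre = mean_relative_error(observed, modelled)

# The 5% tolerance is an arbitrary choice for this sketch.
if mre > 0.05:
    print(f"Mismatch of {mre:.1%}: some assumption needs revisiting")
else:
    print(f"Hindcast within tolerance: {mre:.1%}")
```

In practice the diagnosis is the useful part: a systematic mismatch in one sector or region points at which assumption to revisit, as Lena notes.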

Behnam: If I may come back to your previous question related to assumptions, I would say one of the first ideas behind developing these models was to test different assumptions. So I wouldn't say that assumptions are hard-coded in these models. Indeed, these models are free, at least the model that we are working with is free: you can build it with your own assumptions. And I guess that is the beauty of it, because if you facilitate, for example, a forum or a meeting or a seminar, and bring different stakeholders together and listen to them, and bring their ideas, their predictions, their assumptions about a certain process, we can test that together. And these models can be used to show what the output will be based on different assumptions. For example, one thing that is very debated in policy forums or scientific forums is how fast a technology can phase in, or how fast another technology can phase out, how we can tackle technology lock-in. Because in some models this is an assumption that you put in: okay, this is the maximum growth rate, for example, for batteries or for a certain technology to penetrate the system. And these are, of course, informed by different disciplines and many different empirical studies. But at the end, it might be a number or an assumption coming into our model. And this is not the choice of the modeler; the modeler ideally should read different studies and make a judgment about what would be a good assumption in that regard. But one thing that I guess is very important is, as modelers, to be open to different possibilities. Sometimes in our field we are even too conservative to call what we generate a projection. We just say that it's insight. We explore different ranges of possibilities, different scenarios, and the idea is to provide policymakers with insights on the implications of a certain pathway, rather than communicating numbers or percentages that could be taken into the policy discourse.

Toby: Yeah, so Lena, you've mentioned a couple of times this idea of connections between models, linking one model to another. You said, for instance, that the model you work with doesn't have any economic stuff in it. If you want to take economic factors into account as an input into your model, you need another model that can generate those factors and feed them in. Is that right?

Lena: Well, we need drivers, for example for the energy system or for the agricultural system. They are usually generated by other models that model the different energy markets and so on; they are usually called partial equilibrium models. We take that as given and we let other models develop those.

Toby: Yeah, which makes perfect sense. Trying to load everything into one giant supermodel, as it were, would be silly. That's not what the word supermodel means, but anyway. But then my question is about the multiplication of uncertainty, because each model has its own degree of uncertainty, I guess, from the data you feed it and also the reliability of the assumptions, which is fine when you've got one model. But if you start chaining together multiple models, each with their own levels of uncertainty, using the output of one as the input of the next, doesn't your uncertainty grow quite quickly? And I can imagine perhaps at some point it crosses a threshold into being really not very useful because the uncertainty is so high. So do you have ways to manage that, or at least to quantify it? Or do you just rely on each link in the chain being as robust as it can be?

Zuelclady: It's going to be a little bit controversial, but I think it depends what you want and what you are going to do with the model. Because having a high uncertainty is not necessarily a bad thing if you report it and you are aware of it. Okay, let me go a little bit back. We have been talking about projections. And in my case, mainly, when I'm talking about projections, I'm thinking about these new, well, not new for all the countries, but these projection reportings from the countries to the Climate Change Convention. They need to submit transparency reports, which is like the new requirement to share the progress on their mitigation actions. So they have projections about how they are going to be in the future and everything. And one of the concerns from the countries when they start using the models is that they need to report the uncertainty that they have. And some of them don't want to report it, because they have high uncertainty. And at the end, for the Convention, or when you review it, or you are part of the review team of the Biennial Update Reports or the National Communications, having a high uncertainty is not necessarily a bad thing, if you know that it exists. Because if you know that it exists, you can start doing things to improve it. And I think that here is one of the points: when we were talking before about the assumptions of the model, we also need to talk about the data. Because at the end, you can have the perfect and best model for something, which could work under the European Commission because we have a lot of data, historical data, mainly, for example, for the energy sector. In general terms, for the energy sector we have a lot of data and historical data. But that's not the case for the land use sector, and it's not the case for the African countries or the Latin American countries.
So you cannot use the same model, and you're not going to have the same uncertainty or the same result, because you don't have the data. So what we do most of the time with these global models, or these models that we produce for certain regions, is first to calibrate the data to the conditions of the other country. Not talking about energy now, but about land use, for example: the type of ecosystem they have in that region, or which are the main reasons they deforest or have land use changes. So you can start changing these things, the type of weather, the type of conditions, to calibrate to that region, even though you are still going to have a certain uncertainty. But at the end, it's about how to use the model under certain conditions. And the same model, depending on the data, is going to have more or less uncertainty. Take, for example, the greenhouse gas inventories, which are not a model per se, but could be. You have the activity data: land use change, the number of cows or animals that you have, or the amount of electricity that you are using, or the amount of waste that you are producing. This is the activity data. And this data, for sure, you cannot always measure. As Lena was saying, you cannot measure everything. Maybe you have an approximate amount of waste that you are disposing of in a certain city, but in most countries you cannot count exactly the number of kilograms of waste. So you have a certain uncertainty in the data. Or, even if you are using a map, depending on the scale of the map, you are going to have a certain uncertainty. And then if you use an emission factor, a national one, national data, taking the top-down approach where you start with the national level and go down to the municipal levels, you are going to have another uncertainty there. And you need to start combining these uncertainties to get the final uncertainty. You do it in a different way, but it's the same in a model.
Depending on the different steps and the different data and changes that you are doing in the model, you are integrating that uncertainty. But I think that, for me, it's not bad to have a high uncertainty; the point is that you are aware of it. If you are aware of it, even if it's low or high, it's perfect. If you don't know it, I think that's the real problem.
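The step Zuelclady describes, combining the uncertainty of activity data with the uncertainty of an emission factor into a final figure, is often done by adding relative uncertainties in quadrature when independent quantities are multiplied, in the spirit of the simple error-propagation approach in the IPCC inventory guidelines. The percentages below are invented for illustration.

```python
import math

def combined_relative_uncertainty(*relative_uncertainties):
    """Relative uncertainty of a product of independent quantities:
    the square root of the sum of squared relative uncertainties."""
    return math.sqrt(sum(u ** 2 for u in relative_uncertainties))

# Invented example: waste amount known to within +/-10%,
# emission factor known to within +/-30%.
u_activity = 0.10
u_emission_factor = 0.30

u_total = combined_relative_uncertainty(u_activity, u_emission_factor)
print(f"Combined relative uncertainty: +/-{u_total:.1%}")
```

The point of the exchange survives the arithmetic: reporting a large combined uncertainty honestly is more useful than hiding it, because a known uncertainty can be improved.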

Behnam: Yeah, I completely agree. Uncertainty happens in different parts of the modeling process. You know, some uncertainty is related to the data, the input data. Part of it is related to the model structure, because even different energy models have different views of the energy system. One might take a market-based view; another the view of a central planner. They have different granularity in terms of spatial, temporal, and technological resolution. So that creates quite a wide range of outcomes from different models. But there are indeed good practices in the modeling community, as mentioned, to understand the sources of these uncertainties. And when we do model linkage, ideally we shouldn't just blindly take some output from the other model and feed it into our model. It should happen in dialogue: what socioeconomic assumptions did you have for this outcome, what were, for example, the different criteria you used to represent this output. And then in some cases it's possible to arrive at consistent socioeconomic assumptions, or other important assumptions, for the two models that are being linked. But let me also open another discussion here, which is related to how to deal with uncertainties. Because we know that uncertainty exists, and it's one of the challenges of communicating model results to policymakers: in some cases it's confusing, and in some cases it's not easy to understand what policy action we should take based on the model results. There are some remedies, some best practices. One that I personally find interesting is model intercomparisons. For example, for making policy in a certain jurisdiction or geography, namely a nation or the EU, there are, let's say, five or ten models. They bring their results and they compare their assumptions. They visualize their outputs.
And there are some techniques to understand where the uncertainty comes from, where the source of the uncertainty is. Because even if you feed in the same input data, the outcome might be different. There are very good practices here; I don't want to go too much into technical details. But in one of the projects, which was called the European Climate and Energy Modelling Forum, more than ten different institutions with more than ten models brought their results. And we could see that most of the sources of uncertainty could be understood. Is it because of data? Is it because of granularity of representation? Is it because of the model structure? And so on.

Lena: I think the models have a very important role to play in mitigating, and maybe exposing, this uncertainty. We know, for example, that in the national reporting of emissions to the UNFCCC, in these emission inventories, there is a lot of uncertainty. I mean, countries all follow the guidelines of the IPCC, but the guidelines allow for a lot of flexibility in assumptions as well as in what methodology you want to use. And here, I think, the models that we have help, where we model all countries bottom-up, which means that we build these huge databases where we store a lot of information at the country level, assumptions on all sorts of factors that influence emissions. And then when we model this bottom-up for all the countries, using similar assumptions for similar circumstances, we reveal the kinds of inconsistencies that we see in the national inventories. And the national inventories are usually used as the basis in policy agreements. So it's really important that we have ways to expose these uncertainties and these inconsistencies in the national inventories. And I think here the models have a very important role to act, if there is an interest in using them for this. Because of course there are also a lot of political interests in... I would say, when you work with these inventories and you contrast them with the more consistent emissions that we have from the models, you see certain patterns. For example, often when 1990 is being used as the base year, you can see that some countries boost emissions, or the emission factors used for 1990 are suddenly much higher than those used later. To some extent, it can be because they have actually done something in terms of mitigation; maybe they have put a lot of mitigation actions in place. Then we have to try to represent that in a consistent way in the model.
But also, when we do that, we see some suspicions that there might be some political interest going into these inventories.

Toby: Yeah, so the idea is like inflate your earlier emissions so it looks now like you've reduced them more than you actually have. That's what you're saying.

Lena: No, I mean, I'm not... Yeah, I mean, you see these kinds of patterns. When you gather all the data that is supposed to produce these emissions in the inventories, then you see such patterns coming up now and then.

Toby: And are you suggesting that the modelling can be useful to kind of spot those situations, to counteract them?

Lena: Yeah, I mean, I think it's very important here to separate the political level from the expert level. At the expert level, if we have model results and we have the national inventory results, and we put them beside each other for a comparison, then we can usually resolve these differences at the expert level. If we can talk directly to the experts in the member states, we can ask: what were the assumptions that you were using? Why are you assuming 1% leakage for a certain technology when other countries are assuming 5 or 10 or 20% leakage? And then we say, okay, for the same technology, all other circumstances the same, you should have similar leakage rates. And when you then go to the national experts to clarify where these assumptions really come from, you can usually sort it out: okay, you know, this was actually not based on measurements, this was more an assumption that we made. But in some cases countries really have made very solid investigations into leakage rates. And that's great, because then it also informs our modeling, and we learn and can incorporate this into the modeling. So what we try to do with the modeling is to present something that is as solid as possible, where we consolidate the information from all the different countries in a consistent way. And usually, at the expert level, we can come to an agreement. But then, of course, at the political level it's a different matter, because at the political level there might be agreements that we have to comply with what countries have reported. And then there comes a conflict between what is scientifically sound and what is politically acceptable. And that's where the scientists have to represent the scientifically sound side. And you have to come to some kind of compromise, I think, between these two. Well, I think in the end it has to be the policymakers who take the decision: where should we be between these two?
Because in the end, if you set legally binding targets for the future, it's the policymakers who will have to defend them, maybe even if countries go to court and start challenging the targets they were given. I mean, countries that don't meet the targets will have to pay fines. Then it has to be possible to show that, as the basis for it, the country was fairly treated.

Zuelclady: For example, a good example about it is the long-term strategies. The long-term strategies are these commitments defined by the country in general terms to 2050. So some countries define those that are going to be net zero. That's a commitment, but they don't have pathways. And if you check the documents, they don't have any number behind that. And if you see the historical data, historical data show that the missions are going up, not going down. So, and yes, the LTS at the end, it's a political commitment. Something interesting related to what Lana was saying is that some of these countries also include the LTS target in their climate change law. So the objective to be net zero is included in their climate change law. But they don't have pathways. And what happened when they go to the technicals team in the country and it's like, hey, we already committed in the last COP that we're going to be carbon neutral or net zero. So they start building these pathways. And I think that here is, for example, where the models play a key role, because you have models from the different sectors. So you're gonna start choosing different pathways in the different sector to see how you are going to arrive to that target. And there are super interesting countries, for example, Colombia, where the land use pathway depends completely on how they are doing in the energy sector. So the land use sector is going to compensate it. So it depends how money they receive from international funding to achieve their energy targets, how much the land use sector is going to compensate. That is a way to present a pathway. If you saw the numbers, it could seem that you have a lot of uncertainty because it's 10 times bigger the minimum value to the maximum value. But underlines, you can read that at the end, that's linked to the energy pathways. 
And in the energy pathways, the pathways are defined by the amount of money they are going to receive, or the investment they are expecting to have. So how the political commitments communicate with the technical teams is, I think, where in the end we start defining how we are actually going to do these things. And that happens at different levels. You have a commitment, but how are you going to implement it? In that implementation process, the technical teams and the models can offer options, different mappings, different routes for how to do it. For example: we want to do it, but we need more money. Okay, so this pathway will work if we receive more investment. This pathway will work if the private sector decides to change its production processes. This pathway will work if we reduce the consumption of meat. So you have different pathways and different options, based on different models, that you as a decision maker, as a politician, can take and say, okay, this is how we are going to link it. And I think that's the key point of the models: we can support decisions and show what is going to happen under certain circumstances. In the end, ideally the decision would be based on science and data; in most cases it's whatever they want to choose. But at least they have the numbers, they have the pathways and they have the information to make the decision.

Toby: Good. I'd like to ask you a little bit about the kinds of conversations you have with policymakers about modelling. In a past episode of this podcast, I spoke with Tracey Brown, who runs a British organisation called Sense About Science, which is concerned with, well, many things, but among other things with the, as it were, naive use of modelling by politicians and policymakers. One point she made, and I'm paraphrasing, these aren't her exact words, is that non-experts often don't understand enough about how models are built, what goes into them, what assumptions they embody and so on, to be able to think critically about them and to know what questions to ask, and therefore presumably to know when they are encountering a good, useful model and when it's less useful. And so she argues they're sometimes bamboozled into taking everything at face value, or at least there's a risk of that. So two questions, really. The first is whether you have experience of being asked critical questions by policymakers about things like how does this thing work, how should I be interpreting it, and so on. And the second is: what do you think? Is it necessary for the end user, the politician or whoever, to understand the model, to understand the concepts? Do they need to?

Lena: I think it's very important to be able to understand, and to be as transparent as possible about what is actually going into the assumptions. And I have to say that the model we are working with, the GAINS model, is actually very simple. It's not that hard to understand the structure of the model. It's not an economic equilibrium model; I think those are much more difficult, because then you have to explain under what circumstances markets clear, and you have elasticities and so on. In that sense the GAINS model is much simpler, so we can very quickly show what we have assumed, although, as I said, we take the economic part of the drivers as given. But with that said, I think it's also very important that the policymakers make an effort to understand the models and what they should be using them for. In the past we have had very good experience, both for the non-CO2 gases and for the air pollutant emissions, with very knowledgeable people on the policy side, for example in the EU, who understood very well what we are doing and the purpose of it, and also the role that the scientists have here, for example in bringing in consistency and bringing uncertainties down to some kind of manageable level. And it's really important that they understand we are on the same page, because otherwise, if policymakers want to put more trust in the national inventories, then it becomes a difficulty. What we do is produce something that is scientifically sound and consistent, also for the historical years, in order to drive that into the future and to have mitigation potentials that are consistent across countries, in terms of what countries can do, how much it will cost and so on. And you lose that consistency if you don't understand the importance of having consistency in the historical years as well.

Behnam: Yeah, I see your point, but I have a slightly different opinion, or experience maybe, because of the different type of modelling I have been involved in. I believe it is on us modellers to explain how these models work, to put them into the context of policymaking, and to bring out the most relevant insights for policymakers. Let me share one experience with you. I was working on a project with one of the international organisations, and they sent one of their policy officers to our institute for one week. Of course it's really nice, you have in-person contact, but when she asked me, okay, I want to learn your model in this one week, how should we start? You know, I was shocked. I explained to her that even though this model is open source and everything is open, it takes between six months and a year to learn how it works. And that brings me to the point that I guess we have sometimes been communicating our results or outcomes in a very complex way, without putting them into context, without using effective visuals or ways of communication. There are many things that we as modellers or scientists can learn when it comes to communicating results to policymakers. One thing that I personally found very useful is to have the stakeholders and policymakers on board early in the project, when you are developing scenarios and putting in assumptions, so they get into the process relatively early, rather than being overwhelmed at the end with very complicated, interlinked results full of feedback loops. But again, as you mentioned, it might depend very much on the context and on what type of policy is being made. Still, I find that sometimes it's on our shoulders to ease the process.

Lena: I mean, I fully agree that it's the scientists who have to explain it. And any differences we have usually have to be worked out at an expert level, so you have to have meetings with stakeholders and with member state experts in order to understand where our results differ, and why. But there is one issue, and that's time, because very often these policy processes are extremely time-constrained. And that's what I often find a bit frustrating, because I would really like to explain all the differences and why they arise, because I think we have a solid case. The problem is there's usually not enough time for that kind of process to run. And that's really very unfortunate, because in order to build trust in using these kinds of models in policymaking, you need that time to explain.

Toby: So obviously we hear this all over the place, everywhere. Whenever you discuss what it's like working at the science-policy interface, that has to be among the top three, if not the top one, observations people make: that there isn't enough time. And Behnam, you said you need six months, but sometimes you only get a week. That's all very well, but a week is sometimes a lot. Sometimes you need a week and you only get five minutes: they need to decide today or tomorrow what their position is going to be in negotiations, or what position they're going to take in parliament, or whatever. And as you were suggesting, that puts a lot of responsibility back onto the scientist, because it's all very well saying you would like to explain and have a dialogue and help the one who needs to know and find a common understanding and so on, but you simply don't have that option. You simply have to do all that pre-processing yourself and present them with something they can use right away. And that's a lot of pressure on the scientist.

Zuelclady: Well, at least for me there are two points. The first is that we need to start communicating with them from the moment of deciding which model to use, so from the very beginning. It's going to be a full process. That's one thing. And the other thing is to manage expectations on both sides. Okay, I'm an impostor: I don't consider myself a modeller per se. So for me, most of the time, a model is a black box. Yes, it's a black box, even working with the GLOBIOM team, because they have been working with it for years, every day, or at least 80% of their time. Something that is easy for them, for me it's like, no, there is no way, it's not easy for me. And if it's not easy for me, who has been working with the team, what happens with a technician or a policymaker who, even with a PhD in the same topic, is not involved in this? A model requires a lot of interaction and understanding. So we need to be super flexible with them, because they are not working with this on a daily basis, and something that is easy for us is sometimes something they just don't have time for. And it's not that they don't want to; they have a lot of priorities, and working with a model is not the only task of the day. So yeah, I think in the end it's about working together over the long term. And that's one of the most challenging things, because most of the time we work project by project.

Toby: With all that said, these things are obviously useful and practical and they have been used in policy. I wonder if you have a pet favorite example of when a model has been used to inform policy in a really successful way.

Lena: I think the obvious example is the EU climate policies, where a consortium of models has actually been used, and GAINS and GLOBIOM are part of this consortium. As we know, the EU is the only region where greenhouse gas emissions have significantly dropped over the last decade, and the EU is also the only region that has presented a consistent, consolidated plan for how to actually get to net zero in 2050. Whether it will be fully implemented or not is a political issue, but at least there is a plan that is consistent and possible; it shows a possible way forward. Even then, it's up to the political level to take it to realisation.

Toby: Okay, and that's based on modeling.

Lena: It's a consortium of five different models that has been used for the last 15 years for EU climate policy, that is, for the impact assessments underlying the EU climate policies.

Toby: Yeah. Okay, that's good to hear. It makes me want to ask why. What's the magic ingredient that's made this work so well in the EU as opposed to other areas?

Lena: I think it's very important to understand that this is a whole universe of different actors. You have the policymakers, you have the scientists, but you also have the experts in the countries, who are often sector-based and have really good expertise in certain sectors. And all of these different communities need to communicate, learn from each other and understand what the current picture is, where we want to go and what is possible. When all of these work together and there is trust between these different communities, I think that's when you actually get the possibility to move forward.

Toby: Well, much as I enjoyed this conversation, it's probably time to draw it to a close. I want to say thank you so much to all of you for being willing to go down these rabbit holes, and in particular for bearing with me with my very basic questions at the start. I think there has been in the end a real wealth of insights in this conversation. So thank you very much, Zuelclady Araujo, Behnam Zakeri and Lena Höglund-Isaksson.

Zuelclady: Thank you.

Lena: Thank you very much.

Behnam: Thanks a lot, Toby.