Bart Koelmans on communicating risk and uncertainty to policymakers

Science for Policy podcast episode

About this episode

Broadcast on 21 September 2020

Show notes

Do policymakers and scientists have different understandings of "risk"? How can scientific uncertainty be pinned down and quantified? When experts disagree about the evidence, is there anything useful that the policymaker can take away from that disagreement?

Bart Koelmans discusses these questions with Toby Wardman of SAPEA. We also discuss the strength of the evidence for harm from microplastics, the limits of the natural sciences, and how an English speaker should pronounce 'Wageningen'.


The transcript below was generated automatically and may contain inaccuracies.

Toby: Hello, welcome to the podcast. It's Toby here. Now, I have some good news and some bad news for you. The good news is that my guest whose interview you're about to hear is extremely interesting. The bad news is that we had a technical hitch during recording, so as a result, the sound quality is not quite as good as you might hope. Basically, we had to use a backup recording device because our main hardware let us down. I also think you can hear a squeaky chair intruding into the recording once or twice -- either that or my guest lives in a medieval castle with some very creaky oak doors. Anyway, it's still perfectly listenable and I'm confident you will quickly forget about the slightly subpar audio once you hear what Bart Koelmans has to say, but I do just want to put it out there. We're a new podcast and we are working hard to raise our production values, but sometimes it doesn't quite go according to plan. So please forgive the audio wobble and enjoy the episode.

Toby: Hello, welcome to the Science for Policy podcast. My name is Toby and I'm very happy to welcome Professor Bart Koelmans to talk to me today. Bart is the head of the aquatic ecology and water quality department at Wageningen University in the Netherlands. He's an expert on plastic pollution, especially microplastics and nanoplastics, and he's the editor-in-chief of a journal with the very creative title Microplastics and Nanoplastics. And most interestingly for today's conversation, he is frequently called on to give expert advice on microplastics pollution to policymakers in prominent international bodies, including the UN, the EU, and a few more besides. Bart, hi, welcome. Thanks so much for agreeing to have this conversation.

Bart: Good morning, Toby. Thank you for the invitation. Looking forward to talking with you about science and policy.

Toby: So first, and this is important to me personally. How was my pronunciation of Wageningen?

Bart: It's, it's, it's good. Yes. So, talking with English people, I would mostly switch to 'Wageningen' myself.

Toby: Oh, really? Okay.

Bart: But yeah, you already approached it much better. It's, it's Wageningen. So yes, very fairly good.

Toby: Yeah, okay. That's very generous of you. So I'm grateful to you not only for joining me but also for agreeing to speak my native language and not yours. I think that will probably work better all round.

Bart: I can imagine.

Toby: So full disclosure for the audience, Bart, you and I have worked together a bit in the world of evidence for policy, and I'm very interested, obviously, in your views on that world and your role in it. But before we get into that, I wanted to just ask you, by way of introduction, if you could tell us what exactly you do in your day job? Because it says here that your research focuses on, quote, "the integration of environmental chemistry, bioecology, stress ecology and quantitative systems analysis by combining the strengths of laboratory experimentation, field study and model development".

Bart: Right.

Toby: So what do you actually do?

Bart: Yeah, I understand. Yeah, on a website, you always write it down in a very broad sense. It's more like the domain. But on a day-to-day basis, of course, we focus on specific issues. So very generally, I'm a natural scientist working on environmental quality. But my research now is focused very much on plastic pollution. So that means it's very much about improving our understanding of transport and fate processes, effects, and risk, and then especially ecological risks, but also human health implications. Of course, also teaching, working with other researchers, PhD students. As for the science, it is, yeah, it is mainly plastic research, and I think a key feature of what we do is trying to understand systems. So we see a lake as a system, or a river, or an environmental compartment, and there are always many processes going on in a system. So it's the whole that we would like to understand.

Toby: So in what I read, the part about model development presumably refers to that.

Bart: Right. So it's always about trying to understand the system and capture then the behavior of a system in a model and then use the modeling to, yeah, to make predictions. But also, these model simulations can inform experiments, so we can design experiments better if we use model simulations.

Toby: Right. Yeah, that makes sense, and it's a pretty high-profile field. I mean, lots of people, scientists and non-scientists, are interested in microplastic pollution.

Bart: That's true, yes.

Toby: Does that influence your work at all? I mean, what's it like working in a field that everyone's got their eye on?

Bart: I think it's, yeah, my university has a motto, which is 'Science for impact'. Now, this fits very well with that motto. It's, of course, very valuable to be able to do something which matters, that there are societal questions that you can try to answer, that you can contribute to that. So there is, yeah, there's some beauty in doing that, but it's also very motivating, of course, just solving a puzzle. But this topic has both sides in it. Plastic pollution, just from a scientific point of view, is very interesting. It's a very complex puzzle for scientists. But at the same time, it's so relevant. So many people have questions about the problem, how we can solve it. What is the best way to tackle the issue?

Toby: And that includes policymakers, of course.

Bart: Right, that's true, yes.

Toby: Which brings us neatly onto talk about your work in that area. So not only are you an eminent scientist, but you're also, I would say, a bit of a celebrity in the international science advice world when it comes to microplastics. So I know you've worked with the UN, the World Health Organization, and SAPEA and the European Commission. What have I missed?

Bart: I think SAPEA, the World Health Organization, and GESAMP are the most well-known ones. So GESAMP is a group of scientific experts that provides advice to the United Nations system on scientific aspects of marine environmental protection. But I've also been involved in advising the Dutch Ministry of Environment, for instance. And I'm now also involved in advising the state of California.

Toby: Really?

Bart: They're implementing regulations for water quality with respect to microplastics. And they need an actual assessment of the risks. So I think thus far, scientists and many people have been talking about how to assess risk, but from a theoretical perspective. So as far as I know, this is the first risk assessment done or planned by a regulatory body. Yeah, to really set standards.

Toby: Interesting, so finally putting into practice the theory that you've been developing.

Bart: Yeah, exactly. This is exactly where some of the scientific concepts that scientists worldwide, but also our group, have been developing can actually be used in a risk assessment. But it's not very straightforward. Doing this actually also means very much evaluating whether these concepts make sense. They have never been tried out in practice. They have never been evaluated or maybe even criticised by policymakers. So sometimes the scientists would say 'we need very detailed measurements or a very laborious way to assess certain aspects', which in practice is not feasible if it's for a standard or routine monitoring. So there's always a balance between wanting to know all the detail and the day-to-day feasibility, and also cost. Resources play a role there.

Toby: Right, so this is where the rubber hits the road. Where you find out if the framework you've been building for policymakers is actually any use to them.

Bart: Right.

Toby: So this is one of the areas I really want to ask you about. It's the question of communicating risk among scientists, policymakers, and others. So when, for instance, a policymaker asks you, "Okay, what are the risks posed by microplastics pollution?" What do you say? Is there a simple reply?

Bart: Yeah, I get this question a lot. So yeah, I must say I have some sort of a standard reply, and it's an answer that has several stages. So the first thing I would say is that it depends. I would explain that risk is a rather ambiguous notion. It means different things in the minds of different people, and then I would explain that what the public understands by risk is very different from what many scientists understand it to be.

Toby: In what way? That's interesting.

Bart: Right. So I think that in society, it's more like anything that interacts with or maybe harms something that people value is sometimes perceived as a risk.

Toby: Okay

Bart: So it's, for instance, plastic does not look nice on a beach, or if people see plastic floating in the ocean, they feel that it's not right, and they value the ocean. They have a passion for nature very often, so these two things do not fit together, and then this notion of risk and adverse effect very easily pops up in their minds. Whereas in the natural sciences, and also in formal procedures of risk assessment, it's a far more technical approach. There we would say that it is the dose that makes the poison. So any chemical or plastic is not toxic or safe inherently, not automatically. There's always a certain dose above which the chemical or plastic could cause adverse effects, and also, if you are below that dose, then basically it will still be safe. So that's very different. This plastic floating in the ocean formally does not necessarily cause a risk to the organisms if the concentrations just are low.

Toby: So who has to budge in a situation like that? Because obviously then you've got two different understandings of risk, or rather of what counts as an adverse effect. So you have your nature lover who says, look, if I'm walking on the beach and there might be a plastic bottle floating in the surf next to me, well, that's something I'm concerned about, something I consider to be a risk. But if the scientist then comes back and says, well, actually, technically that's not a risk, because in the scientific sense we're only concerned with the level that pollution has to reach before it's toxic, don't you have some failure of connection, some failure of communication there? What do you do about that?

Bart: Right. Well, I think it's very important then to talk about roles that different people play in this whole discussion that you are referring to. So I think that the politicians, policymakers advising politicians, they basically have the responsibility to oversee everything and to take into account the economic interests, other interests, nature conservation, and also some scientific arguments that would relate to ecological risk, so whether some animals or human health would be affected by microplastics. So it's not only about this natural-sciences-motivated risk. I think the other type of risk, the broader type, is probably more important for a politician. So it's only this small role that scientists play, I think, in advising on this well-defined part of the whole story. And how these different aspects should be weighed, should be compared to one another. That is another, that is not a natural science aspect. I think that's policy. So basically, that is somehow the more or less democratic process, ideally, I should say of, how these different interests are balanced, and there we are. It will very often be some sort of a compromise between economic interests and nature conservation and so forth and so on. So does that answer your question?

Toby: Yeah, I think so. So essentially, it's about both sides of the conversation understanding that the natural scientists can provide some useful information but only in a limited domain. They can only tell you what their science can tell you.

Bart: That's right.

Toby: And it's for the politician then to weigh that against other considerations like aesthetics or economic costs or political constraints or whatever.

Bart: Yep.

Toby: And do you think there are risks... no, that's a bad choice of word, do you think there are dangers in talking about risks in this way?

Bart: Yeah, if you talk about risks and dangers, I think that it comes with a higher responsibility than when you talk about, yeah, for instance, how an organism grows over time in a natural environment. That's what this is about, risks. This is about the possibility that, in my case, plastic actually is harmful for human health or for environmental quality. So the communication of that comes just with a very high responsibility because if there are adverse effects, this triggers, this can trigger measures, mitigation, and this comes at a cost. Of course, it also fuels concern with the public, and that's important if that's correct. And so if there is a concern also according to scientific data, so then it's very important to communicate that, but there's also a balance there because if you exaggerate the message, then, yeah, there is public fear, which is simply unnecessary. So I think it's very important to be very precise as a scientist in communicating what we know and what we do not know and with the uncertainties that are associated with these facts and the data. So any facts always are surrounded by uncertainty. So we should also talk about the facts concerning this uncertainty; uncertainty can be quantified, and then it becomes factual as well. Now we also should be very clear that what scientists say is always provisional. It's for the time being because we do more research, and there are other scientists responding to what you do, so it's a slow process, but yeah, one should be very careful and not say this is the definitive answer concerning risk.

Toby: Yeah, so, um, there's a lot to unpack there. Let's go step by step. So we're starting to talk now not just about risk, but about uncertainty about that risk. I wonder whether you find that politicians will come to you and say look, I want to make some new rules about microplastic use, or I'm considering making some new rules. Can you just tell me what the risk is? And the ideal answer they would love to hear is something like, yeah, at current pollution rates, you have eight years or whatever until you hit the threshold of toxicity. You know, just like a single number short answer.

Bart: Right.

Toby: And what you're saying is, of course, it's not as simple as that, not just because risk is complicated but also because the evidence about the risk is uncertain.

Bart: No, I agree. I understand. I can say two things here, one of which is that my personal experience is that the policymakers that I advise, they know very well what they want. So the quality of the processes that I was in was always very high. So it was always with people that understood all these nuances. So I always got the feeling that the people that I advised understood a lot about how the scientific process works. So they were very clear about what type of advice they wanted, what the reason for the advice was, how it would be used. And that helps a lot; that helps scientists to better frame their message. So it's often about how you should talk about risk as a scientist, but there's also a responsibility with the policymakers to try to dig into the minds of scientists, understand how this world works, and my experience is that they often understand this. And these two efforts then are crucial for success, I think. And the other thing is that, so you said they want to have, they might want to have a very short answer, but that's not necessarily the case. You can take your time in this process to, at the end of the whole process leading to some conclusion about risk, think carefully about how a message should be conveyed. And there are also formal procedures for that, so expert elicitation procedures: which wording should be used for certain outcomes? If you are with a group of 20 experts, they will all have a different view on this. But there are formal procedures to, uh, to tune the wording, to select carefully what the associations are of the words that you use, and optimise this. It will always be a compromise, but yeah, I think this is all that you can do. And then there's also, I said there were two things, but maybe there's a third aspect which is important here. There's also what happens after this outcome is communicated to policymakers or the public. It is impossible to do it right.
There will always be some noise on the line. And different people have different views, so they will hear different things in what you say or what you write. So there's also a huge responsibility, I think, for both policymakers as well as scientists to not remain silent about what happens after these outcomes are communicated to the public and policymakers. So for instance, when I have an interview with a journalist about the risk of microplastics, and I think "that's correct, but on this point, maybe it's not exactly what we meant to say", then I have the opportunity to explain what I meant, and this is always appreciated. So it is not spot on right away, but it's a process, and by explaining again and again, I think, and at least hope, that at the end the message converges to what it should be.

Toby: That's really good to hear that you're optimistic about that process of convergence. You mentioned something else there that I wanted to pick up on. I remember when you were chairing the SAPEA Working Group meeting on microplastics for the Scientific Advice Mechanism, so for the European Commission. And I was sitting in the audience as it were of one of your meetings, sitting at the back of the room. Of course, there wasn't an audience. These are closed-door meetings. And you had a fascinating discussion about exactly what words to use to describe levels of probability, I think it was. So if you say something is possible or likely or unlikely or extremely likely to happen. And the discussion, if I remember rightly, some of the discussion was about what kind of numerical value the audience might have in their mind when they hear a word like something is likely. Can you say a little bit more about that process and how it works?

Bart: Yeah, I can say some, I can explain a bit about that. There are formal expert elicitation procedures in place; they have been described, there are protocols for that. I think that the people advising on climate change, the IPCC, also use these tools, right? I also know that when people talk about risks of pesticides, for instance, EFSA, the European Food Safety Authority, has these procedures in place, and in our discussions, we used these tools. So scientists could say, okay, it's too early to assess the risks of microplastics or plastic debris in general. We need 10 years of research. But a policymaker often says, yeah, but we need to say something today. We need a decision today. And then scientists say, well, there's insufficient data, or the evidence is very limited and surrounded with a lot of uncertainty. So then what to do? And then just bringing in experts who share their view, and sometimes, I must admit, a bit of a gut feeling, is the best you can do. And I think you still have to do it, because it's better than not giving advice. But then you enter this domain of, yeah, semantics, where what you actually are referring to, and yeah, people have different understandings of certain notions. So you have to be very explicit. What do you mean when you say uncertainty? What do you mean when you say evidence? What do you mean when you say probable or likely? So it's basically a discussion about semantics, and defining that and writing that down and putting this on a scale. And then at the end, asking those 20 experts what their votes are on a scale of likelihood for, for instance, a risk or an adverse effect. And you can then in a rather quantitative way put that in a graph which expresses the uncertainty between the views of all these scientists. So for some of these things, it could be a very narrow interval, and for some other aspects, it could be a very wide interval. So we were talking about uncertainty. It's still uncertain.
But this is about the views of the scientists about these things that have been said, and policymakers can just look at that, and it can inform them because the things that were said with more certainty, of course, then could have a, yeah, a higher weight in the whole discussion or what they would like to do with this advice.
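The calibrated uncertainty language Bart refers to can be made concrete. The IPCC's guidance, for example, maps likelihood terms to probability ranges. Below is a minimal Python sketch using those published ranges; the `describe` helper is purely illustrative and not taken from any of the procedures discussed here.

```python
# Calibrated likelihood terms and their probability ranges,
# following the IPCC guidance note on treatment of uncertainties.
LIKELIHOOD_SCALE = {
    "virtually certain":      (0.99, 1.00),
    "very likely":            (0.90, 1.00),
    "likely":                 (0.66, 1.00),
    "about as likely as not": (0.33, 0.66),
    "unlikely":               (0.00, 0.33),
    "very unlikely":          (0.00, 0.10),
    "exceptionally unlikely": (0.00, 0.01),
}

def describe(probability: float) -> str:
    """Return the narrowest calibrated term whose range contains the probability."""
    matches = [(hi - lo, term)
               for term, (lo, hi) in LIKELIHOOD_SCALE.items()
               if lo <= probability <= hi]
    # Ranges overlap by design, so prefer the most specific (narrowest) term.
    return min(matches)[1]

print(describe(0.95))  # 0.95 falls in both "likely" and "very likely";
                       # the narrower "very likely" is chosen
```

The point of such a scale is exactly what Bart describes: it pins each word to an agreed interval, so that "likely" means the same thing to every expert in the room and to the policymaker reading the report.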

Toby: Yeah, that's great. And I think I hadn't quite appreciated, until listening to you explain it just then, what manoeuvre is being done there. So let me try and describe it to make sure I understand. So you have a situation where the evidence is incomplete, and therefore there's a degree of uncertainty around the conclusions, because we have to wait to get more evidence, or because the evidence we have is ambiguous or contested or whatever it happens to be. And so as a way to try and quantify that, as a way to give the policymaker or whoever some kind of clear expression of that uncertainty, a way for them to get a handle on it, you ask a bunch of scientists to essentially each individually vote where on the scale of likelihood of a particular outcome they think reality is. And then the range you get, not just the range but the shape of the curve, I suppose, is an expression of the uncertainty. Is that the idea?

Bart: Yes, it is an expression of the uncertainty in the minds of these scientists, right?

Toby: Right, but you're kind of doing a bait and switch here. Right because you're being asked about the likelihood of something, so essentially, um, a probability curve, an objective probability curve, and what you're giving instead in your answer is a different curve that describes epistemic uncertainty — the range of opinions of scientists on that empirical question.

Bart: Yes. So there is limited evidence, but yeah, experts are experts, but it's more like a jury. They have the role of a jury rather than people that know that two plus two makes four, right? It is like adding very uncertain things in a very complex environment, and the environment is complex. So you can never investigate all the locations on the earth. We do not know what the concentrations of plastics are. So if we, for instance, see a certain distribution of concentrations, can we say there is no higher concentration on earth than the highest one we have observed? No, that's not the case, and yeah, we often use this word, a 'hotspot' location. So there could be hotspot locations in a harbour, or dependent on the flow or the wind, where plastic concentrations are far, yeah, far higher. And it's not unthinkable that these locations exist, but it's speculation. It's pure speculation. But it's an educated guess. It's not a wild guess. It's still an educated guess, and I think that's a very important thing to say, that the educated guesses of 20 experts have value. But one should realise that it's basically a jury outcome rather than a fully deterministic outcome of a process. But once again, if that is also described explicitly, the way that this process was done, yeah, then it's just in the hands of the policymakers to say, well, that's useful enough or not. But the scientists should also then say we, we do not know for sure that our best guess is like this, and it's the outcome of, yeah, of an expert panel of 20 experts. And I already explained that you can quantify the uncertainty and the variability in these answers.
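The vote-aggregation step Bart describes can be sketched in a few lines: each expert contributes a probability estimate, and the spread of those estimates, summarised here by a median and interquartile range, is what ends up in the graph shown to policymakers. All numbers below are invented purely for illustration.

```python
import statistics

# Hypothetical elicitation outcome: each of 20 experts gives their
# best-guess probability that microplastic concentrations at hotspot
# locations exceed an effect threshold. Invented data for illustration.
votes = [0.05, 0.10, 0.10, 0.15, 0.15, 0.20, 0.20, 0.20, 0.25, 0.25,
         0.30, 0.30, 0.35, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.70]

median = statistics.median(votes)
q1, q2, q3 = statistics.quantiles(votes, n=4)  # quartile cut points

# A narrow interquartile range means the experts broadly agree;
# a wide one means the panel itself is uncertain.
print(f"median vote: {median:.2f}")
print(f"interquartile range: {q1:.2f} to {q3:.2f}")
```

The key design point, as Bart stresses, is that the resulting interval quantifies disagreement *among the experts*, not the underlying environmental probability itself; both the numbers and that caveat should travel together to the policymaker.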

Toby: Yeah, that's right because part of me wants to complain that you're basically substituting one kind of uncertainty for another, which is kind of cheating, but of course, being practical, what the policymaker needs is just some kind of understanding of the degree of uncertainty so that they can decide what to do with that.

Bart: Right.

Toby: And this is a way to give them something rather than just saying, I don't know.

Bart: Yeah, it is. I realise and admit that it has a slight flavour of 'it's better than nothing', right? Sure. So, so basically you could say you're almost empty-handed, but are we empty-handed? Well, I think not. I think that if you put 20 smart people in a room and they go through a careful process, then the outcome is valuable. And yeah, this may be an easy answer, but the scientists, of course, in my view, only have to describe the situation as it is. So if it's uncertain in the end, and we narrowed down an uncertainty of 100 by talking to one another to an uncertainty of 90, well, that's a slight improvement only. But that's it then. And yeah, we should try not to overstate and not try to be more accurate than we are, and then it's up to the policymakers, indeed, to take that uncertainty into account. And there are tools that they have that scientists do not have; policymakers have options, like applying the precautionary principle, things like that.

Toby: So as you've said, uh, very clearly, this approach is valuable because it's a way of saying something rather than nothing. But it's only really valuable as long as it's accompanied by a careful explanation of what you've done, of how that conclusion was come to, so that the audience understands what they're getting when they see this graph of guesses about probability.

Bart: Right.

Toby: That's really interesting. Now we've been talking so far about advice mostly to policymakers. But there's a whole different dimension which you've touched on once or twice, which is how that advice is communicated to the rest of the world, which includes obviously non-scientists, but really a wide range of people with different interests and backgrounds and perspectives. So when you're talking to a journalist, as you mentioned, or when you're talking to a lay audience giving a public lecture or something, you aren't giving them the report or even this graph with the uncertainty curve. You're giving them something much simpler. So let me give a concrete example. Okay, the review that you chaired for SAPEA, I think was 170 pages or so, and the study on drinking water that you did for the World Health Organization was 124 pages, it says here. So these are long documents full of detail, and most people, I mean, even being realistic, most policymakers probably aren't going to read them from cover to cover, which is why the question then arises of how you summarize the content of these reports. So let me now read you the quick summary that the WHO published about your report. Here's what it said: "Based on the limited evidence available, chemicals and biofilms associated with microplastics in drinking water pose a low concern for human health. Although there is insufficient information to draw firm conclusions on the toxicity related to physical hazard of plastic particles, particularly for the nano-size particles, no reliable information suggests it is a concern." Now, given the conversation we've just been having, it seems clear that there are a load of those tricksy words in there, you know, like low concern and limited and firm and reliable. Things that your audience could reasonably interpret in a variety of ways. So I suppose my main question is: What do you think about that summary? Is it accurate? 
I mean, does it accurately describe what you said in your report?

Bart: So I think this wording is correct. However, people should realise that, yeah, you have like 400 references in the report, and they have been studied and integrated, and there is a synthesis. And then that leads to a report of 100 pages or 200 pages with a lot of nuance and detail. And then there's an abstract, and the abstract, of course, is always less detailed and nuanced than the whole report, and the same holds for the lines, the sentences that you just read. So yeah, I think it's for me rather trivial to say do not complain about the headline. We know the headline in a newspaper is not something you should respond to. And I know that in today's society, there's a lot of response to tweets and headlines. But I would hope that the public realises that the headline has a function: to draw your attention to something, and then please read the whole thing. And that also applies here. So these parts of a document have a function; they are not there for nothing. They are meant to touch upon something so that you get an idea of what it's about. But maybe above each abstract we should always write: this is the abstract, please read the whole thing. So like a manual, what does it mean? And it's, yeah, this is so self-evident in a way, but again and again, this misunderstanding of what these texts actually mean on a document or a website goes on and on and leads to confusion, I think. Of course, that doesn't mean that these headings and abstracts should not be carefully written, and that is why you see this text. 'Limited', what's that? 'Insufficient', what's that? So in these sentences that you read, there are a lot of relative notions that you can think of as a scale, but each reader will have a different scale for 'limited' and so on. Um, there's a message here, which is that there's not an urgent, acute disaster going on, but it's also not on the other end of the scale.
There's limited evidence available, with insufficient information to draw firm conclusions and so on, and for the detail, one should read the report. So I think that the people involved in writing these reports, they, yeah, they talk to journalists, or they also give keynote presentations, or they have presentations in a science café, for instance. And these are always situations where you can explain in more detail. So if I talk about the WHO report in a science café, I know about these pitfalls and caveats, and I build up a lesson, basically, a storyline which explains why these words are used. So I would talk about this concept of "it's the dose that makes the poison". I would talk about how the plastic emissions of today are not the plastic emissions of tomorrow. I would talk about how all this information is provisional, in a way, so that the public understands. And I always get the feeling that these meetings are very fruitful, and that most times there's a real understanding, especially when there is a good discussion afterwards. So yeah, that there is a good interaction with the group, I think that's very important, because you get all kinds of questions about exactly these sentences. But you then get the opportunity to explain, and I'm rather positive about the results of those meetings.

Toby: Yeah, and that's something you mentioned earlier, which I didn't really pick up on at the time, but I think is important, the idea that this is not just a one and done thing where you give the answer and then drop the mic, right?

Bart: Right.

Toby: That it's the start of a conversation where you can give the outline and then drill down interactively into the details depending on how the audience responds. And I suppose you've also got a kind of inevitable difficulty in principle that if you reduce something from 124 pages to two sentences, of course, by definition, you're not only going to remove detail, but you're going to widen the range of possible interpretations, given that you've taken out that detail. You can express yourself less clearly in 200 words than in 200 pages.

Bart: Right.

Toby: There's then a slightly different criticism that could also be made, perhaps. So far, we've been talking mostly about the importance of communicating exactly what you mean, uh, expressing the uncertainty clearly, and especially with these short summaries aimed at non-experts, they should be carefully written so that they communicate as clearly as possible what the evidence does and doesn't say. Fine. But perhaps it's a slightly different question, also about political or societal, you might call it helpfulness, about the contribution you're making to the debate. So is it possible to have a summary that's accurate, in that we can agree it correctly reflects the evidence underlying it, but is nonetheless unhelpful? For instance, what if your carefully-worded abstract is seized upon by the plastic industry and waved around to say, um, look, no urgent cause for concern here, business as usual.

Bart: Okay. Yeah, I understand the question, and I think it's a very good question. Um, yeah, this touches very much on what you see as the role of a scientist, and different scientists can take a different position here, which they have every right to do. Yeah, obviously if, for instance, there is a scientific result or an abstract explaining that the risks are limited, yeah, you could say that probably the plastic industry would have a certain view on that, whereas there are also environmentalist groups or NGOs that, very rightfully, care about nature, and they could say, well, we are just not happy with all this plastic in the environment. This just doesn't belong in the environment. So yeah, a scientific outcome which would not show an effect would not really support that position; it would not be helpful for the position that plastic does not belong in the environment. And I think it is rather self-evident that there are all kinds of interests, also contrasting interests, in society. And whatever you do or write, there will be some of these parties to which an outcome fits better than others. So for me as a scientist, I would say the only thing I can do is to be accurate: if effects are found, report that; and if risks or effects are not found, report that, but be careful to say that it's not the end of the story. Yeah, so for instance, with the SAPEA report, we said there is no evidence at this moment for widespread risks, except perhaps in some hotspot locations. But we were talking about that, and I remember very well that we asked, should we just say that? And, well, is it very likely that concentrations will increase, or will they? So we had a discussion about that, and then we collected some evidence; we were just reviewing all the information. And there is sufficient evidence in the literature that plastic concentrations will increase.
So then we said, actually, there is a risk: not today, but in the future, within a couple of decades, we will pass these effect thresholds. And this was written with these interest groups also in mind, I think, so it was still scientifically accurate, but we said we should not be too easy on this conclusion. We should really carefully think about what we are saying and what the consequence is, and is it correct? Do we not overlook something? And in this example that I just gave, it was very clear: we should not overlook the future, in this case a future where plastic concentrations will be higher. But at the end of the day, still, I think that if a group of scientists is asked to summarise the evidence, then that's just what they should do. The interest groups, that is the next step. So in the SAPEA process, for instance, when the report was published, later on there were specific meetings with interest groups, with the plastic industry and with NGOs and many organisations that have anything to do with plastic production or the flow of plastic in society or in the environment. And there the outcomes were discussed, and for policymakers, this discussion is the place where the point you made gets attention to its full extent. Because then you come to measures and whether or not things should be transformed into regulations. So I think that is the domain where this balancing of different interests should be done, rather than in this first phase, where the evidence is just being reviewed.

Toby: Yeah, I can hear you treading this, carefully treading this really fine line, saying on the one hand that the role of the scientist is just to review the evidence and present things as they are, and that it's not within the scientist's gift, at least as a scientist, to start interpreting and judging what should happen next and so on. And that's a very clear position. But on the other hand, you did also have in mind, when you were thinking about how to present your results, how it might be interpreted by different interest groups, and you did try to make sure that what you were saying not only was accurate but also kind of fit with how you thought the consequences ought to play out. There isn't really a question here. Perhaps it's just a tension that's intrinsic to being a scientist who's involved in policy.

Bart: Well, yes, but I think, um, so scientists, or an expert group like the ones that we had with WHO or with SAPEA, they are part of society as well, right? We know that when we're writing a report, there are always people advising on the summaries. And the scientists play an important role there. So we should not be naive. Whenever you write something, whether it's a scientific article, a report like this, a more popular science article, or something in the media, the author has his or her audience in mind. So the expert groups that I was in always had in their minds: okay, who will be reading this? And how can we write in such a way that the essence of what we think, and that's just the science as it is, is communicated as well as possible for the whole suite of possible views that people or groups or organisations could have on what we write? Now, that's very complex. So it will not be perfect. It will never be perfect. But by thinking that way, by at least making the effort, you can get a bit closer to a good text. And then the rest is just for the after-party, I would almost say. So, what happens after the publication of a report.

Toby: Yeah, okay. I'll buy that. Sounds good. Just before we finish, while I have you, I wanted to ask one maybe unrelated thing, but it might be of interest to our audience of people who take an interest in science and policy. You've advised multiple different organisations, nationally and internationally. Are there any interesting differences between how these different organisations work, the processes they use to gather science advice?

Bart: Yeah, there were differences, and they're related to the specific situations and the different questions asked, and they emerged from what triggered these organisations to start a certain science advice process in the first place. So for instance, with WHO, um, yeah, there was this public concern. There were some studies finding microplastics in drinking water, and so there was some concern among the public, and WHO was just asked for an opinion. And at some point, WHO thought, well, let's dig into this, let's try to come up with a view on this issue, and that triggered it, but it was more open-ended. It's a report, and that's it, just a snapshot in time of what experts would think about the implications of microplastics in drinking water. So it ended with the publication of the report. I think there was a difference with the SAPEA process, where the Science Advice Mechanism in Brussels, where there's this group of Chief Scientific Advisors who have the responsibility to inform policymakers in Brussels, put it on the agenda at some point. And via the organisation SAPEA, they asked for an evidence review report, which got a clear follow-up, namely a scientific advice by this group of Chief Scientific Advisors. Now, there's a difference because, for instance, in the SAPEA process that I was in, for the evidence review report, we were clearly instructed. There were very clear guidelines. There was a quality assurance document, actually, because it was all part of this bigger SAPEA workflow, so to say. Well, one example is that we were strongly advised to end with conclusions, of course, but to go no further than options. So no recommendations. We should not give recommendations like, "You should." We should formulate at the end more like, "You could," and then A, B, C, or D, and why. But just offer a menu to the scientific advisors, who could then pick and translate and interpret and make real recommendations out of that.
So that was a clear instruction that we got, and that made the process much different. It was basically easier because we just had to follow a protocol in a certain way. Of course, within the scope of what we had to do, it was complex enough, but the whole setting was very clear. And the WHO process, I think, was different because there it was just reviewing the evidence and getting some idea about what the possible implications could be for human health, and there it stopped.

Toby: Mmhmm, thanks, and really the last question this time. It's obvious, I think, that your work as a science advisor is guided by your work as a scientist; that goes without saying. But do you think it also works the other way? Are there ways in which your research or your general professional role are enhanced by the fact that you have this policy advice gig too?

Bart: Yes, I think so. I think there are different types of advantages. It is just helpful to be very well aware of what the need is, the need for the information that you have access to as a scientist. If you did not have this interaction with society, then you would just sit behind your desk and come up with questions yourself that you find interesting, and that's more the fundamental science approach, which is important, I think. It is important to keep that for a certain part of your time as well. But it does not necessarily bring results that are easily applied. If you go the other way around, from policy advice, you get a clearer view of what the actual questions are. You also get a clear view of how people think about that, because you get a very good measure of how the policymakers think about these issues, and the public, but also your colleague scientists, because you never do this alone. It's always with a couple of people or an expert group. So it is, in a way, also just talking with experts about things that you are interested in yourself as well. And that helps to formulate your own research questions later on, prioritising things. You learn from the approaches that other scientists use. And, maybe more trivially, meeting policymakers, but also other scientists, is always nice. It's the networking. It's just talking about recent developments.

Toby: Yeah, that's good to hear. Bart, thanks for a stimulating discussion. I really appreciate the time you've taken, and to you listening to this, if you can think of someone else who might find this conversation interesting, please do recommend that they check it out. We're still a new show, and if you like what you hear, the chances are that other people you know might like it too. Thanks so much for listening, and we'll be back soon for another conversation with an interesting person. Bye for now.
