1 Introduction

Probabilistic terms and associated jargon are often part of the working environment of volcanologists. Research on volcanic hazard and the quantification of volcanic risk has even led to volcanic hazard being officially defined in terms of probability (Blong 2000). The last decade has produced a comprehensive framework of studies, surveys and computer-assisted procedures for transforming field data into probabilities of occurrence of a particular scenario (Newhall and Hoblitt 2002; Marzocchi et al. 2004, 2008, 2010; Aspinall 2006; Martí et al. 2008; Neri et al. 2008; Sobradelo and Martí 2010, 2015; Sobradelo et al. 2013). Following the successful development of probabilistic tools came the challenge of communicating their results. Research and operational strategies started to incorporate the enhancement of the communication of these probabilistic forecasts to decision makers and the public (Marzocchi and Woo 2007; Marzocchi et al. 2012; Sobradelo et al. 2014). At the same time, extensive work has been done on the psychological and sociological aspects of the perception and interpretation of uncertainty, both in volcanology and across other hazards. Despite this extensive use, there is sometimes confusion surrounding the statistical interpretation of probabilities, partly due to unclear statistical concepts: What is a probability? What is statistical science? How much can I rely on a probability estimate? What are probabilities used for? What is uncertainty? How do uncertainty and probability relate to each other? Why are statistics and probabilities sometimes misunderstood? Why is it that scientists and/or users (officials) do not fully appreciate the uncertainty surrounding a probability estimate?

In this chapter we try to address the above questions by focusing on the statistical meaning of probability estimates and their role in the quantification and communication of uncertainty. We hope to provide some insights into best practices for the use and communication of statistics during volcanic crises.

2 Quantifying and Communicating Uncertainty in Volcanology

Volcanology is by nature an inexact science. Deciphering the nature of unrest signals (volcanic reactivation), and determining whether or not an unrest episode may be an indication of a new eruption, requires knowledge of the volcano's past, current and future behaviour. In order to achieve such a complex objective, experts in field studies, volcano monitoring, and experimental and probabilistic modelling, amongst others, work together under pressure and tight time constraints. It is important that these stakeholders communicate on a level that caters for the needs and expectations of all disciplines; in other words, it is important to agree on a common technical language. This is particularly relevant when volcano monitoring is carried out on a systematic survey basis without continuous scientific scrutiny of monitoring protocols or interpretation of data.

By definition, uncertainty is the state of being uncertain. It is used to refer to something that is doubtful or unknown, and it implies a lack of confidence about something. Hence, it is directly related to the amount of knowledge we have about a process. A forecast, in the form of a probability estimate, is an attempt to quantify this uncertainty and support decision-making. Forecasting potential outcomes of volcanic reactivation (unrest) usually implies high levels of scientific uncertainty. Anticipating whether a particular volcanic unrest will end with an eruption, and when and where (temporal and spatial uncertainty), requires scientific knowledge of how the volcano has behaved in the past, and scientific interpretation of precursory signals. Whilst this may be less challenging for volcanoes that erupt often, it is far more difficult for volcanoes with long eruptive recurrence intervals and less available data, and even more so for those without historical records.

The main goal of volcano (eruption) forecasting is to be able to respond to questions of how, where, and when an eruption will happen (Sparks 2003). To address those questions we often use probabilities in an attempt to quantify the intrinsic variability due to the complexity of the process. The communication of those probabilities has to adapt to the recipient of that information. Making predictions about the future behaviour of a volcano follows similar reasoning as for other natural phenomena (storms, landslides, earthquakes, tsunamis, etc.). Each volcano has its own characteristics depending on magma composition, physics, rock rheology, stress field, geodynamic environment, local geology, etc., which make its behaviour unique. What is indicative at one volcano may not be relevant at another. All this makes the task of volcano forecasting challenging and difficult, especially when it comes to communicating uncertainty to the population and decision-makers.

During a volcanic emergency, the relevant questions are first how to quantify the uncertainty that accompanies any scientific forecast, and second, how to communicate it to policy-makers, the media and the public. Scientific communication during volcanic crises is extremely challenging, with no standardized procedures on how it should be done among the stakeholders involved (scientists, governmental agencies, media and local populations). Of particular importance is the communication link between scientists and decision-makers (often Civil Protection agents). It is necessary to translate the scientific understanding of volcanic activity into a series of scenarios that are clear to decision-making authorities. Direct interaction between volcanologists and the general public is also important, both during times of quiescence and during activity. Information that comes directly from the scientific community has a special impact on risk perception and on the trust that people place in scientific information. Therefore, the effective management of a volcanic crisis requires the identification of practical actions to improve communication strategies at different stages and across different stakeholders: scientists-to-scientists, scientists-to-technicians, scientists-to-Civil Protection, scientists-to-decision makers, and scientists-to-the general public.

3 The Role of Statistics and Probabilities in the Quantification of Uncertainty

3.1 Concepts, Definitions and Misconceptions

Formally speaking, Statistics is a body of principles and methods for extracting useful information from data, assessing the reliability of that information, measuring and managing risk, and supporting decision-making in the face of uncertainty. Rather than leaving us to drown in a flood of numbers, statistics helps us to make better management decisions and gives a competitive advantage over intuition, experience and hunches alone.

Probability shows the likelihood, or chances, for each of the various future outcomes, based on a set of assumptions about how the world works. It allows handling randomness (uncertainty) in a consistent, rational manner and forms the foundation for statistical inference (drawing conclusions from data), sampling, linear regression, forecasting, and risk management.

With statistics, we go from observed data to generalizations about how the world works. For example, if we observe that the seven hottest years on record occurred in the most recent decade, we may conclude (perhaps without justification) that there is global warming. With probability, we start from an assumption about how the world works, and then figure out what type of data we are likely to see under that assumption. In the above example, we could assume the null hypothesis, H0: there is no global warming, and then ask how likely it is to observe the seven hottest years within the last decade if H0 were true. We then use the observed data to look for significant statistical evidence to reject H0 in favour of the alternative, H1: some phenomenon related to global warming may be ongoing. To some extent, we could say that probability provides the justification for statistics.
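
As a hedged illustration of this reasoning, the sketch below computes how surprising the observation would be under H0, assuming (purely for illustration) a 100-year temperature record in which, absent warming, every set of years is equally likely to contain the hottest ones.

```python
from math import comb

# Illustrative sketch of the H0 test described above.  The record length
# N = 100 is an assumption made only for this example; under H0 every
# combination of years is equally likely to contain the 7 hottest years.
N = 100        # assumed length of the temperature record (years)
k = 7          # number of hottest years observed
window = 10    # size of the most recent decade

# P(all k hottest years fall within the last `window` years | H0)
p_value = comb(window, k) / comb(N, k)
print(f"p-value under H0: {p_value:.1e}")   # about 7.5e-09: strong evidence against H0
```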

However, there is no precise definition for probability. All attempts to define it must ultimately rely on circular reasoning. According to the Oxford Dictionary, probability is “the state of being probable; the extent to which something is likely to happen or be the case”. Roughly speaking, the probability of a random event is the “chance” or “likelihood” that the event will occur. To each random event A we attach a number P(A), called the probability of A, which represents the likelihood that A will occur. The three most useful approaches to obtaining a definition of probability are: the classical, the relative frequency, and the subjective (Jaynes 2003; Colyvan 2008), discussed further below.

The number of volcanic eruptions of magnitude greater than 1 in the next t years in a particular area is an example of a random variable, Y. When we try to quantify the value of Y we are implying that a true value exists and that we want to anticipate it, so that we can make decisions in advance. That is, we want to estimate a range of values that we think will contain the true value of the random variable Y. The most common way of presenting this range of values is as a best estimate ± confidence margin. Here, we can distinguish between two types of uncertainty: the one surrounding the best estimate itself (type A), and the one that accounts for the level of confidence that we have in that best estimate (type B). It is not enough to provide a best guess (point estimate) for a parameter; we also need to say something about how far from the true parameter value such an estimate is likely to be. The confidence interval is one way of conveying our uncertainty about a parameter. With it, we report a range of numbers within which we hope the true parameter value will lie.
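
As a minimal sketch of the "best estimate ± confidence margin" idea, the following fragment computes a 95% confidence interval for a mean inter-event time from a small hypothetical sample; the repose times are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical repose (inter-event) times in years, invented for illustration
repose_times = np.array([2.0, 3.0, 4.0, 3.5, 2.5])

n = repose_times.size
best_estimate = repose_times.mean()
standard_error = stats.sem(repose_times)            # standard error of the mean

# 95% confidence margin based on Student's t distribution
margin = stats.t.ppf(0.975, df=n - 1) * standard_error
print(f"best estimate: {best_estimate:.2f} yr "
      f"(95% CI: {best_estimate - margin:.2f} to {best_estimate + margin:.2f} yr)")
```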

3.2 Measures of Uncertainty

Probability can be used as a measure of uncertainty, of both type A and type B. The way we understand probabilities depends on our degree of numeracy. It is common in our daily lives to make choices under some level of uncertainty, for instance, whether or not to order the fish of the day in a new restaurant, or whether to buy one or two bags of fruit in a new shop. To make those simple decisions, we unconsciously go through previous knowledge of similar experiences to work out some kind of odds of making the right choice. Suppose now that we are being rushed to make up our mind at the restaurant; we will have to decide quickly with whatever information we have. The main difference between this and the decision of whether to evacuate a populated area threatened by a destructive volcanic event is the penalty, or loss, for making the wrong decision. In the first case, the loss is negligible to our daily lives, but a wrongly timed evacuation decision could have serious consequences. For this reason, the interpretation of probability must be made in the context of how much we are willing to lose if we make the wrong decision. The difference between probability (the extent to which something is likely to happen) and risk (a situation involving exposure to danger) means that the relevance of a probability estimate for the occurrence of an event will depend on the associated risk, that is, on how much exposure to danger the occurrence of the event entails. Suppose the odds are one to ninety-nine (1:99) that our car breaks down in the middle of a trip. We would most likely still take our family on that trip. Instead, suppose we are given the same odds for an airplane crash. We would most likely not want to take our loved ones on that plane. In both cases the probability is the same, but the risk is different. This illustrates how probability estimates must be interpreted in the context of their associated risk.
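
A simple expected-loss calculation makes the same point numerically. The loss figures below are hypothetical and chosen only to show that identical probabilities can carry very different risks.

```python
# Same probability, very different risk: hypothetical loss figures only.
p_failure = 1 / 100                  # 1:99 odds expressed as a probability

loss_car_breakdown = 500             # assumed cost of a roadside breakdown
loss_plane_crash = 10_000_000        # assumed (deliberately extreme) loss

print("expected loss, car trip :", p_failure * loss_car_breakdown)    # 5.0
print("expected loss, flight   :", p_failure * loss_plane_crash)      # 100000.0
```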

Clearly emotions, values, beliefs, culture and interpersonal dynamics play a significant role in decision-making processes. Extensive work in the fields of psychology and sociology has examined the perception and interpretation of uncertainty, both in volcanology and across other hazards (weather, tsunami, operational earthquake forecasting, climate change) (Fischhoff 1994; Cosmides and Tooby 1996; Kuhberger 1998; Windschitl and Weber 1999; Bruine De Bruin et al. 2000; Gigerenzer et al. 2005; Patt and Dessai 2005; Risbey and Kandlikar 2007; Morss et al. 2008; Budescu et al. 2009; McClure et al. 2009; Joslyn et al. 2009; Mastrandrea et al. 2010; Jordan et al. 2011; Eiser et al. 2012; Doyle et al. 2014a, b). However, that is not the scope of this chapter. For the purpose of our argument, we focus on the 'rational side' of decision-making, that is, the quantification of uncertainty using statistical theory.

What makes statistics so unique is its ability to quantify uncertainty, so that statisticians can make a categorical statement about their level of uncertainty, with complete assurance. But such statements have to be made taking into account all possible factors (sources of uncertainty) and making sure the data are correctly selected to eliminate all sources of bias. These statements can have a significant impact and may involve matters of life and death. So far we have assumed that the probability estimates have been calculated using the right methods. For the restaurant or supermarket examples this could be a simple arithmetic mean. Forecasting the occurrence of a volcanic event will require more elaborate mathematical modelling. The accuracy of the probability estimate will depend largely on the model selection.

3.3 Disciplines and Schools of Thought

To quantify uncertainty using statistics there are three main disciplines statisticians rely on: (i) data analysis, (ii) probability, and (iii) statistical inference (Cooke 1991; Pollack 2003; Kirkup and Frenkel 2006). The first step is always the data analysis, that is, the gathering, display and summary of the data. In the case of volcanoes, we look at past and monitoring data, and we make the necessary adjustments for any inconsistencies (e.g. Sobradelo and Martí 2015). The second step is the formal study of the laws of chance, also called the laws of probability, a field born in the 17th century for no other reason than to analyse games of chance (Cooke 1991). Probabilities are the result of applying probability models to describe the world, and this is done using the concept of random variables, that is, numerical outcomes of a random experiment or a random process we are trying to understand, so that we can forecast its future outcome (height, weight, income, eruptive events in the last 500 years, number of seismic events in one day, etc.). Finally, we use the above to make inferences about the real world with a certain degree of confidence (Rice 2006).

Approaches to developing probability models, associated with different schools of thought, are: (1) the classical approach, based on gambling ideas, which assumes that the game is fair and all elementary outcomes have the same probability; (2) the relative (objective) frequency approach, which holds that if an experiment can be repeated, then the probability that an event will occur is the proportion of times the event occurs in the long run; and (3) the personal (subjective) probability approach, which holds that most of the events in life are not repeatable (Cooke 1991; Jaynes 2003). Subjectivists base the probability on their personal belief of the likelihood of an outcome, and then update that probability as they receive new evidence (Cosmides and Tooby 1996). An objectivist uses either the classical or the frequency definition of probability. Subjectivists, also called Bayesians, apply the formal laws of chance to their own personal probabilities. What makes the Bayesian approach subjective is the choice of models and a priori beliefs to define the prior probabilities, even if the rules and observed data used to update and compute the posterior probabilities are quite "objective". The Bayesian approach holds that any state of uncertainty can be described with a probability distribution, making it suitable for the study of volcanic areas where very little or no data exist, other than theoretical models or expert scientific beliefs. These initial probabilities get updated each time new information arrives, making the approach quite dynamic and easy to apply.
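
The sketch below shows the Bayesian updating idea in its simplest conjugate form. It is not any specific published method; the Beta prior and the counts of unrest episodes are assumptions made for illustration only.

```python
from scipy import stats

# Subjective prior belief about the probability that an unrest episode ends
# in eruption: Beta(2, 8) implies a prior mean of 0.2 (assumed for illustration).
prior_alpha, prior_beta = 2, 8

# Hypothetical new evidence: 3 unrest episodes ended in eruption, 4 did not.
eruptions, non_eruptions = 3, 4

# Conjugate update: the posterior is again a Beta distribution.
posterior = stats.beta(prior_alpha + eruptions, prior_beta + non_eruptions)

print(f"posterior mean probability of eruption: {posterior.mean():.2f}")
low, high = posterior.interval(0.95)
print(f"95% credible interval: ({low:.2f}, {high:.2f})")
```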

For many years there has been controversy over "frequentist" versus "Bayesian" methods. However, neither the Bayesian nor the frequentist approach is universally applicable (Jaynes 2003). For each situation, some approaches and models are more suitable than others to produce probability estimates as accurately as possible with high confidence. It is the task of the statistician to decide and justify the model selection to ensure reliability of the results. But a brilliant analysis is worthless unless its results, including their degree of statistical uncertainty, are successfully communicated.

The deterministic approach is often presented as an alternative to the probabilistic approach. In a deterministic model, events are completely determined by cause-effect chains (causality), without any room for random variation. Here, a given input will always produce the same output, as opposed to probabilistic models, which use ranges of values for variables in the form of probability distributions. This approach is sometimes used in fields with a lot of data, as in weather forecasting, or where the underlying process can be explained with physics-based models, as in seismology. In any case, the reliability of probabilistic versus deterministic forecasts is sometimes a cause of debate, and a mix of both deterministic and probabilistic approaches is often the preferred option.

3.4 How Reliable Is a Forecast: Data and Methodology

By giving an expected value for a forecast we are already quantifying a measure of uncertainty. This value will be interpreted according to the degree of confidence with which the estimate is made, which in turn will depend on the type, amount, quality and consistency of the evidence upon which the estimate is based, usually past data or theoretical models.

The degree of confidence, or certainty, is quantified and expressed via the variance or standard deviation (the square root of the variance). Suppose we have three measurements of a random process (e.g. inter-event time in years) of 2, 3, and 4 years, and want to draw some conclusion about the inter-event time based on these values. We use 3 years, a simple arithmetic mean, as the estimate of the inter-event time. The three measurements are equally distant and symmetrical around the mean. The variance, which measures the dispersion of the values around the mean, is 1, and the median, which is the value in the middle, is 3, the same as the mean. Suppose we do the same exercise with measurements 1, 3, and 5. We still get a mean of 3, but now the values 1 and 5 are two units away from the mean, and so the variance, as a measure of dispersion around the mean, is now 4 instead of 1. Note, however, that the values are still symmetrically distributed around the mean, and that the mean and median are the same as before, 3. The only thing that has changed is the variance, which is now larger. The lower the variability around the point estimate, the more reliable our estimate is. Let us now take a sample with 10 measurements: 1, 1.4, 2, 2.1, 2.2, 2.3, 3, 4, 5, 7. The estimated inter-event time, based on a simple arithmetic mean, is still 3, but we have based this estimate on 10 rather than 3 observations. The more data we have to compute our estimates, the more confident we are in the results (Rice 2006).
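
The following short fragment reproduces the numbers used in this example, using the sample variance (i.e. dividing by n - 1).

```python
import statistics

# Reproduces the worked example: same mean, increasing spread.
samples = [
    [2, 3, 4],
    [1, 3, 5],
    [1, 1.4, 2, 2.1, 2.2, 2.3, 3, 4, 5, 7],
]
for sample in samples:
    print(f"n={len(sample):2d}  mean={statistics.mean(sample):.2f}  "
          f"median={statistics.median(sample):.2f}  "
          f"variance={statistics.variance(sample):.2f}")
# sample variances: 1.00, 4.00 and about 3.39, respectively
```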

Apart from the reliability of the data used to produce an estimate, a crucial aspect of a forecast is the correct choice of methodology to model it. Most of the time we do not know the underlying distribution of a random process (e.g. the number of volcanic eruptions in a time interval and particular area, assumed to be random), and so we make assumptions to help us find a function within a family of known distributions (Normal or Gaussian, Exponential, Binomial, Beta, Poisson, Chi-Squared, Log-normal, etc.) that is suitable to model this unknown process (see Rice 2006; Gonick and Smith 2008; McKillup and Dyar 2010 for details on these distributions). This facilitates making inferences and forecasts based on the conveniently known properties of these functions. The choice of the distribution family depends on the characteristics of the sample data (how many observations there are, whether the distribution is symmetrical or skewed, what type of measurement was used, etc.). To select the most appropriate distribution, it is important that the data are an unbiased and representative sample of the population. Therefore, the data gathering process and a preliminary and exhaustive analysis of the dataset are crucial to reduce uncertainty and increase confidence in the final results. Needless to say, the choice of distribution and the assumptions about the sample data add uncertainty to the results, and must be taken into account when presenting the final outcome.
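
As a hedged example of this step, the fragment below fits an Exponential distribution (one common choice for repose intervals) to the ten inter-event times used earlier and applies a simple, approximate goodness-of-fit check. The choice of distribution here is an assumption for the sketch, not a recommendation.

```python
import numpy as np
from scipy import stats

# Inter-event times (years) from the earlier example
repose_years = np.array([1, 1.4, 2, 2.1, 2.2, 2.3, 3, 4, 5, 7])

# Fit an Exponential distribution with the origin fixed at zero;
# the fitted scale equals the mean inter-event time.
loc, scale = stats.expon.fit(repose_years, floc=0)
print(f"estimated mean inter-event time: {scale:.2f} yr")

# Kolmogorov-Smirnov test as a rough goodness-of-fit check (only approximate,
# because the parameters were estimated from the same data).
ks_stat, p_value = stats.kstest(repose_years, "expon", args=(loc, scale))
print(f"KS statistic: {ks_stat:.2f}, p-value: {p_value:.2f}")
```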

Arithmetic means are purely descriptive measures used to summarise the information in the data sample. In practice, we would not use a simple arithmetic mean to estimate probabilities and make inferences about complex processes. There is a large number of statistical modelling techniques (beyond the scope of this chapter) whose choice depends on the type of data we have, its distribution, quality and quantity, and the type of question we want to answer. In the end, the reliability of the estimate (whether the inter-event time is 3 years or not) will depend on the accuracy, reliability and amount of data used to reach that conclusion, together with the statistical model and approach. That is why a probability estimate should always be presented with some measure of its variability (estimated error, usually given by the variance or standard deviation), and it should be made clear that it is an estimate based on the available data, and that we have assumed that the future behaviour of the random event will follow the same pattern we have observed in this dataset. This might in fact not be the case, and that is why we sometimes hear about time series data being "stationary or not", meaning that depending on what time interval the data come from, the pattern observed may be different. In short, there are many assumptions and sources of uncertainty around a probability estimate that have to be taken into consideration when interpreting it.

Taking a bigger-picture view, ultimately all we are doing is drawing general conclusions about an unknown process (the inter-event time) from samples of observations. We do not have access to all the possible observations of this process, but still want to anticipate the future value of this event, so we can be better prepared should the event strike. This is the reason why we use statistical approaches to model random events: unless we can see into the future, a probability estimate can never be either 0 or 100%.

4 Using Probabilities to Communicate Uncertainty

Since the late 1990s there has been significant focus on improving communications during volcanic crises (IAVCEI 1999; McGuire et al. 2009; Aspinall 2010; Donovan et al. 2012a, b; Marzocchi et al. 2012; Sobradelo et al. 2014). A common factor that emerges is the value of probabilities as a way to communicate scientific forecasts and their associated uncertainties, for natural hazards in general (Cooke 1991; Colyvan 2008; Stein and Stein 2013) or more specifically for volcano forecasting (Aspinall and Cook 1998; Marzocchi et al. 2004; Aspinall 2006; Sobradelo and Martí 2010; Marzocchi and Bebbington 2012; Donovan et al. 2012c). However, this also brings the need to communicate the uncertainty that accompanies any forecast of the future behaviour of a natural system.

Making predictions about the future behaviour of a volcano involves analysis of past data, monitoring of the current situation and identification of possible scenarios. Quite often, these predictions are challenging to quantify and communicate due to lack of data and past experience. An added source of complexity arises when the probability estimates are very small, below 1%. Most lay people are not familiar with decimals or small fractions. A layperson will easily understand a probability of 0.2 or 20%, but not so well one of 0.0002 or 0.02%, even when both are associated with the same level of risk. Scientists responsible for the communication of volcanic forecasts have the difficult task of selecting the scientific language to deliver a clear message to a non-scientific audience.
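
One hedged option, in line with the natural-frequency literature cited later in this section (e.g. Gigerenzer 2014), is to re-express small probabilities as approximate "about 1 in N" statements. The helper below is purely illustrative.

```python
def as_natural_frequency(probability: float) -> str:
    """Re-express a probability as an approximate 'about 1 in N' statement.

    Purely illustrative helper; the rounding choice is an assumption.
    """
    if probability <= 0:
        return "essentially zero"
    return f"about 1 in {round(1 / probability):,}"

print(as_natural_frequency(0.2))      # about 1 in 5
print(as_natural_frequency(0.0002))   # about 1 in 5,000
```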

The uncertainty that accompanies the identification and interpretation of eruption precursors derives from the unpredictability of the volcano as a natural system (aleatory or deep uncertainties) and from our lack of knowledge of the behaviour of the system (epistemic or shallow uncertainties) (Cox 2012; Stein and Stein 2013). These uncertainties will depend on how well we know the volcanic system. Active volcanoes with high eruption frequencies can be more easily predicted (i.e. they are reasonably well known, and so past events are good predictors of future ones: shallow uncertainties). In contrast, deep uncertainties are associated with probability estimates based on poorly known parameters or poor understanding of the system, as is usually the case for volcanoes characterised by low eruption frequencies.

In everyday life we are often quite unaware that we use probabilities (commonly known as "common sense") to evaluate the degree of uncertainty we face. The question is whether, to make our decisions, we prefer or better understand the mathematical expression of probability (e.g. a 20% chance of an event occurring) or verbal statements such as "likely", "improbable" or "certainly". Greater precision does not necessarily imply greater understanding of what the message really is, as it will be perceived differently (Slovic 2016).

Some countries, like the USA, prefer to use probabilities to express uncertainties in weather forecasts, while some European countries prefer to use verbal expressions. In both cases, people react according to the forecast. There are different ways in which probabilities (and uncertainties) can be described. These include words, numbers, or graphics. The use of words to explain probabilities tends to rely on language that appeals to people's intuition and emotions (Lipkus 2007). However, it usually lacks precision, as it tends to introduce significant ambiguity through the use of non-precise words such as "probable", "likely", "doubtful", etc. A probability is the "measure" of the likeliness that an event will occur, so it makes sense to expect a numerical value (e.g. a percentage) associated with that measure. However, in volcanology most of the time there are insufficient observational data to present probabilistic forecasts with a sufficient level of confidence. Using only numerical expressions may also fail when the audience has a low level of numeracy. The interpretation of probabilistic terms can vary greatly depending on the educational level of the recipient and on whether verbal or numerical expressions are used (Budescu et al. 2009; Spiegelhalter et al. 2011; Doyle et al. 2014a; Gigerenzer 2014). To minimise this problem, a combination of verbal uncertainty terms (e.g. "very likely") with quantitative specifications (e.g. ">90% probability") has been recommended, for example, to better understand results from the Intergovernmental Panel on Climate Change (IPCC) (Budescu et al. 2009, 2012). Climate scientists working within the IPCC have adopted a lexicon to communicate uncertainty through verbal probability expressions ranging from "very likely", "likely" and "about as likely as not" to "unlikely", "very unlikely" and "exceptionally unlikely" (e.g. IPCC 2005, 2007). The terms are assigned specific numerical meanings but are typically presented in verbal format only, so that a probability of occurrence of 1% will be communicated as "very unlikely" for that particular event, and a probability of 66% will be communicated as "likely". Similarly, anything in the range of 33–66% would be expressed as "about as likely as not".
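
A small sketch of how such a lexicon can be applied mechanically is given below. The thresholds follow the IPCC guidance cited above, but the handling of boundary values (which term wins at exactly 66% or 1%) is chosen here to match the examples in the text and should be treated as an assumption.

```python
def ipcc_verbal_term(p: float) -> str:
    """Map a probability (0-1) to an IPCC-style verbal term.

    Simplified illustration of the lexicon discussed above (IPCC 2005, 2007);
    the exact treatment of boundary values is an assumption of this sketch.
    """
    if p >= 0.90:
        return "very likely"
    if p >= 0.66:
        return "likely"
    if p >= 0.33:
        return "about as likely as not"
    if p >= 0.10:
        return "unlikely"
    if p >= 0.01:
        return "very unlikely"
    return "exceptionally unlikely"

print(ipcc_verbal_term(0.01))   # very unlikely   (the 1% example in the text)
print(ipcc_verbal_term(0.66))   # likely          (the 66% example in the text)
print(ipcc_verbal_term(0.50))   # about as likely as not
```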

Since 2011 it has become increasingly common to use graphics to represent probabilities in natural hazards (Kunz et al. 2011; Spiegelhalter et al. 2011; Stein and Geller 2012). The advantage of communicating uncertainties (or probabilities) visually is that people are increasingly prepared and trained to use and understand infographics, as an immediate consequence of the globalised use of the internet and information technology, and that a graphic can be tailored to stress the important content of the communication and adapted to the needs and capabilities of the audience (Spiegelhalter et al. 2011).

In addition to considering the way probabilities (and uncertainties) are communicated, there is a need to consider the local context of the particular society in which the volcanic crisis is occurring. "Odds" is an expression of relative probability that is well understood by many communities (e.g. through gambling and games of chance) and can also be effective for communicating volcano forecasts if it is correctly adapted for the purpose. Regulations (i.e. legal and commonly accepted norms) frequently determine the articulation of uncertainty and risk used to manage environmental and natural hazards. Finally, culture is of key importance in communication (Oliver-Smith and Hoffmann 1999; Eiser et al. 2012). The way in which risk is perceived may change depending on the cultural beliefs of each society, and in the same way the cultural diversity of societies facing a volcanic threat may mean that communication methods that work in one country or culture may not work in another. Therefore, it is important to investigate and gain an in-depth understanding of the particular cultural aspects of each society in order to define the best communication procedures and language in each case. Numerous studies also demonstrate the importance of public education, pre-crisis education programmes, and risk perception for understanding scientific communication during a crisis (e.g. Bird et al. 2009; Budescu et al. 2012; Dohaney et al. 2015). There are additional sociological and qualitative aspects to consider when communicating probabilities that are beyond the scope of this chapter, but which address issues around risk perception, trust, decision-making, and managing disasters (e.g. Kilburn 1978; Fiske 1984; Tazieff 1977; Paton et al. 1999; Chester et al. 2002; Sparks 2003; Haynes et al. 2007, 2008; Baxter et al. 2008; Solana et al. 2008; Fearnley 2013; Doyle et al. 2015).

5 What Should Be Communicated?

The key questions focus on what can be forecasted. Should a volcano forecast determine whether the volcano will erupt or not? How big or explosive will the eruption be? When? Where? What is the dimension of the problem? These are the basic questions that civil protection asks the scientists once an alert has been declared and the process of managing a volcanic crisis has started (IAVCEI 1999; McGuire et al. 2009; Aspinall 2010; Donovan et al. 2012a, b; Marzocchi et al. 2012; Sobradelo et al. 2014). Usually, scientists can answer these questions with approximations (probabilities) based on knowledge of previous cases from the same volcano or from other volcanoes with similar characteristics, knowledge of the past eruptive history of the volcano, warning signals (geophysical and geochemical monitoring), and knowledge about the significance of these warning signs. Whilst giving probabilities as the outcome of a volcano forecast may be relatively easy for the scientist (depending on the degree of information available), it may not be fully understood by the decision-maker or any other recipient of such information. It is necessary to find a clear and precise way to communicate this information between scientists and key decision-makers, to avoid misunderstandings and misinterpretations that could lead to incorrect management of the volcanic emergency and, consequently, to a disaster.

In recent years, one way to improve the communication of statistics, as well as the understanding of decision-maker needs, has been the development of exercises in which a volcanic crisis is simulated and all key players involved in risk management, such as scientists, civil protection, decision-makers, the population and the media, are invited to participate, as in a real case. Exercises have been carried out at different volcanoes such as Vesuvius (MESIMEX, Barberi and Zuccaro 2004), Campi Flegrei, Cotopaxi and Dominica (VUELCO Project, www.vuelco.com), and in New Zealand (DEVORA), among others. These simulations facilitate interaction and cooperation between the stakeholders, and the sharing and exchange of procedures, methodologies and technologies among them, including scientific communication. They present an opportunity to learn the exact role and responsibilities that each key player has in the management of a volcanic crisis, as well as to exchange concerns and feedback on specific matters.

Whilst volcanic forecasts centre on scientific data and probabilities as much as possible, scientists may also recommend safe behaviour directly to the public, providing advice that saves people's lives (e.g. going up a hill if a lahar threatens). Often this is beyond the legal requirements placed on the scientists, who are required to comment on the volcanic science only, but they may feel a moral duty to assist (Fearnley 2013). However, this should not imply or be confused with making decisions on how to manage a volcanic emergency (e.g. evacuation), as this frequently falls under the remit of civil protection (or other such government organisations), although in some countries, such as Indonesia, the scientists and the civil protection organisations work together rather than having distinct roles; it depends on the governance structures of the country.

6 When Should a Volcano Forecast Be Communicated?

Ideally, forecasts should be communicated as early as possible, and then with increasing frequency if, or when, an eruption nears. This means there should be a permanent flow of information between scientists, the vulnerable populations, and policy-makers on the eruptive characteristics of the volcano, its current state of activity, and its associated hazards, even when the volcano does not show signs of alarm. This is to aid preparation for when an emergency starts and things need to move much faster. However, in many cases scientific communication in hazard assessment and volcano forecasting is restricted to volcanic emergencies. When volcanic unrest starts and escalates, the origin of this unrest needs to be investigated to assess the level of hazard expected. Good detection and interpretation of precursors will help predict what will happen with a considerable degree of confidence. This means that scientific communication during a volcanic crisis needs to be constant and permanently updated with the arrival of each new piece of data. The longer it takes to make a decision, the greater the potential losses are likely to be, as vulnerability increases. This constitutes the main challenge in communicating forecasts and probabilities during a volcanic crisis. In essence, the relationship between reducing the uncertainty in the interpretation of the warning signs of pre-eruptive processes to acceptable (reliable) levels and the time required to make a correct decision is a function of the degree of scientific knowledge of the volcanic process and of the effectiveness of scientific communication. Therefore, scientific communication during a volcanic crisis needs to be effective from the start.

7 Conclusion

In order to improve scientific communication during a volcanic crisis it is recommended that the communication protocols and procedures used by the different volcano observatories and scientific advisory committees are compared for each level of communication: scientist-scientist, scientist-technician, scientist-Civil Protection, scientist-general public. Experience from other natural hazards helps, as do clear and effective ways to show probabilities and associated uncertainties. Although each cultural and socio-economic situation will have different communication requirements, comparing different experiences will help improve each particular communication approach, thus reducing uncertainty in communicating volcano forecasts.

Finally, it is worth mentioning that a crucial aspect in facilitating risk communication is education. This, however, is a long-term task that needs to be conducted permanently in societies threatened by natural hazards. Risk perception depends on cultural beliefs but also on whether or not a society has been educated about its natural environment and potential hazards. In the same way, scientific communication is better received and understood when the population has previous knowledge of the existence and potential impacts of natural hazards. As noted above, numerous studies demonstrate the importance of public education, pre-crisis education programmes, and risk perception for understanding scientific communication during a crisis (e.g. Bird et al. 2009; Budescu et al. 2012; Dohaney et al. 2015), and most of them agree that populations better educated about natural hazards understand risk communication better and behave in a more orderly way during a crisis. Therefore, best practices in communication should also consider improving the education of the population on natural hazards, their potential impacts and the ways to minimise the associated risks, as well as on how to behave during the implementation of emergency plans in a crisis.