Future Risks and Risk Management

Edited by Berndt Brehmer and Nils-Eric Sahlin

Kluwer Academic Publishers, Dordrecht 1994

Future Risks and Risk Management provides a broad perspective on risk: basic philosophical issues concerned with values; psychological issues, such as the perception of risk; and the factors, both technical and organizational, that generate risks in current and future technological and social systems. No other volume adopts this broad perspective. Future Risks and Risk Management will be useful in a variety of contexts, both for teaching and as a source book for the risk professional who needs to be informed of the broader issues in the field.

 


Epistemic Risk: The Significance of Knowing What One Does Not Know

Nils-Eric Sahlin and Johannes Persson

It is a well-established psychological result that it is the unknown and the unwanted in particular that scare us.1 A thinner ozone layer is felt to be a considerable risk. On a more mundane level, siting a gas tank in a suburban neighbourhood or eating fish with an unknown level of dioxin provokes a similar feeling of risk-taking.

These and similar risks differ in many ways: in magnitude, seriousness, etc. But they have one thing in common: what is seen as risky are the consequences of certain well-defined events. We feel uncomfortable because the outcomes of these events are negative and/or unpredictable. Thus, it comes as no surprise that it is outcome-risks that have been studied in the literature, and that by a theory of risk is meant a theory of outcome-risk.

Psychologists have developed theories of "risk perception" and studied how people view various kinds of outcome-risk. The classical notion of risk in economics, where risk is an intrinsic property of the utility function, is another example of a theory of outcome-risk. Similarly, there are those who have taken a statistical approach to the subject and studied the frequencies with which various events or consequences occur. And the same goes for the risk research done in anthropology, engineering, medicine, etc. However, there is a completely different type of risk, seldom discussed but equally problematic and difficult to handle: the risk of not knowing or not knowing enough.

High-level decision makers are often presented with (research) reports of some kind or another, on the basis of which a decision is to be taken. These reports may well give accurate and trustworthy information about the outcome-risks considered, but what about the things not considered, and what if the scientific and intellectual quality of the reports is not that dependable? Basing a decision, be it a high-level or a low-level decision, on scanty or inaccurate information will inevitably lead to unnecessary risk-taking: to epistemic risk-taking.

In what follows we will use a number of related case probes in order to show how lack of robust knowledge leads to a considerable epistemic risk, which in turn means that we cannot accurately monitor the outcome-risks involved.2

case probe 1: the risk of eating fish

It is a well-known fact that fish contains remarkably high doses of dioxin.3 It is mostly in fatty fish that high doses of dioxin, for example TCDD, "the most dangerous molecule that man ever created," have been found. A question that both the fish consumer and the risk analyst want answered is: How much fish can we eat without damaging our health?

An indirect way to answer this question is to establish so-called safety levels, i.e. to assess values on which recommendations can be based. The routine strategy is as follows: determine the highest dose at which no effects can be found in animals. This level could be used directly as a safety level, but for obvious reasons this is not recommended. First, man as a species can be far more sensitive than the animals used in the experiments and, second, there might well be individuals who are particularly sensitive to the substance in question. Thus, to account for these complications, the usual strategy is to divide the established value by a factor, a safety factor, commonly 100.

The highest dose of TCDD that does not show any effect on the rat is 1000 pg/kg body weight and day (i.e. one nanogram, a billionth of a gram, per kilogram of body weight per day). With a safety factor of 100 the safety level is 10 pg/kg body weight and day. However, dioxin is not the most innocuous toxic substance there is; thus, it is argued, a factor of 200 is called for. This gives us a safety level of 5 pg/kg body weight and day (or 35 pg/kg body weight and week).
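
The arithmetic of this routine is simple enough to sketch in a few lines of code; the figures are those quoted above, and the helper function is ours, purely for illustration, not an official toxicological procedure.

```python
# A minimal sketch of the safety-level routine described above.
# The NOAEL figure and the safety factors come from the text;
# the function itself is only an illustration.

def safety_level(noael, safety_factor):
    """Divide the highest no-effect dose (NOAEL) by a safety factor."""
    return noael / safety_factor

noael_tcdd = 1000.0  # pg per kg body weight and day (rat, TCDD)

print(safety_level(noael_tcdd, 100))  # 10.0 pg/kg body weight and day
print(safety_level(noael_tcdd, 200))  # 5.0 pg/kg and day, i.e. 35 pg/kg and week
```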

It is important to emphasize that as a case probe we are using the Nordic countries, and especially how the dioxin problem has been dealt with by the Swedish authorities. The strategy used, for example in the United States, is rather different. This difference in choice of scientific strategy will be discussed below.

How much fish can we eat and still stay within the limits of the safety level? The level of dioxin in salmon from the Baltic is 20-40 pg/g. This means that you can eat no more than about 100 grams of salmon a week. However, there are other fish, such as pike, of which you can eat as much as 1.5 kg a week without going beyond the acceptable weekly dose. The recommendations often vary from author to author and are for obvious reasons considerably lower for children.
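
A rough check of the salmon figure can be made under the assumption, not stated in the text, of a body weight of 70 kg, together with the weekly safety level of 35 pg/kg derived above:

```python
# Rough check of the salmon figure above. The body weight of 70 kg is
# our assumption for illustration; the text does not state the weight used.

body_weight_kg = 70.0
weekly_limit_pg = 35.0 * body_weight_kg          # 2450 pg of TCDD per week

for dioxin_pg_per_g in (20.0, 40.0):             # Baltic salmon, range from the text
    grams_per_week = weekly_limit_pg / dioxin_pg_per_g
    print(f"{dioxin_pg_per_g:.0f} pg/g -> about {grams_per_week:.0f} g of salmon per week")

# With 20-40 pg/g this gives roughly 60-120 g per week, consistent with
# the figure of about 100 grams quoted above.
```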

The safety level only indirectly answers the question: What is the risk of eating fish? What we really want to know is how high the outcome-risk is. The answer to this question depends on what relation we think there is between dose and response, between level of consumption and negative effects on our health. Assuming a threshold-model, i.e. that there is a level below which there are no unwholesome secondary effects, and that for dioxin 35 pg/kg body weight and week is below this threshold, the risk is nil. However, if there is, say, a linear relation between dose and response, the risk is not nil. The risk is probably very low, but it is not nil, and health effects then remain proportional to levels of consumption.
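
The difference between the two assumptions can be made concrete with a schematic sketch; the threshold figure is the one quoted above, while the slope k is a purely illustrative number, not an estimate of the real potency of TCDD.

```python
# Schematic contrast between the two dose-response models discussed above.
# The threshold (35 pg/kg body weight and week) comes from the text; the
# slope k is invented purely for illustration.

def risk_threshold(weekly_dose, threshold=35.0, k=1e-6):
    """Threshold-model: the risk is nil below the threshold."""
    return 0.0 if weekly_dose <= threshold else k * (weekly_dose - threshold)

def risk_linear(weekly_dose, k=1e-6):
    """Linear model: risk proportional to dose, nil only at dose zero."""
    return k * weekly_dose

# Below the assumed threshold the two models disagree about whether any
# risk remains at all.
print(risk_threshold(30.0))  # 0.0
print(risk_linear(30.0))     # small, but not 0.0
```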

The risk of eating fish is an interesting case probe and a good exercise in risk analysis. It shows how safety levels are determined and how outcome-risks are calculated. But, more importantly, it demonstrates how flaws in the robustness of our knowledge, in our judgment base, present a special form of risk: an epistemic risk. We know very little about what we know and, more importantly, we do not know what we do not know. To make decisions or to give recommendations on the basis of a brittle state of knowledge means that one exposes oneself to an epistemic risk.

What then are the deficiencies in our knowledge of the risk of eating fish and how does this inadequacy affect our judgements and decisions, e.g. the evaluation of various outcome-risks?

First, it should be noted that in setting a safety level of 35 pg/kg body weight and week, the decision was based on a factor of 200 instead of the common safety factor of 100. The reason for doing this is obviously that several important factors are unknown. To counterbalance this lack of robust knowledge it was decided to double the safety factor. But why 200? Why not 300, 400 or 500? What are the scientific arguments behind doubling the safety factor? It is important that we do not fall prey to the magic of numbers. Assume, for example, that for a particular substance we have found that the highest dose that does not give any detectable effects is 1000 mg/kg body weight and day. A safety factor of 100 would result in a safety level of 10 mg/kg body weight and day; a factor of 200 in a safety level of 5 mg/kg body weight and day. Is 200 to be considered a reasonable safety factor in this case too, for such a high dose? There is a considerable difference between going from 10 mg to 5 mg and going from 10 picograms to 5 picograms.

Second, one notes that there is a significant difference in how sensitive various species are to TCDD. The hamster is, for example, 10 000 times less sensitive than the guinea-pig (is man a hamster or a guinea-pig?). In view of this, the choice of a factor of 200 appears even more of a puzzle and an arbitrary decision.

Third, one observes that there is a considerable difference between the safety level established by the Nordic countries, 5 pg/kg body weight and day, and the corresponding value established by the EPA (the American Environmental Protection Agency), 0.0064 pg/kg body weight and day. This difference in risk assessment is due to the fact that the American organization assumes a linear model (that there is a linear relation between dose and response), whereas the Nordic authorities have assumed a threshold-model, i.e. assumed that below a given level there are no negative health effects. In this context it is important to understand that which model we choose is a question of scientific strategy and that our choice will inevitably affect our judgment of the outcome-risks. If TCDD is not a complete carcinogen, i.e. if it cannot by itself cause cancer and if it does not, as the most well-known cancer-causing substances do, affect hereditary factors, then one can argue for the Nordic way of reasoning (even if the safety level could be disputed). But we do not know that TCDD functions merely as a promoter; there are experiments and studies that indicate the opposite. At present we also have rather scanty knowledge of the effects of TCDD on our immune system. Potential synergistic effects are another worrying factor worth considering.

A considerable lack of robust knowledge, and the fact that this state of ignorance is treated differently in different countries, make the risk of eating fish an unusually interesting case probe. The EPA evidently argues that in situations where our knowledge is brittle, and where we are in effect experimenting with public health, we have to be particularly careful. The Nordic countries, on the other hand, have a different strategy. The threshold-model emphasizes the outcome-risk: if the safety level is below the critical threshold the risk is acceptable. Very few people, if any, are expected to get cancer or any other illness from being exposed to dioxin. But the choice of a threshold-model means that the robustness of our knowledge is overestimated. A higher safety factor is intended to compensate for this lack of knowledge, but it can never do so as efficiently as if a linear model had been used.

Thus there are several good reasons for taking the lack of knowledge into serious consideration and for pondering the epistemic risks that it might lead to.

case probe 2: the risk of genetic engineering

Modern biotechnology brings in its train a number of risks, outcome-risks as well as epistemic risks. The consumer of fruit and vegetables often flinches at what look like unreasonable prices. There thus seems to be a demand for more efficient, price-cutting cultivation, e.g. by minimizing the costs of damage caused by pests. Today modern biotechnology can, within certain limits, produce crops with a built-in biological insect resistance, i.e. provide a biological control of pests. The following example is archetypical of this kind of research (so-called recombinant DNA technology).4

A bacterium, Bacillus thuringiensis, produces a so-called δ-endotoxin which is highly toxic to pests, notably caterpillars. B.t. is a spore-forming bacterium and is characterized by parasporal crystalline protein inclusions. What happens is that the crystalline inclusions dissolve in the insect midgut, where the released proteins are activated, i.e. become toxic polypeptides. Spore preparations containing this protein and an exotoxin have been used as commercial bioinsecticides and as such have been far more effective than other pesticides. But the distinct B.t. toxin genes can be, and have been, isolated and sequenced. Thus, if the part of the gene that codes for the endotoxin is coupled to a constitutive promoter and this new construct is built into, for example, tobacco or tomato plants, these plants become toxic to various types of pests. The toxin produced by the transferred gene is stored in very low concentrations in the leaves. Though the concentration of toxin in the leaves is low, it is quite enough to be toxic to, for example, caterpillars, which once poisoned stop feeding and eventually die.

Not much imagination is needed to see the advantages of this technique in general, and of tomatoes containing thuringiensis in particular: we get better quality at a lower price. The negative consequences are, however, not that readily identifiable or simple to evaluate. What are the ecological effects? Is the toxin harmless to humans? The difficulties we have in assessing the outcome-risks simply reflect the fact that we are dealing with a problem where there is a considerable lack of robust knowledge.

Questions we need to know more about, before a robust risk assessment can be made, are:

How does the toxin affect human beings? Assume that in the near future we can buy tomatoes containing thuringiensis in any supermarket. A fair guess is in fact that if such tomatoes are made available to the consumer they will take over the market. However, our present state of knowledge does not rule out the possibility that some human beings are highly sensitive to the toxin. It should also be noted that there are a large number of distinct B.t. toxin genes, so a complex range of toxins can be developed. Nobody knows exactly how dangerous this group of toxins is. It is believed that the δ-toxin has an effect only on certain types of insect. But there are no major experiments that show this, only indirect evidence ("In the past there have been no reports of …"). Furthermore, any robust knowledge of likely synergistic effects is wanting.

What are the ecological effects? Improving a plant's resistance to insects will inevitably have an effect on the local ecological system. The insects that are eradicated were previously food for birds and for other insects. These animals will now have a shortage of food. Eliminating a species also means that room is made for other species to take over its niche in the system. It is impossible to predict the outcome of this kind of ecological experiment. In particular cases the aftermath of a biotechnological encroachment may be rather insignificant, but there may also be immense consequences. What we do know is that similar experiments in the past, although on a far larger scale, have not turned out well.

At the beginning of this century settlers brought clover with them to Australia. The aim was to establish better pasturelands. But the clover spread into the forest and had a sweeping effect on the flora and the soil there. Among other things this meant that species introduced by the settlers could invade the area, and today they dominate the ground-level flora.

Another thing we do not know is whether plants with improved resistance to pests, or plants that have been changed in some other way, can establish themselves as weeds, and to what extent their higher resistance to attacks has unwanted consequences. Thus there are several things we do not know about the effects on the environment of an uncontrolled spread of this type of transgenic plant. It cannot, for example, be ruled out that genes are spread via pollen between cross-fertilizable populations.

If we confine ourselves to tomatoes cultivated in greenhouses, the risks are probably manageable. But if we look at the risks of biotechnology at large, then the answers to these and similar questions are of vital import.

The epistemic uncertainty is considerable, both regarding the potential spread of various genes, and also regarding the way these genes affect the ecological system. The price of this epistemic uncertainty is a none too robust risk analysis; it is rather difficult to identify and evaluate the outcome-risks.

The chosen example of tomatoes containing thuringiensis has, of course, its limitations as an example of the potential risks of biotechnology. But there are numerous other, far more spectacular examples that bring the problem to a head and, if necessary, will do the prompting. What all these examples show, however, is that the most serious problem in evaluating the risks of modern biotechnology is monitoring the epistemic risks.

case probe 3: the risk of using container seedlings

Finding a way to plant and grow Swedish pine and spruce at a low cost has been of considerable interest since the end of World War Two. Wood prices have dropped and, at least in the north of Sweden, planting is not a good economic investment: it will be up to 130 years before what was planted can be felled, and the rate of interest is low.5 But, fortunately for forest lovers, areas must, by law, be planted within four years after felling. And they must be replanted if the result is unsatisfactory. So the situation is clear: a cheap planting method that guarantees a satisfactory result is what we desire.

Traditional methods like sowing or using bareroot seedlings turned out to be difficult to rationalise to the desired degree. Research began in order to find an alternative. In the sixties containers were introduced in forestry. In the seventies most companies used them. And they still do. The propensity to favour this planting system was, besides the economic advantages, probably increased by the introduction of a new tree, the contorta pine, which seemed to be difficult to sow directly.

Of the early types the paper pot was one of the most frequently used. This container was designed so that the roots would be able to penetrate the walls and/or the walls would break down after some time in the soil. But the state of knowledge was too brittle. Moved north and high above sea level, the system did not function well enough. Reports, particularly from Finland and Norrland, began to tell of root spin. In a recent article in a Swedish journal of forestry, root spin is called "the nightmare of Norrland."6 Roots that did not find their way out continued to grow, in circles. And many doubt that these trees will ever reach the age of 130.

Whether or not root spin can cause the death of the tree is not known. Some researchers say that the tree might be strangled when the roots have grown sufficiently, and/or that in the end the roots stop functioning. Others say that it is probably only the stability of the tree that is affected by the spin. And as far as we can see the latter view is the "official" standpoint. "Nobody has proved anything else …"7

The stability problem is important enough, since an unstable tree often becomes curved. That the spin-effect should be eliminated is something all seem to agree upon, so other types of containers have by now replaced the earlier ones. Thus, the lack, or very brittle state, of the knowledge we have about what, if anything, is dangerous about root deformation may seem unimportant.

But it is not. The decision to get rid of root spin was not a decision to get rid of all kinds of root deformation. What has been in focus since the early errors has been plant stability and root spin effects. And the new types of containers do quite well in these respects. Nowadays the containers have vertical guiding ridges on the walls, and the containers are removed before planting. The cost of this acquisition of knowledge has of course been considerable. And the cost of replanting large areas that have failed is also considerable: a couple of billion paper pots had been planted. But the state of knowledge is still brittle. We have not answered the interesting question of how we can grow cheap seedlings that give a satisfactory result just by answering the question of how we can grow cheap seedlings without root spin. The possibility of a new serious error has not been ruled out. And one of the reasons is that the question of what happens to trees with root deformations has not been satisfactorily answered.

The new seedlings have deformations too. There is no more spin, but there are often two or more roots partly growing into one, with bark remaining inside, to mention one thing that may cause trouble with the new solutions. Surely this was an effect of the paper pot too, but one that probably looked quite innocent compared with the rest. But maybe this is, and will be, the real cause of death of container seedlings. Maybe it is because of this that older container seedlings are so easy to break off at the root. Or maybe this is the link between planted trees and the fluctuation in recent years of the often deadly fungus Gremmeniella. Since we really have no good scientific investigations concerning this and many other questions, we are not in a good position to evaluate the outcome-risks for the new containers. And in the meantime things are getting worse: it is said that only naturally regenerated trees give high-quality timber, and the pulp industry has almost stopped using bleaching agents, which means that trees must not be too curved if they are to be usable for paper. What was to be an economically wise decision may turn out to be an economic disaster, and the reason is that the decision was, and still is, founded on too insecure an epistemic ground.

There is no container that gives a root system resembling that of naturally regenerated trees, i.e. the only kind of seedlings that have lived long enough for us to be sure that they have a functioning root system until they are old enough for felling. Until we have container trees that old there will be brittleness in the state of our knowledge. There will be considerable brittleness even after that. Today, the new hope of tomorrow has lived for about ten years, if we do not count the newest "low-risk" container, which will be introduced full scale this year on the basis of one year of research and engineering. 500 million seedlings are produced every year in Sweden. In Norrland almost all the seedlings are of the container type; in southern Sweden half of them are. By means of trial and error we have at least achieved quite a low proportion of one monster in our new plantings, the root spin devil. Is it enough?

factors producing epistemic risks

A reliable risk analysis demands a careful scrutiny of the present epistemic state. It will not suffice to identify and evaluate the outcome-risks; an estimation and evaluation of the prevailing state of ignorance is also needed. But what types of ignorance can encroach on the risk analysis and thus affect the level of epistemic risk we take?

The unreliable research process. One thing these case probes show is that risk evaluation demands not only a study of the underlying scientific results but also an evaluation of the processes of which they are the outcomes. Research is a dynamic process. This process sometimes results in tangible proposals about how various things are related. But such results cannot be accepted as a basis for risk analysis and evaluation without thorough scrutiny. They are products, and their quality relies on the standard of the scientific machinery, on how well the dynamic process is functioning. Looking solely at the results means that we leave epistemic risks out of consideration.

It is obvious that we have reason to believe the results of a scientific study that is carefully designed and carried out, and that if the research process shows signs of considerable deficiencies the value of the results is almost nil. However, even results that are the product of high-quality studies can have little or no value, because what has been studied is only a few aspects of a highly complex problem. It may be, for instance, that avoiding root spin effects on planted trees is only one aspect of the complex problem one is really working on, in this case finding a method that yields high and efficient wood production. In practice, there seems to be an inverse relation between how well a study is designed and carried out and how much it can say about a complex problem. What follows, we believe, is an illuminating example of how a risk assessment can be distorted when it is based on poor knowledge.

A few years ago, in an article in a major Swedish daily newspaper, two Swedish professors of medicine attacked those who argued that visual display units (VDUs) may be a potential occupational hazard, i.e. that electric fields may cause, for example, skin problems. With no lack of self-assurance they dismissed all such risks. Emphasizing the correctness of their analysis, they directed those who believed that their illness was caused by a VDU to psychologists and psychiatrists.

This short article is interesting for a number of reasons. It exemplifies what psychologists have found in their studies: people are afraid of what they cannot see and of what is out of their control. Electromagnetic fields cannot be detected with the eye, and whether or not one works at a VDU is generally not a matter of one's own free choice. The article also shows the incapacity professionals often have for understanding how people perceive and feel about risk. But above all the article gives us an archetype of how easy it is to make unwarranted and categorical risk assessments despite the fact that we lack robust knowledge of the actual risks. The two physicians supported their position by referring to a single research report. They argued that this report conclusively showed that VDUs do not cause occupational dermatitis. But this is a misinterpretation of the research report and a reading with which the authors of the report would not agree. The researchers who conducted the experiment and wrote the report had only studied a rather restricted bioelectric hypothesis, not the complicated causal connections expected to be the mechanism behind this occupational hazard. It is also obvious that the report itself is not beyond criticism, for one thing because it is based on subjects who themselves believed that their health problems were caused by their work at VDUs. Are (30) subjects who themselves made the diagnosis of their problems really a sufficiently representative group for far-reaching conclusions to be drawn? Most puzzling, however, is how serious scientists can, on the basis of one qualitatively weak research report, make such an unrestricted risk analysis, an analysis that has obviously had an impact on the willingness to pay attention to the problem in Sweden.

It is no doubt bad method to draw far-reaching conclusions from a single research report, but mere quantity does not necessarily guarantee a more robust risk assessment, i.e. that the epistemic risk decreases. Research reports are not seldom replications of, or variations on, a common theme. Thus, if the investigated hypothesis has a narrow scope, while the answers we seek probably have to do with complicated causal connections, mere quantity does not necessarily diminish the unreliability of our knowledge. It goes without saying that quantity does not have a positive effect on epistemic risk either, if a poor method has been used in each and every one of the studies.

The ignorant expert. A particular problem in connection with risk assessments is what one might call the ignorant expert, i.e. a person who, in the light of his achieved expertise in one field, is prepared to make unrestricted statements about things of which he has little or no knowledge at all. In view of the complexity of risk research it is important that questions of risk assessment are seen as a multi-disciplinary problem. The competence needed to predict the effects that a particular substance can have on human beings is rather different from the competence needed to say how the same substance will affect the environment, or to say how people in general evaluate the negative consequences that might result from its approval, or to make a full risk analysis. Unnecessary epistemic risks are avoided if this problem of competence is taken into consideration.

The cyclopean perspective. Our search for knowledge can now and then make us one-eyed. Psychological research has unambiguously shown that people are unwilling to seek information that disproves the guesses or hypotheses they hold. Man seems to have a pronounced wish to confirm or verify his hypotheses.8 If our a priori belief is that something is innocuous, we will, without reflecting on it, seek evidence in favour of this thesis. It would be naive to think that researchers are trained to avoid this type of methodological error. Psychological research clearly shows that this is not the case.

More often than not one sees statements declaring that man as a decision maker is irrational, irrational in the sense that he does not obey the recommendations of the traditional (normative) theory of decision making. This canon was solidly established in an article by Daniel Kahneman and Amos Tversky, published in Science in 1981.9 Are we irrational? And does the psychological literature show that we are? Between 1972 and 1981 (when the article was published) 84 articles studying people's choices in the light of a normative model were published. 37 of these articles reported that the subjects did well compared with the normative theory, and 47 that their performance was not that good. Despite the fact that the distribution between results supporting the irrationality hypothesis and results speaking against it is fairly even, it is the results verifying it that are most often quoted in the literature (27.8 as against 4.7 times on average during the period). Lola Lopes, who made this comparative study, points out that there is no difference to be found in the quality of the journals in which the articles were published.10

Let us therefore assume that the research had to do with the occupational risks of a known substance, rather than with the rationality or irrationality of human beings. Being one-eyed would then mean that evidence in favour of the hypothesis is overstated, despite the fact that there are as many studies that speak against it as in its favour. If we believe that a substance is innocuous, this belief will influence our search for new information. It may in fact result in our consciously or unconsciously discounting evidence that indicates that the substance is hazardous (for example, discounting all the people whose problems point to an occupational dermatitis).

Unrealizable research. Another factor that can influence the epistemic risk is that we might get caught in situations where it is, for moral or practical reasons, difficult to carry out controlled experimental studies. In studying risks or hazards one is, in many cases, dealing with things that might negatively affect people's health, or even cause their death. This is true whether we are interested in the hazards of modern biotechnology, in how dioxin affects the human organism, or in the danger of riding a motorbike. It is unethical, for example, to expose a representative group of Swedish citizens to a voluntary or involuntary biotechnological experiment. For the very same reason rats, and not human beings, were used when the acceptable level of dioxin was established. The ethics we subscribe to set limits to the scientific methods that can be employed and thus restrict the type of knowledge that is available. One consequence is that in some cases we can only achieve indirect knowledge of a particular phenomenon, e.g. through epidemiological studies or rat experiments. There is no question about it, the moral and ethical arguments carry weight; sometimes we have to forgo direct knowledge. But at the same time we have to be aware that this means we take a higher epistemic risk, i.e. that for purely ethical reasons we accept a degree of ignorance.

The built-in limitations of our scientific methods can also produce epistemic risks. In some cases, for purely practical reasons, it is impossible to carry out an experiment of the kind that is needed if the results are to be given a satisfactory statistical analysis. How, for example, do very small doses of ionizing radiation affect the human being? Say that the levels we are interested in are just above the background radiation. Statistically secured information about this risk demands that an "experiment" be conducted with a very large population, in fact so large that the experiment is, in practice, unrealizable. In this and in many similar cases (e.g. dioxin) we have to rely on indirect information acquired through comparative studies (e.g. epidemiological studies of people living at different altitudes above sea level) or on various unintentional "experiments" (e.g. the "Chernobyl experiment").

We thus note that regardless of whether it is the available scientific methods or our moral commitments that set the limits for our research, they will have an influence on the robustness of our knowledge and thus on the epistemic risk we take.

The choice of theory or model. Another factor that might increase the epistemic risk is the choice of the model or theory on which the risk analysis is based. We noted that the answer to the question "How dangerous is it to eat fish containing small amounts of dioxin?" depends on whether we believe that a threshold-model or a linear model best represents the relation between dose and response. We saw that the Nordic countries used a threshold-model in establishing the safety level, and thus that the risk is zero if consumption is below this threshold. But do we know that the threshold-model is the correct (true) model? There is no final evidence, nor can there ever be any, for the correctness of the threshold-hypothesis. Our model for describing the relation between dose and response is a set of more or less well-established theoretical assumptions, assumptions that, if the evidence is weak, will lead to a high epistemic risk.

Epistemic risks and the perspective of time. It is important to have well-designed experiments if we desire a low epistemic risk. Sometimes such experiments can be conducted within a quite small span of time. Sometimes they cannot. Reforestation must be one of the best examples of an area in which the time factor makes it difficult to monitor the epistemic risk of experimentation. Is it possible to conduct experiments that take 130 years? Probably not; and if not, what can be done? One alternative is to design experiments that follow the growth and survival of plants for about twenty years, which is up to an age where history has told us that the effects of root deformation become clearly visible (and sometimes lethal). Another alternative is to conduct short experiments, say with a time-span of a few years, and then carefully examine the root system. If this is to be a feasible way of diminishing the epistemic risk there has to be considerable knowledge about the further root development, and knowledge about how this developed root will affect the tree. This knowledge might be gained from a combination of physiological research and field experiments on mature trees.

But what is then needed are rather full-fledged biological theories, providing causal explanations of how things work and thus giving us the tools to sidestep unrealizable experimental situations. But such a theory may or may not be supported by independent evidence. If it is not, the theory is just as brittle as the knowledge acquired through insufficient experiments.

the management of epistemic risk

Case probes of the type examined above fairly conclusively show that in situations of considerable epistemic risk, the traditional tools of decision and risk analysis simply fall short.11

Among other things, the traditional theory asks us, first, to identify the acts, states and consequences of the decision situation. An act is a possible choice alternative, a function from states to consequences. Rarely do we have total command of the factors, the states of the world, that influence the outcomes, the consequences, of our acts. Second, it asks us to evaluate the possible outcomes and to estimate the likelihood of the possible states of the world. The values of the consequences are determined by a utility measure, and our beliefs about the possible states of the world are represented by a unique probability measure defined over the states. Finally, we are asked to act so that we maximize expected utility: we should choose the act with maximal expected utility.12
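
As a minimal sketch of this recipe, with all numbers invented purely for illustration, the traditional rule can be written out as follows:

```python
# A minimal sketch of the traditional (Bayesian) recipe: one unique
# probability measure over the states, one utility per consequence,
# choose the act with maximal expected utility. All numbers are invented.

probability = {"s1": 0.7, "s2": 0.3}      # the unique probability measure

# An act is a function from states to consequences; here each act is
# represented directly by the utility of its consequence in each state.
acts = {
    "a1": {"s1": 10.0, "s2": -5.0},
    "a2": {"s1": 4.0, "s2": 3.0},
}

def expected_utility(act, p):
    return sum(p[s] * act[s] for s in p)

best = max(acts, key=lambda a: expected_utility(acts[a], probability))
print(best, expected_utility(acts[best], probability))   # a1, 5.5
```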

Facing hard choices,13 to use Isaac Levi's phrase, means that the decision situation is far from transparent. A high epistemic risk means that it is hard to identify the possible states of the world, and even harder to see the possible consequences of our acts. Case probe 2, the risk of genetic engineering, illustrates this point.

A fundamental assumption of traditional decision theory, in its Bayesian disguise, is that the decision maker's beliefs can be represented by a unique probability measure. The best-known argument in favour of this assumption is the so-called Dutch book theorem. However, as the three case probes above reveal, there are strong arguments against it. The three examples show that there are situations in which the reliability of the information influences the decision. We choose differently depending on the type and strength of the information available. There is a clear difference between situations where we have scanty information about the events involved, for example when we do not know whether TCDD is a complete carcinogen or simply a promoter, or when we lack the information needed to foresee the consequences of an ecological experiment caused by a transgenic organism, and situations where we have almost complete knowledge of the random processes involved, as when we buy a lottery ticket.

Thus, if we want a better tool for risk analysis we have to look for a more general representation of our beliefs and knowledge, one that handles the epistemic risk. On a technical level this can be accomplished in many ways.14 The best alternative, however, is to represent our knowledge and beliefs not by a unique probability measure but by a class of such measures. This allows us to represent, to a degree, the epistemic risk of a decision situation. The risk of genetic engineering or the risk of eating fish is not so much a question of the consequences of our decisions as of the instability of our knowledge. We do not have enough information to do what the traditional theory demands. It is, in this context, important to point out that this set of probability measures does not have to be a convex set. On the contrary, from an epistemological point of view it seems very reasonable that in situations of high epistemic risk the information available is such that a convex set of probability measures would misrepresent the knowledge we have.

However, it is questionable whether all aspects of epistemic risk can be mirrored by a set of probability measures. Scanty, inaccurate or imprecise information leads to indeterminacy, but this indeterminacy is not uniform. The information we have points in different directions; some pieces of it concur, other pieces conflict. To capture this aspect of epistemic risk it seems reasonable to introduce a second-order measure, a measure defined over the set of first-order probability measures.15 We then have a far more general representation of the knowledge and beliefs of a decision maker and an excellent tool for analysing not only the outcome-risks but also the epistemic risks of a hard choice.

When taking a calculated decision, a higher-order measure helps us to monitor the epistemic risk we take. Through it we can select a larger or smaller subset of the set of epistemically possible first-order assessments, i.e. measures that do not contradict our knowledge, and use this selection as a basis for action.

But decision theories, and theories of risk analysis, based on a more general representation of the decision maker's knowledge and beliefs have to face up to a difficulty. Theories based on, for example, interval probabilities or sets of probability measures cannot simply employ the traditional decision rule of maximizing expected utility. The reason is that for each point in a probability interval, or for each probability measure in a set of such measures, we can calculate an expected utility value. Thus, each action alternative open to the agent will be associated with a set of such values, which cannot be "maximized" in the traditional way. To solve this problem, new decision rules have to be developed.

Isaac Levi advocates a lexicographical set of rules for reaching a decision in situations with "indeterminate" probabilities.16 Levi assumes that the decision maker's information about the states of nature is contained in a convex set of probability measures, the set of "permissible" measures. The first step in Levi's decision procedure is to determine the actions that are E-admissible. An action is E-admissible if and only if there is some permissible probability measure such that the expected utility of the choice relative to this measure is maximal among all available choices. Second, a choice is said to be safety optimal if and only if the minimum utility value assigned to some possible consequence of that choice is at least as great as the minimum utility value assigned to any other E-admissible alternative.
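
These two steps can be sketched with invented numbers (two permissible measures, three acts); the sketch below is only an illustration of the procedure just described, not Levi's own formalism.

```python
# Sketch of the first two steps of Levi's procedure, with invented numbers.

permissible = [{"s1": 0.2, "s2": 0.8},    # the set of permissible measures
               {"s1": 0.8, "s2": 0.2}]

acts = {
    "a1": {"s1": 10.0, "s2": -5.0},
    "a2": {"s1": 4.0, "s2": 3.0},
    "a3": {"s1": 0.0, "s2": 0.0},
}

def eu(act, p):
    return sum(p[s] * act[s] for s in p)

# Step 1: E-admissibility. Keep the acts that maximize expected utility
# relative to at least one permissible measure.
e_admissible = [a for a in acts
                if any(eu(acts[a], p) >= max(eu(acts[b], p) for b in acts)
                       for p in permissible)]

# Step 2: among the E-admissible acts, keep those whose worst possible
# utility is maximal (the safety/security consideration).
best_security = max(min(acts[a].values()) for a in e_admissible)
choice = [a for a in e_admissible if min(acts[a].values()) == best_security]

print(e_admissible, choice)   # ['a1', 'a2'] ['a2']
```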

Gärdenfors & Sahlin assume that the agent’s knowledge and beliefs about the states can be represented by a set of probability measures, the set of epistemically possible measures.17 This set of measures is restricted by way of a second order measure of epistemic reliability. As a basis for action it is argued that the agent uses those and only those measures that have an acceptable degree of epistemic reliability. The theory suggests a two-step decision rule. First, the expected utility of each choice and each probability measure that meets the requirement of epistemic reliability is calculated and the minimal utility of each alternative is determined. Second, the choice alternative with the largest minimal expected utility is selected. This is the principle of maximizing the minimal expected utility (MMEU).

It is easily shown18 that there are decision problems in which these decision rules give totally different recommendations. There is no simple way to select one of the rules in preference to the other. Levi's rule does not satisfy the well-established condition of independence of irrelevant alternatives, which in its simplest form demands that if an alternative is not optimal in a decision situation it cannot be made optimal by adding new alternatives to the situation (i.e. Levi gives up ordering), a criterion that the MMEU principle does satisfy. On the other hand, Seidenfeld19 has shown that MMEU is open to serious objections: violating the independence axiom means that non-optimal decisions are taken in certain dynamic decision situations.

Even if these alternative theories have blemishes, it is obvious that they are preferable to the traditional methods of risk and decision analysis. The choice is in fact between doing nothing, i.e. pretending that there is no epistemic risk or that all situations carry the same epistemic risk, and trying to deal seriously with the problem of the robustness of knowledge.20 And the theories outlined above allow us to handle the outcome-risks as well as the epistemic risks of decision problems of the type discussed in the three case probes.

epistemic risks and ethics

According to a widely embraced theory of ethics, it is an act’s consequences that count. If the consequences are good, then the act is right. This type of theory shares many of its virtues and defects with traditional theories of rational decision making. How are the consequences to be evaluated? What guidance can the theory give us when we want to do something that is morally right? The answers given by a consequentialist theory to questions like these are very similar to the type of answers found within the framework of rational decision making: the values are given by the preferences of an individual or a group of individuals, and in complex situations it is the ”expected consequences” that count.

Our view is that theories that deal with moral decisions from a consequentialist point of view must also take epistemic risks into account. It is in fact possible to construct situations which are identical in all the respects relevant to the consequentialist, but nevertheless, from a moral point of view, lead to different judgments. A simple example shows this: Person A returns from a long visit abroad. He finds his mother in terrible pain. He comes to believe that she must live with this pain for the rest of her life. Since he believes that it is better for her to be dead than to have a life of suffering, and since he knows that he himself would not be able to cope with knowing that she is in perpetual pain, he kills her.

Person B comes home from work and finds, as he does almost every day, his mother in agonizing pain. He has consulted the available specialists (including several quacks) and, independently of each other, they have reached the same conclusion: with a probability bordering on certainty, his mother must live in terrible pain for the rest of her life. His mother makes it clear that she prefers death to constant pain. He is also quite sure that he cannot cope with it. Therefore, he kills her.

But post facto we find out that the mother of A simply had a dreadful hangover, and that B's mother would in fact have recovered, despite what the doctors' examinations had shown. Knowing this, the following question demands an answer: no matter whether we approve of either of the two acts, is it not obvious that B's act is morally better than A's? Most of us, we think, would answer this question in the affirmative. B's act is not reprehensible, but A's is. Why? The consequences of the two acts are identical; both A and B killed his mother to free her from suffering, when in fact she would not have suffered. The only thing that differs between the two cases is thus that A's act (and decision) is based on a far more insecure epistemic ground than B's. Although this is the only difference, it is an important difference, and one that has been overlooked by those who look only to the consequences of our acts. If we want to act morally, we have to monitor the epistemic risk we take.

epitome

To summarize what we have said: outcome-risks are but one important factor in a complete risk analysis. Important though it may be to analyse and assess the consequences of our actions, it is equally important to analyse and assess the stability of our knowledge. An assessment of the stability of knowledge demands a thorough, or meticulous, logical analysis of the decision problem, rather than an empirical study of the outcome-risks. Not taking the instability of knowledge seriously may well lead to decisions that lack moral robustness.

Let us end by quoting Socrates, who in his Apology makes clear what epistemic risk analysis is all about:

Probably neither of us knows anything really worth knowing: but whereas this man imagines he knows, without really knowing, I, knowing nothing, do not even suppose I know. On this one point, at any rate, I appear to be a little wiser than he, because I do not even think I know things about which I know nothing.21

Notes

1 See Paul Slovic’s article, Perception of risk: paradox and challenge, in this volume for a discussion and further references.

2 For a discussion of the notion of epistemic risk see Sahlin, N.-E. ”On second order probabilities and the notion of epistemic risk,” in Foundations of utility and risk theory with applications, ed. by Stigum, B. P. and Wenstøp F. (Dordrecht: Reidel, 1985) pp. 95-104, and ”On epistemic risk and outcome risk in criminal cases,” in In so many Words, ed. by Lindström S. and Rabinowicz, W. (Uppsala, 1989), pp. 176-186, and ”On higher order beliefs,” in Philosophy of probability, ed. by Dubucs, J.-P. (Boston: Kluwer, 1994), pp. 13-34.

3 See the essays by, for example, U. G. Ahlborg, and C. Rappe in the anthology Dioxinet in på livet. Källa/34. Forskningsrådsnämnden (1989).

4 See Fagerström, T., et al., Ekologiska risker med spridning av transgena organismer (Solna: Naturvårdsverket, Rapport 3689, 1990) and Rask, L., et al., ”Rekombinant-DNA-teknik i resistensforskning och resistensförädling,” Sveriges Utsädesförenings Tidskrift, 98 (1988). See also Feitelson, J. S., Payne, J. and Kim, L., ”Bacillus thuringiensis: Insects and Beyond,” Bio/Technology, 10 (1992), pp. 271-275 for a detailed discussion and further references.

5 Söderström, V., Ekonomisk skogsproduktion. Volume 2. (Stockholm: LTs förlag, 1979)

6 Alriksson, B.-Å., ”Rotsnurr och instabilitet: Norrlands mardröm,” Skogen, 10 (1992), pp. 18-19.

7 See Bergman, F., ”Några faktorer av betydelse vid skogsplantering med rotade plantor,” Sveriges Skogsvårdsförbunds Tidskrift, 6 (1973), pp. 565-578, Jansson, K.-Å., En orienterande studie av rotade tallplantor avseende rotdeformation (Stockholm: Skogshögskolan, 1971) and Lindström, A., Försök med olika behållartyper, Sveriges lantbruksuniversitet, Institutionen för skogsproduktion, stencil no. 52 (1989).

8 But there are those who interpret the psychological results differently; see, for example, Evans, J., et al., Bias in human reasoning: Causes and consequences (Hillsdale: Erlbaum, 1989).

9 ”The framing of decisions and the psychology of choice,” Science, 211 (1981), pp. 453-458.

10 See Lopes, L. L., ”The rhetoric of irrationality,” Theory & Psychology, 1 (1991), pp. 65-82. It may be argued that although journal quality may be equal for articles indicating rationality and articles indicating irrationality, the quality of the studies themselves may still differ.

11 The traditional view is outlined in, for example, Fischhoff, B., et al., Acceptable risk (Cambridge: Cambridge University Press, 1981) and Raiffa, H., Decision analysis: Introductory lectures on choices under uncertainty (Reading: Addison-Wesley, 1968).

12 See Gärdenfors, P. and Sahlin, N.-E., eds., Decision, probability, and utility: Selected readings. (Cambridge: Cambridge University Press, 1988) for a detailed discussion of this type of theory.

13 See Levi, I., Hard choices (Cambridge: Cambridge University Press, 1986). But also the appendix, ”A brief sermon on assessing accident risks in U.S. commercial nuclear power plants,” of Levi’s The enterprise of knowledge: An essay on knowledge, credal probability, and chance (Cambridge, Mass: The MIT Press, 1980).

14 See Sahlin, N.-E., ”On higher order beliefs,” in Philosophy of Probability, ed. by J.-P. Dubucs (Boston: Kluwer, 1994), pp. 13-34.

15 It is often said that Leonard Savage in The foundations of statistics showed that higher order probabilities lead to a trivialization; they are reducible. Savage discusses higher order probabilities in his book, but what he argues against is higher order personalistic probabilities.
Savage has two arguments: First, if the probability qua basis for action (first order probability) appears uncertain then one should employ a weighted average with second (or higher) order probabilities as weights to obtain a new point estimate, where the latter estimate then expresses all uncertainty of relevance in the situation. Second, that once second order probabilities are introduced, an infinite regress thwarts any attempt to draw practical conclusions from higher order probability assessments. Savage’s two arguments are valid, if the same interpretation is given to each level in the infinite (or finite) hierarchy, and if each level is represented by a probability measure. This must be what Savage has in mind. But the infinite regress argument is not valid if we assume that the various levels of the hierarchy are given distinct interpretations. It is also easy to note that Savage’s first argument does not hold if the first order measure is a traditional probability measure while the second order measure is given the properties of a so-called Shackle-like belief measure, i.e. a measure with properties violating the conditions on probability measures.
Both of Savage’s arguments are hence easily taken care of. It is thus obvious that those who have not grasped the fact that higher order beliefs add to our comprehension of judgmental and decision processes have far too readily accepted Savage’s two arguments. See Sahlin, op. cit..

16 Levi, I., ”On indeterminate probabilities” (1974, 1988), in Gärdenfors, P. and Sahlin, N.-E., eds., Decision, probability, and utility: Selected readings (Cambridge: Cambridge University Press, 1988), pp. 287-312.

17 Gärdenfors, P. and Sahlin, N.-E., ”Unreliable probabilities, risk taking, and decision making” (1982, 1988), in Gärdenfors, P. and Sahlin, N.-E., eds., Decision, probability, and utility: Selected readings (Cambridge: Cambridge University Press, 1988), pp. 313-334.

18 See Sahlin, N.-E. ”Three decision rules for generalized probability representations,” The Behavioral and Brain Sciences, no. 4 (1985), pp. 751-753.

19 Seidenfeld, T., ”Decision theory without ’independence’ or without ’ordering’: What is the difference?,” Economics and Philosophy, 4 (1988), pp. 267-290.

20 The type of problems discussed here can also be handled using what is called Robust Bayesianism, a type of theory discussed in Berger, J., Statistical decision theory, 2nd edition (New York: Springer-Verlag, 1985) and in ”Robust Bayesian analysis: sensitivity to the prior,” Journal of Statistical Planning and Inference, 25, pp. 303-328. For example, Berger's approach and that taken by Gärdenfors and Sahlin have much in common.

21 The Apology of Socrates, ed. by E. H. Blakeney (London: The Scholartis Press, 1929), pp. 67-68.

The essay was originally published in Future Risks and Risk Management, ed. by B. Brehmer and N.-E. Sahlin, Kluwer Academic Publishers, Dordrecht 1994, 37-62, and is here published with kind permission of Kluwer Academic Publishers.

Copyright© 1994/2004 by
Nils-Eric Sahlin and Johannes Persson
University of Lund
nils-eric.sahlin@telia.se