Risk Acceptance

The concept of risk acceptance asks the question, “How safe is safe enough?” or, in more precise terms, “The conditional nature of risk assessment raises the question of which standard of risk we should accept against which to calibrate human biases” (Pidgeon 1991). This question takes on importance in issues such as: (1) Should there be an additional containment shell around nuclear power plants? (2) Should schools containing asbestos be closed? or (3) Should one avoid all possible trouble, at least in the short run? Some of these questions are aimed at government or other regulatory bodies; others are aimed at the individual who must decide between certain actions and possible uncertain dangers.

The question of whether to accept or reject risks is resolved through decisions made to determine the optimal level of risk for a given situation. In many instances, these decisions follow almost automatically from perceptions and habits acquired through experience and training. However, whenever a new situation arises or changes occur in seemingly familiar tasks, as in performing non-routine or semi-routine tasks, decision making becomes more complex. To understand why people accept certain risks and reject others, we first need to define what risk acceptance is. Next, the psychological processes that lead to either acceptance or rejection have to be explained, including the factors that influence them. Finally, methods to change levels of risk acceptance that are too high or too low will be addressed.

Understanding Risk

Generally speaking, whenever risk is not rejected, people have accepted it either voluntarily, thoughtlessly or habitually. Thus, for example, when people participate in traffic, they accept the danger of damage, injury, death and pollution in exchange for the benefits of increased mobility; when they decide whether or not to undergo surgery, they judge the costs and/or benefits of one choice to outweigh those of the other; and when they invest money in the financial market or decide to change business products, all decisions accepting certain financial dangers and opportunities are made with some degree of uncertainty. Finally, the decision to work in any job also carries varying probabilities of injury or fatality, based on statistical accident history.

Defining risk acceptance by referring only to what has not been rejected leaves two important issues open: (1) what exactly is meant by the term risk, and (2) the often-made assumption that risks are merely potential losses to be avoided, whereas in reality there is a difference between merely tolerating risks, fully accepting them, or even wishing for them to occur in order to enjoy thrill and excitement. These facets might all be expressed through the same behaviour (such as participating in traffic) but reflect different underlying cognitive, emotional and physiological processes. A merely tolerated risk clearly involves a different level of commitment than an outright desire for thrill, or a “risky” sensation. Figure 1 summarizes facets of risk acceptance.

Figure 1. Facets of risk acceptance and risk rejection

If one looks up the term risk in the dictionaries of several languages, it often has the double meaning of “chance, opportunity” on one hand and “danger, loss” on the other (e.g., wei-ji in Chinese, Risiko in German, risico in Dutch and Italian, risque in French, etc.). The word risk was created and became popular in the sixteenth century as a consequence of a change in people’s perceptions: from being totally manipulated by “good and evil spirits” towards the notion that every free individual has both the chance and the danger of influencing his or her own future. (Probable origins of risk lie in the Greek word rhiza, meaning “root and/or cliff”, or the Arabic word rizq, meaning “what God and fate provide for your life”.) Similarly, in our everyday language we use proverbs such as “Nothing ventured, nothing gained” or “God helps the brave”, thereby promoting risk taking and risk acceptance. A concept always bound up with risk is that of uncertainty. As there is almost always some uncertainty about success or failure, or about the probability and magnitude of consequences, accepting risks always means accepting uncertainties (Schäfer 1978).

Safety research has largely reduced the meaning of risk to its dangerous aspects (Yates 1992b). Only lately have positive consequences of risk re-emerged with the increase in adventurous leisure time activities (bungee jumping, motorcycling, adventure travels, etc.) and with a deeper understanding of how people are motivated to accept and take risks (Trimpop 1994). It is argued that we can understand and influence risk acceptance and risk taking behaviour only if we take the positive aspects of risks into account as well as the negative.

Risk acceptance therefore refers to the behaviour of a person in a situation of uncertainty that results from the decision to engage in that behaviour (or not to engage in it), after weighing the estimated benefits as greater (or lesser) than the costs under the given circumstances. This process can be extremely quick and need not even enter the conscious decision-making level in automatic or habitual behaviour, such as shifting gears when the noise of the engine rises. At the other extreme, it may take a very long time and involve deliberate thinking and debate among several people, as when planning a hazardous operation such as a space flight.

One important aspect of this definition is that of perception. Because perception and subsequent evaluation are based on a person’s individual experiences, values and personality, the behavioural acceptance of risks is based more on subjective risk than on objective risk. Furthermore, as long as a risk is not perceived or considered, a person cannot respond to it, no matter how grave the hazard. Thus, the cognitive process leading to the acceptance of risk is an information-processing and evaluation procedure residing within each person, and one that can be extremely quick.

A model describing the identification of risks as a cognitive process of identification, storage and retrieval was discussed by Yates and Stone (1992). Problems can arise at each stage of the process. For example, accuracy in the identification of risks is rather unreliable, especially in complex situations or for dangers such as radiation, poison or other stimuli that are not easily perceptible. Furthermore, the identification, storage and retrieval mechanisms are subject to common psychological phenomena, such as primacy and recency effects, as well as habituation through familiarity. That means that people familiar with a certain risk, such as driving at high speed, will get used to it, accept it as a given “normal” situation and estimate the risk at a far lower value than people not familiar with the activity. A simple formalization of the process is a model with the components of:

Stimulus → Perception → Evaluation → Decision → Behaviour → Feedback loop

For example, a slowly moving vehicle in front of a driver may be the stimulus to pass. Checking the road for traffic is perception. Estimating the time needed to pass, given the acceleration capabilities of one’s car, is evaluation. The value placed on saving time leads to the decision, and to the subsequent behaviour of passing the car or not. The degree of success or failure is noticed immediately, and this feedback influences subsequent decisions about passing behaviour. At each step of this process, the final decision whether to accept or reject risks can be influenced. Costs and benefits are evaluated based on individual-, context- and object-related factors that scientific research has identified as important for risk acceptance.
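To make the sequence concrete, the following Python sketch simulates the passing example as such a feedback loop. It is a minimal sketch under invented assumptions: the noise level, the time needed to pass and the threshold-updating rule are all illustrative values, not quantities from the literature.

```python
import random

# Minimal sketch of the stimulus -> perception -> evaluation -> decision ->
# behaviour -> feedback loop, applied to the overtaking example. All numbers
# and the updating rule are illustrative assumptions.

def perceive(true_gap_s: float, noise_sd: float = 1.0) -> float:
    """Perception: the driver estimates the oncoming gap with some error."""
    return true_gap_s + random.gauss(0.0, noise_sd)

def evaluate(perceived_gap_s: float, time_to_pass_s: float) -> float:
    """Evaluation: the safety margin left over after the manoeuvre."""
    return perceived_gap_s - time_to_pass_s

def decide(margin_s: float, accepted_margin_s: float) -> bool:
    """Decision: pass only if the margin exceeds the currently accepted one."""
    return margin_s >= accepted_margin_s

def update_threshold(actual_margin_s: float, accepted_margin_s: float) -> float:
    """Feedback: a near miss raises the threshold; an easy pass erodes it
    slightly (habituation)."""
    if actual_margin_s < 1.0:
        return accepted_margin_s + 0.5
    return max(0.5, accepted_margin_s - 0.1)

TIME_TO_PASS_S = 6.0   # assumed time needed to complete the manoeuvre
accepted = 3.0         # seconds of margin this driver currently demands

for trial in range(5):
    gap = random.uniform(5.0, 12.0)            # stimulus: the real gap
    margin = evaluate(perceive(gap), TIME_TO_PASS_S)
    if decide(margin, accepted):               # decision -> behaviour
        accepted = update_threshold(gap - TIME_TO_PASS_S, accepted)
    print(f"trial {trial}: gap={gap:.1f}s, perceived margin={margin:.1f}s, "
          f"accepted margin now {accepted:.1f}s")
```

Note how subjective risk enters twice in this toy loop: once through perceptual noise, and once through the personal threshold that feedback shifts over repeated trials.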

Which Factors Influence Risk Acceptance?

Fischhoff et al. (1981) identified (1) individual perception, (2) time, (3) space and (4) context of behaviour as important dimensions of risk taking that should be considered in studying risks. Other authors have used different categories and different labels for the factors and contexts influencing risk acceptance. The categories of properties of the task or risk object, individual factors and context factors have been used to structure this large number of influential factors, as summarized in figure 2.

Figure 2. Factors influencing risk acceptance

In normal models of risk acceptance, consequences of new technological risks (e.g., genetic research) were often described by quantitative summary measures (e.g., deaths, damage, injuries), and probability distributions over consequences were arrived at through estimation or simulation (Starr 1969). Results were compared to risks already “accepted” by the public, and thus offered a measure of acceptability of the new risk. Sometimes data were presented in a risk index to compare the different types of risk. The methods used most often were summarized by Fischhoff et al. (1981) as professional judgement by experts, statistical and historical information and formal analyses, such as fault tree analyses. The authors argued that properly conducted formal analyses have the highest “objectivity” as they separate facts from beliefs and take many influences into account. However, safety experts stated that the public and individual acceptance of risks may be based on biased value judgements and on opinions publicized by the media, and not on logical analyses.
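As a minimal illustration of this “normal model” logic, the following Python sketch simulates fatality distributions for a hypothetical new technology and for an already accepted everyday risk; all rates are invented. It also hints at why equal risk indices may not imply equal acceptability: the same expected loss can hide very different catastrophic potentials.

```python
import random

# Minimal sketch of the "normal model" approach: simulate fatality
# distributions for a hypothetical new technology and for an already
# accepted everyday risk, then compare summary measures. All rates are
# invented for illustration.

random.seed(1)

def simulate_annual_fatalities(p_incident: float, deaths_per_incident: int,
                               n_years: int = 100_000) -> list:
    """Crude Monte Carlo: at most one incident per simulated year."""
    return [deaths_per_incident if random.random() < p_incident else 0
            for _ in range(n_years)]

# Rare but catastrophic vs. frequent but small losses, tuned to the same
# expected value (0.1 deaths per year).
new_tech = simulate_annual_fatalities(p_incident=0.001, deaths_per_incident=100)
accepted = simulate_annual_fatalities(p_incident=0.1, deaths_per_incident=1)

print(f"expected deaths/year: new={sum(new_tech) / len(new_tech):.3f}, "
      f"accepted={sum(accepted) / len(accepted):.3f}")
# On the summary index the two risks look interchangeable, but they differ
# in exactly the dimension (catastrophic potential) that a single risk
# index hides.
print(f"worst simulated year: new={max(new_tech)}, accepted={max(accepted)}")
```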

It has been suggested that the general public is often misinformed by the media and by political groups that produce statistics in favour of their arguments. Instead of relying on individual biases, it is argued, only professional judgements based on expert knowledge should be used as a basis for accepting risks, and the general public should be excluded from such important decisions. This has drawn substantial criticism, as it is viewed as a question of both democratic values (people should have a chance to decide issues that may have catastrophic consequences for their health and safety) and social values (whether the technology or risky decision benefits those who receive its advantages more than those who pay the costs). Fischhoff, Furby and Gregory (1987) suggested the use of either expressed preferences (interviews, questionnaires) or revealed preferences (observations) of the “relevant” public to determine the acceptability of risks. Jungermann and Rohrmann have pointed out the problems of identifying who is the “relevant public” for technologies such as nuclear power plants or genetic manipulations, as several nations or the world population may suffer or benefit from the consequences.

Problems with relying solely on expert judgements have also been discussed. Expert judgements based on normal models approach statistical estimations more closely than those of the public (Otway and von Winterfeldt 1982). However, when asked specifically to judge the probability or frequency of death or injuries related to a new technology, the public’s views are much more similar to the expert judgements and to the risk indices. Research also showed that although people do not change their first quick estimate when provided with data, they do change when realistic benefits or dangers are raised and discussed by experts. Furthermore, Haight (1986) pointed out that because expert judgements are subjective, and experts often disagree about risk estimates, the public is sometimes more accurate in its estimate of riskiness, if judged after the accident has occurred (e.g., the catastrophe at Chernobyl). Thus, it is concluded that the public uses dimensions of risk other than the statistical number of deaths or injuries when making judgements.

Another aspect that plays a role in accepting risks is whether the perceived effects of taking risks are judged positive, such as an adrenaline high, “flow” experience or social praise as a hero. Machlis and Rosa (1990) discussed the concept of desired risk in contrast to tolerated or dreaded risk and concluded that in many situations increased risks function as an incentive rather than as a deterrent. They found that people may behave in ways not at all averse to risk, in spite of media coverage stressing the dangers. For example, amusement park operators reported a ride becoming more popular when it reopened after a fatality. Also, after a Norwegian ferry sank and the passengers were set afloat on icebergs for 36 hours, the operating company experienced the greatest demand it had ever had for passage on its vessels. Researchers concluded that the concept of desired risk changes the perception and acceptance of risks, and demands different conceptual models to explain risk-taking behaviour. These assumptions were supported by research showing that for police officers on patrol the physical danger of being attacked or killed was ironically perceived as job enrichment, while for police officers engaged in administrative duties the same risk was perceived as dreadful. Vlek and Stallen (1980) suggested the inclusion of more personal and intrinsic reward aspects in cost/benefit analyses to explain the processes of risk assessment and risk acceptance more completely.

Individual factors influencing risk acceptance

Jungermann and Slovic (1987) reported data showing individual differences in perception, evaluation and acceptance of “objectively” identical risks between students, technicians and environmental activists. Age, sex and level of education have been found to influence risk acceptance, with young, poorly educated males taking the highest risks (e.g., wars, traffic accidents). Zuckerman (1979) provided a number of examples of individual differences in risk acceptance and stated that they are most likely influenced by personality factors, such as sensation seeking, extroversion, overconfidence or experience seeking. Costs and benefits of risks also contribute to individual evaluation and decision processes. In judging the riskiness of a situation or action, different people reach a wide variety of verdicts. This variety can manifest itself in calibration: value-induced biases, for example, make the preferred course of action appear less risky, so that overconfident people anchor on a different value. Personality aspects, however, account for only 10 to 20% of the decision to accept or reject a risk. Other factors have to be identified to explain the remaining 80 to 90%.

Slovic, Fischhoff and Lichtenstein (1980) concluded from factor-analytic studies and interviews that non-experts assess risks in a qualitatively different way, taking into account the dimensions of controllability, voluntariness and dreadfulness, and whether the risk was previously known. Voluntariness and perceived controllability were discussed in great detail by Fischhoff et al. (1981). It is estimated that voluntarily chosen risks (motorcycling, mountain climbing) have a level of acceptance about 1,000 times as high as that of involuntarily chosen, societal risks. Supporting the distinction between societal and individual risks, the importance of voluntariness and controllability was posited in a study by von Winterfeldt, John and Borcherding (1981). These authors reported lower perceived riskiness for motorcycling, stunt work and auto racing than for nuclear power and air traffic accidents. Renn (1981) reported a study on voluntariness and perceived negative effects. One group of subjects was allowed to choose between three types of pills, while the other group was administered these pills. Although all pills were identical, the voluntary group reported significantly fewer “side-effects” than the administered group.

When risks are individually perceived as having dreadful consequences for many people, or even catastrophic consequences with a near-zero probability of occurrence, these risks are often judged as unacceptable in spite of the knowledge that there have been no, or few, fatal accidents. This holds even more true for risks previously unknown to the person judging. Research also shows that people use their personal knowledge of and experience with the particular risk as the key anchor of judgement for accepting well-defined risks, while previously unknown risks are judged more by levels of dread and severity. People are more likely to underestimate even high risks if they have been exposed to them for an extended period of time, such as people living below a power dam or in earthquake zones, or holding jobs with a “habitually” high risk, such as in underground mining, logging or construction (Zimolong 1985). Furthermore, people seem to judge human-made risks very differently from natural risks, accepting natural ones more readily than self-constructed, human-made risks. The approach used by experts, of situating the risks of new technologies between the low-end and high-end “objective risks” of already accepted or natural risks, does not appear to be perceived as adequate by the public. It can be argued that already “accepted risks” are merely tolerated, that new risks add to the existing ones, and that new dangers have not yet been experienced and coped with. Thus, expert statements are essentially viewed as promises. Finally, it is very hard to determine what has been truly accepted, as many people are seemingly unaware of many risks surrounding them.

Even if people are aware of the risks surrounding them, the problem of behavioural adaptation occurs. This process is well described in risk compensation and risk homeostasis theory (Wilde 1986), which states that people adjust their risk acceptance decision and their risk-taking behaviour towards their target level of perceived risk. That means that people will behave more cautiously and accept fewer risks when they feel threatened, and, conversely, they will behave more daringly and accept higher levels of risk when they feel safe and secure. Thus, it is very difficult for safety experts to design safety equipment, such as seat-belts, ski boots, helmets, wide roads, fully enclosed machinery and so on, without the user’s offsetting the possible safety benefit by some personal benefit, such as increased speed, comfort, decreased attention or other more “risky” behaviour.
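As a minimal illustration of this homeostatic adjustment, the following Python sketch uses a simple linear updating rule; the rule and all numbers are invented for illustration and are not part of Wilde’s formal theory.

```python
# Minimal sketch of the risk homeostasis idea (Wilde 1986): perceived risk
# is adjusted back towards a target level through behavioural adaptation.
# The linear updating rule and all parameter values are illustrative
# assumptions, not part of the published theory.

def homeostasis(perceived_risk: float, target_risk: float,
                adjustment_rate: float = 0.5, steps: int = 6) -> None:
    for step in range(1, steps + 1):
        # Feeling safer than the target -> behave more daringly (risk rises);
        # feeling threatened -> behave more cautiously (risk falls).
        perceived_risk += adjustment_rate * (target_risk - perceived_risk)
        print(f"step {step}: perceived risk = {perceived_risk:.2f}")

# A safety measure (say, seat-belts) suddenly halves perceived risk ...
homeostasis(perceived_risk=0.5, target_risk=1.0)
# ... and the model predicts behaviour drifts back towards the old target,
# offsetting part of the engineered safety benefit.
```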

Changing the accepted level of risk by increasing the value of safe behaviour may increase the motivation to accept the less dangerous alternative. This approach aims at changing individual values, norms and beliefs in order to motivate alternative risk acceptance and risk-taking behaviour. Among the factors that increase or decrease the likelihood of risk acceptance are such questions as whether the technology provides a benefit corresponding to present needs, increases the standard of living, creates new jobs, facilitates economic growth, enhances national prestige and independence, requires strict security measures, increases the power of big business, or leads to centralization of political and economic systems (Otway and von Winterfeldt 1982). Similar influences of situational frames on risk evaluations were reported by Kahneman and Tversky (1979 and 1984). They reported that when the outcome of a surgical or radiation therapy was phrased as a 68% probability of survival, 44% of subjects chose it, compared with only 18% who chose the same therapy when its outcome was phrased as a mathematically equivalent 32% probability of death. Often subjects choose a personal anchor value (Lopes and Ekberg 1980) to judge the acceptability of risks, especially when dealing with cumulative risks over time.
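One way to see why mathematically equivalent frames can produce such different choices is through the value function of prospect theory (Kahneman and Tversky 1979), which is concave for gains and steeper for losses. The Python sketch below is illustrative only: it omits prospect theory’s probability weighting function, the parameter values are commonly cited later estimates rather than figures from the 1979 paper, and the outcome scale assigned to survival and death is invented.

```python
# Minimal sketch of the framing effect using a prospect-theory value
# function. Probability weighting is omitted, the parameters
# (alpha = beta = 0.88, lam = 2.25) are commonly cited later estimates,
# and the outcome scale (+100 survival, -100 death) is invented.

def value(x: float, alpha: float = 0.88, beta: float = 0.88,
          lam: float = 2.25) -> float:
    """Subjective value: concave for gains, steeper (loss-averse) for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

# The same therapy, evaluated under two reference frames:
survival_frame = 0.68 * value(+100.0)  # "68% probability of survival" (gain)
death_frame = 0.32 * value(-100.0)     # "32% probability of death" (loss)

print(f"gain-framed evaluation: {survival_frame:+.1f}")
print(f"loss-framed evaluation: {death_frame:+.1f}")
# The loss frame weighs far more heavily, consistent with fewer subjects
# choosing the therapy when it is described via the probability of death.
```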

The influence of “emotional frames” (affective context with induced emotions) on risk assessment and acceptance was shown by Johnson and Tversky (1983). In their frames, positive and negative emotions were induced through descriptions of events such as personal success or the death of a young man. They found that subjects with induced negative feelings judged accidental and violent fatality rates as significantly higher, regardless of other context variables, than subjects in the positive emotional group. Other factors influencing individual risk acceptance include group values, individual beliefs, societal norms, cultural values, the economic and political situation, and recent experiences, such as seeing an accident. Dake (1992) argued that risk is, apart from its physical component, a concept very much dependent on the respective system of beliefs and myths within a cultural frame. Yates and Stone (1992) listed the individual biases (figure 3) that have been found to influence the judgement and acceptance of risks.

Figure 3. Individual biases that influence risk evaluation and risk acceptance

Cultural factors influencing risk acceptance

Pidgeon (1991) defined culture as the collection of beliefs, norms, attitudes, roles and practices shared within a given social group or population. Differences in culture lead to different levels of risk perception and acceptance, as seen, for example, in comparing the work safety standards and accident rates of industrialized countries with those of developing countries. In spite of the differences, one of the most consistent findings across and within cultures is that the same concepts of dreadfulness and unknown risk, and those of voluntariness and controllability, usually emerge, but that they receive different priorities (Kasperson 1986). Whether these priorities are solely culture-dependent remains a matter of debate. For example, in estimating the hazards of toxic and radioactive waste disposal, British people focus more on transportation risks, Hungarians more on operating risks, and Americans more on environmental risks. These differences are attributed to cultural differences, but may just as well be the consequence of perceived population density in Britain, concern about operating reliability in Hungary and environmental concerns in the United States, all of which are situational factors. In another study, Kleinhesselink and Rosa (1991) found that the Japanese perceive atomic power as a dreadful but not unknown risk, while for Americans atomic power is a predominantly unknown source of risk.

The authors attributed these differences to different exposure, such as to the atomic bombs dropped on Hiroshima and Nagasaki in 1945. However, similar differences were reported between Hispanic and White American residents of the San Francisco area. Thus, local cultural, knowledge and individual differences may play an equally important role in risk perception as general cultural biases do (Rohrmann 1992a).

These and similar discrepancies in conclusions and interpretations derived from identical facts led Johnson (1991) to formulate cautious warnings about the causal attribution of cultural differences to risk perception and risk acceptance. He worried about the widespread differences in the definition of culture, which make it almost an all-encompassing label. Moreover, differences in the opinions and behaviours of subpopulations or individual business organizations within a country add further problems to a clear-cut measurement of culture or of its effects on risk perception and risk acceptance. Also, the samples studied are usually small and not representative of the cultures as a whole, and often causes and effects are not properly separated (Rohrmann 1995). Other cultural aspects examined were world views, such as individualism versus egalitarianism versus belief in hierarchies, and social, political, religious or economic factors.

Wilde (1994) reported, for example, that the number of accidents is inversely related to a country’s economic situation. In times of recession the number of traffic accidents drops, while in times of growth the number of accidents rises. Wilde attributed these findings to a number of factors; for example, in times of recession more people are unemployed and gasoline and spare parts are more costly, so people consequently take more care to avoid accidents. On the other hand, Fischhoff et al. (1981) argued that in times of recession people are more willing to accept dangers and uncomfortable working conditions in order to keep a job or to get one.

The role of language and its use in the mass media was discussed by Dake (1991), who cited a number of examples in which the same “facts” were worded so as to support the political goals of specific groups, organizations or governments. For example, are worker complaints about suspected occupational hazards “legitimate concerns” or “narcissistic phobias”? Is hazard information available to the courts in personal injury cases “sound evidence” or “scientific flotsam”? Do we face ecological “nightmares” or simply “incidents” or “challenges”? Risk acceptance thus depends on the perceived situation and context of the risk to be judged, as well as on the perceived situation and context of the judges themselves (von Winterfeldt and Edwards 1984). As the previous examples show, risk perception and acceptance strongly depend on the way the basic “facts” are presented. The credibility of the source and the amount and type of media coverage (in short, risk communication) determine risk acceptance more often than the results of formal analyses or expert judgements would suggest. Risk communication is thus a context factor that is specifically used to change risk acceptance.

Changing Risk Acceptance

To achieve a high degree of acceptance for a change, it has proven very successful to include those who are expected to accept the change in the planning, decision and control process, thereby binding them to support the decision. Based on successful project reports, figure 4 lists six steps that should be considered when dealing with risks.

Figure 4. Six steps for choosing, deciding upon and accepting optimal risks

Determining “optimal risks”

In steps 1 and 2, major problems occur in identifying the desirability and the “objective risk” of the objective, while in step 3 it seems difficult to eliminate the worst options. For individuals and organizations alike, large-scale societal, catastrophic or lethal dangers seem to be the most dreaded and least acceptable options. Perrow (1984) argued that most societal risks, such as DNA research, power plants or the nuclear arms race, possess many closely coupled subsystems, meaning that if one error occurs in a subsystem, it can trigger many other errors. These consecutive errors may remain undetected, due to the nature of the initial error, such as a nonfunctioning warning sign. The risk of accidents happening due to interactive failures increases in complex technical systems. Thus, Perrow (1984) suggested that it would be advisable to leave societal risks loosely coupled (i.e., independently controllable), to allow for independent assessment of and protection against risks, and to consider very carefully the necessity of technologies with the potential for catastrophic consequences.
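The force of the coupling argument can be shown with a small simulation. The following Python sketch is a minimal illustration under invented assumptions (six subsystems, a 5% error rate and a simple chain-propagation rule); it is not Perrow’s own formalism, but it shows how the same component reliability yields far more system accidents when errors can cascade.

```python
import random

# Minimal sketch of the tight-coupling argument: the same per-subsystem
# error rate produces far more system accidents when an error can cascade
# to the next subsystem. The chain-propagation rule and all probabilities
# are invented for illustration.

random.seed(2)

def system_accident_rate(n_subsystems: int, p_error: float,
                         p_propagate: float, trials: int = 100_000) -> float:
    accidents = 0
    for _ in range(trials):
        failed = [random.random() < p_error for _ in range(n_subsystems)]
        # Coupling: a failed subsystem may drag down its neighbour.
        for i in range(n_subsystems - 1):
            if failed[i] and random.random() < p_propagate:
                failed[i + 1] = True
        # Count it as a system accident if at least half the subsystems fail.
        if sum(failed) >= n_subsystems / 2:
            accidents += 1
    return accidents / trials

loose = system_accident_rate(6, p_error=0.05, p_propagate=0.0)
tight = system_accident_rate(6, p_error=0.05, p_propagate=0.9)
print(f"loosely coupled accident rate: {loose:.5f}")
print(f"tightly coupled accident rate: {tight:.5f}")
```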

Communicating “optimal choices”

Steps 3 to 6 deal with the accurate communication of risks, which is a necessary tool for developing adequate risk perception, risk estimation and optimal risk-taking behaviour. Risk communication is aimed at different audiences, such as residents, employees, patients and so on. It uses different channels, such as newspapers, radio, television and verbal communication, and all of these in different situations or “arenas”, such as training sessions, public hearings, articles, campaigns and personal communications. Although there is little research on the effectiveness of mass media communication in the area of health and safety, most authors agree that the quality of the communication largely determines the likelihood of attitudinal or behavioural changes in risk acceptance of the targeted audience. According to Rohrmann (1992a), risk communication also serves different purposes, some of which are listed in figure 5.

Figure 5. Purposes of risk communication

Risk communication is a complex issue, with its effectiveness seldom proven with scientific exactness. Rohrmann (1992a) listed necessary factors for evaluating risk communication and gave some advice about communicating effectively. Wilde (1993) separated the source, the message, the channel and the recipient and gave suggestions for each aspect of communication. He cited data that show, for example, that the likelihood of effective safety and health communication depends on issues such as those listed in figure 6.

Figure 6. Factors influencing the effectiveness of risk communication

Establishing a risk optimization culture

Pidgeon (1991) defined safety culture as a constructed system of meanings through which a given people or group understands the hazards of the world. This system specifies what is important and legitimate, and explains relationships to matters of life and death, work and danger. A safety culture is created and recreated as its members repeatedly behave in ways that seem natural, obvious and unquestionable, and as such it constructs a particular version of risk, danger and safety. Such versions of the perils of the world will also embody explanatory schemata to describe the causation of accidents. Within an organization, such as a company or a country, the tacit and explicit rules and norms governing safety are at the heart of a safety culture. Major components are rules for handling hazards, attitudes towards safety, and reflexivity about safety practice.

Industrial organizations that already practise an elaborate safety culture emphasize the importance of common visions, goals, standards and behaviours in risk taking and risk acceptance. As uncertainties are unavoidable within the context of work, an optimal balance between taking chances and controlling hazards has to be struck. Vlek and Cvetkovitch (1989) stated:

Adequate risk management is a matter of organizing and maintaining a sufficient degree of (dynamic) control over a technological activity, rather than continually, or just once, measuring accident probabilities and distributing the message that these are, and will be, “negligibly low”. Thus more often than not, “acceptable risk” means “sufficient control”.

Summary

When people perceive themselves to possess sufficient control over possible hazards, they are willing to accept the dangers to gain the benefits. Sufficient control, however, has to be based on sound information, assessment, perception, evaluation and finally an optimal decision in favour of or against the “risky objective”.

Safety Policy and Leadership References

Abbey, A and JW Dickson. 1983. R&D work climate and innovation in semiconductors. Acad Manage J 26:362–368.

Andriessen, JHTH. 1978. Safe behavior and safety motivation. J Occup Acc 1:363–376.

Bailey, C. 1993. Improve safety program effectiveness with perception surveys. Prof Saf October:28–32.

Bluen, SD and C Donald. 1991. The nature and measurement of in-company industrial relations climate. S Afr J Psychol 21(1):12–20.

Brown, RL and H Holmes. 1986. The use of a factor-analytic procedure for assessing the validity of an employee safety climate model. Accident Anal Prev 18(6):445–470.

CCPS (Center for Chemical Process Safety). N.d. Guidelines for Safe Automation of Chemical Processes. New York: Center for Chemical Process Safety of the American Institute of Chemical Engineers.

Chew, DCE. 1988. Quelles sont les mesures qui assurent le mieux la sécurité du travail? Etude menée dans trois pays en développement d’Asie. Rev Int Travail 127:129–145.

Chicken, JC and MR Haynes. 1989. The Risk Ranking Method in Decision Making. Oxford: Pergamon.

Cohen, A. 1977. Factors in successful occupational safety programs. J Saf Res 9:168–178.

Cooper, MD, RA Phillips, VF Sutherland and PJ Makin. 1994. Reducing accidents using goal setting and feedback: A field study. J Occup Organ Psychol 67:219–240.

Cru, D and C Dejours. 1983. Les savoir-faire de prudence dans les métiers du bâtiment. Cahiers médico-sociaux 3:239–247.

Dake, K. 1991. Orienting dispositions in the perception of risk: An analysis of contemporary worldviews and cultural biases. J Cross Cult Psychol 22:61–82.

—. 1992. Myths of nature: Culture and the social construction of risk. J Soc Issues 48:21–37.

Dedobbeleer, N and F Béland. 1989. The interrelationship of attributes of the work setting and workers’ safety climate perceptions in the construction industry. In Proceedings of the 22nd Annual Conference of the Human Factors Association of Canada. Toronto.

—. 1991. A safety climate measure for construction sites. J Saf Res 22:97–103.

Dedobbeleer, N, F Béland and P German. 1990. Is there a relationship between attributes of construction sites and workers’ safety practices and climate perceptions? In Advances in Industrial Ergonomics and Safety II, edited by D Biman. London: Taylor & Francis.

Dejours, C. 1992. Intelligence ouvrière et organisation du travail. Paris: Harmattan.

DeJoy, DM. 1987. Supervisor attributions and responses for multicausal workplace accidents. J Occup Acc 9:213–223.

—. 1994. Managing safety in the workplace: An attribution theory analysis and model. J Saf Res 25:3–17.

Denison, DR. 1990. Corporate Culture and Organizational Effectiveness. New York: Wiley.

Dieterly, D and B Schneider. 1974. The effect of organizational environment on perceived power and climate: A laboratory study. Organ Behav Hum Perform 11:316–337.

Dodier, N. 1985. La construction pratique des conditions de travail: Préservation de la santé et vie quotidienne des ouvriers dans les ateliers. Sci Soc Santé 3:5–39.

Dunnette, MD. 1976. Handbook of Industrial and Organizational Psychology. Chicago: Rand McNally.

Dwyer, T. 1992. Life and Death at Work. Industrial Accidents as a Case of Socially Produced Error. New York: Plenum Press.

Eakin, JM. 1992. Leaving it up to the workers: Sociological perspective on the management of health and safety in small workplaces. Int J Health Serv 22:689–704.

Edwards, W. 1961. Behavioural decision theory. Annu Rev Psychol 12:473–498.

Embrey, DE, P Humphreys, EA Rosa, B Kirwan and K Rea. 1984. An approach to assessing human error probabilities using structured expert judgement. NUREG/CR-3518. Washington, DC: US Nuclear Regulatory Commission.

Eyssen, G, J Eakin-Hoffman and R Spengler. 1980. Manager’s attitudes and the occurrence of accidents in a telephone company. J Occup Acc 2:291–304.

Field, RHG and MA Abelson. 1982. Climate: A reconceptualization and proposed model. Hum Relat 35:181–201.

Fischhoff, B and D MacGregor. 1983. Judged lethality: How much people seem to know depends on how they are asked. Risk Anal 3:229–236.

Fischhoff, B, L Furby and R Gregory. 1987. Evaluating voluntary risks of injury. Accident Anal Prev 19:51–62.

Fischhoff, B, S Lichtenstein, P Slovic, S Derby and RL Keeney. 1981. Acceptable Risk. Cambridge: CUP.

Flanagan, O. 1991. The Science of the Mind. Cambridge: MIT Press.

Frantz, JP. 1992. Effect of location, procedural explicitness, and presentation format on user processing of and compliance with product warnings and instructions. Ph.D. Dissertation, University of Michigan, Ann Arbor.

Frantz, JP and TP Rhoades. 1993. A task analytic approach to the temporal and spatial placement of product warnings. Human Factors 35:713–730.

Frederiksen, M, O Jensen and AE Beaton. 1972. Prediction of Organizational Behavior. Elmsford, NY: Pergamon.

Freire, P. 1988. Pedagogy of the Oppressed. New York: Continuum.

Glick, WH. 1985. Conceptualizing and measuring organizational and psychological climate: Pitfalls in multi-level research. Acad Manage Rev 10(3):601–616.

Gouvernement du Québec. 1978. Santé et sécurité au travail: Politique québecoise de la santé et de la sécurité des travailleurs. Québec: Editeur officiel du Québec.

Haas, J. 1977. Learning real feelings: A study of high steel ironworkers’ reactions to fear and danger. Sociol Work Occup 4:147–170.

Hacker, W. 1987. Arbeitspsychologie. Stuttgart: Hans Huber.

Haight, FA. 1986. Risk, especially risk of traffic accident. Accident Anal Prev 18:359–366.

Hale, AR and AI Glendon. 1987. Individual Behaviour in the Control of Danger. Vol. 2. Industrial Safety Series. Amsterdam: Elsevier.

Hale, AR, B Hemning, J Carthey and B Kirwan. 1994. Extension of the Model of Behaviour in the Control of Danger. Volume 3—Extended model description. Delft University of Technology, Safety Science Group (Report for HSE). Birmingham, UK: Birmingham University, Industrial Ergonomics Group.

Hansen, L. 1993a. Beyond commitment. Occup Hazards 55(9):250.

—. 1993b. Safety management: A call for revolution. Prof Saf 38(30):16–21.

Harrison, EF. 1987. The Managerial Decision-making Process. Boston: Houghton Mifflin.

Heinrich, H, D Petersen and N Roos. 1980. Industrial Accident Prevention. New York: McGraw-Hill.

Hovden, J and TJ Larsson. 1987. Risk: Culture and concepts. In Risk and Decisions, edited by WT Singleton and J Hovden. New York: Wiley.

Howarth, CI. 1988. The relationship between objective risk, subjective risk and behaviour. Ergonomics 31:657–661.

Hox, JJ and IGG Kreft. 1994. Multilevel analysis methods. Sociol Methods Res 22(3):283–300.

Hoyos, CG and B Zimolong. 1988. Occupational Safety and Accident Prevention. Behavioural Strategies and Methods. Amsterdam: Elsevier.

Hoyos, CG and E Ruppert. 1993. Der Fragebogen zur Sicherheitsdiagnose (FSD). Bern: Huber.

Hoyos, CT, U Bernhardt, G Hirsch and T Arnhold. 1991. Vorhandenes und erwünschtes sicherheits-relevantes Wissen in Industriebetrieben. Zeitschrift für Arbeits- und Organisationspsychologie 35:68–76.

Huber, O. 1989. Information-processing operators in decision making. In Process and Structure of Human Decision Making, edited by H Montgomery and O Svenson. Chichester: Wiley.

Hunt, HA and RV Habeck. 1993. The Michigan disability prevention study: Research highlights. Unpublished report. Kalamazoo, MI: E.E. Upjohn Institute for Employment Research.

International Electrotechnical Commission (IEC). N.d. Draft Standard IEC 1508; Functional Safety: Safety-related Systems. Geneva: IEC.

Instrument Society of America (ISA). N.d. Draft Standard: Application of Safety Instrumented Systems for the Process Industries. North Carolina, USA: ISA.

International Organization for Standardization (ISO). 1990. ISO 9000-3: Quality Management and Quality Assurance Standards: Guidelines for the Application of ISO 9001 to the Development, Supply and Maintenance of Software. Geneva: ISO.

James, LR. 1982. Aggregation bias in estimates of perceptual agreement. J Appl Psychol 67:219–229.

James, LR and AP Jones. 1974. Organizational climate: A review of theory and research. Psychol Bull 81(12):1096–1112.

Janis, IL and L Mann. 1977. Decision-making: A Psychological Analysis of Conflict, Choice and Commitment. New York: Free Press.

Johnson, BB. 1991. Risk and culture research: Some caution. J Cross Cult Psychol 22:141–149.

Johnson, EJ and A Tversky. 1983. Affect, generalization, and the perception of risk. J Personal Soc Psychol 45:20–31.

Jones, AP and LR James. 1979. Psychological climate: Dimensions and relationships of individual and aggregated work environment perceptions. Organ Behav Hum Perform 23:201–250.

Joyce, WF and JWJ Slocum. 1984. Collective climate: Agreement as a basis for defining aggregate climates in organizations. Acad Manage J 27:721–742.

Jungermann, H and P Slovic. 1987. Die Psychologie der Kognition und Evaluation von Risiko. Unpublished manuscript. Technische Universität Berlin.

Kahneman, D and A Tversky. 1979. Prospect theory: An analysis of decision under risk. Econometrica 47:263–291.

—. 1984. Choices, values, and frames. Am Psychol 39:341–350.

Kahneman, D, P Slovic and A Tversky. 1982. Judgement under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.

Kasperson, RE. 1986. Six propositions on public participation and their relevance for risk communication. Risk Anal 6:275–281.

Kleinhesselink, RR and EA Rosa. 1991. Cognitive representation of risk perception. J Cross Cult Psychol 22:11–28.

Komaki, J, KD Barwick and LR Scott. 1978. A behavioral approach to occupational safety: Pinpointing and reinforcing safe performance in a food manufacturing plant. J Appl Psychol 63(4):434–445.

Komaki, JL. 1986. Promoting job safety and accident prevention. In Health and Industry: A Behavioral Medicine Perspective, edited by MF Cataldo and TJ Coates. New York: Wiley.

Konradt, U. 1994. Handlungsstrategien bei der Störungsdiagnose an flexiblen Fertigungs-einrichtungen. Zeitschrift für Arbeits- und Organisationspsychologie 38:54–61.

Koopman, P and J Pool. 1991. Organizational decision making: Models, contingencies and strategies. In Distributed Decision Making. Cognitive Models for Cooperative Work, edited by J Rasmussen, B Brehmer and J Leplat. Chichester: Wiley.

Koslowski, M and B Zimolong. 1992. Gefahrstoffe am Arbeitsplatz: Organisatorische Einflüsse auf Gefahrenbewußtsein und Risikokompetenz. In Workshop Psychologie der Arbeitssicherheit, edited by B Zimolong and R Trimpop. Heidelberg: Asanger.

Koys, DJ and TA DeCotiis. 1991. Inductive measures of psychological climate. Hum Relat 44(3):265–285.

Krause, TH, JH Hidley and SJ Hodson. 1990. The Behavior-based Safety Process. New York: Van Nostrand Reinhold.

Lanier, EB. 1992. Reducing injuries and costs through team safety. ASSE J July:21–25.

Lark, J. 1991. Leadership in safety. Prof Saf 36(3):33–35.

Lawler, EE. 1986. High-involvement Management. San Francisco: Jossey Bass.

Lehto, MR. 1992. Designing warning signs and warning labels: Scientific basis for initial guideline. Int J Ind Erg 10:115–119.

Lehto, MR and JD Papastavrou. 1993. Models of the warning process: Important implications towards effectiveness. Safety Science 16:569–595.

Lewin, K. 1951. Field Theory in Social Science. New York: Harper and Row.

Likert, R. 1967. The Human Organization. New York: McGraw-Hill.

Lopes, LL and P-HS Ekberg. 1980. Test of an ordering hypothesis in risky decision making. Acta Psychol 45:161–167.

Machlis, GE and EA Rosa. 1990. Desired risk: Broadening the social amplification of risk framework. Risk Anal 10:161–168.

March, J and H Simon. 1993. Organizations. Cambridge: Blackwell.

March, JG and Z Shapira. 1992. Variable risk preferences and the focus of attention. Psychol Rev 99:172–183.

Mason, WM, GY Wong and B Entwisle. 1983. Contextual analysis through the multilevel linear model. In Sociological Methodology, 1983–1984. San Francisco: Jossey-Bass.

Mattila, M, M Hyttinen and E Rantanen. 1994. Effective supervisory behavior and safety at the building site. Int J Ind Erg 13:85–93.

Mattila, M, E Rantanen and M Hyttinen. 1994. The quality of work environment, supervision and safety in building construction. Saf Sci 17:257–268.

McAfee, RB and AR Winn. 1989. The use of incentives/feedback to enhance work place safety: A critique of the literature. J Saf Res 20(1):7–19.

McSween, TE. 1995. The Values-based Safety Process. New York: Van Nostrand Reinhold.

Melia, JL, JM Tomas and A Oliver. 1992. Concepciones del clima organizacional hacia la seguridad laboral: Replicación del modelo confirmatorio de Dedobbeleer y Béland. Revista de Psicologia del Trabajo y de las Organizaciones 9(22).

Minter, SG. 1991. Creating the safety culture. Occup Hazards August:17–21.

Montgomery, H and O Svenson. 1989. Process and Structure of Human Decision Making. Chichester: Wiley.

Moravec, M. 1994. The 21st century employer-employee partnership. HR Mag January:125–126.

Morgan, G. 1986. Images of Organizations. Beverly Hills: Sage.

Nadler, D and ML Tushman. 1990. Beyond the charismatic leader. Leadership and organizational change. Calif Manage Rev 32:77–97.

Näsänen, M and J Saari. 1987. The effects of positive feedback on housekeeping and accidents at a shipyard. J Occup Acc 8:237–250.

National Research Council. 1989. Improving Risk Communication. Washington, DC: National Academy Press.

Naylor, JD, RD Pritchard and DR Ilgen. 1980. A Theory of Behavior in Organizations. New York: Academic Press.

Neumann, PJ and PE Politser. 1992. Risk and optimality. In Risk-taking Behaviour, edited by JF Yates. Chichester: Wiley.

Nisbett, R and L Ross. 1980. Human Inference: Strategies and Shortcomings of Social Judgement. Englewood Cliffs: Prentice-Hall.

Nunnally, JC. 1978. Psychometric Theory. New York: McGraw-Hill.

Oliver, A, JM Tomas and JL Melia. 1993. Una segunda validacion cruzada de la escala de clima organizacional de seguridad de Dedobbeleer y Béland. Ajuste confirmatorio de los modelos unifactorial, bifactorial y trifactorial. Psicologica 14:59–73.

Otway, HJ and D von Winterfeldt. 1982. Beyond acceptable risk: On the social acceptability of technologies. Policy Sci 14:247–256.

Perrow, C. 1984. Normal Accidents: Living with High-risk Technologies. New York: Basic Books.

Petersen, D. 1993. Establishing good “safety culture” helps mitigate workplace dangers. Occup Health Saf 62(7):20–24.

Pidgeon, NF. 1991. Safety culture and risk management in organizations. J Cross Cult Psychol 22:129–140.

Rasbash, J and G Woodhouse. 1995. MLn command reference. Version 1.0 March 1995. ESRC.

Rachman, SJ. 1974. The Meanings of Fear. Harmondsworth: Penguin.

Rasmussen, J. 1983. Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models. IEEE T Syst Man Cyb SMC-13(3):257–266.

Reason, JT. 1990. Human Error. Cambridge: CUP.

Rees, JV. 1988. Self-regulation: An effective alternative to direct regulation by OSHA? Policy Stud J 16:603–614.

Renn, O. 1981. Man, technology and risk: A study on intuitive risk assessment and attitudes towards nuclear energy. Spezielle Berichte der Kernforschungsanlage Jülich.

Rittel, HWJ and MM Webber. 1973. Dilemmas in a general theory of planning. Policy Sci 4:155–169.

Robertson, A and M Minkler. 1994. New health promotion movement: A critical examination. Health Educ Q 21(3):295–312.

Rogers, CR. 1961. On Becoming a Person. Boston: Houghton Mifflin.

Rohrmann, B. 1992a. The evaluation of risk communication effectiveness. Acta Psychol 81:169–192.

—. 1992b. Risiko Kommunikation, Aufgaben-Konzepte-Evaluation. In Psychologie der Arbeitssicherheit, edited by B Zimolong and R Trimpop. Heidelberg: Asanger.

—. 1995. Risk perception research: Review and documentation. In Arbeiten zur Risikokommunikation. Heft 48. Jülich: Forschungszentrum Jülich.

—. 1996. Perception and evaluation of risks: A cross cultural comparison. In Arbeiten zur Risikokommunikation Heft 50. Jülich: Forschungszentrum Jülich.

Rosenhead, J. 1989. Rational Analysis for a Problematic World. Chichester: Wiley.

Rumar, K. 1988. Collective risk but individual safety. Ergonomics 31:507–518.

Rummel, RJ. 1970. Applied Factor Analysis. Evanston, IL: Northwestern University Press.

Ruppert, E. 1987. Gefahrenwahrnehmung—ein Modell zur Anforderungsanalyse für die verhaltensabhängige Kontrolle von Arbeitsplatzgefahren. Zeitschrift für Arbeitswissenschaft 2:84–87.

Saari, J. 1976. Characteristics of tasks associated with the occurrence of accidents. J Occup Acc 1:273–279.

Saari, J. 1990. On strategies and methods in company safety work: From informational to motivational strategies. J Occup Acc 12:107–117.

Saari, J and M Näsänen. 1989. The effect of positive feedback on industrial housekeeping and accidents: A long-term study at a shipyard. Int J Ind Erg 4(3):201–211.

Sarkis, H. 1990. What really causes accidents. Presentation at Wausau Insurance Safety Excellence Seminar. Canandaigua, NY, US, June 1990.

Sass, R. 1989. The implications of work organization for occupational health policy: The case of Canada. Int J Health Serv 19(1):157–173.

Savage, LJ. 1954. The Foundations of Statistics. New York: Wiley.

Schäfer, RE. 1978. What Are We Talking About When We Talk About “Risk”? A Critical Survey of Risk and Risk Preference Theories. R.M.-78-69. Laxenburg, Austria: International Institute for Applied Systems Analysis.

Schein, EH. 1989. Organizational Culture and Leadership. San Francisco: Jossey-Bass.

Schneider, B. 1975a. Organizational climates: An essay. Pers Psychol 28:447–479.

—. 1975b. Organizational climate: Individual preferences and organizational realities revisited. J Appl Psychol 60:459–465.

Schneider, B and AE Reichers. 1983. On the etiology of climates. Pers Psychol 36:19–39.

Schneider, B, JJ Parkington and VM Buxton. 1980. Employee and customer perception of service in banks. Adm Sci Q 25:252–267.

Shannon, HS, V Walters, W Lewchuk, J Richardson, D Verma, T Haines and LA Moran. 1992. Health and safety approaches in the workplace. Unpublished report. Toronto: McMaster University.

Short, JF. 1984. The social fabric at risk: Toward the social transformation of risk analysis. Am Sociol Rev 49:711–725.

Simard, M. 1988. La prise de risque dans le travail: un phénomène organisationnel. In La prise de risque dans le travail, edited by P Goguelin and X Cuny. Marseille: Editions Octares.

Simard, M and A Marchand. 1994. The behaviour of first-line supervisors in accident prevention and effectiveness in occupational safety. Saf Sci 19:169–184.

Simard, M and A Marchand. 1995. L’adaptation des superviseurs à la gestion participative de la prévention des accidents. Relations Industrielles 50:567–589.

Simon, HA. 1959. Theories of decision making in economics and behavioural science. Am Econ Rev 49:253–283.

Simon, HA et al. 1992. Decision making and problem solving. In Decision Making: Alternatives to Rational Choice Models, edited by M Zey. London: Sage.

Simonds, RH and Y Shafai-Sahrai. 1977. Factors apparently affecting the injury frequency in eleven matched pairs of companies. J Saf Res 9(3):120–127.

Slovic, P. 1987. Perception of risk. Science 236:280–285.

—. 1993. Perceptions of environmental hazards: Psychological perspectives. In Behaviour and Environment, edited by GE Stelmach and PA Vroon. Amsterdam: North Holland.

Slovic, P, B Fischhoff and S Lichtenstein. 1980. Perceived risk. In Societal Risk Assessment: How Safe Is Safe Enough?, edited by RC Schwing and WA Albers Jr. New York: Plenum Press.

—. 1984. Behavioural decision theory perspectives on risk and safety. Acta Psychol 56:183–203.

Slovic, P, H Kunreuther and GF White. 1974. Decision processes, rationality, and adjustment to natural hazards. In Natural Hazards, Local, National and Global, edited by GF White. New York: Oxford University Press.

Smith, MJ, HH Cohen, A Cohen and RJ Cleveland. 1978. Characteristics of successful safety programs. J Saf Res 10:5–15.

Smith, RB. 1993. Construction industry profile: Getting to the bottom of high accident rates. Occup Health Saf June:35–39.

Smith, TA. 1989. Why you should put your safety program under statistical control. Prof Saf 34(4):31–36.

Starr, C. 1969. Social benefit vs. technological risk. Science 165:1232–1238.

Sulzer-Azaroff, B. 1978. Behavioral ecology and accident prevention. J Organ Behav Manage 2:11–44.

Sulzer-Azaroff, B and D Fellner. 1984. Searching for performance targets in the behavioral analysis of occupational health and safety: An assessment strategy. J Organ Behav Manage 6(2):53–65.

Sulzer-Azaroff, B, TC Harris and KB McCann. 1994. Beyond training: Organizational performance management techniques. Occup Med: State Art Rev 9(2):321–339.

Swain, AD and HE Guttmann. 1983. Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. Sandia National Laboratories, NUREG/CR-1278, Washington, DC: US Nuclear Regulatory Commission.

Taylor, DH. 1981. The hermeneutics of accidents and safety. Ergonomics 24:487–495.

Thompson, JD and A Tuden. 1959. Strategies, structures and processes of organizational decisions. In Comparative Studies in Administration, edited by JD Thompson, PB Hammond, RW Hawkes, BH Junker, and A Tuden. Pittsburgh: Pittsburgh University Press.

Trimpop, RM. 1994. The Psychology of Risk Taking Behavior. Amsterdam: Elsevier.

Tuohy, C and M Simard. 1992. The impact of joint health and safety committees in Ontario and Quebec. Unpublished report, Canadian Association of Administrators of Labour Laws, Ottawa.

Tversky, A and D Kahneman. 1981. The framing of decisions and the psychology of choice. Science 211:453–458.

Vlek, C and G Cvetkovich. 1989. Social Decision Methodology for Technological Projects. Dordrecht, Holland: Kluwer.

Vlek, CAJ and PJ Stallen. 1980. Rational and personal aspects of risk. Acta Psychol 45:273–300.

von Neumann, J and O Morgenstern. 1947. Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press.

von Winterfeldt, D and W Edwards. 1984. Patterns of conflict about risky technologies. Risk Anal 4:55–68.

von Winterfeldt, D, RS John and K Borcherding. 1981. Cognitive components of risk ratings. Risk Anal 1:277–287.

Wagenaar, W. 1990. Risk evaluation and causes of accidents. Ergonomics 33(10/11).

Wagenaar, WA. 1992. Risk taking and accident causation. In Risk-taking Behaviour, edited by JF Yates. Chichester: Wiley.

Wagenaar, W, J Groeneweg, PTW Hudson and JT Reason. 1994. Promoting safety in the oil industry. Ergonomics 37(12):1999–2013.

Walton, RE. 1986. From control to commitment in the workplace. Harvard Bus Rev 63:76–84.

Wilde, GJS. 1986. Beyond the concept of risk homeostasis: Suggestions for research and application towards the prevention of accidents and lifestyle-related disease. Accident Anal Prev 18:377–401.

—. 1993. Effects of mass media communications on health and safety habits: An overview of issues and evidence. Addiction 88:983–996.

—. 1994. Risk homeostasis theory and its promise for improved safety. In Challenges to Accident Prevention: The Issue of Risk Compensation Behaviour, edited by R Trimpop and GJS Wilde. Groningen, The Netherlands: STYX Publications.

Yates, JF. 1992a. The risk construct. In Risk-taking Behaviour, edited by JF Yates. Chichester: Wiley.

—. 1992b. Risk-taking Behaviour. Chichester: Wiley.

Yates, JF and ER Stone. 1992. The risk construct. In Risk-taking Behaviour, edited by JF Yates. Chichester: Wiley.

Zembroski, EL. 1991. Lessons learned from man-made catastrophes. In Risk Management. New York: Hemisphere.

Zey, M. 1992. Decision Making: Alternatives to Rational Choice Models. London: Sage.

Zimolong, B. 1985. Hazard perception and risk estimation in accident causation. In Trends in Ergonomics/Human Factors II, edited by RB Eberts and CG Eberts. Amsterdam: Elsevier.

Zimolong, B. 1992. Empirical evaluation of THERP, SLIM and ranking to estimate HEPs. Reliab Eng Sys Saf 35:1–11.

Zimolong, B and R Trimpop. 1994. Managing human reliability in advanced manufacturing systems. In Design of Work and Development of Personnel in Advanced Manufacturing Systems, edited by G Salvendy and W Karwowski. New York: Wiley.

Zohar, D. 1980. Safety climate in industrial organizations: Theoretical and applied implications. J Appl Psychol 65(1):96–102.

Zuckerman, M. 1979. Sensation Seeking: Beyond the Optimal Level of Arousal. Hillsdale: Lawrence Erlbaum.