Monday, 04 April 2011 20:04

Methods of Safety Decision Making


A company is a complex system where decision making takes place in many connections and under various circumstances. Safety is only one of a number of requirements managers must consider when choosing among actions. Decisions relating to safety issues vary considerably in scope and character depending on the attributes of the risk problems to be managed and the decision maker’s position in the organization.

Much research has been undertaken on how people actually make decisions, both individually and in an organizational context: see, for instance, Janis and Mann (1977); Kahneman, Slovic and Tversky (1982); Montgomery and Svenson (1989). This article will examine selected research experience in this area as a basis for decision-making methods used in management of safety. In principle, decision making concerning safety is not much different from decision making in other areas of management. There is no simple method or set of rules for making good decisions in all situations, since the activities involved in safety management are too complex and varied in scope and character.

The main focus of this article will not be to present simple prescriptions or solutions but rather to provide more insight into some of the important challenges and principles for good decision making concerning safety. An overview of the scope, levels and steps in problem solving concerning safety issues will be given, mainly based on the work by Hale et al. (1994). Problem solving is a way of identifying the problem and eliciting viable remedies, and is an important first step in any decision process to be examined. In order to put the challenges of real-life decisions concerning safety into perspective, the principles of rational choice theory will be discussed. The last part of the article covers decision making in an organizational context and introduces the sociological perspective on decision making. Also included are some of the main problems and methods of decision making in the context of safety management, so as to provide more insight into the main dimensions, challenges and pitfalls of making decisions on safety issues.

The Context of Safety Decision Making

A general presentation of the methods of safety decision making is complicated because both safety issues and the character of the decision problems vary considerably over the lifetime of an enterprise. From concept and establishment to closure, the life cycle of a company may be divided into six main stages:

  1. design
  2. construction
  3. commissioning
  4. operation
  5. maintenance and modification
  6. decommissioning and demolition.

 

Each of the life-cycle elements involves decisions concerning safety which are not only specific to that phase alone but which also impact on some or all of the other phases. During design, construction and commissioning, the main challenges concern the choice, development and realization of the safety standards and specifications that have been decided upon. During operation, maintenance and demolition, the main objectives of safety management will be to maintain and possibly improve the determined level of safety. The construction phase also represents a “production phase” to some extent, because at the same time that construction safety principles must be adhered to, the safety specifications for what is being built must be realized.

Safety Management Decision Levels

Decisions about safety also differ in character depending on organizational level. Hale et al. (1994) distinguish among three main decision levels of safety management in the organization:

The level of execution is the level at which the actions of those involved (workers) directly influence the occurrence and control of hazards in the workplace. This level is concerned with the recognition of the hazards and the choice and implementation of actions to eliminate, reduce and control them. The degrees of freedom present at this level are limited; therefore, feedback and correction loops are concerned essentially with correcting deviations from established procedures and returning practice to a norm. As soon as a situation is identified where the norm agreed upon is no longer thought to be appropriate, the next higher level is activated.

The level of planning, organization and procedures is concerned with devising and formalizing the actions to be taken at the execution level in respect to the entire range of expected hazards. The planning and organization level, which sets out responsibilities, procedures, reporting lines and so on, is typically found in safety manuals. It is this level which develops new procedures for hazards new to the organization, and modifies existing procedures to keep up either with new insights about hazards or with standards for solutions relating to hazards. This level involves the translation of abstract principles into concrete task allocation and implementation, and corresponds to the improvement loop required in many quality systems.

The level of structure and management is concerned with the overall principles of safety management. This level is activated when the organization considers that the current planning and organizing levels are failing in fundamental ways to achieve accepted performance. It is the level at which the “normal” functioning of the safety management system is critically monitored and through which it is continually improved or maintained in face of changes in the external environment of the organization.

Hale et al. (1994) emphasize that the three levels are abstractions corresponding to three different kinds of feedback. They should not be seen as contiguous with the hierarchical levels of shop floor, first line and higher management, as the activities specified at each abstract level can be applied in many different ways. The way task allocations are made reflects the culture and methods of working of the individual company.

Safety Decision-Making Process

Safety problems must be managed through some kind of problem-solving or decision-making process. According to Hale et al. (1994) this process, which is designated the problem-solving cycle, is common to the three levels of safety management described above. The problem-solving cycle is a model of an idealized stepwise procedure for analysing and making decisions on safety problems caused by potential or actual deviations from desired, expected or planned achievements (figure 1).

Figure 1. The problem-solving cycle


Although the steps are the same in principle at all three safety management levels, the application in practice may differ somewhat depending on the nature of problems treated. The model shows that decisions which concern safety management span many types of problems. In practice, each of the following six basic decision problems in safety management will have to be broken down into several subdecisions which will form the basis for choices on each of the main problem areas.

  1. What is an acceptable safety level or standard of the activity/department/company, etc.?
  2. What criteria shall be used to assess the safety level?
  3. What is the current safety level?
  4. What are the causes of identified deviations between acceptable and observed level of safety?
  5. What means should be chosen to correct the deviations and keep up the safety level?
  6. How should corrective actions be implemented and followed up?
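The six decision problems above can be read as one pass through the problem-solving cycle. The sketch below is purely illustrative: the function names, arguments and numeric comparison are assumptions made here, not part of the model of Hale et al. (1994).

```python
# One idealized pass through the problem-solving cycle.
# All names and the threshold logic are illustrative assumptions.

def problem_solving_cycle(acceptable_level, observe, diagnose,
                          select_measure, implement):
    """Run a single safety problem through the six basic decision steps."""
    current_level = observe()                      # 3. what is the current safety level?
    deviation = acceptable_level - current_level   # 1+2. compare with the accepted standard
    if deviation <= 0:
        return "level acceptable; continue monitoring"
    causes = diagnose(deviation)                   # 4. causes of the identified deviation
    measure = select_measure(causes)               # 5. choose corrective means
    implement(measure)                             # 6. implement and follow up
    return f"implemented {measure} against {causes}"
```

In practice each step would itself be a sub-decision; the loop merely shows how the six questions chain together.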

 

Rational Choice Theory

Managers’ methods for making decisions must be based on some principle of rationality in order to gain acceptance among members of the organization. In practical situations what is rational may not always be easy to define, and the logical requirements of what may be defined as rational decisions may be difficult to fulfil. Rational choice theory (RCT), the conception of rational decision making, was originally developed to explain economic behaviour in the marketplace, and later generalized to explain not only economic behaviour but also the behaviour studied by nearly all social science disciplines, from political philosophy to psychology.

The psychological study of optimal human decision making is called subjective expected utility theory (SEU). RCT and SEU are basically the same; only the applications differ. SEU focuses on the thinking of the individual decision maker, while RCT has a wider application in explaining behaviour within whole organizations or institutions—see, for example, Neumann and Politser (1992). Most of the tools of modern operations research use the assumptions of SEU. They assume that what is desired is to maximize the achievement of some goal, under specific constraints, and assuming that all alternatives and consequences (or their probability distribution) are known (Simon and associates 1992). The essence of RCT and SEU can be summarized as follows (March and Simon 1993):

Decision makers, when encountering a decision-making situation, acquire and see the whole set of alternatives from which they will choose their action. This set is simply given; the theory does not tell how it is obtained.

To each alternative is attached a set of consequences—the events that will ensue if that particular alternative is chosen. Here the existing theories fall into three categories:

  • Certainty theories assume the decision maker has complete and accurate knowledge of the consequences that will follow on each alternative. In the case of certainty, the choice is unambiguous.
  • Risk theories assume accurate knowledge of a probability distribution of the consequences of each alternative. In the case of risk, rationality is usually defined as the choice of that alternative for which expected utility is greatest.
  • Uncertainty theories assume that the consequences of each alternative belong to some subset of all possible consequences, but that the decision maker cannot assign definite probabilities to the occurrence of particular consequences. In the case of uncertainty, the definition of rationality becomes problematic.
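For the risk category, the rational choice can be computed directly. A minimal sketch in Python, in which the safety alternatives, probabilities and utilities are all invented for illustration:

```python
# Choice under risk: each alternative carries a known probability
# distribution over consequence utilities, and the "rational" choice
# maximizes expected utility. All numbers below are hypothetical.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one alternative."""
    return sum(p * u for p, u in outcomes)

def choose_by_expected_utility(alternatives):
    """alternatives: dict mapping alternative name -> outcome list."""
    return max(alternatives, key=lambda name: expected_utility(alternatives[name]))

# Hypothetical safety decision:
#   guard:    0.95*10 + 0.05*(-50) =  7.0
#   training: 0.80*12 + 0.20*(-50) = -0.4  -> the guard is chosen
alternatives = {
    "install machine guard": [(0.95, 10), (0.05, -50)],
    "training only": [(0.80, 12), (0.20, -50)],
}
```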

 

At the outset, the decision maker makes use of a “utility function” or a “preference ordering” that ranks all sets of consequences from the most preferred to the least preferred. It should be noted that another proposal is the rule of “minimax risk”, by which one considers the “worst set of consequences” that may follow from each alternative, then selects the alternative whose worst set of consequences is preferred to the worst sets attached to other alternatives.
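The minimax risk rule can be sketched the same way; again the alternatives and consequence utilities are invented:

```python
# Minimax risk rule: rank each alternative by its worst possible
# consequence and pick the one whose worst case is least bad.
# The utilities below are hypothetical.

def minimax_choice(alternatives):
    """alternatives: dict mapping alternative name -> list of consequence utilities."""
    return max(alternatives, key=lambda name: min(alternatives[name]))
```

Note that minimax and expected utility can disagree: an alternative with the better average outcome may still be rejected because of a catastrophic worst case.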

The decision maker elects the alternative closest to the preferred set of consequences.

One difficulty of RCT is that the term rationality is in itself problematic. What is rational depends upon the social context in which the decision takes place. As pointed out by Flanagan (1991), it is important to distinguish between the two terms rationality and logicality. Rationality is tied up with issues related to the meaning and quality of life for some individual or individuals, while logicality is not. The problem of the benefactor is precisely the issue which rational choice models fail to clarify, in that they assume value neutrality, which is seldom present in real-life decision making (Zey 1992). Although the value of RCT and SEU as explanatory theory is somewhat limited, it has been useful as a theoretical model for “rational” decision making. Evidence that behaviour often deviates from outcomes predicted by expected utility theory does not necessarily mean that the theory inappropriately prescribes how people should make decisions. As a normative model the theory has proven useful in generating research concerning how and why people make decisions which violate the optimal utility axiom.

Applying the ideas of RCT and SEU to safety decision making may provide a basis for evaluating the “rationality” of choices made with respect to safety—for instance, in the selection of preventive measures given a safety problem one wants to alleviate. Quite often it will not be possible to comply with the principles of rational choice because of lack of reliable data. Either one may not have a complete picture of available or possible actions, or else the uncertainty of the effects of different actions, for instance, implementation of different preventive measures, may be large. Thus, RCT may be helpful in pointing out some weaknesses in a decision process, but it provides little guidance in improving the quality of choices to be made. Another limitation in the applicability of rational choice models is that most decisions in organizations do not necessarily search for optimal solutions.

Problem Solving

Rational choice models describe the process of evaluating and choosing between alternatives. However, deciding on a course of action also requires what Simon and associates (1992) describe as problem solving. This is the work of choosing issues that require attention, setting goals, and finding or deciding on suitable courses of action. (While managers may know they have problems, they may not understand the situation well enough to direct their attention to any plausible course of action.) As mentioned earlier, the theory of rational choice has its roots mainly in economics, statistics and operations research, and only recently has it received attention from psychologists. The theory and methods of problem solving have a very different history. Problem solving was initially studied principally by psychologists, and more recently by researchers in artificial intelligence.

Empirical research has shown that the process of problem solving takes place more or less in the same way for a wide range of activities. First, problem solving generally proceeds by selective search through large sets of possibilities, using rules of thumb (heuristics) to guide the search. Because the possibilities in realistic problem situations are virtually endless, a trial-and-error search would simply not work. The search must be highly selective. One of the procedures often used to guide the search is described as hill climbing—using some measure of approach to the goal to determine where it is most profitable to look next. Another and more powerful common procedure is means-ends analysis. When using this method, the problem solver compares the present situation with the goal, detects differences between them, and then searches memory for actions that are likely to reduce the difference. Another thing that has been learned about problem solving, especially when the solver is an expert, is that the solver’s thought process relies on large amounts of information that is stored in memory and that is retrievable whenever the solver recognizes cues signalling its relevance.

One of the accomplishments of contemporary problem-solving theory has been to provide an explanation for the phenomena of intuition and judgement frequently seen in experts’ behaviour. The store of expert knowledge seems to be in some way indexed by the recognition cues that make it accessible. Combined with some basic inferential capabilities (perhaps in the form of means-ends analysis), this indexing function is applied by the expert to find satisfactory solutions to difficult problems.

Most of the challenges which managers of safety face will be of a kind that require some kind of problem solving—for example, detecting what the underlying causes of an accident or a safety problem really are, in order to figure out some preventive measure. The problem-solving cycle developed by Hale et al. (1994)—see figure 1—gives a good description of what is involved in the stages of safety problem solving. What seems evident is that at present it is not possible and may not even be desirable to develop a strictly logical or mathematical model for what is an ideal problem-solving process in the same manner as has been followed for rational choice theories. This view is supported by the knowledge of other difficulties in the real-life instances of problem solving and decision making which are discussed below.

Ill-Structured Problems, Agenda Setting and Framing

In real life, situations frequently occur when the problem-solving process becomes obscure because the goals themselves are complex and sometimes ill-defined. What often happens is that the very nature of the problem is successively transformed in the course of exploration. To the extent that the problem has these characteristics, it may be called ill-structured. Typical examples of problem-solving processes with such characteristics are (1) the development of new designs and (2) scientific discovery.

The solving of ill-defined problems has only recently become a subject of scientific study. When problems are ill-defined, the problem-solving process requires substantial knowledge about solution criteria as well as knowledge about the means for satisfying those criteria. Both kinds of knowledge must be evoked in the course of the process, and the evocation of criteria and constraints continually modifies and remoulds the solution which the problem-solving process is addressing. Some research concerning problem structuring and analysis within risk and safety issues has been published, and may be profitably studied; see, for example, Rosenhead 1989 and Chicken and Haynes 1989.

Setting the agenda, which is the very first step of the problem-solving process, is also the least understood. What brings a problem to the head of the agenda is the identification of a problem and the consequent challenge to determine how it can be represented in a way that facilitates its solution; these are subjects that only recently have been focused upon in studies of decision processes. The task of setting an agenda is of utmost importance because both individual human beings and human institutions have limited capacities in dealing with many tasks simultaneously. While some problems are receiving full attention, others are neglected. When new problems emerge suddenly and unexpectedly (e.g., firefighting), they may replace orderly planning and deliberation.

The way in which problems are represented has much to do with the quality of the solutions that are found. At present the representation or framing of problems is even less well understood than agenda setting. A characteristic of many advances in science and technology is that a change in framing will bring about a whole new approach to solving a problem. One example of such a change in the framing of problem definition in safety science in recent years is the shift of focus away from the details of the work operations to the organizational decisions and conditions which create the whole work situation—see, for example, Wagenaar et al. (1994).

Decision Making in Organizations

Models of organizational decision making view the question of choice as a logical process in which decision makers try to maximize their objectives in an orderly series of steps (figure 2). This process is in principle the same for safety as for decisions on other issues that the organization has to manage.

Figure 2. The decision-making process in organizations


These models may serve as a general framework for “rational decision making” in organizations; however, such ideal models have several limitations and they leave out important aspects of processes which actually may take place. Some of the significant characteristics of organizational decision-making processes are discussed below.

Criteria applied in organizational choice

While rational choice models are preoccupied with finding the optimal alternative, other criteria may be even more relevant in organizational decisions. As observed by March and Simon (1993), organizations for various reasons search for satisfactory rather than optimal solutions.

  • Optimal alternatives. An alternative can be defined as optimal if (1) there exists a set of criteria that permits all alternatives to be compared and (2) the alternative in question is preferred, by these criteria, to all other alternatives (see also the discussion of rational choice, above).
  • Satisfactory alternatives. An alternative is satisfactory if (1) there exists a set of criteria that describes minimally satisfactory alternatives and (2) the alternative in question meets or exceeds these criteria.
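The contrast between the two criteria can be sketched in code. The candidate measures, scores and threshold below are invented for illustration:

```python
# Optimal versus satisfactory choice (after March and Simon 1993).
# Alternatives are hypothetical safety measures scored by, say,
# expected risk reduction.

def optimal_choice(alternatives, score):
    """Requires comparing every alternative against every other."""
    return max(alternatives, key=score)

def satisfactory_choice(alternatives, meets_criteria):
    """Stops at the first alternative that meets the minimal criteria."""
    for alternative in alternatives:
        if meets_criteria(alternative):
            return alternative
    return None  # nothing satisfies: search further or relax the criteria
```

Satisficing needs only one pass and no global comparison, which is why it scales to the incomplete information typical of real decisions.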

 

According to March and Simon (1993) most human decision making, whether individual or organizational, is concerned with the discovery and selection of satisfactory alternatives. Only in exceptional cases is it concerned with discovery and selection of optimal alternatives. In safety management, satisfactory alternatives with respect to safety will usually suffice, so that a given solution to a safety problem must meet specified standards. The constraints which typically rule out an optimal choice in safety decisions are economic: “Good enough, but as cheap as possible”.

Programmed decision making

Exploring the parallels between human decision making and organizational decision making, March and Simon (1993) argued that organizations can never be perfectly rational, because their members have limited information-processing capabilities. It is claimed that decision makers at best can achieve only limited forms of rationality because they (1) usually have to act on the basis of incomplete information, (2) are able to explore only a limited number of alternatives relating to any given decision, and (3) are unable to attach accurate values to outcomes. March and Simon maintain that the limits on human rationality are institutionalized in the structure and modes of functioning of our organizations. In order to make the decision-making process manageable, organizations fragment, routinize and limit the decision process in several ways. Departments and work units have the effect of segmenting the organization’s environment, of compartmentalizing responsibilities, and thus of simplifying the domains of interest and decision making of managers, supervisors and workers. Organizational hierarchies perform a similar function, providing channels of problem solving in order to make life more manageable. This creates a structure of attention, interpretation and operation that exerts a crucial influence on what is appreciated as “rational” choices of the individual decision maker in the organizational context. March and Simon named these organized sets of responses performance programmes, or simply programmes. The term programme is not intended to connote complete rigidity. The content of the programme may be adaptive to a large number of characteristics that initiate it. The programme may also be conditional on data that are independent of the initiating stimuli. It is then more properly called a performance strategy.

A set of activities is regarded as routinized to the degree that choice has been simplified by the development of a fixed response to defined stimuli. If searches have been eliminated, but choice remains in the form of clearly defined systematic computing routines, the activity is designated as routinized. Activities are regarded as unroutinized to the extent that they have to be preceded by programme-developing activities of a problem-solving kind. The distinction made by Hale et al. (1994) (discussed above) between the levels of execution, planning and system structure/management carries similar implications concerning the structuring of the decision-making process.

Programming influences decision making in two ways: (1) by defining how a decision process should be run, who should participate, and so on, and (2) by prescribing choices to be made based on the information and alternatives at hand. The effects of programming are on the one hand positive in the sense that they may increase the efficiency of the decision process and assure that problems are not left unresolved, but are treated in a way that is well structured. On the other hand, rigid programming may hamper the flexibility that is needed especially in the problem-solving phase of a decision process in order to generate new solutions. For example, many airlines have established fixed procedures for treatment of reported deviations, so-called flight reports or maintenance reports, which require that each case be examined by an appointed person and that a decision be made concerning preventive actions to be taken based on the incident. Sometimes the decision may be that no action shall be taken, but the procedures assure that such a decision is deliberate, and not a result of negligence, and that there is a responsible decision maker involved in the decisions.
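Such a performance programme can be pictured as a fixed stimulus-response table. The categories, actions and reviewer role below are hypothetical, loosely modelled on the airline deviation-report procedure just described:

```python
# A performance programme as a stimulus -> response table.
# Categories, actions and the reviewer role are invented for illustration.

PROGRAMME = {
    "minor deviation": "log and monitor trend",
    "repeated deviation": "revise the relevant procedure",
    "serious incident": "investigate and decide preventive actions",
}

def handle_report(category, reviewer):
    """Every report receives a deliberate, recorded decision by a named person."""
    action = PROGRAMME.get(category, "escalate to the planning level")
    return {"category": category, "action": action, "decided_by": reviewer}
```

The default branch mirrors the point made by Hale et al.: when the norm no longer fits the stimulus, the next higher decision level is activated.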

The degree to which activities are programmed influences risk taking. Wagenaar (1990) maintained that most accidents are consequences of routine behaviour without any consideration of risk. The real problem of risk occurs at higher levels in organizations, where the unprogrammed decisions are made. But risks are most often not taken consciously. They tend to be results of decisions made on issues which are not directly related to safety, but where preconditions for safe operation were inadvertently affected. Managers and other high-level decision makers are thus more often permitting opportunities for risks than taking risks.

Decision Making, Power and Conflict of Interests

The ability to influence the outcomes of decision-making processes is a well-recognized source of power, and one that has attracted considerable attention in organization-theory literature. Since organizations are in large measure decision-making systems, an individual or group can exert major influence on the decision processes of the organization. According to Morgan (1986) the kinds of power used in decision making can be classified into the following three interrelated elements:

  1. The decision premises. Influence on the decision premises may be exerted in several ways. One of the most effective ways of “making” a decision is to allow it to be made by default. Hence much of the political activity within an organization depends on the control of agendas and other decision premises that influence how particular decisions will be approached, perhaps in ways that prevent certain core issues from surfacing at all. In addition, decision premises are manipulated by the unobtrusive control embedded in choice of those vocabularies, structures of communications, attitudes, beliefs, rules and procedures which are accepted without questioning. These factors shape decisions by the way we think and act. According to Morgan (1986), visions of what the problems and issues are and how they can be tackled often act as mental straitjackets that prevent us from seeing other ways of formulating our basic concerns and the alternative courses of action that are available.
  2. The decision processes. Control of decision processes is usually more visible than the control of decision premises. How to treat an issue involves questions such as who should be involved, when the decision should be made, how the issue should be handled at meetings, and how it should be reported. The ground rules that are to guide decision making are important variables that organization members can manipulate in order to influence the outcome.
  3. The decision issues and objectives. A final way of controlling decision making is to influence the issues and objectives to be addressed and the evaluative criteria to be employed. An individual can shape the issues and objectives most directly through preparing reports and contributing to the discussion on which the decision will be based. By emphasizing the importance of particular constraints, selecting and evaluating the alternatives on which a decision will be made, and highlighting the importance of certain values or outcomes, decision makers can exert considerable influence on the decision that emerges from discussion.

 

Some decision problems may carry a conflict of interest—for example, between management and employees. Disagreement may occur on the definition of what is really the problem—what Rittel and Webber (1973) characterized as “wicked” problems, to be distinguished from problems that are “tame” with respect to securing consent. In other cases, parties may agree on problem definition but not on how the problem should be solved, or what are acceptable solutions or criteria for solutions. The attitudes or strategies of conflicting parties will define not only their problem-solving behaviour, but also the prospects of reaching an acceptable solution through negotiations. Important variables are how parties attempt to satisfy their own versus the other party’s concerns (figure 3). Successful collaboration requires that both parties are assertive concerning their own needs, but are simultaneously willing to take the needs of the other party equally into consideration.

Figure 3. Five styles of negotiating behaviour


Another interesting typology, based on the amount of agreement on goals and means, was developed by Thompson and Tuden (1959) (cited in Koopman and Pool 1991). The authors suggested a “best-fitting strategy” based on knowledge about the parties’ perceptions of the causation of the problem and about preferences of outcomes (figure 4).

Figure 4. A typology of problem-solving strategy


If there is agreement on goals and means, the decision can be calculated—for example, developed by some experts. If the means to the desired ends are unclear, these experts will have to reach a solution through consultation (majority judgement). If there is any conflict about the goals, consultation between the parties involved is necessary. However, if agreement is lacking both on goals and means, the organization is really endangered. Such a situation requires charismatic leadership which can “inspire” a solution acceptable to the conflicting parties.
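The typology amounts to a lookup on two yes/no dimensions, which can be sketched as follows; the strategy labels simply follow the discussion above:

```python
# Thompson and Tuden's (1959) typology of problem-solving strategy,
# keyed on agreement about goals and agreement about means.

def best_fitting_strategy(agree_on_goals, agree_on_means):
    if agree_on_goals and agree_on_means:
        return "computation: the decision can be calculated by experts"
    if agree_on_goals:
        return "majority judgement: experts reach a solution through consultation"
    if agree_on_means:
        return "consultation between the parties involved"
    return "inspiration: charismatic leadership must inspire a solution"
```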

Decision making within an organizational framework thus opens up perspectives far beyond those of rational choice or individual problem-solving models. Decision processes must be seen within the framework of organizational and management processes, where the concept of rationality may take on new and different meanings from those defined by the logicality of rational choice approaches embedded in, for example, operations research models. Decision making carried out within safety management must be regarded in light of such a perspective as will allow a full understanding of all aspects of the decision problems at hand.

Summary and Conclusions

Decision making can generally be described as a process starting with an initial situation (initial state) which decision makers perceive to be deviating from a desired goal situation (goal state), although they do not know in advance how to alter the initial state into the goal state (Huber 1989). The problem solver transforms the initial state into the goal state by applying one or more operators (activities that alter states). Often a sequence of operators is required to bring about the desired change.

The research literature on the subject provides no simple answers on how to make decisions on safety issues; nevertheless, the methods of decision making used must be rational and logical if they are to gain acceptance. Rational choice theory represents an elegant conception of how optimal decisions are made. However, within safety management, rational choice theory cannot be easily applied. The most obvious limitation is the lack of valid and reliable data on potential choices with respect to both completeness and knowledge of consequences. Another difficulty is that the concept of rationality assumes a benefactor, which may differ depending on which perspective is chosen in a decision situation. Even so, the rational choice approach may still be helpful in pointing out some of the difficulties and shortcomings of the decisions to be made.

Often the challenge is not to choose wisely among alternative actions, but to analyse the situation in order to find out what the problem really is. In analysing safety management problems, structuring the problem is often the most important task; understanding the problem is a prerequisite for finding an acceptable solution. The key issue in problem solving is not to identify a single superior method, since none is likely to exist given the wide range of problems within risk assessment and safety management. The main point is rather to take a structured approach and to document the analysis and the decisions made in such a way that the procedures and evaluations are traceable.

Organizations manage some of their decision making through programmed actions. Programmed actions in the form of fixed procedures and decision-making routines can be very useful in safety management; an example is how some companies treat reported deviations and near accidents. Programming can be an efficient way to control decision-making processes in an organization, provided that the safety issues and the decision rules are clear.
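
As an illustration, a programmed decision rule for triaging reported deviations might look like the following sketch. The severity scale, the recurrence threshold and the action texts are all invented for the example, not taken from any company's procedure.

```python
# Hypothetical example of a "programmed" decision rule for reported
# deviations and near accidents. Scale, threshold and actions are invented.
def triage(severity: int, recurrences: int) -> str:
    """Map a report to a fixed follow-up action.

    severity: 1 (minor) to 3 (serious injury potential)
    recurrences: number of earlier reports of the same deviation
    """
    if severity >= 3:
        return "immediate investigation"
    if severity == 2 or recurrences >= 3:
        return "corrective action within 30 days"
    return "log and monitor"

print(triage(2, 0))
```

A rule of this kind works only as long as the conditions above hold: the safety issues are well understood and the decision criteria are explicit.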

In real life, decisions are made within an organizational and social context where conflicts of interest sometimes emerge. The decision process may be hindered by differing perceptions of what the problems are, of the criteria to apply, or of the acceptability of proposed solutions. Being aware of the presence and possible effects of vested interests helps in reaching decisions that are acceptable to all parties involved. Safety management encompasses a wide variety of problems, depending on the life cycle, the organizational level and the stage of problem solving or hazard alleviation that a problem concerns. In that sense, decision making concerning safety is as wide in scope and character as decision making on any other management issue.

 


Last modified on Tuesday, 23 August 2011 23:01

" DISCLAIMER: The ILO does not take responsibility for content presented on this web portal that is presented in any language other than English, which is the language used for the initial production and peer-review of original content. Certain statistics have not been updated since the production of the 4th edition of the Encyclopaedia (1998)."


Safety Policy and Leadership References

Abbey, A and JW Dickson. 1983. R&D work climate and innovation in semiconductors. Acad Manage J 26:362–368.

Andriessen, JHTH. 1978. Safe behavior and safety motivation. J Occup Acc 1:363–376.

Bailey, C. 1993. Improve safety program effectiveness with perception surveys. Prof Saf October:28–32.

Bluen, SD and C Donald. 1991. The nature and measurement of in-company industrial relations climate. S Afr J Psychol 21(1):12–20.

Brown, RL and H Holmes. 1986. The use of a factor-analytic procedure for assessing the validity of an employee safety climate model. Accident Anal Prev 18(6):445–470.

CCPS (Center for Chemical Process Safety). N.d. Guidelines for Safe Automation of Chemical Processes. New York: Center for Chemical Process Safety of the American Institute of Chemical Engineers.

Chew, DCE. 1988. Quelles sont les mesures qui assurent le mieux la sécurité du travail? Etude menée dans trois pays en développement d’Asie. Rev Int Travail 127:129–145.

Chicken, JC and MR Haynes. 1989. The Risk Ranking Method in Decision Making. Oxford: Pergamon.

Cohen, A. 1977. Factors in successful occupational safety programs. J Saf Res 9:168–178.

Cooper, MD, RA Phillips, VF Sutherland and PJ Makin. 1994. Reducing accidents using goal setting and feedback: A field study. J Occup Organ Psychol 67:219–240.

Cru, D and C Dejours. 1983. Les savoir-faire de prudence dans les métiers du bâtiment. Cahiers médico-sociaux 3:239–247.

Dake, K. 1991. Orienting dispositions in the perception of risk: An analysis of contemporary worldviews and cultural biases. J Cross Cult Psychol 22:61–82.

—. 1992. Myths of nature: Culture and the social construction of risk. J Soc Issues 48:21–37.

Dedobbeleer, N and F Béland. 1989. The interrelationship of attributes of the work setting and workers’ safety climate perceptions in the construction industry. In Proceedings of the 22nd Annual Conference of the Human Factors Association of Canada. Toronto.

—. 1991. A safety climate measure for construction sites. J Saf Res 22:97–103.

Dedobbeleer, N, F Béland and P German. 1990. Is there a relationship between attributes of construction sites and workers’ safety practices and climate perceptions? In Advances in Industrial Ergonomics and Safety II, edited by D Biman. London: Taylor & Francis.

Dejours, C. 1992. Intelligence ouvrière et organisation du travail. Paris: Harmattan.

DeJoy, DM. 1987. Supervisor attributions and responses for multicausal workplace accidents. J Occup Acc 9:213–223.

—. 1994. Managing safety in the workplace: An attribution theory analysis and model. J Saf Res 25:3–17.

Denison, DR. 1990. Corporate Culture and Organizational Effectiveness. New York: Wiley.

Dieterly, D and B Schneider. 1974. The effect of organizational environment on perceived power and climate: A laboratory study. Organ Behav Hum Perform 11:316–337.

Dodier, N. 1985. La construction pratique des conditions de travail: Préservation de la santé et vie quotidienne des ouvriers dans les ateliers. Sci Soc Santé 3:5–39.

Dunette, MD. 1976. Handbook of Industrial and Organizational Psychology. Chicago: Rand McNally.

Dwyer, T. 1992. Life and Death at Work. Industrial Accidents as a Case of Socially Produced Error. New York: Plenum Press.

Eakin, JM. 1992. Leaving it up to the workers: Sociological perspective on the management of health and safety in small workplaces. Int J Health Serv 22:689–704.

Edwards, W. 1961. Behavioural decision theory. Annu Rev Psychol 12:473–498.

Embrey, DE, P Humphreys, EA Rosa, B Kirwan and K Rea. 1984. An approach to assessing human error probabilities using structured expert judgement. NUREG/CR-3518. Washington, DC: US Nuclear Regulatory Commission.

Eyssen, G, J Eakin-Hoffman and R Spengler. 1980. Manager’s attitudes and the occurrence of accidents in a telephone company. J Occup Acc 2:291–304.

Field, RHG and MA Abelson. 1982. Climate: A reconceptualization and proposed model. Hum Relat 35:181–201.

Fischhoff, B and D MacGregor. 1991. Judged lethality: How much people seem to know depends on how they are asked. Risk Anal 3:229–236.

Fischhoff, B, L Furby and R Gregory. 1987. Evaluating voluntary risks of injury. Accident Anal Prev 19:51–62.

Fischhoff, B, S Lichtenstein, P Slovic, S Derby and RL Keeney. 1981. Acceptable risk. Cambridge: CUP.

Flanagan, O. 1991. The Science of the Mind. Cambridge: MIT Press.

Frantz, JP. 1992. Effect of location, procedural explicitness, and presentation format on user processing of and compliance with product warnings and instructions. Ph.D. Dissertation, University of Michigan, Ann Arbor.

Frantz, JP and TP Rhoades. 1993. A task analytic approach to the temporal and spatial placement of product warnings. Human Factors 35:713–730.

Frederiksen, M, O Jensen and AE Beaton. 1972. Prediction of Organizational Behavior. Elmsford, NY: Pergamon.

Freire, P. 1988. Pedagogy of the Oppressed. New York: Continuum.

Glick, WH. 1985. Conceptualizing and measuring organizational and psychological climate: Pitfalls in multi-level research. Acad Manage Rev 10(3):601–616.

Gouvernement du Québec. 1978. Santé et sécurité au travail: Politique québecoise de la santé et de la sécurité des travailleurs. Québec: Editeur officiel du Québec.

Haas, J. 1977. Learning real feelings: A study of high steel ironworkers’ reactions to fear and danger. Sociol Work Occup 4:147–170.

Hacker, W. 1987. Arbeitspsychologie. Stuttgart: Hans Huber.

Haight, FA. 1986. Risk, especially risk of traffic accident. Accident Anal Prev 18:359–366.

Hale, AR and AI Glendon. 1987. Individual Behaviour in the Control of Danger. Vol. 2. Industrial Safety Series. Amsterdam: Elsevier.

Hale, AR, B Hemning, J Carthey and B Kirwan. 1994. Extension of the Model of Behaviour in the Control of Danger. Volume 3—Extended model description. Delft University of Technology, Safety Science Group (Report for HSE). Birmingham, UK: Birmingham University, Industrial Ergonomics Group.

Hansen, L. 1993a. Beyond commitment. Occup Hazards 55(9):250.

—. 1993b. Safety management: A call for revolution. Prof Saf 38(30):16–21.

Harrison, EF. 1987. The Managerial Decision-making Process. Boston: Houghton Mifflin.

Heinrich, H, D Petersen and N Roos. 1980. Industrial Accident Prevention. New York: McGraw-Hill.

Hovden, J and TJ Larsson. 1987. Risk: Culture and concepts. In Risk and Decisions, edited by WT Singleton and J Hovden. New York: Wiley.

Howarth, CI. 1988. The relationship between objective risk, subjective risk and behaviour. Ergonomics 31:657–661.

Hox, JJ and IGG Kreft. 1994. Multilevel analysis methods. Sociol Methods Res 22(3):283–300.

Hoyos, CG and B Zimolong. 1988. Occupational Safety and Accident Prevention. Behavioural Strategies and Methods. Amsterdam: Elsevier.

Hoyos, CG and E Ruppert. 1993. Der Fragebogen zur Sicherheitsdiagnose (FSD). Bern: Huber.

Hoyos, CT, U Bernhardt, G Hirsch and T Arnhold. 1991. Vorhandenes und erwünschtes sicherheits-relevantes Wissen in Industriebetrieben. Zeitschrift für Arbeits- und Organisationspsychologie 35:68–76.

Huber, O. 1989. Information-processing operators in decision making. In Process and Structure of Human Decision Making, edited by H Montgomery and O Svenson. Chichester: Wiley.

Hunt, HA and RV Habeck. 1993. The Michigan disability prevention study: Research highlights. Unpublished report. Kalamazoo, MI: E.E. Upjohn Institute for Employment Research.

International Electrotechnical Commission (IEC). N.d. Draft Standard IEC 1508; Functional Safety: Safety-related Systems. Geneva: IEC.

Instrument Society of America (ISA). N.d. Draft Standard: Application of Safety Instrumented Systems for the Process Industries. North Carolina, USA: ISA.

International Organization for Standardization (ISO). 1990. ISO 9000-3: Quality Management and Quality Assurance Standards: Guidelines for the Application of ISO 9001 to the Development, Supply and Maintenance of Software. Geneva: ISO.

James, LR. 1982. Aggregation bias in estimates of perceptual agreement. J Appl Psychol 67:219–229.

James, LR and AP Jones. 1974. Organizational climate: A review of theory and research. Psychol Bull 81(12):1096–1112.

Janis, IL and L Mann. 1977. Decision-making: A Psychological Analysis of Conflict, Choice and Commitment. New York: Free Press.

Johnson, BB. 1991. Risk and culture research: Some caution. J Cross Cult Psychol 22:141–149.

Johnson, EJ and A Tversky. 1983. Affect, generalization, and the perception of risk. J Personal Soc Psychol 45:20–31.

Jones, AP and LR James. 1979. Psychological climate: Dimensions and relationships of individual and aggregated work environment perceptions. Organ Behav Hum Perform 23:201–250.

Joyce, WF and JWJ Slocum. 1984. Collective climate: Agreement as a basis for defining aggregate climates in organizations. Acad Manage J 27:721–742.

Jungermann, H and P Slovic. 1987. Die Psychologie der Kognition und Evaluation von Risiko. Unpublished manuscript. Technische Universität Berlin.

Kahneman, D and A Tversky. 1979. Prospect theory: An analysis of decision under risk. Econometrica 47:263–291.

—. 1984. Choices, values, and frames. Am Psychol 39:341–350.

Kahnemann, D, P Slovic and A Tversky. 1982. Judgement under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.

Kasperson, RE. 1986. Six propositions on public participation and their relevance for risk communication. Risk Anal 6:275–281.

Kleinhesselink, RR and EA Rosa. 1991. Cognitive representation of risk perception. J Cross Cult Psychol 22:11–28.

Komaki, J, KD Barwick and LR Scott. 1978. A behavioral approach to occupational safety: Pinpointing and reinforcing safe performance in a food manufacturing plant. J Appl Psychol 4:434–445.

Komaki, JL. 1986. Promoting job safety and accident prevention. In Health and Industry: A Behavioral Medicine Perspective, edited by MF Cataldo and TJ Coats. New York: Wiley.

Konradt, U. 1994. Handlungsstrategien bei der Störungsdiagnose an flexiblen Fertigungs-einrichtungen. Zeitschrift für Arbeits- und Organisationspsychologie 38:54–61.

Koopman, P and J Pool. 1991. Organizational decision making: Models, contingencies and strategies. In Distributed Decision Making. Cognitive Models for Cooperative Work, edited by J Rasmussen, B Brehmer and J Leplat. Chichester: Wiley.

Koslowski, M and B Zimolong. 1992. Gefahrstoffe am Arbeitsplatz: Organisatorische Einflüsse auf Gefahrenbewußstein und Risikokompetenz. In Workshop Psychologie der Arbeitssicherheit, edited by B Zimolong and R Trimpop. Heidelberg: Asanger.

Koys, DJ and TA DeCotiis. 1991. Inductive measures of psychological climate. Hum Relat 44(3):265–285.

Krause, TH, JH Hidley and SJ Hodson. 1990. The Behavior-based Safety Process. New York: Van Norstrand Reinhold.

Lanier, EB. 1992. Reducing injuries and costs through team safety. ASSE J July:21–25.

Lark, J. 1991. Leadership in safety. Prof Saf 36(3):33–35.

Lawler, EE. 1986. High-involvement Management. San Francisco: Jossey Bass.

Lehto, MR. 1992. Designing warning signs and warnings labels: Scientific basis for initial guideline. Int J Ind Erg 10:115–119.

Lehto, MR and JD Papastavrou. 1993. Models of the warning process: Important implications towards effectiveness. Safety Science 16:569–595.

Lewin, K. 1951. Field Theory in Social Science. New York: Harper and Row.

Likert, R. 1967. The Human Organization. New York: McGraw Hill.

Lopes, LL and P-HS Ekberg. 1980. Test of an ordering hypothesis in risky decision making. Acta Psychol 45:161–167.

Machlis, GE and EA Rosa. 1990. Desired risk: Broadening the social amplification of risk framework. Risk Anal 10:161–168.

March, J and H Simon. 1993. Organizations. Cambridge: Blackwell.

March, JG and Z Shapira. 1992. Variable risk preferences and the focus of attention. Psychol Rev 99:172–183.

Mason, WM, GY Wong and B Entwisle. 1983. Contextual analysis through the multilevel linear model. In Sociologic Methodology, 1983–1984. San Francisco: Jossey-Bass.

Mattila, M, M Hyttinen and E Rantanen. 1994. Effective supervisory behavior and safety at the building site. Int J Ind Erg 13:85–93.

Mattila, M, E Rantanen and M Hyttinen. 1994. The quality of work environment, supervision and safety in building construction. Saf Sci 17:257–268.

McAfee, RB and AR Winn. 1989. The use of incentives/feedback to enhance work place safety: A critique of the literature. J Saf Res 20(1):7–19.

McSween, TE. 1995. The Values-based Safety Process. New York: Van Norstrand Reinhold.

Melia, JL, JM Tomas and A Oliver. 1992. Concepciones del clima organizacional hacia la seguridad laboral: Replication del modelo confirmatorio de Dedobbeleer y Béland. Revista de Psicologia del Trabajo y de las Organizaciones 9(22).

Minter, SG. 1991. Creating the safety culture. Occup Hazards August:17–21.

Montgomery, H and O Svenson. 1989. Process and Structure of Human Decision Making. Chichester: Wiley.

Moravec, M. 1994. The 21st century employer-employee partnership. HR Mag January:125–126.

Morgan, G. 1986. Images of Organizations. Beverly Hills: Sage.

Nadler, D and ML Tushman. 1990. Beyond the charismatic leader. Leadership and organizational change. Calif Manage Rev 32:77–97.

Näsänen, M and J Saari. 1987. The effects of positive feedback on housekeeping and accidents at a shipyard. J Occup Acc 8:237–250.

National Research Council. 1989. Improving Risk Communication. Washington, DC: National Academy Press.

Naylor, JD, RD Pritchard and DR Ilgen. 1980. A Theory of Behavior in Organizations. New York: Academic Press.

Neumann, PJ and PE Politser. 1992. Risk and optimality. In Risk-taking Behaviour, edited by FJ Yates. Chichester: Wiley.

Nisbett, R and L Ross. 1980. Human Inference: Strategies and Shortcomings of Social Judgement. Englewood Cliffs: Prentice-Hall.

Nunnally, JC. 1978. Psychometric Theory. New York: McGraw-Hill.

Oliver, A, JM Tomas and JL Melia. 1993. Una segunda validacion cruzada de la escala de clima organizacional de seguridad de Dedobbeleer y Béland. Ajuste confirmatorio de los modelos unofactorial, bifactorial y trifactorial. Psicologica 14:59–73.

Otway, HJ and D von Winterfeldt. 1982. Beyond acceptable risk: On the social acceptability of technologies. Policy Sci 14:247–256.

Perrow, C. 1984. Normal Accidents: Living with High-risk Technologies. New York: Basic Books.

Petersen, D. 1993. Establishing good “safety culture” helps mitigate workplace dangers. Occup Health Saf 62(7):20–24.

Pidgeon, NF. 1991. Safety culture and risk management in organizations. J Cross Cult Psychol 22:129–140.

Rabash, J and G Woodhouse. 1995. MLn command reference. Version 1.0 March 1995, ESRC.

Rachman, SJ. 1974. The Meanings of Fear. Harmondsworth: Penguin.

Rasmussen, J. 1983. Skills, rules, knowledge, signals, signs and symbols and other distinctions. IEEE T Syst Man Cyb 3:266–275.

Reason, JT. 1990. Human Error. Cambridge: CUP.

Rees, JV. 1988. Self-regulation: An effective alternative to direct regulation by OSHA? Policy Stud J 16:603–614.

Renn, O. 1981. Man, technology and risk: A study on intuitive risk assessment and attitudes towards nuclear energy. Spezielle Berichte der Kernforschungsanlage Jülich.

Rittel, HWJ and MM Webber. 1973. Dilemmas in a general theory of planning. Pol Sci 4:155-169.

Robertson, A and M Minkler. 1994. New health promotion movement: A critical examination. Health Educ Q 21(3):295–312.

Rogers, CR. 1961. On Becoming a Person. Boston: Houghton Mifflin.

Rohrmann, B. 1992a. The evaluation of risk communication effectiveness. Acta Psychol 81:169–192.

—. 1992b. Risiko Kommunikation, Aufgaben-Konzepte-Evaluation. In Psychologie der Arbeitssicherheit, edited by B Zimolong and R Trimpop. Heidelberg: Asanger.

—. 1995. Risk perception research: Review and documentation. In Arbeiten zur Risikokommunikation. Heft 48. Jülich: Forschungszentrum Jülich.

—. 1996. Perception and evaluation of risks: A cross cultural comparison. In Arbeiten zur Risikokommunikation Heft 50. Jülich: Forschungszentrum Jülich.

Rosenhead, J. 1989. Rational Analysis for a Problematic World. Chichester: Wiley.

Rumar, K. 1988. Collective risk but individual safety. Ergonomics 31:507–518.

Rummel, RJ. 1970. Applied Factor Analysis. Evanston, IL: Northwestern University Press.

Ruppert, E. 1987. Gefahrenwahrnehmung—ein Modell zur Anforderungsanalyse für die verhaltensabbhängige Kontrolle von Arbeitsplatzgefahren. Zeitschrift für Arbeitswissenschaft 2:84–87.

Saari, J. 1976. Characteristics of tasks associated with the occurrence of accidents. J Occup Acc 1:273–279.

Saari, J. 1990. On strategies and methods in company safety work: From informational to motivational strategies. J Occup Acc 12:107–117.

Saari, J and M Näsänen. 1989. The effect of positive feedback on industrial housekeeping and accidents: A long-term study at a shipyard. Int J Ind Erg 4:3:201–211.

Sarkis, H. 1990. What really causes accidents. Presentation at Wausau Insurance Safety Excellence Seminar. Canandaigua, NY, US, June 1990.

Sass, R. 1989. The implications of work organization for occupational health policy: The case of Canada. Int J Health Serv 19(1):157–173.

Savage, LJ. 1954. The Foundations of Statistics. New York: Wiley.

Schäfer, RE. 1978. What Are We Talking About When We Talk About “Risk”? A Critical Survey of Risk and Risk Preferences Theories. R.M.-78-69. Laxenburg, Austria: International Institute for Applied Systems Analysis.

Schein, EH. 1989. Organizational Culture and Leadership. San Francisco: Jossey-Bass.

Schneider, B. 1975a. Organizational climates: An essay. Pers Psychol 28:447–479.

—. 1975b. Organizational climate: Individual preferences and organizational realities revisited. J Appl Psychol 60:459–465.

Schneider, B and AE Reichers. 1983. On the etiology of climates. Pers Psychol 36:19–39.

Schneider, B, JJ Parkington and VM Buxton. 1980. Employee and customer perception of service in banks. Adm Sci Q 25:252–267.

Shannon, HS, V Walters, W Lewchuk, J Richardson, D Verma, T Haines and LA Moran. 1992. Health and safety approaches in the workplace. Unpublished report. Toronto: McMaster University.

Short, JF. 1984. The social fabric at risk: Toward the social transformation of risk analysis. Am Sociol Rev 49:711–725.

Simard, M. 1988. La prise de risque dans le travail: un phénomène organisationnel. In La prise de risque dans le travail, edited by P Goguelin and X Cuny. Marseille: Editions Octares.

Simard, M and A Marchand. 1994. The behaviour of first-line supervisors in accident prevention and effectiveness in occupational safety. Saf Sci 19:169–184.

Simard, M and A Marchand. 1995. L’adaptation des superviseurs à la gestion participative de la prévention des accidents. Relations Industrielles 50:567–589.

Simon, HA. 1959. Theories of decision making in economics and behavioural science. Am Econ Rev 49:253–283.

Simon, HA et al. 1992. Decision making and problem solving. In Decision Making: Alternatives to Rational Choice Models, edited by M Zey. London: Sage.

Simonds, RH and Y Shafai-Sahrai. 1977. Factors apparently affecting the injury frequency in eleven matched pairs of companies. J Saf Res 9(3):120–127.

Slovic, P. 1987. Perception of risk. Science 236:280–285.

—. 1993. Perceptions of environmental hazards: Psychological perspectives. In Behaviour and Environment, edited by GE Stelmach and PA Vroon. Amsterdam: North Holland.

Slovic, P, B Fischhoff and S Lichtenstein. 1980. Perceived risk. In Societal Risk Assessment: How Safe Is Safe Enough?, edited by RC Schwing and WA Albers Jr. New York: Plenum Press.

—. 1984. Behavioural decision theory perspectives on risk and safety. Acta Psychol 56:183–203.

Slovic, P, H Kunreuther and GF White. 1974. Decision processes, rationality, and adjustment to natural hazards. In Natural Hazards, Local, National and Global, edited by GF White. New York: Oxford University Press.

Smith, MJ, HH Cohen, A Cohen and RJ Cleveland. 1978. Characteristics of successful safety programs. J Saf Res 10:5–15.

Smith, RB. 1993. Construction industry profile: Getting to the bottom of high accident rates. Occup Health Saf June:35–39.

Smith, TA. 1989. Why you should put your safety program under statistical control. Prof Saf 34(4):31–36.

Starr, C. 1969. Social benefit vs. technological risk. Science 165:1232–1238.

Sulzer-Azaroff, B. 1978. Behavioral ecology and accident prevention. J Organ Behav Manage 2:11–44.

Sulzer-Azaroff, B and D Fellner. 1984. Searching for performance targets in the behavioral analysis of occupational health and safety: An assessment strategy. J Organ Behav Manage 6:2:53–65.

Sulzer-Azaroff, B, TC Harris and KB McCann. 1994. Beyond training: Organizational performance management techniques. Occup Med: State Art Rev 9:2:321–339.

Swain, AD and HE Guttmann. 1983. Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. Sandia National Laboratories, NUREG/CR-1278, Washington, DC: US Nuclear Regulatory Commission.

Taylor, DH. 1981. The hermeneutics of accidents and safety. Ergonomics 24:487–495.

Thompson, JD and A Tuden. 1959. Strategies, structures and processes of organizational decisions. In Comparative Studies in Administration, edited by JD Thompson, PB Hammond, RW Hawkes, BH Junker, and A Tuden. Pittsburgh: Pittsburgh University Press.

Trimpop, RM. 1994. The Psychology of Risk Taking Behavior. Amsterdam: Elsevier.

Tuohy, C and M Simard. 1992. The impact of joint health and safety committees in Ontario and Quebec. Unpublished report, Canadian Association of Administrators of Labour Laws, Ottawa.

Tversky, A and D Kahneman. 1981. The framing of decisions and the psychology of choice. Science 211:453–458.

Vlek, C and G Cvetkovich. 1989. Social Decision Methodology for Technological Projects. Dordrecht, Holland: Kluwer.

Vlek, CAJ and PJ Stallen. 1980. Rational and personal aspects of risk. Acta Psychol 45:273–300.

von Neumann, J and O Morgenstern. 1947. Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press.

von Winterfeldt, D and W Edwards. 1984. Patterns of conflict about risky technologies. Risk Anal 4:55–68.

von Winterfeldt, D, RS John and K Borcherding. 1981. Cognitive components of risk ratings. Risk Anal 1:277–287.

Wagenaar, W. 1990. Risk evaluation and causes of accidents. Ergonomics 33(10/11).

Wagenaar, WA. 1992. Risk taking and accident causation. In Risk-taking Behaviour, edited by JF Yates. Chichester: Wiley.

Wagenaar, W, J Groeneweg, PTW Hudson and JT Reason. 1994. Promoting safety in the oil industry. Ergonomics 37(12):1999–2013.

Walton, RE. 1986. From control to commitment in the workplace. Harvard Bus Rev 63:76–84.

Wilde, GJS. 1986. Beyond the concept of risk homeostasis: Suggestions for research and application towards the prevention of accidents and lifestyle-related disease. Accident Anal Prev 18:377–401.

—. 1993. Effects of mass media communications on health and safety habits: An overview of issues and evidence. Addiction 88:983–996.

—. 1994. Risk homeostasis theory and its promise for improved safety. In Challenges to Accident Prevention: The Issue of Risk Compensation Behaviour, edited by R Trimpop and GJS Wilde. Groningen, The Netherlands: STYX Publications.

Yates, JF. 1992a. The risk construct. In Risk Taking Behaviour, edited by JF Yates. Chichester: Wiley.

—. 1992b. Risk Taking Behaviour. Chichester: Wiley.

Yates, JF and ER Stone. 1992. The risk construct. In Risk Taking Behaviour, edited by JF Yates. Chichester: Wiley.

Zembroski, EL. 1991. Lessons learned from man-made catastrophes. In Risk Management. New York: Hemisphere.

Zey, M. 1992. Decision Making: Alternatives to Rational Choice Models. London: Sage.

Zimolong, B. 1985. Hazard perception and risk estimation in accident causation. In Trends in Ergonomics/Human Factors II, edited by RB Eberts and CG Eberts. Amsterdam: Elsevier.

Zimolong, B. 1992. Empirical evaluation of THERP, SLIM and ranking to estimate HEPs. Reliab Eng Sys Saf 35:1–11.

Zimolong, B and R Trimpop. 1994. Managing human reliability in advanced manufacturing systems. In Design of Work and Development of Personnel in Advanced Manufacturing Systems, edited by G Salvendy and W Karwowski. New York: Wiley.

Zohar, D. 1980. Safety climate in industrial organizations: Theoretical and applied implications. J Appl Psychol 65(1):96–102.

Zuckerman, M. 1979. Sensation Seeking: Beyond the Optimal Level of Arousal. Hillsdale: Lawrence Erlbaum.