Origins
Standardization in the field of ergonomics has a relatively short history. It began in the early 1970s, when the first committees were founded at the national level (e.g., in Germany within the standardization institute DIN), and continued at the international level after the foundation in 1975 of ISO (International Organization for Standardization) TC (Technical Committee) 159, “Ergonomics”. Ergonomics standardization now takes place at regional levels as well, for example, at the European level within CEN (Comité européen de normalisation), which established its TC 122, “Ergonomics”, in 1987. The existence of the latter committee underscores the fact that one of the important reasons for establishing committees for the standardization of ergonomics knowledge and principles lies in legal (and quasi-legal) regulations, especially with respect to safety and health, which require the application of ergonomics principles and findings in the design of products and work systems. National laws requiring the application of well-established ergonomics findings were the reason for the establishment of the German ergonomics committee in 1970, and European Directives, especially the Machinery Directive (relating to safety standards), were responsible for establishing an ergonomics committee at the European level. Since legal regulations usually are not, cannot be and should not be very specific, the task of specifying which ergonomics principles and findings should be applied was given to, or taken up by, ergonomics standardization committees. Especially at the European level, it can be recognized that ergonomics standardization can contribute to providing broad and comparable conditions of machinery safety, thus removing barriers to the free trade of machinery within the continent itself.
Perspectives
Ergonomics standardization thus started with a strong protective, although preventive, perspective, with ergonomics standards being developed with the aim of protecting workers against adverse effects at different levels of health protection.
International standardization, which was not so closely coupled to legislation, on the other hand always tried to open a perspective towards standards that go beyond the prevention of and protection against adverse effects (e.g., by specifying minimal/maximal values) and instead proactively provide for optimal working conditions, in order to promote the well-being and personal development of the worker as well as the effectiveness, efficiency, reliability and productivity of the work system.
This is a point where it becomes evident that ergonomics, and especially ergonomics standardization, has very distinct social and political dimensions. Whereas the protective approach with respect to safety and health is generally accepted and agreed upon among the parties involved (employers, unions, administration and ergonomics experts) at all levels of standardization, the proactive approach is not accepted by all parties in the same way. This might be due to the fact that, especially where legislation requires the application of ergonomics principles (and thus either explicitly or implicitly the application of ergonomics standards), some parties feel that such standards might limit their freedom of action or negotiation. Since international standards are less binding (transferring them into the body of national standards is at the discretion of the national standardization committees), the proactive approach has been developed furthest at the international level of ergonomics standardization.
The fact that certain regulations would indeed restrict the discretion of those to whom they applied served to discourage standardization in certain areas, for example in connection with the European Directives under Article 118a of the Single European Act, relating to safety and health in the use and operation of machinery at the workplace, and in the design of work systems and workplace design. On the other hand, under the Directives issued under Article 100a, relating to safety and health in the design of machinery with regard to the free trade of this machinery within the European Union (EU), European ergonomics standardization is mandated by the European Commission.
From an ergonomics point of view, however, it is difficult to understand why ergonomics in the design of machinery should be different from that in the use and operation of machinery within a work system. It is thus to be hoped that the distinction will be given up in the future, since it seems to be more detrimental than beneficial to the development of a consistent body of ergonomics standards.
Types of Ergonomics Standards
The first international ergonomics standard to be developed (based on a German DIN national standard) was ISO 6385, “Ergonomic principles in the design of work systems”, published in 1981. It is the basic standard of the ergonomics standards series and set the stage for the standards which followed by defining the basic concepts and stating the general principles of the ergonomic design of work systems, including tasks, tools, machinery, workstations, work space, work environment and work organization. This international standard, which is now undergoing revision, is a guideline standard, and as such provides guidelines to be followed. It does not, however, provide technical or physical specifications which have to be met. These can be found in a different type of standard, that is, in specification standards, for example, those on anthropometry or thermal conditions. The two types of standards fulfil different functions. While guideline standards are intended to show their users “what to do and how to do it” and indicate those principles that must or should be observed, for example, with respect to mental workload, specification standards provide users with detailed information, for example, about safety distances or measurement procedures, that has to be met, and compliance with these prescriptions can be tested by specified procedures. This is not always possible with guideline standards, although, despite their relative lack of specificity, it can usually be demonstrated when and where guidelines have been violated. A subset of specification standards are “database” standards, which provide the user with relevant ergonomics data, for example, body dimensions.
CEN standards are classified as A-, B- and C-type standards, depending on their scope and field of application. A-type standards are general, basic standards which apply to all kinds of applications, B-type standards are specific for an area of application (which means that most of the ergonomics standards within the CEN will be of this type), and C-type standards are specific for a certain kind of machinery, for example, hand-held drilling machines.
Standardization Committees
Ergonomics standards, like other standards, are produced in the appropriate technical committees (TCs), their subcommittees (SCs) or working groups (WGs). For the ISO this is TC 159, for the CEN it is TC 122, and at the national level, the respective national committees. Besides the ergonomics committees, ergonomics is also dealt with in TCs working on machine safety (e.g., CEN TC 114 and ISO TC 199), with which liaison and close cooperation are maintained. Liaisons are also established with other committees for which ergonomics might be of relevance. Responsibility for ergonomics standards, however, is reserved to the ergonomics committees themselves.
A number of other organizations are engaged in the production of ergonomics standards, such as the IEC (International Electrotechnical Commission); CENELEC, or the respective national committees in the electrotechnical field; CCITT (Comité consultatif international télégraphique et téléphonique) or ETSI (European Telecommunication Standards Institute) in the field of telecommunications; ECMA (European Computer Manufacturers Association) in the field of computer systems; and CAMAC (Computer Assisted Measurement and Control Association) in the field of new technologies in manufacturing, to name only a few. With some of these the ergonomics committees do have liaisons in order to avoid duplication of work or inconsistent specifications; with some organizations (e.g., the IEC) joint technical committees have even been established for cooperation in areas of mutual interest. With other committees, however, there is no coordination or cooperation at all. The main purpose of these committees is to produce (ergonomics) standards that are specific to their field of activity. Since the number of such organizations at the different levels is rather large, it becomes quite complicated (if not impossible) to give a complete overview of ergonomics standardization. The present review will therefore be restricted to ergonomics standardization in the international and European ergonomics committees.
Structure of Standardization Committees
Ergonomics standardization committees are quite similar to one another in structure. Usually one TC within a standardization organization is responsible for ergonomics. This committee (e.g., ISO TC 159) mainly deals with deciding what should be standardized (e.g., work items) and how to organize and coordinate standardization within the committee; usually no standards are prepared at this level. Below the TC level are other committees. For example, the ISO has subcommittees (SCs), which are responsible for a defined field of standardization: SC 1 for general ergonomic guiding principles, SC 3 for anthropometry and biomechanics, SC 4 for human-system interaction and SC 5 for the physical work environment. CEN TC 122 has working groups (WGs) below the TC level which are so constituted as to deal with specified fields within ergonomics standardization. SCs within ISO TC 159 operate as steering committees for their field of responsibility and do the first voting, but usually do not themselves prepare standards. This is done in their WGs, which are composed of experts nominated by their national committees, whereas SC and TC meetings are attended by national delegations representing national points of view. Within the CEN, duties are not sharply distinguished at the WG level; WGs operate both as steering and production committees, although a good deal of work is accomplished in ad hoc groups, which are composed of members of the WG (nominated by their national committees) and established to prepare the drafts for a standard. WGs within an ISO SC are established to do the practical standardization work, that is, prepare drafts, work on comments, identify needs for standardization, and prepare proposals to the SC and TC, which will then take the appropriate decisions or actions.
Preparation of Ergonomics Standards
The preparation of ergonomics standards has changed quite markedly over the last years in view of the stronger emphasis now being placed on European and other international developments. In the beginning, national standards, which had been prepared by experts from one country in their national committee and agreed upon by the interested parties among the general public of that country in a specified voting procedure, were transferred as input to the responsible SC and WG of ISO TC 159, after a formal vote had been taken at the TC level that such an international standard should be prepared. The working group, composed of ergonomics experts (and experts from politically interested parties) from all participating member bodies (i.e., the national standardization organizations) of TC 159 willing to cooperate in this work project, would then work on any inputs and prepare a working draft (WD). After the draft proposal is agreed upon in the WG, it becomes a committee draft (CD), which is distributed to the member bodies of the SC for approval and comments. If the draft receives substantial support from the SC member bodies (i.e., if at least two-thirds vote in favour), and after comments by the national committees have been incorporated by the WG into the improved version, a Draft International Standard (DIS) is submitted for voting to all members of TC 159. If substantial support, at this step from the member bodies of the TC, is achieved (and perhaps after incorporation of editorial changes), this version is then published as an International Standard (IS) by the ISO. Voting of the member bodies at the TC and SC level is based on voting at the national level, and comments can be supplied through the member bodies by experts or interested parties in each country. The procedure is roughly equivalent in CEN TC 122, with the exceptions that there are no SCs below the TC level and that voting takes place with weighted votes (according to the size of the country), whereas within the ISO the rule is one country, one vote. If a draft fails at any step, it has to be revised and pass through the voting procedure again, unless the WG decides that an agreeable revision cannot be achieved.
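The staged procedure just described can be summarized, in outline, as a simple state machine. The following Python sketch is purely illustrative; the function names and the uniform application of the two-thirds threshold are simplifying assumptions (in practice, for example, the WD-to-CD step is an agreement within the WG rather than a formal vote):

```python
# A minimal, illustrative sketch of the draft progression described above:
# WD -> CD -> DIS -> IS. Stage names follow the text; everything else is
# an assumption for illustration.

STAGES = ["WD", "CD", "DIS", "IS"]

def substantial_support(in_favour: int, total: int) -> bool:
    """At least two-thirds of the votes cast are in favour."""
    return total > 0 and 3 * in_favour >= 2 * total

def next_stage(stage: str, in_favour: int, total: int) -> str:
    """Advance the draft one stage on a successful vote; otherwise it
    stays where it is and goes back to the WG for revision."""
    if stage == "IS":
        return "IS"  # already published
    if substantial_support(in_favour, total):
        return STAGES[STAGES.index(stage) + 1]
    return stage

# Example: a CD supported by 12 of 17 SC member bodies becomes a DIS;
# with only 10 votes in favour it would be revised and voted on again.
assert next_stage("CD", 12, 17) == "DIS"
assert next_stage("CD", 10, 17) == "CD"
```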
International standards are then transferred into national standards if the national committees vote accordingly. By contrast, European Standards (ENs) have to be transferred into national standards by the CEN members and conflicting national standards have to be withdrawn. That means that harmonized ENs will be effective in all CEN countries (and, due to their influence on trade, will be relevant to manufacturers in all other countries who intend to sell goods to a customer in a CEN country).
ISO-CEN Cooperation
In order to avoid conflicting standards and duplication of work, and to allow non-CEN members to take part in developments in the CEN, a cooperative agreement between the ISO and the CEN has been reached (the so-called Vienna Agreement), which regulates the formalities and provides for a so-called parallel voting procedure, allowing the same drafts to be voted upon in the CEN and the ISO in parallel if the responsible committees agree to do so. Among the ergonomics committees the tendency is quite clear: avoid duplication of work (manpower and financial resources are too limited), avoid conflicting specifications, and try to achieve a consistent body of ergonomics standards based on a division of labour. Whereas CEN TC 122 is bound by the decisions of the EU administration and is given mandated work items to spell out the specifications of European directives, ISO TC 159 is free to standardize whatever it thinks necessary or appropriate in the field of ergonomics. This has led to shifts in the emphasis of the two committees, with the CEN concentrating on machinery and safety-related topics and the ISO concentrating on areas where market interests broader than Europe are concerned (e.g., work with VDUs and control-room design for process and related industries); on areas where the operation of machinery is concerned, as in work system design; and on such areas as the work environment and work organization as well. The intention, however, is to transfer work results from the CEN to the ISO, and vice versa, in order to build up a body of consistent ergonomics standards which in fact are effective all over the world.
The formal procedure of producing standards is still the same today. But since the emphasis has shifted more and more to the international or the European level, more and more activities are being transferred to these committees. Drafts are now usually worked out directly in these committees and are no longer based on existing national standards. After the decision has been made that a standard should be developed, work starts directly at one of these supranational levels, based on whatever input may be available, sometimes starting from zero. This changes the role of the national ergonomics committees quite dramatically. Whereas they formerly developed their own national standards according to their national rules, they now have the task of observing and influencing standardization at the supranational levels, via the experts who work out the standards or via comments made at the different steps of voting (within the CEN, a national standardization project will be halted if a comparable project is being worked on simultaneously at the CEN level). This makes the task still more complicated, since this influence can only be exerted indirectly and since the preparation of ergonomics standards is not just a matter of pure science but a matter of bargaining, consensus and agreement (not least due to the political implications which a standard might have). This, of course, is one of the reasons why the process of producing an international or European ergonomics standard usually takes several years and why ergonomics standards cannot reflect the latest state of the art in ergonomics. International ergonomics standards thus have to be examined every five years and, if necessary, revised.
Fields of Ergonomics Standardization
International ergonomics standardization started with guidelines on the general principles of ergonomics in the design of work systems; they were laid down in ISO 6385, which is now under revision in order to incorporate new developments. The CEN has produced a similar basic standard (EN 614, Part 1, 1994)—this is oriented more to machinery and safety—and is preparing a standard with guidelines on task design as a second part of this basic standard. The CEN thus emphasizes the importance of operator tasks in the design of machinery or work systems, for which appropriate tools or machinery have to be designed.
Another area where concepts and guidelines have been laid down in standards is the field of mental workload. ISO 10075, Part 1, defines terms and concepts (e.g., fatigue, monotony, reduced vigilance), and Part 2 (at the stage of a DIS in the latter half of the 1990s) provides guidelines for the design of work systems with respect to mental workload in order to avoid impairments.
SC 3 of ISO TC 159 and WG 1 of CEN TC 122 produce standards on anthropometry and biomechanics, covering, among other topics, methods of anthropometric measurements, body dimensions, safety distances and access dimensions, the evaluation of working postures and the design of workplaces in relation to machinery, recommended limits of physical strength and problems of manual handling.
SC 4 of ISO TC 159 shows how technological and social changes affect ergonomics standardization and the programme of such a subcommittee. SC 4 started as “Signals and Controls” by standardizing principles for displaying information and designing control actuators, with one of its work items being the visual display unit (VDU), used for office tasks. It soon became apparent, however, that standardizing the ergonomics of VDUs would not be sufficient, and that standardization “around” this workstation, in the sense of a work system, was required, covering areas such as hardware (e.g., the VDU itself, including displays, keyboards, non-keyboard input devices and workstations), the work environment (e.g., lighting), work organization (e.g., task requirements) and software (e.g., dialogue principles, menu and direct manipulation dialogues). This led to a multipart standard (ISO 9241) covering the “ergonomic requirements for office work with VDUs”, which at the moment comprises 17 parts, 3 of which have already reached the status of an IS. This standard will be transferred to the CEN (as EN 29241), where it will specify requirements for the VDU directive (90/270/EEC) of the EU, although this is a directive under Article 118a of the Single European Act. This series of standards provides guidelines as well as specifications, depending on the subject of the given part of the standard, and introduces a new concept of standardization, the user performance approach, which might help to solve some of the problems in ergonomics standardization. It is described more fully in the chapter Visual Display Units.
The user performance approach is based on the idea that the aim of standardization is to prevent impairment and to provide for optimal working conditions for the operator, not to establish technical specifications per se. Specification is thus regarded only as a means to the end of unimpaired, optimal user performance. The important thing is to achieve this unimpaired performance of the operator, regardless of whether a given physical specification is met. This requires, first, that the unimpaired operator performance which has to be achieved, for example, reading performance on a VDU, be specified, and second, that technical specifications be developed which will enable the desired performance to be achieved, based on the available evidence. The manufacturer is then free to follow these technical specifications, which will ensure that the product complies with the ergonomics requirements. Or the manufacturer may demonstrate, by comparison with a product that is known to fulfil the requirements (either by compliance with the technical specifications of the standard or by proven performance), that the new product fulfils the performance requirements equally well or better than the reference product, with or without compliance with the technical specifications of the standard. A test procedure that has to be followed for demonstrating conformance with the user performance requirements of the standard is specified in the standard.
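In outline, the two routes to conformance just described can be expressed as a simple decision rule. The sketch below is a hypothetical illustration, not the test procedure of any particular standard; the function and parameter names are assumptions:

```python
# Two conformance routes under the user performance approach:
#   1. the product meets the technical specifications of the standard, or
#   2. in the specified test procedure, its measured user performance is
#      at least as good as that of a reference product known to comply.

def conforms(meets_technical_specs: bool,
             product_performance: float,
             reference_performance: float) -> bool:
    if meets_technical_specs:
        return True  # route 1: follow the specifications of the standard
    # route 2: equal or better user performance (e.g., reading speed and
    # accuracy on a VDU) than the reference product
    return product_performance >= reference_performance

# Example: a display using a new technology misses a specification
# written for CRTs but can still conform via route 2.
print(conforms(False, product_performance=0.97, reference_performance=0.95))
# -> True
```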
This approach helps to overcome two problems. First, standards, by virtue of specifications based on the state of the art (and technology) at the time of their preparation, can restrict new developments. Specifications that are based on a certain technology (e.g., cathode-ray tubes) may be inappropriate for other technologies. Independently of technology, however, the user of a display device (for instance) should be able to read and understand the information displayed effectively and efficiently without any impairments, irrespective of whatever technique may be used. Performance in this case must not, however, be restricted to the pure output (as measured in terms of speed or accuracy) but must include considerations of comfort and effort as well.
The second problem that can be dealt with by this approach is the problem of interactions between conditions. Physical specification usually is unidimensional, leaving other conditions out of consideration. In the case of interactive effects, however, this can be misleading or even wrong. By specifying performance requirements, on the other hand, and leaving the means to achieve these to the manufacturer, any solution that satisfies these performance requirements will be acceptable. Treating specification as a means to an end thus represents a genuine ergonomic perspective.
Another standard with a work system approach is under preparation in SC 4, which relates to the design of control rooms, for instance, for process industries or power stations. A multipart standard (ISO 11064) is expected to be prepared as a result, with the different parts dealing with such aspects of control-room design as layout, operator workstation design, and the design of displays and input devices for process control. Because these work items and the approach taken clearly exceed problems of the design of “displays and controls”, SC 4 has been renamed “Human-System Interaction”.
Environmental problems, especially those relating to thermal conditions and communication in noisy environments, are dealt with in SC 5, where standards have been or are being prepared on measurement methods, methods for the estimation of heat stress, conditions of thermal comfort, metabolic heat production, and on auditory and visual danger signals, speech interference level and the assessment of speech communication.
CEN TC 122 covers roughly the same fields of ergonomics standardization, although with a different emphasis and a different structure of its working groups. It is intended, however, that by a division of labour between the ergonomics committees, and mutual acceptance of work results, a general and usable set of ergonomics standards will be developed.
Â
It is difficult to speak of work analysis without placing it in the perspective of recent changes in the industrial world, because the nature of activities and the conditions in which they are carried out have undergone considerable evolution in recent years. The factors giving rise to these changes have been numerous, but two have proved crucial. On the one hand, technological progress, with its ever-quickening pace, and the upheavals brought about by information technologies have revolutionized jobs (De Keyser 1986). On the other hand, the uncertainty of the economic market has required more flexibility in personnel management and work organization. If workers have gained a wider view of the production process, one that is less routine-oriented and undoubtedly more systematic, they have at the same time lost exclusive links with an environment, a team, a production tool. It is difficult to view these changes with serenity, but we have to face the fact that a new industrial landscape has been created, sometimes more enriching for those workers who can find their place in it, but also filled with pitfalls and worries for those who are marginalized or excluded.

However, one idea is being taken up in firms and has been confirmed by pilot experiments in many countries: it should be possible to guide changes and soften their adverse effects through relevant analyses and by using all resources for negotiation between the different actors in work. It is within this context that we must place work analyses today: as tools allowing us to describe tasks and activities better in order to guide interventions of different kinds, such as training, the setting up of new organizational modes or the design of tools and work systems.

We speak of analyses, and not just one analysis, since a large number of them exist, depending on the theoretical and cultural contexts in which they are developed, the particular goals they pursue, the evidence they collect, and the analyser's concern for either specificity or generality. In this article we will limit ourselves to presenting a few characteristics of work analyses and emphasizing the importance of collective work. Our conclusions will highlight other paths that the limits of this text prevent us from pursuing in greater depth.
Some Characteristics of Work Analyses
The context
If the primary goal of any work analysis is to describe what the operator does, or should do, placing that activity more precisely in its context has often seemed indispensable to researchers. They invoke, according to their own views but in a broadly similar manner, the concepts of context, situation, environment, work domain, work world or work environment. The problem lies less in the nuances between these terms than in the selection of the variables that need to be described in order to give them a useful meaning. Indeed, the world is vast and industry is complex, and the characteristics that could be referred to are innumerable. Two tendencies can be noted among authors in the field. The first sees the description of the context as a means of capturing the reader's interest and providing him or her with an adequate semantic framework. The second has a different theoretical perspective: it attempts to embrace both context and activity, describing only those elements of the context that are capable of influencing the behavior of operators.
The semantic framework
Context has evocative power. For an informed reader, it is enough to read about an operator in the control room of a continuous process to call up a picture of work carried out through remote commands and monitoring, in which the tasks of detection, diagnosis and regulation predominate. What variables need to be described in order to create a sufficiently meaningful context? It all depends on the reader. Nonetheless, there is a consensus in the literature on a few key variables. The nature of the economic sector, the type of production or service, the size and the geographical location of the site are useful.
The production processes, the tools or machines and their level of automation allow certain constraints and certain necessary qualifications to be guessed at. The structure of the personnel, together with age and level of qualification and experience, is crucial data whenever the analysis concerns aspects of training or of organizational flexibility. The organization of work established depends more on the firm's philosophy than on technology. Its description includes, notably, work schedules, the degree of centralization of decisions and the types of control exercised over the workers. Other elements may be added in different cases. They are linked to the firm's history and culture, its economic situation, work conditions, and any restructuring, mergers and investments. There exist at least as many systems of classification as there are authors, and numerous descriptive lists are in circulation. In France, a special effort has been made to generalize simple descriptive methods, notably allowing for the ranking of certain factors according to whether or not they are satisfactory for the operator (RNUR 1976; Guelaud et al. 1977).
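By way of illustration only, such a descriptive checklist can be given a structured form so that the contexts of different sites can be recorded and compared. The field names below are assumptions, not an established schema:

```python
# A hypothetical encoding of the context variables discussed above;
# the field names are illustrative assumptions, not a standard instrument.
from dataclasses import dataclass, field

@dataclass
class WorkContext:
    sector: str                        # nature of the economic sector
    production_type: str               # type of production or service
    site_size: int                     # number of employees on site
    location: str                      # geographical location
    automation_level: str              # level of automation of tools/machines
    personnel_structure: dict = field(default_factory=dict)  # age, qualification, experience
    work_schedules: str = ""           # shift system and hours
    decision_centralization: str = ""  # degree of centralization of decisions

# Example record for one site (invented values):
site = WorkContext(
    sector="chemical industry",
    production_type="continuous process",
    site_size=350,
    location="north-east of the country",
    automation_level="highly automated",
    personnel_structure={"mean_age": 42, "qualified_fraction": 0.7},
)
```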
The description of relevant factors regarding the activity
The taxonomy of complex systems described by Rasmussen, Pejtersen and Schmidts (1990) represents one of the most ambitious attempts to cover at the same time the context and its influence on the operator. Its main idea is to integrate, in a systematic fashion, the different elements of which the context is composed and to bring out the degrees of freedom and the constraints within which individual strategies can be developed. Its exhaustive aim makes it difficult to handle, but the use of multiple modes of representation, including graphs, to illustrate the constraints has a heuristic value that is bound to be attractive to many readers. Other approaches are more targeted: what their authors seek is the selection of factors that can influence a precise activity. Hence, with an interest in the control of processes in a changing environment, Brehmer (1990) proposes a series of temporal characteristics of the context which affect the control and anticipation of the operator (see figure 1). This typology was developed from “micro-worlds”, computerized simulations of dynamic situations, but the author himself, along with many others since, has applied it to the continuous-process industry (Van Daele 1992). For certain activities, the influence of the environment is well known, and the selection of factors is not too difficult. Thus, if we are interested in heart rate in the work environment, we often limit ourselves to describing the air temperature, the physical constraints of the task or the age and training of the subject, even though we know that by doing so we perhaps leave out relevant elements. For other activities, the choice is more difficult. Studies on human error, for example, show that the factors capable of producing them are numerous (Reason 1989). Sometimes, when theoretical knowledge is insufficient, only statistical processing, combining context and activity analysis, allows us to bring out the relevant contextual factors (Fadier 1990).
Figure 1. The criteria and sub-criteria of the taxonomy of micro-worlds proposed by Brehmer (1990)
The Task or the Activity?
The task
The task is defined by its objectives, its constraints and the means it requires for its achievement. A function within the firm is generally characterized by a set of tasks. The realized task differs from the prescribed task, the one scheduled by the firm, for a large number of reasons: the strategies of operators vary within and among individuals, the environment fluctuates, and random events require responses that are often outside the prescribed framework. Finally, the task is not always scheduled with correct knowledge of its conditions of execution; hence the need for adaptations in real time. But even if the task is updated during the activity, sometimes to the point of being transformed, it still remains the central reference.
Questionnaires, inventories and taxonomies of tasks are numerous, especially in the English-language literature; the reader will find excellent reviews in Fleishman and Quaintance (1984) and in Greuter and Algera (1989). Some of these instruments are merely lists of elements, for example, the action verbs used to illustrate tasks, that are checked off according to the function studied. Others have adopted a hierarchical principle, characterizing a task as interlocking elements ordered from the global to the particular. These methods are standardized and can be applied to a large number of functions; they are simple to use, and the analytical stage is much shortened. But where it is a question of defining specific work, they are too static and too general to be useful.
Next, there are those instruments requiring more skill on the part of the researcher; since the elements of analysis are not predefined, it is up to the researcher to characterize them. The already outdated critical incident technique of Flanagan (1954), where the observer describes a function by reference to its difficulties and identifies the incidents which the individual will have to face, belongs to this group.
It is also the path adopted by cognitive task analysis (Roth and Woods 1988). This technique aims to bring to light the cognitive requirements of a job. One way to do this is to break the job down into goals, constraints and means. Figure 2 shows how the task of an anesthetist, characterized first by the very global goal of patient survival, can be broken down into a series of sub-goals, which can themselves be classified as actions and means to be employed. More than 100 hours of observation in the operating theatre and subsequent interviews with anesthetists were necessary to obtain this synoptic “photograph” of the requirements of the function. This technique, although quite laborious, is nevertheless useful in ergonomics for determining whether all the goals of a task are provided with the means of attaining them. It also allows for an understanding of the complexity of a task (its particular difficulties and conflicting goals, for example) and facilitates the interpretation of certain human errors. But it suffers, as do other methods, from the absence of a descriptive language (Grant and Mayes 1991). Moreover, it does not permit hypotheses to be formulated as to the nature of the cognitive processes brought into play to attain the goals in question.
Figure 2. Cognitive analysis of the task: general anesthesia
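As an illustration of the kind of goal decomposition involved, the sketch below encodes a goal hierarchy as a small recursive data structure and checks that every leaf goal has means attached. The structure and the sub-goal names are invented for illustration; they are not the actual content of figure 2:

```python
# A minimal sketch of a goal hierarchy in the spirit of the anesthetist
# example: a global goal broken into sub-goals, each carrying the actions
# and means needed to attain it. All names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    name: str
    actions: List[str] = field(default_factory=list)   # what the operator does
    means: List[str] = field(default_factory=list)     # tools and resources employed
    subgoals: List["Goal"] = field(default_factory=list)

task = Goal(
    name="patient survival",
    subgoals=[
        Goal("maintain oxygenation",
             actions=["monitor saturation", "adjust ventilation"],
             means=["pulse oximeter", "ventilator"]),
        Goal("maintain depth of anesthesia",
             actions=["monitor vital signs", "titrate agents"],
             means=["monitoring equipment", "anesthetic agents"]),
    ],
)

def goals_without_means(goal: Goal) -> List[str]:
    """The ergonomic check described above: list leaf goals lacking means."""
    if not goal.subgoals:
        return [] if goal.means else [goal.name]
    found: List[str] = []
    for sub in goal.subgoals:
        found += goals_without_means(sub)
    return found

print(goals_without_means(task))   # [] : every leaf goal here has means
```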
Other approaches have analyzed the cognitive processes associated with given tasks by drawing up hypotheses as to the information processing necessary to accomplish them. A frequently employed cognitive model of this kind is Rasmussen's (1986), which provides, according to the nature of the task and its familiarity for the subject, three possible levels of activity, based either on skill-based habits and reflexes, on acquired rule-based procedures or on knowledge-based procedures. But other models or theories that reached the height of their popularity during the 1970s remain in use. Hence the theory of optimal control, which considers the human being as a controller of discrepancies between assigned and observed goals, is sometimes still applied to cognitive processes. And modeling by means of networks of interconnected tasks and flow charts continues to inspire the authors of cognitive task analysis; figure 3 provides a simplified description of the behavioral sequences in an energy-control task, constructing a hypothesis about certain mental operations. All these attempts reflect the concern of researchers to bring together in the same description not only elements of the context but also the task itself and the cognitive processes that underlie it, and to reflect the dynamic character of work as well.
Figure 3. A simplified description of the determinants of a behavior sequence in energy control tasks: a case of unacceptable consumption of energy
Since the arrival of the scientific organization of work, the concept of the prescribed task has been adversely criticized because it has been viewed as involving the imposition on workers of tasks that are not only designed without consulting their needs but are often accompanied by a specified performance time, a restriction not welcomed by many workers. Even if the imposition aspect has become rather more flexible today and even if workers contribute more often to the design of tasks, an assigned time for tasks remains necessary for schedule planning and remains an essential component of work organization. The quantification of time should not always be perceived in a negative manner. It constitutes a valuable indicator of workload. A simple but common method of measuring the time pressure exerted on a worker consists of determining the quotient of the time necessary for the execution of a task divided by the time available. The closer this quotient is to unity, the greater the pressure (Wickens 1992). Moreover, quantification can be used in flexible but appropriate personnel management. Let us take the case of nurses, where the technique of predictive analysis of tasks has been generalized, for example, in the Canadian regulation Planning of Required Nursing (PRN 80) (Kepenne 1984) or one of its European variants. Thanks to such task lists, accompanied by their mean times of execution, one can, each morning, taking into account the number of patients and their medical conditions, establish a care schedule and a distribution of personnel. Far from being a constraint, PRN 80 has, in a number of hospitals, demonstrated that a shortage of nursing personnel exists, since the technique allows a difference to be established (see figure 4) between the desired and the observed, that is, between the number of staff necessary and the number available, and even between the tasks planned and the tasks carried out. The times calculated are only averages, and the fluctuations in the situation do not always make them applicable, but this negative aspect is minimized by a flexible organization that accepts adjustments and allows the personnel to participate in effecting those adjustments.
Figure 4. Discrepancies between the numbers of personnel present and required on the basis of PRN 80
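To make the two calculations above concrete, the following sketch computes the time-pressure quotient of Wickens (1992) and a PRN-style staffing discrepancy. All numbers and task names are invented for illustration; they are not drawn from PRN 80 itself:

```python
def time_pressure(time_required: float, time_available: float) -> float:
    """Quotient of execution time needed over time available; the closer
    to unity, the greater the pressure (Wickens 1992)."""
    return time_required / time_available

# Morning planning, PRN-style (illustrative values): each scheduled care
# task carries a mean execution time in minutes.
planned_tasks = {"hygiene care": 240, "medication rounds": 180,
                 "wound dressings": 90, "charting": 120}
required_minutes = sum(planned_tasks.values())          # 630 minutes of care

nurses_available = 2
minutes_per_nurse = 420                                 # 7 productive hours
available_minutes = nurses_available * minutes_per_nurse

print(f"time pressure: {time_pressure(required_minutes, available_minutes):.2f}")
# -> time pressure: 0.75 (some slack remains)

# Staffing discrepancy: number of nurses required versus available.
nurses_required = required_minutes / minutes_per_nurse  # 1.5 nurses
print(f"discrepancy: {nurses_required - nurses_available:+.1f} nurses")
# -> discrepancy: -0.5 nurses (here a surplus rather than a shortage)
```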
The activity, the evidence, and the performance
An activity is defined as the set of behaviors and resources used by the operator so that work occurs, that is to say, the transformation or production of goods or the rendering of a service. This activity can be understood through observation in different ways. Faverge (1972) has described four forms of analysis.

The first is an analysis in terms of gestures and postures, where the observer locates, within the visible activity of the operator, classes of behavior that are recognizable and repeated during work. These activities are often coupled with a precise response, for example, the heart rate, which allows us to assess the physical load associated with each activity.

The second form of analysis is in terms of information uptake. What is discovered, through direct observation or with the aid of cameras or recorders of eye movements, is the set of signals picked up by the operator in the information field surrounding him or her. This analysis is particularly useful in cognitive ergonomics for trying to understand better the information processing carried out by the operator.

The third type of analysis is in terms of regulation. The idea is to identify the adjustments of activity carried out by the operator in order to deal with either fluctuations in the environment or changes in his or her own condition. Here we find the direct intervention of context within the analysis. One of the most frequently cited research projects in this area is that of Sperandio (1972). This author studied the activity of air traffic controllers and identified important strategy changes during an increase in air traffic. He interpreted them as an attempt to simplify the activity by aiming to maintain an acceptable load level while continuing to meet the requirements of the task.

The fourth is an analysis in terms of thought processes. This type of analysis has been widely used in the ergonomics of highly automated workstations. Indeed, the design of computerized aids, and notably intelligent aids, for the operator requires a thorough understanding of the way in which the operator reasons in order to solve certain problems. The reasoning involved in scheduling, anticipation and diagnosis has been the subject of analyses, an example of which can be found in figure 5. However, evidence of mental activity can only be inferred. Apart from certain observable aspects of behavior, such as eye movements and problem-solving time, most of these analyses resort to verbal responses. Particular emphasis has been placed, in recent years, on the knowledge necessary to accomplish certain activities, with researchers trying not to postulate it at the outset but to make it apparent through the analysis itself.
Figure 5. Analysis of mental activity. Strategies in the control of processes with long response times: the need for computerized support in diagnosis
Such efforts have brought to light the fact that almost identical performances can be obtained with very different levels of knowledge, as long as operators are aware of their limits and apply strategies adapted to their capabilities. Hence, in our study of the start-up of a thermoelectric plant (De Keyser and Housiaux 1989), the start-ups were carried out by both engineers and operators. The theoretical and procedural knowledge of these two groups, elicited by means of interviews and questionnaires, was very different. The operators in particular sometimes had an erroneous understanding of the variables in the functional links of the process. In spite of this, the performances of the two groups were very close. But the operators took more variables into account in order to verify the control of the start-up and undertook more frequent verifications. Such results were also obtained by Amalberti (1991), who mentioned the existence of meta-knowledge allowing experts to manage their own resources.
What evidence of activity is it appropriate to elicit? Its nature, as we have seen, depends closely on the form of analysis planned. Its form varies according to the degree of methodological care exercised by the observer. Provoked evidence is distinguished from spontaneous evidence, and concomitant from subsequent evidence. Generally speaking, when the nature of the work allows it, concomitant and spontaneous evidence are to be preferred. They are free of various drawbacks such as the unreliability of memory, observer interference, the effect of rationalizing reconstruction on the part of the subject, and so forth. To illustrate these distinctions, we will take the example of verbalizations. Spontaneous verbalizations are verbal exchanges, or monologues, expressed spontaneously without being requested by the observer; provoked verbalizations are those made at the specific request of the observer, such as the request made to the subject to “think aloud”, which is well known in the cognitive literature. Both types can be produced in real time, during work, and are thus concomitant.
They can also be subsequent, as in interviews, or in subjects' verbalizations when they view videotapes of their work. As for the validity of verbalizations, the reader should not ignore the doubt raised in this regard by the controversy between Nisbett and De Camp Wilson (1977) and White (1988), and the precautions suggested by numerous authors who are aware of their importance in the study of mental activity but also of the methodological difficulties encountered (Ericsson and Simon 1984; Savoyant and Leplat 1983; Caverni 1988; Bainbridge 1986).
The organization of this evidence, its processing and its formalization require descriptive languages and sometimes analyses that go beyond field observation. Those mental activities which are inferred from the evidence, for example, remain hypothetical. Today they are often described using languages derived from artificial intelligence, making use of representations in terms of schemata, production rules and connecting networks. Moreover, the use of computerized simulations, of micro-worlds, to pinpoint certain mental activities has become widespread, even though the validity of the results obtained from such simulations, in view of the complexity of the industrial world, is subject to debate. Finally, we must mention the cognitive models of certain mental activities derived from the field. Among the best known are the diagnosis of the operator of a nuclear power plant, carried out at ISPRA (Decortis and Cacciabue 1990), and the planning of the combat pilot, perfected at the Centre d'études et de recherches de médecine aérospatiale (CERMA) (Amalberti et al. 1989).
Measurement of the discrepancies between the performance of these models and that of real, living operators is a fruitful field in activity analysis. Performance is the outcome of the activity, the final response given by the subject to the requirements of the task. It is expressed at the level of production: productivity, quality, error, incident, accident, and even, at a more global level, absenteeism or turnover. But it must also be identified at the individual level: the subjective expression of satisfaction, stress, fatigue or workload, and many physiological responses are also performance indicators. Only the entire set of data permits interpretation of the activity, that is to say, a judgement of whether or not it furthers the desired goals while remaining within human limits. There exists a set of norms which, up to a certain point, guide the observer. But these norms are not situated: they do not take into account the context, its fluctuations and the condition of the worker. This is why, in design ergonomics, even when rules, norms and models exist, designers are advised to test the product using prototypes as early as possible and to evaluate the users' activity and performance.
Individual or Collective Work?
While work is, in the vast majority of cases, a collective act, most work analyses focus on individual tasks or activities. Nonetheless, the fact is that technological evolution, like work organization itself, today emphasizes distributed work, whether between workers and machines or simply within a group. What paths have been explored by authors so as to take this distribution into account (Rasmussen, Pejtersen and Schmidts 1990)? They focus on three aspects: the structure, the nature of the exchanges and structural lability.
Structure
Whether we view structure in terms of people, of services or even of different branches of a firm working in a network, the description of the links that unite them remains a problem. We are very familiar with the organigrams within firms that indicate the structure of authority and whose various forms reflect the organizational philosophy of the firm: very hierarchically organized for a Taylor-like structure, or flattened like a rake, even matrix-like, for a more flexible structure. Other descriptions of distributed activities are possible; an example is given in figure 6. More recently, the need for firms to represent their information exchanges at a global level has led to a rethinking of information systems. Thanks to certain descriptive languages, for example, design schemas or entity-relation-attribute matrices, the structure of relations at the collective level can today be described in a very abstract manner and can serve as a springboard for the creation of computerized management systems.
Figure 6. Integrated life cycle design
The nature of exchanges
Simply having a description of the links uniting the entities says little about the content of the exchanges themselves; of course the nature of the relation can be specified (movement from place to place, information transfers, hierarchical dependence, and so on), but this is often quite inadequate. The analysis of communications within teams has become a favored means of capturing the very nature of collective work, encompassing the subjects mentioned, the creation of a common language in a team, the modification of communications when circumstances are critical, and so forth (Tardieu, Nanci and Pascot 1985; Rolland 1986; Navarro 1990; Van Daele 1992; Lacoste 1983; Moray, Sanderson and Vicente 1989). Knowledge of these interactions is particularly useful for the creation of computer tools, notably decision-making aids for understanding errors. The different stages and the methodological difficulties linked to the use of this evidence have been well described by Falzon (1991).
Structural lability
It is work on activities, rather than on tasks, that has opened up the field of structural lability, that is to say, of the constant reconfigurations of collective work under the influence of contextual factors. Studies such as those of Rogalski (1991), who over a long period analyzed the collective activities dealing with forest fires in France, and of Bourdon and Weill Fassina (1994), who studied the organizational structure set up to deal with railway accidents, are both very informative. They clearly show how the context molds the structure of exchanges, the number and type of actors involved, the nature of the communications and the number of parameters essential to the work. The more this context fluctuates, the further the fixed descriptions of the task are removed from reality. Knowledge of this lability, and a better understanding of the phenomena that take place within it, are essential in planning for the unpredictable and in providing better training for those involved in collective work in a crisis.
Conclusions
The various phases of the work analysis that have been described are an iterative part of any human factors design cycle (see figure 6). In the design of any technical object, whether a tool, a workstation or a factory, in which human factors are a consideration, certain information is needed at the right time. In general, the beginning of the design cycle is characterized by a need for data involving environmental constraints, the types of jobs that are to be carried out and the various characteristics of the users. This initial information allows the specifications of the object to be drawn up so as to take work requirements into account. But this is, in some sense, only a coarse model compared to the real work situation. This explains why models and prototypes are necessary that, from their inception, allow not the jobs themselves but the activities of the future users to be evaluated. Consequently, while the design of the images on a monitor in a control room can be based on a thorough cognitive analysis of the job to be done, only a data-based analysis of the activity will allow an accurate determination of whether the prototype will actually be of use in the real work situation (Van Daele 1988). Once the finished technical object is put into operation, greater emphasis is placed on the performance of the users and on dysfunctional situations, such as accidents or human error. The gathering of this type of information allows the final corrections to be made that will increase the reliability and usability of the completed object. Both the nuclear industry and the aeronautics industry serve as examples: operational feedback involves reporting every incident that occurs. In this way, the design loop comes full circle.
Definition and Scope
Ergonomics means literally the study or measurement of work. In this context, the term work signifies purposeful human function; it extends beyond the more restricted concept of work as labour for monetary gain to incorporate all activities whereby a rational human operator systematically pursues an objective. Thus it includes sports and other leisure activities, domestic work such as child care and home maintenance, education and training, health and social service, and either controlling engineered systems or adapting to them, for example, as a passenger in a vehicle.
The human operator, the focus of study, may be a skilled professional operating a complex machine in an artificial environment, a customer who has casually purchased a new piece of equipment for personal use, a child sitting in a classroom or a disabled person in a wheelchair. The human being is highly adaptable but not infinitely so. There are ranges of optimum conditions for any activity. One of the tasks of ergonomics is to define what these ranges are and to explore the undesirable effects which occur if the limits are transgressed—for example if a person is expected to work in conditions of excessive heat, noise or vibration, or if the physical or mental workload is too high or too low.
Ergonomics examines not only the passive ambient situation but also the unique advantages of the human operator and the contributions that can be made if a work situation is designed to permit and encourage the person to make the best use of his or her abilities. Human abilities may be characterized not only with reference to the generic human operator but also with respect to those more particular abilities that are called upon in specific situations where high performance is essential. For example, an automobile manufacturer will consider the range of physical size and strength of the population of drivers who are expected to use a particular model to ensure that the seats are comfortable, that the controls are readily identifiable and within reach, that there is clear visibility to the front and the rear, and that the internal instruments are easy to read. Ease of entry and egress will also be taken into account. By contrast, the designer of a racing car will assume that the driver is athletic so that ease of getting in and out, for example, is not important and, in fact, design features as a whole as they relate to the driver may well be tailored to the dimensions and preferences of a particular driver to ensure that he or she can exercise his or her full potential and skill as a driver.
In all situations, activities and tasks the focus is the person or persons involved. It is assumed that the structure, the engineering and any other technology is there to serve the operator, not the other way round.
History and Status
About a century ago it was recognized that working hours and conditions in some mines and factories were not tolerable in terms of safety and health, and the need was evident to pass laws to set permissible limits in these respects. The determination and statement of those limits can be regarded as the beginning of ergonomics. They were, incidentally, the beginning of all the activities which now find expression through the work of the International Labour Organization (ILO).
Research, development and application proceeded slowly until the Second World War, which triggered a greatly accelerated development of machines and instrumentation such as vehicles, aircraft, tanks, guns and vastly improved sensing and navigation devices. As technology advanced, greater flexibility was available to allow adaptation to the operator, an adaptation that became all the more necessary because human performance was limiting the performance of the system. If a powered vehicle can travel at a speed of only a few kilometres per hour, there is no need to worry about the performance of the driver, but when the vehicle's maximum speed is increased by a factor of ten or a hundred, the driver has to react more quickly and there is no time to correct mistakes to avert disaster. Similarly, as technology improves there is less need to worry about mechanical or electrical failure (for instance), and attention is freed to think about the needs of the driver.
Thus ergonomics, in the sense of adapting engineering technology to the needs of the operator, becomes simultaneously both more necessary and more feasible as engineering advances.
The term ergonomics came into use about 1950 when the priorities of developing industry were taking over from the priorities of the military. The development of research and application for the following thirty years is described in detail in Singleton (1982). The United Nations agencies, particularly the ILO and the World Health Organization (WHO), became active in this field in the 1960s.
In immediate postwar industry the overriding objective, shared by ergonomics, was greater productivity. This was a feasible objective for ergonomics because so much industrial productivity was determined directly by the physical effort of the workers involved—speed of assembly and rate of lifting and movement determined the extent of output. Gradually, mechanical power replaced human muscle power. More power, however, leads to more accidents on the simple principle that an accident is the consequence of power in the wrong place at the wrong time. When things are happening faster, the potential for accidents is further increased. Thus the concern of industry and the aim of ergonomics gradually shifted from productivity to safety. This occurred in the 1960s and early 1970s. About and after this time, much of manufacturing industry shifted from batch production to flow and process production. The role of the operator shifted correspondingly from direct participation to monitoring and inspection. This resulted in a lower frequency of accidents because the operator was more remote from the scene of action but sometimes in a greater severity of accidents because of the speed and power inherent in the process.
When output is determined by the speed at which machines function then productivity becomes a matter of keeping the system running: in other words, reliability is the objective. Thus the operator becomes a monitor, a trouble-shooter and a maintainer rather than a direct manipulator.
This historical sketch of the postwar changes in manufacturing industry might suggest that the ergonomist has regularly dropped one set of problems and taken up another set but this is not the case for several reasons. As explained earlier, the concerns of ergonomics are much wider than those of manufacturing industry. In addition to production ergonomics, there is product or design ergonomics, that is, adapting the machine or product to the user. In the car industry, for example, ergonomics is important not only to component manufacturing and the production lines but also to the eventual driver, passenger and maintainer. It is now routine in the marketing of cars and in their critical appraisal by others to review the quality of the ergonomics, considering ride, seat comfort, handling, noise and vibration levels, ease of use of controls, visibility inside and outside, and so on.
It was suggested above that human performance is usually optimized within a tolerance range of a relevant variable. Much of the early ergonomics attempted to reduce both muscle power output and the extent and variety of movement by way of ensuring that such tolerances were not exceeded. The greatest change in the work situation, the advent of computers, has created the opposite problem. Unless it is well designed ergonomically, a computer workspace can induce too fixed a posture, too little bodily movement and too much repetition of particular combinations of joint movements.
This brief historical review is intended to indicate that, although there has been continuous development of ergonomics, it has taken the form of adding more and more problems rather than changing the problems. However, the corpus of knowledge grows and becomes more reliable and valid: energy expenditure norms do not depend on how or why the energy is expended; postural issues are the same in aircraft seats as in front of computer screens; and for the videoscreen work that now pervades human activity there are well-established principles based on a mix of laboratory evidence and field studies.
Ergonomics and Related Disciplines
The development of a science-based application which is intermediate between the well-established technologies of engineering and medicine inevitably overlaps into many related disciplines. In terms of its scientific basis, much of ergonomic knowledge derives from the human sciences: anatomy, physiology and psychology. The physical sciences also make a contribution, for example, to solving problems of lighting, heating, noise and vibration.
Most of the European pioneers in ergonomics came from the human sciences, and it is for this reason that ergonomics is well balanced between physiology and psychology. A physiological orientation is required as a background to problems such as energy expenditure, posture and the application of forces, including lifting. A psychological orientation is required to study problems such as information presentation and job satisfaction. There are of course many problems, such as stress, fatigue and shift work, which require a mixed human sciences approach.
Most of the American pioneers in this field were involved in either experimental psychology or engineering and it is for this reason that their typical occupational titles—human engineering and human factors—reflect a difference in emphasis (but not in core interests) from European ergonomics. This also explains why occupational hygiene, from its close relationship to medicine, particularly occupational medicine, is regarded in the United States as quite different from human factors or ergonomics. The difference in other parts of the world is less marked. Ergonomics concentrates on the human operator in action, occupational hygiene concentrates on the hazards to the human operator present in the ambient environment. Thus the central interest of the occupational hygienist is toxic hazards, which are outside the scope of the ergonomist. The occupational hygienist is concerned about effects on health, either long-term or short-term; the ergonomist is, of course, concerned about health but he or she is also concerned about other consequences, such as productivity, work design and workspace design. Safety and health are the generic issues which run through ergonomics, occupational hygiene, occupational health and occupational medicine. It is, therefore, not surprising to find that in a large institution of a research, design or production kind, these subjects are often grouped together. This makes possible an approach based on a team of experts in these separate subjects, each making a specialist contribution to the general problem of health, not only of the workers in the institution but also of those affected by its activities and products. By contrast, in institutions concerned with design or provision of services, the ergonomist might be closer to the engineers and other technologists.
It will be clear from this discussion that because ergonomics is interdisciplinary and still quite new there is an important problem of how it should best be fitted into an existing organization. It overlaps with so many other fields because it is concerned with people, and people are the basic and all-pervading resource of every organization. There are many ways in which it can be fitted in, depending on the history and objectives of the particular organization. The main criteria are that ergonomics objectives are understood and appreciated and that mechanisms for implementation of recommendations are built into the organization.
Aims of Ergonomics
It will be clear already that the benefits of ergonomics can appear in many different forms, in productivity and quality, in safety and health, in reliability, in job satisfaction and in personal development.
The reason for this breadth of scope is that its basic aim is efficiency in purposeful activity—efficiency in the widest sense of achieving the desired result without wasteful input, without error and without damage to the person involved or to others. It is not efficient to expend unnecessary energy or time because insufficient thought has been given to the design of the work, the workspace, the working environment and the working conditions. It is not efficient to achieve the desired result in spite of the situation design rather than with support from it.
The aim of ergonomics is to ensure that the working situation is in harmony with the activities of the worker. This aim is self-evidently valid but attaining it is far from easy for a variety of reasons. The human operator is flexible and adaptable and there is continuous learning, but there are quite large individual differences. Some differences, such as physical size and strength, are obvious, but others, such as cultural differences and differences in style and in level of skill, are less easy to identify.
In view of these complexities it might seem that the solution is to provide a flexible situation where the human operator can optimize a specifically appropriate way of doing things. Unfortunately such an approach is sometimes impracticable because the more efficient way is often not obvious, with the result that a worker can go on doing something the wrong way or in the wrong conditions for years.
Thus it is necessary to adopt a systematic approach: to start from a sound theory, to set measurable objectives and to check success against these objectives. The various possible objectives are considered below.
Safety and health
There can be no disagreement about the desirability of safety and health objectives. The difficulty stems from the fact that neither is directly measurable: their achievement is assessed by their absence rather than their presence. The data in question always pertain to departures from safety and health.
In the case of health, much of the evidence is long-term as it is based on populations rather than individuals. It is, therefore, necessary to maintain careful records over long periods and to adopt an epidemiological approach through which risk factors can be identified and measured. For example, what should be the maximum hours per day or per year required of a worker at a computer workstation? It depends on the design of the workstation, the kind of work and the kind of person (age, vision, abilities and so on). The effects on health can be diverse, from wrist problems to mental apathy, so it is necessary to carry out comprehensive studies covering quite large populations while simultaneously keeping track of differences within the populations.
Safety is more directly measurable in a negative sense in terms of kinds and frequencies of accidents and damage. There are problems in defining different kinds of accidents and identifying the often multiple causal factors and there is often a distant relationship between the kind of accident and the degree of harm, from none to fatality.
Nevertheless, an enormous body of evidence concerning safety and health has been accumulated over the past fifty years and consistencies have been discovered which can be related back to theory, to laws and standards and to principles operative in particular kinds of situations.
Productivity and efficiency
Productivity is usually defined in terms of output per unit of time, whereas efficiency incorporates other variables, particularly the ratio of output to input. Efficiency incorporates the cost of what is done in relation to achievement, and in human terms this requires the consideration of the penalties to the human operator.
In industrial situations, productivity is relatively easy to measure: the amount produced can be counted and the time taken to produce it is simple to record. Productivity data are often used in before/after comparisons of working methods, situations or conditions. Such comparisons involve assumptions about equivalence of effort and other costs, because they rest on the principle that the human operator will perform as well as is feasible in the circumstances: if productivity is higher, then the circumstances must be better. There is much to recommend this simple approach provided that it is used with due regard to the many possible complicating factors which can disguise what is really happening. The best safeguard is to try to make sure that nothing has changed between the before and after situations except the aspects being studied.
Efficiency is a more comprehensive but also a more difficult measure. It usually has to be specifically defined for a particular situation, and in assessing the results of any studies the definition should be checked for its relevance and validity in terms of the conclusions being drawn. For example, is bicycling more efficient than walking? Bicycling is much more productive in terms of the distance that can be covered on a road in a given time, and it is more efficient in terms of energy expenditure per unit of distance or, for indoor exercise, because the apparatus required is cheaper and simpler. On the other hand, the purpose of the exercise might be energy expenditure for health reasons or to climb a mountain over difficult terrain; in these circumstances walking will be more efficient. Thus, an efficiency measure has meaning only in a well-defined context.
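The distinction can be made concrete with a small worked example; the figures are illustrative round numbers chosen for this sketch, not measured values from the text. Taking distance covered as the output:

$$
\text{productivity} = \frac{\text{distance}}{\text{time}}, \qquad
\text{efficiency} = \frac{\text{distance}}{\text{energy expended}}
$$

If walking covers 5 km in an hour for roughly 250 kcal while bicycling covers 20 km in an hour for roughly 500 kcal, then bicycling is four times as productive (20 versus 5 km/h) but only about twice as efficient (0.04 versus 0.02 km/kcal). If the purpose of the activity is energy expenditure itself, the ranking reverses, confirming that an efficiency measure has meaning only against a defined objective.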
Reliability and quality
As explained above, reliability rather than productivity becomes the key measure in high technology systems (for instance, transport aircraft, oil refining and power generation). The controllers of such systems monitor performance and make their contribution to productivity and to safety by making tuning adjustments to ensure that the automatic machines stay on line and function within limits. All these systems are in their safest states either when they are quiescent or when they are functioning steadily within the designed performance envelope. They become more dangerous when moving or being moved between equilibrium states, for example, when an aircraft is taking off or a process system is being shut down. High reliability is the key characteristic not only for safety reasons but also because unplanned shut-down or stoppage is extremely expensive. Reliability is straightforward to measure after performance but is extremely difficult to predict except by reference to the past performance of similar systems. When or if something goes wrong human error is invariably a contributing cause, but it is not necessarily an error on the part of the controller: human errors can originate at the design stage and during setting up and maintenance. It is now accepted that such complex high-technology systems require a considerable and continuous ergonomics input from design to the assessment of any failures that occur.
Quality is related to reliability but is very difficult if not impossible to measure. Traditionally, in batch and flow production systems, quality has been checked by inspection after output, but the current established principle is to combine production and quality maintenance. Thus each operator has parallel responsibility as an inspector. This usually proves to be more effective, but it may mean abandoning work incentives based simply on rate of production. In ergonomic terms it makes sense to treat the operator as a responsible person rather than as a kind of robot programmed for repetitive performance.
Job satisfaction and personal development
From the principle that the worker or human operator should be recognized as a person and not a robot it follows that consideration should be given to responsibilities, attitudes, beliefs and values. This is not easy because there are many variables, mostly detectable but not quantifiable, and there are large individual and cultural differences. Nevertheless a great deal of effort now goes into the design and management of work with the aim of ensuring that the situation is as satisfactory as is reasonably practicable from the operator’s viewpoint. Some measurement is possible by using survey techniques and some principles are available based on such working features as autonomy and empowerment.
Even accepting that these efforts take time and cost money, there can still be considerable dividends from listening to the suggestions, opinions and attitudes of the people actually doing the work. Their approach may differ both from that of the external work designer and from the assumptions made by the designer or manager. These differences of view are important and can provide a refreshing change in strategy on the part of everyone involved.
It is well established that the human being is a continuous learner or can be, given the appropriate conditions. The key condition is to provide feedback about past and present performance which can be used to improve future performance. Moreover, such feedback itself acts as an incentive to performance. Thus everyone gains, the performer and those responsible in a wider sense for the performance. It follows that there is much to be gained from performance improvement, including self-development. The principle that personal development should be an aspect of the application of ergonomics requires greater designer and manager skills but, if it can be applied successfully, can improve all the aspects of human performance discussed above.
Successful application of ergonomics often follows from doing no more than developing the appropriate attitude or point of view. The people involved are inevitably the central factor in any human effort and the systematic consideration of their advantages, limitations, needs and aspirations is inherently important.
Conclusion
Ergonomics is the systematic study of people at work with the objective of improving the work situation, the working conditions and the tasks performed. The emphasis is on acquiring relevant and reliable evidence on which to base recommendations for changes in specific situations and on developing more general theories, concepts, guidelines and procedures which will contribute to the continually developing expertise available from ergonomics.
In the 3rd edition of the ILO’s Encyclopaedia, published in 1983, ergonomics was summarized in one article that was only about four pages long. Since the publication of the 3rd edition, there has been a major change in emphasis and in understanding of interrelationships in safety and health: the world is no longer easily classifiable into medicine, safety and hazard prevention. In the last decade almost every branch in the production and service industries has expended great effort in improving productivity and quality. This restructuring process has yielded practical experience which clearly shows that productivity and quality are directly related to the design of working conditions. One direct economic measure of productivity, the cost of absenteeism through illness, is affected by working conditions. Therefore it should be possible to increase productivity and quality and to avoid absenteeism by paying more attention to the design of working conditions.
In sum, the simple hypothesis of modern ergonomics can be stated thus: Pain and exhaustion cause health hazards, wasted productivity and reduced quality, which are measures of the costs and benefits of human work.
This simple hypothesis can be contrasted with occupational medicine, which generally restricts itself to establishing the aetiology of occupational diseases. Occupational medicine’s goal is to establish conditions under which the probability of developing such diseases is minimized. Using ergonomic principles these conditions can be most easily formulated in the form of demands and load limitations. Occupational medicine can be summed up as establishing “limitations through medico-scientific studies”. Traditional ergonomics regards its role as one of formulating the methods by which, through design and work organization, the limitations established through occupational medicine can be put into practice. Traditional ergonomics could then be described as developing “corrections through scientific studies”, where “corrections” are understood to be all work design recommendations that call for attention to be paid to load limits only in order to prevent health hazards. It is a characteristic of such corrective recommendations that practitioners are ultimately left alone with the problem of applying them: there is no multidisciplinary team effort.
The aim originally set out when ergonomics was conceived in 1857 stands in contrast to this kind of “ergonomics by correction”:
... a scientific approach enabling us to reap, for the benefit of ourselves and others, the best fruits of life’s labour for the minimum effort and maximum satisfaction (Jastrzebowski 1857).
The term “ergonomics” derives from the Greek “ergon”, meaning work, and “nomos”, meaning rule or law. One could propose that ergonomics should develop “rules” for a more forward-looking, prospective concept of design. In contrast to “corrective ergonomics”, the idea of prospective ergonomics is based on applying ergonomic recommendations which simultaneously take into consideration profitability margins (Laurig 1992).
The basic rules for the development of this approach can be deduced from practical experience and reinforced by the results of occupational hygiene and ergonomics research. In other words, prospective ergonomics means searching for alternatives in work design which prevent fatigue and exhaustion on the part of the working subject in order to promote human productivity (“... for the benefit of ourselves and others”). This comprehensive approach of prospective ergonomics includes workplace and equipment design as well as the design of working conditions determined by an increasing amount of information processing and a changing work organization. Prospective ergonomics is, therefore, an interdisciplinary approach of researchers and practitioners from a wide range of fields united by the same goal, and one part of a general basis for a modern understanding of occupational safety and health (UNESCO 1992).
Based on this understanding, the Ergonomics chapter in the 4th edition of the ILO Encyclopaedia covers the different clusters of knowledge and experiences oriented toward worker characteristics and capabilities, and aimed at an optimum use of the resource “human work” by making work more “ergonomic”, that is, more humane.
The choice of topics and the structure of articles in this chapter follows the structure of typical questions in the field as practised in industry. Beginning with the goals, principles and methods of ergonomics, the articles which follow cover fundamental principles from basic sciences, such as physiology and psychology. Based on this foundation, the next articles introduce major aspects of an ergonomic design of working conditions ranging from work organization to product design. “Designing for everyone” puts special emphasis on an ergonomic approach that is based on the characteristics and capabilities of the worker, a concept often overlooked in practice. The importance and diversity of ergonomics is shown in two examples at the end of the chapter and can also be found in the fact that many other chapters in this edition of the ILO Encyclopaedia are directly related to ergonomics, such as Heat and Cold, Noise, Vibration, Visual Display Units, and virtually all chapters in the sections Accident and Safety Management and Management and Policy.
The complex merchant vessels, passenger ships and ships of war of the 1990s comprise tons of steel and aluminium as well as a variety of materials that range from the most common to the very exotic. Each vessel may contain hundreds or even thousands of kilometres of pipe and wire, and is equipped with the most sophisticated power plants and electronic equipment available. They must be constructed and maintained to survive the most hostile of environments, while providing comfort and safety for the crews and passengers aboard and reliably completing their missions.
Ship construction and repair rank among the most hazardous industries in the world. According to the US Bureau of Labor Statistics (BLS), for example, shipbuilding and repair is one of the three most hazardous industries. While materials, construction methods, tools and equipment have changed and improved radically over time and continue to evolve, and while training and emphasis on safety and health have significantly improved the lot of the shipyard worker, the fact remains that throughout the world each year workers die or are seriously injured while employed in the construction, maintenance or repair of ships.
Despite advances in technology, many of the tasks and conditions associated with constructing, launching, maintaining and repairing today’s vessels are essentially the same as they were when the very first keel was laid thousands of years ago. The size and shape of the components of a vessel and the complexity of the work involved in assembling and outfitting them largely preclude any kind of automated processes, although some automation has been made possible by recent technological advances. Repair work remains largely resistant to automation. Work in the industry is very labour intensive, requiring highly specialized skills, which often must be utilized under less than ideal circumstances and in a physically challenging situation.
The natural environment in itself poses a significant challenge for shipyard work. While there are a few shipyards that have the capability to construct or repair vessels under cover, in most cases shipbuilding and repairing is done largely out of doors. There are shipyards located in every climatic region of the world, and while the more extreme northern yards are dealing with winter (i.e., slippery conditions wrought by ice and snow, short daylight hours and the physical effects on workers of long hours on cold steel surfaces, often in uncomfortable postures), the yards in more southerly climes are faced with the potential for heat stress, sunburn, working surfaces hot enough to cook on, insects and even snake bites. Much of this work is done over, in, under or around the water. Often, swift tidal currents may be whipped by the wind, causing a pitching and rolling working surface on which workers must perform very exacting tasks in a variety of positions, with tools and equipment that have the potential for inflicting serious physical injury. That same often unpredictable wind is a force to be reckoned with when moving, suspending or placing units often weighing in excess of 1,000 tons with a single or multiple crane lift. The challenges presented by the natural environment are manifold and provide for a seemingly endless combination of situations for which safety and health practitioners must design preventive measures. A well-informed and trained workforce is critical.
As the ship grows from the first steel plates which comprise the keel, it becomes an ever-changing, ever-more-complex environment with a constantly changing subset of potential hazards and hazardous situations requiring not only well-founded procedures for accomplishing the work, but mechanisms for recognizing and dealing with the thousands of unplanned situations which invariably arise during the construction process. As the vessel grows, scaffolding or staging is added continuously to provide access to the hull. While the very construction of this staging is highly specialized and at times inherently hazardous work, its completion means that workers are subjected to greater and greater risk as the height of the staging over the ground or water increases. As the hull begins to take form, the interior of the ship is also taking shape as modern construction methods permit large subassemblies to be stacked on one another, and enclosed and confined spaces are formed.
It is at this point in the process that the labour-intensive nature of the work is most apparent. Safety and health measures must be well coordinated. Worker awareness (for the safety of both the individual worker and those nearby) is fundamental to accident-free work.
Each space within the confines of the hull is designed for a very specialized purpose. The hull may be a void which will contain ballast, or it may house tanks, cargo holds, sleeping compartments or a highly sophisticated combat control centre. In every case, building it will require a number of specialized workers to perform a variety of tasks within close proximity of one another. A typical scenario may find pipefitters brazing valves into position, electricians pulling wire cable and installing circuit boards, brush painters doing touch-up, shipfitters positioning and welding deckplates, crews of insulators or carpenters at work, and a test crew verifying that a system is activated, all in the same area at the same time. Such situations, and others even more complex, take place all day, every day, in an ever-changing pattern dictated by schedule or engineering changes, personnel availability and even the weather.
The application of coatings presents a number of hazards. Spray-painting operations must often be accomplished in confined spaces, with volatile paints and solvents and/or a variety of epoxy-type coatings notorious for their sensitizing characteristics.
Enormous progress in the area of safety and health for the shipyard worker has been made over the years through the development of improved equipment and construction methods, safer facilities and a highly-trained workforce. However, the greatest gains have been made and continue to be made as we turn our attention toward the individual worker and focus on eliminating behaviour which contributes so significantly to accidents. While this could be said of almost any industry, the labour-intensive character of shipyard work makes it especially important. As we move toward safety and health programmes which more actively involve the worker and incorporate his or her ideas, not only does the worker’s awareness of the hazards inherent in the job and how to avoid them increase, he or she begins to feel ownership for the programme. It is with this ownership that true success in safety and health can be realized.
General Profile
Distinct segments of the automobile and transportation equipment industry produce: automobiles; light trucks (vans, pickups and sport utility vehicles); heavy trucks; buses; farm and construction equipment; and locomotives and rail cars.
The characteristic assembly line for the finished vehicle is supported by separate manufacturing facilities for various parts and components. Vehicle components may be manufactured within the parent enterprise or purchased from separate corporate entities. The industry is a century old. Production in the North American, European and (since the Second World War) Japanese sectors of the industry became concentrated in a few corporations which maintained branch assembly operations in South America, Africa and Asia for sales to those markets. International trade in finished vehicles has increased since the 1970s, and trade in original equipment and replacement auto parts from facilities in the developing world is increasingly important.
The manufacture of heavy trucks, buses and farm and construction equipment is a distinct business from car production, although some auto producers manufacture for both markets, and farm and construction equipment are often made by the same corporations. This line of products uses large diesel engines rather than gasoline engines. Production rates are typically slower, volumes smaller and processes less mechanized.
The types of facilities, the production processes and the typical components in car production are shown in table 1. Figure 1 provides a flow chart for the steps in automobile production. The standard industrial classifications that are found in this industry include: motor vehicles and car body assembly, truck and bus body assembly, motor vehicle parts and accessories, iron and steel foundries, non-ferrous foundries, automotive stampings, iron and steel forgings, engine electrical equipment, auto and apparel trimmings and others. The number of people employed in the manufacture of parts exceeds the number employed in assembly. These processes are supported by facilities for design of the vehicle, construction and maintenance of plant and equipment, clerical and managerial functions and a dealer and repair function. In the United States, car dealers, service stations and wholesale auto parts facilities employ about twice as many workers as the manufacturing functions.
Table 1. Production processes for automobile production

| Facility type | Product and process |
| --- | --- |
| Ferrous foundry | Castings for machining into engine blocks and heads, other components |
| Aluminium foundry and die cast | Engine blocks and heads, transmission casings, other cast components |
| Forging and heat treatment | Pre-machined parts for engines, suspensions and transmissions |
| Stamping | Body panels and subassemblies |
| Engine | Machining of castings, assembly into finished product |
| Transmission | Machining of castings and forgings, assembly into product |
| Glass | Windshields, side windows and backlights |
| Automotive parts | Machining, stamping and assembly, including brakes, suspension parts, heating and air conditioning, pollution-control equipment, vehicle lighting |
| Electrical and electronic | Ignition systems, radios, motors, controllers |
| Hardware and hard trim | Polymer moulded exterior body panels, trim components |
| Soft trim | Seat cushions, built up seats, dashboard assemblies, interior body panels |
| Vehicle assembly | Body shop, painting, chassis assembly, final assembly |
| Parts depots | Warehousing, parts painting and assembly, packaging and shipping |
Figure 1. Flow chart for automobile production.
The workforce is predominantly male. In the United States, for example, it is about 80% male. Female employment is higher in trim and other lighter manufacturing processes. There is limited opportunity for job transfer from hourly work to clerical work or to technical and professional employment. Assembly line supervisors do, however, often come from the production and maintenance units. About 20% of hourly employees are employed in the skilled trades, although the fraction of employees in any particular facility who are in skilled trades varies greatly, from less than 10% in assembly operations to almost 50% in stamping operations. Because of contractions in employment levels over the decade of the 1980s, the average age of the workforce in the late 1990s exceeds 45 years; hiring of new workers resumed only in 1994.
Major Sectors and Processes
Ferrous casting
Founding, or metal casting, involves pouring molten metal into a hollow inside a heat-resistant mould, which is the outside, or negative, shape of the desired metal object. The mould may contain a core to determine the dimensions of any internal cavity in the final metal object. Foundry work consists of the following basic steps: making a mould around a pattern; making and inserting any cores; melting the metal; pouring the molten metal into the mould; shaking out the cooled casting; and removing sprues and cleaning and finishing the casting.
Ferrous foundries of the production type are a characteristic auto industry process. They are used in the automobile industry to produce engine blocks, heads and other parts. There are two basic types of ferrous foundries: gray iron foundries and ductile iron foundries. Gray iron foundries use scrap iron or pig iron (new ingots) to make standard iron castings. Ductile iron foundries add magnesium, cerium or other additives (often called ladle additives) to the ladles of molten metal before pouring to make nodular or malleable iron castings. The different additives have little impact on workplace exposures.
Typical automobile foundries use cupola or induction furnaces to melt the iron. A cupola furnace is a tall vertical furnace, open at the top, with hinged doors at the bottom. It is charged from the top with alternate layers of coke, limestone and metal; the molten metal is removed at the bottom. An induction furnace melts the metal by passing a high electric current through copper coils on the outside of the furnace. This induces an electric current in the outer edge of the metal charge, and the charge’s high electrical resistance causes it to heat. Melting progresses from the outside of the charge to the inside.
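The outside-in melting pattern reflects the skin effect: an alternating magnetic field penetrates a conductor only to a limited depth, so the induced currents, and hence the resistive heating, concentrate in the outer layer of the charge. As a brief sketch of the standard electromagnetic relation (general physics, not a specification of any particular furnace):

$$
\delta = \sqrt{\frac{2\rho}{\omega\mu}}
$$

where $\delta$ is the skin depth, $\rho$ the electrical resistivity of the charge, $\mu$ its magnetic permeability and $\omega$ the angular frequency of the coil current. Higher frequencies concentrate the heating in a thinner outer layer, one reason the operating frequency is chosen to suit the size of the charge.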
In ferrous foundries, moulds are traditionally made from green sand (silica sand, coal dust, clay and organic binders), which is packed around the pattern, usually in two parts, and then compacted. This can be done manually or mechanically on a conveyor belt in production foundries. The pattern is then removed and the mould assembled mechanically or manually. The mould must have a sprue, the channel through which the molten metal is poured.
If the metal casting is to have a hollow interior, a core must be inserted into the mould. Cores can be made from thermosetting phenol-formaldehyde resins (or similar resins) mixed with sand which is then heated (hot box method) or from amine-cured urethane/sand mixtures which cure at room temperature (cold box method). The resin/sand mixture is poured into a core box which has a cavity in the desired shape of the core.
Gray iron castings are typically large products, such as engine blocks. The large physical size increases the physical hazards on the job and also presents more difficult dust control problems.
Atmospheric contaminants in foundry processes
Silica-containing dusts. Silica-containing dusts are found in finishing, in shakeout-knockout, in moulding, in core making and in sand system and melt department maintenance activities. Air sampling studies during the 1970s typically found severalfold overexposures to silica, with the highest levels in finishing. Exposures were higher in mechanized production foundries than in job shops. Improved control measures, including enclosure and exhaust of sand systems and shakeout, mechanization and periodic industrial hygiene measurements, have reduced levels. Standard ventilation designs are available for most foundry operations. Exposures above current limits persist in finishing operations due to inadequate sand removal after shakeout and silica burn-in on casting surfaces.
Carbon monoxide. Acutely dangerous carbon monoxide levels are encountered during cupola furnace maintenance and during upsets in process ventilation in the melt department. Excessive levels can also be encountered in cooling tunnels. Carbon monoxide exposures have also been associated with cupola melting and with the combustion of carbon material in green sand moulds. Exposure to sulphur dioxide of unknown origin can also occur, perhaps from sulphur contaminants in the mould.
Metal fumes. Metal fumes are found in melting and pouring operations. It is necessary to use compensating hoods over pouring stations in order to exhaust both metal fumes and combustion gases. Excessive exposures to lead fumes are occasionally encountered in iron foundries and are pervasive in brass foundries; lead fumes in gray iron arise from lead contamination of the scrap iron starting materials.
Other chemical and physical hazards. Formaldehyde, amine vapours and isocyanate pyrolysis products can be found in coremaking and core burn-off products. High-production coremaking is characteristic of the auto industry. Hot box phenol-formaldehyde coremaking replaced oil-sand cores in the mid-1960s and brought substantial formaldehyde exposures, which, in turn, increased the risks of respiratory irritation, lung function abnormalities and lung cancer. Protection requires local exhaust ventilation (LEV) at the core machine, core check stations and conveyor, together with low-emission resins. Where phenol-formaldehyde coremaking has been replaced by cold box amine-cured polyurethane systems, effective maintenance of seals at the core box, and LEV where cores are stored prior to insertion in the mould, are needed to protect employees against the ocular effects of amine vapours.
Workers who are employed in these areas should undergo pre-placement and periodic medical examinations, including a chest x ray reviewed by an expert reader, a lung function test and a symptoms questionnaire, which are essential to detect early signs of pneumoconiosis, chronic bronchitis and emphysema. Periodic audiograms are needed, as hearing protection is often ineffective.
High levels of noise and vibration are encountered in processes such as furnace loading, mechanical de-coring, stripping and knockout of castings and fettling with pneumatic tools.
Foundry processes are heat intensive. The radiant heat load in melting, pouring, shakeout, core knockout and sprue removal requires special protective measures. Increased relief time (time away from the job) is a common practice, and extra relief is also commonly provided during hot summer months. Workers should be outfitted with heat-protective clothing and eye and face protection in order to prevent the formation of cataracts. Climate-controlled break areas near the work area improve the protective value of heat relief.
Aluminium casting
Aluminium casting (foundry and die-casting) is used to produce cylinder heads, transmission cases, engine blocks and other automotive parts. These facilities typically cast the products in permanent moulds, with and without sand cores, although the lost foam process has been introduced. In the lost foam process, the polystyrene foam pattern is not removed from the mould but is vaporized by the molten metal. Die casting involves the forcing of molten metal under pressure into metal moulds or dies. It is used to make large numbers of small, precise parts. Die-casting is followed by trim removal on a forge press and some finishing activities. Aluminium may be melted onsite or it can be delivered in molten form.
Hazards can arise from significant pyrolysis of the core during pouring. Silica exposures may be found in permanent mould foundries where large cores are present. Local exhaust at shakeout is needed to prevent hazardous levels of exposure.
Other non-ferrous casting
Other non-ferrous die casting and electroplating processes are used to produce the trim on automotive products, the hardware and the bumpers. Electroplating is a process in which a metal is deposited onto another metal by an electrochemical process.
Bright metal trim traditionally was die-cast zinc, successively plated with copper, nickel and chrome, and then finished by polishing. Carburettor and fuel-injector parts are also die cast. Manual extraction of parts from die-casting machines is increasingly being replaced by mechanical extraction, and bright metal parts are being replaced by painted metal parts and plastic. Bumpers had been produced by pressing steel, followed by plating, but these methods are increasingly being replaced by the use of polymer parts in passenger vehicles.
Electroplating with chrome, nickel, cadmium, copper and so on is normally carried out in separate workshops and involves exposure to, inhalation of or contact with vapours from the acid plating baths. An increased incidence of cancer has been associated with both chromic acid and sulphuric acid mists. These mists are also extremely corrosive to the skin and respiratory tract. Electroplating baths should be labelled as to contents and should be fitted with special push-pull local exhaust systems. Anti-foaming surface tension agents should be added to the liquid in order to minimize mist formation. Workers should wear eye and face protection, hand and arm protection and aprons. Workers need periodic health checks as well.
Inserting and removing components from open-surface tanks are very hazardous operations which are increasingly mechanized. The buffing and polishing of plated components on felt belts or discs is strenuous and entails exposure to cotton, hemp and flax dust. This hazard can be minimized by providing a fixture or by mechanizing with transfer-type polishing machines.
Forging and heat treatment
Hot forging and cold forging followed by heat treatment are used to produce engine, transmission and suspension parts and other components.
Historically, automotive forging involved heating iron billets (bars) in individual oil-fired furnaces set close to individually operated steam hammer forges. In these drop hammer forges, the heated iron is placed in the bottom half of a metal die; the top half of the die is attached to the drop hammer. The iron is formed into the desired size and shape by multiple impacts of the dropping hammer. Today such processes are being replaced by induction heating of billets, which are then worked in forging presses (which use pressure instead of impact to form the metal part) and upsetters, or by cold forging followed by heat treatment.
The forging process is extremely noisy. Noise exposure can be abated by replacing oil furnaces with induction heating devices, and the steam hammers with forging presses and upsetters. The process is also smoky. Oil smoke can be reduced by modernizing the furnace.
Forging and heat treatment are heat-intensive operations. Spot cooling using make-up air that circulates over workers in process areas is needed to reduce heat stress.
Machining
High production machining of engine blocks, crankshafts, transmissions and other components is characteristic of the auto industry. Machining processes are found within various parts manufacturing facilities and are the dominant process in engine, transmission and bearing production. Components such as camshafts, gears, differential pinions and brake drums are produced in machining operations. One-person machining stations are increasingly replaced by multiple station machines, machining cells and transfer lines which may be up to 200 metres in length. Soluble oils and synthetic and semi-synthetic coolants increasingly predominate over straight oils.
Foreign body injuries are common in machining operations; increased mechanical material handling and personal protective equipment are key preventive measures. Increased automation, particularly long transfer lines, increases the risk of severe acute trauma; improved machine guarding and energy lockout are preventive programmes.
The most effective control measures for coolant mist include full enclosure of machining stations and fluid circulation systems, local exhaust directed outside or recirculated only through a high-efficiency filter, coolant system controls to reduce mist generation and coolant maintenance to control micro-organisms. The addition of nitrite to amine-containing fluids must be prohibited because of the risk of nitrosamine formation. Oils with substantial polynuclear aromatic hydrocarbon (PAH) content must not be used.
In case-hardening, tempering, nitrate salt baths and other metal heat-treatment processes using furnaces and controlled atmospheres, the microclimate may be oppressive and various airborne toxic substances encountered (e.g., carbon monoxide, carbon dioxide, cyanides).
Machine attendants and workers handling swarf and centrifuging cutting oil prior to filtration and regeneration are exposed to the risk of dermatitis. Exposed workers should be provided with oil-resistant aprons and encouraged to wash thoroughly at the end of each shift.
Grinding and tool sharpening may present a danger of hard metal disease (interstitial lung disease) unless cobalt exposure is measured and controlled. Grinding wheels should be fitted with screens, and eye and face protection and respiratory protective equipment should be worn by grinders.
Machined parts are typically assembled into a finished component, with attendant ergonomic risks. In engine facilities engine testing and running-in must be carried out at test stations fitted with equipment for removing exhaust gases (carbon monoxide, carbon dioxide, unburned hydrocarbons, aldehydes, nitrogen oxides) and with noise-control facilities (booths with sound-absorbent walls, insulated bedplates). Noise levels may be as high as 100 to 105 dB with peaks at 600 to 800 Hz.
Stamping
Pressing of sheet metal (steel) into body panels and other components, often combined with subassembly by welding, is done in large facilities with large and small mechanical power presses. Individually loaded and unloaded presses were successively replaced by mechanical extraction devices and, more recently, by shuttle transfer mechanisms which can load as well, yielding fully automated press lines. Fabrication of subassemblies such as hoods and doors is accomplished with resistance welding presses and is increasingly performed in cells with robot transfer of parts.
The main process is the pressing of steel sheet, strip and light sections on mechanical power presses ranging in capacity from roughly 20 to 2,000 tonnes.
Modern press safety requires effective machinery guarding, prohibition of hands in dies, safety controls including anti-tie-down two-hand controls, part-revolution clutches and brake monitors, automatic feed and ejection systems, collection of press scrap and the use of personal protective equipment such as aprons, foot and leg protection and hand and arm protection. Outmoded and hazardous full-revolution clutch machines and pull-back devices must be eliminated. Handling rolled steel with cranes and loading decoilers prior to blanking at the head of a press line pose severe safety hazards.
Press operators are exposed to substantial mist levels from drawing compounds which are similar in composition to machining fluids such as soluble oils. Welding fumes are present in fabrication. Noise exposures are high in stamping. Control measures for noise include mufflers on air valves, lining metal chutes with vibration-damping material, quieting parts carts and isolating presses; the point of operation of the press is not the main site of noise generation.
Following pressing, the pieces are assembled into sub-groups such as hoods and doors using resistance welding presses. Chemical hazards include welding fumes from primarily resistance welding and pyrolysis products of surface coatings, including drawing compound and sealers.
Plastic body panels and trim components
Metal trim parts such as chrome strips are increasingly being replaced by polymer materials. Hard body parts may be made from fibrous glass-reinforced polyester/styrene systems, acrylonitrile-butadiene-styrene (ABS) thermoplastic systems or polyethylene. Polyurethane systems may be high density for body parts, such as nose cones, or low-density foam for seats and interior padding.
Polyurethane foam moulding presents severe respiratory sensitization problems from inhalation of di-isocyanate monomer and possibly catalysts. Complaints persist in operations which are in compliance with limits for toluene di-isocyanate (TDI). Methylene chloride exposures from gun flushing can be substantial. Pouring stations need enclosure and LEV; spills of isocyanate should be minimized by safety devices and cleaned promptly by trained crews. Fires in curing ovens are also a problem in these facilities. Seat manufacture has severe ergonomic stresses, which can be reduced by fixtures, especially for stretching upholstery over cushions.
Styrene exposure from fibrous glass lay-up should be controlled by enclosing storage of mats and local exhaust. Dusts from grinding cured parts contain fibrous glass and should also be controlled by ventilation.
Vehicle assembly
Assembly of components into the finished vehicle typically takes place on a mechanized conveyor involving upwards of a thousand employees per shift, with additional support personnel. The largest segment of employees in the industry works in this type of process.
A vehicle assembly plant is divided into distinct units: the body shop, which can include subassembly activities also found in a stamping plant; the paint shop; chassis assembly; the cushion room (which can be outsourced); and final assembly. Paint processes have evolved toward lower-solvent, more reactive formulations in recent years, with increasing use of robotic and mechanical application. The body shop has become increasingly automated, with reduced arc welding and the replacement of hand-operated spot-welding guns with robots.
Light truck assembly (vans, pickups, sport utility vehicles) is similar in process to car assembly. Heavy truck, farm and construction equipment manufacture involves less mechanization and automation, longer cycle jobs, heavier physical labour, more arc welding and different paint systems.
The body shop of an assembly plant assembles the shell of the vehicle. Resistance welding machines may be transfer type, robotic or individually operated. Suspended spot welding machines are heavy and cumbersome to manipulate even when fitted with a counterbalance system. Transfer machines and robots have eliminated many manual jobs and removed workers from close, direct exposure to hot metal, sparks and combustion products of the mineral oil which contaminates the sheet metal. However, increased automation carries increased risk of severe injury to maintenance workers; energy lockout programmes and more elaborate and automatic machine guarding systems, including presence-sensing devices, are needed in automated body shops. Arc welding is employed to a limited degree. During this work, employees are exposed to intense visible and ultraviolet radiation and risk inhalation of combustion gases. LEV, protective screens and partitions, welding visors or goggles, gloves and aprons are needed for arc welders.
The body shop has the greatest laceration and foreign body injury hazards.
In past years assembly techniques and body panel defect retouching processes entailed soldering with lead and tin alloys (also containing traces of antimony). Soldering and especially the grinding away of excess solder produced a severe risk of lead poisoning, including fatal cases when the process was introduced in the 1930s. Protective measures included an isolated solder grind booth, respirators supplying positive-pressure air for solder grinders, hygiene facilities and lead-in-blood monitoring. Nevertheless, increased body burdens of lead and occasional cases of lead poisoning among workers and families persisted into the 1970s. Lead body solder has been eliminated in US passenger vehicles. In addition, noise levels in these processes may range up to 95 to 98 dB, with peaks at 600 to 800 Hz.
Automobile bodies from the body shop enter the paint shop on a conveyor, where they are degreased, often by the manual application of solvents, cleaned in a closed tunnel (bonderite) and undercoated. The undercoat is then rubbed down by hand with an oscillating tool using wet abrasive paper, and the final layers of paint are applied and then cured in an oven. In paint shops, workers may inhale toluene, xylene, methylene chloride, mineral spirits, naphtha, butyl and amyl acetate and methyl alcohol vapours from body, booth and paint gun cleaning. Spray painting is carried out in downdraft booths with a continuously filtered air supply. Solvent vapour at painting stations is typically well controlled by downdraft ventilation, which is needed for product quality. Inhalation of paint particulate was formerly less well controlled, and some paints in the past contained salts of chromium and lead. In a well-controlled booth, the workers should not have to wear respiratory protective equipment to achieve compliance with exposure limits; many voluntarily wear respirators for overspray. Recently introduced two-component polyurethane paints should be sprayed only when air-supplied helmets are used, with suitable booth re-entry times. Environmental regulations have spurred the development of high-solids paints with lower solvent content. Newer resin systems may generate substantial formaldehyde exposure, and powdered paints now being introduced are epoxy formulations which may be sensitizers. Recirculation of paint booth and oven exhaust from roof ventilating units into work areas outside the booth is a common complaint; this problem can be prevented by exhaust stacks of sufficient height.
In the production of commercial vehicles (lorries (trucks), trams, trolley buses) and farm and construction equipment, manual spray painting is still widely employed due to the large surfaces to be covered and the need for frequent retouching. Lead and chromate paints may still be employed in these operations.
The painted body work is dried in hot air and infra-red ovens fitted with exhaust ventilation and then moves on to join the mechanical components in the final assembly shop, where the body, engine and transmission are joined together and the upholstery and internal trim are fitted. It is here that conveyor belt work is to be seen in its most highly developed form. Each worker carries out a series of tasks on each vehicle, with cycle times of about 1 minute, as the conveyor system transports the bodies gradually along the assembly line. These processes demand constant vigilance, may be highly monotonous and act as stressors on certain subjects. Although normally not imposing an excessive metabolic load, virtually all of these processes involve moderate to severe risk factors for musculoskeletal disorders.
The postures or movements the worker is obliged to adopt, such as when installing components inside the vehicle or working under the body (with hands and forearms above head level) are the most readily abated hazards, although force and repetition must also be reduced to abate risk factors. After final assembly the vehicle is tested, finished and dispatched. Inspection can be limited to roller tests on a roller bed (where ventilation of exhaust fumes is important) or can include track trials on different types of surface, water and dust tightness trials and road trials outside the factory.
Parts depots
Parts depots are integral to distributing the finished product and supplying repair parts. Workers in these high-production warehouses use order pickers to retrieve parts from elevated storage locations, and automated parts-delivery systems support three-shift operations. Manual handling of packaged parts is common, and painting and other production processes may also be found in parts depots.
Testing of prototypes
Testing of automobile prototypes is specialized to the industry. Test drivers are exposed to a variety of physiological stresses, such as violent acceleration and deceleration, jolting and vibration, carbon monoxide and exhaust fumes, noise, prolonged work spells and varying ambient and climatic conditions. Endurance drivers face particular stresses. Fatal vehicle accidents occur in this occupation.
Assembly of heavy trucks and farm and construction equipment
The processes in these industry sectors are essentially the same as in the assembly of cars and light trucks. Contrasts include a slower pace of production, including non-assembly-line operations; more arc welding; riveting of truck cabs; movement of components by crane; use of chromate-containing pigments; and diesel exhaust at drive-off at the end of the assembly line. These sectors include more producers relative to volume and are less vertically integrated.
Manufacture of locomotives and rail cars
Distinct segments of railroad equipment manufacture include locomotives, passenger cars, freight cars and electric self-propelled passenger cars. Compared to car and truck manufacture, assembly processes involve longer cycles; there is more reliance on cranes for material handling; and arc welding is more heavily used. The large size of the products makes engineering control of spray paint operations difficult and creates situations where workers are completely enclosed in the product while welding and spray painting.
Health Problems and Disease Patterns
Production processes are not unique to the auto industry, but the scale of production and the high degree of integration and automation often combine to present special hazards to employees. Hazards in this complex industry are best arrayed along three dimensions: process type, job classification group and adverse outcome.
Adverse outcomes with distinct cause and prevention methods can be distinguished as: fatal and severe acute injuries; injuries generally; repeated trauma disorders; short-onset chemical effects; occupational disease from long-term chemical exposure; service sector hazards (including infectious disease and client- or customer-initiated violence); and work environment hazards such as psychosocial stress.
Job classification groups in the automobile industry can usefully be divided by divergent hazard spectra: skilled trades (maintenance, service, fabrication and installation of production equipment); mechanical material handling (powered industrial truck and crane operators); production service (including non-skilled maintenance and cleaners); fixed production (the largest grouping, including assemblers and machine operators); clerical and technical; and executive and managerial.
Health and safety outcomes common to all processes
According to the US Bureau of Labor Statistics, the auto industry has one of the highest injury rates overall: 1 in 3 employees is hurt each year, 1 in 10 seriously enough to lose time from work. Lifetime risk of occupational fatality from acute traumatic injury is 1 in 2,000. Certain hazards are broadly characteristic of occupational groupings throughout the industry, while others, particularly chemical exposures, are characteristic of specific production processes.
Skilled trades and mechanical material-handling occupations are at high risk for fatal and severe acute traumatic injuries. The skilled trades are less than 20% of the workforce, yet suffer 46% of fatal occupational injuries. Mechanical material-handling occupations suffer 18% of fatalities. The skilled-trades fatalities largely occur during maintenance and service activities, with uncontrolled energy as the leading cause. Preventive measures include energy lockout programmes, machine guarding, fall prevention and industrial truck and crane safety, all based on directed job safety analysis.
By contrast, fixed production occupations suffer higher rates of injuries generally and of repeated trauma disorders, but are at reduced risk of fatal injury. Musculoskeletal injuries, including repeated trauma disorders and the closely related strains and sprains caused by overexertion or repetitive motion, account for 63% of disabling injuries in assembly facilities and about half the injuries in other process types. The chief preventive measures are ergonomics programmes based on risk factor analysis and on structured reduction of the force, frequency and postural stresses of high-risk jobs.
Production service occupations and skilled trades face the majority of acute and high-level chemical hazards. Typically these exposures occur during routine cleaning, response to spills and process upsets and in confined space entry during maintenance and service activities. Solvent exposures are prominent among these hazardous situations. The long-term health consequences of these intermittent high exposures are not known. High exposures to carcinogenic coal tar pitch volatiles are experienced by employees tarring wood block floors in many facilities or torching floor bolts in stamping plants. Excess mortality from lung cancer has been observed in such groups. Preventive measures focus on confined space entry and hazardous waste and emergency response programmes, although long-term prevention depends on process change to eliminate exposure.
Effects of chronic exposure to chemicals and some physical agents are most evident among fixed production workers, principally because these groups can more feasibly be studied. Virtually all the process-specific adverse effects described below arise from exposures in compliance with existing occupational exposure limits, so protection will depend on reduction of allowable limits. In the near term, best practices, including well designed and maintained exhaust systems, serve to reduce exposures and risks.
Noise-induced hearing loss is pervasive in all segments of the industry.
All sectors of the workforce are subject to psychosocial stress, although it is more apparent in the clerical, technical, administrative support, managerial and professional occupations because of their generally less intense exposure to other hazards. Nevertheless, job stress is likely more intense among production and maintenance employees, and its effects likely greater. No effective means of reducing the stresses of night work and rotating shift work have been implemented, although shift-preference agreements allow some self-selection and shift premiums compensate employees assigned to off shifts. Acceptance of rotating shifts by the workforce is historical and cultural. Skilled trades and maintenance employees work substantially more overtime, and more during holidays, vacations and shutdowns, than production employees. Typical work schedules include two production shifts and a shorter maintenance shift; this arrangement provides flexibility for overtime in periods of increased production.
The discussion which follows groups chemical and some specific physical hazards by production type and addresses injury and ergonomic hazards by job classification.
Foundries
Foundries stand out among auto industry processes with a higher fatality rate, arising from molten metal spills and explosions, cupola maintenance, including bottom drop, and carbon monoxide hazards during relining. Foundries report a higher fraction of foreign body, contusion and burn injuries and a lower fraction of musculoskeletal disorders than other facilities. Foundries also have the highest noise exposure levels (Andjelkovich et al. 1990; Andjelkovich et al. 1995; Koskela 1994; Koskela et al. 1976; Silverstein et al. 1986; Virtamo and Tossavainen 1976).
A recent review of mortality studies including the American auto industry showed that foundry workers experienced increased rates of deaths from lung cancer in 14 of 15 studies (Egan-Baum, Miller and Waxweiller 1981; Mirer et al. 1985). Because high lung cancer rates are found among cleaning room workers where the primary exposure is silica, it is likely that mixed silica-containing dust exposure is a major cause (IARC 1987, 1996), although polynuclear aromatic hydrocarbon exposures are also found. Increased mortality from non-malignant respiratory disease was found in 8 of 11 studies. Silicosis deaths were recorded as well. Clinical studies find x-ray changes characteristic of pneumoconiosis, lung function deficits characteristic of obstruction and increased respiratory symptoms in modern production foundries with the highest levels of controls. These effects arose from exposure conditions which prevailed from the 1960s onward and strongly indicate that health risks persist under current conditions as well.
Asbestos-related changes are found on chest x rays of foundry workers; those affected include production workers as well as maintenance workers with identifiable asbestos exposures.
Machining operations
A recent review of mortality studies among workers in machining operations found apparent exposure-related increases in stomach, oesophageal, rectal, pancreatic and laryngeal cancer in multiple studies (Silverstein et al. 1988; Eisen et al. 1992). Known carcinogenic agents historically present in coolants include polynuclear aromatic compounds, nitrosamines, chlorinated paraffins and formaldehyde. Present formulations contain reduced amounts of these agents, and exposures to coolant particulate are reduced, but cancer risk may still occur at present exposures. Clinical studies have documented occupational asthma, increased respiratory symptoms, cross-shift drops in lung function and, in one instance, Legionnaires’ disease associated with coolant mist exposure (DeCoufle 1978; Vena et al. 1985; Mallin, Berkeley and Young 1986; Park et al. 1988; Delzell et al. 1993). Respiratory effects are more prominent with synthetics and soluble oils, which contain chemical irritants such as petroleum sulphonates, tall oils, ethanolamines, formaldehyde and formaldehyde-donor biocides, as well as bacterial products such as endotoxin. Skin disorders are still common among machining workers, with greater problems reported among those exposed to synthetic fluids.
Pressed metal operations
The characteristic injury hazards in mechanical power presswork are crushing and amputation injuries, especially of the hands, due to trapping in the press, and hand, foot and leg injuries, caused by scrap metal from the press.
Pressed metal facilities have twice the proportion of laceration injuries found in auto industry facilities generally. Such operations also employ a higher proportion of skilled workers than is typical for the industry, especially where die construction is carried out onsite. Die change is an especially hazardous activity.
Mortality studies in the metal-stamping industry are limited. One such study found increased mortality from stomach cancer; another found increased mortality from lung cancer among maintenance welders and millwrights exposed to coal tar pitch volatiles.
Hardware and electroplating
A mortality study of employees at an automotive hardware plant found excess mortality from lung cancer among workers in departments which integrated zinc die-cast and electroplating. Chromic and sulphuric acid mist or die-cast smoke were likely causes.
Vehicle assembly
Injury rates, including those for cumulative trauma disorders (CTDs), are now higher in assembly than in any other process in the auto sector, owing largely to the high rate of musculoskeletal disorders from repetitive work or overexertion; such disorders account for more than 60% of disabling injuries in this sector.
Several mortality studies in assembly plants observed increased deaths from lung cancer. No specific process within the assembly sector has been shown responsible, so this issue remains under investigation.
Testing of prototypes
Fatal vehicle accidents occur in this occupation.
Design work
The design staffs of auto companies have been the subject of health and safety concern. Prototype dies are made by first constructing the pattern of wood, using extremely hard wood, laminates and particleboard. Plastic models are made by fibrous glass lay-up with polyester-polystyrene resins. Metal models are essentially dies constructed by precision machining. Wood, plastic and metal model and pattern makers have been shown to suffer excess incidence and mortality from colon and rectal cancer in repeated studies. A specific agent has not been identified.
Environmental and Public Health Issues
Environmental regulation aimed at stationary sources in the auto industry principally addresses volatile organic compounds from spray painting and other surface coatings. Pressure to reduce solvent content of paints has actually changed the nature of the coatings used. These rules affect supplier and parts plants as well as vehicle assembly. Foundries are regulated for air emissions of particulates and sulphur dioxide, while spent sand is treated as hazardous waste.
Vehicle emissions and vehicle safety are critical public health and safety issues regulated outside the occupational arena.
Several examples of workplace hazards are often quoted not only to exemplify the possible adverse health effects associated with workplace exposures, but also to reveal how a systematic approach to the study of worker populations can uncover important exposure-disease relationships. One such example is that of asbestos. The simple elegance with which the late Dr. Irving J. Selikoff demonstrated the elevated cancer risk among asbestos workers has been documented in an article by Lawrence Garfinkel. It is reprinted here with only slight modification and with the permission of CA-A Cancer Journal for Clinicians (Garfinkel 1984). The tables come from the original article by Dr. Selikoff and co-workers (1964).
Asbestos exposure has become a public health problem of considerable magnitude, with ramifications that extend beyond the immediate field of health professionals to areas served by legislators, judges, lawyers, educators, and other concerned community leaders. As a result, asbestos-related diseases are of increasing concern to clinicians and health authorities, as well as to consumers and the public at large.
Historical Background
Asbestos is a highly useful mineral that has been utilized in diverse ways for many centuries. Archaeological studies in Finland have shown evidence of asbestos fibres incorporated in pottery as far back as 2500 BC. In the 5th century BC, it was used as a wick for lamps. Herodotus commented on the use of asbestos cloth for cremation about 456 BC. Asbestos was used in body armour in the 15th century, and in the manufacture of textiles, gloves, socks and handbags in Russia c. 1720. Although it is uncertain when the art of weaving asbestos was developed, we know that the ancients often wove asbestos with linen. Commercial asbestos production began in Italy about 1850, in the making of paper and cloth.
The development of asbestos mining in Canada and South Africa about 1880 reduced costs and spurred the manufacture of asbestos products. Mining and production of asbestos in the United States, Italy and Russia followed soon after. In the United States, the development of asbestos as pipe insulation increased production and was followed shortly thereafter by other varied uses including brake linings, cement pipes, protective clothing and so forth.
Production in the US increased from about 6,000 tons in 1900 to 650,000 tons in 1975; by 1982 it had fallen to about 300,000 tons, and by 1994 to 33,000 tons.
It is reported that Pliny the Younger (61-113 AD) commented on the sickness of slaves who worked with asbestos. Reference to occupational disease associated with mining appeared in the 16th century, but it was not until 1906 in England that the first reference to pulmonary fibrosis in an asbestos worker appeared. Excess deaths in workers involved with asbestos manufacturing applications were reported shortly thereafter in France and Italy, but major recognition of asbestos-induced disease began in England in 1924. By 1930, Wood and Gloyne had reported on 37 cases of pulmonary fibrosis.
The first reference to carcinoma of the lung in a patient with “asbestos-silicosis” appeared in 1935. Several other case reports followed. Reports of high percentages of lung cancer in patients who died of asbestosis appeared in 1947, 1949 and 1951. In 1955 Richard Doll in England reported an excess risk of lung cancer in persons who had worked in an asbestos plant since 1935, with an especially high risk in those who were employed more than 20 years.
Clinical Observations
It was against this background that Dr. Irving Selikoff’s clinical observations of asbestos-related disease began. Dr. Selikoff was at that time already a distinguished scientist. His prior accomplishments included the development and first use of isoniazid in the treatment of tuberculosis, for which he received a Lasker Award in 1952.
In the early 1960s, as a chest physician practising in Paterson, New Jersey, he had observed many cases of lung cancer among workers in an asbestos factory in the area. He decided to extend his observations to include two locals of the asbestos insulator workers union, whose members also had been exposed to asbestos fibres. He recognized that there were still many people who did not believe that lung cancer was related to asbestos exposure and that only a thorough study of a total exposed population could convince them. There was the possibility that asbestos exposure in the population could be related to other types of cancer, such as pleural and peritoneal mesothelioma, as had been suggested in some studies, and perhaps other sites as well. Most of the studies of the health effects of asbestos in the past had been concerned with workers exposed in the mining and production of asbestos. It was important to know if asbestos inhalation also affected other asbestos-exposed groups.
Dr. Selikoff had heard of the accomplishments of Dr. E. Cuyler Hammond, then Director of the Statistical Research Section of the American Cancer Society (ACS), and decided to ask him to collaborate in the design and analysis of a study. It was Dr. Hammond who had written the landmark prospective study on smoking and health published a few years earlier.
Dr. Hammond immediately saw the potential importance of a study of asbestos workers. Although he was busily engaged in analysing data from the then new ACS prospective study, Cancer Prevention Study I (CPS I), which he had begun a few years earlier, he readily agreed to a collaboration in his “spare time”. He suggested confining the analysis to those workers with at least 20 years’ work experience, who thus would have had the greatest amount of asbestos exposure.
The team was joined by Mrs. Janet Kaffenburgh, a research associate of Dr. Selikoff’s at Mount Sinai Hospital, who worked with Dr. Hammond in preparing the lists of the men in the study, including their ages and dates of employment, and in obtaining data on the fact and cause of death from union headquarters records. This information was subsequently transferred to file cards, which Dr. Hammond and Mrs. Kaffenburgh sorted, literally, on the living room floor of Dr. Hammond’s house.
Dr. Jacob Churg, a pathologist at Barnert Memorial Hospital Center in Paterson, New Jersey, provided pathologic verification of the cause of death.
Table 1. Man-years of experience of 632 asbestos workers exposed to asbestos dust 20 years or longer

Age        1943-47    1948-52    1953-57    1958-62
35–39         85.0      185.0        7.0       11.0
40–44        230.5      486.5      291.5       70.0
45–49        339.5      324.0      530.0      314.5
50–54        391.5      364.0      308.0      502.5
55–59        382.0      390.0      316.0      268.5
60–64        221.0      341.5      344.0      255.0
65–69        139.0      181.0      286.0      280.0
70–74         83.0      115.5      137.0      197.5
75–79         31.5       70.0       70.5       75.0
80–84          5.5       18.5       38.5       23.5
85+            3.5        2.0        8.0       13.5
Total      1,912.0    2,478.0    2,336.5    2,011.0
The resulting study was of the type classified as a “prospective study retrospectively carried out”. The nature of the union records made it possible to complete the analysis of a long-range study in a relatively short period of time. Although only 632 men were involved in the study, there were 8,737 man-years of exposure to risk (see table 1), and 255 deaths occurred during the 20-year period of observation from 1943 through 1962 (see table 2). In table 2 the observed numbers of cancer deaths can be seen invariably to exceed the expected numbers, demonstrating the association between workplace asbestos exposure and an elevated cancer death rate.
Table 2. Observed and expected number of deaths among 632 asbestos workers exposed to asbestos dust 20 years or longer

Cause of death                         1943-47   1948-52   1953-57   1958-62   Total 1943-62

Total, all causes
  Observed (asbestos workers)             28.0      54.0      85.0      88.0      255.0
  Expected (US White males)               39.7      50.8      56.6      54.4      203.5
Total cancer, all sites
  Observed (asbestos workers)             13.0      17.0      26.0      39.0       95.0
  Expected (US White males)                5.7       8.1      13.0       9.7       36.5
Cancer of lung and pleura
  Observed (asbestos workers)              6.0       8.0      13.0      18.0       45.0
  Expected (US White males)                0.8       1.4       2.0       2.4        6.6
Cancer of stomach, colon and rectum
  Observed (asbestos workers)              4.0       4.0       7.0      14.0       29.0
  Expected (US White males)                2.0       2.5       2.6       2.3        9.4
Cancer of all other sites combined
  Observed (asbestos workers)              3.0       5.0       6.0       7.0       21.0
  Expected (US White males)                2.9       4.2       8.4       5.0       20.5
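The size of the excess in table 2 can be summarized as the ratio of observed to expected deaths, the quantity now conventionally called the standardized mortality ratio (SMR); the term and the worked figures below are supplied here for illustration and do not appear in the original paper. The expected count for each cause is obtained by applying the age- and period-specific death rates of US White males to the man-years of table 1:

\[ E = \sum_{a,t} P_{a,t}\, r_{a,t}, \qquad \mathrm{SMR} = \frac{O}{E}, \]

where \(P_{a,t}\) denotes the man-years in age group \(a\) and calendar period \(t\), and \(r_{a,t}\) the corresponding reference death rate. For cancer of the lung and pleura, \(O = 45.0\) and \(E = 6.6\), giving an SMR of about \(45.0 / 6.6 \approx 6.8\), a nearly seven-fold excess.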
Significance of the Work
This paper constituted a turning point in our knowledge of asbestos-related disease and set the direction of future research. The article has been cited in scientific publications at least 261 times since it was originally published. With financial support from the ACS and the National Institutes of Health, Dr. Selikoff and Dr. Hammond and their growing team of mineralogists, chest physicians, radiologists, pathologists, hygienists and epidemiologists continued to explore various facets of asbestos disease.
A major paper in 1968 reported the synergistic effect of cigarette smoking and asbestos exposure (Selikoff, Hammond and Churg 1968). The studies were expanded to include asbestos production workers, persons indirectly exposed to asbestos in their work (shipyard workers, for example) and those with family exposure to asbestos.
In a later analysis, in which the team was joined by Herbert Seidman, MBA, Assistant Vice President for Epidemiology and Statistics of the American Cancer Society, the group demonstrated that even short-term exposure to asbestos resulted in a significant increased risk of cancer up to 30 years later (Seidman, Selikoff and Hammond 1979). There were only three cases of mesothelioma in this first study of 632 insulators, but later investigations showed that 8% of all deaths among asbestos workers were due to pleural and peritoneal mesothelioma.
As Dr. Selikoff’s scientific investigations expanded, he and his co-workers made noteworthy contributions toward reducing exposure to asbestos: through innovations in industrial hygiene techniques, by persuading legislators of the urgency of the asbestos problem, by evaluating the problems of disability payments in connection with asbestos disease, and by investigating the general distribution of asbestos particles in water supplies and in the ambient air.
Dr. Selikoff also called the medical and scientific community’s attention to the asbestos problem by organizing conferences on the subject and participating in many scientific meetings. Many of his orientation meetings on the problem of asbestos disease were structured particularly for lawyers, judges, presidents of large corporations and insurance executives.
Role of Questionnaires in Epidemiological Research
Epidemiological research is generally carried out to answer a specific research question relating the exposures of individuals to hazardous substances or situations to subsequent health outcomes, such as cancer or death. At the heart of nearly every such investigation is a questionnaire, which constitutes the basic data-gathering tool. Even when physical measurements are to be made in a workplace environment, and especially when biological materials such as serum are to be collected from exposed or unexposed study subjects, a questionnaire is essential for developing an adequate exposure picture by systematically collecting personal and other characteristics in an organized and uniform way.
The questionnaire serves a number of critical research functions.
Place of questionnaire design within overall study goals
While the questionnaire is often the most visible part of an epidemiological study, particularly to the workers or other study participants, it is only a tool, and indeed is often called an “instrument” by researchers. Figure 1 depicts in a very general way the stages of survey design from conception through data collection and analysis. The figure shows four levels or tiers of study operation which proceed in parallel throughout the life of the study: sampling, questionnaire, operations and analysis. It also shows how the stages of questionnaire development relate to the overall study plan: an initial outline leads to a first draft of both the questionnaire and its associated codes, followed by pretesting within a selected subpopulation, one or more revisions dictated by pretest experience, and preparation of the final document for actual data collection in the field. What is most important is the context: each stage of questionnaire development is carried out in conjunction with a corresponding stage of creation and refinement of the overall sampling plan, as well as of the operational design for administration of the questionnaire.
Figure 1. The stages of a survey
Types of studies and questionnaires
The research goals of the study determine the structure, length and content of the questionnaire. These attributes are invariably tempered by the method of data collection, which usually falls within one of three modes: in person, by mail or by telephone. Each has advantages and disadvantages which can affect not only the quality of the data but the validity of the overall study.
A mailed questionnaire is the least expensive format and can cover workers in a wide geographical area. However, because overall response rates are often low (typically 45 to 75%), it cannot be overly complex, since there is little or no opportunity to clarify questions, and it may be difficult to ascertain whether responses to critical exposure or other questions differ systematically between respondents and non-respondents. The physical layout and language must accommodate the least educated of potential study participants, and the questionnaire must be capable of completion in a fairly short time, typically 20 to 30 minutes.
Telephone questionnaires can be used in population-based studies—that is, surveys in which a sample of a geographically defined population is canvassed—and are a practical method to update information in existing data files. They may be longer and more complex than mailed questionnaires in language and content, and since they are administered by trained interviewers the greater cost of a telephone survey can be partially offset by physically structuring the questionnaire for efficient administration (such as through skip patterns). Response rates are usually better than with mailed questionnaires, but are subject to biases related to increasing use of telephone answering machines, refusals, non-contacts and problems of populations with limited telephone service. Such biases generally relate to the sampling design itself and not especially to the questionnaire. Although telephone questionnaires have long been in use in North America, their feasibility in other parts of the world has yet to be established.
Face-to-face interviews provide the greatest opportunity for collecting accurate complex data; they are also the most expensive to administer, since they require both training and travel for professional staff. The physical layout and order of questions may be arranged to optimize administration time. Studies which utilize in-person interviewing generally have the highest response rates and are subject to the least response bias. This is also the type of interview in which the interviewer is most likely to learn whether or not the participant is a case (in a case-control study) or the participant’s exposure status (in a cohort study). Care must therefore be taken to preserve the objectivity of the interviewer by training him or her to avoid leading questions and body language that might evoke biased responses.
It is becoming more common to use a hybrid study design in which complex exposure situations are assessed in a personal or telephone interview which allows maximum probing and clarification, followed by a mailed questionnaire to capture lifestyle data like smoking and diet.
Confidentiality and research participant issues
Since the purpose of a questionnaire is to obtain data about individuals, questionnaire design must be guided by established standards for ethical treatment of human subjects. These guidelines apply to acquisition of questionnaire data just as they do for biological samples such as blood and urine, or to genetic testing. In the United States and many other countries, no studies involving humans may be conducted with public funds unless approval of questionnaire language and content is first obtained from an appropriate Institutional Review Board. Such approval is intended to assure that questions are confined to legitimate study purposes, and that they do not violate the rights of study participants to answer questions voluntarily. Participants must be assured that their participation in the study is entirely voluntary, and that refusal to answer questions or even to participate at all will not subject them to any penalties or alter their relationship with their employer or medical practitioner.
Participants must also be assured that the information they provide will be held in strict confidence by the investigator, who must of course take steps to maintain the physical security and inviolability of the data. This often entails physical separation of information regarding the identity of participants from computerized data files. It is common practice to advise study participants that their replies to questionnaire items will be used only in aggregation with responses of other participants in statistical reports, and will not be disclosed to the employer, physician or other parties.
Measurement aspects of questionnaire design
One of the most important functions of a questionnaire is to obtain data about some aspect or attribute of a person in either qualitative or quantitative form. Some items may be as simple as weight, height or age, while others may be considerably more complicated, as with an individual’s response to stress. Qualitative responses, such as gender, will ordinarily be converted into numerical variables. All such measures may be characterized by their validity and their reliability. Validity is the degree to which a questionnaire-derived number approaches its true, but possibly unknown, value. Reliability measures the likelihood that a given measurement will yield the same result on repetition, whether that result is close to the “truth” or not. Figure 2 shows how these concepts are related. It demonstrates that a measurement can be valid but not reliable, reliable but not valid, or both valid and reliable.
Figure 2. Validity & reliability relationship
Over the years, many questionnaires have been developed by researchers in order to answer research questions of wide interest. Examples include the Scholastic Aptitude Test, which measures a student’s potential for future academic achievement, and the Minnesota Multiphasic Personality Inventory (MMPI), which measures certain psychosocial characteristics. A variety of other psychological indicators are discussed in the chapter on psychometrics. There are also established physiological scales, such as the British Medical Research Council (BMRC) questionnaire for pulmonary function. These instruments have a number of important advantages. Chief among these are the facts that they have already been developed and tested, usually in many populations, and that their reliability and validity are known. Anyone constructing a questionnaire is well advised to utilize such scales if they fit the study purpose. Not only do they save the effort of “re-inventing the wheel”, but they make it more likely that study results will be accepted as valid by the research community. It also makes for more valid comparisons of results from different studies provided they have been properly used.
The preceding scales are examples of two important types of measures which are commonly used in questionnaires to quantify concepts that may not be fully objectively measurable in the way that height and weight are, or which require many similar questions to fully “tap the domain” of one specific behavioural pattern. More generally, indexes and scales are two data reduction techniques that provide a numerical summary of groups of questions. The above examples illustrate physiological and psychological indexes, and they are also frequently used to measure knowledge, attitude and behaviour. Briefly, an index is usually constructed as a score obtained by counting, among a group of related questions, the number of items that apply to a study participant. For instance, if a questionnaire presents a list of diseases, a disease history index could be the total number of those which a respondent says he or she has had. A scale is a composite measure based on the intensity with which a participant answers one or more related questions. For example, the Likert scale, which is frequently used in social research, is typically constructed from statements with which one may agree strongly, agree weakly, offer no opinion, disagree weakly, or disagree strongly, the response being scored as a number from 1 to 5. Scales and indexes may be summed or otherwise combined to form a fairly complex picture of study participants’ physical, psychological, social or behavioural characteristics.
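To make the distinction concrete, the following minimal Python sketch contrasts the two constructions; the item names and responses are hypothetical, invented for illustration rather than drawn from any instrument mentioned here.

# Index: count how many listed diseases the respondent reports having had.
diseases_reported = {"bronchitis": True, "asthma": False, "dermatitis": True}
disease_index = sum(diseases_reported.values())  # True counts as 1 -> index of 2

# Likert scale: each statement is scored 1 (disagree strongly) through
# 5 (agree strongly); the scale value sums the intensities over related items.
likert_responses = [4, 5, 3]
likert_score = sum(likert_responses)  # -> 12

print(disease_index, likert_score)

Either summary may then be analysed as a numerical variable in its own right.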
Validity merits special consideration because of its reflection of the “truth”. Three important types of validity often discussed are face, content and criterion validity. Face validity is a subjective quality of an indicator which ensures that the wording of a question is clear and unambiguous. Content validity ensures that the questions will serve to tap the dimension of response in which the researcher is interested. Criterion (or predictive) validity is derived from an objective assessment of how closely a questionnaire measurement approaches a separately measurable quantity, for instance how well a questionnaire assessment of dietary vitamin A intake matches actual consumption of vitamin A as documented with dietary records.
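As a sketch of how criterion validity might be quantified, the following Python fragment correlates questionnaire-reported vitamin A intake with intake computed from dietary records; the figures are invented for illustration, and a correlation near 1 would support the questionnaire’s criterion validity.

# Paired measurements for four participants (hypothetical units).
questionnaire = [900, 1200, 750, 1600]  # intake reported on the questionnaire
records = [850, 1300, 700, 1500]        # intake derived from dietary records

n = len(questionnaire)
mean_q = sum(questionnaire) / n
mean_r = sum(records) / n
cov = sum((q - mean_q) * (r - mean_r) for q, r in zip(questionnaire, records))
var_q = sum((q - mean_q) ** 2 for q in questionnaire)
var_r = sum((r - mean_r) ** 2 for r in records)
pearson_r = cov / (var_q * var_r) ** 0.5
print(round(pearson_r, 2))  # -> 0.97 for these invented data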
Questionnaire content, quality and length
Wording. The wording of questions is both an art and a professional skill; only the most general of guidelines can be offered here, and they are developed in the paragraphs that follow.
Question sequence and structure. Both the order and presentation of questions can affect the quality of information gathered. A typical questionnaire, whether self-administered or read by an interviewer, contains a prologue which introduces the study and its topic to the respondent, provides any additional information he or she will need, and tries to motivate the respondent to answer the questions. Most questionnaires contain a section designed to collect demographic information, such as age, gender, ethnic background and other variables about the participant’s background, including possibly confounding variables. The main subject matter of data collection, such as nature of the workplace and exposure to specific substances, is usually a distinct questionnaire section, and is often preceded by an introductory prologue of its own which might first remind the participant of specific aspects of the job or workplace in order to create a context for detailed questions. The layout of questions that are intended to establish worklife chronologies should be arranged so as to minimize the risk of chronological omissions. Finally, it is customary to thank the respondent for his or her participation.
Types of questions. The designer must decide whether to use open-ended questions in which participants compose their own answers, or closed questions that require a definite response or a choice from a short menu of possible responses. Closed questions have the advantage that they clarify alternatives for the respondent, avoid snap responses, and minimize lengthy rambling that may be impossible to interpret. However, they require that the designer anticipate the range of potential responses in order to avoid losing information, particularly for unexpected situations that occur in many workplaces. This in turn requires well planned pilot testing. The investigator must decide whether and to what extent to permit a “don’t know” response category.
Length. Determining the final length of a questionnaire requires striking a balance between the desire to obtain as much detailed information as possible and the fact that, if a questionnaire is too lengthy, many respondents will at some point lose interest and either stop responding or respond hastily, inaccurately and without thought in order to bring the session to an end. On the other hand, a very short questionnaire may obtain a high response rate but fail to achieve the study goals. Since respondent motivation often depends on having a personal stake in the outcome, such as improving working conditions, tolerance for a lengthy questionnaire may vary widely, especially when some participants (such as workers in a particular plant) perceive their stake to be higher than others (such as persons contacted via random telephone dialling). This balance can be achieved only through pilot testing and experience. Interviewer-administered questionnaires should record the beginning and ending time to permit calculation of the duration of the interview; such information is useful in assessing the quality of the data.
Language. It is essential to use the language of the population to make the questions understood by all. This may require becoming familiar with local vernacular that may vary within any one country. Even in countries where the same language is nominally spoken, such as Britain and the United States, or the Spanish-speaking countries of Latin America, local idioms and usage may vary in a way that can obscure interpretation. For example, in the US “tea” is merely a beverage, whereas in Britain it may mean “a pot of tea,” “high tea,” or “the main evening meal,” depending on locale and context. It is especially important to avoid scientific jargon, except where study participants can be expected to possess specific technical knowledge.
Clarity and leading questions. While it is often the case that shorter questions are clearer, there are exceptions, especially where a complex subject needs to be introduced. Nevertheless, short questions clarify thinking and reduce unnecessary words. They also reduce the chance of overloading the respondent with too much information to digest. If the purpose of the study is to obtain objective information about the participant’s working situation, it is important to word questions in a neutral way and to avoid “leading” questions that may favour a particular answer, such as “Do you agree that your workplace conditions are harmful to your health?”
Questionnaire layout. The physical layout of a questionnaire can affect the cost and efficiency of a study. It is more important for self-administered questionnaires than those which are conducted by interviewers. A questionnaire which is designed to be completed by the respondent but which is overly complex or difficult to read may be filled out casually or even discarded. Even questionnaires which are designed to be read aloud by trained interviewers need to be printed in clear, readable type, and patterns of question skipping must be indicated in a manner which maintains a steady flow of questioning and minimizes page turning and searching for the next applicable question.
Validity Concerns
Bias
The enemy of objective data gathering is bias, which results from systematic but unplanned differences between groups of people: cases and controls in a case-control study or exposed and non-exposed in a cohort study. Information bias may be introduced when two groups of participants understand or respond differently to the same question. This may occur, for instance, if questions are posed in such a way as to require special technical knowledge of a workplace or its exposures that would be understood by exposed workers but not necessarily by the general public from which controls are drawn.
The use of surrogates for ill or deceased workers has the potential for bias because next-of-kin are likely to recall information in different ways and with less accuracy than the worker himself or herself. The introduction of such bias is especially likely in studies in which some interviews are carried out directly with study participants while other interviews are carried out with relatives or co-workers of other research participants. In either situation, care must be taken to reduce any effect that might arise from the interviewer’s knowledge of the disease or exposure status of the worker of interest. Since it is not always possible to keep interviewers “blind,” it is important to emphasize objectivity and avoidance of leading or suggestive questions or unconscious body language during training, and to monitor performance while the study is being carried out.
Recall bias results when cases and controls “remember” exposures or work situations differently. Hospitalized cases with a potential occupationally related illness may be more capable of recalling details of their medical history or occupational exposures than persons contacted randomly on the telephone. A type of this bias that is becoming more common has been labelled social desirability bias. It describes the tendency of many people to understate, whether consciously or not, their indulgence in “bad habits” such as cigarette smoking or consumption of foods high in fat and cholesterol, and to overstate “good habits” like exercise.
Response bias denotes a situation in which one group of study participants, such as workers with a particular occupational exposure, may be more likely to complete questionnaires or otherwise participate in a study than unexposed persons. Such a situation may result in a biased estimation of the association between exposure and disease. Response bias may be suspected if response rates or the time taken to complete a questionnaire or interview differ substantially between groups (e.g., cases vs. controls, exposed vs. unexposed). Response bias generally differs depending upon the mode of questionnaire administration. Questionnaires which are mailed are usually more likely to be returned by individuals who see a personal stake in study findings, and are more likely to be ignored or discarded by persons selected at random from the general population. Many investigators who utilize mail surveys also build in a follow-up mechanism which may include second and third mailings as well as subsequent telephone contacts with non-respondents in order to maximize response rates.
Studies which utilize telephone surveys, including those which make use of random digit dialling to identify controls, usually have a set of rules or a protocol defining how many times attempts to contact potential respondents must be made, including time of day, and whether evening or weekend calls should be attempted. Those who conduct hospital-based studies usually record the number of patients who refuse to participate, and reasons for non-participation. In all such cases, various measures of response rates are recorded in order to provide an assessment of the extent to which the target population has actually been reached.
Selection bias results when one group of participants preferentially responds or otherwise participates in a study, and can result in biased estimation of the relationship between exposure and disease. In order to assess selection bias and whether it leads to under- or over-estimation of exposure, demographic information such as educational level can be used to compare respondents with non-respondents. For example, if participants with little education have lower response rates than participants with higher education, and if a particular occupation or smoking habit is known to be more frequent in less educated groups, then selection bias with underestimation of exposure for that occupation or smoking category is likely to have occurred.
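A simple check of the kind just described can be sketched in a few lines of Python; the strata and counts are invented for illustration.

# Hypothetical sample sizes and respondent counts by education stratum.
sampled = {"less than high school": 400, "high school": 700, "college": 500}
responded = {"less than high school": 180, "high school": 420, "college": 400}

for stratum, n in sampled.items():
    print(f"{stratum}: {responded[stratum] / n:.0%} response rate")
# -> 45%, 60% and 80% respectively.

Markedly lower response among less educated participants, combined with a higher prevalence of the exposure of interest in that group, would point to selection bias with underestimation of exposure.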
Confounding, which is closely related to selection bias, results when the selection of respondents (cases and controls in a case-control study, or exposed and unexposed in a cohort study) depends in some way upon a third variable associated with both exposure and disease, sometimes in a manner unknown to the investigator. If not identified and controlled, it can lead unpredictably to underestimates or overestimates of the disease risks associated with occupational exposures. Confounding is usually dealt with either by manipulating the design of the study itself (e.g., by matching cases to controls on age and other variables) or at the analysis stage. Details of these techniques are presented in other articles within this chapter.
Documentation
In any research study, all study procedures must be thoroughly documented so that all staff, including interviewers, supervisory personnel and researchers, are clear about their respective duties. In most questionnaire-based studies, a coding manual is prepared which describes on a question-by-question basis everything the interviewer needs to know beyond the literal wording of the questions. This includes instructions for coding categorical responses and may contain explicit instructions on probing, listing those questions for which it is permitted and those for which it is not. In many studies new, unforeseen response choices for certain questions are occasionally encountered in the field; these must be recorded in the master codebook and copies of additions, changes or new instructions distributed to all interviewers in a timely fashion.
Planning, testing and revision
As can be seen from figure 1, questionnaire development requires a great deal of thoughtful planning. Every questionnaire needs to be tested at several stages in order to make certain that the questions “work”, i.e., that they are understandable and produce responses of the intended quality. It is useful to test new questions on volunteers and then to interrogate them at length to determine how well specific questions were understood and what types of problems or ambiguities were encountered. The results can then be utilized to revise the questionnaire, and the procedure can be repeated if necessary. The volunteers are sometimes referred to as a “focus group”.
All epidemiological studies require pilot testing, not only for the questionnaires, but for the study procedures as well. A well designed questionnaire serves its purpose only if it can be delivered efficiently to the study participants, and this can be determined only by testing procedures in the field and making adjustments when necessary.
Interviewer training and supervision
In studies which are conducted by telephone or face-to-face interview, the interviewer plays a critical role. This person is responsible not simply for presenting questions to the study participants and recording their responses, but also for interpreting those responses. Even with the most rigidly structured interview study, respondents occasionally request clarification of questions or offer responses which do not fit the available response categories. In such cases the interviewer’s job is to interpret either the question or the response in a manner consistent with the intent of the researcher. Doing so effectively and consistently requires training and supervision by an experienced researcher or manager. When more than one interviewer is employed on a study, interviewer training is especially important to ensure that questions are presented and responses interpreted in a uniform manner. In many research projects this is accomplished in group training settings and is repeated periodically (e.g., annually) to keep the interviewers’ skills fresh. Training seminars commonly cover these matters in considerable detail.
Study supervision often entails onsite observation, which may include tape-recording of interviews for subsequent review. It is common practice for the supervisor to personally review every questionnaire before approving it and submitting it to data entry. The supervisor also sets and enforces performance standards for interviewers and, in some studies, conducts independent re-interviews with selected participants as a reliability check.
Data collection
The actual distribution of questionnaires to study participants and subsequent collection for analysis is carried out using one of the three modes described above: by mail, telephone or in person. Some researchers organize and even perform this function themselves within their own institutions. While there is considerable merit to a senior investigator becoming familiar with the dynamics of the interview at first hand, it is most cost effective and conducive to maintaining high data quality for trained and well-supervised professional interviewers to be included as part of the research team.
Some researchers make contractual arrangements with companies that specialize in survey research. Contractors can provide a range of services which may include one or more of the following tasks: distributing and collecting questionnaires, carrying out telephone or face-to-face interviews, obtaining biological specimens such as blood or urine, data management, and statistical analysis and report writing. Irrespective of the level of support, contractors are usually responsible for providing information about response rates and data quality. Nevertheless, it is the researcher who bears final responsibility for the scientific integrity of the study.
Reliability and re-interviews
Data quality may be assessed by re-interviewing a sample of the original study participants. This provides a means for determining the reliability of the initial interviews, and an estimate of the repeatability of responses. The entire questionnaire need not be re-administered; a subset of questions usually is sufficient. Statistical tests are available for assessing the reliability of a set of questions asked of the same participant at different times, as well as for assessing the reliability of responses provided by different participants and even for those queried by different interviewers (i.e., inter- and intra-rater assessments).
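One common statistic for assessing test-retest agreement on a categorical item is Cohen’s kappa, which corrects the raw proportion of agreement between the two interviews for the agreement expected by chance. A minimal Python sketch with invented yes/no responses:

# Responses of eight participants to the same question at interview and re-interview.
first = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
second = ["yes", "no", "no", "no", "yes", "no", "yes", "yes"]

n = len(first)
p_o = sum(a == b for a, b in zip(first, second)) / n  # observed agreement: 0.75
categories = set(first) | set(second)
p_e = sum((first.count(c) / n) * (second.count(c) / n) for c in categories)  # chance: 0.5
kappa = (p_o - p_e) / (1 - p_e)
print(round(kappa, 2))  # -> 0.5, i.e., moderate reliability for this item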
Technology of questionnaire processing
Advances in computer technology have created many different ways in which questionnaire data can be captured and made available to the researcher for computer analysis. There are three fundamentally different ways in which data can be computerized: in real time (i.e., as the participant responds during an interview), by traditional key entry methods, and by optical data capture methods.
Computer-aided data capture
Many researchers now use computers to collect responses to questions posed in both face-to-face and telephone interviews. Researchers in the field find it convenient to use laptop computers programmed to display the questions sequentially and to permit the interviewer to enter each response immediately. Survey research companies which do telephone interviewing have developed analogous systems, called computer-assisted telephone interviewing (CATI) systems. These methods have two important advantages over more traditional paper questionnaires. First, responses can be instantly checked against a range of permissible answers and for consistency with previous responses, and discrepancies can be brought immediately to the attention of both the interviewer and the respondent, greatly reducing the error rate. Secondly, skip patterns can be programmed to minimize administration time.
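The following Python sketch illustrates these ideas with a range check, a consistency check against an earlier answer and a programmed skip pattern; the field names, limits and question labels are hypothetical, not those of any actual CATI product.

def validate(record):
    errors = []
    if not 0 <= record["cigarettes_per_day"] <= 100:  # range check
        errors.append("cigarettes_per_day out of permissible range")
    if record["smoker"] == "never" and record["cigarettes_per_day"] > 0:
        errors.append("inconsistent with reported smoking status")  # consistency check
    return errors

def next_question(record):
    # Skip pattern: never-smokers jump past the smoking-detail section.
    return "Q20_occupation" if record["smoker"] == "never" else "Q10_years_smoked"

record = {"smoker": "never", "cigarettes_per_day": 15}
print(validate(record))       # discrepancy flagged while the respondent is still on the line
print(next_question(record))  # -> Q20_occupation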
The most common method for computerizing data still is the traditional key entry by a trained operator. For very large studies, questionnaires are usually sent to a professional contract company which specializes in data capture. These firms often utilize specialized equipment which permits one operator to key a questionnaire (a procedure sometimes called keypunch for historical reasons) and a second operator to re-key the same data, a process called key verification. Results of the second keying are compared with the first to assure the data have been entered correctly. Quality assurance procedures can be programmed which ensure that each response falls within an allowable range, and that it is consistent with other responses. The resulting data files can be transmitted to the researcher on disk, tape or electronically by telephone or other computer network.
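Key verification amounts to a field-by-field comparison of the two keying passes, as in this minimal Python sketch (the field names and values are invented):

first_pass = {"q1": "2", "q2": "35", "q3": "yes"}
second_pass = {"q1": "2", "q2": "53", "q3": "yes"}  # q2 mis-keyed on one pass

# Flag every field on which the two operators disagree.
mismatches = [field for field in first_pass if first_pass[field] != second_pass[field]]
print(mismatches)  # -> ['q2']; the source questionnaire is re-checked for this field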
For smaller studies, there are numerous commercial PC-based programs which have data entry features which emulate those of more specialized systems. These include database programs such as dBase, Foxpro and Microsoft Access, as well as spreadsheets such as Microsoft Excel and Lotus 1-2-3. In addition, data entry features are included with many computer program packages whose principal purpose is statistical data analysis, such as SPSS, BMDP and EPI INFO.
One widespread method of data capture which works well for certain specialized questionnaires uses optical systems. Optical mark reading or optical sensing is used to read responses on questionnaires that are specially designed for participants to enter data by marking small rectangles or circles (sometimes called “bubble codes”). These work most efficiently when each individual completes his or her own questionnaire. More sophisticated and expensive equipment can read hand-printed characters, but at present this is not an efficient technique for capturing data in large-scale studies.
Archiving Questionnaires and Coding Manuals
Because information is a valuable resource and is subject to interpretation and other influences, researchers sometimes are asked to share their data with other researchers. The request to share data can be motivated by a variety of reasons, which may range from a sincere interest in replicating a report to concern that data may not have been analysed or interpreted correctly.
Where falsification or fabrication of data is suspected or alleged, it becomes essential that the original records upon which reported findings are based be available for audit purposes. In addition to the original questionnaires and/or computer files of raw data, the researcher must be able to provide for review the coding manual(s) developed for the study and the log(s) of all data changes which were made in the course of data coding, computerization and analysis. For example, if a data value had been altered because it had initially appeared as an outlier, then a record of the change and the reasons for making the change should have been recorded in the log for possible data audit purposes. Such information also is of value at the time of report preparation because it serves as a reminder about how the data which gave rise to the reported findings had actually been handled.
For these reasons, upon completion of a study, the researcher has an obligation to ensure that all basic data are appropriately archived for a reasonable period of time, and that they could be retrieved if the researcher were called upon to provide them.
The function of the epithelium of the epidermis is to form the surface or horny layer of the skin, of which the major component is the fibrous protein, keratin. In certain areas the epithelium is specially developed to produce a particular type of keratin structure. One of these is hair, and another is nail. The nail plate is formed partly by the epithelium of the matrix and partly by that of the nail bed. The nail grows in the same way as the hair and the horny layer and is affected by similar pathogenic mechanisms to those responsible for diseases of the hair and epidermis. Some elements such as arsenic and mercury accumulate in the nail as in the hair.
Figure 1 shows that the nail matrix is an invagination of the epithelium and it is covered by the nail fold at its base. A thin film of horny layer called the cuticle serves to seal the paronychial space by stretching from the nail fold to the nail plate.
Figure 1. The structure of the nail.
The most vulnerable parts of the nail are the nail fold and the area beneath the tip of the nail plate, although the nail plate itself may suffer direct physical or chemical traumata. Chemical substances or infective agents may penetrate under the nail plate at its free margin. Moisture and alkali may destroy the cuticle and allow the entry of bacteria and fungi which will cause inflammation of the paronychial tissue and produce secondary growth disturbance of the nail plate.
The most frequent causes of nail disease are chronic paronychia, ringworm, trauma, psoriasis, impaired circulation and eczema or other dermatitis. Paronychia is an inflammation of the nail fold. Acute paronychia is a painful suppurative condition requiring antibiotic and sometimes surgical treatment. Chronic paronychia follows loss of the cuticle, which allows water, bacteria and Candida albicans to penetrate into the paronychial space. It is common among persons with intense exposure to water, alkaline substances and detergents, such as kitchen staff, cleaners, fruit and vegetable preparers and canners, and housewives. Full recovery cannot be achieved until the integrity of the cuticle and eponychium sealing the paronychial space has been restored.
Exposure to cement, lime and organic solvents, and work such as that of a butcher or poulterer may also cause trauma of the cuticle and nail folds.
Any inflammation or disease of the nail matrix may result in dystrophy (distortion) of the nail plate, which is usually the sign that brings the condition to medical attention. Exposure to chilling cold, or the arterial spasm of Raynaud’s phenomenon, can also damage the matrix and produce nail dystrophy. Sometimes the damage is temporary, and the nail dystrophy will disappear after removal of the cause and treatment of the inflammatory condition. (An example is shown in figure 2.)
Figure 2. Onychodystrophy secondary to contact dermatitis resulting from chronic irritation.
One cause of nail damage is the direct application to the nail of certain cosmetic preparations, such as base coats under nail polish, nail hardeners and synthetic nail dressings.
Some special occupations may cause nail damage. Dystrophy has been reported from handling the concentrated dipyridylium pesticides paraquat and diquat. During the manufacture of selenium dioxide, a fine powder of this substance may get under the free edge of the nail plate and cause intense irritation and necrosis of the fingertip, with damage to the nail plate. Workers should be warned of this hazard and advised to clean the subungual areas of their fingers daily.
Certain types of allergic contact dermatitis of the finger tips frequently result in secondary nail dystrophy. Six common sensitizers which will do this are:
The diagnosis can be confirmed by a positive patch test. The condition of the skin and nails will recover when contact ceases.
Protective measures
In many cases nails can be safeguarded by the use of suitable hand protection. However, where hand exposure exists, nails should receive adequate care, consisting essentially of preserving the cuticle and protecting the subungual area. The skin under the free margin of the nails should be cleaned daily in order to remove foreign debris or chemical irritants. Where barrier creams or lotions are employed, care should be taken to ensure that the cuticle and the area under the free margin are coated.
To preserve the intact cuticle it is necessary to avoid excessive manicure or trauma, maceration by prolonged exposure to water, and dissolution by repeated exposure to alkali, solvent and detergent solutions.
The goal of occupational health programmes is to enable workers to maintain their jobs and their health over the course of several years. The development of effective programmes requires the identification of sectoral, population-based and workplace-specific risk factors. This information can then be used to develop prevention policies for both groups and individuals.
The Québec Occupational Health and Safety Commission (Commission de la santé et de la sécurité du travail du Québec) has characterized work activities in 30 industrial, commercial and service sectors (Commission de la santé et de la sécurité du travail 1993). Its surveys reveal that occupational dermatoses are most prevalent in the food and beverage industries, medical and social services, miscellaneous commercial and personal services, and construction (including public works). Affected workers are typically engaged in service, manufacturing, assembly, repair, materials handling, food-processing or health-care activities.
Occupational dermatoses are particularly prevalent in two age groups: young and inexperienced workers, who may be unaware of the sometimes insidious risks associated with their work, and workers approaching retirement age, who may not have noticed the progressive drying of their skin over the years, a process that worsens over several consecutive workdays. Because of this dehydration, repeated exposure to previously well-tolerated irritant or astringent substances may cause irritant dermatitis in these workers.
As table 1 indicates, even though most cases of occupational dermatoses do not involve compensation exceeding two weeks, a significant number of cases may persist for over two months (Durocher and Paquette 1985). This table clearly illustrates the importance of preventing chronic dermatoses requiring prolonged work absences.
Table 1. Occupational dermatoses in Quebec in 1989: Distribution by length of compensation

Length of compensation (days) | 0  | 1–14 | 15–56 | 57–182 | >183
Number of cases (total: 735)  | 10 | 370  | 195   | 80     | 80
Source: Commission de la santé et de la sécurité du travail, 1993.
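The table’s figures put the burden of prolonged cases in perspective: a quick computation shows that roughly one case in five was compensated for more than two months.

```python
# Proportions computed directly from table 1 (735 cases in total).
cases = {"0": 10, "1-14": 370, "15-56": 195, "57-182": 80, ">183": 80}
total = sum(cases.values())                  # 735
prolonged = cases["57-182"] + cases[">183"]  # 160 cases beyond two months
print(f"{prolonged / total:.1%}")            # 21.8%
```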
Risk Factors
Many substances used in industry are capable of causing dermatoses, the risk of which depends on the concentration of the substance and on the frequency and duration of skin contact. The general classification scheme presented in table 2, which groups risk factors as mechanical, physical, chemical or biological, is a useful tool for identifying risk factors during site visits. During workplace evaluation, the presence of risk factors may be observed directly or suspected on the basis of observed skin lesions. In some cases, effects specific to a given risk factor may be present, while in others the skin disorders are associated with several factors in a given category; disorders of the latter type are known as group effects. The specific cutaneous effects of physical and biological factors are listed in table 2 and described in other sections of this chapter.
Table 2. Risk factors and their effects on the skin

Mechanical factors: trauma; friction; pressure; dusts
Group effects: cuts, punctures, blisters; abrasions, isomorphism; lichenification; calluses

Physical factors (with their specific effects):
Radiation: photodermatitis, radiodermatitis, cancer
Humidity: maceration, irritation
Heat: heat rash, burns, erythema
Cold: frostbite, xeroderma, urticaria, panniculitis, Raynaud’s phenomenon

Chemical factors: acids, bases; detergents, solvents; metals, resins; cutting oils; dyes, tar; rubber, etc.
Group effects: dehydration, inflammation, necrosis, allergy, photodermatitis, dyschromia

Biological factors (with their specific effects):
Bacteria: pyodermatitis
Viruses: multiple warts
Dermatophytes: dermatomycosis
Parasites: parasitosis
Plants: phytodermatitis
Insects: urticaria

Risk co-factors: eczema (atopic, dyshidrotic, seborrhoeic, nummular); psoriasis; xeroderma; acne
Mechanical factors include repeated friction, excessive and prolonged pressure, and the physical action of some industrial dusts, whose effects depend on the shape and size of the dust particles and the extent of their friction with the skin. The injuries themselves may be mechanical (especially in workers exposed to repeated vibration), chemical or thermal, and include physical lesions (ulcers, blisters), secondary infection and isomorphism (the Koebner phenomenon). Chronic changes, such as scars, keloids, dyschromia and Raynaud’s phenomenon (a peripheral neurovascular alteration caused by prolonged use of vibrating tools), may also develop.
Chemical factors are by far the most common cause of occupational dermatoses. Establishing an exhaustive list of the many chemicals involved is not practical. They may cause allergic, irritant or photodermatotic reactions and may leave dyschromic sequelae. The effects of chemical irritation range from simple drying through inflammation to complete cell necrosis. More information on this subject is provided in the article on contact dermatitis. Material Safety Data Sheets, which provide toxicological and other information, are indispensable tools for developing effective preventive measures against chemicals. Several countries, in fact, require chemical manufacturers to provide every workplace using their products with information on the occupational health hazards the products pose.
Bacterial, viral and fungal infections contracted in the workplace arise from contact with contaminated materials, animals or people. Infections include pyodermatitis, folliculitis, panaris (whitlow), dermatomycosis, anthrax and brucellosis. Workers in the food-processing sector may develop multiple warts on their hands, but only if they have already suffered microtraumas and are exposed to excessive levels of humidity for prolonged periods (Durocher and Paquette 1985). Both animals and humans (e.g., day-care and health-care workers) may act as vectors of parasitic infestations such as mites, scabies and head lice. Phytodermatitis may be caused by plants (Rhus sp.) or flowers (alstroemeria, chrysanthemums, tulips). Finally, some wood extracts may cause contact dermatitis.
Risk Co-factors
Some non-occupational cutaneous pathologies may exacerbate the effects of environmental factors on workers’ skin. For example, it has long been recognized that the risk of irritant contact dermatitis is greatly increased in individuals with a medical history of atopy, even in the absence of atopic dermatitis. In a study of 47 cases of irritant contact dermatitis of the hands of food-processing workers, 64% had a history of atopy (Cronin 1987). Individuals with atopic dermatitis have been shown to develop more severe irritation when exposed to sodium lauryl sulphate, which is commonly found in soaps (Agner 1991). A predisposition to allergy (Type I, atopic diathesis) does not, however, increase the risk of delayed (Type IV) allergic contact dermatitis, even to nickel (Schubert et al. 1987), the allergen most commonly screened for. On the other hand, atopy has recently been shown to favour the development of contact urticaria (Type I allergy) to rubber latex among health-care workers (Turjanmaa 1987; Durocher 1995) and to fish among caterers (Cronin 1987).
In psoriasis, the outermost layer of the skin (the stratum corneum) is thickened but parakeratotic rather than callus-like, and is therefore less resistant to skin irritants and mechanical traction. Frequent skin injury may worsen pre-existing psoriasis, and new isomorphic psoriatic lesions may develop on scar tissue.
Repeated contact with detergents, solvents, or astringent dusts may lead to secondary irritant contact dermatitis in individuals suffering from xeroderma. Similarly, exposure to frying oils may exacerbate acne.
Prevention
A thorough understanding of the relevant risk factors is a prerequisite to establishing prevention programmes, which may be either collective or personal (e.g., based on personal protective equipment). The efficacy of prevention programmes depends on the close collaboration of workers and employers during their development. Table 3 provides some information on prevention.
Table 3. Collective measures (group approach) to prevention

Collective measures:
Use of tools for handling materials
Ventilation
Closed systems
Automation

Personal protection
Workplace Prevention
The primary goal of workplace preventive measures is the elimination of hazards at their source. When feasible, substitution of a non-toxic substance for a toxic one is the ideal solution. For example, the toxic effects of a solvent incorrectly used to clean the skin can be eliminated by substituting a synthetic detergent that presents no systemic hazard and is less irritating. Several non-allergenic cement powders, in which ferrous sulphate is used to reduce the content of hexavalent chromium (a well-known allergen), are now available. In water-based cooling systems, chromate-based anti-corrosion agents can be replaced by zinc borate, a weaker allergen (Mathias 1990). Allergenic biocides in cutting oils can be replaced by other preservatives. The use of gloves made of synthetic rubber or PVC can prevent the development of latex allergies among health-care workers. The replacement of aminoethanolamine by triethanolamine in fluxes used to weld aluminium cables has led to a reduction in allergies (Lachapelle et al. 1992).
Modification of production processes to avoid skin contact with hazardous substances may be an acceptable alternative when substitution is impossible or the risk is low. Simple modifications include using screens or flexible tubes to eliminate splashing during the transfer of liquids, or filters that retain residues and reduce the need for manual cleaning. Tools and equipment with more natural grasp points, which avoid excessive pressure and friction on the hands and prevent skin contact with irritants, may also help. Local exhaust ventilation with capture inlets that limit nebulization or reduce the concentration of airborne dusts is useful. Where processes have been completely automated in order to avoid environmental hazards, particular attention should be paid to training the workers responsible for repairing and cleaning the equipment, and specific preventive measures may be required to limit their exposure (Lachapelle et al. 1992).
All personnel must be aware of the hazards present in their workplace, and collective measures can be effective only when implemented in conjunction with a comprehensive information programme. Material Safety Data Sheets can be used to identify hazardous and potentially hazardous substances, and hazard warning signs can be used to identify such substances rapidly. A simple colour code allows the risk level to be coded visually. For example, a red sticker could signal the presence of a hazard and the necessity of avoiding direct skin contact; this code would be appropriate for a corrosive substance that rapidly attacks the skin. Similarly, a yellow sticker could indicate the need for prudence, for example with a substance capable of damaging the skin following repeated or prolonged contact (Durocher 1984). Periodic display of posters and the occasional use of audio-visual aids reinforce the information delivered and stimulate interest in occupational dermatosis prevention programmes.
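Such a colour code amounts to a simple lookup from sticker colour to required precaution. A minimal sketch of the two-colour scheme described above follows; the label wording is illustrative, not normative.

```python
# Sketch of the two-colour hazard code described above (after
# Durocher 1984). The label wording is illustrative only.
HAZARD_LABELS = {
    "red": "Hazard: corrosive substance; avoid all direct skin contact.",
    "yellow": "Caution: may damage skin after repeated or prolonged contact.",
}

def label_for(colour):
    return HAZARD_LABELS.get(colour, "No skin-hazard label assigned.")

print(label_for("red"))
```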
Complete information on the hazards associated with work activities should be provided to workers prior to starting work. In several countries, workers are given special occupational training by professional instructors.
Workplace training must be repeated whenever a process or task changes in a way that alters the risk factors. Neither an alarmist nor a paternalistic attitude favours good working relationships. Employers and workers are partners who both desire that work be carried out safely, and the information delivered will be credible only if it is realistic.
Given the absence of safety standards for dermatotoxic substances (Mathias 1990), preventive measures must be supported by vigilant observation of the state of workers’ skin. Fortunately, this is easily implemented, since the skin, particularly that of the hands and face, can be observed directly by everyone. The goal of this type of observation is to identify early signs of cutaneous change indicating that the body’s natural equilibrium has been overwhelmed. Workers and health and safety specialists should therefore be on the lookout for the following early warning signs:
Prompt identification and treatment of cutaneous pathologies is essential, and their underlying causal factors must be identified, to prevent them from becoming chronic.
When workplace controls are unable to protect the skin from contact with hazardous substances, the duration of skin contact should be minimized. For this purpose, workers should have ready access to appropriate hygienic equipment. Contamination of cleaning agents can be avoided by using closed containers equipped with a pump that dispenses an adequate amount of cleanser with a single press. Selecting cleansers requires a compromise between cleaning power and the potential for irritation: so-called high-performance cleansers, for example, often contain solvents or abrasives, which increase irritation. The cleanser selected should take into account the specific characteristics of the workplace, since workers will often simply use a solvent if the available cleansers are ineffective. Cleansers may take the form of soaps, synthetic detergents, waterless pastes or creams, abrasive preparations and antimicrobial agents (Durocher 1984).
In several occupations, the application of a protective cream before work facilitates skin cleaning, regardless of the cleaner used. In all cases, the skin must be thoroughly rinsed and dried after each washing. Failure to do so may increase irritation, for example by re-emulsification of the soap residues caused by the humidity inside impermeable gloves.
Industrial soaps are usually provided as liquids dispensed by hand pressure. They are composed of fatty acids of animal (e.g., lard) or vegetable (e.g., oil) origin saponified with a base (e.g., sodium hydroxide). Saponification may be incomplete, leaving residual free alkali capable of irritating the skin. To avoid this, a pH not far from neutral (in the range of 4 to 10) is desirable. These liquid soaps are adequate for many tasks.
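The underlying reaction is standard saponification chemistry (not detailed in the source): the fatty acid is neutralized by the base to give the soap and water, and any unreacted base remains as free alkali:

$$\mathrm{RCOOH} + \mathrm{NaOH} \longrightarrow \mathrm{RCOONa} + \mathrm{H_2O}$$

where RCOOH denotes a fatty acid and RCOONa the corresponding sodium soap.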
Synthetic detergents, available in both liquid and powder form, emulsify greases and therefore remove sebum, the substance that protects the skin against drying. This defatting of the skin is generally less marked with soaps than with synthetic detergents and is proportional to the detergent concentration. Emollients such as glycerine, lanolin and lecithin are often added to detergents to counteract this effect.
Pastes and creams, also known as “waterless soaps”, are emulsions of oil-based substances in water. Their primary cleaning agent is a solvent, generally a petroleum derivative. They are called “waterless” because they are effective in the absence of tap water, and they are typically used to remove stubborn soils or to wash the hands when water is unavailable. Because of their harshness, they are not cleansers of choice. More recently, “waterless soaps” containing synthetic detergents, which are less irritating to the skin than solvents, have become available. The American Association of Soap and Detergent Manufacturers recommends washing with a mild soap after using solvent-based “waterless soaps”. Workers who use “waterless soaps” three or four times per day should apply a moisturizing lotion or cream at the end of the workday in order to prevent drying.
Abrasive particles, which are often added to one of the cleansers described above to increase cleaning power, are irritants. They may be soluble (e.g., borax) or insoluble. Insoluble abrasives may be mineral (e.g., pumice), vegetable (e.g., nut shells) or synthetic (e.g., polystyrene).
Antimicrobial cleaners should only be used in workplaces where there is a real risk of infection, since several of them are potential allergens and workers should not be exposed needlessly.
Under the influence of certain substances or of repeated washing, workers’ hands may tend to dry out. Long-term maintenance of good skin hygiene under these conditions requires daily moisturizing, the frequency of which will depend on the individual and the type of work. In many cases, moisturizing lotions or creams, also known as hand creams, are adequate. In cases of severe drying, or when the hands are immersed for prolonged periods, hydrophilic vaselines are more appropriate. So-called protective or barrier creams are usually moisturizing creams; they may contain silicones or zinc or titanium oxides. Exposure-specific protective creams are rare, with the exception of those protecting against ultraviolet radiation. These have been greatly improved in recent years and now provide effective protection against both UV-A and UV-B; a minimum protection factor of 15 (North American scale) is recommended. Stokogard™ cream appears to be effective against contact dermatitis caused by poison ivy. Protective or barrier creams should never be regarded as the equivalent of an invisible impermeable glove (Sasseville 1995). Furthermore, protective creams are effective only on healthy skin.
While few people like wearing protective equipment, there may be no choice when the measures described above prove inadequate. Protective equipment includes boots, aprons, visors, sleeves, overalls, shoes and gloves; these are discussed elsewhere in this Encyclopaedia.
Many workers complain that protective gloves reduce their dexterity, but glove use is nevertheless unavoidable in some situations, and special efforts are required to minimize the inconvenience. Many types are available, both water-permeable (cotton, leather, metal mesh, Kevlar™, asbestos) and water-impermeable (latex rubber, neoprene, nitrile, polyvinyl chloride, Viton™, polyvinyl alcohol, polyethylene). The type selected should take into account the specific needs of each situation. Cotton offers minimal protection but good ventilation. Leather is effective against friction, pressure, traction and some types of injury. Metal mesh protects against cuts. Kevlar™ is fire-resistant, and asbestos is fire- and heat-resistant. The solvent resistance of water-impermeable gloves is highly variable, depending on their composition and thickness. To increase solvent resistance, some researchers have developed gloves incorporating multiple polymer layers.
Several characteristics have to be taken into account when selecting gloves, including thickness, flexibility, length, roughness, wrist and finger fit, and chemical, mechanical and thermal resistance. Several laboratories have developed techniques, based on the measurement of breakthrough times and permeability constants, for estimating the resistance of gloves to specific chemicals. Lists to help guide glove selection are also available (Lachapelle et al. 1992; Berardinelli 1988).
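In practice, such lists reduce glove selection to comparing measured breakthrough times against the expected duration of exposure. A minimal sketch follows; the breakthrough figures shown are invented placeholders, and real selection should rely on published permeation tables such as those cited above.

```python
# Sketch of glove selection by breakthrough time. The figures below
# are hypothetical placeholders, not measured permeation data.
BREAKTHROUGH_MIN = {
    # (glove material, chemical) -> breakthrough time in minutes
    ("latex rubber", "toluene"): 5,
    ("neoprene", "toluene"): 20,
    ("nitrile", "toluene"): 30,
    ("polyvinyl alcohol", "toluene"): 480,
}

def suitable_gloves(chemical, task_duration_min):
    """Return the materials whose breakthrough time exceeds the task duration."""
    return sorted(material
                  for (material, chem), minutes in BREAKTHROUGH_MIN.items()
                  if chem == chemical and minutes > task_duration_min)

print(suitable_gloves("toluene", 60))  # ['polyvinyl alcohol'] with these data
```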
In some cases, the prolonged wear of protective gloves may cause allergic contact dermatitis due to glove components or to allergens that penetrate the gloves. Wearing protective gloves is also associated with an increased risk of skin irritation, due to prolonged exposure to high levels of humidity within the glove or penetration of irritants through perforations. To avoid deterioration of their condition, all workers suffering from hand dermatitis, regardless of its origin, should avoid wearing gloves that increase the heat and humidity around their lesions.
Establishing a comprehensive occupational dermatosis prevention programme depends on careful adaptation of standards and principles to the unique characteristics of each workplace. To ensure their effectiveness, prevention programmes should be revised periodically to take into account changes in the workplace, experience with the programme and technological advances.