Monday, 04 April 2011 18:53

Technical Requirements for Safety-Related Systems Based on Electrical, Electronic and Programmable Electronic Devices


Machinery, process plants and other equipment can, if they malfunction, present risks from hazardous events such as fires, explosions, radiation overdoses and moving parts. One of the ways such plants, equipment and machinery can malfunction is through failures of the electrical, electronic and programmable electronic (E/E/PE) devices used in the design of their control or safety systems. These failures can arise either from physical faults in the device (e.g., wear and tear occurring randomly in time (random hardware failures)) or from systematic faults (e.g., errors made in the specification and design of a system that cause it to fail under (1) some particular combination of inputs, (2) some environmental condition, (3) incorrect or incomplete inputs from sensors, (4) incomplete or erroneous data entry by operators or (5) poor interface design).

Safety-Related Systems Failures

This article covers the functional safety of safety-related control systems and considers the hardware and software technical requirements necessary to achieve the required safety integrity. The overall approach is in accordance with the proposed International Electrotechnical Commission Standard IEC 1508, Parts 2 and 3 (IEC 1993). The overall goal of draft international standard IEC 1508, Functional Safety: Safety-Related Systems, is to ensure that plant and equipment can be safely automated. A key objective in the development of the proposed international standard is to prevent or minimize the frequency of:

  • failures of control systems triggering other events which in turn could lead to danger (e.g., control system fails, control is lost, process goes out of control resulting in a fire, release of toxic materials, etc.)
  • failures in alarm and monitoring systems that leave operators without information in a form that can be quickly identified and understood, which they need in order to carry out the necessary emergency actions
  • undetected failures in protection systems, making them unavailable when needed for a safety action (e.g., a failed input card in an emergency shut-down system).

 

The article “Electrical, electronic and programmable electronic safety-related systems” sets out the general safety management approach embodied within Part 1 of IEC 1508 for assuring the safety of control and protection systems that are important to safety. This article describes the overall conceptual engineering design that is needed to reduce the risk of an accident to an acceptable level, including the role of any control or protection systems based on E/E/PE technology.

In figure 1, the risk from the equipment, process plant or machine (generally referred to as equipment under control (EUC) without protective devices) is marked at one end of the EUC Risk Scale, and the target level of risk that is needed to meet the required level of safety is at the other end. In between is shown the combination of safety-related systems and external risk reduction facilities needed to make up the required risk reduction. These can be of various types—mechanical (e.g., pressure relief valves), hydraulic, pneumatic, physical, as well as E/E/PE systems. Figure 2 emphasizes the role of each safety layer in protecting the EUC as the accident progresses.

Figure 1. Risk reduction: General concepts

SAF060F1

 

Figure 2. Overall model: Protection layers

SAF060F2

Provided that a hazard and risk analysis has been performed on the EUC as required in Part 1 of IEC 1508, the overall conceptual design for safety has been established; the required functions and the Safety Integrity Level (SIL) target for any E/E/PE control or protection system have therefore been defined. The Safety Integrity Level target is defined with respect to a Target Failure Measure (see table 1).


Table 1. Safety Integrity Levels for protection systems: Target failure measures

Safety Integrity Level        Demand mode of operation (probability of failure to perform its design function on demand)

4                             10⁻⁵ ≤ P < 10⁻⁴

3                             10⁻⁴ ≤ P < 10⁻³

2                             10⁻³ ≤ P < 10⁻²

1                             10⁻² ≤ P < 10⁻¹
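As an illustration of how table 1 is read, here is a minimal sketch in Python (the function name and the use of a variable `pfd` for the demand-mode probability of failure are illustrative assumptions, not from IEC 1508) that classifies a probability of failure on demand into its SIL band:

```python
def sil_from_pfd(pfd: float) -> int:
    """Return the Safety Integrity Level (1-4) for a demand-mode
    probability of failure on demand, using the bands of table 1."""
    bands = [
        (1e-5, 1e-4, 4),  # 10^-5 <= pfd < 10^-4
        (1e-4, 1e-3, 3),
        (1e-3, 1e-2, 2),
        (1e-2, 1e-1, 1),
    ]
    for low, high, sil in bands:
        if low <= pfd < high:
            return sil
    raise ValueError("PFD outside the SIL 1-4 bands of table 1")

print(sil_from_pfd(5e-4))  # 3: a PFD of 5 x 10^-4 falls in the SIL 3 band
```

Note the open ends: a PFD below 10⁻⁵ or at or above 10⁻¹ falls outside the four bands and would need separate treatment.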


Protection Systems

This article outlines the technical requirements that the designer of an E/E/PE safety-related system should consider in order to satisfy the required Safety Integrity Level target. The focus is on a typical protection system utilizing programmable electronics, to allow a more in-depth discussion of the key issues with little loss of generality. A typical protection system is shown in figure 3, which depicts a single-channel safety system with a secondary switch-off activated via a diagnostic device. In normal operation an unsafe condition of the EUC (e.g., overspeed in a machine, high temperature in a chemical plant) will be detected by the sensor and transmitted to the programmable electronics, which will command the actuators (via the output relays) to put the system into a safe state (e.g., removing power from the electric motor of the machine, opening a valve to relieve pressure).

Figure 3. Typical protection system

SAF060F3

But what if there are failures in the protection system components? This is the function of the secondary switch-off, which is activated by the diagnostic (self-checking) feature of this design. However, the system is not completely fail-safe, as the design has only a certain probability of being available when being asked to carry out its safety function (it has a certain probability of failure on demand or a certain Safety Integrity Level). For example, the above design might be able to detect and tolerate certain types of output card failure, but it would not be able to withstand a failure of the input card. Therefore, its safety integrity will be much lower than that of a design with a higher-reliability input card, or improved diagnostics, or some combination of these.

There are other possible causes of card failures, including “traditional” physical faults in the hardware, systematic faults including errors in the requirements specification, implementation faults in the software and inadequate protection against environmental conditions (e.g., humidity). The diagnostics in this single-channel design may not cover all these types of faults, and therefore this will limit the Safety Integrity Level achieved in practice. (Coverage is a measure of the percentage of faults that a design can detect and handle safely.)
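The coverage measure defined above can be made concrete with a short sketch; the failure-rate figures below are invented for illustration only:

```python
def diagnostic_coverage(detected_rate: float, total_rate: float) -> float:
    """Fraction of the total dangerous failure rate that the
    diagnostics can detect and handle safely."""
    if total_rate <= 0 or not (0 <= detected_rate <= total_rate):
        raise ValueError("need 0 <= detected_rate <= total_rate, total_rate > 0")
    return detected_rate / total_rate

# e.g., diagnostics trap 9 out of every 10 dangerous faults per unit time:
dc = diagnostic_coverage(detected_rate=9e-6, total_rate=1e-5)
print(f"coverage = {dc:.0%}")  # coverage = 90%
```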

Technical Requirements

Parts 2 and 3 of draft IEC 1508 provide a framework for identifying the various potential causes of failure in hardware and software and for selecting design features that overcome those potential causes of failure appropriate to the required Safety Integrity Level of the safety-related system. For example, the overall technical approach for the protection system in figure 3 is shown in figure 4. The figure indicates the two basic strategies for overcoming faults and failures: (1) fault avoidance, where care is taken to prevent faults from being created; and (2) fault tolerance, where the design is created specifically to tolerate specified faults. The single-channel system mentioned above is an example of a (limited) fault-tolerant design where diagnostics are used to detect certain faults and put the system into a safe state before a dangerous failure can occur.

Figure 4. Design specification: Design solution

SAF060F4

Fault avoidance

Fault avoidance attempts to prevent faults being introduced into a system. The main approach is to use a systematic method of managing the project so that safety is treated as a definable and manageable quality of a system, during design and then subsequently during operation and maintenance. The approach, which is similar to quality assurance, is based on the concept of feedback and involves: (1) planning (defining safety objectives, identifying the ways and means to achieve the objectives); (2) measuring achievement against the plan during implementation and (3) applying feedback to correct for any deviations. Design reviews are a good example of a fault avoidance technique. In IEC 1508 this “quality” approach to fault avoidance is facilitated by the requirements to use a safety lifecycle and employ safety management procedures for both hardware and software. For the latter, these often manifest themselves as software quality assurance procedures such as those described in ISO 9000-3 (1990).

In addition, Parts 2 and 3 of IEC 1508 (concerning hardware and software, respectively) grade certain techniques or measures that are considered useful for fault avoidance during the various safety lifecycle phases. Table 2 gives an example from Part 3 for the design and development phase of software. The designer would use the table to assist in the selection of fault avoidance techniques, depending on the required Safety Integrity Level. With each technique or measure in the tables there is a recommendation for each Safety Integrity Level, 1 to 4. The range of recommendations covers Highly Recommended (HR), Recommended (R), Neutral, neither for nor against (—), and Not Recommended (NR).

Table 2. Software design and development

Technique/measure                                                                  SIL 1    SIL 2    SIL 3    SIL 4

1. Formal methods including, for example, CCS, CSP, HOL, LOTOS                       —        R        R        HR

2. Semi-formal methods                                                               HR       HR       HR       HR

3. Structured methodology including, for example, JSD, MASCOT, SADT, SSADM
   and YOURDON                                                                       HR       HR       HR       HR

4. Modular approach                                                                  HR       HR       HR       HR

5. Design and coding standards                                                       R        HR       HR       HR

HR = highly recommended; R = recommended; NR = not recommended; — = neutral: the technique/measure is neither for nor against the SIL.
Note: a numbered technique/measure shall be selected according to the safety integrity level.
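One hedged sketch of using table 2 as a lookup (the dictionary mirrors the table above; the SIL 1 entry for formal methods is not legible in the source, so a neutral "—" is assumed there):

```python
# Recommendations per technique, indexed SIL 1..4 (after table 2).
# The "—" for formal methods at SIL 1 is an assumption, not from the source.
TABLE_2 = {
    "Formal methods (CCS, CSP, HOL, LOTOS)": ["—", "R", "R", "HR"],
    "Semi-formal methods": ["HR", "HR", "HR", "HR"],
    "Structured methodology (JSD, MASCOT, SADT, SSADM, YOURDON)": ["HR", "HR", "HR", "HR"],
    "Modular approach": ["HR", "HR", "HR", "HR"],
    "Design and coding standards": ["R", "HR", "HR", "HR"],
}

def highly_recommended(sil: int) -> list:
    """Techniques rated HR for the given SIL (1-4)."""
    return [t for t, recs in TABLE_2.items() if recs[sil - 1] == "HR"]

print(len(highly_recommended(1)), len(highly_recommended(4)))  # 3 5
```

As the printed counts show, the number of Highly Recommended techniques grows as the SIL target rises.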

Fault tolerance

IEC 1508 requires increasing levels of fault tolerance as the safety integrity target increases. The standard recognizes, however, that fault tolerance is more important when systems (and the components that make up those systems) are complex (designated as Type B in IEC 1508). For less complex, “well proven” systems, the degree of fault tolerance can be relaxed.

Tolerance against random hardware faults

Table 3 shows the requirements for fault tolerance against random hardware failures in complex hardware components (e.g., microprocessors) when used in a protection system such as is shown in figure 3. The designer may need to consider an appropriate combination of diagnostics, fault tolerance and manual proof checks to overcome this class of fault, depending on the required Safety Integrity Level.


Table 3. Safety Integrity Level - Fault requirements for Type B components¹

SIL    Fault requirements

1      Safety-related undetected faults shall be detected by the proof check.

2      For components without on-line medium diagnostic coverage, the system shall be able to perform the safety function in the presence of a single fault. Safety-related undetected faults shall be detected by the proof check.

3      For components with on-line high diagnostic coverage, the system shall be able to perform the safety function in the presence of a single fault. For components without on-line high diagnostic coverage, the system shall be able to perform the safety function in the presence of two faults. Safety-related undetected faults shall be detected by the proof check.

4      The components shall be able to perform the safety function in the presence of two faults. Faults shall be detected with on-line high diagnostic coverage. Safety-related undetected faults shall be detected by the proof check. Quantitative hardware analysis shall be based on worst-case assumptions.

¹ Components whose failure modes are not well defined or testable, or for which there are poor failure data from field experience (e.g., programmable electronic components).


IEC 1508 aids the designer by providing design specification tables (see table 4) with design parameters indexed against the Safety Integrity Level for a number of commonly used protection system architectures.

Table 4. Requirements for Safety Integrity Level 2 - Programmable electronic system architectures for protection systems

PE system configuration             Diagnostic coverage    Off-line proof test    Mean time to
                                    per channel            interval (TI)          spurious trip

Single PE, Single I/O, Ext. WD      High                   6 months               1.6 years
Dual PE, Single I/O                 High                   6 months               10 years
Dual PE, Dual I/O, 2oo2             High                   3 months               1,281 years
Dual PE, Dual I/O, 1oo2             None                   2 months               1.4 years
Dual PE, Dual I/O, 1oo2             Low                    5 months               1.0 years
Dual PE, Dual I/O, 1oo2             Medium                 18 months              0.8 years
Dual PE, Dual I/O, 1oo2             High                   36 months              0.8 years
Dual PE, Dual I/O, 1oo2D            None                   2 months               1.9 years
Dual PE, Dual I/O, 1oo2D            Low                    4 months               4.7 years
Dual PE, Dual I/O, 1oo2D            Medium                 18 months              18 years
Dual PE, Dual I/O, 1oo2D            High                   48+ months             168 years
Triple PE, Triple I/O, IPC, 2oo3    None                   1 month                20 years
Triple PE, Triple I/O, IPC, 2oo3    Low                    3 months               25 years
Triple PE, Triple I/O, IPC, 2oo3    Medium                 12 months              30 years
Triple PE, Triple I/O, IPC, 2oo3    High                   48+ months             168 years
 

The first column of the table represents architectures with varying degrees of fault tolerance. In general, architectures placed near the bottom of the table have a higher degree of fault tolerance than those near the top. A 1oo2 (one out of two) system is able to withstand any one fault, as can a 2oo3 (two out of three) system.
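The voting terminology can be sketched directly; the function names are illustrative, not from the standard:

```python
def vote_1oo2(a: bool, b: bool) -> bool:
    """1oo2: trip when either of two channels demands a trip."""
    return a or b

def vote_2oo3(a: bool, b: bool, c: bool) -> bool:
    """2oo3: trip only when at least two of three channels agree,
    so a single faulty channel (stuck high or low) is outvoted."""
    return (a + b + c) >= 2

print(vote_2oo3(True, True, False))   # True: trips despite one silent channel
print(vote_2oo3(True, False, False))  # False: one spurious demand is outvoted
```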

The second column describes the percentage coverage of any internal diagnostics. The higher the level of the diagnostics, the more faults will be trapped. In a protection system this is important because, provided the faulty component (e.g., an input card) is repaired within a reasonable time (often 8 hours), there is little loss in functional safety. (Note: this would not be the case for a continuous control system, because any fault is likely to cause an immediate unsafe condition and the potential for an incident.)

The third column shows the interval between proof tests. These are special tests that are required to be carried out to thoroughly exercise the protection system to ensure that there are no latent faults. Typically these are carried out by the equipment vendor during plant shutdown periods.

The fourth column shows the spurious trip rate. A spurious trip is one that causes the plant or equipment to shut down when there is no process deviation. The price for safety is often a higher spurious trip rate. A simple redundant protection system—1oo2—has, with all other design factors unchanged, a higher Safety Integrity Level but also a higher spurious trip rate than a single-channel (1oo1) system.
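A back-of-envelope sketch of this trade-off, with an invented per-channel safe-failure rate (the simple doubling below ignores diagnostics and repair, so it is only an approximation):

```python
# Illustrative figure, not from the article: spurious (safe) failures
# per channel per year.
LAMBDA_SPURIOUS = 0.5

# In a 1oo1 system only the single channel can trip the plant; in a
# 1oo2 system either of the two channels tripping shuts the plant down,
# so the spurious trip rate roughly doubles.
rate_1oo1 = LAMBDA_SPURIOUS
rate_1oo2 = 2 * LAMBDA_SPURIOUS

print(1 / rate_1oo1, 1 / rate_1oo2)  # mean time to spurious trip: 2.0 1.0 (years)
```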

If one of the architectures in the table is not being used or if the designer wants to carry out a more fundamental analysis, then IEC 1508 allows this alternative. Reliability engineering techniques such as Markov modelling can then be used to calculate the hardware element of the Safety Integrity Level (Johnson 1989; Goble 1992).
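As a sketch of the kind of Markov analysis referred to (cf. Johnson 1989; Goble 1992), the following models a 1oo2 channel pair with undetected dangerous faults as a three-state chain: both channels healthy, one failed, both failed (dangerous). The failure rate and proof-test interval are invented for illustration; a real analysis would use plant-specific data and include diagnostic and repair transitions.

```python
LAMBDA_DU = 1e-5  # undetected dangerous failures per channel per hour (assumed)
TI_HOURS = 4380   # 6-month proof-test interval (assumed)

def pfd_1oo2_markov(lam: float, hours: int) -> float:
    """Time-averaged probability of the dangerous (both-failed) state
    over one proof-test interval, by stepping the chain hour by hour."""
    p = [1.0, 0.0, 0.0]  # start fully healthy just after a proof test
    acc = 0.0
    for _ in range(hours):
        p = [
            p[0] * (1 - 2 * lam),               # neither channel fails
            p[0] * 2 * lam + p[1] * (1 - lam),  # first failure occurs / persists
            p[1] * lam + p[2],                  # second failure: dangerous state
        ]
        acc += p[2]
    return acc / hours

# Comparable to the analytic approximation (lam * TI)**2 / 3 for 1oo2.
print(f"{pfd_1oo2_markov(LAMBDA_DU, TI_HOURS):.1e}")
```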

Tolerance against systematic and common cause failures

This class of failure is very important in safety systems and is the limiting factor on the achievement of safety integrity. In a redundant system a component or subsystem, or even the whole system, is duplicated to achieve a high reliability from lower-reliability parts. Reliability improvement occurs because, statistically, the chance of two systems failing simultaneously from random faults is the product of the failure probabilities of the individual systems, and hence much lower. On the other hand, systematic and common cause faults cause redundant systems to fail coincidentally when, for example, a specification error in the software leads the duplicated parts to fail at the same time. Another example would be the failure of a common power supply to a redundant system.
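One widely used way to quantify this limit, not spelled out in this article, is the beta-factor model from reliability engineering: a fraction β of each channel's dangerous failure rate is assumed to strike both channels simultaneously, so a redundant pair can never do better than β times a single channel. A sketch with invented figures:

```python
def pfd_1oo2_with_ccf(lam_du: float, ti_hours: float, beta: float) -> float:
    """Average PFD of a 1oo2 pair: independent coincident failures
    plus a common cause term that scales like a single channel."""
    independent = (lam_du * ti_hours) ** 2 / 3.0
    common_cause = beta * lam_du * ti_hours / 2.0
    return independent + common_cause

no_ccf = pfd_1oo2_with_ccf(1e-5, 4380, beta=0.0)
with_ccf = pfd_1oo2_with_ccf(1e-5, 4380, beta=0.1)
print(no_ccf < with_ccf)  # True: the common cause term dominates here
```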

IEC 1508 provides tables of engineering techniques ranked against the Safety Integrity Level considered effective in providing protection against systematic and common cause failures.

Examples of techniques providing defences against systematic failures are diversity and analytical redundancy. The basis of diversity is that if a designer implements a second channel in a redundant system using a different technology or software language, then faults in the redundant channels can be regarded as independent (i.e., a low probability of coincidental failure). However, particularly in the area of software-based systems, there is some suggestion that this technique may not be effective, as most mistakes are in the specification. Analytical redundancy attempts to exploit redundant information in the plant or machine to identify faults. For the other causes of systematic failure—for example, external stresses—the standard provides tables giving advice on good engineering practices (e.g., separation of signal and power cables) indexed against Safety Integrity Level.

Conclusions

Computer-based systems offer many advantages—not only economic, but also the potential for improving safety. However, the attention to detail required to realize this potential is significantly greater than is the case using conventional system components. This article has outlined the main technical requirements that a designer needs to take into account to successfully exploit this technology.

 




Safety Applications References

Arteau, J, A Lan, and J-F Corveil. 1994. Use of Horizontal Lifelines in Structural Steel Erection. Proceedings of the International Fall Protection Symposium, San Diego, California (October 27–28, 1994). Toronto: International Society for Fall Protection.

Backström, T. 1996. Accident risk and safety protection in automated production. Doctoral thesis. Arbete och Hälsa 1996:7. Solna: National Institute for Working Life.

Backström, T and L Harms-Ringdahl. 1984. A statistical study of control systems and accidents at work. J Occup Acc. 6:201–210.

Backström, T and M Döös. 1994. Technical defects behind accidents in automated production. In Advances in Agile Manufacturing, edited by PT Kidd and W Karwowski. Amsterdam: IOS Press.

—. 1995. A comparison of occupational accidents in industries with advanced manufacturing technology. Int J Hum Factors Manufac. 5(3):267–282.

—. In press. The technical genesis of machine failures leading to occupational accidents. Int J Ind Ergonomics.

—. Accepted for publication. Absolute and relative frequencies of automation accidents at different kinds of equipment and for different occupational groups. J Saf Res.

Bainbridge, L. 1983. Ironies of automation. Automatica 19:775–779.

Bell, R and D Reinert. 1992. Risk and system integrity concepts for safety related control systems. Saf Sci 15:283–308.

Bouchard, P. 1991. Échafaudages. Guide série 4. Montreal: CSST.

Bureau of National Affairs. 1975. Occupational Safety and Health Standards. Roll-over Protective Structures for Material Handling Equipment and Tractors, Sections 1926, 1928. Washington, DC: Bureau of National Affairs.

Corbett, JM. 1988. Ergonomics in the development of human-centred AMT. Applied Ergonomics 19:35–39.

Culver, C and C Connolly. 1994. Prevent fatal falls in construction. Saf Health September 1994:72–75.

Deutsche Industrie Normen (DIN). 1990. Grundsätze für Rechner in Systemen mit Sicherheitsauffgaben. DIN V VDE 0801. Berlin: Beuth Verlag.

—. 1994. Grundsätze für Rechner in Systemen mit Sicherheitsauffgaben Änderung A 1. DIN V VDE 0801/A1. Berlin: Beuth Verlag.

—. 1995a. Sicherheit von Maschinen—Druckempfindliche Schutzeinrichtungen [Machine safety—Pressure-sensitive protective equipment]. DIN prEN 1760. Berlin: Beuth Verlag.

—. 1995b. Rangier-Warneinrichtungen—Anforderungen und Prüfung [Commercial vehicles—obstacle detection during reversing—requirements and tests]. DIN-Norm 75031. February 1995.

Döös, M and T Backström. 1993. Description of accidents in automated materials handling. In Ergonomics of Materials Handling and Information Processing at Work, edited by WS Marras, W Karwowski, JL Smith, and L Pacholski. Warsaw: Taylor and Francis.

—. 1994. Production disturbances as an accident risk. In Advances in Agile Manufacturing, edited by PT Kidd and W Karwowski. Amsterdam: IOS Press.

European Economic Community (EEC). 1974, 1977, 1979, 1982, 1987. Council Directives on Rollover Protection Structures of Wheeled Agricultural and Forestry Tractors. Brussels: EEC.

—. 1991. Council Directive on the Approximation of the Laws of the Member States relating to Machinery. (91/368/EEC) Luxembourg: EEC.

Etherton, JR and ML Myers. 1990. Machine safety research at NIOSH and future directions. Int J Ind Erg 6:163–174.

Freund, E, F Dierks and J Roßmann. 1993. Untersuchungen zum Arbeitsschutz bei mobilen Robotern und Mehrrobotersystemen [Occupational safety tests of mobile robots and multiple robot systems]. Dortmund: Schriftenreihe der Bundesanstalt für Arbeitsschutz.

Goble, W. 1992. Evaluating Control System Reliability. New York: Instrument Society of America.

Goodstein, LP, HB Anderson and SE Olsen (eds.). 1988. Tasks, Errors and Mental Models. London: Taylor and Francis.

Gryfe, CI. 1988. Causes and prevention of falling. In International Fall Protection Symposium. Orlando: International Society for Fall Protection.

Health and Safety Executive. 1989. Health and safety statistics 1986–87. Employ Gaz 97(2).

Heinrich, HW, D Peterson and N Roos. 1980. Industrial Accident Prevention. 5th edn. New York: McGraw-Hill.

Hollnagel, E, and D Woods. 1983. Cognitive systems engineering: New wine in new bottles. Int J Man Machine Stud 18:583–600.

Hölscher, H and J Rader. 1984. Mikrocomputer in der Sicherheitstechnik. Rheinland: Verlag TgV-Reinland.

Hörte, S-Å and P Lindberg. 1989. Diffusion and Implementation of Advanced Manufacturing Technologies in Sweden. Working paper No. 198:16. Institute of Innovation and Technology.

International Electrotechnical Commission (IEC). 1992. 122 Draft Standard: Software for Computers in the Application of Industrial Safety-related Systems. IEC 65 (Sec). Geneva: IEC.

—. 1993. 123 Draft Standard: Functional Safety of Electrical/Electronic/Programmable Electronic Systems; Generic Aspects. Part 1, General requirements. Geneva: IEC.

International Labour Organization (ILO). 1965. Safety & Health in Agricultural Work. Geneva: ILO.

—. 1969. Safety and Health in Forestry Work. Geneva: ILO.

—. 1976. Safe Construction and Operation of Tractors. An ILO Code of Practice. Geneva: ILO.

International Organization for Standardization (ISO). 1981. Agricultural and Forestry Wheeled Tractors. Protective Structures. Static Test Method and Acceptance Conditions. ISO 5700. Geneva: ISO.

—. 1990. Quality Management and Quality Assurance Standards: Guidelines for the Application of ISO 9001 to the Development, Supply and Maintenance of Software. ISO 9000-3. Geneva: ISO.

—. 1991. Industrial Automation Systems—Safety of Integrated Manufacturing Systems—Basic Requirements (CD 11161). TC 184/WG 4. Geneva: ISO.

—. 1994. Commercial Vehicles—Obstacle Detection Device during Reversing—Requirements and Tests. Technical Report TR 12155. Geneva: ISO.

Johnson, B. 1989. Design and Analysis of Fault Tolerant Digital Systems. New York: Addison Wesley.

Kidd, P. 1994. Skill-based automated manufacturing. In Organization and Management of Advanced Manufacturing Systems, edited by W Karwowski and G Salvendy. New York: Wiley.

Knowlton, RE. 1986. An Introduction to Hazard and Operability Studies: The Guide Word Approach. Vancouver, BC: Chemetics.

Kuivanen, R. 1990. The impact on safety of disturbances in flexible manufacturing systems. In Ergonomics of Hybrid Automated Systems II, edited by W Karwowski and M Rahimi. Amsterdam: Elsevier.

Laeser, RP, WI McLaughlin and DM Wolff. 1987. Fernsteuerung und Fehlerkontrolle von Voyager 2. Spektrum der Wissenschaft (1):S. 60–70.

Lan, A, J Arteau and J-F Corbeil. 1994. Protection Against Falls from Above-ground Billboards. International Fall Protection Symposium, San Diego, California, October 27–28, 1994. Proceedings International Society for Fall Protection.

Langer, HJ and W Kurfürst. 1985. Einsatz von Sensoren zur Absicherung des Rückraumes von Großfahrzeugen [Using sensors to secure the area behind large vehicles]. FB 605. Dortmund: Schriftenreihe der Bundesanstalt für Arbeitsschutz.

Leveson, NG. 1986. Software safety: Why, what, and how. ACM Computer Surveys (2):S. 129–163.

McManus, TN. N.d. Confined Spaces. Manuscript.

Microsonic GmbH. 1996. Company communication. Dortmund, Germany: Microsonic.

Mester, U, T Herwig, G Dönges, B Brodbeck, HD Bredow, M Behrens and U Ahrens. 1980. Gefahrenschutz durch passive Infrarot-Sensoren (II) [Protection against hazards by infrared sensors]. FB 243. Dortmund: Schriftenreihe der Bundesanstalt für Arbeitsschutz.

Mohan, D and R Patel. 1992. Design of safer agricultural equipment: Application of ergonomics and epidemiology. Int J Ind Erg 10:301–310.

National Fire Protection Association (NFPA). 1993. NFPA 306: Control of Gas Hazards on Vessels. Quincy, MA: NFPA.

National Institute for Occupational Safety and Health (NIOSH). 1994. Worker Deaths in Confined Spaces. Cincinnati, OH, US: DHHS/PHS/CDCP/NIOSH Pub. No. 94-103. NIOSH.

Neumann, PG. 1987. The N best (or worst) computer-related risk cases. IEEE T Syst Man Cyb. New York: S.11–13.

—. 1994. Illustrative risks to the public in the use of computer systems and related technologies. Software Engin Notes SIGSOFT 19, No. 1:16–29.

Occupational Safety and Health Administration (OSHA). 1988. Selected Occupational Fatalities Related to Welding and Cutting as Found in Reports of OSHA Fatality/Catastrophe Investigations. Washington, DC: OSHA.

Organization for Economic Cooperation and Development (OECD). 1987. Standard Codes for the Official Testing of Agricultural Tractors. Paris: OECD.

Organisme professionel de prévention du bâtiment et des travaux publics (OPPBTP). 1984. Les équipements individuels de protection contre les chutes de hauteur. Boulogne-Bilancourt, France: OPPBTP.

Rasmussen, J. 1983. Skills, rules and knowledge: Signals, signs and symbols, and other distinctions in human performance models. IEEE Transactions on Systems, Man and Cybernetics. SMC13(3):257–266.

Reason, J. 1990. Human Error. New York: Cambridge University Press.

Reese, CD and GR Mills. 1986. Trauma epidemiology of confined space fatalities and its application to intervention/prevention now. In The Changing Nature of Work and Workforce. Cincinnati, OH: NIOSH.

Reinert, D and G Reuss. 1991. Sicherheitstechnische Beurteilung und Prüfung mikroprozessorgesteuerter
Sicherheitseinrichtungen. In BIA-Handbuch. Sicherheitstechnisches Informations-und Arbeitsblatt 310222. Bielefeld: Erich Schmidt Verlag.

Society of Automotive Engineers (SAE). 1974. Operator Protection for Industrial Equipment. SAE Standard j1042. Warrendale, USA: SAE.

—. 1975. Performance Criteria for Rollover Protection. SAE Recommended Practice. SAE standard j1040a. Warrendale, USA: SAE.

Schreiber, P. 1990. Entwicklungsstand bei Rückraumwarneinrichtungen [State of developments for rear area warning devices]. Technische Überwachung, Nr. 4, April, S. 161.

Schreiber, P and K Kuhn. 1995. Informationstechnologie in der Fertigungstechnik [Information technology in production technique, series of the Federal Institute for Occupational Safety and Health]. FB 717. Dortmund: Schriftenreihe der Bundesanstalt für Arbeitsschutz.

Sheridan, T. 1987. Supervisory control. In Handbook of Human Factors, edited by G. Salvendy. New York: Wiley.

Springfeldt, B. 1993. Effects of Occupational Safety Rules and Measures with Special Regard to Injuries. Advantages of Automatically Working Solutions. Stockholm: The Royal Institute of Technology, Department of Work Science.

Sugimoto, N. 1987. Subjects and problems of robot safety technology. In Occupational Safety and Health in Automation and Robotics, edited by K Noto. London: Taylor & Francis. 175.

Sulowski, AC (ed.). 1991. Fundamentals of Fall Protection. Toronto, Canada: International Society for Fall Protection.

Wehner, T. 1992. Sicherheit als Fehlerfreundlichkeit. Opladen: Westdeutscher Verlag.

Zimolong, B, and L Duda. 1992. Human error reduction strategies in advanced manufacturing systems. In Human-robot Interaction, edited by M Rahimi and W Karwowski. London: Taylor & Francis.