
Hours of Work

Introduction

The patterning and duration of the hours a person works are a very important aspect of his or her experience of the work situation. Most workers feel that they are paid for their time rather than explicitly for their efforts, so the transaction between the worker and the employer is one of exchanging time for money. The quality of the time being exchanged is therefore a very important part of the equation. Time that has high value because of its importance to the worker in terms of allowing sleep, interaction with family and friends and participation in community events may be more highly prized, and thus require extra financial compensation, as compared to normal “day work” time when many of the worker’s friends and family members are themselves at work or at school. The balance of the transaction can also be changed by making the time spent at work more congenial to the worker, for example, by improving working conditions. The commute to and from work is unavailable to the worker for recreation, so this time too must be considered “grey time” (Knauth et al. 1983) and therefore a “cost” to the worker. Measures such as compressed workweeks, which reduce the number of commuting trips taken per week, or flexitime, which reduces commute time by allowing the worker to avoid the rush hour, are thus likely to change the balance.

Background Literature

As Kogi (1991) has remarked, there is a general trend in both manufacturing and service industries towards greater flexibility in the temporal programming of work. There are a number of reasons for this trend, including the high cost of capital equipment, consumer demand for around-the-clock service, legislative pressure to reduce the length of the workweek and (in some societies such as the United States and Australia) taxation pressure on the employer to have as few different employees as possible. For many employees, the conventional “9 to 5” or “8 to 4”, Monday through Friday workweek is a thing of the past, either because of new work systems or because of the excessive overtime required.

Kogi notes that while the benefits to the employer of such flexibility are quite clear in allowing extended business hours, accommodation of market demand and greater management flexibility, the benefits to the worker may be less certain. Unless the flexible schedule involves elements of choice for workers with respect to their particular hours of work, flexibility can often mean disruptions in their biological clocks and domestic situations. Extended work shifts may also lead to fatigue, compromising safety and productivity, as well as to increased exposure to chemical hazards.

Biological Disruptions due to Abnormal Work Hours

Human biology is specifically oriented towards wakefulness during daylight and sleep at night. Any work schedule which requires late evening or all-night wakefulness as a result of compressed workweeks, mandatory overtime or shiftwork will lead, therefore, to disruptions of the biological clock (Monk and Folkard 1992). These disruptions can be assessed by measuring workers’ “circadian rhythms”, the regular fluctuations in vital signs, blood and urine composition, mood and performance efficiency over the 24-hour period (Aschoff 1981). The measure used most often in shiftwork studies has been body temperature, which, under normal conditions, shows a clear rhythm with a peak at about 2000 hours, a trough at about 0500 hours and a difference of about 0.7°C between the two. After an abrupt change in routine, the amplitude (size) of the rhythm diminishes and the phase (timing) of the rhythm is slow to adjust to the new schedule. Until the adjustment process is complete, sleep is disrupted and daytime mood and performance efficiency are impaired. These symptoms can be regarded as the shiftwork equivalent of jet-lag and can be extremely long lasting (Knauth and Rutenfranz 1976).
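
As a rough numerical illustration, the temperature rhythm described above can be sketched in Python as a single cosine built from the quoted peak time and 0.7°C peak–trough range; the mean temperature and the day-by-day rate of phase adjustment below are invented for illustration. Note that a pure cosine places the trough exactly twelve hours after the peak (about 0800 hours), somewhat later than the observed 0500 hours, so this is only an approximation of a skewed real rhythm.

    import math

    def body_temp(hour, mesor=36.8, amplitude=0.35, peak_hour=20.0):
        """Cosine sketch of the circadian body-temperature rhythm.

        mesor is an assumed mean temperature (deg C); the text gives only
        the peak (~2000 hours), trough (~0500 hours) and ~0.7 deg C range,
        so the amplitude is taken as 0.7 / 2.
        """
        return mesor + amplitude * math.cos(2 * math.pi * (hour - peak_hour) / 24)

    # A crude way to show the slow phase adjustment after an abrupt change
    # in routine is to displace the peak a few hours per day rather than
    # all at once.
    for day, delay in enumerate([0, 2, 4, 6, 8]):
        temps = [body_temp(h, peak_hour=(20 + delay) % 24) for h in range(24)]
        trough = min(range(24), key=lambda h: temps[h])
        print(f"day {day}: temperature trough at about {trough:02d}00 hours")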

Abnormal work hours can also lead to poor health. Although it has proved difficult to quantify the exact size of the effect, it appears that, in addition to sleep disorders, gastrointestinal disorders (including peptic ulcers) and cardiovascular disease can be more frequently found in shift workers (and former shift workers) than in day workers (Scott and LaDou 1990). There is also some preliminary evidence for an increased incidence of psychiatric symptoms (Cole, Loving and Kripke 1990).

Social Disruptions due to Abnormal Work Hours

Not only human biology but also human society opposes those who work abnormal hours. Unlike the nocturnal sleep of the majority, which is carefully protected by strict taboos against loud noise and telephone use at night, the late wakening, day-sleeping and napping required by those working abnormal hours are only grudgingly tolerated by society. Evening and weekend community events can also be denied to these people, leading to feelings of alienation.

It is with the family, however, that the social disruptions of abnormal work hours may be the most devastating. For the worker, the family roles of parent, caregiver, social companion and sexual partner can all be severely compromised by abnormal work hours, leading to marital disharmony and problems with children (Colligan and Rosa 1990). Moreover, the worker’s attempts to rectify, or to avoid, such social problems may result in a decrease in sleep time, thus leading to poor alertness and compromised safety and productivity.

Suggested Solutions

Just as the problems of abnormal work hours are multifaceted, so too must be the solutions to those problems. The primary areas to be addressed should include:

  1. selection and education of the worker
  2. selection of the most appropriate work schedule or roster
  3. improvement of the work environment.


Selection and education of the worker should involve identification and counselling of those persons likely to experience difficulties with abnormal or extended work hours (e.g., older workers and those with high sleep needs, extensive domestic workloads or long commutes). Education in circadian and sleep hygiene principles and family counselling should also be made available (Monk and Folkard 1992). Education is an extremely powerful tool in helping those with abnormal work hours to cope, and in reassuring them about why they may be experiencing problems. Selection of the most appropriate schedule should begin with a decision as to whether abnormal work hours are actually needed at all. For example, night work may in many cases be done better at a different time of day (Knauth and Rutenfranz 1982). Consideration should also be given to the schedule best suited to the work situation, bearing in mind the nature of the work and the demographics of the workforce. Improvement of the work environment may involve raising illumination levels and providing adequate canteen facilities at night.

Conclusions

The particular pattern of work hours chosen for an employee can represent a significant challenge to his or her biology, domestic situation and role in the community. Informed decisions should be made, incorporating a study of the demands of the work situation and the demographics of the workforce. Any changes in hours of work should be preceded by detailed investigation and consultation with the employees and followed by evaluation studies.



Workload

Workload and Brain Function

Knowledge about human needs, abilities and constraints provides guidelines for shaping psychosocial work conditions so as to reduce stress and improve occupational health (Frankenhaeuser 1989). Brain research and behavioural research have identified the conditions under which people perform well and the conditions under which performance deteriorates. When the total inflow of impressions from the outside world falls below a critical level and work demands are too low, people tend to become inattentive and bored and to lose their initiative. Under conditions of excessive stimulus flow and too high demands, people lose their ability to integrate messages, thought processes become fragmented and judgement is impaired. This inverted-U relationship between workload and brain function is a fundamental biological principle with wide applications in working life. Stated in terms of efficiency at different workloads, it means that the optimal level of mental functioning is located at the midpoint of a scale ranging from very low to very high work demands. Within this middle zone the degree of challenge is “just right”, and the human brain functions efficiently. The location of the optimal zone varies among different people, but the crucial point is that large groups spend their lives outside the optimal zone that would provide opportunities for them to develop their full potential. Their abilities are constantly either underutilized or overtaxed.
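
Since this passage describes a functional relationship, a minimal numerical sketch may help; the quadratic form, the 0-to-1 demand scale and the constants in the Python fragment below are illustrative assumptions, not measured quantities.

    def performance(demand, optimum=0.5):
        """Illustrative inverted-U curve: efficiency peaks at mid-level
        demands and falls away towards underload and overload.

        demand is scaled from 0 (very low) to 1 (very high); optimum marks
        the optimal zone, whose location varies among people.
        """
        return max(0.0, 1.0 - 4.0 * (demand - optimum) ** 2)

    for label, demand in [("underload", 0.1), ("optimal zone", 0.5), ("overload", 0.9)]:
        print(f"{label:12s} demand={demand:.1f} -> efficiency={performance(demand):.2f}")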

A distinction should be made between quantitative overload, which means too much work within a given time period, and qualitative underload, which means that tasks are too repetitive, lacking variety and challenge (Levi, Frankenhaeuser and Gardell 1986).

Research has identified criteria for “healthy work” (Frankenhaeuser and Johansson 1986; Karasek and Theorell 1990). These criteria emphasize that workers should be given the opportunity to: (a) influence and control their work; (b) understand their contribution in a wider context; (c) experience a sense of togetherness and belonging at their place of work; and (d) develop their own abilities and vocational skill by continuous learning.

Monitoring Bodily Responses at Work

People are challenged by different work demands whose nature and strength are appraised via the brain. The appraisal process involves a weighing, as it were, of the severity of the demands against one’s own coping abilities. Any situation which is perceived as a threat or challenge requiring compensatory effort is accompanied by the transmission of signals from the brain to the adrenal medulla, which responds with an output of the catecholamines epinephrine and norepinephrine. These stress hormones make us mentally alert and physically fit. In the event that the situation induces feelings of uncertainty and helplessness, the brain messages also travel to the adrenal cortex, which secretes cortisol, a hormone which plays an important part in the body’s immune defence (Frankenhaeuser 1986).

With the development of biochemical techniques that permit the determination of exceedingly small amounts of hormones in blood, urine and saliva, stress hormones have come to play an increasingly important role in research on working life. In the short term, a rise in stress hormones is often beneficial and seldom a threat to health. But in the longer term, the picture may include damaging effects (Henry and Stephens 1977; Steptoe 1981). Frequent or long-lasting elevations of stress-hormone levels in the course of daily life may result in structural changes in the blood vessels which, in turn, may lead to cardiovascular disease. In other words, consistently high levels of stress hormones should be regarded as warning signals, telling us that the person may be under excessive pressure.

Biomedical recording techniques permit the monitoring of bodily responses at the workplace without interfering with the worker’s activities. Using such ambulatory-monitoring techniques, one can find out what makes the blood pressure rise, the heart beat faster, the muscles tense up. These are important pieces of information which, together with stress-hormone assays, have helped in identifying both aversive and protective factors related to job content and work organization. Thus, when searching the work environment for harmful and protective factors, one can use the people themselves as “measuring rods”. This is one way in which the study of human stress and coping may contribute to intervention and prevention at the workplace (Frankenhaeuser et al. 1989; Frankenhaeuser 1991).

Personal Control as a “Buffer”

Data from both epidemiological and experimental studies support the notion that personal control and decision latitude are important “buffering” factors which help people to simultaneously work hard, enjoy their jobs and remain healthy (Karasek and Theorell 1990). The chance of exercising control may “buffer” stress in two ways: first, by increasing job satisfaction, thus reducing bodily stress responses, and secondly, by helping people develop an active, participatory work role. A job that allows the worker to use his or her skills to the full will increase self-esteem. Such jobs, while demanding and taxing, may help to develop competencies that aid in coping with heavy workloads.

The pattern of stress hormones varies with the interplay of positive versus negative emotional responses evoked by the situation. When demands are experienced as a positive and manageable challenge, the adrenaline output is typically high, whereas the cortisol-producing system is put to rest. When negative feelings and uncertainty dominate, both cortisol and adrenaline increase. This would imply that the total load on the body, the “cost of achievement”, will be lower during demanding, enjoyable work than during less demanding but tedious work. The fact that cortisol tends to be low in controllable situations could account for the positive health effects of personal control. Such a neuroendocrine mechanism could explain the epidemiological data obtained from national surveys in different countries which show that high job demands and work overload have adverse health consequences mainly when combined with low control over job-related decisions (Frankenhaeuser 1991; Karasek and Theorell 1990; Levi, Frankenhaeuser and Gardell 1986).

Total Workload of Women and Men

In order to assess the relative workloads associated with men’s and women’s different life situations, it is necessary to modify the concept of work so as to include the notion of total workload, that is, the combined load of demands related to paid and unpaid work. This includes all forms of productive activities defined as “all the things that people do that contribute to the goods and services that other people use and value” (Kahn 1991). Thus, a person’s total workload includes regular employment and overtime at work as well as housework, child care, care of elderly and sick relatives and work in voluntary organizations and unions. According to this definition, employed women have a higher workload than men at all ages and all occupational levels (Frankenhaeuser 1993a, 1993b and 1996; Kahn 1991).

The fact that the division of labour between spouses in the home has remained the same, while the employment situation of women has changed radically, has led to a heavy workload for women, with little opportunity for them to relax in the evenings (Frankenhaeuser et al. 1989). Until a better insight has been gained into the causal links between workload, stress and health, it will remain necessary to regard prolonged stress responses, displayed in particular by women at the managerial level, as warning signals of possible long-term health risks (Frankenhaeuser, Lundberg and Chesney 1991).



Person–Environment Fit

Person–environment (PE) fit theory offers a framework for assessing and predicting how characteristics of the employee and the work environment jointly determine worker well-being and, in the light of this knowledge, how a model for identifying points of preventive intervention may be elaborated. Several PE fit formulations have been proposed, the most widely known being those of Dawis and Lofquist (1984); French, Rodgers and Cobb (1974); Levi (1972); McGrath (1976); and Pervin (1967). The theory of French and colleagues, illustrated in figure 1, may be used to discuss the conceptual components of PE fit theory and their implications for research and application.

Figure 1. Schematic of French, Rodgers and Cobb's theory of person-environment (PE) fit

Poor PE fit can be viewed from the perspectives of the employee’s needs (needs–supplies fit) as well as the job–environment’s demands (demands–abilities fit). The term needs–supplies fit refers to the degree to which employee needs, such as the need to use skills and abilities, are met by the work environment’s supplies and opportunities to satisfy those needs. Demands–abilities fit refers to the degree to which the job’s demands are met by the employee’s skills and abilities. These two types of fit can overlap. For example, work overload may leave the employer’s demands unmet as well as threaten the employee’s need to satisfy others.

Conceptualizing Person (P) and Environment (E)

Characteristics of the person (P) include needs as well as abilities. Characteristics of the environment (E) include supplies and opportunities for meeting the employee’s needs as well as demands which are made on the employee’s abilities. In order to assess the degree to which P equals (or fits), exceeds, or is less than E, the theory requires that P and E be measured along commensurate dimensions. Ideally, P and E should be measured on equal interval scales with true zero points. For example, one could assess PE fit on workload for a data-entry operator in terms of both the number of data-entry keystrokes per minute demanded by the job (E) and the employee’s keystroke speed (P). As a less ideal alternative, investigators often use Likert-type scales. For example, one could assess how much the employee wants to control the work pace (P) and how much control is provided by the job’s technology (E) by using a rating scale on which a value of 1 corresponds to no control, or almost no control, and a value of 5 corresponds to complete control.
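
To make the commensurate-measurement idea concrete, the short Python sketch below computes signed fit scores for the two examples just given; all of the numbers, and the convention that the score is E minus P, are illustrative assumptions.

    # Demands-abilities fit on keystroke workload (interval scale, true zero):
    E_keystrokes = 220   # keystrokes per minute demanded by the job (E); assumed value
    P_keystrokes = 180   # employee's measured keystroke speed (P); assumed value
    demands_abilities_gap = E_keystrokes - P_keystrokes   # > 0: demands exceed abilities
    print(f"demands-abilities gap: {demands_abilities_gap} keystrokes/min")

    # Needs-supplies fit on control over the work pace (1-5 Likert-type scale):
    P_wanted_control = 4     # control the employee wants (P); assumed value
    E_supplied_control = 2   # control the job's technology supplies (E); assumed value
    needs_supplies_gap = E_supplied_control - P_wanted_control   # < 0: needs unmet
    print(f"needs-supplies gap: {needs_supplies_gap} scale points")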

Distinguishing Subjective from Objective Fit

Subjective fit (FS) refers to the employee’s perceptions of P and E, whereas objective fit (FO) refers to assessments that are, in theory, free of subjective bias and error. In practice, there is always measurement error, so that it is impossible to construct truly objective measures. Consequently, many researchers prefer to create a working distinction between subjective and objective fit, referring to measures of objective fit as ones which are relatively, rather than absolutely, immune to sources of bias and error. For example, one can assess objective PE fit on keystroke ability by examining the fit between a count of required keystrokes per minute in the actual workload assigned to the employee (EO) and the employee’s ability as assessed on an objective-type test of keystroke ability (PO). Subjective PE fit might be assessed by asking the employee to estimate his or her keystroke ability per minute (PS) and the number of keystrokes per minute demanded by the job (ES).

Given the challenges of objective measurement, most tests of PE fit theory have used only subjective measures of P and E (for an exception, see Chatman 1991). These measures have tapped a variety of dimensions including fit on responsibility for the work and well-being of other persons, job complexity, quantitative workload and role ambiguity.

Dynamic Properties of the PE Fit Model

Figure 1 depicts objective fit influencing subjective fit which, in turn, has direct effects on well-being. Well-being is broken down into responses called strains, which serve as risk factors for subsequent illness. These strains can involve emotional (e.g., depression, anxiety), physiological (e.g., serum cholesterol, blood pressure), cognitive (e.g., low self-evaluation, attributions of blame to self or others), as well as behavioural responses (e.g., aggression, changes in lifestyle, drug and alcohol use).

According to the model, levels of and changes in objective fit, whether due to planned intervention or otherwise, are not always perceived accurately by the employee, so that discrepancies arise between objective and subjective fit. Thus, employees can perceive good fit as well as poor fit when, objectively, such is not the case.

Inaccurate employee perceptions can arise from two sources. One source is the organization, which, unintentionally or by design (Schlenker 1980), may provide the employee with inadequate information regarding the environment and the employee. The other source is the employee. The employee might fail to access available information or might defensively distort objective information about what the job requires or about his or her abilities and needs — Taylor (1991) cites such an example.

French, Rodgers and Cobb (1974) use the concept of defences to refer to employee processes for distorting the components of subjective fit, PS and ES, without changing the commensurate components of objective fit, PO and EO. By extension, the organization can also engage in defensive processes—for example, cover-ups, denial or exaggeration—aimed at modifying employee perceptions of subjective fit without concomitantly modifying objective fit.

The concept of coping is, by contrast, reserved for responses and processes that aim to alter and, in particular, improve objective fit. The employee can attempt to cope by improving objective skills (PO) or by changing objective job demands and resources (EO) such as through a change of jobs or assigned responsibilities. By extension, the organization can also apply coping strategies to improve objective PE fit. For example, organizations can make changes in selection and promotion strategies, in training and in job design to alter EO and PO.

The distinctions between coping and defence on the one hand and objective and subjective fit on the other can lead to an array of practical and scientific questions regarding the consequences of using coping and defence and the methods for distinguishing between effects of coping and effects of defence on PE fit. By derivation from the theory, sound answers to such questions require sound measures of objective as well as subjective PE fit.

Statistical Models

PE fit can have non-linear relations with psychological strain. Figure 2 presents a U-shaped curve as an illustration. The lowest level of psychological strain on the curve occurs when employee and job characteristics fit each other (P = E). Strain increases as the employee’s abilities or needs fall short of the job’s demands or resources (P < E), and again as they exceed them (P > E). Caplan and colleagues (1980) report a U-shaped relation between PE fit on job complexity and symptoms of depression in a study of employees from 23 occupations.

Figure 2. Hypothetical U-shaped relation of person-environment fit to psychological strain
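
A minimal way to express this hypothesized U-shape is to let strain grow with the squared misfit between P and E; the quadratic form and its constants in the Python sketch below are illustrative assumptions consistent with figure 2, not a fitted curve.

    def strain(p, e, baseline=1.0, slope=0.5):
        """Hypothetical U-shaped strain curve: strain is lowest when P = E
        and rises as P falls short of E (P < E) or exceeds it (P > E).
        baseline and slope are arbitrary illustrative constants."""
        return baseline + slope * (p - e) ** 2

    for misfit in [-2, -1, 0, 1, 2]:
        print(f"P - E = {misfit:+d} -> strain = {strain(misfit, 0):.2f}")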

Efficacy of the Model

A variety of approaches to the measurement of PE fit demonstrate the model’s potential for predicting well-being and performance. For example, careful statistical modelling found that PE fit explained about 6% more variance in job satisfaction than was explained by measures of P or E alone (Edwards and Harrison 1993). In a series of seven studies of accountants measuring PE fit using a card-sort method, high performers had higher correlations between P and E (average r = 0.47) than low performers (average r = 0.26; Caldwell and O’Reilly 1990). P was assessed as the employee’s knowledge, skills and abilities (KSAs), and E was assessed as the commensurate KSAs required by the job. Poor PE fit between the accountant’s values and the firm’s also served to predict employee turnover (Chatman 1991).



Social Support: an Interactive Stress Model

The stress concept

Various definitions of stress have been formulated since the concept was first named and described by Hans Selye (Selye 1960). Almost invariably these definitions have failed to capture what is perceived as the essence of the concept by a major proportion of stress researchers.

The failure to reach a common and generally acceptable definition may have several explanations; one of them may be that the concept has become so widespread and has been used in so many different situations and settings and by so many researchers, professionals and lay persons that to agree on a common definition is no longer possible. Another explanation is that there really is no empirical basis for a single common definition. The concept may be so diverse that one single process simply does not explain the whole phenomenon. One thing is clear—in order to examine the health effects of stress, the concept needs to include more than one component. Selye’s definition was concerned with the physiological fight or flight reaction in response to a threat or a challenge from the environment. Thus his definition involved only the individual physiological response. In the 1960s a strong interest arose in so-called life events, that is, major stressful experiences that occur in an individual’s life. The work by Holmes and Rahe (1967) nicely demonstrated that an accumulation of life events was harmful to health. These effects were found mostly in retrospective studies. To confirm the findings prospectively proved to be more difficult (Rahe 1988).

In the 1970s another concept was introduced into the theoretical framework, that of the vulnerability or resistance of the individual who was exposed to stressful stimuli. Cassel (1976) hypothesized that host resistance was a crucial factor in the outcome of stress or the impact of stress on health. The fact that host resistance had not been taken into account in many studies might explain why so many inconsistent and contradictory results had been obtained on the health effect of stress. According to Cassel, two factors were essential in determining the degree of a person’s host resistance: his or her capacity for coping and his or her social supports.

Today’s definition has come to include considerably more than the physiological “Selye stress” reactions. Both social environmental effects, as represented by (for instance) life events, and the resistance or vulnerability of the individual exposed to the life events are included.

Figure 1. Components of stress in the stress-disease model of Kagan and Levi (1971)

In the stress-disease model proposed by Kagan and Levi (1971), several distinctions between different components are made (figure 1). These components are:

  • stressful factors or stressors in the environment—social or psychological stimuli that evoke certain harmful reactions
  • the individual psychobiological programme, predetermined both by genetic factors and early experiences and learning
  • individual physiological stress reactions (“Selye stress” reactions). A combination of these three factors may lead to
  • precursors which may eventually provoke the final outcome, namely 
  • manifest physical illness.


It is important to note that, contrary to Selye’s beliefs, several different physiological pathways have been identified that mediate the effects of stressors on physical health outcomes. These include not only the originally described sympatho-adreno-medullary reaction but also the action of the sympatho-adreno-cortical axis, which may be of equal importance, and the counterbalance provided by parasympathetic gastrointestinal neurohormonal regulation, which has been observed to dampen and buffer the harmful effects of stress. In order for a stressor to evoke such reactions, a harmful influence of the psychobiological programme is required—in other words, an individual propensity to react to stressors has to be present. This individual propensity is both genetically determined and based on early childhood experiences and learning.

If the physiological stress reactions are severe and long-standing enough, they may eventually lead to chronic states, or become precursors of illness. An example of such a precursor is hypertension, which is often stress-related and may lead to manifest somatic disease, such as stroke or heart disease.

Another important feature of the model is that the interaction effects of intervening variables are anticipated at each step, further increasing the complexity of the model. This complexity is illustrated by feedback loops from all stages and factors in the model to every other stage or factor. Thus the model is complex—but so is nature.

Our empirical knowledge about the accuracy of this model is still insufficient and unclear at this stage, but further insight will be gained by applying the interactive model to stress research. For example, our ability to predict disease may increase if the attempt is made to apply the model.

Empirical evidence on host resistance

In our group of investigators at the Karolinska Institute in Stockholm, recent research has been focused on factors that promote host resistance. We have hypothesized that one such powerful factor is the health-promoting effect of well-functioning social networks and social support.

Our first endeavour to investigate the effects of social networks on health was focused on the entire Swedish population at a “macroscopic” level. In cooperation with the Central Swedish Bureau of Statistics we were able to evaluate the effects of self-assessed social network interactions on health outcome, in this case on survival (Orth-Gomér and Johnson 1987).

Representing a random sample of the adult Swedish population, 17,433 men and women responded to a questionnaire about their social ties and social networks. The questionnaire was included in two of the annual Surveys of Living Conditions in Sweden, which were designed to assess and measure the welfare of the nation in material as well as in social and psychological terms. Based on the questionnaire, we created a comprehensive social network interaction index which included the number of members in the network and the frequency of contacts with each member. Seven sources of contacts were identified by means of factor analysis: parents, siblings, nuclear family (spouse and children), close relatives, co-workers, neighbours, distant relatives and friends. The contacts with each source were calculated and added up to a total index score, which ranged from zero to 106.
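
A Python sketch of how such an index might be assembled, and then split into the tertiles used in the mortality analysis below, is given here; the per-source scoring, the grouping of the sources and the sample data are invented for illustration (the published index simply summed contacts across sources to a 0–106 score).

    import random

    # Contact sources identified by factor analysis (grouping assumed).
    SOURCES = ["parents", "siblings", "nuclear family", "close relatives",
               "co-workers", "neighbours", "distant relatives and friends"]

    def interaction_index(members, frequency):
        """Combine network size and contact frequency per source and sum.

        members: source -> number of people; frequency: source -> contact
        frequency on an assumed 0-3 scale. This scoring is a simplified
        stand-in for the published 0-106 index.
        """
        return sum(members[s] * frequency[s] for s in SOURCES)

    def tertiles(scores):
        """Label each score 1 (lowest third), 2 (middle) or 3 (upper)."""
        ranked = sorted(scores)
        n = len(ranked)
        low_cut, high_cut = ranked[n // 3], ranked[(2 * n) // 3]
        return [1 if s <= low_cut else 2 if s <= high_cut else 3 for s in scores]

    members = {s: 2 for s in SOURCES}     # two contacts per source; assumed
    frequency = {s: 1 for s in SOURCES}   # infrequent contact; assumed
    print("example index score:", interaction_index(members, frequency))

    random.seed(1)
    sample_scores = [random.randint(0, 106) for _ in range(12)]
    print(list(zip(sample_scores, tertiles(sample_scores))))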

By linking the Surveys of Living Conditions with the national death register, we were able to investigate the impact of the social network interaction index on mortality. Dividing the study population into tertiles according to their index score, we found that those men and women who were in the lower tertile had an invariably higher mortality risk than those who were in the middle and upper tertiles of the index score.

The risk of dying for those in the lower tertile was four to five times higher than in the other tertiles, although many other factors might explain this association, such as the fact that increasing age is associated with a higher risk of dying. Also, as one ages, the number of social contacts decreases. If one is sick and disabled, mortality risk increases and it is likely that the extent of the social network decreases. Morbidity and mortality are also higher in lower social classes, whose social networks are smaller and social contacts less abundant. Thus, controlling for these and other mortality risk factors is necessary in any analysis. Even when these factors were taken into account, a statistically significant 40% increase in risk was found to be associated with a sparse social network among those in the lowest third of the population. It is interesting to note that there was no additional health-promoting effect of being in the highest as compared to the middle tertile. Possibly, a great number of contacts can represent a strain on the individual as well as protection against harmful health effects.

Thus, without even knowing anything further about the stressors in the lives of these men and women we were able to confirm a health-promoting effect of social networks.

Social networks alone cannot explain the health effects observed. It is probable that the way in which a social network functions and the basis of support the network members provide are more important than the actual number of people included in the network. In addition, an interactive effect of different stressors is possible. For example, the effects of work-related stress have been found to worsen when there is also a lack of social support and social interaction at work (Karasek and Theorell 1990).

In order to explore the issues of interaction, research studies have been carried out using various measures for assessing both qualitative and quantitative aspects of social support. Several interesting results were obtained which are illustrative of the health effects that have been associated with social support. For example, in one study of heart disease (myocardial infarction and sudden cardiac death) in a population of 776 fifty-year-old men born in Gothenburg, randomly selected from the general population and found healthy on initial examination, smoking and lack of social support were found to be the strongest predictors of disease (Orth-Gomér, Rosengren and Wilhelmsen 1993). Other risk factors included elevated blood pressure, lipids, fibrinogen and a sedentary lifestyle.

In the same study it was shown that only in those men who lacked support, in particular emotional support from a spouse, close relatives or friends, were the effects of stressful life events harmful. Men who both lacked support and had experienced several serious life events had more than five times the mortality of men who enjoyed close and emotional support (Rosengren et al. 1993).

Another example of interactive effects was offered in a study of cardiac patients who were examined for psychosocial factors such as social integration and social isolation, as well as myocardial indicators of an unfavourable prognosis and then followed for a ten-year period. Personality and behaviour type, in particular the Type A behaviour pattern, was also assessed.

The behaviour type in itself had no impact on prognosis in these patients. Of Type A men, 24% died as compared to 22% of Type B men. But when considering the interactive effects with social isolation another picture emerged.

Using a diary of activities during a regular week, men participating in the study were asked to describe anything they would do in the evenings and weekends of a normal week. Activities were then divided into those that involved physical exercise, those that were mainly concerned with relaxation and performed at home, and those that were performed for recreation together with others. Of these activity types, lack of social recreational activity was the strongest predictor of mortality. Men who never engaged in such activities—called socially isolated in the study—had about three times higher mortality risk than those who were socially active. In addition, Type A men who were socially isolated had an even higher mortality risk than those in any of the other categories (Orth-Gomér, Undén and Edwards 1988).

These studies demonstrate the need to consider several aspects of the psychosocial environment and individual factors, as well as, of course, the physiological stress mechanisms. They also demonstrate that social support is one important factor in stress-related health outcomes.



The Demand/Control Model

Most previous stress theories were developed to describe reactions to “inevitable” acute stress in situations threatening biological survival (Cannon 1935; Selye 1936). However, the Demand/Control model was developed for work environments where “stressors” are chronic, not initially life threatening, and are the product of sophisticated human organizational decision making. Here, the controllability of the stressor is very important, and becomes more important as we develop ever more complex and integrated social organizations, with ever more complex limitations on individual behaviour. The Demand/Control model (Karasek 1976; Karasek 1979; Karasek and Theorell 1990), which is discussed below, is based on psychosocial characteristics of work: the psychological demands of work and a combined measure of task control and skill use (decision latitude). The model predicts, first, stress-related illness risk and, secondly, active/passive behavioural correlates of jobs. It has mainly been used in epidemiological studies of chronic disease, such as coronary heart disease.

Pedagogically, it is a simple model which can help to demonstrate clearly several important issues relevant for social policy discussions of occupational health and safety:

  1. that the social organizational characteristics of work, and not just physical hazards, lead to illness and injury
  2. that stress-related consequences are related to the social organization of work activity and not just its demands
  3. that work’s social activity affects stress-related risks, not just person-based characteristics
  4. that the possibility of both “positive stress” and “negative stress” can be explained in terms of combinations of demands and control
  5. that a simple model—with basic face validity—can provide a starting point for discussions of personal stress response for shop-floor workers, clerical staff and other lay people for whom this is a sensitive topic.


Beyond the health consequences of work, the model also captures the perspectives of the work’s organizers, who are concerned with productivity results. The psychological demand dimension relates to “how hard workers work”; the decision latitude dimension reflects work organization issues of who makes decisions and who does what tasks. The model’s active learning hypothesis describes the motivation processes of high performance work. The economic logic of extreme labour specialization, the past conventional wisdom about productive job design, is contradicted by the adverse health consequences predicted by the Demand/Control model. The model implies alternative, health-promoting perspectives on work organization which emphasize broad skills and participation for workers, and which may also bring economic advantages in innovative manufacturing and in service industries because of the increased possibilities for learning and participation.

Hypotheses of the Demand/Control Model

Psychosocial functioning at the workplace, based on psychological demands and decision latitude

Job strain hypothesis

The first hypothesis is that the most adverse reactions of psychological strain (fatigue, anxiety, depression and physical illness) occur when the psychological demands of the job are high and the worker’s decision latitude in the task is low (figure 1, lower right cell). These undesirable stress-like reactions, which result when arousal is combined with restricted opportunities for action or coping with the stressor, are referred to as psychological strain (the term stress is not used at this point, as it is defined differently by many groups).

Figure 1. Psychological demand/decision latitude model
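
To make the four cells of figure 1 concrete, the Python sketch below classifies jobs by demand and decision-latitude scores; the 0-100 scales, the cut-points and the example scores are illustrative assumptions rather than any published scoring, and the example occupations follow the quadrant descriptions given later in this article.

    def job_quadrant(demands, latitude, demand_cut=50, latitude_cut=50):
        """Classify a job in Demand/Control space. Scores above the
        cut-point count as "high" on an assumed 0-100 scale."""
        high_demands = demands > demand_cut
        high_latitude = latitude > latitude_cut
        if high_demands and not high_latitude:
            return "high strain"   # job strain hypothesis: most adverse reactions
        if high_demands and high_latitude:
            return "active"        # learning and growth predicted
        if not high_demands and high_latitude:
            return "low strain"
        return "passive"           # risk of gradual loss of skills

    examples = {"assembler": (80, 20), "manager": (80, 80),
                "billing clerk": (30, 25), "forester": (35, 75)}
    for job, (demands, latitude) in examples.items():
        print(f"{job:14s} -> {job_quadrant(demands, latitude)}")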

For example, the assembly-line worker has almost every behaviour rigidly constrained. In a situation of increased demands (“speed-up”), more than just the constructive response of arousal, the often helpless, long-lasting, and negatively experienced response of residual psychological strain occurs. When the lunch-time rush occurs (Whyte 1948), it is the restaurant worker who does not know how to “control” her customers’ behaviour (“get the jump on the customer”) who experiences the greatest strain on the job. Kerckhoff and Back (1968) describe garment workers under heavy deadline pressure and the subsequent threat of layoff. They conclude that when the actions normally needed to cope with job pressures cannot be taken, the most severe behavioural symptoms of strain occur (fainting, hysteria, social contagion). It is not only the freedom of action as to how to accomplish the formal work task that relieves strain; it may also be the freedom to engage in the informal “rituals”, the coffee break, smoke break or fidgeting, which serve as supplementary “tension release” mechanisms during the work day (Csikszentmihalyi 1975). These are often social activities with other workers—precisely those activities eliminated as “wasted motions” and “soldiering” by Frederick Taylor’s methods (1911 (1967)). This implies a needed expansion of the model to include social relations and social support.

In the model, decision latitude refers to the worker’s ability to control his or her own activities and skill usage, not to control others. Decision latitude scales have two components: task authority—a socially predetermined control over detailed aspects of task performance (also called autonomy); and skill discretion—control over use of skills by the individual, also socially determined at work (and often called variety or “substantive complexity” (Hackman and Lawler 1971; Kohn and Schooler 1973)). In modern organizational hierarchies, the highest levels of knowledge legitimate the exercise of the highest levels of authority, and workers with limited-breadth, specialized tasks are coordinated by managers with higher authority levels. Skill discretion and authority over decisions are so closely related theoretically and empirically that they are often combined.

Examples of work’s psychological demands—“how hard you work”—include the presence of deadlines, the mental arousal or stimulation necessary to accomplish the task, or coordination burdens. The physical demands of work are not included (although psychological arousal comes with physical exertion). Other components of psychological job demands are stressors arising from personal conflicts. Fear of losing a job or of skill obsolescence may obviously be a contributor. Overall, Buck (1972) notes that “task requirements” (workload) are the central component of psychological job demands for most workers, in spite of the above diversity. While simple measures of working hours, in moderate ranges, do not seem to strongly predict illness, one such measure, shiftwork—especially rotating shiftwork—is associated with substantial social problems as well as increased illness.

While some level of “demands” is necessary to achieve new learning and effective performance on the job (i.e., interest), too high a level is obviously harmful. This has implied the inverted “U-shaped” curve of an “optimal” level of demands in the well-known General Adaptation Syndrome of Selye (1936) and the related, classic theories by Yerkes and Dodson (1908) and Wundt (1922) on stress and performance.* However, our findings show that most work situations have an overload, rather than an underload, problem.

* Although Selye’s “U-shaped” association between demands and stress purported to be unidimensional along a stressor axis, it probably also included a second dimension of constraint in his animal experiments - and thus was really a composite model of stress-related physiological deterioration - potentially similar to the high demand, low control situation, as other researchers have found (Weiss 1971).

Active learning hypothesis

When control on the job is high and psychological demands are also high, but not overwhelming (figure 1, upper right cell), learning and growth are the predicted behavioural outcomes (i.e., the active learning hypothesis). Such a job is termed the “active job”, since research in both the Swedish and American populations has shown this to be the most active group outside of work in leisure and political activity, in spite of heavy work demands (Karasek and Theorell 1990). Only average psychological strain is predicted for the “active job”, because much of the energy aroused by the job’s many stressors (“challenges”) is translated into direct action—effective problem solving—with little residual strain to cause disturbance. This hypothesis parallels White’s “concept of competence” (1959): the psychological state of individuals in challenging circumstances is enhanced by increasing “demands”, an environment-based theory of motivation. The model also predicts that the growth and learning stimuli of these settings, when they occur in a job context, are conducive to high productivity.

In the Demand/Control model, learning occurs in situations which require both individual psychological energy expenditure (demands or challenges) and the exercise of decision-making capability. As the individual with decision-making latitude makes a “choice” as to how best to cope with a new stressor, that new behaviour response, if effective, will be incorporated into the individual’s repertoire of coping strategies (i.e., it will be “learned”). The potential activity level in the future will be raised because of the expanded range of solutions to environmental challenges, yielding an increase in motivation. Opportunities for constructive reinforcement of behaviour patterns are optimal when the challenges in the situation are matched by the individual’s control over alternatives or skill in dealing with those challenges (Csikszentmihalyi 1975). The situation will be neither unchallengingly simple (thus, unimportant) nor so demanding that appropriate actions cannot be taken because of a high anxiety level (the psychological “strain” situation).

The Demand/Control model predicts that situations of low demand and low control (figure 1, opposite end of diagonal B) cause a very “unmotivating” job setting which leads to “negative learning” or gradual loss of previously acquired skills. Evidence shows that disengagement from leisure and political activity outside the job appears to increase over time in such jobs (Karasek and Theorell 1990). These “passive” jobs may be the result of “learned helplessness”, discussed by Seligman (1975), arising from a sequence of job situations which reject a worker’s initiatives.

The fact that environmental demands can thus be conceptualized in both positive and negative terms is congruent with the common understanding that there is both “good” and “bad” stress. Evidence that at least two separable mechanisms must be used to describe “psychological functioning” on the job is one of the primary validations of the multidimensional “Demand/Control” model structure. The “active”-“passive” diagonal B implies that learning mechanisms are independent of (i.e., orthogonal to) psychological strain mechanisms. This yields a parsimonious model with two broad dimensions of work activity and two major psychological mechanisms (the primary reason for calling it an “interaction” model (Southwood 1978)). (Multiplicative interaction terms for the axes are too restrictive a test for most sample sizes.)

Clarifying Demand and Control definitions

The Demand/Control model has sometimes been assumed to be congruent with a model of “demands and resources”, allowing a simple fit with currently common “cost/benefit” thinking—where the positive “benefits” of resources are subtracted from the negative “costs” of demands. “Resources” allows inclusion of many factors outside the worker’s immediate task experience of obvious importance. However, the logic of the Demand/Control model hypotheses cannot be collapsed into a unidimensional form. The distinction between decision latitude and psychological stressors must be retained because the model predicts both learning and job strain—from two different combinations of demands and control which are not simply mathematically additive. Job “control” is not merely a negative stressor, and “demands and challenges” associated with lack of control are not associated with increased learning. Having decision latitude over the work process will reduce a worker’s stress but increase his or her learning, while psychological demands would increase both learning and stress. This distinction between demands and control allows understanding of the otherwise unclear predictions of the effects of: (a) “responsibility”, which actually combines high demands and high decision latitude; (b) “qualitative job demands”, which also measures the possibility of decision making about what skills to employ; and (c) “piece work”, where the decision latitude to work faster almost directly brings with it increased demands.

Expanding the Model

Social support hypotheses

The Demand/Control model has been usefully expanded by Johnson by the addition of social support as a third dimension (Johnson 1986; Kristensen 1995). The primary hypothesis, that jobs which are high in demands, low in control and also low in social support at work (high “iso-strain”) carry the highest risks of illness, has been empirically successful in a number of chronic disease studies. The addition clearly acknowledges the need of any theory of job stress to assess social relations at the workplace (Karasek and Theorell 1990; Johnson and Hall 1988). Social support “buffering” of psychological strain may depend on the degree of social and emotional integration and trust between co-workers, supervisors and others—“socio-emotional support” (Israel and Antonucci 1987). The addition of social support also makes the Demand/Control perspective more useful in job redesign. Changes in social relations between workers (i.e., autonomous work groups) and changes in decision latitude are almost inseparable in job redesign processes, particularly “participatory” processes (House 1981).
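
Continuing the earlier quadrant sketch, Johnson's third dimension can be added as one more binary split; the cut-points and the 0-100 scales remain illustrative assumptions.

    def iso_strain(demands, latitude, support,
                   demand_cut=50, latitude_cut=50, support_cut=50):
        """Flag the hypothesized highest-risk combination: high demands,
        low control and low workplace social support ("iso-strain")."""
        return (demands > demand_cut and latitude <= latitude_cut
                and support <= support_cut)

    print(iso_strain(80, 20, 30))   # True: high "iso-strain" job
    print(iso_strain(80, 80, 70))   # False: an active job with support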

However, a full theoretical treatment of the impact of social relations on both job stress and behaviour is a very complex problem which needs further work. The associations between measures of co-worker and supervisor interaction and chronic disease are less consistent than those for decision latitude, and social relations can strongly increase, as well as decrease, the nervous system arousal that may be the risk-inducing link between social situation and illness. The dimensions of work experience that reduce job stress would not necessarily be the same dimensions that are relevant for active behaviour in the Demand/Control model. Facilitating collective forms of active behaviour would likely focus on the distribution of and ability to use competences, communication structure and skills, coordination possibilities, “emotional intelligence skills” (Goleman 1995)—as well as the trust important for social support.

Occupation and psychosocial job characteristics

Job characteristics can be displayed in a four-quadrant diagram using the average job characteristics of occupations in the US Census occupation codes (Karasek and Theorell 1990). The “active” job quadrant, with high demand and high control, has high-prestige occupations: lawyers, judges, physicians, professors, engineers, nurses and managers of all kinds. The “passive” job quadrant, with low demands and low control, has clerical workers such as stock and billing clerks, transport operatives and low-status service personnel such as janitors. The “high strain” quadrant, with high demands and low control, has machine-paced operatives such as assemblers, cutting operatives, inspectors and freight handlers, as well as other low-status service operatives such as waiters or cooks. Female-dominated occupations are frequent here (garment stitchers, waitresses, telephone operators and other office automation workers). “Low strain” self-paced occupations, such as repairmen, sales clerks, foresters, linemen and natural scientists, often involve significant training and self-pacing.

Thus, executives and professionals have a moderate level of stress, and not the highest level of stress, as popular belief often holds. While “managerial stress” certainly exists because of the high psychological demands that come with these jobs, it appears that the frequent occasions for decision-making and deciding how to do the job are a significant stress moderator. Of course, at the highest status levels, executive jobs consist of decision-making as the primary psychological demand, and then the Demand/Control model fails. However, the implication here is that executives could reduce their stress if they made fewer decisions, and lower status workers would be better off with more decision opportunities, so that all groups could be better off with a more equal share of decision power.

Men are more likely than women to have high control over their work process at the task level, with a difference as great as wage differentials (Karasek and Theorell 1990). Another major gender difference is the negative correlation between decision latitude and demands for women: women with low control also have higher job demands. This means that women are several times as likely as men to hold high strain jobs in the full working population. By contrast, men’s high demand jobs are generally accompanied by somewhat higher decision latitude (“authority commensurate with responsibility”).

Theoretical linkages between the Demand/Control model and other theoretical perspectives

The Demand/Control model arises out of the theoretical integration of several disparate scientific directions. Thus, it falls outside the boundaries of a number of established scientific traditions from which it has gained contributions or with which it is often contrasted: mental health epidemiology and sociology, stress physiology, cognitive psychology and personality psychology. Some of these previous stress theories have focused on a person-based causal explanation, while the Demand/Control model predicts a stress response to social and psychological environments. However, the Demand/Control model has attempted to provide a set of interfacing hypotheses with person-based perspectives. In addition, linkages to macro social organizational and political economic issues, such as social class, have also been proposed. These theoretical integrations and contrasts with other theories are discussed below at several levels. The linkages below provide the background for an extended set of scientific hypotheses.

Contrast between the Demand/Control model and the cognitive psychological model

One area of stress theory grows out of the currently popular field of cognitive psychology. The central tenet of the cognitive model of human psychological functioning is that it is the processes of perception and interpretation of the external world that determine the development of psychological states in the individual. Mental workload is defined as the total information load that the worker is required to perceive and interpret while performing job tasks (Sanders and McCormick 1993; Wickens 1984). “Overload” and stress occur when this human information-processing load is too large for the individual’s information-processing capabilities. This model has enjoyed great currency because it models human mental functions in roughly the same conceptual terms as modern computers, and thus fits an engineering conception of work design. This model makes us aware of the importance of information overloads, communication difficulties and memory problems. It does well in the design of some aspects of human/computer interfaces and of human monitoring of complex processes.

However, the cognitive psychological perspective tends to downplay the importance of “objective” workplace stressors and to emphasize instead the importance of the stressed individuals’ interpretation of the situation. In the cognitive-based “coping approach”, Lazarus and Folkman (1986) advocate that the individual “cognitively reinterpret” the situation in a way that makes it appear less threatening, thus reducing experienced stress. However, this approach could be harmful to workers in situations where the environmental stressors are “objectively” real and must be modified. Another variant of the cognitive approach, more consistent with worker empowerment, is Bandura’s (1977) “self-efficacy/motivation” theory, which emphasizes the increases in self-esteem which occur when individuals: (a) define a goal for a change process; (b) receive feedback on positive results from the environment; and (c) successfully achieve incremental progress.

Several omissions in the cognitive model are problematic for an occupational health perspective on stress, and they conflict with the Demand/Control model:

  • There is no role for the social and mental “demands” of work that do not translate into information loads (i.e., no role for social organizational demands, conflicts and many non-intellectual time deadlines).
  • The cognitive model predicts that situations which require taking a lot of decisions are stressful because they can overload the individual’s information-processing capacity. This directly contradicts the Demand/Control model, which predicts lower strain in demanding situations that allow freedom of decision making. The majority of epidemiological evidence from field studies supports the Demand/Control model, although laboratory tests can also generate decision-based cognitive overload effects.
  • The cognitive model also omits physiological drives and primitive emotions, which often dominate cognitive response in challenging situations. There is little discussion of how either negative emotions or learning-based behaviour (except for Bandura, above) arises in common adult social situations.

 

Although overlooked in the cognitive model, emotional response is central to the notion of “stress”, since the initial stress problem is often what leads to unpleasant emotional states such as anxiety, fear and depression. “Drives” and emotions are most centrally affected by the limbic regions of the brain, which are different from and more primitive than the cerebral cortex, where most of the processes described by cognitive psychology occur. Possibly, the failure to develop an integrated perspective on psychological functioning reflects the difficulty of integrating different research specializations focusing on these two different neurological systems in the brain. Recently, however, evidence has begun to accumulate about the joint effects of emotion and cognition. The conclusion seems to be that emotion is an underlying determinant of the strength of behaviour patterns, memory and cognition (Damasio 1994; Goleman 1995).

Integrating Sociological and Emotional Stress Perspectives

Development of the Demand/Control model

The goal of the Demand/Control model has been to integrate the understanding of the social situation with evidence of emotional response, psychosomatic illness symptoms and active behaviour development in major spheres of adult life activity, particularly in the highly socially structured work situation. However, when the model was being developed, one likely platform for this work, sociological research exploring illness in large population studies, often omitted the detailed social or personal response data of stress research; much integrating work was therefore needed to develop the model.

The first Demand/Control integrating idea, linking social situation and emotional response, involved stress symptoms and joined two relatively unidimensional sociological and social psychological research traditions. First, the life stress/illness tradition (Holmes and Rahe 1967; Dohrenwend and Dohrenwend 1974) predicted illness on the basis of social and psychological demands alone, without mention of control over stressors. Second, the importance of control at the workplace had been clearly recognized in the job satisfaction literature (Kornhauser 1965): task autonomy and skill variety were used to predict job satisfaction, absenteeism or productivity, with limited additions reflecting the workers’ social relationship to the job, but with little mention of job workloads. Integrating studies helped bridge the gap in the area of illness and mental strain. Sundbom (1971) observed symptoms of psychological strain in “mentally heavy work”, which was actually measured by questions relating both to heavy mental pressures and to monotonous work (presumably also representing restricted control). The combined insight of these studies and research traditions was that a two-dimensional model was needed to predict illness: the level of psychological demands determined whether low control would lead to psychological strain or to passive withdrawal, two significantly different types of problem.

The second Demand/Control integration predicted behaviour patterns related to work experience. Behavioural outcomes of work activity also appeared to be affected by the same two broad job characteristics, but in a different combination. Kohn and Schooler (1973) had observed that active orientations to the job were the consequence of high skill and autonomy levels combined with psychologically demanding work. Social class measures were important correlates here. Meissner (1971) had also found that leisure behaviour was positively associated with opportunities both to take decisions on the job and to perform mentally challenging work. The combined insight of these studies was that “challenge” or mental arousal is necessary for effective learning on the one hand and, on the other, can contribute to psychological strain. “Control” was the crucial moderating variable determining whether environmental demands lead to “positive” learning consequences or to “negative” strain consequences.

The combination of these two integrating hypotheses, predicting both health and behavioural outcomes, is the basis of the Demand/Control model. “Demand” levels are the contingent factor determining whether low control leads to passivity or to psychological strain; “control” levels are the contingent factor determining whether demands lead to active learning or to psychological strain (Karasek 1976; 1979). The model was then tested on a representative national sample of Swedes (Karasek 1976) to predict both illness symptoms and the leisure and political behavioural correlates of psychosocial working conditions. The hypotheses were confirmed in both areas, although many confounding factors obviously share in these results. Shortly after these empirical confirmations, two other conceptual formulations consistent with the Demand/Control model appeared, confirming the robustness of the general hypotheses. Seligman (1976) observed depression and learned helplessness under conditions of intense demand with restricted control. Simultaneously, Csikszentmihalyi (1975) found that an “active experience” (“flow”) resulted from situations involving both psychological challenges and high levels of competence. The integrated model was also able to resolve some paradoxes in job satisfaction and mental strain research (Karasek 1979): for example, that qualitative workloads were often negatively associated with strain (because they also reflected the individual’s control over his or her use of skills). The most extensive acceptance of the model by other researchers came after 1979, with the expansion of empirical prediction to coronary heart disease, with the assistance of colleague Tores Theorell, a physician with a significant background in cardiovascular epidemiology.
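
To make the two contingent predictions concrete, consider a small illustrative sketch (not any published scoring algorithm). Following the common convention in empirical Demand/Control studies of splitting scales at the sample median, the Python fragment below assigns hypothetical jobs to the four quadrants implied by the model: high strain, active, passive and low strain.

    from statistics import median

    def classify_jobs(jobs):
        """jobs: list of (psychological_demands, decision_latitude) scores."""
        demand_cut = median(d for d, c in jobs)
        control_cut = median(c for d, c in jobs)
        labels = []
        for demands, control in jobs:
            if demands > demand_cut and control <= control_cut:
                labels.append("high strain")   # strain hypothesis: illness risk
            elif demands > demand_cut:
                labels.append("active")        # learning hypothesis: growth
            elif control <= control_cut:
                labels.append("passive")       # withdrawal, skill atrophy
            else:
                labels.append("low strain")
        return labels

    # Hypothetical scores on 0-100 scales:
    print(classify_jobs([(80, 20), (80, 80), (20, 20), (20, 80)]))
    # -> ['high strain', 'active', 'passive', 'low strain']

The quadrant labels, not the cut-points, carry the theory: the same demand level is predicted to produce strain under low control but learning under high control.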

A second Demand/Control model integration—physiological response

Additional research has allowed a second level of integration, linking the Demand/Control model to physiological response. The main developments in physiological research had identified two patterns of an organism’s adaptation to its environment. Cannon’s (1914) fight-flight response is most associated with stimulation of the adrenal medulla and with adrenaline secretion. This pattern, occurring in conjunction with sympathetic arousal of the cardiovascular system, is clearly an active and energetic response mode in which the human body can use maximum metabolic energy to support the mental and physical exertion necessary to escape major threats to its survival. In the second physiological response pattern, the adrenocortical response is a response to defeat or withdrawal in a situation with little possibility of victory. Selye’s (1936) research on stress dealt with the adrenocortical response of animals in a stressed but passive condition (i.e., his animal subjects were restrained while they were stressed, rather than placed in a fight-flight situation). Henry and Stephens (1977) describe this behaviour pattern as defeat or loss of social attachments, leading to withdrawal and submissiveness in social interactions.

* A major stimulus for the development of the strain hypothesis of the Demand/Control model in 1974 was Dement’s (1969) observation that the vital relaxation associated with REM dreaming was inhibited if sleep-deprived cats were “constrained” on a treadmill (perhaps like an assembly line) after periods of extreme psychological stressor exposure. The combined action of environmental stressors and low environmental control was essential in producing these effects. The negative impacts, in terms of mental derangement, were catastrophic and led to an inability to coordinate the most basic physiological processes.

In the early 1980s, Frankenhaeuser’s (1986) research demonstrated the congruence of these two patterns of physiological response with the main hypotheses of the Demand/Control model, allowing links to be made among physiological response, the social situation and emotional response patterns. In high-strain situations, secretion of both cortisol from the adrenal cortex and adrenaline from the adrenal medulla is elevated, whereas in a situation where the subject has a controllable and predictable stressor, only adrenaline secretion is elevated (Frankenhaeuser, Lundberg and Forsman 1980). This demonstrated a significant differentiation of psychoendocrine response associated with different environmental situations. Frankenhaeuser used a two-dimensional model with the same structure as the Demand/Control model, but with dimensions labelled in terms of personal emotional response: “effort” describes adrenal-medullary stimulating activity (demands in the Demand/Control model) and “distress” describes adrenocortical stimulating activity (lack of decision latitude in the Demand/Control model). Frankenhaeuser’s emotional response categories illuminate a clearer link between emotion and physiological response, but in this form the model fails to illuminate the association between work sociology and physiological response, which has been another strength of the Demand/Control model.

Integrating person-based stress theory: The dynamic version of the Demand/Control model

One of the challenges behind the development of the Demand/Control model has been to develop an alternative to the socially conservative explanation that the worker’s perceptions or response orientations are primarily responsible for stress, as some person-based stress theories claim. For example, it is hard to accept the claims, advanced by personality-based stress models, that the majority of stress reactions develop because common individual personality types habitually misinterpret real-world stresses or are oversensitive to them, and that these personality types can be identified on the basis of simple tests. Indeed, evidence for such personality effects has been mixed at best, even with the most common measures (although a stress-denial personality, alexithymia, has been identified (Henry and Stephens 1977)). The Type A behaviour pattern, for example, was originally interpreted as the individual’s proclivity to select stressful activities, but research in this area has now shifted to the “anger-prone” personality (Williams 1987). Of course, the anger response could have a significant environment-response component. A more generalized version of the personality approach is found in the “person-environment fit” model (Harrison 1978), which postulates that a good match between the person and the environment is what reduces stress. Here too it has been difficult to specify the particular personality characteristics to be measured. Nevertheless, personal response/personality-based approaches addressed two obvious facts: (a) person-based perceptions are an important part of the process by which environments affect individuals; and (b) there are long-term differences in personal responses to environments. Thus, a time-dynamic version of the Demand/Control model, integrating environment- and person-based factors, was developed.

The dynamic version of the Demand/Control model (figure 2) integrates environmental effects with person-based phenomena such as self-esteem development and long-term exhaustion. It integrates person-based and environmental factors by building two combined hypotheses on the original strain and learning mechanisms: (a) stress inhibits learning; and (b) learning, in the long term, can inhibit stress. The first hypothesis is that high strain levels may inhibit the normal capacity to accept a challenge, and thus inhibit new learning. These high strain levels may be the result of long-lasting psychological strain accumulated over time, and are reflected in person-based measures (figure 2, diagonal arrow B). The second hypothesis is that new learning may lead to feelings of mastery or confidence, a person-based measure. These feelings of mastery, in turn, can lead to reduced perception of events as stressful and to increased coping success (figure 2, diagonal arrow A). Thus, environmental factors, over the long term, partly determine personality, and later environmental effects are moderated by these previously developed personality orientations. This broad model could incorporate the following, more specific measures of personal response: feelings of mastery, denial, alexithymia, trait anxiety, trait anger, vital exhaustion, burnout, cumulative life-stressor implications and possibly Type A behaviour components.

Figure 2. Dynamic associations linking environmental strain and learning to evolution of personality

The dynamic model yields the possibility of two long-term dynamic “spirals” of behaviour. The positive behavioural dynamic begins with the active job setting, the increased “feeling of mastery” and the increased ability to cope with inevitable job stressors. These, in turn, reduce accumulated anxiety and thus increase the capacity to accept still more learning challenges, yielding still further positive personality change and improved well-being. The undesirable behavioural dynamic begins with the high-strain job, the high accumulated residual strain and the restricted capacity to accept learning challenges. These, in turn, lead to diminished self-esteem and increased stress perceptions, yielding still further negative personality change and diminished well-being. Evidence for the submechanisms is discussed in Karasek and Theorell (1990), although the complete model has not been tested. Two promising research directions which could easily integrate with Demand/Control research are “vital exhaustion” research on changing responses to life demands (Appels 1990) and Bandura’s (1977) “self-efficacy” methods, which integrate skill development and self-esteem development.
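
As a purely illustrative exercise, the feedback structure of these two spirals can be written as a toy discrete-time simulation. Everything in the following sketch is hypothetical: the equations, coefficients and variable names are invented for exposition and are not drawn from Karasek and Theorell (1990); the sketch only reproduces the two submechanisms, accumulated strain inhibiting learning (arrow B) and accumulated mastery buffering strain (arrow A).

    def simulate(demands, control, steps=50):
        """Toy rendering of the dynamic Demand/Control spirals (hypothetical)."""
        mastery, strain = 0.5, 0.5                 # arbitrary starting state
        for _ in range(steps):
            # Learning requires challenge plus control; residual strain inhibits it.
            learning = max(0.0, demands * control - 0.5 * strain)
            # Stress arises from demands without control; mastery buffers it.
            stress = max(0.0, demands * (1.0 - control) - 0.5 * mastery)
            mastery += 0.1 * (learning - mastery)  # slow personality change
            strain += 0.1 * (stress - strain)
        return round(mastery, 2), round(strain, 2)

    print("active job: ", simulate(demands=0.9, control=0.9))  # mastery grows, strain decays
    print("high-strain:", simulate(demands=0.9, control=0.1))  # strain accumulates, mastery erodes

Under these invented coefficients the active job converges towards high mastery and low residual strain, while the high-strain job converges towards the opposite state, mirroring the positive and undesirable behavioural dynamics described above.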

The Demand/Control model and the system dynamics of physiological stress

One necessary next step for Demand/Control research is a more comprehensive specification of the physiological pathways of illness causation. Physiological response is increasingly being understood as a complex system response. The physiology of human stress response, in accomplishing, for example, fight-or-flight behaviour, is a highly integrated combination of changes in cardiovascular output, brain-stem regulation, respiratory interaction, limbic-system control of the endocrine response, general cortical activation and peripheral circulatory system changes. The concept of “stress” is very possibly most relevant for complex systems, which involve multiple, interacting subsystems and complex causality.* Accompanying this new perspective of system dynamic principles in physiology are definitions of many diseases as disorders of system regulation (Henry and Stephens 1977; Weiner 1977), and investigations of time-dependent, multifactorial adjustments towards system equilibrium or, alternatively, of their absence in “chaos”.

* Instead of a single and unambiguous cause-and-effect linkage, as in the “hard sciences” (or in their mythology), causal associations in stress models are more complex: many causes may “accumulate” to contribute to a single effect; a single cause (“stressor”) may have many effects; and effects may occur only after significant time delays.

Interpreting such observations from the perspective of a “generalized” Demand/Control model, we could say that stress refers to a disequilibrium of the system as a whole, even when parts of the system are still functioning. All organisms must have control mechanisms to integrate the actions of separate subsystems (e.g., the brain, the heart and the immune system). Stress (or job strain) would then be an overload condition experienced by the organism’s “control system” when it attempts to maintain integrated functioning in the face of too many environmental challenges (“high demands”) and when the system’s capability for integrated control of its submechanisms fails (“high strain”). To impose order on its chaotic environment, the individual’s internal physiological control systems must “do the work” of maintaining a coordinated physiological regularity (e.g., a constant heart rate) in the face of irregular environmental demands. When the organism’s control capacity is exhausted after too much “organizing” (a low-entropy condition, by analogy with thermodynamics), further demands lead to excess fatigue or debilitating strain. Furthermore, all organisms must periodically return their control systems to the rest state, in sleep or relaxation periods (a state of relaxed disorder, or high entropy), to be capable of undertaking the next round of coordinating tasks. The system’s coordination processes or its relaxation attempts may be inhibited if it cannot follow its own optimal course of action, that is, if it has no possibility of controlling its situation or of finding a satisfactory internal equilibrium state. In general, “lack of control” may represent a restriction of the organism’s ability to use all of its adaptive mechanisms to maintain physiological equilibrium in the face of demands, leading to increased long-term burdens and disease risk. This is a direction for future Demand/Control physiological research.

One potentially consistent finding is that while the Demand/Control model predicts cardiovascular mortality, no single conventional risk factor or physiological indicator seems to be the primary pathway of this risk. Future research may show whether “systems dynamic failures” are the pathway.

Macro-social implications of the Demand/Control model

Models which integrate over several spheres of research allow broader predictions about the health consequences of human social institutions. For example, Henry and Stephens (1977) observe that in the animal world “psychological demands” result from the thoroughly “social” responsibilities of finding food and shelter for the family and of rearing and defending offspring; situations of enforced demands combined with social isolation would be hard to imagine. The human world of work, however, is so organized that demands can occur without any social affiliation at all. Indeed, according to Frederick Taylor’s Principles of Scientific Management (1911 (1967)), increasing workers’ job demands often had to be done in isolation; otherwise the workers would revolt against the process and return to time-wasting socializing! In addition to showing the utility of an integrated model, this example shows the need to expand the social understanding of the human stress response even further (for example, by adding a social support dimension to the Demand/Control model).

An integrated, socially anchored understanding of the human stress response is particularly needed for understanding future economic and political development; less comprehensive models can be misleading. For example, according to the cognitive model, which has dominated public dialogue about future social and industrial development (i.e., the direction for workers’ skills, life in the information society, etc.), an individual has the freedom to interpret, that is, to reprogramme, his or her perception of real-world events as stressful or non-stressful. The social implication is that, literally, we can design for ourselves any social arrangement, and that we should take responsibility for adapting to any stresses it may cause. However, many of the physiological consequences of stress relate to the “emotional brain” in the limbic system, which has a deterministic structure with clear limits on the overall demands it can handle. It is definitely not “infinitely” reprogrammable, as studies of post-traumatic stress syndrome clearly indicate (Goleman 1995). Overlooking the limbic system’s limits, and the integration of emotional response and social integration, can lead to a very modern set of basic conflicts for human development. We may be developing social systems, on the basis of the extraordinary cognitive capabilities of our brain cortex, that place impossible demands on the more basic limbic brain functions in terms of overloads: lost social bonds, lack of internal control possibilities and a restricted ability to see the “whole picture”. In short, we appear to be running the risk of developing work organizations for which we are sociobiologically misfit. These results are not just the consequence of scientifically incomplete models; they also facilitate the wrong kinds of social process: processes in which the interests of some groups with social power are served at the cost, to others, of previously unseen levels of social and personal dysfunction.

Social class and psychosocial job measures

In many cases, individual-level stressors can be modelled as the causal outcome of larger-scale social, dynamic and political-economic processes; theoretical linkages to concepts such as social class are therefore also needed. Assessing associations between social situation and illness raises the question of the relationship between psychosocial Demand/Control factors and broad measures of social circumstance such as social class. The job decision latitude measure is indeed clearly correlated with education and other measures of social class. However, social class conventionally measures the effects of income and education, which operate via mechanisms different from the psychosocial pathways of the Demand/Control model. Importantly, the job strain construct is almost orthogonal to most social class measures in national populations (the active/passive dimension, by contrast, is highly correlated with social class, though only among high-status workers) (Karasek and Theorell 1990). The low decision latitude of low-status jobs appears to be a more important contributor to psychological strain than the distinction between mental and physical workload, the conventional determinant of white-collar/blue-collar status. Indeed, the physical exertion common in many blue-collar jobs may, in some circumstances, be protective against psychological strain. While job strain is indeed more common in low-status jobs, the psychosocial job dimensions define a strain-risk picture which is significantly independent of conventional social class measures.

Although it has been suggested that the observed Demand/Control job/illness associations merely reflect social class differences (Ganster 1989; Spector 1986), a review of the evidence rejects this view (Karasek and Theorell 1990). Most Demand/Control research has simultaneously controlled for social class, and Demand/Control associations persist within social class groups. However, blue-collar associations with the model are more consistently confirmed, while the strength of white-collar associations varies across studies (see “Job strain and cardiovascular disease”, below), with single-occupation white-collar studies being somewhat less robust. (Of course, for the very highest-status managers and professionals, decision making may become a significant demand in itself.)

The fact that conventional “social class” measures often find weaker associations with mental distress and illness outcomes than the Demand/Control model actually makes a case for new social class conceptions. Karasek and Theorell (1990) define a new set of psychosocially advantaged and disadvantaged workers, with job stress “losers” in routinized, commercialized and bureaucratized jobs, and “winners” in highly creative learning-focused intellectual work. Such a definition is consistent with a new, skill-based industrial output in the “information society”, and a new perspective on class politics.

Methodological Issues

Objectivity of psychosocial job measures

Self-report questionnaires administered to workers have been the most common method of gathering data on the psychosocial characteristics of work, since they are simple to administer and can easily be designed to tap core concepts in work redesign efforts (examples include Hackman and Oldham’s Job Diagnostic Survey (1975), the Job Content Questionnaire (Karasek 1985) and the Swedish Statshalsan questionnaire). While designed to measure the objective job, such questionnaire instruments inevitably measure job characteristics as perceived by the worker. Self-report bias can distort findings when the dependent variables, such as depression, exhaustion and dissatisfaction, are also self-reported. One remedy is to aggregate self-report responses by work groups with similar work situations, diluting individual biases (Kristensen 1995). This is the basis of extensively used systems linking psychosocial job characteristics to occupations (Johnson et al. 1996).
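
A minimal sketch of this aggregation remedy, assuming the pandas library and hypothetical variable names: individual self-reports are averaged within occupational cells, and the cell means, rather than the raw individual reports, are then linked back to each worker for analysis against health outcomes.

    import pandas as pd

    # Hypothetical individual-level survey records.
    df = pd.DataFrame({
        "occupation":        ["assembler", "assembler", "nurse", "nurse", "manager"],
        "decision_latitude": [22, 26, 55, 61, 80],
        "psych_demands":     [30, 34, 40, 44, 38],
    })

    # Average the self-reports within each occupation; using these cell
    # means instead of individual reports dilutes personal reporting bias
    # (at the cost of statistical power, as noted in the text).
    cell_means = df.groupby("occupation")[["decision_latitude", "psych_demands"]].mean()
    df = df.join(cell_means, on="occupation", rsuffix="_occupation_mean")
    print(df)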

There is also evidence bearing on the “objective” validity of self-reported psychosocial scales: correlations between self-report and expert-observation data are typically 0.70 or higher for decision latitude, but lower (about 0.35) for work demands (Frese and Zapf 1988). Also supporting objective validity is the high between-occupation variance (40 to 45%) of decision latitude scales, which compares favourably with 21% for income and 25% for physical exertion, both of which are acknowledged to vary dramatically by occupation (Karasek and Theorell 1990). However, only 7% and 4% of the variance of the psychological demands and social support scales, respectively, lies between occupations, leaving the possibility of a large person-based component in self-reports of these measures.
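
The between-occupation variance share cited above is essentially a one-way variance decomposition. A sketch under stated assumptions (invented data; the share computed as the between-group sum of squares over the total sum of squares, i.e., eta-squared) might look as follows:

    import pandas as pd

    # Invented scores for a decision latitude scale across three occupations.
    df = pd.DataFrame({
        "occupation": ["a", "a", "a", "b", "b", "b", "c", "c", "c"],
        "latitude":   [20, 24, 22, 55, 60, 58, 78, 82, 80],
    })

    grand_mean = df["latitude"].mean()
    total_ss = ((df["latitude"] - grand_mean) ** 2).sum()
    between_ss = sum(
        len(group) * (group["latitude"].mean() - grand_mean) ** 2
        for _, group in df.groupby("occupation")
    )
    # Share of scale variance lying between occupations (eta-squared).
    print(f"between-occupation share: {between_ss / total_ss:.0%}")

A scale whose variance lies mostly between occupations behaves like an attribute of the job; one whose variance lies mostly within occupations behaves more like an attribute of the respondent.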

More objective measurement strategies would be desirable, and some well-known objective assessment methods are congruent with the Demand/Control model (for decision latitude: VERA, Volpert et al. (1983)). However, expert observations have problems of their own: observations are costly and time consuming, and, in the assessment of social interactions, they obviously do not generate more accurate measures. There are also theoretical biases built into the very concept of standard “expert” measures: it is much easier to “measure” the easily observed, repetitive quality of low-status assembly-line jobs than the diverse tasks of high-status managers or professionals. Thus, the objectivity of the psychosocial measures is inversely related to the decision latitude of the subject.

Some reviews of empirical evidence for the Demand/Control model

Job strain and cardiovascular disease (CVD)

Job strain and heart disease associations represent the broadest base of empirical support for the model. Recent comprehensive reviews have been done by Schnall, Landsbergis and Baker (1994), Landsbergis et al. (1993) and Kristensen (1995). Summarizing Schnall, Landsbergis and Baker (1994) (updated by Landsbergis, personal communication, Fall 1995): 16 of 22 studies have confirmed a job strain association with cardiovascular mortality using a wide range of methodologies, including 7 of 11 cohort studies, 2 of 3 cross-sectional studies, 4 of 4 case-control studies and 3 of 3 studies utilizing disease symptom indicators. Most negative studies have involved older populations (mainly over age 55, some with much post-retirement time) and are based mainly upon aggregated occupation scores which, although they minimize self-report bias, are weak in statistical power. The job strain hypothesis appears to be somewhat more consistent in predicting blue-collar than white-collar CVD (Marmot and Theorell 1988). Conventional CVD risk factors such as serum cholesterol, smoking and even blood pressure, when measured in the conventional manner, have so far shown only inconsistent or weak job strain effects. However, more sophisticated methods (ambulatory blood pressure measurement) show substantial positive results (Theorell and Karasek 1996).

Job strain and psychological distress/behaviour, absenteeism

Psychological disorder findings are reviewed in Karasek and Theorell (1990). The majority of studies confirm a job strain association and are drawn from broadly representative or nationally representative populations in a number of countries. The common study limitations are cross-sectional designs and the difficult-to-avoid problem of self-reported job and psychological strain questionnaires, although some studies also include objective observer assessment of work situations, and there are also supportive longitudinal studies. While some have claimed that a person-based tendency towards negative affect inflates work/mental strain associations (Brief et al. 1988), this could not be true of several strong findings on absenteeism (North et al. 1996; Vahtera, Uutela and Pentti 1996). Associations in some studies are very strong and, in a number of studies, are based on a linkage system which minimizes potential self-report bias (at the risk of loss of statistical power). These studies confirm associations for a broad range of psychological strain outcomes: moderately severe forms of depression, exhaustion, drug consumption and life and job dissatisfaction, but the findings also differ by outcome. There is also some differentiation of negative affect by Demand/Control model dimensions. Exhaustion, rushed tempo or simply reports of “feeling stressed” are more strongly related to psychological demands, and are higher for managers and professionals. More serious strain symptoms, such as depression, loss of self-esteem and physical illness, seem to be more strongly associated with low decision latitude, a larger problem for low-status workers.

Job strain and musculoskeletal disorders and other chronic diseases

Evidence of the utility of the Demand/Control model is accumulating in other areas (see Karasek and Theorell 1990). The prediction of occupational musculoskeletal illness is reviewed across 27 studies by Bongers et al. (1993) and by other researchers (Leino and Hänninen 1995; Faucett and Rempel 1994). This work supports the predictive utility of the Demand/Control/support model, particularly for upper-extremity disorders. Recent studies of pregnancy disorders (Fenster et al. 1995; Brandt and Nielsen 1992) also show job strain associations.

Summary and Future Directions

The Demand/Control/support model has stimulated much research during recent years. The model has helped to document more specifically the importance of social and psychological factors in the structure of current occupations as a risk factor for industrial society’s most burdensome diseases and social conditions. Empirically, the model has been successful: a clear relationship between adverse job conditions (particularly low decision latitude) and coronary heart disease has been established.

However, it is still difficult to be precise about which aspects of psychological demands, or decision latitude, are most important in the model, and for what categories of workers. Answers to these questions require more depth of explanation of the physiological and micro-behavioural effects of psychological demands, decision latitude and social support than the model’s original formulation provided, and require simultaneous testing of the dynamic version of the model, including the active/passive hypotheses. Future utility of Demand/Control research could be enhanced by an expanded set of well-structured hypotheses, developed through integration with other intellectual areas, as outlined above (also in Karasek and Theorell 1990). The active/passive hypotheses, in particular, have received too little attention in health outcome research.

Other areas of progress are also needed, particularly new methodological approaches in the psychological demand area. More longitudinal studies are needed, methodological advances are needed to address self-report bias, and new physiological monitoring technologies must be introduced. At the macro level, macro-social occupational factors, such as worker collective and organizational-level decision influence and support, communication limitations and job and income insecurity, need to be more clearly integrated into the model. The linkages to social class concepts need to be further explored, and the strength of the model for women, together with the structure of work/family linkages, needs further investigation. Population groups in insecure employment arrangements, which have the highest stress levels, must be covered by new types of study design, a need that is especially relevant as the global economy changes the nature of work relationships. As we are increasingly exposed to the strains of the global economy, new measures at the macro level are needed to capture lack of local control and increased intensity of work activity, which apparently keep the general form of the Demand/Control model relevant for the future.

 


Tuesday, 11 January 2011 20:43

Environmental Design

Overview

In this article, the links between the physical features of the workplace and occupational health are examined. Workplace design is concerned with a variety of physical conditions within work environments that can be objectively observed or recorded and modified through architectural, interior design and site planning interventions. For the purposes of this discussion, occupational health is broadly construed to encompass multiple facets of workers’ physical, mental and social well-being (World Health Organization 1984). Thus, a broad array of health outcomes is examined, including employee satisfaction and morale, work-group cohesion, stress reduction, illness and injury prevention, as well as environmental supports for health promotion at the worksite.

Empirical evidence for the links between workplace design and occupational health is reviewed below. This review, highlighting the health effects of specific design features, must be qualified in certain respects. First, from an ecological perspective, worksites function as complex systems comprising multiple social and physical environmental conditions which jointly influence employee well-being (Levi 1992; Moos 1986; Stokols 1992). Thus, the health consequences of environmental conditions are often cumulative and sometimes involve complex mediated and moderated relationships among the sociophysical environment, personal resources and dispositions (Oldham and Fried 1987; Smith 1987; Stellman and Henifin 1983). Moreover, enduring qualities of person-environment transaction, such as the degree to which employees perceive their work situation to be controllable, socially supportive and compatible with their particular needs and abilities, may have a more pervasive influence on occupational health than any single facet of workplace design (Caplan 1983; Karasek and Theorell 1990; Parkes 1989; Repetti 1993; Sauter, Hurrell and Cooper 1989). The research findings reviewed below should be interpreted in light of these caveats.

Research Findings

The relationships between worksite design and occupational health can be considered at several levels of analysis, including the:

  1. physical arrangement of employees’ immediate work area
  2. ambient environmental qualities of the work area
  3. physical organization of buildings that comprise a particular workplace
  4. exterior amenities and site planning of those facilities.

 

Previous research has focused primarily on the first and second levels, while giving less attention to the third and fourth levels of workplace design.

Physical features of the immediate work area

The immediate work area extends from the core of an employee’s desk or workstation to the physical enclosure or imaginary boundary surrounding his or her work space. Several features of the immediate work area have been found to influence employee well-being. The degree of physical enclosure surrounding one’s desk or workstation, for example, has been shown in several studies to be positively related to the employee’s perception of privacy, satisfaction with the work environment and overall job satisfaction (Brill, Margulis and Konar 1984; Hedge 1986; Marans and Yan 1989; Oldham 1988; Sundstrom 1986; Wineman 1986). Moreover, “open-plan” (low enclosure) work areas have been linked to more negative social climates in work groups (Moos 1986) and more frequent reports of headaches among employees (Hedge 1986). It is important to note, however, that the potential health effects of workstation enclosure may depend on the type of work being performed (e.g., confidential versus non-confidential, team versus individualized tasks; see Brill, Margulis and Konar 1984), job status (Sundstrom 1986), levels of social density adjacent to one’s work area (Oldham and Fried 1987), and workers’ needs for privacy and stimulation screening (Oldham 1988).

A number of studies have shown that the presence of windows in the employees’ immediate work areas (especially windows that afford views of natural or landscaped settings), exposure to indoor natural elements (e.g., potted plants, pictures of wilderness settings), and opportunities to personalize the decor of one’s office or workstation are associated with higher levels of environmental and job satisfaction and lower levels of stress (Brill, Margulis and Konar 1984; Goodrich 1986; Kaplan and Kaplan 1989; Steele 1986; Sundstrom 1986). Providing employees with localized controls over acoustic, lighting and ventilation conditions within their work areas has been linked to higher levels of environmental satisfaction and lower levels of stress in some studies (Becker 1990; Hedge 1991; Vischer 1989). Finally, several research programmes have documented the health benefits associated with employees’ use of adjustable, ergonomically sound furniture and equipment; these benefits include reduced rates of eyestrain and of repetitive motion injuries and lower back pain (Dainoff and Dainoff 1986; Grandjean 1987; Smith 1987).

Ambient environmental qualities of the work area

Ambient environmental conditions originate from outside the worker’s immediate work area. These pervasive qualities of the worksite influence the comfort and well-being of employees whose work spaces are located within a common region (e.g., a suite of offices located on one floor of a building). Examples of ambient environmental qualities include levels of noise, speech privacy, social density, illumination and air quality—conditions that are typically present within a particular portion of the worksite. Several studies have documented the adverse health impacts of chronic noise disturbance and low levels of speech privacy in the workplace, including elevated levels of physiological and psychological stress and reduced levels of job satisfaction (Brill, Margulis and Konar 1984; Canter 1983; Klitzman and Stellman 1989; Stellman and Henifin 1983; Sundstrom 1986; Sutton and Rafaeli 1987). High levels of social density in the immediate vicinity of one’s work area have also been linked with elevated stress levels and reduced job satisfaction (Oldham 1988; Oldham and Fried 1987; Oldham and Rotchford 1983).

Health consequences of office lighting and ventilation systems have been observed as well. In one study, lensed indirect fluorescent uplighting was associated with higher levels of employee satisfaction and reduced eyestrain, in comparison with traditional fluorescent downlighting (Hedge 1991). Positive effects of natural lighting on employees’ satisfaction with the workplace also have been reported (Brill, Margulis and Konar 1984; Goodrich 1986; Vischer and Mees 1991). In another study, office workers exposed to chilled-air ventilation systems evidenced higher rates of upper-respiratory problems and physical symptoms of “sick building syndrome” than those whose buildings were equipped with natural or mechanical (non-chilled, non-humidified) ventilation systems (Burge et al. 1987; Hedge 1991).

Features of the ambient environment that have been found to enhance the social climate and cohesiveness of work groups include the provision of team-oriented spaces adjacent to individualized offices and workstations (Becker 1990; Brill, Margulis and Konar 1984; Steele 1986; Stone and Luchetti 1985) and visible symbols of corporate and team identity displayed within lobbies, corridors, conference rooms, lounges and other collectively used areas of the worksite (Becker 1990; Danko, Eshelman and Hedge 1990; Ornstein 1990; Steele 1986).

Overall organization of buildings and facilities

This level of design encompasses the interior physical features of work facilities that extend throughout an entire building, many of which are not immediately experienced within an employee’s own work space or within those adjacent to it. For example, enhancing the structural integrity and fire-resistance of buildings, and designing stairwells, corridors and factories to prevent injuries, are essential strategies for promoting worksite safety and health (Archea and Connell 1986; Danko, Eshelman and Hedge 1990). Building layouts that are consistent with the adjacency needs of closely interacting units within an organization can improve coordination and cohesion among work groups (Becker 1990; Brill, Margulis and Konar 1984; Sundstrom and Altman 1989). The provision of physical fitness facilities at the worksite has been found to be an effective strategy for enhancing employees’ health practices and stress management (O’Donnell and Harris 1994). Finally, the presence of legible signs and wayfinding aids, attractive lounge and dining areas, and child-care facilities at the worksite have been identified as design strategies that enhance employees’ job satisfaction and stress management (Becker 1990; Brill, Margulis and Konar 1984; Danko, Eshelman and Hedge 1990; Steele 1986; Stellman and Henifin 1983; Vischer 1989).

Exterior amenities and site planning

Exterior environmental conditions adjacent to the worksite may also carry health consequences. One study reported an association between employees’ access to landscaped, outdoor recreational areas and reduced levels of job stress (Kaplan and Kaplan 1989). Other researchers have suggested that the geographic location and site planning of the worksite can influence the mental and physical well-being of workers to the extent that they afford greater access to parking and public transit, restaurants and retail services, good regional air quality and the avoidance of violent or otherwise unsafe areas in the surrounding neighbourhood (Danko, Eshelman and Hedge 1990; Michelson 1985; Vischer and Mees 1991). However, the health benefits of these design strategies have not yet been evaluated in empirical studies.

Directions for Future Research

Prior studies of environmental design and occupational health reflect certain limitations and suggest several issues for future investigation. First, earlier research has emphasized the health effects of specific design features (e.g., workstation enclosure, furnishings, lighting systems) while neglecting the joint influence of physical, interpersonal and organizational factors on well-being. Yet the health benefits of improved environmental design may be moderated by the social climate and organizational qualities of the workplace (for example, its participative versus non-participative structure) (Becker 1990; Parkes 1989; Klitzman and Stellman 1989; Sommer 1983; Steele 1986). The interactive links between physical design features, employee characteristics, social conditions at work and occupational health therefore warrant greater attention in subsequent studies (Levi 1992; Moos 1986; Stokols 1992). At the same time, an important challenge for future research is to clarify the operational definitions of particular design features (e.g., the “open-plan” office), which have varied widely in earlier studies (Brill, Margulis and Konar 1984; Marans and Yan 1989; Wineman 1986).

Secondly, employee characteristics such as job status, gender and dispositional styles have been found to mediate the health consequences of worksite design (Burge et al. 1987; Oldham 1988; Hedge 1986; Sundstrom 1986). Yet it is often difficult to disentangle the separate effects of environmental features and individual differences because of ecological correlations among these variables (for example, higher-status employees typically also enjoy greater workstation enclosure and more comfortable furnishings) (Klitzman and Stellman 1989). Future studies should incorporate experimental techniques and sampling strategies that permit an assessment of the main and interactive effects of personal and environmental factors on occupational health. Moreover, specialized design and ergonomic criteria to enhance the health of diverse and vulnerable employee groups (e.g., disabled, elderly and single-parent female workers) remain to be developed in future research (Michelson 1985; Ornstein 1990; Steinfeld 1986).

Thirdly, prior research on the health outcomes of worksite design has relied heavily on survey methods to assess employees’ perceptions of both their work environments and health status, placing certain constraints (for example, “common method variance”) on the interpretation of data (Klitzman and Stellman 1989; Oldham and Rotchford 1983). Furthermore, the majority of these studies have used cross-sectional rather than longitudinal research designs, the latter incorporating comparative assessments of intervention and control groups. Future studies should emphasize both field-experimental research designs and multi-method strategies that combine survey techniques with more objective observations and recordings of environmental conditions, medical exams and physiological measures.

Finally, the health consequences of building organization, exterior amenities and site-planning decisions have received considerably less attention in prior studies than those associated with the more immediate, ambient qualities of employees’ work areas. The health relevance of both proximal and remote aspects of workplace design should be examined more closely in future research.

Role of Workplace Design in Illness Prevention and Health Promotion

Several environmental design resources and their potential health benefits are summarized in table 1, based on the preceding review of research findings. These resources are grouped according to the four levels of design noted above and emphasize physical features of work settings that have been empirically linked to improved mental, physical and social health outcomes (especially those found at levels 1 and 2), or have been identified as theoretically plausible leverage points for enhancing employee well-being (e.g., several of the features subsumed under levels 3 and 4).

Table 1. Workplace design resources and potential health benefits

For each level of environmental design, the workplace design features are paired below with their associated emotional, social and physical health outcomes (feature: outcome).

Immediate work area
  • Physical enclosure of the work area: enhanced privacy and job satisfaction
  • Adjustable furniture and equipment: reduced eyestrain, repetitive-strain injuries and lower-back injuries
  • Localized controls of acoustics, lighting and ventilation: enhanced comfort and stress reduction
  • Natural elements and personalized decor: enhanced sense of identity and involvement at the workplace
  • Presence of windows in work area: job satisfaction and stress reduction

Ambient qualities of the work area
  • Speech privacy and noise control: lower physiological and emotional stress
  • Comfortable levels of social density: lower physiological and emotional stress
  • Good mix of private and team spaces: improved social climate and cohesion
  • Symbols of corporate and team identity: improved social climate and cohesion
  • Natural, task and lensed indirect lighting: reduced eyestrain, enhanced satisfaction
  • Natural ventilation vs. chilled-air systems: lower rates of respiratory problems

Building organization
  • Adjacencies among interacting units: enhanced coordination and cohesion
  • Legible signage and wayfinding aids: reduced confusion and distress
  • Injury-resistant architecture: lower rates of unintentional injuries
  • Attractive lounge and food areas onsite: enhanced satisfaction with job and worksite
  • Availability of worksite child care: employee convenience, stress reduction
  • Physical fitness facilities onsite: improved health practices, lower stress

Exterior amenities and site planning
  • Availability of outside recreation areas: enhanced cohesion, stress reduction
  • Access to parking and public transit: employee convenience, stress reduction
  • Proximity to restaurants and stores: employee convenience, stress reduction
  • Good air quality in surrounding area: improved respiratory health
  • Low levels of neighbourhood violence: reduced rates of intentional injuries

 

The incorporation of these resources into the design of work environments should, ideally, be combined with organizational and facilities management policies that maximize the health- promoting qualities of the workplace. These corporate policies include:

  1. the designation of worksites as “smoke-free” (Fielding and Phenow 1988)
  2. the specification and use of non-toxic, ergonomically sound furnishings and equipment (Danko, Eshelman and Hedge 1990)
  3. managerial support for employees’ personalization of their workspace (Becker 1990; Brill, Margulis and Konar 1984; Sommer 1983; Steele 1986)
  4. job designs that prevent health problems linked with computer-based work and repetitive tasks (Hackman and Oldham 1980; Sauter, Hurrell and Cooper 1989; Smith and Sainfort 1989)
  5. the provision of employee training programmes in the areas of ergonomics and occupational safety and health (Levy and Wegman 1988)
  6. incentive programmes to encourage employees’ use of physical fitness facilities and compliance with injury prevention protocols (O’Donnell and Harris 1994)
  7. flexitime, telecommuting, job-sharing and ride-sharing programmes to enhance workers’ effectiveness in residential and corporate settings (Michelson 1985; Ornstein 1990; Parkes 1989; Stokols and Novaco 1981)
  8. the involvement of employees in the planning of worksite relocations, renovations and related organizational developments (Becker 1990; Brill, Margulis and Konar 1984; Danko, Eshelman and Hedge 1990; Miller and Monge 1986; Sommer 1983; Steele 1986; Stokols et al. 1990).

 

Organizational efforts to enhance employee well-being are likely to be more effective to the extent that they combine complementary strategies of environmental design and facilities management, rather than relying exclusively on either one of these approaches.



Tuesday, 11 January 2011 20:25

Psychosocial Factors, Stress and Health

In the language of engineering, stress is “a force which deforms bodies”. In biology and medicine, the term usually refers to a process in the body, to the body’s general plan for adapting to all the influences, changes, demands and strains to which it is exposed. This plan swings into action, for example, when a person is assaulted on the street, but also when someone is exposed to toxic substances or to extreme heat or cold. It is not just physical exposures which activate the plan, however; mental and social ones do so as well: for instance, if we are insulted by a supervisor, reminded of an unpleasant experience, expected to achieve something of which we do not believe we are capable, or if, with or without cause, we worry about our job or marriage.

There is something common to all these cases in the way the body attempts to adapt. This common denominator, a kind of “revving up” or “stepping on the gas”, is stress. Stress is, then, a stereotyped pattern in the body’s responses to influences, demands or strains. Some level of stress is always to be found in the body, just as, to draw a rough parallel, a country maintains a certain state of military preparedness even in peacetime. Occasionally this preparedness is intensified, sometimes with good cause and at other times without.

In this way the stress level affects the rate at which processes of wear and tear on the body take place. The more “gas” given, the higher the rate at which the body’s engine is driven, and hence the more quickly the “fuel” is used up and the “engine” wears out. Another metaphor also applies: if you burn a candle with a high flame, at both ends, it will give off brighter light but will also burn down more quickly. A certain amount of fuel is necessary otherwise the engine will stand still, the candle will go out; that is, the organism would be dead. Thus, the problem is not that the body has a stress response, but that the degree of stress—the rate of wear and tear—to which it is subject may be too great. This stress response varies from one minute to another even in one individual, the variation depending in part on the nature and state of the body and in part on the external influences and demands—the stressors—to which the body is exposed. (A stressor is thus something that produces stress.)

Sometimes it is difficult to determine whether stress in a particular situation is good or bad. Take, for instance, the exhausted athlete on the winner’s stand, or the newly appointed but stress-racked executive. Both have achieved their goals. In terms of pure accomplishment, one would have to say that their results were well worth the effort. In psychological terms, however, such a conclusion is more doubtful. A good deal of torment may have been necessary to get so far, involving long years of training or never-ending overtime, usually at the expense of family life. From the medical viewpoint such achievers may be considered to have burnt their candles at both ends. The result could be physiological; the athlete may rupture a muscle or two and the executive develop high blood pressure or have a heart attack.

Stress in relation to work

An example may clarify how stress reactions can arise at work and what they might lead to in terms of health and quality of life. Let us imagine the following situation for a hypothetical male worker. Based on economic and technical considerations, management has decided to break up a production process into very simple and primitive elements which are to be performed on an assembly line. Through this decision, a social structure is created and a process set into motion which can constitute the starting point in a stress- and disease-producing sequence of events. The new situation becomes a psychosocial stimulus for the worker when he first perceives it. These perceptions may be further influenced by the fact that the worker has previously received extensive training and was consequently expecting a work assignment which required higher qualifications, not reduced skill levels. In addition, his past experience of work on an assembly line was strongly negative (that is, earlier environmental experiences will influence the reaction to the new situation). Furthermore, the worker’s hereditary factors make him more prone to react to stressors with an increase in blood pressure. Because he is more irritable, perhaps his wife criticizes him for accepting the new assignment and bringing his problems home. As a result of all these factors, the worker reacts to his feelings of distress, perhaps with an increase in alcohol consumption, or experiences undesirable physiological reactions, such as elevated blood pressure. The troubles at work and in the family continue, and his reactions, originally of a transient type, become sustained. Eventually, he may enter a chronic anxiety state or develop alcoholism or chronic hypertensive disease. These problems, in turn, increase his difficulties at work and with his family, and may also increase his physiological vulnerability. A vicious cycle may set in which may end in a stroke, a workplace accident or even suicide. This example illustrates the environmental programming involved in the way a worker reacts behaviourally, physiologically and socially, leading to increased vulnerability, impaired health and even death.

Psychosocial conditions in present working life

According to an important International Labour Organization (ILO) (1975) resolution, work should not only respect workers’ lives and health and leave them free time for rest and leisure, but also allow them to serve society and achieve self-fulfilment by developing their personal capabilities. These principles were also set down as early as 1963, in a report from the London Tavistock Institute (Document No. T813) which provided the following general guidelines for job design:

  1.  The job should be reasonably demanding in terms other than sheer endurance and provide at least a minimum of variety.
  2.  The worker should be able to learn on the job and go on learning.
  3.  The job should comprise some area of decision-making that the individual can call his or her own.
  4.  There should be some degree of social support and recognition in the workplace.
  5.  The worker should be able to relate what he or she does or produces to social life.
  6.  The worker should feel that the job leads to some sort of desirable future.

 

The Organization for Economic Cooperation and Development (OECD), however, draws a less hopeful picture of the reality of working life, pointing out that:

  • Work has been accepted as a duty and a necessity for most adults.
  • Work and workplaces have been designed almost exclusively with reference to criteria of efficiency and cost.
  • Technological and capital resources have been accepted as the imperative determinants of the optimum nature of jobs and work systems.
  • Changes have been motivated largely by aspirations to unlimited economic growth.
  • The judgement of the optimum designs of jobs and choice of work objectives has resided almost wholly with managers and technologists, with only a slight intrusion from collective bargaining and protective legislation.
  • Other societal institutions have taken on forms that serve to sustain this type of work system.

 

In the short run, the developments enumerated in this OECD list have brought benefits: greater productivity at lower cost, and an increase in wealth. In the long term, however, such developments often bring more worker dissatisfaction, alienation and possibly ill health which, considering society in general, may in turn affect the economic sphere itself, although the economic costs of these effects have only recently been taken into consideration (Cooper, Luikkonen and Cartwright 1996; Levi and Lunde-Jensen 1996).

We also tend to forget that, biologically, humankind has not changed much during the last 100,000 years, whereas the environment—and in particular the work environment—has changed dramatically, particularly during the past century. This change has been partly for the better; however, some of these “improvements” have been accompanied by unexpected side effects. For example, data collected by the National Swedish Central Bureau of Statistics during the 1980s showed that:

  • 11% of all Swedish employees are continuously exposed to deafening noise.
  • 15% have work which makes them very dirty (oil, paint, etc.).
  • 17% have inconvenient working hours, i.e., not only daytime work but also early or late night work, shift work or other irregular working hours.
  • 9% have gross working hours exceeding 11 hours per day (this concept includes hours of work, breaks, travelling time, overtime, etc.; in other words, that part of the day which is set aside for work).
  • 11% have work that is considered both “hectic” and “monotonous”.
  • 34% consider their work “mentally exacting”.
  • 40% consider themselves “without influence on the arrangement of time for breaks”.
  • 45% consider themselves without “opportunities to learn new things” at their work.
  • 26% have an instrumental attitude to their work. They consider “their work to yield nothing except the pay—i.e. no feeling of personal satisfaction”. Work is regarded purely as an instrument for acquiring an income.


In its major study of conditions of work in the 12 member States of the European Union at that time (1991/92), the European Foundation (Paoli 1992) found that 30% of the workforce considered their work a risk to their health; that 23 million workers did night work for more than 25% of their total hours worked; that one in three reported highly repetitive, monotonous work; that one in five men and one in six women worked under “continuous time pressure”; and that one in four workers carried heavy loads or worked in a twisted or painful position for more than 50% of his or her working time.

Main psychosocial stressors at work

As already indicated, stress is caused by a bad “person-environment fit”, objectively, subjectively, or both, at work or elsewhere and in an interaction with genetic factors. It is like a badly fitting shoe: environmental demands are not matched to individual ability, or environmental opportunities do not measure up to individual needs and expectations. For example, the individual is able to perform a certain amount of work, but much more is required, or on the other hand no work at all is offered. Another example would be that the worker needs to be part of a social network, to experience a sense of belonging, a sense that life has meaning, but there may be no opportunity to meet these needs in the existing environment and the “fit” becomes bad.

Any fit will depend on the “shoe” as well as on the “foot”, on situational factors as well as on individual and group characteristics. The most important situational factors that give rise to “misfit” can be categorized as follows:

Quantitative overload. Too much to do, time pressure and repetitive work-flow. This is to a great extent the typical feature of mass production technology and routinized office work.

Qualitative underload. Too narrow and one-sided job content, lack of stimulus variation, no demands on creativity or problem-solving, or low opportunities for social interaction. These jobs seem to become more common with suboptimally designed automation and increased use of computers in both offices and manufacturing, even though there may be instances of the opposite.

Role conflicts. Everybody occupies several roles concurrently. We are the superiors of some people and the subordinates of others. We are children, parents, marital partners, friends and members of clubs or trade unions. Conflicts easily arise among our various roles and are often stress evoking, as when, for instance, demands at work clash with those from a sick parent or child or when a supervisor is divided between loyalty to superiors and to fellow workers and subordinates.

Lack of control over one’s own situation. When someone else decides what to do, and when and how (for example, in relation to work pace and working methods), the worker has no influence, no control, no say. A related stressor is uncertainty, or the lack of any obvious structure, in the work situation.

Lack of social support, whether at home or from one’s supervisor or fellow workers.

Physical stressors. Such factors can influence the worker both physically and chemically, for example, through direct effects of organic solvents on the brain. Secondary psychosocial effects can also originate from the distress caused by, say, odours, glare, noise, extremes of air temperature or humidity and so on. These effects can also arise from the worker’s awareness, suspicion or fear that he is exposed to life-threatening chemical hazards or to accident risks.

Finally, real life conditions at work and outside work usually imply a combination of many exposures. These might become superimposed on each other in an additive or synergistic way. The straw which breaks the camel’s back may therefore be a rather trivial environmental factor, but one that comes on top of a very considerable, pre-existing environmental load.

Some of the specific stressors in industry merit special discussion, namely those characteristic of:

  • mass production technology
  • highly automated work processes
  • shift work


Mass production technology. Over the past century work has become fragmented in many workplaces, changing from a well-defined job activity with a distinct and recognized end-product, into numerous narrow and highly specified subunits which bear little apparent relation to the end-product. The growing size of many factory units has tended to result in a long chain of command between management and the individual workers, accentuating remoteness between the two groups. The worker also becomes remote from the consumer, since elaborate arrangements for marketing, distribution and selling interpose many steps between the producer and the consumer.

Mass production, thus, normally involves not just a pronounced fragmentation of the work process but also a decrease in worker control of the process. This is partly because work organization, work content and work pace are determined by the machine system. All these factors usually result in monotony, social isolation, lack of freedom and time pressure, with possible long-term effects on health and well-being.

Mass production, moreover, favours the introduction of piece rates. In this regard, it can be assumed that the desire—or necessity—to earn more can, for a time, induce the individual to work harder than is good for the organism and to ignore mental and physical “warnings”, such as a feeling of tiredness, nervous problems and functional disturbances in various organs or organ systems. Another possible effect is that the employee, bent on raising output and earnings, infringes safety regulations, thereby increasing the risk of occupational disease and of accidents to himself or herself and to others (e.g., lorry drivers on piece rates).

Highly automated work processes. In automated work the repetitive, manual elements are taken over by machines, and the workers are left with mainly supervisory, monitoring and controlling functions. This kind of work is generally rather skilled, not regulated in detail and the worker is free to move about. Accordingly, the introduction of automation eliminates many of the disadvantages of the mass-production technology. However, this holds true mainly for those stages of automation where the operator is indeed assisted by the computer and maintains some control over its services. If, however, operator skills and knowledge are gradually taken over by the computer—a likely development if decision making is left to economists and technologists—a new impoverishment of work may result, with a re-introduction of monotony, social isolation and lack of control.

Monitoring a process usually calls for sustained attention and readiness to act throughout a monotonous term of duty, a requirement that does not match the brain’s need for a reasonably varied flow of stimuli in order to maintain optimal alertness. It is well documented that the ability to detect critical signals declines rapidly even during the first half-hour in a monotonous environment. This may add to the strain inherent in the awareness that temporary inattention and even a slight error could have extensive economic and other disastrous consequences.

Other critical aspects of process control are associated with very special demands on mental skill. The operators are concerned with symbols, abstract signals on instrument arrays and are not in touch with the actual product of their work.

Shift work. In the case of shift work, rhythmical biological changes do not necessarily coincide with corresponding environmental demands. Here, the organism may “step on the gas” and activation occurs at a time when the worker needs to sleep (for example, during the day after a night shift), and deactivation correspondingly occurs at night, when the worker may need to work and be alert.

A further complication arises because workers usually live in a social environment which is not designed for the needs of shift workers. Last but not least, shift workers must often adapt to regular or irregular changes in environmental demands, as in the case of rotating shifts.

In summary, the psychosocial demands of the modern workplace are often at variance with the workers’ needs and capabilities, leading to stress and ill health. This discussion provides only a snapshot of psychosocial stressors at work, and how these unhealthy conditions can arise in today’s workplace. In the sections that follow, psychosocial stressors are analysed in greater detail with respect to their sources in modern work systems and technologies, and with respect to their assessment and control.



Tuesday, 11 January 2011 20:11

Psychosocial and Organizational Factors

In 1966, long before job stress and psychosocial factors became household expressions, a special report entitled “Protecting the Health of Eighty Million Workers—A National Goal for Occupational Health” was submitted to the Surgeon General of the United States (US Department of Health and Human Services 1966). The report was prepared under the auspices of the National Advisory Environmental Health Committee to provide direction to Federal programmes in occupational health. Among its many observations, the report noted that psychological stress was increasingly apparent in the workplace, presenting “... new and subtle threats to mental health,” and possible risk of somatic disorders such as cardiovascular disease. Technological change and the increasing psychological demands of the workplace were listed as contributing factors. The report concluded with a list of two dozen “urgent problems” requiring priority attention, including occupational mental health and contributing workplace factors.

Thirty years later, this report has proven remarkably prophetic. Job stress has become a leading source of worker disability in North America and Europe. In 1990, 13% of all worker disability cases handled by Northwestern National Life, a major US underwriter of worker compensation claims, were due to disorders with a suspected link to job stress (Northwestern National Life 1991). A 1985 study by the National Council on Compensation Insurance found that one type of claim, involving psychological disability due to “gradual mental stress” at work, had grown to 11% of all occupational disease claims* (National Council on Compensation Insurance 1985).

* In the United States, occupational disease claims are distinct from injury claims, which tend to greatly outnumber disease claims.

These developments are understandable considering the demands of modern work. A 1991 survey of European Union members found that “The proportion of workers who complain from organizational constraints, which are in particular conducive to stress, is higher than the proportion of workers complaining from physical constraints” (European Foundation for the Improvement of Living and Working Conditions 1992). Similarly, a more recent study of the Dutch working population found that one-half of the sample reported a high work pace, three-fourths of the sample reported poor possibilities of promotion, and one-third reported a poor fit between their education and their jobs (Houtman and Kompier 1995). On the American side, data on the prevalence of job stress risk factors in the workplace are less available. However, in a recent survey of several thousand US workers, over 40% of the workers reported excessive workloads and said they were “used up” and “emotionally drained” at the end of the day (Galinsky, Bond and Friedman 1993).

The impact of this problem in terms of lost productivity, disease and reduced quality of life is undoubtedly formidable, although difficult to estimate reliably. However, recent analyses of data from over 28,000 workers by the Saint Paul Fire and Marine Insurance company are of interest and relevance. This study found that time pressure and other emotional and personal problems at work were more strongly associated with reported health problems than any other personal life stressor; more so than even financial or family problems, or death of a loved one (St. Paul Fire and Marine Insurance Company 1992).

Looking to the future, rapid changes in the fabric of work and the workforce pose unknown, and possibly increased, risks of job stress. For example, in many countries the workforce is rapidly ageing at a time when job security is decreasing. In the United States, corporate downsizing continues almost unabated into the last half of the decade at a rate of over 30,000 jobs lost per month (Roy 1995). In the above-cited study by Galinsky, Bond and Friedman (1993) nearly one-fifth of the workers thought it likely they would lose their jobs in the forthcoming year. At the same time the number of contingent workers, who are generally without health benefits and other safety nets, continues to grow and now comprises about 5% of the workforce (USBLS 1995).

The aim of this chapter is to provide an overview of current knowledge on conditions which lead to stress at work and associated health and safety problems. These conditions, which are commonly referred to as psychosocial factors, include aspects of the job and work environment such as organizational climate or culture, work roles, interpersonal relationships at work, and the design and content of tasks (e.g., variety, meaning, scope, repetitiveness, etc.). The concept of psychosocial factors extends also to the extra-organizational environment (e.g., domestic demands) and aspects of the individual (e.g., personality and attitudes) which may influence the development of stress at work. Frequently, the expressions work organization or organizational factors are used interchangeably with psychosocial factors in reference to working conditions which may lead to stress.

This section of the Encyclopaedia begins with descriptions of several models of job stress which are of current scientific interest, including the job demands-job control model, the person-environment (P-E) fit model, and other theoretical approaches to stress at work. Like all contemporary notions of job stress, these models have a common theme: job stress is conceptualized in terms of the relationship between the job and the person. According to this view, job stress and the potential for ill health develop when job demands are at variance with the needs, expectations or capacities of the worker. This core feature is implicit in figure 1, which shows the basic elements of a stress model favoured by researchers at the National Institute for Occupational Safety and Health (NIOSH). In this model, work-related psychosocial factors (termed stressors) result in psychological, behavioural and physical reactions which may ultimately influence health. However, as illustrated in figure 1, individual and contextual factors (termed stress moderators) intervene to influence the effects of job stressors on health and well-being. (See Hurrell and Murphy 1992 for a more elaborate description of the NIOSH stress model.)

Figure 1. The Job Stress Model of the National Institute for Occupational Safety and Health (NIOSH)


But putting aside this conceptual similarity, there are also non-trivial theoretical differences among these models. For example, unlike the NIOSH and P-E fit models of job stress, which acknowledge a host of potential psychosocial risk factors in the workplace, the job demands-job control model focuses most intensely on a more limited range of psychosocial dimensions pertaining to psychological workload and opportunity for workers to exercise control (termed decision latitude) over aspects of their jobs. Further, both the demand-control and the NIOSH models can be distinguished from the P-E fit models in terms of the focus placed on the individual. In the P-E fit model, emphasis is placed on individuals’ perceptions of the balance between features of the job and individual attributes. This focus on perceptions provides a bridge between P-E fit theory and another variant of stress theory attributed to Lazarus (1966), in which individual differences in appraisal of psychosocial stressors and in coping strategies become critically important in determining stress outcomes. In contrast, while not denying the importance of individual differences, the NIOSH stress model gives primacy to environmental factors in determining stress outcomes as suggested by the geometry of the model illustrated in figure 1. In essence, the model suggests that most stressors will be threatening to most of the people most of the time, regardless of circumstances. A similar emphasis can be seen in other models of stress and job stress (e.g., Cooper and Marshall 1976; Kagan and Levi 1971; Matteson and Ivancevich 1987).

These differences have important implications for both guiding job stress research and intervention strategies at the workplace. The NIOSH model, for example, argues for primary prevention of job stress via attention first to psychosocial stressors in the workplace and, in this regard, is consistent with a public health model of prevention. Although a public health approach recognizes the importance of host factors or resistance in the aetiology of disease, the first line of defence in this approach is to eradicate or reduce exposure to environmental pathogens.

The NIOSH stress model illustrated in figure 1 provides an organizing framework for the remainder of this section. Following the discussions of job stress models are short articles containing summaries of current knowledge on workplace psychosocial stressors and on stress moderators. These subsections address conditions which have received wide attention in the literature as stressors and stress moderators, as well as topics of emerging interest such as organizational climate and career stage. Prepared by leading authorities in the field, each summary provides a definition and brief overview of relevant literature on the topic. Further, to maximize the utility of these summaries, each contributor has been asked to include information on measurement or assessment methods and on prevention practices.

The final subsection of the chapter reviews current knowledge on a wide range of potential health risks of job stress and underlying mechanisms for these effects. Discussion ranges from traditional concerns, such as psychological and cardiovascular disorders, to emerging topics such as depressed immune function and musculoskeletal disease.

In summary, recent years have witnessed unprecedented changes in the design and demands of work, and the emergence of job stress as a major concern in occupational health. This section of the Encyclopaedia tries to promote understanding of psychosocial risks posed by the evolving work environment, and thus better protect the well-being of workers.


Monday, 20 December 2010 19:25

Genetic Determinants of Toxic Response

It has long been recognized that each person’s response to environmental chemicals is different. The recent explosion in molecular biology and genetics has brought a clearer understanding about the molecular basis of such variability. Major determinants of individual response to chemicals include important differences among more than a dozen superfamilies of enzymes, collectively termed xenobiotic- (foreign to the body) or drug-metabolizing enzymes. Although the role of these enzymes has classically been regarded as detoxification, these same enzymes also convert a number of inert compounds to highly toxic intermediates. Recently, many subtle as well as gross differences in the genes encoding these enzymes have been identified, which have been shown to result in marked variations in enzyme activity. It is now clear that each individual possesses a distinct complement of xenobiotic-metabolizing enzyme activities; this diversity might be thought of as a “metabolic fingerprint”. It is the complex interplay of these many different enzyme superfamilies which ultimately determines not only the fate and the potential for toxicity of a chemical in any given individual, but also the assessment of exposure. In this article we have chosen to use the cytochrome P450 enzyme superfamily to illustrate the remarkable progress made in understanding individual response to chemicals. The development of relatively simple DNA-based tests designed to identify specific gene alterations in these enzymes is now providing more accurate predictions of individual response to chemical exposure. We hope the result will be preventive toxicology. In other words, each individual might learn about those chemicals to which he or she is particularly sensitive, thereby avoiding previously unpredictable toxicity or cancer.

Although it is not generally appreciated, human beings are exposed daily to a barrage of innumerable diverse chemicals. Many of these chemicals are highly toxic, and they are derived from a wide variety of environmental and dietary sources. The relationship between such exposures and human health has been, and continues to be, a major focus of biomedical research efforts worldwide.

What are some examples of this chemical bombardment? More than 400 chemicals from red wine have been isolated and characterized. At least 1,000 chemicals are estimated to be produced by a lighted cigarette. There are countless chemicals in cosmetics and perfumed soaps. Another major source of chemical exposure is agriculture: in the United States alone, farmlands receive more than 75,000 chemicals each year in the form of pesticides, herbicides and fertilizing agents; after uptake by plants and grazing animals, as well as fish in nearby waterways, humans (at the end of the food chain) ingest these chemicals. Two other sources of large concentrations of chemicals taken into the body include (a) drugs taken chronically and (b) exposure to hazardous substances in the workplace over a lifetime of employment.

It is now well established that chemical exposure may adversely affect many aspects of human health, causing chronic diseases and the development of many cancers. In the last decade or so, the molecular basis of many of these relationships has begun to be unravelled. In addition, the realization has emerged that humans differ markedly in their susceptibility to the harmful effects of chemical exposure.

Current efforts to predict human response to chemical exposure combine two fundamental approaches (figure 1): monitoring the extent of human exposure through biological markers (biomarkers), and predicting the likely response of an individual to a given level of exposure. Although both of these approaches are extremely important, it should be emphasized that the two are distinctly different from one another. This article will focus on the genetic factors underlying individual susceptibility to any particular chemical exposure. This field of research is broadly termed ecogenetics, or pharmacogenetics (see Kalow 1962 and 1992). Many of the recent advances in determining individual susceptibility to chemical toxicity have evolved from a greater appreciation of the processes by which humans and other mammals detoxify chemicals, and the remarkable complexity of the enzyme systems involved.

Figure 1. The interrelationships among exposure assessment, ethnic differences, age, diet, nutrition and genetic susceptibility assessment, all of which play a role in the individual risk of toxicity and cancer

We will first describe the variability of toxic responses in humans. We will then introduce some of the enzymes responsible for such variation in response, due to differences in the metabolism of foreign chemicals. Next, the history and nomenclature of the cytochrome P450 superfamily will be detailed. Five human P450 polymorphisms as well as several non-P450 polymorphisms will be briefly described; these are responsible for human differences in toxic response. We will then discuss an example to emphasize the point that genetic differences in individuals can influence exposure assessment, as determined by environmental monitoring. Lastly, we will discuss the role of these xenobiotic-metabolizing enzymes in critical life functions.

Variation in Toxic Response Among the Human Population

Toxicologists and pharmacologists commonly speak about the average lethal dose for 50% of the population (LD50), the average maximal tolerated dose for 50% of the population (MTD50), and the average effective dose of a particular drug for 50% of the population (ED50). However, how do these doses affect each of us as individuals? A highly sensitive individual may be 500 times more affected, or 500 times more likely to be affected, than the most resistant individual in a population; for such people, the LD50 (and MTD50 and ED50) values have little meaning. LD50, MTD50 and ED50 values are relevant only when referring to the population as a whole.

Figure 2 illustrates a hypothetical dose-response relationship for a toxic response by individuals in any given population. This generic diagram might represent bronchogenic carcinoma in response to the number of cigarettes smoked, chloracne as a function of dioxin levels in the workplace, asthma as a function of air concentrations of ozone or aldehyde, sunburn in response to ultraviolet light, decreased clotting time as a function of aspirin intake, or gastrointestinal distress in response to the number of jalapeño peppers consumed. Generally, in each of these instances, the greater the exposure, the greater the toxic response. Most of the population will exhibit a toxic response close to the population mean for any given dose. The “resistant outlier” (lower right in figure 2) is an individual having less of a response at higher doses or exposures. A “sensitive outlier” (upper left) is an individual having an exaggerated response to a relatively small dose or exposure. These outliers, with extreme differences in response compared to the majority of individuals in the population, may represent important genetic variants that can help scientists understand the underlying molecular mechanisms of a toxic response.

Figure 2. Generic relationship between any toxic response and the dose of any environmental, chemical or physical agent

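To make these population concepts concrete, the following sketch (illustrative Python with arbitrary parameter values, not data from any cited study) simulates a population whose individual tolerance doses are log-normally distributed, recovers the population LD50 as the median tolerance, and flags the rare sensitive and resistant outliers of figure 2:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: each individual responds once the dose exceeds his or her
# personal tolerance; log-tolerances are normally distributed
# (location and scale are arbitrary, for illustration only).
log_tolerance = rng.normal(loc=np.log(10.0), scale=0.5, size=10_000)
tolerance = np.exp(log_tolerance)

# Population LD50: the dose at which half the population responds,
# i.e., the median individual tolerance.
ld50 = np.median(tolerance)

# Outliers: individuals more than 3 standard deviations from the mean
# log-tolerance (cf. the sensitive and resistant outliers of figure 2).
mean, sd = log_tolerance.mean(), log_tolerance.std()
sensitive = tolerance[log_tolerance < mean - 3 * sd]
resistant = tolerance[log_tolerance > mean + 3 * sd]

print(f"population LD50 is about {ld50:.1f} dose units")
print(f"{sensitive.size} sensitive and {resistant.size} resistant outliers")
```

The LD50 emerges here as a property of the whole distribution; a sensitive outlier responds at a small fraction of that dose, which is why population averages say little about any single individual.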

Using these outliers in family studies, scientists in a number of laboratories have begun to appreciate the importance of Mendelian inheritance for a given toxic response. Subsequently, one can then turn to molecular biology and genetic studies to pinpoint the underlying mechanism at the gene level (genotype) responsible for the environmentally caused disease (phenotype).

Xenobiotic- or Drug-metabolizing Enzymes

How does the body respond to the myriad of exogenous chemicals to which we are exposed? Humans and other mammals have evolved highly complex metabolic enzyme systems comprising more than a dozen distinct superfamilies of enzymes. Almost every chemical to which humans are exposed will be modified by these enzymes, in order to facilitate removal of the foreign substance from the body. Collectively, these enzymes are frequently referred to as drug-metabolizing enzymes or xenobiotic-metabolizing enzymes. Actually, both terms are misnomers. First, many of these enzymes metabolize not only drugs but also hundreds of thousands of environmental and dietary chemicals. Second, all of these enzymes also have normal body compounds as substrates; none of these enzymes metabolizes only foreign chemicals.

For more than four decades, the metabolic processes mediated by these enzymes have commonly been classified as either Phase I or Phase II reactions (figure 3). Phase I (“functionalization”) reactions generally involve relatively minor structural modifications of the parent chemical via oxidation, reduction or hydrolysis in order to produce a more water-soluble metabolite. Frequently, Phase I reactions provide a “handle” for further modification of a compound by subsequent Phase II reactions. Phase I reactions are primarily mediated by a superfamily of highly versatile enzymes, collectively termed cytochromes P450, although other enzyme superfamilies can also be involved (figure 4).

Figure 3. The classical designation of Phase I and Phase II xenobiotic- or drug-metabolizing enzymes

Figure 4. Examples of drug-metabolizing enzymes


Phase II reactions involve the coupling of a water-soluble endogenous molecule to a chemical (parent chemical or Phase I metabolite) in order to facilitate excretion. Phase II reactions are frequently termed “conjugation” or “derivatization” reactions. The enzyme superfamilies catalyzing Phase II reactions are generally named according to the endogenous conjugating moiety involved: for example, acetylation by the N-acetyltransferases, sulphation by the sulphotransferases, glutathione conjugation by the glutathione transferases, and glucuronidation by the UDP glucuronosyltransferases (figure 4). Although the major organ of drug metabolism is the liver, the levels of some drug-metabolizing enzymes are quite high in the gastrointestinal tract, gonads, lung, brain and kidney, and such enzymes are undoubtedly present to some extent in every living cell.
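The naming convention for the Phase II superfamilies lends itself to a compact lookup. The fragment below simply restates the pairings given in the paragraph above (illustrative Python, not a library API):

```python
# Phase II conjugation reactions and the enzyme superfamilies that
# catalyze them, as enumerated in the text.
PHASE_II_ENZYMES = {
    "acetylation": "N-acetyltransferases",
    "sulphation": "sulphotransferases",
    "glutathione conjugation": "glutathione transferases",
    "glucuronidation": "UDP glucuronosyltransferases",
}

for reaction, superfamily in PHASE_II_ENZYMES.items():
    print(f"{reaction}: catalyzed by the {superfamily}")
```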

Xenobiotic-metabolizing Enzymes Represent Double-edged Swords

As we learn more about the biological and chemical processes leading to human health aberrations, it has become increasingly evident that drug-metabolizing enzymes function in an ambivalent manner (figure 3). In the majority of cases, lipid-soluble chemicals are converted to more readily excreted water-soluble metabolites. However, it is clear that on many occasions the same enzymes are capable of transforming otherwise inert chemicals into highly reactive molecules. These intermediates can then interact with cellular macromolecules such as proteins and DNA. Thus, for each chemical to which humans are exposed, there exists the potential for the competing pathways of metabolic activation and detoxification.

Brief Review of Genetics

In human genetics, each gene (locus) is located on one of the 23 pairs of chromosomes. The two alleles (one present on each chromosome of the pair) can be the same, or they can be different from one another. Consider, for example, the B and b alleles, in which B (brown eyes) is dominant over b (blue eyes): individuals of the brown-eyed phenotype can have either the BB or Bb genotype, whereas individuals of the blue-eyed phenotype can only have the bb genotype.
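The dominance rule in this example can be stated as a one-line function (a hypothetical illustration, not from the source):

```python
def eye_colour(genotype: str) -> str:
    """One copy of the dominant allele 'B' suffices for brown eyes."""
    return "brown" if "B" in genotype else "blue"

# BB and Bb are both brown-eyed; only bb is blue-eyed.
for g in ("BB", "Bb", "bb"):
    print(g, "->", eye_colour(g))
```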

A polymorphism is defined as two or more stably inherited phenotypes (traits)—derived from the same gene(s)—that are maintained in the population, often for reasons not necessarily obvious. For a gene to be polymorphic, the gene product must not be essential for development, reproductive vigour or other critical life processes. In fact, a “balanced polymorphism,” wherein the heterozygote has a distinct survival advantage over either homozygote (e.g., resistance to malaria, and the sickle-cell haemoglobin allele) is a common explanation for maintaining an allele in the population at otherwise unexplained high frequencies (see Gonzalez and Nebert 1990).
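The persistence of such an allele can be made explicit with a standard population-genetics result (a textbook derivation, not taken from the sources cited here). If genotypes AA, Aa and aa have relative fitnesses 1 - s, 1 and 1 - t, with s, t > 0, equating the marginal fitnesses of the two alleles yields a stable internal equilibrium:

```latex
% Heterozygote advantage: w(AA) = 1 - s,  w(Aa) = 1,  w(aa) = 1 - t,  s, t > 0.
% At equilibrium the marginal fitnesses of the two alleles are equal:
% p(1 - s) + q = p + q(1 - t), which gives ps = qt and hence
\hat{p} = \frac{t}{s + t}, \qquad \hat{q} = 1 - \hat{p} = \frac{s}{s + t}
```

Because both homozygotes are selected against, neither allele can be eliminated, which is why an allele such as the sickle-cell variant is maintained at an appreciable frequency in regions where malaria confers the heterozygote advantage.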

Human Polymorphisms of Xenobiotic-metabolizing Enzymes

Genetic differences in the metabolism of various drugs and environmental chemicals have been known for more than four decades (Kalow 1962 and 1992). These differences are frequently referred to as pharmacogenetic or, more broadly, ecogenetic polymorphisms. These polymorphisms represent variant alleles that occur at a relatively high frequency in the population and are generally associated with aberrations in enzyme expression or function. Historically, polymorphisms were usually identified following unexpected responses to therapeutic agents. More recently, recombinant DNA technology has enabled scientists to identify the precise alterations in genes that are responsible for some of these polymorphisms. Polymorphisms have now been characterized in many drug-metabolizing enzymes—including both Phase I and Phase II enzymes. As more and more polymorphisms are identified, it is becoming increasingly apparent that each individual may possess a distinct complement of drug-metabolizing enzymes. This diversity might be described as a “metabolic fingerprint”. It is the complex interplay of the various drug-metabolizing enzyme superfamilies within any individual that will ultimately determine his or her particular response to a given chemical (Kalow 1962 and 1992; Nebert 1988; Gonzalez and Nebert 1990; Nebert and Weber 1990).

Expressing Human Xenobiotic-metabolizing Enzymes in Cell Culture

How might we develop better predictors of human toxic responses to chemicals? Advances in defining the multiplicity of drug-metabolizing enzymes must be accompanied by precise knowledge as to which enzymes determine the metabolic fate of individual chemicals. Data gleaned from laboratory rodent studies have certainly provided useful information. However, significant interspecies differences in xenobiotic-metabolizing enzymes necessitate caution in extrapolating data to human populations. To overcome this difficulty, many laboratories have developed systems in which various cell lines in culture can be engineered to produce functional human enzymes that are stable and in high concentrations (Gonzalez, Crespi and Gelboin 1991). Successful production of human enzymes has been achieved in a variety of diverse cell lines from sources including bacteria, yeast, insects and mammals.

In order to define the metabolism of chemicals even more accurately, multiple enzymes have also been successfully produced in a single cell line (Gonzalez, Crespi and Gelboin 1991). Such cell lines provide valuable insights into the precise enzymes involved in the metabolic processing of any given compound and likely toxic metabolites. If this information can then be combined with knowledge regarding the presence and level of an enzyme in human tissues, these data should provide valuable predictors of response.

Cytochrome P450

History and nomenclature

The cytochrome P450 superfamily is one of the most studied drug-metabolizing enzyme superfamilies, and one that displays a great deal of individual variability in response to chemicals. Cytochrome P450 is a convenient generic term used to describe a large superfamily of enzymes pivotal in the metabolism of innumerable endogenous and exogenous substrates. The term cytochrome P450 was first coined in 1962 to describe an unknown pigment in cells which, when reduced and bound with carbon monoxide, produced a characteristic absorption peak at 450 nm. Since the early 1980s, cDNA cloning technology has resulted in remarkable insights into the multiplicity of cytochrome P450 enzymes. To date, more than 400 distinct cytochrome P450 genes have been identified in animals, plants, bacteria and yeast. It has been estimated that any one mammalian species, such as humans, may possess 60 or more distinct P450 genes (Nebert and Nelson 1991). The multiplicity of P450 genes has necessitated the development of a standardized nomenclature system (Nebert et al. 1987; Nelson et al. 1993). First proposed in 1987 and updated on a biannual basis, the nomenclature system is based on the divergent evolution of the P450 proteins, as inferred from amino acid sequence comparisons. The P450 genes are divided into families and subfamilies: enzymes within a family display greater than 40% amino acid similarity, and those within the same subfamily display greater than 55% similarity. P450 genes are named with the root symbol CYP followed by an arabic numeral designating the P450 family, a letter denoting the subfamily, and a further arabic numeral designating the individual gene (Nelson et al. 1993; Nebert et al. 1991). Thus, CYP1A1 represents P450 gene 1 in family 1 and subfamily A.
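Because this naming rule is mechanical (the root CYP, a family numeral, a subfamily letter and a gene numeral), it can be checked in a few lines of code. The following sketch uses an illustrative regular expression, not an official validator:

```python
import re

# CYP + family number + subfamily letter(s) + individual gene number,
# e.g., CYP1A1 -> family 1, subfamily A, gene 1.
CYP_PATTERN = re.compile(r"^CYP(\d+)([A-Z]+)(\d+)$")

def parse_cyp(symbol: str) -> dict:
    m = CYP_PATTERN.match(symbol)
    if m is None:
        raise ValueError(f"not a CYP gene symbol: {symbol!r}")
    family, subfamily, gene = m.groups()
    return {"family": int(family), "subfamily": subfamily, "gene": int(gene)}

print(parse_cyp("CYP1A1"))   # {'family': 1, 'subfamily': 'A', 'gene': 1}
print(parse_cyp("CYP2C19"))  # {'family': 2, 'subfamily': 'C', 'gene': 19}
```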

As of February 1995, there are 403 CYP genes in the database, composed of 59 families and 105 subfamilies. These include eight lower eukaryotic families, 15 plant families, and 19 bacterial families. The 15 human P450 gene families comprise 26 subfamilies, 22 of which have been mapped to chromosomal locations throughout most of the genome. Some sequences are clearly orthologous across many species—for example, only one CYP17 (steroid 17α-hydroxylase) gene has been found in all vertebrates examined to date; other sequences within a subfamily are highly duplicated, making the identification of orthologous pairs impossible (e.g., the CYP2C subfamily). Interestingly, human and yeast share an orthologous gene in the CYP51 family. Numerous comprehensive reviews are available for readers seeking further information on the P450 superfamily (Nelson et al. 1993; Nebert et al. 1991; Nebert and McKinnon 1994; Guengerich 1993; Gonzalez 1992).

The success of the P450 nomenclature system has resulted in similar terminology systems being developed for the UDP glucuronosyltransferases (Burchell et al. 1991) and flavin-containing mono-oxygenases (Lawton et al. 1994). Similar nomenclature systems based on divergent evolution are also under development for several other drug-metabolizing enzyme superfamilies (e.g., sulphotransferases, epoxide hydrolases and aldehyde dehydrogenases).

Recently, we divided the mammalian P450 gene superfamily into three groups (Nebert and McKinnon 1994)—those involved principally with foreign chemical metabolism, those involved in the synthesis of various steroid hormones, and those participating in other important endogenous functions. It is the xenobiotic-metabolizing P450 enzymes that assume the most significance for prediction of toxicity.

Xenobiotic-metabolizing P450 enzymes

P450 enzymes involved in the metabolism of foreign compounds and drugs are almost always found within families CYP1, CYP2, CYP3 and CYP4. These P450 enzymes catalyze a wide variety of metabolic reactions, with a single P450 often capable of metabolizing many different compounds. In addition, multiple P450 enzymes may metabolize a single compound at different sites. Also, a compound may be metabolized at the same, single site by several P450s, although at varying rates.

A most important property of the drug-metabolizing P450 enzymes is that many of these genes are inducible by the very substances which serve as their substrates. On the other hand, other P450 genes are induced by nonsubstrates. This phenomenon of enzyme induction underlies many drug-drug interactions of therapeutic importance.

Although present in many tissues, these particular P450 enzymes are found in relatively high levels in the liver, the primary site of drug metabolism. Some of the xenobiotic-metabolizing P450 enzymes exhibit activity toward certain endogenous substrates (e.g., arachidonic acid). However, it is generally believed that most of these xenobiotic-metabolizing P450 enzymes do not play important physiological roles—although this has not been established experimentally as yet. The selective homozygous disruption, or “knock-out,” of individual xenobiotic-metabolizing P450 genes by means of gene targeting methodologies in mice is likely to provide unequivocal information soon with regard to physiological roles of the xenobiotic-metabolizing P450s (for a review of gene targeting, see Capecchi 1994).

In contrast to P450 families encoding enzymes involved primarily in physiological processes, families encoding xenobiotic-metabolizing P450 enzymes display marked species specificity and frequently contain many active genes per subfamily (Nelson et al. 1993; Nebert et al. 1991). Given the apparent lack of physiological substrates, it is possible that the P450 enzymes in families CYP1, CYP2, CYP3 and CYP4, which have appeared over the past several hundred million years, evolved as a means of detoxifying foreign chemicals encountered in the environment and diet. Clearly, evolution of the xenobiotic-metabolizing P450s would have occurred over a time period which far precedes the synthesis of most of the synthetic chemicals to which humans are now exposed. The genes in these four gene families may have evolved and diverged in animals due to their exposure to plant metabolites during the last 1.2 billion years—a process descriptively termed “animal-plant warfare” (Gonzalez and Nebert 1990). Animal-plant warfare is the phenomenon in which plants developed new chemicals (phytoalexins) as a defence mechanism in order to prevent ingestion by animals, and animals, in turn, responded by developing new P450 genes to accommodate the diversifying substrates. Providing further impetus to this proposal are the recently described examples of plant-insect and plant-fungus chemical warfare involving P450 detoxification of toxic substrates (Nebert 1994).

The following is a brief introduction to several of the human xenobiotic-metabolizing P450 enzyme polymorphisms in which genetic determinants of toxic response are believed to be of high significance. Until recently, P450 polymorphisms were generally suggested by unexpected variance in patient response to administered therapeutic agents. Several P450 polymorphisms are indeed named according to the drug with which the polymorphism was first identified. More recently, research efforts have focused on identification of the precise P450 enzymes involved in the metabolism of chemicals for which variance is observed, and on the precise characterization of the P450 genes involved. As described earlier, the measurable activity of a P450 enzyme towards a model chemical can be called the phenotype. The particular combination of alleles of a P450 gene that an individual carries is termed the P450 genotype. As more and more scrutiny is applied to the analysis of P450 genes, the precise molecular basis of previously documented phenotypic variance is becoming clearer.

The CYP1A subfamily

The CYP1A subfamily comprises two enzymes in humans and all other mammals: these are designated CYP1A1 and CYP1A2 under standard P450 nomenclature. These enzymes are of considerable interest, because they are involved in the metabolic activation of many procarcinogens and are also induced by several compounds of toxicological concern, including dioxin. For example, CYP1A1 metabolically activates many compounds found in cigarette smoke. CYP1A2 metabolically activates many arylamines—associated with urinary bladder cancer—found in the chemical dye industry. CYP1A2 also metabolically activates 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK), a tobacco-derived nitrosamine. CYP1A1 and CYP1A2 are also found at higher levels in the lungs of cigarette smokers, due to induction by polycyclic hydrocarbons present in the smoke. The levels of CYP1A1 and CYP1A2 activity are therefore considered to be important determinants of individual response to many potentially toxic chemicals.

Toxicological interest in the CYP1A subfamily was greatly intensified by a 1973 report correlating the level of CYP1A1 inducibility in cigarette smokers with individual susceptibility to lung cancer (Kellermann, Shaw and Luyten-Kellermann 1973). The molecular basis of CYP1A1 and CYP1A2 induction has been a major focus of numerous laboratories. The induction process is mediated by a protein termed the Ah receptor to which dioxins and structurally related chemicals bind. The name Ah is derived from the aryl hydrocarbon nature of many CYP1A inducers. Interestingly, differences in the gene encoding the Ah receptor between strains of mice result in marked differences in chemical response and toxicity. A polymorphism in the Ah receptor gene also appears to occur in humans: approximately one-tenth of the population displays high induction of CYP1A1 and may be at greater risk than the other nine-tenths of the population for development of certain chemically induced cancers. The role of the Ah receptor in the control of enzymes in the CYP1A subfamily, and its role as a determinant of human response to chemical exposure, has been the subject of several recent reviews (Nebert, Petersen and Puga 1991; Nebert, Puga and Vasiliou 1993).

Are there other polymorphisms that might control the level of CYP1A proteins in a cell? A polymorphism in the CYP1A1 gene has also been identified, and this appears to influence lung cancer risk amongst Japanese cigarette smokers, although this same polymorphism does not appear to influence risk in other ethnic groups (Nebert and McKinnon 1994).

CYP2C19

Variations in the rate at which individuals metabolize the anticonvulsant drug (S)-mephenytoin have been well documented for many years (Guengerich 1989). Between 2% and 5% of Caucasians and as many as 25% of Asians are deficient in this activity and may be at greater risk of toxicity from the drug. This enzyme defect has long been known to involve a member of the human CYP2C subfamily, but the precise molecular basis of this deficiency has been the subject of considerable controversy. The major reason for this difficulty was the existence of six or more genes in the human CYP2C subfamily. It was recently demonstrated, however, that a single-base mutation in the CYP2C19 gene is the primary cause of this deficiency (Goldstein and de Morais 1994). A simple DNA test, based on the polymerase chain reaction (PCR), has also been developed to identify this mutation rapidly in human populations (Goldstein and de Morais 1994).

CYP2D6

Perhaps the most extensively characterized variation in a P450 gene is that involving the CYP2D6 gene. More than a dozen examples of mutations, rearrangements and deletions affecting this gene have been described (Meyer 1994). This polymorphism was first suggested 20 years ago by clinical variability in patients’ response to the antihypertensive agent debrisoquine. Alterations in the CYP2D6 gene giving rise to altered enzyme activity are therefore collectively termed the debrisoquine polymorphism.

Prior to the advent of DNA-based studies, individuals had been classified as poor or extensive metabolizers (PMs, EMs) of debrisoquine based on metabolite concentrations in urine samples. It is now clear that alterations in the CYP2D6 gene may result in individuals displaying not only poor or extensive debrisoquine metabolism, but also ultrarapid metabolism. Most alterations in the CYP2D6 gene are associated with partial or total deficiency of enzyme function; however, individuals in two families have recently been described who possess multiple functional copies of the CYP2D6 gene, giving rise to ultrarapid metabolism of CYP2D6 substrates (Meyer 1994). This remarkable observation provides new insights into the wide spectrum of CYP2D6 activity previously observed in population studies. Alterations in CYP2D6 function are of particular significance, given the more than 30 commonly prescribed drugs metabolized by this enzyme. An individual’s CYP2D6 function is therefore a major determinant of both therapeutic and toxic response to administered therapy. Indeed, it has recently been argued that consideration of a patient’s CYP2D6 status is necessary for the safe use of both psychiatric and cardiovascular drugs.
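Since the phenotypes described above track, in part, the number of functional CYP2D6 gene copies an individual carries, the mapping can be sketched as a simple rule (an illustrative simplification; the category boundaries are assumptions, and real phenotype assignment must also weigh partially functional alleles):

```python
def cyp2d6_phenotype(functional_copies: int) -> str:
    """Illustrative mapping from functional CYP2D6 gene copies to a
    predicted metabolizer status (simplified; not a clinical rule)."""
    if functional_copies == 0:
        return "poor metabolizer (PM)"
    if functional_copies <= 2:
        return "extensive metabolizer (EM)"
    return "ultrarapid metabolizer"

for n in (0, 1, 2, 4):
    print(n, "functional copies ->", cyp2d6_phenotype(n))
```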

The role of the CYP2D6 polymorphism as a determinant of individual susceptibility to human diseases such as lung cancer and Parkinson’s disease has also been the subject of intense study (Nebert and McKinnon 1994; Meyer 1994). While conclusions are difficult to define given the diverse nature of the study protocols utilized, the majority of studies appear to indicate an association between extensive metabolizers of debrisoquine (EM phenotype) and lung cancer. The reasons for such an association are presently unclear. However, the CYP2D6 enzyme has been shown to metabolize NNK, a tobacco-derived nitrosamine.

As DNA-based assays improve, enabling even more accurate assessment of CYP2D6 status, it is anticipated that the precise relationship of CYP2D6 to disease risk will be clarified. Whereas the extensive metabolizer may be linked with susceptibility to lung cancer, the poor metabolizer (PM phenotype) appears to be associated with Parkinson’s disease of unknown cause. Although these studies are likewise difficult to compare, it appears that PM individuals, having a diminished capacity to metabolize CYP2D6 substrates (such as debrisoquine), have a 2- to 2.5-fold increase in the risk of developing Parkinson’s disease.

CYP2E1

The CYP2E1 gene encodes an enzyme that metabolizes many chemicals, including drugs and many low-molecular-weight carcinogens. This enzyme is also of interest because it is highly inducible by alcohol and may play a role in liver injury induced by chemicals such as chloroform, vinyl chloride and carbon tetrachloride. The enzyme is primarily found in the liver, and the level of enzyme varies markedly between individuals. Close scrutiny of the CYP2E1 gene has resulted in the identification of several polymorphisms (Nebert and McKinnon 1994). A relationship has been reported between the presence of certain structural variations in the CYP2E1 gene and apparent lowered lung cancer risk in some studies; however, there are clear interethnic differences, and this possible relationship requires clarification.

The CYP3A subfamily

In humans, four enzymes have been identified as members of the CYP3A subfamily due to their similarity in amino acid sequence. The CYP3A enzymes metabolize many commonly prescribed drugs such as erythromycin and cyclosporin. The carcinogenic food contaminant aflatoxin B1 is also a CYP3A substrate. One member of the human CYP3A subfamily, designated CYP3A4, is the principal P450 in human liver as well as being present in the gastrointestinal tract. As is true for many other P450 enzymes, the level of CYP3A4 is highly variable between individuals. A second enzyme, designated CYP3A5, is found in only approximately 25% of livers; the genetic basis of this finding has not been elucidated. The importance of CYP3A4 or CYP3A5 variability as a factor in genetic determinants of toxic response has not yet been established (Nebert and McKinnon 1994).

Non-P450 Polymorphisms

Numerous polymorphisms also exist within other xenobiotic-metabolizing enzyme superfamilies (e.g., glutathione transferases, UDP glucuronosyltransferases, paraoxonases, dehydrogenases, N-acetyltransferases and flavin-containing mono-oxygenases). Because the ultimate toxicity of any P450-generated intermediate is dependent on the efficiency of subsequent Phase II detoxification reactions, the combined role of multiple enzyme polymorphisms is important in determining susceptibility to chemically induced diseases. The metabolic balance between Phase I and Phase II reactions (figure 3) is therefore likely to be a major factor in chemically induced human diseases and genetic determinants of toxic response.
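One way to see why this balance matters is a toy model of competing first-order pathways (the rate constants below are assumptions, purely for illustration): a reactive Phase I intermediate is either conjugated by a Phase II enzyme, with rate constant k_detox, or binds cellular macromolecules, with rate constant k_bind, so the fraction that goes on to cause damage is k_bind / (k_bind + k_detox).

```python
def fraction_bound(k_bind: float, k_detox: float) -> float:
    """Fraction of a reactive Phase I intermediate that binds cellular
    macromolecules rather than being conjugated, in a toy model of two
    competing first-order pathways (rate constants are illustrative)."""
    return k_bind / (k_bind + k_detox)

# A tenfold reduction in Phase II capacity (e.g., modelling a deficient
# conjugating enzyme as a lower k_detox) raises the damaging fraction
# roughly sevenfold in this toy example.
print(fraction_bound(k_bind=1.0, k_detox=20.0))  # ~0.048
print(fraction_bound(k_bind=1.0, k_detox=2.0))   # ~0.33
```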

The GSTM1 gene polymorphism

A well studied example of a polymorphism in a Phase II enzyme is that involving a member of the glutathione S-transferase enzyme superfamily, designated GST mu or GSTM1. This particular enzyme is of considerable toxicological interest because it appears to be involved in the subsequent detoxification of toxic metabolites produced from chemicals in cigarette smoke by the CYP1A1 enzyme. The identified polymorphism in this glutathione transferase gene involves a total absence of functional enzyme in as many as half of all Caucasians studied. This lack of a Phase II enzyme appears to be associated with increased susceptibility to lung cancer. By grouping individuals on the basis of both variant CYP1A1 genes and the deletion or presence of a functional GSTM1 gene, it has been demonstrated that the risk of developing smoking-induced lung cancer varies significantly (Kawajiri, Watanabe and Hayashi 1994). In particular, individuals displaying one rare CYP1A1 gene alteration, in combination with an absence of the GSTM1 gene, were at higher risk (as much as ninefold) of developing lung cancer when exposed to a relatively low level of cigarette smoke. Interestingly, there appear to be interethnic differences in the significance of variant genes which necessitate further study in order to elucidate the precise role of such alterations in susceptibility to disease (Kalow 1962; Nebert and McKinnon 1994; Kawajiri, Watanabe and Hayashi 1994).

Synergistic effect of two or more polymorphisms on the toxic response

A toxic response to an environmental agent may be greatly exaggerated by the combination of two pharmacogenetic defects in the same individual, for example, the combined effects of the N-acetyltransferase (NAT2) polymorphism and the glucose-6-phosphate dehydrogenase (G6PD) polymorphism.

Occupational exposure to arylamines constitutes a grave risk of urinary bladder cancer. Since the elegant studies of Cartwright in 1954, it has become clear that the N-acetylator status is a determinant of azo-dye-induced bladder cancer. There is a highly significant correlation between the slow-acetylator phenotype and the occurrence of bladder cancer, as well as the degree of invasiveness of this cancer in the bladder wall. Conversely, there is a significant association between the rapid-acetylator phenotype and the incidence of colorectal carcinoma. The N-acetyltransferase (NAT1, NAT2) genes have been cloned and sequenced, and DNA-based assays are now able to detect the more than a dozen allelic variants which account for the slow-acetylator phenotype. The NAT2 gene is polymorphic and responsible for most of the variability in toxic response to environmental chemicals (Weber 1987; Grant 1993).

Glucose-6-phosphate dehydrogenase (G6PD) is an enzyme critical in the generation and maintenance of NADPH. Low or absent G6PD activity can lead to severe drug- or xenobiotic-induced haemolysis, due to the absence of normal levels of reduced glutathione (GSH) in the red blood cell. G6PD deficiency affects at least 300 million people worldwide. More than 10% of African-American males exhibit the less severe phenotype, while certain Sardinian communities exhibit the more severe “Mediterranean type” at frequencies as high as one in every three persons. The G6PD gene has been cloned and localized to the X chromosome, and numerous diverse point mutations account for the large degree of phenotypic heterogeneity seen in G6PD-deficient individuals (Beutler 1992).

Thiazolsulphone, an arylamine sulpha drug, was found to cause a bimodal distribution of haemolytic anaemia in the treated population. When treated with certain drugs, individuals with the combination of G6PD deficiency plus the slow-acetylator phenotype are more affected than those with G6PD deficiency alone or the slow-acetylator phenotype alone: G6PD-deficient slow acetylators are at least 40 times more susceptible than normal-G6PD rapid acetylators to thiazolsulphone-induced haemolysis.

Effect of genetic polymorphisms on exposure assessment

Exposure assessment and biomonitoring (figure 1) also require information on the genetic make-up of each individual. Given identical exposure to a hazardous chemical, the level of haemoglobin adducts (or other biomarkers) might vary by two or three orders of magnitude among individuals, depending upon each person’s metabolic fingerprint.

The same combination of pharmacogenetic phenotypes has been studied in chemical factory workers in Germany (table 1). Haemoglobin adducts among workers exposed to aniline and acetanilide are by far the highest in G6PD-deficient slow acetylators, as compared with the other possible combined pharmacogenetic phenotypes. This study has important implications for exposure assessment: although two individuals might be exposed to the same ambient level of a hazardous chemical in the workplace, their internal doses (as estimated via biomarkers such as haemoglobin adducts) might differ by two or more orders of magnitude, owing to their underlying genetic predispositions. Likewise, the resulting risk of an adverse health effect may vary by two or more orders of magnitude.

Table 1: Haemoglobin adducts in workers exposed to aniline and acetanilide

Acetylator status    G6PD deficiency    Hgb adducts
Fast                 No                   2
Fast                 Yes                 30
Slow                 No                  20
Slow                 Yes                100

Source: Adapted from Lewalter and Korallus 1985.
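To make the spread in table 1 concrete, the following minimal Python sketch encodes the four phenotype combinations as a simple lookup table, using the adduct values given above; the function and variable names are illustrative and not from the source.

# Haemoglobin-adduct levels by combined phenotype, as given in table 1
# (adapted from Lewalter and Korallus 1985).
# Key: (acetylator status, G6PD deficient?)
HGB_ADDUCTS = {
    ("fast", False): 2,     # fast acetylator, normal G6PD
    ("fast", True): 30,     # fast acetylator, G6PD-deficient
    ("slow", False): 20,    # slow acetylator, normal G6PD
    ("slow", True): 100,    # slow acetylator, G6PD-deficient
}

def adduct_level(acetylator: str, g6pd_deficient: bool) -> int:
    """Expected relative adduct level for identical ambient exposure."""
    return HGB_ADDUCTS[(acetylator, g6pd_deficient)]

# The most susceptible combined phenotype shows a 50-fold higher adduct
# level than the least susceptible one (100 vs. 2) at identical exposure.
spread = adduct_level("slow", True) / adduct_level("fast", False)
print(f"phenotype-to-phenotype spread: {spread:.0f}-fold")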

Genetic differences in binding as well as metabolism

It should be emphasized that the same case made here for metabolism can also be made for binding. Heritable differences in the binding of environmental agents will greatly affect the toxic response. For example, differences in the mouse cdm gene can profoundly affect individual sensitivity to cadmium-induced testicular necrosis (Taylor, Heiniger and Meier 1973). Differences in the binding affinity of the Ah receptor are likely to affect dioxin-induced toxicity and cancer (Nebert, Petersen and Puga 1991; Nebert, Puga and Vasiliou 1993).

Figure 5 summarizes the role of metabolism and binding in toxicity and cancer. Toxic agents, as they exist in the environment or following metabolism or binding, elicit their effects by either a genotoxic pathway (in which damage to DNA occurs) or a non-genotoxic pathway (in which DNA damage and mutagenesis need not occur). Interestingly, it has recently become clear that “classical” DNA-damaging agents can operate via a reduced glutathione (GSH)-dependent nongenotoxic signal transduction pathway, which is initiated on or near the cell surface in the absence of DNA and outside the cell nucleus (Devary et al. 1993). Genetic differences in metabolism and binding remain, however, as the major determinants in controlling different individual toxic responses.

Figure 5. The general means by which toxicity occurs


Role of Drug-metabolizing Enzymes in Cellular Function

Genetically based variation in drug-metabolizing enzyme function is of major importance in determining individual response to chemicals. These enzymes are pivotal in determining the fate and time course of a foreign chemical following exposure.

As illustrated in figure 5, the importance of drug-metabolizing enzymes in individual susceptibility to chemical exposure may in fact be a far more complex issue than is evident from this simple discussion of xenobiotic metabolism. During the past two decades, genotoxic mechanisms (assessed through measurements of DNA adducts and protein adducts) have been greatly emphasized. However, what if nongenotoxic mechanisms are at least as important as genotoxic mechanisms in causing toxic responses?

As mentioned earlier, the physiological roles of many drug-metabolizing enzymes involved in xenobiotic metabolism have not been accurately defined. Nebert (1994) has proposed that, because of their presence on this planet for more than 3.5 billion years, drug-metabolizing enzymes were originally (and are now still primarily) responsible for regulating the cellular levels of many nonpeptide ligands important in the transcriptional activation of genes affecting growth, differentiation, apoptosis, homeostasis and neuroendocrine functions. Furthermore, the toxicity of most, if not all, environmental agents occurs by means of agonist or antagonist action on these signal transduction pathways (Nebert 1994). Based on this hypothesis, genetic variability in drug-metabolizing enzymes may have quite dramatic effects on many critical biochemical processes within the cell, thereby leading to important differences in toxic response. It is indeed possible that such a scenario may also underlie many idiosyncratic adverse reactions encountered in patients using commonly prescribed drugs.

Conclusions

The past decade has seen remarkable progress in our understanding of the genetic basis of differential response to the chemicals in drugs, foods and environmental pollutants. Drug-metabolizing enzymes have a profound influence on the way humans respond to chemicals. As our awareness of drug-metabolizing enzyme multiplicity continues to evolve, we are increasingly able to make improved assessments of toxic risk for many drugs and environmental chemicals. This is perhaps most clearly illustrated in the case of the CYP2D6 cytochrome P450 enzyme. Using relatively simple DNA-based tests, it is possible to predict the likely response to any drug predominantly metabolized by this enzyme; such predictions will enable the safer use of valuable, yet potentially toxic, medications.
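By way of illustration only, such a DNA-based prediction amounts to mapping a CYP2D6 genotype onto a metabolizer phenotype. The following Python sketch is a deliberately simplified, hypothetical scheme (a handful of alleles classified simply as functional or non-functional), not a clinical algorithm:

# Hypothetical, simplified allele classification: each allele either
# encodes a functional CYP2D6 enzyme or it does not.
FUNCTIONAL_ALLELE = {"*1": True, "*2": True, "*3": False, "*4": False, "*5": False}

def metabolizer_status(allele1: str, allele2: str) -> str:
    """Predict a crude CYP2D6 metabolizer phenotype from two alleles."""
    functional = sum(FUNCTIONAL_ALLELE[a] for a in (allele1, allele2))
    if functional == 0:
        return "poor metabolizer"          # candidate for dose reduction
    if functional == 1:
        return "intermediate metabolizer"
    return "extensive metabolizer"

print(metabolizer_status("*1", "*4"))  # intermediate metabolizer
print(metabolizer_status("*4", "*4"))  # poor metabolizer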

The future will no doubt see an explosion in the identification of further polymorphisms (phenotypes) involving drug-metabolizing enzymes. This information will be accompanied by improved, minimally invasive DNA-based tests to identify genotypes in human populations.

Such studies should be particularly informative in evaluating the role of chemicals in the many environmental diseases of presently unknown origin. The consideration of multiple drug-metabolizing enzyme polymorphisms in combination (e.g., table 1) is also likely to represent a particularly fertile research area. Such studies will clarify the role of chemicals in the causation of cancers. Collectively, this information should enable the formulation of increasingly individualized advice on the avoidance of chemicals likely to be of concern to the individual. This is the field of preventive toxicology. Such advice will no doubt greatly assist all individuals in coping with the ever-increasing chemical burden to which we are exposed.

 


Monday, 20 December 2010 19:23

Effect of Age, Sex and Other Factors

There are often large differences among humans in the intensity of response to toxic chemicals, as well as variations in the susceptibility of a given individual over a lifetime. These can be attributed to a variety of factors capable of influencing the absorption rate, distribution in the body, biotransformation and/or excretion rate of a particular chemical. Apart from the known hereditary factors which have been clearly demonstrated to be linked with increased susceptibility to chemical toxicity in humans (see “Genetic determinants of toxic response”), other factors include: constitutional characteristics related to age and sex; pre-existing disease states or a reduction in organ function (non-hereditary, i.e., acquired); dietary habits, smoking, alcohol consumption and use of medications; concomitant exposure to biotoxins (various micro-organisms) and physical factors (radiation, humidity, extremely low or high temperatures, or barometric pressures particularly relevant to the partial pressure of a gas), as well as concomitant physical exercise or psychological stress; and previous occupational and/or environmental exposure to a particular chemical, in particular concomitant exposure to other chemicals, not necessarily toxic ones (e.g., essential metals). The possible contributions of these factors in either increasing or decreasing susceptibility to adverse health effects, as well as the mechanisms of their action, are specific to each particular chemical. Therefore only the most common factors, basic mechanisms and a few characteristic examples will be presented here, whereas specific information concerning each particular chemical can be found elsewhere in this Encyclopaedia.

According to the stage at which these factors act (absorption, distribution, biotransformation or excretion of a particular chemical), the mechanisms can be roughly categorized according to two basic consequences of interaction: (1) a change in the quantity of the chemical in a target organ, that is, at the site(s) of its effect in the organism (toxicokinetic interactions), or (2) a change in the intensity of a specific response to the quantity of the chemical in a target organ (toxicodynamic interactions). The most common mechanisms of either type of interaction are related to competition with other chemical(s) for binding to the same compounds involved in their transport in the organism (e.g., specific serum proteins) and/or for the same biotransformation pathway (e.g., specific enzymes), resulting in a change in the speed of, or the sequence of events between, the initial reaction and the final adverse health effect. Both toxicokinetic and toxicodynamic interactions may influence individual susceptibility to a particular chemical. The influence of several concomitant factors can result in: (a) additive effects—the intensity of the combined effect is equal to the sum of the effects produced by each factor separately; (b) synergistic effects—the intensity of the combined effect is greater than the sum of the effects produced by each factor separately; or (c) antagonistic effects—the intensity of the combined effect is smaller than the sum of the effects produced by each factor separately.
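The definitions in (a) to (c) can be expressed as a simple comparison between the combined effect and the sum of the separate effects. The following minimal Python sketch assumes that all effects are measured on a common intensity scale and allows an arbitrary tolerance for measurement noise:

def classify_interaction(effect_a: float, effect_b: float,
                         effect_combined: float, tol: float = 0.05) -> str:
    """Classify a two-factor interaction by comparing the combined
    effect with the sum of the separate effects (same intensity scale);
    tol is a relative tolerance for calling the result merely additive."""
    expected = effect_a + effect_b
    if effect_combined > expected * (1 + tol):
        return "synergistic"   # combined effect exceeds the sum
    if effect_combined < expected * (1 - tol):
        return "antagonistic"  # combined effect falls short of the sum
    return "additive"          # combined effect is close to the sum

print(classify_interaction(1.0, 1.0, 3.5))  # synergistic
print(classify_interaction(1.0, 1.0, 2.0))  # additive
print(classify_interaction(1.0, 1.0, 1.2))  # antagonistic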

The quantity of a particular toxic chemical or characteristic metabolite at the site(s) of its effect in the human body can be assessed, more or less accurately, by biological monitoring, that is, by choosing the correct biological specimen and the optimal timing of specimen sampling, taking into account the biological half-lives of the particular chemical in both the critical organ and the measured biological compartment. However, reliable information concerning other possible factors that might influence individual susceptibility in humans is generally lacking, and consequently the majority of knowledge regarding the influence of various factors is based on experimental animal data.
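The role of the biological half-life in the timing of specimen sampling can be approximated by simple first-order elimination. The following sketch assumes a single compartment and a hypothetical eight-hour half-life; real half-lives are specific to the chemical and to the biological compartment measured:

def remaining_fraction(hours_since_exposure: float, half_life_h: float) -> float:
    """Fraction of a chemical remaining in a compartment under simple
    first-order (single-compartment) elimination."""
    return 0.5 ** (hours_since_exposure / half_life_h)

# Hypothetical chemical with an 8-hour half-life in blood: sampling at
# the end of the shift versus the next morning gives very different levels.
for t in (0, 8, 16, 24):
    print(f"{t:2d} h after exposure: {remaining_fraction(t, 8.0):.1%} remaining")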

It should be stressed that in some cases relatively large differences exist between humans and other mammals in the intensity of response to an equivalent level and/or duration of exposure to many toxic chemicals; for example, humans appear to be considerably more sensitive to the adverse health effects of several toxic metals than are rats (commonly used in experimental animal studies). Some of these differences can be attributed to the fact that the transportation, distribution and biotransformation pathways of various chemicals are greatly dependent on subtle changes in the tissue pH and the redox equilibrium in the organism (as are the activities of various enzymes), and that the redox system of the human differs considerably from that of the rat.

This is obviously the case regarding important antioxidants such as vitamin C and glutathione (GSH), which are essential for maintaining redox equilibrium and which have a protective role against the adverse effects of the oxygen- or xenobiotic-derived free radicals that are involved in a variety of pathological conditions (Kehrer 1993). Unlike the rat, humans cannot synthesize vitamin C, and both the levels and the turnover rate of erythrocyte GSH in humans are considerably lower than those in the rat. Humans also lack some of the protective antioxidant enzymes possessed by the rat and other mammals (e.g., GSH-peroxidase is considered to be poorly active in human sperm). These examples illustrate the potentially greater vulnerability of humans to oxidative stress (particularly in sensitive cells, e.g., the apparently greater vulnerability of human sperm to toxic influences compared with that of the rat), which can result in a different response, or greater susceptibility, to the influence of various factors in humans compared with other mammals (Telišman 1995).

Influence of Age

Compared to adults, very young children are often more susceptible to chemical toxicity because of their relatively greater inhalation volumes and higher gastrointestinal absorption rates (due to the greater permeability of the intestinal epithelium), as well as their immature detoxification enzyme systems and relatively lower excretion rates for toxic chemicals. The central nervous system appears to be particularly susceptible at the early stage of development to the neurotoxicity of various chemicals, for example, lead and methylmercury. On the other hand, the elderly may be susceptible because of their chemical exposure history and increased body stores of some xenobiotics, or because of pre-existing compromised function of target organs and/or relevant enzymes, resulting in lowered detoxification and excretion rates. Each of these factors can contribute to a weakening of the body’s defences (a decrease in reserve capacity), causing increased susceptibility to subsequent exposure to other hazards. For example, the cytochrome P450 enzymes (involved in the biotransformation pathways of almost all toxic chemicals) can be either induced or depressed in activity by the influence of various factors over a lifetime (including dietary habits, smoking, alcohol, use of medications and exposure to environmental xenobiotics).

Influence of Sex

Sex-related differences in susceptibility have been described for a large number of toxic chemicals (approximately 200), and such differences are found in many mammalian species. It appears that males are generally more susceptible to renal toxins and females to liver toxins. The causes of these differences in response between males and females have been related to differences in a variety of physiological processes (e.g., females are capable of additional excretion of some toxic chemicals through menstrual blood loss, breast milk and/or transfer to the foetus, but experience additional stress during pregnancy, delivery and lactation), enzyme activities, genetic repair mechanisms and hormonal factors, as well as to the presence of relatively larger fat depots in females, resulting in greater accumulation of some lipophilic toxic chemicals, such as organic solvents and some medications.

Influence of Dietary Habits

Dietary habits have an important influence on susceptibility to chemical toxicity, mostly because adequate nutrition is essential for the functioning of the body’s chemical defence system in maintaining good health. Adequate intake of essential metals (including metalloids) and proteins, especially the sulphur-containing amino acids, is necessary for the biosynthesis of various detoxifying enzymes and the provision of glycine and glutathione for conjugation reactions with endogenous and exogenous compounds. Lipids, especially phospholipids, and lipotropes (methyl group donors) are necessary for the synthesis of biological membranes. Carbohydrates provide the energy required for various detoxification processes and provide glucuronic acid for the conjugation of toxic chemicals and their metabolites. Selenium (an essential metalloid), glutathione, and vitamins such as vitamin C (water soluble), vitamin E and vitamin A (lipid soluble) have an important role as antioxidants (e.g., in controlling lipid peroxidation and maintaining the integrity of cellular membranes) and as free-radical scavengers for protection against toxic chemicals. In addition, various dietary constituents (protein and fibre content, minerals, phosphates, citric acid, etc.), as well as the amount of food consumed, can greatly influence the gastrointestinal absorption rate of many toxic chemicals (e.g., the average absorption rate of soluble lead salts taken with meals is approximately 8%, as opposed to approximately 60% in fasting subjects). However, diet itself can be an additional source of individual exposure to various toxic chemicals (e.g., considerably increased daily intake and accumulation of arsenic, mercury, cadmium and/or lead in subjects who consume contaminated seafood).
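The fed-versus-fasting figures for soluble lead salts translate directly into absorbed dose, as the following worked sketch shows; the ingested amount is hypothetical, and only the approximately 8% and 60% absorption fractions come from the text above.

def absorbed_dose_ug(ingested_ug: float, absorption_fraction: float) -> float:
    """Absorbed dose given an ingested amount and a GI absorption fraction."""
    return ingested_ug * absorption_fraction

intake_ug = 100.0  # hypothetical ingested amount of a soluble lead salt (µg)
with_meals = absorbed_dose_ug(intake_ug, 0.08)  # ~8% absorption with meals
fasting = absorbed_dose_ug(intake_ug, 0.60)     # ~60% absorption when fasting
print(f"with meals: {with_meals:.0f} µg; fasting: {fasting:.0f} µg "
      f"({fasting / with_meals:.1f}-fold higher)")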

Influence of Smoking

The habit of smoking can influence individual susceptibility to many toxic chemicals because of the variety of possible interactions involving the great number of compounds present in cigarette smoke (especially polycyclic aromatic hydrocarbons, carbon monoxide, benzene, nicotine, acrolein, some pesticides and cadmium, and, to a lesser extent, lead and other toxic metals), some of which are capable of accumulating in the human body over a lifetime, including during pre-natal life (e.g., lead and cadmium). The interactions occur mainly because various toxic chemicals compete for the same binding site(s) for transport and distribution in the organism and/or for the same biotransformation pathway involving particular enzymes. For example, several cigarette smoke constituents can induce cytochrome P450 enzymes, whereas others can depress their activity, and thus influence the common biotransformation pathways of many other toxic chemicals, such as organic solvents and some medications. Heavy cigarette smoking over a long period can considerably reduce the body’s defence mechanisms by decreasing the reserve capacity to cope with the adverse influence of other life-style factors.

Influence of Alcohol

Consumption of alcohol (ethanol) can influence susceptibility to many toxic chemicals in several ways. It can influence the absorption rate and distribution of certain chemicals in the body—for example, it can increase the gastrointestinal absorption rate of lead, or decrease the pulmonary absorption rate of mercury vapour by inhibiting the oxidation that is necessary for the retention of inhaled mercury vapour. Ethanol can also influence susceptibility to various chemicals through short-term changes in tissue pH and an increase in redox potential resulting from ethanol metabolism, as both the oxidation of ethanol to acetaldehyde and the oxidation of acetaldehyde to acetate produce an equivalent of reduced nicotinamide adenine dinucleotide (NADH) plus hydrogen (H+). Because the affinity of both essential and toxic metals and metalloids for binding to various compounds and tissues is influenced by pH and by changes in redox potential (Telišman 1995), even a moderate intake of ethanol may result in a series of consequences such as: (1) redistribution of long-term accumulated lead in the human organism in favour of a biologically active lead fraction; (2) replacement of essential zinc by lead in zinc-containing enzyme(s), thus affecting enzyme activity, or the influence of mobilized lead on the distribution of other essential metals and metalloids in the organism, such as calcium, iron, copper and selenium; and (3) increased urinary excretion of zinc, and so on. The effect of these possible events can be augmented by the fact that alcoholic beverages can contain an appreciable amount of lead from vessels or processing (Prpic-Majic et al. 1984; Telišman et al. 1984; 1993).

Another common reason for ethanol-related changes in susceptibility is that many toxic chemicals, for example, various organic solvents, share the same biotransformation pathway involving the cytochrome P450 enzymes. Depending on the intensity of exposure to organic solvents as well as the quantity and frequency of ethanol ingestion (i.e., acute or chronic alcohol consumption), ethanol can either decrease or increase biotransformation rates of various organic solvents and thus influence their toxicity (Sato 1991).

Influence of Medications

The common use of various medications can influence susceptibility to toxic chemicals mainly because many drugs bind to serum proteins and thus influence the transport, distribution or excretion rate of various toxic chemicals, or because many drugs are capable of inducing relevant detoxifying enzymes or depressing their activity (e.g., the cytochrome P450 enzymes), thus affecting the toxicity of chemicals sharing the same biotransformation pathway. Characteristic examples of these mechanisms are the increased urinary excretion of trichloroacetic acid (the metabolite of several chlorinated hydrocarbons) when salicylates, sulphonamides or phenylbutazone are used, and the increased hepato-nephrotoxicity of carbon tetrachloride when phenobarbital is used. In addition, some medications contain a considerable amount of a potentially toxic chemical, for example, the aluminium-containing antacids or preparations used for the therapeutic management of the hyperphosphataemia arising in chronic renal failure.

Influence of Concomitant Exposure to Other Chemicals

Changes in susceptibility to adverse health effects due to the interaction of various chemicals (i.e., possible additive, synergistic or antagonistic effects) have been studied almost exclusively in experimental animals, mostly in the rat; relevant epidemiological and clinical studies are lacking. This is of particular concern considering the relatively greater intensity of response, and the wider variety of adverse health effects, of several toxic chemicals in humans as compared with the rat and other mammals. Apart from published data in the field of pharmacology, most data relate only to combinations of two different chemicals within specific groups, such as various pesticides, organic solvents, or essential and/or toxic metals and metalloids.

Combined exposure to various organic solvents can result in additive, synergistic or antagonistic effects (depending on the particular combination of solvents and on the intensity and duration of exposure), mainly because the solvents can influence one another’s biotransformation (Sato 1991).

Another characteristic example is the interaction of essential and/or toxic metals and metalloids, since these are involved in the possible influence of age (e.g., a lifetime body accumulation of environmental lead and cadmium), sex (e.g., common iron deficiency in women), dietary habits (e.g., increased dietary intake of toxic metals and metalloids and/or deficient dietary intake of essential metals and metalloids), smoking habits and alcohol consumption (e.g., additional exposure to cadmium, lead and other toxic metals), and use of medications (e.g., a single dose of antacid can result in a 50-fold increase in the average daily intake of aluminium through food). The possibility of various additive, synergistic or antagonistic effects of exposure to various metals and metalloids in humans can be illustrated by basic examples related to the main toxic elements (see table 1, and the illustrative sketch following it), apart from which further interactions may occur because essential elements can also influence one another (e.g., the well-known antagonistic effect of copper on the gastrointestinal absorption rate, as well as on the metabolism, of zinc, and vice versa). The main cause of all these interactions is the competition of various metals and metalloids for the same binding site (especially the sulphydryl group, -SH) in various enzymes, metalloproteins (especially metallothionein) and tissues (e.g., cell membranes and organ barriers). These interactions may have a relevant role in the development of several chronic diseases which are mediated through the action of free radicals and oxidative stress (Telišman 1995).

Table 1. Basic effects of possible multiple interactions concerning the main toxic and/or essential metals and metalloids in mammals

Toxic metal or metalloid Basic effects of the interaction with other metal or metalloid
Aluminium (Al) Decreases the absorption rate of Ca and impairs the metabolism of Ca; deficient dietary Ca increases the absorption rate of Al. Impairs phosphate metabolism. Data on interactions with Fe, Zn and Cu are equivocal (i.e., the possible role of another metal as a mediator).
Arsenic (As) Affects the distribution of Cu (an increase of Cu in the kidney, and a decrease of Cu in the liver, serum and urine). Impairs the metabolism of Fe (an increase of Fe in the liver with concomitant decrease in haematocrit). Zn decreases the absorption rate of inorganic As and decreases the toxicity of As. Se decreases the toxicity of As and vice versa.
Cadmium (Cd) Decreases the absorption rate of Ca and impairs the metabolism of Ca; deficient dietary Ca increases the absorption rate of Cd. Impairs the phosphate metabolism, i.e., increases urinary excretion of phosphates. Impairs the metabolism of Fe; deficient dietary Fe increases the absorption rate of Cd. Affects the distribution of Zn; Zn decreases the toxicity of Cd, whereas its influence on the absorption rate of Cd is equivocal. Se decreases the toxicity of Cd. Mn decreases the toxicity of Cd at low-level exposure to Cd. Data on the interaction with Cu are equivocal (i.e., the possible role of Zn, or another metal, as a mediator). High dietary levels of Pb, Ni, Sr, Mg or Cr(III) can decrease the absorption rate of Cd.
Mercury (Hg) Affects the distribution of Cu (an increase of Cu in the liver). Zn decreases the absorption rate of inorganic Hg and decreases the toxicity of Hg. Se decreases the toxicity of Hg. Cd increases the concentration of Hg in the kidney, but at the same time decreases the toxicity of Hg in the kidney (the influence of the Cd-induced metallothionein synthesis).
Lead (Pb) Impairs the metabolism of Ca; deficient dietary Ca increases the absorption rate of inorganic Pb and increases the toxicity of Pb. Impairs the metabolism of Fe; deficient dietary Fe increases the toxicity of Pb, whereas its influence on the absorption rate of Pb is equivocal. Impairs the metabolism of Zn and increases urinary excretion of Zn; deficient dietary Zn increases the absorption rate of inorganic Pb and increases the toxicity of Pb. Se decreases the toxicity of Pb. Data on interactions with Cu and Mg are equivocal (i.e., the possible role of Zn, or another metal, as a mediator).

Note: Data are mostly related to experimental studies in the rat, whereas relevant clinical and epidemiological data (particularly regarding quantitative dose-response relationships) are generally lacking (Elsenhans et al. 1991; Fergusson 1990; Telišman et al. 1993).
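For readers who wish to work computationally with such qualitative interaction data, one possible encoding of a subset of table 1 as a lookup structure is sketched below in Python; the entries paraphrase the table, and the structure and names are purely illustrative.

# Illustrative encoding of a subset of table 1.
# Key: (toxic element, interacting element); value: summarized effect.
METAL_INTERACTIONS = {
    ("Cd", "Ca"): "Cd decreases Ca absorption; Ca-deficient diet increases Cd absorption",
    ("Cd", "Fe"): "Cd impairs Fe metabolism; Fe-deficient diet increases Cd absorption",
    ("Cd", "Se"): "Se decreases Cd toxicity",
    ("Hg", "Se"): "Se decreases Hg toxicity",
    ("Pb", "Ca"): "Ca-deficient diet increases Pb absorption and toxicity",
    ("Pb", "Zn"): "Pb impairs Zn metabolism; Zn-deficient diet increases Pb absorption and toxicity",
}

def interaction_summary(toxic: str, other: str) -> str:
    """Return the tabulated interaction summary, or a default note."""
    return METAL_INTERACTIONS.get((toxic, other), "no unequivocal interaction tabulated")

print(interaction_summary("Pb", "Ca"))
print(interaction_summary("Cd", "Cu"))  # data equivocal in table 1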

 
