Introduction
The patterning and duration of the hours a person works are a very important aspect of his or her experience of the work situation. Most workers feel that they are paid for their time rather than explicitly for their efforts, and thus the transaction between the worker and the employer is one of exchanging time for money. Thus, the quality of the time being exchanged is a very important part of the equation. Time that has high value because of its importance to the worker in terms of allowing sleep, interaction with family and friends and participation in community events may be more highly prized, and thus require extra financial compensation, as compared to normal “day work” time when many of the worker’s friends and family members are themselves at work or at school. The balance of the transaction can also be changed by making the time spent at work more congenial to the worker, for example, by improving working conditions. The commute to and from work is unavailable to the worker for recreation, so this time too must be considered as “grey time” (Knauth et al. 1983) and therefore a “cost” to the worker. Thus, measures such as compressed workweeks, which reduce the number of commuting trips taken per week, or flexitime, which reduces the commute time by allowing the worker to avoid the rush hour, are again likely to change the balance.
Background Literature
As Kogi (1991) has remarked, there is a general trend in both manufacturing and service industries towards greater flexibility in the temporal programming of work. There are a number of reasons for this trend, including the high cost of capital equipment, consumer demand for around-the-clock service, legislative pressure to reduce the length of the workweek and (in some societies such as the United States and Australia) taxation pressure on the employer to have as few different employees as possible. For many employees, the conventional “9 to 5” or “8 to 4”, Monday through Friday workweek is a thing of the past, either because of new work systems or because of the large amounts of excessive overtime required.
Kogi notes that while the benefits to the employer of such flexibility are quite clear in allowing extended business hours, accommodation of market demand and greater management flexibility, the benefits to the worker may be less certain. Unless the flexible schedule involves elements of choice for workers with respect to their particular hours of work, flexibility can often mean disruptions in their biological clocks and domestic situations. Extended work shifts may also lead to fatigue, compromising safety and productivity, as well as to increased exposure to chemical hazards.
Biological Disruptions due to Abnormal Work Hours
Human biology is specifically oriented towards wakefulness during daylight and sleep at night. Any work schedule which requires late evening or all-night wakefulness as a result of compressed workweeks, mandatory overtime or shiftwork will lead, therefore, to disruptions of the biological clock (Monk and Folkard 1992). These disruptions can be assessed by measuring workers’ “circadian rhythms”, which comprise regular fluctuations over the 24-hour period in vital signs, blood and urine composition, mood and performance efficiency (Aschoff 1981). The measure used most often in shiftwork studies has been body temperature, which, under normal conditions, shows a clear rhythm with a peak at about 2000 hours, a trough at about 0500 hours and a difference of about 0.7°C between the two. After an abrupt change in routine, the amplitude (size) of the rhythm diminishes and the phase (timing) of the rhythm is slow to adjust to the new schedule. Until the adjustment process is complete, sleep is disrupted and daytime mood and performance efficiency are impaired. These symptoms can be regarded as the shiftwork equivalent of jet-lag and can be extremely long-lasting (Knauth and Rutenfranz 1976).
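As a rough illustration (not part of the studies cited above), the rhythm parameters just described can be expressed in a simple cosine model. The Python sketch below assumes a mean core temperature of 37.0°C; the 24-hour period, the 2000-hour peak and the 0.7°C peak-to-trough difference are taken from the text. Note that a single cosine necessarily places the trough 12 hours after the peak (about 0800), somewhat later than the observed 0500 trough, so this is only a schematic approximation.

```python
import math

# Illustrative cosinor sketch of the circadian body-temperature rhythm.
# Peak (~2000 h) and the 0.7 degC peak-to-trough difference come from the
# text; the 37.0 degC mean (mesor) is an assumed placeholder value.
MESOR_C = 37.0       # assumed average core temperature
AMPLITUDE_C = 0.35   # half of the 0.7 degC peak-to-trough difference
PEAK_HOUR = 20.0     # acrophase at about 2000 hours

def body_temperature(hour_of_day: float) -> float:
    """Predicted core temperature (degC) at a given clock hour."""
    phase = 2 * math.pi * (hour_of_day - PEAK_HOUR) / 24.0
    return MESOR_C + AMPLITUDE_C * math.cos(phase)

if __name__ == "__main__":
    for h in (5, 11, 17, 20, 23):
        print(f"{h:02d}:00  {body_temperature(h):.2f} degC")
```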
Abnormal work hours can also lead to poor health. Although it has proved difficult to quantify the exact size of the effect, it appears that, in addition to sleep disorders, gastrointestinal disorders (including peptic ulcers) and cardiovascular disease can be more frequently found in shift workers (and former shift workers) than in day workers (Scott and LaDou 1990). There is also some preliminary evidence for increased incidence of psychiatric symptoms (Cole, Loving and Kripke 1990).
Social Disruptions due to Abnormal Work Hours
Not only human biology, but also human society, opposes those who work abnormal hours. Unlike the nocturnal sleep of the majority, which is carefully protected by strict taboos against loud noise and telephone use at night, the late wakening, day-sleeping and napping that are required by those working abnormal work hours are only grudgingly tolerated by society. Evening and weekend community events can also be denied to these people, leading to feelings of alienation.
It is with the family, however, that the social disruptions of abnormal work hours may be the most devastating. For the worker, the family roles of parent, caregiver, social companion and sexual partner can all be severely compromised by abnormal work hours, leading to marital disharmony and problems with children (Colligan and Rosa 1990). Moreover, the worker’s attempts to rectify, or to avoid, such social problems may result in a decrease in sleep time, thus leading to poor alertness and compromised safety and productivity.
Suggested Solutions
Just as the problems of abnormal work hours are multifaceted, so too must be the solutions to those problems. The primary areas to be addressed should include:
Selection and education of the worker. This should involve identification and counselling of those persons likely to experience difficulties with abnormal or extended work hours (e.g., older workers and those with high sleep needs, extensive domestic workloads or long commutes). Education in circadian and sleep hygiene principles and family counselling should also be made available (Monk and Folkard 1992). Education is an extremely powerful tool in helping those with abnormal work hours to cope, and in reassuring them about why they may be experiencing problems.

Selection of the most appropriate schedule. This should begin with a decision as to whether abnormal work hours are actually needed at all. For example, night work may in many cases be done better at a different time of day (Knauth and Rutenfranz 1982). Consideration should also be given to the schedule best suited to the work situation, bearing in mind the nature of the work and the demographics of the workforce.

Improvement of the work environment. This may involve raising illumination levels and providing adequate canteen facilities at night.
Conclusions
The particular pattern of work hours chosen for an employee can represent a significant challenge to his or her biology, domestic situation and role in the community. Informed decisions should be made, incorporating a study of the demands of the work situation and the demographics of the workforce. Any changes in hours of work should be preceded by detailed investigation and consultation with the employees and followed by evaluation studies.
Workload and Brain Function
Knowledge about human needs, abilities and constraints provides guidelines for shaping psychosocial work conditions so as to reduce stress and improve occupational health (Frankenhaeuser 1989). Brain research and behavioural research have identified the conditions under which people perform well and the conditions under which performance deteriorates. When the total inflow of impressions from the outside world falls below a critical level and work demands are too low, people tend to become inattentive and bored and to lose their initiative. Under conditions of excessive stimulus flow and too high demands, people lose their ability to integrate messages, thought processes become fragmented and judgement is impaired. This inverted U-relationship between workload and brain function is a fundamental biological principle with wide applications in working life. Stated in terms of efficiency at different workloads, it means that the optimal level of mental functioning is located at the midpoint of a scale ranging from very low to very high work demands. Within this middle zone the degree of challenge is “just right”, and the human brain functions efficiently. The location of the optimal zone varies among different people, but the crucial point is that large groups spend their lives outside the optimal zone that would provide opportunities for them to develop their full potential. Their abilities are constantly either underutilized or overtaxed.
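The inverted-U principle described above can be illustrated with a toy function. The following Python sketch is purely schematic: the 0 to 10 workload scale, the quadratic shape and the optimum at the midpoint are assumptions chosen only to show efficiency peaking in the middle zone, not a fitted psychophysiological model.

```python
def mental_efficiency(workload: float) -> float:
    """Illustrative inverted-U: efficiency peaks at a mid-range workload.

    `workload` is a notional 0-10 scale (0 = extreme underload,
    10 = extreme overload); the quadratic form and the optimum at 5
    are assumptions used only to illustrate the principle in the text.
    """
    optimum = 5.0
    return max(0.0, 1.0 - ((workload - optimum) / optimum) ** 2)

if __name__ == "__main__":
    for w in range(0, 11, 2):
        bar = "#" * int(20 * mental_efficiency(w))
        print(f"workload {w:2d}: {bar}")
```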
A distinction should be made between quantitative overload, which means too much work within a given time period, and qualitative underload, which means that tasks are too repetitive, lacking variety and challenge (Levi, Frankenhaeuser and Gardell 1986).
Research has identified criteria for “healthy work” (Frankenhaeuser and Johansson 1986; Karasek and Theorell 1990). These criteria emphasize that workers should be given the opportunity to: (a) influence and control their work; (b) understand their contribution in a wider context; (c) experience a sense of togetherness and belonging at their place of work; and (d) develop their own abilities and vocational skill by continuous learning.
Monitoring Bodily Responses at Work
People are challenged by different work demands whose nature and strength are appraised via the brain. The appraisal process involves a weighing, as it were, of the severity of the demands against one’s own coping abilities. Any situation which is perceived as a threat or challenge requiring compensatory effort is accompanied by the transmission of signals from the brain to the adrenal medulla, which responds with an output of the catecholamines epinephrine and norepinephrine. These stress hormones make us mentally alert and physically fit. In the event that the situation induces feelings of uncertainty and helplessness, the brain messages also travel to the adrenal cortex, which secretes cortisol, a hormone which plays an important part in the body’s immune defence (Frankenhaeuser 1986).
With the development of biochemical techniques that permit the determination of exceedingly small amounts of hormones in blood, urine and saliva, stress hormones have come to play an increasingly important role in research on working life. In the short term, a rise in stress hormones is often beneficial and seldom a threat to health. But in the longer term, the picture may include damaging effects (Henry and Stephens 1977; Steptoe 1981). Frequent or long-lasting elevations of stress-hormone levels in the course of daily life may result in structural changes in the blood vessels which, in turn, may lead to cardiovascular disease. In other words, consistently high levels of stress hormones should be regarded as warning signals, telling us that the person may be under excessive pressure.
Biomedical recording techniques permit the monitoring of bodily responses at the workplace without interfering with the worker’s activities. Using such ambulatory-monitoring techniques, one can find out what makes the blood pressure rise, the heart beat faster, the muscles tense up. These are important pieces of information which, together with stress-hormone assays, have helped in identifying both aversive and protective factors related to job content and work organization. Thus, when searching the work environment for harmful and protective factors, one can use the people themselves as “measuring rods”. This is one way in which the study of human stress and coping may contribute to intervention and prevention at the workplace (Frankenhaeuser et al. 1989; Frankenhaeuser 1991).
Personal Control as a “Buffer”
Data from both epidemiological and experimental studies support the notion that personal control and decision latitude are important “buffering” factors which help people to simultaneously work hard, enjoy their jobs and remain healthy (Karasek and Theorell 1990). The chance of exercising control may “buffer” stress in two ways: first, by increasing job satisfaction, thus reducing bodily stress responses, and secondly, by helping people develop an active, participatory work role. A job that allows the worker to use his or her skills to the full will increase self-esteem. Such jobs, while demanding and taxing, may help to develop competencies that aid in coping with heavy workloads.
The pattern of stress hormones varies with the interplay of positive versus negative emotional responses evoked by the situation. When demands are experienced as a positive and manageable challenge, the adrenaline output is typically high, whereas the cortisol-producing system is put to rest. When negative feelings and uncertainty dominate, both cortisol and adrenaline increase. This would imply that the total load on the body, the “cost of achievement”, will be lower during demanding, enjoyable work than during less demanding but tedious work, and it would seem that the fact that cortisol tends to be low in controllable situations could account for the positive health effects of personal control. Such a neuroendocrine mechanism could explain the epidemiological data obtained from national surveys in different countries which show that high job demands and work overload have adverse health consequences mainly when combined with low control over job-related decisions (Frankenhaeuser 1991; Karasek and Theorell 1990; Levi, Frankenhaeuser and Gardell 1986).
Total Workload of Women and Men
In order to assess the relative workloads associated with men’s and women’s different life situations, it is necessary to modify the concept of work so as to include the notion of total workload, that is, the combined load of demands related to paid and unpaid work. This includes all forms of productive activities defined as “all the things that people do that contribute to the goods and services that other people use and value” (Kahn 1991). Thus, a person’s total workload includes regular employment and overtime at work as well as housework, child care, care of elderly and sick relatives and work in voluntary organizations and unions. According to this definition, employed women have a higher workload than men at all ages and all occupational levels (Frankenhaeuser 1993a, 1993b and 1996; Kahn 1991).
The fact that the division of labour between spouses in the home has remained the same, while the employment situation of women has changed radically, has led to a heavy workload for women, with little opportunity for them to relax in the evenings (Frankenhaeuser et al. 1989). Until a better insight has been gained into the causal links between workload, stress and health, it will remain necessary to regard prolonged stress responses, displayed in particular by women at the managerial level, as warning signals of possible long-term health risks (Frankenhaeuser, Lundberg and Chesney 1991).
Person–environment (PE) fit theory offers a framework for assessing and predicting how characteristics of the employee and the work environment jointly determine worker well-being and, in the light of this knowledge, how a model for identifying points of preventive intervention may be elaborated. Several PE fit formulations have been proposed, the most widely known ones being those of Dawis and Lofquist (1984); French, Rodgers and Cobb (1974); Levi (1972); McGrath (1976); and Pervin (1967). The theory of French and colleagues, illustrated in figure 1, may be used to discuss the conceptual components of PE fit theory and their implications for research and application.
Figure 1. Schematic of French, Rodgers and Cobb's theory of person-environment (PE) fit
Poor PE fit can be viewed from the perspectives of the employee’s needs (needs–supplies fit) as well as the job–environment’s demands (demands–abilities fit). The term needs–supplies fit refers to the degree to which employee needs, such as the need to use skills and abilities, are met by the work environment’s supplies and opportunities to satisfy those needs. Demands–abilities fit refers to the degree to which the job’s demands are met by the employee’s skills and abilities. These two types of fit can overlap. For example, work overload may leave the employer’s demands unmet as well as threaten the employee’s need to satisfy others.
Conceptualizing Person (P) and Environment (E)
Characteristics of the person (P) include needs as well as abilities. Characteristics of the environment (E) include supplies and opportunities for meeting the employee’s needs as well as demands which are made on the employee’s abilities. In order to assess the degree to which P equals (or fits), exceeds, or is less than E, the theory requires that P and E be measured along commensurate dimensions. Ideally, P and E should be measured on equal interval scales with true zero points. For example, one could assess PE fit on workload for a data-entry operator in terms of both the number of data-entry keystrokes per minute demanded by the job (E) and the employee’s keystroke speed (P). As a less ideal alternative, investigators often use Likert-type scales. For example, one could assess how much the employee wants to control the work pace (P) and how much control is provided by the job’s technology (E) by using a rating scale, where a value of 1 corresponds to no control or almost no control, and a value of 5 corresponds to complete control.
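To make the idea of commensurate measurement concrete, the following Python sketch computes signed misfit scores for the two examples given above (keystrokes per minute and a 1 to 5 control rating). All numerical values are invented for illustration; the theory itself does not prescribe this particular data structure.

```python
from dataclasses import dataclass

@dataclass
class FitAssessment:
    """Commensurate P and E scores on a single dimension.

    Positive misfit means the environment exceeds the person (E > P);
    negative misfit means the person exceeds the environment.
    All numbers below are invented for illustration.
    """
    person: float
    environment: float

    @property
    def misfit(self) -> float:
        return self.environment - self.person

# Demands-abilities fit on workload for a data-entry operator:
# required keystrokes per minute (E) versus the operator's speed (P).
keystroke_fit = FitAssessment(person=210.0, environment=240.0)

# Needs-supplies fit on control over work pace, rated on a 1-5 Likert-type
# scale: desired control (P) versus control provided by the job (E).
pace_control_fit = FitAssessment(person=4, environment=2)

print(f"keystroke demand exceeds ability by {keystroke_fit.misfit:.0f}/min")
print(f"control supplied falls short of need by {-pace_control_fit.misfit}")
```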
Distinguishing Subjective from Objective Fit
Subjective fit (FS) refers to the employee’s perceptions of P and E, whereas objective fit (FO) refers to assessments that are, in theory, free of subjective bias and error. In practice, there is always measurement error, so that it is impossible to construct truly objective measures. Consequently, many researchers prefer to create a working distinction between subjective and objective fit, referring to measures of objective fit as ones which are relatively, rather than absolutely, immune to sources of bias and error. For example, one can assess objective PE fit on keystroke ability by examining the fit between a count of required keystrokes per minute in the actual workload assigned to the employee (EO) and the employee’s ability as assessed on an objective-type test of keystroke ability (PO). Subjective PE fit might be assessed by asking the employee to estimate per minute keystroke ability (PS) and the number of keystrokes per minute demanded by the job (ES).
Given the challenges of objective measurement, most tests of PE fit theory have used only subjective measures of P and E (for an exception, see Chatman 1991). These measures have tapped a variety of dimensions including fit on responsibility for the work and well-being of other persons, job complexity, quantitative workload and role ambiguity.
Dynamic Properties of the PE Fit Model
Figure 1 depicts objective fit influencing subjective fit which, in turn, has direct effects on well-being. Well-being is broken down into responses called strains, which serve as risk factors for subsequent illness. These strains can involve emotional (e.g., depression, anxiety), physiological (e.g., serum cholesterol, blood pressure), cognitive (e.g., low self-evaluation, attributions of blame to self or others), as well as behavioural responses (e.g., aggression, changes in lifestyle, drug and alcohol use).
According to the model, levels of and changes in objective fit, whether due to planned intervention or otherwise, are not always perceived accurately by the employee, so that discrepancies arise between objective and subjective fit. Thus, employees can perceive good fit as well as poor fit when, objectively, such is not the case.
Inaccurate employee perceptions can arise from two sources. One source is the organization, which, unintentionally or by design (Schlenker 1980), may provide the employee with inadequate information regarding the environment and the employee. The other source is the employee. The employee might fail to access available information or might defensively distort objective information about what the job requires or about his or her abilities and needs — Taylor (1991) cites such an example.
French, Rodgers and Cobb (1974) use the concept of defences to refer to employee processes for distorting the components of subjective fit, PS and ES, without changing the commensurate components of objective fit, PO and EO. By extension, the organization can also engage in defensive processes—for example, cover-ups, denial or exaggeration—aimed at modifying employee perceptions of subjective fit without concomitantly modifying objective fit.
The concept of coping is, by contrast, reserved for responses and processes that aim to alter and, in particular, improve objective fit. The employee can attempt to cope by improving objective skills (PO) or by changing objective job demands and resources (EO) such as through a change of jobs or assigned responsibilities. By extension, the organization can also apply coping strategies to improve objective PE fit. For example, organizations can make changes in selection and promotion strategies, in training and in job design to alter EO and PO.
The distinctions between coping and defence on the one hand and objective and subjective fit on the other can lead to an array of practical and scientific questions regarding the consequences of using coping and defence and the methods for distinguishing between effects of coping and effects of defence on PE fit. By derivation from the theory, sound answers to such questions require sound measures of objective as well as subjective PE fit.
Statistical Models
PE fit can have non-linear relations with psychological strain. Figure 2 presents a U-shaped curve as an illustration. The lowest level of psychological strain on the curve occurs when employee and job characteristics fit each other (P = E). Strain increases as the employee’s abilities or needs respectively fall short of the job’s demands or resources (P < E) or exceed them (P > E).
Figure 2. Hypothetical U-shaped relation of person-environment fit to psychological strain
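A minimal sketch of such a U-shaped relation, assuming a simple quadratic form and arbitrary units (neither of which is specified by the theory), might look as follows.

```python
def psychological_strain(p: float, e: float) -> float:
    """Hypothetical U-shaped strain as a function of PE misfit.

    Strain is lowest when P = E and rises as P falls short of, or
    exceeds, E; the quadratic form and the scaling are assumptions
    used only to mirror the shape of the curve in figure 2.
    """
    return (e - p) ** 2

# Example: strain for a fixed job demand (E = 5) across ability levels.
for p in range(1, 10):
    print(f"P = {p}, E = 5, strain = {psychological_strain(p, 5)}")
```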
Efficacy of the Model
A variety of different approaches to the measurement of PE fit demonstrate the model’s potential for predicting well-being and performance. For example, careful statistical modelling found that PE fit explained about 6% more variance in job satisfaction than was explained by measures of P or E alone (Edwards and Harrison 1993). In a series of seven studies of accountants measuring PE fit using a card-sort method, high-performers had higher correlations between P and E (average r = 0.47) than low performers (average r = 0.26; Caldwell and O’Reilly 1990). P was assessed as the employee’s knowledge, skills and abilities (KSAs), and E was assessed as the commensurate KSAs required by the job. Poor PE fit between the accountant’s values and the firm’s also served to predict employee turnover (Chatman 1991).
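The card-sort fit index described above amounts to correlating a person’s profile of knowledge, skills and abilities with the profile required by the job. The Python sketch below shows that calculation using invented ratings over six hypothetical KSA dimensions; the numbers are not data from the studies cited.

```python
from statistics import correlation  # requires Python 3.10+

# Invented profiles over six hypothetical KSA dimensions (0-10 ratings):
# E = how strongly the job requires each KSA, P = how strongly the
# employee possesses it. The correlation between the two profiles is
# used as a PE fit index, as in the card-sort studies cited above.
job_requirements   = [9, 7, 3, 8, 2, 6]
employee_ksas_high = [8, 7, 4, 9, 3, 6]   # well-fitting employee
employee_ksas_low  = [3, 9, 8, 2, 7, 4]   # poorly fitting employee

print("fit (well-fitting):  ", round(correlation(job_requirements, employee_ksas_high), 2))
print("fit (poorly fitting):", round(correlation(job_requirements, employee_ksas_low), 2))
```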
The stress concept
Various definitions of stress have been formulated since the concept was first named and described by Hans Selye (Selye 1960). Almost invariably these definitions have failed to capture what is perceived as the essence of the concept by a major proportion of stress researchers.
The failure to reach a common and generally acceptable definition may have several explanations; one of them may be that the concept has become so widespread and has been used in so many different situations and settings and by so many researchers, professionals and lay persons that to agree on a common definition is no longer possible. Another explanation is that there really is no empirical basis for a single common definition. The concept may be so diverse that one single process simply does not explain the whole phenomenon. One thing is clear—in order to examine the health effects of stress, the concept needs to include more than one component. Selye’s definition was concerned with the physiological fight or flight reaction in response to a threat or a challenge from the environment. Thus his definition involved only the individual physiological response. In the 1960s a strong interest arose in so-called life events, that is, major stressful experiences that occur in an individual’s life. The work by Holmes and Rahe (1967) nicely demonstrated that an accumulation of life events was harmful to health. These effects were found mostly in retrospective studies. To confirm the findings prospectively proved to be more difficult (Rahe 1988).
In the 1970s another concept was introduced into the theoretical framework, that of the vulnerability or resistance of the individual who was exposed to stressful stimuli. Cassel (1976) hypothesized that host resistance was a crucial factor in the outcome of stress or the impact of stress on health. The fact that host resistance had not been taken into account in many studies might explain why so many inconsistent and contradictory results had been obtained on the health effect of stress. According to Cassel, two factors were essential in determining the degree of a person’s host resistance: his or her capacity for coping and his or her social supports.
Today’s definition has come to include considerably more than the physiological “Selye stress” reactions. Both social environmental effects as represented by (for instance) life events and the resistance or vulnerability of the individual exposed to the life events are included.
Figure 1. Components of stress in the stress-disease model of Kagan and Levi (1971)
In the stress-disease model proposed by Kagan and Levi (1971), several distinctions between different components are made (figure 1). These components are the psychosocial stimuli (stressors), the individual’s psychobiological programme, the physiological stress reactions they produce, the precursors of illness, illness itself, and the intervening variables that act at every stage of the process.
It is important to note that, contrary to Selye’s beliefs, several different physiological pathways have been identified that mediate the effects of stressors on physical health outcomes. These include not only the originally described sympatho-adreno-medullary reaction but also the action of the sympatho-adreno-cortical axis, which may be of equal importance, and the counterbalance provided by parasympathetic gastrointestinal neurohormonal regulation, which has been observed to dampen and buffer the harmful effects of stress. In order for a stressor to evoke such reactions, a harmful influence of the psychobiological programme is required; in other words, an individual propensity to react to stressors has to be present. This individual propensity is both genetically determined and based on early childhood experiences and learning.
If the physiological stress reactions are severe and long-standing enough, they may eventually lead to chronic states, or become precursors of illness. An example of such a precursor is hypertension, which is often stress-related and may lead to manifest somatic disease, such as stroke or heart disease.
Another important feature of the model is that the interaction effects of intervening variables are anticipated at each step, further increasing the complexity of the model. This complexity is illustrated by feed-back loops from all stages and factors in the model to every other stage or factor. Thus the model is complex—but so is nature.
Our empirical knowledge about the accuracy of this model is still insufficient and unclear at this stage, but further insight will be gained by applying the interactive model to stress research. For example, our ability to predict disease may increase if the attempt is made to apply the model.
Empirical evidence on host resistance
In our group of investigators at the Karolinska Institute in Stockholm, recent research has been focused on factors that promote host resistance. We have hypothesized that one such powerful factor is the health-promoting effect of well-functioning social networks and social support.
Our first endeavour to investigate the effects of social networks on health was focused on the entire Swedish population at a “macroscopic” level. In cooperation with the Central Swedish Bureau of Statistics we were able to evaluate the effects of self-assessed social network interactions on health outcome, in this case on survival (Orth-Gomér and Johnson 1987).
Representing a random sample of the adult Swedish population, 17,433 men and women responded to a questionnaire about their social ties and social networks. The questionnaire was included in two of the annual Surveys of Living Conditions in Sweden, which were designed to assess and measure the welfare of the nation in material as well as in social and psychological terms. Based on the questionnaire, we created a comprehensive social network interaction index which included the number of members in the network and the frequency of contacts with each member. Seven sources of contacts were identified by means of factor analysis: parents, siblings, nuclear family (spouse and children), close relatives, co-workers, neighbours, distant relatives and friends. The contacts with each source were calculated and added up to a total index score, which ranged from zero to 106.
By linking the Surveys of Living Conditions with the national death register, we were able to investigate the impact of the social network interaction index on mortality. Dividing the study population into tertiles according to their index score, we found that those men and women who were in the lower tertile had an invariably higher mortality risk than those who were in the middle and upper tertiles of the index score.
The risk of dying if one was in the lower tertile was four to five times higher than in the other tertiles, although many other factors might explain this association, such as the fact that increasing age is associated with a higher risk of dying. Also, as one ages, the number of social contacts decreases. If one is sick and disabled, mortality risk increases and it is likely that the extent of the social network decreases. Morbidity and mortality are also higher in lower social classes, and social networks are also smaller and social contacts less abundant. Thus, controlling for these and other mortality risk factors is necessary in any analysis. Even when these factors were taken into account, a statistically significant 40% increase in risk was found to be associated with a sparse social network among those in the lowest third of the population. It is interesting to note that there was no additional health-promoting effect of being in the highest as compared to the middle tertile. Possibly, a great number of contacts can represent a strain on the individual as well as protection against harmful health effects.
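The index-and-tertile logic of this analysis can be sketched as follows. The contact scores and outcomes below are invented, and the crude ratio shown ignores the age, health and social-class adjustments that the actual study applied before reporting the 40% excess risk.

```python
from statistics import quantiles

def network_index(contact_frequencies: list[int]) -> int:
    """Total social network interaction index for one respondent
    (the sum of contact frequencies across the contact sources)."""
    return sum(contact_frequencies)

def crude_risk_ratio(records: list[tuple[int, bool]]) -> float:
    """Deaths per person in the lowest index tertile / rest of sample.

    Each record is (index score, died during follow-up); all values
    below are invented for illustration, not study data.
    """
    cut_low, _ = quantiles([idx for idx, _ in records], n=3)
    low = [died for idx, died in records if idx <= cut_low]
    rest = [died for idx, died in records if idx > cut_low]
    return (sum(low) / len(low)) / (sum(rest) / len(rest))

# Tiny invented sample of (index score, died during follow-up) pairs.
sample = [(12, True), (18, True), (25, False), (40, False),
          (55, False), (60, True), (72, False), (80, False), (95, False)]
print(f"crude mortality ratio, lowest tertile vs rest: {crude_risk_ratio(sample):.1f}")
```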
Thus, without even knowing anything further about the stressors in the lives of these men and women we were able to confirm a health-promoting effect of social networks.
Social networks alone cannot explain the health effects observed. It is probable that the way in which a social network functions and the basis of support the network members provide are more important than the actual number of people included in the network. In addition, an interactive effect of different stressors is possible. For example, the effects of work-related stress have been found to worsen when there is also a lack of social support and social interaction at work (Karasek and Theorell 1990).
In order to explore the issues of interaction, research studies have been carried out using various measures for assessing both qualitative and quantitative aspects of social support. Several interesting results were obtained which are illustrative of the health effects that have been associated with social support. For example, in one study of heart disease (myocardial infarct and sudden cardiac death) in a population of 776 fifty-year-old men born in Gothenburg, randomly selected from the general population and found healthy on initial examination, smoking and lack of social support were found to be the strongest predictors of disease (Orth-Gomér, Rosengren and Wilhelmsen 1993). Other risk factors included elevated blood pressure, lipids, fibrinogen and a sedentary lifestyle.
In the same study it was shown that only in those men who lacked support, in particular emotional support from a spouse, close relatives or friends, were the effects of stressful life events harmful. Men who both lacked support and had experienced several serious life events had more than five times the mortality of men who enjoyed close and emotional support (Rosengren et al. 1993).
Another example of interactive effects was offered in a study of cardiac patients who were examined for psychosocial factors such as social integration and social isolation, as well as myocardial indicators of an unfavourable prognosis and then followed for a ten-year period. Personality and behaviour type, in particular the Type A behaviour pattern, was also assessed.
The behaviour type in itself had no impact on prognosis in these patients. Of Type A men, 24% died as compared to 22% of Type B men. But when the interactive effects with social isolation were considered, another picture emerged.
Using a diary of activities during a regular week, men participating in the study were asked to describe anything they would do in the evenings and weekends of a normal week. Activities were then divided into those that involved physical exercise, those that were mainly involved with relaxation and performed at home and those that were performed for recreation together with others. Of these activity types, lack of social recreational activity was the strongest predictor of mortality. Men who never engaged in such activities—called socially isolated in the study—had about three times higher mortality risk than those who were socially active. In addition, Type A men who were socially isolated had an even higher mortality risk than those in any of the other categories (Orth-Gomér, Undén and Edwards 1988).
These studies demonstrate the need to consider several aspects of the psychosocial environment and individual factors, as well as, of course, the physiological stress mechanisms. They also demonstrate that social support is one important factor in stress-related health outcomes.
Most previous stress theories were developed to describe reactions to “inevitable” acute stress in situations threatening biological survival (Cannon 1935; Selye 1936). However, the Demand/Control model was developed for work environments where “stressors” are chronic, not initially life threatening, and are the product of sophisticated human organizational decision making. Here, the controllability of the stressor is very important, and becomes more important as we develop ever more complex and integrated social organizations, with ever more complex limitations on individual behaviour. The Demand/Control model (Karasek 1976; Karasek 1979; Karasek and Theorell 1990), which is discussed below, is based on psychosocial characteristics of work: the psychological demands of work and a combined measure of task control and skill use (decision latitude). The model predicts, first, stress-related illness risk, and, secondly, active/passive behavioural correlates of jobs. It has mainly been used in epidemiological studies of chronic disease, such as coronary heart disease.
Pedagogically, it is a simple model which can help to demonstrate clearly several important issues relevant for social policy discussions of occupational health and safety.
Beyond the health consequences of work, the model also captures the perspectives of the work’s organizers who are concerned with productivity results. The psychological demand dimension relates to “how hard workers work”; the decision latitude dimension reflects work organization issues of who makes decisions and who does what tasks. The model’s active learning hypothesis describes the motivation processes of high-performance work. The economic logic of extreme labour specialization, the past conventional wisdom about productive job design, is contradicted by the adverse health consequences identified in the Demand/Control model. The model implies alternative, health-promoting perspectives on work organization which emphasize broad skills and participation for workers, and which may also bring economic advantages to innovative manufacturing and to service industries because of the increased possibilities for learning and participation.
Hypotheses of the Demand/Control Model
Psychosocial functioning at the workplace, based on psychological demands and decision latitude
Job strain hypothesis
The first hypothesis is that the most adverse reactions of psychological strain (fatigue, anxiety, depression and physical illness) occur when the psychological demands of the job are high and the worker’s decision latitude in the task is low (figure 1, lower right cell). These undesirable stress-like reactions, which result when arousal is combined with restricted opportunities for action or coping with the stressor, are referred to as psychological strain (the term stress is not used at this point as it is defined differently by many groups).
Figure 1. Psychological demand/decision latitude model
For example, the assembly-line worker has almost every behaviour rigidly constrained. In a situation of increased demands (“speed-up”), more than just the constructive response of arousal, the often helpless, long-lasting, and negatively experienced response of residual psychological strain occurs. When the lunch-time rush occurs (Whyte 1948), it is the restaurant worker who does not know how to “control” her customers’ behaviour (“get the jump on the customer”) who experiences the greatest strain on the job. Kerckhoff and Back (1968) describe garment workers under heavy deadline pressure and the subsequent threat of layoff. They conclude that when the actions normally needed to cope with job pressures cannot be taken, the most severe behavioural symptoms of strain occur (fainting, hysteria, social contagion). It is not only the freedom of action as to how to accomplish the formal work task that relieves strain, it may also be the freedom to engage in the informal “rituals”, the coffee break, smoke break or fidgeting, which serve as supplementary “tension release” mechanisms during the work day (Csikszentmihalyi 1975). These are often social activities with other workers—precisely those activities eliminated as “wasted motions” and “soldiering” by Frederick Taylor’s methods (1911 (1967)). This implies a needed expansion of the model to include social relations and social support.
In the model, decision latitude refers to the worker’s ability to control his or her own activities and skill usage, not to control others. Decision latitude scales have two components: task authority—a socially predetermined control over detailed aspects of task performance (also called autonomy); and skill discretion—control over use of skills by the individual, also socially determined at work (and often called variety or “substantive complexity” (Hackman and Lawler 1971; Kohn and Schooler 1973)). In modern organizational hierarchies, the highest levels of knowledge legitimate the exercise of the highest levels of authority, and workers with limited-breadth, specialized tasks are coordinated by managers with higher authority levels. Skill discretion and authority over decisions are so closely related theoretically and empirically that they are often combined.
Examples of work’s psychological demands—“how hard you work”—include the presence of deadlines, the mental arousal or stimulation necessary to accomplish the task, or coordination burdens. The physical demands of work are not included (although psychological arousal comes with physical exertion). Other components of psychological job demands are stressors arising from personal conflicts. Fear of losing a job or skill obsolescence may obviously be a contributor. Overall, Buck (1972) notes that “task requirements” (workload) are the central component of psychological job demands for most workers in spite of the above diversity. While simple measures of working hours, in moderate ranges, do not seem to strongly predict illness, one such measure, shiftwork (especially rotating shiftwork), is associated with substantial social problems as well as increased illness.
While some level of “demands” is necessary to achieve new learning and effective performance on the job (i.e., interest), too high a level is obviously harmful. This has implied the inverted “U-shaped” curve of “optimal” level of demands in the well known General Adaptation Syndrome of Selye (1936) and related, classic theories by Yerkes and Dodson (1908) and Wundt (1922) on stress and performance.* However, our findings show that most work situations have an overload, rather than an underload, problem.
* Although Selye’s “U-shaped” association between demands and stress was purported to be unidimensional along a stressor axis, his animal experiments probably also included a second dimension of constraint, and thus really represented a composite model of stress-related physiological deterioration, potentially similar to the high-demand, low-control situation, as other researchers have found (Weiss 1971).
Active learning hypothesis
When control on the job is high, and psychological demands are also high, but not overwhelming (figure 1, upper right cell), learning and growth are the predicted behavioural outcomes (i.e., the active learning hypothesis). Such a job is termed the “active job”, since research in both the Swedish and American populations has shown this to be the most active group outside of work in leisure and political activity, in spite of heavy work demands (Karasek and Theorell 1990). Only average psychological strain is predicted for the “active job” because much of the energy aroused by the job’s many stressors (“challenges”) is translated into direct action—effective problem solving—with little residual strain to cause disturbance. This hypothesis parallels White’s “concept of competence” (1959): the psychological state of individuals in challenging circumstances is enhanced by increasing “demands”, an environment-based theory of motivation. The model also predicts that the growth and learning stimuli of these settings, when they occur in a job context, are conducive to high productivity.
In the Demand/Control model, learning occurs in situations which require both individual psychological energy expenditure (demands or challenges) and the exercise of decision-making capability. As the individual with decision-making latitude makes a “choice” as to how to best cope with a new stressor, that new behaviour response, if effective, will be incorporated into the individual’s repertoire of coping strategies (i.e., it will be “learned”). The potential activity level in the future will be raised because of the expanded range of solutions to environmental challenges, yielding an increase in motivation. Opportunities for constructive reinforcement of behaviour patterns are optimal when the challenges in the situation are matched by the individual’s control over alternatives or skill in dealing with those challenges (Csikszentmihalyi 1975). The situation will not be unchallengingly simple (thus, unimportant) nor so demanding that appropriate actions can not be taken because of high anxiety level (the psychological “strain” situation).
The Demand/Control model predicts that situations of low demand and low control (figure 1, opposite end of diagonal B) cause a very “unmotivating” job setting which leads to “negative learning” or gradual loss of previously acquired skills. Evidence shows that disengagement from leisure and political activity outside the job appears to increase over time in such jobs (Karasek and Theorell 1990). These “passive” jobs may be the result of “learned helplessness”, discussed by Seligman (1975), arising from a sequence of job situations which reject the worker’s initiatives.
The fact that environmental demands can thus be conceptualized in both positive and negative terms is congruent with the common understanding that there is both “good” and “bad” stress. Evidence that at least two separable mechanisms must be used to describe “psychological functioning” on the job is one of the primary validations of the multidimensional “Demand/Control” model structure. The “active”-“passive” diagonal B implies that learning mechanisms are independent of (i.e., orthogonal to) psychological strain mechanisms. This yields a parsimonious model with two broad dimensions of work activity and two major psychological mechanisms (the primary reason for calling it an “interaction” model (Southwood 1978)). (Requiring a multiplicative interaction between the axes is too restrictive a test for most sample sizes.)
Clarifying Demand and Control definitions
The Demand/Control model has sometimes been assumed to be congruent with a model of “demands and resources”, allowing a simple fit with currently common “cost/benefit” thinking—where the positive “benefits” of resources are subtracted from the negative “costs” of demands. “Resources” allows inclusion of many factors outside the worker’s immediate task experience of obvious importance. However, the logic of the Demand/Control model hypotheses cannot be collapsed into a unidimensional form. The distinction between decision latitude and psychological stressors must be retained because the model predicts both learning and job strain—from two different combinations of demands and control which are not simply mathematically additive. Job “control” is not merely a negative stressor, and “demands and challenges” associated with lack of control are not associated with increased learning. Having decision latitude over the work process will reduce a worker’s stress, but increase his learning, while psychological demands would increase both learning and stress. This distinction between demands and control allows understanding of the otherwise unclear prediction of the effects of: (a) “responsibility”, which actually combines high demands and high decision latitude; (b) “qualitative job demands”, which also measures the possibility of decision making about what skills to employ; and (c) “piece work”, where the decision latitude to work faster almost directly brings with it increased demands.
Expanding the Model
Social support hypotheses
The Demand/Control model has been usefully expanded by Johnson by the addition of social support as a third dimension (Johnson 1986; Kristensen 1995). The primary hypothesis, that jobs which are high in demands, low in control and also low in social support at work (high “iso-strain”) carry the highest risks of illness, has been empirically successful in a number of chronic disease studies. The addition clearly acknowledges the need of any theory of job stress to assess social relations at the workplace (Karasek and Theorell 1990; Johnson and Hall 1988). Social support “buffering” of psychological strain may depend on the degree of social and emotional integration and trust between co-workers, supervisors, etc.—“socio-emotional support” (Israel and Antonucci 1987). Addition of social support also makes the Demand/Control perspective more useful in job redesign. Changes in social relations between workers (i.e., autonomous work groups) and changes in decision latitude are almost inseparable in job redesign processes, particularly “participatory” processes (House 1981).
However, a full theoretical treatment of the impact of social relations on both job stress and behaviour is a very complex problem which needs further work. The associations with measures of co-worker and supervisor interactions and chronic disease are less consistent than for decision latitude, and social relations can strongly increase, as well as decrease, the nervous system arousal that may be the risk-inducing link between social situation and illness. The dimensions of work experience that reduce job stress would not necessarily be the same dimensions that are relevant for active behaviour in the Demand/Control model. Facilitating collective forms of active behaviour would likely focus on the distribution of and ability to use competences, communication structure and skills, coordination possibilities, “emotional intelligence skills” (Goleman 1995)—as well as the trust important for social support.
Occupation and psychosocial job characteristics
Job characteristics can be displayed in a four-quadrant diagram using the average job characteristics of occupations in the US Census occupation codes (Karasek and Theorell 1990). The “active” job quadrant, with high demand and high control, has high-prestige occupations: lawyers, judges, physicians, professors, engineers, nurses and managers of all kinds. The “passive” job quadrant, with low demands and low control, has clerical workers such as stock and billing clerks, transport operatives and low-status service personnel such as janitors. The “high strain” quadrant, with high demands and low control, has machine-paced operatives such as assemblers, cutting operatives, inspectors and freight handlers, as well as other low-status service operatives such as waiters or cooks. Female-dominated occupations are frequent (garment stitchers, waitresses, telephone operators and other office automation workers). “Low strain” self-paced occupations, such as repairmen, sales clerks, foresters, linemen and natural scientists, often involve significant training and self-pacing.
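A minimal sketch of this quadrant classification, using median splits on assumed (not Census-derived) demand and decision-latitude scores, might look like the following Python fragment; the occupation scores are invented placeholders chosen only so that each example falls in the quadrant named for it above.

```python
from statistics import median

# Invented 0-10 scores for psychological demands and decision latitude;
# the quadrant labels follow the Demand/Control terminology in the text.
occupations = {
    "physician":     {"demands": 8.5, "latitude": 8.8},
    "assembler":     {"demands": 8.0, "latitude": 2.5},
    "billing clerk": {"demands": 3.0, "latitude": 3.2},
    "forester":      {"demands": 3.5, "latitude": 7.9},
}

demand_median = median(o["demands"] for o in occupations.values())
latitude_median = median(o["latitude"] for o in occupations.values())

def quadrant(demands: float, latitude: float) -> str:
    """Classify a job by median splits on demands and decision latitude."""
    high_d = demands > demand_median
    high_l = latitude > latitude_median
    if high_d and high_l:
        return "active"
    if high_d and not high_l:
        return "high strain"
    if not high_d and high_l:
        return "low strain"
    return "passive"

for name, scores in occupations.items():
    print(f"{name:>14}: {quadrant(**scores)}")
```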
Thus, executives and professionals have a moderate level of stress, and not the highest level of stress, as popular belief often holds. While “managerial stress” certainly exists because of the high psychological demands that come with these jobs, it appears that the frequent occasions for decision-making and deciding how to do the job are a significant stress moderator. Of course, at the highest status levels, executive jobs consist of decision-making as the primary psychological demand, and then the Demand/Control model fails. However, the implication here is that executives could reduce their stress if they made fewer decisions, and lower status workers would be better off with more decision opportunities, so that all groups could be better off with a more equal share of decision power.
Men are more likely than women to have high control over their work process at the task level, with a difference as great as wage differentials (Karasek and Theorell 1990). Another major gender difference is the negative correlation between decision latitude and demands for women: women with low control also have higher job demands. This means that women are several times as likely to hold high strain jobs in the full working population. By contrast, men’s high demand jobs are generally accompanied by somewhat higher decision latitude (“authority commensurate with responsibility”).
Theoretical linkages between the Demand/Control model and other theoretical perspectives
The Demand/Control model arises out of theoretical integration of several disparate scientific directions. Thus, it falls outside the boundaries of a number of established scientific traditions from which it has gained contributions or with which it is often contrasted: mental health epidemiology and sociology, and stress physiology, cognitive psychology and personality psychology. Some of these previous stress theories have focused on a person-based causal explanation, while the Demand/Control model predicts a stress response to social and psychological environments. However, the Demand/Control model has attempted to provide a set of interfacing hypotheses with person-based perspectives. In addition, linkage to macro social organizational and political economic issues, such as social class, has also been proposed. These theoretical integrations and contrasts with other theories are discussed below at several levels. The linkages below provide the background for an extended set of scientific hypotheses.
Contrast between the Demand/Control model and the cognitive psychological model
One area of stress theory grows out of the currently popular field of cognitive psychology. The central tenet of the cognitive model of human psychological functioning is that it is the processes of perception and interpretation of the external world that determine the development of psychological states in the individual. Mental workload is defined as the total information load that the worker is required to perceive and interpret while performing job tasks (Sanders and McCormick 1993; Wickens 1984). “Overload” and stress occur when this human information processing load is too large for the individual’s information processing capabilities. This model has enjoyed great currency because it describes human mental functioning in roughly the same conceptual terms as modern computers, and thus fits an engineering conception of work design. This model makes us aware of the importance of information overloads, communication difficulties and memory problems. It does well in the design of some aspects of human/computer interfaces and human monitoring of complex processes.
However, the cognitive psychological perspective tends to downplay the importance of “objective” workplace stressors, for example, and emphasize instead the importance of the stressed individuals’ interpretation of the situation. In the cognitive-based “coping approach”, Lazarus and Folkman (1986) advocate that the individual “cognitively reinterpret” the situation in a way that makes it appear less threatening, thus reducing experienced stress. However, this approach could be harmful to workers in situations where the environmental stressors are “objectively” real and must be modified. Another variant of the cognitive approach, more consistent with worker empowerment, is Bandura’s (1977) “self-efficacy/motivation” theory which emphasizes the increases in self-esteem which occur when individuals: (a) define a goal for a change process; (b) receive feedback on the positive results from the environment; and (c) successfully achieve incremental progress.
Several omissions in the cognitive model are problematic for an occupational health perspective on stress and conflict with the Demand/Control model.
Although overlooked in the cognitive model, emotional response is central to the notion of “stress”, since the initial stress problem is often what leads to unpleasant emotional states such as anxiety, fear and depression. “Drives” and emotions are most centrally affected by the limbic regions of the brain—a different and more primitive brain region than the cerebral cortex addressed by most of the processes described by cognitive psychology. Possibly, the failure to develop an integrated perspective on psychological functioning reflects the difficulty of integrating different research specializations focusing on two different neurological systems in the brain. However, recently, evidence has begun to accumulate about the joint effects of emotion and cognition. The conclusion seems to be that emotion is an underlying determinant of strength of behaviour pattern memory and cognition (Damasio 1994; Goleman 1995).
Integrating Sociological and Emotional Stress Perspectives
Development of the Demand/Control model
The goal of the Demand/Control model has been to integrate understanding of the social situation with evidence of emotional response, psychosomatic illness symptoms and active behaviour development in major spheres of adult life activity, particularly in the highly socially structured work situation. However, when the model was being developed, one likely platform for this work, sociological research exploring illness in large population studies, often omitted the detailed level of social or personal response data of stress research, and thus much integrating work was needed to develop the model.
The first Demand/Control integrating idea—for social situation and emotional response—involved stress symptoms, and linked two relatively unidimensional sociological and social psychological research traditions. First, the life stress/illness tradition (Holmes and Rahe 1967; Dohrenwend and Dohrenwend 1974) predicted that illness was based on social and psychological demands alone, without mention of control over stressors. Second, the importance of control at the workplace had been clearly recognized in the job satisfaction literature (Kornhauser 1965): task autonomy and skill variety were used to predict job satisfaction, absenteeism or productivity, with limited additions reflecting the workers’ social relationship to the job—but there was little mention of job workloads. Integrating studies helped bridge the gaps in the area of illness and mental strain. Sundbom (1971) observed symptoms of psychological strain in “mentally heavy work”—which was actually measured by questions relating to both heavy mental pressures and monotonous work (presumably also representing restricted control). The combined insight of these two studies and research traditions was that a two-dimensional model was needed to predict illness: the level of psychological demands determined whether low control could lead to two significantly different types of problem: psychological strain, or passive withdrawal.
The second Demand/Control integration predicted behaviour patterns related to work experience. Behavioural outcomes of work activity also appeared to be affected by the same two broad job characteristics—but in a different combination. Kohn and Schooler (1973) had observed that active orientations to the job were the consequence of both high skill and autonomy levels, plus psychologically demanding work. Social class measures were important correlates here. Meissner (1971) had also found that leisure behaviour was positively associated with opportunities both to take decisions on the job and to perform mentally challenging work. The combined insight of these studies was that “challenge” or mental arousal was necessary, on the one hand, for effective learning and, on the other, could contribute to psychological strain. “Control” was the crucial moderating variable that determined whether environmental demands would lead to “positive” learning consequences, or “negative” strain consequences.
The combination of these two integrating hypotheses, predicting both health and behavioural outcomes, is the basis of the Demand/Control model. “Demand” levels are the contingent factor which determines whether low control leads to passivity or to psychological strain; and “control” levels are the contingent factor which determines whether demands lead to active learning or to psychological strain (Karasek 1976; 1979). The model was then tested on a representative national sample of Swedes (Karasek 1976) to predict both illness symptoms and leisure and political behavioural correlates of psychosocial working conditions. The hypotheses were confirmed in both areas, although many confounding factors obviously share in these results. Shortly after these empirical confirmations, two other conceptual formulations, consistent with the Demand/Control model, appeared, which confirmed the robustness of the general hypotheses. Seligman (1976) observed depression and learned helplessness in conditions of intense demand with restricted control. Simultaneously, Csikszentmihalyi (1975) found that an “active experience” (“flow”) resulted from situations which involved both psychological challenges and high levels of competence. This integrated model made it possible to resolve some paradoxes in job satisfaction and mental strain research (Karasek 1979): for example, that qualitative workloads were often negatively associated with strain (because they also reflected the individual’s control over his or her use of skills). The most extensive acceptance of the model by other researchers came in 1979, after the expansion of empirical prediction to coronary heart disease with the assistance of colleague Tores Theorell, a physician with a significant background in cardiovascular epidemiology.
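For readers who prefer a concrete illustration, the quadrant logic of the model can be sketched in a few lines of code. This is only a schematic rendering, not Karasek’s instrument: the occupations, scores and the median-split rule used to divide the two axes are assumptions introduced here for illustration.

```python
# Minimal sketch of the Demand/Control quadrant logic (hypothetical scores;
# the median split is one common, but not the only, operationalization).
from statistics import median

jobs = {                      # (psychological demands, decision latitude), arbitrary 1-5 scales
    "assembly-line worker": (4.2, 1.8),
    "night watchman":       (1.9, 2.1),
    "forester":             (2.2, 4.0),
    "physician":            (4.5, 4.3),
}

demand_med = median(d for d, _ in jobs.values())
control_med = median(c for _, c in jobs.values())

def quadrant(demands: float, control: float) -> str:
    """Assign a job to one of the four cells of the Demand/Control model."""
    if demands > demand_med and control <= control_med:
        return "high strain"   # demands without control: predicted strain risk
    if demands > demand_med:
        return "active"        # demands with control: predicted learning
    if control <= control_med:
        return "passive"       # neither demands nor control: predicted withdrawal
    return "low strain"

for job, (d, c) in jobs.items():
    print(f"{job:25s} -> {quadrant(d, c)}")
```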
A second Demand/Control model integration—physiological response
Additional research has allowed a second level of integration, linking the Demand/Control model to physiological response. The main developments in physiological research had identified two patterns of an organism’s adaptation to its environment. Cannon’s (1914) fight-flight response is most associated with stimulation of the adrenal medulla—and adrenaline secretion. This pattern, occurring in conjunction with sympathetic arousal of the cardiovascular system, is clearly an active and energetic response mode in which the human body is able to use maximum metabolic energy to support both the mental and the physical exertion necessary to escape major threats to its survival. The second physiological response pattern, the adrenocortical response, is associated with defeat or withdrawal in a situation offering little possibility of victory. Selye’s research (1936) on stress dealt with the adrenocortical response of animals in a stressed but passive condition (i.e., his animal subjects were restrained while being stressed, not placed in a fight-flight situation). Henry and Stephens (1977) describe this behaviour pattern as the defeat or loss of social attachments, which leads to withdrawal and submissiveness in social interactions.
* A major stimulus for the development of the strain hypothesis of the Demand/Control model in 1974 was Dement’s observation (1969) that vital relaxation related to REM dreaming was inhibited if sleep-deprived cats were “constrained” by a treadmill (perhaps like an assembly line) after periods of extreme psychological stressor exposure. The combined actions of environmental stressors and low environmental control were essential elements in producing these effects. The negative impacts, in terms of mental derangement, were catastrophic and led to an inability to coordinate the most basic physiological processes.
In the early 1980s, Frankenhaeuser’s (1986) research demonstrated the congruence of these two patterns of physiological response with the main hypotheses of the Demand/Control model—allowing linkages to be made between physiological response, social situation and emotional response patterns. In high-strain situations, secretion of both cortisol from the adrenal cortex and adrenaline from the adrenal medulla is elevated, whereas in a situation where the subject has a controllable and predictable stressor, adrenaline secretion alone is elevated (Frankenhaeuser, Lundberg and Forsman 1980). This demonstrated a significant differentiation of psychoendocrine response associated with different environmental situations. Frankenhaeuser used a two-dimensional model with the same structure as the Demand/Control model, but with dimensions labelled in terms of personal emotional response. “Effort” describes adrenal-medullary stimulating activity (demands in the Demand/Control model) and “distress” describes adrenocortical stimulating activity (lack of decision latitude in the Demand/Control model). Frankenhaeuser’s emotional response categories illuminate a clearer link between emotion and physiological response, but in this form the model fails to illuminate the association between work sociology and physiological response, which has been another strength of the Demand/Control model.
Integrating person-based stress theory: The dynamic version of the Demand/Control model
One of the challenges behind the development of the Demand/Control model has been to develop an alternative to the socially conservative explanation that the worker’s perceptions or response orientations are primarily responsible for stress—the claim of some person-based stress theories. For example, it is hard to accept the claims, extended by personality-based stress models, that the majority of stress reactions develop because common individual personality types habitually misinterpret real-world stresses or are oversensitive to them, and that these personality types can be identified on the basis of simple tests. Indeed, evidence for such personality effects has been mixed at best, even with the most common measures (although a stress-denial personality has been identified—alexithymia (Henry and Stephens 1977)). The Type A behaviour pattern, for example, was originally interpreted as the individual’s proclivity to select stressful activities, but research in this area has now shifted to the “anger-prone” personality (Williams 1987). Of course, the anger response could have a significant environment-response component. A more generalized version of the personality approach is found in the “person-environment fit” model (Harrison 1978), which postulates that a good match between the person and the environment is what reduces stress. Here also it has been difficult to specify the particular personality characteristics to be measured. Nevertheless, personal response/personality-based approaches addressed the obvious facts that: (a) person-based perceptions are an important part of the process by which environments affect individuals; and (b) there are long-term differences in personal responses to environments. Thus, a time-dynamic version of the Demand/Control model, integrating environmental and person-based factors, was developed.
The dynamic version of the Demand/Control model (figure 2) integrates environmental effects with person-based phenomena such as self-esteem development and long-term exhaustion. It does so by building two combined hypotheses on the original strain and learning mechanisms: (a) that stress inhibits learning; and (b) that learning, in the long term, can inhibit stress. The first hypothesis is that high strain levels may inhibit the normal capacity to accept a challenge, and thus inhibit new learning. These high strain levels may be the result of long-lasting psychological strain accumulated over time—and reflected in person-based measures (figure 2, diagonal arrow B). The second hypothesis is that new learning may lead to feelings of mastery or confidence—a person-based measure. These feelings of mastery, in turn, can lead to reduced perceptions of events as stressful and to increased coping success (figure 2, diagonal arrow A). Thus, environmental factors, over the long term, partly determine personality, and, later, environmental effects are moderated by these previously developed personality orientations. This broad model could incorporate the following, more specific measures of personal response: feelings of mastery, denial, alexithymia, trait anxiety, trait anger, vital exhaustion, burnout, cumulative life-stressor implications and, possibly, Type A behaviour components.
Figure 2. Dynamic associations linking environmental strain and learning to evolution of personality
The dynamic model yields the possibility of two long-term dynamic “spirals” of behaviour. The positive behavioural dynamic begins with the active job setting, the increased “feeling of mastery” and the increased ability to cope with inevitable job stressors. These, in turn, reduce accumulated anxiety and thus increase the capacity to accept still more learning challenges—yielding still further positive personality change and improved well-being. The undesirable behavioural dynamic begins with the high-strain job, the high accumulated residual strain and the restricted capacity to accept learning challenges. These, in turn, lead to diminishing self-esteem and increased stress perceptions—yielding still further negative personality change and diminished well-being. Evidence for the submechanisms is discussed in Karasek and Theorell (1990), although the complete model has not been tested. Two promising research directions which could easily be integrated with Demand/Control research are “vital exhaustion” research linked to changing responses to life demands (Appels 1990), and Bandura’s (1977) “self-efficacy” methods, which integrate skill development and self-esteem development.
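A deliberately simple numerical sketch can make the two hypothesized spirals concrete. The update rules, coefficients and variable names below are invented for illustration and are not an estimated or validated model; they merely show how “strain inhibits learning” and “mastery dampens strain” can make an active job and a high-strain job diverge over time.

```python
# Toy simulation of the two hypothesized Demand/Control "spirals".
# All coefficients are illustrative assumptions, not estimated parameters.
def simulate(demands: float, control: float, periods: int = 50):
    mastery, strain = 0.5, 0.5            # person-based states, arbitrary 0-1 start values
    for _ in range(periods):
        # Hypothesis (b): challenge met with control builds mastery,
        # but accumulated strain inhibits that learning (hypothesis (a)).
        learning = demands * control * (1.0 - strain)
        # Strain grows with demands unmatched by control; mastery buffers it.
        new_strain = demands * (1.0 - control) * (1.0 - mastery)
        mastery = min(1.0, 0.9 * mastery + 0.1 * learning)
        strain = min(1.0, 0.9 * strain + 0.1 * new_strain)
    return round(mastery, 2), round(strain, 2)

print("active job      (high demands, high control):", simulate(0.9, 0.9))
print("high-strain job (high demands, low control): ", simulate(0.9, 0.1))
```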
The Demand/Control model and the system dynamics of physiological stress
One necessary next step for Demand/Control research is a more comprehensive specification of the physiological pathways of illness causation. Physiological response is increasingly being understood as a complex system response. The physiology of the human stress response—to accomplish, for example, fight or flight behaviour—is a highly integrated combination of changes in cardiovascular output, brain-stem regulation, respiratory interaction, limbic-system control of the endocrine response, general cortical activation and peripheral circulatory system changes. The concept of “stress” is very possibly most relevant for complex systems—which involve multiple, interacting subsystems and complex causality.* Accompanying this new perspective of system dynamic principles in physiology are definitions of many diseases as disorders of system regulation (Henry and Stephens 1977; Weiner 1977), and investigation of the results of time-dependent, multifactorial adjustments to system equilibrium—or, alternatively, of their absence in “chaos”.
* Instead of a single and unambiguous cause-and-effect linkage, as in the “hard sciences” (or at least in their mythology), causal associations in stress models are more complex: many causes may “accumulate” to contribute to a single effect; a single cause (“stressor”) may have many effects; or effects may occur only after significant time delays.
Interpreting such observations from the perspective of a “generalized” Demand/Control model, we could say that stress refers to a disequilibrium of the system as a whole, even when parts of the system are functioning. All organisms must have control mechanisms to integrate the actions of separate subsystems (i.e., the brain, the heart and the immune systems). Stress (or job strain) would be an overload condition experienced by the organism’s “control system” when it attempts to maintain integrated functioning in the face of too many environmental challenges (“high demands”), and when the system’s capability of integrated control of its submechanisms fails (“high strain”). To impose order on its chaotic environment, the individual’s internal physiological control systems must “do the work” of maintaining a coordinated physiological regularity (i.e., a constant heart rate) in the face of irregular environmental demands. When the organism’s control capacity is exhausted after too much “organizing” (a low entropy condition, by analogy from thermodynamics), further demands lead to excess fatigue or debilitating strain. Furthermore, all organisms must periodically return their control systems to the rest-state—sleep or relaxation periods (a state of relaxed disorder or high entropy)—to be capable of undertaking the next round of coordinating tasks. The system’s coordination processes or its relaxation attempts may be inhibited if it cannot follow its own optimal course of action, i.e., if it has no possibilities to control its situation or find a satisfactory internal equilibrium state. In general, “lack of control” may represent restriction of the organism’s ability to use all of its adaptive mechanisms to maintain physiological equilibrium in the face of demands, leading to increased long-term burdens and disease risk. This is a direction for future Demand/Control physiological research.
One potentially consistent finding is that while the Demand/Control model predicts cardiovascular mortality, no single conventional risk factor or physiological indicator seems to be the primary pathway of this risk. Future research may show whether “systems dynamic failures” are the pathway.
Macro-social implications of the Demand/Control model
Models which integrate over several spheres of research allow broader predictions about the health consequences of human social institutions. For example, Henry and Stephens (1977) observe that in the animal world “psychological demands” result from the thoroughly “social” responsibilities of finding family food and shelter and of rearing and defending offspring; situations of enforced demands combined with social isolation would be hard to imagine. However, the human world of work is so organized that demands can occur without any social affiliation at all. Indeed, according to Frederick Taylor’s Principles of Scientific Management (1911 (1967)), increasing workers’ job demands should often be done in isolation, otherwise the workers would revolt against the process—and return to time-wasting socializing! In addition to showing the utility of an integrated model, this example shows the need to expand even further the social understanding of the human stress response (for example, by adding a social support dimension to the Demand/Control model).
An integrated, socially anchored understanding of the human stress response is particularly needed to understand future economic and political development. Less comprehensive models could be misleading. For example, according to the cognitive model which has dominated public dialogues about future social and industrial development (i.e., the direction for workers’ skills, life in the information society, etc.), an individual has the freedom to interpret—i.e., reprogramme—his or her perception of real-world events as stressful or non-stressful. The social implication is that, literally, we can design for ourselves any social arrangement—and we should take the responsibility for adapting to any stresses it may cause. However, many of the physiological consequences of stress relate to the “emotional brain” in the limbic system, which has a deterministic structure with clear limitations on overall demands. It is definitely not “infinitely” re-programmable, as studies of post-traumatic stress syndrome clearly indicate (Goleman 1995). Overlooking the limits of the limbic system—and the integration of emotional response and social integration—can lead to a very modern set of basic conflicts for human development. We may be developing social systems, on the basis of the extraordinary cognitive capabilities of our brain cortex, that place impossible demands on the more basic limbic brain functions in terms of overloads: lost social bonds, lack of internal control possibilities, and restricted ability to see the “whole picture”. In short, we appear to be running the risk of developing work organizations for which we are sociobiologically misfit. These results are not just the consequence of scientifically incomplete models; they also facilitate the wrong kinds of social process—processes in which the interests of some groups with social power are served at the cost to others of previously unexperienced levels of social and personal dysfunction.
Social class and psychosocial job measures
In many cases, individual-level stressors can be modelled as the causal outcome of larger-scale social, dynamic and political-economic processes. Thus, theoretical linkages to concepts such as social class are also needed. Assessment of associations between social situation and illness raises the question of the relation between psychosocial Demand/Control factors and broad measures of social circumstance such as social class. The job decision latitude measure is, indeed, clearly correlated with education and other measures of social class. However, social class conventionally measures effects of income and education which operate via different mechanisms than the psychosocial pathways of the Demand/Control model. Importantly, the job strain construct is almost orthogonal to most social class measures in national populations (although the active/passive dimension is highly correlated with social class among high-status workers only) (Karasek and Theorell 1990). The low decision latitude aspect of low-status jobs appears to be a more important contributor to psychological strain than the distinction between mental and physical workload, the conventional determinant of white-collar/blue-collar status. Indeed, the physical exertion common in many blue-collar jobs may be protective against psychological strain in some circumstances. While job strain is indeed more common in low-status jobs, the psychosocial job dimensions define a strain-risk picture which is significantly independent of the conventional social class measures.
Although it has been suggested that the observed Demand/Control job/illness associations merely reflect social class differences (Ganster 1989; Spector 1986), a review of evidence rejects this view (Karasek and Theorell 1990). Most of the Demand/Control research has simultaneously controlled for social class, and Demand/Control associations persist within social class groups. However, blue-collar associations with the model are more consistently confirmed, and the strength of white-collar associations varies (see “Job strain and cardiovascular disease”, below) across studies, with white-collar single occupation studies being somewhat less robust. (Of course, for the very highest status managers and professionals decision making may become a significant demand in itself.)
The fact that conventional “social class” measures often find weaker associations with mental distress and illness outcomes than the Demand/Control model actually makes a case for new social class conceptions. Karasek and Theorell (1990) define a new set of psychosocially advantaged and disadvantaged workers, with job stress “losers” in routinized, commercialized and bureaucratized jobs, and “winners” in highly creative learning-focused intellectual work. Such a definition is consistent with a new, skill-based industrial output in the “information society”, and a new perspective on class politics.
Methodological Issues
Objectivity of psychosocial job measures
Self-report questionnaires administered to workers have been the most common method of gathering data on psychosocial characteristics of work, since they are simple to administer and can easily be designed to tap core concepts relevant to work redesign efforts (for example, Hackman and Oldham’s Job Diagnostic Survey (1975), the Job Content Questionnaire (Karasek 1985) and the Swedish Statshalsan questionnaire). While designed to measure the objective job, such questionnaire instruments inevitably measure job characteristics as perceived by the worker. Self-report bias of findings can occur with self-reported dependent variables such as depression, exhaustion and dissatisfaction. One remedy is to aggregate self-report responses by work groups with similar work situations—diluting individual biases (Kristensen 1995). This is the basis of extensively used systems linking psychosocial job characteristics to occupations (Johnson et al. 1996).
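The aggregation remedy can be illustrated with a short sketch. The occupations, scores and scales below are invented; the point is simply that the work-group (here, occupation) mean replaces each worker’s own report as the exposure measure, which is the essence of the occupation-linkage systems mentioned above.

```python
# Minimal sketch of work-group aggregation to dilute individual self-report bias.
# Occupation titles, scores and scales are invented for illustration.
from collections import defaultdict

responses = [
    # (occupation, decision latitude score, psychological demands score)
    ("machine operator", 22, 34), ("machine operator", 28, 30),
    ("machine operator", 25, 36),
    ("registered nurse", 60, 40), ("registered nurse", 64, 38),
]

by_occupation = defaultdict(list)
for occupation, latitude, demands in responses:
    by_occupation[occupation].append((latitude, demands))

# The occupation mean replaces each worker's own report as the exposure score.
exposure_matrix = {
    occ: tuple(round(sum(vals) / len(vals), 1) for vals in zip(*scores))
    for occ, scores in by_occupation.items()
}
print(exposure_matrix)  # {'machine operator': (25.0, 33.3), 'registered nurse': (62.0, 39.0)}
```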
There is also evidence assessing the “objective” validity of self-reported psychosocial scales: correlations between self-report and expert observation data are typically 0.70 or higher for decision latitude, with lower correlations (about 0.35) for work demands (Frese and Zapf 1988). Also supporting objective validity is the high between-occupation variance (40 to 45%) of the decision latitude scales, which compares favourably with 21% for income and 25% for physical exertion, measures which are acknowledged to vary dramatically by occupation (Karasek and Theorell 1990). However, only 7% and 4% of the psychological demands and social support scale variance, respectively, is between occupations, leaving the possibility of a large person-based component in self-reports of these measures.
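The between-occupation variance figures cited above can be read as a simple variance decomposition: the share of total score variance that lies between occupational means rather than between individuals within the same occupation. The following minimal sketch, with invented scores, computes that share; a high share suggests the scale reflects the job rather than the person reporting on it.

```python
# Share of total variance lying between occupations (eta-squared from a
# one-way variance decomposition). Scores are invented for illustration.
def between_occupation_share(groups: dict[str, list[float]]) -> float:
    scores = [x for g in groups.values() for x in g]
    grand_mean = sum(scores) / len(scores)
    total_ss = sum((x - grand_mean) ** 2 for x in scores)
    between_ss = sum(
        len(g) * ((sum(g) / len(g)) - grand_mean) ** 2 for g in groups.values()
    )
    return between_ss / total_ss

decision_latitude = {
    "machine operator": [22, 28, 25],
    "registered nurse": [60, 64, 58],
    "physician":        [80, 76, 84],
}
print(f"between-occupation share: {between_occupation_share(decision_latitude):.0%}")
```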
More objective measurement strategies would be desirable. Some well-known objective assessment methods are congruent with the Demand/Control model (for decision latitude: VERA, Volpert et al. (1983)). However, expert observations have problems also: observations are costly and time-consuming and, in the assessment of social interactions, are not obviously more accurate. There are also theoretical biases involved in the very concept of standard “expert” measures: it is much easier to “measure” the easily observed, repetitive quality of low-status assembly-line jobs than the diverse tasks of high-status managers or professionals. Thus, the objectivity of the psychosocial measures is inversely related to the decision latitude of the subject.
Some reviews of empirical evidence for the Demand/Control model
Job strain and cardiovascular disease (CVD)
Job strain and heart disease associations represent the broadest base of empirical support for the model. Recent comprehensive reviews have been done by Schnall, Landsbergis and Baker (1994), Landsbergis et al. (1993) and Kristensen (1995). Summarizing Schnall, Landsbergis and Baker (1994) (updated by Landsbergis, personal communication, Fall 1995): 16 of 22 studies have confirmed a job strain association with cardiovascular mortality using a wide range of methodologies, including 7 of 11 cohort studies; 2 of 3 cross-sectional studies; 4 of 4 case-control studies; and 3 of 3 studies utilizing disease symptom indicators. Most negative studies have been in older populations (mainly over age 55, some with much post-retirement time) and are mainly based upon aggregated occupation scores which, although they minimize self-report bias, are weak in statistical power. The job strain hypothesis appears to be somewhat more consistent when predicting blue-collar than white-collar CVD (Marmot and Theorell 1988). Conventional CVD risk factors such as serum cholesterol, smoking and even blood pressure, when measured in the conventional manner, have so far shown only inconsistent or weak job strain effects. However, more sophisticated methods (ambulatory blood pressure measurement) show substantial positive results (Theorell and Karasek 1996).
Job strain and psychological distress/behaviour, absenteeism
Psychological disorder findings are reviewed in Karasek and Theorell (1990). The majority of the studies confirm a job strain association and are drawn from broadly representative or nationally representative populations in a number of countries. The common study limitations are the cross-sectional design and the difficult-to-avoid problem of self-reported job and psychological strain questionnaires, although some studies also include objective observer assessment of work situations, and there are also supportive longitudinal studies. While some have claimed that a person-based tendency towards negative affect inflates work-mental strain associations (Brief et al. 1988), this could not be true for several strong findings on absenteeism (North et al. 1996; Vahtera, Uutela and Pentti 1996). Associations in some studies are very strong and, in a number of studies, are based on a linkage system which minimizes potential self-report bias (at the risk of loss of statistical power). These studies confirm associations for a broad range of psychological strain outcomes: moderately severe forms of depression, exhaustion, drug consumption, and life and job dissatisfaction, but the findings also differ by outcome. There is also some differentiation of negative affect by Demand/Control model dimensions. Exhaustion, rushed tempo or simply reports of “feeling stressed” are more strongly related to psychological demands—and are higher for managers and professionals. More serious strain symptoms such as depression, loss of self-esteem and physical illness seem to be more strongly associated with low decision latitude—a larger problem for low-status workers.
Job strain and musculoskeletal disorders and other chronic diseases
Evidence of the utility of the Demand/Control model is accumulating in other areas (see Karasek and Theorell 1990). Prediction of occupational musculoskeletal illness is reviewed across 27 studies by Bongers et al. (1993) and by other researchers (Leino and Hänninen 1995; Faucett and Rempel 1994). This work supports the predictive utility of the Demand/Control/support model, particularly for upper extremity disorders. Recent studies of pregnancy disorders (Fenster et al. 1995; Brandt and Nielsen 1992) also show job strain associations.
Summary and Future Directions
The Demand/Control/support model has stimulated much research during recent years. The model has helped to document more specifically the importance of social and psychological factors in the structure of current occupations as a risk factor for industrial society’s most burdensome diseases and social conditions. Empirically, the model has been successful: a clear relationship between adverse job conditions (particularly low decision latitude) and coronary heart disease has been established.
However, it is still difficult to be precise about which aspects of psychological demands, or decision latitude, are most important in the model, and for what categories of workers. Answers to these questions require more depth of explanation of the physiological and micro-behavioural effects of psychological demands, decision latitude and social support than the model’s original formulation provided, and require simultaneous testing of the dynamic version of the model, including the active/passive hypotheses. Future utility of Demand/Control research could be enhanced by an expanded set of well-structured hypotheses, developed through integration with other intellectual areas, as outlined above (also in Karasek and Theorell 1990). The active/passive hypotheses, in particular, have received too little attention in health outcome research.
Other areas of progress are also needed, particularly new methodological approaches in the psychological demands area. Also, more longitudinal studies are needed, methodological advances are needed to address self-report bias, and new physiological monitoring technologies must be introduced. At the macro level, macro-social occupational factors, such as worker collective and organizational-level decision influence and support, communication limitations, and job and income insecurity, need to be more clearly integrated into the model. The linkages to social class concepts need to be further explored, and the strength of the model for women and the structure of work/family linkages need to be further investigated. Population groups in insecure employment arrangements, which have the highest stress levels, must be covered by new types of study designs—especially relevant as the global economy changes the nature of work relationships. As we are more exposed to the strains of the global economy, new measures at macro levels are needed to test the effects of lack of local control and increased intensity of work activity—apparently making the general form of the Demand/Control model relevant in the future.
Overview
In this article, the links between the physical features of the workplace and occupational health are examined. Workplace design is concerned with a variety of physical conditions within work environments that can be objectively observed or recorded and modified through architectural, interior design and site planning interventions. For the purposes of this discussion, occupational health is broadly construed to encompass multiple facets of workers’ physical, mental and social well-being (World Health Organization 1984). Thus, a broad array of health outcomes is examined, including employee satisfaction and morale, work-group cohesion, stress reduction, illness and injury prevention, as well as environmental supports for health promotion at the worksite.
Empirical evidence for the links between workplace design and occupational health is reviewed below. This review, highlighting the health effects of specific design features, must be qualified in certain respects. First, from an ecological perspective, worksites function as complex systems comprised of multiple social and physical environmental conditions, which jointly influence employee well-being (Levi 1992; Moos 1986; Stokols 1992). Thus, the health consequences of environmental conditions are often cumulative and sometimes involve complex mediated and moderated relationships among the sociophysical environment, personal resources and dispositions (Oldham and Fried 1987; Smith 1987; Stellman and Henifin 1983). Moreover, enduring qualities of people-environment transaction, such as the degree to which employees perceive their work situation to be controllable, socially supportive and compatible with their particular needs and abilities, may have a more pervasive influence on occupational health than any single facet of workplace design (Caplan 1983; Karasek and Theorell 1990; Parkes 1989; Repetti 1993; Sauter, Hurrell and Cooper 1989). The research findings reviewed should be interpreted in light of these caveats.
Research Findings
The relationships between worksite design and occupational health can be considered at several levels of analysis, including: (1) the physical features of the immediate work area; (2) the ambient environmental qualities of the work area; (3) the overall organization of buildings and facilities; and (4) the exterior amenities and site planning of the worksite.
Previous research has focused primarily on the first and second levels, while giving less attention to the third and fourth levels of workplace design.
Physical features of the immediate work area
The immediate work area extends from the core of an employee’s desk or workstation to the physical enclosure or imaginary boundary surrounding his or her work space. Several features of the immediate work area have been found to influence employee well-being. The degree of physical enclosure surrounding one’s desk or workstation, for example, has been shown in several studies to be positively related to the employee’s perception of privacy, satisfaction with the work environment and overall job satisfaction (Brill, Margulis and Konar 1984; Hedge 1986; Marans and Yan 1989; Oldham 1988; Sundstrom 1986; Wineman 1986). Moreover, “open-plan” (low enclosure) work areas have been linked to more negative social climates in work groups (Moos 1986) and more frequent reports of headaches among employees (Hedge 1986). It is important to note, however, that the potential health effects of workstation enclosure may depend on the type of work being performed (e.g., confidential versus non-confidential, team versus individualized tasks; see Brill, Margulis and Konar 1984), job status (Sundstrom 1986), levels of social density adjacent to one’s work area (Oldham and Fried 1987), and workers’ needs for privacy and stimulation screening (Oldham 1988).
A number of studies have shown that the presence of windows in the employees’ immediate work areas (especially windows that afford views of natural or landscaped settings), exposure to indoor natural elements (e.g., potted plants, pictures of wilderness settings), and opportunities to personalize the decor of one’s office or workstation are associated with higher levels of environmental and job satisfaction and lower levels of stress (Brill, Margulis and Konar 1984; Goodrich 1986; Kaplan and Kaplan 1989; Steele 1986; Sundstrom 1986). Providing employees with localized controls over acoustic, lighting and ventilation conditions within their work areas has been linked to higher levels of environmental satisfaction and lower levels of stress in some studies (Becker 1990; Hedge 1991; Vischer 1989). Finally, several research programmes have documented the health benefits associated with employees’ use of adjustable, ergonomically sound furniture and equipment; these benefits include reduced rates of eyestrain and of repetitive motion injuries and lower back pain (Dainoff and Dainoff 1986; Grandjean 1987; Smith 1987).
Ambient environmental qualities of the work area
Ambient environmental conditions originate from outside the worker’s immediate work area. These pervasive qualities of the worksite influence the comfort and well-being of employees whose work spaces are located within a common region (e.g., a suite of offices located on one floor of a building). Examples of ambient environmental qualities include levels of noise, speech privacy, social density, illumination and air quality—conditions that are typically present within a particular portion of the worksite. Several studies have documented the adverse health impacts of chronic noise disturbance and low levels of speech privacy in the workplace, including elevated levels of physiological and psychological stress and reduced levels of job satisfaction (Brill, Margulis and Konar 1984; Canter 1983; Klitzman and Stellman 1989; Stellman and Henifin 1983; Sundstrom 1986; Sutton and Rafaeli 1987). High levels of social density in the immediate vicinity of one’s work area have also been linked with elevated stress levels and reduced job satisfaction (Oldham 1988; Oldham and Fried 1987; Oldham and Rotchford 1983).
Health consequences of office lighting and ventilation systems have been observed as well. In one study, lensed indirect fluorescent uplighting was associated with higher levels of employee satisfaction and reduced eyestrain, in comparison with traditional fluorescent downlighting (Hedge 1991). Positive effects of natural lighting on employees’ satisfaction with the workplace also have been reported (Brill, Margulis and Konar 1984; Goodrich 1986; Vischer and Mees 1991). In another study, office workers exposed to chilled-air ventilation systems evidenced higher rates of upper-respiratory problems and physical symptoms of “sick building syndrome” than those whose buildings were equipped with natural or mechanical (non-chilled, non-humidified) ventilation systems (Burge et al. 1987; Hedge 1991).
Features of the ambient environment that have been found to enhance the social climate and cohesiveness of work groups include the provision of team-oriented spaces adjacent to individualized offices and workstations (Becker 1990; Brill, Margulis and Konar 1984; Steele 1986; Stone and Luchetti 1985) and visible symbols of corporate and team identity displayed within lobbies, corridors, conference rooms, lounges and other collectively used areas of the worksite (Becker 1990; Danko, Eshelman and Hedge 1990; Ornstein 1990; Steele 1986).
Overall organization of buildings and facilities
This level of design encompasses the interior physical features of work facilities that extend throughout an entire building, many of which are not immediately experienced within an employee’s own work space or within those adjacent to it. For example, enhancing the structural integrity and fire-resistance of buildings, and designing stairwells, corridors and factories to prevent injuries, are essential strategies for promoting worksite safety and health (Archea and Connell 1986; Danko, Eshelman and Hedge 1990). Building layouts that are consistent with the adjacency needs of closely interacting units within an organization can improve coordination and cohesion among work groups (Becker 1990; Brill, Margulis and Konar 1984; Sundstrom and Altman 1989). The provision of physical fitness facilities at the worksite has been found to be an effective strategy for enhancing employees’ health practices and stress management (O’Donnell and Harris 1994). Finally, the presence of legible signs and wayfinding aids, attractive lounge and dining areas, and child-care facilities at the worksite have been identified as design strategies that enhance employees’ job satisfaction and stress management (Becker 1990; Brill, Margulis and Konar 1984; Danko, Eshelman and Hedge 1990; Steele 1986; Stellman and Henifin 1983; Vischer 1989).
Exterior amenities and site planning
Exterior environmental conditions adjacent to the worksite may also carry health consequences. One study reported an association between employees’ access to landscaped, outdoor recreational areas and reduced levels of job stress (Kaplan and Kaplan 1989). Other researchers have suggested that the geographic location and site planning of the worksite can influence the mental and physical well-being of workers to the extent that they afford greater access to parking and public transit, restaurants and retail services, good regional air quality and the avoidance of violent or otherwise unsafe areas in the surrounding neighbourhood (Danko, Eshelman and Hedge 1990; Michelson 1985; Vischer and Mees 1991). However, the health benefits of these design strategies have not yet been evaluated in empirical studies.
Directions for Future Research
Prior studies of environmental design and occupational health reflect certain limitations and suggest several issues for future investigation. First, earlier research has emphasized the health effects of specific design features (e.g., workstation enclosure, furnishings, lighting systems), while neglecting the joint influence of physical, interpersonal and organizational factors on well-being. Yet the health benefits of improved environmental design may be moderated by the social climate and organizational qualities of the workplace (for example, a participative versus a non-participative organizational structure) (Becker 1990; Parkes 1989; Klitzman and Stellman 1989; Sommer 1983; Steele 1986). The interactive links between physical design features, employee characteristics, social conditions at work and occupational health therefore warrant greater attention in subsequent studies (Levi 1992; Moos 1986; Stokols 1992). At the same time, an important challenge for future research is to clarify the operational definitions of particular design features (e.g., the “open-plan” office), which have varied widely in earlier studies (Brill, Margulis and Konar 1984; Marans and Yan 1989; Wineman 1986).
Secondly, employee characteristics such as job status, gender and dispositional styles have been found to mediate the health consequences of worksite design (Burge et al. 1987; Oldham 1988; Hedge 1986; Sundstrom 1986). Yet it is often difficult to disentangle the separate effects of environmental features and individual differences (for example, workstation enclosure and comfortable furnishings tend to go together with higher job status) because of ecological correlations among these variables (Klitzman and Stellman 1989). Future studies should incorporate experimental techniques and sampling strategies that permit an assessment of the main and interactive effects of personal and environmental factors on occupational health. Moreover, specialized design and ergonomic criteria to enhance the health of diverse and vulnerable employee groups (e.g., disabled, elderly and single-parent female workers) remain to be developed in future research (Michelson 1985; Ornstein 1990; Steinfeld 1986).
Thirdly, prior research on the health outcomes of worksite design has relied heavily on survey methods to assess employees’ perceptions of both their work environments and health status, placing certain constraints (for example, “common method variance”) on the interpretation of data (Klitzman and Stellman 1989; Oldham and Rotchford 1983). Furthermore, the majority of these studies have used cross-sectional rather than longitudinal research designs, the latter incorporating comparative assessments of intervention and control groups. Future studies should emphasize both field-experimental research designs and multi-method strategies that combine survey techniques with more objective observations and recordings of environmental conditions, medical exams and physiological measures.
Finally, the health consequences of building organization, exterior amenities and site-planning decisions have received considerably less attention in prior studies than those associated with the more immediate, ambient qualities of employees’ work areas. The health relevance of both proximal and remote aspects of workplace design should be examined more closely in future research.
Role of Workplace Design in Illness Prevention and Health Promotion
Several environmental design resources and their potential health benefits are summarized in table 1, based on the preceding review of research findings. These resources are grouped according to the four levels of design noted above and emphasize physical features of work settings that have been empirically linked to improved mental, physical and social health outcomes (especially those found at levels 1 and 2), or have been identified as theoretically plausible leverage points for enhancing employee well-being (e.g., several of the features subsumed under levels 3 and 4).
Table 1. Workplace design resources and potential health benefits
Levels of environmental design | Environmental design features of the workplace | Emotional, social and physical health outcomes |
Immediate work area | Physical enclosure of the work area | Enhanced privacy and job satisfaction |
Immediate work area | Adjustable furniture and equipment | Reduced eyestrain, repetitive-strain and lower-back injuries |
Immediate work area | Localized controls of acoustics, lighting and ventilation | Enhanced comfort and stress reduction |
Immediate work area | Natural elements and personalized decor | Enhanced sense of identity and involvement at the workplace |
Immediate work area | Presence of windows in work area | Job satisfaction and stress reduction |
Ambient qualities of the work area | Speech privacy and noise control | Lower physiological and emotional stress |
Ambient qualities of the work area | Comfortable levels of social density | Lower physiological and emotional stress |
Ambient qualities of the work area | Good mix of private and team spaces | Improved social climate and cohesion |
Ambient qualities of the work area | Symbols of corporate and team identity | Improved social climate and cohesion |
Ambient qualities of the work area | Natural, task and lensed indirect lighting | Reduced eyestrain, enhanced satisfaction |
Ambient qualities of the work area | Natural ventilation vs. chilled-air systems | Lower rates of respiratory problems |
Building organization | Adjacencies among interacting units | Enhanced coordination and cohesion |
Building organization | Legible signage and wayfinding aids | Reduced confusion and distress |
Building organization | Injury-resistant architecture | Lower rates of unintentional injuries |
Building organization | Attractive lounge and food areas onsite | Enhanced satisfaction with job and worksite |
Building organization | Availability of worksite child care | Employee convenience, stress reduction |
Building organization | Physical fitness facilities onsite | Improved health practices, lower stress |
Exterior amenities and site planning | Availability of outside recreation areas | Enhanced cohesion, stress reduction |
Exterior amenities and site planning | Access to parking and public transit | Employee convenience, stress reduction |
Exterior amenities and site planning | Proximity to restaurants and stores | Employee convenience, stress reduction |
Exterior amenities and site planning | Good air quality in surrounding area | Improved respiratory health |
Exterior amenities and site planning | Low levels of neighbourhood violence | Reduced rates of intentional injuries |
The incorporation of these resources into the design of work environments should, ideally, be combined with organizational and facilities management policies that maximize the health- promoting qualities of the workplace. These corporate policies include:
Organizational efforts to enhance employee well-being are likely to be more effective to the extent that they combine complementary strategies of environmental design and facilities management, rather than relying exclusively on either one of these approaches.
In the language of engineering, stress is “a force which deforms bodies”. In biology and medicine, the term usually refers to a process in the body, to the body’s general plan for adapting to all the influences, changes, demands and strains to which it is exposed. This plan swings into action, for example, when a person is assaulted on the street, but also when someone is exposed to toxic substances or to extreme heat or cold. It is not just physical exposures which activate this plan, however; mental and social ones do so as well. The plan is activated, for instance, if we are insulted by our supervisor, reminded of an unpleasant experience, expected to achieve something of which we do not believe we are capable, or if, with or without cause, we worry about our job or marriage.
There is something common to all these cases in the way the body attempts to adapt. This common denominator—a kind of “revving up” or “stepping on the gas”—is stress. Stress is, then, a stereotype in the body’s responses to influences, demands or strains. Some level of stress is always to be found in the body, just as, to draw a rough parallel, a country maintains a certain state of military preparedness, even in peacetime. Occasionally this preparedness is intensified, sometimes with good cause and at other times without.
In this way the stress level affects the rate at which processes of wear and tear on the body take place. The more “gas” given, the higher the rate at which the body’s engine is driven, and hence the more quickly the “fuel” is used up and the “engine” wears out. Another metaphor also applies: if you burn a candle with a high flame, at both ends, it will give off brighter light but will also burn down more quickly. A certain amount of fuel is necessary otherwise the engine will stand still, the candle will go out; that is, the organism would be dead. Thus, the problem is not that the body has a stress response, but that the degree of stress—the rate of wear and tear—to which it is subject may be too great. This stress response varies from one minute to another even in one individual, the variation depending in part on the nature and state of the body and in part on the external influences and demands—the stressors—to which the body is exposed. (A stressor is thus something that produces stress.)
Sometimes it is difficult to determine whether stress in a particular situation is good or bad. Take, for instance, the exhausted athlete on the winner’s stand, or the newly appointed but stress-racked executive. Both have achieved their goals. In terms of pure accomplishment, one would have to say that their results were well worth the effort. In psychological terms, however, such a conclusion is more doubtful. A good deal of torment may have been necessary to get so far, involving long years of training or never-ending overtime, usually at the expense of family life. From the medical viewpoint such achievers may be considered to have burnt their candles at both ends. The result could be physiological; the athlete may rupture a muscle or two and the executive develop high blood pressure or have a heart attack.
Stress in relation to work
An example may clarify how stress reactions can arise at work and what they might lead to in terms of health and quality of life. Let us imagine the following situation for a hypothetical male worker. Based on economic and technical considerations, management has decided to break up a production process into very simple and primitive elements which are to be performed on an assembly line. Through this decision, a social structure is created and a process set into motion which can constitute the starting point in a stress- and disease-producing sequence of events. The new situation becomes a psychosocial stimulus for the worker when he first perceives it. These perceptions may be further influenced by the fact that the worker may have previously received extensive training, and was consequently expecting a work assignment which required higher qualifications, not reduced skill levels. In addition, past experience of work on an assembly line was strongly negative (that is, earlier environmental experiences will influence the reaction to the new situation). Furthermore, the worker’s hereditary factors make him more prone to react to stressors with an increase in blood pressure. Perhaps because he is more irritable, his wife criticizes him for accepting his new assignment and bringing his problems home. As a result of all these factors, the worker reacts with feelings of distress, perhaps with an increase in alcohol consumption or by experiencing undesirable physiological reactions, such as an elevation in blood pressure. The troubles at work and in the family continue, and his reactions, originally of a transient type, become sustained. Eventually, he may enter a chronic anxiety state or develop alcoholism or chronic hypertensive disease. These problems, in turn, increase his difficulties at work and with his family, and may also increase his physiological vulnerability. A vicious cycle may set in which can end in a stroke, a workplace accident or even suicide. This example illustrates the environmental programming involved in the way a worker reacts behaviourally, physiologically and socially, leading to increased vulnerability, impaired health and even death.
Psychosocial conditions in present working life
According to an important International Labour Organization (ILO) (1975) resolution, work should not only respect workers’ lives and health and leave them free time for rest and leisure, but also allow them to serve society and achieve self-fulfilment by developing their personal capabilities. These principles were also set down as early as 1963, in a report from the London Tavistock Institute (Document No. T813) which provided the following general guidelines for job design:
The Organization for Economic Cooperation and Development (OECD), however, draws a less hopeful picture of the reality of working life, pointing out that:
In the short run, the developments which have proceeded along the lines of this OECD list have brought more productivity at lower cost, as well as an increase in wealth. However, the long-term disadvantages of such developments are often more worker dissatisfaction, alienation and possibly ill health, which, for society in general, may in turn affect the economic sphere, although the economic costs of these effects have only recently been taken into consideration (Cooper, Luikkonen and Cartwright 1996; Levi and Lunde-Jensen 1996).
We also tend to forget that, biologically, humankind has not changed much during the last 100,000 years, whereas the environment—and in particular the work environment—has changed dramatically, particularly during the past century and decades. This change has been partly for the better; however, some of these “improvements” have been accompanied by unexpected side effects. For example, data collected by the National Swedish Central Bureau of Statistics during the 1980s showed that:
In its major study of working conditions in the 12 member States of the European Union at the time (1991/92), the European Foundation (Paoli 1992) found that 30% of the workforce considered their work a risk to their health; that 23 million workers did night work for more than 25% of their total hours worked; that one worker in three reported highly repetitive, monotonous work; that one man in five and one woman in six worked under “continuous time pressure”; and that one worker in four carried heavy loads or worked in a twisted or painful position for more than 50% of his or her working time.
Main psychosocial stressors at work
As already indicated, stress is caused by a bad “person-environment fit”, objectively, subjectively, or both, at work or elsewhere and in an interaction with genetic factors. It is like a badly fitting shoe: environmental demands are not matched to individual ability, or environmental opportunities do not measure up to individual needs and expectations. For example, the individual is able to perform a certain amount of work, but much more is required, or on the other hand no work at all is offered. Another example would be that the worker needs to be part of a social network, to experience a sense of belonging, a sense that life has meaning, but there may be no opportunity to meet these needs in the existing environment and the “fit” becomes bad.
Any fit will depend on the “shoe” as well as on the “foot”, on situational factors as well as on individual and group characteristics. The most important situational factors that give rise to “misfit” can be categorized as follows:
Quantitative overload. Too much to do, time pressure and repetitive work-flow. This is to a great extent the typical feature of mass production technology and routinized office work.
Qualitative underload. Too narrow and one-sided job content, lack of stimulus variation, no demands on creativity or problem-solving, or low opportunities for social interaction. These jobs seem to become more common with suboptimally designed automation and increased use of computers in both offices and manufacturing even though there may be instances of the opposite.
Role conflicts. Everybody occupies several roles concurrently. We are the superiors of some people and the subordinates of others. We are children, parents, marital partners, friends and members of clubs or trade unions. Conflicts easily arise among our various roles and are often stress evoking, as when, for instance, demands at work clash with those from a sick parent or child or when a supervisor is divided between loyalty to superiors and to fellow workers and subordinates.
Lack of control over one’s own situation. When someone else decides what to do, when and how; for example, in relation to work pace and working methods, when the worker has no influence, no control, no say. Or when there is uncertainty or lack of any obvious structure in the work situation.
Lack of social support at home and from your boss or fellow workers.
Physical stressors. Such factors can influence the worker both physically and chemically, for example, direct effects on the brain of organic solvents. Secondary psychosocial effects can also originate from the distress caused by, say, odours, glare, noise, extremes of air temperature or humidity and so on. These effects can also be due to the worker’s awareness, suspicion or fear that he is exposed to life-threatening chemical hazards or to accident risks.
Finally, real life conditions at work and outside work usually imply a combination of many exposures. These might become superimposed on each other in an additive or synergistic way. The straw which breaks the camel’s back may therefore be a rather trivial environmental factor, but one that comes on top of a very considerable, pre-existing environmental load.
Some of the specific stressors in industry merit special discussion, namely those characteristic of:
Mass production technology. Over the past century work has become fragmented in many workplaces, changing from a well defined job activity with a distinct and recognized end-product, into numerous narrow and highly specified subunits which bear little apparent relation to the end-product. The growing size of many factory units has tended to result in a long chain of command between management and the individual workers, accentuating remoteness between the two groups. The worker also becomes remote from the consumer, since rapid elaborations for marketing, distribution and selling interpose many steps between the producer and the consumer.
Mass production, thus, normally involves not just a pronounced fragmentation of the work process but also a decrease in worker control of the process. This is partly because work organization, work content and work pace are determined by the machine system. All these factors usually result in monotony, social isolation, lack of freedom and time pressure, with possible long-term effects on health and well-being.
Mass production, moreover, favours the introduction of piece rates. In this regard, it can be assumed that the desire—or necessity—to earn more can, for a time, induce the individual to work harder than is good for the organism and to ignore mental and physical “warnings”, such as a feeling of tiredness, nervous problems and functional disturbances in various organs or organ systems. Another possible effect is that the employee, bent on raising output and earnings, infringes safety regulations thereby increasing the risk of occupational disease and of accidents to oneself and others (e.g., lorry drivers on piece rates).
Highly automated work processes. In automated work the repetitive, manual elements are taken over by machines, and the workers are left with mainly supervisory, monitoring and controlling functions. This kind of work is generally rather skilled, not regulated in detail and the worker is free to move about. Accordingly, the introduction of automation eliminates many of the disadvantages of the mass-production technology. However, this holds true mainly for those stages of automation where the operator is indeed assisted by the computer and maintains some control over its services. If, however, operator skills and knowledge are gradually taken over by the computer—a likely development if decision making is left to economists and technologists—a new impoverishment of work may result, with a re-introduction of monotony, social isolation and lack of control.
Monitoring a process usually calls for sustained attention and readiness to act throughout a monotonous term of duty, a requirement that does not match the brain’s need for a reasonably varied flow of stimuli in order to maintain optimal alertness. It is well documented that the ability to detect critical signals declines rapidly even during the first half-hour in a monotonous environment. This may add to the strain inherent in the awareness that temporary inattention and even a slight error could have extensive economic and other disastrous consequences.
Other critical aspects of process control are associated with very special demands on mental skill. The operators are concerned with symbols, abstract signals on instrument arrays and are not in touch with the actual product of their work.
Shift work. In the case of shift work, rhythmical biological changes do not necessarily coincide with corresponding environmental demands. Here, the organism may “step on the gas” and activation occurs at a time when the worker needs to sleep (for example, during the day after a night shift), and deactivation correspondingly occurs at night, when the worker may need to work and be alert.
A further complication arises because workers usually live in a social environment which is not designed for the needs of shift workers. Last but not least, shift workers must often adapt to regular or irregular changes in environmental demands, as in the case of rotating shifts.
In summary, the psychosocial demands of the modern workplace are often at variance with the workers’ needs and capabilities, leading to stress and ill health. This discussion provides only a snapshot of psychosocial stressors at work, and how these unhealthy conditions can arise in today’s workplace. In the sections that follow, psychosocial stressors are analysed in greater detail with respect to their sources in modern work systems and technologies, and with respect to their assessment and control.
In 1966, long before job stress and psychosocial factors became household expressions, a special report entitled “Protecting the Health of Eighty Million Workers—A National Goal for Occupational Health” was issued to the Surgeon General of the United States (US Department of Health and Human Services 1966). The report was prepared under the auspices of the National Advisory Environmental Health Committee to provide direction to Federal programmes in occupational health. Among its many observations, the report noted that psychological stress was increasingly apparent in the workplace, presenting “... new and subtle threats to mental health,” and possible risk of somatic disorders such as cardiovascular disease. Technological change and the increasing psychological demands of the workplace were listed as contributing factors. The report concluded with a list of two dozen “urgent problems” requiring priority attention, including occupational mental health and contributing workplace factors.
Thirty years later, this report has proven remarkably prophetic. Job stress has become a leading source of worker disability in North America and Europe. In 1990, 13% of all worker disability cases handled by Northwestern National Life, a major US underwriter of worker compensation claims, were due to disorders with a suspected link to job stress (Northwestern National Life 1991). A 1985 study by the National Council on Compensation Insurance found that one type of claim, involving psychological disability due to “gradual mental stress” at work, had grown to 11% of all occupational disease claims* (National Council on Compensation Insurance 1985).
* In the United States, occupational disease claims are distinct from injury claims, which tend to greatly outnumber disease claims.
These developments are understandable considering the demands of modern work. A 1991 survey of European Union members found that “The proportion of workers who complain from organizational constraints, which are in particular conducive to stress, is higher than the proportion of workers complaining from physical constraints” (European Foundation for the Improvement of Living and Working Conditions 1992). Similarly, a more recent study of the Dutch working population found that one-half of the sample reported a high work pace, three-fourths of the sample reported poor possibilities of promotion, and one-third reported a poor fit between their education and their jobs (Houtman and Kompier 1995). On the American side, data on the prevalence of job stress risk factors in the workplace are less available. However, in a recent survey of several thousand US workers, over 40% of the workers reported excessive workloads and said they were “used up” and “emotionally drained” at the end of the day (Galinsky, Bond and Friedman 1993).
The impact of this problem in terms of lost productivity, disease and reduced quality of life is undoubtedly formidable, although difficult to estimate reliably. However, recent analyses of data from over 28,000 workers by the Saint Paul Fire and Marine Insurance company are of interest and relevance. This study found that time pressure and other emotional and personal problems at work were more strongly associated with reported health problems than any other personal life stressor; more so than even financial or family problems, or death of a loved one (St. Paul Fire and Marine Insurance Company 1992).
Looking to the future, rapid changes in the fabric of work and the workforce pose unknown, and possibly increased, risks of job stress. For example, in many countries the workforce is rapidly ageing at a time when job security is decreasing. In the United States, corporate downsizing continues almost unabated into the last half of the decade at a rate of over 30,000 jobs lost per month (Roy 1995). In the above-cited study by Galinsky, Bond and Friedman (1993) nearly one-fifth of the workers thought it likely they would lose their jobs in the forthcoming year. At the same time the number of contingent workers, who are generally without health benefits and other safety nets, continues to grow and now comprises about 5% of the workforce (USBLS 1995).
The aim of this chapter is to provide an overview of current knowledge on conditions which lead to stress at work and associated health and safety problems. These conditions, which are commonly referred to as psychosocial factors, include aspects of the job and work environment such as organizational climate or culture, work roles, interpersonal relationships at work, and the design and content of tasks (e.g., variety, meaning, scope, repetitiveness, etc.). The concept of psychosocial factors extends also to the extra-organizational environment (e.g., domestic demands) and aspects of the individual (e.g., personality and attitudes) which may influence the development of stress at work. Frequently, the expressions work organization or organizational factors are used interchangeably with psychosocial factors in reference to working conditions which may lead to stress.
This section of the Encyclopaedia begins with descriptions of several models of job stress which are of current scientific interest, including the job demands-job control model, the person-environment (P-E) fit model, and other theoretical approaches to stress at work. Like all contemporary notions of job stress, these models have a common theme: job stress is conceptualized in terms of the relationship between the job and the person. According to this view, job stress and the potential for ill health develop when job demands are at variance with the needs, expectations or capacities of the worker. This core feature is implicit in figure 1, which shows the basic elements of a stress model favoured by researchers at the National Institute for Occupational Safety and Health (NIOSH). In this model, work-related psychosocial factors (termed stressors) result in psychological, behavioural and physical reactions which may ultimately influence health. However, as illustrated in figure 1, individual and contextual factors (termed stress moderators) intervene to influence the effects of job stressors on health and well-being. (See Hurrell and Murphy 1992 for a more elaborate description of the NIOSH stress model.)
Figure 1. The Job Stress Model of the National Institute for Occupational Safety and Health (NIOSH)
But putting aside this conceptual similarity, there are also non-trivial theoretical differences among these models. For example, unlike the NIOSH and P-E fit models of job stress, which acknowledge a host of potential psychosocial risk factors in the workplace, the job demands-job control model focuses most intensely on a more limited range of psychosocial dimensions pertaining to psychological workload and opportunity for workers to exercise control (termed decision latitude) over aspects of their jobs. Further, both the demand-control and the NIOSH models can be distinguished from the P-E fit models in terms of the focus placed on the individual. In the P-E fit model, emphasis is placed on individuals’ perceptions of the balance between features of the job and individual attributes. This focus on perceptions provides a bridge between P-E fit theory and another variant of stress theory attributed to Lazarus (1966), in which individual differences in appraisal of psychosocial stressors and in coping strategies become critically important in determining stress outcomes. In contrast, while not denying the importance of individual differences, the NIOSH stress model gives primacy to environmental factors in determining stress outcomes as suggested by the geometry of the model illustrated in figure 1. In essence, the model suggests that most stressors will be threatening to most of the people most of the time, regardless of circumstances. A similar emphasis can be seen in other models of stress and job stress (e.g., Cooper and Marshall 1976; Kagan and Levi 1971; Matteson and Ivancevich 1987).
These differences have important implications for both guiding job stress research and intervention strategies at the workplace. The NIOSH model, for example, argues for primary prevention of job stress via attention first to psychosocial stressors in the workplace and, in this regard, is consistent with a public health model of prevention. Although a public health approach recognizes the importance of host factors or resistance in the aetiology of disease, the first line of defence in this approach is to eradicate or reduce exposure to environmental pathogens.
The NIOSH stress model illustrated in figure 1 provides an organizing framework for the remainder of this section. Following the discussions of job stress models are short articles containing summaries of current knowledge on workplace psychosocial stressors and on stress moderators. These subsections address conditions which have received wide attention in the literature as stressors and stress moderators, as well as topics of emerging interest such as organizational climate and career stage. Prepared by leading authorities in the field, each summary provides a definition and brief overview of relevant literature on the topic. Further, to maximize the utility of these summaries, each contributor has been asked to include information on measurement or assessment methods and on prevention practices.
The final subsection of the chapter reviews current knowledge on a wide range of potential health risks of job stress and underlying mechanisms for these effects. Discussion ranges from traditional concerns, such as psychological and cardiovascular disorders, to emerging topics such as depressed immune function and musculoskeletal disease.
In summary, recent years have witnessed unprecedented changes in the design and demands of work, and the emergence of job stress as a major concern in occupational health. This section of the Encyclopaedia tries to promote understanding of psychosocial risks posed by the evolving work environment, and thus better protect the well-being of workers.
It has long been recognized that each person’s response to environmental chemicals is different. The recent explosion in molecular biology and genetics has brought a clearer understanding of the molecular basis of such variability. Major determinants of individual response to chemicals include important differences among more than a dozen superfamilies of enzymes, collectively termed xenobiotic- (foreign to the body) or drug-metabolizing enzymes. Although the role of these enzymes has classically been regarded as detoxification, these same enzymes also convert a number of inert compounds to highly toxic intermediates. Recently, many subtle as well as gross differences in the genes encoding these enzymes have been identified, which have been shown to result in marked variations in enzyme activity. It is now clear that each individual possesses a distinct complement of xenobiotic-metabolizing enzyme activities; this diversity might be thought of as a “metabolic fingerprint”. It is the complex interplay of these many different enzyme superfamilies which ultimately determines not only the fate and the potential for toxicity of a chemical in any given individual, but also the assessment of exposure. In this article we have chosen to use the cytochrome P450 enzyme superfamily to illustrate the remarkable progress made in understanding individual response to chemicals. The development of relatively simple DNA-based tests designed to identify specific gene alterations in these enzymes is now providing more accurate predictions of individual response to chemical exposure. We hope the result will be preventive toxicology. In other words, each individual might learn about those chemicals to which he or she is particularly sensitive, thereby avoiding previously unpredictable toxicity or cancer.
Although it is not generally appreciated, human beings are exposed daily to a barrage of innumerable diverse chemicals. Many of these chemicals are highly toxic, and they are derived from a wide variety of environmental and dietary sources. The relationship between such exposures and human health has been, and continues to be, a major focus of biomedical research efforts worldwide.
What are some examples of this chemical bombardment? More than 400 chemicals from red wine have been isolated and characterized. At least 1,000 chemicals are estimated to be produced by a lighted cigarette. There are countless chemicals in cosmetics and perfumed soaps. Another major source of chemical exposure is agriculture: in the United States alone, farmlands receive more than 75,000 chemicals each year in the form of pesticides, herbicides and fertilizing agents; after uptake by plants and grazing animals, as well as fish in nearby waterways, humans (at the end of the food chain) ingest these chemicals. Two other sources of large concentrations of chemicals taken into the body include (a) drugs taken chronically and (b) exposure to hazardous substances in the workplace over a lifetime of employment.
It is now well established that chemical exposure may adversely affect many aspects of human health, causing chronic diseases and the development of many cancers. In the last decade or so, the molecular basis of many of these relationships has begun to be unravelled. In addition, the realization has emerged that humans differ markedly in their susceptibility to the harmful effects of chemical exposure.
Current efforts to predict human response to chemical exposure combine two fundamental approaches (figure 1): monitoring the extent of human exposure through biological markers (biomarkers), and predicting the likely response of an individual to a given level of exposure. Although both of these approaches are extremely important, it should be emphasized that the two are distinctly different from one another. This article will focus on the genetic factors underlying individual susceptibility to any particular chemical exposure. This field of research is broadly termed ecogenetics, or pharmacogenetics (see Kalow 1962 and 1992). Many of the recent advances in determining individual susceptibility to chemical toxicity have evolved from a greater appreciation of the processes by which humans and other mammals detoxify chemicals, and the remarkable complexity of the enzyme systems involved.
Figure 1. The interrelationships among exposure assessment, ethnic differences, age, diet, nutrition and genetic susceptibility assessment - all of which play a role in the individual risk of toxicity and cancer
We will first describe the variability of toxic responses in humans. We will then introduce some of the enzymes responsible for such variation in response, due to differences in the metabolism of foreign chemicals. Next, the history and nomenclature of the cytochrome P450 superfamily will be detailed. Five human P450 polymorphisms as well as several non-P450 polymorphisms will be briefly described; these are responsible for human differences in toxic response. We will then discuss an example to emphasize the point that genetic differences in individuals can influence exposure assessment, as determined by environmental monitoring. Lastly, we will discuss the role of these xenobiotic-metabolizing enzymes in critical life functions.
Variation in Toxic Response Among the Human Population
Toxicologists and pharmacologists commonly speak about the average lethal dose for 50% of the population (LD50), the average maximal tolerated dose for 50% of the population (MTD50), and the average effective dose of a particular drug for 50% of the population (ED50). However, how do these doses affect each of us on an individual basis? A highly sensitive individual may be 500 times more affected, or 500 times more likely to be affected, than the most resistant individual in a population; for these people, the LD50 (and MTD50 and ED50) values would have little meaning. LD50, MTD50 and ED50 values are only relevant when referring to the population as a whole.
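To make the population-level meaning of these indices concrete, one common sketch assumes a log-normal distribution of individual tolerances; the model and symbols below are introduced purely for illustration and are not taken from the text.

\[
  P(d) = \Phi\!\left(\frac{\ln d - \mu}{\sigma}\right),
  \qquad
  \mathrm{LD_{50}} = e^{\mu},
\]

where P(d) is the fraction of the population responding at dose d, \(\Phi\) is the standard normal cumulative distribution function, and \(\sigma\) expresses inter-individual variability: the larger \(\sigma\) is, the further the sensitive and resistant outliers lie from the population median.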
Figure 2 illustrates a hypothetical dose-response relationship for a toxic response by individuals in any given population. This generic diagram might represent bronchogenic carcinoma in response to the number of cigarettes smoked, chloracne as a function of dioxin levels in the workplace, asthma as a function of air concentrations of ozone or aldehyde, sunburn in response to ultraviolet light, decreased clotting time as a function of aspirin intake, or gastrointestinal distress in response to the number of jalapeño peppers consumed. Generally, in each of these instances, the greater the exposure, the greater the toxic response. At any given dose, most of the population will cluster around the mean toxic response, within the range described by the standard deviation. The “resistant outlier” (lower right in figure 2) is an individual having less of a response at higher doses or exposures. A “sensitive outlier” (upper left) is an individual having an exaggerated response to a relatively small dose or exposure. These outliers, with extreme differences in response compared to the majority of individuals in the population, may represent important genetic variants that can help scientists in attempting to understand the underlying molecular mechanisms of a toxic response.
Figure 2. Generic relationship between any toxic response and the dose of any environmental, chemical or physical agent
Using these outliers in family studies, scientists in a number of laboratories have begun to appreciate the importance of Mendelian inheritance for a given toxic response. Subsequently, one can then turn to molecular biology and genetic studies to pinpoint the underlying mechanism at the gene level (genotype) responsible for the environmentally caused disease (phenotype).
Xenobiotic- or Drug-metabolizing Enzymes
How does the body respond to the myriad of exogenous chemicals to which we are exposed? Humans and other mammals have evolved highly complex metabolic enzyme systems comprising more than a dozen distinct superfamilies of enzymes. Almost every chemical to which humans are exposed will be modified by these enzymes, in order to facilitate removal of the foreign substance from the body. Collectively, these enzymes are frequently referred to as drug-metabolizing enzymes or xenobiotic-metabolizing enzymes. Actually, both terms are misnomers. First, many of these enzymes metabolize not only drugs but also hundreds of thousands of environmental and dietary chemicals. Second, all of these enzymes also have normal body compounds as substrates; none of these enzymes metabolizes only foreign chemicals.
For more than four decades, the metabolic processes mediated by these enzymes have commonly been classified as either Phase I or Phase II reactions (figure 3). Phase I (“functionalization”) reactions generally involve relatively minor structural modifications of the parent chemical via oxidation, reduction or hydrolysis in order to produce a more water-soluble metabolite. Frequently, Phase I reactions provide a “handle” for further modification of a compound by subsequent Phase II reactions. Phase I reactions are primarily mediated by a superfamily of highly versatile enzymes, collectively termed cytochromes P450, although other enzyme superfamilies can also be involved (figure 4).
Figure 3. The classical designation of Phase I and Phase II xenobiotic- or drug-metabolizing enzymes
Figure 4. Examples of drug-metabolizing enzymes
Phase II reactions involve the coupling of a water-soluble endogenous molecule to a chemical (parent chemical or Phase I metabolite) in order to facilitate excretion. Phase II reactions are frequently termed “conjugation” or “derivatization” reactions. The enzyme superfamilies catalyzing Phase II reactions are generally named according to the endogenous conjugating moiety involved: for example, acetylation by the N-acetyltransferases, sulphation by the sulphotransferases, glutathione conjugation by the glutathione transferases, and glucuronidation by the UDP glucuronosyltransferases (figure 4). Although the major organ of drug metabolism is the liver, the levels of some drug-metabolizing enzymes are quite high in the gastrointestinal tract, gonads, lung, brain and kidney, and such enzymes are undoubtedly present to some extent in every living cell.
Xenobiotic-metabolizing Enzymes Represent Double-edged Swords
As we learn more about the biological and chemical processes leading to human health aberrations, it has become increasingly evident that drug-metabolizing enzymes function in an ambivalent manner (figure 3). In the majority of cases, lipid-soluble chemicals are converted to more readily excreted water-soluble metabolites. However, it is clear that on many occasions the same enzymes are capable of transforming other inert chemicals into highly reactive molecules. These intermediates can then interact with cellular macromolecules such as proteins and DNA. Thus, for each chemical to which humans are exposed, there exists the potential for the competing pathways of metabolic activation and detoxification.
Brief Review of Genetics
In human genetics, each gene (locus) is located on one of the 23 pairs of chromosomes. The two alleles (one present on each chromosome of the pair) can be the same, or they can be different from one another. Consider, for example, the B and b alleles, in which B (brown eyes) is dominant over b (blue eyes): individuals of the brown-eyed phenotype can have either the BB or Bb genotypes, whereas individuals of the blue-eyed phenotype can only have the bb genotype.
A polymorphism is defined as two or more stably inherited phenotypes (traits)—derived from the same gene(s)—that are maintained in the population, often for reasons not necessarily obvious. For a gene to be polymorphic, the gene product must not be essential for development, reproductive vigour or other critical life processes. In fact, a “balanced polymorphism,” wherein the heterozygote has a distinct survival advantage over either homozygote (e.g., resistance to malaria, and the sickle-cell haemoglobin allele) is a common explanation for maintaining an allele in the population at otherwise unexplained high frequencies (see Gonzalez and Nebert 1990).
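As a purely illustrative aside, the population-genetics argument behind a balanced polymorphism can be written in a standard form that is not derived in the text: if the genotypes AA, Aa and aa have relative fitnesses 1 - s, 1 and 1 - t respectively, selection holds both alleles in the population at an internal equilibrium.

\[
  w_{AA} = 1 - s,\quad w_{Aa} = 1,\quad w_{aa} = 1 - t
  \qquad\Longrightarrow\qquad
  \hat{p}_{A} = \frac{t}{s + t},\quad \hat{q}_{a} = \frac{s}{s + t},
\]

so that, provided both selection coefficients s and t are positive, neither allele is eliminated.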
Human Polymorphisms of Xenobiotic-metabolizing Enzymes
Genetic differences in the metabolism of various drugs and environmental chemicals have been known for more than four decades (Kalow 1962 and 1992). These differences are frequently referred to as pharmacogenetic or, more broadly, ecogenetic polymorphisms. These polymorphisms represent variant alleles that occur at a relatively high frequency in the population and are generally associated with aberrations in enzyme expression or function. Historically, polymorphisms were usually identified following unexpected responses to therapeutic agents. More recently, recombinant DNA technology has enabled scientists to identify the precise alterations in genes that are responsible for some of these polymorphisms. Polymorphisms have now been characterized in many drug-metabolizing enzymes—including both Phase I and Phase II enzymes. As more and more polymorphisms are identified, it is becoming increasingly apparent that each individual may possess a distinct complement of drug-metabolizing enzymes. This diversity might be described as a “metabolic fingerprint”. It is the complex interplay of the various drug-metabolizing enzyme superfamilies within any individual that will ultimately determine his or her particular response to a given chemical (Kalow 1962 and 1992; Nebert 1988; Gonzalez and Nebert 1990; Nebert and Weber 1990).
Expressing Human Xenobiotic-metabolizing Enzymes in Cell Culture
How might we develop better predictors of human toxic responses to chemicals? Advances in defining the multiplicity of drug-metabolizing enzymes must be accompanied by precise knowledge as to which enzymes determine the metabolic fate of individual chemicals. Data gleaned from laboratory rodent studies have certainly provided useful information. However, significant interspecies differences in xenobiotic-metabolizing enzymes necessitate caution in extrapolating data to human populations. To overcome this difficulty, many laboratories have developed systems in which various cell lines in culture can be engineered to produce functional human enzymes that are stable and in high concentrations (Gonzalez, Crespi and Gelboin 1991). Successful production of human enzymes has been achieved in a variety of diverse cell lines from sources including bacteria, yeast, insects and mammals.
In order to define the metabolism of chemicals even more accurately, multiple enzymes have also been successfully produced in a single cell line (Gonzalez, Crespi and Gelboin 1991). Such cell lines provide valuable insights into the precise enzymes involved in the metabolic processing of any given compound and likely toxic metabolites. If this information can then be combined with knowledge regarding the presence and level of an enzyme in human tissues, these data should provide valuable predictors of response.
Cytochrome P450
History and nomenclature
The cytochrome P450 superfamily is one of the most studied drug-metabolizing enzyme superfamilies, and one that shows a great deal of individual variability in response to chemicals. Cytochrome P450 is a convenient generic term used to describe a large superfamily of enzymes pivotal in the metabolism of innumerable endogenous and exogenous substrates. The term cytochrome P450 was first coined in 1962 to describe an unknown pigment in cells which, when reduced and bound with carbon monoxide, produced a characteristic absorption peak at 450 nm. Since the early 1980s, cDNA cloning technology has resulted in remarkable insights into the multiplicity of cytochrome P450 enzymes. To date, more than 400 distinct cytochrome P450 genes have been identified in animals, plants, bacteria and yeast. It has been estimated that any one mammalian species, such as humans, may possess 60 or more distinct P450 genes (Nebert and Nelson 1991). The multiplicity of P450 genes has necessitated the development of a standardized nomenclature system (Nebert et al. 1987; Nelson et al. 1993). First proposed in 1987 and updated on a biannual basis, the nomenclature system is based on divergent evolution of amino acid sequence comparisons between P450 proteins. The P450 genes are divided into families and subfamilies: enzymes within a family display greater than 40% amino acid similarity, and those within the same subfamily display greater than 55% similarity. P450 genes are named with the root symbol CYP followed by an Arabic numeral designating the P450 family, a letter denoting the subfamily, and a further Arabic numeral designating the individual gene (Nelson et al. 1993; Nebert et al. 1991). Thus, CYP1A1 represents P450 gene 1 in family 1 and subfamily A.
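As a purely illustrative aid to the naming convention just described, the short Python sketch below splits a CYP symbol such as CYP1A1 into its family, subfamily and gene components; the regular expression and function name are hypothetical conveniences, not part of any published tool.

import re

# Illustrative sketch only: split a CYP gene symbol (e.g., "CYP1A1") into the
# family (Arabic numeral), subfamily (letter) and individual gene number,
# following the naming convention described in the text.
_CYP_PATTERN = re.compile(r"^CYP(?P<family>\d+)(?P<subfamily>[A-Z]+)(?P<gene>\d+)$")

def parse_cyp_symbol(symbol: str) -> dict:
    match = _CYP_PATTERN.match(symbol.upper())
    if match is None:
        raise ValueError(f"Not a recognizable CYP symbol: {symbol!r}")
    return {
        "family": int(match.group("family")),    # >40% amino acid similarity within a family
        "subfamily": match.group("subfamily"),   # >55% similarity within a subfamily
        "gene": int(match.group("gene")),
    }

# Example: parse_cyp_symbol("CYP1A1") returns {"family": 1, "subfamily": "A", "gene": 1}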
As of February 1995, there are 403 CYP genes in the database, composed of 59 families and 105 subfamilies. These include eight lower eukaryotic families, 15 plant families, and 19 bacterial families. The 15 human P450 gene families comprise 26 subfamilies, 22 of which have been mapped to chromosomal locations throughout most of the genome. Some sequences are clearly orthologous across many species—for example, only one CYP17 (steroid 17α-hydroxylase) gene has been found in all vertebrates examined to date; other sequences within a subfamily are highly duplicated, making the identification of orthologous pairs impossible (e.g., the CYP2C subfamily). Interestingly, human and yeast share an orthologous gene in the CYP51 family. Numerous comprehensive reviews are available for readers seeking further information on the P450 superfamily (Nelson et al. 1993; Nebert et al. 1991; Nebert and McKinnon 1994; Guengerich 1993; Gonzalez 1992).
The success of the P450 nomenclature system has resulted in similar terminology systems being developed for the UDP glucuronosyltransferases (Burchell et al. 1991) and flavin-containing mono-oxygenases (Lawton et al. 1994). Similar nomenclature systems based on divergent evolution are also under development for several other drug-metabolizing enzyme superfamilies (e.g., sulphotransferases, epoxide hydrolases and aldehyde dehydrogenases).
Recently, we divided the mammalian P450 gene superfamily into three groups (Nebert and McKinnon 1994)—those involved principally with foreign chemical metabolism, those involved in the synthesis of various steroid hormones, and those participating in other important endogenous functions. It is the xenobiotic-metabolizing P450 enzymes that assume the most significance for prediction of toxicity.
Xenobiotic-metabolizing P450 enzymes
P450 enzymes involved in the metabolism of foreign compounds and drugs are almost always found within families CYP1, CYP2, CYP3 and CYP4. These P450 enzymes catalyze a wide variety of metabolic reactions, with a single P450 often capable of metabolizing many different compounds. In addition, multiple P450 enzymes may metabolize a single compound at different sites. Also, a compound may be metabolized at the same, single site by several P450s, although at varying rates.
A most important property of the drug-metabolizing P450 enzymes is that many of these genes are inducible by the very substances which serve as their substrates. On the other hand, other P450 genes are induced by nonsubstrates. This phenomenon of enzyme induction underlies many drug-drug interactions of therapeutic importance.
Although present in many tissues, these particular P450 enzymes are found in relatively high levels in the liver, the primary site of drug metabolism. Some of the xenobiotic-metabolizing P450 enzymes exhibit activity toward certain endogenous substrates (e.g., arachidonic acid). However, it is generally believed that most of these xenobiotic-metabolizing P450 enzymes do not play important physiological roles—although this has not been established experimentally as yet. The selective homozygous disruption, or “knock-out,” of individual xenobiotic-metabolizing P450 genes by means of gene targeting methodologies in mice is likely to provide unequivocal information soon with regard to physiological roles of the xenobiotic-metabolizing P450s (for a review of gene targeting, see Capecchi 1994).
In contrast to P450 families encoding enzymes involved primarily in physiological processes, families encoding xenobiotic-metabolizing P450 enzymes display marked species specificity and frequently contain many active genes per subfamily (Nelson et al. 1993; Nebert et al. 1991). Given the apparent lack of physiological substrates, it is possible that P450 enzymes in families CYP1, CYP2, CYP3 and CYP4 that have appeared in the past several hundred million years have evolved as a means of detoxifying foreign chemicals encountered in the environment and diet. Clearly, evolution of the xenobiotic-metabolizing P450s would have occurred over a time period which far precedes the synthesis of most of the synthetic chemicals to which humans are now exposed. The genes in these four gene families may have evolved and diverged in animals due to their exposure to plant metabolites during the last 1.2 billion years—a process descriptively termed “animal-plant warfare” (Gonzalez and Nebert 1990). Animal-plant warfare is the phenomenon in which plants developed new chemicals (phytoalexins) as a defence mechanism in order to prevent ingestion by animals, and animals, in turn, responded by developing new P450 genes to accommodate the diversifying substrates. Providing further impetus to this proposal are the recently described examples of plant-insect and plant-fungus chemical warfare involving P450 detoxification of toxic substrates (Nebert 1994).
The following is a brief introduction to several of the human xenobiotic-metabolizing P450 enzyme polymorphisms in which genetic determinants of toxic response are believed to be of high significance. Until recently, P450 polymorphisms were generally suggested by unexpected variance in patient response to administered therapeutic agents. Several P450 polymorphisms are indeed named according to the drug with which the polymorphism was first identified. More recently, research efforts have focused on identification of the precise P450 enzymes involved in the metabolism of chemicals for which variance is observed, and on the precise characterization of the P450 genes involved. As described earlier, the measurable activity of a P450 enzyme towards a model chemical can be called the phenotype. The particular set of alleles of a P450 gene carried by an individual is termed the P450 genotype. As more and more scrutiny is applied to the analysis of P450 genes, the precise molecular basis of previously documented phenotypic variance is becoming clearer.
The CYP1A subfamily
The CYP1A subfamily comprises two enzymes in humans and all other mammals: these are designated CYP1A1 and CYP1A2 under standard P450 nomenclature. These enzymes are of considerable interest, because they are involved in the metabolic activation of many procarcinogens and are also induced by several compounds of toxicological concern, including dioxin. For example, CYP1A1 metabolically activates many compounds found in cigarette smoke. CYP1A2 metabolically activates many arylamines—associated with urinary bladder cancer—found in the chemical dye industry. CYP1A2 also metabolically activates 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK), a tobacco-derived nitrosamine. CYP1A1 and CYP1A2 are also found at higher levels in the lungs of cigarette smokers, due to induction by polycyclic hydrocarbons present in the smoke. The levels of CYP1A1 and CYP1A2 activity are therefore considered to be important determinants of individual response to many potentially toxic chemicals.
Toxicological interest in the CYP1A subfamily was greatly intensified by a 1973 report correlating the level of CYP1A1 inducibility in cigarette smokers with individual susceptibility to lung cancer (Kellermann, Shaw and Luyten-Kellermann 1973). The molecular basis of CYP1A1 and CYP1A2 induction has been a major focus of numerous laboratories. The induction process is mediated by a protein termed the Ah receptor to which dioxins and structurally related chemicals bind. The name Ah is derived from the aryl hydrocarbon nature of many CYP1A inducers. Interestingly, differences in the gene encoding the Ah receptor between strains of mice result in marked differences in chemical response and toxicity. A polymorphism in the Ah receptor gene also appears to occur in humans: approximately one-tenth of the population displays high induction of CYP1A1 and may be at greater risk than the other nine-tenths of the population for development of certain chemically induced cancers. The role of the Ah receptor in the control of enzymes in the CYP1A subfamily, and its role as a determinant of human response to chemical exposure, has been the subject of several recent reviews (Nebert, Petersen and Puga 1991; Nebert, Puga and Vasiliou 1993).
Are there other polymorphisms that might control the level of CYP1A proteins in a cell? A polymorphism in the CYP1A1 gene has also been identified, and this appears to influence lung cancer risk amongst Japanese cigarette smokers, although this same polymorphism does not appear to influence risk in other ethnic groups (Nebert and McKinnon 1994).
CYP2C19
Variations in the rate at which individuals metabolize the anticonvulsant drug (S)-mephenytoin have been well documented for many years (Guengerich 1989). Between 2% and 5% of Caucasians and as many as 25% of Asians are deficient in this activity and may be at greater risk of toxicity from the drug. This enzyme defect has long been known to involve a member of the human CYP2C subfamily, but the precise molecular basis of this deficiency has been the subject of considerable controversy. The major reason for this difficulty was the presence of six or more genes in the human CYP2C subfamily. It was recently demonstrated, however, that a single-base mutation in the CYP2C19 gene is the primary cause of this deficiency (Goldstein and de Morais 1994). A simple DNA test, based on the polymerase chain reaction (PCR), has also been developed to identify this mutation rapidly in human populations (Goldstein and de Morais 1994).
CYP2D6
Perhaps the most extensively characterized variation in a P450 gene is that involving the CYP2D6 gene. More than a dozen examples of mutations, rearrangements and deletions affecting this gene have been described (Meyer 1994). This polymorphism was first suggested 20 years ago by clinical variability in patients’ response to the antihypertensive agent debrisoquine. Alterations in the CYP2D6 gene giving rise to altered enzyme activity are therefore collectively termed the debrisoquine polymorphism.
Prior to the advent of DNA-based studies, individuals had been classified as poor or extensive metabolizers (PMs, EMs) of debrisoquine based on metabolite concentrations in urine samples. It is now clear that alterations in the CYP2D6 gene may result in individuals displaying not only poor or extensive debrisoquine metabolism, but also ultrarapid metabolism. Most alterations in the CYP2D6 gene are associated with partial or total deficiency of enzyme function; however, individuals in two families have recently been described who possess multiple functional copies of the CYP2D6 gene, giving rise to ultrarapid metabolism of CYP2D6 substrates (Meyer 1994). This remarkable observation provides new insights into the wide spectrum of CYP2D6 activity previously observed in population studies. Alterations in CYP2D6 function are of particular significance, given the more than 30 commonly prescribed drugs metabolized by this enzyme. An individual’s CYP2D6 function is therefore a major determinant of both therapeutic and toxic response to administered therapy. Indeed, it has recently been argued that consideration of a patient’s CYP2D6 status is necessary for the safe use of both psychiatric and cardiovascular drugs.
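As a sketch of the phenotyping logic just described, the short Python function below assigns poor, extensive or ultrarapid metabolizer status from a urinary metabolic ratio. The cut-off values are illustrative placeholders only (they are not taken from the text, and validated antimodes must come from the clinical literature), and the function name is hypothetical.

# Illustrative sketch only. Classifies debrisoquine metabolizer status from a
# urinary metabolic ratio (parent drug divided by hydroxylated metabolite).
# The cut-off values below are placeholders for demonstration.
def classify_metabolizer(metabolic_ratio: float,
                         pm_cutoff: float = 12.6,
                         um_cutoff: float = 0.2) -> str:
    if metabolic_ratio >= pm_cutoff:
        return "poor metabolizer (PM)"        # little or no functional CYP2D6
    if metabolic_ratio <= um_cutoff:
        return "ultrarapid metabolizer (UM)"  # e.g., multiple functional gene copies
    return "extensive metabolizer (EM)"       # the common phenotype

# Example: classify_metabolizer(20.0) returns "poor metabolizer (PM)"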
The role of the CYP2D6 polymorphism as a determinant of individual susceptibility to human diseases such as lung cancer and Parkinson’s disease has also been the subject of intense study (Nebert and McKinnon 1994; Meyer 1994). While conclusions are difficult to define given the diverse nature of the study protocols utilized, the majority of studies appear to indicate an association between extensive metabolizers of debrisoquine (EM phenotype) and lung cancer. The reasons for such an association are presently unclear. However, the CYP2D6 enzyme has been shown to metabolize NNK, a tobacco-derived nitrosamine.
As DNA-based assays improve—enabling even more accurate assessment of CYP2D6 status—it is anticipated that the precise relationship of CYP2D6 to disease risk will be clarified. Whereas the extensive metabolizer may be linked with susceptibility to lung cancer, the poor metabolizer (PM phenotype) appears to be associated with Parkinson’s disease of unknown cause. Whereas these studies are also difficult to compare, it appears that PM individuals having a diminished capacity to metabolize CYP2D6 substrates (e.g., debrisoquine) have a 2- to 2.5-fold increase in risk of developing Parkinson’s disease.
CYP2E1
The CYP2E1 gene encodes an enzyme that metabolizes many chemicals, including drugs and many low-molecular-weight carcinogens. This enzyme is also of interest because it is highly inducible by alcohol and may play a role in liver injury induced by chemicals such as chloroform, vinyl chloride and carbon tetrachloride. The enzyme is primarily found in the liver, and the level of enzyme varies markedly between individuals. Close scrutiny of the CYP2E1 gene has resulted in the identification of several polymorphisms (Nebert and McKinnon 1994). A relationship has been reported in some studies between the presence of certain structural variations in the CYP2E1 gene and an apparently lowered lung cancer risk; however, there are clear interethnic differences, and this possible relationship requires further clarification.
The CYP3A subfamily
In humans, four enzymes have been identified as members of the CYP3A subfamily due to their similarity in amino acid sequence. The CYP3A enzymes metabolize many commonly prescribed drugs such as erythromycin and cyclosporin. The carcinogenic food contaminant aflatoxin B1 is also a CYP3A substrate. One member of the human CYP3A subfamily, designated CYP3A4, is the principal P450 in human liver as well as being present in the gastrointestinal tract. As is true for many other P450 enzymes, the level of CYP3A4 is highly variable between individuals. A second enzyme, designated CYP3A5, is found in only approximately 25% of livers; the genetic basis of this finding has not been elucidated. The importance of CYP3A4 or CYP3A5 variability as a factor in genetic determinants of toxic response has not yet been established (Nebert and McKinnon 1994).
Non-P450 Polymorphisms
Numerous polymorphisms also exist within other xenobiotic-metabolizing enzyme superfamilies (e.g., glutathione transferases, UDP glucuronosyltransferases, para-oxonases, dehydrogenases, N-acetyltransferases and flavin-containing mono-oxygenases). Because the ultimate toxicity of any P450-generated intermediate is dependent on the efficiency of subsequent Phase II detoxification reactions, the combined role of multiple enzyme polymorphisms is important in determining susceptibility to chemically induced diseases. The metabolic balance between Phase I and Phase II reactions (figure 3) is therefore likely to be a major factor in chemically induced human diseases and genetic determinants of toxic response.
The GSTM1 gene polymorphism
A well studied example of a polymorphism in a Phase II enzyme is that involving a member of the glutathione S-transferase enzyme superfamily, designated GST mu or GSTM1. This particular enzyme is of considerable toxicological interest because it appears to be involved in the subsequent detoxification of toxic metabolites produced from chemicals in cigarette smoke by the CYP1A1 enzyme. The identified polymorphism in this glutathione transferase gene involves a total absence of functional enzyme in as many as half of all Caucasians studied. This lack of a Phase II enzyme appears to be associated with increased susceptibility to lung cancer. By grouping individuals on the basis of both variant CYP1A1 genes and the deletion or presence of a functional GSTM1 gene, it has been demonstrated that the risk of developing smoking-induced lung cancer varies significantly (Kawajiri, Watanabe and Hayashi 1994). In particular, individuals displaying one rare CYP1A1 gene alteration, in combination with an absence of the GSTM1 gene, were at higher risk (as much as ninefold) of developing lung cancer when exposed to a relatively low level of cigarette smoke. Interestingly, there appear to be interethnic differences in the significance of variant genes which necessitate further study in order to elucidate the precise role of such alterations in susceptibility to disease (Kalow 1962; Nebert and McKinnon 1994; Kawajiri, Watanabe and Hayashi 1994).
Synergistic effect of two or more polymorphisms on the toxic response
A toxic response to an environmental agent may be greatly exaggerated by the combination of two pharmacogenetic defects in the same individual, for example, the combined effects of the N-acetyltransferase (NAT2) polymorphism and the glucose-6-phosphate dehydrogenase (G6PD) polymorphism.
Occupational exposure to arylamines constitutes a grave risk of urinary bladder cancer. Since the elegant studies of Cartwright in 1954, it has become clear that N-acetylator status is a determinant of azo-dye-induced bladder cancer. There is a highly significant correlation between the slow-acetylator phenotype and the occurrence of bladder cancer, as well as the degree of invasiveness of this cancer in the bladder wall. Conversely, there is a significant association between the rapid-acetylator phenotype and the incidence of colorectal carcinoma. The N-acetyltransferase (NAT1, NAT2) genes have been cloned and sequenced, and DNA-based assays are now able to detect the more than a dozen allelic variants which account for the slow-acetylator phenotype. The NAT2 gene is polymorphic and responsible for most of the variability in toxic response to environmental chemicals (Weber 1987; Grant 1993).
Glucose-6-phosphate dehydrogenase (G6PD) is an enzyme critical in the generation and maintenance of NADPH. Low or absent G6PD activity can lead to severe drug- or xenobiotic-induced haemolysis, due to the absence of normal levels of reduced glutathione (GSH) in the red blood cell. G6PD deficiency affects at least 300 million people worldwide. More than 10% of African-American males exhibit the less severe phenotype, while certain Sardinian communities exhibit the more severe “Mediterranean type” at frequencies as high as one in every three persons. The G6PD gene has been cloned and localized to the X chromosome, and numerous diverse point mutations account for the large degree of phenotypic heterogeneity seen in G6PD-deficient individuals (Beutler 1992).
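The protective chain invoked here can be sketched with two textbook reactions (standard biochemistry, included only as an illustration): G6PD regenerates NADPH, and glutathione reductase then uses NADPH to keep glutathione in its reduced form.

\[
  \text{glucose-6-phosphate} + \mathrm{NADP^{+}}
  \xrightarrow{\;\text{G6PD}\;}
  \text{6-phosphogluconolactone} + \mathrm{NADPH} + \mathrm{H^{+}}
\]
\[
  \mathrm{GSSG} + \mathrm{NADPH} + \mathrm{H^{+}}
  \xrightarrow{\;\text{glutathione reductase}\;}
  2\,\mathrm{GSH} + \mathrm{NADP^{+}}
\]

When G6PD activity is low or absent, NADPH, and therefore GSH, cannot be regenerated quickly enough, leaving the red blood cell vulnerable to oxidative damage and haemolysis.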
Thiozalsulphone, an arylamine sulpha drug, was found to cause a bimodal distribution of haemolytic anaemia in the treated population. When treated with certain drugs, individuals with the combination of G6PD deficiency plus the slow-acetylator phenotype are more affected than those with the G6PD deficiency alone or the slow-acetylator phenotype alone. G6PD-deficient slow acetylators are at least 40 times more susceptible than normal-G6PD rapid acetylators to thiozalsulphone-induced haemolysis.
Effect of genetic polymorphisms on exposure assessment
Exposure assessment and biomonitoring (figure 1) also require information on the genetic make-up of each individual. Given identical exposure to a hazardous chemical, the level of haemoglobin adducts (or other biomarkers) might vary by two or three orders of magnitude among individuals, depending upon each person’s metabolic fingerprint.
The same combination of pharmacogenetic phenotypes has been studied in chemical factory workers in Germany (table 1). Haemoglobin adducts among workers exposed to aniline and acetanilide are by far the highest in G6PD-deficient slow acetylators, as compared with the other possible combined pharmacogenetic phenotypes. This study has important implications for exposure assessment. These data demonstrate that, although two individuals might be exposed to the same ambient level of a hazardous chemical in the workplace, the internal dose estimated from biomarkers such as haemoglobin adducts might differ by two or more orders of magnitude, depending on the underlying genetic predisposition of each individual. Likewise, the resulting risk of an adverse health effect may vary by two or more orders of magnitude.
Table 1: Haemoglobin adducts in workers exposed to aniline and acetanilide
Acetylator status    G6PD deficiency    Hgb adducts
Fast                 No                 2
Slow                 No                 30
Fast                 Yes                20
Slow                 Yes                100
Source: Adapted from Lewalter and Korallus 1985.
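As a purely illustrative sketch of how such combined-phenotype data might be applied when interpreting biomonitoring results, the short Python fragment below looks up the relative adduct level for a worker’s combined phenotype using the values of table 1 as reconstructed above; the data structure and function name are hypothetical.

# Illustrative sketch only: relative haemoglobin-adduct levels by combined
# phenotype, following table 1 as reconstructed above (adapted from Lewalter
# and Korallus 1985). Keys are (acetylator status, G6PD deficient?).
RELATIVE_ADDUCT_LEVEL = {
    ("fast", False): 2,
    ("slow", False): 30,
    ("fast", True): 20,
    ("slow", True): 100,
}

def relative_internal_dose(acetylator: str, g6pd_deficient: bool) -> float:
    """Relative adduct burden expected at the same ambient exposure level."""
    return float(RELATIVE_ADDUCT_LEVEL[(acetylator.lower(), g6pd_deficient)])

# Example: a G6PD-deficient slow acetylator compared with a normal-G6PD fast
# acetylator at the same ambient exposure:
# relative_internal_dose("slow", True) / relative_internal_dose("fast", False)  # 50.0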
Genetic differences in binding as well as metabolism
It should be emphasized that the same case made here for metabolism can also be made for binding. Heritable differences in the binding of environmental agents will greatly affect the toxic response. For example, differences in the mouse cdm gene can profoundly affect individual sensitivity to cadmium-induced testicular necrosis (Taylor, Heiniger and Meier 1973). Differences in the binding affinity of the Ah receptor are likely to affect dioxin-induced toxicity and cancer (Nebert, Petersen and Puga 1991; Nebert, Puga and Vasiliou 1993).
Figure 5 summarizes the role of metabolism and binding in toxicity and cancer. Toxic agents, as they exist in the environment or following metabolism or binding, elicit their effects by either a genotoxic pathway (in which damage to DNA occurs) or a non-genotoxic pathway (in which DNA damage and mutagenesis need not occur). Interestingly, it has recently become clear that “classical” DNA-damaging agents can also operate via a reduced glutathione (GSH)-dependent non-genotoxic signal transduction pathway, which is initiated on or near the cell surface, outside the cell nucleus and independently of DNA (Devary et al. 1993). Genetic differences in metabolism and binding remain, however, the major determinants controlling different individual toxic responses.
Figure 5. The general means by which toxicity occurs
Role of Drug-metabolizing Enzymes in Cellular Function
Genetically based variation in drug-metabolizing enzyme function is of major importance in determining individual response to chemicals. These enzymes are pivotal in determining the fate and time course of a foreign chemical following exposure.
As illustrated in figure 5, the importance of drug-metabolizing enzymes in individual susceptibility to chemical exposure may in fact present a far more complex issue than is evident from this simple discussion of xenobiotic metabolism. In other words, during the past two decades, genotoxic mechanisms (measurements of DNA adducts and protein adducts) have been greatly emphasized. However, what if nongenotoxic mechanisms are at least as important as genotoxic mechanisms in causing toxic responses?
As mentioned earlier, the physiological roles of many drug-metabolizing enzymes involved in xenobiotic metabolism have not been accurately defined. Nebert (1994) has proposed that, because of their presence on this planet for more than 3.5 billion years, drug-metabolizing enzymes were originally (and are now still primarily) responsible for regulating the cellular levels of many nonpeptide ligands important in the transcriptional activation of genes affecting growth, differentiation, apoptosis, homeostasis and neuroendocrine functions. Furthermore, the toxicity of most, if not all, environmental agents occurs by means of agonist or antagonist action on these signal transduction pathways (Nebert 1994). Based on this hypothesis, genetic variability in drug-metabolizing enzymes may have quite dramatic effects on many critical biochemical processes within the cell, thereby leading to important differences in toxic response. It is indeed possible that such a scenario may also underlie many idiosyncratic adverse reactions encountered in patients using commonly prescribed drugs.
Conclusions
The past decade has seen remarkable progress in our understanding of the genetic basis of differential response to the chemicals in drugs, foods and environmental pollutants. Drug-metabolizing enzymes have a profound influence on the way humans respond to chemicals. As our awareness of drug-metabolizing enzyme multiplicity continues to evolve, we are increasingly able to make improved assessments of toxic risk for many drugs and environmental chemicals. This is perhaps most clearly illustrated in the case of the CYP2D6 cytochrome P450 enzyme. Using relatively simple DNA-based tests, it is possible to predict the likely response to any drug predominantly metabolized by this enzyme; this prediction will help ensure the safer use of valuable, yet potentially toxic, medication.
The future will no doubt see an explosion in the identification of further polymorphisms (phenotypes) involving drug-metabolizing enzymes. This information will be accompanied by improved, minimally invasive DNA-based tests to identify genotypes in human populations.
Such studies should be particularly informative in evaluating the role of chemicals in the many environmental diseases of presently unknown origin. The consideration of multiple drug-metabolizing enzyme polymorphisms, in combination (e.g., table 1), is also likely to represent a particularly fertile research area. Such studies will clarify the role of chemicals in the causation of cancers. Collectively, this information should enable the formulation of increasingly individualized advice on avoidance of chemicals likely to be of individual concern. This is the field of preventive toxicology. Such advice will no doubt greatly assist all individuals in coping with the ever increasing chemical burden to which we are exposed.
There are often large differences among humans in the intensity of response to toxic chemicals, as well as variations in the susceptibility of an individual over a lifetime. These can be attributed to a variety of factors capable of influencing the absorption rate, distribution in the body, biotransformation and/or excretion rate of a particular chemical. Apart from the known hereditary factors which have been clearly demonstrated to be linked with increased susceptibility to chemical toxicity in humans (see “Genetic determinants of toxic response”), other factors include: constitutional characteristics related to age and sex; pre-existing disease states or a reduction in organ function (non-hereditary, i.e., acquired); dietary habits, smoking, alcohol consumption and use of medications; concomitant exposure to biotoxins (various micro-organisms) and physical factors (radiation, humidity, extremely low or high temperatures, or barometric pressures particularly relevant to the partial pressure of a gas), as well as concomitant physical exercise or psychological stress; previous occupational and/or environmental exposure to a particular chemical, and in particular concomitant exposure to other chemicals, not necessarily toxic (e.g., essential metals). The possible contributions of the aforementioned factors in either increasing or decreasing susceptibility to adverse health effects, as well as the mechanisms of their action, are specific for a particular chemical. Therefore only the most common factors, basic mechanisms and a few characteristic examples will be presented here, whereas specific information concerning each particular chemical can be found elsewhere in this Encyclopaedia.
According to the stage at which these factors act (absorption, distribution, biotransformation or excretion of a particular chemical), the mechanisms can be roughly categorized according to two basic consequences of interaction: (1) a change in the quantity of the chemical in a target organ, that is, at the site(s) of its effect in the organism (toxicokinetic interactions), or (2) a change in the intensity of a specific response to the quantity of the chemical in a target organ (toxicodynamic interactions). The most common mechanisms of either type of interaction are related to competition with other chemical(s) for binding to the same compounds involved in their transport in the organism (e.g., specific serum proteins) and/or for the same biotransformation pathway (e.g., specific enzymes), resulting in a change in the speed or sequence of events between the initial reaction and the final adverse health effect. However, both toxicokinetic and toxicodynamic interactions may influence individual susceptibility to a particular chemical. The influence of several concomitant factors can result in one of the following: (a) additive effects—the intensity of the combined effect is equal to the sum of the effects produced by each factor separately, (b) synergistic effects—the intensity of the combined effect is greater than the sum of the effects produced by each factor separately, or (c) antagonistic effects—the intensity of the combined effect is smaller than the sum of the effects produced by each factor separately.
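These three categories can be expressed in a purely illustrative way (the notation and numbers are ours, not from the source). If factor A alone raises a given response by an amount E_A and factor B alone by E_B, the combined effect E_AB is classified as

\[ E_{AB} = E_A + E_B \;(\text{additive}), \qquad E_{AB} > E_A + E_B \;(\text{synergistic}), \qquad E_{AB} < E_A + E_B \;(\text{antagonistic}). \]

For example, if the two factors alone raise the response by 2 and 3 units respectively, a combined increase of 5 units would be additive, 8 units synergistic and 3 units antagonistic.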
The quantity of a particular toxic chemical or characteristic metabolite at the site(s) of its effect in the human body can be assessed, more or less accurately, by biological monitoring, that is, by choosing the correct biological specimen and the optimal timing of specimen sampling, taking into account the biological half-lives of a particular chemical in both the critical organ and the measured biological compartment. However, reliable information concerning other possible factors that might influence individual susceptibility in humans is generally lacking, and consequently the majority of knowledge regarding the influence of various factors is based on experimental animal data.
It should be stressed that in some cases relatively large differences exist between humans and other mammals in the intensity of response to an equivalent level and/or duration of exposure to many toxic chemicals; for example, humans appear to be considerably more sensitive to the adverse health effects of several toxic metals than are rats (commonly used in experimental animal studies). Some of these differences can be attributed to the fact that the transportation, distribution and biotransformation pathways of various chemicals are greatly dependent on subtle changes in the tissue pH and the redox equilibrium in the organism (as are the activities of various enzymes), and that the redox system of the human differs considerably from that of the rat.
This is obviously the case regarding important antioxidants such as vitamin C and glutathione (GSH), which are essential for maintaining redox equilibrium and which have a protective role against the adverse effects of the oxygen- or xenobiotic-derived free radicals that are involved in a variety of pathological conditions (Kehrer 1993). Humans cannot auto-synthesize vitamin C, unlike the rat, and both the levels and the turnover rate of erythrocyte GSH in humans are considerably lower than those in the rat. Humans also lack some of the protective antioxidant enzymes, compared to the rat or other mammals (e.g., GSH-peroxidase is considered to be poorly active in human sperm). These examples illustrate the potentially greater vulnerability of humans to oxidative stress (particularly in sensitive cells, e.g., the apparently greater vulnerability of human sperm to toxic influences compared with that of the rat), which can result in a different response or greater susceptibility to the influence of various factors in humans compared to other mammals (Telišman 1995).
Influence of Age
Compared to adults, very young children are often more susceptible to chemical toxicity because of their relatively greater inhalation volumes and gastrointestinal absorption rate due to greater permeability of the intestinal epithelium, because of immature detoxification enzyme systems and because of a relatively lower excretion rate of toxic chemicals. The central nervous system appears to be particularly susceptible at the early stage of development with regard to the neurotoxicity of various chemicals, for example, lead and methylmercury. On the other hand, the elderly may be susceptible because of a history of chemical exposure and increased body stores of some xenobiotics, or pre-existing compromised function of target organs and/or relevant enzymes, resulting in a lowered detoxification and excretion rate. Each of these factors can contribute to the weakening of the body’s defences—a decrease in reserve capacity, causing increased susceptibility to subsequent exposure to other hazards. For example, the cytochrome P450 enzymes (involved in the biotransformation pathways of almost all toxic chemicals) can be either induced or depressed in activity by the influence of various factors over a lifetime (including dietary habits, smoking, alcohol, use of medications and exposure to environmental xenobiotics).
Influence of Sex
Gender-related differences in susceptibility have been described for a large number of toxic chemicals (approximately 200), and such differences are found in many mammalian species. It appears that males are generally more susceptible to renal toxins and females to liver toxins. The different responses of males and females have been attributed to differences in a variety of physiological processes (e.g., females are capable of additionally excreting some toxic chemicals through menstrual blood loss, breast milk and/or transfer to the foetus, but they experience additional stress during pregnancy, delivery and lactation), enzyme activities, genetic repair mechanisms, hormonal factors, or the presence of relatively larger fat depots in females, resulting in greater accumulation of some lipophilic toxic chemicals, such as organic solvents and some medications.
Influence of Dietary Habits
Dietary habits have an important influence on susceptibility to chemical toxicity, mostly because adequate nutrition is essential for the functioning of the body’s chemical defence system in maintaining good health. Adequate intake of essential metals (including metalloids) and proteins, especially the sulphur-containing amino acids, is necessary for the biosynthesis of various detoxifying enzymes and the provision of glycine and glutathione for conjugation reactions with endogenous and exogenous compounds. Lipids, especially phospholipids, and lipotropes (methyl group donors) are necessary for the synthesis of biological membranes. Carbohydrates provide the energy required for various detoxification processes and provide glucuronic acid for conjugation of toxic chemicals and their metabolites. Selenium (an essential metalloid), glutathione, and vitamins such as vitamin C (water soluble), vitamin E and vitamin A (lipid soluble), have an important role as antioxidants (e.g., in controlling lipid peroxidation and maintaining the integrity of cellular membranes) and free-radical scavengers for protection against toxic chemicals. In addition, various dietary constituents (protein and fibre content, minerals, phosphates, citric acid, etc.) as well as the amount of food consumed can greatly influence the gastrointestinal absorption rate of many toxic chemicals (e.g., the average absorption rate of soluble lead salts taken with meals is approximately 8%, as opposed to approximately 60% in fasting subjects). However, diet itself can be an additional source of individual exposure to various toxic chemicals (e.g., considerably increased daily intakes and accumulation of arsenic, mercury, cadmium and/or lead in subjects who consume contaminated seafood).
Influence of Smoking
The habit of smoking can influence individual susceptibility to many toxic chemicals because of the variety of possible interactions involving the great number of compounds present in cigarette smoke (especially polycyclic aromatic hydrocarbons, carbon monoxide, benzene, nicotine, acrolein, some pesticides, cadmium, and, to a lesser extent, lead and other toxic metals, etc.), some of which are capable of accumulating in the human body over a lifetime, including pre-natal life (e.g., lead and cadmium). The interactions occur mainly because various toxic chemicals compete for the same binding site(s) for transport and distribution in the organism and/or for the same biotransformation pathway involving particular enzymes. For example, several cigarette smoke constituents can induce cytochrome P450 enzymes, whereas others can depress their activity, and thus influence the common biotransformation pathways of many other toxic chemicals, such as organic solvents and some medications. Heavy cigarette smoking over a long period can considerably reduce the body’s defence mechanisms by decreasing reserve capacity to cope with the adverse influence of other life-style factors.
Influence of Alcohol
Consumption of alcohol (ethanol) can influence susceptibility to many toxic chemicals in several ways. It can influence the absorption rate and distribution of certain chemicals in the body—for example, increase the gastrointestinal absorption rate of lead, or decrease the pulmonary absorption rate of mercury vapour by inhibiting the oxidation which is necessary for retention of inhaled mercury vapour. Ethanol can also influence susceptibility to various chemicals through short-term changes in tissue pH and an increase in the redox potential resulting from ethanol metabolism, as both the oxidation of ethanol to acetaldehyde and the oxidation of acetaldehyde to acetate produce an equivalent of reduced nicotinamide adenine dinucleotide (NADH) and hydrogen (H+). Because the affinity of both essential and toxic metals and metalloids for binding to various compounds and tissues is influenced by pH and changes in the redox potential (Telišman 1995), even a moderate intake of ethanol may result in a series of consequences such as: (1) redistribution of long-term accumulated lead in the human organism in favour of a biologically active lead fraction, (2) replacement of essential zinc by lead in zinc-containing enzyme(s), thus affecting enzyme activity, or influence of mobilized lead on the distribution of other essential metals and metalloids in the organism such as calcium, iron, copper and selenium, (3) increased urinary excretion of zinc, and so on. The effect of these possible events can be augmented by the fact that alcoholic beverages can contain an appreciable amount of lead from vessels or processing (Prpic-Majic et al. 1984; Telišman et al. 1984; 1993).
Another common reason for ethanol-related changes in susceptibility is that many toxic chemicals, for example, various organic solvents, share the same biotransformation pathway involving the cytochrome P450 enzymes. Depending on the intensity of exposure to organic solvents as well as the quantity and frequency of ethanol ingestion (i.e., acute or chronic alcohol consumption), ethanol can either decrease or increase biotransformation rates of various organic solvents and thus influence their toxicity (Sato 1991).
Influence of Medications
The common use of various medications can influence susceptibility to toxic chemicals mainly because many drugs bind to serum proteins and thus influence the transport, distribution or excretion rate of various toxic chemicals, or because many drugs are capable of inducing relevant detoxifying enzymes or depressing their activity (e.g., the cytochrome P450 enzymes), thus affecting the toxicity of chemicals with the same biotransformation pathway. Characteristic examples of either mechanism are the increased urinary excretion of trichloroacetic acid (the metabolite of several chlorinated hydrocarbons) when salicylate, sulphonamide or phenylbutazone is used, and the increased hepato-nephrotoxicity of carbon tetrachloride when phenobarbital is used. In addition, some medications contain a considerable amount of a potentially toxic chemical, for example, the aluminium-containing antacids or preparations used for the therapeutic management of the hyperphosphataemia arising in chronic renal failure.
Influence of Concomitant Exposure to Other Chemicals
The changes in susceptibility to adverse health effects due to interaction of various chemicals (i.e., possible additive, synergistic or antagonistic effects) have been studied almost exclusively in experimental animals, mostly in the rat. Relevant epidemiological and clinical studies are lacking. This is of particular concern considering the relatively greater intensity of response, or the greater variety of adverse health effects, of several toxic chemicals in humans compared to the rat and other mammals. Apart from published data in the field of pharmacology, most data are related only to combinations of two different chemicals within specific groups, such as various pesticides, organic solvents, or essential and/or toxic metals and metalloids.
Combined exposure to various organic solvents can result in various additive, synergistic or antagonistic effects (depending on the particular combination of organic solvents and the intensity and duration of exposure), mainly because of their capability of influencing each other’s biotransformation (Sato 1991).
Another characteristic example is the interaction of essential and/or toxic metals and metalloids, as these are involved in the possible influence of age (e.g., a lifetime body accumulation of environmental lead and cadmium), sex (e.g., common iron deficiency in women), dietary habits (e.g., increased dietary intake of toxic metals and metalloids and/or deficient dietary intake of essential metals and metalloids), smoking habit and alcohol consumption (e.g., additional exposure to cadmium, lead and other toxic metals), and use of medications (e.g., a single dose of antacid can result in a 50-fold increase in the average daily intake of aluminium through food). The possibility of various additive, synergistic or antagonistic effects of exposure to various metals and metalloids in humans can be illustrated by basic examples related to the main toxic elements (see table 1), apart from which further interactions may occur because essential elements can also influence one another (e.g., the well-known antagonistic effect of copper on the gastrointestinal absorption rate as well as the metabolism of zinc, and vice versa). The main cause of all these interactions is the competition of various metals and metalloids for the same binding site (especially the sulphydryl group, -SH) in various enzymes, metalloproteins (especially metallothionein) and tissues (e.g., cell membranes and organ barriers). These interactions may have a relevant role in the development of several chronic diseases which are mediated through the action of free radicals and oxidative stress (Telišman 1995).
Table 1. Basic effects of possible multiple interactions concerning the main toxic and/or essential metals and metalloids in mammals
Toxic metal or metalloid | Basic effects of the interaction with other metal or metalloid |
Aluminium (Al) | Decreases the absorption rate of Ca and impairs the metabolism of Ca; deficient dietary Ca increases the absorption rate of Al. Impairs phosphate metabolism. Data on interactions with Fe, Zn and Cu are equivocal (i.e., the possible role of another metal as a mediator). |
Arsenic (As) | Affects the distribution of Cu (an increase of Cu in the kidney, and a decrease of Cu in the liver, serum and urine). Impairs the metabolism of Fe (an increase of Fe in the liver with concomitant decrease in haematocrit). Zn decreases the absorption rate of inorganic As and decreases the toxicity of As. Se decreases the toxicity of As and vice versa. |
Cadmium (Cd) | Decreases the absorption rate of Ca and impairs the metabolism of Ca; deficient dietary Ca increases the absorption rate of Cd. Impairs the phosphate metabolism, i.e., increases urinary excretion of phosphates. Impairs the metabolism of Fe; deficient dietary Fe increases the absorption rate of Cd. Affects the distribution of Zn; Zn decreases the toxicity of Cd, whereas its influence on the absorption rate of Cd is equivocal. Se decreases the toxicity of Cd. Mn decreases the toxicity of Cd at low-level exposure to Cd. Data on the interaction with Cu are equivocal (i.e., the possible role of Zn, or another metal, as a mediator). High dietary levels of Pb, Ni, Sr, Mg or Cr(III) can decrease the absorption rate of Cd. |
Mercury (Hg) | Affects the distribution of Cu (an increase of Cu in the liver). Zn decreases the absorption rate of inorganic Hg and decreases the toxicity of Hg. Se decreases the toxicity of Hg. Cd increases the concentration of Hg in the kidney, but at the same time decreases the toxicity of Hg in the kidney (the influence of the Cd-induced metallothionein synthesis). |
Lead (Pb) | Impairs the metabolism of Ca; deficient dietary Ca increases the absorption rate of inorganic Pb and increases the toxicity of Pb. Impairs the metabolism of Fe; deficient dietary Fe increases the toxicity of Pb, whereas its influence on the absorption rate of Pb is equivocal. Impairs the metabolism of Zn and increases urinary excretion of Zn; deficient dietary Zn increases the absorption rate of inorganic Pb and increases the toxicity of Pb. Se decreases the toxicity of Pb. Data on interactions with Cu and Mg are equivocal (i.e., the possible role of Zn, or another metal, as a mediator). |
Note: Data are mostly related to experimental studies in the rat, whereas relevant clinical and epidemiological data (particularly regarding quantitative dose-response relationships) are generally lacking (Elsenhans et al. 1991; Fergusson 1990; Telišman et al. 1993).
The priority objective of occupational and environmental toxicology is to improve the prevention or substantial limitation of health effects of exposure to hazardous agents in the general and occupational environments. To this end systems have been developed for quantitative risk assessment related to a given exposure (see the section “Regulatory toxicology”).
The effects of a chemical on particular systems and organs are related to the magnitude of exposure and whether exposure is acute or chronic. In view of the diversity of toxic effects even within one system or organ, a uniform philosophy concerning the critical organ and critical effect has been proposed for the purpose of risk assessment and development of health-based recommended concentration limits of toxic substances in different environmental media.
From the point of view of preventive medicine, it is of particular importance to identify early adverse effects, based on the general assumption that preventing or limiting early effects may prevent more severe health effects from developing.
Such an approach has been applied to heavy metals. Although heavy metals, such as lead, cadmium and mercury, belong to a specific group of toxic substances whose chronic effects depend on their accumulation in the organs, the definitions presented below were published by the Task Group on Metal Toxicity (Nordberg 1976).
The definition of the critical organ as proposed by the Task Group on Metal Toxicity has been adopted with a slight modification: the word metal has been replaced with the expression potentially toxic substance (Duffus 1993).
Whether a given organ or system is regarded as critical depends not only on the toxicomechanics of the hazardous agent but also on the route of absorption and the exposed population.
The biological meaning of a subcritical effect is sometimes not known; it may represent a biomarker of exposure, an index of adaptation or a precursor of the critical effect (see “Toxicology test methods: Biomarkers”). The latter possibility can be particularly significant in view of prophylactic activities.
Table 1 displays examples of critical organs and effects for different chemicals. In chronic environmental exposure to cadmium, where the route of absorption is of minor importance (cadmium air concentrations range from 10 to 20 μg/m3 in the urban and 1 to 2 μg/m3 in the rural areas), the critical organ is the kidney. In the occupational setting, where the TLV reaches 50 μg/m3 and inhalation constitutes the main route of exposure, two organs, lung and kidney, are regarded as critical.
Table 1. Examples of critical organs and critical effects
Substance | Critical organ in chronic exposure | Critical effect |
Cadmium | Lungs | Nonthreshold: Lung cancer (unit risk 4.6 × 10⁻³) |
 | Kidney | Threshold: Increased excretion of low-molecular-weight proteins (β2-M, RBP) in urine |
 | Lungs | Emphysema, slight function changes |
Lead | Adults: Haematopoietic system | Increased delta-aminolevulinic acid excretion in urine (ALA-U); increased concentration of free erythrocyte protoporphyrin (FEP) in erythrocytes |
 | Peripheral nervous system | Slowing of the conduction velocities of the slower nerve fibres |
 | Young children: Central nervous system | Decrease in IQ and other subtle effects |
Mercury (elemental) | Central nervous system | Mercurial tremor (fingers, lips, eyelids) |
Mercury (mercuric) | Kidney | Proteinuria |
Manganese | Adults: Central nervous system | Impairment of psychomotor functions |
 | Children: Lungs | Respiratory symptoms |
 | Central nervous system | Impairment of psychomotor functions |
Toluene | Mucous membranes | Irritation |
Vinyl chloride | Liver | Cancer (angiosarcoma; unit risk 1 × 10⁻⁶) |
Ethyl acetate | Mucous membrane | Irritation |
For lead, the critical organs in adults are the haematopoietic and peripheral nervous systems, where the critical effects (e.g., elevated free erythrocyte protoporphyrin concentration (FEP), increased excretion of delta-aminolevulinic acid in urine, or impaired peripheral nerve conduction) manifest when the blood lead level (an index of lead absorption in the system) approaches 200 to 300 μg/l. In small children the critical organ is the central nervous system (CNS), and the symptoms of dysfunction detected with the use of a psychological test battery have been found to appear in the examined populations even at concentrations in the range of about 100 μg/l Pb in blood.
A number of other definitions have been formulated which may better reflect the meaning of the notion. According to WHO (1989), the critical effect has been defined as “the first adverse effect which appears when the threshold (critical) concentration or dose is reached in the critical organ. Adverse effects, such as cancer, with no defined threshold concentration are often regarded as critical. Decision on whether an effect is critical is a matter of expert judgement.” In the International Programme on Chemical Safety (IPCS) guidelines for developing Environmental Health Criteria Documents, the critical effect is described as “the adverse effect judged to be most appropriate for determining the tolerable intake”. The latter definition has been formulated directly for the purpose of evaluating the health-based exposure limits in the general environment. In this context the most essential seems to be determining which effect can be regarded as an adverse effect. Following current terminology, the adverse effect is the “change in morphology, physiology, growth, development or lifespan of an organism which results in impairment of the capacity to compensate for additional stress or increase in susceptibility to the harmful effects of other environmental influences. Decision on whether or not any effect is adverse requires expert judgement.”
Figure 1 displays hypothetical dose-response curves for different effects. In the case of exposure to lead, A can represent a subcritical effect (inhibition of erythrocyte ALA-dehydratase), B the critical effect (an increase in erythrocyte zinc protoporphyrin or an increase in the excretion of delta-aminolevulinic acid), C the clinical effect (anaemia) and D the fatal effect (death). For lead exposure there is abundant evidence illustrating how particular effects of exposure are dependent on lead concentration in blood (the practical counterpart of the dose), either in the form of the dose-response relationship or in relation to different variables (sex, age, etc.). Determining the critical effects and the dose-response relationship for such effects in humans makes it possible to predict the frequency of a given effect for a given dose or its counterpart (concentration in biological material) in a certain population.
Figure 1. Hypothetical dose-response curves for various effects
The critical effects can be of two types: those considered to have a threshold and those for which there may be some risk at any exposure level (non-threshold, genotoxic carcinogens and germ mutagens). Whenever possible, appropriate human data should be used as a basis for the risk assessment. In order to determine the threshold effects for the general population, assumptions concerning the exposure level (tolerable intake, biomarkers of exposure) have to be made such that the frequency of the critical effect in the population exposed to a given hazardous agent corresponds to the frequency of that effect in the general population. In lead exposure, the maximum recommended blood lead concentration for the general population (200 μg/l, median below 100 μg/l) (WHO 1987) is practically below the threshold value for the assumed critical effect—the elevated free erythrocyte protoporphyrin level, although it is not below the level associated with effects on the CNS in children or blood pressure in adults. In general, if data from well-conducted human population studies defining a no observed adverse effect level are the basis for safety evaluation, then an uncertainty factor of ten has been considered appropriate. In the case of occupational exposure the critical effects may refer to a certain part of the population (e.g., 10%). Accordingly, in occupational lead exposure the recommended health-based concentration of blood lead has been adopted as 400 μg/l in men, where a 10% response level for ALA-U of 5 mg/l occurred at PbB concentrations of about 300 to 400 μg/l. For occupational exposure to cadmium (assuming the increased urinary excretion of low-molecular-weight proteins to be the critical effect), a level of 200 ppm cadmium in the renal cortex has been regarded as the admissible value, as this effect has been observed in 10% of the exposed population. Both these values are under consideration for lowering, in many countries, at the present time (i.e., 1996).
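The arithmetic behind a threshold-based limit can be sketched as follows (a standard formulation with hypothetical numbers, not values taken from this text):

\[ \text{Tolerable intake} = \frac{\text{NOAEL}}{\text{UF}}, \]

where NOAEL is the no observed adverse effect level derived from well-conducted human studies and UF is the uncertainty factor. With the factor of ten mentioned above, a hypothetical NOAEL of 1 mg/kg body weight per day would correspond to a tolerable intake of 0.1 mg/kg body weight per day.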
There is no clear consensus on the appropriate methodology for the risk assessment of chemicals for which the critical effect may not have a threshold, such as genotoxic carcinogens. A number of approaches based largely on characterization of the dose-response relationship have been adopted for the assessment of such effects. Owing to the lack of socio-political acceptance of health risk caused by carcinogens, in such documents as the Air Quality Guidelines for Europe (WHO 1987) only values such as the unit lifetime risk (i.e., the risk associated with lifetime exposure to 1 μg/m3 of the hazardous agent) are presented for non-threshold effects (see “Regulatory toxicology”).
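As an illustration of how a unit lifetime risk is used (the unit risk value is the one quoted for cadmium in table 1; the arithmetic is ours): with a unit risk of 4.6 × 10⁻³ per μg/m3, lifetime exposure to an air concentration C (in μg/m3) corresponds to an excess lifetime risk of approximately

\[ R \approx 4.6 \times 10^{-3} \times C, \]

so that lifetime exposure to, say, 0.01 μg/m3 would correspond to an excess risk of about 4.6 × 10⁻⁵, that is, roughly 5 additional cases per 100,000 exposed persons.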
Presently, the basic step in undertaking activities for risk assessment is determining the critical organ and critical effects. The definitions of both the critical and the adverse effect reflect the responsibility of deciding which of the effects within a given organ or system should be regarded as critical, and this is directly related to the subsequent determination of recommended values for a given chemical in the general environment—for example, the Air Quality Guidelines for Europe (WHO 1987) or health-based limits in occupational exposure (WHO 1980). Determining the critical effect from within the range of subcritical effects may lead to a situation where the recommended limits on toxic chemical concentrations in the general or occupational environment are in practice impossible to maintain. Regarding as critical an effect that may overlap the early clinical effects may bring about the adoption of values at which adverse effects may develop in some part of the population. The decision whether or not a given effect should be considered critical remains the responsibility of expert groups who specialize in toxicity and risk assessment.
The human organism represents a complex biological system on various levels of organization, from the molecular-cellular level to the tissues and organs. The organism is an open system, exchanging matter and energy with the environment through numerous biochemical reactions in a dynamic equilibrium. The environment can be polluted, or contaminated with various toxicants.
Penetration of molecules or ions of toxicants from the work or living environment into such a strongly coordinated biological system can reversibly or irreversibly disturb normal cellular biochemical processes, or even injure and destroy the cell (see “Cellular injury and cellular death”).
Penetration of a toxicant from the environment to the sites of its toxic effect inside the organism can be divided into three phases: the exposure phase, the toxicokinetic phase and the toxicodynamic phase.
Here we will focus our attention exclusively on the toxicokinetic processes inside the human organism following exposure to toxicants in the environment.
The molecules or ions of toxicants present in the environment will penetrate into the organism through the skin and mucosa, or the epithelial cells of the respiratory and gastrointestinal tracts, depending on the point of entry. That means molecules and ions of toxicants must penetrate through cellular membranes of these biological systems, as well as through an intricate system of endomembranes inside the cell.
All toxicokinetic and toxicodynamic processes occur on the molecular-cellular level. Numerous factors influence these processes, and these can be divided into two basic groups: the physico-chemical properties of the toxicants themselves, and the structure and properties of the biological membranes and cells through which they pass.
Physico-Chemical Properties of Toxicants
In 1854 the Russian toxicologist E.V. Pelikan started studies on the relation between the chemical structure of a substance and its biological activity—the structure-activity relationship (SAR). Chemical structure directly determines physico-chemical properties, some of which are responsible for biological activity.
To define the chemical structure numerous parameters can be selected as descriptors, which can be divided into various groups:
1. Physico-chemical:
2. Steric: molecular volume, shape and surface area, substructure shape, molecular reactivity, etc.
3. Structural: number of bonds, number of rings (in polycyclic compounds), extent of branching, etc.
For each toxicant it is necessary to select a set of descriptors related to a particular mechanism of activity. However, from the toxicokinetic point of view two parameters are of general importance for all toxicants:
For inhaled dusts and aerosols, the particle size, shape, surface area and density also influence their toxicokinetics and toxicodynamics.
Structure and Properties of Membranes
The eukaryotic cell of human and animal organisms is encircled by a cytoplasmic membrane regulating the transport of substances and maintaining cell homeostasis. The cell organelles (nucleus, mitochondria) possess membranes too. The cell cytoplasm is compartmentalized by intricate membranous structures, the endoplasmic reticulum and Golgi complex (endomembranes). All these membranes are structurally alike, but vary in the content of lipids and proteins.
The structural framework of membranes is a bilayer of lipid molecules (phospholipids, sphingolipids, cholesterol). The backbone of a phospholipid molecule is glycerol, with two of its -OH groups esterified by aliphatic fatty acids with 16 to 18 carbon atoms, and the third group esterified by a phosphate group and a nitrogenous compound (choline, ethanolamine, serine). In sphingolipids, sphingosine is the base.
The lipid molecule is amphipathic because it consists of a polar hydrophilic “head” (amino alcohol, phosphate, glycerol) and a non-polar twin “tail” (fatty acids). The lipid bilayer is arranged so that the hydrophilic heads constitute the outer and inner surfaces of the membrane and the lipophilic tails are stretched toward the membrane interior, which contains water, various ions and molecules.
Proteins and glycoproteins are inserted into the lipid bilayer (intrinsic proteins) or attached to the membrane surface (extrinsic proteins). These proteins contribute to the structural integrity of the membrane, but they may also perform as enzymes, carriers, pore walls or receptors.
The membrane represents a dynamic structure which can be disintegrated and rebuilt with a different proportion of lipids and proteins, according to functional needs.
Regulation of transport of substances into and out of the cell represents one of the basic functions of outer and inner membranes.
Some lipophilic molecules pass directly through the lipid bilayer. Hydrophilic molecules and ions are transported via pores. Membranes respond to changing conditions by opening or sealing certain pores of various sizes.
The following processes and mechanisms are involved in the transport of substances, including toxicants, through membranes:
Active processes:
Diffusion
This represents the movement of molecules and ions through the lipid bilayer or pores from a region of high concentration, or high electric potential, to a region of low concentration or potential (“downhill”). The difference in concentration or electric charge is the driving force determining the intensity of the flux in both directions. In the equilibrium state, influx will be equal to efflux. The rate of diffusion follows Fick’s law, which states that it is directly proportional to the available membrane surface, the concentration (or charge) difference across the membrane and the characteristic diffusion coefficient, and inversely proportional to the membrane thickness.
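In symbols, Fick’s law can be written as follows (a standard statement added here for clarity, not a formula quoted from the source):

\[ J = \frac{D \, A \, (C_1 - C_2)}{d}, \]

where J is the rate of diffusion across the membrane, D the diffusion coefficient characteristic of the toxicant and the membrane, A the available membrane surface area, C1 − C2 the concentration (or potential) difference across the membrane and d the membrane thickness.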
Small lipophilic molecules pass easily through the lipid layer of membrane, according to the Nernst partition coefficient.
Large lipophilic molecules, water-soluble molecules and ions will use aqueous pore channels for their passage. Size and stereoconfiguration will influence the passage of molecules. For ions, besides size, the type of charge will be decisive. The protein molecules of the pore walls can gain a positive or negative charge. Narrow pores tend to be selective—negatively charged ligands will allow passage only for cations, and positively charged ligands will allow passage only for anions. With increasing pore diameter, hydrodynamic flow becomes dominant, allowing free passage of ions and molecules according to Poiseuille’s law. This filtration is a consequence of the osmotic gradient. In some cases ions can penetrate through specific complex molecules—ionophores—which are produced by micro-organisms and have antibiotic effects (nonactin, valinomycin, gramicidin, etc.).
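For completeness, the hydrodynamic flow through such wide pores follows the standard Poiseuille relation (added here for illustration, not quoted from the source):

\[ Q = \frac{\pi r^4 \, \Delta P}{8 \eta l}, \]

where Q is the volume flow, r the pore radius, ΔP the pressure difference across the pore, η the viscosity of the fluid and l the pore length; the fourth-power dependence on pore radius explains why bulk flow dominates over diffusion once the pores become wide.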
Facilitated or catalyzed diffusion
This requires the presence of a carrier in the membrane, usually a protein molecule (permease). The carrier selectively binds substances, resembling a substrate-enzyme complex. Similar molecules (including toxicants) can compete for the specific carrier until its saturation point is reached. Toxicants can compete for the carrier, and when they are irreversibly bound to it the transport is blocked. The rate of transport is characteristic for each type of carrier. If transport is performed in both directions, it is called exchange diffusion.
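Because the carrier can be saturated, the transport rate is often sketched with a Michaelis-Menten-type expression (an assumption commonly made for carrier-mediated transport, not a formula given in this text):

\[ v = \frac{v_{\max} \, C}{K_m + C}, \]

where v is the transport rate at toxicant concentration C, v_max the maximal rate reached when all carrier molecules are occupied and K_m the concentration at which half the maximal rate is attained; a competing substance effectively raises the apparent K_m for the physiological substrate.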
Active transport
For transport of some substances vital for the cell, a special type of carrier is used, transporting against the concentration gradient or electric potential (“uphill”). The carrier is very stereospecific and can be saturated.
For uphill transport, energy is required. The necessary energy is obtained by catalytic cleavage of ATP molecules to ADP by the enzyme adenosine triphosphatase (ATP-ase).
Toxicants can interfere with this transport by competitive or non-competitive inhibition of the carrier or by inhibition of ATP-ase activity.
Endocytosis
Endocytosis is defined as a transport mechanism in which the cell membrane encircles material by enfolding to form a vesicle transporting it through the cell. When the material is liquid, the process is termed pinocytosis. In some cases the material is bound to a receptor and this complex is transported by a membrane vesicle. This type of transport is especially used by epithelial cells of the gastrointestinal tract, and cells of the liver and kidneys.
Absorption of Toxicants
People are exposed to numerous toxicants present in the work and living environment, which can penetrate into the human organism by three main portals of entry: the respiratory tract (inhalation), the gastrointestinal tract (ingestion) and the skin (dermal absorption).
In the case of exposure in industry, inhalation represents the dominant route of entry of toxicants, followed by dermal penetration. In agriculture, exposure to pesticides via dermal absorption is almost as frequent as exposure via combined inhalation and dermal penetration. The general population is mostly exposed by ingestion of contaminated food, water and beverages, then by inhalation and less often by dermal penetration.
Absorption via the respiratory tract
Absorption in the lungs represents the main route of uptake for numerous airborne toxicants (gases, vapours, fumes, mists, smokes, dusts, aerosols, etc.).
The respiratory tract (RT) represents an ideal gas-exchange system possessing a membrane with a surface of 30 m2 (expiration) to 100 m2 (deep inspiration), behind which a network of about 2,000 km of capillaries is located. The system, developed through evolution, is accommodated into a relatively small space (chest cavity) protected by ribs.
Anatomically and physiologically the RT can be divided into three compartments: the nasopharyngeal (NP) region, the tracheobronchial (TB) region and the pulmonary (alveolar) compartment.
Hydrophilic toxicants are easily absorbed by the epithelium of the nasopharyngeal region. The whole epithelium of the NP and TB regions is covered by a film of water. Lipophilic toxicants are partially absorbed in the NP and TB regions, but mostly in the alveoli by diffusion through the alveolo-capillary membranes. The absorption rate depends on lung ventilation, cardiac output (blood flow through the lungs), solubility of the toxicant in blood and its metabolic rate.
In the alveoli, gas exchange is carried out. The alveolar wall is made up of an epithelium, an interstitial framework of basement membrane, connective tissue and the capillary endothelium. The diffusion of toxicants is very rapid through these layers, which have a thickness of about 0.8 μm. In alveoli, toxicant is transferred from the air phase into the liquid phase (blood). The rate of absorption (air to blood distribution) of a toxicant depends on its concentration in alveolar air and the Nernst partition coefficient for blood (solubility coefficient).
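At equilibrium this relation can be written as follows (notation ours, for illustration):

\[ C_{\text{blood}} = \lambda \, C_{\text{alveolar air}}, \]

where λ is the blood:air (Nernst) partition, or solubility, coefficient of the toxicant; the higher the value of λ, the larger the amount of toxicant the blood can take up at a given alveolar concentration.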
In the blood the toxicant can be dissolved in the liquid phase by simple physical processes or bound to the blood cells and/or plasma constituents according to chemical affinity or by adsorption. The water content of blood is 75% and, therefore, hydrophilic gases and vapours show a high solubility in plasma (e.g., alcohols). Lipophilic toxicants (e.g., benzene) are usually bound to cells or macromolecules such as albumin.
From the very beginning of exposure in the lungs, two opposite processes are occurring: absorption and desorption. The equilibrium between these processes depends on the concentration of toxicant in alveolar air and blood. At the onset of exposure the toxicant concentration in the blood is 0 and retention in blood is almost 100%. With continuation of exposure, an equilibrium between absorption and desorption is attained. Hydrophilic toxicants will rapidly attain equilibrium, and the rate of absorption depends on pulmonary ventilation rather than on blood flow. Lipophilic toxicants need a longer time to achieve equilibrium, and here the flow of unsaturated blood governs the rate of absorption.
Deposition of particles and aerosols in the RT depends on physical and physiological factors, as well as particle size. In short, the smaller the particle the deeper it will penetrate into the RT.
Relatively constant low retention of dust particles in the lungs of persons who are highly exposed (e.g., miners) suggests the existence of a very efficient system for the clearance of particles. In the upper part of the RT (tracheo-bronchial) a mucociliary blanket performs the clearance. In the pulmonary part, three different mechanisms are at work: (1) the mucociliary blanket, (2) phagocytosis and (3) direct penetration of particles through the alveolar wall.
The first 17 of the 23 branchings of the tracheo-bronchial tree possess ciliated epithelial cells. By their strokes these cilia constantly move a mucous blanket toward the mouth. Particles deposited on this mucociliary blanket will be swallowed in the mouth (ingestion). A mucous blanket also covers the surface of the alveolar epithelium, moving toward the mucociliary blanket. Additionally, the specialized moving cells—phagocytes—engulf particles and micro-organisms in the alveoli and migrate in two possible directions: toward the mucociliary blanket, from which the engulfed material is cleared toward the mouth, or through the alveolar wall into the lymphatic system (lymph capillaries and lymph nodes).
Absorption via gastrointestinal tract
Toxicants can be ingested in the case of accidental swallowing, intake of contaminated food and drinks, or swallowing of particles cleared from the RT.
The entire alimentary channel, from oesophagus to anus, is basically built in the same way. A mucous layer (epithelium) is supported by connective tissue and then by a network of capillaries and smooth muscle. The surface epithelium of the stomach is very wrinkled to increase the absorption/secretion surface area. The intestinal area contains numerous small projections (villi), which are able to absorb material by “pumping in”. The active area for absorption in the intestines is about 100 m2.
In the gastrointestinal tract (GIT) all absorption processes are very active:
Some toxic metal ions use specialized transport systems for essential elements: thallium, cobalt and manganese use the iron system, while lead appears to use the calcium system.
Many factors influence the rate of absorption of toxicants in various parts of the GIT:
It is also necessary to mention the enterohepatic circulation. Polar toxicants and/or metabolites (glucuronides and other conjugates) are excreted with the bile into the duodenum. Here the enzymes of the microflora perform hydrolysis and liberated products can be reabsorbed and transported by the portal vein into the liver. This mechanism is very dangerous in the case of hepatotoxic substances, enabling their temporary accumulation in the liver.
In the case of toxicants biotransformed in the liver to less toxic or non-toxic metabolites, ingestion may represent a less dangerous portal of entry. After absorption in the GIT these toxicants will be transported by the portal vein to the liver, and there they can be partially detoxified by biotransformation.
Absorption through the skin (dermal, percutaneous)
The skin (1.8 m2 of surface in a human adult) together with the mucous membranes of the body orifices, covers the surface of the body. It represents a barrier against physical, chemical and biological agents, maintaining the body integrity and homeostasis and performing many other physiological tasks.
Basically the skin consists of three layers: epidermis, true skin (dermis) and subcutaneous tissue (hypodermis). From the toxicological point of view the epidermis is of most interest here. It is built of many layers of cells. A horny surface of flattened, dead cells (stratum corneum) is the top layer, under which a continuous layer of living cells (stratum corneum compactum) is located, followed by a typical lipid membrane, and then by the stratum lucidum, stratum granulosum and stratum mucosum. The lipid membrane represents a protective barrier, but in hairy parts of the skin, both hair follicles and sweat gland channels penetrate through it. Therefore, dermal absorption can occur by the following mechanisms:
The rate of absorption through the skin will depend on many factors:
Transport of Toxicants by Blood and Lymph
After absorption by any of these portals of entry, toxicants will reach the blood, lymph or other body fluids. The blood represents the major vehicle for transport of toxicants and their metabolites.
Blood is a fluid circulating organ, transporting necessary oxygen and vital substances to the cells and removing waste products of metabolism. Blood also contains cellular components, hormones and other molecules involved in many physiological functions. Blood flows inside a relatively well closed, high-pressure circulatory system of blood vessels, pushed by the activity of the heart. Due to the high pressure, leakage of fluid into the tissues occurs. The lymphatic system represents the drainage system, in the form of a fine mesh of small, thin-walled lymph capillaries branching through the soft tissues and organs.
Blood is a mixture of a liquid phase (plasma, 55%) and solid blood cells (45%). Plasma contains proteins (albumins, globulins, fibrinogen), organic acids (lactic, glutamic, citric) and many other substances (lipids, lipoproteins, glycoproteins, enzymes, salts, xenobiotics, etc.). Blood cell elements include erythrocytes (Er), leukocytes, reticulocytes, monocytes, and platelets.
Toxicants are absorbed as molecules and ions. Some toxicants at blood pH form colloid particles as a third form in this liquid. Molecules, ions and colloids of toxicants have various possibilities for transport in blood:
Most of the toxicants in blood exist partially in a free state in plasma and partially bound to erythrocytes and plasma constituents. The distribution depends on the affinity of toxicants to these constituents. All fractions are in a dynamic equilibrium.
Some toxicants are transported by the blood elements—mostly by erythrocytes, very rarely by leukocytes. Toxicants can be adsorbed on the surface of Er, or can bind to the ligands of stroma. If they penetrate into Er they can bind to the haem (e.g. carbon monoxide and selenium) or to the globin (Sb111, Po210). Some toxicants transported by Er are arsenic, cesium, thorium, radon, lead and sodium. Hexavalent chromium is exclusively bound to the Er and trivalent chromium to the proteins of plasma. For zinc, competition between Er and plasma occurs. About 96% of lead is transported by Er. Organic mercury is mostly bound to Er and inorganic mercury is carried mostly by plasma albumin. Small fractions of beryllium, copper, tellurium and uranium are carried by Er.
The majority of toxicants are transported by plasma or plasma proteins. Many electrolytes are present as ions in an equilibrium with non-dissociated molecules free or bound to the plasma fractions. This ionic fraction of toxicants is very diffusible, penetrating through the walls of capillaries into tissues and organs. Gases and vapours can be dissolved in the plasma.
Plasma proteins possess a total surface area of about 600 to 800 km2 offered for absorption of toxicants. Albumin molecules possess about 109 cationic and 120 anionic ligands at the disposal of ions. Many ions are partially carried by albumin (e.g., copper, zinc and cadmium), as are such compounds as dinitro- and ortho-cresols, nitro- and halogenated derivatives of aromatic hydrocarbons, and phenols.
Globulin molecules (alpha and beta) transport small molecules of toxicants as well as some metallic ions (copper, zinc and iron) and colloid particles. Fibrinogen shows affinity for certain small molecules. Many types of bonds can be involved in binding of toxicants to plasma proteins: Van der Waals forces, attraction of charges, association between polar and non-polar groups, hydrogen bridges, covalent bonds.
Plasma lipoproteins transport lipophilic toxicants such as PCBs. The other plasma fractions serve as a transport vehicle too. The affinity of toxicants for plasma proteins suggests their affinity for proteins in tissues and organs during distribution.
Organic acids (lactic, glutamic, citric) form complexes with some toxicants. Alkaline earths and rare earths, as well as some heavy elements in the form of cations, are also complexed with organic oxy- and amino acids. All these complexes are usually diffusible and easily distributed in tissues and organs.
Physiological chelating agents in plasma, such as transferrin and metallothionein, compete with organic acids and amino acids for cations to form stable chelates.
Diffusible free ions, some complexes and some free molecules are easily cleared from the blood into tissues and organs. The free fraction of ions and molecules is in a dynamic equilibrium with the bound fraction. The concentration of a toxicant in blood will govern the rate of its distribution into tissues and organs, or its mobilization from them into the blood.
Distribution of Toxicants in the Organism
The human organism can be divided into the following compartments: (1) internal organs, (2) skin and muscles, (3) adipose tissues, (4) connective tissue and bones. This classification is mostly based on the degree of vascular (blood) perfusion, in decreasing order. For example, the internal organs (including the brain), which represent only 12% of the total body weight, receive about 75% of the total blood volume. On the other hand, connective tissues and bones (15% of the total body weight) receive only about 1% of the total blood volume.
The well-perfused internal organs generally achieve the highest concentration of toxicants in the shortest time, as well as an equilibrium between blood and this compartment. The uptake of toxicants by less perfused tissues is much slower, but retention is higher and duration of stay much longer (accumulation) due to low perfusion.
Three components are of major importance for the intracellular distribution of toxicants: content of water, lipids and proteins in the cells of various tissues and organs. The above-mentioned order of compartments also follows closely a decreasing water content in their cells. Hydrophilic toxicants will be more rapidly distributed to the body fluids and cells with high water content, and lipophilic toxicants to cells with higher lipid content (fatty tissue).
The organism possesses some barriers which impair penetration of some groups of toxicants, mostly hydrophilic, to certain organs and tissues, such as:
As previously noted, only the free forms of toxicants in plasma (molecules, ions, colloids) are available to penetrate the capillary walls and thus participate in distribution. This free fraction is in a dynamic equilibrium with the bound fraction. The concentration of toxicants in blood is in a dynamic equilibrium with their concentration in organs and tissues, governing retention (accumulation) in, or mobilization from, them.
The condition of the organism, functional state of organs (especially neuro-humoral regulation), hormonal balance and other factors play a role in distribution.
Retention of a toxicant in a particular compartment is generally temporary, and redistribution into other tissues can occur. Retention and accumulation are based on the difference between the rates of absorption and elimination. The duration of retention in a compartment is expressed by the biological half-life. This is the time interval in which 50% of the toxicant is cleared from the tissue or organ and redistributed, translocated or eliminated from the organism.
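Where clearance from a compartment is approximately first order, this can be written as follows (a standard kinetic sketch, not a formula given in the source):

\[ C(t) = C_0 \, e^{-kt}, \qquad t_{1/2} = \frac{\ln 2}{k}, \]

where C_0 is the initial concentration of the toxicant in the compartment, k the elimination rate constant and t_{1/2} the biological half-life; after one half-life 50% of the toxicant remains, after two half-lives 25%, and so on.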
Biotransformation processes occur during distribution and retention in various organs and tissues. Biotransformation produces more polar, more hydrophilic metabolites, which are more easily eliminated. A low rate of biotransformation of a lipophilic toxicant will generally cause its accumulation in a compartment.
The toxicants can be divided into four main groups according to their affinity for, and predominant retention and accumulation in, a particular compartment: lipid-rich tissues, the reticuloendothelial system, bone, and hair and nails, as described in the sections below.
Accumulation in lipid-rich tissues
The “standard man” of 70 kg body weight contains about 15% of body weight in the form of adipose tissue, increasing with obesity to 50%. However, this lipid fraction is not uniformly distributed. The brain (CNS) is a lipid-rich organ, and peripheral nerves are wrapped with a lipid-rich myelin sheath and Schwann cells. All these tissues offer possibilities for accumulation of lipophilic toxicants.
Numerous non-electrolytes and non-polar toxicants with a suitable Nernst partition coefficient will be distributed to this compartment, as well as numerous organic solvents (alcohols, aldehydes, ketones, etc.), chlorinated hydrocarbons (including organochlorine insecticides such as DDT), some inert gases (radon), etc.
Adipose tissue will accumulate toxicants due to its low vascularization and lower rate of biotransformation. Here accumulation of toxicants may represent a kind of temporary “neutralization” because of lack of targets for toxic effect. However, potential danger for the organism is always present due to the possibility of mobilization of toxicants from this compartment back to the circulation.
Deposition of toxicants in the brain (CNS) or lipid-rich tissue of the myelin sheath of the peripheral nervous system is very dangerous. The neurotoxicants are deposited here directly next to their targets. Toxicants retained in lipid-rich tissue of the endocrine glands can produce hormonal disturbances. Despite the blood-brain barrier, numerous neurotoxicants of a lipophilic nature reach the brain (CNS): anaesthetics, organic solvents, pesticides, tetraethyl lead, organomercurials, etc.
Retention in the reticuloendothelial system
In each tissue and organ a certain percentage of cells is specialized for phagocytic activity, engulfing micro-organisms, particles, colloid particles, and so on. This system is called the reticuloendothelial system (RES), comprising fixed cells as well as moving cells (phagocytes). These cells are present in non-active form. An increase of the above-mentioned microbes and particles will activate the cells up to a saturation point.
Toxicants in the form of colloids will be captured by the RES of organs and tissues. Distribution depends on the colloid particle size. For larger particles, retention in the liver will be favoured. With smaller colloid particles, more or less uniform distribution will occur between the spleen, bone marrow and liver. Clearance of colloids from the RES is very slow, although small particles are cleared relatively more quickly.
Accumulation in bones
About 60 elements can be identified as osteotropic elements, or bone seekers.
Osteotropic elements can be divided into three groups:
The skeleton of a standard man accounts for 10 to 15% of the total body weight, representing a large potential storage depot for osteotropic toxicants. Bone is a highly specialized tissue consisting by volume of 54% minerals and 38% organic matrix. The mineral matrix of bone is hydroxyapatite, Ca10(PO4)6(OH)2, in which the ratio of Ca to P is about 1.5 to one. The surface area of mineral available for adsorption is about 100 m2 per g of bone.
Metabolic activity of the bones of the skeleton can be divided into two categories:
In the foetus, infant and young child, metabolic bone (see “available skeleton”) represents almost 100% of the skeleton. With age this percentage of metabolic bone decreases. Incorporation of toxicants during exposure appears in the metabolic bone and in the more slowly turning-over compartments.
Incorporation of toxicants into bone occurs in two ways: by ion-exchange reactions and by colloid adsorption, as described below.
Ion-exchange reactions
The bone mineral, hydroxyapatite, represents a complex ion-exchange system. Calcium cations can be exchanged for various other cations. The anions present in bone can also be exchanged for other anions: phosphate with citrates and carbonates, hydroxyl with fluoride. Ions which are not exchangeable can be adsorbed on the mineral surface. When toxicant ions are incorporated in the mineral, a new layer of mineral can cover the mineral surface, burying the toxicant into the bone structure. Ion exchange is a reversible process, depending on the concentration of ions, pH and fluid volume. Thus, for example, an increase of dietary calcium may decrease the deposition of toxicant ions in the mineral lattice. It has been mentioned that with age the percentage of metabolic bone decreases, although ion exchange continues. With ageing, bone mineral resorption occurs and bone density actually decreases. At this point, toxicants in bone may be released (e.g., lead).
About 30% of the ions incorporated into bone minerals are loosely bound and can be exchanged, captured by natural chelating agents and excreted, with a biological half-life of 15 days. The other 70% is more firmly bound. Mobilization and excretion of this fraction shows a biological half-life of 2.5 years or more, depending on the type of bone (remodelling processes).
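Combining the two fractions described above, the fraction of initially incorporated toxicant still retained in bone after a time t can be sketched as a first-order approximation of the figures quoted in this text:

\[ R(t) \approx 0.30 \, e^{-\ln 2 \cdot t / 15\ \text{days}} + 0.70 \, e^{-\ln 2 \cdot t / 2.5\ \text{years}}, \]

so that after a few months essentially only the firmly bound fraction remains, to be released over years by the remodelling processes.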
Chelating agents (Ca-EDTA, penicillamine, BAL, etc.) can mobilize considerable quantities of some heavy metals, greatly increasing their excretion in urine.
Colloid adsorption
Colloid particles are adsorbed as a film on the mineral surface (100 m² per g) by Van der Waals forces or chemisorption. This layer of colloids is then covered by the next layer of newly formed mineral, burying the toxicants more deeply in the bone structure. The rate of mobilization and elimination depends on remodelling processes.
Accumulation in hair and nails
The hair and nails contain keratin, with sulphydryl groups able to chelate metallic cations such as mercury and lead.
Distribution of toxicant inside the cell
Recently the distribution of toxicants, especially some heavy metals, within cells of tissues and organs has become of importance. With ultracentrifugation techniques, various fractions of the cell can be separated to determine their content of metal ions and other toxicants.
Animal studies have revealed that after penetration into the cell, some metal ions are bound to a specific protein, metallothionein. This low molecular weight protein is present in the cells of liver, kidney and other organs and tissues. Its sulphydryl groups can bind six ions per molecule. Increased presence of metal ions induces the biosynthesis of this protein. Ions of cadmium are the most potent inducer. Metallothionein serves also to maintain homeostasis of vital copper and zinc ions. Metallothionein can bind zinc, copper, cadmium, mercury, bismuth, gold, cobalt and other cations.
Biotransformation and Elimination of Toxicants
During retention in cells of various tissues and organs, toxicants are exposed to enzymes which can biotransform (metabolize) them, producing metabolites. There are many pathways for the elimination of toxicants and/or metabolites: by exhaled air via the lungs, by urine via the kidneys, by bile via the GIT, by sweat via the skin, by saliva via the mouth mucosa, by milk via the mammary glands, and by hair and nails via normal growth and cell turnover.
The elimination of an absorbed toxicant depends on the portal of entry. In the lungs the absorption/desorption process starts immediately and toxicants are partially eliminated by exhaled air. Elimination of toxicants absorbed by other paths of entry is prolonged and starts after transport by blood, eventually being completed after distribution and biotransformation. During absorption an equilibrium exists between the concentrations of a toxicant in the blood and in tissues and organs. Excretion decreases toxicant blood concentration and may induce mobilization of a toxicant from tissues into blood.
Many factors can influence the rate at which toxicants and their metabolites are eliminated from the body.
Here we distinguish two groups of compartments: (1) the rapid-exchange system—in these compartments, the tissue concentration of toxicant is similar to that of the blood; and (2) the slow-exchange system, where the tissue concentration of toxicant is higher than in blood due to binding and accumulation—adipose tissue, the skeleton and the kidneys can temporarily retain some toxicants, e.g., arsenic and zinc.
A toxicant can be excreted simultaneously by two or more excretion routes. However, usually one route is dominant.
Scientists are developing mathematical models describing the excretion of a particular toxicant. These models are based on the movement from one or both compartments (exchange systems), biotransformation and so on.
Elimination by exhaled air via lungs
Elimination via the lungs (desorption) is typical for toxicants with high volatility (e.g., organic solvents). Gases and vapours with low solubility in blood will be quickly eliminated this way, whereas toxicants with high blood solubility will be eliminated by other routes.
Organic solvents absorbed by the GIT or skin are excreted partially by exhaled air in each passage of blood through the lungs, if they have a sufficient vapour pressure. The Breathalyser test used for suspected drunk drivers is based on this fact. The concentration of CO in exhaled air is in equilibrium with the CO-Hb blood content. The radioactive gas radon appears in exhaled air due to the decay of radium accumulated in the skeleton.
Elimination of a toxicant by exhaled air as a function of the post-exposure period is usually described by a three-phase curve. The first phase represents elimination of the toxicant from the blood, showing a short half-life. The second, slower phase represents elimination due to exchange of blood with tissues and organs (the rapid-exchange system). The third, very slow phase is due to exchange of blood with fatty tissue and the skeleton. If a toxicant does not accumulate in these compartments, the curve will be two-phase. In some cases a four-phase curve is also possible.
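Such a multi-phase decay can be represented as a sum of exponential terms, one per phase. The Python sketch below is purely illustrative: the amplitudes and half-lives are hypothetical values chosen to show the shape of a three-phase breath-elimination curve, not data for any real solvent.

import math

# Illustrative three-phase decay of a solvent concentration in exhaled air
# after exposure ends. Each tuple is (amplitude in arbitrary units, half-life in hours).
phases = [
    (60.0, 0.05),   # elimination from blood (very short half-life)
    (30.0, 2.0),    # exchange with rapid-exchange tissues and organs
    (10.0, 60.0),   # exchange with fatty tissue and skeleton
]

def breath_concentration(t_hours):
    """Concentration in exhaled air t_hours after the end of exposure."""
    return sum(a * math.exp(-math.log(2) / t_half * t_hours)
               for a, t_half in phases)

for t in (0, 0.1, 1, 8, 24, 120):
    print(f"{t:6.1f} h post-exposure: {breath_concentration(t):7.2f} (arbitrary units)")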
Determination of gases and vapours in exhaled air in the post-exposure period is sometimes used for evaluation of exposures in workers.
Renal excretion
The kidney is an organ specialized in the excretion of numerous water-soluble toxicants and metabolites, maintaining homeostasis of the organism. Each kidney possesses about one million nephrons able to perform excretion. Renal excretion is a complex event encompassing three different mechanisms: glomerular filtration, tubular secretion by active transport and tubular reabsorption.
Excretion of a toxicant via the kidneys to urine depends on the Nernst partition coefficient, dissociation constant and pH of urine, molecular size and shape, rate of metabolism to more hydrophilic metabolites, as well as health status of the kidneys.
The kinetics of renal excretion of a toxicant or its metabolite can be expressed by a two-, three- or four-phase excretion curve, depending on the distribution of the particular toxicant in various body compartments differing in the rate of exchange with the blood.
Saliva
Some drugs and metallic ions can be excreted through the mucosa of the mouth by saliva—for example, lead (“lead line”), mercury, arsenic, copper, as well as bromides, iodides, ethyl alcohol, alkaloids, and so on. The toxicants are then swallowed, reaching the GIT, where they can be reabsorbed or eliminated by faeces.
Sweat
Many non-electrolytes can be partially eliminated via skin by sweat: ethyl alcohol, acetone, phenols, carbon disulphide and chlorinated hydrocarbons.
Milk
Many metals, organic solvents and some organochlorine pesticides (DDT) are secreted via the mammary gland in mother’s milk. This pathway can represent a danger for nursing infants.
Hair
Analysis of hair can be used as an indicator of homeostasis of some physiological substances. Also exposure to some toxicants, especially heavy metals, can be evaluated by this kind of bioassay.
Elimination of toxicants from the body can be increased by appropriate interventions, for example the administration of chelating agents (see above).
Exposure Determinations
Determination of toxicants and metabolites in blood, exhaled air, urine, sweat, faeces and hair is increasingly used for evaluation of human exposure (exposure tests) and/or evaluation of the degree of intoxication. Therefore biological exposure limits (Biological MAC Values, Biological Exposure Indices—BEI) have recently been established. These bioassays show the “internal exposure” of the organism, that is, the total exposure of the body in both the work and living environments by all portals of entry (see “Toxicology test methods: Biomarkers”).
Combined Effects Due to Multiple Exposure
People in the work and/or living environment are usually exposed simultaneously or consecutively to various physical and chemical agents. It is also necessary to take into consideration that some persons use medications, smoke, and consume alcohol and food containing additives, and so on. That means that multiple exposure is usually occurring. Physical and chemical agents can interact at each step of the toxicokinetic and/or toxicodynamic processes, producing three possible effects: independent effects, synergism (including additive effects and potentiation) and antagonism.
However, studies on combined effects are rare. This kind of study is very complex due to the combination of various factors and agents.
We can conclude that when the human organism is exposed to two or more toxicants simultaneously or consecutively, it is necessary to consider the possibility of some combined effects, which can increase or decrease the rate of toxicokinetic processes.
Exposure, Dose and Response
Toxicity is the intrinsic capacity of a chemical agent to affect an organism adversely.
Xenobiotic is a term for a “foreign substance”, that is, one foreign to the organism; its opposite is an endogenous compound. Xenobiotics include drugs, industrial chemicals, naturally occurring poisons and environmental pollutants.
Hazard is the potential for the toxicity to be realized in a specific setting or situation.
Risk is the probability that a specific adverse effect will occur. It is often expressed as the percentage of cases in a given population and during a specific time period. A risk estimate can be based upon actual cases or upon a projection of future cases derived from extrapolations.
Toxicity rating and toxicity classification can be used for regulatory purposes. Toxicity rating is an arbitrary grading of doses or exposure levels causing toxic effects. The grading can be “supertoxic,” “highly toxic,” “moderately toxic” and so on. The most common ratings concern acute toxicity. Toxicity classification concerns the grouping of chemicals into general categories according to their most important toxic effect. Such categories can include allergenic, neurotoxic, carcinogenic and so on. This classification can be of administrative value as a warning and as information.
The dose-effect relationship is the relationship between dose and effect on the individual level. An increase in dose may increase the intensity of an effect, or a more severe effect may result. A dose-effect curve may be obtained at the level of the whole organism, the cell or the target molecule. Some toxic effects, such as death or cancer, are not graded but are “all or none” effects.
The dose-response relationship is the relationship between dose and the percentage of individuals showing a specific effect. With increasing dose a greater number of individuals in the exposed population will usually be affected.
It is essential to toxicology to establish dose-effect and dose-response relationships. In medical (epidemiological) studies a criterion often used for accepting a causal relationship between an agent and a disease is that the effect or response is proportional to dose.
Several dose-response curves can be drawn for a chemical—one for each type of effect. The dose-response curve for most toxic effects (when studied in large populations) has a sigmoid shape. There is usually a low-dose range where no response is detected; as dose increases, the response follows an ascending curve that will usually reach a plateau at 100% response. The dose-response curve reflects the variations among individuals in a population. The slope of the curve varies from chemical to chemical and between different types of effects. For some chemicals with specific effects (carcinogens, initiators, mutagens) the dose-response curve might be linear from dose zero within a certain dose range. This means that no threshold exists and that even small doses represent a risk. Above that dose range, the risk may increase at greater than a linear rate.
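A sigmoid dose-response relationship of this kind is often described mathematically by a Hill-type function. The following Python sketch is only an illustration of the shape of such a curve; the ED50 of 10 mg/kg and the slope parameter are hypothetical and do not refer to any particular chemical.

# Hypothetical sigmoid dose-response curve: the fraction of an exposed
# population showing a given effect as a function of dose.
ED50 = 10.0   # dose producing a response in 50% of individuals (mg/kg, invented)
SLOPE = 2.0   # steepness of the curve; varies between chemicals and effects

def response_fraction(dose):
    """Fraction of the population responding at the given dose (Hill-type curve)."""
    if dose <= 0:
        return 0.0
    return dose**SLOPE / (ED50**SLOPE + dose**SLOPE)

for dose in (0, 1, 5, 10, 20, 50, 100):
    print(f"dose {dose:5.0f} mg/kg -> {100 * response_fraction(dose):5.1f}% responding")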
Variation in exposure during the day and the total length of exposure during one’s lifetime may be as important for the outcome (response) as mean or average or even integrated dose level. High peak exposures may be more harmful than a more even exposure level. This is the case for some organic solvents. On the other hand, for some carcinogens, it has been experimentally shown that the fractionation of a single dose into several exposures with the same total dose may be more effective in producing tumours.
A dose is often expressed as the amount of a xenobiotic entering an organism (in units such as mg/kg body weight). The dose may be expressed in different (more or less informative) ways: exposure dose, which is the air concentration of pollutant inhaled during a certain time period (in work hygiene usually eight hours), or the retained or absorbed dose (in industrial hygiene also called the body burden), which is the amount present in the body at a certain time during or after exposure. The tissue dose is the amount of substance in a specific tissue and the target dose is the amount of substance (usually a metabolite) bound to the critical molecule. The target dose can be expressed as mg chemical bound per mg of a specific macromolecule in the tissue. To apply this concept, information on the mechanism of toxic action on the molecular level is needed. The target dose is more exactly associated with the toxic effect. The exposure dose or body burden may be more easily available, but these are less precisely related to the effect.
In the dose concept a time aspect is often included, even if it is not always expressed. The theoretical dose according to Haber’s law is D = ct, where D is dose, c is concentration of the xenobiotic in the air and t the duration of exposure to the chemical. If this concept is used at the target organ or molecular level, the amount per mg tissue or molecule over a certain time may be used. The time aspect is usually more important for understanding repeated exposures and chronic effects than for single exposures and acute effects.
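The point about the time aspect can be made concrete with Haber's law as stated above. In the minimal Python sketch below, the concentrations and durations are hypothetical; it simply shows that a low concentration over a full shift and a short, high peak can yield the same theoretical dose D = c × t, even though their effects may differ (see the preceding discussion of peak exposures).

# Haber's law, D = c x t: theoretical dose as the product of air concentration
# and exposure duration. All numbers are hypothetical, for illustration only.
def haber_dose(concentration_mg_per_m3, duration_hours):
    """Theoretical exposure dose in (mg/m3) x hours."""
    return concentration_mg_per_m3 * duration_hours

print(haber_dose(25, 8))    # 25 mg/m3 over an 8-hour shift -> 200 (mg/m3)·h
print(haber_dose(100, 2))   # 100 mg/m3 over a 2-hour peak  -> 200 (mg/m3)·h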
Additive effects occur as a result of exposure to a combination of chemicals, where the individual toxicities are simply added to each other (1 + 1 = 2). When chemicals act via the same mechanism, additivity of their effects is assumed, although this is not always the case in reality. Interaction between chemicals may result in an inhibition (antagonism), with a smaller effect than that expected from addition of the effects of the individual chemicals (1 + 1 < 2). Alternatively, a combination of chemicals may produce a more pronounced effect than would be expected by addition (an increased response among individuals, or an increase in the frequency of response in a population); this is called synergism (1 + 1 > 2).
Latency time is the time between first exposure and the appearance of a detectable effect or response. The term is often used for carcinogenic effects, where tumours may appear a long time after the start of exposure and sometimes long after the cessation of exposure.
A dose threshold is a dose level below which no observable effect occurs. Thresholds are thought to exist for certain effects, like acute toxic effects; but not for others, like carcinogenic effects (by DNA-adduct-forming initiators). The mere absence of a response in a given population should not, however, be taken as evidence for the existence of a threshold. Absence of response could be due to simple statistical phenomena: an adverse effect occurring at low frequency may not be detectable in a small population.
LD50 (lethal dose 50) is the dose causing 50% lethality in an animal population. The LD50 is often given in older literature as a measure of the acute toxicity of chemicals. The higher the LD50, the lower the acute toxicity. A highly toxic chemical (with a low LD50) is said to be potent. There is no necessary correlation between acute and chronic toxicity. ED50 (effective dose) is the dose causing a specific effect other than lethality in 50% of the animals.
NOEL (NOAEL) means the no observed (adverse) effect level, or the highest dose that does not cause a toxic effect. Establishing a NOEL requires multiple doses, a large population and additional information to make sure that the absence of a response is not merely a statistical phenomenon. LOEL is the lowest observed effect level on a dose-response curve, or the lowest dose that causes an effect.
A safety factor is a formal, arbitrary number by which one divides the NOEL or LOEL derived from animal experiments to obtain a tentative permissible dose for humans. This is often used in the area of food toxicology, but may be used also in occupational toxicology. A safety factor may also be used for extrapolation of data from small populations to larger populations. Safety factors range from 1 to 1,000 (10⁰ to 10³). A safety factor of two may typically be sufficient to protect against a less serious effect (such as irritation), and a factor as large as 1,000 may be used for very serious effects (such as cancer). The term safety factor could be better replaced by the term protection factor or, even, uncertainty factor. The use of the latter term reflects scientific uncertainties, such as whether exact dose-response data can be translated from animals to humans for the particular chemical, toxic effect or exposure situation.
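The arithmetic involved is simple and can be sketched as follows. The NOEL value and the composite factor of 100 in this Python fragment are hypothetical; the split into a factor of 10 for interspecies differences and 10 for interindividual variability is shown only as one commonly used convention, not as a prescribed rule.

# Deriving a tentative permissible dose from an animal NOEL by applying a
# safety (protection/uncertainty) factor, as described in the text.
noel_mg_per_kg_day = 5.0      # hypothetical NOEL from an animal study
safety_factor = 100           # e.g., 10 (interspecies) x 10 (interindividual variation)

permissible_dose = noel_mg_per_kg_day / safety_factor
print(f"Tentative permissible dose: {permissible_dose} mg/kg body weight per day")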
Extrapolations are theoretical qualitative or quantitative estimates of toxicity (risk extrapolations) derived from translation of data from one species to another or from one set of dose-response data (typically in the high dose range) to regions of dose-response where no data exist. Extrapolations usually must be made to predict toxic responses outside the observation range. Mathematical modelling is used for extrapolations based upon an understanding of the behaviour of the chemical in the organism (toxicokinetic modelling) or based upon the understanding of statistical probabilities that specific biological events will occur (biologically or mechanistically based models). Some national agencies have developed sophisticated extrapolation models as a formalized method to predict risks for regulatory purposes. (See discussion of risk assessment later in the chapter.)
Systemic effects are toxic effects in tissues distant from the route of absorption.
Target organ is the primary or most sensitive organ affected after exposure. The same chemical may affect different target organs depending on the route of exposure, dose, dose rate, sex and species. Interaction between chemicals, or between chemicals and other factors, may affect different target organs as well.
Acute effects occur after limited exposure and shortly (hours, days) after exposure and may be reversible or irreversible.
Chronic effects occur after prolonged exposure (months, years, decades) and/or persist after exposure has ceased.
Acute exposure is an exposure of short duration, while chronic exposure is long-term (sometimes life-long) exposure.
Tolerance to a chemical may occur when repeated exposures result in a lower response than would have been expected without pretreatment.
Uptake and Disposition
Transport processes
Diffusion. In order to enter the organism and reach a site where damage is produced, a foreign substance has to pass several barriers, including cells and their membranes. Most toxic substances pass through membranes passively by diffusion. This may occur for small water-soluble molecules by passage through aqueous channels or, for fat-soluble ones, by dissolution into and diffusion through the lipid part of the membrane. Ethanol, a small molecule that is both water and fat soluble, diffuses rapidly through cell membranes.
Diffusion of weak acids and bases. Weak acids and bases may readily pass membranes in their non-ionized, fat-soluble form while ionized forms are too polar to pass. The degree of ionization of these substances depends on pH. If a pH gradient exists across a membrane they will therefore accumulate on one side. The urinary excretion of weak acids and bases is highly dependent on urinary pH. Foetal or embryonic pH is somewhat higher than maternal pH, causing a slight accumulation of weak acids in the foetus or embryo.
Facilitated diffusion. The passage of a substance may be facilitated by carriers in the membrane. Facilitated diffusion is similar to enzyme processes in that it is protein mediated, highly selective, and saturable. Other substances may inhibit the facilitated transport of xenobiotics.
Active transport. Some substances are actively transported across cell membranes. This transport is mediated by carrier proteins in a process analogous to that of enzymes. Active transport is similar to facilitated diffusion, but it may occur against a concentration gradient. It requires energy input and a metabolic inhibitor can block the process. Most environmental pollutants are not transported actively. One exception is the active tubular secretion and reabsorption of acid metabolites in the kidneys.
Phagocytosis is a process where specialized cells such as macrophages engulf particles for subsequent digestion. This transport process is important, for example, for the removal of particles in the alveoli.
Bulk flow. Substances are also transported in the body along with the movement of air in the respiratory system during breathing, and the movements of blood, lymph or urine.
Filtration. Due to hydrostatic or osmotic pressure water flows in bulk through pores in the endothelium. Any solute that is small enough will be filtered together with the water. Filtration occurs to some extent in the capillary bed in all tissues but is particularly important in the formation of primary urine in the kidney glomeruli.
Absorption
Absorption is the uptake of a substance from the environment into the organism. The term usually includes not only the entrance into the barrier tissue but also the further transport into circulating blood.
Pulmonary absorption. The lungs are the primary route of deposition and absorption of small airborne particles, gases, vapours and aerosols. For highly water-soluble gases and vapours a significant part of the uptake occurs in the nose and the respiratory tree, but for less soluble substances it primarily takes place in the lung alveoli. The alveoli have a very large surface area (about 100 m² in humans). In addition, the diffusion barrier is extremely small, with only two thin cell layers and a distance in the order of micrometres from alveolar air to systemic blood circulation. This makes the lungs very efficient in the exchange not only of oxygen and carbon dioxide but also of other gases and vapours. In general, the diffusion across the alveolar wall is so rapid that it does not limit the uptake. The absorption rate is instead dependent on flow (pulmonary ventilation, cardiac output) and solubility (blood:air partition coefficient). Another important factor is metabolic elimination. The relative importance of these factors for pulmonary absorption varies greatly for different substances. Physical activity results in increased pulmonary ventilation and cardiac output, and decreased liver blood flow (and, hence, biotransformation rate). For many inhaled substances this leads to a marked increase in pulmonary absorption.
Percutaneous absorption. The skin is a very efficient barrier. Apart from its thermoregulatory role, it is designed to protect the organism from micro-organisms, ultraviolet radiation and other deleterious agents, and also against excessive water loss. The diffusion distance in the dermis is on the order of tenths of millimetres. In addition, the keratin layer has a very high resistance to diffusion for most substances. Nevertheless, significant dermal absorption resulting in toxicity may occur for some substances—highly toxic, fat-soluble substances such as organophosphorous insecticides and organic solvents, for example. Significant absorption is likely to occur after exposure to liquid substances. Percutaneous absorption of vapour may be important for solvents with very low vapour pressure and high affinity to water and skin.
Gastrointestinal absorption occurs after accidental or intentional ingestion. Larger particles originally inhaled and deposited in the respiratory tract may be swallowed after mucociliary transport to the pharynx. Practically all soluble substances are efficiently absorbed in the gastrointestinal tract. The low pH of the gut may facilitate absorption, for instance, of metals.
Other routes. In toxicity testing and other experiments, special routes of administration are often used for convenience, although these are rare and usually not relevant in the occupational setting. These routes include intravenous (IV), subcutaneous (sc), intraperitoneal (ip) and intramuscular (im) injections. In general, substances are absorbed at a higher rate and more completely by these routes, especially after IV injection. This leads to short-lasting but high concentration peaks that may increase the toxicity of a dose.
Distribution
The distribution of a substance within the organism is a dynamic process which depends on uptake and elimination rates, as well as the blood flow to the different tissues and their affinities for the substance. Water-soluble, small, uncharged molecules, univalent cations, and most anions diffuse easily and will eventually reach a relatively even distribution in the body.
Volume of distribution is the amount of a substance in the body at a given time, divided by the concentration in blood, plasma or serum at that time. The value has no meaning as a physical volume, as many substances are not uniformly distributed in the organism. A volume of distribution of less than 1 l/kg body weight indicates preferential distribution in the blood (or serum or plasma), whereas a value above 1 l/kg indicates a preference for peripheral tissues, such as adipose tissue for fat-soluble substances.
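The calculation is straightforward, as the short Python sketch below shows. The body burden, plasma concentration and 70 kg body weight are hypothetical values used only to illustrate how a value well above 1 l/kg points to distribution into peripheral tissues.

# Apparent volume of distribution: amount in the body divided by the
# concentration in blood (or plasma) at the same time. Values are hypothetical.
amount_in_body_mg = 140.0          # estimated body burden at time t
plasma_concentration_mg_per_l = 0.5

v_d_litres = amount_in_body_mg / plasma_concentration_mg_per_l
v_d_l_per_kg = v_d_litres / 70.0   # normalized to a 70 kg body weight

print(f"V_d = {v_d_litres:.0f} l, i.e., {v_d_l_per_kg:.1f} l/kg")
# A value well above 1 l/kg, as here, suggests distribution into peripheral
# tissues (e.g., adipose tissue for a fat-soluble substance).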
Accumulation is the build-up of a substance in a tissue or organ to higher levels than in blood or plasma. It may also refer to a gradual build-up over time in the organism. Many xenobiotics are highly fat soluble and tend to accumulate in adipose tissue, while others have a special affinity for bone. For example, calcium in bone may be exchanged for cations of lead, strontium, barium and radium, and hydroxyl groups in bone may be exchanged for fluoride.
Barriers. The blood vessels in the brain, testes and placenta have special anatomical features that inhibit passage of large molecules like proteins. These features, often referred to as blood-brain, blood-testes, and blood-placenta barriers, may give the false impression that they prevent passage of any substance. These barriers are of little or no importance for xenobiotics that can diffuse through cell membranes.
Blood binding. Substances may be bound to red blood cells or plasma components, or occur unbound in blood. Carbon monoxide, arsenic, organic mercury and hexavalent chromium have a high affinity for red blood cells, while inorganic mercury and trivalent chromium show a preference for plasma proteins. A number of other substances also bind to plasma proteins. Only the unbound fraction is available for filtration or diffusion into eliminating organs. Blood binding may therefore increase the residence time in the organism but decrease uptake by target organs.
Elimination
Elimination is the disappearance of a substance from the body. Elimination may involve excretion from the body or transformation to other substances not captured by a specific method of measurement. The rate of disappearance may be expressed by the elimination rate constant, the biological half-time or the clearance.
Concentration-time curve. The curve of concentration in blood (or plasma) versus time is a convenient way of describing uptake and disposition of a xenobiotic.
Area under the curve (AUC) is the integral of concentration in blood (plasma) over time. When metabolic saturation and other non-linear processes are absent, AUC is proportional to the absorbed amount of substance.
Biological half-time (or half-life) is the time needed after the end of exposure to reduce the amount in the organism to one-half. As it is often difficult to assess the total amount of a substance, measurements such as the concentration in blood (plasma) are used. The half-time should be used with caution, as it may change, for example, with dose and length of exposure. In addition, many substances have complex decay curves with several half-times.
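The quantities defined in the last three paragraphs are related by simple expressions: for first-order elimination the half-time equals ln(2) divided by the elimination rate constant, and the AUC can be estimated from sampled blood concentrations by the trapezoidal rule. The Python sketch below illustrates both; the rate constant and the sampled concentrations are hypothetical.

import math

# Half-time from a first-order elimination rate constant, and a trapezoidal
# estimate of the area under the concentration-time curve (AUC).
k_elimination_per_h = 0.173                       # hypothetical rate constant
half_time_h = math.log(2) / k_elimination_per_h   # about 4 h
print(f"biological half-time: {half_time_h:.1f} h")

# Hypothetical blood samples: time in hours, concentration in mg/l
times = [0, 1, 2, 4, 8, 12]
concentrations = [10.0, 8.4, 7.1, 5.0, 2.5, 1.2]
auc = sum((t2 - t1) * (c1 + c2) / 2
          for t1, t2, c1, c2 in zip(times, times[1:], concentrations, concentrations[1:]))
print(f"AUC(0-12 h) is approximately {auc:.1f} (mg/l)·h")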
Bioavailability is the fraction of an administered dose entering the systemic circulation. In the absence of presystemic clearance, or first-pass metabolism, the fraction is one. In oral exposure presystemic clearance may be due to metabolism within the gastrointestinal content, gut wall or liver. First-pass metabolism will reduce the systemic absorption of the substance and instead increase the absorption of metabolites. This may lead to a different toxicity pattern.
Clearance is the volume of blood (plasma) per unit time completely cleared of a substance. To distinguish from renal clearance, for example, the prefix total, metabolic or blood (plasma) is often added.
Intrinsic clearance is the capacity of endogenous enzymes to transform a substance, and is also expressed in volume per unit time. If the intrinsic clearance in an organ is much lower than the blood flow, the metabolism is said to be capacity limited. Conversely, if the intrinsic clearance is much higher than the blood flow, the metabolism is flow limited.
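One simple way to see the capacity-limited and flow-limited regimes is the textbook "well-stirred" organ model, in which organ clearance equals Q × CLint / (Q + CLint), with Q the organ blood flow and CLint the intrinsic clearance (protein binding is ignored here). The Python sketch below uses an approximate adult liver blood flow of about 1.5 l/min and hypothetical intrinsic clearances; it is an illustration of the principle, not a predictive model.

# Well-stirred organ model (simplified): clearance approaches CLint when
# CLint << blood flow (capacity limited) and approaches blood flow when
# CLint >> blood flow (flow limited).
def organ_clearance(blood_flow_l_per_min, intrinsic_clearance_l_per_min):
    q, cl_int = blood_flow_l_per_min, intrinsic_clearance_l_per_min
    return q * cl_int / (q + cl_int)

liver_blood_flow = 1.5  # l/min, approximate adult value
for cl_int in (0.1, 1.5, 15.0):
    cl = organ_clearance(liver_blood_flow, cl_int)
    print(f"CLint = {cl_int:5.1f} l/min -> organ clearance = {cl:4.2f} l/min")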
Excretion
Excretion is the exit of a substance and its biotransformation products from the organism.
Excretion in urine and bile. The kidneys are the most important excretory organs. Some substances, especially acids with high molecular weights, are excreted with bile. A fraction of biliary excreted substances may be reabsorbed in the intestines. This process, enterohepatic circulation, is common for conjugated substances following intestinal hydrolysis of the conjugate.
Other routes of excretion. Some substances, such as organic solvents and breakdown products such as acetone, are volatile enough so that a considerable fraction may be excreted by exhalation after inhalation. Small water-soluble molecules as well as fat-soluble ones are readily secreted to the foetus via the placenta, and into milk in mammals. For the mother, lactation can be a quantitatively important excretory pathway for persistent fat-soluble chemicals. The offspring may be secondarily exposed via the mother during pregnancy as well as during lactation. Water-soluble compounds may to some extent be excreted in sweat and saliva. These routes are generally of minor importance. However, as a large volume of saliva is produced and swallowed, saliva excretion may contribute to reabsorption of the compound. Some metals such as mercury are excreted by binding permanently to the sulphydryl groups of the keratin in the hair.
Toxicokinetic models
Mathematical models are important tools to understand and describe the uptake and disposition of foreign substances. Most models are compartmental, that is, the organism is represented by one or more compartments. A compartment is a chemically and physically theoretical volume in which the substance is assumed to distribute homogeneously and instantaneously. Simple models may be expressed as a sum of exponential terms, while more complicated ones require numerical procedures on a computer for their solution. Models may be subdivided in two categories, descriptive and physiological.
In descriptive models, fitting to measured data is performed by changing the numerical values of the model parameters or even the model structure itself. The model structure normally has little to do with the structure of the organism. Advantages of the descriptive approach are that few assumptions are made and that there is no need for additional data. A disadvantage of descriptive models is their limited usefulness for extrapolations.
Physiological models are constructed from physiological, anatomical and other independent data. The model is then refined and validated by comparison with experimental data. An advantage of physiological models is that they can be used for extrapolation purposes. For example, the influence of physical activity on the uptake and disposition of inhaled substances may be predicted from known physiological adjustments in ventilation and cardiac output. A disadvantage of physiological models is that they require a large amount of independent data.
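As a minimal example of the compartmental approach described above, the Python sketch below represents the body as a single compartment with a constant uptake rate during an 8-hour exposure followed by first-order elimination. All parameter values are hypothetical; the purpose is only to show how such a descriptive model is expressed in exponential terms.

import math

# One-compartment descriptive model: constant uptake during exposure,
# first-order elimination afterwards.
uptake_rate_mg_per_h = 2.0     # hypothetical absorbed amount per hour during exposure
k_elim_per_h = 0.1             # hypothetical first-order elimination rate constant
exposure_h = 8.0

def body_burden(t_hours):
    """Amount in the compartment at time t (analytical solution)."""
    if t_hours <= exposure_h:
        return (uptake_rate_mg_per_h / k_elim_per_h) * (1 - math.exp(-k_elim_per_h * t_hours))
    burden_at_end = body_burden(exposure_h)
    return burden_at_end * math.exp(-k_elim_per_h * (t_hours - exposure_h))

for t in (0, 2, 4, 8, 16, 24, 48):
    print(f"t = {t:3d} h: body burden of about {body_burden(t):5.1f} mg")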
Biotransformation
Biotransformation is a process which leads to a metabolic conversion of foreign compounds (xenobiotics) in the body. The process is often referred to as metabolism of xenobiotics. As a general rule metabolism converts lipid-soluble xenobiotics to large, water-soluble metabolites that can be effectively excreted.
The liver is the main site of biotransformation. All xenobiotics taken up from the intestine are transported to the liver by a single blood vessel (vena porta). If taken up in small quantities a foreign substance may be completely metabolized in the liver before reaching the general circulation and other organs (first pass effect). Inhaled xenobiotics are distributed via the general circulation to the liver. In that case only a fraction of the dose is metabolized in the liver before reaching other organs.
Liver cells contain several enzymes that oxidize xenobiotics. This oxidation generally activates the compound—it becomes more reactive than the parent molecule. In most cases the oxidized metabolite is further metabolized by other enzymes in a second phase. These enzymes conjugate the metabolite with an endogenous substrate, so that the molecule becomes larger and more polar. This facilitates excretion.
Enzymes that metabolize xenobiotics are also present in other organs such as the lungs and kidneys. In these organs they may play specific and qualitatively important roles in the metabolism of certain xenobiotics. Metabolites formed in one organ may be further metabolized in a second organ. Bacteria in the intestine may also participate in biotransformation.
Metabolites of xenobiotics can be excreted by the kidneys or via the bile. They can also be exhaled via the lungs, or bound to endogenous molecules in the body.
The relationship between biotransformation and toxicity is complex. Biotransformation can be seen as a necessary process for survival. It protects the organism against toxicity by preventing accumulation of harmful substances in the body. However, reactive intermediary metabolites may be formed in biotransformation, and these are potentially harmful. This is called metabolic activation. Thus, biotransformation may also induce toxicity. Oxidized, intermediary metabolites that are not conjugated can bind to and damage cellular structures. If, for example, a xenobiotic metabolite binds to DNA, a mutation can be induced (see “Genetic toxicology”). If the biotransformation system is overloaded, a massive destruction of essential proteins or lipid membranes may occur. This can result in cell death (see “Cellular injury and cellular death”).
Metabolism is a word often used interchangeably with biotransformation. It denotes chemical breakdown or synthesis reactions catalyzed by enzymes in the body. Nutrients from food, endogenous compounds, and xenobiotics are all metabolized in the body.
Metabolic activation means that a less reactive compound is converted to a more reactive molecule. This usually occurs during Phase 1 reactions.
Metabolic inactivation means that an active or toxic molecule is converted to a less active metabolite. This usually occurs during Phase 2 reactions. In certain cases an inactivated metabolite might be reactivated, for example by enzymatic cleavage.
Phase 1 reaction refers to the first step in xenobiotic metabolism. It usually means that the compound is oxidized. Oxidation usually makes the compound more water soluble and facilitates further reactions.
Cytochrome P450 enzymes are a group of enzymes that preferentially oxidize xenobiotics in Phase 1 reactions. The different enzymes are specialized for handling specific groups of xenobiotics with certain characteristics. Endogenous molecules are also substrates. Cytochrome P450 enzymes are induced by xenobiotics in a specific fashion. Obtaining induction data on cytochrome P450 can be informative about the nature of previous exposures (see “Genetic determinants of toxic response”).
Phase 2 reaction refers to the second step in xenobiotic metabolism. It usually means that the oxidized compound is conjugated with (coupled to) an endogenous molecule. This reaction increases the water solubility further. Many conjugated metabolites are actively excreted via the kidneys.
Transferases are a group of enzymes that catalyze Phase 2 reactions. They conjugate xenobiotics with endogenous compounds such as glutathione, amino acids, glucuronic acid or sulphate.
Glutathione is an endogenous molecule, a tripeptide, that is conjugated with xenobiotics in Phase 2 reactions. It is present in all cells (and in liver cells in high concentrations), and usually protects from activated xenobiotics. When glutathione is depleted, toxic reactions between activated xenobiotic metabolites and proteins, lipids or DNA may occur.
Induction means that enzymes involved in biotransformation are increased (in activity or amount) as a response to xenobiotic exposure. In some cases within a few days enzyme activity can be increased several fold. Induction is often balanced so that both Phase 1 and Phase 2 reactions are increased simultaneously. This may lead to a more rapid biotransformation and can explain tolerance. In contrast, unbalanced induction may increase toxicity.
Inhibition of biotransformation can occur if two xenobiotics are metabolized by the same enzyme. The two substrates have to compete, and usually one of the substrates is preferred. In that case the second substrate is not metabolized, or only slowly metabolized. As with induction, inhibition may increase as well as decrease toxicity.
Oxygen activation can be triggered by metabolites of certain xenobiotics. They may auto-oxidize under the production of activated oxygen species. These oxygen-derived species, which include superoxide, hydrogen peroxide and the hydroxyl radical, may damage DNA, lipids and proteins in cells. Oxygen activation is also involved in inflammatory processes.
Genetic variability between individuals is seen in many genes coding for Phase 1 and Phase 2 enzymes. Genetic variability may explain why certain individuals are more susceptible to toxic effects of xenobiotics than others.
Toxicology is the study of poisons, or, more comprehensively, the identification and quantification of adverse outcomes associated with exposures to physical agents, chemical substances and other conditions. As such, toxicology draws upon most of the basic biological sciences, medical disciplines, epidemiology and some areas of chemistry and physics for information, research designs and methods. Toxicology ranges from basic research investigations on the mechanism of action of toxic agents through the development and interpretation of standard tests characterizing the toxic properties of agents. Toxicology provides important information for both medicine and epidemiology in understanding aetiology and in providing information as to the plausibility of observed associations between exposures, including occupations, and disease. Toxicology can be divided into standard disciplines, such as clinical, forensic, investigative and regulatory toxicology; toxicology can be considered by target organ system or process, such as immunotoxicology or genetic toxicology; toxicology can be presented in functional terms, such as research, testing and risk assessment.
It is a challenge to propose a comprehensive presentation of toxicology in this Encyclopaedia. This chapter does not present a compendium of information on toxicology or adverse effects of specific agents. This latter information is better obtained from databases that are continually updated, as described in the last section of this chapter. Moreover, the chapter does not attempt to set toxicology within specific subdisciplines, such as forensic toxicology. It is the premise of the chapter that the information provided is relevant to all types of toxicological endeavours and to the use of toxicology in various medical specialities and fields. In this chapter, topics are based primarily upon a practical orientation and integration with the intent and purpose of the Encyclopaedia as a whole. Topics are also selected for ease of cross-reference within the Encyclopaedia.
In modern society, toxicology has become an important element in environmental and occupational health. This is because many organizations, governmental and non-governmental, utilize information from toxicology to evaluate and regulate hazards in the workplace and nonoccupational environment. As part of prevention strategies, toxicology is invaluable, since it is the source of information on potential hazards in the absence of widespread human exposures. Toxicological methods are also widely used by industry in product development, to provide information useful in the design of specific molecules or product formulations.
The chapter begins with five articles on general principles of toxicology, which are important to the consideration of most topics in the field. The first general principles relate to understanding relationships between external exposure and internal dose. In modern terminology, “exposure” refers to the concentrations or amount of a substance presented to individuals or populations—amounts found in specific volumes of air or water, or in masses of soil. “Dose” refers to the concentration or amount of a substance inside an exposed person or organism. In occupational health, standards and guidelines are often set in terms of exposure, or allowable limits on concentrations in specific situations, such as in air in the workplace. These exposure limits are predicated upon assumptions or information on the relationships between exposure and dose; however, often information on internal dose is unavailable. Thus, in many studies of occupational health, associations can be drawn only between exposure and response or effect. In a few instances, standards have been set based on dose (e.g., permissible levels of lead in blood or mercury in urine). While these measures are more directly correlated with toxicity, it is still necessary to back-calculate exposure levels associated with these levels for purposes of controlling risks.
The next article concerns the factors and events that determine the relationships between exposure, dose and response. The first factors relate to uptake, absorption and distribution—the processes that determine the actual transport of substances into the body from the external environment across portals of entry such as skin, lung and gut. These processes are at the interface between humans and their environments. The second factors, of metabolism, relate to understanding how the body handles absorbed substances. Some substances are transformed by cellular processes of metabolism, which can either increase or decrease their biological activity.
The concepts of target organ and critical effect have been developed to aid in the interpretation of toxicological data. Depending upon dose, duration and route of exposure, as well as host factors such as age, many toxic agents can induce a number of effects within organs and organisms. An important role of toxicology is to identify the important effect or sets of effects in order to prevent irreversible or debilitating disease. One important part of this task is the identification of the organ first or most affected by a toxic agent; this organ is defined as the “target organ”. Within the target organ, it is important to identify the important event or events that signals intoxication, or damage, in order to ascertain that the organ has been affected beyond the range of normal variation. This is known as the “critical effect”; it may represent the first event in a progression of pathophysiological stages (such as the excretion of small-molecular-weight proteins as a critical effect in nephrotoxicity), or it may represent the first and potentially irreversible effect in a disease process (such as formation of a DNA adduct in carcinogenesis). These concepts are important in occupational health because they define the types of toxicity and clinical disease associated with specific exposures, and in most cases reduction of exposure has as a goal the prevention of critical effects in target organs, rather than every effect in every or any organ.
The next two articles concern important host factors that affect many types of responses to many types of toxic agents. These are: genetic determinants, or inherited susceptibility/resistance factors; and age, sex and other factors such as diet or co-existence of infectious disease. These factors can also affect exposure and dose, through modifying uptake, absorption, distribution and metabolism. Because working populations around the world vary with respect to many of these factors, it is critical for occupational health specialists and policy-makers to understand the way in which these factors may contribute to variabilities in response among populations and individuals within populations. In societies with heterogeneous populations, these considerations are particularly important. The variability of human populations must be considered in evaluating the risks of occupational exposures and in reaching rational conclusions from the study of nonhuman organisms in toxicological research or testing.
The section then provides two general overviews on toxicology at the mechanistic level. Mechanistically, modern toxicologists consider that all toxic effects manifest their first actions at the cellular level; thus, cellular responses represent the earliest indications of the body’s encounters with a toxic agent. It is further assumed that these responses represent a spectrum of events, from injury through death. Cell injury refers to specific processes utilized by cells, the smallest unit of biological organization within organs, to respond to challenge. These responses involve changes in the function of processes within the cell, including the membrane and its ability to take up, release or exclude substances; the directed synthesis of proteins from amino acids; and the turnover of cell components. These responses may be common to all injured cells, or they may be specific to certain types of cells within certain organ systems. Cell death is the destruction of cells within an organ system, as a consequence of irreversible or uncompensated cell injury. Toxic agents may cause cell death acutely because of certain actions such as poisoning oxygen transfer, or cell death may be the consequence of chronic intoxication. Cell death can be followed by replacement in some but not all organ systems, but in some conditions cell proliferation induced by cell death may be considered a toxic response. Even in the absence of cell death, repeated cell injury may induce stress within organs that compromises their function and affects their progeny.
The chapter is then divided into more specific topics, which are grouped into the following categories: mechanism, test methods, regulation and risk assessment. The mechanism articles mostly focus on target systems rather than organs. This reflects the practice of modern toxicology and medicine, which studies organ systems rather than isolated organs. Thus, for example, the discussion of genetic toxicology is not focused upon the toxic effects of agents within a specific organ but rather on genetic material as a target for toxic action. Likewise, the article on immunotoxicology discusses the various organs and cells of the immune system as targets for toxic agents. The methods articles are designed to be highly operational; they describe current methods in use in many countries for hazard identification, that is, the development of information related to biological properties of agents.
The chapter continues with five articles on the application of toxicology in regulation and policy-making, from hazard identification to risk assessment. The current practice in several countries, as well as IARC, is presented. These articles should enable the reader to understand how information derived from toxicology tests is integrated with basic and mechanistic inferences to derive quantitative information used in setting exposure levels and other approaches to controlling hazards in the workplace and general environment.
A summary of available toxicology databases, to which the readers of this encyclopaedia can refer for detailed information on specific toxic agents and exposures, can be found in Volume III (see “Toxicology databases” in the chapter Safe handling of chemicals, which provides information on many of these databases, their information sources, methods of evaluation and interpretation, and means of access). These databases, together with the Encyclopaedia, provide the occupational health specialist, the worker and the employer with the ability to obtain and use up-to-date information on toxicology and the evaluation of toxic agents by national and international bodies.
This chapter focuses upon those aspects of toxicology relevant to occupational safety and health. For that reason, clinical toxicology and forensic toxicology are not specifically addressed as subdisciplines of the field. Many of the same principles and approaches described here are used in these subdisciplines as well as in environmental health. They are also applicable to evaluating the impacts of toxic agents on nonhuman populations, a major concern of environmental policies in many countries. A committed attempt has been made to enlist the perspectives and experiences of experts and practitioners from all sectors and from many countries; however, the reader may note a certain bias towards academic scientists in the developed world. Although the editor and contributors believe that the principles and practice of toxicology are international, the problems of cultural bias and narrowness of experience may well be evident in this chapter. The chapter editor hopes that readers of this Encyclopaedia will assist in ensuring the broadest perspective possible as this important reference continues to be updated and expanded.
" DISCLAIMER: The ILO does not take responsibility for content presented on this web portal that is presented in any language other than English, which is the language used for the initial production and peer-review of original content. Certain statistics have not been updated since the production of the 4th edition of the Encyclopaedia (1998)."