
Monday, 28 February 2011 20:07

General Principles

Basic Concepts and Definitions

At the worksite, industrial hygiene methodologies can measure and control only airborne chemicals, while other aspects of the problem of possible harmful agents in the environment of workers, such as skin absorption, ingestion, and non-work-related exposure, remain undetected and therefore uncontrolled. Biological monitoring helps fill this gap.

Biological monitoring was defined at a 1980 seminar in Luxembourg, jointly sponsored by the European Economic Community (EEC), the National Institute for Occupational Safety and Health (NIOSH) and the Occupational Safety and Health Administration (OSHA) (Berlin, Yodaiken and Henman 1984), as “the measurement and assessment of agents or their metabolites either in tissues, secreta, excreta, expired air or any combination of these to evaluate exposure and health risk compared to an appropriate reference”. Monitoring is a repetitive, regular and preventive activity designed to lead, if necessary, to corrective actions; it should not be confused with diagnostic procedures.

Biological monitoring is one of the three important tools in the prevention of diseases due to toxic agents in the general or occupational environment, the other two being environmental monitoring and health surveillance.

The sequence in the possible development of such disease may be represented schematically as follows: source → exposed chemical agent → internal dose → biochemical or cellular effect (reversible) → health effects → disease. The relationships among environmental, biological, and exposure monitoring, and health surveillance, are shown in figure 1.

Figure 1. The relationship between environmental, biological and exposure monitoring, and health surveillance


When a toxic substance (an industrial chemical, for example) is present in the environment, it contaminates air, water, food, or surfaces in contact with the skin; the amount of toxic agent in these media is evaluated via environmental monitoring.

As a result of absorption, distribution, metabolism, and excretion, a certain internal dose of the toxic agent (the net amount of a pollutant absorbed in or passed through the organism over a specific time interval) is effectively delivered to the body, and becomes detectable in body fluids. As a result of its interaction with a receptor in the critical organ (the organ which, under specific conditions of exposure, exhibits the first or the most important adverse effect), biochemical and cellular events occur. Both the internal dose and the elicited biochemical and cellular effects may be measured through biological monitoring.

Health surveillance was defined at the above-mentioned 1980 EEC/NIOSH/OSHA seminar as “the periodic medico-physiological examination of exposed workers with the objective of protecting health and preventing disease”.

Biological monitoring and health surveillance are parts of a continuum that can range from the measurement of agents or their metabolites in the body via evaluation of biochemical and cellular effects, to the detection of signs of early reversible impairment of the critical organ. The detection of established disease is outside the scope of these evaluations.

Goals of Biological Monitoring

Biological monitoring can be divided into (a) monitoring of exposure, and (b) monitoring of effect, for which indicators of internal dose and of effect are used respectively.

The purpose of biological monitoring of exposure is to assess health risk through the evaluation of internal dose, achieving an estimate of the biologically active body burden of the chemical in question. Its rationale is to ensure that worker exposure does not reach levels capable of eliciting adverse effects. An effect is termed “adverse” if there is an impairment of functional capacity, a decreased ability to compensate for additional stress, a decreased ability to maintain homeostasis (a stable state of equilibrium), or an enhanced susceptibility to other environmental influences.

Depending on the chemical and the analysed biological parameter, the term internal dose may have different meanings (Bernard and Lauwerys 1987). First, it may mean the amount of a chemical recently absorbed, for example, during a single workshift. A determination of the pollutant’s concentration in alveolar air or in the blood may be made during the workshift itself, or as late as the next day (samples of blood or alveolar air may be taken up to 16 hours after the end of the exposure period). Second, in the case that the chemical has a long biological half-life—for example, metals in the bloodstream—the internal dose could reflect the amount absorbed over a period of a few months.
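The first interpretation (a recently absorbed amount) follows from simple first-order elimination kinetics, the usual simplifying assumption in toxicokinetics; a minimal sketch (the 4-hour half-life below is an illustrative assumption, not a value from the text):

```python
def remaining_fraction(hours_elapsed: float, half_life_hours: float) -> float:
    """Fraction of the end-of-exposure level still present in a biological
    medium, assuming simple first-order (exponential) elimination."""
    return 0.5 ** (hours_elapsed / half_life_hours)

# For a hypothetical solvent with a 4-hour biological half-life, a blood or
# alveolar-air sample taken 16 hours after the end of exposure (the outer
# limit mentioned above) still retains (1/2)**4 = 6.25% of the original level:
print(remaining_fraction(16, 4))   # 0.0625
```

For a metal with a half-life of months, by contrast, the same formula shows why a single measurement reflects absorption integrated over a long period rather than one workshift.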

Third, the term may also mean the amount of chemical stored. In this case it represents an indicator of accumulation which can provide an estimate of the concentration of the chemical in organs and/or tissues from which, once deposited, it is only slowly released. For example, measurements of DDT or PCB in blood could provide such an estimate.

Finally, an internal dose value may indicate the quantity of the chemical at the site where it exerts its effects, thus providing information about the biologically effective dose. One of the most promising and important uses of this capability, for example, is the determination of adducts formed by toxic chemicals with protein in haemoglobin or with DNA.

Biological monitoring of effects is aimed at identifying early and reversible alterations which develop in the critical organ, and which, at the same time, can identify individuals with signs of adverse health effects. In this sense, biological monitoring of effects represents the principal tool for the health surveillance of workers.

Principal Monitoring Methods

Biological monitoring of exposure is based on the determination of indicators of internal dose by measuring:

  • the amount of the chemical to which the worker is exposed, measured in blood or urine (rarely in milk, saliva, or fat)
  • the amount of one or more metabolites of the chemical involved in the same body fluids
  • the concentration of volatile organic compounds (solvents) in alveolar air
  • the biologically effective dose of compounds which have formed adducts to DNA or other large molecules and which thus have a potential genotoxic effect.

 

Factors affecting the concentration of the chemical and its metabolites in blood or urine will be discussed below.

As far as the concentration in alveolar air is concerned, besides the level of environmental exposure, the most important factors involved are solubility and metabolism of the inhaled substance, alveolar ventilation, cardiac output, and length of exposure (Brugnone et al. 1980).

The use of DNA and haemoglobin adducts in monitoring human exposure to substances with carcinogenic potential is a very promising technique for measurement of low-level exposures. (It should be noted, however, that not all chemicals that bind to macromolecules in the human organism are genotoxic, i.e., potentially carcinogenic.) Adduct formation is only one step in the complex process of carcinogenesis. Other cellular events, such as DNA repair, promotion and progression, undoubtedly modify the risk of developing a disease such as cancer. Thus, at the present time, the measurement of adducts should be seen as being confined only to monitoring exposure to chemicals. This is discussed more fully in the article “Genotoxic chemicals” later in this chapter.

Biological monitoring of effects is performed through the determination of indicators of effect, that is, those that can identify early and reversible alterations. This approach may provide an indirect estimate of the amount of chemical bound to the sites of action and offers the possibility of assessing functional alterations in the critical organ in an early phase.

Unfortunately, we can list only a few examples of the application of this approach, namely, (1) the inhibition of pseudocholinesterase by organophosphate insecticides, (2) the inhibition of δ-aminolaevulinic acid dehydratase (ALA-D) by inorganic lead, and (3) the increased urinary excretion of D-glucaric acid and porphyrins in subjects exposed to chemicals inducing microsomal enzymes and/or to porphyrogenic agents (e.g., chlorinated hydrocarbons).

Advantages and Limitations of Biological Monitoring

For substances that exert their toxicity after entering the human organism, biological monitoring provides a more focused and targeted assessment of health risk than does environmental monitoring. A biological parameter reflecting the internal dose brings us one step closer to understanding systemic adverse effects than does any environmental measurement.

Biological monitoring offers numerous advantages over environmental monitoring and in particular permits assessment of:

  • exposure over an extended time period
  • exposure as a result of worker mobility in the working environment
  • absorption of a substance via various routes, including the skin
  • overall exposure as a result of different sources of pollution, both occupational and non-occupational
  • the quantity of a substance absorbed by the subject depending on factors other than the degree of exposure, such as the physical effort required by the job, ventilation, or climate
  • the quantity of a substance absorbed by a subject depending on individual factors that can influence the toxicokinetics of the toxic agent in the organism; for example, age, sex, genetic features, or functional state of the organs where the toxic substance undergoes biotransformation and elimination.

 

In spite of these advantages, biological monitoring still suffers today from considerable limitations, the most significant of which are the following:

  • The number of possible substances which can be monitored biologically is at present still rather small.
  • In the case of acute exposure, biological monitoring supplies useful information only for exposure to substances that are rapidly metabolized, for example, aromatic solvents.
  • The significance of biological indicators has not been clearly defined; for example, it is not always known whether the levels of a substance measured on biological material reflect current or cumulative exposure (e.g., urinary cadmium and mercury).
  • Generally, biological indicators of internal dose allow assessment of the degree of exposure, but do not furnish data that measure the actual amount present in the critical organ.
  • Often there is no knowledge of possible interference in the metabolism of the substances being monitored by other exogenous substances to which the organism is simultaneously exposed in the working and general environment.
  • There is not always sufficient knowledge on the relationships existing between the levels of environmental exposure and the levels of the biological indicators on the one hand, and between the levels of the biological indicators and possible health effects on the other.
  • The number of biological indicators for which biological exposure indices (BEIs) exist at present is rather limited. Follow-up information is needed to determine whether a substance, presently identified as not capable of causing an adverse effect, may at a later time be shown to be harmful.
  • A BEI usually represents a level of an agent that is most likely to be observed in a specimen collected from a healthy worker who has been exposed to the chemical to the same extent as a worker with an inhalation exposure to the TLV (threshold limit value) time-weighted average (TWA).

 

Information Required for the Development of Methods and Criteria for Selecting Biological Tests

Programming biological monitoring requires the following basic conditions:

  • knowledge of the metabolism of an exogenous substance in the human organism (toxicokinetics)
  • knowledge of the alterations that occur in the critical organ (toxicodynamics)
  • existence of indicators
  • existence of sufficiently accurate analytical methods
  • possibility of using readily obtainable biological samples on which the indicators can be measured
  • existence of dose-effect and dose-response relationships and knowledge of these relationships
  • predictive validity of the indicators.

 

In this context, the validity of a test is the degree to which the parameter under consideration predicts the situation as it really is (i.e., as more accurate measuring instruments would show it to be). Validity is determined by the combination of two properties: sensitivity and specificity. If a test possesses a high sensitivity, this means that it will give few false negatives; if it possesses high specificity, it will give few false positives (CEC 1985-1989).
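In code, the two properties reduce to simple proportions over a validation sample; a sketch in which the worker counts are invented for illustration:

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """High sensitivity = few false negatives among truly affected subjects."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """High specificity = few false positives among unaffected subjects."""
    return true_neg / (true_neg + false_pos)

# Hypothetical comparison of a biological test against a more accurate
# reference method in 100 workers (20 truly affected, 80 unaffected):
print(sensitivity(true_pos=18, false_neg=2))   # 0.9
print(specificity(true_neg=72, false_pos=8))   # 0.9
```

A valid test in the sense defined above is one for which both proportions are close to 1.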

Relationship between exposure, internal dose and effects

The study of the concentration of a substance in the working environment and the simultaneous determination of the indicators of dose and effect in exposed subjects allows information to be obtained on the relationship between occupational exposure and the concentration of the substance in biological samples, and between the latter and the early effects of exposure.

Knowledge of the relationships between the dose of a substance and the effect it produces is an essential requirement if a programme of biological monitoring is to be put into effect. The evaluation of this dose-effect relationship is based on the analysis of the degree of association existing between the indicator of dose and the indicator of effect and on the study of the quantitative variations of the indicator of effect with every variation of indicator of dose. (See also the chapter Toxicology, for further discussion of dose-related relationships).

With the study of the dose-effect relationship it is possible to identify the concentration of the toxic substance at which the indicator of effect exceeds the values currently considered not harmful. Furthermore, in this way it may also be possible to examine what the no-effect level might be.

Since not all the individuals of a group react in the same manner, it is necessary to examine the dose-response relationship, in other words, to study how the group responds to exposure by evaluating the appearance of the effect compared to the internal dose. The term response denotes the percentage of subjects in the group who show a specific quantitative variation of an effect indicator at each dose level.
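Computed over a group, the response at each dose level is simply the share of subjects whose effect indicator exceeds a chosen cut-off; a sketch with invented readings (the dose labels, cut-off and values are illustrative, not from the text):

```python
from collections import defaultdict

def response_percentages(subjects, cutoff):
    """Percentage of subjects at each internal-dose level whose effect
    indicator exceeds the cut-off (the 'response' in a dose-response study)."""
    tally = defaultdict(lambda: [0, 0])   # dose level -> [responders, total]
    for dose_level, effect_value in subjects:
        tally[dose_level][1] += 1
        if effect_value > cutoff:
            tally[dose_level][0] += 1
    return {d: 100.0 * r / n for d, (r, n) in tally.items()}

# Hypothetical effect-indicator readings at two internal-dose levels:
readings = [("low", 0.8), ("low", 0.9), ("low", 1.1),
            ("high", 0.9), ("high", 1.4), ("high", 1.6)]
print(response_percentages(readings, cutoff=1.0))
```

Here the response rises from roughly a third of subjects at the low dose to two thirds at the high dose, which is the kind of gradient a dose-response analysis looks for.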

Practical Applications of Biological Monitoring

The practical application of a biological monitoring programme requires information on (1) the behaviour of the indicators used in relation to exposure, especially those relating to degree, continuity and duration of exposure, (2) the time interval between end of exposure and measurement of the indicators, and (3) all physiological and pathological factors other than exposure that can alter the indicator levels.

In the following articles the behaviour of a number of biological indicators of dose and effect that are used for monitoring occupational exposure to substances widely used in industry will be presented. The practical usefulness and limits will be assessed for each substance, with particular emphasis on time of sampling and interfering factors. Such considerations will be helpful in establishing criteria for selecting a biological test.

Time of sampling

In selecting the time of sampling, the different kinetic aspects of the chemical must be kept in mind; in particular it is essential to know how the substance is absorbed via the lung, the gastrointestinal tract and the skin, subsequently distributed to the different compartments of the body, biotransformed, and finally eliminated. It is also important to know whether the chemical may accumulate in the body.

With respect to exposure to organic substances, the collection time of biological samples becomes all the more important in view of the different velocity of the metabolic processes involved and consequently the more or less rapid excretion of the absorbed dose.

Interfering Factors

Correct use of biological indicators requires a thorough knowledge of those factors which, although independent of exposure, may nevertheless affect the biological indicator levels. The following are the most important types of interfering factors (Alessio, Berlin and Foà 1987).

Physiological factors including diet, sex and age, for example, can affect results. Consumption of fish and crustaceans may increase the levels of urinary arsenic and blood mercury. In female subjects with the same lead blood levels as males, the erythrocyte protoporphyrin values are significantly higher compared to those of male subjects. The levels of urinary cadmium increase with age.

Among the personal habits that can distort indicator levels, smoking and alcohol consumption are particularly important. Smoking may cause direct absorption of substances naturally present in tobacco leaves (e.g., cadmium), or of pollutants present in the working environment that have been deposited on the cigarettes (e.g., lead), or of combustion products (e.g., carbon monoxide).

Alcohol consumption may influence biological indicator levels, since substances such as lead are naturally present in alcoholic beverages. Heavy drinkers, for example, show higher blood lead levels than control subjects. Ingestion of alcohol can interfere with the biotransformation and elimination of toxic industrial compounds: with a single dose, alcohol can inhibit the metabolism of many solvents, for example, trichloroethylene, xylene, styrene and toluene, because these solvents compete with ethyl alcohol for the enzymes essential for the breakdown of both. Regular alcohol ingestion can affect solvent metabolism in the opposite way, accelerating it, presumably through induction of the microsomal oxidizing system. Since ethanol is the most important substance capable of inducing such metabolic interference, it is advisable to determine indicators of exposure for solvents only on days when alcohol has not been consumed.

Less information is available on the possible effects of drugs on the levels of biological indicators. It has been demonstrated that aspirin can interfere with the biological transformation of xylene to methylhippuric acid, and phenylsalicylate, a drug widely used as an analgesic, can significantly increase the levels of urinary phenols. The consumption of aluminium-based antacid preparations can give rise to increased levels of aluminium in plasma and urine.

Marked differences have been observed in different ethnic groups in the metabolism of widely used solvents such as toluene, xylene, trichloroethylene, tetrachloroethylene, and methylchloroform.

Acquired pathological states can influence the levels of biological indicators. The critical organ can behave anomalously with respect to biological monitoring tests because of the specific action of the toxic agent as well as for other reasons. An example of situations of the first type is the behaviour of urinary cadmium levels: when tubular disease due to cadmium sets in, urinary excretion increases markedly and the levels of the test no longer reflect the degree of exposure. An example of the second type of situation is the increase in erythrocyte protoporphyrin levels observed in iron-deficient subjects who show no abnormal lead absorption.

Physiological changes in the biological media—urine, for example—on which determinations of the biological indicators are based, can influence the test values. For practical purposes, only spot urinary samples can be obtained from individuals during work, and the varying density of these samples means that the levels of the indicator can fluctuate widely in the course of a single day.

In order to overcome this difficulty, it is advisable to eliminate over-diluted or over-concentrated samples according to selected specific gravity or creatinine values. In particular, urine with a specific gravity below 1.010 or higher than 1.030, or with a creatinine concentration lower than 0.5 g/l or greater than 3.0 g/l, should be discarded. Several authors also suggest adjusting the values of the indicators according to specific gravity or expressing the values according to urinary creatinine content.
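The screening rule and the creatinine adjustment can be sketched directly (the cut-offs follow the criteria just stated; the function names and sample values are ours):

```python
def urine_sample_acceptable(specific_gravity: float, creatinine_g_l: float) -> bool:
    """Discard over-diluted or over-concentrated spot urine samples,
    using specific-gravity and creatinine acceptance windows."""
    return 1.010 <= specific_gravity <= 1.030 and 0.5 <= creatinine_g_l <= 3.0

def per_gram_creatinine(analyte_ug_l: float, creatinine_g_l: float) -> float:
    """Express a urinary indicator in micrograms per gram of creatinine."""
    return analyte_ug_l / creatinine_g_l

print(urine_sample_acceptable(1.004, 0.4))   # False: too dilute on both criteria
print(per_gram_creatinine(30.0, 1.5))        # 20.0 (μg/g creatinine)
```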

Pathological changes in the biological media can also considerably influence the values of the biological indicators. For example, in anaemic subjects exposed to metals (mercury, cadmium, lead, etc.) the blood levels of the metal may be lower than would be expected on the basis of exposure; this is due to the low level of red blood cells that transport the toxic metal in the blood circulation.

Therefore, when determinations of toxic substances or metabolites bound to red blood cells are made on whole blood, it is always advisable to determine the haematocrit, which gives a measure of the percentage of blood cells in whole blood.
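As an illustration only (the 45% reference haematocrit and the flagging cut-off below are our assumptions, not values from the text), a result handler might carry the haematocrit alongside the metal level so that anaemic subjects are interpreted with caution:

```python
def blood_metal_result(metal_ug_100ml: float, haematocrit_pct: float,
                       reference_hct_pct: float = 45.0) -> dict:
    """Pair a whole-blood metal level with a low-haematocrit flag, since
    anaemia can depress levels of metals carried mainly by red cells."""
    return {
        "metal_ug_100ml": metal_ug_100ml,
        "haematocrit_pct": haematocrit_pct,
        # Illustrative cut-off: flag haematocrit more than 20% below reference.
        "low_haematocrit": haematocrit_pct < 0.8 * reference_hct_pct,
    }

# A lead level of 20 μg/100 ml in an anaemic subject (haematocrit 30%):
print(blood_metal_result(20.0, haematocrit_pct=30.0))
```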

Multiple exposure to toxic substances present in the workplace

In the case of combined exposure to more than one toxic substance present at the workplace, metabolic interferences may occur that can alter the behaviour of the biological indicators and thus create serious problems in interpretation. In human studies, interferences have been demonstrated, for example, in combined exposure to toluene and xylene, xylene and ethylbenzene, toluene and benzene, hexane and methyl ethyl ketone, tetrachloroethylene and trichloroethylene.

In particular, it should be noted that when biotransformation of a solvent is inhibited, the urinary excretion of its metabolite is reduced (possible underestimation of risk) whereas the levels of the solvent in blood and expired air increase (possible overestimation of risk).

Thus, in situations in which it is possible to measure simultaneously the substances and their metabolites in order to interpret the degree of inhibitory interference, it would be useful to check whether the levels of the urinary metabolites are lower than expected and at the same time whether the concentration of the solvents in blood and/or expired air is higher.

Metabolic interferences have been described for exposures where the single substances are present in levels close to and sometimes below the currently accepted limit values. Interferences, however, do not usually occur when exposure to each substance present in the workplace is low.

Practical Use of Biological Indicators

Biological indicators can be used for various purposes in occupational health practice, in particular for (1) periodic control of individual workers, (2) analysis of the exposure of a group of workers, and (3) epidemiological assessments. The tests used should possess the features of precision, accuracy, good sensitivity, and specificity in order to minimize the possible number of false classifications.

Reference values and reference groups

A reference value is the level of a biological indicator in the general population not occupationally exposed to the toxic substance under study. It is necessary to refer to these values in order to compare the data obtained through biological monitoring programmes in a population which is presumed to be exposed. Reference values should not be confused with limit values, which generally are the legal limits or guidelines for occupational and environmental exposure (Alessio et al. 1992).

When it is necessary to compare the results of group analyses, the distribution of the values in the reference group and in the group under study must be known, because only then can a statistical comparison be made. In these cases, it is essential to match the general population (reference group) with the exposed group for characteristics such as sex, age, lifestyle and eating habits.

To obtain reliable reference values one must make sure that the subjects making up the reference group have never been exposed to the toxic substances, either occupationally or due to particular conditions of environmental pollution.

In assessing exposure to toxic substances one must be careful not to include subjects who, although not directly exposed to the toxic substance in question, work in the same workplace, since if these subjects are, in fact, indirectly exposed, the exposure of the group may be in consequence underestimated.

Another practice to avoid, although it is still widespread, is the use for reference purposes of values reported in the literature that are derived from case lists from other countries and may often have been collected in regions where different environmental pollution situations exist.

Periodic monitoring of individual workers

Periodic monitoring of individual workers is mandatory when the levels of the toxic substance in the atmosphere of the working environment approach the limit value. Where possible, it is advisable to simultaneously check an indicator of exposure and an indicator of effect. The data thus obtained should be compared with the reference values and the limit values suggested for the substance under study (ACGIH 1993).

Analysis of a group of workers

Analysis of a group becomes mandatory when the results of the biological indicators used can be markedly influenced by factors independent of exposure (diet, concentration or dilution of urine, etc.) and for which a wide range of “normal” values exists.

In order to ensure that the group study will furnish useful results, the group must be sufficiently numerous and homogeneous as regards exposure, sex, and, in the case of some toxic agents, work seniority. The more the exposure levels are constant over time, the more reliable the data will be. An investigation carried out in a workplace where the workers frequently change department or job will have little value. For a correct assessment of a group study it is not sufficient to express the data only as mean values and range. The frequency distribution of the values of the biological indicator in question must also be taken into account.
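A group summary that reports the frequency distribution alongside the mean and range might look like this (the indicator values and bin width are invented for illustration):

```python
from statistics import mean

def summarize_indicator(values, bin_width):
    """Mean, range, and a binned frequency distribution of indicator values."""
    bins = {}
    for v in values:
        lower_edge = int(v // bin_width) * bin_width
        bins[lower_edge] = bins.get(lower_edge, 0) + 1
    return {
        "mean": mean(values),
        "range": (min(values), max(values)),
        "distribution": dict(sorted(bins.items())),  # bin lower edge -> count
    }

# Hypothetical urinary-indicator values (μg/g creatinine) for seven workers:
print(summarize_indicator([12, 14, 15, 15, 18, 22, 35], bin_width=10))
```

Mean and range alone would hide the single outlying value in the 30-40 bin, which the distribution makes visible.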

Epidemiological assessments

Data obtained from biological monitoring of groups of workers can also be used in cross-sectional or prospective epidemiological studies.

Cross-sectional studies can be used to compare the situations existing in different departments of the factory or in different industries in order to set up risk maps for manufacturing processes. A difficulty that may be encountered in this type of application depends on the fact that inter-laboratory quality controls are not yet sufficiently widespread; thus it cannot be guaranteed that different laboratories will produce comparable results.

Prospective studies serve to assess the behaviour over time of the exposure levels so as to check, for example, the efficacy of environmental improvements or to correlate the behaviour of biological indicators over the years with the health status of the subjects being monitored. The results of such long-term studies are very useful in solving problems involving changes over time. At present, biological monitoring is mainly used as a suitable procedure for assessing whether current exposure is judged to be “safe,” but it is as yet not valid for assessing situations over time. A given level of exposure considered safe today may no longer be regarded as such at some point in the future.

Ethical Aspects

Some ethical considerations arise in connection with the use of biological monitoring as a tool to assess potential toxicity. One goal of such monitoring is to assemble enough information to decide what level of any given effect constitutes an undesirable effect; in the absence of sufficient data, any perturbation will be considered undesirable. The regulatory and legal implications of this type of information need to be evaluated. Therefore, we should seek societal discussion and consensus as to the ways in which biological indicators should best be used. In other words, education is required of workers, employers, communities and regulatory authorities as to the meaning of the results obtained by biological monitoring so that no one is either unduly alarmed or complacent.

There must be appropriate communication with the individual upon whom the test has been performed concerning the results and their interpretation. Further, whether or not the use of some indicators is experimental should be clearly conveyed to all participants.

The International Code of Ethics for Occupational Health Professionals, issued by the International Commission on Occupational Health in 1992, stated that “biological tests and other investigations must be chosen from the point of view of their validity for protection of the health of the worker concerned, with due regard to their sensitivity, their specificity and their predictive value”. Use must not be made of tests “which are not reliable or which do not have a sufficient predictive value in relation to the requirements of the work assignment”. (See the chapter Ethical Issues for further discussion and the text of the Code.)

Trends in Regulation and Application

Biological monitoring can be carried out for only a limited number of environmental pollutants on account of the limited availability of appropriate reference data. This imposes important limitations on the use of biological monitoring in evaluating exposure.

The World Health Organization (WHO), for example, has proposed health-based reference values for lead, mercury, and cadmium only. These values are defined as levels in blood and urine not linked to any detectable adverse effect. The American Conference of Governmental Industrial Hygienists (ACGIH) has established biological exposure indices (BEIs) for about 26 compounds; BEIs are defined as “values for determinants which are indicators of the degree of integrated exposure to industrial chemicals” (ACGIH 1995).

 


Monday, 28 February 2011 20:15

Metals and organometallic compounds

Toxic metals and organometallic compounds such as aluminium, antimony, inorganic arsenic, beryllium, cadmium, chromium, cobalt, lead, alkyl lead, metallic mercury and its salts, organic mercury compounds, nickel, selenium and vanadium have all been recognized for some time as posing potential health risks to exposed persons. In some cases, the relationships between internal dose and the resulting effect or response in occupationally exposed workers have been studied epidemiologically, permitting the proposal of health-based biological limit values (see table 1).

Table 1. Metals: Reference values and biological limit values proposed by the American Conference of Governmental Industrial Hygienists (ACGIH), Deutsche Forschungsgemeinschaft (DFG), and Lauwerys and Hoet (L and H)

| Metal | Sample | Reference values*1 | ACGIH (BEI) limit2 | DFG (BAT) limit3 | L and H limit4 (TMPC) |
|---|---|---|---|---|---|
| Aluminium | Serum/plasma | <1 μg/100 ml | | | |
| | Urine | <30 μg/g | | 200 μg/l (end of shift) | 150 μg/g (end of shift) |
| Antimony | Urine | <1 μg/g | | | 35 μg/g (end of shift) |
| Arsenic | Urine (sum of inorganic arsenic and methylated metabolites) | <10 μg/g | 50 μg/g (end of workweek) | | 50 μg/g (if TWA: 0.05 mg/m3); 30 μg/g (if TWA: 0.01 mg/m3) (end of shift) |
| Beryllium | Urine | <2 μg/g | | | |
| Cadmium | Blood | <0.5 μg/100 ml | 0.5 μg/100 ml | 1.5 μg/100 ml | 0.5 μg/100 ml |
| | Urine | <2 μg/g | 5 μg/g | 15 μg/l | 5 μg/g |
| Chromium (soluble compounds) | Serum/plasma | <0.05 μg/100 ml | | | |
| | Urine | <5 μg/g | 30 μg/g (end of shift, end of workweek); 10 μg/g (increase during shift) | | 30 μg/g (end of shift) |
| Cobalt | Serum/plasma | <0.05 μg/100 ml | | | |
| | Blood | <0.2 μg/100 ml | 0.1 μg/100 ml (end of shift, end of workweek) | 0.5 μg/100 ml (EKA)** | |
| | Urine | <2 μg/g | 15 μg/l (end of shift, end of workweek) | 60 μg/l (EKA)** | 30 μg/g (end of shift, end of workweek) |
| Lead | Blood (lead) | <25 μg/100 ml | 30 μg/100 ml (not critical) | female <45 years: 30 μg/100 ml; male: 70 μg/100 ml | 40 μg/100 ml |
| | ZPP in blood | <40 μg/100 ml blood; <2.5 μg/g Hb | | | 40 μg/100 ml blood or 3 μg/g Hb |
| | Urine (lead) | <50 μg/g | | | 50 μg/g |
| | ALA urine | <4.5 mg/g | | female <45 years: 6 mg/l; male: 15 mg/l | 5 mg/g |
| Manganese | Blood | <1 μg/100 ml | | | |
| | Urine | <3 μg/g | | | |
| Mercury inorganic | Blood | <1 μg/100 ml | 1.5 μg/100 ml (end of shift, end of workweek) | 5 μg/100 ml | 2 μg/100 ml (end of shift) |
| | Urine | <5 μg/g | 35 μg/g (preshift) | 200 μg/l | 50 μg/g (end of shift) |
| Nickel (soluble compounds) | Serum/plasma | <0.05 μg/100 ml | | | |
| | Urine | <2 μg/g | | 45 μg/l (EKA)** | 30 μg/g |
| Selenium | Serum/plasma | <15 μg/100 ml | | | |
| | Urine | <25 μg/g | | | |
| Vanadium | Serum/plasma | <0.2 μg/100 ml | | | |
| | Blood | <0.1 μg/100 ml | | | |
| | Urine | <1 μg/g | | 70 μg/g creatinine | 50 μg/g |

* Urine values are per gram of creatinine.
** EKA = Exposure equivalents for carcinogenic materials.
1 Taken with some modifications from Lauwerys and Hoet 1993.
2 From ACGIH 1996-97.
3 From DFG 1996.
4 Tentative maximum permissible concentrations (TMPCs) taken from Lauwerys and Hoet 1993.

One problem in seeking precise and accurate measurements of metals in biological materials is that the metallic substances of interest are often present in the media at very low levels. When biological monitoring consists of sampling and analyzing urine, as is often the case, it is usually performed on “spot” samples; correction of the results for the dilution of urine is thus usually advisable. Expression of the results per gram of creatinine is the method of standardization most frequently used. Analyses performed on too dilute or too concentrated urine samples are not reliable and should be repeated.
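The creatinine standardization described above is straightforward arithmetic. A minimal sketch follows; the 0.3 to 3.0 g/l acceptance window for creatinine is a common laboratory convention used here as an assumption, not a value taken from this article, and the function name is illustrative:

```python
def creatinine_corrected(analyte_ug_per_l, creatinine_g_per_l):
    """Express a spot-urine result per gram of creatinine.

    The 0.3-3.0 g/l acceptance window for creatinine is a common
    laboratory convention (an assumption here); samples outside it
    are too dilute or too concentrated and should be repeated.
    """
    if not 0.3 <= creatinine_g_per_l <= 3.0:
        raise ValueError("urine too dilute or too concentrated; repeat sampling")
    return analyte_ug_per_l / creatinine_g_per_l

# Example: 18 ug/l of a metal in urine containing 1.2 g/l creatinine
print(creatinine_corrected(18.0, 1.2))  # 15.0 (ug/g creatinine)
```

A sample with, say, 0.1 g/l creatinine would be rejected rather than corrected, matching the advice above to repeat unreliable analyses.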

Aluminium

In industry, workers may be exposed to inorganic aluminium compounds by inhalation and possibly also by ingestion of dust containing aluminium. Aluminium is poorly absorbed by the oral route, but its absorption is increased by simultaneous intake of citrates. The rate of absorption of aluminium deposited in the lung is unknown; the bioavailability is probably dependent on the physicochemical characteristics of the particle. Urine is the main route of excretion of the absorbed aluminium. The concentration of aluminium in serum and in urine is determined by both the intensity of a recent exposure and the aluminium body burden. In persons non-occupationally exposed, aluminium concentration in serum is usually below 1 μg/100 ml and in urine rarely exceeds 30 μg/g creatinine. In subjects with normal renal function, urinary excretion of aluminium is a more sensitive indicator of aluminium exposure than its concentration in serum/plasma.

Data on welders suggest that the kinetics of aluminium excretion in urine follows a two-step mechanism, the first step having a biological half-life of about eight hours. In workers who have been exposed for several years, some accumulation of the metal in the body effectively occurs, and aluminium concentrations in serum and in urine are then influenced by the aluminium body burden as well. Aluminium is stored in several compartments of the body and excreted from these compartments at different rates over many years. High accumulation of aluminium in the body (bone, liver, brain) has also been found in patients suffering from renal insufficiency. Patients undergoing dialysis are at risk of bone toxicity and/or encephalopathy when their serum aluminium concentration chronically exceeds 20 μg/100 ml, but it is possible to detect signs of toxicity at even lower concentrations. The Commission of the European Communities has recommended that, in order to prevent aluminium toxicity, the concentration of aluminium in plasma should never exceed 20 μg/100 ml; a level above 10 μg/100 ml should lead to an increased monitoring frequency and health surveillance, and a concentration exceeding 6 μg/100 ml should be considered as evidence of an excessive build-up of the aluminium body burden.
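The fast first step mentioned above follows ordinary first-order kinetics, so the fraction of that pool remaining after a given time can be computed directly from the eight-hour half-life. A sketch (the function name is illustrative):

```python
def fraction_remaining(hours, half_life_hours):
    """First-order elimination: fraction of the initial amount still
    present after `hours`, given the biological half-life."""
    return 0.5 ** (hours / half_life_hours)

# Fast phase of urinary aluminium elimination (half-life about 8 h,
# per the welder data cited in the text): after 24 h only 1/8 remains.
print(fraction_remaining(24, 8.0))  # 0.125
```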

Antimony

Inorganic antimony can enter the organism by ingestion or inhalation, but the rate of absorption is unknown. Absorbed pentavalent compounds are primarily excreted with urine and trivalent compounds via faeces. Retention of some antimony compounds is possible after long-term exposure. Normal concentrations of antimony in serum and urine are probably below 0.1 μg/100 ml and 1 μg/g creatinine, respectively.

A preliminary study on workers exposed to pentavalent antimony indicates that a time-weighted average exposure to 0.5 mg/m3 would lead to an increase in urinary antimony concentration of 35 μg/g creatinine during the shift.

Inorganic Arsenic

Inorganic arsenic can enter the organism via the gastrointestinal and respiratory tracts. The absorbed arsenic is mainly eliminated through the kidney either unchanged or after methylation. Inorganic arsenic is also excreted in the bile as a glutathione complex.

Following a single oral exposure to a low dose of arsenate, 25 and 45% of the administered dose are excreted in urine within one and four days, respectively.

Following exposure to inorganic trivalent or pentavalent arsenic, the urinary excretion consists of 10 to 20% inorganic arsenic, 10 to 20% monomethylarsonic acid, and 60 to 80% cacodylic acid. Following occupational exposure to inorganic arsenic, the proportion of the arsenical species in urine depends on the time of sampling.

The organoarsenicals present in marine organisms are also easily absorbed by the gastrointestinal tract but are excreted for the most part unchanged.

Long-term toxic effects of arsenic (including the toxic effects on genes) result mainly from exposure to inorganic arsenic. Therefore, biological monitoring aims at assessing exposure to inorganic arsenic compounds. For this purpose, the specific determination of inorganic arsenic (Asi), monomethylarsonic acid (MMA), and cacodylic acid (DMA) in urine is the method of choice. However, since seafood consumption might still influence the excretion rate of DMA, the workers being tested should refrain from eating seafood during the 48 hours prior to urine collection.

In persons non-occupationally exposed to inorganic arsenic and who have not recently consumed a marine organism, the sum of these three arsenical species does not usually exceed 10 μg/g urinary creatinine. Higher values can be found in geographical areas where the drinking water contains significant amounts of arsenic.

It has been estimated that in the absence of seafood consumption, a time-weighted average exposure to 50 and 200 μg/m3 inorganic arsenic leads to mean urinary concentrations of the sum of the metabolites (Asi, MMA, DMA) in post-shift urine samples of 54 and 88 μg/g creatinine, respectively.

In the case of exposure to less soluble inorganic arsenic compounds (e.g., gallium arsenide), the determination of arsenic in urine will reflect the amount absorbed but not the total dose delivered to the body (lung, gastrointestinal tract).

Arsenic in hair is a good indicator of the amount of inorganic arsenic absorbed during the growth period of the hair. Organic arsenic of marine origin does not appear to be taken up in hair to the same degree as inorganic arsenic. Determination of arsenic concentration along the length of the hair may provide valuable information concerning the time of exposure and the length of the exposure period. However, the determination of arsenic in hair is not recommended when the ambient air is contaminated by arsenic, as it will not be possible to distinguish between endogenous arsenic and arsenic externally deposited on the hair. Arsenic levels in hair are usually below 1 mg/kg. Arsenic in nails has the same significance as arsenic in hair.

As with urine levels, blood arsenic levels may reflect the amount of arsenic recently absorbed, but the relation between the intensity of arsenic exposure and its concentration in blood has not yet been assessed.

Beryllium

Inhalation is the primary route of beryllium uptake for occupationally exposed persons. Long-term exposure can result in the storage of appreciable amounts of beryllium in lung tissues and in the skeleton, the ultimate site of storage. Elimination of absorbed beryllium occurs mainly via urine and only to a minor degree in the faeces.

Beryllium levels can be determined in blood and urine, but at present these analyses can be used only as qualitative tests to confirm exposure to the metal, since it is not known to what extent the concentrations of beryllium in blood and urine may be influenced by recent exposure and by the amount already stored in the body. Furthermore, it is difficult to interpret the limited published data on the excretion of beryllium in exposed workers, because usually the external exposure has not been adequately characterized and the analytical methods have different sensitivities and precision. Normal urinary and serum levels of beryllium are probably below 2 μg/g creatinine and 0.03 μg/100 ml, respectively.

However, the finding of a normal concentration of beryllium in urine is not sufficient evidence to exclude the possibility of past exposure to beryllium. Indeed, an increased urinary excretion of beryllium has not always been found in workers even though they have been exposed to beryllium in the past and have consequently developed pulmonary granulomatosis, a disease characterized by multiple granulomas, that is, nodules of inflammatory tissue, found in the lungs.

Cadmium

In the occupational setting, absorption of cadmium occurs chiefly through inhalation. However, gastrointestinal absorption may significantly contribute to the internal dose of cadmium. One important characteristic of cadmium is its long biological half-life in the body, exceeding 10 years. In tissues, cadmium is mainly bound to metallothionein. In blood, it is mainly bound to red blood cells. In view of the property of cadmium to accumulate, any biological monitoring programme of population groups chronically exposed to cadmium should attempt to evaluate both the current and the integrated exposure.

By means of neutron activation, it is currently possible to carry out in vivo measurements of the amounts of cadmium accumulated in the main sites of storage, the kidneys and the liver. However, these techniques are not used routinely. So far, in the health surveillance of workers in industry or in large-scale studies on the general population, exposure to cadmium has usually been evaluated indirectly by measuring the metal in urine and blood.

The detailed kinetics of the action of cadmium in humans is not yet fully elucidated, but for practical purposes the following conclusions can be formulated regarding the significance of cadmium in blood and urine. In newly exposed workers, the levels of cadmium in blood increase progressively and after four to six months reach a concentration corresponding to the intensity of exposure. In persons with ongoing exposure to cadmium over a long period, the concentration of cadmium in the blood reflects mainly the average intake during recent months. The relative influence of the cadmium body burden on the cadmium level in the blood may be more important in persons who have accumulated a large amount of cadmium and have been removed from exposure. After cessation of exposure, the cadmium level in blood decreases relatively fast, with an initial half-time of two to three months. Depending on the body burden, the level may, however, remain higher than in control subjects.

Several studies in humans and animals have indicated that the level of cadmium in urine can be interpreted as follows: in the absence of acute overexposure to cadmium, and as long as the storage capability of the kidney cortex is not exceeded or cadmium-induced nephropathy has not yet occurred, the level of cadmium in urine increases progressively with the amount of cadmium stored in the kidneys. Under such conditions, which prevail mainly in the general population and in workers moderately exposed to cadmium, there is a significant correlation between urinary cadmium and cadmium in the kidneys. If exposure to cadmium has been excessive, the cadmium-binding sites in the organism become progressively saturated and, despite continuous exposure, the cadmium concentration in the renal cortex levels off.

From this stage on, the absorbed cadmium cannot be further retained in that organ and it is rapidly excreted in the urine. Then at this stage, the concentration of urinary cadmium is influenced by both the body burden and the recent intake. If exposure is continued, some subjects may develop renal damage, which gives rise to a further increase of urinary cadmium as a result of the release of cadmium stored in the kidney and depressed reabsorption of circulating cadmium. However, after an episode of acute exposure, cadmium levels in urine may rapidly and briefly increase without reflecting an increase in the body burden.

Recent studies indicate that metallothionein in urine has the same biological significance as urinary cadmium. Good correlations have been observed between the urinary concentration of metallothionein and that of cadmium, independently of the intensity of exposure and the status of renal function.

The normal levels of cadmium in blood and in urine are usually below 0.5 μg/100 ml and 2 μg/g creatinine, respectively. They are higher in smokers than in nonsmokers. In workers chronically exposed to cadmium, the risk of renal impairment is negligible when urinary cadmium levels never exceed 10 μg/g creatinine. An accumulation of cadmium in the body which would lead to a urinary excretion exceeding this level should be prevented. However, some data suggest that certain renal markers (whose health significance is still unknown) may become abnormal for urinary cadmium values between 3 and 5 μg/g creatinine, so it seems reasonable to propose a lower biological limit value of 5 μg/g creatinine. For blood, a biological limit of 0.5 μg/100 ml has been proposed for long-term exposure. It is possible, however, that in the case of the general population exposed to cadmium via food or tobacco or in the elderly, who normally suffer a decline of renal function, the critical level in the renal cortex may be lower.

Chromium

The toxicity of chromium is attributable chiefly to its hexavalent compounds. The absorption of hexavalent compounds is relatively higher than the absorption of trivalent compounds. Elimination occurs mainly via urine.

In persons non-occupationally exposed to chromium, the concentration of chromium in serum and in urine usually does not exceed 0.05 μg/100 ml and 2 μg/g creatinine, respectively. Recent exposure to soluble hexavalent chromium salts (e.g., in electroplaters and stainless steel welders) can be assessed by monitoring chromium level in urine at the end of the workshift. Studies carried out by several authors suggest the following relation: a TWA exposure of 0.025 or 0.05 mg/m3 hexavalent chromium is associated with an average concentration at the end of the exposure period of 15 or 30 μg/g creatinine, respectively. This relation is valid only on a group basis. Following exposure to 0.025 mg/m3 hexavalent chromium, the lower 95% confidence limit value is approximately 5 μg/g creatinine. Another study among stainless steel welders has found that a urinary chromium concentration on the order of 40 μg/l corresponds to an average exposure to 0.1 mg/m3 chromium trioxide.
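The group-level figures above (a TWA of 0.025 mg/m3 corresponding to 15 μg/g creatinine, and 0.05 mg/m3 to 30 μg/g) amount to a simple proportionality of roughly 600 (μg/g creatinine) per (mg/m3). A sketch, valid only on a group basis and only within the studied exposure range; the function name is illustrative:

```python
def expected_end_shift_cr(twa_mg_per_m3):
    """Group-average end-of-shift urinary chromium (ug/g creatinine)
    implied by the figures quoted in the text (0.025 mg/m3 -> 15 ug/g,
    0.05 mg/m3 -> 30 ug/g), i.e. a proportionality of ~600 (ug/g)/(mg/m3).
    Not valid for individual workers."""
    return 600.0 * twa_mg_per_m3

print(round(expected_end_shift_cr(0.025), 1))  # 15.0
print(round(expected_end_shift_cr(0.05), 1))   # 30.0
```

Note that, as the text states, individual values scatter widely around this line: at 0.025 mg/m3 the lower 95% confidence limit is only about 5 μg/g creatinine.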

Hexavalent chromium readily crosses cell membranes, but once inside the cell, it is reduced to trivalent chromium. The concentration of chromium in erythrocytes might be an indicator of the exposure intensity to hexavalent chromium during the lifetime of the red blood cells, but this does not apply to trivalent chromium.

To what extent monitoring chromium in urine is useful for health risk estimation remains to be assessed.

Cobalt

Cobalt, absorbed by inhalation and to some extent via the oral route, is eliminated mainly with urine, with a biological half-life of a few days. Exposure to soluble cobalt compounds leads to an increase of cobalt concentration in blood and urine.

The concentrations of cobalt in blood and in urine are influenced chiefly by recent exposure. In non-occupationally exposed subjects, urinary cobalt is usually below 2 μg/g creatinine and serum/plasma cobalt below 0.05 μg/100 ml.

For TWA exposures of 0.1 mg/m3 and 0.05 mg/m3, mean urinary levels ranging from about 30 to 75 μg/l and 30 to 40 μg/l, respectively, have been reported (using end-of-shift samples). Sampling time is important as there is a progressive increase in the urinary levels of cobalt during the workweek.

In workers exposed to cobalt oxides, cobalt salts, or cobalt metal powder in a refinery, a TWA of 0.05 mg/m3 has been found to lead to an average cobalt concentration of 33 and 46 μg/g creatinine in the urine collected at the end of the shift on Monday and Friday, respectively.

Lead

Inorganic lead, a cumulative toxin absorbed by the lungs and the gastrointestinal tract, is clearly the metal that has been most extensively studied; thus, of all the metal contaminants, the reliability of methods for assessing recent exposure or body burden by biological methods is greatest for lead.

In a steady-state exposure situation, lead in whole blood is considered to be the best indicator of the concentration of lead in soft tissues and hence of recent exposure. However, the increase of blood lead levels (Pb-B) becomes progressively smaller with increasing levels of lead exposure. When occupational exposure has been prolonged, cessation of exposure is not necessarily associated with a return of Pb-B to a pre-exposure (background) value because of the continuous release of lead from tissue depots. The normal blood and urinary lead levels are generally below 20 μg/100 ml and 50 μg/g creatinine, respectively. These levels may be influenced by the dietary habits and the place of residence of the subjects. The WHO has proposed 40 μg/100 ml as the maximal tolerable individual blood lead concentration for adult male workers, and 30 μg/100 ml for women of child-bearing age. In children, lower blood lead concentrations have been associated with adverse effects on the central nervous system. Lead level in urine increases exponentially with increasing Pb-B and under a steady-state situation is mainly a reflection of recent exposure.

The amount of lead excreted in urine after administration of a chelating agent (e.g., CaEDTA) reflects the mobilizable pool of lead. In control subjects, the amount of lead excreted in urine within 24 hours after intravenous administration of one gram of EDTA usually does not exceed 600 μg. It seems that under constant exposure, chelatable lead values mainly reflect the lead pool in blood and soft tissues, with only a small fraction derived from bones.

An x-ray fluorescence technique has been developed for measuring lead concentration in bones (phalanges, tibia, calcaneus, vertebrae), but presently the limit of detection of the technique restricts its use to occupationally exposed persons.

Determination of lead in hair has been proposed as a method of evaluating the mobilizable pool of lead. However, in occupational settings, it is difficult to distinguish between lead incorporated endogenously into hair and that simply adsorbed on its surface.

The determination of lead concentration in the circumpulpal dentine of deciduous teeth (baby teeth) has been used to estimate exposure to lead during early childhood.

Parameters reflecting the interference of lead with biological processes can also be used for assessing the intensity of exposure to lead. The biological parameters which are currently used are coproporphyrin in urine (COPRO-U), delta-aminolaevulinic acid in urine (ALA-U), erythrocyte protoporphyrin (EP, or zinc protoporphyrin), delta-aminolaevulinic acid dehydratase (ALA-D), and pyrimidine-5’-nucleotidase (P5N) in red blood cells. In steady-state situations, the changes in these parameters are positively (COPRO-U, ALA-U, EP) or negatively (ALA-D, P5N) correlated with lead blood levels. The urinary excretion of COPRO (mostly the III isomer) and ALA starts to increase when the concentration of lead in blood reaches a value of about 40 μg/100 ml. Erythrocyte protoporphyrin starts to increase significantly at levels of lead in blood of about 35 μg/100 ml in males and 25 μg/100 ml in females. After the termination of occupational exposure to lead, the erythrocyte protoporphyrin remains elevated out of proportion to current levels of lead in blood. In this case, the EP level is better correlated with the amount of chelatable lead excreted in urine than with lead in blood.

Slight iron deficiency will also cause an elevated protoporphyrin concentration in red blood cells. The red blood cell enzymes, ALA-D and P5N, are very sensitive to the inhibitory action of lead. Within the range of blood lead levels of 10 to 40 μg/100 ml, there is a close negative correlation between the activity of both enzymes and blood lead.

Alkyl Lead

In some countries, tetraethyllead and tetramethyllead are used as antiknock agents in automobile fuels. Lead in blood is not a good indicator of exposure to tetraalkyllead, whereas lead in urine seems to be useful for evaluating the risk of overexposure.

Manganese

In the occupational setting, manganese enters the body mainly through the lungs; absorption via the gastrointestinal tract is low and probably depends on a homeostatic mechanism. Manganese elimination occurs through the bile, with only small amounts excreted with urine.

The normal concentrations of manganese in urine, blood, and serum or plasma are usually less than 3 μg/g creatinine, 1 μg/100 ml, and 0.1 μg/100 ml, respectively.

It seems that, on an individual basis, neither manganese in blood nor manganese in urine is correlated with external exposure parameters.

There is apparently no direct relation between manganese concentration in biological material and the severity of chronic manganese poisoning. It is possible that, following occupational exposure to manganese, early adverse central nervous system effects might already be detected at biological levels close to normal values.

Metallic Mercury and its Inorganic Salts

Inhalation represents the main route of uptake of metallic mercury. The gastrointestinal absorption of metallic mercury is negligible. Inorganic mercury salts can be absorbed through the lungs (inhalation of inorganic mercury aerosol) as well as the gastrointestinal tract. The cutaneous absorption of metallic mercury and its inorganic salts is possible.

The biological half-life of mercury is of the order of two months in the kidney but is much longer in the central nervous system.

Inorganic mercury is excreted mainly with the faeces and urine. Small quantities are excreted through the salivary, lacrimal and sweat glands. Mercury can also be detected in expired air during the few hours following exposure to mercury vapour. Under chronic exposure conditions there is, at least on a group basis, a relationship between the intensity of recent exposure to mercury vapour and the concentration of mercury in blood or urine. Early investigations, in which static samples were used to monitor general workroom air, showed that an average mercury concentration in air (Hg–air) of 100 μg/m3 corresponds to average mercury levels in blood (Hg–B) and in urine (Hg–U) of 6 μg Hg/100 ml and 200 to 260 μg/l, respectively. More recent observations, particularly those assessing the contribution of the external micro-environment close to the respiratory tract of the workers, indicate that the air (μg/m3)/urine (μg/g creatinine)/blood (μg/100 ml) mercury relationship is approximately 1/1.2/0.045. Several epidemiological studies on workers exposed to mercury vapour have demonstrated that for long-term exposure, the critical effect levels of Hg–U and Hg–B are approximately 50 μg/g creatinine and 2 μg/100 ml, respectively.
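The approximate 1/1.2/0.045 relationship can be used to estimate, on a group basis, the urinary and blood mercury levels corresponding to a given air concentration. A sketch (function and dictionary key names are illustrative):

```python
def hg_from_air(air_ug_per_m3):
    """Apply the approximate group-level ratio quoted in the text:
    air (ug/m3) : urine (ug/g creatinine) : blood (ug/100 ml)
    is about 1 : 1.2 : 0.045."""
    return {
        "urine_ug_per_g": 1.2 * air_ug_per_m3,
        "blood_ug_per_100ml": 0.045 * air_ug_per_m3,
    }

# Around 42 ug/m3 in air, urinary mercury approaches the ~50 ug/g
# critical effect level cited in the text:
levels = hg_from_air(42)
print(round(levels["urine_ug_per_g"], 1))      # 50.4
print(round(levels["blood_ug_per_100ml"], 2))  # 1.89
```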

However, some recent studies seem to indicate that signs of adverse effects on the central nervous system or the kidney can already be observed at a urinary mercury level below 50 μg/g creatinine.

Normal urinary and blood levels are generally below 5 μg/g creatinine and 1 μg/100 ml, respectively. These values can be influenced by fish consumption and the number of mercury amalgam fillings in the teeth.

Organic Mercury Compounds

The organic mercury compounds are easily absorbed by all the routes. In blood, they are to be found mainly in red blood cells (around 90%). A distinction must be made, however, between the short chain alkyl compounds (mainly methylmercury), which are very stable and are resistant to biotransformation, and the aryl or alkoxyalkyl derivatives, which liberate inorganic mercury in vivo. For the latter compounds, the concentration of mercury in blood, as well as in urine, is probably indicative of the exposure intensity.

Under steady-state conditions, mercury in whole blood and in hair correlates with methylmercury body burden and with the risk of signs of methylmercury poisoning. In persons chronically exposed to alkyl mercury, the earliest signs of intoxication (paresthesia, sensory disturbances) may occur when the level of mercury in blood and in hair exceeds 20 μg/100 ml and 50 μg/g, respectively.

Nickel

Nickel is not a cumulative toxin and almost all the amount absorbed is excreted mainly via the urine, with a biological half-life of 17 to 39 hours. In non-occupationally exposed subjects, the urine and plasma concentrations of nickel are usually below 2 μg/g creatinine and 0.05 μg/100 ml, respectively.

The concentrations of nickel in plasma and in urine are good indicators of recent exposure to metallic nickel and its soluble compounds (e.g., during nickel electroplating or nickel battery production). Values within normal ranges usually indicate nonsignificant exposure and increased values are indicative of overexposure.

For workers exposed to soluble nickel compounds, a biological limit value of 30 μg/g creatinine (end of shift) has been tentatively proposed for nickel in urine.

In workers exposed to slightly soluble or insoluble nickel compounds, increased levels in body fluids generally indicate significant absorption or progressive release from the amount stored in the lungs; however, significant amounts of nickel may be deposited in the respiratory tract (nasal cavities, lungs) without any significant elevation of its plasma or urine concentration. Therefore, “normal” values have to be interpreted cautiously and do not necessarily indicate absence of health risk.

Selenium

Selenium is an essential trace element. Soluble selenium compounds seem to be easily absorbed through the lungs and the gastrointestinal tract. Selenium is mainly excreted in urine, but when exposure is very high it can also be excreted in exhaled air as dimethylselenide vapour. Normal selenium concentrations in serum and urine depend on daily intake, which may vary considerably in different parts of the world; they are usually below 15 μg/100 ml and 25 μg/g creatinine, respectively. The concentration of selenium in urine is mainly a reflection of recent exposure. The relationship between the intensity of exposure and selenium concentration in urine has not yet been established.

It seems that the concentration in plasma (or serum) and urine mainly reflects short-term exposure, whereas the selenium content of erythrocytes reflects more long-term exposure.

Measuring selenium in blood or urine gives some information on selenium status. Currently it is more often used to detect a deficiency rather than an overexposure. Since the available data concerning the health risk of long-term exposure to selenium and the relationship between potential health risk and levels in biological media are too limited, no biological threshold value can be proposed.

Vanadium

In industry, vanadium is absorbed mainly via the pulmonary route. Oral absorption seems low (less than 1%). Vanadium is excreted in urine with a biological half-life of about 20 to 40 hours, and to a minor degree in faeces. Urinary vanadium seems to be a good indicator of recent exposure, but the relationship between uptake and vanadium levels in urine has not yet been sufficiently established. It has been suggested that the difference between post-shift and pre-shift urinary concentrations of vanadium permits the assessment of exposure during the workday, whereas urinary vanadium two days after cessation of exposure (Monday morning) would reflect accumulation of the metal in the body. In non-occupationally exposed persons, vanadium concentration in urine is usually below 1 μg/g creatinine. A tentative biological limit value of 50 μg/g creatinine (end of shift) has been proposed for vanadium in urine.

 

Organic Solvents

Introduction

Organic solvents are volatile and generally soluble in body fat (lipophilic), although some of them, e.g., methanol and acetone, are water soluble (hydrophilic) as well. They have been extensively employed not only in industry but also in consumer products such as paints, inks, thinners, degreasers, dry-cleaning agents, spot removers and repellents. Although biological monitoring can be applied to detect health effects (for example, effects on the liver and the kidney) in the health surveillance of workers occupationally exposed to organic solvents, it is better used for exposure monitoring, because this approach is sensitive enough to give warning well before any health effects occur. Screening workers for high sensitivity to solvent toxicity may also contribute to the protection of their health.

Summary of Toxicokinetics

Organic solvents are generally volatile under standard conditions, although the volatility varies from solvent to solvent. Thus, the leading route of exposure in industrial settings is through inhalation. The rate of absorption through the alveolar wall of the lungs is much higher than that through the digestive tract wall, and a lung absorption rate of about 50% is considered typical for many common solvents such as toluene. Some solvents, for example, carbon disulphide and N,N-dimethylformamide in the liquid state, can penetrate intact human skin in amounts large enough to be toxic.

When these solvents are absorbed, a portion is exhaled in the breath without any biotransformation, but the greater part is distributed in organs and tissues rich in lipids as a result of their lipophilicity. Biotransformation takes place primarily in the liver (and also in other organs to a minor extent), and the solvent molecule becomes more hydrophilic, typically by a process of oxidation followed by conjugation, to be excreted via the kidney into the urine as metabolite(s). A small portion may be eliminated unchanged in the urine.

Thus, three biological materials, urine, blood and exhaled breath, are available for solvent exposure monitoring from a practical viewpoint. Another important factor in selecting a biological material is the speed of disappearance of the absorbed substance, for which the biological half-life, the time needed for a substance to fall to one-half of its original concentration, is a quantitative parameter. For example, solvents disappear from exhaled breath much more rapidly than the corresponding metabolites disappear from urine; that is, they have a much shorter half-life. Among urinary metabolites, the biological half-life varies depending on how quickly the parent compound is metabolized, so that the sampling time in relation to exposure is often of critical importance (see below). A third consideration in choosing a biological material is the specificity of the target chemical in relation to the exposure. For example, hippuric acid is a long-used marker of exposure to toluene, but it is not only formed naturally by the body, it can also be derived from non-occupational sources such as some food additives, and it is no longer considered a reliable marker when toluene exposure is low (less than 50 cm3/m3). Generally speaking, urinary metabolites have been most widely used as indicators of exposure to various organic solvents. Solvent in blood is analysed as a qualitative measure of exposure, because it usually remains in the blood for a shorter time and is more reflective of acute exposure, whereas solvent in exhaled breath is difficult to use for estimating average exposure because the concentration in breath declines so rapidly after cessation of exposure. Solvent in urine is a promising candidate as a measure of exposure, but it needs further validation.
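The role of the biological half-life in choosing a sampling medium can be made concrete with the standard first-order relation t = half-life x log2(1/f), the time for a concentration to fall to a fraction f of its starting value. The half-lives below are illustrative assumptions, not values from this article:

```python
import math

def time_to_fraction(half_life_h, fraction):
    """Hours for a substance following first-order elimination to fall
    to `fraction` of its starting concentration:
    t = half-life * log2(1 / fraction)."""
    return half_life_h * math.log2(1.0 / fraction)

# Illustrative half-lives (assumptions): a solvent in breath with a
# 0.5 h half-life falls to 1/16 of its end-of-shift level in two hours ...
print(time_to_fraction(0.5, 1 / 16))  # 2.0
# ... while a urinary metabolite with a 12 h half-life takes two days:
print(time_to_fraction(12, 1 / 16))   # 48.0
```

This is why breath samples must be taken almost immediately after exposure, while urinary metabolites can tolerate sampling at the end of the shift or the workweek.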

Biological Exposure Tests for Organic Solvents

In applying biological monitoring for solvent exposure, sampling time is important, as indicated above. Table 1 shows recommended sampling times for common solvents in the monitoring of everyday occupational exposure. When the solvent itself is to be analysed, attention should be paid to preventing possible loss (e.g., evaporation into room air) as well as contamination (e.g., dissolution from room air into the sample) during sample handling. If samples must be transported to a distant laboratory or stored before analysis, care should be taken to prevent loss. Freezing is recommended for metabolites, whereas refrigeration (but not freezing) in an airtight container without an air space (or, preferably, in a headspace vial) is recommended for analysis of the solvent itself. In chemical analysis, quality control is essential for reliable results (for details, see the article “Quality assurance” in this chapter). In reporting the results, ethics should be respected (see the chapter Ethical Issues elsewhere in the Encyclopaedia).

Table 1. Some examples of target chemicals for biological monitoring and sampling time

| Solvent | Target chemical | Urine/blood | Sampling time1 |
|---|---|---|---|
| Carbon disulphide | 2-Thiothiazolidine-4-carboxylic acid | Urine | Th F |
| N,N-Dimethylformamide | N-Methylformamide | Urine | M Tu W Th F |
| 2-Ethoxyethanol and its acetate | Ethoxyacetic acid | Urine | Th F (end of last workshift) |
| Hexane | 2,5-Hexanedione | Urine | M Tu W Th F |
|  | Hexane | Blood | Confirmation of exposure |
| Methanol | Methanol | Urine | M Tu W Th F |
| Styrene | Mandelic acid | Urine | Th F |
|  | Phenylglyoxylic acid | Urine | Th F |
|  | Styrene | Blood | Confirmation of exposure |
| Toluene | Hippuric acid | Urine | Tu W Th F |
|  | o-Cresol | Urine | Tu W Th F |
|  | Toluene | Blood | Confirmation of exposure |
|  | Toluene | Urine | Tu W Th F |
| Trichloroethylene | Trichloroacetic acid (TCA) | Urine | Th F |
|  | Total trichloro-compounds (sum of TCA and free and conjugated trichloroethanol) | Urine | Th F |
|  | Trichloroethylene | Blood | Confirmation of exposure |
| Xylenes2 | Methylhippuric acids | Urine | Tu W Th F |
|  | Xylenes | Blood | Tu W Th F |

1 End of workshift unless otherwise noted; days of the week indicate preferred sampling days.
2 Three isomers, either separately or in any combination.

Source: Summarized from WHO 1996.

 

A number of analytical procedures are established for many solvents. Methods vary depending on the target chemical, but most recently developed methods use gas chromatography (GC) or high-performance liquid chromatography (HPLC) for separation. Use of an autosampler and a data processor is recommended for good quality control in chemical analysis. When the solvent itself in blood or urine is to be analysed, the headspace technique in GC (headspace GC) is very convenient, especially when the solvent is sufficiently volatile. Table 2 outlines some examples of the methods established for common solvents.
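As a minimal illustration of one routine quality-control practice, a laboratory might check each analytical run against the history of a quality-control sample of known content, accepting the run only if the new result lies within control limits. The data and the acceptance rule below are hypothetical, not taken from any standard.

```python
import statistics

# Hypothetical results (mg/l) from repeated analysis of the same
# quality-control sample in previous, accepted runs.
qc_history = [4.9, 5.1, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9]

mean = statistics.mean(qc_history)
sd = statistics.stdev(qc_history)

def in_control(result, k=2.0):
    """Accept an analytical run if its QC result lies within mean +/- k*SD."""
    return abs(result - mean) <= k * sd

print(in_control(5.1), in_control(6.0))
```

Real laboratory QC schemes add further rules (trend detection, warning vs. action limits), but the principle of judging each run against accumulated control data is the same.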

Table 2. Some examples of analytical methods for biological monitoring of exposure to organic solvents

| Solvent | Target chemical | Blood/urine | Analytical method |
|---|---|---|---|
| Carbon disulphide | 2-Thiothiazolidine-4-carboxylic acid | Urine | High-performance liquid chromatography with ultraviolet detection (UV-HPLC) |
| N,N-Dimethylformamide | N-Methylformamide | Urine | Gas chromatography with flame thermionic detection (FTD-GC) |
| 2-Ethoxyethanol and its acetate | Ethoxyacetic acid | Urine | Extraction, derivatization and gas chromatography with flame ionization detection (FID-GC) |
| Hexane | 2,5-Hexanedione | Urine | Extraction, (hydrolysis) and FID-GC |
|  | Hexane | Blood | Headspace FID-GC |
| Methanol | Methanol | Urine | Headspace FID-GC |
| Styrene | Mandelic acid | Urine | Desalting and UV-HPLC |
|  | Phenylglyoxylic acid | Urine | Desalting and UV-HPLC |
|  | Styrene | Blood | Headspace FID-GC |
| Toluene | Hippuric acid | Urine | Desalting and UV-HPLC |
|  | o-Cresol | Urine | Hydrolysis, extraction and FID-GC |
|  | Toluene | Blood | Headspace FID-GC |
|  | Toluene | Urine | Headspace FID-GC |
| Trichloroethylene | Trichloroacetic acid (TCA) | Urine | Colorimetry, or esterification and gas chromatography with electron capture detection (ECD-GC) |
|  | Total trichloro-compounds (sum of TCA and free and conjugated trichloroethanol) | Urine | Oxidation and colorimetry, or hydrolysis, oxidation, esterification and ECD-GC |
|  | Trichloroethylene | Blood | Headspace ECD-GC |
| Xylenes | Methylhippuric acids (three isomers, either separately or in combination) | Urine | Headspace FID-GC |

Source: Summarized from WHO 1996.

Evaluation

A linear relationship between the exposure indicators (listed in table 2) and the intensity of exposure to the corresponding solvents may be established either through a survey of workers occupationally exposed to solvents or by experimental exposure of human volunteers. Accordingly, the ACGIH (1994) and the DFG (1994), for example, have established the biological exposure index (BEI) and the biological tolerance value (BAT), respectively, as the values in biological samples that are equivalent to the occupational exposure limit for airborne chemicals, that is, the threshold limit value (TLV) and the maximum workplace concentration (MAK), respectively. It is known, however, that the level of the target chemical in samples obtained from non-exposed people may vary, reflecting, for example, local customs (e.g., diet), and that ethnic differences may exist in solvent metabolism. It is therefore desirable to establish limit values through the study of the local population of concern.
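The way such a linear relationship translates an airborne limit into a biological limit value can be sketched as follows. The survey data, the fitted line and the 20 ppm airborne limit are all hypothetical, used only to show the calculation.

```python
# Hypothetical survey data: 8-h TWA air concentration (ppm) vs. end-of-shift
# urinary metabolite level (mg/g creatinine). Purely illustrative numbers.
pairs = [(5, 120), (10, 260), (20, 490), (40, 1010)]

def least_squares(points):
    """Ordinary least-squares intercept and slope."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

a, b = least_squares(pairs)

def biomarker_at(air_ppm):
    """Biomarker level predicted by the fitted line at a given air concentration."""
    return a + b * air_ppm

# A BEI-like value is the biomarker level predicted at the airborne limit,
# here a hypothetical TLV of 20 ppm:
print(round(biomarker_at(20)))
```

In practice such values are set from much larger data sets, with attention to scatter, confounders and the reference population, as the text notes.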

In evaluating the results, non-occupational exposure to the solvent (e.g., via use of solvent-containing consumer products or intentional inhalation) and exposure to chemicals which give rise to the same metabolites (e.g., some food additives) should be carefully excluded. In case there is a wide gap between the intensity of vapour exposure and the biological monitoring results, the difference may indicate the possibility of skin absorption. Cigarette smoking will suppress the metabolism of some solvents (e.g., toluene), whereas acute ethanol intake may suppress methanol metabolism in a competitive manner.

 


Monday, 28 February 2011 20:25

Genotoxic Chemicals


Human biological monitoring uses samples of body fluids or other easily obtainable biological material for the measurement of exposure to specific or nonspecific substances and/or their metabolites or for the measurement of the biological effects of this exposure. Biological monitoring allows one to estimate total individual exposure through different exposure pathways (lungs, skin, gastrointestinal tract) and different sources of exposure (air, diet, lifestyle or occupation). It is also known that in complex exposure situations, which are very often encountered in workplaces, different exposing agents may interact with one another, either enhancing or inhibiting the effects of the individual compounds. And since individuals differ in their genetic constitution, they exhibit variability in their response to chemical exposures. Thus, it may be more reasonable to look for early effects directly in the exposed individuals or groups than to try to predict potential hazards of the complex exposure patterns from data pertaining to single compounds. This is an advantage of genetic biomonitoring for early effects, an approach employing techniques that focus on cytogenetic damage, point mutations, or DNA adducts in surrogate human tissue (see the article “General principles” in this chapter).

What Is Genotoxicity?

Genotoxicity of chemical agents is an intrinsic chemical property, based on the agent’s electrophilic potential to bind to nucleophilic sites in cellular macromolecules such as deoxyribonucleic acid (DNA), the carrier of hereditary information. Genotoxicity is thus toxicity manifested in the genetic material of cells.

The definition of genotoxicity, as discussed in a consensus report (IARC 1992), is broad, and includes both direct and indirect effects on DNA: (1) the induction of mutations (gene, chromosomal, genomic, recombinational) that at the molecular level are similar to events known to be involved in carcinogenesis; (2) indirect surrogate events associated with mutagenesis (e.g., unscheduled DNA synthesis (UDS) and sister chromatid exchange (SCE)); or (3) DNA damage (e.g., the formation of adducts), which may eventually lead to mutations.

Genotoxicity, Mutagenicity And Carcinogenicity

Mutations are permanent hereditary changes in cell lines, occurring either horizontally in the somatic cells or vertically in the germinal (sex) cells of the body. That is, mutations may affect the organism itself through changes in body cells, or they may be passed on to other generations through alteration of the sex cells. Genotoxicity thus precedes mutagenicity, although most genotoxic damage is repaired and never expressed as mutations. Somatic mutations are induced at the cellular level and, in the event that they lead to cell death or malignancy, may become manifest as various disorders of tissues or of the organism itself. Somatic mutations are thought to be related to ageing effects or to the induction of atherosclerotic plaques (see figure 1 and the chapter on Cancer).

Figure 1. Schematic view of the scientific paradigm in genetic toxicology and human health effects


Mutations in the germ cell line may be transferred to the zygote—the fertilized egg cell—and be expressed in the offspring generation (see also the chapter Reproductive System). The most important mutational disorders found in the newborn are induced by malsegregation of chromosomes during gametogenesis (the development of germ cells) and result in severe chromosomal syndromes (e.g., trisomy 21 or Down’s syndrome, and monosomy X or Turner’s syndrome).

The paradigm of genotoxicology from exposure to anticipated effects may be simplified as shown in figure 1.

 

 

The relationship of genotoxicity to carcinogenicity is well supported by various lines of indirect evidence, as shown in figure 2.

Figure 2. The interrelationships of genotoxicity and carcinogenicity    


This correlation provides the basis for applying biomarkers of genotoxicity to be used in human monitoring as indicators of cancer hazard.

Genetic Toxicity in Hazard Identification

The role of genetic changes in carcinogenesis underscores the importance of genetic toxicity testing in the identification of potential carcinogens. Various short-term test methods have been developed which are able to detect some of the endpoints in genotoxicity supposedly relevant in carcinogenesis.

Several extensive surveys have been performed to compare the carcinogenicity of chemicals with results obtained by examining them in short-term tests. The general conclusion has been that, since no single validated test can provide information on all of the above-mentioned genetic end points, it is necessary to test each chemical in more than one assay. Also, the value of short-term tests of genetic toxicity for the prediction of chemical carcinogenicity has been discussed and reviewed repeatedly. On the basis of such reviews, a working group of the International Agency for Research on Cancer (IARC) concluded that most human carcinogens give positive results in routinely used short-term tests such as the Salmonella assay and the chromosome aberration assays (table 1). However, it must be realized that epigenetic carcinogens (such as hormonally active compounds which can increase genotoxic activity without themselves being genotoxic) cannot be detected by short-term tests, which measure only the intrinsic genotoxic activity of a substance.

Table 1. Genotoxicity of chemicals evaluated in Supplements 6 and 7 to the IARC Monographs (1986)

| Carcinogenicity classification | Ratio of evidence for genotoxicity/carcinogenicity | % |
|---|---|---|
| 1: human carcinogens | 24/30 | 80 |
| 2A: probable human carcinogens | 14/20 | 70 |
| 2B: possible human carcinogens | 72/128 | 56 |
| 3: not classifiable | 19/66 | 29 |

 

Genetic Biomonitoring

Genetic monitoring utilizes genetic toxicology methods for biological monitoring of genetic effects or assessment of genotoxic exposure in a group of individuals with defined exposure at a worksite or through environment or lifestyle. Thus, genetic monitoring has the potential for early identification of genotoxic exposures in a group of persons and enables identification of high-risk populations and thus priorities for intervention. Use of predictive biomarkers in an exposed population is warranted to save time (as compared with epidemiological techniques) and to prevent unnecessary end effects, namely cancer (figure 3).

Figure 3. The predictiveness of biomarkers enables preventive actions to be taken to decrease risks to health in human populations


The methods currently used for biomonitoring of genotoxic exposure and early biological effects are listed in table 2. The samples used for biomonitoring must meet several criteria, including the necessity that they be both easily obtainable and comparable with the target tissue.

Table 2. Biomarkers in genetic monitoring of genotoxicity exposure and the most commonly used cell/tissue samples

| Marker of genetic monitoring | Cell/tissue samples |
|---|---|
| Chromosomal aberrations (CA) | Lymphocytes |
| Sister chromatid exchanges (SCE) | Lymphocytes |
| Micronuclei (MN) | Lymphocytes |
| Point mutations (e.g., HPRT gene) | Lymphocytes and other tissues |
| DNA adducts | DNA isolated from cells/tissues |
| Protein adducts | Haemoglobin, albumin |
| DNA strand breaks | DNA isolated from cells/tissues |
| Oncogene activation | DNA or specific proteins isolated |
| Mutations/oncoproteins | Various cells and tissues |
| DNA repair | Isolated cells from blood samples |

 

The types of molecularly recognisable DNA damage include the formation of DNA adducts and reorganization of the DNA sequence. These kinds of damage can be detected by measuring DNA adducts with various techniques, for example, 32P-postlabelling or immunoassays using monoclonal antibodies directed against DNA adducts. Measurement of DNA strand breaks is conventionally carried out using alkaline elution or unwinding assays. Mutations may be detected by sequencing the DNA of a specific gene, for example, the HPRT gene.

Several methodological reports have appeared that discuss the techniques of table 2 in detail (CEC 1987; IARC 1987, 1992, 1993).

Genotoxicity can also be monitored indirectly through the measurement of protein adducts, that is, in haemoglobin instead of DNA, or the monitoring of DNA repair activity. As a measuring strategy, the monitoring activity may be either one time or continuous. In all cases the results must be applied to the development of safe working conditions.

Cytogenetic Biomonitoring

A theoretical and empirical rationale links cancer to chromosome damage. Mutational events altering the activity or expression of growth-factor genes are key steps in carcinogenesis. Many types of cancers have been associated with specific or nonspecific chromosomal aberrations. In several hereditary human diseases, chromosome instability is associated with increased susceptibility to cancer.

Cytogenetic surveillance of people exposed to carcinogenic and/or mutagenic chemicals or radiation can bring to light effects on the genetic material of the individuals concerned. Chromosomal aberration studies of people exposed to ionizing radiation have been applied for biological dosimetry for decades, but well-documented positive results are as yet available only for a limited number of chemical carcinogens.

Microscopically recognizable chromosomal damage includes both structural chromosomal aberrations (CA), in which a gross change in the morphology (shape) of a chromosome has occurred, and sister chromatid exchanges (SCE). SCE is the symmetrical exchange of chromosomal material between two sister chromatids. Micronuclei (MN) can arise either from acentric chromosome fragments or from lagging whole chromosomes. These types of changes are illustrated in figure 4.

Figure 4. Human lymphocyte chromosomes at metaphase, revealing an induced chromosome mutation (arrow pointing to an acentric fragment)


Peripheral blood lymphocytes in humans are suitable cells to be used in surveillance studies because of their easy accessibility and because they can integrate exposure over a relatively long lifespan. Exposure to a variety of chemical mutagens may result in increased frequencies of CAs and/or SCEs in blood lymphocytes of exposed individuals. Also, the extent of damage is roughly correlated with exposure, although this has been shown with only a few chemicals.

When cytogenetic tests on peripheral blood lymphocytes show that the genetic material has been damaged, the results can be used to estimate risk only at the level of the population. An increased frequency of CAs in a population should be considered an indication of increased risk to cancer, but cytogenetic tests do not, as such, allow individual risk prediction of cancer.
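At that population level, a comparison of aberration frequencies between an exposed group and its control group can be sketched with a simple two-proportion test. The cell counts below are hypothetical, and a real study would use matched controls and more refined statistics.

```python
import math

# Hypothetical scoring results: aberrant cells out of cells scored,
# in an exposed group and a matched control group.
exposed_aberrant, exposed_scored = 42, 2000
control_aberrant, control_scored = 20, 2000

def two_proportion_z(x1, n1, x2, n2):
    """Normal-approximation test for a difference in aberration frequencies."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)           # pooled frequency
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(exposed_aberrant, exposed_scored,
                     control_aberrant, control_scored)
print(round(z, 2))  # z above ~1.96 suggests a group-level increase at ~5% significance
```

A significant result of this kind speaks only to the group, not to any individual worker, in line with the point made above.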

Somatic genetic damage as seen through the narrow window of a sample of peripheral blood lymphocytes has, in itself, little or no significance for the health of an individual, since most of the lymphocytes carrying genetic damage die and are replaced.

Problems and their Control in Human Biomonitoring Studies

Rigorous study design is necessary in the application of any human biomonitoring method, since many interindividual factors that are not related to the specific chemical exposure(s) of interest may affect the biological responses studied. Because human biomonitoring studies are tedious and difficult in many respects, careful preplanning is very important. In performing human cytogenetic studies, experimental confirmation of the chromosome-damaging potential of the exposing agent(s) should always be a prerequisite.

In cytogenetic biomonitoring studies, two major types of variations have been documented. The first includes technical factors associated with slide-reading discrepancies and with culture conditions, specifically with the type of medium, temperature, and concentration of chemicals (such as bromodeoxyuridine or cytochalasin-B). Also, sampling times can alter chromosome aberration yields, and possibly also findings of SCE incidence, through changes in subpopulations of T- and B-lymphocytes. In micronucleus analyses, methodological differences (e.g., use of binucleated cells induced by cytochalasin-B) quite clearly affect the scoring results.

The lesions induced in the DNA of lymphocytes by chemical exposure that lead to formation of structural chromosome aberrations, sister chromatid exchange, and micronuclei must persist in vivo until the blood is withdrawn and then in vitro until the cultured lymphocyte begins DNA synthesis. It is, therefore, important to score cells directly after the first division (in the case of chromosome aberrations or micronuclei) or after the second division (sister chromatid exchanges) in order to obtain the best estimate of induced damage.

Scoring constitutes an extremely important element in cytogenetic biomonitoring. Slides must be randomized and coded to avoid scorer bias as far as possible. Consistent scoring criteria, quality control and standardized statistical analyses and reporting should be maintained. The second category of variability is due to conditions associated with the subjects, such as age, sex, medication and infections. Individual variations can also be caused by genetic susceptibility to environmental agents.
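The slide randomization and coding described above can be sketched as follows; the slide identifiers and the coding scheme are hypothetical.

```python
import random

# Hypothetical slide identifiers; in practice these would encode
# subject and exposure group, which the scorer must not see.
slides = ["exp-01", "exp-02", "ctrl-01", "ctrl-02"]

def code_slides(slide_ids, seed=None):
    """Assign anonymous codes in a random order so that scorers cannot
    infer exposure group. Returns (codes in scoring order, decoding key)."""
    rng = random.Random(seed)
    shuffled = slide_ids[:]
    rng.shuffle(shuffled)
    key = {f"S{i + 1:03d}": original for i, original in enumerate(shuffled)}
    return list(key), key

codes, key = code_slides(slides, seed=1)
print(codes)  # sequential anonymous codes: S001, S002, ...
assert sorted(key.values()) == sorted(slides)  # key recovers every slide
```

The decoding key is kept by the study coordinator and applied only after all scoring is complete.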

It is critical to obtain a concurrent control group matched as closely as possible on internal factors such as sex and age, as well as on factors such as smoking status, viral infections and vaccinations, alcohol and drug intake, and exposure to x rays. It is also necessary to obtain qualitative (job category, years exposed) and, where possible, quantitative (e.g., breathing-zone air samples for chemical analysis and specific metabolites) estimates of exposure to the putative genotoxic agent(s) in the workplace. Special attention should be paid to proper statistical treatment of the results.

Relevance of Genetic Biomonitoring to Cancer Risk Assessment

The number of agents repeatedly shown to induce cytogenetic changes in humans is still relatively limited, but most known carcinogens induce damage in lymphocyte chromosomes.

The extent of damage is a function of exposure level, as has been shown to be the case with, for example, vinyl chloride, benzene, ethylene oxide, and alkylating anticancer agents. Even if the cytogenetic end points are not very sensitive or specific as regards the detection of exposures occurring in present-day occupational settings, positive results of such tests have often prompted implementation of hygienic controls even in the absence of direct evidence relating somatic chromosomal damage to adverse health outcomes.

Most experience with application of cytogenetic biomonitoring derives from “high exposure” occupational situations. Very few exposures have been confirmed by several independent studies, and most of these have been performed using chromosomal aberration biomonitoring. The database of the International Agency for Research on Cancer lists in its updated volumes 43–50 of the IARC Monographs a total of 14 occupational carcinogens in groups 1, 2A or 2B, for which there is positive human cytogenetic data available that are in most cases supported by corresponding animal cytogenetics (table 3). This limited database suggests that there is a tendency for carcinogenic chemicals to be clastogenic, and that clastogenicity tends to be associated with known human carcinogens. Quite clearly, however, not all carcinogens induce cytogenetic damage in humans or experimental animals in vivo. Cases in which the animal data are positive and the human findings are negative may represent differences in exposure levels. Also, the complex and long-term human exposures at work may not be comparable with short-term animal experiments.

Table 3. Proven, probable and possible human carcinogens for which occupational exposure exists and for which cytogenetic end points have been measured in both humans and experimental animals

| Agent/exposure | CA (humans) | SCE (humans) | MN (humans) | CA (animals) | SCE (animals) | MN (animals) |
|---|---|---|---|---|---|---|
| GROUP 1, Human carcinogens |  |  |  |  |  |  |
| Arsenic and arsenic compounds | ? | ? | + |  | + |  |
| Asbestos | ? |  |  |  |  |  |
| Benzene | + |  |  | + | + | + |
| Bis(chloromethyl)ether and chloromethyl methyl ether (technical grade) | (+) |  |  |  |  |  |
| Cyclophosphamide | + | + |  | + | + | + |
| Hexavalent chromium compounds | + | + |  | + | + | + |
| Melphalan | + | + |  | + |  |  |
| Nickel compounds | + |  | ? |  |  |  |
| Radon | + |  |  |  |  |  |
| Tobacco smoke | + | + | + |  | + |  |
| Vinyl chloride | + | ? |  | + | + | + |
| GROUP 2A, Probable human carcinogens |  |  |  |  |  |  |
| Acrylonitrile |  |  |  |  |  |  |
| Adriamycin | + | + |  | + | + | + |
| Cadmium and cadmium compounds | (–) |  |  |  |  |  |
| Cisplatin | + |  | + | + |  |  |
| Epichlorohydrin | + |  |  | ? | + |  |
| Ethylene dibromide |  | + |  |  |  |  |
| Ethylene oxide | + | + | + | + | + | + |
| Formaldehyde | ? | ? |  |  |  |  |
| GROUP 2B, Possible human carcinogens |  |  |  |  |  |  |
| Chlorophenoxy herbicides (2,4-D and 2,4,5-T) |  | + | + |  |  |  |
| DDT | ? |  |  | + |  |  |
| Dimethylformamide | (+) |  |  |  |  |  |
| Lead compounds | ? | ? |  | ? | ? |  |
| Styrene | + | ? | + | ? | + | + |
| 2,3,7,8-Tetrachlorodibenzo-para-dioxin | ? |  |  |  |  |  |
| Welding fumes | + | + |  |  |  |  |

CA, chromosomal aberration; SCE, sister chromatid exchange; MN, micronuclei.
(–) = negative relationship for one study; – = negative relationship; (+) = positive relationship for one study; + = positive relationship; ? = inconclusive; blank = not studied.

Source: IARC 1987; updated through volumes 43–50 of the IARC Monographs.

 

Studies of genotoxicity in exposed humans include various end points other than chromosomal end points, such as DNA damage, DNA repair activity, and adducts in DNA and in proteins. Some of these end points may be more relevant than others for the prediction of carcinogenic hazard. Stable genetic changes (e.g., chromosomal rearrangements, deletions and point mutations) are highly relevant, since these types of damage are known to be related to carcinogenesis. The significance of DNA adducts depends upon their chemical identification and evidence that they result from the exposure. Some end points, such as SCE, UDS and DNA single-strand breaks (SSB), are potential indicators and/or markers of genetic events; however, their value is reduced in the absence of a mechanistic understanding of their ability to lead to genetic events. Clearly, the most relevant genetic marker in humans would be the induction of a specific mutation that has been directly associated with cancer in rodents exposed to the agent under study (figure 5).

Figure 5. Relevance of different genetic biomonitoring effects for potential cancer risk


Ethical Considerations for Genetic Biomonitoring

Rapid advances in molecular genetic techniques, the enhanced speed of sequencing of the human genome, and the identification of the role of tumour suppressor genes and proto-oncogenes in human carcinogenesis, raise ethical issues in the interpretation, communication, and use of this kind of personal information. Quickly improving techniques for the analysis of human genes will soon allow the identification of yet more inherited susceptibility genes in healthy, asymptomatic individuals (US Office of Technology Assessment 1990), lending themselves to be used in genetic screening.

Many questions of social and ethical concern will be raised if the application of genetic screening soon becomes a reality. Already at present roughly 50 genetic traits of metabolism, enzyme polymorphisms, and DNA repair are suspected for specific disease sensitivities, and a diagnostic DNA test is available for about 300 genetic diseases. Should any genetic screening at all be performed at the workplace? Who is to decide who will undergo testing, and how will the information be used in employment decisions? Who will have access to the information obtained from genetic screening, and how will the results be communicated to the person(s) involved? Many of these questions are strongly linked to social norms and prevailing ethical values. The main objective must be the prevention of disease and human suffering, but respect must be accorded to the individual’s own will and ethical premises. Some of the relevant ethical questions which must be answered well before the outset of any workplace biomonitoring study are given in table 4 and are also discussed in the chapter Ethical Issues.

Table 4. Some ethical principles relating to the need to know in occupational genetic biomonitoring studies

Groups to whom information is given:

| Information given | Persons studied | Occupational health unit | Employer |
|---|---|---|---|
| What is being studied |  |  |  |
| Why is the study performed |  |  |  |
| Are there risks involved |  |  |  |
| Confidentiality issues |  |  |  |
| Preparedness for possible hygienic improvements, exposure reductions indicated |  |  |  |

 

Time and effort must be put into the planning phase of any genetic biomonitoring study, and all necessary parties—the employees, employers, and the medical personnel of the collaborating workplace—must be well-informed before the study, and the results made known to them after the study as well. With proper care and reliable results, genetic biomonitoring can help to ensure safer workplaces and improve workers’ health.

 


Monday, 28 February 2011 20:35

Pesticides


Introduction

Human exposure to pesticides has different characteristics according to whether it occurs during industrial production or use (table 1). The formulation of commercial products (by mixing active ingredients with other coformulants) has some exposure characteristics in common with pesticide use in agriculture. In fact, since formulation is typically performed by small industries which manufacture many different products in successive operations, the workers are exposed to each of several pesticides for a short time. In public health and agriculture, the use of a variety of compounds is generally the rule, although in some specific applications (for example, cotton defoliation or malaria control programmes) a single product may be used.

Table 1. Comparison of exposure characteristics during production and use of pesticides

|  | Exposure on production | Exposure on use |
|---|---|---|
| Duration of exposure | Continuous and prolonged | Variable and intermittent |
| Degree of exposure | Fairly constant | Extremely variable |
| Type of exposure | To one or a few compounds | To numerous compounds, either in sequence or concomitantly |
| Skin absorption | Easy to control | Variable according to work procedures |
| Ambient monitoring | Useful | Seldom informative |
| Biological monitoring | Complementary to ambient monitoring | Very useful when available |

Source: WHO 1982a, modified.

The measurement of biological indicators of exposure is particularly useful for pesticide users where the conventional techniques of exposure assessment through ambient air monitoring are scarcely applicable. Most pesticides are lipid-soluble substances that penetrate the skin. The occurrence of percutaneous (skin) absorption makes the use of biological indicators very important in assessing the level of exposure in these circumstances.

Organophosphate Insecticides

Biological indicators of effect:

Cholinesterases are the target enzymes that account for organophosphate (OP) toxicity to insect and mammalian species. There are two principal types of cholinesterases in the human organism: acetylcholinesterase (ACHE) and plasma cholinesterase (PCHE). OPs cause toxic effects in humans through the inhibition of synaptic acetylcholinesterase in the nervous system. Acetylcholinesterase is also present in red blood cells, where its function is unknown. Plasma cholinesterase is a generic term covering an inhomogeneous group of enzymes present in glial cells, plasma, liver and some other organs. PCHE is inhibited by OPs, but its inhibition does not produce known functional derangements.

Inhibition of blood ACHE and PCHE activity correlates strongly with the intensity and duration of OP exposure. Because blood ACHE is the same molecular target as that responsible for acute OP toxicity in the nervous system, it is a more specific indicator than PCHE. However, the sensitivity of blood ACHE and PCHE to OP inhibition varies among the individual OP compounds: at the same blood concentration, some inhibit ACHE more and others PCHE more.

A reasonable correlation exists between blood ACHE activity and the clinical signs of acute toxicity (table 2). The correlation tends to be better as the rate of inhibition is faster. When inhibition occurs slowly, as with chronic low-level exposures, the correlation with illness may be low or totally non-existent. It must be noted that blood ACHE inhibition is not predictive for chronic or delayed effects.

Table 2. Severity and prognosis of acute OP toxicity at different levels of ACHE inhibition

| ACHE inhibition (%) | Level of poisoning | Clinical symptoms | Prognosis |
|---|---|---|---|
| 50–60 | Mild | Weakness, headache, dizziness, nausea, salivation, lacrimation, miosis, moderate bronchial spasm | Convalescence in 1–3 days |
| 60–90 | Moderate | Abrupt weakness, visual disturbance, excess salivation, sweating, vomiting, diarrhoea, bradycardia, hypertonia, tremors of hands and head, disturbed gait, miosis, pain in the chest, cyanosis of the mucous membranes | Convalescence in 1–2 weeks |
| 90–100 | Severe | Abrupt tremor, generalized convulsions, psychic disturbance, intensive cyanosis, lung oedema, coma | Death from respiratory or cardiac failure |

 

Variations of ACHE and PCHE activities have been observed in healthy people and in specific physiopathological conditions (table 3). Thus, the sensitivity of these tests in monitoring OP exposure can be increased by adopting individual pre-exposure values as a reference. Cholinesterase activities after exposure are then compared with the individual baseline values. One should make use of population cholinesterase activity reference values only when pre-exposure cholinesterase levels are not known (table 4).
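Comparison of a post-exposure measurement with the individual baseline, and classification against the severity bands of table 2, can be sketched as follows. The activity values in the example are hypothetical.

```python
def ache_inhibition(baseline, measured):
    """Percent inhibition of ACHE activity relative to the worker's
    own pre-exposure baseline, as recommended in the text."""
    return 100.0 * (baseline - measured) / baseline

def severity(inhibition_pct):
    """Map percent inhibition onto the severity bands of table 2."""
    if inhibition_pct >= 90:
        return "severe"
    if inhibition_pct >= 60:
        return "moderate"
    if inhibition_pct >= 50:
        return "mild"
    return "below the symptomatic range of table 2"

# Hypothetical activities (e.g., IU/ml by a spectrophotometric method):
print(severity(ache_inhibition(4.0, 1.2)))  # 70% inhibition
```

When no individual baseline exists, the population reference values of table 4 would be substituted for `baseline`, at the cost of sensitivity.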

Table 3. Variations of ACHE and PCHE activities in healthy people and in selected physiopathological conditions

| Condition | ACHE activity | PCHE activity |
|---|---|---|
| Healthy people: | | |
| Interindividual variation¹ | 10–18% | 15–25% |
| Intraindividual variation¹ | 3–7% | 6% |
| Sex differences | No | 10–15% higher in males |
| Age | Reduced up to 6 months of age | |
| Body mass | | Positive correlation |
| Serum cholesterol | | Positive correlation |
| Seasonal variation | No | No |
| Circadian variation | No | No |
| Menstruation | | Decreased |
| Pregnancy | | Decreased |
| Pathological conditions: | | |
| Reduced activity | Leukaemia, neoplasm | Liver disease; uraemia; cancer; heart failure; allergic reactions |
| Increased activity | Polycythaemia; thalassaemia; other congenital blood dyscrasias | Hyperthyroidism; other conditions of high metabolic rate |

¹ Source: Augustinsson 1955 and Gage 1967.

Table 4. Cholinesterase activities of healthy people without exposure to OP measured with selected methods

| Method | Sex | ACHE* | PCHE* |
|---|---|---|---|
| Michel¹ (ΔpH/h) | male | 0.77 ± 0.08 | 0.95 ± 0.19 |
| | female | 0.75 ± 0.08 | 0.82 ± 0.19 |
| Titrimetric¹ (μmol/min·ml) | male/female | 13.2 ± 0.31 | 4.90 ± 0.02 |
| Ellman's modified² (IU/ml) | male | 4.01 ± 0.65 | 3.03 ± 0.66 |
| | female | 3.45 ± 0.61 | 3.03 ± 0.68 |

* Mean result ± standard deviation.
¹ Laws 1991. ² Alcini et al. 1988.

Blood should preferably be sampled within two hours after exposure. Venipuncture is preferred to extracting capillary blood from a finger or earlobe because the sampling point can be contaminated with pesticide residing on the skin in exposed subjects. Three sequential samples are recommended to establish a normal baseline for each worker before exposure (WHO 1982b).

Several analytical methods are available for the determination of blood ACHE and PCHE. According to WHO, the Ellman spectrophotometric method (Ellman et al. 1961) should serve as a reference method.

Biological indicators of exposure.

The determination in urine of metabolites that are derived from the alkyl phosphate moiety of the OP molecule or of the residues generated by the hydrolysis of the P–X bond (figure 1) has been used to monitor OP exposure.

Figure 1. Hydrolysis of OP insecticides


Alkyl phosphate metabolites.

The alkyl phosphate metabolites detectable in urine and the main parent compound from which they can originate are listed in table 5. Urinary alkyl phosphates are sensitive indicators of exposure to OP compounds: the excretion of these metabolites in urine is usually detectable at an exposure level at which plasma or erythrocyte cholinesterase inhibition cannot be detected. The urinary excretion of alkyl phosphates has been measured for different conditions of exposure and for various OP compounds (table 6). The existence of a relationship between external doses of OP and alkyl phosphate urinary concentrations has been established in a few studies. In some studies a significant relationship between cholinesterase activity and levels of alkyl phosphates in urine has also been demonstrated.

Table 5. Alkyl phosphates detectable in urine as metabolites of OP pesticides

| Metabolite | Abbreviation | Principal parent compounds |
|---|---|---|
| Monomethylphosphate | MMP | Malathion, parathion |
| Dimethylphosphate | DMP | Dichlorvos, trichlorfon, mevinphos, malaoxon, dimethoate, fenchlorphos |
| Diethylphosphate | DEP | Paraoxon, demeton-oxon, diazinon-oxon, dichlorfenthion |
| Dimethylthiophosphate | DMTP | Fenitrothion, fenchlorphos, malathion, dimethoate |
| Diethylthiophosphate | DETP | Diazinon, demeton, parathion, fenchlorphos |
| Dimethyldithiophosphate | DMDTP | Malathion, dimethoate, azinphos-methyl |
| Diethyldithiophosphate | DEDTP | Disulfoton, phorate |
| Phenylphosphoric acid | – | Leptophos, EPN |

Table 6. Examples of levels of urinary alkyl phosphates measured in various conditions of exposure to OP

| Compound | Condition of exposure | Route of exposure | Metabolite concentrations¹ (mg/l) |
|---|---|---|---|
| Parathion² | Nonfatal poisoning | Oral | DEP = 0.5; DETP = 3.9 |
| Disulfoton² | Formulators | Dermal/inhalation | DEP = 0.01–4.40; DETP = 0.01–1.57; DEDTP = <0.01–0.05 |
| Phorate² | Formulators | Dermal/inhalation | DEP = 0.02–5.14; DETP = 0.08–4.08; DEDTP = <0.01–0.43 |
| Malathion³ | Sprayers | Dermal | DMDTP = <0.01 |
| Fenitrothion³ | Sprayers | Dermal | DMP = 0.01–0.42; DMTP = 0.02–0.49 |
| Monocrotophos⁴ | Sprayers | Dermal/inhalation | DMP = <0.04–6.3/24 h |

¹ For abbreviations see table 5.
² Dillon and Ho 1987. ³ Richter 1993. ⁴ van Sittert and Dumas 1990.

Alkyl phosphates are usually excreted in urine within a short time. Samples collected soon after the end of the workday are suitable for metabolite determination.

The measurement of alkyl phosphates in urine requires a rather sophisticated analytical method, based on derivatization of the compounds and detection by gas-liquid chromatography (Shafik et al. 1973a; Reid and Watts 1981).

Hydrolytic residues.

p-Nitrophenol (PNP) is the phenolic metabolite of parathion, methyl parathion and EPN. The measurement of PNP in urine (Cranmer 1970) has been widely and successfully used in evaluating exposure to parathion. Urinary PNP correlates well with the absorbed dose of parathion. At urinary PNP levels up to 2 mg/l, the absorption of parathion causes no symptoms, and little or no reduction of cholinesterase activities is observed. PNP excretion is rapid, and urinary levels of PNP become insignificant 48 hours after exposure. Thus, urine samples should be collected soon after exposure.

Carbamates

Biological indicators of effect.

Carbamate pesticides include insecticides, fungicides and herbicides. Insecticidal carbamate toxicity is due to the inhibition of synaptic ACHE, while other mechanisms of toxicity are involved for herbicidal and fungicidal carbamates. Thus, only exposure to carbamate insecticides can be monitored through the assay of cholinesterase activity in red blood cells (ACHE) or plasma (PCHE). ACHE is usually more sensitive to carbamate inhibitors than PCHE. Cholinergic symptoms have usually been observed in carbamate-exposed workers with a blood ACHE activity lower than 70% of the individual baseline level (WHO 1982a).

Inhibition of cholinesterases by carbamates is rapidly reversible. Therefore, false negative results can be obtained if too much time elapses between exposure and biological sampling or between sampling and analysis. In order to avoid such problems, it is recommended that blood samples be collected and analysed within four hours after exposure. Preference should be given to the analytical methods that allow the determination of cholinesterase activity immediately after blood sampling, as discussed for organophosphates.

Biological indicators of exposure.

The measurement of urinary excretion of carbamate metabolites as a method to monitor human exposure has so far been applied only to a few compounds and in limited studies. Table 7 summarizes the relevant data. Since carbamates are promptly excreted in the urine, samples collected soon after the end of exposure are suitable for metabolite determination. Analytical methods for the measurement of carbamate metabolites in urine have been reported by Dawson et al. (1964), DeBernardinis and Wargin (1982), and Verberk et al. (1990).

Table 7. Levels of urinary carbamate metabolites measured in field studies

| Compound | Biological index | Condition of exposure | Environmental concentrations | Results | References |
|---|---|---|---|---|---|
| Carbaryl | α-naphthol | formulators | 0.23–0.31 mg/m³ | x = 18.5 mg/l¹; max. excretion rate = 80 mg/day | WHO 1982a |
| | α-naphthol | mixers/applicators | | x = 8.9 mg/l; range = 0.2–65 mg/l | WHO 1982a |
| | α-naphthol | unexposed population | | range = 1.5–4 mg/l | WHO 1982a |
| Pirimicarb | metabolites I² and V³ | applicators | | range = 1–100 mg/l | Verberk et al. 1990 |

¹ Systemic poisonings have been occasionally reported.
² 2-dimethylamino-4-hydroxy-5,6-dimethylpyrimidine.
³ 2-methylamino-4-hydroxy-5,6-dimethylpyrimidine.
x = mean.

Dithiocarbamates

Biological indicators of exposure.

Dithiocarbamates (DTC) are widely used fungicides, chemically grouped in three classes: thiurams, dimethyldithiocarbamates and ethylene-bis-dithiocarbamates.

Carbon disulphide (CS2) and its main metabolite 2-thiothiazolidine-4-carboxylic acid (TTCA) are metabolites common to almost all DTC. A significant increase in the urinary concentrations of these compounds has been observed under different conditions of exposure and for various DTC pesticides. Ethylene thiourea (ETU) is an important urinary metabolite of ethylene-bis-dithiocarbamates. It may also be present as an impurity in commercial formulations. Since ETU has been determined to be a teratogen and a carcinogen in rats and in other species and has been associated with thyroid toxicity, its measurement in urine has been widely applied to monitor ethylene-bis-dithiocarbamate exposure. ETU is not compound-specific, as it may be derived from maneb, mancozeb or zineb.

Measurement of the metals present in the DTC has been proposed as an alternative approach in monitoring DTC exposure. Increased urinary excretion of manganese has been observed in workers exposed to mancozeb (table 8).

Table 8. Levels of urinary dithiocarbamate metabolites measured in field studies

| Compound | Biological index | Condition of exposure | Environmental concentrations* | Results ± standard deviation | References |
|---|---|---|---|---|---|
| Ziram | Carbon disulphide (CS2) | formulators | 1.03 ± 0.62 mg/m³ | 3.80 ± 3.70 mg/l | Maroni et al. 1992 |
| Ziram | TTCA¹ | formulators | | 0.45 ± 0.37 mg/l | Maroni et al. 1992 |
| Maneb/Mancozeb | ETU² | applicators | | range = <0.2–11.8 mg/l | Kurttio et al. 1990 |
| Mancozeb | Manganese | applicators | 57.2 mg/m³ | pre-exposure: 0.32 ± 0.23 mg/g creatinine; post-exposure: 0.53 ± 0.34 mg/g creatinine | Canossa et al. 1993 |

* Mean result ± standard deviation according to Maroni et al. 1992.
¹ TTCA = 2-thiothiazolidine-4-carboxylic acid.
² ETU = ethylene thiourea.

CS2, TTCA and manganese are commonly found in the urine of non-exposed subjects. Thus, the measurement of urinary levels of these compounds prior to exposure is recommended. Urine samples should be collected on the morning following the cessation of exposure. Analytical methods for the measurement of CS2, TTCA and ETU have been reported by Maroni et al. (1992).
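The creatinine correction used for the manganese results in table 8 can be sketched as follows; this is an illustrative helper (the function name is hypothetical, not from the source), expressing a urinary analyte concentration per gram of creatinine to correct for urine dilution.

```python
# Hypothetical sketch: creatinine-corrected urinary concentration,
# as reported for manganese in table 8 (mg/g creatinine).

def creatinine_corrected(analyte_mg_per_l: float, creatinine_g_per_l: float) -> float:
    """Analyte concentration expressed as mg per g of creatinine."""
    if creatinine_g_per_l <= 0:
        raise ValueError("creatinine concentration must be positive")
    return analyte_mg_per_l / creatinine_g_per_l

# Example: 0.6 mg/l of analyte in urine containing 1.2 g/l creatinine
# corresponds to 0.5 mg/g creatinine.
```

The same correction applies to the anti-12-hydroxy-endrin values (μg/g creatinine) discussed under organochlorines below.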

Synthetic Pyrethroids

Biological indicators of exposure.

Synthetic pyrethroids are insecticides similar to natural pyrethrins. Urinary metabolites suitable for biological monitoring of exposure have been identified through studies with human volunteers. The acidic metabolite 3-(2,2-dichlorovinyl)-2,2-dimethylcyclopropane carboxylic acid (Cl2CA) is excreted by subjects orally dosed with permethrin or cypermethrin, while its bromo-analogue (Br2CA) is excreted by subjects treated with deltamethrin. In volunteers treated with cypermethrin, a phenoxy metabolite, 4-hydroxyphenoxybenzoic acid (4-HPBA), has also been identified. These tests, however, have seldom been applied in monitoring occupational exposures because of the complex analytical techniques required (Eadsforth, Bragt and van Sittert 1988; Kolmodin-Hedman, Swensson and Akerblom 1982). In applicators exposed to cypermethrin, urinary levels of Cl2CA have been found to range from 0.05 to 0.18 mg/l, while in formulators exposed to α-cypermethrin, urinary levels of 4-HPBA have been found to be lower than 0.02 mg/l.

A 24-hour urine collection period started after the end of exposure is recommended for metabolite determinations.

Organochlorines

Biological indicators of exposure.

Organochlorine (OC) insecticides were widely used in the 1950s and 1960s. Subsequently, the use of many of these compounds was discontinued in many countries because of their persistence and consequent contamination of the environment.

Biological monitoring of OC exposure can be carried out through the determination of intact pesticides or their metabolites in blood or serum (Dale, Curley and Cueto 1966; Barquet, Morgade and Pfaffenberger 1981). After absorption, aldrin is rapidly metabolized to dieldrin and can be measured as dieldrin in blood. Endrin has a very short half-life in blood. Therefore, endrin blood concentration is of use only in determining recent exposure levels. The determination of the urinary metabolite anti-12-hydroxy-endrin has also proven to be useful in monitoring endrin exposure (van Sittert and Tordoir 1987).

Significant correlations between the concentration of biological indicators and the onset of toxic effects have been demonstrated for some OC compounds. Instances of toxicity due to aldrin and dieldrin exposure have been related to levels of dieldrin in blood above 200 μg/l. A blood lindane concentration of 20 μg/l has been indicated as the upper critical level as far as neurological signs and symptoms are concerned. No acute adverse effects have been reported in workers with blood endrin concentrations below 50 μg/l. Absence of early adverse effects (induction of liver microsomal enzymes) has been shown on repeated exposures to endrin at urinary anti-12-hydroxy-endrin concentrations below 130 μg/g creatinine and on repeated exposures to DDT at DDT or DDE serum concentrations below 250 μg/l.

OC may be found in low concentrations in the blood or urine of the general population. Examples of observed values are as follows: lindane blood concentrations up to 1 μg/l, dieldrin up to 10 μg/l, DDT or DDE up to 100 μg/l, and anti-12-hydroxy-endrin up to 1 μg/g creatinine. Thus, a baseline assessment prior to exposure is recommended.

For exposed subjects, blood samples should be taken immediately after the end of a single exposure. For conditions of long-term exposure, the time of collection of the blood sample is not critical. Urine spot samples for urinary metabolite determination should be collected at the end of exposure.

Triazines

Biological indicators of exposure.

The measurement of urinary excretion of triazinic metabolites and the unmodified parent compound has been applied to subjects exposed to atrazine in limited studies. Figure 2 shows the urinary excretion profiles of atrazine metabolites for a manufacturing worker with dermal exposure to atrazine ranging from 174 to 275 μmol per workshift (Catenacci et al. 1993). Since other chlorotriazines (simazine, propazine, terbuthylazine) follow the same biotransformation pathway as atrazine, levels of dealkylated triazinic metabolites may be determined to monitor exposure to all chlorotriazine herbicides.

Figure 2. Urinary excretion profiles of atrazine metabolites


The determination of unmodified compounds in urine may be useful as a qualitative confirmation of the nature of the compound that has generated the exposure. A 24-hour urine collection period started at the beginning of exposure is recommended for metabolite determination.

Recently, by using an enzyme-linked immunosorbent assay (ELISA test), a mercapturic acid conjugate of atrazine has been identified as its major urinary metabolite in exposed workers. This compound has been found in concentrations at least 10 times higher than those of any dealkylated products. A relationship between cumulative dermal and inhalation exposure and total amount of the mercapturic acid conjugate excreted over a 10-day period has been observed (Lucas et al. 1993).


Coumarin Derivatives

Biological indicators of effect.

Coumarin rodenticides inhibit the activity of the enzymes of the vitamin K cycle in the liver of mammals, humans included (figure 3), thus causing a dose-related reduction of the synthesis of vitamin K-dependent clotting factors, namely factor II (prothrombin), VII, IX, and X. Anticoagulant effects appear when plasma levels of clotting factors have dropped below approximately 20% of normal.

Figure 3. Vitamin K cycle


These vitamin K antagonists have been grouped into so-called “first generation” (e.g., warfarin) and “second generation” compounds (e.g., brodifacoum, difenacoum), the latter characterized by a very long biological half-life (100 to 200 days).

The determination of prothrombin time is widely used in monitoring exposure to coumarins. However, this test is sensitive only to a clotting factor decrease of approximately 20% of normal plasma levels. The test is not suitable for detection of early effects of exposure. For this purpose, the determination of the prothrombin concentration in plasma is recommended.

In the future, these tests might be replaced by the determination of coagulation factor precursors (PIVKA, proteins induced by vitamin K absence), which are substances detectable in blood only in the case of blockage of the vitamin K cycle by coumarins.

With conditions of prolonged exposure, the time of blood collection is not critical. In cases of acute overexposure, biological monitoring should be carried out for at least five days after the event, in view of the latency of the anticoagulant effect. To increase the sensitivity of these tests, the measurement of baseline values prior to exposure is recommended.

Biological indicators of exposure.

The measurement of unmodified coumarins in blood has been proposed as a test to monitor human exposure. However, experience in applying these indices is very limited, mainly because the analytical techniques are much more complex (and less standardized) than those required to monitor the effects on the coagulation system (Chalermchaikit, Felice and Murphy 1993).

Phenoxy Herbicides

Biological indicators of exposure.

Phenoxy herbicides are scarcely biotransformed in mammals. In humans, more than 95% of a 2,4-dichlorophenoxyacetic acid (2,4-D) dose is excreted unchanged in urine within five days, and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T) and 4-chloro-2-methylphenoxyacetic acid (MCPA) are also excreted mostly unchanged via urine within a few days after oral absorption. The measurement of unchanged compounds in urine has been applied in monitoring occupational exposure to these herbicides. In field studies, urinary levels of exposed workers have been found to range from 0.10 to 8 μg/l for 2,4-D, from 0.05 to 4.5 μg/l for 2,4,5-T and from below 0.1 μg/l to 15 μg/l for MCPA. A 24-hour period of urine collection starting at the end of exposure is recommended for the determination of unchanged compounds. Analytical methods for the measurements of phenoxy herbicides in urine have been reported by Draper (1982).

Quaternary Ammonium Compounds

Biological indicators of exposure.

Diquat and paraquat are herbicides scarcely biotransformed by the human organism. Because of their high water solubility, they are readily excreted unchanged in urine. Urine concentrations below the analytical detection limit (0.01 μg/l) have often been observed in paraquat-exposed workers, while in tropical countries concentrations up to 0.73 μg/l have been measured after improper paraquat handling. Urinary diquat concentrations lower than the analytical detection limit (0.047 μg/l) have been reported for subjects with dermal exposures from 0.17 to 1.82 μg/h and inhalation exposures lower than 0.01 μg/h. Ideally, a 24-hour urine sample collected at the end of exposure should be used for analysis. When this is impractical, a spot sample at the end of the workday can be used.

Determination of paraquat levels in serum is useful for prognostic purposes in case of acute poisoning: patients with serum paraquat levels up to 0.1 μg/l twenty-four hours after ingestion are likely to survive.

The analytical methods for paraquat and diquat determination have been reviewed by Summers (1980).

Miscellaneous Pesticides

4,6-Dinitro-o-cresol (DNOC).

DNOC is a herbicide introduced in 1925, but its use has progressively declined because of its high toxicity to plants and to humans. Since blood DNOC concentrations correlate to a certain extent with the severity of adverse health effects, the measurement of unchanged DNOC in blood has been proposed for monitoring occupational exposures and for evaluating the clinical course of poisonings.

Pentachlorophenol.

Pentachlorophenol (PCP) is a wide-spectrum biocide with pesticidal action against weeds, insects, and fungi. Measurements of blood or urinary unchanged PCP have been recommended as suitable indices in monitoring occupational exposures (Colosio et al. 1993), because these parameters are significantly correlated with PCP body burden. In workers with prolonged exposure to PCP the time of blood collection is not critical, while urine spot samples should be collected on the morning after exposure.

A multiresidue method for the measurement of halogenated and nitrophenolic pesticides has been described by Shafik et al. (1973b).

Other tests proposed for the biological monitoring of pesticide exposure are listed in table 9.

Table 9. Other indices proposed in the literature for the biological monitoring of pesticide exposure

| Compound | Biological index (urine) | Biological index (blood) |
|---|---|---|
| Bromophos | Bromophos | Bromophos |
| Captan | Tetrahydrophthalimide | |
| Carbofuran | 3-Hydroxycarbofuran | |
| Chlordimeform | 4-Chloro-o-toluidine derivatives | |
| Chlorobenzilate | p,p′-Dichlorobenzophenone | |
| Dichloropropene | Mercapturic acid metabolites | |
| Fenitrothion | p-Nitrocresol | |
| Ferbam | | Thiram |
| Fluazifop-butyl | Fluazifop | |
| Flufenoxuron | | Flufenoxuron |
| Glyphosate | Glyphosate | |
| Malathion | Malathion | Malathion |
| Organotin compounds | Tin | Tin |
| Trifenomorph | Morpholine, triphenylcarbinol | |
| Ziram | | Thiram |

Conclusions

Biological indicators for monitoring pesticide exposure have been applied in a number of experimental and field studies.

Some tests, such as those for cholinesterase in blood or for selected unmodified pesticides in urine or blood, have been validated by extensive experience. Biological exposure limits have been proposed for these tests (table 10). Other tests, in particular those for blood or urinary metabolites, suffer from greater limitations because of analytical difficulties or because of limitations in interpretation of results.

Table 10. Recommended biological limit values (as of 1996)

| Compound | Biological index | BEI¹ | BAT² | HBBL³ | BLV⁴ |
|---|---|---|---|---|---|
| ACHE inhibitors | ACHE in blood | 70% | 70% | 70% | |
| DNOC | DNOC in blood | | | 20 mg/l | |
| Lindane | Lindane in blood | | 0.02 mg/l | 0.02 mg/l | |
| Parathion | PNP in urine | 0.5 mg/l | 0.5 mg/l | | |
| Pentachlorophenol (PCP) | PCP in urine | 2 mg/l | 0.3 mg/l | | |
| | PCP in plasma | 5 mg/l | 1 mg/l | | |
| Dieldrin/Aldrin | Dieldrin in blood | | | | 100 μg/l |
| Endrin | Anti-12-hydroxy-endrin in urine | | | | 130 μg/g creatinine |
| DDT | DDT and DDE in serum | | | | 250 μg/l |
| Coumarins | Prothrombin time in plasma | | | | 10% above baseline |
| | Prothrombin concentration in plasma | | | | 60% of baseline |
| MCPA | MCPA in urine | | | | 0.5 mg/l |
| 2,4-D | 2,4-D in urine | | | | 0.5 mg/l |

¹ Biological exposure indices (BEIs) are recommended by the American Conference of Governmental Industrial Hygienists (ACGIH 1995).
² Biological tolerance values (BATs) are recommended by the German Commission for the Investigation of Health Hazards of Chemical Compounds in the Work Area (DFG 1992).
³ Health-based biological limits (HBBLs) are recommended by a WHO Study Group (WHO 1982a).
⁴ Biological limit values (BLVs) are proposed by a Study Group of the Scientific Committee on Pesticides of the International Commission on Occupational Health (Tordoir et al. 1994). Assessment of working conditions is called for if this value is exceeded.
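As an illustration of how such limits might be applied in practice, the sketch below screens measured results against a few of the BLV entries from table 10 (with mg/l values converted to μg/l). It is a hypothetical helper, not part of any cited standard, and the indicator keys are illustrative names.

```python
# Hypothetical screening sketch against selected biological limit values
# (BLVs) from table 10, all expressed here in ug/l.

BLV_UG_PER_L = {
    "dieldrin_blood": 100.0,   # Dieldrin/Aldrin: dieldrin in blood
    "ddt_dde_serum": 250.0,    # DDT and DDE in serum
    "mcpa_urine": 500.0,       # MCPA in urine (0.5 mg/l)
    "24d_urine": 500.0,        # 2,4-D in urine (0.5 mg/l)
}

def exceeds_blv(indicator: str, measured_ug_per_l: float) -> bool:
    """True if the measured value exceeds the BLV, which calls for an
    assessment of working conditions (Tordoir et al. 1994)."""
    return measured_ug_per_l > BLV_UG_PER_L[indicator]

# Example: a serum DDT+DDE result of 300 ug/l exceeds the 250 ug/l BLV.
```

Thresholds expressed as a percentage of an individual baseline (the cholinesterase and coumarin entries) would need the worker's pre-exposure value rather than a fixed concentration.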

This field is in rapid development and, given the enormous importance of using biological indicators to assess exposure to these substances, new tests will be continuously developed and validated.

 
