
Validity Issues in Study Design

The Need for Validity

Epidemiology aims at providing an understanding of the disease experience in populations. In particular, it can be used to obtain insight into the occupational causes of ill health. This knowledge comes from studies that compare groups of people who have a disease with people who do not. Another approach is to examine what diseases people who work in certain jobs with particular exposures acquire and to compare these disease patterns to those of people not similarly exposed. These studies provide estimates of risk of disease for specific exposures. For information from such studies to be used for establishing prevention programmes, for the recognition of occupational diseases, and for those workers affected by exposures to be appropriately compensated, these studies must be valid.

Validity can be defined as the ability of a study to reflect the true state of affairs. A valid study is therefore one which measures correctly the association (either positive, negative or absent) between an exposure and a disease. It describes the direction and magnitude of a true risk. Two types of validity are distinguished: internal and external validity. Internal validity is a study’s ability to reflect what really happened among the study subjects; external validity reflects what could occur in the population.

Validity relates to the truthfulness of a measurement. Validity must be distinguished from precision of the measurement, which is a function of the size of the study and the efficiency of the study design.

Internal Validity

A study is said to be internally valid when it is free from biases and therefore truly reflects the association between exposure and disease which exists among the study participants. An observed risk of disease in association with an exposure may indeed result from a real association and therefore be valid, but it may also reflect the influence of biases. A bias will give a distorted image of reality.

Three major types of biases, also called systematic errors, are usually distinguished:

  • selection bias
  • information or observation bias
  • confounding

 

They will be presented briefly below, using examples from the occupational health setting.

Selection bias

Selection bias will occur when entry into the study is influenced by knowledge of the exposure status of the potential study participant. This problem is therefore encountered only when the disease has already occurred before the person enters the study. Typically, in the epidemiological setting, this will happen in case-control studies or in retrospective cohort studies. This means that a person will be more likely to be considered a case if it is known that he or she has been exposed. Three sets of circumstances may lead to such an event, which will also depend on the severity of the disease.

Self-selection bias

This can occur when people who know they have been exposed to known or suspected harmful products in the past, and who are convinced their disease is the result of that exposure, consult a physician for symptoms which other people, not so exposed, might have ignored. This is particularly likely to happen for diseases which have few noticeable symptoms. An example may be early pregnancy loss or spontaneous abortion among female nurses handling drugs used for cancer treatment. These women are more aware than most of reproductive physiology and, by being concerned about their ability to have children, may be more likely to recognize or label as a spontaneous abortion what other women would only consider as a delay in the onset of menstruation. Another example, from a retrospective cohort study cited by Rothman (1986), involves a Centers for Disease Control study of leukaemia among troops who had been present during a US atomic test in Nevada. Of the troops present on the test site, 76% were traced and constituted the cohort. Of these, 82% were found by the investigators, but an additional 18% contacted the investigators themselves after hearing publicity about the study. Four cases of leukaemia were present among the 82% traced by CDC and four cases were present among the self-referred 18%. This strongly suggests that the investigators’ ability to identify exposed persons was linked to leukaemia risk.

Diagnostic bias

This will occur when the doctors are more likely to diagnose a given disease once they know to what the patient has been previously exposed. For example, when most paints were lead-based, a symptom of disease of the peripheral nerves called peripheral neuritis with paralysis was also known as painters’ “wrist drop”. Knowing the occupation of the patient made it easier to diagnose the disease even in its early stages, whereas the identification of the causal agent would be much more difficult in research participants not known to be occupationally exposed to lead.

Bias resulting from refusal to participate in a study

When people, either healthy or sick, are asked to participate in a study, several factors play a role in determining whether or not they will agree. Willingness to answer variably lengthy questionnaires, which at times inquire about sensitive issues, and even more so to give blood or other biological samples, may be determined by the degree of self-interest held by the person. Someone who is aware of past potential exposure may be ready to comply with this inquiry in the hope that it will help to find the cause of the disease, whereas someone who considers that they have not been exposed to anything dangerous, or who is not interested in knowing, may decline the invitation to participate in the study. This can lead to a selection of those people who will finally be the study participants as compared to all those who might have been.

Information bias

This is also called observation bias and concerns disease outcome in follow-up studies and exposure assessment in case-control studies.

Differential outcome assessment in prospective follow-up (cohort) studies

Two groups are defined at the start of the study: an exposed group and an unexposed group. Problems of diagnostic bias will arise if the search for cases differs between these two groups. For example, consider a cohort of people exposed to an accidental release of dioxin in a given industry. For the highly exposed group, an active follow-up system is set up with medical examinations and biological monitoring at regular intervals, whereas the rest of the working population receives only routine care. It is highly likely that more disease will be identified in the group under close surveillance, which would lead to a potential over-estimation of risk.

Differential losses in retrospective cohort studies

The reverse mechanism to that described in the preceding paragraph may occur in retrospective cohort studies. In these studies, the usual way of proceeding is to start with the files of all the people who have been employed in a given industry in the past, and to assess disease or mortality subsequent to employment. Unfortunately, in almost all studies files are incomplete, and the fact that a person is missing may be related either to exposure status or to disease status or to both. For example, in a recent study conducted in the chemical industry in workers exposed to aromatic amines, eight tumours were found in a group of 777 workers who had undergone cytological screening for urinary tumours. Altogether, only 34 records were found missing, corresponding to a 4.4% loss from the exposure assessment file, but for bladder cancer cases, exposure data were missing for two cases out of eight, or 25%. This shows that the files of people who became cases were more likely to become lost than the files of other workers. This may occur because of more frequent job changes within the company (which may be linked to exposure effects), resignation, dismissal or mere chance.

Differential assessment of exposure in case-control studies

In case-control studies, the disease has already occurred at the start of the study, and information will be sought on exposures in the past. Bias may result either from the interviewer’s or study participant’s attitude to the investigation. Information is usually collected by trained interviewers who may or may not be aware of the hypothesis underlying the research. For example, in a population-based case-control study of bladder cancer conducted in a highly industrialized region, study staff may well be aware of the fact that certain chemicals, such as aromatic amines, are risk factors for bladder cancer. If they also know who has developed the disease and who has not, they may be likely to conduct more in-depth interviews with the participants who have bladder cancer than with the controls. They may insist on more detailed information of past occupations, searching systematically for exposure to aromatic amines, whereas for controls they may record occupations in a more routine way. The resulting bias is known as exposure suspicion bias.

The participants themselves may also be responsible for such bias. This is called recall bias to distinguish it from interviewer bias. Both have exposure suspicion as the mechanism for the bias. Persons who are sick may suspect an occupational origin to their disease and therefore will try to remember as accurately as possible all the dangerous agents to which they may have been exposed. In the case of handling undefined products, they may be inclined to recall the names of precise chemicals, particularly if a list of suspected products is made available to them. By contrast, controls may be less likely to go through the same thought process.

Confounding

Confounding exists when the association observed between exposure and disease is in part the result of a mixing of the effect of the exposure under study and another factor. Let us say, for example, that we are finding an increased risk of lung cancer among welders. We are tempted to conclude immediately that there is a causal association between exposure to welding fumes and lung cancer. However, we also know that smoking is by far the main risk factor for lung cancer. Therefore, if information is available, we begin checking the smoking status of welders and other study participants. We may find that welders are more likely to smoke than non-welders. In that situation, smoking is known to be associated with lung cancer and, at the same time, in our study smoking is also found to be associated with being a welder. In epidemiological terms, this means that smoking, linked both to lung cancer and to welding, is confounding the association between welding and lung cancer.

Interaction or effect modification

In contrast to all the issues listed above, namely selection, information and confounding, which are biases, interaction is not a bias due to problems in study design or analysis, but reflects reality and its complexity. An example of this phenomenon is the following: exposure to radon is a risk factor for lung cancer, as is smoking. In addition, smoking and radon exposure have different effects on lung cancer risk depending on whether they act together or in isolation. Most of the occupational studies on this topic have been conducted among underground miners and at times have provided conflicting results. Overall, there seem to be arguments in favour of an interaction of smoking and radon exposure in producing lung cancer. This means that lung cancer risk is increased by exposure to radon, even in non-smokers, but that the size of the risk increase from radon is much greater among smokers than among non-smokers. In epidemiological terms, we say that the effect is multiplicative. In contrast to confounding, described above, interaction needs to be carefully analysed and described in the analysis rather than simply controlled, as it reflects what is happening at the biological level and is not merely a consequence of poor study design. Its explanation leads to a more valid interpretation of the findings from a study.

External Validity

This issue can be addressed only after ensuring that internal validity is secured. If we are convinced that the results observed in the study reflect associations which are real, we can ask ourselves whether or not we can extrapolate these results to the larger population from which the study participants themselves were drawn, or even to other populations which are identical or at least very similar. The most common question is whether results obtained for men also apply to women. For years, studies and, in particular, occupational epidemiological investigations have been conducted exclusively among men. Studies among chemists carried out in the 1960s and 1970s in the United States, United Kingdom and Sweden all found increased risks of specific cancers—namely leukaemia, lymphoma and pancreatic cancer. Based on what we knew of the effects of exposure to solvents and some other chemicals, we could already have deduced at the time that laboratory work also entailed carcinogenic risk for women. This in fact was shown to be the case when the first study among women chemists was finally published in the mid-1980s, which found results similar to those among men. It is worth noting that other excess cancers found were tumours of the breast and ovary, traditionally considered as being related only to endogenous factors or reproduction, but for which newly suspected environmental factors such as pesticides may play a role. Much more work needs to be done on occupational determinants of female cancers.

Strategies for a Valid Study

A perfectly valid study can never exist, but it is incumbent upon the researcher to try to avoid, or at least to minimize, as many biases as possible. This can often best be done at the study design stage, but can also be carried out during analysis.

Study design

Selection and information bias can be avoided only through the careful design of an epidemiological study and the scrupulous implementation of all the ensuing day-to-day guidelines, including meticulous attention to quality assurance, for the conduct of the study in field conditions. Confounding may be dealt with either at the design or analysis stage.

Selection

Criteria for considering a participant as a case must be explicitly defined. One cannot, or at least should not, attempt to study ill-defined clinical conditions. A way of minimizing the impact that knowledge of the exposure may have on disease assessment is to include only severe cases which would have been diagnosed irrespective of any information on the history of the patient. In the field of cancer, studies often will be limited to cases with histological proof of the disease to avoid the inclusion of borderline lesions. This also will mean that groups under study are well defined. For example, it is well-known in cancer epidemiology that cancers of different histological types within a given organ may have dissimilar risk factors. If the number of cases is sufficient, it is better to separate adenocarcinoma of the lung from squamous cell carcinoma of the lung. Whatever the final criteria for entry into the study, they should always be clearly defined and described. For example, the exact code of the disease should be indicated using the International Classification of Diseases (ICD) and also, for cancer, the International Classification of Diseases-Oncology (ICD-O).

Efforts should be made once the criteria are specified to maximize participation in the study. The decision to refuse to participate is hardly ever made at random and therefore leads to bias. Studies should first of all be presented to the clinicians who are seeing the patients. Their approval is needed to approach patients, and therefore they will have to be convinced to support the study. One argument that is often persuasive is that the study is in the interest of public health. However, at this stage it is better not to discuss the exact hypothesis being evaluated in order to avoid unduly influencing the clinicians involved. Physicians should not be asked to take on supplementary duties; it is easier to convince health personnel to lend their support to a study if means are provided by the study investigators to carry out any additional tasks, over and above routine care, necessitated by the study. Interviewers and data abstractors ought to be unaware of the disease status of the participants.

Similar attention should be paid to the information provided to participants. The goal of the study must be described in broad, neutral terms, but must also be convincing and persuasive. It is important that issues of confidentiality and interest for public health be fully understood while avoiding medical jargon. In most settings, use of financial or other incentives is not considered appropriate, although compensation should be provided for any expense a participant may incur. Last, but not least, the general population should be sufficiently scientifically literate to understand the importance of such research. Both the benefits and the risks of participation must be explained to each prospective participant who is asked to complete questionnaires and/or to provide biological samples for storage and/or analysis. No coercion should be applied in obtaining prior and fully informed consent. Where studies are exclusively records-based, prior approval of the agencies responsible for ensuring the confidentiality of such records must be secured. In these instances, individual participant consent usually can be waived; instead, approval of union and government officers will suffice. Epidemiological investigations are not a threat to an individual’s private life, but are a potential aid to improve the health of the population. The approval of an institutional review board (or ethics review committee) will be needed prior to the conduct of a study, and much of what is stated above will be expected by the board for its review.

Information

In prospective follow-up studies, means for assessment of the disease or mortality status must be identical for exposed and non-exposed participants. In particular, different sources should not be used, such as only checking in a central mortality register for non-exposed participants and using intensive active surveillance for exposed participants. Similarly, the cause of death must be obtained in strictly comparable ways. This means that if a system is used to gain access to official documents for the unexposed population, which is often the general population, one should never plan to get even more precise information through medical records or interviews on the participants themselves or on their families for the exposed subgroup.

In retrospective cohort studies, efforts should be made to determine how closely the population under study corresponds to the population of interest. One should beware of potential differential losses in exposed and non-exposed groups by using various sources concerning the composition of the population. For example, it may be useful to compare payroll lists with union membership lists or other professional listings. Discrepancies must be reconciled and the protocol adopted for the study must be closely followed.

In case-control studies, other options exist to avoid biases. Interviewers, study staff and study participants need not be aware of the precise hypothesis under study. If they do not know the association being tested, they are less likely to try to provide the expected answer. Keeping study personnel in the dark as to the research hypothesis is in fact often very impractical. The interviewer will almost always know the exposures of greatest potential interest as well as who is a case and who is a control. We therefore have to rely on their honesty and also on their training in basic research methodology, which should be a part of their professional background; objectivity is the hallmark at all stages in science.

It is easier not to inform the study participants of the exact object of the research. Good, basic explanations on the need to collect data in order to have a better understanding of health and disease are usually sufficient and will satisfy the needs of ethics review.

Confounding

Confounding is the only bias which can be dealt with either at the study design stage or, provided adequate information is available, at the analysis stage. If, for example, age is considered to be a potential confounder of the association of interest because age is associated with the risk of disease (i.e., cancer becomes more frequent in older age) and also with exposure (conditions of exposure vary with age or with factors related to age such as qualification, job position and duration of employment), several solutions exist. The simplest is to limit the study to a specified age range—for example, enrol only Caucasian men aged 40 to 50. This will provide elements for a simple analysis, but will also have the drawback of limiting the application of the results to a single sex/age/racial group. Another solution is matching on age. This means that for each case, a referent of the same age is needed. This is an attractive idea, but one has to keep in mind the possible difficulty of fulfilling this requirement as the number of matching factors increases. In addition, once a factor has been matched on, it becomes impossible to evaluate its role in the occurrence of disease. The last solution is to have sufficient information on potential confounders in the study database in order to control for them in the analysis. This can be done either through a simple stratified analysis, or with more sophisticated tools such as multivariate analysis. However, it should be remembered that analysis will never be able to compensate for a poorly designed or conducted study.
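Stratification as just described can be illustrated with a short sketch. The example below pools stratum-specific odds ratios across two age strata using the Mantel-Haenszel estimator, one standard approach to a stratified analysis; all counts are hypothetical.

```python
# Mantel-Haenszel pooled odds ratio across age strata (hypothetical counts).
# Each stratum: (exposed cases, exposed controls, unexposed cases, unexposed controls).
strata = {
    "age 40-49": (20, 80, 10, 90),
    "age 50-59": (45, 55, 30, 70),
}

def mantel_haenszel_or(strata):
    """Pool stratum-specific odds ratios, weighting each stratum by its size."""
    numerator = 0.0
    denominator = 0.0
    for a, b, c, d in strata.values():
        n = a + b + c + d
        numerator += a * d / n
        denominator += b * c / n
    return numerator / denominator

for label, (a, b, c, d) in strata.items():
    print(f"{label}: stratum-specific OR = {a * d / (b * c):.2f}")
print(f"Age-adjusted (Mantel-Haenszel) OR = {mantel_haenszel_or(strata):.2f}")
```

The adjusted estimate is a variance-favouring compromise between the stratum-specific values, with the age effect removed from the comparison.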

Conclusion

The potential for biases to occur in epidemiological research is long established. This was not too much of a concern when the associations being studied were strong (as is the case for smoking and lung cancer) and therefore some inaccuracy did not cause too severe a problem. However, now that the time has come to evaluate weaker risk factors, the need for better tools becomes paramount. This includes the need for excellent study designs and the possibility of combining the advantages of various traditional designs such as the case-control or cohort studies with more innovative approaches such as case-control studies nested within a cohort. Also, the use of biomarkers may provide the means of obtaining more accurate assessments of current and possibly past exposures, as well as for the early stages of disease.

 


Measuring Effects of Exposures

Epidemiology involves measuring the occurrence of disease and quantifying associations between diseases and exposures.

Measures of Disease Occurrence

Disease occurrence can be measured by frequencies (counts) but is better described by rates, which are composed of three elements: the number of people affected (numerator), the number of people in the source or base population (i.e., the population at risk) from which the affected persons come, and the time period covered. The denominator of the rate is the total person-time experienced by the source population. Rates allow more informative comparisons between populations of different sizes than counts alone. Risk, the probability of an individual developing disease within a specified time period, is a proportion, ranging from 0 to 1, and is not a rate per se. Attack rate, the proportion of people in a population who are affected within a specified time period, is technically a measure of risk, not a rate.

Disease-specific morbidity includes incidence, which refers to the number of persons who are newly diagnosed with the disease of interest. Prevalence refers to the number of existing cases. Mortality refers to the number of persons who die.

Incidence is defined as the number of newly diagnosed cases within a specified time period, whereas the incidence rate is this number divided by the total person-time experienced by the source population (table 1). For cancer, rates are usually expressed as annual rates per 100,000 people. Rates for other more common diseases may be expressed per a smaller number of people. For example, birth defect rates are usually expressed per 1,000 live births. Cumulative incidence, the proportion of people who become cases within a specified time period, is a measure of average risk for a population. 

Table 1. Measures of disease occurrence: Hypothetical population observed for a five-year period

Newly diagnosed cases                   10
Previously diagnosed living cases       12
Deaths, all causes*                     5
Deaths, disease of interest             3
Persons in population                   100
Years observed                          5

Incidence                               10 persons
Annual incidence rate                   10/500 person-years = 2,000 per 100,000 per year
Point prevalence (at end of year 5)     (10 + 12 - 3) = 19 persons
Period prevalence (five-year period)    (10 + 12) = 22 persons
Annual death rate                       5/500 person-years = 1,000 per 100,000 per year
Annual mortality rate                   3/500 person-years = 600 per 100,000 per year

*To simplify the calculations, this example assumes that all deaths occurred at the end of the five-year period, so that all 100 persons in the population were alive for the full five years (500 person-years of observation).
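The arithmetic behind table 1 can be made explicit in a short sketch that simply reproduces the counts above and, given the 500 person-years implied by the footnote, derives the rates.

```python
# Reproduce the measures in table 1 (100 persons observed for five years;
# per the footnote, all deaths occur at the end, giving 500 person-years).
new_cases = 10
previous_cases = 12
deaths_all_causes = 5
deaths_disease = 3
person_years = 100 * 5  # 500

print(f"Annual incidence rate: {new_cases / person_years * 100_000:.0f} per 100,000 per year")
print(f"Point prevalence (end of year 5): {new_cases + previous_cases - deaths_disease} persons")
print(f"Period prevalence (five years): {new_cases + previous_cases} persons")
print(f"Annual death rate: {deaths_all_causes / person_years * 100_000:.0f} per 100,000 per year")
print(f"Annual mortality rate: {deaths_disease / person_years * 100_000:.0f} per 100,000 per year")
```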

Prevalence includes point prevalence, the number of cases of disease at a point in time, and period prevalence, the total number of cases of a disease known to have existed at some time during a specified period.

Mortality, which concerns deaths rather than newly diagnosed cases of disease, reflects factors that cause disease as well as factors related to the quality of medical care, such as screening, access to medical care, and availability of effective treatments. Consequently, hypothesis-generating efforts and aetiological research may be more informative and easier to interpret when based on incidence rather than on mortality data. However, mortality data are often more readily available on large populations than incidence data.

The term death rate is generally accepted to mean the rate for deaths from all causes combined, whereas mortality rate is the rate of death from one specific cause. For a given disease, the case-fatality rate (technically a proportion, not a rate) is the number of persons dying from the disease during a specified time period divided by the number of persons with the disease. The complement of the case-fatality rate is the survival rate. The five-year survival rate is a common benchmark for chronic diseases such as cancer.

The occurrence of a disease may vary across subgroups of the population or over time. A disease measure for an entire population, without consideration of any subgroups, is called a crude rate. For example, an incidence rate for all age groups combined is a crude rate. The rates for the individual age groups are the age-specific rates. To compare two or more populations with different age distributions, age-adjusted (or age-standardized) rates should be calculated for each population by multiplying each age-specific rate by the proportion of the standard population (e.g., one of the populations under study, or the 1970 US population) in that age group, then summing over all age groups to produce an overall age-adjusted rate. Rates can be adjusted for factors other than age, such as race, gender or smoking status, if the category-specific rates are known.
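The direct age adjustment just described amounts to a weighted sum, as the following sketch shows; the age-specific rates and standard population weights are hypothetical.

```python
# Direct age standardization (hypothetical figures).
# age group -> (age-specific rate per 100,000, proportion of the standard
# population in that age group).
age_strata = {
    "<40":   (50.0, 0.50),
    "40-59": (200.0, 0.30),
    "60+":   (800.0, 0.20),
}

# Multiply each age-specific rate by its standard-population weight, then sum.
age_adjusted = sum(rate * weight for rate, weight in age_strata.values())
print(f"Age-adjusted rate: {age_adjusted:.0f} per 100,000")  # 245 per 100,000
```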

Surveillance and evaluation of descriptive data can provide clues to disease aetiology, identify high-risk subgroups that may be suitable for intervention or screening programmes, and provide data on the effectiveness of such programmes. Sources of information that have been used for surveillance activities include death certificates, medical records, cancer registries, other disease registries (e.g., birth defects registries, end-stage renal disease registries), occupational exposure registries, health or disability insurance records and workmen’s compensation records.

Measures of Association

Epidemiology attempts to identify and quantify factors that influence disease. In the simplest approach, the occurrence of disease among persons exposed to a suspect factor is compared to the occurrence among persons unexposed. The magnitude of an association between exposure and disease can be expressed in either absolute or relative terms. (See also "Case Study: Measures").

Absolute effects are measured by rate differences and risk differences (table 2). A rate difference is one rate minus a second rate. For example, if the incidence rate of leukaemia among workers exposed to benzene is 72 per 100,000 person-years and the rate among non-exposed workers is 12 per 100,000 person-years, then the rate difference is 60 per 100,000 person-years. A risk difference is a difference in risks or cumulative incidence and can range from -1 to 1. 

 


Table 2. Measures of association for a cohort study

             Cases    Person-years at risk    Rate per 100,000
Exposed      100      20,000                  500
Unexposed    200      80,000                  250
Total        300      100,000                 300

Rate difference (RD) = 500/100,000 - 250/100,000 = 250/100,000 per year (95% CI: 146.06/100,000 to 353.94/100,000)*
Rate ratio (or relative risk) (RR) = (500/100,000)/(250/100,000) = 2.0
Attributable risk in the exposed (ARe) = 100/20,000 - 200/80,000 = 250/100,000 per year
Attributable risk per cent in the exposed (ARe%) = [(RR - 1)/RR] × 100 = 50%
Population attributable risk (PAR) = 300/100,000 - 200/80,000 = 50/100,000 per year
Population attributable risk per cent (PAR%) = [(300 - 250)/300] × 100 = 16.7%

* 95% confidence interval, computed using formulas not reproduced here.
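The figures in table 2 follow directly from the cell counts; the sketch below reproduces them (the confidence interval is taken as given, since its formula is not reproduced here).

```python
# Measures of association for the cohort data in table 2.
cases_exp, py_exp = 100, 20_000      # exposed: cases, person-years
cases_unexp, py_unexp = 200, 80_000  # unexposed: cases, person-years

rate_exp = cases_exp / py_exp                                 # 500 per 100,000
rate_unexp = cases_unexp / py_unexp                           # 250 per 100,000
rate_total = (cases_exp + cases_unexp) / (py_exp + py_unexp)  # 300 per 100,000

rr = rate_exp / rate_unexp            # rate ratio
rd = rate_exp - rate_unexp            # rate difference (= ARe here)
ar_e_pct = (rr - 1) / rr * 100        # attributable risk % in the exposed
par = rate_total - rate_unexp         # population attributable risk
par_pct = par / rate_total * 100      # population attributable risk %

print(f"RD = {rd * 100_000:.0f} per 100,000 per year")
print(f"RR = {rr:.1f}")
print(f"ARe% = {ar_e_pct:.0f}%")
print(f"PAR = {par * 100_000:.0f} per 100,000 per year")
print(f"PAR% = {par_pct:.1f}%")
```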


 

Relative effects are based on ratios of rates or risk measures, instead of differences. A rate ratio is the ratio of a rate in one population to the rate in another. The rate ratio has also been called the risk ratio, relative risk, relative rate, and incidence (or mortality) rate ratio. The measure is dimensionless and ranges from 0 to infinity. When the rates in the two groups are equal (i.e., there is no effect from the exposure), the rate ratio is equal to unity (1). An exposure that increases risk yields a rate ratio greater than unity, while a protective factor yields a ratio between 0 and 1. The excess relative risk is the relative risk minus 1. For example, a relative risk of 1.4 may also be expressed as an excess relative risk of 40%.

In case-control studies (also called case-referent studies), persons with disease are identified (cases) and persons without disease are identified (controls or referents). Past exposures of the two groups are compared. The odds of exposure among the cases are compared to the odds of exposure among the controls. Complete counts of the source populations of exposed and unexposed persons are not available, so disease rates cannot be calculated. Instead, the cases can be compared to the controls by calculation of the relative odds, or the odds ratio (table 3).

 


Table 3. Measures of association for case-control studies: Exposure to wood dust and adenocarcinoma of the nasal cavity and paranasal sinuses

             Cases    Controls
Exposed      18       55
Unexposed    5        140
Total        23       195

Relative odds (odds ratio) (OR) = (18 × 140)/(55 × 5) = 9.2
Attributable risk per cent in the exposed (ARe%) = [(OR - 1)/OR] × 100 = 89%
Population attributable risk per cent (PAR%) = [pe(OR - 1)/(pe(OR - 1) + 1)] × 100 = 70%
where pe = proportion of exposed controls = 55/195 = 0.28

* 95% confidence intervals are computed using formulas not reproduced here.

Source: Adapted from Hayes et al. 1986.
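The case-control measures in table 3 can be reproduced the same way from the four cell counts.

```python
# Measures of association for the case-control data in table 3.
a, b = 18, 55    # exposed cases, exposed controls
c, d = 5, 140    # unexposed cases, unexposed controls

odds_ratio = (a * d) / (b * c)                   # relative odds
ar_e_pct = (odds_ratio - 1) / odds_ratio * 100   # attributable risk % in exposed
p_e = b / (b + d)                                # proportion of exposed controls
par_pct = p_e * (odds_ratio - 1) / (p_e * (odds_ratio - 1) + 1) * 100

print(f"OR = {odds_ratio:.1f}")    # 9.2
print(f"ARe% = {ar_e_pct:.0f}%")   # 89%
print(f"PAR% = {par_pct:.0f}%")    # 70%
```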


 

Relative measures of effect are used more frequently than absolute measures to report the strength of an association. Absolute measures, however, may provide a better indication of the public health impact of an association. A small relative increase in a common disease, such as heart disease, may affect more persons (large risk difference) and have more of an impact on public health than a large relative increase (but small absolute difference) in a rare disease, such as angiosarcoma of the liver.

Significance Testing

Testing for statistical significance is often performed on measures of effect to evaluate the likelihood that the observed effect differs from the null hypothesis (i.e., no effect). While many studies, particularly in other areas of biomedical research, may express significance by p-values, epidemiological studies typically present confidence intervals (CI) (also called confidence limits). A 95% confidence interval, for example, is a range of values for the effect measure that includes the estimate obtained from the study data and has a 95% probability of including the true value. Values outside the interval are deemed unlikely to include the true measure of effect. If the CI for a rate ratio includes unity, then there is no statistically significant difference between the groups being compared.

Confidence intervals are more informative than p-values alone. The size of a p-value is determined by either or both of two factors: the magnitude of the measure of association (e.g., rate ratio, risk difference) and the size of the populations under study. For example, a small difference in disease rates observed in a large population may yield a highly significant (small) p-value, but the relative contribution of the two factors cannot be identified from the p-value alone. Confidence intervals, however, allow us to disentangle them. First, the magnitude of the effect is discernible from the values of the effect measure and the numbers encompassed by the interval. Larger rate ratios, for example, indicate a stronger effect. Second, the size of the population affects the width of the confidence interval. Small populations with statistically unstable estimates generate wider confidence intervals than larger populations.

The level of confidence chosen to express the variability of the results (the “statistical significance”) is arbitrary, but has traditionally been 95%, which corresponds to a p-value of 0.05. A 95% confidence interval has a 95% probability of containing the true measure of the effect. Other levels of confidence, such as 90%, are occasionally used.
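To illustrate how study size drives the width of a confidence interval, the sketch below computes an approximate 95% confidence interval for a rate ratio on the log scale, using a common large-sample formula in which the variance of the log rate ratio is the sum of the reciprocals of the two case counts; this is one standard approximation among several.

```python
import math

def rate_ratio_ci(cases_exp, py_exp, cases_unexp, py_unexp, z=1.96):
    """Approximate CI for a rate ratio: exp(ln(RR) +/- z*sqrt(1/a + 1/b))."""
    rr = (cases_exp / py_exp) / (cases_unexp / py_unexp)
    se_log_rr = math.sqrt(1 / cases_exp + 1 / cases_unexp)
    return rr, rr * math.exp(-z * se_log_rr), rr * math.exp(z * se_log_rr)

# Same rate ratio, two study sizes: the larger study gives a narrower interval.
for scale in (1, 10):
    rr, lo, hi = rate_ratio_ci(100 * scale, 20_000 * scale,
                               200 * scale, 80_000 * scale)
    print(f"RR = {rr:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```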

Exposures can be dichotomous (e.g., exposed and unexposed), or may involve many levels of exposure. Effect measures (i.e., response) can vary by level of exposure. Evaluating exposure-response relationships is an important part of interpreting epidemiological data. The analogue to exposure-response in animal studies is “dose-response”. If the response increases with exposure level, an association is more likely to be causal than if no trend is observed. Statistical tests to evaluate exposure-response relationships include the Mantel extension test and the chi-square trend test.
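A test for linear trend in proportions across ordered exposure levels, closely related to the tests named above, can be sketched as follows; the exposure scores and counts are hypothetical.

```python
def chi_square_trend(levels):
    """Chi-square test for linear trend (1 df) in proportions.
    levels: list of (exposure score, cases, total) per ordered exposure level."""
    n_total = sum(n for _, _, n in levels)
    cases_total = sum(r for _, r, _ in levels)
    p = cases_total / n_total
    # Score-weighted departure of observed cases from expectation.
    t = sum(x * (r - n * p) for x, r, n in levels)
    sum_nx = sum(n * x for x, _, n in levels)
    sum_nx2 = sum(n * x * x for x, _, n in levels)
    var_t = p * (1 - p) * (sum_nx2 - sum_nx ** 2 / n_total)
    return t ** 2 / var_t

# Hypothetical data: cases/total at four increasing exposure levels.
levels = [(0, 10, 200), (1, 18, 200), (2, 26, 200), (3, 40, 200)]
print(f"Chi-square for trend (1 df) = {chi_square_trend(levels):.1f}")
# Values above 3.84 correspond to p < 0.05.
```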

Standardization

To take into account factors other than the primary exposure of interest and the disease, measures of association may be standardized through stratification or regression techniques. Stratification means dividing the populations into homogeneous groups with respect to the factor (e.g., gender groups, age groups, smoking groups). Risk ratios or odds ratios are calculated for each stratum and overall weighted averages of the risk ratios or odds ratios are calculated. These overall values reflect the association between the primary exposure and disease, adjusted for the stratification factor, i.e., the association with the effects of the stratification factor removed.

A standardized rate ratio (SRR) is the ratio of two standardized rates. In other words, an SRR is a weighted average of stratum-specific rate ratios where the weights for each stratum are the person-time distribution of the non-exposed, or referent, group. SRRs for two or more groups may be compared if the same weights are used. Confidence intervals can be constructed for SRRs as for rate ratios.

The standardized mortality ratio (SMR) is a weighted average of age-specific rate ratios where the weights (e.g., person-time at risk) come from the group under study and the rates come from the referent population, the opposite of the situation in an SRR. The usual referent population is the general population, whose mortality rates may be readily available and based on large numbers and thus are more stable than rates from a non-exposed cohort or a subgroup of the occupational population under study. Using the weights from the cohort instead of the referent population is called indirect standardization. The SMR is the ratio of the observed number of deaths in the cohort to the expected number, based on the rates from the referent population (the ratio is typically multiplied by 100 for presentation). If no association exists, the SMR equals 100. It should be noted that because the rates come from the referent population and the weights come from the study group, two or more SMRs tend not to be comparable. This non-comparability is often forgotten in the interpretation of epidemiological data, and erroneous conclusions can be drawn.
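The difference between the two weighting schemes can be made concrete. In the sketch below, the SRR weights the stratum-specific rate ratios by the referent group's person-time, while the SMR divides observed deaths by the number expected when referent rates are applied to the study group's own person-time; all figures are hypothetical, and the two measures do not agree.

```python
# SRR versus SMR with two age strata (hypothetical data).
# Each stratum: (study cases, study person-years,
#                referent rate per person-year, referent person-years).
strata = [
    (30, 10_000, 0.002, 500_000),  # younger stratum
    (60, 5_000, 0.010, 200_000),   # older stratum
]

# Stratum-specific rate ratios in the study group.
rate_ratios = [(c / py) / ref_rate for c, py, ref_rate, _ in strata]

# SRR: average of stratum rate ratios weighted by referent person-time.
ref_py = [py_ref for _, _, _, py_ref in strata]
srr = sum(rr * w for rr, w in zip(rate_ratios, ref_py)) / sum(ref_py)

# SMR: observed deaths over expected deaths, where the expectation applies
# referent rates to the study group's person-time (indirect standardization).
observed = sum(c for c, _, _, _ in strata)
expected = sum(ref_rate * py for _, py, ref_rate, _ in strata)
smr = 100 * observed / expected

print(f"SRR = {srr:.2f}")   # 1.41
print(f"SMR = {smr:.0f}")   # 129
```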

Healthy Worker Effect

It is very common for occupational cohorts to have lower total mortality than the general population, even if the workers are at increased risk for selected causes of death from workplace exposures. This phenomenon, called the healthy worker effect, reflects the fact that any group of employed persons is likely to be healthier, on average, than the general population, which includes workers as well as persons unable to work due to illnesses and disabilities. The overall mortality rate in the general population therefore tends to be higher than the rate in workers. The effect varies in strength by cause of death. For example, it appears to be less important for cancer in general than for chronic obstructive lung disease. One reason for this is that most cancers are unlikely to develop from any predisposition to cancer that influences the selection of a job or career at a younger age. The healthy worker effect in a given group of workers tends to diminish over time.

Proportional Mortality

Sometimes a complete tabulation of a cohort (i.e., person-time at risk) is not available and there is information only on the deaths or some subset of deaths experienced by the cohort (e.g., deaths among retirees and active employees, but not among workers who left employment before becoming eligible for a pension). Computation of person-years requires special methods to deal with person-time assessment, including life-table methods. Without total person-time information on all cohort members, regardless of disease status, SMRs and SRRs cannot be calculated. Instead, proportional mortality ratios (PMRs) can be used. A PMR is the ratio of the observed number of deaths from a specific cause to the expected number, where the expected number is the proportion of all deaths due to that cause in the referent population multiplied by the total number of deaths in the study group; the ratio is typically multiplied by 100 for presentation.

Because the proportions of deaths from all causes combined must sum to 1 (i.e., the PMR for all causes combined must equal 100), some PMRs may appear to be in excess, but are actually artificially inflated due to real deficits in other causes of death. Similarly, some apparent deficits may merely reflect real excesses of other causes of death. For example, if aerial pesticide applicators have a large real excess of deaths due to accidents, the mathematical requirement that the PMR for all causes combined equal 100 may cause one or more other causes of death to appear deficient even when their mortality is actually excessive. To ameliorate this potential problem, researchers interested primarily in cancer can calculate proportionate cancer mortality ratios (PCMRs). PCMRs compare the observed number of deaths from the cancer of interest to the number expected, based on the proportion of all cancer deaths (rather than all deaths) due to that cancer in the referent population multiplied by the total number of cancer deaths in the study group, multiplied by 100. Thus, the PCMR will not be affected by an aberration (excess or deficit) in a non-cancer cause of death, such as accidents, heart disease or non-malignant lung disease.
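A minimal sketch of the PMR and PCMR calculations, with invented counts and referent proportions:

```python
# PMR: observed deaths from a cause versus the expected number, where the
# expectation uses the referent population's proportion of deaths from that
# cause (hypothetical figures).
deaths_cause = 30          # study-group deaths from the cause of interest
deaths_total = 200         # all study-group deaths
ref_proportion = 0.10      # share of that cause among all referent deaths

pmr = 100 * deaths_cause / (ref_proportion * deaths_total)
print(f"PMR = {pmr:.0f}")  # 150

# PCMR: the same logic restricted to cancer deaths only.
site_cancer_deaths = 12        # study-group deaths from the cancer of interest
total_cancer_deaths = 60       # all study-group cancer deaths
ref_cancer_proportion = 0.15   # share of that cancer among referent cancer deaths

pcmr = 100 * site_cancer_deaths / (ref_cancer_proportion * total_cancer_deaths)
print(f"PCMR = {pcmr:.0f}")  # 133
```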

PMR studies can better be analysed using mortality odds ratios (MORs), in essence analysing the data as if they were from a case-control study. The “controls” are the deaths from a subset of all deaths that are thought to be unrelated to the exposure under study. For example, if the main interest of the study were cancer, mortality odds ratios could be calculated comparing exposure among the cancer deaths to exposure among the cardiovascular deaths. This approach, like the PCMR, avoids the problems with the PMR which arise when a fluctuation in one cause of death affects the apparent risk of another simply because the overall PMR must equal 100. The choice of the control causes of death is critical, however. As mentioned above, they must not be related to the exposure, but the possible relationship between exposure and disease may not be known for many potential control diseases.

Attributable Risk

There are measures available which express the amount of disease that would be attributable to an exposure if the observed association between the exposure and disease were causal. The attributable risk in the exposed (ARe) is the disease rate in the exposed minus the rate in the unexposed. Because disease rates cannot be measured directly in case-control studies, the ARe is calculable only for cohort studies. A related, more intuitive, measure, the attributable risk percent in the exposed (ARe%), can be obtained from either study design. The ARe% is the proportion of cases arising in the exposed population that is attributable to the exposure (see table 2 and table 3 for formula). The ARe% is the rate ratio (or the odds ratio) minus 1, divided by the rate ratio (or odds ratio), multiplied by 100.

The population attributable risk (PAR) and the population attributable risk per cent (PAR%), or aetiological fraction, express the amount of disease in the total population, which is comprised of exposed and unexposed persons, that is due to the exposure if the observed association is causal. The PAR can be obtained from cohort studies (table 2) and the PAR% can be calculated in both cohort and case-control studies (table 2 and table 3).

Representativeness

Several measures of risk have been described above. Each depends on the underlying methods used to count events and on the representativeness of those events for a defined group. When results are compared across studies, an understanding of the methods used is essential for explaining any observed differences.

 


Options in Study Design

The epidemiologist is interested in relationships between variables, chiefly exposure and outcome variables. Typically, epidemiologists want to ascertain whether the occurrence of disease is related to the presence of a particular agent (exposure) in the population. The ways in which these relationships are studied may vary considerably. One can identify all persons who are exposed to that agent and follow them up to measure the incidence of disease, comparing such incidence with disease occurrence in a suitable unexposed population. Alternatively, one can simply sample from among the exposed and unexposed, without having a complete enumeration of them. Or, as a third alternative, one can identify all people who develop a disease of interest in a defined time period (“cases”) and a suitable group of disease-free individuals (a sample of the source population of cases), and ascertain whether the patterns of exposure differ between the two groups. Follow-up of study participants is one option (in so-called longitudinal studies); in this situation, a time lag exists between the occurrence of exposure and disease onset. The alternative is a cross-sectional study of the population, in which both exposure and disease are measured at the same point in time.

In this article, attention is given to the common study designs—cohort, case-referent (case-control) and cross-sectional. To set the stage for this discussion, consider a large viscose rayon factory in a small town. An investigation into whether carbon disulphide exposure increases the risk of cardiovascular disease is started. The investigation has several design choices, some more and some less obvious. A first strategy is to identify all workers who have been exposed to carbon disulphide and follow them up for cardiovascular mortality.

Cohort Studies

A cohort study encompasses research participants sharing a common event, the exposure. A classical cohort study identifies a defined group of exposed people, and then everyone is followed up and their morbidity and/or mortality experience is registered. Apart from a common qualitative exposure, the cohort should also be defined on other eligibility criteria, such as age range, gender (male or female or both), minimum duration and intensity of exposure, freedom from other exposures, and the like, to enhance the study’s validity and efficiency. At entrance, all cohort members should be free of the disease under study, according to the empirical set of criteria used to measure the disease.

If, for example, in the cohort study on the effects of carbon disulphide on coronary morbidity, coronary heart disease is empirically measured as clinical infarctions, those who, at the baseline, have had a history of coronary infarction must be excluded from the cohort. By contrast, electrocardiographic abnormalities without a history of infarction can be accepted. However, if the appearance of new electrocardiographic changes is the empirical outcome measure, the cohort members should also have normal electrocardiograms at the baseline.

The morbidity (in terms of incidence) or the mortality of an exposed cohort should be compared to a reference cohort which ideally should be as similar as possible to the exposed cohort in all relevant aspects, except for the exposure, to determine the relative risk of illness or death from exposure. Using a similar but unexposed cohort as a provider of the reference experience is preferable to the common (mal)practice of comparing the morbidity or mortality of the exposed cohort to age-standardized national figures, because the general population falls short of fulfilling even the most elementary requirements for comparison validity. The Standardized Morbidity (or Mortality) Ratio (SMR), resulting from such a comparison, usually generates an underestimate of the true risk ratio because of a bias operating in the exposed cohort, leading to a lack of comparability between the two populations. This comparison bias has been named the “Healthy Worker Effect”. However, it is not really a true “effect”, but a bias from negative confounding, which in turn has arisen from health-selective turnover in an employed population. (People with poor health tend to move out from, or never enter, “exposed” cohorts, their end destination often being the unemployed section of the general population.)

Because an “exposed” cohort is defined as having a certain exposure, only effects caused by that single exposure (or mix of exposures) can be studied simultaneously. On the other hand, the cohort design permits the study of several diseases at the same time. One can also study concomitantly different manifestations of the same disease—for example, angina, ECG changes, clinical myocardial infarctions and coronary mortality. While well-suited to test specific hypotheses (e.g., “exposure to carbon disulphide causes coronary heart disease”), a cohort study also provides answers to the more general question: “What diseases are caused by this exposure?”

For example, in a cohort study investigating the risk to foundry workers of dying from lung cancer, the mortality data are obtained from the national register of causes of death. Although the study was designed to determine whether foundry dust causes lung cancer, the same data source, with the same effort, also gives information on all other causes of death. Therefore, other possible health risks can be studied at the same time.

The timing of a cohort study can either be retrospective (historical) or prospective (concurrent). In both instances the design structure is the same. A full enumeration of exposed people occurs at some point or period in time, and the outcome is measured for all individuals through a defined end point in time. The difference between prospective and retrospective is in the timing of the study. If retrospective, the end point has already occurred; if prospective, one has to wait for it.

In the retrospective design, the cohort is defined at some point in the past (for example, those exposed on 1 January 1961, or those taking on exposed work between 1961 and 1970). The morbidity and/or mortality of all cohort members is then followed to the present. Although “all” means that even those who have left the job must be traced, in practice 100 per cent coverage can rarely be achieved. However, the more complete the follow-up, the more valid the study.

In the prospective design, the cohort is defined at the present, or during some future period, and the morbidity is then followed into the future.

When doing cohort studies, enough time must be allowed for follow-up in order that the end points of concern have sufficient time to manifest. Sometimes, because historical records may be available for only a short period into the past, it is nevertheless desirable to take advantage of this data source because it means that a shorter period of prospective follow-up would be needed before results from the study could be available. In these situations, a combination of the retrospective and the prospective cohort study designs can be efficient. The general layout of frequency tables presenting cohort data is shown in table 1.

Table 1. The general layout of frequency tables presenting cohort data

Component of disease rate      Exposed cohort    Unexposed cohort
Cases of illness or death      c1                c0
Number of people in cohort     N1                N0

 

The observed proportion of diseased in the exposed cohort is calculated as R1 = c1/N1, and that of the reference cohort as R0 = c0/N0. The rate ratio is then expressed as RR = R1/R0 = (c1/N1)/(c0/N0).

N0 and N1 are usually expressed in person-time units instead of as the number of people in the populations. Person-years are computed for each individual separately. Different people often enter the cohort during a period of time, not at the same date. Hence their follow-up times start at different dates. Likewise, after their death, or after the event of interest has occurred, they are no longer “at risk” and should not continue to contribute person-years to the denominator.
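Person-year accumulation can be sketched as follows: each individual contributes time from their own entry date until death (or the event of interest) or the end of follow-up, whichever comes first. The dates are invented.

```python
from datetime import date

END_OF_FOLLOW_UP = date(1990, 12, 31)

def person_years(entry_date, event_date=None):
    """Years at risk from entry until the event or the end of follow-up."""
    stop = event_date if event_date is not None else END_OF_FOLLOW_UP
    stop = min(stop, END_OF_FOLLOW_UP)
    return (stop - entry_date).days / 365.25

# Three cohort members entering on different dates (hypothetical).
cohort = [
    (date(1961, 1, 1), None),               # followed to the end of the study
    (date(1965, 6, 1), date(1980, 3, 15)),  # event occurred during follow-up
    (date(1970, 2, 1), None),
]

total_py = sum(person_years(entry, event) for entry, event in cohort)
print(f"Total person-years at risk: {total_py:.1f}")
```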

If the RR is greater than 1, the morbidity of the exposed cohort is higher than that of the reference cohort, and vice versa. The RR is a point estimate and a confidence interval (CI) should be computed for it. The larger the study, the narrower the confidence interval will become. If RR = 1 is not included in the confidence interval (e.g., the 95% CI is 1.4 to 5.8), the result can be considered as “statistically significant” at the chosen level of probability (in this example, α = 0.05).

If the general population is used as the reference population, c0 is substituted by the “expected” figure, E(c1), derived from the age-standardized morbidity or mortality rates of that population (i.e., the number of cases that would have occurred in the cohort, had the exposure of interest not taken place). This yields the Standardized Mortality (or Morbidity) Ratio, SMR. Thus, SMR = 100 × c1/E(c1).

Also for the SMR, a confidence interval should be computed. It is better to give this measure in a publication than a p-value, because statistical significance testing is meaningless if the general population is the reference category. Such comparison entails a considerable bias (the healthy worker effect noted above), and statistical significance testing, originally developed for experimental research, is misleading in the presence of systematic error.

Suppose the question is whether quartz dust causes lung cancer. Usually, quartz dust occurs together with other carcinogens—such as radon daughters and diesel exhaust in mines, or polyaromatic hydrocarbons in foundries. Granite quarries do not expose the stone workers to these other carcinogens. Therefore the problem is best studied among stone workers employed in granite quarries.

Suppose then that all 2,000 workers, having been employed by 20 quarries between 1951 and 1960, are enrolled in the cohort and their cancer incidence (alternatively only mortality) is followed starting at ten years after first exposure (to allow for an induction time) and ending in 1990. This is a 20- to 30-year (depending on the year of entry) or, say, on average, a 25-year follow-up of the cancer mortality (or morbidity) among the 1,000 quarry workers who were specifically granite workers. The exposure history of each cohort member must be recorded. Those who have left the quarries must be traced and their later exposure history recorded. In countries where all inhabitants have unique registration numbers, this is a straightforward procedure, governed chiefly by national data protection laws. Where no such system exists, tracing employees for follow-up purposes can be extremely difficult. Where appropriate death or disease registries exist, the mortality from all causes, all cancers and specific cancer sites can be obtained from the national register of causes of death. (For cancer mortality, the national cancer registry is a better source because it contains more accurate diagnoses. In addition, incidence (or morbidity) data can also be obtained.) The death rates (or cancer incidence rates) can be compared to “expected numbers”, computed from national rates using the person-years of the exposed cohort as a basis.

Suppose that 70 fatal cases of lung cancer are found in the cohort, whereas the expected number (the number which would have occurred had there been no exposure) is 35. Then:

c1 = 70, E(c1) = 35

Thus, SMR = 100 × 70/35 = 200, which indicates a twofold increase in risk of dying from lung cancer among the exposed. If detailed exposure data are available, the cancer mortality can be studied as a function of different latency times (say, 10, 15, 20 years), work in different types of quarries (different kinds of granite), different historical periods, different exposure intensities and so on. However, 70 cases cannot be subdivided into too many categories, because the number falling into each one rapidly becomes too small for statistical analysis.
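For completeness, one common approximation treats the observed count as a Poisson variable and builds the confidence interval on the log scale, with 1/√c1 as the standard error of ln(SMR); the sketch below applies it to the figures above. Other, more exact methods exist.

```python
import math

def smr_with_ci(observed, expected, z=1.96):
    """SMR (x100) with an approximate 95% CI: exp(ln(O/E) +/- z/sqrt(O))."""
    smr = observed / expected
    half_width = z / math.sqrt(observed)
    return (100 * smr,
            100 * smr * math.exp(-half_width),
            100 * smr * math.exp(half_width))

smr, lo, hi = smr_with_ci(70, 35)
print(f"SMR = {smr:.0f}, approximate 95% CI = ({lo:.0f}, {hi:.0f})")  # 200 (158-253)
```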

Both types of cohort designs have advantages and disadvantages. A retrospective study can, as a rule, measure only mortality, because data for milder manifestations usually are lacking. Cancer registries are an exception, and perhaps a few others, such as stroke registries and hospital discharge registries, in that incidence data also are available. Assessing past exposure is always a problem and the exposure data are usually rather weak in retrospective studies. This can lead to effect masking. On the other hand, since the cases have already occurred, the results of the study become available much sooner; in, say, two to three years.

A prospective cohort study can be better planned to comply with the researcher’s needs, and exposure data can be collected accurately and systematically. Several different manifestations of a disease can be measured. Measurements of both exposure and outcome can be repeated, and all measurements can be standardized and their validity checked. However, if the disease has a long latency (such as cancer), much time—even 20 to 30 years—will need to pass before the results of the study can be obtained. Much can happen during this time: turnover of researchers, improvements in techniques for measuring exposure, and remodelling or closure of the plants chosen for study. All these circumstances endanger the success of the study. The costs of a prospective study are also usually higher than those of a retrospective study, but this is mostly due to the much greater number of measurements (repeated exposure monitoring, clinical examinations and so on), and not to more expensive death registration. Therefore the costs per unit of information do not necessarily exceed those of a retrospective study. In view of all this, prospective studies are more suited to diseases with a rather short latency, requiring short follow-up, while retrospective studies are better for diseases with a long latency.

Case-Control (or Case-Referent) Studies

Let us go back to the viscose rayon plant. A retrospective cohort study may not be feasible if the rosters of the exposed workers have been lost, while a prospective cohort study would yield sound results only after a very long time. An alternative would then be a comparison between those who died from coronary heart disease in the town, in the course of a defined time period, and a sample of the total population in the same age group.

The classical case-control (or, case-referent) design is based on sampling from a dynamic (open, characterized by a turnover of membership) population. This population can be that of a whole country, a district or a municipality (as in our example), or it can be the administratively defined population from which patients are admitted to a hospital. The defined population provides both the cases and the controls (or referents).

The technique is to gather all the cases of the disease in question that exist at a point in time (prevalent cases), or that have occurred during a defined period of time (incident cases). The cases thus can be drawn from morbidity or mortality registries, or be gathered directly from hospitals or other sources having valid diagnostics. The controls are drawn as a sample from the same population, either from among non-cases or from the entire population. Another option is to select patients with another disease as controls, but then these patients must be representative of the population from which the cases came. There may be one or more controls (i.e., referents) for each case. The sampling approach differs from that of cohort studies, which examine the entire population. The gains in terms of the lower costs of case-control designs are considerable, but it is important that the sample be representative of the whole population from which the cases originated (i.e., the “study base”)—otherwise the study can be biased.

When cases and controls have been identified, their exposure histories are gathered by questionnaires, interviews or, in some instances, from existing records (e.g., payroll records from which work histories can be deduced). The data can be obtained either from the participants themselves or, if they are deceased, from close relatives. To ensure symmetrical recall, it is important that the proportion of dead and live participants be equal among cases and referents, because close relatives usually give a less detailed exposure history than the participants themselves. Information about the exposure pattern among cases is compared to that among controls, providing an estimate of the odds ratio (OR), an indirect measure of the disease risk among the exposed relative to that among the unexposed.

Because the case-control design relies on the exposure information obtained from patients with a certain disease (i.e., cases) along with a sample of non-diseased people (i.e., controls) from the population from which the cases originated, the connection with exposures can be investigated for only one disease. By contrast, this design allows the concomitant study of the effect of several different exposures. The case-referent study is well suited to address specific research questions (e.g., “Is coronary heart disease caused by exposure to carbon disulphide?”), but it also can help to answer the more general question: “What exposures can cause this disease?”

Suppose, as an example, that the question is raised in Europe of whether exposure to organic solvents causes primary liver cancer. Cases of primary liver cancer, a comparatively rare disease in Europe, are best gathered from a national cancer registry. Suppose that all cases occurring during three years form the case series. The population base for the study is then a three-year follow-up of the entire population in the European country in question. The controls are drawn as a sample of persons without liver cancer from the same population. For reasons of convenience (meaning that the same source can be used for sampling the controls), patients with another cancer type, not related to solvent exposure, can be used as controls. Colon cancer has no known relation to solvent exposure; hence this cancer type can be included among the controls. (Using cancer controls minimizes recall bias in that the accuracy of the history given by cases and controls is, on average, symmetrical. However, if some presently unknown connection between colon cancer and exposure to solvents were revealed later, this type of control would cause an underestimation of the true risk—not an exaggeration of it.)

For each case of liver cancer, two controls are drawn in order to achieve greater statistical power. (One could draw even more controls, but available funds may be a limiting factor. If funds were not limited, perhaps as many as four controls would be optimal; beyond four, the law of diminishing returns applies.) After obtaining appropriate permission from the data protection authorities, the cases and controls, or their close relatives, are approached, usually by means of a mailed questionnaire, asking for a detailed occupational history with special emphasis on a chronological list of the names of all employers, the departments of work, the job tasks in different employments and the period of employment in each respective task. These data can be obtained from relatives, although with some difficulty; specific chemicals or trade names, in particular, usually are not well recalled by relatives. The questionnaire also should include questions on possible confounders, such as alcohol use, exposure to foodstuffs containing aflatoxins, and hepatitis B and C infection. In order to obtain a sufficiently high response rate, two reminders are sent to non-respondents at three-week intervals; this usually results in a final response rate in excess of 70%. The occupational history is then reviewed by an industrial hygienist, without knowledge of the respondent’s case or control status, and exposure is classified into high, medium, low, none, and unknown exposure to solvents. The ten years of exposure immediately preceding the cancer diagnosis are disregarded, because it is not biologically plausible that initiator-type carcinogens can be the cause of the cancer if the latency time is that short (although promoters, in fact, could). At this stage it is also possible to differentiate between different types of solvent exposure. Because a complete employment history has been given, it is also possible to explore other exposures, even though the initial study hypothesis did not include them. Odds ratios can then be computed for exposure to any solvent, specific solvents, solvent mixtures, different categories of exposure intensity and different time windows in relation to cancer diagnosis. It is advisable to exclude from the analysis those with unknown exposure.
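As an illustration of the final analysis step, the sketch below (with entirely hypothetical participants and counts) tabulates case-control status against the hygienist's exposure categories and computes an OR for each category relative to the unexposed, with "unknown" excluded as advised:

```python
from collections import Counter

# Hypothetical classified records, one (status, exposure category) pair per
# participant; categories follow the text: high / medium / low / none / unknown
records = [
    ("case", "high"), ("case", "high"), ("case", "medium"), ("case", "low"),
    ("case", "none"),
    ("control", "high"), ("control", "medium"), ("control", "medium"),
    ("control", "low"), ("control", "low"),
    ("control", "none"), ("control", "none"), ("control", "none"),
]

counts = Counter(records)

def odds_ratio(category, reference="none"):
    """OR for one exposure category versus the unexposed."""
    c1, c0 = counts[("case", category)], counts[("case", reference)]
    n1, n0 = counts[("control", category)], counts[("control", reference)]
    return (c1 * n0) / (c0 * n1)

# 'unknown' exposure is excluded from the analysis, as advised in the text
for category in ("high", "medium", "low"):
    print(category, round(odds_ratio(category), 2))
```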

The cases and controls can be sampled and analysed either as independent series or as matched groups. Matching means that controls are selected for each case based on certain characteristics or attributes, to form pairs (or sets, if more than one control is chosen for each case). Matching is usually based on one or more factors, such as age, vital status, smoking history, calendar time of case diagnosis and the like. In our example, cases and controls are then matched on age and vital status. (Vital status is important, because patients themselves usually give a more accurate exposure history than close relatives, and symmetry is essential for validity reasons.) Today, the recommendation is to be restrictive with matching, because this procedure can introduce negative (effect-masking) confounding.

If one control is matched to one case, the design is called a matched-pair design. Provided the costs of studying more controls are not prohibitive, more than one referent per case improves the stability of the estimate of the OR, which makes the study more size-efficient.

The layout of the results of an unmatched case-control study is shown in table 2.

Table 2. Sample layout of case-control data

                 Exposure classification
                 Exposed     Unexposed
Cases            c1          c0
Non-cases        n1          n0

From this table, the odds of exposure among the cases and the odds of exposure among the population (the controls) can be computed and divided to yield the exposure odds ratio, OR. For the cases, the exposure odds is c1 / c0, and for the controls it is n1 / n0. The estimate of the OR is then:

OR = (c1 / c0) / (n1 / n0) = (c1 × n0) / (c0 × n1)

If relatively more cases than controls have been exposed, the OR is in excess of 1 and vice versa. Confidence intervals must be calculated and provided for the OR, in the same manner as for the RR.

By way of a further example, an occupational health centre of a large company serves 8,000 employees exposed to a variety of dusts and other chemical agents. We are interested in the connection between mixed dust exposure and chronic bronchitis. The study involves follow-up of this population for one year. We have set the diagnostic criteria for chronic bronchitis as “morning cough and phlegm production for three months during two consecutive years”. Criteria for “positive” dust exposure are defined before the study begins. Each patient visiting the health centre and fulfilling these criteria during a one-year period is a case, and the next patient seeking medical advice for non-pulmonary problems is defined as a control. Suppose that 100 cases and 100 controls are enrolled during the study period. Let 40 cases and 15 controls be classified as having been exposed to dust. Then

c1 = 40, c0 = 60, n1 = 15, and n0 = 85.

Consequently,

OR = (c1 × n0) / (c0 × n1) = (40 × 85) / (60 × 15) ≈ 3.8
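A minimal sketch of this computation in code, including a 95% confidence interval by Woolf's logit method (one common choice among several), using the numbers of the bronchitis example:

```python
import math

# Chronic bronchitis example: exposed/unexposed cases and controls
c1, c0, n1, n0 = 40, 60, 15, 85

or_hat = (c1 * n0) / (c0 * n1)           # (40*85)/(60*15) = 3.78

# Woolf's (logit) method for an approximate 95% confidence interval
se = math.sqrt(1/c1 + 1/c0 + 1/n1 + 1/n0)
ci_low = math.exp(math.log(or_hat) - 1.96 * se)
ci_high = math.exp(math.log(or_hat) + 1.96 * se)

print(f"OR = {or_hat:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```

With these figures the interval, roughly 1.9 to 7.5, excludes unity.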

In the foregoing example, no consideration has been given to the possibility of confounding, which may lead to a distortion of the OR due to systematic differences between cases and controls in a variable like age. One way to reduce this bias is to match controls to cases on age or other suspect factors. This results in a data layout depicted in table 3.

Table 3. Layout of case-control data if one control is matched to each case

                         Referents
Cases                Exposure (+)    Exposure (–)
Exposure (+)         f++             f+–
Exposure (–)         f–+             f––

The analysis focuses on the discordant pairs: that is, “case exposed, control unexposed” (f+–) and “case unexposed, control exposed” (f–+). When both members of a pair are exposed or unexposed, the pair is disregarded. The OR in a matched-pair study design is defined as

OR = f+– / f–+

In a study on the association between nasal cancer and wood dust exposure, there were altogether 164 case-control pairs. In only one pair had both the case and the control been exposed, and in 150 pairs neither the case nor the control had been exposed. These pairs are not considered further. The case, but not the control, had been exposed in 12 pairs, and the control, but not the case, in one pair. Hence,

OR = f+– / f–+ = 12 / 1 = 12

and because unity is not included in the 95% confidence interval computed for this OR, the result is statistically significant—that is, there is a statistically significant association between nasal cancer and wood dust exposure.
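The same calculation can be sketched in code, with the discordant pair counts taken from the example above; the interval here uses a simple log-scale approximation, so the limits are indicative only and may differ slightly from those originally reported:

```python
import math

# Nasal cancer / wood dust example: only discordant pairs contribute
f_case_only = 12    # case exposed, control unexposed (f+-)
f_control_only = 1  # control exposed, case unexposed (f-+)

or_hat = f_case_only / f_control_only    # 12

# Approximate 95% CI on the log scale
se = math.sqrt(1/f_case_only + 1/f_control_only)
ci_low = math.exp(math.log(or_hat) - 1.96 * se)
ci_high = math.exp(math.log(or_hat) + 1.96 * se)

print(f"Matched-pair OR = {or_hat:.0f}, "
      f"approximate 95% CI ({ci_low:.1f}, {ci_high:.1f})")
```

The approximate interval, roughly 1.6 to 92, is wide because only 13 pairs are informative, but it still excludes unity.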

Case-control studies are more efficient than cohort studies when the disease is rare; they may in fact provide the only option. However, common diseases also can be studied by this method. If the exposure is rare, an exposure-based cohort is the preferable or only feasible epidemiological design. Of course, cohort studies also can be carried out on common exposures. The choice between cohort and case-control designs when both the exposure and disease are common is usually decided taking validity considerations into account.

Because case-control studies rely on retrospective exposure data, usually based on the participants’ recall, their weak point is the inaccuracy and crudeness of the exposure information, which results in effect-masking through non-differential (symmetrical) misclassification of exposure status. Moreover, recall can sometimes be asymmetrical between cases and controls, cases usually being believed to remember “better” (i.e., recall bias).

Selective recall can cause an effect-magnifying bias through differential (asymmetrical) misclassification of exposure status. The advantages of case-control studies lie in their cost-effectiveness and their ability to provide a solution to a problem relatively quickly. Because of the sampling strategy, they allow the investigation of very large target populations (e.g., through national cancer registries), thereby increasing the statistical power of the study. In countries where data protection legislation or lack of good population and morbidity registries hinders the execution of cohort studies, hospital-based case-control studies may be the only practical way to conduct epidemiological research.

Case-control sampling within a cohort (nested case-control study designs)

A cohort study also can be designed for sampling instead of complete follow-up. Such a design is traditionally called a “nested” case-control study. A sampling approach within the cohort sets different requirements on cohort eligibility, because the comparisons are now made within the same cohort. The cohort should therefore include not only heavily exposed workers, but also less-exposed and even unexposed workers, in order to provide exposure contrasts within itself. It is important to realize this difference in eligibility requirements when assembling the cohort. If a full cohort analysis is first carried out on a cohort whose eligibility criterion was heavy exposure, and a “nested” case-control study is done later on the same cohort, the study becomes insensitive. This introduces effect-masking, because the exposure contrasts are insufficient “by design” owing to a lack of variability in the exposure experience among members of the cohort.

However, provided the cohort has a broad range of exposure experience, the nested case-control approach is very attractive. One gathers all the cases arising in the cohort over the follow-up period to form the case series, while only a sample of the non-cases is drawn for the control series. The researchers then, as in the traditional case-control design, gather detailed information on the exposure experience by interviewing cases and controls (or, their close relatives), by scrutinizing the employers’ personnel rolls, by constructing a job exposure matrix, or by combining two or more of these approaches. The controls can either be matched to the cases or they can be treated as an independent series.

The sampling approach can be less costly than exhaustive information procurement on each member of the cohort. In particular, because only a sample of controls is studied, more resources can be devoted to detailed and accurate exposure assessment for each case and control. However, the same statistical power problems prevail as in classical cohort studies. To achieve adequate statistical power, the cohort must comprise an adequate number of exposed cases, the required number depending on the magnitude of the risk to be detected.
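A minimal sketch of the control-sampling step is given below, using incidence density (risk-set) sampling, a standard approach for nested designs: for each case, controls are drawn from cohort members still under follow-up and not yet cases at the case's diagnosis year. The miniature cohort is hypothetical, and a real design would typically also match controls to cases on factors such as age:

```python
import random

# Hypothetical cohort: worker id -> (entry_year, exit_year, case_year or None)
cohort = {
    1: (1970, 1990, 1985),
    2: (1972, 1990, None),
    3: (1970, 1980, None),
    4: (1975, 1990, 1988),
    5: (1971, 1990, None),
}

def risk_set_sample(cohort, n_controls=2, seed=0):
    """For each case, draw controls from members still at risk
    (under follow-up and not yet cases) at the case's diagnosis year."""
    rng = random.Random(seed)
    sampled = []
    for wid, (entry, exit_, case_year) in cohort.items():
        if case_year is None:
            continue
        at_risk = [w for w, (e, x, c) in cohort.items()
                   if w != wid and e <= case_year <= x
                   and (c is None or c > case_year)]
        controls = rng.sample(at_risk, min(n_controls, len(at_risk)))
        sampled.append({"case": wid, "year": case_year, "controls": controls})
    return sampled

print(risk_set_sample(cohort))
```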

Cross-sectional study designs

In a scientific sense, a cross-sectional design is a cross-section of the study population, without any consideration given to time. Both exposure and morbidity (prevalence) are measured at the same point in time.

From the aetiological point of view, this study design is weak, partly because it deals with prevalence as opposed to incidence. Prevalence is a composite measure, depending both on the incidence and duration of the disease. This also restricts the use of cross-sectional studies to diseases of long duration. Even more serious is the strong negative bias caused by the health-dependent elimination from the exposed group of those people more sensitive to the effects of exposure. Therefore aetiological problems are best solved by longitudinal designs. Indeed, cross-sectional studies do not permit any conclusions about whether exposure preceded disease, or vice versa. The cross-section is aetiologically meaningful only if a true time relation exists between the exposure and the outcome, meaning that present exposure must have immediate effects. However, the exposure can be cross-sectionally measured so that it represents a longer past time period (e.g., the blood lead level), while the outcome measure is one of prevalence (e.g., nerve conduction velocities). The study then is a mixture of a longitudinal and a cross-sectional design rather than a mere cross-section of the study population.
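The composite nature of prevalence can be made explicit: in a steady state, and for a reasonably rare disease, prevalence is approximately the product of incidence and mean disease duration, P ≈ I × D. For instance, an incidence of 2 cases per 1,000 person-years combined with a mean duration of five years yields a prevalence of about 1%, whereas the same incidence with a duration of only a few weeks yields a prevalence close to zero.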

Cross-sectional descriptive surveys

Cross-sectional surveys are often useful for practical and administrative, rather than for scientific, purposes. Epidemiological principles can be applied to systematic surveillance activities in the occupational health setting, such as:

  • observation of morbidity in relation to occupation, work area, or certain exposures
  • regular surveys of workers exposed to known occupational hazards
  • examination of workers coming into contact with new health hazards
  • biological monitoring programmes
  • exposure surveys to identify and quantify hazards
  • screening programmes of different worker groups
  • assessing the proportion of workers in need of prevention or regular control (e.g., blood pressure, coronary heart disease).

 

It is important to choose representative, valid and specific morbidity indicators for all types of surveys. A survey or a screening programme can use only a rather small number of tests, in contrast to clinical diagnostics, and therefore the predictive value of the screening test is important. Insensitive methods fail to detect the disease of interest, while insufficiently specific methods produce too many falsely positive results. It is not worthwhile to screen for rare diseases in an occupational setting. All case-finding (i.e., screening) activities also require a mechanism for taking care of those with “positive” findings, both in terms of diagnostics and therapy; otherwise only frustration will result, with the potential for more harm than good.
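The point about predictive value can be illustrated with a short computation; the sensitivity and specificity figures below are illustrative assumptions:

```python
# Positive predictive value (PPV) of a screening test: the probability
# that a worker with a positive result actually has the disease.
def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Even a fairly good test performs poorly when the disease is rare:
for prevalence in (0.10, 0.01, 0.001):
    print(f"prevalence {prevalence:.3f}: PPV = {ppv(0.95, 0.95, prevalence):.2f}")
```

At a prevalence of one per thousand, only about 2% of positive results are true positives, which is why screening for rare diseases in an occupational setting is rarely worthwhile.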

 


Although epidemiological studies of occupationally acquired pneumonia (OAP) are limited, work-related lung infections are thought to be declining in frequency worldwide. In contrast, OAPs in developed nations may be increasing in occupations associated with biomedical research or health care. OAP in hospital workers largely reflects the prevalent community-acquired pathogens, but the re-emergence of tuberculosis, measles and pertussis in health care settings presents additional risk for health care occupations. In developing nations, and in specific occupations in developed nations, many OAPs are caused by unique infectious pathogens that do not commonly circulate in the community.

Attributing infection to occupational rather than community exposure can be difficult, especially for hospital workers. In the past, occupational risk was documented with certainty only in situations where workers were infected with agents that occurred in the workplace but were not present in the community. In the future, the use of molecular techniques to track specific microbial clones through workplaces and communities will make risk determinations clearer.

Like community-acquired pneumonia, OAP results from microaspiration of bacteria that colonize the oropharynx, inhalation of respirable infectious particles, or haematogenous seeding of the lungs. Most community-acquired pneumonia results from microaspiration, but OAP is usually due to inhalation of infectious 0.5 to 10 μm airborne particles in the workplace. Larger particles fail to reach the alveoli because of impaction or sedimentation onto the walls of the large airways and are subsequently cleared. Smaller particles remain suspended during inspiratory and expiratory flow and are rarely deposited in the alveoli. For some diseases, such as the haemorrhagic fever with renal syndrome associated with hantavirus infection, the principal mode of transmission is inhalation but the primary focus of disease may not be the lungs. Occupationally acquired pathogens that are not transmitted by inhalation may secondarily involve the lungs but will not be discussed here.

This review briefly discusses some of the most important occupationally acquired pathogens. A more extensive list of occupationally acquired pulmonary disorders, classified by specific aetiologies, is presented in table 1.

Table 1. Occupationally acquired infectious diseases contracted via microaspiration or inhalation of infectious particles

Disease (pathogen) | Reservoir | At-risk populations

Bacteria, chlamydia, mycoplasma and rickettsia

Brucellosis (Brucella spp.) | Livestock (cattle, goats, pigs) | Veterinary care workers, agricultural workers, laboratory workers, abattoir workers
Inhalation anthrax (Bacillus anthracis) | Animal products (wools, hides) | Agricultural workers, tanners, abattoir workers, textile workers, laboratory workers
Pneumonic plague (Yersinia pestis) | Wild rodents | Veterinary care workers, hunters/trappers, laboratory workers
Pertussis (Bordetella pertussis) | Humans | Employees of nursing homes, health care workers
Legionnaires’ disease (Legionella spp.) | Contaminated water sources (e.g., cooling towers, evaporator condensers) | Health care workers, laboratory workers, industrial laboratory workers, water well excavators
Melioidosis (Pseudomonas pseudomallei) | Soil, stagnant water, rice fields | Military personnel, agricultural workers
Streptococcus pneumoniae | Humans | Health care workers, agricultural workers, subterranean miners
Neisseria meningitidis | Humans | Health care workers, laboratory workers, military personnel
Pasteurellosis (Pasteurella multocida) | Variety of domesticated (cats, dogs) and wild animals | Agricultural workers, veterinary care workers
Respiratory tularaemia (Francisella tularensis) | Wild rodents and rabbits | Manual labourers, military personnel, laboratory workers, hunters/trappers, agricultural workers
Ornithosis (Chlamydia psittaci) | Birds | Pet shop workers, poultry production workers, veterinary care workers, laboratory workers
TWAR pneumonia (Chlamydia pneumoniae) | Humans | Health care workers, military personnel
Q fever (Coxiella burnetii) | Domesticated animals (cattle, sheep, goats) | Laboratory workers, textile workers, abattoir workers, dairy cattle workers, veterinary care workers
Atypical pneumonia (Mycoplasma pneumoniae) | Humans | Military personnel, health care workers, institutional workers

Fungi/Mycobacteria

Histoplasmosis (Histoplasma capsulatum) | Soil; bird or bat excrement (endemic to eastern North America) | Agricultural workers, laboratory workers, manual labourers
Coccidioidomycosis (Coccidioides immitis) | Soil (endemic to western North America) | Military personnel, agricultural workers, manual labourers, textile workers, laboratory workers
Blastomycosis (Blastomyces dermatitidis) | Soil (endemic to eastern North America) | Laboratory workers, agricultural workers, manual labourers, forestry workers
Paracoccidioidomycosis (Paracoccidioides brasiliensis) | Soil (endemic to Venezuela, Colombia, Brazil) | Agricultural workers
Sporotrichosis (Sporothrix schenckii) | Plant debris, tree and garden plant bark | Gardeners, florists, miners
Tuberculosis (Mycobacterium tuberculosis, M. bovis, M. africanum) | Human and non-human primates, cattle | Hard rock miners, foundry workers, health care and laboratory workers, abattoir workers, veterinary care workers, military personnel, tavern workers
Mycobacteriosis other than tuberculosis (Mycobacterium spp.) | Soil | Silica-exposed workers, including sandblasters

Viruses

Hantavirus | Rodents | Agricultural workers, herders, rodent control workers
Measles | Humans | Health care and laboratory workers
Rubella | Humans | Health care and laboratory workers
Influenza | Humans | Health care and laboratory workers
Varicella zoster | Humans | Health care and laboratory workers, military personnel
Respiratory syncytial virus | Humans | Health care and laboratory workers
Adenovirus | Humans | Health care and laboratory workers, military personnel
Parainfluenza virus | Humans | Health care and laboratory workers
Lymphocytic choriomeningitis virus (arenavirus) | Rodents | Laboratory workers, veterinary care workers
Lassa fever (arenavirus) | Rodents | Health care workers
Marburg and Ebola viruses (filovirus) | Human and non-human primates, possibly bats | Laboratory workers, veterinary care workers, health care workers, cotton factory workers

Occupationally Acquired Infections in Agricultural Workers

In addition to gases and organic dusts that affect the respiratory tract and mimic infectious diseases, several zoonotic diseases (those whose pathogens are common to animals and humans) and other infectious diseases associated with rural living uniquely affect agricultural workers. These diseases are acquired by inhalation of infectious aerosols and are rarely transmitted from one person to another. Such illnesses occurring in agricultural workers include anthrax, brucellosis, Q fever, ornithosis, tuberculosis and plague (table 1). Fungal diseases include histoplasmosis, blastomycosis, coccidioidomycosis, paracoccidioidomycosis and cryptococcosis (table 1). Except for the hantaviral diseases, viral diseases are not an important cause of occupational lung disease in agricultural workers.

Some of these infections are thought to be more common than reported, but their incidence is difficult to determine because: (1) most infections are subclinical, (2) clinical illness is mild or difficult to diagnose because of non-specific symptoms, (3) medical and diagnostic services are rarely available to most agricultural workers, (4) there is no organized system for reporting many of these diseases and (5) many of them are rare in the general community and are not recognized by medical personnel. For example, although epidemic nephritis due to Puumala virus, a hantavirus, is rarely reported in western Europe, serosurveys of agricultural workers have shown a 2 to 7% prevalence of antibody to hantaviruses.

Zoonotic infections in developed nations are decreasing due to active disease control programmes directed at the animal populations. Despite these controls, agricultural workers and persons working in agriculturally related fields (such as veterinarians, meat-packers, poultry-processors and hair/hide workers) remain at risk for many diseases.

Hantavirus Infection

Hantavirus infection resulting in haemorrhagic fever with renal syndrome (HFRS) or epidemic nephritis (EN) has been clinically described among agricultural workers, military personnel and laboratory workers in endemic areas of Asia and Europe for more than 50 years. Infection results from inhalation of aerosols of urine, saliva and faeces from infected rodents. Most hantavirus illnesses produce haemorrhage and decreased renal function rather than pneumonia, although pulmonary oedema due to increased vascular permeability has been reported in HFRS and EN. The profound pulmonary consequences of hantavirus infections were not fully appreciated until a recent outbreak of hantavirus pulmonary syndrome (HPS) in the western United States, associated with a newly isolated hantavirus (Muerto Canyon virus, Four Corners virus, or Sin Nombre virus).

Hantaviruses are members of the Bunyaviridae, a family of RNA viruses. Five hantaviruses have been associated with human disease. HFRS has been associated with Hantaan virus in eastern Asia, Dobrava virus in the Balkans, and Seoul virus, which has a worldwide distribution. EN has been associated with Puumala virus in western Europe. HPS has been associated with a newly isolated hantavirus in the western United States. From 1951 to 1983, 12,000 cases of HFRS were reported from the Republic of Korea. Disease incidence in China is reported to be increasing with epidemics in rural and urban centres, and in 1980, 30,500 cases with 2,000 deaths were attributed to HFRS.

Clinical presentation

With the viruses causing HFRS or EN, infection usually results in the asymptomatic development of anti-hantavirus antibodies. In people who become ill, the signs and symptoms of the early phase are non-specific, and hantavirus infection can be diagnosed only by serologic testing. Slow recovery is common, but a few persons progress to HFRS or EN, developing proteinuria, microscopic haematuria, azotaemia and oliguria. Persons with HFRS also develop profound haemorrhage due to disseminated intravascular coagulation, increased vascular permeability and shock. Mortality in persons with the full HFRS syndrome varies from 5 to 20%.

HPS is characterized by diffuse interstitial pulmonary infiltrates and the abrupt onset of acute respiratory distress and shock. Marked leukocytosis may occur as a result of the increased cytokine levels that characterize hantaviral illnesses. In HPS, mortality may be more than 50%. The incidence of asymptomatic infection or unrecognized HPS is incompletely investigated.

Diagnostic tests

Diagnosis is made by demonstrating the presence of immunoglobulin M or rising titre of immunoglobulin G using highly specific and sensitive indirect immunofluorescence and neutralizing antibody assays. Other diagnostic methods include polymerase chain reaction for viral ribonucleic acid and immunohistochemistry for viral antigen.

Epidemiology

Infection results from inhalation of aerosols of urine, saliva and faeces from infected rodents. Infected rodents do not have any apparent illness. Transmission may occur by percutaneous inoculation of urine, saliva or faeces from infected rodents, but there is no evidence of human-to-human transmission.

Seroepidemiological surveys of humans and rodents have shown that hantaviruses are endemic, with a worldwide distribution, in rural and urban settings. In endemic rural settings, rodent-human interactions increase when rodents seasonally invade homes or human activity increases in areas of high rodent density. Persons in rural occupations are at greatest risk of infection. In surveys of asymptomatic rural populations in Italy, 4 to 7% of forestry workers, rangers, farmers and hunters had anti-hantavirus antibody, compared to 0.7% of soldiers. In asymptomatic agricultural workers in Ireland and Czechoslovakia, the prevalence of anti-hantavirus antibody was 1 to 2% and 20 to 30%, respectively. Planting, harvesting, threshing, herding and forestry are risk factors for virus infection. Serosurveys in the western United States to determine the occupational risk of hantavirus infection are in progress, but in a study of health care workers (HCWs) caring for HPS patients, no infections were identified. Data from the first 68 persons with HPS suggest that agricultural activities in the habitats of infected rodents are risk factors for infection: patients were more likely to hand plow, clean food storage areas, plant, clean animal sheds and be herders. The major reservoir for HPS is the deer mouse, Peromyscus maniculatus.

Other affected occupations

In urban settings, the rodent reservoir for Seoul virus is the house rat. Urban workers, such as dockworkers, workers at grain storage facilities, zoo workers and rodent-control workers, may be at risk for hantavirus infection. Research laboratories using rodents for research other than hantavirus research have occasionally been unsuspected sources of hantavirus infections of laboratory workers. Workers in other occupations, such as military personnel and field biologists, are also at risk for hantavirus infection.

Treatment

Ribavirin has demonstrated in vitro activity against several hantaviruses and clinical efficacy against Hantaan virus infection, and has been used to treat persons with HPS.

Public health controls

No vaccine is available for use although there are ongoing efforts to develop live and killed vaccines. Minimizing human contact with rodents and reducing rodent populations in human environments reduces the risk of disease. In hantavirus research laboratories, high-level biosafety facilities limit the risk from virus propagation in cell culture or handling materials with high concentrations of virus. In other research laboratories using rodents, periodic serologic surveillance for hantavirus infection of rodent colonies may be considered.

Lymphocytic Choriomeningitis (LCM)

LCM, like the hantaviral infections, is naturally an infection of wild rodents that occasionally spreads to humans. The LCM virus is an arenavirus, and transmission usually occurs by aerosolization. The natural hosts include wild mice, but persistent infection of domestic Syrian hamsters is well documented. Infection is therefore possible in most occupations involving the aerosolization of rodent urine. The most recent documented occupational outbreak of this disease occurred in laboratory personnel exposed to T-cell-deficient nude mice persistently infected as a result of inoculation of contaminated tumour cell lines.

Clinical presentation

Most cases of LCM are asymptomatic or associated with non-specific flu-like illness and are, therefore, not recognized. While the respiratory tract is the site of entry, respiratory symptoms tend to be non-specific and self-limited. Meningitis or meningoencephalitis develops in a small percentage of the patients and may lead to a specific diagnosis.

Diagnostic tests

Diagnosis is usually by serologic demonstration of a rising titre to the virus in the presence of appropriate clinical signs. Virus isolation and tissue immunofluorescence are also occasionally used.

Epidemiology

Approximately 20% of wild mice are infected with this virus. Transplacental transmission of the virus in susceptible rodents leads to T-cell tolerance, and congenitally infected mice (or hamsters) remain persistently infected throughout their lives. Similarly, T-cell-deficient mice, such as nude mice, may become persistently infected with the virus. In addition, rodent cell lines can be contaminated with, and propagate, the virus. Humans usually become infected by aerosols, although transmission may also occur by direct contact or via insect vectors.

Other affected occupations

Any occupation involving exposure to dusts contaminated with the excreta of wild rodents confers risk of LCM infection. Animal caretakers in laboratory animal facilities, workers in the pet store industry, and laboratory workers working with rodent cell lines may become infected.

Treatment

LCM infection is usually self-limited. Supportive treatment may be necessary in severe cases.

Public health controls

No vaccine is available. Screening of research mice, hamsters and cell lines has limited most laboratory-acquired infections. For T-cell deficient mice, serologic testing requires the use of immunocompetent sentinel mice. The use of routine laboratory safety precautions such as gloves, eye protection and laboratory coats is appropriate. Reducing the number of wild rodents in the human environment is important in the control of LCM, hantavirus and plague.

Respiratory Chlamydiosis

Respiratory chlamydiosis due to Chlamydia psittaci is the most frequently reported cause of OAP associated with animal (poultry) slaughter and meat processing. Chlamydiosis and other such illnesses are often associated with exposure to ill animals, which may be the only clue to the source and type of infection. However, processing infected animals creates aerosols that can infect persons remote from the processing itself, so work near a meat-processing plant may also be a clue to the type of infection. Respiratory chlamydiosis may be associated with exposure to parrots (psittacosis) or non-psittacine birds (ornithosis). Non-avian sources of Chlamydia psittaci are usually not considered potential zoonoses, although spontaneous abortion and conjunctivitis have been reported in humans exposed to sheep and goat strains. Pneumonia due to C. pneumoniae is a recently described common cause of community-acquired pneumonia distinct from C. psittaci infections. Because of its recent discovery, the role of C. pneumoniae in OAPs is incompletely investigated and will not be further discussed in this review.

Clinical presentation

Ornithosis varies from mild influenza-like illness to severe pneumonia with encephalitis which, in the preantibiotic era, had a case-fatality rate (CFR) greater than 20%. Prodromal fever, chills, myalgia, headache and non-productive cough may last up to three weeks prior to the diagnosis of pneumonia. Neurologic, hepatic and renal changes are common. Roentgenographic findings include lower lobe consolidation with hilar lymphadenopathy. Clinical suspicion after determining work-related or other exposures to birds is crucial to the diagnosis because there are no pathognomonic findings.

Diagnostic tests

Ornithosis usually results in a high titre of complement fixation (CF) antibody, although early treatment with tetracyclines may suppress antibody formation. A single acute-phase serum CF antibody titre of ≥1:16, with a compatible clinical presentation, or a four-fold change in CF antibody titre, can be used to make the diagnosis. Inappropriately paired serum samples and the high background of Chlamydia antibodies in at-risk groups undermine the utility of antibody assays for diagnosing most chlamydial diseases.

Epidemiology

C. psittaci is present in virtually all avian species and is common in mammals. Infection usually results from zoonotic transmission, but person-to-person transmission has been reported. Asymptomatic infection is common, and up to 11% of agricultural workers without a history of illness have antibodies to C. psittaci. Limited outbreaks occur intermittently; the last pandemic, associated with the exotic bird trade, occurred in 1930. In the United States, 70 to 100 cases of ornithosis are reported annually, and nearly one-third of these illnesses are occupationally acquired. Most occupationally acquired infections occur in workers in the pet-bird or poultry-processing industries and are related to aerosolization of avian tissue or faeces. In countries where birds are commonly kept as pets and importation quarantines are poorly enforced, outbreaks are more common but occupation is less of a risk factor.

Other affected occupations

Disease most frequently occurs in poultry processing workers, but workers in exotic bird distribution and avian quarantine facilities, breeding aviaries and veterinary clinics are at risk.

Treatment

Tetracycline or erythromycin for 10 to 14 days should be adequate treatment, but clinical relapse is common when treatment is given for an inadequate duration.

Public health controls

In the United States, exotic birds are quarantined for chemoprophylaxis with tetracyclines. Similar methods are used in other countries where an exotic bird trade exists. No vaccine has been developed for ornithosis. Programmes to increase ventilation to dilute aerosol concentration, reduce aerosolization or inhalation of infectious particles, or treat ill birds in commercial processing plants have been instituted, but their efficacy has not been demonstrated.

Brucellosis

Annually, approximately 500,000 cases of brucellosis occur worldwide, caused by several Brucella species. The pathogenicity of Brucella infections depends upon the infecting species, which tend to have different reservoir hosts. The reservoirs for Brucella abortus, B. suis, B. melitensis, B. ovis, B. canis and B. neotomae tend to be cattle, swine, goats, sheep, dogs and rats, respectively.

Brucellosis can result from infection by different routes, including aerosolization. However, most illness results from ingestion of non-pasteurized dairy products from goats. The resulting systemic illness, caused by B. melitensis, is not associated with specific occupations. Pneumonia occurs in 1% of cases, although cough is a frequent finding.

In developed countries, occupational brucellosis is usually caused by Brucella abortus and results from ingestion or inhalation of infectious aerosols associated with the placentas of swine and cattle. Subclinical infection is common; up to 1% of agricultural workers have antibodies to B. abortus. Illness develops in approximately 10% of infected persons. Unlike illness due to B. melitensis, illness associated with B. abortus is usually occupationally acquired and is less severe. Persons with acute brucellosis develop high daily fevers, arthralgia and hepatosplenomegaly. In primary brucella pneumonia, pneumonic consolidation is actually rare, and pulmonary findings may include hoarseness or wheezing, hilar adenopathy, peribronchial infiltrates, parenchymal nodules or a miliary pattern. The organism can be isolated from bone marrow in 90% of acute cases and from blood in 50 to 80% of cases. Diagnosis can be made serologically with a variety of antibody assays. Tetracyclines should be used for four to six weeks, and rifampin may be added for synergy. Cattle, goat, sheep and swine raisers, dairy workers, slaughterhouse workers, veterinarians and butchers are the primary populations at risk. Brucella testing and eradication programmes have greatly reduced the number of infected animals and have identified the herds that pose the greatest risk of disease transmission. When working with Brucella-infected animals, avoidance and personal protection, especially after abortion or parturition, are the only effective disease control methods.

Inhalation Anthrax

Inhalation anthrax occurs worldwide but is less common than cutaneous anthrax. Anthrax is a systemic illness in many animals and is usually transmitted to humans by percutaneous infection from processing or by eating contaminated meat. Inhalation anthrax is caused by inhalation of spores of Bacillus anthracis from the bone, hair or hide of sheep, goats or cattle (“woolsorter’s disease”) or rarely while processing infected meat. Spores undergo phagocytosis by alveolar macrophages and are transported to mediastinal lymph nodes, where they germinate. This results in a haemorrhagic mediastinitis but rarely presents as primary pneumonia. Illness is characterized by a widened mediastinum, pulmonary oedema, pleural effusions, splenomegaly and rapid progression to respiratory failure. The case fatality rate is 50% or greater despite antibiotics and ventilatory support. Positive blood cultures are common but serologic testing using a blotting immunoassay may be used. Ill persons are treated with high-dose penicillin, or intravenous ciprofloxacin as an alternative in persons allergic to penicillin. Animal breeders, veterinarians, veterinary care workers, hair and hide processors, and slaughterhouse workers are at increased risk. Annual vaccination is available for animals in endemic areas and humans at high risk for disease. Specific control measures against inhalation anthrax include formaldehyde decontamination, steam sterilization or irradiation of hair and hides; prohibiting hide importation from endemic areas; and personal respiratory protection for workers.

Pneumonic Plague

Plague, caused by Yersinia pestis, is predominantly a flea-borne disease enzootic in wild rodents. Humans usually become infected when bitten by an infected flea and often develop septicaemia. In the United States from 1970 through 1988, secondary pneumonia from haematogenous spread developed in approximately 10% of septicaemic persons. Animals and humans with pneumonic plague produce infectious aerosols. Primary pneumonia in humans can occur from inhalation of an infectious aerosol created around dying animals with secondary pneumonia. Despite the potential for pneumonic spread, person-to-person transmission is rare and has not occurred in the United States in nearly 50 years. Disease controls include the isolation of persons with pneumonic plague and the use of personal respiratory protection by HCWs. Aerosol transmission to hospital workers is possible, and tetracycline prophylaxis should be considered for anyone in contact with humans or animals with pneumonic plague. A number of occupations are at risk for aerosol transmission, including biomedical and hospital laboratory workers and, in endemic areas, a number of rural occupations, including veterinarians, rodent-control workers, hunters/trappers, mammalogists, wildlife biologists and agricultural workers. A killed vaccine is recommended for persons in high-risk occupations.

Q Fever

Caused by inhalation of Coxiella burnetii, Q fever is a systemic disease that presents as atypical pneumonia in 10 to 60% of infected persons. Many different isolates of C. burnetii produce disease, and theories of plasmid-dependent virulence are controversial. C. burnetii infects many domestic animals (e.g., sheep, cattle, goats, cats) worldwide; is aerosolized from urine, faeces, milk, placenta or uterine tissues; forms a highly resistant endospore that remains infectious for years; and is extremely infectious.

Clinical presentation

After a 4- to 40-day incubation period, acute Q fever presents as an influenza-like illness that progresses to an atypical pneumonia similar to that caused by Mycoplasma. Acute illness lasts about two weeks but may persist up to nine weeks. Chronic illness, predominantly endocarditis and hepatitis, may develop up to 20 years after the acute illness.

Diagnostic tests

Primary isolation of C. burnetii is rarely performed because it requires a high level of biosafety containment. Diagnosis is made serologically by demonstrating a CF antibody titre of 1:8 or greater in an appropriate clinical setting or a four-fold change in CF titre.

Other affected occupations

Agricultural (especially dairy and wool), hospital laboratory, and biomedical research workers are at risk for infection.

Treatment

No effective vaccine exists for C. burnetii. A two-week course of tetracyclines or ciprofloxacin is used to treat acute illness.

Public health controls

Because of its widespread geographic distribution, numerous animal reservoirs, and resistance to inactivation, personal respiratory protection and engineering controls to contain infectious aerosols are the only effective preventive measures. However, these control methods are difficult to implement in many agricultural settings (e.g., sheep and cattle herding). The early diagnosis of Q fever by medical personnel can be facilitated by education of workers at high risk for contracting this rare disease. Transmission to hospital personnel may occur, and isolation may limit the spread of Q fever pneumonia in hospitals.

Miscellaneous Bacterial OAPS of Agricultural Workers

Pseudomonas pseudomallei, a soil- and rodent-associated organism found principally in Southeast Asia, causes melioidosis. The disease is associated with soil exposure and a potentially long latency. Military personnel during and after the Vietnam War have been the major victims of melioidosis in the United States. Multifocal, nodular, suppurative or granulomatous pneumonia characterizes the pulmonary form of melioidosis.

Francisella tularensis, the aetiologic agent of tularaemia, is a zoonosis associated with wild rodents and lagomorphs. It is a potential occupational disease of wildlife biologists, mammalogists, rodent-control workers, hunters, trappers and veterinarians. Tularaemia may result from inhalation, direct inoculation, cutaneous contact or ingestion, or it may be vector-borne. Pulmonary disease results from either direct inhalation exposure or haematogenous spread of septicaemic disease. The pulmonary lesions of tularaemia are acute, multifocal, suppurative and necrotizing.

Histoplasmosis

Histoplasmosis is caused by Histoplasma capsulatum, a free-living mould in the soil associated with the faeces of birds or bats. Histoplasmosis is the most important cause of fungal OAPs in agricultural workers. The miscellaneous fungal pneumonias of agricultural workers are described in the next section.

Clinical presentation

Following exposure, the attack rates and severity of histoplasmosis vary as a result of the infecting inoculum and the host’s antibody levels conferred by prior infection. Following heavy exposure, up to 50% of persons develop a self-limited respiratory illness, while others remain asymptomatic. The least severe of the symptomatic syndromes includes “flu-like” symptoms, non-productive cough and chest pain. Physical examination may be remarkable for erythema nodosum or erythema multiforme. Chest x rays show patchy, segmental infiltrates, but no x-ray findings can specifically differentiate histoplasmosis from other pulmonary infections. Hilar or mediastinal lymphadenopathy is common in all stages of primary histoplasmosis.

Progressive primary pneumonic histoplasmosis is characterized by profound systemic complaints, cough productive of purulent sputum, and haemoptysis. Progressive x-ray changes include multiple nodules, lobar consolidation and dense, multilobar interstitial infiltrates. Greater exposures increase the severity of the illness and result in severe respiratory disease, the acute respiratory distress syndrome (ARDS) or atelectasis due to obstruction by mediastinal lymphadenopathy.

Approximately 20% of ill persons develop other histoplasmosis syndromes which are idiosyncratic and not the result of greater exposure or progression of primary disease. Syndromes include arthritis-erythema nodosum, pericarditis, and chronic pulmonary histoplasmosis (fibrotic apical lung infiltrates with cavitation). Disseminated histoplasmosis develops in a small percentage of patients, particularly the immunosuppressed.

Diagnostic tests

Definitive diagnosis is made by isolating or histopathologically demonstrating the organism in an appropriate clinical specimen. Unfortunately, the organism is present in low concentrations and the sensitivities of these methods are low. Presumptive diagnoses are often made on the basis of geographic location, exposure history and x-ray findings of the lungs or calcifications in the spleen.

Epidemiology

H. capsulatum is found worldwide in association with specific soil conditions, but illness is reported primarily from the Ohio and Mississippi River valleys of the United States. High concentrations of spores are found in bird roosts, old buildings, poultry houses, caves and schoolyards, and are aerosolized when these sites are disturbed by work activity. Microconidia concentrations are higher in disrupted, enclosed areas (e.g., building demolition), resulting in a higher inoculum for workers there than at most outdoor sites. In endemic areas, persons who clean bird roosts, demolish older contaminated buildings or perform excavations for road or building construction are at greater risk than the general population. In the United States, 15,000 to 20,000 persons are hospitalized each year with histoplasmosis, and approximately 3% of them die.

Other affected occupations

Attributing occupational risk for Histoplasma infection is difficult because the organism is free-living in soil and the concentration of aerosolized spores is increased by wind and dusty conditions. Infection is predominantly due to geographic location. In endemic areas, rural persons, regardless of occupation, have a 60 to 80% prevalence of positive skin test to H. capsulatum antigens. Actual illness results from a large infecting inoculum and is usually restricted to workers involved in the disruption of soil or destruction of contaminated buildings.

Treatment

Antifungal treatment for histoplasmosis and other occupationally acquired fungal infections is not indicated for acute self-limited pulmonary disease. Therapy with amphotericin B (30 to 35 mg/kg total dose) or ketoconazole (400 mg/day for six months) or treatment regimens using both agents is indicated for disseminated histoplasmosis, chronic pulmonary histoplasmosis, acute pulmonary histoplasmosis with ARDS, or mediastinal granuloma with symptomatic obstruction, and may be useful for prolonged, moderately severe primary illness. Treatment results in an 80 to 100% response rate, but relapses are common and may be as high as 20% with amphotericin B and 50% with ketoconazole. Efficacy of newer azole drugs (i.e., itraconazole and fluconazole) for occupational fungal infections has not been defined.

Public health controls

No effective vaccine has been developed. Chemical decontamination with 3% formaldehyde, prewetting the ground or contaminated surfaces to reduce aerosolization, and personal respiratory protection to reduce inhalation of aerosolized spores may reduce infection, but the efficacy of these methods has not been determined.

Miscellaneous Fungal Pneumonias

The miscellaneous fungal pneumonias of agricultural workers include aspergillosis, blastomycosis, cryptococcosis, coccidioidomycosis and paracoccidioidomycosis (table 1). These diseases are caused by Aspergillus spp., Blastomyces dermatitidis, Cryptococcus neoformans, Coccidioides immitis and Paracoccidioides brasiliensis, respectively. Although these fungi have a widespread geographic distribution, disease is usually reported from endemic areas. Relative to viral and bacterial causes of pneumonia, these disorders are rare and are often initially unsuspected. T-cell disorders enhance susceptibility to histoplasmosis, blastomycosis, cryptococcosis, coccidioidomycosis and paracoccidioidomycosis. However, a large initial exposure may result in disease in the immunocompetent worker. Infections with Aspergillus and related fungi tend to occur in neutropenic patients. Aspergillosis is most frequently an OAP of the immunosuppressed and will be discussed in the section on infections in the immunosuppressed.

Cr. neoformans, like H. capsulatum, is a common inhabitant of soil contaminated by avian faeces, and occupational exposure to such dusts or other dusts contaminated with Cr. neoformans may result in disease. Occupational blastomycosis is associated with outdoor occupations, especially in the eastern and central United States. Coccidioidomycosis results from exposure to contaminated dusts in endemic areas of the south-western United States (hence the synonym San Joaquin Valley fever). Occupational exposure to the contaminated soils of South and Central America is often associated with paracoccidioidomycosis. Because of the potentially long latency of paracoccidioidomycosis, this exposure may long precede the appearance of symptoms.

Clinical presentation

The clinical presentations of coccidioidomycosis, blastomycosis and paracoccidioidomycosis are similar to that of histoplasmosis. Aerosol exposure to these fungi can produce OAP if the initial inoculum is high enough; however, host factors, such as prior exposure, limit disease in most individuals. In coccidioidomycosis, pulmonary and systemic signs of disease are apparent in a small percentage of those infected; progressive disease with dissemination to multiple organs is rare in the absence of immunosuppression. Although the source of infection is usually the lung, blastomycosis may present as pulmonary disease, cutaneous disease or systemic disease. The most common clinical presentation of blastomycosis is a chronic cough with pneumonia indistinguishable from tuberculosis. However, the majority of patients with clinically apparent blastomycosis have extra-pulmonary lesions involving the skin, bones or genitourinary system. Paracoccidioidomycosis is a disease of Mexico and Central and South America which most frequently presents as reactivation of prior infection after a long but variable latency period. The disease may be associated with ageing of infected individuals, and reactivation may be induced by immunosuppression. The pulmonary presentation is similar to that of the other fungal pneumonias, but extrapulmonary disease, particularly of the mucous membranes, is common in paracoccidioidomycosis.

The lung is the usual site for primary infection with Cryptococcus neoformans. As with the previously discussed fungi, pulmonary infections may be asymptomatic, self-limited or progressive. However, dissemination of the organism, particularly to the meninges and brain, may occur without symptomatic respiratory disease. Cryptococcal meningoencephalitis without evidence of pulmonary cryptococcosis, while rare, is the most common clinical manifestation of Cr. neoformans infection.

Diagnostic tests

Direct demonstration of the tissue form of the organism permits a definitive diagnosis in biopsies and cytologic preparations. Immunofluorescence can be a useful confirmatory procedure if morphologic details are insufficient for establishing the aetiologic agent. These organisms can also be cultured from suspicious lesions. A positive latex cryptococcal agglutinin test in cerebrospinal fluid is consistent with cryptococcal meningoencephalitis. However, demonstration of organisms may not be sufficient for the diagnosis of disease. For example, saprophytic growth of Cr. neoformans is possible in airways.

Other affected occupations

Laboratory workers isolating these fungi are at risk for infection.

Treatment

Antifungal therapy is similar to that for histoplasmosis.

Public health controls

Engineering controls are indicated to reduce the risk to laboratory workers. Respiratory protection when working with soils heavily contaminated with avian faeces will reduce exposure to Cr. neoformans.

Occupationally Acquired Infections in Health Care and Laboratory Workers

Inhalation of infectious aerosols is the most common source of infection in hospital workers, and many types of viral and bacterial pneumonia have been attributed to work-related transmission (table 1). The majority of infections are viral and self-limited. However, potentially serious outbreaks of tuberculosis, measles, pertussis and pneumococcal pneumonia have been reported in hospital workers. Infections in immunocompromised workers are discussed at the end of this section.

Diagnostic laboratory workers are at risk for occupationally acquired infections resulting from airborne transmission. Transmission occurs when pathogens are aerosolized during the initial processing of clinical specimens from patients whose infections are not yet suspected, and it is rarely recognized. For example, in a recent community outbreak of brucellosis, one-third of the laboratory technicians developed brucellosis. Employment in the laboratory was the only identified risk factor; person-to-person transmission between laboratory employees, food- or waterborne transmission, and contact with a particular clinical specimen could not be shown to be risk factors. Rubella, tuberculosis, varicella-zoster and respiratory syncytial virus infections are similarly acquired in the laboratory by technicians.

Despite rigorous veterinary care, biosafety containment procedures and the use of commercially reared, pathogen-free laboratory animals, inhalation remains the principal mode of infectious disease transmission among biomedical research workers. In addition, newly discovered micro-organisms or previously unrecognized zoonotic reservoirs may be encountered, undermining these disease control strategies.

Measles

Measles, as an occupationally acquired illness, has become an increasing problem among hospital workers in developed nations. Since 1989, there has been a resurgence of measles in the United States due to poor compliance with vaccine recommendations and the failure of primary immunization in vaccine recipients. Because of the high morbidity and potential mortality associated with measles in susceptible workers, special consideration should be given to measles in any occupational health programme. From 1985 to 1989, more than 350 cases of occupationally acquired measles were reported in the United States, representing 1% of all reported cases. Nearly 30% of hospital workers with occupationally acquired measles were hospitalized. The largest groups of hospital workers with measles were nurses and physicians, and 90% of them acquired measles from patients. Although 50% of these ill persons were eligible for vaccination, none had been vaccinated. The increased measles morbidity and mortality in adults has increased the concern that infected workers may infect patients and co-workers.

In 1989, the Immunization Practices Advisory Committee recommended two doses of measles vaccine or evidence of measles immunity at the time of employment in a health care setting. The serologic and vaccination status of workers should be documented. In addition, when patients with measles present, re-evaluation of the immune status of health care workers (HCWs) is appropriate. Implementing these recommendations and appropriately isolating patients with known and suspected measles curtails the transmission of measles in medical settings.

Clinical presentation

In addition to the common presentation of measles seen in non-immune adults, atypical and modified presentations of measles must be considered because many hospital workers had previously received killed vaccines or have partial immunity. In classic measles, a two-week incubation period with mild upper respiratory symptoms follows infection. During this period, the worker is viremic and infectious. This is followed by a seven- to ten-day course of cough, coryza and conjunctivitis and the development of a morbilliform rash and Koplik spots (raised white lesions on the buccal mucosa), which are pathognomonic for measles. Diffuse reticulonodular infiltrates with bilateral hilar lymphadenopathy, often with a superimposed bacterial bronchopneumonia, are noted on x ray. These signs occur well after the person has had the opportunity to infect other susceptible persons. Pulmonary complications account for 90% of the measles deaths in adults. No specific antiviral treatment is effective for any form of measles, although high-titre anti-measles immunoglobulin may ameliorate some symptoms in adults.

In atypical measles, which occurs in persons vaccinated with a killed vaccine developed in the 1960s, severe pulmonary involvement is common. The rash is atypical and Koplik spots are rare. In modified measles, which occurs in persons previously receiving a live vaccine but developing partial immunity, signs and symptoms are similar to classic measles but milder, and often go unnoticed. Persons with atypical and modified measles are viremic and can spread measles virus.

Diagnosis

Measles in hospital workers is often modified or atypical, and is rarely suspected. Measles should be considered in a person with an erythematous maculopapular rash preceded by a three- to four-day febrile prodrome. In persons with a first-time infection and no previous immunization, viral isolation or antigen detection is difficult, but enzyme-linked immunosorbent or fluorescent antibody assays may be used for rapid diagnosis. In persons with previous immunizations, interpreting these assays is difficult, but immunofluorescent antibody stains of exfoliated cells may be helpful.

Epidemiology

Susceptible nurses and physicians are nearly nine times more likely to acquire measles than persons of the same age who are not HCWs. As with all measles infections, person-to-person transmission occurs via inhalation of an infectious aerosol. Hospital workers acquire measles from patients and co-workers and, in turn, transmit measles to susceptible patients, co-workers and family members.

Other affected occupations

Epidemic measles has occurred in academic institutions in developed nations and among agricultural workers restricted to collective lodgings on plantations.

Public health controls

Public health intervention strategies include immunization programmes as well as infection control programmes to monitor measles illness and antibody status of workers. If natural infection or an appropriate two-dose vaccination cannot be documented, antibody assays should be performed. Vaccination of pregnant workers is contraindicated. Vaccination of other at-risk workers is an important aid in disease prevention. After exposure to measles, removal of susceptible workers from patient contact for 21 days may reduce the spread of disease. Restricted activity of workers with measles for 7 days after the appearance of the rash may also curtail disease transmission. Unfortunately, appropriately vaccinated workers have developed measles despite protective antibody levels that were documented prior to illness. As a result, many recommend personal respiratory protection when caring for patients with measles.

Miscellaneous viral respiratory tract infections

A variety of viruses which are not unique to the health care environment are the most common cause of OAPs in health care workers. The aetiological agents are those which cause community-acquired OAPs, including adenovirus, cytomegalovirus, influenza virus, parainfluenza virus and respiratory syncytial virus. Because these organisms are also present in the community, establishing these as the cause of an individual OAP is difficult. However, serologic studies suggest that health care and day care workers are at increased risk for exposure to these respiratory pathogens. These viruses are also responsible for disease outbreaks in many situations where workers are brought together in a confined space. For example, outbreaks of adenoviral infection are common in military recruits.

Pertussis

Pertussis, like measles, has been increasingly reported in hospital workers in developed nations. In 1993, nearly 6,000 cases of pertussis were reported in the United States, an 80% increase over 1992. Unlike previous years, 25% of the reported cases occurred in persons over ten years old. The number of occupationally acquired illnesses in hospital workers is unknown but is felt to be underreported in developed nations. Because of waning immunity in adults and the potential for hospital workers to infect susceptible infants, there is greater emphasis on diagnosis and surveillance of pertussis.

Clinical presentation

Pertussis may persist for six to ten weeks without intervention. In the first week, when the ill person is most contagious, dry cough, coryza, conjunctivitis and fever develop. In previously vaccinated adults, the persistent, productive cough may last several weeks, and pertussis is rarely considered. Clinical diagnosis is difficult, and clinical suspicion should be aroused by any worker with a cough lasting more than seven days. A white count greater than 20,000 with a predominance of lymphocytes may be the only laboratory abnormality, but this is rarely noted in adults. Chest radiographs show confluent bronchopneumonia in the lower lobes that radiates from the heart to give the characteristic “shaggy heart” sign, and atelectasis is present in 50% of cases. Because of the extreme infectiousness of this agent, strict respiratory isolation is necessary until treatment with erythromycin or trimethoprim/sulphamethoxazole has continued for five days. Close contacts of infected persons and hospital workers who were not using respiratory precautions should receive 14 days of antibiotic prophylaxis regardless of immunization status.

Diagnosis

Isolation of Bordetella pertussis, direct immunofluorescent staining of nasal secretions, or development of a B. pertussis antibody response is used to make a definitive diagnosis.

Epidemiology

B. pertussis is highly contagious, is transmitted person-to-person via inhalation of infectious aerosols, and has an attack rate of 70 to 100%. In the past, it was not a disease of adults and was not appreciated as an OAP. During a community outbreak of pertussis in the western United States, many hospital workers were exposed at work and developed pertussis despite antibiotic prophylaxis. Because of waning protective antibody levels in adults who have never had clinical disease but received whole-cell vaccine after 1940, there is a growing population of pertussis-susceptible hospital workers in developed nations.

Public health controls

Identification, isolation and treatment are the main disease control strategies in hospitals. The role of acellular pertussis vaccine for hospital workers without adequate levels of protective antibody is unclear. During the recent outbreak in the western United States, one-third of vaccinated hospital workers reported mild to moderate side effects to the vaccine but 1% had “severe” systemic symptoms. Although these more severely affected workers missed days of work, no neurologic symptoms were reported.

Tuberculosis

During the 1950s, it was generally recognized that health care workers in developed nations were at greater risk for tuberculosis (TB, the granulomatous disease due to Mycobacterium tuberculosis or the closely related organism M. bovis) than the general population. From the 1970s through the early 1980s, surveys suggested that this had become only a slightly increased risk. In the late 1980s, a marked increase in the number of cases of TB admitted to US hospitals resulted in the unsuspected transmission of M. tuberculosis to hospital workers. The high background prevalence of tuberculin skin test (TST) positivity in certain socio-economic or immigrant groups from which many hospital workers came, together with the poor association of TST conversion with work-related exposures to TB, made it difficult to quantify the risk of occupational TB transmission to workers. In 1993 in the United States, an estimated 3.2% of reported persons with TB were health care workers. Despite problems in defining risk, work-related infection should be considered when hospital workers develop TB or convert their TST.

M. tuberculosis is spread almost exclusively person to person on infectious particles 1 to 5μm in diameter that result from coughing, talking or sneezing. The risk of infection is directly related to the intensity of exposure to infectious aerosols: small shared spaces, increased density of infectious particles, poor clearance of infectious particles, recirculation of air containing infectious particles, and prolonged contact time. In health care settings, procedures such as bronchoscopy, endotracheal intubation and nebulized aerosol treatment increase the density of infectious aerosols. Approximately 30% of close contacts (persons who share a common space with an infectious person) become infected and undergo skin-test conversion. After infection, 3 to 10% of persons will develop TB within 12 months (i.e., primary disease) and an additional 5 to 10% will develop TB in their lifetime (i.e., reactivation disease). Higher rates occur in developing nations and in situations where malnutrition is prevalent. HIV-infected persons reactivate TB at higher rates, approximately 3 to 8% per year. The case fatality rate (CFR) varies: in developed nations it is between 5 and 10%, but in developing nations it ranges from 15 to 40%.
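As a rough illustration of how these estimates combine, the short sketch below (in Python) computes the expected TB burden among the close contacts of one infectious person. The contact count and the chosen rates are hypothetical, taken from the mid-range of the figures quoted above; the function name is illustrative only.

    # Illustrative only: expected TB cases among close contacts,
    # using mid-range values from the estimates quoted in the text.
    def expected_tb_cases(n_contacts, p_infection=0.30,
                          p_primary=0.05, p_reactivation=0.07):
        infected = n_contacts * p_infection           # ~30% of close contacts infected
        primary = infected * p_primary                # disease within 12 months
        reactivation = infected * p_reactivation      # disease later in life
        return infected, primary, reactivation

    # Example: 200 close contacts of a single infectious worker
    infected, primary, later = expected_tb_cases(200)
    print(infected, primary, later)   # -> 60.0 3.0 4.2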

Clinical presentation

Prior to the HIV epidemic, 85 to 90% of persons with TB had pulmonary involvement. Chronic cough, sputum production, fever and weight loss remain the most frequently reported symptoms of pulmonary TB. Except for rare amphoric breathing or post-tussive crackles over the upper lobes, the physical examination is not helpful. An abnormal chest x ray is found in nearly all cases and is usually the first finding to suggest TB. In primary TB, a lower- or middle-lobe infiltrate with ipsilateral hilar lymphadenopathy and atelectasis is common. Reactivation TB usually results in an infiltrate and cavitation in the upper lobes of the lungs. Although sensitive, chest x rays lack specificity and will not give a definitive diagnosis of TB.

Diagnosis

Definitive diagnosis of pulmonary TB can be made only by isolating M. tuberculosis from sputum or lung tissue, although a presumptive diagnosis is possible if acid-fast bacilli (AFB) are found in sputum from persons with compatible clinical presentations. The diagnosis of TB should be considered on the basis of clinical signs and symptoms; isolation and treatment of persons with compatible illnesses should not be delayed for the result of a TST. In developing nations where TST reagents and chest x rays are not available, WHO suggests evaluating persons with any respiratory symptom of three weeks’ duration, haemoptysis of any duration or significant weight loss for TB. These persons should have a microscopic examination of their sputum for AFB.

Other affected occupations

Worker-to-worker and client-to-worker airborne transmission of M. tuberculosis has been documented among hospital workers, airline flight crews, miners, correctional facility workers, animal caretakers, shipyard workers, school employees and plywood factory workers. Special consideration must be given to certain occupations such as farmworkers, animal caretakers, manual labourers, housekeepers, janitors and food preparation workers, although most of the risk may be due to the socio-economic or immigration status of the workers.

Special consideration should be given to pulmonary TB among miners and other groups with silica exposure. In addition to an increased risk of primary infection from fellow miners, persons with silicosis are more likely to progress to TB and have a greater TB-specific mortality than non-silicotic workers. As in most persons, TB in silicotic persons reactivates from longstanding M. tuberculosis infections, which may predate the silica exposure. In experimental systems, silica exposure has been shown to worsen the course of infection in a dose-dependent fashion, but it is unclear whether silica-exposed, non-silicotic workers are at greater risk of developing TB. Silica-exposed foundry workers without radiographic silicosis have a threefold greater risk of TB-specific mortality than similar workers without silica exposure. No other occupational dust exposures have been associated with enhanced progression of TB.

Migrant farmworkers are more likely to develop reactivation TB than the general population. Estimates of TST positivity in migrant farmworkers range from approximately 45% in persons aged 15 to 34 years to nearly 70% in workers older than 34 years.

Clinical laboratory workers are at increased risk for occupationally acquired TB through airborne transmission. In a recent ten-year survey of selected hospitals in Japan, 0.8% of laboratory workers developed TB. No community sources were identified, and work-related exposures were identified in only 20% of the cases. Most cases occurred among the workers in the pathology and bacteriology laboratories and autopsy theatres.

Treatment

Several treatment regimens have been shown to be effective in different outpatient settings. Among compliant patients in developed nations, a regimen of daily doses of four drugs (including isoniazid and rifampin) for two months, followed by daily doses of isoniazid and rifampin for the next four months, has become standard. Directly observed, twice-weekly administration of the same drugs is an effective alternative in less compliant patients. In developing nations, and in situations where anti-tuberculous medications are not readily available, 9 to 12 months of daily dosing with isoniazid and rifampin has been used. A treatment regimen should be consistent with national policy and should take into consideration the organism's susceptibility to standard, available anti-tuberculous medications as well as the duration of therapy. Because of limited resources to control TB in developing nations, efforts may focus on the primary sources of infection: patients whose sputum smears demonstrate AFB.

In health care settings, work restrictions are indicated for infectious workers with pulmonary TB. In other settings, infectious workers may simply be isolated from other workers. In general, persons are considered non-infectious after two weeks of appropriate anti-tuberculous medication if there is symptomatic improvement and a decreasing density of AFB in the sputum smear.

Public health controls

The main public health control of occupationally or community-acquired TB transmission remains identification, isolation and treatment of persons with pulmonary TB. Ventilation to dilute infectious aerosols; filtration and ultraviolet lights to decontaminate the air containing the aerosol; or personal respiratory protection may be used where the risk of transmission is known to be exceptionally high, but the efficacy of these methods is still unknown. The utility of BCG in worker protection remains controversial.

Miscellaneous bacterial infections in the health care environment

Common bacterial infections of the lung may be acquired from patients or within the community. Work-related airborne transmission of bacterial pathogens such as Streptococcus pneumoniae, Haemophilus influenzae, Neisseria meningitidis, Mycoplasma pneumoniae and Legionella spp (Table 26) occurs, and the resulting illnesses are included in many hospital surveillance programmes. Occupational bacterial respiratory tract infections are also not restricted to health care workers; infections with Streptococcus spp, for example, are a well-established cause of disease outbreaks among military recruits. However, for a specific worker, the prevalence of these disorders outside the workplace complicates the distinction between occupational and community-acquired infections. The clinical presentation, diagnostic tests, epidemiology and treatment of these disorders are described in standard medical textbooks.

Infections in the immunosuppressed worker

Immunosuppressed workers are at increased risk from many OAPs. In addition, a number of organisms which do not cause disease in normal individuals will produce disease in the immunosuppressed. The type of immunosuppression will also affect disease susceptibility. For example, invasive pulmonary aspergillosis is a more frequent complication of chemotherapy than of acquired immunodeficiency syndrome (AIDS).

Invasive pulmonary aspergillosis is usually seen in the immunosuppressed, particularly individuals with neutropenia. However, invasive pulmonary aspergillosis is occasionally reported in individuals without an apparent predisposition to disease. Invasive pulmonary aspergillosis normally presents as a severe, necrotizing pneumonia with or without systemic involvement in a neutropenic patient. While invasive aspergillosis is most frequently seen as a nosocomial infection in chemotherapy patients, this is a highly fatal disease in any neutropenic worker. Techniques which reduce nosocomial aspergillosis—for example, the control of dusts from construction projects—may also protect susceptible workers.

A variety of animal pathogens become potential zoonoses only in the immunosuppressed patient. The zoonoses transmitted by aerosol exposure seen only in the immunosuppressed include encephalitozoonosis (due to Encephalitozoon cuniculi), avian tuberculosis (due to Mycobacterium avium) and Rhodococcus equi infections. Such diseases are of particular concern in agriculture. Methods for the protection of immunosuppressed workers are incompletely investigated.

In the immunosuppressed worker, many potential pathogens cause invasive and severe disease not seen in normal patients. For example, severe infections with Candida albicans and Pneumocystis carinii are classical manifestations of AIDS. The spectrum of occupational pathogens in the immunosuppressed worker, therefore, potentially involves disorders not present in immunologically normal workers. The diseases of immunosuppressed individuals have been thoroughly reviewed elsewhere and will not be further discussed in this review.

Public Health Controls: Overview

OAPs predominantly occur in five groups of workers: hospital workers, agricultural workers, meat production workers, military personnel and biomedical laboratory workers (table 1). Avoidance of infectious aerosols is the most effective way of reducing infection in most situations but often is difficult. For example, Coxiella burnetii, the aetiological agent for Q fever, may be present in any environment previously contaminated with the biological fluids of infected animals, but avoidance of all potentially infected aerosols would be impractical in many low-risk situations such as sheep herding or rodeos. Control of concomitant diseases may also reduce the risk of OAPs. Silicosis, for example, increases the risk for reactivation of TB, and reducing silica exposure may reduce the risk of TB in miners. For OAPs that have significant mortality and morbidity in the general population, immunization may be the most important public health intervention. Education of workers about their risk of OAPs assists in worker compliance with occupational disease control programmes and also aids in the early diagnosis of these disorders.

Among hospital workers and military personnel, human-to-human transmission is usually the main route of infection. Worker immunization may prevent disease and may be useful in the control of pathogens of high morbidity and/or mortality. Because some persons may not have been adequately immunized, identifying, isolating and treating ill persons remains a part of disease control. When immunization and respiratory isolation fail, or the associated morbidity and mortality is intolerable, personal protection or engineering controls to reduce the density or infectiousness of aerosols may be considered.

For agricultural, meat production and biomedical laboratory workers, animal-to-human transmission is a common transmission pattern. In addition to immunization of susceptible persons when possible, other disease control strategies may include immunization of the animals, veterinarian-controlled antibiotic prophylaxis of well-appearing animals, quarantine of newly arrived animals, isolation and treatment of ill animals, and purchase of pathogen-free animals. When these strategies have failed or there is high morbidity and mortality, strategies such as personal protection or engineering controls may be considered.

Environment-to-human transmission of infectious agents is common among agricultural workers, including many labourers. Worker immunization is possible when a vaccine is available, but for many of these pathogens, disease incidence in the general population is low and vaccines are rarely feasible. In agricultural settings, the sources of infection are widespread. As a result, engineering controls to reduce the density or infectiousness of aerosols are rarely feasible. In these settings, wetting agents or other methods to reduce dust, decontaminating agents and personal respiratory protection may be considered. Because control of OAP in agricultural workers is often difficult and these diseases are rarely seen by medical personnel, education of workers and communication between workers and medical personnel is essential.

 


Tuesday, 01 March 2011 00:09

Respiratory Cancer

Lung Cancer

Lung cancer is the most common cancer worldwide. In 1985, an estimated 676,500 cases occurred worldwide in males and 219,300 in females, accounting for 11.8% of all new cancers, and this figure is increasing at a rate of about 0.5% per year (Parkin, Pisani and Ferlay 1993). About 60% of these cases occur in industrialized countries, in many of which lung cancer is the leading cancer cause of death among males. In both industrialized and developing countries, males have a higher incidence than females, the sex ratio ranging from two- to ten-fold. These international and sex-related variations in lung cancer incidence are largely explained by variation in current and past smoking patterns.

A higher lung cancer risk has been consistently observed in urban compared to rural areas. In industrialized countries, a clear inverse relationship between social class and lung cancer incidence and mortality is evident in males, while women show less clear and consistent patterns. The social class differences in males principally reflect differences in smoking patterns. In developing countries, however, there seems to be a higher risk in men from the upper social class than in other men; this pattern probably reflects the earlier adoption of Western habits by affluent groups in the population.

Incidence data from the United States National Cancer Institute’s SEER Program for the period 1980-86 indicate, as in previous years, that Black males have a higher incidence than White males, while incidence for females does not differ by race. These differences among ethnic groups in the United States can in fact be attributed to socio-economic differences between Blacks and Whites (Baquet et al. 1991).

Lung cancer incidence increases almost linearly with age when plotted on a log-log scale; only in the oldest age groups can a downward curve be observed. Lung cancer incidence and mortality have increased rapidly during this century and continue to increase in most countries.
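Plotted this way, a straight line corresponds to a power-law relation between incidence and age. As a sketch (the constants a and k here are illustrative, not values reported in this article): if log I = log a + k · log(age), then I = a · (age)^k, so each doubling of age multiplies incidence by a factor of 2^k. Fitted exponents of roughly 4 to 6 have been reported for many epithelial cancers, which is consistent with the steep rise of incidence in middle and old age.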

There are four principal histological types of lung cancer: squamous cell carcinoma, adenocarcinoma, large cell carcinoma and small cell carcinoma (SCLC). The first three are also referred to as non-small cell lung cancer (NSCLC). The proportions of each histological type change according to sex and age.

Squamous cell carcinoma is very strongly associated with smoking and represents the most common type of lung cancer in many populations. It arises most frequently in the proximal bronchi.

Adenocarcinoma is less strongly associated with smoking. This tumour is peripheral in origin and may present as a solitary peripheral nodule, a multifocal disease or a rapidly progressive pneumonic form, spreading from lobe to lobe.

Large cell carcinoma represents a smaller proportion of all lung cancers and behaves similarly to adenocarcinoma.

SCLC represents a small proportion (10 to 15%) of all lung cancers; it typically arises in a central endobronchial location and tends to develop early metastases.

The signs and symptoms of lung cancer depend on the location of the tumour, its spread and the effects of metastatic growth. Many patients present with an asymptomatic lesion discovered incidentally on x ray. Among NSCLC patients, fatigue, decreased activity, persistent cough, dyspnoea, decreased appetite and weight loss are common. Wheeze or stridor may also develop in advanced stages. Continued growth may result in atelectasis, pneumonia and abscess formation. Clinical signs among SCLC patients are less pronounced than among those with NSCLC and are usually related to the endobronchial location.

Lung cancer can metastasize to virtually any organ. The most common locations of metastatic lesions are pleura, lymph nodes, bone, brain, adrenals, pericardium and liver. At the moment of diagnosis, the majority of patients with lung cancer have metastases.

The prognosis varies with the stage of the disease. Overall five-year survival for lung cancer patients in Europe (in 1983-85) was between 7% and 9% (Berrino et al. 1995).

No population screening method is currently available for lung cancer.

Nasopharyngeal Cancer

Nasopharyngeal cancer is rare in most populations, but is frequent in both sexes in areas such as South-East Asia, Southern China and North Africa. Migrants from South China retain the high risk to a large extent, but second- and third-generation Chinese migrants to the United States have less than half the risk of first-generation migrants.

Cancers of the nasopharynx are predominantly of squamous epithelial origin. According to WHO, these tumours are classified as: type 1, keratinizing squamous cell carcinoma; type 2, non-keratinizing carcinoma; and type 3, undifferentiated carcinoma, which is the most frequent histological type. Type 1 has an uncontrolled local growth, and metastatic spread is found in 60% of the patients. Types 2 and 3 have metastatic spread in 80 to 90% of the patients.

A mass in the neck is noticed in approximately 90% of nasopharyngeal carcinoma patients. Alterations in hearing, serous otitis media, tinnitus, nasal obstruction, pain and symptoms related to the growth of the tumour into adjacent anatomical structures may also be noticed.

The overall five-year survival for nasopharyngeal cancer patients in Europe between 1983 and 1985 was around 35%, varying according to the stage of the tumour and its location (Berrino et al. 1995).

Consumption of Chinese-style salted fish is a risk factor for nasopharyngeal cancer; the role of other nutritional factors and of viruses, in particular the Epstein-Barr virus, although suspected, has not been confirmed. No occupational factors are known to cause nasopharyngeal cancer, and no preventive measures are available at present (Higginson, Muir and Muñoz 1992).

Sinonasal Cancer

Neoplasms of the nose and nasal cavities are relatively rare. Together, cancers of the nose and nasal sinuses (including the maxillary, ethmoidal, sphenoid and frontal sinuses) account for less than 1% of all cancers. In most cases these tumours are classified as squamous carcinomas. In Western countries, cancers of the nose are more common than cancers of the nasal sinuses (Higginson, Muir and Muñoz 1992).

They occur more frequently in men and among Black populations. The highest incidence is seen in Kuwait, Martinique and India. The peak of development of the disease occurs during the sixth decade of life. The major known cause of sinonasal cancer is exposure to wood dust, in particular from hardwood species. Tobacco smoking does not seem to be associated with this type of cancer.

Most tumours of the nasal cavity and paranasal sinuses are well differentiated and slow growing. Symptoms may include a non-healing ulcer, bleeding, nasal obstruction and symptoms related to growth into the oral cavity, orbit and pterygoid fossa. The disease is usually advanced at the time of diagnosis.

Overall five-year survival for nose and nasal sinus cancer patients in Europe between 1983 and 1985 was around 35%, varying according to the size of the lesion at diagnosis (Berrino et al. 1995).

Laryngeal Cancer

The highest incidence of laryngeal cancer is reported in São Paulo (Brazil), Navarra (Spain) and Varese (Italy). High mortality has also been reported in France, Uruguay, Hungary, Yugoslavia, Cuba, the Middle East and North Africa. Laryngeal cancer is predominantly a male cancer: an estimated 120,500 cases among males and 20,700 cases among females occurred in 1985 (Parkin, Pisani and Ferlay 1993). In general, incidence is higher among Black populations than among Whites, and in urban areas compared to rural ones.

Almost all cancers of the larynx are squamous carcinomas. The majority are located in the glottis, but they may also develop in the supraglottis or, rarely, in the subglottis.

Symptoms may be absent or very subtle. Pain, a scratchy sensation, altered tolerance for hot or cold foods, a tendency to aspirate liquids, airway alteration, a slight change in the voice lasting several weeks and cervical adenopathy may be present, according to the location and stage of the lesion.

Most larynx cancers are visible with laryngeal inspection or endoscopy. Pre-neoplastic lesions can be identified in the larynx of smokers (Higginson, Muir and Muñoz 1992).

The overall five-year survival for laryngeal cancer patients in Europe between 1983 and 1985 was around 55% (Berrino et al. 1995).

Pleural Mesothelioma

Mesotheliomas may arise from the pleura, peritoneum and pericardium. Malignant mesothelioma represents the most important pleural tumour; it occurs mainly between the fifth and seventh decade of life.

Pleural mesothelioma was once a rare tumour and remains so in most female populations, while in men in industrialized countries it has increased by 5 to 10% per year during recent decades. In general, men are affected five times as often as women. Precise estimates of incidence and mortality are problematic because of difficulties in histological diagnosis and changes in the International Classification of Diseases (ICD) (Higginson, Muir and Muñoz 1992). However, incidence rates show very important local variations: they are very high in areas where asbestos mining is present (e.g., the North West Cape Province of South Africa), in major naval dockyard cities, and in regions with environmental fibre contamination, such as certain areas of central Turkey.

Patients may be asymptomatic and have their disease diagnosed incidentally on chest radiographs, or they may have dyspnoea and chest pain.

Mesotheliomas tend to be invasive. The median survival is 4 to 18 months in various series.

Occupational Risk Factors of Respiratory Cancer

Apart from tobacco smoke, a causal association with respiratory cancer has been demonstrated according to the International Agency for Research on Cancer (IARC) for 13 agents or mixtures and nine exposure circumstances (see table 1). Furthermore, there are eight agents, mixtures or exposure circumstances which according to IARC are probably carcinogenic to one or more organs in the respiratory tract (table 2). All but azathioprine, an immunosuppressant drug, are primarily occupational exposures (IARC 1971-94).

Table 1. Established human respiratory carcinogens according to IARC

Agent or exposure circumstance                                Target sites

Individual agents
Asbestos                                                      Lung, larynx, pleura
Arsenic and arsenic compounds                                 Lung
Beryllium and beryllium compounds                             Lung
Bis(chloromethyl) ether                                       Lung
Cadmium and cadmium compounds                                 Lung
Chloromethyl methyl ether (technical-grade)                   Lung
Chromium (VI) compounds                                       Nose, lung
Mustard gas                                                   Lung, larynx
Nickel compounds                                              Nose, lung
Talc containing asbestiform fibres                            Lung, pleura

Complex mixtures
Coal-tars                                                     Lung
Coal-tar pitches                                              Lung
Soots                                                         Lung
Tobacco smoke                                                 Nose, lung, larynx

Exposure circumstances
Aluminium production                                          Lung
Boot and shoe manufacture and repair                          Nose
Coal gasification                                             Lung
Coke production                                               Lung
Iron and steel founding                                       Lung
Furniture and cabinet-making                                  Nose
Strong inorganic acid mists containing sulphuric acid
  (occupational exposures to)                                 Larynx
Painters (occupational exposure as)                           Lung
Radon and its decay products                                  Lung
Underground haematite mining (with exposure to radon)         Lung

 Source: IARC, 1971-1994.

Table 2. Probable human respiratory carcinogens according to IARC

Agent or exposure circumstance                                Suspected target sites

Individual agents
Acrylonitrile                                                 Lung
Azathioprine                                                  Lung
Formaldehyde                                                  Nose, larynx
Silica (crystalline)                                          Lung

Complex mixtures
Diesel engine exhaust                                         Lung
Welding fumes                                                 Lung

Exposure circumstances
Rubber industry                                               Lung
Spraying and application of insecticides
  (occupational exposures in)                                 Lung

Source: IARC, 1971-1994.

Occupational groups demonstrating an increased risk of lung cancer following exposure to arsenic compounds include non-ferrous smelter workers, fur handlers, manufacturers of sheep-dip compounds and vineyard workers (IARC 1987).

A large number of epidemiological studies have been carried out on the association between chromium (VI) compounds and the occurrence of lung and nasal cancer in the chromate, chromate pigment and chromium plating industries (IARC 1990a). The consistency of findings and the magnitude of the excesses have demonstrated the carcinogenic potential of chromium (VI) compounds.

Nickel refinery workers in many countries have shown substantially increased risks of lung and nasal cancers; other nickel-exposed occupational groups in which an increased risk of lung cancer has been detected include sulphide nickel ore miners and workers in high-nickel alloy manufacture (IARC 1990b).

Workers exposed to beryllium are at elevated risk of lung cancer (IARC 1994a). The most informative data are those derived from the US Beryllium Case Registry, in which cases of beryllium-related lung diseases were collected from different industries.

An increase in lung cancer occurrence has been found in cohorts of cadmium smelters and nickel-cadmium battery workers (IARC 1994b). Concurrent exposure to arsenic among the smelters and to nickel among the battery workers cannot explain such an increase.

Asbestos is an important occupational carcinogen. Lung cancer and mesothelioma are the major asbestos-related neoplasms, but cancers at other sites, such as the gastro-intestinal tract, larynx and kidney, have been reported in asbestos workers. All forms of asbestos have been causally related to lung cancer and mesothelioma. In addition, talc-containing asbestiform fibres have been shown to be carcinogenic to the human lung (IARC 1987). A distinctive characteristic of asbestos-induced lung cancer is its synergistic relationship with cigarette smoking.

A number of studies among miners, quarry workers, foundry workers, ceramic workers, granite workers and stone cutters have shown that individuals diagnosed as having silicosis after exposure to dust containing crystalline silica have an increased risk of lung cancer (IARC 1987).

Polynuclear aromatic hydrocarbons (PAHs) are formed mainly as a result of pyrolytic processes, especially the incomplete combustion of organic materials. However, humans are exposed exclusively to mixtures of PAHs, such as soots, coal-tars and coal-tar pitches. Cohort studies of mortality among chimney-sweeps have shown an increased risk of lung cancer, which has been attributed to soot exposure. Several epidemiological studies have shown excesses of respiratory cancer among workers exposed to pitch fumes in aluminium production, calcium carbide production and roofing; in these industries, exposure to tar, and particularly coal tar, also occurs. Other industries in which an excess of respiratory cancer is due to exposure to coal-tar fumes are coal gasification and coke production (IARC 1987).

An increased risk of respiratory (mainly lung) cancer was found in some, but not all, of the studies that attempted to analyse diesel engine exhaust exposure separately from that of other combustion products; the occupational groups studied include railroad workers, dockers, bus garage workers, bus company employees and professional lorry drivers (IARC 1989a). Other mixtures of PAHs that have been studied for their carcinogenicity to humans include carbon blacks, gasoline engine exhaust, mineral oils, shale oils and bitumens. Shale oils and untreated and mildly treated mineral oils are carcinogenic to humans, whereas gasoline engine exhaust is possibly carcinogenic, and highly refined mineral oils, bitumens and carbon blacks are not classifiable as to their carcinogenicity to humans (IARC 1987, 1989a). Although these mixtures do contain PAHs, a carcinogenic effect on the human lung has not been demonstrated for any of them, and the evidence of carcinogenicity for untreated and mildly treated mineral oils and for shale oils is based on increased risks of cancer at sites other than the respiratory organs (mainly the skin and scrotum) among exposed workers.

Bis(β-chloroethyl)sulphide, known as mustard gas, was widely used during the First World War, and studies of soldiers exposed to mustard gas, as well as of workers employed in its manufacture, have revealed a subsequent development of lung and nasal cancer (IARC 1987).

Numerous epidemiological studies have demonstrated that workers exposed to chloromethyl methyl ether and/or bis(chloromethyl)-ether have an increased risk of lung cancer, primarily of SCLC (IARC 1987).

Workers exposed to acrylonitrile have been found to be at higher risk of lung cancer in some but not all studies which have been conducted among workers in textile fibre manufacture, acrylonitrile polymerization and the rubber industry (IARC 1987).

An excess occurrence of cancer has been reported for workers exposed to formaldehyde, including chemical workers, wood workers, and producers and users of formaldehyde (IARC 1987). The evidence is strongest for nasal and nasopharyngeal cancer: the occurrence of these cancers showed a dose-response gradient in more than one study, although the number of exposed cases was often small. Other neoplasms at possibly increased risk are lung and brain cancer and leukaemia.

An increased risk of laryngeal cancer has been found in several studies of workers exposed to mists and vapours of sulphuric and other strong inorganic acids, such as workers in steel pickling operations, and in soap manufacture and petrochemical workers (IARC 1992). Lung cancer risk was also increased in some, but not all, of these studies. Furthermore, an excess of sinonasal cancer was found in a cohort of workers in isopropanol manufacture using the strong-acid process.

Woodworkers are at increased risk of nasal cancer, in particular adenocarcinoma (IARC 1987). The risk is confirmed for furniture and cabinet-makers; studies on workers in carpentry and joinery suggested a similar excess risk, but some studies produced negative results. Other wood industries, such as sawmills and pulp and paper manufacture, were not classifiable as to their carcinogenic risk. Although carcinogenicity of wood dust was not evaluated by IARC, it is plausible to consider that wood dust is responsible for at least part of the increased risk of nasal adenocarcinoma among woodworkers. Woodworkers do not seem to be at increased risk of cancer in other respiratory organs.

Nasal adenocarcinoma has been caused also by employment in boot and shoe manufacture and repair (IARC 1987). No clear evidence is available, on the other hand, that workers employed in the manufacture of leather products and in leather tanning and processing are at increased risk of respiratory cancer. It is not known at present whether the excess of nasal adenocarcinoma in the boot and shoe industry is due to leather dust or to other exposures. Carcinogenicity of leather dust has not been evaluated by IARC.

Lung cancer has been common among uranium miners, underground haematite miners and several other groups of metal miners (IARC 1988; BEIR IV, Committee on the Biological Effects of Ionizing Radiation 1988). A common factor among these occupational groups is exposure to α-radiation emitted by inhaled radon and its decay products. The main source of data on cancer following exposure to ionizing radiation is the follow-up of atomic bomb survivors (Preston et al. 1986; Shimizu et al. 1987). The risk of lung cancer is elevated among atomic bomb survivors as well as among people who have received radiation therapy (Smith and Doll 1982). No convincing evidence, however, is currently available on the existence of an elevated lung cancer risk among workers exposed to low levels of ionizing radiation, such as those occurring in the nuclear industry (Beral et al. 1987; BEIR V, Committee on the Biological Effects of Ionizing Radiation 1990). Carcinogenicity of ionizing radiation has not been evaluated by IARC.

An elevated risk of lung cancer among painters was found in three large cohort studies and in eight small cohort and census-based studies, as well as eleven case-control studies from various countries. On the other hand, little evidence of an increase in lung cancer risk was found among workers involved in the manufacture of paint (IARC 1989b).

A number of other chemicals, mixtures, occupations and industries which have been evaluated by IARC to be carcinogenic to humans (IARC Group 1) do not have the lung as the primary target organ. Nonetheless, the possibility of an increased risk of lung cancer has been raised for some of these chemicals, such as vinyl chloride (IARC 1987), and occupations, such as spraying and application of insecticides (IARC 1991a), but the evidence is not consistent.

Furthermore, several agents which have the lung as one of their main targets have been considered possible human carcinogens (IARC Group 2B) on the basis of carcinogenic activity in experimental animals and/or limited epidemiological evidence. They include inorganic lead compounds (IARC 1987), cobalt (IARC 1991b), man-made vitreous fibres (rock wool, slag wool and glass wool) (IARC 1988b) and welding fumes (IARC 1990c).

 


Monday, 28 February 2011 23:59

Health Effects of Man-Made Fibres

The industrial use of various types of man-made fibres has been increasing, particularly since restrictions were placed on the use of asbestos in view of its known health hazards. The potential for adverse health effects related to the production and use of man-made fibres is still being studied. This article provides an overview of the general principles governing the potential toxicity of such fibres, a description of the various types of fibres in production (as listed in table 1) and an update on existing and ongoing studies of their potential health effects.

Table 1. Synthetic fibres

Man-made fibres

  Aluminium oxide
  Carbon/graphite
  Kevlar® para-aramid
  Silicon carbide fibres and whiskers

Man-made vitreous fibres

  Glass fibre
    Glass wool
    Continuous glass filament
    Special-purpose glass fibre
  Mineral wool
    Rock wool
    Slag wool
  Refractory ceramic fibre
 

Toxicity Determinants

The primary factors related to potential for toxicity due to exposure to fibres are:

  1. fibre dimension
  2. fibre durability and
  3. dose to the target organ.

 

Generally, fibres that are long and thin (but of respirable size) and durable have the greatest potential for causing adverse effects if delivered to the lungs in sufficient concentration. In short-term animal inhalation studies, fibre toxicity has been correlated with inflammation, cytotoxicity, altered macrophage function and biopersistence. Carcinogenic potential is most likely related to cellular DNA damage via the formation of oxygen free radicals, the formation of clastogenic factors, or missegregation of chromosomes in cells in mitosis, alone or in combination.

Fibres of respirable size are those less than 3.0 to 3.5μm in diameter and less than 200μm in length. According to the “Stanton hypothesis”, the carcinogenic potential of fibres (as determined by animal pleural implantation studies) is related to their dimensions (the greatest risk is associated with fibres less than 0.25μm in diameter and greater than 8μm in length) and their durability (Stanton et al. 1981).

Naturally occurring mineral fibres, such as asbestos, exist in a polycrystalline structure that has the propensity to cleave along longitudinal planes, creating thinner fibres with higher length-to-width ratios, which have a greater potential for toxicity. The vast majority of man-made fibres are non-crystalline or amorphous and fracture perpendicularly to their longitudinal plane into shorter fibres. This is an important difference between asbestos and non-asbestos fibrous silicates on the one hand and man-made fibres on the other. The durability of fibres deposited in the lung depends upon the lung’s ability to clear the fibres as well as on the fibres’ physical and chemical properties. The durability of man-made fibres can be altered in the production process, according to end-use requirements, through the addition of certain stabilizers such as Al2O3. Because of this variability in the chemical constituents and size of man-made fibres, their potential toxicity has to be evaluated on a fibre-type by fibre-type basis.
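As a minimal sketch of how these dimensional criteria might be applied when screening fibre measurements (in Python; the function names and example measurements are hypothetical, and the thresholds are simply the ones quoted above, not part of any standard method):

    # Dimensions in micrometres; thresholds as quoted in the text.
    def is_respirable(diameter, length):
        # Respirable size: diameter < ~3.0 um and length < 200 um
        return diameter < 3.0 and length < 200.0

    def meets_stanton_criteria(diameter, length):
        # Greatest carcinogenic potential per the Stanton hypothesis:
        # diameter < 0.25 um and length > 8 um
        return diameter < 0.25 and length > 8.0

    for d, l in [(0.2, 12.0), (1.5, 50.0), (5.0, 10.0)]:
        print(d, l, is_respirable(d, l), meets_stanton_criteria(d, l))
    # (0.2, 12.0): respirable and meets the Stanton criteria
    # (1.5, 50.0): respirable only
    # (5.0, 10.0): neither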

Man-made Fibres

Aluminium oxide fibres

Crystalline aluminium oxide fibre toxicity has been suggested by a case report of pulmonary fibrosis in a worker employed in aluminium smelting for 19 years (Jederlinic et al. 1990). His chest radiograph revealed interstitial fibrosis. Analysis of the lung tissue by electron microscopy demonstrated 1.3 × 10⁹ crystalline fibres per gram of dry lung tissue, or ten times more fibres than the number of asbestos fibres found in lung tissue from chrysotile asbestos miners with asbestosis. Further study is needed to determine the role of crystalline aluminium oxide fibres (figure 1) in pulmonary fibrosis. This case report, however, suggests that fibrization can take place when the appropriate environmental conditions coexist, such as increased air flow across molten materials. Both phase-contrast light microscopy and electron microscopy with energy-dispersive x-ray analysis should be used to identify potential airborne fibres in the work environment and in lung tissue samples in cases where there are clinical findings consistent with fibre-induced pneumoconiosis.

Figure 1. Scanning electron micrograph (SEM) of aluminium oxide fibres.


Courtesy of T. Hesterberg.

Carbon/graphite fibres

Carbonaceous pitch, rayon or polyacrylonitrile fibres heated to 1,200°C form amorphous carbon fibres, and when heated above 2,200°C they form crystalline graphite fibres (figure 2). Resin binders can be added to increase the strength and to allow moulding and machining of the material. Generally, these fibres have a diameter of 7 to 10μm, but variations in size occur due to the manufacturing process and mechanical manipulation. Carbon/graphite composites are used in the aircraft, automobile and sporting goods industries. Exposure to respirable-sized carbon/graphite particles can occur during the manufacturing process and with mechanical manipulation. Furthermore, small quantities of respirable-sized fibres can be produced when composites are heated to 900 to 1,100°C. The existing knowledge regarding these fibres is inadequate to provide definite answers as to their potential for causing adverse health effects. Studies involving intratracheal injection of different graphite fibre composite dusts in rats produced heterogeneous results: three of the dust samples tested produced minimal toxicity, and two of the samples produced consistent toxicity as manifested by cytotoxicity for alveolar macrophages and differences in the total number of cells recovered from the lung (Martin, Meyer and Luchtel 1989). Clastogenic effects have been observed in mutagenicity studies of pitch-based fibres, but not of polyacrylonitrile-based carbon fibres. A ten-year study of carbon fibre production workers manufacturing fibres 8 to 10μm in diameter did not reveal any abnormalities (Jones, Jones and Lyle 1982). Until further studies are available, it is recommended that exposure to respirable-sized carbon/graphite fibres be kept at 1 fibre/ml (f/ml) or lower, and that exposure to respirable-sized composite particulates be maintained below the current respirable dust standard for nuisance dust.

Figure 2. SEM of carbon fibres.


Kevlar para-aramid fibres

Kevlar para-aramid fibres are approximately 12μm in diameter, and the curved, ribbon-like fibrils on the surface of the fibres are less than 1μm in width (figure 3). The fibrils partially peel off the fibres and interlock with other fibrils to form clumps which are non-respirable in size. The physical properties of Kevlar fibres include substantial heat resistance and tensile strength. They have many different uses, serving as a reinforcing agent in plastics, fabrics and rubber, and as an automobile brake friction material. The eight-hour time-weighted average (TWA) of fibril levels during manufacturing and end-use applications ranges from 0.01 to 0.4 f/ml (Merriman 1989). Very low levels of Kevlar aramid fibres are generated in dust when the material is used in friction applications. The only available health effects data are from animal studies. Rat inhalation studies involving one- to two-year periods and exposures to fibrils at 25, 100 and 400 f/ml revealed dose-related alveolar bronchiolarization. Slight fibrosis and alveolar duct fibrotic changes were also noted at the higher exposure levels; the fibrosis may have been related to overloading of pulmonary clearance mechanisms. A tumour type unique to rats, the cystic keratinizing squamous cell tumour, developed in a few of the study animals (Lee et al. 1988). Short-term rat inhalation studies indicate that the fibrils have low durability in lung tissue and are rapidly cleared (Warheit et al. 1992). No studies are available regarding the human health effects of exposure to Kevlar para-aramid fibre. However, in view of the evidence of decreased biopersistence and the physical structure of Kevlar, the health risks should be minimal if exposures to fibrils are maintained at 0.5 f/ml or less, as is now the case in commercial applications.
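An eight-hour TWA simply weights each measured concentration by its sampling time and normalizes to an eight-hour shift. A minimal sketch of the standard calculation (in Python; the sample values are hypothetical, chosen to fall within the 0.01 to 0.4 f/ml range quoted above):

    # (concentration in f/ml, duration in hours) over one shift; values hypothetical
    samples = [(0.30, 2.0), (0.05, 4.0), (0.01, 2.0)]
    twa = sum(c * t for c, t in samples) / 8.0   # normalize to the 8-hour shift
    print(round(twa, 3))   # -> 0.103 f/ml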

Figure 3. SEM of Kevlar para-aramid fibres.


Silicon carbide fibres and whiskers

Silicon carbide (carborundum) is a widely used abrasive and refractory material that is manufactured by combining silica and carbon at 2,400°C. Silicon carbide fibres and whiskers (figure 4) (Harper et al. 1995) can be generated as by-products of the manufacture of silicon carbide crystals or can be purposely produced as polycrystalline fibres or monocrystalline whiskers. The fibres generally are less than 1 to 2μm in diameter and range from 3 to 30μm in length. The whiskers average 0.5μm in diameter and 10μm in length. Incorporation of silicon carbide fibres and whiskers adds strength to products such as metal matrix composites, ceramics and ceramic components. Exposure to fibres and whiskers can occur during the production and manufacturing processes and, potentially, during machining and finishing. For example, short-term exposure during the handling of recycled materials has been shown to reach levels up to 5 f/ml. Machining of metal and ceramic matrix composites has resulted in eight-hour TWA exposure concentrations of 0.031 f/ml and up to 0.76 f/ml, respectively (Scansetti, Piolatto and Botta 1992; Bye 1985).

Figure 4. SEMs of silicon carbide fibres (A) and whiskers (B).


Existing data from animal and human studies indicate a definite fibrogenic and possible carcinogenic potential. In vitro mouse cell culture studies involving silicon carbide whiskers revealed cytotoxicity equal to or greater than that resulting from crocidolite asbestos (Johnson et al. 1992; Vaughan et al. 1991). Persistent adenomatous hyperplasia of rat lungs was demonstrated in a subacute inhalation study (Lapin et al. 1991). Sheep inhalation studies involving silicon carbide dust revealed that the particles were inert. However, exposure to silicon carbide fibres resulted in fibrosing alveolitis and increased fibroblast growth activity (Bégin et al. 1989). Studies of lung tissue samples from silicon carbide manufacturing workers revealed silicotic nodules and ferruginous bodies and indicated that silicon carbide fibres are durable and can exist in high concentrations in lung parenchyma. Chest radiographs also have been consistent with nodular and irregular interstitial changes and pleural plaques.

Silicon carbide fibres and whiskers are respirable in size, durable, and have definite fibrogenic potential in lung tissue. A manufacturer of silicon carbide whiskers has set an internal standard at 0.2 f/ml as an eight-hour TWA (Beaumont 1991). This is a prudent recommendation based on currently available health information.

Man-made Vitreous Fibres

Man-made vitreous fibres (MMVFs) generally are classified as:

  1. glass fibre (glass wool or fibreglass, continuous glass filament and special-purpose glass fibre)
  2. mineral wool (rock wool and slag wool) and
  3. ceramic fibre (ceramic textile fibre and refractory ceramic fibre).

 

The manufacturing process begins with the melting of raw materials, with subsequent rapid cooling, resulting in the production of non-crystalline (or vitreous) fibres. Some manufacturing processes allow for large variations in fibre size, the lower limit being 1μm or less in diameter (figure 5). Stabilizers (such as Al2O3, TiO2 and ZnO) and modifiers (such as MgO, Li2O, BaO, CaO, Na2O and K2O) can be added to alter physical and chemical properties such as tensile strength, elasticity, durability and thermal non-transference.

Figure 5. SEM of slag wool.


Rock wool, glass fibres and refractory ceramic fibres are identical in appearance.

Glass fibre is manufactured from silicon dioxide and various concentrations of stabilizers and modifiers. Most glass wool is produced through use of a rotary process resulting in 3 to 15μm average diameter discontinuous fibres with variations to 1μm or less in diameter. The glass wool fibres are bound together, most commonly with phenolic formaldehyde resins, and then put through a heat-curing polymerization process. Other agents, including lubricants and wetting agents, may also be added, depending on the production process. The continuous glass filament production process results in less variation from the average fibre diameter in comparison to glass wool and special-purpose glass fibre. Continuous glass filament fibres range from 3 to 25μm in diameter. Special-purpose glass fibre production involves a flame attenuation fibrization process that produces fibres with an average diameter of less than 3μm.

Slag wool and rock wool production involves melting and fibrizing slag from metallic ore and igneous rock, respectively. The production process includes a dish-shaped wheel and wheel centrifuge process. It produces discontinuous fibres of 3.5 to 7μm average diameter, whose sizes may extend well into the respirable range. Mineral wool can be manufactured with or without binder, depending on end-use applications.

Refractory ceramic fibre is manufactured through a wheel centrifuge or steam jet fibrization process using melted kaolin clay, alumina/silica, or alumina/silica/zirconia. Average fibre diameters range from 1 to 5μm. When heated to temperatures above 1,000°C, refractory ceramic fibres can undergo conversion to cristobalite (a crystalline silica).

MMVFs with different fibre diameters and chemical composition are used in over 35,000 applications. Glass wool is used in residential and commercial acoustical and thermal insulation applications, as well as in air handling systems. Continuous glass filament is used in fabrics and as reinforcing agents in plastics such as are employed in automobile parts. Special-purpose glass fibre is used in specialty applications, for instance in aircraft, that require high heat and acoustical insulation properties. Rock and slag wool without binder is used as blown insulation and in ceiling tiles. Rock and slag wool with a phenolic resin binder is used in insulation materials, such as insulation blankets and batts. Refractory ceramic fibre constitutes 1 to 2% of the worldwide production of MMVF. Refractory ceramic fibre is used in specialized high-temperature industrial applications, such as furnaces and kilns. Glass wool, continuous glass filament and mineral wool are manufactured in the greatest amounts.

MMVFs are thought to have less potential than naturally occurring fibrous silicates (such as asbestos) for producing adverse health effects because of their non-crystalline state and their propensity to fracture into shorter fibres. Existing data suggest that the most commonly used MMVF, glass wool, has the lowest risk of producing adverse health effects, followed by rock and slag wool, and then by both special-purpose glass fibre of increased durability and refractory ceramic fibre. Special-purpose glass fibre and refractory ceramic fibre have the greatest potential for existing as respirable-sized fibres, as they are generally less than 3μm in diameter. Special-purpose glass fibre (with increased concentrations of stabilizers such as Al2O3) and refractory ceramic fibre are also durable in physiologic fluids. Continuous glass filaments are non-respirable in size and therefore do not represent a potential pulmonary health risk.

Available health data is gathered from inhalation studies in animals and from morbidity and mortality studies of workers involved with MMVF manufacturing. Inhalation studies involving exposure of rats to two commercial glass wool insulation materials averaging 1μm in diameter and 20μm in length revealed a mild pulmonary cellular response which partly reversed following discontinuation of exposure. Similar findings resulted from an animal inhalation study of a type of slag wool. Minimal fibrosis has been demonstrated with animal inhalation exposure to rock wool. Refractory ceramic fibre inhalation studies resulted in lung cancer, mesothelioma and pleural and pulmonary fibrosis in rats, and in mesothelioma and pleural and pulmonary fibrosis in hamsters, at a maximum tolerated dose of 250 f/ml. At 75 f/ml and 120 f/ml, one mesothelioma and minimal fibrosis were demonstrated in rats, and at 25 f/ml there was a pulmonary cellular response (Bunn et al. 1993).

Skin, eye, and upper and lower respiratory tract irritation can occur, depending on exposure levels and job duties. Skin irritation has been the most common health effect noted and can cause up to 5% of new MMVF manufacturing plant workers to leave their employment within a few weeks. It is caused by mechanical trauma to the skin from fibres greater than 4 to 5μm in diameter. It can be prevented with appropriate environmental control measures, including avoidance of direct skin contact with the fibres, the wearing of loose-fitting, long-sleeved clothing, and separate washing of work clothing. Upper and lower respiratory symptoms can occur in unusually dusty situations, particularly in MMVF product fabrication and end-use applications and in residential settings when MMVFs are not handled, installed or repaired correctly.

Studies of respiratory morbidity, as measured by symptoms, chest radiographs and pulmonary function tests among manufacturing plant workers generally have not found any adverse effects. However, an ongoing study of refractory ceramic fibre manufacturing plant workers has revealed an increased prevalence of pleural plaques (Lemasters et al. 1994). Studies in secondary production workers and end-users of MMVF are limited and have been hampered by the likelihood of the confounding factor of previous asbestos exposures.

Mortality studies of workers in glass fibre and mineral wool manufacturing plants are continuing in Europe and the United States. The data from the European study revealed an overall increase in lung cancer mortality based upon national, but not local, mortality rates. There was an increasing trend of lung cancer in the glass and mineral wool cohorts with time since first employment, but not with duration of employment. Using local mortality rates, there was an increase in lung cancer mortality for the earliest phase of mineral wool production (Simonato, Fletcher and Cherrie 1987; Boffetta et al. 1992). The data from the study in the United States demonstrated a statistically significant increased risk of respiratory cancer but failed to find an association between the development of cancer and various indices of fibre exposure (Marsh et al. 1990). This is in accord with other case-control studies of slag wool and glass fibre manufacturing plant workers, which have revealed an increased risk of lung cancer associated with cigarette smoking but not with the extent of MMVF exposure (Wong, Foliart and Trent 1991; Chiazze, Watkins and Fryar 1992). A mortality study of continuous glass filament manufacturing workers did not reveal an increased risk of mortality (Shannon et al. 1990). A mortality study involving refractory ceramic fibre workers is under way in the United States. Mortality studies of workers involved with product fabrication and end-users of MMVF are very limited.
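The contrast between the national and local reference rates in the European study illustrates how a standardized mortality ratio (SMR) depends on the choice of comparison population. A minimal sketch of the calculation follows; every number in it is invented purely for illustration, not data from the studies cited.

```python
# Minimal sketch: standardized mortality ratio (SMR) = observed / expected deaths.
# Expected deaths come from reference rates applied to the cohort's person-years.
# All figures below are hypothetical, not data from the cited studies.

def smr(observed_deaths, person_years_by_stratum, reference_rates):
    """Both dict arguments are keyed by age stratum; rates are deaths per person-year."""
    expected = sum(person_years_by_stratum[s] * reference_rates[s]
                   for s in person_years_by_stratum)
    return observed_deaths / expected

person_years = {"40-49": 12000.0, "50-59": 8000.0}
national = {"40-49": 0.0008, "50-59": 0.0020}  # hypothetical national rates
local = {"40-49": 0.0010, "50-59": 0.0025}     # hypothetical (higher) local rates

observed = 30
print(round(smr(observed, person_years, national), 2))  # 1.17: apparent excess
print(round(smr(observed, person_years, local), 2))     # 0.94: no excess vs local rates
```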

In 1987, the International Agency for Research on Cancer (IARC) classified glass wool, rock wool, slag wool, and ceramic fibres as possible human carcinogens (group 2B). Ongoing animal studies and morbidity and mortality studies of workers involved with MMVF will help to further define any potential human health risk. Based on available data, the health risk from exposure to MMVF is substantially lower than what has been associated with asbestos exposure both from a morbidity and mortality perspective. The vast majority of the human studies, however, are from MMVF manufacturing facilities where exposure levels have generally been maintained below a 0.5 to 1 f/ml level over an eight-hour work day. The lack of morbidity and mortality data on secondary and end-users of MMVF makes it prudent to control respirable fibre exposure at or below these levels through environmental control measures, work practices, worker training and respiratory protection programmes. This is especially applicable with exposure to durable refractory ceramic and special purpose glass MMVF and any other type of respirable man-made fibre that is durable in biological media and that can therefore be deposited and retained in the pulmonary parenchyma.

 


Monday, 28 February 2011 23:53

Chronic Obstructive Pulmonary Disease

Chronic respiratory disorders characterized by differing grades of dyspnoea, cough, phlegm expectoration and functional respiratory impairment are included in the general category of chronic non-specific lung disease (CNSLD). The original definition of CNSLD, accepted at the Ciba Symposium in 1959, covered chronic bronchitis, emphysema and asthma. Subsequently, the diagnostic terminology of chronic bronchitis was redefined according to the concept that disabling airflow limitation represents the final stage of an ever-progressing process which starts as benign expectoration caused by prolonged or recurrent inhalation of bronchial irritants (the “British Hypothesis”). This concept was thrown into question in 1977, and since then hypersecretion and airflow obstruction have been regarded as unrelated processes. The alternative hypothesis, known as the “Dutch Hypothesis”, while accepting the role of smoking and air pollution in the aetiology of chronic airflow limitation, points to the key and possibly causative role of host susceptibility, manifesting itself as, for instance, an asthmatic tendency. Subsequent studies have shown that both hypotheses can contribute to the understanding of the natural history of chronic airways disease. Although the conclusion that the hypersecretory syndrome has little prognostic value has generally been accepted as well grounded, recent studies have shown a significant association between hypersecretory disorder and an increased risk of developing airflow limitation and of respiratory mortality.

Currently, the term CNSLD combines two major categories of chronic respiratory disorders, asthma (discussed in a separate article of this chapter) and chronic obstructive pulmonary disease (COPD).

Definition

In a document published by the American Thoracic Society (ATS) (1987), COPD is defined as a disorder characterized by abnormal tests of expiratory flow that do not change markedly over periods of several months’ observation. Taking into account functional and structural causes of airflow limitation, the definition includes the following non-asthmatic airways disorders: chronic bronchitis, emphysema and peripheral airways disease. The important common characteristics of COPD are pronounced pathophysiological abnormalities mostly exhibited as a varying degree of chronic airflow limitation (CAL). Chronic airflow limitation can be found in a subject with any disease included under the rubric of COPD.

Chronic bronchitis is defined as an abnormal condition of the respiratory tract, characterized by persistent and excessive productive cough, which reflects the mucous hypersecretion within the airways. For epidemiological purposes, the diagnosis of chronic bronchitis has been based on answers to the set of standard questions included in the Medical Research Council (MRC) or ATS questionnaire on respiratory symptoms. The disorder is defined as cough and phlegm expectoration occurring on most days for at least three months of the year, during at least two successive years.
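For epidemiological fieldwork this case definition reduces to a simple predicate over the questionnaire answers. The sketch below encodes it; the field names are hypothetical stand-ins, not the actual MRC or ATS item codes.

```python
# Minimal sketch of the epidemiological case definition of chronic bronchitis:
# cough and phlegm on most days for >= 3 months of the year, in at least two
# successive years. Field names are hypothetical, not actual MRC/ATS items.

def chronic_bronchitis(years):
    """years: chronological list of per-year records, each a dict with boolean
    'cough_and_phlegm_most_days' and integer 'months_with_symptoms'."""
    qualifying = [y["cough_and_phlegm_most_days"] and y["months_with_symptoms"] >= 3
                  for y in years]
    # Require at least two successive qualifying years.
    return any(a and b for a, b in zip(qualifying, qualifying[1:]))

history = [
    {"cough_and_phlegm_most_days": True, "months_with_symptoms": 4},
    {"cough_and_phlegm_most_days": True, "months_with_symptoms": 3},
    {"cough_and_phlegm_most_days": False, "months_with_symptoms": 1},
]
print(chronic_bronchitis(history))  # True: the first two years qualify
```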

Emphysema is defined as an anatomical alteration of the lung characterized by abnormal enlargement of the airspaces distal to the terminal bronchiole, accompanied by destruction of acinar architecture. Emphysema often coexists with chronic bronchitis.

The term peripheral airways disease or small airways disease is used to describe the abnormal condition of airways less than 2 to 3 mm in diameter. Inflammation, obstruction and excess mucus production in this part of the bronchial tree have been observed in a variety of clinical entities, including chronic bronchitis and emphysema. The pathological evidence of local structural abnormalities, and the concept that the observed changes can represent an early stage in the natural history of chronic airways disease, stimulated in the late 1960s and the 1970s a rapid development of functional tests designed to examine the physiological properties of the peripheral airways. Consequently, the term peripheral airways disease is generally understood to refer either to the structural abnormalities or to the functional defect.

CAL is a functional hallmark of COPD. The term refers to an increased resistance to airflow, resulting in a persistent slowing during forced expiration. This definition and the underlying clinical and pathophysiological knowledge imply two important diagnostic clues. First, the condition must be shown to have a chronic course: an early recommendation, dating from 1958, required the presence of CAL for more than one year to fulfil the diagnostic criteria, whereas the time frame suggested more recently is less rigorous and requires demonstration of the defect over a period of three months. Second, the defect must be demonstrated objectively: in surveillance of work-related CAL, standard spirometric evaluation provides sufficient means of identifying CAL, based on the reduction in the forced expiratory volume in one second (FEV1) and/or in the ratio of FEV1 to forced vital capacity (FVC).

Usually, CAL is diagnosed when the FEV1 value is reduced below 80% of the predicted value. According to the functional classification of CAL recommended by the American Thoracic Society:

  1. mild impairment occurs when the value of FEV1 is below 80% and above 60% of the predicted value
  2. moderate impairment occurs when FEV1 is in the range of 40% to 59% of the predicted value
  3. severe impairment occurs when FEV1 is below 40% of the predicted value.

 

When the degree of impairment is assessed by the value of the FEV1/FVC ratio, a mild defect is diagnosed if the ratio falls between 60% and 74%; moderate impairment if the ratio ranges from 41% to 59%; and severe impairment if the ratio is 40% or less.
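Taken together, the two ATS grading schemes translate directly into threshold logic. A minimal sketch follows; the function names and the "normal" boundaries (80% predicted, inferred as 75% for the ratio) are our reading of the text above, not ATS wording.

```python
# Minimal sketch of the two severity gradings described above.
# Thresholds are those quoted in the text; function names are illustrative.

def grade_by_fev1_percent_predicted(fev1_pct):
    if fev1_pct >= 80:
        return "normal"      # CAL usually diagnosed below 80% of predicted
    if fev1_pct > 60:        # below 80% and above 60%
        return "mild"
    if fev1_pct >= 40:       # 40% to 59%
        return "moderate"
    return "severe"          # below 40%

def grade_by_fev1_fvc_ratio(ratio_pct):
    if ratio_pct >= 75:      # normal boundary inferred, not stated in the text
        return "normal"
    if ratio_pct >= 60:      # 60% to 74%
        return "mild"
    if ratio_pct >= 41:      # 41% to 59%
        return "moderate"
    return "severe"          # 40% or less

print(grade_by_fev1_percent_predicted(55))  # moderate
print(grade_by_fev1_fvc_ratio(40))          # severe
```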

Prevalence of COPD

Accumulated evidence indicates that COPD is a common problem in many countries. Its prevalence is higher in men than in women and increases with age. Chronic bronchitis, a well-standardized diagnostic form of COPD, is two to three times more prevalent in men than in women. Large surveys document that usually between 10% and 20% of adult men in the general population meet the diagnostic criteria of chronic bronchitis (table 1). The disease is much more frequent among smokers, both in men and in women. Occurrence of COPD in occupational populations is discussed below.

Table 1. Prevalence of COPD in selected countries: results of large surveys

| Country | Year | Population | Males SMK (%) | Males CB (%) | Males COPD/CAL (%) | Females SMK (%) | Females CB (%) | Females COPD/CAL (%) |
|---|---|---|---|---|---|---|---|---|
| USA | 1978 | 4,699 | 56.6 | 16.5 | n.r. | 36.2 | 5.9 | n.r. |
| USA | 1982 | 2,540 | 52.8 | 13.0 | 5.2 | 32.2 | 4.1 | 2.5 |
| UK | 1961 | 1,569 |  | 17.0 | n.r. | n.r. | 8.0 | n.r. |
| Italy | 1988 | 3,289 | 49.2 | 13.1 | n.r. | 26.9 | 2.8 | n.r. |
| Poland | 1986 | 4,335 | 59.6 | 24.2 | 8.5 | 26.7 | 10.4 | 4.9 |
| Nepal | 1984 | 2,826 | 78.3 | 17.6 | n.r. | 58.9 | 18.9 | n.r. |
| Japan | 1977 | 22,590 | n.r. | 5.8 | n.r. | n.r. | 3.1 | n.r. |
| Australia | 1968 | 3,331 | n.r. | 6.3 | n.r. | n.r. | 2.4 | n.r. |

Legend: SMK = smoking habit; CB = chronic bronchitis; COPD/CAL = chronic obstructive pulmonary disease/chronic airways limitation; n.r. = not reported.

Modified with permission from: Woolcock 1989.

 

Risk factors of COPD, including effect of occupational exposures

COPD is a disorder of multifactorial aetiology. Numerous studies have provided evidence for a causative dependence of COPD on many risk factors, categorized as host and environmental factors. The role of occupational exposures among the environmental risk factors in the genesis of COPD was recognized following the accumulation of epidemiological evidence published in the period 1984 to 1988. More recently, independent effects of smoking and occupational exposures have been confirmed, based on the results of studies published from 1966 to 1991. Table 2 summarizes the current state of knowledge on the multifactorial aetiology of COPD.

Table 2. Risk factors implicated in COPD

| Factor related to | Established | Putative |
|---|---|---|
| Host | Sex; age; antitrypsin deficiency | Atopy; familial factors; increased airway reactivity; past health |
| Environment | Tobacco smoke (personal) | Tobacco smoke (environmental); air pollution; occupational exposure |

Reproduced with permission from: Becklake et al. 1988.

 

The occurrence of chronic bronchitis in occupational populations is a potential marker of significant exposure to occupational irritants. A significant effect of exposure to industrial dust on the development of chronic bronchitis has been documented in workers employed in coal mining, the iron and steel industry, and the textile, construction and agricultural industries. In general, dustier environments are associated with a higher prevalence of symptoms of chronic expectoration. The prevalence studies, however, are subject to the “healthy worker effect”, a bias that results in underestimation of the health impact of harmful occupational exposures. More conclusive, yet less available, are data on the disease’s incidence. In certain occupations the incidence rate of chronic bronchitis is high, ranging from 197 to 276 per 10,000 in farmers, to 380 per 10,000 in engineering workers and 724 per 10,000 in miners and quarry workers, compared with 108 per 10,000 in white-collar workers.

This pattern, together with the causative effect of smoking, is in line with the concept that chronic bronchitis represents a common response to chronic inhalation of respiratory irritants.
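Relative to the white-collar baseline, the incidence figures quoted above correspond to rate ratios of roughly 1.8 to 6.7. A sketch of that arithmetic:

```python
# Rate ratios for the chronic bronchitis incidence figures quoted above,
# taking white-collar workers (108 per 10,000) as the reference group.

baseline = 108  # incidence per 10,000
rates = {
    "farmers (low end)": 197,
    "farmers (high end)": 276,
    "engineering workers": 380,
    "miners and quarry workers": 724,
}
for group, rate in rates.items():
    print(f"{group}: rate ratio {rate / baseline:.1f}")
# farmers 1.8-2.6, engineering workers 3.5, miners and quarry workers 6.7
```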

A deleterious effect of the lung dust burden is thought to result in chronic non-specific inflammation of the bronchial wall. This type of inflammatory response has been documented in workers exposed to organic dust and its constituents, such as grain dust and endotoxin, both responsible for neutrophilic inflammation. The role of individual susceptibility cannot be ruled out; known host-related factors include past respiratory infections, the efficiency of clearance mechanisms and poorly determined genetic factors, whereas cigarette smoking remains the single most potent environmental cause of chronic bronchitis.

The contribution of occupational exposures to the aetiology of emphysema is not clearly understood. The putative causative factors include nitrogen oxides, ozone and cadmium, as suggested by experimental observations. The data provided by occupational epidemiology are less convincing and may be difficult to obtain because of the usually low levels of occupational exposure and the predominant effect of smoking. This is particularly important in the case of so-called centriacinar emphysema. The other pathological form of the disease, panacinar emphysema, is considered hereditary and related to alpha1-antitrypsin deficiency.

Bronchiolar and peribronchiolar inflammation, accompanied by progressive narrowing of the affected segment of the bronchial tree (peripheral airways disease or constrictive bronchiolitis), can be seen in a variety of conditions underlying symptoms of COPD, at different stages of their natural history. In the occupational setting, the disease usually follows acute lung injury due to inhalation of toxic fumes, such as sulphur dioxide, ammonia, chlorine and nitrogen oxides. However, the occupational epidemiology of constrictive bronchiolitis remains largely unclear. Its early stages are difficult to identify because of non-specific symptomatology and the limitations of diagnostic procedures; more is known about cases following industrial accidents. Otherwise, the disease can go undetected until the development of overt symptomatology and objective respiratory impairment (i.e., chronic airflow limitation).

CAL is not infrequently found in various occupational groups and, as documented by controlled studies, its prevalence in blue-collar workers can exceed that in white-collar workers. Because of the complex aetiology of CAL, which includes the effects of smoking and of host-related risk factors, early studies on the association of chronic airflow limitation with occupational exposure were inconclusive. Modern occupational epidemiology, employing goal-oriented designs and modelling of exposure-response relationships, has provided evidence of an association between reduced airflow capacity and exposure to both mineral and organic dusts, and to fumes and gases.

Workforce-based longitudinal studies conducted in workers exposed to mineral and organic dusts, and to fumes and gases, show that lung function loss is associated with occupational exposures. The results summarized in table 3 demonstrate a significant effect of dust exposure in coal and iron mining and in the asbestos-cement industry, and among steel, smelter and pulp mill workers. Several of the analysed exposures combined dust with fumes (such as non-halogenated hydrocarbons, paints, resins or varnishes) and gases (such as sulphur dioxide or the oxides of nitrogen). According to the results of a comprehensive review restricted to the most valid and systematically analysed articles on COPD and occupational dust exposure, it can be estimated that 80 of 1,000 non-smoking coalminers could be expected to develop at least a 20% loss of FEV1 following 35 years of work at a mean respirable dust concentration of 2 mg/m3, and that for non-smoking gold miners the respective risk could be three times as large.

Table 3. Loss of ventilatory function in relation to occupational exposures: results from selected longitudinal workforce-based studies

| Country (year) | Subjects and exposures | Test used | Annual loss: NE | Annual loss: E | Annual loss: NS | Annual loss: S |
|---|---|---|---|---|---|---|
| UK (1982) | 1,677 coalminers | FEV ml | 37 | 41 (av), 57 (max) | 37 | 48 |
| USA (1985) | 1,072 coalminers | FEV ml | 40 | 47 | 40 | 49 |
| Italy (1984) | 65 asbestos cement workers | FEV ml | 9 | 49 | Not given | Not given |
| Sweden (1985) | 70 asbestos cement workers | FEV% | 4.2 | 9.2 | 3.7 | 9.4 |
| France (1986) | 871 iron miners | FEV% | 6 | 8 | 5 | 7 |
| France (1979) | 159 steel-workers | FEV% | 0.6 | 7.4 | Not given | Not given |
| Canada (1984) | 179 mine and smelter workers | FEV/FVC% | 1.6 | 3.1 | 2.0 | 3.4 |
| France (1982) | 556 workers in factories | FEV ml | 42 | 50: 52 (dust), 47 (gases), 55 (heat) | 40 | 48 |
| Finland (1982) | 659 pulp mill workers | FEV ml | No effect | No effect | 37 | 49 |
| Canada (1987) | 972 mine and smelter workers | FEV ml |  | 69 (roaster), 49 (furnace), 33 (mining) | 41 | 54 |

* The table shows the average annual loss of lung function in the exposed (E) compared with the non-exposed (NE), and in smokers (S) compared with non-smokers (NS). Independent effects of smoking and/or exposure were shown to be significant in the analyses carried out by the authors in all studies except the Finnish study.

Modified with permission from: Becklake 1989.
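The coalminer estimate quoted just before table 3 is framed in terms of cumulative exposure, that is, mean respirable dust concentration multiplied by years worked. A minimal sketch restating those figures (the variable names are ours):

```python
# Minimal sketch of the cumulative-exposure framing used above: a mean
# respirable dust concentration of 2 mg/m3 over 35 years, and the quoted
# risk of >= 20% FEV1 loss (80 per 1,000 non-smoking coalminers; roughly
# three times that figure for non-smoking gold miners).

mean_concentration = 2.0   # mg/m3, respirable dust
years_worked = 35
cumulative = mean_concentration * years_worked
print(f"cumulative exposure: {cumulative} mg/m3-years")  # 70.0

risk_coal = 80 / 1000            # risk quoted in the review
risk_gold = 3 * risk_coal        # "three times as large"
print(f"coalminers: {risk_coal:.0%}, gold miners: {risk_gold:.0%}")  # 8%, 24%
```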

 

Selected studies of grain workers show the effect of occupational exposure to organic dust on longitudinal changes in lung function. Although limited in number and in duration of follow-up, these studies document independent relationships of both grain dust exposure and smoking with the annual loss of lung function.

Pathogenesis

The central pathophysiological disorder of COPD is chronic airflow limitation. The disorder results from narrowing of the airways—a condition that has a complex mechanism in chronic bronchitis—whereas in emphysema the airways obstruction results mainly from low elastic recoil of the lung tissue. Both mechanisms often coexist.

The structural and functional abnormalities seen in chronic bronchitis include hypertrophy and hyperplasia of submucosal glands associated with mucous hypersecretion. The inflammatory changes lead to smooth muscle hyperplasia and mucosal swelling. The mucous hypersecretion and airways narrowing favour bacterial and viral infections of the respiratory tract, which may further increase the airways obstruction.

The airflow limitation in emphysema reflects the loss of elastic recoil as a consequence of the destruction of elastin fibres, with collapse of the bronchiolar wall due to high lung compliance. The destruction of elastin fibres is considered to result from an imbalance in the proteolytic-antiproteolytic system, in a process also known as protease inhibitor deficiency. Alpha1-antitrypsin is the most potent inhibitor of the elastase effect on alveoli in humans. Neutrophils and macrophages that release elastase accumulate in response to local inflammatory mediators and to inhalation of various respiratory irritants, including tobacco smoke. The other, less powerful inhibitors are alpha2-macroglobulin and a low-molecular-weight elastase inhibitor released from submucosal glands.

Recently, the antioxidant-deficiency hypothesis has been examined for its role in the pathogenetic mechanisms of emphysema. The hypothesis contends that oxidants, if not inhibited by antioxidants, cause damage to the lung tissue, leading to emphysema. Known oxidants include exogenous factors (ozone, chlorine, nitrogen oxides and tobacco smoke) and endogenous factors such as free radicals. The most important antioxidant factors include natural antioxidants such as vitamins E and C, catalase, superoxide dismutase, glutathione and ceruloplasmin, and synthetic antioxidants such as N-acetylcysteine and allopurinol. There is an increasing body of evidence of synergism between the antioxidant-deficiency and protease inhibitor-deficiency mechanisms in the pathogenesis of emphysema.

Pathology

Pathologically, chronic bronchitis is characterized by hypertrophy and hyperplasia of the glands in the submucosa of large airways. As a result, the ratio of the bronchial gland thickness to the bronchial wall thickness (the so-called Reid index) increases. Other pathological abnormalities include metaplasia of the ciliary epithelium, smooth muscle hyperplasia and neutrophilic and lymphocytic infiltrations. The changes in large airways are often accompanied by pathological abnormalities in the small bronchioles.
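The Reid index mentioned above is a simple ratio measured on histological sections. A minimal sketch, with hypothetical measurements; the cut-off of about 0.4 is a common textbook convention, not stated in the text above:

```python
# Minimal sketch of the Reid index described above: the ratio of bronchial
# (submucosal) gland thickness to bronchial wall thickness. A ratio above
# roughly 0.4 is conventionally taken to suggest chronic bronchitis (a
# textbook threshold, not from the text above); measurements are hypothetical.

def reid_index(gland_thickness_mm, wall_thickness_mm):
    return gland_thickness_mm / wall_thickness_mm

print(round(reid_index(0.25, 0.50), 2))  # 0.5: consistent with chronic bronchitis
print(round(reid_index(0.12, 0.50), 2))  # 0.24: within the usual normal range
```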

Pathological changes in small bronchioles have been consistently documented as varying degrees of an inflammatory process of the airway walls. After the introduction of the concept of small airways disease, the focus has been on the morphology of separate segments of the bronchioles. Histological evaluation of the membranous bronchioles, subsequently expanded to the respiratory bronchioles, displays wall inflammation, fibrosis, muscle hypertrophy, pigment deposition, epithelial goblet-cell and squamous metaplasia, and intraluminal macrophages. Pathological abnormalities of the type described above have been termed “mineral dust-induced airway disease”. An associated condition demonstrated in this segment of the respiratory tract is peribronchiolar fibrosing alveolitis, which is thought to represent the early reaction of pulmonary tissue to the inhalation of mineral dust.

Pathological changes in emphysema can be categorized as centriacinar emphysema or panacinar emphysema. The former entity is largely limited to the centre of the acinus whereas the latter form involves changes in all structures of the acinus. Although panacinar emphysema is thought to reflect a hereditary protease inhibitor deficiency, both forms may coexist. In emphysema, terminal bronchioles show signs of inflammation and distal airspaces are abnormally enlarged. The structural destruction involves alveoli, capillaries and may lead to the formation of large abnormal airspaces (emphysema bullosum). Centriacinar emphysema tends to be located in the upper lung lobes whereas panacinar emphysema is usually found in the lower lung lobes.

Clinical Symptoms

Chronic cough and phlegm expectoration are the two major symptoms of chronic bronchitis, whereas dyspnoea (shortness of breath) is a clinical feature of emphysema. In advanced cases, the symptoms of chronic expectoration and dyspnoea usually coexist. The onset and progression of dyspnoea suggest the development of chronic airflow limitation. According to the symptoms and the physiological status, the clinical presentation of chronic bronchitis includes three forms of the disease: simple, mucopurulent and obstructive bronchitis.

In chronic bronchitis, chest auscultation may reveal normal breath sounds. In advanced cases there may be a prolonged expiratory time, with wheezes and rales heard during expiration. Cyanosis is common in advanced obstructive bronchitis.

Clinical diagnosis of emphysema is difficult in its early stage, when dyspnoea may be the only finding. The patient with advanced emphysema may have a barrel chest and signs of hyperventilation. As a result of lung hyperinflation, other findings include hyperresonance, decreased diaphragmatic excursion and diminished breath sounds. Cyanosis is rare.

Because of similar causative factors (predominantly the effect of tobacco smoke) and a similar presentation, distinguishing chronic bronchitis from emphysema may be difficult, especially when chronic airflow limitation dominates the picture. Table 4 provides some clues that are helpful for diagnosis. The advanced form of COPD can take two extreme types: predominant bronchitis (“blue bloater”) or predominant emphysema (“pink puffer”).

Table 4. Diagnostic classification of two clinical types of COPD, chronic bronchitis and emphysema

| Signs/symptoms | Predominant bronchitis (“Blue Bloater”) | Predominant emphysema (“Pink Puffer”) |
|---|---|---|
| Body mass | Increased | Decreased |
| Cyanosis | Frequent | Infrequent |
| Cough | Predominant symptom | Intermittent |
| Sputum | Large quantity | Rare |
| Dyspnoea | Usually marked during exercise | Predominant symptom |
| Breath sounds | Normal or slightly decreased; adventitious lung sounds | Decreased |
| Cor pulmonale | Frequent | Infrequent |
| Respiratory infections | Frequent | Infrequent |

 

Chest radiology has a limited diagnostic value in chronic bronchitis and early stages of emphysema. Advanced emphysema shows a radiological pattern of increased radiolucency (hyperinflation). Computerized tomography provides better insight into the location and magnitude of emphysematous changes, including differentiation between centriacinar and panacinar emphysema.

Lung function testing has a well-established position in diagnostic evaluation of COPD (table 5). The battery of tests that are of practical importance in functional assessment of chronic bronchitis and emphysema includes functional residual capacity (FRC), residual volume (RV), total lung capacity (TLC), FEV1 and FEV1/VC, airways resistance (Raw), static compliance (Cst), elastic recoil (PL,el), blood gases (PaO2, PaCO2) and diffusing capacity (DLCO).

Table 5. Lung function testing in differential diagnosis of two clinical types of COPD, chronic bronchitis and emphysema

| Lung function test | Predominant bronchitis (“Blue Bloater”) | Predominant emphysema (“Pink Puffer”) |
|---|---|---|
| RV, FRC, TLC | Normal or slightly increased | Markedly increased |
| FEV1, FEV1/VC | Decreased | Decreased |
| Raw | Markedly increased | Slightly increased |
| Cst | Normal | Markedly increased |
| PL,el | Normal | Markedly increased |
| PaO2 | Markedly decreased | Slightly decreased |
| PaCO2 | Increased | Normal |
| DLCO | Normal or slightly decreased | Decreased |

RV = residual volume; FRC = functional residual capacity; TLC = total lung capacity; FEV1 = forced expiratory volume in the first second and VC = vital capacity; Raw = airways resistance; Cst = static compliance; PL,el = elastic recoil; PaO2 and PaCO2 = blood gases; DLCO = diffusing capacity.

 

Clinical diagnosis of peripheral airways disease is not possible. Very often the disease accompanies chronic bronchitis or emphysema, or even precedes the clinical presentation of these forms of COPD. The isolated form of peripheral airways disease can be investigated by means of lung function testing, although the functional status of the peripheral airways is difficult to assess. This part of the bronchial tree contributes less than 20% of the total airflow resistance, and isolated, mild abnormalities in the small airways are considered to be below the level of detectability of conventional spirometry. More sensitive methods designed to measure the function of the peripheral airways include a number of tests, of which the following are in most frequent use: maximal mid-expiratory flow rate (FEF25-75), flow rates at low lung volumes (MEF50, MEF25), single-breath nitrogen index (SBN2/l), closing capacity (CC), upstream airflow conductance (Gus) and frequency-dependent compliance (Cfd). In general, these tests are thought to have low specificity. On theoretical grounds, FEF25-75 and MEF50,25 should primarily reflect calibre-limiting mechanisms, whereas SBN2/l is thought to be more specific to the mechanical properties of the airspaces. The former indices are used most frequently in occupational epidemiology.
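FEF25-75 can be derived from the volume-time spirogram as the mean flow over the middle half of the FVC, that is, 0.5 x FVC divided by the time taken to expire from 25% to 75% of the FVC. A minimal sketch under the assumption of a sampled volume-time curve; the data points are invented for illustration.

```python
# Minimal sketch: FEF25-75 (maximal mid-expiratory flow rate) computed from a
# sampled volume-time curve of a forced expiration. FEF25-75 is the mean flow
# over the middle half of the FVC: 0.5 * FVC / (t75 - t25).
# The sample curve below is invented for illustration.

def interpolate_time_at_volume(times, volumes, target):
    """Linear interpolation of the time at which expired volume reaches target."""
    for (t0, v0), (t1, v1) in zip(zip(times, volumes), zip(times[1:], volumes[1:])):
        if v0 <= target <= v1:
            return t0 + (t1 - t0) * (target - v0) / (v1 - v0)
    raise ValueError("target volume outside curve")

def fef25_75(times, volumes):
    fvc = volumes[-1]
    t25 = interpolate_time_at_volume(times, volumes, 0.25 * fvc)
    t75 = interpolate_time_at_volume(times, volumes, 0.75 * fvc)
    return 0.5 * fvc / (t75 - t25)  # litres per second

# Invented forced expiration: 4 L FVC expired over 4 s with decelerating flow.
times = [0.0, 0.25, 0.5, 1.0, 2.0, 4.0]
volumes = [0.0, 1.0, 1.8, 2.8, 3.6, 4.0]
print(round(fef25_75(times, volumes), 2))  # 2.0 L/s for this invented curve
```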

Differential diagnosis

Basic differences between chronic bronchitis and emphysema are shown in tables 4 and 5. However, in individual cases the differential diagnosis is difficult and sometimes impossible to conduct with a fair degree of confidence. In some cases it is also difficult to differentiate between COPD and asthma. In practice, asthma and COPD are not clear-cut entities and there is a large degree of overlap between the two diseases. In asthma, the airway obstruction is usually intermittent, while in COPD it is constant. The course of airflow limitation is more variable in asthma than in COPD.

Case Management

The clinical management of COPD involves cessation of smoking, the single most effective measure. Occupational exposure to respiratory irritants should be discontinued or avoided. The clinical management should focus on the proper treatment of respiratory infections and should involve regular influenza vaccinations. Bronchodilator therapy is justified in patients with airflow limitation and should comprise beta2-adrenergic agonists and anticholinergics, given as monotherapy or in combination, preferably as an aerosol. Theophylline is still in use, although its role in the management of COPD is controversial. Long-term corticosteroid therapy may be effective in some cases. Bronchial hypersecretion is often dealt with by mucoactive drugs affecting mucus production, mucus structure or mucociliary clearance. The assessment of the effects of mucolytic therapy is difficult because these drugs are not used as monotherapy for COPD. Patients with hypoxaemia (PaO2 equal to or less than 55 mm Hg) qualify for long-term oxygen therapy, a treatment that is facilitated by access to portable oxygenators. Augmentation therapy with alpha1-antitrypsin can be considered in emphysema with confirmed alpha1-antitrypsin deficiency (phenotype PiZZ). The effect of antioxidant drugs (such as vitamins E and C) on the progress of emphysema is under investigation.

Prevention

Prevention of COPD should begin with anti-smoking campaigns targeting both the general population and occupational groups at risk. In the occupational setting, the control and prevention of exposures to respiratory irritants are essential and always constitute a priority. These activities should aim at effective reduction of air pollution to safe levels, usually defined by so-called permissible exposure levels. Since a number of air pollutants are either not regulated or not adequately regulated, every effort to reduce exposure is justified. In circumstances where such a reduction is impossible to achieve, personal respiratory protection is required to diminish the risk of individual exposure to harmful agents.

Medical prevention of COPD in the occupational setting incorporates two important steps: a respiratory health surveillance programme and an employee education programme.

The respiratory health surveillance programme involves regular evaluation of respiratory health; it starts with initial assessment (history, physical examination, chest x ray and standard lung function testing) and continues to be performed periodically over the period of employment. The programme is meant to assess the baseline respiratory health of workers (and to identify workers with subjective and/or objective respiratory impairment) prior to the commencement of work, and to detect early signs of respiratory impairment during ongoing surveillance of workers. Workers with positive findings should be withdrawn from exposure and referred for further diagnostic evaluation.

The employee education programme should be based on the reliable recognition of respiratory hazards present in the work environment and should be designed by health professionals, industrial hygienists, safety engineers and the management. The programme should provide workers with proper information on respiratory hazards in the workplace, potential respiratory effects of exposures, and pertinent regulations. It should also involve promotion of safe work practices and a healthy lifestyle.

 


This article is devoted to a discussion of pneumoconioses related to a variety of specific non-fibrous substances; exposures to these dusts are not covered elsewhere in this volume. For each material capable of engendering a pneumoconiosis upon exposure, a brief discussion of the mineralogy and commercial importance is followed by information related to the lung health of exposed workers.

Aluminium

Aluminium is a light metal with many commercial uses in both its metallic and combined states. (Abramson et al. 1989; Kilburn and Warshaw 1992; Kongerud et al. 1994.) Aluminium-containing ores, primarily bauxite and cryolite, consist of combinations of the metal with oxygen, fluorine and iron. Silica contamination of the ores is common. Alumina (Al2O3) is extracted from bauxite, and may be processed for use as an abrasive or as a catalyst. Metallic aluminium is obtained from alumina by electrolytic reduction in the presence of fluoride. Electrolysis of the mixture is carried out by using carbon electrodes at a temperature of about 1,000°C in cells known as pots. The metallic aluminium is then drawn off for casting. Dust, fume and gas exposures in pot rooms, including carbon, alumina, fluorides, sulphur dioxide, carbon monoxide and aromatic hydrocarbons, are accentuated during crust breaking and other maintenance operations. Numerous products are manufactured from aluminium plate, flake, granules and castings—resulting in extensive potential for occupational exposures. Metallic aluminium and its alloys find use in the aircraft, boat and automobile industries, in the manufacture of containers and of electrical and mechanical devices, as well as in a variety of construction and structural applications. Small aluminium particles are used in paints, explosives and incendiary devices. To maintain particle separation, mineral oils or stearin are added; increased lung toxicity of aluminium flakes has been associated with the use of mineral oil.

Lung health

Inhalation of aluminium-containing dusts and fumes may occur in workers involved in the mining, extraction, processing, fabrication and end-use of aluminium-containing materials. Pulmonary fibrosis, resulting in symptoms and radiographic findings, has been described in workers with several differing exposures to aluminium-containing substances. Shaver’s disease is a severe pneumoconiosis described among workers involved in the manufacture of alumina abrasives; a number of deaths from the condition have been reported. The upper lobes of the lung are most often affected, and pneumothorax is a frequent complication. High levels of silicon dioxide have been found in the pot room environment as well as in workers’ lungs at autopsy, suggesting silica as a potential contributor to the clinical picture in Shaver’s disease. High concentrations of aluminium oxide particulate have also been observed. Lung pathology may show blebs and bullae, and pleural thickening is seen occasionally. The fibrosis is diffuse, with areas of inflammation in the lungs and associated lymph nodes.

Aluminium powders are used in making explosives, and there have been a number of reports of severe and progressive fibrosis in workers involved in this process. Lung involvement has also occasionally been described in workers employed in the welding or polishing of aluminium, and in the bagging of cat litter containing aluminium silicate (alunite). However, there has been considerable variation in the reporting of lung diseases in relation to aluminium exposures. Epidemiological studies of workers in aluminium reduction have generally shown a low prevalence of pneumoconiotic changes and slight mean reductions in ventilatory lung function. In various work environments, alumina compounds can occur in several forms, and in animal studies these forms appear to have differing lung toxicities. Silica and other mixed dusts, as well as the materials used to coat the aluminium particles, may also contribute to this varying toxicity. One worker, who developed a granulomatous lung disease after exposure to aluminium oxides and metallic aluminium, showed transformation of his blood lymphocytes upon exposure to aluminium salts, suggesting that immunologic factors might play a role.

An asthmatic syndrome has frequently been noted among workers exposed to fumes in aluminium reduction pot rooms. Fluorides found in the pot room environment have been implicated, although the specific agent or agents associated with the asthmatic syndrome have not been determined. As with other occupational asthmas, symptoms are often delayed 4 to 12 hours after exposure and include cough, dyspnoea, chest tightness and wheeze. An immediate reaction may also be noted. Atopy and a family history of asthma do not appear to be risk factors for the development of pot room asthma. After cessation of exposure, symptoms may be expected to disappear in most cases, although two-thirds of the affected workers show persistent non-specific bronchial responsiveness and, in some workers, symptoms and airway hyperresponsiveness continue for years even after exposure is terminated. The prognosis for pot room asthma appears to be best in those who are removed from exposure immediately when the asthmatic symptoms become manifest. Fixed airflow obstruction has also been associated with pot room work.

Carbon electrodes are used in the aluminium reduction process, and known human carcinogens have been identified in the pot room environment. Several mortality studies have revealed lung cancer excesses among exposed workers in this industry.

Diatomaceous Earth

Deposits of diatomaceous earth result from the accretion of the skeletons of microscopic organisms. (Cooper and Jacobson 1977; Checkoway et al. 1993.) Diatomaceous earth may be utilized in foundries and in the manufacture of filters, abrasives, lubricants and explosives. Certain deposits comprise up to 90% free silica. Exposed workers may develop lung changes involving simple or complicated pneumoconiosis. The risk of death from both non-malignant respiratory diseases and lung cancer has been related to the workers’ tenure in dusty work, as well as to cumulative crystalline silica exposures, during the mining and processing of diatomaceous earth.

Elemental Carbon

Aside from coal, the two common forms of elemental carbon are graphite (crystalline carbon) and carbon black. (Hanoa 1983; Petsonk et al. 1988.) Graphite is used in the manufacture of lead pencils, foundry linings, paints, electrodes, dry batteries and crucibles for metallurgical purposes. Finely ground graphite has lubricant properties. Carbon black is a partially decomposed form used in automotive tires, pigments, plastics, inks and other products. Carbon black is manufactured from fossil fuels through a variety of processes involving partial combustion and thermal decomposition.

Inhalation of carbon, as well as associated dusts, may occur during the mining and milling of natural graphite, and during the manufacture of artificial graphite. Artificial graphite is produced by the heating of coal or petroleum coke, and generally contains no free silica.

Lung health

Pneumoconiosis results from worker exposure to both natural and artificial graphite. Clinically, workers with carbon or graphite pneumoconiosis show radiographic findings similar to those for coal workers. Severe symptomatic cases with massive pulmonary fibrosis were reported in the past, particularly related to the manufacture of carbon electrodes for metallurgy, although recent reports emphasize that the materials implicated in exposures leading to this sort of condition are likely to be mixed dusts.

Gilsonite

Gilsonite, also known as uintaite, is a solidified hydrocarbon. (Keimig et al. 1987.) It occurs in veins in the western United States. Current uses include the manufacture of automotive body seam sealers, inks, paints and enamels. It is an ingredient of oil-well drilling fluids and cements; it is an additive in sand moulds in the foundry industry; it is to be found as a component of asphalt, building boards and explosives; and it is employed in the production of nuclear grade graphite. Workers exposed to gilsonite dust have reported symptoms of cough and phlegm production. Five of ninety-nine workers surveyed showed radiographic evidence of pneumoconiosis. No abnormalities in pulmonary function have been defined in relation to gilsonite dust exposures.

Gypsum

Gypsum is hydrated calcium sulphate (CaSO4·2H2O) (Oakes et al. 1982). It is used as a component of plasterboard, plaster of Paris and Portland cement. Deposits are found in several forms and are often associated with other minerals such as quartz. Pneumoconiosis has been observed in gypsum miners, and has been attributed to silica contamination. Ventilatory abnormalities have not been associated with gypsum dust exposures.

Oils and Lubricants

Liquids containing hydrocarbon oils are used as coolants, cutting oils and lubricants (Cullen et al. 1981). Vegetable oils are found in some commercial products and in a variety of foodstuffs. These oils may be aerosolized and inhaled when metals that are coated with oils are milled or machined, or if oil-containing sprays are used for purposes of cleaning or lubrication. Environmental measurements in machine shops and mills have documented airborne oil levels up to 9 mg/m3. One report implicated airborne oil exposure from the burning of animal and vegetable fats in an enclosed building.

Lung health

Workers exposed to these aerosols have occasionally been reported to develop evidence of a lipoid pneumonia, similar to that noted in patients who have aspirated mineral oil nose drops or other oily materials. The condition is associated with symptoms of cough and dyspnoea, inspiratory lung crackles, and impairments in lung function, generally mild in severity. A few cases have been reported with more extensive radiographic changes and severe lung impairments. Exposure to mineral oils has also been associated in several studies with an increased risk of respiratory tract cancers.

Portland Cement

Portland cement is made from hydrated calcium silicates, aluminium oxide, magnesium oxide, iron oxide, calcium sulphate, clay, shale and sand (Abrons et al. 1988; Yan et al. 1993). The mixture is crushed and calcined at high temperatures with the addition of gypsum. Cement finds numerous uses in road and building construction.

Lung health

Silicosis appears to be the greatest risk in cement workers, followed by a mixed dust pneumoconiosis. (In the past, asbestos was added to cement to improve its characteristics.) Abnormal chest radiographic findings, including small rounded and irregular opacities and pleural changes, have been noted. Workers have occasionally been reported to have developed pulmonary alveolar proteinosis after the inhalation of cement dust. Airflow obstructive changes have been noted in some, but not all, surveys of cement workers.

Rare Earth Metals

Rare earth metals or “lanthanides” have atomic numbers between 57 and 71. Lanthanum (atomic number 57), cerium (58), and neodymium (60) are the commonest of the group. The other elements in this group include praseodymium (59), promethium (61), samarium (62), europium (63), gadolinium (64), terbium (65), dysprosium (66), holmium (67), erbium (68), thulium (69), ytterbium (70) and lutetium (71). (Hussain, Dick and Kaplan 1980; Sabbioni, Pietra and Gaglione 1982; Vocaturo, Colombo and Zanoni 1983; Sulotto, Romano and Berra 1986; Waring and Watling 1990; Deng et al. 1991.) The rare earth elements are found naturally in monazite sand, from which they are extracted. They are used in a variety of alloy metals, as abrasives for polishing mirrors and lenses, for high-temperature ceramics, in fireworks and in cigarette lighter flints. In the electronics industry they are used in electrowelding and are to be found in various electronic components, including television phosphors, radiographic screens, lasers, microwave devices, insulators, capacitors and semiconductors.

Carbon arc lamps are used widely in the printing, photoengraving and lithography industries and were used for floodlighting, spotlighting and movie projection before the wide-scale adoption of argon and xenon lamps. The rare earth metal oxides were incorporated into the central core of carbon arc rods, where they stabilize the arc stream. Fumes which are emitted from the lamps are a mixture of gaseous and particulate material composed of approximately 65% rare earth oxides, 10% fluorides and unburnt carbon and impurities.

Lung health

Pneumoconiosis in workers exposed to rare earths has been exhibited primarily as bilateral nodular chest radiographic infiltrates. Lung pathology in cases of rare earth pneumoconiosis has been described as an interstitial fibrosis accompanied by an accumulation of fine granular dust particles, or granulomatous changes.

Variable pulmonary function impairments have been described, from restrictive to mixed restrictive-obstructive. However, the spectrum of pulmonary disease related to inhalation of rare earth elements is still to be defined, and data regarding the pattern and progression of disease and the histological changes are available primarily from a few case reports.

A neoplastic potential of the rare earth isotopes has been suggested by a case report of lung cancer, possibly related to ionizing radiation from the naturally occurring rare earth radioisotopes.

Sedimentary Compounds

Sedimentary rock deposits form through the processes of physical and chemical weathering, erosion, transport, deposition and diagenesis. They may be classified into two broad classes: clastics, which include mechanically deposited erosion debris, and chemical precipitates, which include carbonates, the shells of organic skeletons and saline deposits. Sedimentary carbonates, sulphates and halides provide relatively pure minerals that have crystallized from concentrated solutions. Because of the high solubility of many of the sedimentary compounds, they are rapidly cleared from the lungs and are generally associated with little pulmonary pathology. In contrast, workers exposed to certain sedimentary compounds, primarily clastics, have shown pneumoconiotic changes.

Phosphates

Phosphate ore, Ca5(F,Cl)(PO4)3, is used in the production of fertilizers, dietary supplements, toothpaste, preservatives, detergents, pesticides, rodent poisons and ammunition (Dutton et al. 1993). Extraction and processing of the ore may result in a variety of irritant exposures. Surveys of workers in phosphate mining and extraction have documented increased symptoms of cough and phlegm production, as well as radiographic evidence of pneumoconiosis, but little evidence of abnormal lung function.

Shale

Shale is a mixture of organic material composed mainly of carbon, hydrogen, oxygen, sulphur and nitrogen (Rom, Lee and Craft 1981; Seaton et al. 1981). The organic component (kerogen) is found in the sedimentary rock called marlstone, which is of a grey-brown colour and a layered consistency. Oil shale has been used as an energy source since the 1850s in Scotland. Major deposits exist in the United States, Scotland and Estonia. Dust in the atmosphere of underground oil shale mines is of relatively fine dispersion, with up to 80% of the dust particles under 2μm in size.

Lung health

Pneumoconiosis related to the deposition of shale dust in the lung is termed shalosis. The dust creates a granulomatous and fibrotic reaction in the lungs. This pneumoconiosis is similar clinically to coal workers’ pneumoconiosis and silicosis, and may progress to massive fibrosis even after the worker has left the industry.

Pathologic changes identified in lungs with shalosis are characterized by vascular and bronchial deformation, with irregular thickening of interalveolar and interlobular septa. In addition to interstitial fibrosis, lung specimens with shale pneumoconiosis have shown enlarged hilar shadows, related to the transport of shale dust and subsequent development of well-defined sclerotic changes in the hilar lymph nodes.

Shale workers have been found to have a prevalence of chronic bronchitis two and one-half times that of age-matched controls. The effect of shale dust exposures on lung function has not been studied systematically.

Slate

Slate is a metamorphic rock, made up of various minerals, clays and carbonaceous matter (McDermott et al. 1978). The major constituents of slate include muscovite, chlorite, calcite and quartz, along with graphite, magnetite and rutile. These have undergone metamorphosis to form a dense crystalline rock that possesses strength but is easily cleaved, characteristics which account for its economic importance. Slate is used in roofing, dimension stone, floor tile, flagging, structural shapes such as panels and window sills, blackboards, pencils, billiard tables and laboratory bench tops. Crushed slate is used in highway construction, tennis court surfaces and lightweight roofing granules.

Lung health

Pneumoconiosis has been found in a third of workers studied in the slate industry in North Wales, and in 54% of slate pencil makers in India. Various lung radiographic changes have been identified in slateworkers. Because of the high quartz content of some slates and the adjacent rock strata, slateworkers’ pneumoconiosis may have features of silicosis. The prevalence of respiratory symptoms in slateworkers is high, and the proportion of workers with symptoms increases with pneumoconiosis category, irrespective of smoking status. Diminished values of forced expiratory volume in one second (FEV1) and forced vital capacity (FVC) are associated with increasing pneumoconiosis category.

The lungs of miners exposed to slate dust reveal localized areas of perivascular and peribronchial fibrosis, extending to macule formation and extensive interstitial fibrosis. Typical lesions are fibrotic macules of variable configuration intimately associated with small pulmonary blood vessels.

Talc

Talc is composed of magnesium silicates, and is found in a variety of forms. (Vallyathan and Craighead 1981; Wegman et al. 1982; Stille and Tabershaw 1982; Wergeland, Andersen and Baerheim 1990; Gibbs, Pooley and Griffith 1992.)

Deposits of talc are frequently contaminated with other minerals, including both fibrous and non-fibrous tremolite and quartz. Lung health effects of talc-exposed workers may be related to both the talc itself as well as the other associated minerals.

Talc production occurs primarily in Australia, Austria, China, France and the United States. Talc is used as a component in hundreds of products, and is used in the manufacture of paint, pharmaceuticals, cosmetics, ceramics, automobile tires and paper.

Lung health

Diffuse rounded and irregular parenchymal lung opacities and pleural abnormalities are seen on the chest radiographs of talc workers in association with talc exposure. Depending on the specific exposures experienced, the radiographic shadows may be ascribed to the talc itself or to contaminants in the talc. Talc exposure has been associated with symptoms of cough, dyspnoea and phlegm production, and with evidence of airflow obstruction in pulmonary function studies. Lung pathology has revealed various forms of pulmonary fibrosis: granulomatous changes and ferruginous bodies have been reported, as well as dust-laden macrophages collected around the respiratory bronchioles and intermingled with bundles of collagen. Mineralogical examination of lung tissue from talc workers likewise gives variable findings and may show silica, mica or mixed silicates.

Since talc deposits may be associated with asbestos and other fibres, it is not surprising that an increased risk of bronchogenic carcinoma has been reported in talc miners and millers. Recent investigations of workers exposed to talc without associated asbestos fibres revealed trends for higher mortality from non-malignant respiratory disease (silicosis, silico-tuberculosis, emphysema and pneumonia), but the risk for bronchogenic cancer was not found to be elevated.

Hairspray

Exposure to hairsprays occurs in the home environment as well as in commercial hairdressing establishments (Rom 1992b). Environmental measurements in beauty salons have indicated the potential for respirable aerosol exposures. Several case reports have implicated hairspray exposure in the occurrence of a pneumonitis, thesaurosis, in heavily exposed individuals. Clinical symptoms in the cases were generally mild, and resolved with termination of exposure. Histology usually showed a granulomatous process in the lung and enlarged hilar lymph nodes, with thickening of alveolar walls and numerous granular macrophages in the airspaces. Macromolecules in hairsprays, including shellacs and polyvinylpyrrolidone, have been suggested as potential agents. In contrast to the clinical case reports, increased lung parenchymal radiographic shadows observed in radiological surveys of commercial hairdressers have not been conclusively related to hairspray exposure. Although the results of these studies do not allow definitive conclusions to be drawn, clinically important lung disease from typical hairspray exposures does appear to be an unusual occurrence.


Monday, 28 February 2011 23:27

Hard Metal Disease

Shortly after the end of the First World War, while doing research to find a material able to replace diamond in metal-drawing dies, Karl Schröter patented in Berlin a sintering process (pressing plus heating at 1,500°C) for a mixture of fine tungsten carbide (WC) powder with 10% cobalt, producing “hard metal”. The main characteristics of this sinter are its extreme hardness, only slightly inferior to that of diamond, and the retention of its mechanical properties at high temperatures; these characteristics make it suitable for use in drawing metal, for welded inserts, and for high-speed tools for machining metals, stone, wood and materials with high resistance to wear or to heat, in the mechanical, aeronautical and ballistic fields. The use of hard metal is continually expanding all over the world. In 1927 Krupp extended the use of hard metal to the cutting-tool field, calling it “Widia” (wie Diamant, like diamond), a name still in use today.

Sintering remains the basis of all hard metal production. Techniques have been improved by the introduction of other metallic carbides, titanium carbide (TiC) and tantalum carbide (TaC), and by coating hard metal mobile cutting inserts with one or more layers of titanium nitride, aluminium oxide and other very hard compounds, applied by chemical vapour deposition (CVD) or physical vapour deposition (PVD). The fixed inserts welded to tools cannot be coated, but are repeatedly sharpened with a diamond grinding wheel (figures 1 and 2).

Figure 1. (A) Examples of some hard metal mobile drawing inserts, coated with golden-yellow titanium nitride; (B) insert welded to the tool and at work in steel drawing.

RES170F1

 

Figure 2. Fixed inserts welded to (A) stone drill and (B) saw disk.

RES170F2

The hard metal sinter is formed by particles of metallic carbides incorporated in a cobalt matrix; the cobalt melts during sintering and occupies the interstices between the carbide particles. Cobalt is therefore the binding material of the structure, which assumes metal-ceramic characteristics (figures 3, 4 and 5).

 

Figure 3. Microstructure of a WC/Co sinter; WC particles are incorporated into the light Co matrix (1,500x).

RES170F3

 

Figure 4. Microstructure of a WC + TiC + TaC + Co sinter. Along with prismatic WC particles, globular particles formed by a solid solution of TiC + TaC are observed. The light matrix is formed by Co (1,500x).

RES170F4

 

Figure 5. Sinter microstructure coated with multiple very hard layers (2,000x).

RES170F5

The sintering process uses very fine metallic carbide powders (average diameters from 1 to 9 μm) and cobalt powders (average diameters from 1 to 4 μm) which are mixed, treated with a paraffin solution, die-pressed, de-waxed at low temperature, pre-sintered at 700 to 750°C and sintered at 1,500°C (Brookes 1992).
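For quick reference, the stages and parameters just described can be collected into a single structure. The following minimal Python sketch simply restates the figures quoted above (powder diameters, stage temperatures) as data; it is a summary aid, not a process specification.

    # Stages of hard metal production as described in the text (Brookes 1992).
    # Numeric values are those quoted above; nothing here is an engineering spec.
    SINTERING_PROCESS = [
        ("mix powders", "WC 1-9 um and Co 1-4 um average particle diameter"),
        ("treat",       "paraffin solution"),
        ("die-press",   "form the compact"),
        ("de-wax",      "low temperature"),
        ("pre-sinter",  "700-750 deg C"),
        ("sinter",      "1,500 deg C"),
    ]

    for stage, detail in SINTERING_PROCESS:
        print(f"{stage:12s} {detail}")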

When sintering is carried out with inadequate methods, improper techniques and poor industrial hygiene, the powders can contaminate the atmosphere of the work environment, exposing workers to the risk of inhaling metallic carbide and cobalt powders. Besides the primary process, other activities can expose workers to the inhalation of hard metal aerosols. Sharpening of the fixed inserts welded to tools is normally carried out with a diamond grinding wheel, either dry or, more frequently, cooled with liquids of various kinds, producing dusts or mists of very small droplets containing metallic particles. Hard metal particles are also used to produce a high-resistance layer on steel surfaces subject to wear, applied by methods (the plasma-coating process and others) based on combining a powder spray with an electric arc or a controlled explosion of a gas mixture at high temperature. The electric arc or the explosive gas flow melts the metallic particles, which then impact on the surface being coated.

The first observations of “hard metal disease” were reported in Germany in the 1940s, describing a diffuse, progressive pulmonary fibrosis called Hartmetallungenfibrose. Over the next 20 years, parallel cases were observed and described in all industrial countries; the affected workers were in most cases in charge of sintering. Studies from 1970 to the present indicate that the respiratory pathology is caused by the inhalation of hard metal particles, affects only susceptible subjects, and takes the following forms:

  • acute: rhinitis, asthma
  • subacute: fibrosing alveolitis
  • chronic: diffuse and progressive interstitial fibrosis.

 

It affects not only workers in charge of sintering, but anyone inhaling aerosols containing hard metal and particularly cobalt; indeed, it is mainly and perhaps exclusively caused by cobalt.

The definition of hard metal disease now covers a group of pathologies of the respiratory tract which differ from one another in clinical severity and prognosis but have in common a variable individual reactivity to the aetiological factor, cobalt.

More recent epidemiological and experimental findings agree on the causal role of cobalt in the acute effects on the upper respiratory tract (rhinitis, asthma) and in the subacute and chronic effects on the lung parenchyma (fibrosing alveolitis and chronic interstitial fibrosis).

The pathogenic mechanism is thought to be the induction by Co of a hypersensitivity immunoreaction. Several observations support this: only some subjects develop disease, whether after short exposures to relatively low concentrations or after longer and more intense exposures; Co concentrations in biological samples (blood, urine, skin) are not significantly different in those who develop the pathology and those who do not; there is no dose-response correlation at the tissue level; specific antibodies (immunoglobulins IgE and IgG) against a Co-albumin compound have been identified in asthmatics, and the Co patch test is positive in subjects with alveolitis or fibrosis; the cytological features of the giant-cell alveolitis are compatible with an immunoreaction; and acute or subacute symptoms tend to regress when subjects are removed from exposure to Co (Parkes 1994).

The immunological basis of hypersensitivity to Co has not yet been satisfactorily explained; it is not possible, therefore, to identify a reliable marker of individual susceptibility.

Pathologies identical to those found in subjects exposed to hard metals have also been observed in diamond cutters, who use disks formed of microdiamonds cemented with Co and who therefore inhale only Co and diamond particles.

It has not yet been fully demonstrated that pure Co alone (all other inhaled particles excluded) can produce these pathologies, above all the diffuse interstitial fibrosis: the particles inhaled with Co could have a synergistic as well as a modulating effect. Experimental studies suggest that the biological reactivity to a mixture of Co and tungsten particles is stronger than that caused by Co alone, and significant pathologies have not been observed in workers in charge of producing pure Co powder (Science of the Total Environment 1994).

Clinical symptoms of hard metal disease, which, on the basis of current aetiopathogenic knowledge should be more precisely called “cobalt disease”, are, as mentioned before, acute, subacute and chronic.

Acute effects include a non-specific respiratory irritation (rhinitis, laryngo-tracheitis, pulmonary oedema) caused by exposure to high concentrations of Co powder or fumes; it is observed only in exceptional cases. Asthma is observed more frequently, appearing in 5 to 10% of workers exposed to cobalt concentrations of 0.05 mg/m3, the current US threshold limit value (TLV). Symptoms of chest tightness with dyspnoea and cough tend to appear at the end of the work shift or during the night. The diagnosis of occupational allergic bronchial asthma due to cobalt may be suspected on the basis of the case history, but it is confirmed by a specific bronchial challenge test producing an immediate, delayed or dual bronchospastic response. Lung function tests carried out at the beginning and at the end of the work shift can also support the diagnosis. Asthmatic symptoms due to cobalt tend to disappear when the subject is removed from exposure but, as with other forms of occupational allergic asthma, they can become chronic and irreversible when exposure continues for years despite the presence of respiratory disturbances. Highly bronchoreactive subjects can present asthmatic symptoms of non-allergic aetiology, as a non-specific response to inhalation of cobalt and other irritating powders. In a high percentage of cases of allergic bronchial asthma, serum IgE specific for a cobalt-human serum albumin compound has been found. The chest radiograph is usually unchanged; only in rare cases are mixed forms of asthma plus alveolitis found, with radiological alterations due specifically to the alveolitis. Bronchodilator therapy, together with immediate cessation of work exposure, leads to complete recovery in cases of recent onset that have not yet become chronic.

Subacute and chronic forms include fibrosing alveolitis and chronic diffuse and progressive interstitial fibrosis (DIPF). Clinical experience indicates that the transition from alveolitis to interstitial fibrosis evolves gradually and slowly over time: there are cases of pure initial alveolitis that are reversible with withdrawal from exposure plus corticosteroid therapy; cases with an already established fibrotic component, which can improve but not reach complete recovery on removal from exposure, even with additional therapy; and, finally, cases in which an irreversible DIPF predominates. The occurrence of such cases among exposed workers is low, very much lower than the percentage of allergic asthma cases.

Alveolitis is now readily studied in its cytological components through bronchoalveolar lavage (BAL). It is characterized by a large increase in the total cell count, formed mainly of macrophages, with numerous multinucleated giant cells of the typical foreign-body type, at times containing engulfed cells within their cytoplasm (figure 6). An absolute or relative increase in lymphocytes is also frequent, with a decreased CD4/CD8 ratio, associated with a large increase in eosinophils and mast cells. Rarely, the alveolitis is mainly lymphocytic, with the CD4/CD8 ratio inverted, as occurs in hypersensitivity pneumonitis.

Figure 6. Cytological appearance of BAL in a case of macrophagic mononuclear giant-cell alveolitis caused by hard metal. Among the mononuclear macrophages and lymphocytes, a foreign-body type giant cell is observed (400x).

RES170F6

Subjects with alveolitis report dyspnoea accompanied by fatigue, weight loss and dry cough. Crackles are present over the lower lungs, with functional alteration of a restrictive kind and diffuse rounded or irregular radiological opacities. The cobalt patch test is positive in the majority of cases. In susceptible subjects, alveolitis appears after a relatively short period of workplace exposure, of one or a few years. In its initial phases this form is reversible up to complete recovery with simple removal from exposure, with better results if this is combined with corticosteroid therapy.

The development of diffuse interstitial fibrosis aggravates the clinical picture, with worsening dyspnoea, appearing first after minimal exertion and then even at rest; worsening of the restrictive ventilatory impairment, linked to a reduction of alveolar-capillary diffusion; and the appearance of linear radiographic opacities and honeycombing (figure 7). The histological picture is that of a fibrosing alveolitis of the “mural type”.

Figure 7. Thoracic radiograph of a subject affected by interstitial fibrosis caused by hard metal. Diffuse linear opacities and honeycombing are observed.

RES170F7

The evolution is rapidly progressive; therapies are ineffective and the prognosis guarded. One of the cases diagnosed by the author eventually required a lung transplant.

The occupational diagnosis is based on case history, BAL cytological pattern and cobalt patch test.

Prevention of hard metal disease or, more precisely, of cobalt disease is now mainly technical: workers are protected by eliminating powders, fumes and mists through adequate ventilation of the work areas. In fact, the lack of knowledge about the factors determining individual hypersensitivity to cobalt makes it impossible to identify susceptible people, so maximum effort must be devoted to reducing atmospheric concentrations.

The number of people at risk is underestimated because many sharpening activities are carried out in small industries or by craftspeople. In such workplaces, the US TLV of 0.05 mg/m3 is frequently exceeded. There is also some question as to the adequacy of the TLV for protecting workers against cobalt disease since dose-effect relationships for disease mechanisms involving hypersensitivity are not completely understood.

Routine surveillance must be accurate enough to identify cobalt pathologies at their earliest stages. An annual questionnaire aimed mainly at transient respiratory symptoms should be administered, along with a medical examination that includes pulmonary function testing and other appropriate investigations. Since a good correlation has been demonstrated between cobalt concentrations in the work environment and urinary excretion of the metal, it is appropriate to carry out semi-annual measurements of cobalt in urine (CoU) on samples taken at the end of the work week. When exposure is at the level of the TLV, the biological exposure index (BEI) is estimated to be 30 μg Co/litre of urine.
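Since the TLV and BEI quoted above form an anchor pair (0.05 mg/m3 of air against 30 μg Co/litre of urine), a rough screening calculation can be sketched. The following Python fragment is purely illustrative: it assumes simple proportionality between the two quantities, which is an approximation, not a validated kinetic model.

    TLV_MG_M3 = 0.05   # US threshold limit value for cobalt quoted in the text
    BEI_UG_L = 30.0    # biological exposure index: ug Co/litre urine at the TLV

    def estimated_air_equivalent(cou_ug_l: float) -> float:
        """Rough air-concentration equivalent (mg/m3) of an end-of-week
        urinary cobalt result, assuming simple proportionality through the
        TLV/BEI anchor pair. Illustrative only; individual kinetics vary."""
        return cou_ug_l / BEI_UG_L * TLV_MG_M3

    # Example: a worker with 45 ug Co/litre at the end of the work week
    print(f"{estimated_air_equivalent(45):.3f} mg/m3")  # 0.075, above the TLV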

Pre-exposure medical examinations for the presence of pre-existing respiratory disease and bronchial hypersensitivity can be useful in the counselling and placement of workers. Methacholine challenge tests are a useful indicator of non-specific bronchial hyperreactivity and may be helpful in some settings.

International standardization of environmental and medical surveillance methods for workers exposed to cobalt is highly recommended.


Monday, 28 February 2011 22:46

Asbestos-Related Diseases

Historical Perspective

Asbestos is a term used to describe a group of naturally occurring fibrous minerals which are very widely distributed in rock outcrops and deposits throughout the world. Exploitation of the tensile and heat-resistant properties of asbestos for human use dates from ancient times. For instance, in the third century BC asbestos was used to strengthen clay pots in Finland. In classical times, shrouds woven from asbestos were used to preserve the ashes of the famous dead. Marco Polo returned from his travels in China with descriptions of a magic material which could be manufactured into a flame-resistant cloth.

By the early years of the nineteenth century, deposits were known to exist in several parts of the world, including the Ural Mountains, northern Italy and other Mediterranean areas, South Africa and Canada, but commercial exploitation started only in the latter half of the nineteenth century. By this time, the industrial revolution had not only created the demand (such as that of insulating the steam engine) but had also facilitated production, with mechanization replacing hand cobbing of fibre from the parent rock. The modern industry began in Italy and the United Kingdom after 1860 and was boosted by the development and exploitation of the extensive deposits of chrysotile (white) asbestos in Quebec (Canada) in the 1880s. Exploitation of the also extensive deposits of chrysotile in the Ural Mountains was modest until the 1920s. The long thin fibres of chrysotile were particularly suitable for spinning into cloth and felts, one of the early commercial uses of the mineral.

The exploitation of the crocidolite (blue) asbestos deposits of the Northwest Cape, South Africa, a fibre more water-resistant than chrysotile and better suited to marine use, and of the amosite (brown) asbestos deposits, also found in South Africa, started in the early years of this century. Exploitation of the Finnish deposits of anthophyllite asbestos, the only important commercial source of this fibre, took place between 1918 and 1966, while the crocidolite deposits at Wittenoom, Western Australia, were mined from 1937 to 1966.

Fibre Types

The asbestos minerals fall into two groups: the serpentine group, which includes chrysotile, and the amphiboles, which include crocidolite, tremolite, amosite and anthophyllite (figure 1). Most ore deposits are heterogeneous mineralogically, as are most of the commercial forms of the mineral (Skinner, Ross and Frondel 1988). Chrysotile and the various amphibole asbestos minerals differ in crystalline structure, in chemical and surface characteristics and in the physical characteristics of their fibres, usually described in terms of the length-to-diameter (or aspect) ratio. They also differ in the characteristics which determine commercial use and grade. Pertinent to the current discussion is the evidence that the different fibres differ in their biological potency (as considered below in the sections on the various diseases).

Figure 1. Asbestos fibre types.

RES160F1

Seen on electron microscopy together with energy-dispersive x-ray spectra, which enable identification of individual fibres. Courtesy of A. Dufresne and M. Harrigan, McGill University.

Commercial Production

The growth of commercial production, illustrated in figure 2, was slow in the early years of this century. For instance, Canadian production exceeded 100,000 short tons per annum for the first time in 1911 and 200,000 tons in 1923. Growth between the two World Wars was steady, increased considerably to meet the demands of the Second World War and spectacularly to meet peacetime demands (including those of the cold war), reaching a peak in 1976 of 5,708,000 short tons (Selikoff and Lee 1978). After this, production faltered as the ill-health effects of exposure became a matter of increasing public concern in North America and Europe; it remained at approximately 4,000,000 short tons per annum until 1986 and decreased further in the 1990s. There was also a shift in the uses and sources of fibre in the 1980s: in Europe and North America demand declined as substitutes for many applications were introduced, while on the African, Asian and South American continents demand for asbestos increased to meet the need for a cheap, durable material for use in construction and in water reticulation. By 1981, Russia had become the world’s major producer, with an increase in the commercial exploitation of large deposits in China and Brazil. In 1980, it was estimated that a total of over 100 million tons of asbestos had been mined worldwide, 90% of which was chrysotile, approximately 75% of it from four chrysotile mining areas located in Quebec (Canada), Southern Africa and the central and southern Ural Mountains. Two to three per cent of the world’s total production was crocidolite, from the Northern Cape, South Africa, and from Western Australia, and another 2 to 3% was amosite, from the Eastern Transvaal, South Africa (Skinner, Ross and Frondel 1988).

Figure 2. World production of asbestos in thousands of tons 1900-92

RES160F2

Asbestos-Related Diseases and Conditions

Like silica, asbestos has the capability of evoking scarring reactions in all biological tissue, human and animal. In addition, asbestos evokes malignant reactions, adding a further element to the concern for human health, as well as a challenge to science as to how asbestos exerts its ill effects. The first asbestos-related disease to be recognized, diffuse interstitial pulmonary fibrosis or scarring, later called asbestosis, was the subject of case reports in the United Kingdom in the early 1900s. Later, in the 1930s, case reports of lung cancer in association with asbestosis appeared in the medical literature, though it was only over the next several decades that the scientific evidence establishing asbestos as the carcinogenic factor was gathered. In 1960, the association between asbestos exposure and another, much less common cancer, malignant mesothelioma, which involves the pleura (a membrane that covers the lung and lines the chest wall), was dramatically brought to attention by the report of a cluster of these tumours in 33 individuals, all of whom worked or lived in the asbestos mining area of the Northwest Cape (Wagner 1996). Asbestosis was the target of the dust control levels introduced and implemented with increasing rigour in the 1960s and 1970s, and in many industrialized countries, as the frequency of this disease decreased, asbestos-related pleural disease emerged as the most frequent manifestation of exposure and the condition which most frequently brought exposed subjects to medical attention. Table 1 lists the diseases and conditions currently recognized as asbestos-related. The diseases in bold type are those most frequently encountered and for which a direct causal relationship is well established; for the sake of completeness, certain other conditions for which the relationship is less well established are also listed (see footnote 1 to Table 1 and the sections of the text below that expand upon the various disease headings).

Table 1. Asbestos-related diseases and conditions

Pathology / Organ(s) affected / Disease or condition1

Non-malignant
  Lungs: Asbestosis (diffuse interstitial fibrosis); small airway disease2 (fibrosis limited to the peri-bronchiolar region); chronic airways disease3
  Pleura: Pleural plaques; viscero-parietal reactions, including benign pleural effusion, diffuse pleural fibrosis and rounded atelectasis
  Skin: Asbestos corns4

Malignant
  Lungs: Lung cancer (all cell types); cancer of the larynx
  Pleura: Mesothelioma of the pleura
  Other mesothelium-lined cavities: Mesothelioma of the peritoneum, pericardium and scrotum (in decreasing frequency of occurrence)
  Gastrointestinal tract5: Cancer of the stomach, oesophagus, colon and rectum
  Other5: Cancer of the ovary, gall bladder, bile ducts, pancreas and kidney

1 The diseases or conditions indicated in bold type are those most frequently encountered and the ones for which a causal relationship is well established and/or generally recognized.

2 Fibrosis in the walls of the small airways of the lung (including the membranous and respiratory bronchioles) is thought to represent the early lung parenchymal response to retained asbestos (Wright et al. 1992), which will progress to asbestosis if exposure continues and/or is heavy; if exposure is limited or light, the lung response may remain limited to these areas (Becklake, in Liddell and Miller 1991).

3 Included are bronchitis, chronic obstructive pulmonary disease (COPD) and emphysema. All have been shown to be associated with work in dusty environments. The evidence for causality is reviewed in the section on chronic airways disease and in Becklake (1992).

4 Related to direct handling of asbestos and of historical rather than current interest.

5 Data not consistent from all studies (Doll and Peto 1987); some of the highest risks were reported in a cohort of over 17,000 American and Canadian asbestos insulation workers (Selikoff 1990), followed from January 1, 1967 to December 31, 1986 in whom exposure had been particularly heavy.

Sources: Becklake 1994; Liddell and Miller 1991; Selikoff 1990; Doll and Peto in Antman and Aisner 1987; Wright et al. 1992.

 

Uses

Table 2 lists the major sources, products and uses of the asbestos minerals.

Table 2. Main commercial sources, products and uses of asbestos

Fibre type / Location of major deposits / Commercial products and/or uses

Chrysotile (white)
  Deposits: Russia; Canada (Québec, also British Columbia and Newfoundland); China (Szechwan province); Mediterranean countries (Italy, Greece, Corsica, Cyprus); Southern Africa (South Africa, Zimbabwe, Swaziland); Brazil; smaller deposits in the United States (Vermont, Arizona, California) and in Japan
  Products and uses: Construction materials (tiles, shingles, gutters and cisterns; roofing, sheeting and siding); pressure and other pipes; fireproofing (marine and other); insulation and soundproofing; reinforced plastic products (fan blades, switch gear); friction materials, usually in combination with resins, in brakes, clutches and other applications; textiles (used in belts, clothing, casing, fire barriers, autoclaves, yarns and packing); paper products (used in millboard, insulators, gaskets, roof felt, wall coverings, etc.); fillers in paints, coatings and welding rods

Crocidolite (blue)
  Deposits: South Africa (Northwest Cape, Eastern Transvaal); Western Australia1
  Products and uses: Mainly in combination in cement products (in particular pressure pipes), but also in many of the other products listed above

Amosite (brown)
  Deposits: South Africa (Northern Transvaal)1
  Products and uses: Mainly in cement, thermal insulation and roofing products, particularly in the United States2, but also in combination in many of the products listed under chrysotile

Anthophyllite
  Deposits: Finland1
  Products and uses: Filler in the rubber, plastics and chemical industries

Tremolite
  Deposits: Italy, Korea and some Pacific islands; mined on a small scale in Turkey, China and elsewhere; contaminates the ore-bearing rock in some asbestos, iron, talc and vermiculite mines; also found in agricultural soils in the Balkan Peninsula and in Turkey
  Products and uses: Filler in talc; may or may not be removed in processing the ore, so it may appear in end products

Actinolite
  Deposits: Contaminates amosite and, less often, chrysotile, talc and vermiculite deposits
  Products and uses: Not usually exploited commercially

1 A list such as this is obviously not comprehensive and the readers should consult the sources cited and other chapters in this Encyclopedia for more complete information.

2 No longer in operation.

Sources: Asbestos Institute (1995); Browne (1994); Liddell and Miller (1991); Selikoff and Lee (1978); Skinner et al (1988).

 

Though necessarily incomplete, this table emphasizes that:

  1. Deposits are found in many parts of the world, most of which have been exploited non-commercially or commercially in the past, and some of which are currently commercially exploited.
  2. There are many manufactured products in current or past use which contain asbestos, particularly in the construction and transport industries.
  3. Disintegration of these products or their removal carries with it the risk of the resuspension of fibres and of renewed human exposure.

 

A figure of over 3,000 has been commonly quoted for the number of uses of asbestos, which no doubt led to asbestos being dubbed the “magic mineral” in the 1960s. A 1953 industry list contains as many as 50 uses for raw asbestos, in addition to its use in the manufacture of the products listed in Table 2, each of which has many other industrial applications. In 1972, the consumption of asbestos in an industrialized country like the United States was attributed to the following categories of product: construction (42%); friction materials, felts, packings and gaskets (20%); floor tiles (11%); paper (9%); insulation and textiles (3%); and other uses (15%) (Selikoff and Lee 1978). By contrast, a 1995 industry list of the main product categories shows a major redistribution on a worldwide basis: asbestos cement (84%); friction materials (10%); textiles (3%); seals and gaskets (2%); and other uses (1%) (Asbestos Institute 1995).

Occupational Exposures, Past and Current

Occupational exposure, certainly in industrialized countries, has always been and still is the most likely source of human exposure (see Table 2 and the references cited in its footnotes; other sections of this Encyclopaedia contain further information). There have, however, been major changes in industrial processes and procedures aimed at diminishing the release of dust into the working environment (Browne 1994; Selikoff and Lee 1978). In countries with mining operations, milling usually takes place at the minehead. Most chrysotile mines are open cast, while amphibole mines usually involve underground methods, which generate more dust. Milling involves separating fibre from rock by means of mechanized crushing and screening, dusty processes until the introduction of wet methods and/or enclosure in most mills during the 1950s and 1960s. The handling of waste was also a source of human exposure, as was transporting bagged asbestos, whether loading and unloading trucks and railcars or working on the dockside. These exposures have diminished since the introduction of leak-proof bags and the use of sealed containers.

Workers have also had to handle raw asbestos directly, in packing and lagging, particularly of locomotives, in spraying walls, ceilings and air ducts and, in the marine industry, deckheads and bulkheads. Some of these uses have been phased out voluntarily or have been banned. In the manufacture of asbestos cement products, exposure occurs in receiving and opening bags containing raw asbestos, in preparing the fibre for mixing into the slurry, in machining end-products and in dealing with waste. In the manufacture of vinyl tiles and flooring, asbestos was used as a reinforcing and filler agent to blend with organic resins, but it has now largely been replaced by organic fibre in Europe and North America.

In the manufacture of yarns and textiles, exposure to fibre occurs in receiving, preparing, blending, carding, spinning, weaving and calendering the fibre, processes which were until recently dry and potentially very dusty. Dust exposure has been considerably reduced in modern plants through the use of a colloidal suspension of fibre extruded through a coagulant to form wet strands for the last three of these processes. In the manufacture of asbestos paper products, exposure to asbestos dust is most likely in the reception and preparation of the stock mix and in cutting the final products, which in the 1970s contained from 30 to 90% asbestos. In the manufacture of asbestos friction products (dry mix-moulded, roll-formed, woven or endless wound), exposure is likewise most likely during the initial handling and blending processes as well as in finishing the end product, which in the 1970s contained from 30 to 80% asbestos.

In the construction industry, prior to the regular use of appropriate exhaust ventilation (which came in the 1960s), high-speed power sawing, drilling and sanding of asbestos-containing boards or tiles released fibre-containing dust close to the operator’s breathing zone, particularly when such operations were conducted in closed spaces (for instance, in high-rise buildings under construction). In the period after the Second World War, a major source of human exposure was the use, removal or replacement of asbestos-containing materials in the demolition or refurbishing of buildings or ships. One of the chief reasons for this state of affairs was a lack of awareness both of the composition of these materials (i.e., that they contained asbestos) and of the fact that exposure to asbestos could be harmful to health. Improved worker education, better work practices and personal protection have reduced the risk in the 1990s in some countries. In the transport industry, sources of exposure were the removal and replacement of lagging in locomotive engines and of braking material in trucks and cars in the automobile repair industry. Other sources of past exposure leading, in particular, to pleural disease continue to attract notice even in the 1990s, usually on the basis of case reports, for instance those describing workers using asbestos string in the manufacture of welding rods, forming asbestos rope for grouting furnaces and maintaining underground mine haulage systems.

Other Sources of Exposure

Exposure of individuals engaged in trades which do not directly involve the use or handling of asbestos, but who work in the same area as those who do, is called para-occupational (bystander) exposure. It has been an important source of exposure, not only in the past but also for cases presenting for diagnosis in the 1990s. Workers involved include electricians, welders and carpenters in the construction and shipbuilding or repair industries; maintenance personnel in asbestos factories; fitters, stokers and others in power stations, ships and boiler houses where asbestos lagging or other insulation is in place; and maintenance personnel in post-war high-rise buildings incorporating various asbestos-containing materials.

In the past, domestic exposure occurred primarily from dust-laden workclothes being shaken or laundered at home, the dust so released becoming entrapped in carpets or furnishings and resuspended into the air with the activities of daily living. Not only could levels of airborne fibre reach 10 fibres per millilitre (f/ml), ten times the occupational exposure limit of 1.0 f/ml proposed by a WHO consultation (1989), but the fibres also tended to remain airborne for several days. Since the 1970s, the practice of retaining all work clothes at the worksite for laundering has been widely, though not universally, adopted.

In the past also, residential exposure occurred from contamination of the air by industrial sources. For instance, increased levels of airborne asbestos have been documented in the neighbourhood of mines and asbestos plants, determined by production levels, emission controls and weather. Given the long lag time for asbestos-related pleural disease in particular, such exposures are still likely to be responsible for some cases presenting for diagnosis in the 1990s. In the 1970s and 1980s, with increasing public awareness both of the ill-health consequences of asbestos exposure and of the extensive use of asbestos-containing materials in modern construction (particularly in the friable form used for spray-on applications to walls, ceilings and ventilation ducts), a major cause of concern has been whether, as such buildings age and are subject to daily wear and tear, asbestos fibres may be released into the air in sufficient numbers to become a threat to the health of those working in modern high-rise buildings (see below for risk estimates). Other sources of contamination of the air in urban areas include the release of fibre from vehicle brakes and the rescattering of released fibres by passing vehicles (Bignon, Peto and Saracci 1989).

Non-industrial sources of environmental exposure include naturally occurring fibres in soils, for instance in eastern Europe, and in rock outcrops in the Mediterranean region, including Corsica, Cyprus, Greece and Turkey (Bignon, Peto and Saracci 1989). An additional source of human exposure results from the use of tremolite for whitewash and stucco in Greece and Turkey and, according to more recent reports, in New Caledonia in the South Pacific (Luce et al. 1994). Furthermore, in several rural villages in Turkey a zeolite fibre, erionite, has been used both in stucco and in domestic construction and has been implicated in mesothelioma production (Bignon, Peto and Saracci 1991). Finally, human exposure may occur through drinking water, mainly from natural contamination; given the widespread natural distribution of the fibre in outcrops, most water sources contain some fibre, levels being highest in mining areas (Skinner, Ross and Frondel 1988).

Aetiopathology of Asbestos-Related Disease

Fate of inhaled fibres

Inhaled fibres align themselves with the airstream, and their capability of penetrating into the deeper lung spaces depends on their dimensions: fibres of 5 μm or less in aerodynamic diameter show over 80% penetration, though less than 10 to 20% retention. Larger particles may impact in the nose and in the major airways at bifurcations, where they tend to collect. Particles deposited in the major airways are cleared by the action of ciliated cells and are transported up the mucus escalator. Individual differences in response to what appears to be the same exposure are due, at least in part, to differences between individuals in the penetration and retention of inhaled fibres (Bégin, Cantin and Massé 1989). Small particles deposited beyond the major airways are phagocytosed by alveolar macrophages, scavenger cells which ingest foreign material. Longer fibres, that is, those over 10 μm, often come under attack by more than one macrophage and are more likely to become coated and to form the nucleus of an asbestos body, a characteristic structure recognized since the early 1900s as a marker of exposure (see figure 3). Coating a fibre is considered to be part of the lungs’ defence, rendering it inert and non-immunogenic. Asbestos bodies are more likely to form on amphibole than on chrysotile fibres, and their density in biological material (sputum, bronchoalveolar lavage, lung tissue) is an indirect marker of lung burden. Coated fibres may persist in the lung for long periods and may be recovered from sputum or bronchoalveolar lavage fluid up to 30 years after last exposure. Clearance of non-coated fibres deposited in the lung parenchyma is towards the lung periphery and subpleural regions, and thence to lymph nodes at the root of the lung.

Figure 3. Asbestos body

RES160F3

Magnification x 400, seen on microscopic section of the lung as a slightly curved elongated structure with a finely beaded iron protein coat. The asbestos fibre itself can be identified as the thin line near one end of the asbestos body (arrow). Source: Fraser et al. 1990

Theories to explain how fibres evoke the various pleural reactions associated with asbestos exposure include:

  1. direct penetration into the pleural space and drainage with the pleural fluid to pores in the pleura lining the chest wall
  2. release of mediators into the pleural space from subpleural lymphatic collections
  3. retrograde flow from lymph nodes at the root of the lung to the parietal pleura (Browne 1994)

 

There may also be retrograde flow via the thoracic duct to the abdominal lymph nodes to explain the occurrence of peritoneal mesothelioma.

Cellular effects of inhaled fibres

Animal studies indicate that the initial events which follow asbestos retention in the lung include:

  1. an inflammatory reaction, with accumulation of white blood cells followed by a macrophagic alveolitis with release of fibronectin, growth factor and various neutrophil chemotactic factors and, over time, the release of superoxide ion and
  2. proliferation of alveolar, epithelial, interstitial and endothelial cells (Bignon, Peto and Saracci 1989).

 

These events are reflected in the material recovered by bronchoalveolar lavage in animals and humans (Bégin, Cantin and Massé 1989). Both fibre dimensions and their chemical characteristics appear to determine biological potency for fibrogenesis, and these characteristics, in addition to surface properties, are also thought to be important for carcinogenesis. Long, thin fibres are more active than short ones, although the activity of the latter cannot be discounted, and amphiboles are more active than chrysotile, a property attributed to their greater biopersistence (Bégin, Cantin and Massé 1989). Asbestos fibres may also affect the human immune system and change the circulating population of blood lymphocytes. For instance, human cell mediated immunity to cell antigens (such as is exhibited in a tuberculin skin test) may be impaired (Browne 1994). In addition, since asbestos fibres appear to be capable of inducing chromosome abnormality, the view has been expressed that they can also be considered capable of inducing as well as promoting cancer (Jaurand in Bignon, Peto and Saracci 1989).

Dose versus exposure response relationships

In biological sciences such as pharmacology or toxicology, in which dose-response relationships are used to estimate the probability of desired effects or the risk of undesired effects, a dose is conceptualized as the amount of agent delivered to and remaining in contact with the target organ for sufficient time to evoke a reaction. In occupational medicine, surrogates for dose, such as various measures of exposure, are usually the basis for risk estimates. Exposure-response relationships can usually be demonstrated in workforce-based studies, though the most appropriate exposure measure may differ between diseases. Somewhat disconcerting is the fact that although exposure-response relationships differ between workforces, these differences can be explained only in part by fibre type, particle size and industrial process. Nevertheless, such exposure-response relationships have formed the scientific basis for risk assessment and for setting permissible exposure levels, which were originally focused on controlling asbestosis (Selikoff and Lee 1978). As the prevalence and/or incidence of this condition has decreased, concern has switched to ensuring protection of human health against asbestos-related cancers. Over the last decade, techniques have been developed for the quantitative measurement of lung dust burden, or biological dose, directly in terms of fibres per gram of dry lung tissue. In addition, energy-dispersive x-ray analysis (EDXA) permits precise characterization of each fibre by fibre type (Churg 1991). Though standardization of results between laboratories has not yet been achieved, comparisons of results obtained within a given laboratory are useful, and lung burden measurements have added a new tool for case evaluation. In addition, the application of these techniques in epidemiological studies has:

  1. confirmed the biopersistence of amphibole fibres in the lung compared to chrysotile fibres
  2. identified fibre burden in the lungs of some individuals in whom exposure was forgotten, remote or thought to be unimportant
  3. demonstrated a gradient in lung burden associated with rural and urban residence and with occupational exposure and
  4. confirmed a fibre gradient in the lung dust burden associated with the major asbestos-related diseases (Becklake and Case 1994).

 

Asbestosis

Definition and history

Asbestosis is the name given to the pneumoconiosis consequent on exposure to asbestos dust. The term pneumoconiosis is used here as defined in the article “Pneumoconioses: Definitions” in this Encyclopaedia, as a condition in which there is “accumulation of dust in the lungs and tissue responses to the dust”. In the case of asbestosis, the tissue reaction is collagenous and results in permanent alteration of the alveolar architecture with scarring.

As early as 1898, the Annual Report of Her Majesty’s Chief Inspector of Factories contained reference to a lady factory inspector’s report on the adverse health consequences of asbestos exposure, and the 1899 Report contained details of one such case in a man who had worked for 12 years in one of the recently established textile factories in London, England. Autopsy revealed severe diffuse fibrosis of the lung, and what subsequently came to be known as asbestos bodies were seen on later histological re-examination of the slides. Since fibrosis of the lung is an uncommon condition, the association was thought to be causal, and the case was presented in evidence to a committee on compensation for industrial disease in 1907 (Browne 1994). Despite reports of a similar nature filed by inspectors from the United Kingdom, Europe and Canada over the next decade, the role of exposure to asbestos in the genesis of the condition was not generally recognized until a case report was published in the British Medical Journal in 1927. In this report the term pulmonary asbestosis was first used to describe this particular pneumoconiosis, and comment was made on the prominence of the associated pleural reactions, in contrast, for instance, to silicosis, the main pneumoconiosis recognized at the time (Selikoff and Lee 1978).

In the 1930s, two major workforce-based studies carried out among textile workers, one in the United Kingdom and one in the United States, provided evidence of an exposure-response (and therefore likely causal) relationship between the level and duration of exposure and radiographic changes indicative of asbestosis. These reports formed the basis of the first control regulations in the United Kingdom, promulgated in 1930, and of the first threshold limit value for asbestos, published by the American Conference of Governmental Industrial Hygienists in 1938 (Selikoff and Lee 1978).

Pathology

The fibrotic changes which characterize asbestosis are the consequence of an inflammatory process set up by fibres retained in the lung. The fibrosis of asbestosis is interstitial and diffuse, tends to involve the lower lobes and peripheral zones preferentially and, in the advanced case, is associated with obliteration of the normal lung architecture. Fibrosis of the adjacent pleura is common. Nothing in the histological features of asbestosis distinguishes it from interstitial fibrosis due to other causes, except the presence of asbestos in the lung, either in the form of asbestos bodies, visible on light microscopy, or as uncoated fibres, most of which are too fine to be seen except by means of electron microscopy. Thus, the absence of asbestos bodies on light microscopy does not rule out either exposure or the diagnosis of asbestosis. At the other end of the spectrum of disease severity, the fibrosis may be limited to relatively few zones and affect mainly the peribronchiolar regions (see figure 4), giving rise to what has been called asbestos-related small airways disease. Again, except perhaps for more extensive involvement of the membranous small airways, nothing in the histological changes of this condition distinguishes it from small airways disease due to other causes (such as cigarette smoking or exposure to other mineral dusts) other than the presence of asbestos in the lung. Small airways disease may be the only manifestation of asbestos-related lung fibrosis, or it may coexist with varying degrees of interstitial fibrosis, that is, asbestosis (Wright et al. 1992). Carefully considered criteria have been published for the pathological grading of asbestosis (Craighead et al. 1982). In general, the extent and intensity of the lung fibrosis relate to the measured lung dust burden (Liddell and Miller 1991).

Figure 4.  Asbestos-related small airways disease

RES160F4

Peribronchiolar fibrosis and infiltration by inflammatory cells is seen on a histologic section of a respiratory bronchiole (R) and its distal divisions or alveolar ducts (A). The surrounding lung is mostly normal but with focal thickening of the interstitial tissue (arrow), representing early asbestosis. Source: Fraser et al. 1990


Clinical features

Shortness of breath, the earliest, most consistently reported and most distressing complaint, has led to asbestosis being called a monosymptomatic disease (Selikoff and Lee 1978). Shortness of breath precedes other symptoms, which include a dry, often distressing cough and chest tightness, the latter thought to be associated with the pleural reactions. Late inspiratory rales or crackles which persist after coughing are heard first in the axillae and over the lung bases, before becoming more generalized as the condition advances, and are thought to be due to the explosive opening of airways which close on expiration. Coarse rales and rhonchi, if present, are thought to reflect bronchitis, either in response to working in a dusty environment or due to smoking.

Chest imaging

Traditionally, the chest radiograph has been the most important single diagnostic tool for establishing the presence of asbestosis. This has been facilitated by the use of the ILO (1980) radiological classification, which grades the small irregular opacities characteristic of asbestosis on a continuum from no disease to the most advanced disease, both for severity (described as profusion on a 12-point scale from 0/– to 3/+) and for extent (described as the number of zones affected); a minimal encoding of this profusion ordering is sketched after figure 5. Despite between-reader differences, even among readers who have completed training courses, this classification has proved particularly useful in epidemiological studies and has also been used clinically. However, pathological changes of asbestosis can be present on lung biopsy in up to 20% of subjects with a normal chest radiograph. In addition, small irregular opacities of low profusion (e.g., 1/0 on the ILO scale) are not specific for asbestosis but can be seen in relation to other exposures, for instance cigarette smoking (Browne 1994).

Computed tomography (CT) has revolutionized the imaging of interstitial lung disease, including asbestosis, with high-resolution computed tomography (HRCT) adding increased sensitivity to the detection of interstitial and pleural disease (Fraser et al. 1990). Characteristics of asbestosis which can be identified by HRCT include thickened interlobular (septal) and intralobular core lines, parenchymal bands, curvilinear subpleural lines and subpleural dependent densities, the first two being the most distinctive for asbestosis (Fraser et al. 1990). HRCT can also identify these changes in cases with a pulmonary function deficit in whom the chest radiograph is inconclusive. Based on postmortem HRCT, thickened intralobular lines have been shown to correlate with peribronchiolar fibrosis, and thickened interlobular lines with interstitial fibrosis (Fraser et al. 1990). As yet, no standardized reading method has been developed for the use of HRCT in asbestos-related disease. In addition to its cost, the fact that a CT device is a hospital installation makes it unlikely that it will replace the chest radiograph for surveillance and epidemiological studies; its role will likely remain limited to individual case investigation or to planned studies intended to address specific issues.

Figure 5 illustrates the use of chest imaging in the diagnosis of asbestos-related lung disease; the case shown exhibits asbestosis, asbestos-related pleural disease and lung cancer. Large opacities, a complication of other pneumoconioses, in particular silicosis, are unusual in asbestosis and are usually due to other conditions such as lung cancer (see the case described in figure 5) or rounded atelectasis.

Figure 5. Chest imaging in asbestos-related lung disease.

RES160F5

A posteroanterior chest radiograph (A) shows asbestosis involving both lungs, assessed as ILO category 1/1, associated with bilateral pleural thickening (open arrows) and a vaguely defined opacity (arrowheads) in the left upper lobe. On the HRCT scan (B), this was shown to be a dense mass (M) abutting the pleura; transthoracic needle biopsy revealed an adenocarcinoma of the lung. On the CT scan (C), at high attenuation, pleural plaques can be seen (arrowheads), as well as a thin curvilinear opacity in the parenchyma underlying the plaques, with interstitial abnormality in the lung between the opacity and the pleura. Source: Fraser et al. 1990
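Because thresholds such as “1/0 or more” recur in the epidemiological studies cited later in this article, it may help to see the profusion categories as an ordered sequence. The short Python sketch below assumes the standard ILO (1980) 12-point ordering; the threshold function is introduced purely for illustration.

    # ILO (1980) 12-point profusion scale, from no disease to most advanced;
    # each reading is "category/subcategory considered".
    PROFUSION_SCALE = ["0/-", "0/0", "0/1", "1/0", "1/1", "1/2",
                       "2/1", "2/2", "2/3", "3/2", "3/3", "3/+"]

    def at_least(reading: str, threshold: str = "1/0") -> bool:
        """True if a film's profusion reading meets or exceeds a threshold,
        e.g. the '1/0 or more' criterion used in prevalence studies."""
        return PROFUSION_SCALE.index(reading) >= PROFUSION_SCALE.index(threshold)

    print(at_least("0/1"))  # False: below the conventional cut-off
    print(at_least("1/1"))  # True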

Lung function tests

Established interstitial lung fibrosis due to asbestos exposure, like established lung fibrosis due to other causes, is usually but not invariably associated with a restrictive lung function profile (Becklake 1994). Its features include reduced lung volumes, in particular vital capacity (VC), with preservation of the ratio of forced expiratory volume in one second to forced vital capacity (FEV1/FVC%), reduced lung compliance and impaired gas exchange. Airflow limitation with a reduced FEV1/FVC may, however, also be present as a response to a dusty work environment or to cigarette smoke. In the earlier stages of asbestosis, when the pathological changes are limited to peribronchiolar fibrosis and even before small irregular opacities are evident on the chest radiograph, impairment of tests reflecting small airway dysfunction, such as the maximum mid-expiratory flow rate, may be the only sign of respiratory dysfunction. Responses to the stress of exercise may also be impaired early in the disease, with increased ventilation in relation to the oxygen requirement of the exercise (due to an increased breathing frequency and shallow breathing) and impaired O2 exchange. As the disease progresses, less and less exercise is required to compromise O2 exchange. Given that the asbestos-exposed worker may exhibit features of both a restrictive and an obstructive lung function profile, the wise physician interprets the lung function profile for what it is, a measure of impairment, rather than as an aid to diagnosis. Lung function measurements, in particular vital capacity, provide a useful tool for the follow-up of subjects individually or in epidemiological studies, for instance after exposure has ceased, to monitor the natural history of asbestosis or asbestos-related pleural disease.
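The restrictive/obstructive distinction drawn above can be illustrated with a toy classifier. The cut-off values used here (an FEV1/FVC ratio below about 0.70 for obstruction, an FVC below about 80% of the predicted value for restriction) are common rules of thumb introduced for the example, not figures given in the text; real interpretation relies on reference equations and clinical context.

    def spirometry_pattern(fev1_l: float, fvc_l: float,
                           fvc_pct_predicted: float) -> str:
        """Crude pattern label from spirometry; illustrative thresholds only."""
        obstructed = (fev1_l / fvc_l) < 0.70   # airflow limitation
        restricted = fvc_pct_predicted < 80    # reduced vital capacity
        if obstructed and restricted:
            return "mixed pattern"
        if obstructed:
            return "obstructive pattern"
        if restricted:
            return "restrictive pattern, as in established asbestosis"
        return "within these crude limits"

    # Reduced FVC with a preserved FEV1/FVC ratio, as described above:
    print(spirometry_pattern(fev1_l=2.1, fvc_l=2.6, fvc_pct_predicted=65))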

Other laboratory tests

Bronchoalveolar lavage is increasingly used as a clinical tool in the investigation of asbestos-related lung disease:

  1. to rule out other diagnoses
  2. to assess the activity of the pulmonary reactions under study such as fibrosis or
  3. to identify the agent in the form of asbestos bodies or fibres.

 

It is also used to study disease mechanisms in humans and animals (Bégin, Cantin and Massé 1989). The uptake of Gallium-67 is used as a measure of the activity of the pulmonary process, and serum antinuclear antibodies (ANA) and rheumatoid factors (RF), both of which reflect the immunological status of the individual, have also been investigated as factors influencing disease progression, and/or accounting for between individual differences in response to what appears to be the same level and dose of exposure.

Epidemiology including natural history

The prevalence of radiological asbestosis documented in workforce-based surveys varies considerably and, as might be expected, these differences relate to differences in exposure duration and intensity rather than to differences between workplaces. However, even when these are taken into account by restricting comparison of exposure-response relationships to those studies in which exposure estimates were individualized for each cohort member and based on job history and industrial hygiene measurements, marked fibre- and process-related gradients are evident (Liddell and Miller 1991). For instance, a 5% prevalence of small irregular opacities (1/0 or more on the ILO classification) resulted from a cumulative exposure of approximately 1,000 fibre-years in Quebec chrysotile miners, approximately 400 fibre-years in Corsican chrysotile miners, and under 10 fibre-years in South African and Australian crocidolite miners. By contrast, for textile workers exposed to Quebec chrysotile, a 5% prevalence of small irregular opacities resulted from a cumulative exposure of under 20 fibre-years. Lung dust burden studies are also consistent with a fibre gradient for evoking asbestosis: in 29 men in Pacific shipyard trades with asbestosis associated with mainly amosite exposure, the average lung burden found in autopsy material was 10 million amosite fibres per gram of dry lung tissue, compared to an average chrysotile burden of 30 million fibres per gram of dry lung tissue in 23 Quebec chrysotile miners and millers (Becklake and Case 1994). Fibre size distribution contributes to, but does not fully explain, these differences, suggesting that other plant-specific factors, including other workplace contaminants, may play a role.
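The “fibre-year” metric used in these comparisons is simply the sum, over a working history, of airborne concentration (fibres/ml) multiplied by the years worked at that concentration. The sketch below applies that definition to a hypothetical job history; the job names and numbers are invented for illustration.

    # Cumulative exposure in fibre-years = sum of (f/ml concentration x years).
    # Hypothetical job history, for illustration only.
    job_history = [
        ("mill, before dust controls", 20.0, 10),  # f/ml, years
        ("mill, after wet methods",     2.0, 15),
    ]

    cumulative = sum(conc * years for _, conc, years in job_history)
    print(f"{cumulative:.0f} fibre-years")  # 230: above the textile-work
                                            # thresholds quoted above, below
                                            # the Quebec mining figure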

Asbestosis may remain stable or progress, but probably does not regress. Progression rates increase with age, with cumulative exposure and with the extent of existing disease, and progression is more likely if exposure was to crocidolite. Radiological asbestosis can both progress and appear long after exposure ceases, and deterioration of lung function may also occur after exposure has ceased (Liddell and Miller 1991). An important issue (and one on which the epidemiological evidence is not consistent) is whether continued exposure increases the chance of progression once radiological changes have developed (Browne 1994; Liddell and Miller 1991). In some jurisdictions, for example the United Kingdom, the number of cases of asbestosis presenting for workers' compensation has decreased over the last decades, reflecting the workplace controls put in place in the 1970s (Meredith and McDonald 1994). In other countries, for instance Germany (Gibbs, Valic and Browne 1994), rates of asbestosis continue to rise. In the United States, age-adjusted asbestos-related mortality rates (based on mention of asbestosis on the death certificate as either the cause of death or as playing a contributory role) for age 1+ increased from under 1 per million in 1960 to over 2.5 per million in 1986, and to 3 per million in 1990 (US Department of Health and Human Services 1994).

Diagnosis and case management

Clinical diagnosis depends on:

  1. establishing the presence of disease
  2. establishing whether exposure occurred and
  3. evaluating whether the exposure was likely to have caused the disease.


The chest radiograph remains the key tool for establishing the presence of disease, supplemented by HRCT, where available, in cases of doubt. Other objective features include basal crackles, while lung function level, including response to exercise challenge, is useful in establishing impairment, a step required for compensation evaluation. Since neither the pathology, the radiological changes, nor the symptoms and lung function changes associated with asbestosis differ from those associated with interstitial lung fibrosis due to other causes, establishing exposure is key to diagnosis. In addition, the many uses of asbestos products, whose content is often not known to the user, make the exposure history a much more daunting exercise in interrogation than was previously thought. If the exposure history appears inadequate, identification of the agent in biological specimens (sputum, bronchoalveolar lavage and, when indicated, biopsy) can corroborate exposure; dose in the form of lung burden can be assessed quantitatively at autopsy or in surgically removed lungs. Evidence of disease activity (from a gallium-67 scan or bronchoalveolar lavage) may assist in estimating prognosis, a key issue in this irreversible condition. Even in the absence of consistent epidemiological evidence that progression is slowed once exposure ceases, removal from further exposure may be prudent and is certainly desirable. It is not, however, a decision easy to take or to recommend, particularly for older workers with little opportunity for job retraining. Certainly exposure should not continue in any workplace not in conformity with current permissible exposure levels. Criteria for the diagnosis of asbestosis for epidemiological purposes are less demanding, particularly for cross-sectional workforce-based studies, which include those well enough to be at work. Such studies usually address issues of causality and often use markers that indicate minimal disease, based either on lung function level or on changes in the chest radiograph. By contrast, criteria for diagnosis for medicolegal purposes are considerably more stringent and vary according to the legal and administrative systems under which they operate, differing between states within countries as well as between countries.

Asbestos-Related Pleural Disease

Historical perspective

Early descriptions of asbestosis mention fibrosis of the visceral pleura as part of the disease process (see “Pathology”, page 10.55). In the 1930s there were also reports of circumscribed pleural plaques, often calcified, in the parietal pleura (which lines the chest wall and covers the surface of the diaphragm), and occurring in those with environmental, not occupational, exposure. A 1955 workforce-based study of a German factory reported a 5% prevalence of pleural changes on the chest radiograph, thereby drawing attention to the fact that pleural disease might be the primary if not the only manifestation of exposure. Visceroparietal pleural reactions, including diffuse pleural fibrosis, benign pleural effusion (reported first in the 1960s) and rounded atelectasis (first reported in the 1980s) are now all considered interrelated reactions which are usefully distinguished from pleural plaques on the basis of pathology and probably pathogenesis, as well as clinical features and presentation. In jurisdictions in which the prevalence and/or incidence rates of asbestosis are decreasing, pleural manifestations, increasingly common in surveys, are increasingly the basis of detection of past exposure, and increasingly the reason for an individual seeking medical attention.

Pleural plaques

Pleural plaques are smooth, raised, white irregular lesions covered with mesothelium and found on the parietal pleura or diaphragm (figure 6). They vary in size, are often multiple, and tend to calcify with increasing age (Browne 1994). Only a small proportion of those detected at autopsy are seen on the chest radiograph, though most can be detected by HRCT. In the absence of pulmonary fibrosis, pleural plaques may cause no symptoms and may be detected only in screening surveys using chest radiography. Nevertheless, in workforce surveys they are consistently associated with modest but measurable lung function impairment, mainly in VC and FVC (Ernst and Zejda 1991). In radiological surveys in the United States, rates of 1% are reported in men without known exposure, and of 2.3% in men in urban populations, which include those with occupational exposure. Rates are also higher in communities with asbestos industries or high usage rates, while in some workforces, such as sheet metal workers, insulators, plumbers and railroad workers, rates may exceed 50%. In a 1994 Finnish autopsy survey of 288 men aged 35 to 69 years who died suddenly, pleural plaques were detected in 58%, and their frequency increased with age, with the probability of exposure (based on history), with the concentration of asbestos fibres in lung tissue, and with smoking (Karjalainen et al. 1994). The aetiologic fraction of plaques attributable to a lung dust burden of 0.1 million fibres per gram of lung tissue was estimated at 24% (a value considered to be an underestimate). Lung dust burden studies are also consistent with a fibre gradient in potency for evoking pleural reactions: in 103 men with amosite exposure in Pacific shipyard trades, all with pleural plaques, the average autopsy lung burden was 1.4 million fibres per gram of lung tissue, compared to 15.5 and 75 million fibres per gram of lung tissue for chrysotile and tremolite respectively in 63 Quebec chrysotile miners and millers examined in the same way (Becklake and Case 1994).
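The aetiologic fraction cited above is a standard epidemiological quantity. The sketch below shows the two textbook formulas, the attributable fraction among the exposed, (RR - 1)/RR, and Levin's population attributable fraction; the relative risk and exposure prevalence used here are hypothetical and are not those of the Finnish study, which derived its 24% estimate from its own modelling.

    # Textbook attributable-fraction formulas, shown only to illustrate the
    # quantity estimated at 24% above. Inputs are hypothetical.

    def af_exposed(rr: float) -> float:
        """Attributable fraction among the exposed: (RR - 1) / RR."""
        return (rr - 1.0) / rr

    def paf(p_exposed: float, rr: float) -> float:
        """Levin's population attributable fraction: p(RR-1) / (1 + p(RR-1))."""
        excess = p_exposed * (rr - 1.0)
        return excess / (1.0 + excess)

    # Hypothetical example: 40% of an autopsy series exposed above a burden
    # threshold, with a relative risk of 2.0 for plaques at that burden.
    print(f"AF among the exposed: {af_exposed(2.0):.0%}")
    print(f"population AF:        {paf(0.40, 2.0):.0%}")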

Figure 6.  Asbestos-related pleural disease


A diaphragmatic pleural plaque (A) is seen in an autopsy specimen as a smooth well defined focus of fibrosis on the diaphragm of a construction worker with incidental exposure to asbestos and asbestos bodies in the lung. Visceral pleural fibrosis (B) is seen on an inflated autopsy lung specimen, and radiates from two central foci on the visceral pleura of the lung of a construction worker with asbestos exposure who also exhibited several parietal pleural plaques. Source: Fraser et al. 1990.


Visceroparietal pleural reactions

Though the pathology and pathogenesis of the different forms of visceroparietal reaction to asbestos exposure are almost certainly interrelated, their clinical manifestations and the ways in which they come to attention differ. Acute exudative pleural reactions may occur in the form of effusions in subjects whose lungs do not manifest other asbestos-related disease, or as an exacerbation in the severity and extent of existing pleural reactions. Such pleural effusions are called benign to distinguish them from the effusions associated with malignant mesothelioma. Benign pleural effusions occur typically 10 to 15 years after first exposure (or after limited past exposure) in individuals in their 20s and 30s. They are usually transient but may recur, may involve one or both sides of the chest simultaneously or sequentially, and may be either silent or associated with symptoms including chest tightness and/or pleural pain and dyspnoea. The pleural fluid contains leucocytes, often blood, and is albumin-rich; only rarely does it contain asbestos bodies or fibres, which may, however, be found in biopsy material of the pleura or underlying lung. Most benign pleural effusions clear spontaneously, though in a small proportion of subjects (of the order of 10% in one series) they may evolve into diffuse pleural fibrosis (see figure 6), with or without the development of lung fibrosis. Local pleural reactions may also fold in upon themselves, trapping lung tissue and causing well-defined lesions called rounded atelectasis or pseudotumour, so named because they may have the radiological appearance of lung cancer. In contrast to pleural plaques, which seldom cause symptoms, visceroparietal pleural reactions are usually associated with some shortness of breath as well as lung function impairment, particularly when there is obliteration of the costophrenic angle. In one study, for instance, the average FVC deficit was 0.07 l when the chest wall was involved and 0.50 l when the costophrenic angle was involved (Ernst and Zejda in Liddell and Miller 1991). As already indicated, the distribution and determinants of pleural reactions vary considerably between workforces, with prevalence rates increasing with:

  1. estimated residence time of fibre in the lung (measured as time since first exposure)
  2. exposures primarily to or including amphibole and
  3. possibly intermittence of exposure, given the high rates of contamination in occupations in which the use of asbestos materials is intermittent but exposure is probably heavy.


Lung Cancer

Historical perspective

The 1930s saw the publication of a number of clinical case reports from the United States, the United Kingdom and Germany of lung cancer (a condition much less common then than it is today) in asbestos workers, most of whom also had asbestosis of varying degrees of severity. Further evidence of the association between the two conditions was provided in the 1947 Annual Report of His Majesty’s Chief Inspector of Factories, which noted that lung cancer had been reported in 13.2% of male deaths attributed to asbestosis in the period 1924 to 1946 and in only 1.3% of male deaths attributed to silicosis. The first study to address the causal hypothesis was a cohort mortality study of a large United Kingdom asbestos textile plant (Doll 1955), one of the first such workforce-based studies, and by 1980, after at least eight such studies in as many workforces had confirmed an exposure-response relationship, the association was generally accepted as causal (McDonald and McDonald in Antman and Aisner 1987).

Clinical features and pathology

In the absence of other associated asbestos disease, the clinical features and criteria for the diagnosis of asbestos-associated lung cancer are no different from those for lung cancer not associated with asbestos exposure. Originally, asbestos-associated lung cancers were considered to be scar cancers, similar to lung cancer seen in other forms of diffuse lung fibrosis such as scleroderma. Features which favoured this view were their location in the lower lung lobes (where asbestosis is usually more marked), their sometimes multicentric origin and a preponderance of adenocarcinoma in some series. However, in most reported workforce-based studies, the distribution of cell types was no different from that seen in studies of non-asbestos-exposed populations, supporting the view that asbestos itself may be a human carcinogen, a conclusion reached by the International Agency for Research on Cancer (World Health Organization: International Agency for Research on Cancer 1982). Most but not all asbestos-related lung cancers occur in association with radiologic asbestosis (see below).

Epidemiology

Cohort studies confirm that lung cancer risk increases with exposure, though the fractional rate of increase for each fibre per millilitre per year of exposure varies, being related both to fibre type and to industrial process (Health Effects Institute—Asbestos Research 1991). For instance, for mainly chrysotile exposures in mining, milling and friction product manufacture, the increase ranged from approximately 0.01 to 0.17%, and in textile manufacture from 1.1 to 2.8%, while for exposure to amosite insulation products and some cement product exposures involving mixed fibre, rates as high as 4.3 and 6.7% have been recorded (Nicholson 1991). Cohort studies in asbestos workers also confirm that cancer risk is demonstrable in non-smokers and that the risk is increased (in a manner closer to multiplicative than additive) by cigarette smoking (McDonald and McDonald in Antman and Aisner 1987). The relative risk for lung cancer declines after exposure ceases, although the decline appears slower than that which occurs after quitting smoking. Lung dust burden studies are also consistent with a fibre gradient in lung cancer production: 32 men in Pacific shipyard trades with mainly amosite exposure had an average lung dust burden of 1.1 million amosite fibres per gram of dry lung tissue, compared to 36 Quebec chrysotile miners with an average lung dust burden of 13 million chrysotile fibres per gram of lung tissue (Becklake and Case 1994).
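The fractional increases quoted above correspond to the linear relative-risk model commonly fitted in these cohort studies: RR = 1 + KL x E, where E is cumulative exposure in fibres/ml-years and KL is the fractional increase in risk per fibre/ml-year. A minimal sketch follows; the cumulative exposure and the round smoking relative risk of 10 are assumptions for illustration, while the KL value is the lower bound quoted above for textile manufacture (1.1%).

    # Linear relative-risk model for asbestos and lung cancer: RR = 1 + KL*E.
    # The smoking relative risk of 10 is an assumed round figure.

    def asbestos_rr(cum_exposure_fyr: float, kl: float) -> float:
        return 1.0 + kl * cum_exposure_fyr

    def combined_rr(cum_exposure_fyr: float, kl: float, smoking_rr: float = 1.0) -> float:
        # "Closer to multiplicative than additive": the risks are multiplied.
        return smoking_rr * asbestos_rr(cum_exposure_fyr, kl)

    E = 100.0           # hypothetical cumulative exposure, fibres/ml-years
    kl_textile = 0.011  # 1.1% per fibre/ml-year, quoted above
    print(f"asbestos alone:     RR = {asbestos_rr(E, kl_textile):.1f}")
    print(f"asbestos + smoking: RR = {combined_rr(E, kl_textile, smoking_rr=10.0):.1f}")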

Relationship to asbestosis

In the 1955 autopsy study of causes of death in 102 workers employed in the United Kingdom asbestos textile factory referred to above (Doll 1955), lung cancer was found in 18 individuals, 15 of whom also had asbestosis. All subjects in whom both conditions were found had worked for at least 9 years before 1931, when national regulations for asbestos dust control were introduced. These observations suggested that as exposure levels decreased, the competing risk of death from asbestosis also decreased and workers lived long enough to develop cancer. In most workforce-based studies, older workers with long service have some pathological evidence of asbestosis (or asbestos-related small airways disease) at autopsy, even though this may be minimal and not detectable on the chest radiograph in life (McDonald and McDonald in Antman and Aisner 1987). Several but not all cohort studies are consistent with the view that not all excess lung cancers in populations exposed to asbestos are related to asbestosis. More than one pathogenetic mechanism may in fact be responsible for lung cancers in individuals exposed to asbestos, depending on the site and deposition of the fibres. For instance, long thin fibres, which are deposited preferentially at airway bifurcations, are thought to become concentrated and to act as inducers of the process of carcinogenesis through chromosomal damage. Promoters of this process may include continued exposure to asbestos fibres or to tobacco smoke (Lippmann 1995). Such cancers are more likely to be squamous cell in type. By contrast, in lungs which are the site of fibrosis, carcinogenesis may result from the fibrotic process; such cancers are more likely to be adenocarcinomas.

Implications and attributability

While determinants of excess cancer risk can be derived for exposed populations, attributability in the individual case cannot; nor can this probability be modelled reasonably. Obviously, attributability to asbestos exposure is more likely and more credible in an exposed individual with asbestosis who has never smoked than in an exposed individual without asbestosis who smokes. Lung dust burden measurements may supplement a careful clinical assessment, but each case must be evaluated on its merits (Becklake 1994).

Malignant Mesothelioma

Pathology, diagnosis, ascertainment and clinical features

Malignant mesotheliomas arise from the serous cavities of the body. Approximately two-thirds arise in the pleura and about one-fifth in the peritoneum, while the pericardium and tunica vaginalis are much less frequently affected (McDonald and McDonald in Liddell and Miller 1991). Since mesothelial cells are pluripotential, the histological features of mesothelial tumours may vary; in most series, epithelial, sarcomatous and mixed forms account for approximately 50, 30 and 10% of cases respectively. Diagnosis of this rare tumour is not easy, even in the hands of experienced pathologists, and mesothelioma panel pathologists often confirm only a proportion (in some studies fewer than 50%) of the cases submitted for review. A variety of cytological and immunohistochemical techniques have been developed to assist in differentiating malignant mesothelioma from the main alternative clinical diagnoses, namely secondary cancer and reactive mesothelial hyperplasia; this remains an active research field in which expectations are high but findings inconclusive (Jaurand, Bignon and Brochard 1993). For all these reasons, ascertainment of cases for epidemiological surveys is not straightforward and, even when based on cancer registries, may be incomplete. In addition, confirmation by expert panels using specified pathological criteria is necessary to assure comparability in criteria for registration.

Clinical features

Pain is usually the presenting feature. For pleural tumours, this starts in the chest and/or shoulders, and may be severe. Breathlessness follows, associated with pleural effusion and/or progressive encasement of the lung by tumour, and weight loss. With peritoneal tumours, abdominal pain is usually accompanied by swelling. Imaging features are illustrated in figure 7. The clinical course is usually rapid and median survival times, six months in a 1973 report and eight months in a 1993 report, have changed little over the last two decades, despite the greater public and medical awareness which often leads to earlier diagnosis and despite advances in diagnostic techniques and an increase in the number of treatment options for cancer.

Figure 7. Malignant mesothelioma


Seen on an overpenetrated chest roentgenogram (A) as a large mass in the axillary region. Note the associated reduction in volume of the right hemithorax with marked irregular nodular thickening of the pleura of the whole right lung. The CT scan (B) confirms the extensive pleural thickening involving the parietal and mediastinal pleura (closed arrows) in and around the ribs. Source: Fraser et al. 1990.

Epidemiology

In the 15 years which followed the 1960 report of the mesothelioma case series from the Northwest Cape, South Africa (Wagner 1996), international confirmation of the association came from reports of other case series from Europe (the United Kingdom, France, Germany, Holland), the United States (Illinois, Pennsylvania and New Jersey) and Australia, and from case-control studies in the United Kingdom (4 cities), Europe (Italy, Sweden, Holland), the United States and Canada. Odds ratios in these studies ranged from 2 to 9. In Europe in particular, the association with shipyard occupations was strong. In addition, proportional mortality studies in asbestos-exposed cohorts suggested that risk was associated both with fibre type and with industrial process, with rates attributable to mesothelioma ranging from 0.3% in chrysotile mining to 1% in chrysotile manufacturing, compared with 3.4% in amphibole mining and manufacturing and as high as 8.6% for exposure to mixed fibre in insulation work (McDonald and McDonald in Liddell and Miller 1991). Similar fibre gradients are shown in cohort mortality studies which, given the short survival times associated with these tumours, are a reasonable reflection of incidence. These studies also show longer latent periods when exposure was to chrysotile rather than to amphiboles. Geographical variation in incidence has been documented using Canadian age- and sex-specific rates for 1966 to 1972 to calculate expected rates (McDonald and McDonald in Liddell and Miller 1991); rate ratios (values actually observed over those expected) were 0.8 for the United States (1972), 1.1 for Sweden (1958 to 1967), 1.3 for Finland (1965 to 1969), 1.7 for the United Kingdom (1967 to 1968) and 2.1 for the Netherlands (1969 to 1971). While technical factors, including ascertainment, may obviously contribute to the variation recorded, the results do suggest higher rates in Europe than in North America.
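The rate ratios quoted above are observed cases divided by the number expected if the reference age- and sex-specific rates applied to the study population's person-years, the usual indirect standardization. A minimal sketch illustrates the arithmetic; all rates, person-years and case counts below are hypothetical stand-ins, not the Canadian 1966 to 1972 figures.

    # Indirect standardization: expected cases = sum over strata of
    # (reference rate x person-years); rate ratio = observed / expected.
    # All numbers are hypothetical.

    reference_rates = {            # cases per million person-years
        ("M", "35-54"):  5.0,
        ("M", "55-74"): 20.0,
        ("F", "35-54"):  1.0,
        ("F", "55-74"):  4.0,
    }
    person_years = {               # millions of person-years observed
        ("M", "35-54"): 2.0,
        ("M", "55-74"): 1.0,
        ("F", "35-54"): 2.0,
        ("F", "55-74"): 1.2,
    }
    observed_cases = 55

    expected = sum(reference_rates[s] * person_years[s] for s in reference_rates)
    print(f"expected = {expected:.1f} cases")
    print(f"rate ratio = {observed_cases / expected:.1f}")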

Time trends and gender differences in mesothelioma incidence have been used as a measure of the health impact of asbestos exposure on populations. The best estimates for overall rates in industrialized countries before 1950 are under 1.0 per million for both men and women (McDonald and McDonald in Jaurand and Bignon 1993). Subsequently, rates increased steadily in men, and either not at all or less markedly in women. For instance, overall rates per million in men and women respectively were reported at 11.0 and under 2.0 in the United States in 1982, 14.7 and 7.0 in Denmark for 1975-80, 15.3 and 3.2 in the United Kingdom for 1980-83, and 20.9 and 3.6 in the Netherlands for 1978-87. Higher rates in men and women, but excluding younger subjects, were reported for crocidolite mining countries: 28.9 and 4.7 respectively in Australia (aged 2+) for 1986, and 32.9 and 8.9 respectively in South African Whites (aged 1+) for 1988 (Health Effects Institute—Asbestos Research 1991). The rising rates in men are likely to reflect occupational exposure, and if so, they should level off or decrease within the 20- to 30-year “incubation” period following the introduction of workplace controls and the reduction of exposure levels in most workplaces in most industrialized countries in the 1970s. In countries in which the rates in women are rising, the increase may reflect women's increasing engagement in occupations carrying exposure risk, or increasing environmental or indoor contamination of urban air (McDonald 1985).

Aetiology

Environmental factors are clearly the main determinants of mesothelioma risk, exposure to asbestos being the most important, though the occurrence of family clusters maintains interest in the potential role of genetic factors. All asbestos fibre types have been implicated in mesothelioma production, including, for the first time in a recent report from Finland, anthophyllite (Meurman, Pukkala and Hakama 1994). However, a substantial body of evidence from proportional and cohort mortality studies and from lung burden studies suggests a fibre gradient in mesothelioma production, risk being higher for exposures mainly to amphiboles or to amphibole-chrysotile mixtures than for mainly chrysotile exposures. In addition, there are rate differences between workforces for the same fibre at what appears to be the same exposure level; these remain to be explained, though fibre size distribution is a likely contributing factor.

The role of tremolite has been widely debated, a debate sparked by the evidence of its biopersistence in animal and human lung tissue compared to that of chrysotile. A plausible hypothesis is that the many short fibres which reach and are deposited in peripheral lung airways and alveoli are cleared to subpleural lymphatics, where they collect; their potency in mesothelioma production depends on their biopersistence in contact with pleural surfaces (Lippmann 1995). In human studies, mesothelioma rates are lower in populations exposed at work to chrysotile relatively uncontaminated by tremolite (for instance, in Zimbabwean mines) than in those exposed to chrysotile which is so contaminated (for instance, in Quebec mines), and these findings have been replicated in animal studies (Lippmann 1995). Also, in a multivariate analysis of lung fibre burden in material from a Canada-wide mesothelioma case-control study (McDonald et al. 1989), the results suggested that most if not all mesotheliomas could be explained by tremolite lung fibre burden. Finally, a recent analysis of mortality in the cohort of over 10,000 Quebec chrysotile miners and millers born between 1890 and 1920 and followed to 1988 (McDonald and McDonald 1995) supports this view: among almost 7,300 deaths, the 37 mesothelioma deaths were concentrated in certain mines in the Thetford area, yet the lung burdens of 88 cohort members from the implicated mines differed from those of miners from other mines only in terms of tremolite fibre burden, not chrysotile fibre burden (McDonald et al. 1993).

What has been called the tremolite question is perhaps the most important of the currently debated scientific issues, and it also has public health implications. It is also important to note that in all series and jurisdictions a certain proportion of cases occur without reported asbestos exposure, and that only in some of these cases do lung dust burden studies point to previous environmental or occupational exposure. Other occupational exposures have been implicated in mesothelioma production, for instance in talc, vermiculite and possibly mica mining, but in each of these the ore contained either tremolite or other fibres (Bignon, Peto and Saracci 1989). An open search for other exposures, occupational or non-occupational, to fibres, inorganic and organic, and to other agents which may be associated with mesothelioma production should continue.

Other Asbestos-Related Diseases

Chronic airways disease

Usually included under this rubric are chronic bronchitis and chronic obstructive pulmonary disease (COPD), both of which can be diagnosed clinically, and emphysema, until recently diagnosed only by pathological examination of lungs removed at autopsy or otherwise (Becklake 1992). A major cause is smoking, and over the past decades mortality and morbidity due to chronic airways disease have increased in most industrialized countries. However, with the decline of pneumoconiosis in many workforces, evidence has emerged to implicate occupational exposures in the genesis of chronic airways disease, after taking into account the dominant role of smoking. All forms of chronic airways disease have been shown to be associated with work in a variety of dusty occupations, including those in which an important component of the dust contaminating the workplace was asbestos (Ernst and Zejda in Liddell and Miller 1991). Total pollutant burden, rather than exposure to any particular component of the pollution, in this case asbestos dust, is thought to be implicated, in much the same way as the effect of smoking on chronic airways disease is viewed in terms of total exposure burden (e.g., as pack-years), not exposure to any one of the over 4,000 constituents of tobacco smoke. (See elsewhere in this volume for further discussion of the relationship between occupational exposures and chronic airways disease.)
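For readers unfamiliar with the index, pack-years are simply packs smoked per day multiplied by years of smoking, one pack being 20 cigarettes; a one-function sketch follows.

    # Pack-years: the total-burden index mentioned above.

    def pack_years(cigarettes_per_day: float, years_smoked: float) -> float:
        return (cigarettes_per_day / 20.0) * years_smoked

    # Example: 15 cigarettes a day for 30 years.
    print(f"{pack_years(15, 30):.1f} pack-years")   # 22.5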

Other cancers

In several of the earlier cohort studies of asbestos-exposed workers, mortality attributable to all cancers exceeded that expected on the basis of national or regional vital statistics. While lung cancer accounted for most of the excess, other cancers implicated were gastro-intestinal cancers, laryngeal cancer and cancer of the ovaries, in that order of frequency. For gastro-intestinal cancers (including those affecting the oesophagus, stomach, colon and rectum), the relevant exposure in occupational cohorts is presumed to be via swallowing asbestos-laden sputum raised from the major airways of the lung and, in earlier times (before protective measures were taken at lunch sites), via direct contamination of food in workplaces which had no lunch areas separate from the working areas of plants and factories. Retrograde flow via the thoracic duct from lymph nodes draining the lung might also occur (see “Fate of inhaled fibres”, page 10.54). Because the association was inconsistent across the cohorts studied, and because exposure-response relationships were not always seen, there has been a reluctance to accept the evidence of the association between asbestos exposure and gastro-intestinal cancer as causal (Doll and Peto 1987; Liddell and Miller 1991).

Cancer of the larynx is much less common than gastro-intestinal or lung cancer. As early as the 1970s, there were reports of an association between cancer of the larynx and asbestos exposure. As with lung cancer, a major risk factor and cause of laryngeal cancer is smoking; laryngeal cancer is also strongly associated with alcohol consumption. Given the location of the larynx (an organ exposed to all the inhaled pollutants to which the lungs are exposed), and given that it is lined by the same epithelium that lines the major bronchi, it is certainly biologically plausible that cancer of the larynx could result from asbestos exposure. However, the overall evidence available to date is inconsistent, even from large cohort studies such as those of the Quebec and Balangero (Italy) chrysotile miners, possibly because it is a rare cancer, and there is therefore still reluctance to regard the association as causal (Liddell and Miller 1991) despite its biological plausibility. Cancer of the ovaries has been recorded in excess of expectation in three cohort studies (WHO 1989); misdiagnosis, in particular of peritoneal mesothelioma, may explain most of these cases (Doll and Peto 1987).

Prevention, Surveillance and Assessment

Historical and current approaches

Prevention of any pneumoconiosis, including asbestosis, has traditionally been through:

  1. engineering and work practices to maintain airborne fibre levels as low as possible, or at least in conformity with permissible exposure levels usually set by law or regulation
  2. surveillance, conducted to record trends of markers of disease in exposed populations and monitor the results of control measures
  3. education and product labelling aimed at assisting workers as well as the general public in avoiding non-occupational exposure.


Permissible exposure levels were originally directed at controlling asbestosis and were based on industrial hygiene measurements in millions of particles per cubic foot, gathered using the same methods as were used for the control of silicosis. With the shift in biological focus to fibres, in particular long thin ones, as the cause of asbestosis, methods more appropriate to their identification and measurement in air were developed, and with these methods attention to the more abundant short fibres which contaminate most workplaces diminished. Aspect (length to diameter) ratios for most particles of milled chrysotile asbestos fall within the range 5:1 to 20:1, going up to 50:1, in contrast to most particles of milled amphibole asbestos (including cleavage fragments), whose values fall below 3:1. The introduction of the membrane filter for fibre counting of air samples led to an arbitrary industrial hygiene and medical definition of a fibre as a particle at least 5 μm long, 3 μm or less thick, and with a length-to-width ratio of at least 3:1. This definition, used for many of the studies of exposure-response relationships, forms the scientific basis for setting environmental standards.
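The counting definition is easily expressed as a predicate; the sketch below encodes the three criteria exactly as stated (length at least 5 μm, diameter no more than 3 μm, length-to-width ratio at least 3:1).

    # A particle counts as a fibre under the membrane-filter rule stated
    # above if it is at least 5 um long, at most 3 um thick, and has a
    # length-to-width ratio of at least 3:1.

    def is_countable_fibre(length_um: float, diameter_um: float) -> bool:
        return (length_um >= 5.0
                and diameter_um <= 3.0
                and length_um / diameter_um >= 3.0)

    print(is_countable_fibre(10.0, 0.5))  # True: long, thin fibre
    print(is_countable_fibre(4.0, 0.5))   # False: too short
    print(is_countable_fibre(9.0, 4.0))   # False: too thick, ratio too low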

For instance, it was used at a meeting sponsored by the World Health Organization (1989) to propose occupational exposure limits and has been adopted by agencies such as the US Occupational Safety and Health Administration; it is retained mainly for reasons of comparability. The WHO meeting, chaired by Sir Richard Doll, while recognizing that the occupational exposure limit in any country can only be set by the appropriate national body, recommended that countries with high limits should take urgent steps to lower the occupational exposure of the individual worker to 2 f/ml (eight-hour time-weighted average) and that all countries should move as quickly as possible to 1 f/ml (eight-hour time-weighted average) if they had not already done so. With the decrease in asbestosis rates in some industrialized countries, and concern over asbestos-related cancers in all, attention has now shifted to determining whether the same fibre parameters (that is, at least 5 μm long, 3 μm or less thick, and with a length-to-width ratio of at least 3:1) are also appropriate for controlling carcinogenesis (Browne 1994). A current theory of asbestos carcinogenesis implicates short as well as long fibres (Lippmann 1995). In addition, given the evidence for a fibre gradient in mesothelioma and lung cancer production and, to a lesser extent, in asbestosis production, an argument could be made for permissible exposure levels which take fibre type into account. Some countries have addressed the issue by banning the use (and thus the import) of crocidolite and by setting more stringent exposure levels for amosite, namely 0.1 f/ml (McDonald and McDonald 1987).
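The limits above are eight-hour time-weighted averages (TWAs): each measured concentration is weighted by its sampling duration and the total is divided by the eight-hour shift. A minimal sketch with hypothetical personal-sampling results illustrates the compliance check against the two WHO figures.

    # Eight-hour time-weighted average fibre concentration, compared with the
    # 2 f/ml and 1 f/ml limits discussed above. Sampling results are
    # hypothetical.

    samples = [
        # (fibre concentration in f/ml, duration in hours)
        (1.8, 2.0),   # bagging operation
        (0.4, 5.0),   # general plant air
        (0.0, 1.0),   # break away from exposure
    ]

    twa = sum(conc * hours for conc, hours in samples) / 8.0
    print(f"8-hour TWA = {twa:.2f} f/ml")
    for limit in (2.0, 1.0):
        status = "within" if twa <= limit else "exceeds"
        print(f"  {status} the {limit} f/ml limit")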

Exposure levels in the workplace

Permissible exposure levels embody the hypothesis, based on all available evidence, that human health will be preserved if exposure is maintained within those limits. Revision of permissible exposure levels, when it occurs, is invariably towards greater stringency (as described above). Nevertheless, despite good compliance with workplace controls, cases of disease continue to occur, whether through personal susceptibility (for instance, higher-than-average fibre retention rates) or through failure of workplace controls for certain jobs or processes. Engineering controls, improved workplace practices and the use of substitutes, described elsewhere in this chapter, have been implemented internationally (Gibbs, Valic and Browne 1994) in larger establishments through industry, union and other initiatives. For instance, according to a 1986 worldwide industry review, compliance with the currently recommended standard of 1 f/ml had been achieved at 83% of production sites (mines and mills) covering 13,499 workers in 6 countries; in 96% of 167 cement factories operating in 23 countries; in 71% of 40 textile factories covering over 2,000 workers in 7 countries; and in 97% of 64 factories manufacturing friction materials, covering 10,190 workers in 10 countries (Bouige 1990). However, an appreciable proportion of such workplaces still do not comply with regulations, not all manufacturing countries participated in the survey, and the anticipated health benefits are evident only in some national statistics, not in others (see “Diagnosis and case management”, page 10.57). Control in demolition processes and in small enterprises using asbestos continues to be less than successful, even in many industrialized countries.

Surveillance

The chest radiograph is the main tool for the surveillance of asbestosis; cancer registries and national statistics serve the same purpose for asbestos-related cancers. A commendable initiative in the international surveillance of mining, tunnelling and quarrying, undertaken by the ILO through voluntary reporting from governmental sources, focuses on coal and hard-rock mining but could include asbestos. Unfortunately, follow-through has been poor: the last report, based on data for 1973-77, was published in 1985 (ILO 1985). Several countries issue national mortality and morbidity data, an excellent example being the Work-related Lung Disease Surveillance Report for the United States, referred to above (USDHHS 1994). Such reports provide information with which to interpret trends and evaluate the impact of controls at a national level. Larger industries should (and many do) keep their own surveillance statistics, as do some unions. Surveillance of smaller industries may require specific studies at appropriate intervals. Other sources of information include programmes such as the Surveillance of Work-related Respiratory Diseases (SWORD) in the United Kingdom, which gathers regular reports from a sample of the country's chest and occupational physicians (Meredith and McDonald 1994), and reports from compensation boards (which often, however, provide no information on the workers at risk).

Product labelling, education and the information highway

Mandatory product labelling together with worker education and education of the general public are powerful tools in prevention. While in the past, this took place within the context of worker organizations, worker management committees, and union education programmes, future approaches could exploit electronic highways to make available databases on health and safety in toxicology and medicine.

Exposure in buildings and from water supplies

In 1988, a review of the potential health risks associated with working in buildings constructed with asbestos-containing materials was mandated by the US Congress (Health Effects Institute—Asbestos Research 1991). The results of a large number of indoor sampling studies from Europe, the United States and Canada were used in the risk estimates. The lifetime risk of premature cancer death was estimated at 1 per million for those exposed for 15 years in schools (for estimated exposure levels ranging from 0.0005 to 0.005 f/ml) and 4 per million for those exposed for 20 years working in office buildings (for estimated exposure levels ranging from 0.0002 to 0.002 f/ml). For comparison, the risk for occupational exposure to 0.1 f/ml (i.e., in compliance with the permissible exposure limit proposed by the US Occupational Safety and Health Administration) for 20 years was estimated at 2,000 per million exposed. Measurements in drinking water in urban communities show considerable variation, from undetectable levels to 0.7 million f/l in Connecticut, USA, and from 1.1 million to 1.3 billion f/l in the mining areas of Quebec (Bignon, Peto and Saracci 1989). Some contamination may also occur from the asbestos cement pipes which serve much of the world's urban water reticulation. A working group which reviewed the evidence in 1987 did not discount the potential hazard, but did not regard the health risks associated with asbestos ingestion as “one of the most pressing public health hazards” (USDHHS 1987), a view concordant with the concluding remarks of an IARC (WHO) monograph on non-occupational exposure to mineral fibres (Bignon, Peto and Saracci 1989).

Asbestos and other fibres in the 21st century

The first half of the twentieth century was characterized by what could be described as gross neglect of asbestos-related ill health. Before the Second World War, the reasons for this are not clear: the scientific basis for control existed, but perhaps not the will, and not the worker militancy. During the war there were other national and international priorities, and after the war the pressures of urbanization by a rapidly increasing world population took precedence, while perhaps the fascination of an industrial age with the versatility of the “magic” mineral diverted attention from its dangers. Following the first International Conference on the Biological Effects of Asbestos in 1964 (Selikoff and Churg 1965), asbestos-related disease became a cause célèbre, not only on its own account, but also because it marked a period of labour-management confrontation concerning the rights of the worker to knowledge of workplace hazards, to health protection and to fair compensation for injury or illness. In countries with no-fault workers' compensation, asbestos-related disease on the whole received fair recognition and handling. In countries where product liability and class action suits were more usual, large awards have been made to some affected workers (and their lawyers) while others have been left destitute and without support. While the need for fibres in modern societies is unlikely to diminish, the role of mineral fibres vis-à-vis other fibres may change. There has already been a shift in uses both within and between countries (see “Other sources of exposure”, page 10.53). Though the technology to diminish workplace exposures exists, there remain workplaces in which it has not been applied. Given current knowledge, international communication and product labelling, and given worker education and industry commitment, it should be possible to use this mineral to provide cheap and durable products for construction and water reticulation internationally without risk to user, worker, manufacturer or miner, or to the general public at large.
