
Approaches to Hazard Identification: IARC

The identification of carcinogenic risks to humans has been the objective of the IARC Monographs on the Evaluation of Carcinogenic Risks to Humans since 1971. To date, 69 volumes of monographs have been published or are in press, with evaluations of carcinogenicity of 836 agents or exposure circumstances (see Appendix).

These qualitative evaluations of carcinogenic risk to humans are equivalent to the hazard identification phase in the now generally accepted scheme of risk assessment, which involves identification of hazard, dose-response assessment (including extrapolation outside the limits of observations), exposure assessment and risk characterization.

The aim of the IARC Monographs programme has been to publish critical qualitative evaluations on the carcinogenicity to humans of agents (chemicals, groups of chemicals, complex mixtures, physical or biological factors) or exposure circumstances (occupational exposures, cultural habits) through international cooperation in the form of expert working groups. The working groups prepare monographs on a series of individual agents or exposures and each volume is published and widely distributed. Each monograph consists of a brief description of the physical and chemical properties of the agent; methods for its analysis; a description of how it is produced, how much is produced, and how it is used; data on occurrence and human exposure; summaries of case reports and epidemiological studies of cancer in humans; summaries of experimental carcinogenicity tests; a brief description of other relevant biological data, such as toxicity and genetic effects, that may indicate its possible mechanism of action; and an evaluation of its carcinogenicity. The first part of this general scheme is adjusted appropriately when dealing with agents other than chemicals or chemical mixtures.

The guiding principles for evaluating carcinogens have been drawn up by various ad-hoc groups of experts and are laid down in the Preamble to the Monographs (IARC 1994a).

Tools for Qualitative Carcinogenic Risk (Hazard) Identification

Associations are established by examining the available data from studies of exposed humans, the results of bioassays in experimental animals and studies of exposure, metabolism, toxicity and genetic effects in both humans and animals.

Studies of cancer in humans

Three types of epidemiological studies contribute to an assessment of carcinogenicity: cohort studies, case-control studies and correlation (or ecological) studies. Case reports of cancer may also be reviewed.

Cohort and case-control studies relate individual exposures under study to the occurrence of cancer in individuals and provide an estimate of relative risk (ratio of the incidence in those exposed to the incidence in those not exposed) as the main measure of association.
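For readers who find a worked number helpful, the following minimal Python sketch computes a relative risk from hypothetical cohort counts, using exactly the ratio-of-incidences definition given above; the figures are invented for illustration.

    def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
        # Incidence (cases per person) in each group, then their ratio.
        incidence_exposed = cases_exposed / n_exposed
        incidence_unexposed = cases_unexposed / n_unexposed
        return incidence_exposed / incidence_unexposed

    # Hypothetical cohort: 30 cancers among 10,000 exposed workers versus
    # 10 cancers among 10,000 unexposed referents.
    print(relative_risk(30, 10000, 10, 10000))  # 3.0, a threefold excess among the exposed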

In correlation studies, the unit of investigation is usually whole populations (e.g., particular geographical areas) and cancer frequency is related to a summary measure of the exposure of the population to the agent. Because individual exposure is not documented, a causal relationship is less easy to infer from such studies than from cohort and case-control studies. Case reports generally arise from a suspicion, based on clinical experience, that the concurrence of two events—that is, a particular exposure and occurrence of a cancer—has happened rather more frequently than would be expected by chance. The uncertainties surrounding interpretation of case reports and correlation studies make them inadequate, except in rare cases, to form the sole basis for inferring a causal relationship.

In the interpretation of epidemiological studies, it is necessary to take into account the possible roles of bias and confounding. By bias is meant the operation of factors in study design or execution that lead erroneously to a stronger or weaker association than in fact exists between disease and an agent. By confounding is meant a situation in which the relationship with disease is made to appear stronger or weaker than it truly is as a result of an association between the apparent causal factor and another factor that is associated with either an increase or decrease in the incidence of the disease.

In the assessment of the epidemiological studies, a strong association (i.e., a large relative risk) is more likely to indicate causality than a weak association, although it is recognized that relative risks of small magnitude do not imply lack of causality and may be important if the disease is common. Associations that are replicated in several studies of the same design or using different epidemiological approaches or under different circumstances of exposure are more likely to represent a causal relationship than isolated observations from single studies. An increase in risk of cancer with increasing amounts of exposure is considered to be a strong indication of causality, although the absence of a graded response is not necessarily evidence against a causal relationship. Demonstration of a decline in risk after cessation of or reduction in exposure in individuals or in whole populations also supports a causal interpretation of the findings.

When several epidemiological studies show little or no indication of an association between an exposure and cancer, the judgement may be made that, in the aggregate, they show evidence suggesting lack of carcinogenicity. The possibility that bias, confounding or misclassification of exposure or outcome could explain the observed results must be considered and excluded with reasonable certainty. Evidence suggesting lack of carcinogenicity obtained from several epidemiological studies can apply only to those type(s) of cancer, dose levels and intervals between first exposure and observation of disease that were studied. For some human cancers, the period between first exposure and the development of clinical disease is seldom less than 20 years; latent periods substantially shorter than 30 years cannot provide evidence suggesting lack of carcinogenicity.

The evidence relevant to carcinogenicity from studies in humans is classified into one of the following categories:

Sufficient evidence of carcinogenicity. A causal relationship has been established between exposure to the agent, mixture or exposure circumstance and human cancer. That is, a positive relationship has been observed between the exposure and cancer in studies in which chance, bias and confounding could be ruled out with reasonable confidence.

Limited evidence of carcinogenicity. A positive association has been observed between exposure to the agent, mixture or exposure circumstance and cancer for which a causal interpretation is considered to be credible, but chance, bias or confounding cannot be ruled out with reasonable confidence.

Inadequate evidence of carcinogenicity. The available studies are of insufficient quality, consistency or statistical power to permit a conclusion regarding the presence or absence of a causal association, or no data on cancer in humans are available.

Evidence suggesting lack of carcinogenicity. There are several adequate studies covering the full range of levels of exposure that human beings are known to encounter, which are mutually consistent in not showing a positive association between exposure to the agent and the studied cancer at any observed level of exposure. A conclusion of “evidence suggesting lack of carcinogenicity” is inevitably limited to the cancer sites, conditions and levels of exposure and length of observation covered by the available studies.

The applicability of an evaluation of the carcinogenicity of a mixture, process, occupation or industry on the basis of evidence from epidemiological studies depends on time and place. The specific exposure, process or activity considered most likely to be responsible for any excess risk should be sought and the evaluation focused as narrowly as possible. The long latent period of human cancer complicates the interpretation of epidemiological studies. A further complication is the fact that humans are exposed simultaneously to a variety of chemicals, which can interact either to increase or decrease the risk for neoplasia.

Studies on carcinogenicity in experimental animals

Studies in which experimental animals (usually mice and rats) are exposed to potential carcinogens and examined for evidence of cancer were introduced about 50 years ago with the aim of introducing a scientific approach to the study of chemical carcinogenesis and to avoid some of the disadvantages of using only epidemiological data in humans. In the IARC Monographs all available, published studies of carcinogenicity in animals are summarized, and the degree of evidence of carcinogenicity is then classified into one of the following categories:

Sufficient evidence of carcinogenicity. A causal relationship has been established between the agent or mixture and an increased incidence of malignant neoplasms or of an appropriate combination of benign and malignant neoplasms in two or more species of animals or in two or more independent studies in one species carried out at different times or in different laboratories or under different protocols. Exceptionally, a single study in one species might be considered to provide sufficient evidence of carcinogenicity when malignant neoplasms occur to an unusual degree with regard to incidence, site, type of tumour or age at onset.

Limited evidence of carcinogenicity. The data suggest a carcinogenic effect but are limited for making a definitive evaluation because, for example, (a) the evidence of carcinogenicity is restricted to a single experiment; or (b) there are some unresolved questions regarding the adequacy of the design, conduct or interpretation of the study; or (c) the agent or mixture increases the incidence only of benign neoplasms or lesions of uncertain neoplastic potential, or of certain neoplasms which may occur spontaneously in high incidences in certain strains.

Inadequate evidence of carcinogenicity. The studies cannot be interpreted as showing either the presence or absence of a carcinogenic effect because of major qualitative or quantitative limitations, or no data on cancer in experimental animals are available.

Evidence suggesting lack of carcinogenicity. Adequate studies involving at least two species are available which show that, within the limits of the tests used, the agent or mixture is not carcinogenic. A conclusion of evidence suggesting lack of carcinogenicity is inevitably limited to the species, tumour sites and levels of exposure studied.

Other data relevant to an evaluation of carcinogenicity

Data on biological effects in humans that are of particular relevance include toxicological, kinetic and metabolic considerations and evidence of DNA binding, persistence of DNA lesions or genetic damage in exposed humans. Toxicological information, such as that on cytotoxicity and regeneration, receptor binding and hormonal and immunological effects, and data on kinetics and metabolism in experimental animals are summarized when considered relevant to the possible mechanism of the carcinogenic action of the agent. The results of tests for genetic and related effects are summarized for whole mammals including man, cultured mammalian cells and nonmammalian systems. Structure-activity relationships are mentioned when relevant.

For the agent, mixture or exposure circumstance being evaluated, the available data on end-points or other phenomena relevant to mechanisms of carcinogenesis from studies in humans, experimental animals and tissue and cell test systems are summarized within one or more of the following descriptive dimensions:

  •  evidence of genotoxicity (i.e., structural changes at the level of the gene): for example, structure-activity considerations, adduct formation, mutagenicity (effect on specific genes), chromosomal mutation or aneuploidy
  •  evidence of effects on the expression of relevant genes (i.e., functional changes at the intracellular level): for example, alterations to the structure or quantity of the product of a proto-oncogene or tumour suppressor gene, alterations to metabolic activation, inactivation or DNA repair
  •  evidence of relevant effects on cell behaviour (i.e., morphological or behavioural changes at the cellular or tissue level): for example, induction of mitogenesis, compensatory cell proliferation, preneoplasia and hyperplasia, survival of premalignant or malignant cells (immortalization, immunosuppression), effects on metastatic potential
  •  evidence from dose and time relationships of carcinogenic effects and interactions between agents: for example, early versus late stage, as inferred from epidemiological studies; initiation, promotion, progression or malignant conversion, as defined in animal carcinogenicity experiments; toxicokinetics.

 

These dimensions are not mutually exclusive, and an agent may fall within more than one. Thus, for example, the action of an agent on the expression of relevant genes could be summarized under both the first and second dimension, even if it were known with reasonable certainty that those effects resulted from genotoxicity.

Overall evaluations

Finally, the body of evidence is considered as a whole, in order to reach an overall evaluation of the carcinogenicity to humans of an agent, mixture or circumstance of exposure. An evaluation may also be made for a group of chemicals when supporting data indicate that other, related compounds for which there is no direct evidence of a capacity to induce cancer in humans or in animals may also be carcinogenic; in such cases, a statement describing the rationale for this conclusion is added to the evaluation narrative.

The agent, mixture or exposure circumstance is described according to the wording of one of the following categories, and the designated group is given. The categorization of an agent, mixture or exposure circumstance is a matter of scientific judgement, reflecting the strength of the evidence derived from studies in humans and in experimental animals and from other relevant data.

Group 1

The agent (mixture) is carcinogenic to humans. The exposure circumstance entails exposures that are carcinogenic to humans.

This category is used when there is sufficient evidence of carcinogenicity in humans. Exceptionally, an agent (mixture) may be placed in this category when evidence in humans is less than sufficient but there is sufficient evidence of carcinogenicity in experimental animals and strong evidence in exposed humans that the agent (mixture) acts through a relevant mechanism of carcinogenicity.

Group 2

This category includes agents, mixtures and exposure circumstances for which, at one extreme, the degree of evidence of carcinogenicity in humans is almost sufficient, as well as those for which, at the other extreme, there are no human data but for which there is evidence of carcinogenicity in experimental animals. Agents, mixtures and exposure circumstances are assigned to either group 2A (probably carcinogenic to humans) or group 2B (possibly carcinogenic to humans) on the basis of epidemiological and experimental evidence of carcinogenicity and other relevant data.

Group 2A. The agent (mixture) is probably carcinogenic to humans. The exposure circumstance entails exposures that are probably carcinogenic to humans. This category is used when there is limited evidence of carcinogenicity in humans and sufficient evidence of carcinogenicity in experimental animals. In some cases, an agent (mixture) may be classified in this category when there is inadequate evidence of carcinogenicity in humans and sufficient evidence of carcinogenicity in experimental animals and strong evidence that the carcinogenesis is mediated by a mechanism that also operates in humans. Exceptionally, an agent, mixture or exposure circumstance may be classified in this category solely on the basis of limited evidence of carcinogenicity in humans.

Group 2B. The agent (mixture) is possibly carcinogenic to humans. The exposure circumstance entails exposures that are possibly carcinogenic to humans. This category is used for agents, mixtures and exposure circumstances for which there is limited evidence of carcinogenicity in humans and less than sufficient evidence of carcinogenicity in experimental animals. It may also be used when there is inadequate evidence of carcinogenicity in humans but there is sufficient evidence of carcinogenicity in experimental animals. In some instances, an agent, mixture or exposure circumstance for which there is inadequate evidence of carcinogenicity in humans but limited evidence of carcinogenicity in experimental animals together with supporting evidence from other relevant data may be placed in this group.

Group 3

The agent (mixture or exposure circumstance) is not classifiable as to its carcinogenicity to humans. This category is used most commonly for agents, mixtures and exposure circumstances for which the evidence of carcinogenicity is inadequate in humans and inadequate or limited in experimental animals.

Exceptionally, agents (mixtures) for which the evidence of carcinogenicity is inadequate in humans but sufficient in experimental animals may be placed in this category when there is strong evidence that the mechanism of carcinogenicity in experimental animals does not operate in humans.

Group 4

The agent (mixture) is probably not carcinogenic to humans. This category is used for agents or mixtures for which there is evidence suggesting lack of carcinogenicity in humans and in experimental animals. In some instances, agents or mixtures for which there is inadequate evidence of carcinogenicity in humans but evidence suggesting lack of carcinogenicity in experimental animals, consistently and strongly supported by a broad range of other relevant data, may be classified in this group.
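The group definitions above amount to rules for combining the strength of the human evidence, the animal evidence and any mechanistic considerations. The Python sketch below is only an illustrative simplification of those rules (the exceptional upgradings and downgradings described above depend on expert judgement and are captured here only crudely); the function name and the category labels are ours, not IARC's.

    def iarc_group(human, animal, mechanism=None):
        # human, animal: "sufficient", "limited", "inadequate" or "lack"
        # mechanism: None, "strong_in_exposed_humans", "operates_in_humans"
        #            or "not_operative_in_humans"
        if human == "sufficient":
            return "1"
        if animal == "sufficient" and mechanism == "strong_in_exposed_humans":
            return "1"   # exceptional upgrading on mechanistic grounds
        if human == "limited" and animal == "sufficient":
            return "2A"
        if human == "inadequate" and animal == "sufficient":
            if mechanism == "operates_in_humans":
                return "2A"
            if mechanism == "not_operative_in_humans":
                return "3"   # exceptional downgrading
            return "2B"
        if human == "limited":
            return "2B"      # limited human evidence, less than sufficient animal evidence
        if human == "lack" and animal == "lack":
            return "4"
        return "3"           # otherwise not classifiable

    print(iarc_group("limited", "sufficient"))                              # 2A
    print(iarc_group("limited", "sufficient", "strong_in_exposed_humans"))  # 1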

Classification systems made by humans are not sufficiently perfect to encompass all the complex entities of biology. They are, however, useful as guiding principles and may be modified as new knowledge of carcinogenesis becomes more firmly established. In the categorization of an agent, mixture or exposure circumstance, it is essential to rely on scientific judgements formulated by the group of experts.

Results to Date

To date, 69 volumes of IARC Monographs have been published or are in press, in which evaluations of carcinogenicity to humans have been made for 836 agents or exposure circumstances. Seventy-four agents or exposures have been evaluated as carcinogenic to humans (Group 1), 56 as probably carcinogenic to humans (Group 2A), 225 as possibly carcinogenic to humans (Group 2B) and one as probably not carcinogenic to humans (Group 4). For 480 agents or exposures, the available epidemiological and experimental data did not allow an evaluation of their carcinogenicity to humans (Group 3).

Importance of Mechanistic Data

The revised Preamble, which first appeared in volume 54 of the IARC Monographs, allows for the possibility that an agent for which epidemiological evidence of cancer is less than sufficient can be placed in Group 1 when there is sufficient evidence of carcinogenicity in experimental animals and strong evidence in exposed humans that the agent acts through a relevant mechanism of carcinogenicity. Conversely, an agent for which there is inadequate evidence of carcinogenicity in humans together with sufficient evidence in experimental animals and strong evidence that the mechanism of carcinogenesis does not operate in humans may be placed in Group 3 instead of the normally assigned Group 2B—possibly carcinogenic to humans—category.

The use of such data on mechanisms has been discussed on three recent occasions:

While it is generally accepted that solar radiation is carcinogenic to humans (Group 1), epidemiological studies on cancer in humans for UVA and UVB radiation from sun lamps provide only limited evidence of carcinogenicity. Specific tandem base substitutions (CC→TT) have been observed in the p53 tumour suppressor gene in squamous-cell tumours at sun-exposed sites in humans. Although UVR can induce similar transitions in some experimental systems, and UVB, UVA and UVC are carcinogenic in experimental animals, the available mechanistic data were not considered strong enough to allow the working group to classify UVB, UVA and UVC higher than Group 2A (IARC 1992). In a study published after the meeting (Kress et al. 1992), CC→TT transitions in p53 were demonstrated in UVB-induced skin tumours in mice, which might suggest that UVB should also be classified as carcinogenic to humans (Group 1).

The second case in which the possibility of placing an agent in Group 1 in the absence of sufficient epidemiological evidence was considered was 4,4´-methylene-bis(2-chloroaniline) (MOCA). MOCA is carcinogenic in dogs and rodents and is comprehensively genotoxic. It binds to DNA through reaction with N-hydroxy MOCA and the same adducts that are formed in target tissues for carcinogenicity in animals have been found in urothelial cells from a small number of exposed humans. After lengthy discussions on the possibility of an upgrading, the working group finally made an overall evaluation of Group 2A, probably carcinogenic to humans (IARC 1993).

During a recent evaluation of ethylene oxide (IARC 1994b), the available epidemiological studies provided limited evidence of carcinogenicity in humans, and studies in experimental animals provided sufficient evidence of carcinogenicity. Taking into account the other relevant data that (1) ethylene oxide induces a sensitive, persistent, dose-related increase in the frequency of chromosomal aberrations and sister chromatid exchanges in peripheral lymphocytes and micronuclei in bone-marrow cells from exposed workers; (2) it has been associated with malignancies of the lymphatic and haematopoietic system in both humans and experimental animals; (3) it induces a dose-related increase in the frequency of haemoglobin adducts in exposed humans and dose-related increases in the numbers of adducts in both DNA and haemoglobin in exposed rodents; (4) it induces gene mutations and heritable translocations in germ cells of exposed rodents; and (5) it is a powerful mutagen and clastogen at all phylogenetic levels; ethylene oxide was classified as carcinogenic to humans (Group 1).

The Preamble also allows for the possibility that an agent for which there is sufficient evidence of carcinogenicity in animals may be placed in Group 3 (instead of Group 2B, in which it would normally be categorized) when there is strong evidence that the mechanism of carcinogenicity in animals does not operate in humans; this possibility has not yet been used by any working group. Such a possibility could have been envisaged in the case of d-limonene had there been sufficient evidence of its carcinogenicity in animals, since there are data suggesting that α2u-globulin accumulation in the male rat kidney is linked to the renal tumours observed.

Among the many chemicals nominated as priorities by an ad-hoc working group in December 1993, some appeared to share common postulated intrinsic mechanisms of action, and certain classes of agents were identified on the basis of their biological properties. The working group recommended that, before evaluations are made within the Monographs programme on such agents as peroxisome proliferators, fibres, dusts and thyrostatic agents, special ad-hoc groups be convened to discuss the latest state of the art on their particular mechanisms of action.

 


As in many other countries, risk due to exposure to chemicals is regulated in Japan according to the category of chemicals concerned, as listed in table 1. The governmental ministry or agency in charge varies. In the case of industrial chemicals in general, the major law that applies is the Law Concerning Examination and Regulation of Manufacture, Etc. of Chemical Substances, or Chemical Substances Control Law (CSCL) for short. The agencies in charge are the Ministry of International Trade and Industry and the Ministry of Health and Welfare. In addition, the Labour Safety and Hygiene Law (administered by the Ministry of Labour) provides that industrial chemicals should be examined for possible mutagenicity and, if the chemical in question is found to be mutagenic, workers' exposure to that chemical should be minimized by enclosure of production facilities, installation of local exhaust systems, use of protective equipment, and so on.

Table 1. Regulation of chemical substances by laws, Japan

Category: law (ministry in charge)

Food and food additives: Foodstuff Hygiene Law (MHW)
Pharmaceuticals: Pharmaceuticals Law (MHW)
Narcotics: Narcotics Control Law (MHW)
Agricultural chemicals: Agricultural Chemicals Control Law (MAFF)
Industrial chemicals: Chemical Substances Control Law (MHW and MITI)
All chemicals except for radioactive substances: Law concerning Regulation of Household Products Containing Hazardous Substances (MHW); Poisonous and Deleterious Substances Control Law (MHW); Labour Safety and Hygiene Law (MOL)
Radioactive substances: Law concerning Radioactive Substances (STA)

Abbreviations: MHW—Ministry of Health and Welfare; MAFF—Ministry of Agriculture, Forestry and Fishery; MITI—Ministry of International Trade and Industry; MOL—Ministry of Labour; STA—Science and Technology Agency.

Because hazardous industrial chemicals will be identified primarily by the CSCL, the framework of tests for hazard identification under CSCL will be described in this section.

The Concept of the Chemical Substances Control Law

The original CSCL was passed by the Diet (the parliament of Japan) in 1973 and took effect on 16 April 1974. The basic motivation for the Law was the prevention of environmental pollution and resulting human health effects by PCBs and PCB-like substances. PCBs are characterized by (1) persistency in the environment (poorly biodegradable), (2) increasing concentration as one goes up the food chain (or food web) (bioaccumulation) and (3) chronic toxicity in humans. Accordingly, the Law mandated that each industrial chemical be examined for such characteristics prior to marketing in Japan. In parallel with the passage of the Law, the Diet decided that the Environment Agency should monitor the general environment for possible chemical pollution. The Law was then amended by the Diet in 1986 (the amendment taking effect in 1987) in order to harmonize with actions of the OECD regarding health and the environment, the lowering of non-tariff barriers in international trade and especially the setting of a minimum premarketing set of data (MPD) and related test guidelines. The amendment was also a reflection of observation at the time, through monitoring of the environment, that chemicals such as trichloroethylene and tetrachloroethylene, which are not highly bioaccumulating although poorly biodegradable and chronically toxic, can pollute the environment; these chemical substances were detected in groundwater nationwide.

The Law classifies industrial chemicals into two categories: existing chemicals and new chemicals. The existing chemicals are those listed in the “Existing Chemicals Inventory” (established with the passage of the original Law) and number about 20,000, the number depending on the way some chemicals are named in the inventory. Chemicals not in the inventory are called new chemicals. The government is responsible for hazard identification of the existing chemicals, whereas the company or other entity that wishes to introduce a new chemical into the market in Japan is responsible for hazard identification of the new chemical. Two governmental ministries, the Ministry of Health and Welfare (MHW) and the Ministry of International Trade and Industry (MITI), are in charge of the Law, and the Environment Agency can express its opinion when necessary. Radioactive substances, specified poisons, stimulants and narcotics are excluded because they are regulated by other laws.

Test System Under CSCL

The flow scheme of examination is depicted in figure 1; it is, in principle, a stepwise system. All chemicals (for exceptions, see below) should first be examined for biodegradability in vitro. If the chemical is readily biodegradable, it is considered “safe”. Otherwise, the chemical is examined for bioaccumulation. If it is found to be “highly accumulating”, full toxicity data are requested, on the basis of which the chemical is classified as a “Class 1 specified chemical substance” when toxicity is confirmed, or as “safe” otherwise. A chemical with no or low accumulation is subjected to toxicity screening tests, which consist of mutagenicity tests and 28-day repeated dosing of experimental animals (for details, see table 2). After comprehensive evaluation of the toxicity data, the chemical is classified as a “Designated chemical substance” if the data indicate toxicity; otherwise, it is considered “safe”. When other data suggest a strong possibility of environmental pollution by the chemical in question, full toxicity data are requested, on the basis of which the designated chemical is reclassified as a “Class 2 specified chemical substance” if toxicity is confirmed; otherwise, it is considered “safe”. Toxicological and ecotoxicological characteristics of “Class 1 specified chemical substances”, “Class 2 specified chemical substances” and “Designated chemical substances” are listed in table 3, together with outlines of the regulatory actions that apply.
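The stepwise logic described in this paragraph and depicted in figure 1 can also be paraphrased as a small decision function. The Python sketch below is an illustrative reading of the scheme, not an official implementation, and the argument names are ours.

    def cscl_classification(readily_biodegradable=False, highly_bioaccumulative=False,
                            toxicity_confirmed=False, screening_suggests_toxicity=False,
                            pollution_suspected=False):
        # Step 1: readily biodegradable chemicals are considered "safe".
        if readily_biodegradable:
            return "safe"
        # Step 2: highly bioaccumulative chemicals: full toxicity data are requested.
        if highly_bioaccumulative:
            return "Class 1 specified chemical substance" if toxicity_confirmed else "safe"
        # Step 3: no or low accumulation: toxicity screening (mutagenicity and 28-day study).
        if not screening_suggests_toxicity:
            return "safe"
        # Step 4: designated; reclassified as Class 2 if environmental pollution is
        # suspected and the full toxicity data requested at that point confirm toxicity.
        if pollution_suspected:
            return "Class 2 specified chemical substance" if toxicity_confirmed else "safe"
        return "Designated chemical substance"

    print(cscl_classification(highly_bioaccumulative=True, toxicity_confirmed=True))
    # Class 1 specified chemical substance
    print(cscl_classification(screening_suggests_toxicity=True))
    # Designated chemical substance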

Figure 1. Scheme of examination


Table 2. Test items under the Chemical Substance Control Law, Japan

Item: test design

Biodegradation: for 2 weeks in principle, in vitro, with activated sludge
Bioaccumulation: for 8 weeks in principle, with carp
Toxicity screening, mutagenicity tests: bacterial system (Ames test and test with E. coli, ± S9 mix); chromosome aberration (CHL cells, etc., ± S9 mix)
Toxicity screening, 28-day repeated dosing: rats, 3 dose levels plus control for determination of the NOEL; 2-week recovery test at the highest dose level in addition

Table 3. Characteristics of classified chemical substances and regulations under the Japanese Chemical Substances Control Law

Class 1 specified chemical substances. Characteristics: nonbiodegradability; high bioaccumulation; chronic toxicity. Regulation: authorization necessary to manufacture or import¹; restriction in use.

Class 2 specified chemical substances. Characteristics: nonbiodegradability; no or low bioaccumulation; chronic toxicity; suspected environmental pollution. Regulation: notification of scheduled manufacturing or import quantity; technical guideline to prevent pollution/health effects.

Designated chemical substances. Characteristics: nonbiodegradability; no or low bioaccumulation; suspected chronic toxicity. Regulation: report on manufacturing or import quantity; study and literature survey.

1 No authorization in practice.

Testing is not required for a new chemical with a limited use amount (i.e., less than 1,000 kg/company/year and less than 1,000 kg/year for all of Japan). Polymers are examined following the high molecular-weight compound flow scheme, which is based on the assumption that absorption into the body is unlikely when a chemical has a molecular weight greater than 1,000 and is stable in the environment.
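Expressed as a check, the routing described in this paragraph might look as follows; this is an illustrative Python sketch in which the thresholds are those quoted above and the function name is ours.

    def cscl_examination_route(annual_kg_per_company, annual_kg_japan_total, is_polymer=False):
        # Small-quantity new chemicals are exempt from testing.
        if annual_kg_per_company < 1000 and annual_kg_japan_total < 1000:
            return "testing not required (small-quantity exemption)"
        # Polymers follow the separate high molecular-weight compound flow scheme.
        if is_polymer:
            return "high molecular-weight compound flow scheme"
        return "standard flow scheme (figure 1)"

    print(cscl_examination_route(500, 800))                      # small-quantity exemption
    print(cscl_examination_route(5000, 20000, is_polymer=True))  # polymer flow scheme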

Results of Classification of Industrial Chemicals, as of 1996

From the passage of the CSCL in 1973 to the end of 1996, 1,087 existing chemical items were examined under the original and amended CSCL. Among the 1,087, nine items (some identified by generic names) were classified as “Class 1 specified chemical substance”. Among those remaining, 36 were classified as “designated”, of which 23 were reclassified as “Class 2 specified chemical substance” and the other 13 remained “designated”. The names of Class 1 and Class 2 specified chemical substances are listed in figure 2. It is clear from this list that most of the Class 1 chemicals are organochlorine pesticides in addition to PCB and its substitute, except for one seaweed killer. A majority of the Class 2 chemicals are seaweed killers, with the exceptions of three once widely used chlorinated hydrocarbon solvents.

Figure 2. Specified and designated chemical substances under the Japanese Chemical Substances Control Law


In the same period from 1973 to the end of 1996, about 2,335 new chemicals were submitted for approval, of which 221 (about 9.5%) were identified as “designated”, but none as Class 1 or 2 chemicals. Other chemicals were considered “safe” and approved for manufacturing or import.

 



Toxicology in Health and Safety Regulation

Toxicology plays a major role in the development of regulations and other occupational health policies. In order to prevent occupational injury and illness, decisions are increasingly based upon information obtainable prior to, or in the absence of, the types of human exposures that would yield definitive information on risk, such as epidemiological studies. In addition, toxicological studies, as described in this chapter, can provide precise information on dose and response under the controlled conditions of laboratory research; this information is often difficult to obtain in the uncontrolled setting of occupational exposures. However, this information must be carefully evaluated in order to estimate the likelihood of adverse effects in humans, the nature of these adverse effects, and the quantitative relationship between exposures and effects.

Considerable attention has been given in many countries, since the 1980s, to developing objective methods for utilizing toxicological information in regulatory decision-making. Formal methods, frequently referred to as risk assessment, have been proposed and utilized in these countries by both governmental and non-governmental entities. Risk assessment has been varyingly defined; fundamentally it is an evaluative process that incorporates toxicology, epidemiology and exposure information to identify and estimate the probability of adverse effects associated with exposures to hazardous substances or conditions. Risk assessment may be qualitative in nature, indicating the nature of an adverse effect and a general estimate of likelihood, or it may be quantitative, with estimates of numbers of affected persons at specific levels of exposure. In many regulatory systems, risk assessment is undertaken in four stages: hazard identification, the description of the nature of the toxic effect; dose-response evaluation, a semi-quantitative or quantitative analysis of the relationship between exposure (or dose) and severity or likelihood of toxic effect; exposure assessment, the evaluation of information on the range of exposures likely to occur for populations in general or for subgroups within populations; risk characterization, the compilation of all the above information into an expression of the magnitude of risk expected to occur under specified exposure conditions (see NRC 1983 for a statement of these principles).
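To show how the four stages fit together numerically, the Python sketch below works through a hypothetical inhalation exposure using a generic linear, no-threshold convention (excess risk taken as a slope factor multiplied by the lifetime average daily dose). This convention is common in quantitative cancer risk assessment but is not prescribed by the sources cited here, and the intake defaults, slope factor and air concentration are invented for illustration.

    def lifetime_average_daily_dose(air_conc_mg_m3, inhalation_m3_per_day=20.0,
                                    body_weight_kg=70.0, exposure_years=25,
                                    lifetime_years=70):
        # Exposure assessment: daily intake averaged over a full lifetime
        # (workday/weekend adjustments are ignored for simplicity).
        daily_intake = air_conc_mg_m3 * inhalation_m3_per_day / body_weight_kg  # mg/kg-day
        return daily_intake * exposure_years / lifetime_years

    def excess_lifetime_risk(ladd_mg_kg_day, slope_factor_per_mg_kg_day):
        # Risk characterization under a linear, no-threshold dose-response assumption.
        return ladd_mg_kg_day * slope_factor_per_mg_kg_day

    ladd = lifetime_average_daily_dose(0.05)   # hypothetical 0.05 mg/m3 in workplace air
    risk = excess_lifetime_risk(ladd, 0.02)    # hypothetical slope factor, (mg/kg-day)^-1
    print(round(ladd, 4), round(risk, 6))      # about 0.0051 mg/kg-day and about 1 in 10,000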

In this section, three approaches to risk assessment are presented as illustrative. It is impossible to provide a comprehensive compendium of risk assessment methods used throughout the world, and these selections should not be taken as prescriptive. It should be noted that there are trends towards harmonization of risk assessment methods, partly in response to provisions in the recent GATT accords. Two processes of international harmonization of risk assessment methods are currently underway, through the International Programme on Chemical Safety (IPCS) and the Organization for Economic Cooperation and Development (OECD). These organizations also maintain current information on national approaches to risk assessment.

 



Structure Activity Relationships

Structure-activity relationship (SAR) analysis is the utilization of information on the molecular structure of chemicals to predict important characteristics related to persistence, distribution, uptake and absorption, and toxicity. SAR is an alternative method of identifying potentially hazardous chemicals, which holds promise of assisting industries and governments in prioritizing substances for further evaluation or for early-stage decision making on new chemicals. Toxicology is an increasingly expensive and resource-intensive undertaking. Increased concerns over the potential for chemicals to cause adverse effects in exposed human populations have prompted regulatory and health agencies to expand the range and sensitivity of tests to detect toxicological hazards. At the same time, the real and perceived burdens of regulation upon industry have provoked concerns for the practicality of toxicity testing methods and data analysis. At present, the determination of chemical carcinogenicity depends upon lifetime testing of at least two species, both sexes, at several doses, with careful histopathological analysis of multiple organs, as well as detection of preneoplastic changes in cells and target organs. In the United States, the cancer bioassay is estimated to cost in excess of $3 million (1995 dollars).

Even with unlimited financial resources, the burden of testing the approximately 70,000 existing chemicals produced in the world today would exceed the available resources of trained toxicologists. Centuries would be required to complete even a first tier evaluation of these chemicals (NRC 1984). In many countries ethical concerns over the use of animals in toxicity testing have increased, bringing additional pressures upon the uses of standard methods of toxicity testing. SAR has been widely used in the pharmaceutical industry to identify molecules with potential for beneficial use in treatment (Hansch and Zhang 1993). In environmental and occupational health policy, SAR is used to predict the dispersion of compounds in the physical-chemical environment and to screen new chemicals for further evaluation of potential toxicity. Under the US Toxic Substances Control Act (TSCA), the EPA has, since 1979, used an SAR approach as a “first screen” of new chemicals in the premanufacture notification (PMN) process; Australia uses a similar approach as part of its new chemicals notification (NICNAS) procedure. In the US, SAR analysis is an important part of determining whether there is a reasonable basis to conclude that manufacture, processing, distribution, use or disposal of a substance will present an unreasonable risk of injury to human health or the environment, as required by Section 5(f) of TSCA. On the basis of this finding, EPA can then require actual tests of the substance under Section 6 of TSCA.

Rationale for SAR

The scientific rationale for SAR is based upon the assumption that the molecular structure of a chemical will predict important aspects of its behaviour in physical-chemical and biological systems (Hansch and Leo 1979).

SAR Process

The SAR review process includes identification of the chemical structure, including empirical formulations as well as the pure compound; identification of structurally analogous substances; searching databases and literature for information on structural analogs; and analysis of toxicity and other data on structural analogs. In some rare cases, information on the structure of the compound alone can be sufficient to support some SAR analysis, based upon well-understood mechanisms of toxicity. Several databases on SAR have been compiled, as well as computer-based methods for molecular structure prediction.
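As a toy illustration of what such a first-pass screen can involve, the Python sketch below flags a few classic electrophilic structural alerts (of the kind discussed by Ashby and Tennant) in a SMILES string by simple substring matching. A real system would use genuine substructure searching with a curated alert library; the patterns and names used here are illustrative only.

    # Toy structural-alert screen: substring matching against SMILES strings.
    ALERTS = {
        "aromatic nitro": "[N+](=O)[O-]",
        "epoxide": "C1CO1",
        "N-nitroso": "N(N=O)",
        "aromatic amine": "Nc1ccccc1",
    }

    def flag_alerts(smiles):
        # Return the names of any alert whose pattern appears verbatim in the SMILES.
        return [name for name, pattern in ALERTS.items() if pattern in smiles]

    print(flag_alerts("c1ccc(cc1)[N+](=O)[O-]"))  # nitrobenzene: ['aromatic nitro']
    print(flag_alerts("C1CO1"))                   # ethylene oxide: ['epoxide']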

With this information, the following endpoints can be estimated with SAR:

  • physical-chemical parameters: boiling point, vapour pressure, water solubility, octanol/water partition coefficient
  • biological/environmental fate parameters: biodegradation, soil sorption, photodegradation, pharmacokinetics
  • toxicity parameters: aquatic organism toxicity, absorption, acute mammalian toxicity (limit test or LD50), dermal, lung and eye irritation, sensitization, subchronic toxicity, mutagenicity.

 

It should be noted that SAR methods do not exist for such important health endpoints as carcinogenicity, developmental toxicity, reproductive toxicity, neurotoxicity, immunotoxicity or other target organ effects. This is due to three factors: the lack of a large database upon which to test SAR hypotheses, lack of knowledge of structural determinants of toxic action, and the multiplicity of target cells and mechanisms that are involved in these endpoints (see “The United States approach to risk assessment of reproductive toxicants and neurotoxic agents”). Some limited attempts have been made to utilize SAR for predicting pharmacokinetics, using information on partition coefficients and solubility (Johanson and Naslund 1988). More extensive quantitative SAR has been done to predict P450-dependent metabolism of a range of compounds and binding of dioxin- and PCB-like molecules to the cytosolic “dioxin” receptor (Hansch and Zhang 1993).

SAR has been shown to have varying predictability for some of the endpoints listed above, as shown in table 1. This table presents data from two comparisons of predicted activity with actual results obtained by empirical measurement or toxicity testing. SAR as conducted by US EPA experts performed more poorly for predicting physical-chemical properties than for predicting biological activity, including biodegradation. For toxicity endpoints, SAR performed best for predicting mutagenicity. Ashby and Tennant (1991) in a more extended study also found good predictability of short-term genotoxicity in their analysis of NTP chemicals. These findings are not surprising, given current understanding of molecular mechanisms of genotoxicity (see “Genetic toxicology”) and the role of electrophilicity in DNA binding. In contrast, SAR tended to underpredict systemic and subchronic toxicity in mammals and to overpredict acute toxicity to aquatic organisms.

Table 1. Comparison of SAR and test data: OECD/NTP analyses

Endpoint                                 Agreement (%)    Disagreement (%)    Number
Boiling point                            50               50                  30
Vapour pressure                          63               37                  113
Water solubility                         68               32                  133
Partition coefficient                    61               39                  82
Biodegradation                           93               7                   107
Fish toxicity                            77               22                  130
Daphnia toxicity                         67               33                  127
Acute mammalian toxicity (LD50)          80               20¹                 142
Skin irritation                          82               18                  144
Eye irritation                           78               22                  144
Skin sensitization                       84               16                  144
Subchronic toxicity                      57               32                  143
Mutagenicity²                            88               12                  139
Mutagenicity³                            82–94⁴           1–10                301
Carcinogenicity³: two-year bioassay      72–95⁴                               301

Source: Data from OECD, personal communication, C. Auer, US EPA. Only those endpoints for which comparable SAR predictions and actual test data were available were used in this analysis. NTP data are from Ashby and Tennant 1991.

1 Of concern was the failure by SAR to predict acute toxicity in 12% of the chemicals tested.

2 OECD data, based on Ames test concordance with SAR

3 NTP data, based on genetox assays compared to SAR predictions for several classes of “structurally alerting chemicals”.

4 Concordance varies with class; highest concordance was with aromatic amino/nitro compounds; lowest with “miscellaneous” structures.

For other toxic endpoints, as noted above, SAR has less demonstrable utility. Mammalian toxicity predictions are complicated by the lack of SAR for toxicokinetics of complex molecules. Nevertheless, some attempts have been made to propose SAR principles for complex mammalian toxicity endpoints (for instance, see Bernstein (1984) for an SAR analysis of potential male reproductive toxicants). In most cases, the database is too small to permit rigorous testing of structure-based predictions.

At this point it may be concluded that SAR may be useful mainly for prioritizing the investment of toxicity testing resources or for raising early concerns about potential hazard. Only in the case of mutagenicity is it likely that SAR analysis by itself can be utilized with reliability to inform other decisions. For no endpoint is it likely that SAR can provide the type of quantitative information required for risk assessment purposes as discussed elsewhere in this chapter and Encyclopaedia.

 



In Vitro Toxicity Testing

The emergence of sophisticated technologies in molecular and cellular biology has spurred a relatively rapid evolution in the life sciences, including toxicology. In effect, the focus of toxicology is shifting from whole animals and populations of whole animals to the cells and molecules of individual animals and humans. Since the mid-1980s, toxicologists have begun to employ these new methodologies in assessing the effects of chemicals on living systems. As a logical progression, such methods are being adapted for the purposes of toxicity testing. These scientific advances have worked together with social and economic factors to effect change in the evaluation of product safety and potential risk.

Economic factors are specifically related to the volume of materials that must be tested. A plethora of new cosmetics, pharmaceuticals, pesticides, chemicals and household products is introduced into the market every year. All of these products must be evaluated for their potential toxicity. In addition, there is a backlog of chemicals already in use that have not been adequately tested. The enormous task of obtaining detailed safety information on all of these chemicals using traditional whole animal testing methods would be costly in terms of both money and time, if it could even be accomplished.

There are also societal issues that relate to public health and safety, as well as increasing public concern about the use of animals for product safety testing. With regard to human safety, public interest and environmental advocacy groups have placed significant pressure on government agencies to apply more stringent regulations on chemicals. A recent example of this has been a movement by some environmental groups to ban chlorine and chlorine-containing compounds in the United States. One of the motivations for such an extreme action lies in the fact that most of these compounds have never been adequately tested. From a toxicological perspective, the concept of banning a whole class of diverse chemicals based simply on the presence of chlorine is both scientifically unsound and irresponsible. Yet, it is understandable that from the public’s perspective, there must be some assurance that chemicals released into the environment do not pose a significant health risk. Such a situation underscores the need for more efficient and rapid methods to assess toxicity.

The other societal concern that has impacted the area of toxicity testing is animal welfare. A growing number of animal protection groups throughout the world have voiced considerable opposition to the use of whole animals for product safety testing. Active campaigns have been waged against manufacturers of cosmetics, household and personal care products and pharmaceuticals in attempts to stop animal testing. Such efforts in Europe have resulted in the passage of the Sixth Amendment to Directive 76/768/EEC (the Cosmetics Directive). The consequence of this Directive is that cosmetic products or cosmetic ingredients that have been tested in animals after January 1, 1998 cannot be marketed in the European Union, unless alternative methods remain insufficiently validated. While this Directive has no jurisdiction over the sale of such products in the United States or other countries, it will significantly affect those companies that have international markets that include Europe.

The concept of alternatives, which forms the basis for the development of tests other than those on whole animals, is defined by the three Rs: reduction in the numbers of animals used; refinement of protocols so that animals experience less stress or discomfort; and replacement of current animal tests with in vitro tests (i.e., tests done outside of the living animal), computer models or tests on lower vertebrate or invertebrate species. The three Rs were introduced in a book published in 1959 by two British scientists, W.M.S. Russell and Rex Burch, The Principles of Humane Experimental Technique. Russell and Burch maintained that the only way in which valid scientific results could be obtained was through the humane treatment of animals, and believed that methods should be developed to reduce animal use and ultimately replace it. Interestingly, the principles outlined by Russell and Burch received little attention until the resurgence of the animal welfare movement in the mid-1970s. Today the concept of the three Rs is very much in the forefront with regard to research, testing and education.

In summary, the development of in vitro test methodologies has been influenced by a variety of factors that have converged over the last ten to 20 years. It is difficult to ascertain if any of these factors alone would have had such a profound effect on toxicity testing strategies.

Concept of In Vitro Toxicity Tests

This section will focus solely on in vitro methods for evaluating toxicity, as one of the alternatives to whole-animal testing. Additional non-animal alternatives such as computer modelling and quantitative structure-activity relationships are discussed in other articles of this chapter.

In vitro studies are generally conducted in animal or human cells or tissues outside of the body. In vitro literally means “in glass”, and refers to procedures carried out on living material or components of living material cultured in petri dishes or in test tubes under defined conditions. These may be contrasted with in vivo studies, or those carried out “in the living animal”. While it is difficult, if not impossible, to project the effects of a chemical on a complex organism when the observations are confined to a single type of cells in a dish, in vitro studies do provide a significant amount of information about intrinsic toxicity as well as cellular and molecular mechanisms of toxicity. In addition, they offer many advantages over in vivo studies in that they are generally less expensive and they may be conducted under more controlled conditions. Furthermore, despite the fact that small numbers of animals are still needed to obtain cells for in vitro cultures, these methods may be considered reduction alternatives (since many fewer animals are used compared to in vivo studies) and refinement alternatives (because they eliminate the need to subject the animals to the adverse toxic consequences imposed by in vivo experiments).

In order to interpret the results of in vitro toxicity tests, determine their potential usefulness in assessing toxicity and relate them to the overall toxicological process in vivo, it is necessary to understand which part of the toxicological process is being examined. The entire toxicological process consists of events that begin with the organism’s exposure to a physical or chemical agent, progress through cellular and molecular interactions and ultimately manifest themselves in the response of the whole organism. In vitro tests are generally limited to the part of the toxicological process that takes place at the cellular and molecular level. The types of information that may be obtained from in vitro studies include pathways of metabolism, interaction of active metabolites with cellular and molecular targets and potentially measurable toxic endpoints that can serve as molecular biomarkers for exposure. In an ideal situation, the mechanism of toxicity of each chemical from exposure to organismal manifestation would be known, such that the information obtained from in vitro tests could be fully interpreted and related to the response of the whole organism. However, this is virtually impossible, since relatively few complete toxicological mechanisms have been elucidated. Thus, toxicologists are faced with a situation in which the results of an in vitro test cannot be used as an entirely accurate prediction of in vivo toxicity because the mechanism is unknown. However, frequently during the process of developing an in vitro test, components of the cellular and molecular mechanism(s) of toxicity are elucidated.

One of the key unresolved issues surrounding the development and implementation of in vitro tests is related to the following consideration: should they be mechanistically based or is it sufficient for them to be descriptive? It is inarguably better from a scientific perspective to utilize only mechanistically based tests as replacements for in vivo tests. However in the absence of complete mechanistic knowledge, the prospect of developing in vitro tests to completely replace whole animal tests in the near future is almost nil. This does not, however, rule out the use of more descriptive types of assays as early screening tools, which is the case presently. These screens have resulted in a significant reduction in animal use. Therefore, until such time as more mechanistic information is generated, it may be necessary to employ to a more limited extent, tests whose results simply correlate well with those obtained in vivo.

In Vitro Tests for Cytotoxicity

In this section, several in vitro tests that have been developed to assess a chemical’s cytotoxic potential will be described. For the most part, these tests are easy to perform and analysis can be automated. One commonly used in vitro test for cytotoxicity is the neutral red assay. This assay is done on cells in culture, and for most applications, the cells can be maintained in culture dishes that contain 96 small wells, each 6.4 mm in diameter. Since each well can be used for a single determination, this arrangement can accommodate multiple concentrations of the test chemical as well as positive and negative controls with a sufficient number of replicates for each. Following treatment of the cells with various concentrations of the test chemical ranging over at least two orders of magnitude (e.g., from 0.01 mM to 1 mM), as well as positive and negative control chemicals, the cells are rinsed and treated with neutral red, a dye that can be taken up and retained only by live cells. The dye may be added upon removal of the test chemical to determine immediate effects, or it may be added at various times after the test chemical is removed to determine cumulative or delayed effects. The intensity of the colour in each well corresponds to the number of live cells in that well. The colour intensity is measured by a spectrophotometer which may be equipped with a plate reader. The plate reader is programmed to provide individual measurements for each of the 96 wells of the culture dish. This automated methodology permits the investigator to rapidly perform a concentration-response experiment and to obtain statistically useful data.

Another relatively simple assay for cytotoxicity is the MTT test. MTT (3[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) is a tetrazolium dye that is reduced by mitochondrial enzymes to a blue colour. Only cells with viable mitochondria will retain the ability to carry out this reaction; therefore the colour intensity is directly related to the degree of mitochondrial integrity. This is a useful test to detect general cytotoxic compounds as well as those agents that specifically target mitochondria.

The measurement of lactate dehydrogenase (LDH) activity is also used as a broad-based assay for cytotoxicity. This enzyme is normally present in the cytoplasm of living cells and is released into the cell culture medium through leaky cell membranes of dead or dying cells that have been adversely affected by a toxic agent. Small amounts of culture medium may be removed at various times after chemical treatment of the cells to measure the amount of LDH released and determine a time course of toxicity. While the LDH release assay is a very general assessment of cytotoxicity, it is useful because it is easy to perform and it may be done in real time.

There are many new methods being developed to detect cellular damage. More sophisticated methods employ fluorescent probes to measure a variety of intracellular parameters, such as calcium release and changes in pH and membrane potential. In general, these probes are very sensitive and may detect more subtle cellular changes, thus reducing the need to use cell death as an endpoint. In addition, many of these fluorescent assays may be automated by the use of 96-well plates and fluorescent plate readers.

Once data have been collected on a series of chemicals using one of these tests, the relative toxicities may be determined. The relative toxicity of a chemical, as determined in an in vitro test, may be expressed as the concentration that reduces the endpoint response to 50% of that of untreated cells. This value is referred to as the EC50 (effective concentration for 50% of the cells) and may be used to compare toxicities of different chemicals in vitro. (A similar term used in evaluating relative toxicity is IC50, indicating the concentration of a chemical that causes a 50% inhibition of a cellular process, e.g., the ability to take up neutral red.) It is not easy to assess whether the relative in vitro toxicity of the chemicals is comparable to their relative in vivo toxicities, since there are so many confounding factors in the in vivo system, such as toxicokinetics, metabolism, repair and defence mechanisms. In addition, since most of these assays measure general cytotoxicity endpoints, they are not mechanistically based. Therefore, agreement between in vitro and in vivo relative toxicities is simply correlative. Despite the numerous complexities and difficulties in extrapolating from in vitro to in vivo, these in vitro tests are proving to be very valuable because they are simple and inexpensive to perform and may be used as screens to flag highly toxic drugs or chemicals at early stages of development.
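To make the EC50 calculation concrete, the Python sketch below estimates an EC50 from a small set of invented neutral-red-type viability readings by log-linear interpolation between the two concentrations that bracket the 50% response; a real analysis would normally fit a full concentration-response model rather than interpolate.

    import math

    def ec50(concentrations, responses):
        # concentrations in increasing order; responses as % of the untreated control.
        # Returns the concentration at which the response crosses 50%, found by
        # log-linear interpolation between the two bracketing data points.
        points = list(zip(concentrations, responses))
        for (c_lo, r_lo), (c_hi, r_hi) in zip(points, points[1:]):
            if r_lo >= 50 >= r_hi:
                frac = (r_lo - 50) / (r_lo - r_hi)
                log_c = math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo))
                return 10 ** log_c
        return None  # the tested range did not span a 50% response

    # Hypothetical neutral red data: concentration (mM) versus viability (% of control)
    conc = [0.01, 0.03, 0.1, 0.3, 1.0]
    viab = [98, 90, 70, 35, 10]
    print(round(ec50(conc, viab), 3))  # about 0.187 mM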

Target Organ Toxicity

In vitro tests can also be used to assess specific target organ toxicity. There are a number of difficulties associated with designing such tests, the most notable being the inability of in vitro systems to maintain many of the features of the organ in vivo. Frequently, when cells are taken from animals and placed into culture, they tend to degenerate quickly, to dedifferentiate (that is, lose their organ-like functions and become more generic), or both. This presents a problem in that, within a short period of time, usually a few days, the cultures are no longer useful for assessing organ-specific effects of a toxin.

Many of these problems are being overcome because of recent advances in molecular and cellular biology. Information that is obtained about the cellular environment in vivo may be utilized in modulating culture conditions in vitro. Since the mid-1980s, new growth factors and cytokines have been discovered, and many of these are now available commercially. Addition of these factors to cells in culture helps to preserve their integrity and may also help to retain more differentiated functions for longer periods of time. Other basic studies have increased the knowledge of the nutritional and hormonal requirements of cells in culture, so that new media may be formulated. Recent advances have also been made in identifying both naturally occurring and artificial extracellular matrices on which cells may be cultured. Culture of cells on these different matrices can have profound effects on both their structure and function. A major advantage derived from this knowledge is the ability to intricately control the environment of cells in culture and individually examine the effects of these factors on basic cell processes and on their responses to different chemical agents. In short, these systems can provide great insight into organ-specific mechanisms of toxicity.

Many target organ toxicity studies are conducted in primary cells, which by definition are freshly isolated from an organ and usually exhibit a finite lifetime in culture. There are many advantages to having primary cultures of a single cell type from an organ for toxicity assessment. From a mechanistic perspective, such cultures are useful for studying specific cellular targets of a chemical. In some instances, two or more cell types from an organ may be cultured together, and this provides an added advantage of being able to look at cell-cell interactions in response to a toxin. Some co-culture systems for skin have been engineered so that they form a three-dimensional structure resembling skin in vivo. It is also possible to co-culture cells from different organs (for example, liver and kidney). This type of culture would be useful in assessing the kidney-specific effects of a chemical that must be bioactivated in the liver.

Molecular biological tools have also played an important role in the development of continuous cell lines that can be useful for target organ toxicity testing. These cell lines are generated by transfecting DNA into primary cells. In the transfection procedure, the cells and the DNA are treated such that the DNA can be taken up by the cells. The DNA is usually from a virus and contains a gene or genes that, when expressed, allow the cells to become immortalized (i.e., able to live and grow for extended periods of time in culture). The DNA can also be engineered so that the immortalizing gene is controlled by an inducible promoter. The advantage of this type of construct is that the cells will divide only when they receive the appropriate chemical stimulus to allow expression of the immortalizing gene. An example of such a construct is the large T antigen gene from Simian Virus 40 (SV40) (the immortalizing gene), preceded by the promoter region of the metallothionein gene, which is induced by the presence of a metal in the culture medium. Thus, after the gene is transfected into the cells, the cells may be treated with low concentrations of zinc to stimulate the MT promoter and turn on the expression of the T antigen gene. Under these conditions, the cells proliferate. When zinc is removed from the medium, the cells stop dividing and under ideal conditions return to a state where they express their tissue-specific functions.

The ability to generate immortalized cells combined with the advances in cell culture technology have greatly contributed to the creation of cell lines from many different organs, including brain, kidney and liver. However, before these cell lines may be used as a surrogate for the bona fide cell types, they must be carefully characterized to determine how “normal” they really are.

Other in vitro systems for studying target organ toxicity involve increasing complexity. As in vitro systems progress in complexity from single cell to whole organ culture, they become more comparable to the in vivo milieu, but at the same time they become much more difficult to control given the increased number of variables. Therefore, what may be gained in moving to a higher level of organization can be lost in the inability of the researcher to control the experimental environment. Table 1 compares some of the characteristics of various in vitro systems that have been used to study hepatotoxicity.

Table 1. Comparison of in vitro systems for hepatotoxicity studies

System | Complexity (level of interaction) | Ability to retain liver-specific functions | Potential duration of culture | Ability to control environment
Immortalized cell lines | some cell-to-cell (varies with cell line) | poor to good (varies with cell line) | indefinite | excellent
Primary hepatocyte cultures | cell-to-cell | fair to excellent (varies with culture conditions) | days to weeks | excellent
Liver cell co-cultures | cell-to-cell (between the same and different cell types) | good to excellent | weeks | excellent
Liver slices | cell-to-cell (among all cell types) | good to excellent | hours to days | good
Isolated, perfused liver | cell-to-cell (among all cell types) and intra-organ | excellent | hours | fair

 

Precision-cut tissue slices are being used more extensively for toxicological studies. There are new instruments available that enable the researcher to cut uniform tissue slices in a sterile environment. Tissue slices offer some advantage over cell culture systems in that all of the cell types of the organ are present and they maintain their in vivo architecture and intercellular communication. Thus, in vitro studies may be conducted to determine the target cell type within an organ as well as to investigate specific target organ toxicity. A disadvantage of the slices is that they degenerate rapidly after the first 24 hours of culture, mainly due to poor diffusion of oxygen to the cells on the interior of the slices. However, recent studies have indicated that more efficient aeration may be achieved by gentle rotation. This, together with the use of a more complex medium, allows the slices to survive for up to 96 hours.

Tissue explants are similar in concept to tissue slices and may also be used to determine the toxicity of chemicals in specific target organs. Tissue explants are established by removing a small piece of tissue (for teratogenicity studies, an intact embryo) and placing it into culture for further study. Explant cultures have been useful for short-term toxicity studies including irritation and corrosivity in skin, asbestos studies in trachea and neurotoxicity studies in brain tissue.

Isolated perfused organs may also be used to assess target organ toxicity. These systems offer an advantage similar to that of tissue slices and explants in that all cell types are present, but without the stress to the tissue introduced by the manipulations involved in preparing slices. In addition, they allow for the maintenance of intra-organ interactions. A major disadvantage is their short-term viability, which limits their use for in vitro toxicity testing. In terms of serving as an alternative, these cultures may be considered a refinement since the animals do not experience the adverse consequences of in vivo treatment with toxicants. However, their use does not significantly decrease the numbers of animals required.

In summary, there are several types of in vitro systems available for assessing target organ toxicity. It is possible to acquire much information about mechanisms of toxicity using one or more of these techniques. The difficulty remains in knowing how to extrapolate from an in vitro system, which represents a relatively small part of the toxicological process, to the whole process occurring in vivo.

In Vitro Tests for Ocular Irritation

Perhaps the most contentious whole-animal toxicity test from an animal welfare perspective is the Draize test for eye irritation, which is conducted in rabbits. In this test, a small fixed dose of a chemical is placed in one of the rabbit’s eyes while the other eye is used as a control. The degree of irritation and inflammation is scored at various times after exposure. A major effort is being made to develop methodologies to replace this test, which has been criticized not only for humane reasons, but also because of the subjectivity of the observations and the variability of the results. It is interesting to note that despite the harsh criticism the Draize test has received, it has proven to be remarkably successful in predicting human eye irritants, particularly slightly to moderately irritating substances, which are difficult to identify by other methods. Thus, the demands on in vitro alternatives are great.

The quest for alternatives to the Draize test is a complicated one, albeit one that is predicted to be successful. Numerous in vitro and other alternatives have been developed and in some cases they have been implemented. Refinement alternatives to the Draize test, which, by definition, are less painful or distressing to the animals, include the Low Volume Eye Test, in which smaller amounts of test materials are placed in the rabbits’ eyes, not only for humane reasons but also to more closely mimic the amounts to which people may actually be accidentally exposed. Another refinement is that substances with a pH of less than 2 or greater than 11.5 are no longer tested in animals, since they are known to be severely irritating to the eye.

Between 1980 and 1989, there was an estimated 87% decline in the number of rabbits used for eye irritation testing of cosmetics. In vitro tests have been incorporated into a tier-testing approach to bring about this vast reduction in whole-animal tests. This approach is a multi-step process that begins with a thorough examination of the historical eye irritation data and a physical and chemical analysis of the chemical to be evaluated. If these two processes do not yield enough information, then a battery of in vitro tests is performed. The additional data obtained from the in vitro tests might then be sufficient to assess the safety of the substance. If not, then the final step would be to perform limited in vivo tests. It is easy to see how this approach can eliminate or at least drastically reduce the number of animals needed to predict the safety of a test substance.

The battery of in vitro tests that is used as part of this tier-testing strategy depends upon the needs of the particular industry. Eye irritation testing is done by a wide variety of industries from cosmetics to pharmaceuticals to industrial chemicals. The type of information required by each industry varies and therefore it is not possible to define a single battery of in vitro tests. A test battery is generally designed to assess five parameters: cytotoxicity, changes in tissue physiology and biochemistry, quantitative structure-activity relationships, inflammation mediators, and recovery and repair. An example of a test for cytotoxicity, which is one possible cause for irritation, is the neutral red assay using cultured cells (see above). Changes in cellular physiology and biochemistry resulting from exposure to a chemical may be assayed in cultures of human corneal epithelial cells. Alternatively, investigators have also used intact or dissected bovine or chicken eyeballs obtained from slaughterhouses. Many of the endpoints measured in these whole organ cultures are the same as those measured in vivo, such as corneal opacity and corneal swelling.

Inflammation is frequently a component of chemical-induced eye injury, and there are a number of assays available to examine this parameter. Various biochemical assays detect the presence of mediators released during the inflammatory process such as arachidonic acid and cytokines. The chorioallantoic membrane (CAM) of the hen’s egg may also be used as an indicator of inflammation. In the CAM assay, a small piece of the shell of a ten-to-14-day chick embryo is removed to expose the CAM. The chemical is then applied to the CAM and signs of inflammation, such as vascular hemorrhaging, are scored at various times thereafter.

One of the most difficult in vivo processes to assess in vitro is recovery and repair of ocular injury. A newly developed instrument, the silicon microphysiometer, measures small changes in extracellular pH and can be used to monitor cultured cells in real time. This analysis has been shown to correlate fairly well with in vivo recovery and has been used as an in vitro test for this process. This has been a brief overview of the types of tests being employed as alternatives to the Draize test for ocular irritation. It is likely that within the next several years a complete series of in vitro test batteries will be defined and each will be validated for its specific purpose.

Validation

The key to regulatory acceptance and implementation of in vitro test methodologies is validation, the process by which the credibility of a candidate test is established for a specific purpose. Efforts to define and coordinate the validation process have been made both in the United States and in Europe. The European Union established the European Centre for the Validation of Alternative Methods (ECVAM) in 1993 to coordinate efforts there and to interact with American organizations such as the Johns Hopkins Center for Alternatives to Animal Testing (CAAT), an academic centre in the United States, and the Interagency Coordinating Committee for the Validation of Alternative Methods (ICCVAM), composed of representatives from the National Institutes of Health, the US Environmental Protection Agency, the US Food and Drug Administration and the Consumer Product Safety Commission.

Validation of in vitro tests requires substantial organization and planning. There must be consensus among government regulators and industrial and academic scientists on acceptable procedures, and sufficient oversight by a scientific advisory board to ensure that the protocols meet set standards. The validation studies should be performed in a series of reference laboratories using calibrated sets of chemicals from a chemical bank and cells or tissues from a single source. Both intralaboratory repeatability and interlaboratory reproducibility of a candidate test must be demonstrated and the results subjected to appropriate statistical analysis. Once the results from the different components of the validation studies have been compiled, the scientific advisory board can make recommendations on the validity of the candidate test(s) for a specific purpose. In addition, results of the studies should be published in peer-reviewed journals and placed in a database.
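As a simple illustration of the kind of statistical summary such a validation study might report, the sketch below computes within-laboratory and between-laboratory coefficients of variation for invented results on a single coded chemical; real validation studies use more elaborate designs and analyses.

```python
# A minimal sketch, with invented laboratory results, of two descriptive
# statistics often reported in validation studies: within-laboratory
# (repeatability) and between-laboratory (reproducibility) variation.
import numpy as np

# Hypothetical EC50 estimates (mM) for one coded chemical, three runs per laboratory
results = {
    "Lab A": [0.21, 0.24, 0.22],
    "Lab B": [0.19, 0.18, 0.20],
    "Lab C": [0.26, 0.23, 0.25],
}

for lab, values in results.items():
    v = np.array(values)
    print(f"{lab}: mean {v.mean():.3f} mM, "
          f"within-lab CV {100 * v.std(ddof=1) / v.mean():.1f}%")

lab_means = np.array([np.mean(v) for v in results.values()])
print(f"Between-lab CV of laboratory means: "
      f"{100 * lab_means.std(ddof=1) / lab_means.mean():.1f}%")
```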

The definition of the validation process is currently a work in progress. Each new validation study will provide information useful to the design of the next study. International communication and cooperation are essential for the expeditious development of a widely acceptable series of protocols, particularly given the increased urgency imposed by the passage of the EC Cosmetics Directive. This legislation may indeed provide the needed impetus for a serious validation effort to be undertaken. It is only through completion of this process that the acceptance of in vitro methods by the various regulatory communities can commence.

Conclusion

This article has provided a broad overview of the current status of in vitro toxicity testing. The science of in vitro toxicology is relatively young, but it is growing exponentially. The challenge for the years ahead is to incorporate the mechanistic knowledge generated by cellular and molecular studies into the vast inventory of in vivo data to provide a more complete description of toxicological mechanisms as well as to establish a paradigm by which in vitro data may be used to predict toxicity in vivo. It will only be through the concerted efforts of toxicologists and government representatives that the inherent value of these in vitro methods can be realized.

 


Sunday, 16 January 2011 18:49

Genetic Toxicity Assessment

Genetic toxicity assessment is the evaluation of agents for their ability to induce any of three general types of changes (mutations) in the genetic material (DNA): gene, chromosomal and genomic. In organisms such as humans, the genes are composed of DNA, which consists of individual units called nucleotide bases. The genes are arranged in discrete physical structures called chromosomes. Genotoxicity can result in significant and irreversible effects upon human health. Genotoxic damage is a critical step in the induction of cancer and it can also be involved in the induction of birth defects and foetal death. The three classes of mutations mentioned above can occur within either of the two types of tissues possessed by organisms such as humans: sperm or eggs (germ cells) and the remaining tissue (somatic cells).

Assays that measure gene mutation are those that detect the substitution, addition or deletion of nucleotides within a gene. Assays that measure chromosomal mutation are those that detect breaks or chromosomal rearrangements involving one or more chromosomes. Assays that measure genomic mutation are those that detect changes in the number of chromosomes, a condition called aneuploidy. Genetic toxicity assessment has changed considerably since Hermann Muller developed the first assay to detect genotoxic (mutagenic) agents in 1927. Since then, more than 200 assays have been developed that measure mutations in DNA; however, fewer than ten assays are commonly used today for genetic toxicity assessment. This article reviews these assays, describes what they measure, and explores the role of these assays in toxicity assessment.

Identification of Cancer Hazards Prior to the Development of the Field of Genetic Toxicology

Genetic toxicology has become an integral part of the overall risk assessment process and has gained in stature in recent times as a reliable predictor for carcinogenic activity. However, prior to the development of genetic toxicology (before 1970), other methods were and are still being used to identify potential cancer hazards to humans. There are six major categories of methods currently used for identifying human cancer risks: epidemiological studies, long-term in vivo bioassays, mid-term in vivo bioassays, short-term in vivo and in vitro bioassays, artificial intelligence (structure-activity), and mechanism-based inference.

Table 1 gives advantages and disadvantages for these methods.

Table 1. Advantages and disadvantages of current methods for identifying human cancer risks

Method | Advantages | Disadvantages
Epidemiological studies | (1) humans are ultimate indicators of disease; (2) evaluate sensitive or susceptible populations; (3) occupational exposure cohorts; (4) environmental sentinel alerts | (1) generally retrospective (death certificates, recall biases, etc.); (2) insensitive, costly, lengthy; (3) reliable exposure data sometimes unavailable or difficult to obtain; (4) combined, multiple and complex exposures; lack of appropriate control cohorts; (5) experiments on humans not done; (6) cancer detection, not prevention
Long-term in vivo bioassays | (1) prospective and retrospective (validation) evaluations; (2) excellent correlation with identified human carcinogens; (3) exposure levels and conditions known; (4) identifies chemical toxicity and carcinogenicity effects; (5) results obtained relatively quickly; (6) qualitative comparisons among chemical classes; (7) integrative and interactive biologic systems related closely to humans | (1) rarely replicated, resource intensive; (2) limited facilities suitable for such experiments; (3) species extrapolation debate; (4) exposures used are often at levels far in excess of those experienced by humans; (5) single-chemical exposure does not mimic human exposures, which are generally to multiple chemicals simultaneously
Mid- and short-term in vivo and in vitro bioassays | (1) more rapid and less expensive than other assays; (2) large samples that are easily replicated; (3) biologically meaningful end points are measured (mutation, etc.); (4) can be used as screening assays to select chemicals for long-term bioassays | (1) in vitro not fully predictive of in vivo; (2) usually organism or organ specific; (3) potencies not comparable to whole animals or humans
Chemical structure–biological activity associations | (1) relatively easy, rapid and inexpensive; (2) reliable for certain chemical classes (e.g., nitrosamines and benzidine dyes); (3) developed from biological data but not dependent on additional biological experimentation | (1) not “biological”; (2) many exceptions to formulated rules; (3) retrospective and rarely (but becoming) prospective
Mechanism-based inferences | (1) reasonably accurate for certain classes of chemicals; (2) permits refinements of hypotheses; (3) can orient risk assessments to sensitive populations | (1) mechanisms of chemical carcinogenesis undefined, multiple and likely chemical or class specific; (2) may fail to highlight exceptions to general mechanisms

 

Rationale and Conceptual Basis for Genetic Toxicology Assays

Although the exact types and numbers of assays used for genetic toxicity assessment are constantly evolving and vary from country to country, the most common ones include assays for (1) gene mutation in bacteria and/or cultured mammalian cells and (2) chromosomal mutation in cultured mammalian cells and/or bone marrow within living mice. Some of the assays within this second category can also detect aneuploidy. Although these assays do not detect mutations in germ cells, they are used primarily because of the extra cost and complexity of performing germ-cell assays. Nonetheless, germ-cell assays in mice are used when information about germ-cell effects is desired.

Systematic studies over a 25-year period (1970-1995), especially at the US National Toxicology Program in North Carolina, have resulted in the use of a discrete number of assays for detecting the mutagenic activity of agents. The rationale for evaluating the usefulness of the assays was based on their ability to detect agents that cause cancer in rodents and that are suspected of causing cancer in humans (i.e., carcinogens). This is because studies during the past several decades have indicated that cancer cells contain mutations in certain genes and that many carcinogens are also mutagens. Thus, cancer cells are viewed as containing somatic-cell mutations, and carcinogenesis is viewed as a type of somatic-cell mutagenesis.

The genetic toxicity assays used most commonly today have been selected not only because of their large database, relatively low cost, and ease of performance, but because they have been shown to detect many rodent and, presumptively, human carcinogens. Consequently, genetic toxicity assays are used to predict the potential carcinogenicity of agents.

An important conceptual and practical development in the field of genetic toxicology was the recognition that many carcinogens were modified by enzymes within the body, creating altered forms (metabolites) that were frequently the ultimate carcinogenic and mutagenic form of the parent chemical. To duplicate this metabolism in a petri dish, Heinrich Malling showed that the inclusion of a preparation from rodent liver contained many of the enzymes necessary to perform this metabolic conversion or activation. Thus, many genetic toxicity assays performed in dishes or tubes (in vitro) employ the addition of similar enzyme preparations. Simple preparations are called S9 mix, and purified preparations are called microsomes. Some bacterial and mammalian cells have now been genetically engineered to contain some of the genes from rodents or humans that produce these enzymes, reducing the need to add S9 mix or microsomes.

Genetic Toxicology Assays and Techniques

The primary bacterial systems used for genetic toxicity screening are the Salmonella (Ames) mutagenicity assay and, to a much lesser extent, strain WP2 of Escherichia coli. Studies in the mid-1980s indicated that the use of only two strains of the Salmonella system (TA98 and TA100) was sufficient to detect approximately 90% of the known Salmonella mutagens. Thus, these two strains are used for most screening purposes; however, various other strains are available for more extensive testing.

These assays are performed in a variety of ways, but two general procedures are the plate-incorporation and liquid-suspension (liquid-incubation) assays. In the plate-incorporation assay, the cells, the test chemical and (when desired) the S9 are added together into a liquefied agar and poured onto the surface of an agar petri plate. The top agar hardens within a few minutes, and the plates are incubated for two to three days, after which time mutant cells have grown to form visually detectable clusters of cells called colonies, which are then counted. The agar medium contains selective agents or is composed of ingredients such that only the newly mutated cells will grow. The liquid-incubation assay is similar, except that the cells, test agent and S9 are incubated together in liquid that does not contain liquefied agar; the cells are then washed free of the test agent and S9 and seeded onto the agar.
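The sketch below illustrates, with invented colony counts, how plate-incorporation results are often summarized: mean revertants per plate at each dose and the fold increase over the solvent control. The two-fold threshold used here is a common rule of thumb for flagging a positive response, not a universal regulatory criterion.

```python
# A minimal sketch, using invented revertant colony counts, of summarizing
# Salmonella plate-incorporation results as fold increase over the control.
import numpy as np

solvent_control = np.array([22, 25, 20])   # revertant colonies, triplicate plates
doses_ug_per_plate = [10, 50, 250, 1000]
treated_counts = {
    10: [24, 27, 23],
    50: [41, 38, 45],
    250: [96, 104, 90],
    1000: [180, 165, 172],
}

control_mean = solvent_control.mean()
for dose in doses_ug_per_plate:
    mean_count = np.mean(treated_counts[dose])
    fold = mean_count / control_mean
    flag = "positive" if fold >= 2.0 else "negative"
    print(f"{dose:5d} ug/plate: {mean_count:6.1f} revertants, "
          f"{fold:4.1f}-fold over control -> {flag}")
```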

Mutations in cultured mammalian cells are detected primarily in one of two genes: hprt and tk. Similar to the bacterial assays, mammalian cell lines (developed from rodent or human cells) are exposed to the test agent in plastic culture dishes or tubes and then are seeded into culture dishes that contain medium with a selective agent that permits only mutant cells to grow. The assays used for this purpose include the CHO/HPRT, the TK6, and the mouse lymphoma L5178Y/TK+/- assays. Other cell lines containing various DNA repair mutations as well as containing some human genes involved in metabolism are also used. These systems permit the recovery of mutations within the gene (gene mutation) as well as mutations involving regions of the chromosome flanking the gene (chromosomal mutation). However, this latter type of mutation is recovered to a much greater extent by the tk gene systems than by the hprt gene systems due to the location of the tk gene.

Similar to the liquid-incubation assay for bacterial mutagenicity, mammalian cell mutagenicity assays generally involve the exposure of the cells in culture dishes or tubes in the presence of the test agent and S9 for several hours. The cells are then washed, cultured for several more days to allow the normal (wild-type) gene products to be degraded and the newly mutant gene products to be expressed and accumulate, and then they are seeded into medium containing a selective agent that permits only the mutant cells to grow. Like the bacterial assays, the mutant cells grow into visually detectable colonies that are then counted.
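For illustration, the following sketch shows the usual arithmetic for expressing results of such assays as a mutant frequency, in which the number of mutant colonies is corrected for the cloning efficiency of the same culture plated without selection; the numbers are hypothetical.

```python
# A minimal sketch, with assumed plating numbers, of the mutant-frequency
# calculation for mammalian cell gene mutation assays.
def mutant_frequency(mutant_colonies, cells_plated_selective,
                     colonies_nonselective, cells_plated_nonselective):
    cloning_efficiency = colonies_nonselective / cells_plated_nonselective
    return mutant_colonies / (cells_plated_selective * cloning_efficiency)

# Hypothetical example: 36 mutant colonies from 2 x 10^6 cells under selection;
# 180 colonies from 200 cells plated without selection (90% cloning efficiency).
mf = mutant_frequency(36, 2_000_000, 180, 200)
print(f"Mutant frequency = {mf:.2e} per viable cell")
```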

Chromosomal mutation is identified primarily by cytogenetic assays, which involve exposing rodents and/or rodent or human cells in culture dishes to a test chemical, allowing one or more cell divisions to occur, staining the chromosomes, and then visually examining the chromosomes through a microscope to detect alterations in the structure or number of chromosomes. Although a variety of endpoints can be examined, the two that are currently accepted by regulatory agencies as being the most meaningful are chromosomal aberrations and a subcategory called micronuclei.

Considerable training and expertise are required to score cells for the presence of chromosomal aberrations, making this a costly procedure in terms of time and money. In contrast, micronuclei require little training, and their detection can be automated. Micronuclei appear as small dots within the cell that are distinct from the nucleus, which contains the chromosomes. Micronuclei result from either chromosome breakage or from aneuploidy. Because of the ease of scoring micronuclei compared to chromosomal aberrations, and because recent studies indicate that agents that induce chromosomal aberrations in the bone marrow of living mice generally induce micronuclei in this tissue, micronuclei are now commonly measured as an indication of the ability of an agent to induce chromosomal mutation.
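A minimal sketch of how micronucleus scoring data might be summarized and compared with a concurrent control is given below; the counts are invented, and the use of Fisher's exact test is one reasonable choice rather than a required procedure.

```python
# A minimal sketch, with invented scoring data, of summarizing micronucleus
# frequencies and comparing a treated group with its concurrent control.
from scipy.stats import fisher_exact

cells_scored = 2000
control_mn = 10    # micronucleated cells in the vehicle control
treated_mn = 34    # micronucleated cells in the treated group

table = [[treated_mn, cells_scored - treated_mn],
         [control_mn, cells_scored - control_mn]]
odds_ratio, p_value = fisher_exact(table, alternative="greater")

print(f"Control: {1000 * control_mn / cells_scored:.1f} MN per 1000 cells")
print(f"Treated: {1000 * treated_mn / cells_scored:.1f} MN per 1000 cells")
print(f"One-sided Fisher exact p = {p_value:.4f}")
```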

Although germ-cell assays are used far less frequently than the other assays described above, they are indispensable in determining whether an agent poses a risk to the germ cells, mutations in which can lead to health effects in succeeding generations. The most commonly used germ-cell assays are in mice, and involve systems that detect (1) heritable translocations (exchanges) among chromosomes (heritable translocation assay), (2) gene or chromosomal mutations involving specific genes (visible or biochemical specific-locus assays), and (3) mutations that affect viability (dominant lethal assay). As with the somatic-cell assays, the working assumption with the germ-cell assays is that agents positive in these assays are presumed to be potential human germ-cell mutagens.

Current Status and Future Prospects

Recent studies have indicated that only three pieces of information are necessary to detect approximately 90% of a set of 41 rodent carcinogens (i.e., presumptive human carcinogens and somatic-cell mutagens). These are (1) knowledge of the chemical structure of the agent, especially whether it contains electrophilic moieties (see section on structure-activity relationships); (2) Salmonella mutagenicity data; and (3) data from a 90-day chronic toxicity assay in rodents (mice and rats). Indeed, essentially all of the IARC-declared human carcinogens are detectable as mutagens using just the Salmonella assay and the mouse bone marrow micronucleus assay. The use of these mutagenicity assays for detecting potential human carcinogens is supported further by the finding that most human carcinogens are carcinogenic in both rats and mice (trans-species carcinogens) and that most trans-species carcinogens are mutagenic in Salmonella and/or induce micronuclei in mouse bone marrow.

With advances in DNA technology, the human genome project and an improved understanding of the role of mutation in cancer, new genotoxicity assays are being developed that will likely be incorporated into standard screening procedures. Among these are the use of transgenic cells and rodents. Transgenic systems are those in which a gene from another species has been introduced into a cell or organism. For example, transgenic mice that permit the detection of mutation in any organ or tissue of the animal, based on the introduction of a bacterial gene into the mouse, are now in experimental use. Bacterial cells, such as Salmonella, and mammalian cells (including human cell lines) are now available that contain genes involved in the metabolism of carcinogenic/mutagenic agents, such as the P450 genes. Molecular analysis of the actual mutations induced in the transgene within transgenic rodents, within native genes such as hprt, or within the target genes of Salmonella can now be performed, so that the exact nature of the mutations induced by the chemicals can be determined, providing insights into the mechanism of action of the chemical and allowing comparisons with mutations in humans presumptively exposed to the agent.

Molecular advances in cytogenetics now permit more detailed evaluation of chromosomal mutations. These include the use of probes (small pieces of DNA) that attach (hybridize) to specific genes. Rearrangements of genes on the chromosome can then be revealed by the altered location of the probes, which are fluorescent and easily visualized as coloured sectors on the chromosomes. The single-cell gel electrophoresis assay for DNA breakage (commonly called the “comet” assay) permits the detection of DNA breaks within single cells and may become an extremely useful tool in combination with cytogenetic techniques for detecting chromosomal damage.

After many years of use and the generation of a large and systematically developed database, genetic toxicity assessment can now be done with just a few assays for relatively small cost in a short period of time (a few weeks). The data produced can be used to predict the ability of an agent to be a rodent and, presumptively, human carcinogen/somatic-cell mutagen. Such an ability makes it possible to limit the introduction into the environment of mutagenic and carcinogenic agents and to develop alternative, nonmutagenic agents. Future studies should lead to even better methods with greater predictivity than the current assays.

 


Sunday, 16 January 2011 18:45

Biomarkers

The word biomarker is short for biological marker, a term that refers to a measurable event occurring in a biological system, such as the human body. This event is then interpreted as a reflection, or marker, of a more general state of the organism or of life expectancy. In occupational health, a biomarker is generally used as an indicator of health status or disease risk.

Biomarkers are used for in vitro as well as in vivo studies that may include humans. Usually, three specific types of biological markers are identified. Although a few biomarkers may be difficult to classify, usually they are separated into biomarkers of exposure, biomarkers of effect or biomarkers of susceptibility (see table 1).

Table 1. Examples of biomarkers of exposure or biomarkers of effect that are used in toxicological studies in occupational health

Sample | Measurement | Purpose
Exposure biomarkers
Adipose tissue | Dioxin | Dioxin exposure
Blood | Lead | Lead exposure
Bone | Aluminium | Aluminium exposure
Exhaled breath | Toluene | Toluene exposure
Hair | Mercury | Methylmercury exposure
Serum | Benzene | Benzene exposure
Urine | Phenol | Benzene exposure
Effect biomarkers
Blood | Carboxyhaemoglobin | Carbon monoxide exposure
Red blood cells | Zinc protoporphyrin | Lead exposure
Serum | Cholinesterase | Organophosphate exposure
Urine | Microglobulins | Nephrotoxic exposure
White blood cells | DNA adducts | Mutagen exposure

 

Given an acceptable degree of validity, biomarkers may be employed for several purposes. On an individual basis, a biomarker may be used to support or refute a diagnosis of a particular type of poisoning or other chemically-induced adverse effect. In a healthy subject, a biomarker may also reflect individual hypersusceptibility to specific chemical exposures and may therefore serve as a basis for risk prediction and counselling. In groups of exposed workers, some exposure biomarkers can be applied to assess the extent of compliance with pollution abatement regulations or the effectiveness of preventive efforts in general.

Biomarkers of Exposure

An exposure biomarker may be an exogenous compound (or a metabolite) within the body, an interactive product between the compound (or metabolite) and an endogenous component, or another event related to the exposure. Most commonly, biomarkers of exposures to stable compounds, such as metals, comprise measurements of the metal concentrations in appropriate samples, such as blood, serum or urine. With volatile chemicals, their concentration in exhaled breath (after inhalation of contamination-free air) may be assessed. If the compound is metabolized in the body, one or more metabolites may be chosen as a biomarker of the exposure; metabolites are often determined in urine samples.

Modern methods of analysis may allow separation of isomers or congeners of organic compounds, and determination of the speciation of metal compounds or isotopic ratios of certain elements. Sophisticated analyses allow determination of changes in the structure of DNA or other macromolecules caused by binding with reactive chemicals. Such advanced techniques will no doubt gain considerably in importance for applications in biomarker studies, and lower detection limits and better analytical validity are likely to make these biomarkers even more useful.

Particularly promising developments have occurred with biomarkers of exposure to mutagenic chemicals. These compounds are reactive and may form adducts with macromolecules, such as proteins or DNA. DNA adducts may be detected in white blood cells or tissue biopsies, and specific DNA fragments may be excreted in the urine. For example, exposure to ethylene oxide results in reactions with DNA bases, and, after excision of the damaged base, N-7-(2-hydroxyethyl)guanine will be eliminated in the urine. Some adducts may not refer directly to a particular exposure. For example, 8-hydroxy-2´-deoxyguanosine reflects oxidative damage to DNA, and this reaction may be triggered by several chemical compounds, most of which also induce lipid peroxidation.

Other macromolecules may also be changed by adduct formation or oxidation. Of special interest, such reactive compounds may generate haemoglobin adducts that can be determined as biomarkers of exposure to the compounds. The advantage is that ample amounts of haemoglobin can be obtained from a blood sample, and, given the four-month lifetime of red blood cells, the adducts formed with the amino acids of the protein will indicate the total exposure during this period.

Adducts may be determined by sensitive techniques such as high-performance liquid chromatography, and some immunological methods are also available. In general, the analytical methods are new and expensive, and they need further development and validation. Better sensitivity can be obtained with the 32P post-labelling assay, which is a nonspecific indication that DNA damage has taken place. All of these techniques are potentially useful for biological monitoring and have been applied in a growing number of studies. However, simpler and more sensitive analytical methods are needed. Given the limited specificity of some methods at low-level exposures, tobacco smoking or other factors may impact significantly on the measurement results, thus causing difficulties in interpretation.

Exposure to mutagenic compounds, or to compounds which are metabolized into mutagens, may also be determined by assessing the mutagenicity of the urine from an exposed individual. The urine sample is incubated with a strain of bacteria in which a specific point mutation is expressed in a way that can be easily measured. If mutagenic chemicals are present in the urine sample, then an increased rate of mutations will occur in the bacteria.

Exposure biomarkers must be evaluated with regard to temporal variation in exposure and the relation to different compartments. Thus, the time frame(s) represented by the biomarker, that is, the extent to which the biomarker measurement reflects past exposure(s) and/or accumulated body burden, must be determined from toxicokinetic data in order to interpret the result. In particular, the degree to which the biomarker indicates retention in specific target organs should be considered. Although blood samples are often used for biomarker studies, peripheral blood is generally not regarded as a compartment as such, although it acts as a transport medium between compartments. The degree to which the concentration in the blood reflects levels in different organs varies widely between different chemicals, and usually also depends upon the length of the exposure as well as time since exposure.

Sometimes this type of evidence is used to classify a biomarker as an indicator of (total) absorbed dose or an indicator of effective dose (i.e., the amount that has reached the target tissue). For example, exposure to a particular solvent may be evaluated from data on the actual concentration of the solvent in the blood at a particular time following the exposure. This measurement will reflect the amount of the solvent that has been absorbed into the body. Some of the absorbed amount will be exhaled due to the vapour pressure of the solvent. While circulating in the blood, the solvent will interact with various components of the body, and it will eventually become subject to breakdown by enzymes. The outcome of the metabolic processes can be assessed by determining specific mercapturic acids produced by conjugation with glutathione. The cumulative excretion of mercapturic acids may better reflect the effective dose than will the blood concentration.
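The contrast between these two kinds of measurement can be illustrated with a deliberately simplified one-compartment model: the blood concentration of the solvent declines after exposure ends, while the cumulative excretion of the mercapturic acid metabolite approaches a fraction of the absorbed dose. All parameter values below are illustrative assumptions, not data for any particular solvent.

```python
# A minimal sketch, under simplifying one-compartment assumptions, contrasting
# a blood concentration measured at a given sampling time with the cumulative
# urinary excretion of a mercapturic acid metabolite.
import numpy as np

dose_mg_per_kg = 1.0             # absorbed dose (assumed)
vd_l_per_kg = 0.8                # apparent volume of distribution (assumed)
half_life_h = 4.0                # overall elimination half-life (assumed)
fraction_to_mercapturate = 0.3   # fraction eventually conjugated with glutathione (assumed)

k = np.log(2) / half_life_h
t = np.array([1, 4, 8, 24, 48], dtype=float)  # hours after the end of exposure

blood_conc = (dose_mg_per_kg / vd_l_per_kg) * np.exp(-k * t)                  # mg/l
cumulative_mercapturate = dose_mg_per_kg * fraction_to_mercapturate * (1 - np.exp(-k * t))

for ti, c, m in zip(t, blood_conc, cumulative_mercapturate):
    print(f"t = {ti:4.0f} h: blood {c:6.3f} mg/l, "
          f"cumulative mercapturic acid {m:5.3f} mg/kg body weight")
```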

Life events, such as reproduction and senescence, may affect the distribution of a chemical. The distribution of chemicals within the body is significantly affected by pregnancy, and many chemicals may pass the placental barrier, thus causing exposure of the foetus. Lactation may result in excretion of lipid-soluble chemicals, thus leading to a decreased retention in the mother along with an increased uptake by the infant. During weight loss or development of osteoporosis, stored chemicals may be released, which can then result in a renewed and protracted “endogenous” exposure of target organs. Other factors may affect individual absorption, metabolism, retention and distribution of chemical compounds, and some biomarkers of susceptibility are available (see below).

Biomarkers of Effect

A marker of effect may be an endogenous component, or a measure of the functional capacity, or some other indicator of the state or balance of the body or organ system, as affected by the exposure. Such effect markers are generally preclinical indicators of abnormalities.

These biomarkers may be specific or non-specific. The specific biomarkers are useful because they indicate a biological effect of a particular exposure, thus providing evidence that can potentially be used for preventive purposes. The non-specific biomarkers do not point to an individual cause of the effect, but they may reflect the total, integrated effect due to a mixed exposure. Both types of biomarkers may therefore be of considerable use in occupational health.

There is not a clear distinction between exposure biomarkers and effect biomarkers. For example, adduct formation could be said to reflect an effect rather than the exposure. However, effect biomarkers usually indicate changes in the functions of cells, tissues or the total body. Some researchers include gross changes, such as an increase in liver weight of exposed laboratory animals or decreased growth in children, as biomarkers of effect. For the purpose of occupational health, effect biomarkers should be restricted to those that indicate subclinical or reversible biochemical changes, such as inhibition of enzymes. The most frequently used effect biomarker is probably inhibition of cholinesterase caused by certain insecticides, that is, organophosphates and carbamates. In most cases, this effect is entirely reversible, and the enzyme inhibition reflects the total exposure to this particular group of insecticides.

Some exposures do not result in enzyme inhibition but rather in increased activity of an enzyme. This is the case with several enzymes that belong to the P450 family (see “Genetic determinants of toxic response”). They may be induced by exposures to certain solvents and polyaromatic hydrocarbons (PAHs). Since these enzymes are mainly expressed in tissues from which a biopsy may be difficult to obtain, the enzyme activity is determined indirectly in vivo by administering a compound that is metabolized by that particular enzyme, and then the breakdown product is measured in urine or plasma.

Other exposures may induce the synthesis of a protective protein in the body. The best example is probably metallothionein, which binds cadmium and promotes the excretion of this metal; cadmium exposure is one of the factors that result in increased expression of the metallothionein gene. Similar protective proteins may exist but have not yet been explored sufficiently to become accepted as biomarkers. Among the candidates for possible use as biomarkers are the so-called stress proteins, originally referred to as heat shock proteins. These proteins are generated by a range of different organisms in response to a variety of adverse exposures.

Oxidative damage may be assessed by determining the concentration of malondialdehyde in serum or the exhalation of ethane. Similarly, the urinary excretion of proteins with a small molecular weight, such as albumin, may be used as a biomarker of early kidney damage. Several parameters routinely used in clinical practice (for example, serum hormone or enzyme levels) may also be useful as biomarkers. However, many of these parameters may not be sufficiently sensitive to detect early impairment.

Another group of effect parameters relate to genotoxic effects (changes in the structure of chromosomes). Such effects may be detected by microscopy of white blood cells that undergo cell division. Serious damage to the chromosomes—chromosomal aberrations or formation of micronuclei—can be seen in a microscope. Damage may also be revealed by adding a dye to the cells during cell division. Exposure to a genotoxic agent can then be visualized as an increased exchange of the dye between the two chromatids of each chromosome (sister chromatid exchange). Chromosomal aberrations are related to an increased risk of developing cancer, but the significance of an increased rate of sister chromatid exchange is less clear.

More sophisticated assessment of genotoxicity is based on particular point mutations in somatic cells, that is, white blood cells or epithelial cells obtained from the oral mucosa. A mutation at a specific locus may make the cells capable of growing in a culture that contains a chemical that is otherwise toxic (such as 6-thioguanine). Alternatively, a specific gene product can be assessed (e.g., serum or tissue concentrations of oncoproteins encoded by particular oncogenes). Obviously, these mutations reflect the total genotoxic damage incurred and do not necessarily indicate anything about the causative exposure. These methods are not yet ready for practical use in occupational health, but rapid progress in this line of research would suggest that such methods will become available within a few years.

Biomarkers of Susceptibility

A marker of susceptibility, whether inherited or induced, is an indicator that the individual is particularly sensitive to the effect of a xenobiotic or to the effects of a group of such compounds. Most attention has been focused on genetic susceptibility, although other factors may be at least as important. Hypersusceptibility may be due to an inherited trait, the constitution of the individual, or environmental factors.

The ability to metabolize certain chemicals is variable and is genetically determined (see “Genetic determinants of toxic response”). Several relevant enzymes appear to be controlled by a single gene. For example, oxidation of foreign chemicals is mainly carried out by a family of enzymes belonging to the P450 family. Other enzymes make the metabolites more water soluble by conjugation (e.g., N-acetyltransferase and glutathione S-transferase). The activity of these enzymes is genetically controlled and varies considerably. As mentioned above, the activity can be determined by administering a small dose of a drug and then determining the amount of the metabolite in the urine. Some of the genes have now been characterized, and techniques are available to determine the genotype. Important studies suggest that the risk of developing certain forms of cancer is related to the capacity to metabolize foreign compounds. Many questions still remain unanswered, thus at this time limiting the use of these potential susceptibility biomarkers in occupational health.

Other inherited traits, such as alpha1-antitrypsin deficiency or glucose-6-phosphate dehydrogenase deficiency, also result in deficient defence mechanisms in the body, thereby causing hypersusceptibility to certain exposures.

Most research related to susceptibility has dealt with genetic predisposition. Other factors play a role as well and have been partly neglected. For example, individuals with a chronic disease may be more sensitive to an occupational exposure. Also, if a disease process or previous exposure to toxic chemicals has caused some subclinical organ damage, then the capacity to withstand a new toxic exposure is likely to be less. Biochemical indicators of organ function may in this case be used as susceptibility biomarkers. Perhaps the best example regarding hypersusceptibility relates to allergic responses. If an individual has become sensitized to a particular exposure, then specific antibodies can be detected in serum. Even if the individual has not become sensitized, other current or past exposures may add to the risk of developing an adverse effect related to an occupational exposure.

A major problem is to determine the joint effect of mixed exposures at work. In addition, personal habits and drug use may result in an increased susceptibility. For example, tobacco smoke usually contains a considerable amount of cadmium. Thus, with occupational exposure to cadmium, a heavy smoker who has accumulated substantial amounts of this metal in the body will be at increased risk of developing cadmium-related kidney disease.

Application in Occupational Health

Biomarkers are extremely useful in toxicological research, and many may be applicable in biological monitoring. Nonetheless, the limitations must also be recognized. Many biomarkers have so far been studied only in laboratory animals. Toxicokinetic patterns in other species may not necessarily reflect the situation in human beings, and extrapolation may require confirmatory studies in human volunteers. Also, account must be taken of individual variations due to genetic or constitutional factors.

In some cases, exposure biomarkers may not at all be feasible (e.g., for chemicals which are short-lived in vivo). Other chemicals may be stored in, or may affect, organs which cannot be accessed by routine procedures, such as the nervous system. The route of exposure may also affect the distribution pattern and therefore also the biomarker measurement and its interpretation. For example, direct exposure of the brain via the olfactory nerve is likely to escape detection by measurement of exposure biomarkers. As to effect biomarkers, many of them are not at all specific, and the change can be due to a variety of causes, including lifestyle factors. Perhaps in particular with the susceptibility biomarkers, interpretation must be very cautious at the moment, as many uncertainties remain about the overall health significance of individual genotypes.

In occupational health, the ideal biomarker should satisfy several requirements. First of all, sample collection and analysis must be simple and reliable. For optimal analytical quality, standardization is needed, but the specific requirements vary considerably. Major areas of concern include: preparation of the individual, sampling procedure and sample handling, and the measurement procedure; the latter encompasses technical factors, such as calibration and quality assurance procedures, and individual-related factors, such as the education and training of operators.

For documentation of analytical validity and traceability, reference materials should be based on relevant matrices and with appropriate concentrations of toxic substances or relevant metabolites at appropriate levels. For biomarkers to be used for biological monitoring or for diagnostic purposes, the responsible laboratories must have well-documented analytical procedures with defined performance characteristics, and accessible records to allow verification of the results. At the same time, nonetheless, the economics of characterizing and using reference materials to supplement quality assurance procedures in general must be considered. Thus, the achievable quality of results, and the uses to which they are put, have to be balanced against the added costs of quality assurance, including reference materials, manpower and instrumentation.

Another requirement is that the biomarker should be specific, at least under the circumstances of the study, for a particular type of exposure, with a clear-cut relationship to the degree of exposure. Otherwise, the result of the biomarker measurement may be too difficult to interpret. For proper interpretation of the measurement result of an exposure biomarker, the diagnostic validity must be known (i.e., the translation of the biomarker value into the magnitude of possible health risks). In this area, metals serve as a paradigm for biomarker research. Recent research has demonstrated the complexity and subtlety of dose-response relationships, with considerable difficulty in identifying no-effect levels and therefore also in defining tolerable exposures. However, this kind of research has also illustrated the types of investigation and the refinement that are necessary to uncover the relevant information. For most organic compounds, quantitative associations between exposures and the corresponding adverse health effects are not yet available; in many cases, even the primary target organs are not known for sure. In addition, evaluation of toxicity data and biomarker concentrations is often complicated by exposure to mixtures of substances, rather than exposure to a single compound at a time.

Before a biomarker is applied for occupational health purposes, some additional considerations are necessary. First, the biomarker must reflect only a subclinical and reversible change. Second, given that the biomarker results can be interpreted with regard to health risks, preventive efforts should be available and should be considered realistic in case the biomarker data suggest a need to reduce the exposure. Third, the practical use of the biomarker must be generally regarded as ethically acceptable.

Industrial hygiene measurements may be compared with applicable exposure limits. Likewise, results on exposure biomarkers or effect biomarkers may be compared to biological action limits, sometimes referred to as biological exposure indices. Such limits should be based on the best advice of clinicians and scientists from appropriate disciplines, and responsible administrators as “risk managers” should then take into account relevant ethical, social, cultural and economic factors. The scientific basis should, if possible, include dose-response relationships supplemented by information on variations in susceptibility within the population at risk. In some countries, workers and members of the general public are involved in the standard-setting process and provide important input, particularly when scientific uncertainty is considerable. One of the major uncertainties is how to define an adverse health effect that should be prevented—for example, whether adduct formation as an exposure biomarker by itself represents an adverse effect (i.e., effect biomarker) that should be prevented. Difficult questions are likely to arise when deciding whether it is ethically defensible, for the same compound, to have different limits for adventitious exposure, on the one hand, and occupational exposure, on the other.

The information generated by the use of biomarkers should generally be conveyed to the individuals examined within the physician-patient relationship. Ethical concerns must in particular be considered in connection with highly experimental biomarker analyses that cannot currently be interpreted in detail in terms of actual health risks. For the general population, for example, limited guidance exists at present with regard to interpretation of exposure biomarkers other than the blood-lead concentration. Also of importance is the confidence in the data generated (i.e., whether appropriate sampling has been done, and whether sound quality assurance procedures have been utilized in the laboratory involved). An additional area of special worry relates to individual hypersusceptibility. These issues must be taken into account when providing the feedback from the study.

All sectors of society affected by, or concerned with carrying out, a biomarker study need to be involved in the decision-making process on how to handle the information generated by the study. Specific procedures to prevent or overcome inevitable ethical conflicts should be developed within the legal and social frameworks of the region or country. However, each situation represents a different set of questions and pitfalls, and no single procedure for public involvement can be developed to cover all applications of exposure biomarkers.

 


Sunday, 16 January 2011 18:43

Target Organ Toxicology

The study and characterization of chemicals and other agents for toxic properties is often undertaken on the basis of specific organs and organ systems. In this chapter, two targets have been selected for in-depth discussion: the immune system and the gene. These examples were chosen to represent a complex target organ system and a molecular target within cells. For more comprehensive discussion of the toxicology of target organs, the reader is referred to standard toxicology texts such as Casarett and Doull, and Hayes. The International Programme on Chemical Safety (IPCS) has also published several criteria documents on target organ toxicology, by organ system.

Target organ toxicology studies are usually undertaken on the basis of information indicating the potential for specific toxic effects of a substance, either from epidemiological data or from general acute or chronic toxicity studies, or on the basis of special concerns to protect certain organ functions, such as reproduction or foetal development. In some cases, specific target organ toxicity tests are expressly mandated by statutory authorities, such as neurotoxicity testing under the US pesticides law (see “The United States approach to risk assessment of reproductive toxicants and neurotoxic agents”) and mutagenicity testing under the Japanese Chemical Substance Control Law (see “Principles of hazard identification: The Japanese approach”).

As discussed in “Target organ and critical effects,” the identification of a critical organ is based upon detection of the organ or organ system that first responds adversely, or that responds at the lowest doses or exposures. This information is then used to design specific toxicology investigations or more defined toxicity tests designed to elicit more sensitive indications of intoxication in the target organ. Target organ toxicology studies may also be used to determine mechanisms of action, which are of use in risk assessment (see “The United States approach to risk assessment of reproductive toxicants and neurotoxic agents”).

Methods of Target Organ Toxicity Studies

Target organs may be studied by exposure of intact organisms and detailed analysis of function and histopathology in the target organ, or by in vitro exposure of cells, tissue slices, or whole organs maintained in culture for short or long periods (see “Mechanisms of toxicology: Introduction and concepts”). In some cases, tissues from human subjects may also be available for target organ toxicity studies, and these may provide opportunities to validate assumptions of cross-species extrapolation. However, it must be kept in mind that such studies do not provide information on relative toxicokinetics.

In general, target organ toxicity studies share the following common characteristics: detailed histopathological examination of the target organ, including post mortem examination, tissue weight, and examination of fixed tissues; biochemical studies of critical pathways in the target organ, such as important enzyme systems; functional studies of the ability of the organ and cellular constituents to perform expected metabolic and other functions; and analysis of biomarkers of exposure and early effects in target organ cells.

Detailed knowledge of target organ physiology, biochemistry and molecular biology may be incorporated in target organ studies. For instance, because the synthesis and secretion of small-molecular-weight proteins is an important aspect of renal function, nephrotoxicity studies often include special attention to these parameters (IPCS 1991). Because cell-to-cell communication is a fundamental process of nervous system function, target organ studies in neurotoxicity may include detailed neurochemical and biophysical measurements of neurotransmitter synthesis, uptake, storage, release and receptor binding, as well as electrophysiological measurement of changes in membrane potential associated with these events.

A high degree of emphasis is being placed upon the development of in vitro methods for target organ toxicity, to replace or reduce the use of whole animals. Substantial advances in these methods have been achieved for reproductive toxicants (Heindel and Chapin 1993).

In summary, target organ toxicity studies are generally undertaken as a higher order test for determining toxicity. The selection of specific target organs for further evaluation depends upon the results of screening level tests, such as the acute or subchronic tests used by OECD and the European Union; some target organs and organ systems may be a priori candidates for special investigation because of concerns to prevent certain types of adverse health effects.

 


Sunday, 16 January 2011 18:35

Immunotoxicology

The functions of the immune system are to protect the body from invading infectious agents and to provide immune surveillance against arising tumour cells. It has a first line of defence that is non-specific and that can initiate effector reactions itself, and an acquired specific branch, in which lymphocytes and antibodies carry the specificity of recognition and subsequent reactivity towards the antigen.

Immunotoxicology has been defined as “the discipline concerned with the study of the events that can lead to undesired effects as a result of interaction of xenobiotics with the immune system. These undesired events may result as a consequence of (1) a direct and/or indirect effect of the xenobiotic (and/or its biotransformation product) on the immune system, or (2) an immunologically based host response to the compound and/or its metabolite(s), or host antigens modified by the compound or its metabolites” (Berlin et al. 1987).

When the immune system acts as a passive target of chemical insults, the result can be decreased resistance to infection and certain forms of neoplasia, or immune dysregulation/stimulation that can exacerbate allergy or autoimmunity. When the immune system responds to the antigenic specificity of the xenobiotic or of a host antigen modified by the compound, toxicity can become manifest as allergies or autoimmune diseases.

Animal models to investigate chemical-induced immune suppression have been developed, and a number of these methods are validated (Burleson, Munson, and Dean 1995; IPCS 1996). For testing purposes, a tiered approach is followed to make an adequate selection from the overwhelming number of assays available. Generally, the objective of the first tier is to identify potential immunotoxicants. If potential immunotoxicity is identified, a second tier of testing is performed to confirm and characterize further the changes observed. Third-tier investigations include special studies on the mechanism of action of the compound. Several xenobiotics have been identified as immunotoxicants causing immunosuppression in such studies with laboratory animals.

The database on immune function disturbances in humans by environmental chemicals is limited (Descotes 1986; NRC Subcommittee on Immunotoxicology 1992). The use of markers of immunotoxicity has received little attention in clinical and epidemiological studies to investigate the effect of these chemicals on human health. Such studies have not been performed frequently, and their interpretation often does not permit unequivocal conclusions to be drawn, due for instance to the uncontrolled nature of exposure. Therefore, at present, immunotoxicity assessment in rodents, with subsequent extrapolation to man, forms the basis of decisions regarding hazard and risk.

Hypersensitivity reactions, notably allergic asthma and contact dermatitis, are important occupational health problems in industrialized countries (Vos, Younes and Smith 1995). The phenomenon of contact sensitization was investigated first in the guinea pig (Andersen and Maibach 1985). Until recently this has been the species of choice for predictive testing. Many guinea pig test methods are available, the most frequently employed being the guinea pig maximization test and the occluded patch test of Buehler. Guinea pig tests and newer approaches developed in mice, such as ear swelling tests and the local lymph node assay, provide the toxicologist with the tools to assess skin sensitization hazard. The situation with respect to sensitization of the respiratory tract is very different. There are, as yet, no well-validated or widely accepted methods available for the identification of chemical respiratory allergens although progress in the development of animal models for the investigation of chemical respiratory allergy has been achieved in the guinea pig and mouse.

Human data show that chemical agents, in particular drugs, can cause autoimmune diseases (Kammüller, Bloksma and Seinen 1989). There are a number of experimental animal models of human autoimmune diseases. These comprise both spontaneous pathology (for example, systemic lupus erythematosus in New Zealand Black mice) and autoimmune phenomena induced by experimental immunization with a cross-reactive autoantigen (for example, H37Ra adjuvant-induced arthritis in Lewis strain rats). These models are applied in the preclinical evaluation of immunosuppressive drugs. Very few studies have addressed the potential of these models to assess whether a xenobiotic exacerbates induced or congenital autoimmunity. Animal models suitable for investigating the ability of chemicals to induce autoimmune diseases are virtually lacking. One model that is used to a limited extent is the popliteal lymph node assay in mice. As in humans, genetic factors play a crucial role in the development of autoimmune disease (AD) in laboratory animals, which will limit the predictive value of such tests.

The Immune System

The major function of the immune system is defence against bacteria, viruses, parasites, fungi and neoplastic cells. This is achieved by the actions of various cell types and their soluble mediators in a finely tuned concert. The host defence can be roughly divided into non-specific or innate resistance and specific or acquired immunity mediated by lymphocytes (Roitt, Brostoff and Male 1989).

Components of the immune system are present throughout the body (Jones et al. 1990). The lymphocyte compartment is found within lymphoid organs (figure 1). The bone marrow and thymus are classified as primary or central lymphoid organs; the secondary or peripheral lymphoid organs include lymph nodes, spleen and lymphoid tissue along secretory surfaces such as the gastrointestinal and respiratory tracts, the so-called mucosa-associated lymphoid tissue (MALT). About half of the body’s lymphocytes are located at any one time in MALT. In addition, the skin is an important organ for the induction of immune responses to antigens present on the skin. Important in this process are epidermal Langerhans cells, which have an antigen-presenting function.

Figure 1. Primary and secondary lymphoid organs and tissues


Phagocytic cells of the monocyte/macrophage lineage, called the mononuclear phagocyte system (MPS), occur in lymphoid organs and also at extranodal sites; the extranodal phagocytes include Kupffer cells in the liver, alveolar macrophages in the lung, mesangial macrophages in the kidney and glial cells in the brain. Polymorphonuclear leukocytes (PMNs) are present mainly in blood and bone marrow, but accumulate at sites of inflammation.

Non-specific defence

A first line of defence to micro-organisms is executed by a physical and chemical barrier, such as at the skin, the respiratory tract and the alimentary tract. This barrier is helped by non-specific protective mechanisms including phagocytic cells, such as macrophages and polymorphonuclear leukocytes, which are able to kill pathogens, and natural killer cells, which can lyse tumour cells and virus-infected cells. The complement system and certain microbial inhibitors (e.g., lysozyme) also take part in the non-specific response.

Specific immunity

After initial contact of the host with the pathogen, specific immune responses are induced. The hallmark of this second line of defence is specific recognition of determinants, so-called antigens or epitopes, of the pathogens by receptors on the cell surface of B- and T-lymphocytes. Following interaction with the specific antigen, the receptor-bearing cell is stimulated to undergo proliferation and differentiation, producing a clone of progeny cells that are specific for the eliciting antigen. The specific immune responses help the non-specific defence presented to the pathogens by stimulating the efficacy of the non-specific responses. A fundamental characteristic of specific immunity is that memory develops. Secondary contact with the same antigen provokes a faster and more vigorous but well-regulated response.

The genome does not have the capacity to carry the codes of an array of antigen receptors sufficient to recognize the number of antigens that can be encountered. The repertoire of specificity develops by a process of gene rearrangements. This is a random process, during which various specificities are brought about. This includes specificities for self components, which are undesirable. A selection process that takes place in the thymus (T cells) or bone marrow (B cells) operates to delete these undesirable specificities.

Normal immune effector function and homeostatic regulation of the immune response is dependent upon a variety of soluble products, known collectively as cytokines, which are synthesized and secreted by lymphocytes and by other cell types. Cytokines have pleiotropic effects on immune and inflammatory responses. Cooperation between different cell populations is required for the immune response—the regulation of antibody responses, the accumulation of immune cells and molecules at inflammatory sites, the initiation of acute phase responses, the control of macrophage cytotoxic function and many other processes central to host resistance. These are influenced by, and in many cases are dependent upon, cytokines acting individually or in concert.

Two arms of specific immunity are recognized—humoral immunity and cell-mediated or cellular immunity:

Humoral immunity. In the humoral arm B-lymphocytes are stimulated following recognition of antigen by cell-surface receptors. Antigen receptors on B-lymphocytes are immunoglobulins (Ig). Mature B cells (plasma cells) start the production of antigen-specific immunoglobulins that act as antibodies in serum or along mucosal surfaces. There are five major classes of immunoglobulins: (1) IgM, pentameric Ig with optimal agglutinating capacity, which is first produced after antigenic stimulation; (2) IgG, the main Ig in circulation, which can pass the placenta; (3) IgA, secretory Ig for the protection of mucosal surfaces; (4) IgE, Ig fixing to mast cells or basophilic granulocytes involved in immediate hypersensitivity reactions and (5) IgD, whose major function is as a receptor on B-lymphocytes.

Cell-mediated immunity. The cellular arm of the specific immune system is mediated by T-lymphocytes. These cells also have antigen receptors on their membranes. They recognize antigen if presented by antigen presenting cells in the context of histocompatibility antigens. Hence, these cells have a restriction in addition to the antigen specificity. T cells function as helper cells for various (including humoral) immune responses, mediate recruitment of inflammatory cells, and can, as cytotoxic T cells, kill target cells after antigen-specific recognition.

Mechanisms of Immunotoxicity

Immunosuppression

Effective host resistance is dependent upon the functional integrity of the immune system, which in turn requires that the component cells and molecules which orchestrate immune responses are available in sufficient numbers and in an operational form. Congenital immunodeficiencies in humans are often characterized by defects in certain stem cell lines, resulting in impaired or absent production of immune cells. By analogy with congenital and acquired human immunodeficiency diseases, chemical-induced immunosuppression may result simply from a reduced number of functional cells (IPCS 1996). The absence, or reduced numbers, of lymphocytes may have more or less profound effects on immune status. Some immunodeficiency states and severe immunosuppression, as can occur in transplantation or cytostatic therapy, have been associated in particular with increased incidences of opportunistic infections and of certain neoplastic diseases. The infections can be bacterial, viral, fungal or protozoan, and the predominant type of infection depends on the associated immunodeficiency. Exposure to immunosuppressive environmental chemicals may be expected to result in more subtle forms of immunosuppression, which may be difficult to detect. These may lead, for example, to an increased incidence of infections such as influenza or the common cold.

In view of the complexity of the immune system, with the wide variety of cells, mediators and functions that form a complicated and interactive network, immunotoxic compounds have numerous opportunities to exert an effect. Although the nature of the initial lesions induced by many immunotoxic chemicals has not yet been elucidated, there is increasing information available, mostly derived from studies in laboratory animals, regarding the immunobiological changes that result in depression of immune function (Dean et al. 1994). Toxic effects might occur at the following critical functions (and some examples are given of immunotoxic compounds affecting these functions):

  •  development and expansion of different stem cell populations (benzene exerts immunotoxic effects at the stem cell level, causing lymphocytopenia)
  •  proliferation of various lymphoid and myeloid cells as well as supportive tissues in which these cells mature and function (immunotoxic organotin compounds suppress the proliferative activity of lymphocytes in the thymic cortex through direct cytotoxicity; the thymotoxic action of 2,3,7,8-tetrachloro-dibenzo-p-dioxin (TCDD) and related compounds is likely due to an impaired function of thymic epithelial cells, rather than to direct toxicity for thymocytes)
  •  antigen uptake, processing and presentation by macrophages and other antigen-presenting cells (one of the targets of 7,12-dimethylbenz(a)anthracene (DMBA) and of lead is antigen presentation by macrophages; a target of ultraviolet radiation is the antigen-presenting Langerhans cell)
  •  regulatory function of T-helper and T-suppressor cells (T-helper cell function is impaired by organotins, aldicarb, polychlorinated biphenyls (PCBs), TCDD and DMBA; T-suppressor cell function is reduced by low-dose cyclophosphamide treatment)
  •  production of various cytokines or interleukins (benzo(a)pyrene (BP) suppresses interleukin-1 production; ultraviolet radiation alters production of cytokines by keratinocytes)
  •  synthesis of various classes of immunoglobulins (IgM and IgG synthesis is suppressed following PCB and tributyltin oxide (TBT) treatment, and increased after hexachlorobenzene (HCB) exposure)
  •  complement regulation and activation (affected by TCDD)
  •  cytotoxic T cell function (3-methylcholanthrene (3-MC), DMBA, and TCDD suppress cytotoxic T cell activity)
  •  natural killer (NK) cell function (pulmonary NK activity is suppressed by ozone; splenic NK activity is impaired by nickel)
  •  macrophage and polymorphonuclear leukocyte chemotaxis and cytotoxic functions (ozone and nitrogen dioxide impair the phagocytic activity of alveolar macrophages).

 

Allergy

Allergy may be defined as the adverse health effects which result from the induction and elicitation of specific immune responses. When hypersensitivity reactions occur without involvement of the immune system the term pseudo-allergy is used. In the context of immunotoxicology, allergy results from a specific immune response to chemicals and drugs that are of interest. The ability of a chemical to sensitize individuals is generally related to its ability to bind covalently to body proteins. Allergic reactions may take a variety of forms and these differ with respect to both the underlying immunological mechanisms and the speed of the reaction. Four major types of allergic reactions have been recognized: Type I hypersensitivity reactions, which are effectuated by IgE antibody and where symptoms are manifest within minutes of exposure of the sensitized individual. Type II hypersensitivity reactions result from the damage or destruction of host cells by antibody. In this case symptoms become apparent within hours. Type III hypersensitivity, or Arthus, reactions are also antibody mediated, but against soluble antigen, and result from the local or systemic action of immune complexes. Type IV, or delayed-type hypersensitivity, reactions are effected by T-lymphocytes, and symptoms normally develop 24 to 48 hours following exposure of the sensitized individual.

The two types of chemical allergy of greatest relevance to occupational health are contact sensitivity or skin allergy and allergy of the respiratory tract.

Contact hypersensitivity. A large number of chemicals are able to cause skin sensitization. Following topical exposure of a susceptible individual to a chemical allergen, a T-lymphocyte response is induced in the draining lymph nodes. In the skin the allergen interacts directly or indirectly with epidermal Langerhans cells, which transport the chemical to the lymph nodes and present it in an immunogenic form to responsive T-lymphocytes. Allergen-activated T-lymphocytes proliferate, resulting in clonal expansion. The individual is now sensitized and will respond to a second dermal exposure to the same chemical with a more aggressive immune response, resulting in allergic contact dermatitis. The cutaneous inflammatory reaction which characterizes allergic contact dermatitis is secondary to the recognition of the allergen in the skin by specific T-lymphocytes. These lymphocytes become activated, release cytokines and cause the local accumulation of other mononuclear leukocytes. Symptoms develop some 24 to 48 hours following exposure of the sensitized individual, and allergic contact dermatitis therefore represents a form of delayed-type hypersensitivity. Common causes of allergic contact dermatitis include organic chemicals (such as 2,4-dinitrochlorobenzene), metals (such as nickel and chromium) and plant products (such as urushiol from poison ivy).

Respiratory hypersensitivity. Respiratory hypersensitivity is usually considered to be a Type I hypersensitivity reaction. However, late phase reactions and the more chronic symptoms associated with asthma may involve cell-mediated (Type IV) immune processes. The acute symptoms associated with respiratory allergy are effected by IgE antibody, the production of which is provoked following exposure of the susceptible individual to the inducing chemical allergen. The IgE antibody distributes systemically and binds, via membrane receptors, to mast cells which are found in vascularized tissues, including the respiratory tract. Following inhalation of the same chemical a respiratory hypersensitivity reaction will be elicited. Allergen associates with protein and binds to, and cross-links, IgE antibody bound to mast cells. This in turn causes the degranulation of mast cells and the release of inflammatory mediators such as histamine and leukotrienes. Such mediators cause bronchoconstriction and vasodilation, resulting in the symptoms of respiratory allergy: asthma and/or rhinitis. Chemicals known to cause respiratory hypersensitivity in man include acid anhydrides (such as trimellitic anhydride), some diisocyanates (such as toluene diisocyanate), platinum salts and some reactive dyes. Also, chronic exposure to beryllium is known to cause hypersensitivity lung disease.

Autoimmunity

Autoimmunity can be defined as the stimulation of specific immune responses directed against endogenous “self” antigens. Induced autoimmunity can result either from alterations in the balance of regulatory T-lymphocytes or from the association of a xenobiotic with normal tissue components such as to render them immunogenic (“altered self”). Drugs and chemicals known to incidentally induce or exacerbate effects like those of autoimmune disease (AD) in susceptible individuals are low molecular weight compounds (molecular weight 100 to 500) that are generally considered not to be immunogenic themselves. The mechanism by which chemical exposure induces AD is largely unknown. Disease can be produced directly by means of circulating antibody, indirectly through the formation of immune complexes, or as a consequence of cell-mediated immunity, but likely occurs through a combination of mechanisms. The pathogenesis is best known in immune haemolytic disorders induced by drugs:

  •  The drug can attach to the red-cell membrane and interact with a drug-specific antibody.
  •  The drug can alter the red-cell membrane so that the immune system regards the cell as foreign.
  •  The drug and its specific antibody form immune complexes that adhere to the red-cell membrane to produce injury.
  •  Red-cell sensitization occurs due to the production of red-cell autoantibody.

 

A variety of chemicals and drugs, in particular the latter, have been found to induce autoimmune-like responses (Kammüller, Bloksma and Seinen 1989). Occupational exposure to chemicals may incidentally lead to AD-like syndromes. Exposure to monomeric vinyl chloride, trichloroethylene, perchloroethylene, epoxy resins and silica dust may induce scleroderma-like syndromes. A syndrome similar to systemic lupus erythematosus (SLE) has been described after exposure to hydrazine. Exposure to toluene diisocyanate has been associated with the induction of thrombocytopenic purpura. Heavy metals such as mercury have been implicated in some cases of immune complex glomerulonephritis.

Human Risk Assessment

The assessment of human immune status is performed mainly using peripheral blood for analysis of humoral substances like immunoglobulins and complement, and of blood leukocytes for subset composition and functionality of subpopulations. These methods are usually the same as those used to investigate humoral and cell-mediated immunity as well as nonspecific resistance of patients with suspected congenital immunodeficiency disease. For epidemiological studies (e.g., of occupationally exposed populations) parameters should be selected on the basis of their predictive value in human populations, validated animal models, and the underlying biology of the markers (see table 1). The strategy for screening for immunotoxic effects after (accidental) exposure to environmental pollutants or other toxicants depends largely on the circumstances, such as the type of immunodeficiency to be expected, the time between exposure and immune status assessment, the degree of exposure and the number of exposed individuals. The process of assessing the immunotoxic risk of a particular xenobiotic in humans is extremely difficult and often impossible, due largely to the presence of various confounding factors of endogenous or exogenous origin that influence the response of individuals to toxic damage. This is particularly true for studies which investigate the role of chemical exposure in autoimmune diseases, where genetic factors play a crucial role.

Table 1. Classification of tests for immune markers

Test category: Basic-general (should be included with general panels)
Characteristics: Indicators of general health and organ system status
Specific tests: Blood urea nitrogen, blood glucose, etc.

Test category: Basic-immune (should be included with general panels)
Characteristics: General indicators of immune status; relatively low cost; assay methods are standardized among laboratories; results outside reference ranges are clinically interpretable
Specific tests: Complete blood counts; serum IgG, IgA, IgM levels; surface marker phenotypes for major lymphocyte subsets

Test category: Focused/reflex (should be included when indicated by clinical findings, suspected exposures or prior test results)
Characteristics: Indicators of specific immune functions/events; cost varies; assay methods are standardized among laboratories; results outside reference ranges are clinically interpretable
Specific tests: Histocompatibility genotype; antibodies to infectious agents; total serum IgE; allergen-specific IgE; autoantibodies; skin tests for hypersensitivity; granulocyte oxidative burst; histopathology (tissue biopsy)

Test category: Research (should be included only with control populations and careful study design)
Characteristics: Indicators of general or specific immune functions/events; cost varies, often expensive; assay methods are usually not standardized among laboratories; results outside reference ranges are often not clinically interpretable
Specific tests: In vitro stimulation assays; cell activation surface markers; cytokine serum concentrations; clonality assays (antibody, cellular, genetic); cytotoxicity tests

 

As adequate human data are seldom available, the assessment of risk for chemical-induced immunosuppression in humans is in the majority of cases based upon animal studies. The identification of potential immunotoxic xenobiotics is undertaken primarily in controlled studies in rodents. In vivo exposure studies present, in this regard, the optimal approach to estimating the immunotoxic potential of a compound, owing to the multifactorial and complex nature of the immune system and of immune responses. In vitro studies are of increasing value in elucidating mechanisms of immunotoxicity. In addition, by investigating the effects of the compound using cells of animal and human origin, data can be generated for species comparison, which can be used in the “parallelogram” approach to improve the risk assessment process. If data are available for three cornerstones of the parallelogram (in vivo animal, plus in vitro animal and human), it may be easier to predict the outcome at the remaining cornerstone, that is, the risk in humans.
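
The parallelogram reasoning can be expressed as a simple proportionality. The sketch below, with entirely hypothetical effect values, assumes that the in vitro human-to-animal ratio also scales the in vivo animal result; it is a schematic illustration of the idea, not a validated extrapolation procedure.

```python
# Schematic sketch of the "parallelogram" extrapolation with hypothetical numbers.
# Known corners: in vivo animal, in vitro animal, in vitro human.
# Unknown corner: in vivo human, estimated here by assuming that the in vitro
# human-to-animal ratio also holds in vivo.


def parallelogram_estimate(in_vivo_animal, in_vitro_animal, in_vitro_human):
    """Estimate the in vivo human effect from the three known corners."""
    return in_vivo_animal * (in_vitro_human / in_vitro_animal)


# Hypothetical effect measures (e.g., per cent suppression of an antibody response)
estimate = parallelogram_estimate(in_vivo_animal=40.0,
                                  in_vitro_animal=25.0,
                                  in_vitro_human=10.0)
print(round(estimate, 1))  # -> 16.0
```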

When assessment of risk for chemical-induced immunosuppression has to rely solely upon data from animal studies, an approach can be followed in the extrapolation to man by the application of uncertainty factors to the no observed adverse effect level (NOAEL). This level can be based on parameters determined in relevant models, such as host resistance assays and in vivo assessment of hypersensitivity reactions and antibody production. Ideally, the relevance of this approach to risk assessment requires confirmation by studies in humans. Such studies should combine the identification and measurement of the toxicant, epidemiological data and immune status assessments.
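
Applying uncertainty factors to a NOAEL amounts to a simple division. The sketch below uses the commonly cited default factors of 10 for interspecies and 10 for interindividual variation; the NOAEL value and the choice of factors are illustrative assumptions only, not values taken from the text.

```python
# Minimal sketch (illustrative values only): deriving a guidance value by dividing
# a NOAEL from an animal immunotoxicity study by uncertainty factors.


def guidance_value(noael_mg_per_kg_day, interspecies=10.0, intraspecies=10.0):
    """Divide the NOAEL by the product of the uncertainty factors."""
    return noael_mg_per_kg_day / (interspecies * intraspecies)


# Hypothetical NOAEL of 5 mg/kg body weight per day from a rodent study
print(guidance_value(5.0))  # -> 0.05 (mg/kg body weight per day)
```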

To predict contact hypersensitivity, guinea pig models are available and have been used in risk assessment since the 1970s. Although sensitive and reproducible, these tests have limitations as they depend on subjective evaluation; this can be overcome by newer and more quantitative methods developed in the mouse. Regarding hypersensitivity induced by inhalation or ingestion of chemical allergens, tests should be developed and evaluated in terms of their predictive value in man. When it comes to setting safe occupational exposure levels for potential allergens, consideration has to be given to the biphasic nature of allergy: the sensitization phase and the elicitation phase. The concentration required to elicit an allergic reaction in a previously sensitized individual is considerably lower than the concentration necessary to induce sensitization in the immunologically naïve but susceptible individual.

As animal models to predict chemical-induced autoimmunity are virtually lacking, emphasis should be given to their development. This will require advancing our knowledge of chemical-induced autoimmunity in humans, including the study of genetic and immune system markers to identify susceptible individuals. Humans who are exposed to drugs that induce autoimmunity offer one such opportunity.

 


Sunday, 16 January 2011 16:34

Genetic Toxicology

Genetic toxicology, by definition, is the study of how chemical or physical agents affect the intricate process of heredity. Genotoxic chemicals are defined as compounds that are capable of modifying the hereditary material of living cells. The probability that a particular chemical will cause genetic damage inevitably depends on several variables, including the organism’s level of exposure to the chemical, the distribution and retention of the chemical once it enters the body, the efficiency of metabolic activation and/or detoxification systems in target tissues, and the reactivity of the chemical or its metabolites with critical macromolecules within cells. The probability that genetic damage will cause disease ultimately depends on the nature of the damage, the cell’s ability to repair or amplify genetic damage, the opportunity for expressing whatever alteration has been induced, and the ability of the body to recognize and suppress the multiplication of aberrant cells.

In higher organisms, hereditary information is organized in chromosomes. Chromosomes consist of tightly condensed strands of protein-associated DNA. Within a single chromosome, each DNA molecule exists as a pair of long, unbranched chains of nucleotide subunits linked together by phosphodiester bonds that join the 5′ carbon of one deoxyribose moiety to the 3′ carbon of the next (figure 1). In addition, one of four different nucleotide bases (adenine, cytosine, guanine or thymine) is attached to each deoxyribose subunit like beads on a string. Three-dimensionally, each pair of DNA strands forms a double helix with all of the bases oriented toward the inside of the spiral. Within the helix, each base is associated with its complementary base on the opposite DNA strand; hydrogen bonding dictates strong, noncovalent pairing of adenine with thymine and guanine with cytosine (figure 1). Since the sequence of nucleotide bases is complementary throughout the entire length of the duplex DNA molecule, both strands carry essentially the same genetic information. In fact, during DNA replication each strand serves as a template for the production of a new partner strand.

Figure 1. The (a) primary, (b) secondary and (c) tertiary organization of human hereditary information

Using RNA and an array of different proteins, the cell ultimately deciphers the information encoded by the linear sequence of bases within specific regions of DNA (genes) and produces proteins that are essential for basic cell survival as well as normal growth and differentiation. In essence, the nucleotides function like a biological alphabet which is used to code for amino acids, the building blocks of proteins.

When incorrect nucleotides are inserted, nucleotides are lost, or extra nucleotides are added during DNA synthesis, the mistake is called a mutation. It has been estimated that fewer than one mutation occurs for every 10⁹ nucleotides incorporated during the normal replication of cells. Although mutations are not necessarily harmful, alterations causing inactivation or overexpression of important genes can result in a variety of disorders, including cancer, hereditary disease, developmental abnormalities, infertility and embryonic or perinatal death. Very rarely, a mutation can lead to enhanced survival; such occurrences are the basis of natural selection.
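
As a rough order-of-magnitude check, the quoted error rate implies only a handful of new mutations per genome replication. The short calculation below assumes a diploid human genome of about 6 × 10⁹ nucleotide pairs, a figure that is an assumption not given in the text.

```python
# Back-of-the-envelope check, assuming a diploid human genome of about 6e9
# nucleotide pairs: with fewer than one mutation per 1e9 nucleotides incorporated,
# a single genome replication introduces at most a few new mutations.

error_rate = 1e-9          # upper bound: < 1 mutation per 1e9 nucleotides incorporated
genome_nucleotides = 6e9   # assumed approximate size of the diploid human genome

print(error_rate * genome_nucleotides)  # -> 6.0 mutations per replication, at most
```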

Although some chemicals react directly with DNA, most require metabolic activation. In the latter case, electrophilic intermediates such as epoxides or carbonium ions are ultimately responsible for inducing lesions at a variety of nucleophilic sites within the genetic material (figure 2). In other instances, genotoxicity is mediated by by-products of compound interaction with intracellular lipids, proteins, or oxygen.

Figure 2. Bioactivation of: a) benzo(a)pyrene; and b) N-nitrosodimethylamine


Because of their relative abundance in cells, proteins are the most frequent target of toxicant interaction. However, modification of DNA is of greater concern due to the central role of this molecule in regulating growth and differentiation through multiple generations of cells.

At the molecular level, electrophilic compounds tend to attack oxygen and nitrogen in DNA. The sites that are most prone to modification are illustrated in figure 3. Although oxygens within phosphate groups in the DNA backbone are also targets for chemical modification, damage to bases is thought to be biologically more relevant since these groups are considered to be the primary informational elements in the DNA molecule.

Figure 3. Primary sites of chemically-induced DNA damage


Compounds that contain one electrophilic moiety typically exert genotoxicity by producing mono-adducts in DNA. Similarly, compounds that contain two or more reactive moieties can react with two different nucleophilic centres and thereby produce intra- or inter-molecular crosslinks in genetic material (figure 4). Interstrand DNA-DNA and DNA-protein crosslinks can be particularly cytotoxic since they can form complete blocks to DNA replication. For obvious reasons, the death of a cell eliminates the possibility that it will be mutated or neoplastically transformed. Genotoxic agents can also act by inducing breaks in the phosphodiester backbone, or between bases and sugars (producing abasic sites) in DNA. Such breaks may be a direct result of chemical reactivity at the damage site, or may occur during the repair of one of the aforementioned types of DNA lesion.

Figure 4. Various types of damage to the protein-DNA complex


Over the past thirty to forty years, a variety of techniques have been developed to monitor the type of genetic damage induced by various chemicals. Such assays are described in detail elsewhere in this chapter and Encyclopaedia.

Misreplication of “microlesions” such as mono-adducts, abasic sites or single-strand breaks may ultimately result in nucleotide base-pair substitutions, or the insertion or deletion of short polynucleotide fragments in chromosomal DNA. In contrast, “macrolesions,” such as bulky adducts, crosslinks, or double-strand breaks may trigger the gain, loss or rearrangement of relatively large pieces of chromosomes. In any case, the consequences can be devastating to the organism since any one of these events can lead to cell death, loss of function or malignant transformation of cells. Exactly how DNA damage causes cancer is largely unknown. It is currently believed the process may involve inappropriate activation of proto-oncogenes such as myc and ras, and/or inactivation of recently identified tumour suppressor genes such as p53. Abnormal expression of either type of gene abrogates normal cellular mechanisms for controlling cell proliferation and/or differentiation.

The preponderance of experimental evidence indicates that the development of cancer following exposure to electrophilic compounds is a relatively rare event. This can be explained, in part, by the cell’s intrinsic ability to recognize and repair damaged DNA, or by the failure of cells with damaged DNA to survive. During repair, the damaged base, nucleotide or short stretch of nucleotides surrounding the damage site is removed and (using the opposite strand as a template) a new piece of DNA is synthesized and spliced into place. To be effective, DNA repair must occur with great accuracy prior to cell division, before there is an opportunity for the mutation to propagate.

Clinical studies have shown that people with inherited defects in the ability to repair damaged DNA frequently develop cancer and/or developmental abnormalities at an early age (table 1). Such examples provide strong evidence linking accumulation of DNA damage to human disease. Similarly, agents that promote cell proliferation (such as tetradecanoylphorbol acetate) often enhance carcinogenesis. For these compounds, the increased likelihood of neoplastic transformation may be a direct consequence of a decrease in the time available for the cell to carry out adequate DNA repair.

Table 1. Hereditary, cancer-prone disorders that appear to involve defects in DNA repair

Syndrome: Ataxia telangiectasia
Symptoms: Neurological deterioration; immunodeficiency; high incidence of lymphoma
Cellular phenotype: Hypersensitivity to ionizing radiation and certain alkylating agents; dysregulated replication of damaged DNA (may indicate shortened time for DNA repair)

Syndrome: Bloom’s syndrome
Symptoms: Developmental abnormalities; lesions on exposed skin; high incidence of tumours of the immune system and gastrointestinal tract
Cellular phenotype: High frequency of chromosomal aberrations; defective ligation of breaks associated with DNA repair

Syndrome: Fanconi’s anaemia
Symptoms: Growth retardation; high incidence of leukaemia
Cellular phenotype: Hypersensitivity to crosslinking agents; high frequency of chromosomal aberrations; defective repair of crosslinks in DNA

Syndrome: Hereditary nonpolyposis colon cancer
Symptoms: High incidence of colon cancer
Cellular phenotype: Defect in DNA mismatch repair (when insertion of a wrong nucleotide occurs during replication)

Syndrome: Xeroderma pigmentosum
Symptoms: High incidence of epithelioma on exposed areas of skin; neurological impairment (in many cases)
Cellular phenotype: Hypersensitivity to UV light and many chemical carcinogens; defects in excision repair and/or replication of damaged DNA

 

The earliest theories on how chemicals interact with DNA can be traced back to studies conducted during the development of mustard gas for use in warfare. Further understanding grew out of efforts to identify anticancer agents that would selectively arrest the replication of rapidly dividing tumour cells. Increased public concern over hazards in our environment has prompted additional research into the mechanisms and consequences of chemical interaction with the genetic material. Examples of various types of chemicals which exert genotoxicity are presented in table 2.

Table 2. Examples of chemicals that exhibit genotoxicity in human cells

Class of chemical: Aflatoxins; example: aflatoxin B1; source of exposure: contaminated food; probable genotoxic lesion: bulky DNA adducts
Class of chemical: Aromatic amines; example: 2-acetylaminofluorene; source of exposure: environmental; probable genotoxic lesion: bulky DNA adducts
Class of chemical: Aziridine quinones; example: mitomycin C; source of exposure: cancer chemotherapy; probable genotoxic lesions: mono-adducts, interstrand crosslinks and single-strand breaks in DNA
Class of chemical: Chlorinated hydrocarbons; example: vinyl chloride; source of exposure: environmental; probable genotoxic lesion: mono-adducts in DNA
Class of chemical: Metals and metal compounds; example: cisplatin; source of exposure: cancer chemotherapy; probable genotoxic lesions: both intra- and inter-strand crosslinks in DNA
Class of chemical: Metals and metal compounds; example: nickel compounds; source of exposure: environmental; probable genotoxic lesions: mono-adducts and single-strand breaks in DNA
Class of chemical: Nitrogen mustards; example: cyclophosphamide; source of exposure: cancer chemotherapy; probable genotoxic lesions: mono-adducts and interstrand crosslinks in DNA
Class of chemical: Nitrosamines; example: N-nitrosodimethylamine; source of exposure: contaminated food; probable genotoxic lesion: mono-adducts in DNA
Class of chemical: Polycyclic aromatic hydrocarbons; example: benzo(a)pyrene; source of exposure: environmental; probable genotoxic lesion: bulky DNA adducts

 
