33. Toxicology
Chapter Editor: Ellen K. Silbergeld
Introduction
Ellen K. Silbergeld, Chapter Editor
Definitions and Concepts
Bo Holmberg, Johan Hogberg and Gunnar Johanson
Toxicokinetics
Dušan Djuríc
Target Organ And Critical Effects
Marek Jakubowski
Effects Of Age, Sex And Other Factors
Spomenka Telišman
Genetic Determinants Of Toxic Response
Daniel W. Nebert and Ross A. McKinnon
Introduction And Concepts
Philip G. Watanabe
Cellular Injury And Cellular Death
Benjamin F. Trump and Irene K. Berezesky
Genetic Toxicology
R. Rita Misra and Michael P. Waalkes
Immunotoxicology
Joseph G. Vos and Henk van Loveren
Target Organ Toxicology
Ellen K. Silbergeld
Biomarkers
Philippe Grandjean
Genetic Toxicity Assessment
David M. DeMarini and James Huff
In Vitro Toxicity Testing
Joanne Zurlo
Structure Activity Relationships
Ellen K. Silbergeld
Toxicology In Health And Safety Regulation
Ellen K. Silbergeld
Principles Of Hazard Identification - The Japanese Approach
Masayuki Ikeda
The United States Approach to Risk Assessment Of Reproductive Toxicants and Neurotoxic Agents
Ellen K. Silbergeld
Approaches To Hazard Identification - IARC
Harri Vainio and Julian Wilbourn
Appendix - Overall Evaluations of Carcinogenicity to Humans: IARC Monographs Volumes 1-69 (836)
Carcinogen Risk Assessment: Other Approaches
Cees A. van der Heijden
Exposure, Dose and Response
Toxicity is the intrinsic capacity of a chemical agent to affect an organism adversely.
Xenobiotics is a term for “foreign substances”, that is, foreign to the organism. Its opposite is endogenous compounds. Xenobiotics include drugs, industrial chemicals, naturally occurring poisons and environmental pollutants.
Hazard is the potential for the toxicity to be realized in a specific setting or situation.
Risk is the probability of a specific adverse effect to occur. It is often expressed as the percentage of cases in a given population and during a specific time period. A risk estimate can be based upon actual cases or a projection of future cases, based upon extrapolations.
Toxicity rating and toxicity classification can be used for regulatory purposes. Toxicity rating is an arbitrary grading of doses or exposure levels causing toxic effects. The grading can be “supertoxic,” “highly toxic,” “moderately toxic” and so on. The most common ratings concern acute toxicity. Toxicity classification concerns the grouping of chemicals into general categories according to their most important toxic effect. Such categories can include allergenic, neurotoxic, carcinogenic and so on. This classification can be of administrative value as a warning and as information.
The dose-effect relationship is the relationship between dose and effect on the individual level. An increase in dose may increase the intensity of an effect, or a more severe effect may result. A dose-effect curve may be obtained at the level of the whole organism, the cell or the target molecule. Some toxic effects, such as death or cancer, are not graded but are “all or none” effects.
The dose-response relationship is the relationship between dose and the percentage of individuals showing a specific effect. With increasing dose a greater number of individuals in the exposed population will usually be affected.
It is essential to toxicology to establish dose-effect and dose-response relationships. In medical (epidemiological) studies a criterion often used for accepting a causal relationship between an agent and a disease is that effect or response is proportional to dose.
Several dose-response curves can be drawn for a chemical—one for each type of effect. The dose-response curve for most toxic effects (when studied in large populations) has a sigmoid shape. There is usually a low-dose range where there is no response detected; as dose increases, the response follows an ascending curve that will usually reach a plateau at a 100% response. The dose-response curve reflects the variations among individuals in a population. The slope of the curve varies from chemical to chemical and between different types of effects. For some chemicals with specific effects (carcinogens, initiators, mutagens) the dose-response curve might be linear from dose zero within a certain dose range. This means that no threshold exists and that even small doses represent a risk. Above that dose range, the risk may increase at greater than a linear rate.
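To make the sigmoid shape concrete, the fraction of responders can be approximated with a Hill-type function. In the Python sketch below, the ED50 of 10 and the slope parameter of 2 are purely illustrative assumptions, not measured values:

```python
def response_fraction(dose, ed50=10.0, hill=2.0):
    """Hill-type approximation of a sigmoid dose-response curve.

    ed50 (dose giving 50% response) and hill (steepness) are
    illustrative assumptions, not measured values.
    """
    return dose**hill / (dose**hill + ed50**hill)

for d in (0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0):  # arbitrary dose units
    print(f"dose {d:6.1f} -> responders {response_fraction(d):6.1%}")
```

Printed over increasing doses, the response stays near zero at low doses, rises steeply around the ED50, and plateaus near 100%, reproducing the sigmoid behaviour described above.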
Variation in exposure during the day and the total length of exposure during one’s lifetime may be as important for the outcome (response) as mean or average or even integrated dose level. High peak exposures may be more harmful than a more even exposure level. This is the case for some organic solvents. On the other hand, for some carcinogens, it has been experimentally shown that the fractionation of a single dose into several exposures with the same total dose may be more effective in producing tumours.
A dose is often expressed as the amount of a xenobiotic entering an organism (in units such as mg/kg body weight). The dose may be expressed in different (more or less informative) ways: exposure dose, which is the air concentration of pollutant inhaled during a certain time period (in work hygiene usually eight hours), or the retained or absorbed dose (in industrial hygiene also called the body burden), which is the amount present in the body at a certain time during or after exposure. The tissue dose is the amount of substance in a specific tissue and the target dose is the amount of substance (usually a metabolite) bound to the critical molecule. The target dose can be expressed as mg chemical bound per mg of a specific macromolecule in the tissue. To apply this concept, information on the mechanism of toxic action on the molecular level is needed. The target dose is more exactly associated with the toxic effect. The exposure dose or body burden may be more easily available, but these are less precisely related to the effect.
In the dose concept a time aspect is often included, even if it is not always expressed. The theoretical dose according to Haber’s law is D = ct, where D is dose, c is concentration of the xenobiotic in the air and t the duration of exposure to the chemical. If this concept is used at the target organ or molecular level, the amount per mg tissue or molecule over a certain time may be used. The time aspect is usually more important for understanding repeated exposures and chronic effects than for single exposures and acute effects.
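Written out, with hypothetical numbers:

```latex
D = c\,t,
\qquad
50\ \mathrm{ppm} \times 8\ \mathrm{h} \;=\; 100\ \mathrm{ppm} \times 4\ \mathrm{h} \;=\; 400\ \mathrm{ppm\cdot h}
```

As the preceding paragraph notes, two exposures with equal c × t products need not produce equal effects; Haber’s law is only a first approximation.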
Additive effects occur as a result of exposure to a combination of chemicals, where the individual toxicities are simply added to each other (1 + 1 = 2). When chemicals act via the same mechanism, additivity of their effects is assumed, although this is not always the case in reality. Interaction between chemicals may result in inhibition (antagonism), with a smaller effect than that expected from addition of the effects of the individual chemicals (1 + 1 < 2). Alternatively, a combination of chemicals may produce a more pronounced effect than would be expected by addition (an increased response among individuals or an increase in the frequency of response in a population); this is called synergism (1 + 1 > 2).
Latency time is the time between first exposure and the appearance of a detectable effect or response. The term is often used for carcinogenic effects, where tumours may appear a long time after the start of exposure and sometimes long after the cessation of exposure.
A dose threshold is a dose level below which no observable effect occurs. Thresholds are thought to exist for certain effects, like acute toxic effects; but not for others, like carcinogenic effects (by DNA-adduct-forming initiators). The mere absence of a response in a given population should not, however, be taken as evidence for the existence of a threshold. Absence of response could be due to simple statistical phenomena: an adverse effect occurring at low frequency may not be detectable in a small population.
LD50 (lethal dose, 50%) is the dose causing 50% lethality in an animal population. The LD50 is often given in older literature as a measure of acute toxicity of chemicals. The higher the LD50, the lower the acute toxicity. A highly toxic chemical (with a low LD50) is said to be potent. There is no necessary correlation between acute and chronic toxicity. ED50 (effective dose) is the dose causing a specific effect other than lethality in 50% of the animals.
NOEL (NOAEL) means the no observed (adverse) effect level, or the highest dose that does not cause a toxic effect. To establish a NOEL requires multiple doses, a large population and additional information to make sure that absence of a response is not merely a statistical phenomenon. LOEL is the lowest observed effect level on a dose-response curve, or the lowest dose that causes an effect.
A safety factor is a formal, arbitrary number with which one divides the NOEL or LOEL derived from animal experiments to obtain a tentative permissible dose for humans. This is often used in the area of food toxicology, but may be used also in occupational toxicology. A safety factor may also be used for extrapolation of data from small populations to larger populations. Safety factors range from 10⁰ to 10³ (that is, from 1 to 1,000). A safety factor of two may typically be sufficient to protect from a less serious effect (such as irritation) and a factor as large as 1,000 may be used for very serious effects (such as cancer). The term safety factor could be better replaced by the term protection factor or, even, uncertainty factor. The use of the latter term reflects scientific uncertainties, such as whether exact dose-response data can be translated from animals to humans for the particular chemical, toxic effect or exposure situation.
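Schematically, with hypothetical numbers (a combined factor of 100 is commonly read as 10 for interspecies extrapolation times 10 for human variability):

```latex
\text{Permissible dose} \;=\; \frac{\mathrm{NOEL}}{\text{safety factor}},
\qquad
\frac{10\ \mathrm{mg/(kg\cdot day)}}{10 \times 10} \;=\; 0.1\ \mathrm{mg/(kg\cdot day)}
```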
Extrapolations are theoretical qualitative or quantitative estimates of toxicity (risk extrapolations) derived from translation of data from one species to another or from one set of dose-response data (typically in the high dose range) to regions of dose-response where no data exist. Extrapolations usually must be made to predict toxic responses outside the observation range. Mathematical modelling is used for extrapolations based upon an understanding of the behaviour of the chemical in the organism (toxicokinetic modelling) or based upon the understanding of statistical probabilities that specific biological events will occur (biologically or mechanistically based models). Some national agencies have developed sophisticated extrapolation models as a formalized method to predict risks for regulatory purposes. (See discussion of risk assessment later in the chapter.)
Systemic effects are toxic effects in tissues distant from the route of absorption.
Target organ is the primary or most sensitive organ affected after exposure. The same chemical entering the body by different routes of exposure, at different doses and dose rates, and in different sexes and species, may affect different target organs. Interaction between chemicals, or between chemicals and other factors, may affect different target organs as well.
Acute effects occur after limited exposure and shortly (hours, days) after exposure and may be reversible or irreversible.
Chronic effects occur after prolonged exposure (months, years, decades) and/or persist after exposure has ceased.
Acute exposure is an exposure of short duration, while chronic exposure is long-term (sometimes life-long) exposure.
Tolerance to a chemical may occur when repeated exposures result in a lower response than would have been expected without pretreatment.
Uptake and Disposition
Transport processes
Diffusion. In order to enter the organism and reach a site where damage is produced, a foreign substance has to pass several barriers, including cells and their membranes. Most toxic substances pass through membranes passively by diffusion. This may occur for small water-soluble molecules by passage through aqueous channels or, for fat-soluble ones, by dissolution into and diffusion through the lipid part of the membrane. Ethanol, a small molecule that is both water and fat soluble, diffuses rapidly through cell membranes.
Diffusion of weak acids and bases. Weak acids and bases may readily pass membranes in their non-ionized, fat-soluble form while ionized forms are too polar to pass. The degree of ionization of these substances depends on pH. If a pH gradient exists across a membrane they will therefore accumulate on one side. The urinary excretion of weak acids and bases is highly dependent on urinary pH. Foetal or embryonic pH is somewhat higher than maternal pH, causing a slight accumulation of weak acids in the foetus or embryo.
Facilitated diffusion. The passage of a substance may be facilitated by carriers in the membrane. Facilitated diffusion is similar to enzyme processes in that it is protein mediated, highly selective, and saturable. Other substances may inhibit the facilitated transport of xenobiotics.
Active transport. Some substances are actively transported across cell membranes. This transport is mediated by carrier proteins in a process analogous to that of enzymes. Active transport is similar to facilitated diffusion, but it may occur against a concentration gradient. It requires energy input and a metabolic inhibitor can block the process. Most environmental pollutants are not transported actively. One exception is the active tubular secretion and reabsorption of acid metabolites in the kidneys.
Phagocytosis is a process where specialized cells such as macrophages engulf particles for subsequent digestion. This transport process is important, for example, for the removal of particles in the alveoli.
Bulk flow. Substances are also transported in the body along with the movement of air in the respiratory system during breathing, and the movements of blood, lymph or urine.
Filtration. Due to hydrostatic or osmotic pressure water flows in bulk through pores in the endothelium. Any solute that is small enough will be filtered together with the water. Filtration occurs to some extent in the capillary bed in all tissues but is particularly important in the formation of primary urine in the kidney glomeruli.
Absorption
Absorption is the uptake of a substance from the environment into the organism. The term usually includes not only the entrance into the barrier tissue but also the further transport into circulating blood.
Pulmonary absorption. The lungs are the primary route of deposition and absorption of small airborne particles, gases, vapours and aerosols. For highly water-soluble gases and vapours a significant part of the uptake occurs in the nose and the respiratory tree, but for less soluble substances it primarily takes place in the lung alveoli. The alveoli have a very large surface area (about 100 m² in humans). In addition, the diffusion barrier is extremely small, with only two thin cell layers and a distance on the order of micrometres from alveolar air to the systemic blood circulation. This makes the lungs very efficient not only in the exchange of oxygen and carbon dioxide but also of other gases and vapours. In general, the diffusion across the alveolar wall is so rapid that it does not limit the uptake. The absorption rate is instead dependent on flow (pulmonary ventilation, cardiac output) and solubility (blood:air partition coefficient). Another important factor is metabolic elimination. The relative importance of these factors for pulmonary absorption varies greatly for different substances. Physical activity results in increased pulmonary ventilation and cardiac output, and decreased liver blood flow (and, hence, biotransformation rate). For many inhaled substances this leads to a marked increase in pulmonary absorption.
Percutaneous absorption. The skin is a very efficient barrier. Apart from its thermoregulatory role, it is designed to protect the organism from micro-organisms, ultraviolet radiation and other deleterious agents, and also against excessive water loss. The diffusion distance in the dermis is on the order of tenths of millimetres. In addition, the keratin layer has a very high resistance to diffusion for most substances. Nevertheless, significant dermal absorption resulting in toxicity may occur for some substances—highly toxic, fat-soluble substances such as organophosphorous insecticides and organic solvents, for example. Significant absorption is likely to occur after exposure to liquid substances. Percutaneous absorption of vapour may be important for solvents with very low vapour pressure and high affinity to water and skin.
Gastrointestinal absorption occurs after accidental or intentional ingestion. Larger particles originally inhaled and deposited in the respiratory tract may be swallowed after mucociliary transport to the pharynx. Practically all soluble substances are efficiently absorbed in the gastrointestinal tract. The low pH of the gut may facilitate absorption, for instance, of metals.
Other routes. In toxicity testing and other experiments, special routes of administration are often used for convenience, although these are rare and usually not relevant in the occupational setting. These routes include intravenous (IV), subcutaneous (sc), intraperitoneal (ip) and intramuscular (im) injections. In general, substances are absorbed at a higher rate and more completely by these routes, especially after IV injection. This leads to short-lasting but high concentration peaks that may increase the toxicity of a dose.
Distribution
The distribution of a substance within the organism is a dynamic process which depends on uptake and elimination rates, as well as the blood flow to the different tissues and their affinities for the substance. Water-soluble, small, uncharged molecules, univalent cations, and most anions diffuse easily and will eventually reach a relatively even distribution in the body.
Volume of distribution is the amount of a substance in the body at a given time, divided by the concentration in blood, plasma or serum at that time. The value has no meaning as a physical volume, as many substances are not uniformly distributed in the organism. A volume of distribution of less than one l/kg body weight indicates preferential distribution in the blood (or serum or plasma), whereas a value above one indicates a preference for peripheral tissues such as adipose tissue for fat soluble substances.
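In symbols, with a hypothetical example for a 70 kg person:

```latex
V_d \;=\; \frac{A_{\text{body}}}{C_{\text{blood}}},
\qquad
\frac{70\ \mathrm{mg}}{1\ \mathrm{mg/l}} \;=\; 70\ \mathrm{l} \;\approx\; 1\ \mathrm{l/kg}
```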
Accumulation is the build-up of a substance in a tissue or organ to higher levels than in blood or plasma. It may also refer to a gradual build-up over time in the organism. Many xenobiotics are highly fat soluble and tend to accumulate in adipose tissue, while others have a special affinity for bone. For example, calcium in bone may be exchanged for cations of lead, strontium, barium and radium, and hydroxyl groups in bone may be exchanged for fluoride.
Barriers. The blood vessels in the brain, testes and placenta have special anatomical features that inhibit passage of large molecules like proteins. These features, often referred to as blood-brain, blood-testes, and blood-placenta barriers, may give the false impression that they prevent passage of any substance. These barriers are of little or no importance for xenobiotics that can diffuse through cell membranes.
Blood binding. Substances may be bound to red blood cells or plasma components, or occur unbound in blood. Carbon monoxide, arsenic, organic mercury and hexavalent chromium have a high affinity for red blood cells, while inorganic mercury and trivalent chromium show a preference for plasma proteins. A number of other substances also bind to plasma proteins. Only the unbound fraction is available for filtration or diffusion into eliminating organs. Blood binding may therefore increase the residence time in the organism but decrease uptake by target organs.
Elimination
Elimination is the disappearance of a substance in the body. Elimination may involve excretion from the body or transformation to other substances not captured by a specific method of measurement. The rate of disappearance may be expressed by the elimination rate constant, biological half-time or clearance.
Concentration-time curve. The curve of concentration in blood (or plasma) versus time is a convenient way of describing uptake and disposition of a xenobiotic.
Area under the curve (AUC) is the integral of concentration in blood (plasma) over time. When metabolic saturation and other non-linear processes are absent, AUC is proportional to the absorbed amount of substance.
Biological half-time (or half-life) is the time needed after the end of exposure to reduce the amount in the organism to one-half. As it is often difficult to assess the total amount of a substance, measurements such as the concentration in blood (plasma) are used. The half-time should be used with caution, as it may change, for example, with dose and length of exposure. In addition, many substances have complex decay curves with several half-times.
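For a single first-order elimination process the half-time follows directly from the elimination rate constant k; complex decay curves add further exponential terms:

```latex
C(t) = C_0\, e^{-kt},
\qquad
t_{1/2} = \frac{\ln 2}{k},
\qquad
C(t) = A\, e^{-\alpha t} + B\, e^{-\beta t} \quad \text{(two half-times)}
```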
Bioavailability is the fraction of an administered dose entering the systemic circulation. In the absence of presystemic clearance, or first-pass metabolism, the fraction is one. In oral exposure presystemic clearance may be due to metabolism within the gastrointestinal content, gut wall or liver. First-pass metabolism will reduce the systemic absorption of the substance and instead increase the absorption of metabolites. This may lead to a different toxicity pattern.
Clearance is the volume of blood (plasma) per unit time completely cleared of a substance. To distinguish from renal clearance, for example, the prefix total, metabolic or blood (plasma) is often added.
Intrinsic clearance is the capacity of endogenous enzymes to transform a substance, and is also expressed in volume per unit time. If the intrinsic clearance in an organ is much lower than the blood flow, the metabolism is said to be capacity limited. Conversely, if the intrinsic clearance is much higher than the blood flow, the metabolism is flow limited.
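These quantities are connected by standard toxicokinetic relations; the last expression is the well-stirred organ model, which makes the capacity-limited and flow-limited cases explicit (Q is organ blood flow, F is bioavailability):

```latex
CL = k\,V_d = \frac{V_d \ln 2}{t_{1/2}},
\qquad
\mathrm{AUC} = \frac{F \cdot \text{Dose}}{CL},
\qquad
CL_{\text{organ}} = \frac{Q \cdot CL_{\text{int}}}{Q + CL_{\text{int}}}
```

When CL_int is much smaller than Q, CL_organ ≈ CL_int (capacity limited); when CL_int is much larger than Q, CL_organ ≈ Q (flow limited).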
Excretion
Excretion is the exit of a substance and its biotransformation products from the organism.
Excretion in urine and bile. The kidneys are the most important excretory organs. Some substances, especially acids with high molecular weights, are excreted with bile. A fraction of biliary excreted substances may be reabsorbed in the intestines. This process, enterohepatic circulation, is common for conjugated substances following intestinal hydrolysis of the conjugate.
Other routes of excretion. Some substances, such as organic solvents and breakdown products such as acetone, are volatile enough so that a considerable fraction may be excreted by exhalation after inhalation. Small water-soluble molecules as well as fat-soluble ones are readily secreted to the foetus via the placenta, and into milk in mammals. For the mother, lactation can be a quantitatively important excretory pathway for persistent fat-soluble chemicals. The offspring may be secondarily exposed via the mother during pregnancy as well as during lactation. Water-soluble compounds may to some extent be excreted in sweat and saliva. These routes are generally of minor importance. However, as a large volume of saliva is produced and swallowed, saliva excretion may contribute to reabsorption of the compound. Some metals such as mercury are excreted by binding permanently to the sulphydryl groups of the keratin in the hair.
Toxicokinetic models
Mathematical models are important tools to understand and describe the uptake and disposition of foreign substances. Most models are compartmental, that is, the organism is represented by one or more compartments. A compartment is a chemically and physically theoretical volume in which the substance is assumed to distribute homogeneously and instantaneously. Simple models may be expressed as a sum of exponential terms, while more complicated ones require numerical procedures on a computer for their solution. Models may be subdivided in two categories, descriptive and physiological.
In descriptive models, fitting to measured data is performed by changing the numerical values of the model parameters or even the model structure itself. The model structure normally has little to do with the structure of the organism. Advantages of the descriptive approach are that few assumptions are made and that there is no need for additional data. A disadvantage of descriptive models is their limited usefulness for extrapolations.
Physiological models are constructed from physiological, anatomical and other independent data. The model is then refined and validated by comparison with experimental data. An advantage of physiological models is that they can be used for extrapolation purposes. For example, the influence of physical activity on the uptake and disposition of inhaled substances may be predicted from known physiological adjustments in ventilation and cardiac output. A disadvantage of physiological models is that they require a large amount of independent data.
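As a minimal sketch of the compartmental approach, the following Python fragment simulates blood concentration in a one-compartment model with constant uptake during an eight-hour exposure followed by first-order elimination; all parameter values are hypothetical:

```python
import math

def one_compartment(t_h, uptake_mg_per_h=1.0, k_per_h=0.1,
                    volume_l=40.0, exposure_h=8.0):
    """Blood concentration (mg/l) in a one-compartment model.

    Constant uptake rate during exposure, then pure first-order
    elimination. All parameter values are illustrative assumptions.
    """
    c_ss = uptake_mg_per_h / (k_per_h * volume_l)   # steady-state level
    if t_h <= exposure_h:
        # rising phase: C = C_ss * (1 - exp(-k t))
        return c_ss * (1.0 - math.exp(-k_per_h * t_h))
    c_end = c_ss * (1.0 - math.exp(-k_per_h * exposure_h))
    # post-exposure decay from the end-of-exposure concentration
    return c_end * math.exp(-k_per_h * (t_h - exposure_h))

for t in (1, 4, 8, 12, 24):
    print(f"t = {t:2d} h  C = {one_compartment(t):.3f} mg/l")
```

A descriptive model of this kind is fitted by adjusting the parameters; a physiological model would instead derive them from ventilation, blood flows and partition coefficients.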
Biotransformation
Biotransformation is a process which leads to a metabolic conversion of foreign compounds (xenobiotics) in the body. The process is often referred to as metabolism of xenobiotics. As a general rule, metabolism converts lipid-soluble xenobiotics to large, water-soluble metabolites that can be effectively excreted.
The liver is the main site of biotransformation. All xenobiotics taken up from the intestine are transported to the liver by a single blood vessel (vena porta). If taken up in small quantities a foreign substance may be completely metabolized in the liver before reaching the general circulation and other organs (first pass effect). Inhaled xenobiotics are distributed via the general circulation to the liver. In that case only a fraction of the dose is metabolized in the liver before reaching other organs.
Liver cells contain several enzymes that oxidize xenobiotics. This oxidation generally activates the compound—it becomes more reactive than the parent molecule. In most cases the oxidized metabolite is further metabolized by other enzymes in a second phase. These enzymes conjugate the metabolite with an endogenous substrate, so that the molecule becomes larger and more polar. This facilitates excretion.
Enzymes that metabolize xenobiotics are also present in other organs such as the lungs and kidneys. In these organs they may play specific and qualitatively important roles in the metabolism of certain xenobiotics. Metabolites formed in one organ may be further metabolized in a second organ. Bacteria in the intestine may also participate in biotransformation.
Metabolites of xenobiotics can be excreted by the kidneys or via the bile. They can also be exhaled via the lungs, or bound to endogenous molecules in the body.
The relationship between biotransformation and toxicity is complex. Biotransformation can be seen as a necessary process for survival. It protects the organism against toxicity by preventing accumulation of harmful substances in the body. However, reactive intermediary metabolites may be formed in biotransformation, and these are potentially harmful. This is called metabolic activation. Thus, biotransformation may also induce toxicity. Oxidized, intermediary metabolites that are not conjugated can bind to and damage cellular structures. If, for example, a xenobiotic metabolite binds to DNA, a mutation can be induced (see “Genetic toxicology”). If the biotransformation system is overloaded, a massive destruction of essential proteins or lipid membranes may occur. This can result in cell death (see “Cellular injury and cellular death”).
Metabolism is a word often used interchangeably with biotransformation. It denotes chemical breakdown or synthesis reactions catalyzed by enzymes in the body. Nutrients from food, endogenous compounds, and xenobiotics are all metabolized in the body.
Metabolic activation means that a less reactive compound is converted to a more reactive molecule. This usually occurs during Phase 1 reactions.
Metabolic inactivation means that an active or toxic molecule is converted to a less active metabolite. This usually occurs during Phase 2 reactions. In certain cases an inactivated metabolite might be reactivated, for example by enzymatic cleavage.
Phase 1 reaction refers to the first step in xenobiotic metabolism. It usually means that the compound is oxidized. Oxidation usually makes the compound more water soluble and facilitates further reactions.
Cytochrome P450 enzymes are a group of enzymes that preferentially oxidize xenobiotics in Phase 1 reactions. The different enzymes are specialized for handling specific groups of xenobiotics with certain characteristics. Endogenous molecules are also substrates. Cytochrome P450 enzymes are induced by xenobiotics in a specific fashion. Obtaining induction data on cytochrome P450 can be informative about the nature of previous exposures (see “Genetic determinants of toxic response”).
Phase 2 reaction refers to the second step in xenobiotic metabolism. It usually means that the oxidized compound is conjugated with (coupled to) an endogenous molecule. This reaction increases the water solubility further. Many conjugated metabolites are actively excreted via the kidneys.
Transferases are a group of enzymes that catalyze Phase 2 reactions. They conjugate xenobiotics with endogenous compounds such as glutathione, amino acids, glucuronic acid or sulphate.
Glutathione is an endogenous molecule, a tripeptide, that is conjugated with xenobiotics in Phase 2 reactions. It is present in all cells (and in liver cells in high concentrations), and usually protects from activated xenobiotics. When glutathione is depleted, toxic reactions between activated xenobiotic metabolites and proteins, lipids or DNA may occur.
Induction means that enzymes involved in biotransformation are increased (in activity or amount) as a response to xenobiotic exposure. In some cases within a few days enzyme activity can be increased several fold. Induction is often balanced so that both Phase 1 and Phase 2 reactions are increased simultaneously. This may lead to a more rapid biotransformation and can explain tolerance. In contrast, unbalanced induction may increase toxicity.
Inhibition of biotransformation can occur if two xenobiotics are metabolized by the same enzyme. The two substrates have to compete, and usually one of the substrates is preferred. In that case the second substrate is not metabolized, or only slowly metabolized. As with induction, inhibition may increase as well as decrease toxicity.
Oxygen activation can be triggered by metabolites of certain xenobiotics, which may auto-oxidize and thereby produce activated oxygen species. These oxygen-derived species, which include superoxide, hydrogen peroxide and the hydroxyl radical, may damage DNA, lipids and proteins in cells. Oxygen activation is also involved in inflammatory processes.
Genetic variability between individuals is seen in many genes coding for Phase 1 and Phase 2 enzymes. Genetic variability may explain why certain individuals are more susceptible to toxic effects of xenobiotics than others.
The human organism represents a complex biological system on various levels of organization, from the molecular-cellular level to the tissues and organs. The organism is an open system, exchanging matter and energy with the environment through numerous biochemical reactions in a dynamic equilibrium. The environment can be polluted, or contaminated with various toxicants.
Penetration of molecules or ions of toxicants from the work or living environment into such a strongly coordinated biological system can reversibly or irreversibly disturb normal cellular biochemical processes, or even injure and destroy the cell (see “Cellular injury and cellular death”).
Penetration of a toxicant from the environment to the sites of its toxic effect inside the organism can be divided into three phases:
Here we will focus our attention exclusively on the toxicokinetic processes inside the human organism following exposure to toxicants in the environment.
The molecules or ions of toxicants present in the environment will penetrate into the organism through the skin and mucosa, or the epithelial cells of the respiratory and gastrointestinal tracts, depending on the point of entry. That means molecules and ions of toxicants must penetrate through cellular membranes of these biological systems, as well as through an intricate system of endomembranes inside the cell.
All toxicokinetic and toxicodynamic processes occur on the molecular-cellular level. Numerous factors influence these processes and these can be divided into two basic groups:
Physico-Chemical Properties of Toxicants
In 1854 the Russian toxicologist E.V. Pelikan started studies on the relation between the chemical structure of a substance and its biological activity—the structure activity relationship (SAR). Chemical structure directly determines physico-chemical properties, some of which are responsible for biological activity.
To define the chemical structure numerous parameters can be selected as descriptors, which can be divided into various groups:
1. Physico-chemical:
2. Steric: molecular volume, shape and surface area, substructure shape, molecular reactivity, etc.
3. Structural: number of bonds, number of rings (in polycyclic compounds), extent of branching, etc.
For each toxicant it is necessary to select a set of descriptors related to a particular mechanism of activity. However, from the toxicokinetic point of view two parameters are of general importance for all toxicants:
For inhaled dusts and aerosols, the particle size, shape, surface area and density also influence their toxicokinetics and toxicodynamics.
Structure and Properties of Membranes
The eukaryotic cell of human and animal organisms is encircled by a cytoplasmic membrane regulating the transport of substances and maintaining cell homeostasis. The cell organelles (nucleus, mitochondria) possess membranes too. The cell cytoplasm is compartmentalized by intricate membranous structures, the endoplasmic reticulum and Golgi complex (endomembranes). All these membranes are structurally alike, but vary in their content of lipids and proteins.
The structural framework of membranes is a bilayer of lipid molecules (phospholipids, sphingolipids, cholesterol). The backbone of a phospholipid molecule is glycerol, with two of its -OH groups esterified by aliphatic fatty acids with 16 to 18 carbon atoms, and the third group esterified by a phosphate group and a nitrogenous compound (choline, ethanolamine, serine). In sphingolipids, sphingosine is the base.
The lipid molecule is amphipathic because it consists of a polar hydrophilic “head” (amino alcohol, phosphate, glycerol) and a non-polar twin “tail” (fatty acids). The lipid bilayer is arranged so that the hydrophilic heads constitute the outer and inner surfaces of the membrane and the lipophilic tails are stretched toward the membrane interior, which contains water, various ions and molecules.
Proteins and glycoproteins are inserted into the lipid bilayer (intrinsic proteins) or attached to the membrane surface (extrinsic proteins). These proteins contribute to the structural integrity of the membrane, but they may also perform as enzymes, carriers, pore walls or receptors.
The membrane represents a dynamic structure which can be disintegrated and rebuilt with a different proportion of lipids and proteins, according to functional needs.
Regulation of transport of substances into and out of the cell represents one of the basic functions of outer and inner membranes.
Some lipophilic molecules pass directly through the lipid bilayer. Hydrophilic molecules and ions are transported via pores. Membranes respond to changing conditions by opening or sealing certain pores of various sizes.
The following processes and mechanisms are involved in the transport of substances, including toxicants, through membranes:
Passive processes: diffusion and facilitated (catalyzed) diffusion. Active processes: active transport and endocytosis.
Diffusion
This represents the movement of molecules and ions through the lipid bilayer or pores from a region of high concentration, or high electric potential, to a region of low concentration or potential (“downhill”). The difference in concentration or electric charge is the driving force influencing the intensity of the flux in both directions. In the equilibrium state, influx will equal efflux. The rate of diffusion follows Fick’s law, which states that it is directly proportional to the available membrane surface, the difference in concentration (charge) gradient and the characteristic diffusion coefficient, and inversely proportional to the membrane thickness.
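In symbols, a standard statement of Fick’s law for a membrane of area A and thickness d, with concentration difference Δc and diffusion coefficient D:

```latex
J \;=\; D\,A\,\frac{\Delta c}{d}
```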
Small lipophilic molecules pass easily through the lipid layer of membrane, according to the Nernst partition coefficient.
Large lipophilic molecules, water-soluble molecules and ions will use aqueous pore channels for their passage. Size and stereoconfiguration will influence the passage of molecules. For ions, besides size, the type of charge will be decisive. The protein molecules of the pore walls can gain positive or negative charge. Narrow pores tend to be selective—negatively charged ligands will allow passage only for cations, and positively charged ligands will allow passage only for anions. With increasing pore diameter hydrodynamic flow becomes dominant, allowing free passage of ions and molecules, according to Poiseuille’s law. This filtration is a consequence of the osmotic gradient. In some cases ions can penetrate through specific complex molecules—ionophores—which can be produced by micro-organisms with antibiotic effects (nonactin, valinomycin, gramicidin, etc.).
Facilitated or catalyzed diffusion
This requires the presence of a carrier in the membrane, usually a protein molecule (permease). The carrier selectively binds substances, resembling a substrate-enzyme complex. Similar molecules (including toxicants) can compete for the specific carrier until its saturation point is reached. Toxicants can compete for the carrier, and when they are irreversibly bound to it the transport is blocked. The rate of transport is characteristic for each type of carrier. If transport is performed in both directions, it is called exchange diffusion.
Active transport
For transport of some substances vital for the cell, a special type of carrier is used, transporting against the concentration gradient or electric potential (“uphill”). The carrier is very stereospecific and can be saturated.
For uphill transport, energy is required. The necessary energy is obtained by catalytic cleavage of ATP molecules to ADP by the enzyme adenosine triphosphatase (ATP-ase).
Toxicants can interfere with this transport by competitive or non-competitive inhibition of the carrier or by inhibition of ATP-ase activity.
Endocytosis
Endocytosis is defined as a transport mechanism in which the cell membrane encircles material by enfolding to form a vesicle transporting it through the cell. When the material is liquid, the process is termed pinocytosis. In some cases the material is bound to a receptor and this complex is transported by a membrane vesicle. This type of transport is especially used by epithelial cells of the gastrointestinal tract, and cells of the liver and kidneys.
Absorption of Toxicants
People are exposed to numerous toxicants present in the work and living environment, which can penetrate into the human organism by three main portals of entry:
In the case of exposure in industry, inhalation represents the dominant route of entry of toxicants, followed by dermal penetration. In agriculture, exposure to pesticides via dermal absorption is almost as frequent as exposure by combined inhalation and dermal penetration. The general population is mostly exposed by ingestion of contaminated food, water and beverages, then by inhalation, and less often by dermal penetration.
Absorption via the respiratory tract
Absorption in the lungs represents the main route of uptake for numerous airborne toxicants (gases, vapours, fumes, mists, smokes, dusts, aerosols, etc.).
The respiratory tract (RT) represents an ideal gas-exchange system possessing a membrane with a surface of 30 m² (expiration) to 100 m² (deep inspiration), behind which a network of about 2,000 km of capillaries is located. The system, developed through evolution, is accommodated into a relatively small space (chest cavity) protected by ribs.
Anatomically and physiologically the RT can be divided into three compartments:
Hydrophilic toxicants are easily absorbed by the epithelium of the nasopharyngeal region. The whole epithelium of the nasopharyngeal (NP) and tracheo-bronchial (TB) regions is covered by a film of water. Lipophilic toxicants are partially absorbed in the NP and TB regions, but mostly in the alveoli, by diffusion through alveolo-capillary membranes. The absorption rate depends on lung ventilation, cardiac output (blood flow through the lungs), solubility of the toxicant in blood and its metabolic rate.
In the alveoli, gas exchange is carried out. The alveolar wall is made up of an epithelium, an interstitial framework of basement membrane, connective tissue and the capillary endothelium. The diffusion of toxicants is very rapid through these layers, which have a thickness of about 0.8 μm. In alveoli, toxicant is transferred from the air phase into the liquid phase (blood). The rate of absorption (air to blood distribution) of a toxicant depends on its concentration in alveolar air and the Nernst partition coefficient for blood (solubility coefficient).
In the blood the toxicant can be dissolved in the liquid phase by simple physical processes or bound to blood cells and/or plasma constituents according to chemical affinity, or by adsorption. The water content of blood is 75% and, therefore, hydrophilic gases and vapours show a high solubility in plasma (e.g., alcohols). Lipophilic toxicants (e.g., benzene) are usually bound to cells or macromolecules such as albumin.
From the very beginning of exposure in the lungs, two opposite processes are occurring: absorption and desorption. The equilibrium between these processes depends on the concentration of toxicant in alveolar air and blood. At the onset of exposure the toxicant concentration in the blood is 0 and retention in blood is almost 100%. With continuation of exposure, an equilibrium between absorption and desorption is attained. Hydrophilic toxicants will rapidly attain equilibrium, and the rate of absorption depends on pulmonary ventilation rather than on blood flow. Lipophilic toxicants need a longer time to achieve equilibrium, and here the flow of unsaturated blood governs the rate of absorption.
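A simplified mass balance for the start of exposure (venous blood assumed free of toxicant, metabolism neglected) captures this: with blood:air partition coefficient λ, alveolar ventilation V̇A and cardiac output Q̇, the retained fraction of the inhaled amount is

```latex
\text{Retention} \;=\; \frac{\lambda\,\dot{Q}}{\dot{V}_A + \lambda\,\dot{Q}}
```

so that uptake of highly blood-soluble substances (large λ) approaches the ventilation limit, while uptake of poorly soluble substances (small λ) is governed by blood flow.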
Deposition of particles and aerosols in the RT depends on physical and physiological factors, as well as particle size. In short, the smaller the particle the deeper it will penetrate into the RT.
Relatively constant, low retention of dust particles in the lungs of persons who are highly exposed (e.g., miners) suggests the existence of a very efficient system for the clearance of particles. In the upper part of the RT (tracheo-bronchial) a mucociliary blanket performs the clearance. In the pulmonary part, three different mechanisms are at work: (1) the mucociliary blanket, (2) phagocytosis and (3) direct penetration of particles through the alveolar wall.
The first 17 of the 23 branchings of the tracheo-bronchial tree possess ciliated epithelial cells. By their strokes these cilia constantly move a mucous blanket toward the mouth. Particles deposited on this mucociliary blanket will be swallowed in the mouth (ingestion). A mucous blanket also covers the surface of the alveolar epithelium, moving toward the mucociliary blanket. Additionally the specialized moving cells—phagocytes—engulf particles and micro-organisms in the alveoli and migrate in two possible directions:
Absorption via gastrointestinal tract
Toxicants can be ingested in the case of accidental swallowing, intake of contaminated food and drinks, or swallowing of particles cleared from the RT.
The entire alimentary channel, from oesophagus to anus, is basically built in the same way. A mucous layer (epithelium) is supported by connective tissue and then by a network of capillaries and smooth muscle. The surface epithelium of the stomach is very wrinkled to increase the absorption/secretion surface area. The intestinal area contains numerous small projections (villi), which are able to absorb material by “pumping in”. The active area for absorption in the intestines is about 100 m².
In the gastrointestinal tract (GIT) all absorption processes are very active:
Some toxic metal ions use specialized transport systems for essential elements: thallium, cobalt and manganese use the iron system, while lead appears to use the calcium system.
Many factors influence the rate of absorption of toxicants in various parts of the GIT:
It is also necessary to mention the enterohepatic circulation. Polar toxicants and/or metabolites (glucuronides and other conjugates) are excreted with the bile into the duodenum. Here the enzymes of the microflora perform hydrolysis and liberated products can be reabsorbed and transported by the portal vein into the liver. This mechanism is very dangerous in the case of hepatotoxic substances, enabling their temporary accumulation in the liver.
In the case of toxicants biotransformed in the liver to less toxic or non-toxic metabolites, ingestion may represent a less dangerous portal of entry. After absorption in the GIT these toxicants will be transported by the portal vein to the liver, and there they can be partially detoxified by biotransformation.
Absorption through the skin (dermal, percutaneous)
The skin (1.8 m² of surface in a human adult), together with the mucous membranes of the body orifices, covers the surface of the body. It represents a barrier against physical, chemical and biological agents, maintaining body integrity and homeostasis and performing many other physiological tasks.
Basically the skin consists of three layers: epidermis, true skin (dermis) and subcutaneous tissue (hypodermis). From the toxicological point of view the epidermis is of most interest here. It is built of many layers of cells. A horny surface of flattened, dead cells (stratum corneum) is the top layer, under which a continuous layer of living cells (stratum corneum compactum) is located, followed by a typical lipid membrane, and then by the stratum lucidum, stratum granulosum and stratum mucosum. The lipid membrane represents a protective barrier, but in hairy parts of the skin both hair follicles and sweat gland channels penetrate through it. Therefore, dermal absorption can occur by the following mechanisms:
The rate of absorption through the skin will depend on many factors:
Transport of Toxicants by Blood and Lymph
After absorption by any of these portals of entry, toxicants will reach the blood, lymph or other body fluids. The blood represents the major vehicle for transport of toxicants and their metabolites.
Blood is a circulating fluid organ, transporting necessary oxygen and vital substances to the cells and removing waste products of metabolism. Blood also contains cellular components, hormones, and other molecules involved in many physiological functions. Blood flows inside a relatively well closed, high-pressure circulatory system of blood vessels, pushed by the activity of the heart. Due to the high pressure, leakage of fluid occurs. The lymphatic system represents the drainage system, in the form of a fine mesh of small, thin-walled lymph capillaries branching through the soft tissues and organs.
Blood is a mixture of a liquid phase (plasma, 55%) and solid blood cells (45%). Plasma contains proteins (albumins, globulins, fibrinogen), organic acids (lactic, glutamic, citric) and many other substances (lipids, lipoproteins, glycoproteins, enzymes, salts, xenobiotics, etc.). Blood cell elements include erythrocytes (Er), leukocytes, reticulocytes, monocytes, and platelets.
Toxicants are absorbed as molecules and ions. Some toxicants at blood pH form colloid particles as a third form in this liquid. Molecules, ions and colloids of toxicants have various possibilities for transport in blood:
Most of the toxicants in blood exist partially in a free state in plasma and partially bound to erythrocytes and plasma constituents. The distribution depends on the affinity of toxicants to these constituents. All fractions are in a dynamic equilibrium.
Some toxicants are transported by the blood elements—mostly by erythrocytes, very rarely by leukocytes. Toxicants can be adsorbed on the surface of Er, or can bind to the ligands of the stroma. If they penetrate into Er they can bind to the haem (e.g., carbon monoxide and selenium) or to the globin (Sb¹¹¹, Po²¹⁰). Some toxicants transported by Er are arsenic, caesium, thorium, radon, lead and sodium. Hexavalent chromium is exclusively bound to the Er and trivalent chromium to the proteins of plasma. For zinc, competition between Er and plasma occurs. About 96% of lead is transported by Er. Organic mercury is mostly bound to Er and inorganic mercury is carried mostly by plasma albumin. Small fractions of beryllium, copper, tellurium and uranium are carried by Er.
The majority of toxicants are transported by plasma or plasma proteins. Many electrolytes are present as ions in an equilibrium with non-dissociated molecules free or bound to the plasma fractions. This ionic fraction of toxicants is very diffusible, penetrating through the walls of capillaries into tissues and organs. Gases and vapours can be dissolved in the plasma.
Plasma proteins possess a total surface area of about 600 to 800 km² available for adsorption of toxicants. Albumin molecules possess about 109 cationic and 120 anionic ligands at the disposal of ions. Many ions are partially carried by albumin (e.g., copper, zinc and cadmium), as are such compounds as dinitro- and ortho-cresols, nitro- and halogenated derivatives of aromatic hydrocarbons, and phenols.
Globulin molecules (alpha and beta) transport small molecules of toxicants as well as some metallic ions (copper, zinc and iron) and colloid particles. Fibrinogen shows affinity for certain small molecules. Many types of bonds can be involved in binding of toxicants to plasma proteins: Van der Waals forces, attraction of charges, association between polar and non-polar groups, hydrogen bridges, covalent bonds.
Plasma lipoproteins transport lipophilic toxicants such as PCBs. The other plasma fractions serve as a transport vehicle too. The affinity of toxicants for plasma proteins suggests their affinity for proteins in tissues and organs during distribution.
Organic acids (lactic, glutamic, citric) form complexes with some toxicants. Alkaline earths and rare earths, as well as some heavy elements in the form of cations, are also complexed with organic oxy- and amino acids. All these complexes are usually diffusible and easily distributed in tissues and organs.
Physiological chelating agents in plasma, such as transferrin and metallothionein, compete with organic acids and amino acids for cations to form stable chelates.
Diffusible free ions, some complexes and some free molecules are easily cleared from the blood into tissues and organs. The free fraction of ions and molecules is in a dynamic equilibrium with the bound fraction. The concentration of a toxicant in blood will govern the rate of its distribution into tissues and organs, or its mobilization from them into the blood.
Distribution of Toxicants in the Organism
The human organism can be divided into the following compartments: (1) internal organs, (2) skin and muscles, (3) adipose tissues, (4) connective tissue and bones. This classification is based mostly on the degree of vascular (blood) perfusion, in decreasing order. For example, internal organs (including the brain), which represent only 12% of the total body weight, receive about 75% of the total blood volume. On the other hand, connective tissues and bones (15% of total body weight) receive only one per cent of the total blood volume.
The well-perfused internal organs generally achieve the highest concentration of toxicants in the shortest time, as well as an equilibrium between blood and this compartment. The uptake of toxicants by less perfused tissues is much slower, but retention is higher and duration of stay much longer (accumulation) due to low perfusion.
Three components are of major importance for the intracellular distribution of toxicants: content of water, lipids and proteins in the cells of various tissues and organs. The above-mentioned order of compartments also follows closely a decreasing water content in their cells. Hydrophilic toxicants will be more rapidly distributed to the body fluids and cells with high water content, and lipophilic toxicants to cells with higher lipid content (fatty tissue).
The organism possesses some barriers which impair penetration of some groups of toxicants, mostly hydrophilic, to certain organs and tissues, such as:
As previously noted only the free forms of toxicants in plasma (molecules, ions, colloids) are available for penetration through the capillary walls participating in distribution. This free fraction is in a dynamic equilibrium with the bound fraction. Concentration of toxicants in blood is in a dynamic equilibrium with their concentration in organs and tissues, governing retention (accumulation) or mobilization from them.
The condition of the organism, functional state of organs (especially neuro-humoral regulation), hormonal balance and other factors play a role in distribution.
Retention of toxicant in a particular compartment is generally temporary, and redistribution into other tissues can occur. Retention and accumulation are based on the difference between the rates of absorption and elimination. The duration of retention in a compartment is expressed by the biological half-life. This is the time interval in which 50% of the toxicant is cleared from the tissue or organ and redistributed, translocated or eliminated from the organism.
Biotransformation processes occur during distribution and retention in various organs and tissues. Biotransformation produces more polar, more hydrophilic metabolites, which are more easily eliminated. A low rate of biotransformation of a lipophilic toxicant will generally cause its accumulation in a compartment.
The toxicants can be divided into four main groups according to their affinity, predominant retention and accumulation in a particular compartment:
Accumulation in lipid-rich tissues
The “standard man” of 70 kg body weight contains about 15% of body weight in the form of adipose tissue, increasing with obesity to 50%. However, this lipid fraction is not uniformly distributed. The brain (CNS) is a lipid-rich organ, and peripheral nerves are wrapped with a lipid-rich myelin sheath and Schwann cells. All these tissues offer possibilities for accumulation of lipophilic toxicants.
Numerous non-electrolytes and non-polar toxicants with a suitable Nernst partition coefficient will be distributed to this compartment, as well as numerous organic solvents (alcohols, aldehydes, ketones, etc.), chlorinated hydrocarbons (including organochlorine insecticides such as DDT), some inert gases (radon), etc.
Adipose tissue will accumulate toxicants due to its low vascularization and lower rate of biotransformation. Here accumulation of toxicants may represent a kind of temporary “neutralization” because of lack of targets for toxic effect. However, potential danger for the organism is always present due to the possibility of mobilization of toxicants from this compartment back to the circulation.
Deposition of toxicants in the brain (CNS) or lipid-rich tissue of the myelin sheath of the peripheral nervous system is very dangerous. The neurotoxicants are deposited here directly next to their targets. Toxicants retained in lipid-rich tissue of the endocrine glands can produce hormonal disturbances. Despite the blood-brain barrier, numerous neurotoxicants of a lipophilic nature reach the brain (CNS): anaesthetics, organic solvents, pesticides, tetraethyl lead, organomercurials, etc.
Retention in the reticuloendothelial system
In each tissue and organ a certain percentage of cells is specialized for phagocytic activity, engulfing micro-organisms, particles, colloid particles, and so on. This system is called the reticuloendothelial system (RES), comprising fixed cells as well as moving cells (phagocytes). These cells are present in non-active form. An increase of the above-mentioned microbes and particles will activate the cells up to a saturation point.
Toxicants in the form of colloids will be captured by the RES of organs and tissues. Distribution depends on the colloid particle size. For larger particles, retention in the liver will be favoured. With smaller colloid particles, more or less uniform distribution will occur between the spleen, bone marrow and liver. Clearance of colloids from the RES is very slow, although small particles are cleared relatively more quickly.
Accumulation in bones
About 60 elements can be identified as osteotropic elements, or bone seekers.
Osteotropic elements can be divided into three groups:
The skeleton of a standard man accounts for 10 to 15% of the total body weight, representing a large potential storage depot for osteotropic toxicants. Bone is a highly specialized tissue consisting by volume of 54% minerals and 38% organic matrix. The mineral matrix of bone is hydroxyapatite, Ca10(PO4)6(OH)2, in which the ratio of Ca to P is about 1.5 to one. The surface area of mineral available for adsorption is about 100 m2 per g of bone.
Metabolic activity of the bones of the skeleton can be divided into two categories:
In the fetus, infant and young child, metabolic bone (see “available skeleton”) represents almost 100% of the skeleton. With age this percentage of metabolic bone decreases. Incorporation of toxicants during exposure occurs in the metabolic bone and in the more slowly turning-over compartments.
Incorporation of toxicants into bone occurs in two ways:
Ion-exchange reactions
The bone mineral, hydroxyapatite, represents a complex ion-exchange system. Calcium cations can be exchanged for various other cations. The anions present in bone can also be exchanged: phosphate for citrate and carbonate, hydroxyl for fluoride. Ions which are not exchangeable can be adsorbed on the mineral surface. When toxicant ions are incorporated in the mineral, a new layer of mineral can cover the surface, burying the toxicant in the bone structure. Ion exchange is a reversible process, depending on the concentration of ions, pH and fluid volume. Thus, for example, an increase in dietary calcium may decrease the deposition of toxicant ions in the mineral lattice. As noted, with age the percentage of metabolic bone decreases, although ion exchange continues. With ageing, bone mineral resorption occurs and bone density actually decreases; at this point, toxicants stored in bone (e.g., lead) may be released.
About 30% of the ions incorporated into bone mineral are loosely bound; they can be exchanged, captured by natural chelating agents and excreted, with a biological half-life of 15 days. The remaining 70% is more firmly bound; mobilization and excretion of this fraction show a biological half-life of 2.5 years or more, depending on the bone type (remodelling processes).
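Taken together, the two fractions describe a burden that decays as the sum of two exponentials. The following sketch uses the figures just cited (30% with a 15-day half-life, 70% with a 2.5-year half-life) and assumes, as a simplification, that each pool clears independently by first-order kinetics:

```python
import math

# Two-pool model of a toxicant burden in bone, using the fractions
# and half-lives cited above. Independent first-order clearance of
# each pool is a simplifying assumption.
LOOSE_FRACTION, LOOSE_T12 = 0.30, 15.0          # days
FIRM_FRACTION, FIRM_T12 = 0.70, 2.5 * 365.0     # days

def bone_burden_remaining(t_days: float) -> float:
    k_loose = math.log(2) / LOOSE_T12
    k_firm = math.log(2) / FIRM_T12
    return (LOOSE_FRACTION * math.exp(-k_loose * t_days)
            + FIRM_FRACTION * math.exp(-k_firm * t_days))

for t in (15, 90, 365, 5 * 365):
    print(f"day {t:>4}: {bone_burden_remaining(t):.2f} of initial burden")
```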
Chelating agents (Ca-EDTA, penicillamine, BAL, etc.) can mobilize considerable quantities of some heavy metals, greatly increasing their excretion in urine.
Colloid adsorption
Colloid particles are adsorbed as a film on the mineral surface (100 m2 per g) by Van der Waals forces or chemisorption. This layer of colloids is then covered by the next layer of newly formed mineral, burying the toxicants more deeply in the bone structure. The rate of mobilization and elimination depends on remodelling processes.
Accumulation in hair and nails
The hair and nails contain keratin, with sulphydryl groups able to chelate metallic cations such as mercury and lead.
Distribution of toxicant inside the cell
Recently the distribution of toxicants, especially some heavy metals, within the cells of tissues and organs has gained in importance. With ultracentrifugation techniques, the various fractions of the cell can be separated and their content of metal ions and other toxicants determined.
Animal studies have revealed that after penetration into the cell, some metal ions are bound to a specific protein, metallothionein. This low molecular weight protein is present in the cells of the liver, kidney and other organs and tissues. Its sulphydryl groups can bind six ions per molecule. Increased presence of metal ions induces the biosynthesis of this protein, cadmium ions being the most potent inducer. Metallothionein also serves to maintain the homeostasis of the vital copper and zinc ions. It can bind zinc, copper, cadmium, mercury, bismuth, gold, cobalt and other cations.
Biotransformation and Elimination of Toxicants
During retention in cells of various tissues and organs, toxicants are exposed to enzymes which can biotransform (metabolize) them, producing metabolites. There are many pathways for the elimination of toxicants and/or metabolites: by exhaled air via the lungs, by urine via the kidneys, by bile via the GIT, by sweat via the skin, by saliva via the mouth mucosa, by milk via the mammary glands, and by hair and nails via normal growth and cell turnover.
The elimination of an absorbed toxicant depends on the portal of entry. In the lungs the absorption/desorption process starts immediately and toxicants are partially eliminated in exhaled air. Elimination of toxicants absorbed by other routes of entry is delayed, starting after transport by the blood and eventually being completed after distribution and biotransformation. During absorption an equilibrium exists between the concentrations of a toxicant in the blood and in tissues and organs. Excretion decreases the blood concentration of the toxicant and may induce its mobilization from tissues into blood.
Many factors can influence the elimination rate of toxicants and their metabolites from the body:
Here we distinguish two groups of compartments: (1) the rapid-exchange system, in which the tissue concentration of a toxicant is similar to that of the blood; and (2) the slow-exchange system, in which the tissue concentration is higher than that of the blood owing to binding and accumulation; adipose tissue, the skeleton and the kidneys can temporarily retain some toxicants, such as arsenic and zinc.
A toxicant can be excreted simultaneously by two or more excretion routes. However, usually one route is dominant.
Mathematical models are being developed to describe the excretion of particular toxicants. These models are based on movement between one or both compartments (exchange systems), on biotransformation and so on.
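A minimal example of such a model treats blood and one slowly exchanging tissue as linked compartments with first-order transfer and excretion from the blood. All rate constants below are invented for illustration; real models are fitted to measured data:

```python
# Two-compartment excretion model (blood <-> tissue, first-order
# excretion from blood), integrated with a simple Euler step.
# All rate constants (per day) are illustrative assumptions.
K_BLOOD_TO_TISSUE = 0.20
K_TISSUE_TO_BLOOD = 0.05
K_EXCRETION = 0.30

def simulate(days: float, dt: float = 0.01):
    blood, tissue, excreted = 1.0, 0.0, 0.0   # unit dose starts in blood
    for _ in range(int(days / dt)):
        to_tissue = K_BLOOD_TO_TISSUE * blood * dt
        to_blood = K_TISSUE_TO_BLOOD * tissue * dt
        out = K_EXCRETION * blood * dt
        blood += to_blood - to_tissue - out
        tissue += to_tissue - to_blood
        excreted += out
    return blood, tissue, excreted

b, t, e = simulate(30.0)
print(f"after 30 days: blood={b:.3f}, tissue={t:.3f}, excreted={e:.3f}")
```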
Elimination by exhaled air via lungs
Elimination via the lungs (desorption) is typical for toxicants with high volatility (e.g., organic solvents). Gases and vapours with low solubility in blood will be quickly eliminated this way, whereas toxicants with high blood solubility will be eliminated by other routes.
Organic solvents absorbed by the GIT or skin are excreted partially by exhaled air in each passage of blood through the lungs, if they have a sufficient vapour pressure. The Breathalyser test used for suspected drunk drivers is based on this fact. The concentration of CO in exhaled air is in equilibrium with the CO-Hb blood content. The radioactive gas radon appears in exhaled air due to the decay of radium accumulated in the skeleton.
Elimination of a toxicant in exhaled air as a function of time after exposure can usually be described by a three-phase curve. The first phase represents elimination of the toxicant from the blood, with a short half-life. The second, slower phase represents elimination due to exchange of blood with tissues and organs (the rapid-exchange system). The third, very slow phase is due to exchange of blood with fatty tissue and the skeleton. If a toxicant does not accumulate in such compartments, the curve is two-phase. In some cases a four-phase curve is also possible.
Determination of gases and vapours in exhaled air in the post-exposure period is sometimes used for evaluation of exposures in workers.
Renal excretion
The kidney is an organ specialized in the excretion of numerous water-soluble toxicants and metabolites, maintaining homeostasis of the organism. Each kidney possesses about one million nephrons able to perform excretion. Renal excretion is a complex process encompassing three different mechanisms: glomerular filtration, active tubular secretion and passive tubular reabsorption.
Excretion of a toxicant via the kidneys to urine depends on the Nernst partition coefficient, dissociation constant and pH of urine, molecular size and shape, rate of metabolism to more hydrophilic metabolites, as well as health status of the kidneys.
The kinetics of renal excretion of a toxicant or its metabolite can be expressed by a two-, three- or four-phase excretion curve, depending on the distribution of the particular toxicant in various body compartments differing in the rate of exchange with the blood.
Saliva
Some drugs and metallic ions can be excreted through the mucosa of the mouth by saliva—for example, lead (“lead line”), mercury, arsenic, copper, as well as bromides, iodides, ethyl alcohol, alkaloids, and so on. The toxicants are then swallowed, reaching the GIT, where they can be reabsorbed or eliminated by faeces.
Sweat
Many non-electrolytes can be partially eliminated via skin by sweat: ethyl alcohol, acetone, phenols, carbon disulphide and chlorinated hydrocarbons.
Milk
Many metals, organic solvents and some organochlorine pesticides (DDT) are secreted via the mammary gland in mother’s milk. This pathway can represent a danger for nursing infants.
Hair
Analysis of hair can be used as an indicator of the homeostasis of some physiological substances. Exposure to some toxicants, especially heavy metals, can also be evaluated by this kind of bioassay.
Elimination of toxicants from the body can be increased by:
Exposure Determinations
Determination of toxicants and metabolites in blood, exhaled air, urine, sweat, faeces and hair is increasingly used for the evaluation of human exposure (exposure tests) and/or the evaluation of the degree of intoxication. Accordingly, biological exposure limits (biological MAC values, biological exposure indices, BEIs) have recently been established. These bioassays show the “internal exposure” of the organism, that is, the total exposure of the body in both the work and living environments by all portals of entry (see “Toxicology test methods: Biomarkers”).
Combined Effects Due to Multiple Exposure
People in the work and/or living environment are usually exposed simultaneously or consecutively to various physical and chemical agents. It is also necessary to take into account that some persons use medications, smoke, and consume alcohol and food containing additives. That means that multiple exposure is usually occurring. Physical and chemical agents can interact at each step of the toxicokinetic and/or toxicodynamic processes, producing three possible kinds of combined effect: additive, synergistic or antagonistic.
However, studies on combined effects are rare. This kind of study is very complex owing to the many possible combinations of factors and agents.
We can conclude that when the human organism is exposed to two or more toxicants simultaneously or consecutively, it is necessary to consider the possibility of some combined effects, which can increase or decrease the rate of toxicokinetic processes.
The priority objective of occupational and environmental toxicology is to improve the prevention or substantial limitation of health effects of exposure to hazardous agents in the general and occupational environments. To this end systems have been developed for quantitative risk assessment related to a given exposure (see the section “Regulatory toxicology”).
The effects of a chemical on particular systems and organs are related to the magnitude of exposure and whether exposure is acute or chronic. In view of the diversity of toxic effects even within one system or organ, a uniform philosophy concerning the critical organ and critical effect has been proposed for the purpose of risk assessment and development of health-based recommended concentration limits of toxic substances in different environmental media.
From the point of view of preventive medicine, it is of particular importance to identify early adverse effects, based on the general assumption that preventing or limiting early effects may prevent more severe health effects from developing.
Such an approach has been applied to heavy metals. Although heavy metals such as lead, cadmium and mercury belong to a specific group of toxic substances, whose chronic effects depend on their accumulation in the organs, the definitions presented below, published by the Task Group on Metal Toxicity (Nordberg 1976), have found wider application.
The definition of the critical organ as proposed by the Task Group on Metal Toxicity has been adopted with a slight modification: the word metal has been replaced with the expression potentially toxic substance (Duffus 1993).
Whether a given organ or system is regarded as critical depends not only on the mechanism of action of the hazardous agent but also on the route of absorption and the population exposed.
The biological meaning of a subcritical effect is sometimes not known; it may represent a biomarker of exposure, an index of adaptation or a precursor of the critical effect (see “Toxicology test methods: Biomarkers”). The last possibility can be particularly significant for prophylactic activities.
Table 1 displays examples of critical organs and effects for different chemicals. In chronic environmental exposure to cadmium, where the route of absorption is of minor importance (cadmium air concentrations range from 10 to 20 μg/m3 in urban areas and from 1 to 2 μg/m3 in rural areas), the critical organ is the kidney. In the occupational setting, where the TLV reaches 50 μg/m3 and inhalation constitutes the main route of exposure, two organs, the lung and the kidney, are regarded as critical.
Table 1. Examples of critical organs and critical effects
Substance | Critical organ in chronic exposure | Critical effect
Cadmium | Lungs | Non-threshold: lung cancer (unit risk 4.6 × 10⁻³)
Cadmium | Kidney | Threshold: increased excretion of low molecular weight proteins (β2-M, RBP) in urine
Cadmium | Lungs | Emphysema, slight function changes
Lead | Haematopoietic system (adults) | Increased delta-aminolevulinic acid excretion in urine (ALA-U); increased concentration of free erythrocyte protoporphyrin (FEP) in erythrocytes
Lead | Peripheral nervous system (adults) | Slowing of the conduction velocities of the slower nerve fibres
Mercury (elemental) | Central nervous system (young children) | Decrease in IQ and other subtle effects; mercurial tremor (fingers, lips, eyelids)
Mercury (mercuric) | Kidney | Proteinuria
Manganese | Central nervous system (adults) | Impairment of psychomotor functions
Manganese | Lungs (children) | Respiratory symptoms
Manganese | Central nervous system (children) | Impairment of psychomotor functions
Toluene | Mucous membranes | Irritation
Vinyl chloride | Liver | Cancer (angiosarcoma; unit risk 1 × 10⁻⁶)
Ethyl acetate | Mucous membranes | Irritation
For lead, the critical organs in adults are the haematopoietic and peripheral nervous systems, where the critical effects (e.g., elevated free erythrocyte protoporphyrin (FEP) concentration, increased excretion of delta-aminolevulinic acid in urine, or impaired peripheral nerve conduction) manifest when the blood lead level (an index of lead absorption in the system) approaches 200 to 300 μg/l. In small children the critical organ is the central nervous system (CNS), and symptoms of dysfunction, detected with psychological test batteries, have been found in examined populations at blood lead concentrations as low as about 100 μg/l.
A number of other definitions have been formulated which may better reflect the meaning of the notion. According to WHO (1989), the critical effect is defined as “the first adverse effect which appears when the threshold (critical) concentration or dose is reached in the critical organ. Adverse effects, such as cancer, with no defined threshold concentration are often regarded as critical. Decision on whether an effect is critical is a matter of expert judgement.” In the International Programme on Chemical Safety (IPCS) guidelines for developing Environmental Health Criteria Documents, the critical effect is described as “the adverse effect judged to be most appropriate for determining the tolerable intake”. The latter definition was formulated directly for the purpose of evaluating health-based exposure limits in the general environment. In this context the most essential issue is determining which effect can be regarded as an adverse effect. Following current terminology, the adverse effect is the “change in morphology, physiology, growth, development or lifespan of an organism which results in impairment of the capacity to compensate for additional stress or increase in susceptibility to the harmful effects of other environmental influences. Decision on whether or not any effect is adverse requires expert judgement.”
Figure 1 displays hypothetical dose-response curves for different effects. In the case of exposure to lead, A can represent a subcritical effect (inhibition of erythrocyte ALA-dehydratase), B the critical effect (an increase in erythrocyte zinc protoporphyrin or in the excretion of delta-aminolevulinic acid), C the clinical effect (anaemia) and D the fatal effect (death). For lead exposure there is abundant evidence illustrating how particular effects of exposure are dependent on lead concentration in blood (the practical counterpart of the dose), either in the form of the dose-response relationship or in relation to different variables (sex, age, etc.). Determining the critical effects and the dose-response relationship for such effects in humans makes it possible to predict the frequency of a given effect for a given dose or its counterpart (concentration in biological material) in a certain population.
Figure 1. Hypothetical dose-response curves for various effects
The critical effects can be of two types: those considered to have a threshold, and those for which there may be some risk at any exposure level (non-threshold effects, e.g., genotoxic carcinogens and germ mutagens). Whenever possible, appropriate human data should be used as a basis for the risk assessment. In order to determine threshold effects for the general population, assumptions concerning the exposure level (tolerable intake, biomarkers of exposure) have to be made such that the frequency of the critical effect in the population exposed to a given hazardous agent corresponds to the frequency of that effect in the general population. In lead exposure, the maximum recommended blood lead concentration for the general population (200 μg/l, median below 100 μg/l) (WHO 1987) is practically below the threshold value for the assumed critical effect, the elevated free erythrocyte protoporphyrin level, although it is not below the level associated with effects on the CNS in children or on blood pressure in adults. In general, if data from well-conducted human population studies defining a no observed adverse effect level are the basis for safety evaluation, then an uncertainty factor of ten has been considered appropriate.

In the case of occupational exposure the critical effects may refer to a certain part of the population (e.g., 10%). Accordingly, in occupational lead exposure the recommended health-based blood lead concentration has been adopted as 400 μg/l in men, a 10% response level for an ALA-U of 5 mg/l having occurred at PbB concentrations of about 300 to 400 μg/l. For occupational exposure to cadmium (assuming the increased urinary excretion of low molecular weight proteins to be the critical effect), a level of 200 ppm cadmium in the renal cortex has been regarded as admissible, this effect having been observed in 10% of the exposed population. Both of these values are under consideration for lowering in many countries at the present time (i.e., 1996).
There is no clear consensus on an appropriate methodology for the risk assessment of chemicals whose critical effect may not have a threshold, such as genotoxic carcinogens. A number of approaches based largely on the characterization of the dose-response relationship have been adopted for the assessment of such effects. Owing to the lack of socio-political acceptance of health risk caused by carcinogens, in documents such as the Air Quality Guidelines for Europe (WHO 1987) only values such as the unit lifetime risk (i.e., the risk associated with lifetime exposure to 1 μg/m3 of the hazardous agent) are presented for non-threshold effects (see “Regulatory toxicology”).
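The arithmetic behind a unit lifetime risk figure is simple multiplication under the low-dose linear assumption: excess lifetime risk is approximately the unit risk times the lifetime average concentration. A short sketch, taking the cadmium unit risk quoted in table 1 and an invented exposure concentration:

```python
# Excess lifetime risk under the low-dose linear (non-threshold)
# assumption: unit risk times lifetime average air concentration.
# The cadmium unit risk is the value quoted in table 1; the
# exposure concentration below is an invented illustration.
UNIT_RISK_CD = 4.6e-3          # per ug/m3 of lifetime exposure

def excess_lifetime_risk(conc_ug_m3: float, unit_risk: float) -> float:
    return unit_risk * conc_ug_m3

print(f"{excess_lifetime_risk(0.01, UNIT_RISK_CD):.1e}")  # 4.6e-05
```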
Presently, the basic step in risk assessment is determining the critical organ and the critical effects. The definitions of both the critical and the adverse effect reflect the responsibility of deciding which of the effects within a given organ or system should be regarded as critical, and this is directly related to the subsequent determination of recommended values for a given chemical in the general environment (for example, the Air Quality Guidelines for Europe (WHO 1987)) or health-based limits in occupational exposure (WHO 1980). Choosing the critical effect from within the range of subcritical effects may lead to recommended limits on toxic chemical concentrations in the general or occupational environment that are impossible to maintain in practice. Regarding as critical an effect that overlaps the early clinical effects may lead to the adoption of values at which adverse effects could develop in some part of the population. The decision whether or not a given effect should be considered critical remains the responsibility of expert groups specializing in toxicity and risk assessment.
There are often large differences among humans in the intensity of response to toxic chemicals, and variations in the susceptibility of an individual over a lifetime. These can be attributed to a variety of factors capable of influencing the absorption rate, distribution in the body, biotransformation and/or excretion rate of a particular chemical. Apart from the known hereditary factors which have been clearly demonstrated to be linked with increased susceptibility to chemical toxicity in humans (see “Genetic determinants of toxic response”), other factors include: constitutional characteristics related to age and sex; pre-existing disease states or a reduction in organ function (non-hereditary, i.e., acquired); dietary habits, smoking, alcohol consumption and use of medications; concomitant exposure to biotoxins (various micro-organisms) and physical factors (radiation, humidity, extremely low or high temperatures, or barometric pressures particularly relevant to the partial pressure of a gas), as well as concomitant physical exercise or psychological stress; previous occupational and/or environmental exposure to a particular chemical, and in particular concomitant exposure to other chemicals, not necessarily toxic (e.g., essential metals). The possible contributions of the aforementioned factors in either increasing or decreasing susceptibility to adverse health effects, as well as the mechanisms of their action, are specific for a particular chemical. Therefore only the most common factors, basic mechanisms and a few characteristic examples will be presented here, whereas specific information concerning each particular chemical can be found elsewhere in this Encyclopaedia.
According to the stage at which these factors act (absorption, distribution, biotransformation or excretion of a particular chemical), the mechanisms can be roughly categorized by two basic consequences of interaction: (1) a change in the quantity of the chemical in the target organ, that is, at the site(s) of its effect in the organism (toxicokinetic interactions), or (2) a change in the intensity of a specific response to the quantity of the chemical in the target organ (toxicodynamic interactions). The most common mechanisms of either type of interaction are competition with other chemical(s) for binding to the same compounds involved in transport in the organism (e.g., specific serum proteins) and/or for the same biotransformation pathway (e.g., specific enzymes), resulting in a change in the speed or the sequence of events between the initial reaction and the final adverse health effect. Both toxicokinetic and toxicodynamic interactions may influence individual susceptibility to a particular chemical. The influence of several concomitant factors can result in: (a) additive effects, in which the intensity of the combined effect is equal to the sum of the effects produced by each factor separately; (b) synergistic effects, in which the intensity of the combined effect is greater than that sum; or (c) antagonistic effects, in which the intensity of the combined effect is smaller than that sum.
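Stated numerically: if E_A and E_B are the effects of each factor alone and E_AB the combined effect, the interaction is additive when E_AB is approximately E_A + E_B, synergistic when it is clearly greater, and antagonistic when it is clearly smaller. A toy classifier along these lines (the 10% tolerance band is an arbitrary illustrative choice, not a standard from the text):

```python
# Classify a combined effect against the sum of the individual
# effects. The 10% tolerance is an arbitrary illustrative choice.

def classify_interaction(effect_a: float, effect_b: float,
                         combined: float, tol: float = 0.10) -> str:
    expected = effect_a + effect_b
    if combined > expected * (1 + tol):
        return "synergistic"
    if combined < expected * (1 - tol):
        return "antagonistic"
    return "additive"

print(classify_interaction(0.2, 0.3, 0.5))  # additive
print(classify_interaction(0.2, 0.3, 0.9))  # synergistic
print(classify_interaction(0.2, 0.3, 0.3))  # antagonistic
```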
The quantity of a particular toxic chemical or characteristic metabolite at the site(s) of its effect in the human body can be more or less assessed by biological monitoring, that is, by choosing the correct biological specimen and optimal timing of specimen sampling, taking into account biological half-lives for a particular chemical in both the critical organ and in the measured biological compartment. However, reliable information concerning other possible factors that might influence individual susceptibility in humans is generally lacking, and consequently the majority of knowledge regarding the influence of various factors is based on experimental animal data.
It should be stressed that in some cases relatively large differences exist between humans and other mammals in the intensity of response to an equivalent level and/or duration of exposure to many toxic chemicals; for example, humans appear to be considerably more sensitive to the adverse health effects of several toxic metals than are rats (commonly used in experimental animal studies). Some of these differences can be attributed to the fact that the transportation, distribution and biotransformation pathways of various chemicals are greatly dependent on subtle changes in the tissue pH and the redox equilibrium in the organism (as are the activities of various enzymes), and that the redox system of the human differs considerably from that of the rat.
This is obviously the case with important antioxidants such as vitamin C and glutathione (GSH), which are essential for maintaining redox equilibrium and which have a protective role against the adverse effects of the oxygen- or xenobiotic-derived free radicals involved in a variety of pathological conditions (Kehrer 1993). Humans cannot synthesize vitamin C, in contrast to the rat, and both the levels and the turnover rate of erythrocyte GSH in humans are considerably lower than those in the rat. Humans also lack some of the protective antioxidant enzymes found in the rat and other mammals (e.g., GSH-peroxidase is considered to be poorly active in human sperm). These examples illustrate the potentially greater vulnerability of humans to oxidative stress (particularly in sensitive cells, e.g., the apparently greater vulnerability of human sperm to toxic influences than that of rat sperm), which can result in a different response or greater susceptibility to the influence of various factors in humans compared to other mammals (Telišman 1995).
Influence of Age
Compared to adults, very young children are often more susceptible to chemical toxicity because of their relatively greater inhalation volumes and gastrointestinal absorption rate due to greater permeability of the intestinal epithelium, and because of immature detoxification enzyme systems and a relatively smaller excretion rate of toxic chemicals. The central nervous system appears to be particularly susceptible at the early stage of development with regard to neurotoxicity of various chemicals, for example, lead and methylmercury. On the other hand, the elderly may be susceptible because of chemical exposure history and increased body stores of some xenobiotics, or pre-existing compromised function of target organs and/or relevant enzymes resulting in lowered detoxification and excretion rate. Each of these factors can contribute to weakening of the body’s defences—a decrease in reserve capacity, causing increased susceptibility to subsequent exposure to other hazards. For example, the cytochrome P450 enzymes (involved in the biotransformation pathways of almost all toxic chemicals) can be either induced or have lowered activity because of the influence of various factors over a lifetime (including dietary habits, smoking, alcohol, use of medications and exposure to environmental xenobiotics).
Influence of Sex
Gender-related differences in susceptibility have been described for a large number of toxic chemicals (approximately 200), and such differences are found in many mammalian species. It appears that males are generally more susceptible to renal toxins and females to liver toxins. The causes of the different response between males and females have been related to differences in a variety of physiological processes (e.g., females are capable of additional excretion of some toxic chemicals through menstrual blood loss, breast milk and/or transfer to the foetus, but they experience additional stress during pregnancy, delivery and lactation), enzyme activities, genetic repair mechanisms, hormonal factors, or the presence of relatively larger fat depots in females, resulting in greater accumulation of some lipophilic toxic chemicals, such as organic solvents and some medications.
Influence of Dietary Habits
Dietary habits have an important influence on susceptibility to chemical toxicity, mostly because adequate nutrition is essential for the functioning of the body’s chemical defence system in maintaining good health. Adequate intake of essential metals (including metalloids) and proteins, especially the sulphur-containing amino acids, is necessary for the biosynthesis of various detoxifying enzymes and the provision of glycine and glutathione for conjugation reactions with endogenous and exogenous compounds. Lipids, especially phospholipids, and lipotropes (methyl group donors) are necessary for the synthesis of biological membranes. Carbohydrates provide the energy required for various detoxification processes and provide glucuronic acid for conjugation of toxic chemicals and their metabolites. Selenium (an essential metalloid), glutathione, and vitamins such as vitamin C (water soluble), vitamin E and vitamin A (lipid soluble) have an important role as antioxidants (e.g., in controlling lipid peroxidation and maintaining the integrity of cellular membranes) and free-radical scavengers for protection against toxic chemicals. In addition, various dietary constituents (protein and fibre content, minerals, phosphates, citric acid, etc.) as well as the amount of food consumed can greatly influence the gastrointestinal absorption rate of many toxic chemicals (e.g., the average absorption rate of soluble lead salts taken with meals is approximately 8%, as opposed to approximately 60% in fasting subjects). However, diet itself can be an additional source of individual exposure to various toxic chemicals (e.g., considerably increased daily intakes and accumulation of arsenic, mercury, cadmium and/or lead in subjects who consume contaminated seafood).
Influence of Smoking
The habit of smoking can influence individual susceptibility to many toxic chemicals because of the variety of possible interactions involving the great number of compounds present in cigarette smoke (especially polycyclic aromatic hydrocarbons, carbon monoxide, benzene, nicotine, acrolein, some pesticides, cadmium, and, to a lesser extent, lead and other toxic metals, etc.), some of which are capable of accumulating in the human body over a lifetime, including pre-natal life (e.g., lead and cadmium). The interactions occur mainly because various toxic chemicals compete for the same binding site(s) for transport and distribution in the organism and/or for the same biotransformation pathway involving particular enzymes. For example, several cigarette smoke constituents can induce cytochrome P450 enzymes, whereas others can depress their activity, and thus influence the common biotransformation pathways of many other toxic chemicals, such as organic solvents and some medications. Heavy cigarette smoking over a long period can considerably reduce the body’s defence mechanisms by decreasing reserve capacity to cope with the adverse influence of other life-style factors.
Influence of Alcohol
Consumption of alcohol (ethanol) can influence susceptibility to many toxic chemicals in several ways. It can influence the absorption rate and distribution of certain chemicals in the body: for example, it can increase the gastrointestinal absorption rate of lead, or decrease the pulmonary absorption rate of mercury vapour by inhibiting the oxidation which is necessary for retention of inhaled mercury vapour. Ethanol can also influence susceptibility to various chemicals through short-term changes in tissue pH and an increase in the redox potential resulting from its metabolism, since the oxidation of ethanol to acetaldehyde, and of acetaldehyde to acetate, produces reduced nicotinamide adenine dinucleotide (NADH) and hydrogen ions (H+). Because the affinity of both essential and toxic metals and metalloids for binding to various compounds and tissues is influenced by pH and by changes in the redox potential (Telišman 1995), even a moderate intake of ethanol may result in a series of consequences, such as: (1) redistribution of long-term accumulated lead in the human organism in favour of the biologically active lead fraction; (2) replacement of essential zinc by lead in zinc-containing enzymes, thus affecting enzyme activity, or influence of mobilized lead on the distribution of other essential metals and metalloids in the organism such as calcium, iron, copper and selenium; and (3) increased urinary excretion of zinc. The effects of these possible events can be augmented by the fact that alcoholic beverages can contain an appreciable amount of lead from vessels or processing (Prpic-Majic et al. 1984; Telišman et al. 1984; 1993).
Another common reason for ethanol-related changes in susceptibility is that many toxic chemicals, for example, various organic solvents, share the same biotransformation pathway involving the cytochrome P450 enzymes. Depending on the intensity of exposure to organic solvents as well as the quantity and frequency of ethanol ingestion (i.e., acute or chronic alcohol consumption), ethanol can either decrease or increase biotransformation rates of various organic solvents and thus influence their toxicity (Sato 1991).
Influence of Medications
The common use of various medications can influence susceptibility to toxic chemicals, mainly because many drugs bind to serum proteins and thus influence the transport, distribution or excretion rate of various toxic chemicals, or because many drugs are capable of inducing relevant detoxifying enzymes or depressing their activity (e.g., the cytochrome P450 enzymes), thus affecting the toxicity of chemicals sharing the same biotransformation pathway. Characteristic examples of these mechanisms are the increased urinary excretion of trichloroacetic acid (the metabolite of several chlorinated hydrocarbons) when using salicylates, sulphonamides or phenylbutazone, and the increased hepato-nephrotoxicity of carbon tetrachloride when using phenobarbital. In addition, some medications contain a considerable amount of a potentially toxic chemical, for example the aluminium-containing antacids or preparations used for therapeutic management of the hyperphosphataemia arising in chronic renal failure.
Influence of Concomitant Exposure to Other Chemicals
The changes in susceptibility to adverse health effects due to interaction of various chemicals (i.e., possible additive, synergistic or antagonistic effects) have been studied almost exclusively in experimental animals, mostly in the rat. Relevant epidemiological and clinical studies are lacking. This is of concern particularly considering the relatively greater intensity of response or the variety of adverse health effects of several toxic chemicals in humans compared to the rat and other mammals. Apart from published data in the field of pharmacology, most data are related only to combinations of two different chemicals within specific groups, such as various pesticides, organic solvents, or essential and/or toxic metals and metalloids.
Combined exposure to various organic solvents can result in various additive, synergistic or antagonistic effects (depending on the combination of certain organic solvents, their intensity and duration of exposure), mainly due to the capability of influencing each other’s biotransformation (Sato 1991).
Another characteristic example is the interaction of essential and/or toxic metals and metalloids, since these are involved in the possible influence of age (e.g., a lifetime body accumulation of environmental lead and cadmium), sex (e.g., the common iron deficiency in women), dietary habits (e.g., increased dietary intake of toxic metals and metalloids and/or deficient dietary intake of essential ones), smoking habit and alcohol consumption (e.g., additional exposure to cadmium, lead and other toxic metals), and use of medications (e.g., a single dose of antacid can result in a 50-fold increase in the average daily intake of aluminium through food). The possibility of various additive, synergistic or antagonistic effects of exposure to various metals and metalloids in humans can be illustrated by basic examples related to the main toxic elements (see table 1), apart from which further interactions may occur because essential elements can also influence one another (e.g., the well-known antagonistic effect of copper on the gastrointestinal absorption rate and the metabolism of zinc, and vice versa). The main cause of all these interactions is the competition of various metals and metalloids for the same binding site (especially the sulphydryl group, -SH) in various enzymes, metalloproteins (especially metallothionein) and tissues (e.g., cell membranes and organ barriers). These interactions may play a relevant role in the development of several chronic diseases mediated through the action of free radicals and oxidative stress (Telišman 1995).
Table 1. Basic effects of possible multiple interactions concerning the main toxic and/or essential metals and metalloids in mammals
Toxic metal or metalloid | Basic effects of the interaction with other metal or metalloid |
Aluminium (Al) | Decreases the absorption rate of Ca and impairs the metabolism of Ca; deficient dietary Ca increases the absorption rate of Al. Impairs phosphate metabolism. Data on interactions with Fe, Zn and Cu are equivocal (i.e., the possible role of another metal as a mediator). |
Arsenic (As) | Affects the distribution of Cu (an increase of Cu in the kidney, and a decrease of Cu in the liver, serum and urine). Impairs the metabolism of Fe (an increase of Fe in the liver with concomitant decrease in haematocrit). Zn decreases the absorption rate of inorganic As and decreases the toxicity of As. Se decreases the toxicity of As and vice versa. |
Cadmium (Cd) | Decreases the absorption rate of Ca and impairs the metabolism of Ca; deficient dietary Ca increases the absorption rate of Cd. Impairs the phosphate metabolism, i.e., increases urinary excretion of phosphates. Impairs the metabolism of Fe; deficient dietary Fe increases the absorption rate of Cd. Affects the distribution of Zn; Zn decreases the toxicity of Cd, whereas its influence on the absorption rate of Cd is equivocal. Se decreases the toxicity of Cd. Mn decreases the toxicity of Cd at low-level exposure to Cd. Data on the interaction with Cu are equivocal (i.e., the possible role of Zn, or another metal, as a mediator). High dietary levels of Pb, Ni, Sr, Mg or Cr(III) can decrease the absorption rate of Cd. |
Mercury (Hg) | Affects the distribution of Cu (an increase of Cu in the liver). Zn decreases the absorption rate of inorganic Hg and decreases the toxicity of Hg. Se decreases the toxicity of Hg. Cd increases the concentration of Hg in the kidney, but at the same time decreases the toxicity of Hg in the kidney (the influence of the Cd-induced metallothionein synthesis). |
Lead (Pb) | Impairs the metabolism of Ca; deficient dietary Ca increases the absorption rate of inorganic Pb and increases the toxicity of Pb. Impairs the metabolism of Fe; deficient dietary Fe increases the toxicity of Pb, whereas its influence on the absorption rate of Pb is equivocal. Impairs the metabolism of Zn and increases urinary excretion of Zn; deficient dietary Zn increases the absorption rate of inorganic Pb and increases the toxicity of Pb. Se decreases the toxicity of Pb. Data on interactions with Cu and Mg are equivocal (i.e., the possible role of Zn, or another metal, as a mediator). |
Note: Data are mostly related to experimental studies in the rat, whereas relevant clinical and epidemiological data (particularly regarding quantitative dose-response relationships) are generally lacking (Elsenhans et al. 1991; Fergusson 1990; Telišman et al. 1993).
It has long been recognized that each person’s response to environmental chemicals is different. The recent explosion in molecular biology and genetics has brought a clearer understanding of the molecular basis of such variability. Major determinants of individual response to chemicals include important differences among more than a dozen superfamilies of enzymes, collectively termed xenobiotic- (foreign to the body) or drug-metabolizing enzymes. Although the role of these enzymes has classically been regarded as detoxification, these same enzymes also convert a number of inert compounds to highly toxic intermediates. Recently, many subtle as well as gross differences in the genes encoding these enzymes have been identified and shown to result in marked variations in enzyme activity. It is now clear that each individual possesses a distinct complement of xenobiotic-metabolizing enzyme activities; this diversity might be thought of as a “metabolic fingerprint”. It is the complex interplay of these many different enzyme superfamilies which ultimately determines not only the fate and the potential for toxicity of a chemical in any given individual, but also the assessment of exposure. In this article we have chosen to use the cytochrome P450 enzyme superfamily to illustrate the remarkable progress made in understanding individual response to chemicals. The development of relatively simple DNA-based tests designed to identify specific gene alterations in these enzymes is now providing more accurate predictions of individual response to chemical exposure. We hope the result will be preventive toxicology: each individual might learn about those chemicals to which he or she is particularly sensitive, thereby avoiding previously unpredictable toxicity or cancer.
Although it is not generally appreciated, human beings are exposed daily to a barrage of innumerable diverse chemicals. Many of these chemicals are highly toxic, and they are derived from a wide variety of environmental and dietary sources. The relationship between such exposures and human health has been, and continues to be, a major focus of biomedical research efforts worldwide.
What are some examples of this chemical bombardment? More than 400 chemicals from red wine have been isolated and characterized. At least 1,000 chemicals are estimated to be produced by a lighted cigarette. There are countless chemicals in cosmetics and perfumed soaps. Another major source of chemical exposure is agriculture: in the United States alone, farmlands receive more than 75,000 chemicals each year in the form of pesticides, herbicides and fertilizing agents; after uptake by plants and grazing animals, as well as fish in nearby waterways, humans (at the end of the food chain) ingest these chemicals. Two other sources of large concentrations of chemicals taken into the body include (a) drugs taken chronically and (b) exposure to hazardous substances in the workplace over a lifetime of employment.
It is now well established that chemical exposure may adversely affect many aspects of human health, causing chronic diseases and the development of many cancers. In the last decade or so, the molecular basis of many of these relationships has begun to be unravelled. In addition, the realization has emerged that humans differ markedly in their susceptibility to the harmful effects of chemical exposure.
Current efforts to predict human response to chemical exposure combine two fundamental approaches (figure 1): monitoring the extent of human exposure through biological markers (biomarkers), and predicting the likely response of an individual to a given level of exposure. Although both of these approaches are extremely important, it should be emphasized that the two are distinctly different from one another. This article will focus on the genetic factors underlying individual susceptibility to any particular chemical exposure. This field of research is broadly termed ecogenetics, or pharmacogenetics (see Kalow 1962 and 1992). Many of the recent advances in determining individual susceptibility to chemical toxicity have evolved from a greater appreciation of the processes by which humans and other mammals detoxify chemicals, and the remarkable complexity of the enzyme systems involved.
Figure 1. The interrelationships among exposure assessment, ethnic differences, age, diet, nutrition and genetic susceptibility assessment - all of which play a role in the individual risk of toxicity and cancer
We will first describe the variability of toxic responses in humans. We will then introduce some of the enzymes responsible for such variation in response, due to differences in the metabolism of foreign chemicals. Next, the history and nomenclature of the cytochrome P450 superfamily will be detailed. Five human P450 polymorphisms as well as several non-P450 polymorphisms will be briefly described; these are responsible for human differences in toxic response. We will then discuss an example to emphasize the point that genetic differences in individuals can influence exposure assessment, as determined by environmental monitoring. Lastly, we will discuss the role of these xenobiotic-metabolizing enzymes in critical life functions.
Variation in Toxic Response Among the Human Population
Toxicologists and pharmacologists commonly speak of the median lethal dose (LD50), the dose lethal to 50% of the population; the median maximal tolerated dose (MTD50); and the median effective dose (ED50), the dose of a particular drug effective in 50% of the population. However, how do these doses affect each of us individually? A highly sensitive individual may be 500 times more affected, or 500 times more likely to be affected, than the most resistant individual in a population; for these people, LD50 (and MTD50 and ED50) values have little meaning. LD50, MTD50 and ED50 values are relevant only when referring to the population as a whole.
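Population values such as the LD50 are estimated by fitting a sigmoid curve to quantal (all-or-none) dose-response data and reading off the dose at 50% response. A minimal sketch (the doses and response fractions below are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a logistic (Hill-type) curve to invented quantal dose-response
# data and estimate the LD50 as the dose giving a 50% response.
doses = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])         # mg/kg
fraction = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.97])   # responding

def hill(dose, ld50, slope):
    return 1.0 / (1.0 + (ld50 / dose) ** slope)

(ld50, slope), _ = curve_fit(hill, doses, fraction, p0=[10.0, 1.0])
print(f"estimated LD50 ~ {ld50:.1f} mg/kg (slope {slope:.2f})")
```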
Figure 2 illustrates a hypothetical dose-response relationship for a toxic response by individuals in any given population. This generic diagram might represent bronchogenic carcinoma in response to the number of cigarettes smoked, chloracne as a function of dioxin levels in the workplace, asthma as a function of air concentrations of ozone or aldehyde, sunburn in response to ultraviolet light, decreased clotting time as a function of aspirin intake, or gastrointestinal distress in response to the number of jalapeño peppers consumed. Generally, in each of these instances, the greater the exposure, the greater the toxic response. Most of the population falls near the mean toxic response as a function of dose. The “resistant outlier” (lower right in figure 2) is an individual with less of a response at higher doses or exposures. A “sensitive outlier” (upper left) is an individual with an exaggerated response to a relatively small dose or exposure. These outliers, with extreme differences in response compared to the majority of individuals in the population, may represent important genetic variants that can help scientists attempting to understand the underlying molecular mechanisms of a toxic response.
Figure 2. Generic relationship between any toxic response and the dose of any environmental, chemical or physical agent
Using these outliers in family studies, scientists in a number of laboratories have begun to appreciate the importance of Mendelian inheritance for a given toxic response. Subsequently, one can then turn to molecular biology and genetic studies to pinpoint the underlying mechanism at the gene level (genotype) responsible for the environmentally caused disease (phenotype).
Xenobiotic- or Drug-metabolizing Enzymes
How does the body respond to the myriad of exogenous chemicals to which we are exposed? Humans and other mammals have evolved highly complex metabolic enzyme systems comprising more than a dozen distinct superfamilies of enzymes. Almost every chemical to which humans are exposed will be modified by these enzymes, in order to facilitate removal of the foreign substance from the body. Collectively, these enzymes are frequently referred to as drug-metabolizing enzymes or xenobiotic-metabolizing enzymes. Actually, both terms are misnomers. First, many of these enzymes metabolize not only drugs but also hundreds of thousands of environmental and dietary chemicals. Second, all of these enzymes also have normal body compounds as substrates; none of them metabolizes only foreign chemicals.
For more than four decades, the metabolic processes mediated by these enzymes have commonly been classified as either Phase I or Phase II reactions (figure 3). Phase I (“functionalization”) reactions generally involve relatively minor structural modifications of the parent chemical via oxidation, reduction or hydrolysis in order to produce a more water-soluble metabolite. Frequently, Phase I reactions provide a “handle” for further modification of a compound by subsequent Phase II reactions. Phase I reactions are primarily mediated by a superfamily of highly versatile enzymes, collectively termed cytochromes P450, although other enzyme superfamilies can also be involved (figure 4).
Figure 3. The classical designation of Phase I and Phase II xenobiotic- or drug-metabolizing enzymes
Figure 4. Examples of drug-metabolizing enzymes
Phase II reactions involve the coupling of a water-soluble endogenous molecule to a chemical (parent chemical or Phase I metabolite) in order to facilitate excretion. Phase II reactions are frequently termed “conjugation” or “derivatization” reactions. The enzyme superfamilies catalyzing Phase II reactions are generally named according to the endogenous conjugating moiety involved: for example, acetylation by the N-acetyltransferases, sulphation by the sulphotransferases, glutathione conjugation by the glutathione transferases, and glucuronidation by the UDP glucuronosyltransferases (figure 4). Although the major organ of drug metabolism is the liver, the levels of some drug-metabolizing enzymes are quite high in the gastrointestinal tract, gonads, lung, brain and kidney, and such enzymes are undoubtedly present to some extent in every living cell.
Xenobiotic-metabolizing Enzymes Represent Double-edged Swords
As we learn more about the biological and chemical processes leading to human health aberrations, it has become increasingly evident that drug-metabolizing enzymes function in an ambivalent manner (figure 3). In the majority of cases, lipid-soluble chemicals are converted to more readily excreted water-soluble metabolites. However, it is clear that on many occasions the same enzymes are capable of transforming other inert chemicals into highly reactive molecules. These intermediates can then interact with cellular macromolecules such as proteins and DNA. Thus, for each chemical to which humans are exposed, there exists the potential for the competing pathways of metabolic activation and detoxification.
Brief Review of Genetics
In human genetics, each gene (locus) is located on one of the 23 pairs of chromosomes. The two alleles (one present on each chromosome of the pair) can be the same, or they can differ from one another. Consider, for example, the B and b alleles, in which B (brown eyes) is dominant over b (blue eyes): individuals of the brown-eyed phenotype can have either the BB or the Bb genotype, whereas individuals of the blue-eyed phenotype can only have the bb genotype.
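Although the text does not go into it, the standard bridge from allele frequencies to genotype (and hence phenotype) frequencies in a randomly mating population is the Hardy-Weinberg relation, p² + 2pq + q² = 1. A small sketch with the B/b example above (the allele frequency is invented, and treating eye colour as a single-locus trait is itself a simplification):

```python
# Hardy-Weinberg genotype frequencies for a two-allele locus.
# p is an invented allele frequency; single-locus eye colour is
# used only for illustration.
p = 0.7            # frequency of B (brown, dominant)
q = 1.0 - p        # frequency of b (blue, recessive)

genotype_freqs = {"BB": p * p, "Bb": 2 * p * q, "bb": q * q}
for genotype, freq in genotype_freqs.items():
    print(f"{genotype}: {freq:.2f}")
print(f"brown-eyed phenotype (BB or Bb): {p * p + 2 * p * q:.2f}")
```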
A polymorphism is defined as two or more stably inherited phenotypes (traits)—derived from the same gene(s)—that are maintained in the population, often for reasons not necessarily obvious. For a gene to be polymorphic, the gene product must not be essential for development, reproductive vigour or other critical life processes. In fact, a “balanced polymorphism,” wherein the heterozygote has a distinct survival advantage over either homozygote (e.g., resistance to malaria, and the sickle-cell haemoglobin allele) is a common explanation for maintaining an allele in the population at otherwise unexplained high frequencies (see Gonzalez and Nebert 1990).
Human Polymorphisms of Xenobiotic-metabolizing Enzymes
Genetic differences in the metabolism of various drugs and environmental chemicals have been known for more than four decades (Kalow 1962 and 1992). These differences are frequently referred to as pharmacogenetic or, more broadly, ecogenetic polymorphisms. These polymorphisms represent variant alleles that occur at a relatively high frequency in the population and are generally associated with aberrations in enzyme expression or function. Historically, polymorphisms were usually identified following unexpected responses to therapeutic agents. More recently, recombinant DNA technology has enabled scientists to identify the precise alterations in genes that are responsible for some of these polymorphisms. Polymorphisms have now been characterized in many drug-metabolizing enzymes, including both Phase I and Phase II enzymes. As more and more polymorphisms are identified, it is becoming increasingly apparent that each individual may possess a distinct complement of drug-metabolizing enzymes. This diversity might be described as a “metabolic fingerprint”. It is the complex interplay of the various drug-metabolizing enzyme superfamilies within any individual that will ultimately determine his or her particular response to a given chemical (Kalow 1962 and 1992; Nebert 1988; Gonzalez and Nebert 1990; Nebert and Weber 1990).
Expressing Human Xenobiotic-metabolizing Enzymes in Cell Culture
How might we develop better predictors of human toxic responses to chemicals? Advances in defining the multiplicity of drug-metabolizing enzymes must be accompanied by precise knowledge as to which enzymes determine the metabolic fate of individual chemicals. Data gleaned from laboratory rodent studies have certainly provided useful information. However, significant interspecies differences in xenobiotic-metabolizing enzymes necessitate caution in extrapolating data to human populations. To overcome this difficulty, many laboratories have developed systems in which various cell lines in culture can be engineered to produce functional human enzymes that are stable and in high concentrations (Gonzalez, Crespi and Gelboin 1991). Successful production of human enzymes has been achieved in a variety of diverse cell lines from sources including bacteria, yeast, insects and mammals.
In order to define the metabolism of chemicals even more accurately, multiple enzymes have also been successfully produced in a single cell line (Gonzalez, Crespi and Gelboin 1991). Such cell lines provide valuable insights into the precise enzymes involved in the metabolic processing of any given compound and likely toxic metabolites. If this information can then be combined with knowledge regarding the presence and level of an enzyme in human tissues, these data should provide valuable predictors of response.
Cytochrome P450
History and nomenclature
The cytochrome P450 superfamily is one of the most studied drug-metabolizing enzyme superfamilies, having a great deal of individual variability in response to chemicals. Cytochrome P450 is a convenient generic term used to describe a large superfamily of enzymes pivotal in the metabolism of innumerable endogenous and exogenous substrates. The term cytochrome P450 was first coined in 1962 to describe an unknown pigment in cells which, when reduced and bound with carbon monoxide, produced a characteristic absorption peak at 450 nm. Since the early 1980s, cDNA cloning technology has resulted in remarkable insights into the multiplicity of cytochrome P450 enzymes. To date, more than 400 distinct cytochrome P450 genes have been identified in animals, plants, bacteria and yeast. It has been estimated that any one mammalian species, such as humans, may possess 60 or more distinct P450 genes (Nebert and Nelson 1991). The multiplicity of P450 genes has necessitated the development of a standardized nomenclature system (Nebert et al. 1987; Nelson et al. 1993). First proposed in 1987 and updated every two years, the nomenclature system is based on divergent evolution of amino acid sequence comparisons between P450 proteins. The P450 genes are divided into families and subfamilies: enzymes within a family display greater than 40% amino acid similarity, and those within the same subfamily display greater than 55% similarity. P450 genes are named with the root symbol CYP followed by an arabic numeral designating the P450 family, a letter denoting the subfamily, and a further arabic numeral designating the individual gene (Nelson et al. 1993; Nebert et al. 1991). Thus, CYP1A1 represents P450 gene 1 in family 1 and subfamily A.
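The naming rule lends itself to a simple parser. The helper below is hypothetical (not an established tool); it merely splits a CYP symbol into family, subfamily and gene number according to the convention just described:

```python
import re

# Split a CYP gene symbol into family / subfamily / individual gene,
# per the CYP nomenclature described above (e.g., CYP1A1 -> family 1,
# subfamily A, gene 1). A hypothetical helper for illustration.
CYP_PATTERN = re.compile(r"^CYP(\d+)([A-Z])(\d+)$")

def parse_cyp(symbol: str) -> dict:
    match = CYP_PATTERN.match(symbol)
    if match is None:
        raise ValueError(f"unexpected CYP symbol: {symbol}")
    family, subfamily, gene = match.groups()
    return {"family": int(family), "subfamily": subfamily,
            "gene": int(gene)}

print(parse_cyp("CYP1A1"))   # {'family': 1, 'subfamily': 'A', 'gene': 1}
print(parse_cyp("CYP2C19"))  # {'family': 2, 'subfamily': 'C', 'gene': 19}
```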
As of February 1995, the database contained 403 CYP genes, grouped into 59 families and 105 subfamilies. These include eight lower eukaryotic families, 15 plant families and 19 bacterial families. The 15 human P450 gene families comprise 26 subfamilies, 22 of which have been mapped to chromosomal locations throughout most of the genome. Some sequences are clearly orthologous across many species; for example, only one CYP17 (steroid 17α-hydroxylase) gene has been found in all vertebrates examined to date. Other sequences within a subfamily are highly duplicated, making the identification of orthologous pairs impossible (e.g., the CYP2C subfamily). Interestingly, humans and yeast share an orthologous gene in the CYP51 family. Numerous comprehensive reviews are available for readers seeking further information on the P450 superfamily (Nelson et al. 1993; Nebert et al. 1991; Nebert and McKinnon 1994; Guengerich 1993; Gonzalez 1992).
The success of the P450 nomenclature system has resulted in similar terminology systems being developed for the UDP glucuronosyltransferases (Burchell et al. 1991) and flavin-containing mono-oxygenases (Lawton et al. 1994). Similar nomenclature systems based on divergent evolution are also under development for several other drug-metabolizing enzyme superfamilies (e.g., sulphotransferases, epoxide hydrolases and aldehyde dehydrogenases).
Recently, we divided the mammalian P450 gene superfamily into three groups (Nebert and McKinnon 1994)—those involved principally with foreign chemical metabolism, those involved in the synthesis of various steroid hormones, and those participating in other important endogenous functions. It is the xenobiotic-metabolizing P450 enzymes that assume the most significance for prediction of toxicity.
Xenobiotic-metabolizing P450 enzymes
P450 enzymes involved in the metabolism of foreign compounds and drugs are almost always found within families CYP1, CYP2, CYP3 and CYP4. These P450 enzymes catalyze a wide variety of metabolic reactions, with a single P450 often capable of metabolizing many different compounds. In addition, multiple P450 enzymes may metabolize a single compound at different sites. Also, a compound may be metabolized at the same, single site by several P450s, although at varying rates.
A most important property of the drug-metabolizing P450 enzymes is that many of these genes are inducible by the very substances which serve as their substrates. On the other hand, other P450 genes are induced by nonsubstrates. This phenomenon of enzyme induction underlies many drug-drug interactions of therapeutic importance.
Although present in many tissues, these particular P450 enzymes are found in relatively high levels in the liver, the primary site of drug metabolism. Some of the xenobiotic-metabolizing P450 enzymes exhibit activity toward certain endogenous substrates (e.g., arachidonic acid). However, it is generally believed that most of these xenobiotic-metabolizing P450 enzymes do not play important physiological roles—although this has not been established experimentally as yet. The selective homozygous disruption, or “knock-out,” of individual xenobiotic-metabolizing P450 genes by means of gene targeting methodologies in mice is likely to provide unequivocal information soon with regard to physiological roles of the xenobiotic-metabolizing P450s (for a review of gene targeting, see Capecchi 1994).
In contrast to P450 families encoding enzymes involved primarily in physiological processes, families encoding xenobiotic-metabolizing P450 enzymes display marked species specificity and frequently contain many active genes per subfamily (Nelson et al. 1993; Nebert et al. 1991). Given the apparent lack of physiological substrates, it is possible that the P450 enzymes in families CYP1, CYP2, CYP3 and CYP4, which appeared over the past several hundred million years, evolved as a means of detoxifying foreign chemicals encountered in the environment and diet. Clearly, evolution of the xenobiotic-metabolizing P450s occurred over a time period that far precedes the synthesis of most of the synthetic chemicals to which humans are now exposed. The genes in these four families may have evolved and diverged in animals as a result of exposure to plant metabolites during the last 1.2 billion years, a process descriptively termed “animal-plant warfare” (Gonzalez and Nebert 1990). Animal-plant warfare is the phenomenon in which plants developed new chemicals (phytoalexins) as a defence mechanism to prevent ingestion by animals, and animals, in turn, responded by developing new P450 genes to accommodate the diversifying substrates. Providing further impetus to this proposal are the recently described examples of plant-insect and plant-fungus chemical warfare involving P450 detoxification of toxic substrates (Nebert 1994).
The following is a brief introduction to several of the human xenobiotic-metabolizing P450 enzyme polymorphisms in which genetic determinants of toxic response are believed to be of high significance. Until recently, P450 polymorphisms were generally suggested by unexpected variance in patient response to administered therapeutic agents. Several P450 polymorphisms are indeed named according to the drug with which the polymorphism was first identified. More recently, research efforts have focused on identifying the precise P450 enzymes involved in the metabolism of chemicals for which variance is observed, and on the precise characterization of the P450 genes involved. As described earlier, the measurable activity of a P450 enzyme towards a model chemical is called the phenotype; the particular set of alleles an individual carries at a P450 gene is termed the P450 genotype. As more and more scrutiny is applied to the analysis of P450 genes, the precise molecular basis of previously documented phenotypic variance is becoming clearer.
The CYP1A subfamily
The CYP1A subfamily comprises two enzymes in humans and all other mammals: these are designated CYP1A1 and CYP1A2 under standard P450 nomenclature. These enzymes are of considerable interest, because they are involved in the metabolic activation of many procarcinogens and are also induced by several compounds of toxicological concern, including dioxin. For example, CYP1A1 metabolically activates many compounds found in cigarette smoke. CYP1A2 metabolically activates many arylamines—associated with urinary bladder cancer—found in the chemical dye industry. CYP1A2 also metabolically activates 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK), a tobacco-derived nitrosamine. CYP1A1 and CYP1A2 are also found at higher levels in the lungs of cigarette smokers, due to induction by polycyclic hydrocarbons present in the smoke. The levels of CYP1A1 and CYP1A2 activity are therefore considered to be important determinants of individual response to many potentially toxic chemicals.
Toxicological interest in the CYP1A subfamily was greatly intensified by a 1973 report correlating the level of CYP1A1 inducibility in cigarette smokers with individual susceptibility to lung cancer (Kellermann, Shaw and Luyten-Kellermann 1973). The molecular basis of CYP1A1 and CYP1A2 induction has been a major focus of numerous laboratories. The induction process is mediated by a protein termed the Ah receptor to which dioxins and structurally related chemicals bind. The name Ah is derived from the aryl hydrocarbon nature of many CYP1A inducers. Interestingly, differences in the gene encoding the Ah receptor between strains of mice result in marked differences in chemical response and toxicity. A polymorphism in the Ah receptor gene also appears to occur in humans: approximately one-tenth of the population displays high induction of CYP1A1 and may be at greater risk than the other nine-tenths of the population for development of certain chemically induced cancers. The role of the Ah receptor in the control of enzymes in the CYP1A subfamily, and its role as a determinant of human response to chemical exposure, has been the subject of several recent reviews (Nebert, Petersen and Puga 1991; Nebert, Puga and Vasiliou 1993).
Are there other polymorphisms that might control the level of CYP1A proteins in a cell? A polymorphism in the CYP1A1 gene has also been identified, and this appears to influence lung cancer risk amongst Japanese cigarette smokers, although this same polymorphism does not appear to influence risk in other ethnic groups (Nebert and McKinnon 1994).
CYP2C19
Variations in the rate at which individuals metabolize the anticonvulsant drug (S)-mephenytoin have been well documented for many years (Guengerich 1989). Between 2% and 5% of Caucasians and as many as 25% of Asians are deficient in this activity and may be at greater risk of toxicity from the drug. This enzyme defect has long been known to involve a member of the human CYP2C subfamily, but the precise molecular basis of the deficiency was the subject of considerable controversy, largely because the human CYP2C subfamily contains six or more genes. It was recently demonstrated, however, that a single-base mutation in the CYP2C19 gene is the primary cause of this deficiency (Goldstein and de Morais 1994). A simple DNA test, based on the polymerase chain reaction (PCR), has also been developed to identify this mutation rapidly in human populations (Goldstein and de Morais 1994).
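Because the deficiency is generally described as recessive (two defective alleles are required for the poor-metabolizer phenotype), a genotype established by such a DNA test maps onto the predicted phenotype in a straightforward way. A sketch in Python follows; the allele labels (“wt” for wild type, “m1” for the single-base mutant) are our own illustrative conventions, not an official nomenclature.

    # Illustrative mapping from CYP2C19 genotype to predicted
    # (S)-mephenytoin phenotype, assuming the deficiency is recessive:
    # two defective alleles are required for the poor-metabolizer
    # phenotype. Allele labels ("wt", "m1") are our own conventions.
    DEFECTIVE_ALLELES = {"m1"}

    def predicted_phenotype(allele1: str, allele2: str) -> str:
        """Predict metabolizer status from an individual's two alleles."""
        if allele1 in DEFECTIVE_ALLELES and allele2 in DEFECTIVE_ALLELES:
            return "poor metabolizer"
        return "extensive metabolizer"

    print(predicted_phenotype("wt", "m1"))  # extensive metabolizer (carrier)
    print(predicted_phenotype("m1", "m1"))  # poor metabolizer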
CYP2D6
Perhaps the most extensively characterized variation in a P450 gene is that involving the CYP2D6 gene. More than a dozen examples of mutations, rearrangements and deletions affecting this gene have been described (Meyer 1994). This polymorphism was first suggested 20 years ago by clinical variability in patients’ response to the antihypertensive agent debrisoquine. Alterations in the CYP2D6 gene giving rise to altered enzyme activity are therefore collectively termed the debrisoquine polymorphism.
Prior to the advent of DNA-based studies, individuals had been classified as poor or extensive metabolizers (PMs, EMs) of debrisoquine based on metabolite concentrations in urine samples. It is now clear that alterations in the CYP2D6 gene may result in individuals displaying not only poor or extensive debrisoquine metabolism, but also ultrarapid metabolism. Most alterations in the CYP2D6 gene are associated with partial or total deficiency of enzyme function; however, individuals in two families have recently been described who possess multiple functional copies of the CYP2D6 gene, giving rise to ultrarapid metabolism of CYP2D6 substrates (Meyer 1994). This remarkable observation provides new insights into the wide spectrum of CYP2D6 activity previously observed in population studies. Alterations in CYP2D6 function are of particular significance, given the more than 30 commonly prescribed drugs metabolized by this enzyme. An individual’s CYP2D6 function is therefore a major determinant of both therapeutic and toxic response to administered therapy. Indeed, it has recently been argued that consideration of a patient’s CYP2D6 status is necessary for the safe use of both psychiatric and cardiovascular drugs.
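The classical urinary phenotyping just described reduces to a simple ratio test. The sketch below illustrates that logic in Python; the cut-off values stand in for the population antimodes reported in the literature (a value near 12.6 is often cited for the poor-metabolizer antimode) and should be read as assumptions, not validated diagnostic thresholds.

    # Illustrative urinary phenotyping logic: the metabolic ratio (MR) of
    # unchanged debrisoquine to its 4-hydroxy metabolite separates poor
    # from extensive metabolizers. Cut-offs are assumed stand-ins, not
    # validated diagnostic thresholds.
    PM_CUTOFF = 12.6         # assumed antimode between EM and PM
    ULTRARAPID_CUTOFF = 0.2  # assumed low MR suggesting gene duplication

    def debrisoquine_phenotype(parent_drug: float, hydroxy_metabolite: float) -> str:
        """Classify CYP2D6 phenotype from urinary amounts (same units)."""
        if hydroxy_metabolite <= 0:
            return "poor metabolizer (PM)"  # essentially no 4-hydroxylation
        mr = parent_drug / hydroxy_metabolite
        if mr > PM_CUTOFF:
            return "poor metabolizer (PM)"
        if mr < ULTRARAPID_CUTOFF:
            return "ultrarapid metabolizer"
        return "extensive metabolizer (EM)"

    print(debrisoquine_phenotype(15.0, 1.0))  # poor metabolizer (PM)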
The role of the CYP2D6 polymorphism as a determinant of individual susceptibility to human diseases such as lung cancer and Parkinson’s disease has also been the subject of intense study (Nebert and McKinnon 1994; Meyer 1994). While conclusions are difficult to draw, given the diverse nature of the study protocols utilized, the majority of studies appear to indicate an association between extensive metabolizers of debrisoquine (EM phenotype) and lung cancer. The reasons for such an association are presently unclear. However, the CYP2D6 enzyme has been shown to metabolize NNK, a tobacco-derived nitrosamine.
As DNA-based assays improve, enabling even more accurate assessment of CYP2D6 status, it is anticipated that the precise relationship of CYP2D6 to disease risk will be clarified. Whereas the extensive metabolizer may be linked with susceptibility to lung cancer, the poor metabolizer (PM phenotype) appears to be associated with Parkinson’s disease of unknown cause. Although these studies, too, are difficult to compare, it appears that PM individuals, having a diminished capacity to metabolize CYP2D6 substrates (e.g., debrisoquine), have a 2- to 2.5-fold increase in risk of developing Parkinson’s disease.
CYP2E1
The CYP2E1 gene encodes an enzyme that metabolizes many chemicals, including drugs and many low-molecular-weight carcinogens. This enzyme is also of interest because it is highly inducible by alcohol and may play a role in liver injury induced by chemicals such as chloroform, vinyl chloride and carbon tetrachloride. The enzyme is primarily found in the liver, and the level of enzyme varies markedly between individuals. Close scrutiny of the CYP2E1 gene has resulted in the identification of several polymorphisms (Nebert and McKinnon 1994). A relationship has been reported between the presence of certain structural variations in the CYP2E1 gene and apparent lowered lung cancer risk in some studies; however, there are clear interethnic differences which require clarification of this possible relationship.
The CYP3A subfamily
In humans, four enzymes have been identified as members of the CYP3A subfamily due to their similarity in amino acid sequence. The CYP3A enzymes metabolize many commonly prescribed drugs such as erythromycin and cyclosporin. The carcinogenic food contaminant aflatoxin B1 is also a CYP3A substrate. One member of the human CYP3A subfamily, designated CYP3A4, is the principal P450 in human liver as well as being present in the gastrointestinal tract. As is true for many other P450 enzymes, the level of CYP3A4 is highly variable between individuals. A second enzyme, designated CYP3A5, is found in only approximately 25% of livers; the genetic basis of this finding has not been elucidated. The importance of CYP3A4 or CYP3A5 variability as a factor in genetic determinants of toxic response has not yet been established (Nebert and McKinnon 1994).
Non-P450 Polymorphisms
Numerous polymorphisms also exist within other xenobiotic-metabolizing enzyme superfamilies (e.g., glutathione transferases, UDP glucuronosyltransferases, para-oxonases, dehydrogenases, N-acetyltransferases and flavin-containing mono-oxygenases). Because the ultimate toxicity of any P450-generated intermediate is dependent on the efficiency of subsequent Phase II detoxification reactions, the combined role of multiple enzyme polymorphisms is important in determining susceptibility to chemically induced diseases. The metabolic balance between Phase I and Phase II reactions (figure 3) is therefore likely to be a major factor in chemically induced human diseases and genetic determinants of toxic response.
The GSTM1 gene polymorphism
A well studied example of a polymorphism in a Phase II enzyme is that involving a member of the glutathione S-transferase enzyme superfamily, designated GST mu or GSTM1. This particular enzyme is of considerable toxicological interest because it appears to be involved in the subsequent detoxification of toxic metabolites produced from chemicals in cigarette smoke by the CYP1A1 enzyme. The identified polymorphism in this glutathione transferase gene involves a total absence of functional enzyme in as many as half of all Caucasians studied. This lack of a Phase II enzyme appears to be associated with increased susceptibility to lung cancer. By grouping individuals on the basis of both variant CYP1A1 genes and the deletion or presence of a functional GSTM1 gene, it has been demonstrated that the risk of developing smoking-induced lung cancer varies significantly (Kawajiri, Watanabe and Hayashi 1994). In particular, individuals displaying one rare CYP1A1 gene alteration, in combination with an absence of the GSTM1 gene, were at higher risk (as much as ninefold) of developing lung cancer when exposed to a relatively low level of cigarette smoke. Interestingly, there appear to be interethnic differences in the significance of variant genes which necessitate further study in order to elucidate the precise role of such alterations in susceptibility to disease (Kalow 1962; Nebert and McKinnon 1994; Kawajiri, Watanabe and Hayashi 1994).
Synergistic effect of two or more polymorphisms on the toxic response
A toxic response to an environmental agent may be greatly exaggerated by the combination of two pharmacogenetic defects in the same individual, for example, the combined effects of the N-acetyltransferase (NAT2) polymorphism and the glucose-6-phosphate dehydrogenase (G6PD) polymorphism.
Occupational exposure to arylamines constitutes a grave risk of urinary bladder cancer. Since the elegant studies of Cartwright in 1954, it has become clear that N-acetylator status is a determinant of azo-dye-induced bladder cancer. There is a highly significant correlation between the slow-acetylator phenotype and the occurrence of bladder cancer, as well as the degree of invasiveness of this cancer in the bladder wall. Conversely, there is a significant association between the rapid-acetylator phenotype and the incidence of colorectal carcinoma. The N-acetyltransferase genes (NAT1, NAT2) have been cloned and sequenced, and DNA-based assays are now able to detect the more than a dozen allelic variants which account for the slow-acetylator phenotype. The NAT2 gene is polymorphic and responsible for most of the variability in toxic response to environmental chemicals (Weber 1987; Grant 1993).
Glucose-6-phosphate dehydrogenase (G6PD) is an enzyme critical in the generation and maintenance of NADPH. Low or absent G6PD activity can lead to severe drug- or xenobiotic-induced haemolysis, due to the absence of normal levels of reduced glutathione (GSH) in the red blood cell. G6PD deficiency affects at least 300 million people worldwide. More than 10% of African-American males exhibit the less severe phenotype, while certain Sardinian communities exhibit the more severe “Mediterranean type” at frequencies as high as one in every three persons. The G6PD gene has been cloned and localized to the X chromosome, and numerous diverse point mutations account for the large degree of phenotypic heterogeneity seen in G6PD-deficient individuals (Beutler 1992).
Thiazolsulphone, an arylamine sulpha drug, was found to cause a bimodal distribution of haemolytic anaemia in the treated population. When treated with certain drugs, individuals with the combination of G6PD deficiency plus the slow-acetylator phenotype are more affected than those with G6PD deficiency alone or the slow-acetylator phenotype alone. G6PD-deficient slow acetylators are at least 40 times more susceptible than normal-G6PD rapid acetylators to thiazolsulphone-induced haemolysis.
Effect of genetic polymorphisms on exposure assessment
Exposure assessment and biomonitoring (figure 1) also require information on the genetic make-up of each individual. Given identical exposure to a hazardous chemical, the level of haemoglobin adducts (or other biomarkers) might vary by two or three orders of magnitude among individuals, depending upon each person’s metabolic fingerprint.
The same combination of pharmacogenetic phenotypes has been studied in chemical factory workers in Germany (table 1). Haemoglobin adducts among workers exposed to aniline and acetanilide are by far the highest in G6PD-deficient slow acetylators, as compared with the other possible combined pharmacogenetic phenotypes. This study has important implications for exposure assessment. The data demonstrate that, although two individuals might be exposed to the same ambient level of hazardous chemical in the workplace, their internal doses, as estimated via biomarkers such as haemoglobin adducts, might differ by two or more orders of magnitude, owing to the underlying genetic predisposition of each individual. Likewise, the resulting risk of an adverse health effect may vary by two or more orders of magnitude.
Table 1: Haemoglobin adducts in workers exposed to aniline and acetanilide
Acetylator status    G6PD deficiency    Hgb adducts
Fast                 No                 2
Slow                 No                 30
Fast                 Yes                20
Slow                 Yes                100
Source: Adapted from Lewalter and Korallus 1985.
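Read as a 2 x 2 design, table 1 amounts to a lookup of relative internal dose by combined phenotype. The short sketch below is our own illustration (the dictionary, function name and units are not from the study); it uses the table’s values as relative units to show how identical ambient exposure can map onto very different biomarker levels.

    # Relative haemoglobin-adduct levels from table 1 (relative units),
    # keyed by (acetylator status, G6PD deficient). Illustration only;
    # the data structure and function name are our own.
    ADDUCT_LEVELS = {
        ("fast", False): 2,
        ("slow", False): 30,
        ("fast", True): 20,
        ("slow", True): 100,
    }

    def relative_internal_dose(acetylator: str, g6pd_deficient: bool) -> int:
        """Relative adduct level expected for a combined phenotype."""
        return ADDUCT_LEVELS[(acetylator, g6pd_deficient)]

    # Two workers with identical ambient exposure can differ ~50-fold:
    ratio = relative_internal_dose("slow", True) / relative_internal_dose("fast", False)
    print(f"slow acetylator/G6PD-deficient vs fast/normal: {ratio:.0f}-fold")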
Genetic differences in binding as well as metabolism
It should be emphasized that the same case made here for metabolism can also be made for binding. Heritable differences in the binding of environmental agents will greatly affect the toxic response. For example, differences in the mouse cdm gene can profoundly affect individual sensitivity to cadmium-induced testicular necrosis (Taylor, Heiniger and Meier 1973). Differences in the binding affinity of the Ah receptor are likely to affect dioxin-induced toxicity and cancer (Nebert, Petersen and Puga 1991; Nebert, Puga and Vasiliou 1993).
Figure 5 summarizes the role of metabolism and binding in toxicity and cancer. Toxic agents, as they exist in the environment or following metabolism or binding, elicit their effects by either a genotoxic pathway (in which damage to DNA occurs) or a non-genotoxic pathway (in which DNA damage and mutagenesis need not occur). Interestingly, it has recently become clear that “classical” DNA-damaging agents can also operate via a reduced glutathione (GSH)-dependent nongenotoxic signal transduction pathway, which is initiated on or near the cell surface, outside the cell nucleus and even in the absence of DNA (Devary et al. 1993). Genetic differences in metabolism and binding remain, however, the major determinants controlling different individual toxic responses.
Figure 5. The general means by which toxicity occurs
Role of Drug-metabolizing Enzymes in Cellular Function
Genetically based variation in drug-metabolizing enzyme function is of major importance in determining individual response to chemicals. These enzymes are pivotal in determining the fate and time course of a foreign chemical following exposure.
As illustrated in figure 5, the importance of drug-metabolizing enzymes in individual susceptibility to chemical exposure may in fact present a far more complex issue than is evident from this simple discussion of xenobiotic metabolism. During the past two decades, genotoxic mechanisms (measurements of DNA adducts and protein adducts) have been greatly emphasized. But what if nongenotoxic mechanisms are at least as important as genotoxic mechanisms in causing toxic responses?
As mentioned earlier, the physiological roles of many drug-metabolizing enzymes involved in xenobiotic metabolism have not been accurately defined. Nebert (1994) has proposed that, because of their presence on this planet for more than 3.5 billion years, drug-metabolizing enzymes were originally (and are now still primarily) responsible for regulating the cellular levels of many nonpeptide ligands important in the transcriptional activation of genes affecting growth, differentiation, apoptosis, homeostasis and neuroendocrine functions. Furthermore, the toxicity of most, if not all, environmental agents occurs by means of agonist or antagonist action on these signal transduction pathways (Nebert 1994). Based on this hypothesis, genetic variability in drug-metabolizing enzymes may have quite dramatic effects on many critical biochemical processes within the cell, thereby leading to important differences in toxic response. It is indeed possible that such a scenario may also underlie many idiosyncratic adverse reactions encountered in patients using commonly prescribed drugs.
Conclusions
The past decade has seen remarkable progress in our understanding of the genetic basis of differential response to chemicals in drugs, foods and environmental pollutants. Drug-metabolizing enzymes have a profound influence on the way humans respond to chemicals. As our awareness of drug-metabolizing enzyme multiplicity continues to evolve, we are increasingly able to make improved assessments of toxic risk for many drugs and environmental chemicals. This is perhaps most clearly illustrated in the case of the CYP2D6 cytochrome P450 enzyme. Using relatively simple DNA-based tests, it is possible to predict the likely response to any drug predominantly metabolized by this enzyme; such predictions will allow the safer use of valuable, yet potentially toxic, medications.
The future will no doubt see an explosion in the identification of further polymorphisms (phenotypes) involving drug-metabolizing enzymes. This information will be accompanied by improved, minimally invasive DNA-based tests to identify genotypes in human populations.
Such studies should be particularly informative in evaluating the role of chemicals in the many environmental diseases of presently unknown origin. The consideration of multiple drug-metabolizing enzyme polymorphisms, in combination (e.g., table 1), is also likely to represent a particularly fertile research area. Such studies will clarify the role of chemicals in the causation of cancers. Collectively, this information should enable the formulation of increasingly individualized advice on avoidance of chemicals likely to be of individual concern. This is the field of preventive toxicology. Such advice will no doubt greatly assist all individuals in coping with the ever increasing chemical burden to which we are exposed.
Mechanistic toxicology is the study of how chemical or physical agents interact with living organisms to cause toxicity. Knowledge of the mechanism of toxicity of a substance enhances the ability to prevent toxicity and to design more desirable chemicals; it constitutes the basis for therapy upon overexposure, and frequently enables a further understanding of fundamental biological processes. For the purposes of this Encyclopaedia, the emphasis will be placed on the use of animal data to predict human toxicity. Different areas of toxicology include mechanistic, descriptive, regulatory, forensic and environmental toxicology (Klaassen, Amdur and Doull 1991). All of these benefit from an understanding of the fundamental mechanisms of toxicity.
Why Understand Mechanisms of Toxicity?
Understanding the mechanism by which a substance causes toxicity enhances different areas of toxicology in different ways. Mechanistic understanding helps the governmental regulator to establish legally binding safe limits for human exposure. It helps toxicologists in recommending courses of action regarding clean-up or remediation of contaminated sites and, along with physical and chemical properties of the substance or mixture, can be used to select the degree of protective equipment required. Mechanistic knowledge is also useful in forming the basis for therapy and the design of new drugs for treatment of human disease. For the forensic toxicologist the mechanism of toxicity often provides insight as to how a chemical or physical agent can cause death or incapacitation.
If the mechanism of toxicity is understood, descriptive toxicology becomes useful in predicting the toxic effects of related chemicals. It is important to understand, however, that a lack of mechanistic information does not deter health professionals from protecting human health. Prudent decisions based on animal studies and human experience are used to establish safe exposure levels. Traditionally, a margin of safety was established by using the “no adverse effect level” or a “lowest adverse effect level” from animal studies (using repeated-exposure designs) and dividing that level by a factor of 100 for occupational exposure or 1,000 for other human environmental exposure. The success of this process is evident from the few incidents of adverse health effects attributed to chemical exposure in workers where appropriate exposure limits had been set and adhered to in the past. In addition, the human lifespan continues to increase, as does the quality of life. Overall the use of toxicity data has led to effective regulatory and voluntary control. Detailed knowledge of toxic mechanisms will enhance the predictability of newer risk models currently being developed and will result in continuous improvement.
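The arithmetic behind this traditional margin of safety can be stated explicitly. A minimal sketch follows, in Python; the NOAEL value is invented for illustration, and real limit-setting also weighs data quality, severity of effect and route of exposure.

    # Traditional safety-factor arithmetic described above: divide the
    # no-adverse-effect level from a repeated-dose animal study by 100
    # for occupational exposure, or by 1,000 for other environmental
    # exposure. The NOAEL below is an invented example value.
    NOAEL_MG_PER_KG_DAY = 50.0  # hypothetical no-adverse-effect level

    def acceptable_exposure(noael: float, occupational: bool) -> float:
        """Apply the conventional uncertainty factor to a NOAEL."""
        factor = 100 if occupational else 1000
        return noael / factor

    print(acceptable_exposure(NOAEL_MG_PER_KG_DAY, occupational=True))   # 0.5 mg/kg/day
    print(acceptable_exposure(NOAEL_MG_PER_KG_DAY, occupational=False))  # 0.05 mg/kg/day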
Understanding environmental mechanisms is complex and presumes a knowledge of ecosystem disruption and homeostasis (balance). While not discussed in this article, an enhanced understanding of toxic mechanisms and their ultimate consequences in an ecosystem would help scientists to make prudent decisions regarding the handling of municipal and industrial waste material. Waste management is a growing area of research and will continue to be very important in the future.
Techniques for Studying Mechanisms of Toxicity
The majority of mechanistic studies start with a descriptive toxicological study in animals or clinical observations in humans. Ideally, animal studies include careful behavioural and clinical observations, careful biochemical examination of elements of the blood and urine for signs of adverse function of major biological systems in the body, and a post-mortem evaluation of all organ systems by microscopic examination to check for injury (see OECD test guidelines; EC directives on chemical evaluation; US EPA test rules; Japan chemicals regulations). This is analogous to a thorough human physical examination that would take place in a hospital over a two- to three-day time period except for the post-mortem examination.
Understanding mechanisms of toxicity is the art and science of observation, creativity in the selection of techniques to test various hypotheses, and innovative integration of signs and symptoms into a causal relationship. Mechanistic studies start with exposure, follow the time-related distribution and fate in the body (pharmacokinetics), and measure the resulting toxic effect at some level of the system and at some dose level. Different substances can act at different levels of the biological system in causing toxicity.
Exposure
The route of exposure in mechanistic studies is usually the same as for human exposure. Route is important because there can be effects that occur locally at the site of exposure in addition to systemic effects after the chemical has been absorbed into the blood and distributed throughout the body. A simple yet cogent example of a local effect would be irritation and eventual corrosion of the skin following application of strong acid or alkaline solutions designed for cleaning hard surfaces. Similarly, irritation and cellular death can occur in cells lining the nose and/or lungs following exposure to irritant vapours or gases such as oxides of nitrogen or ozone. (Both are constituents of air pollution, or smog). Following absorption of a chemical into blood through the skin, lungs or gastrointestinal tract, the concentration in any organ or tissue is controlled by many factors which determine the pharmacokinetics of the chemical in the body. The body has the ability to activate as well as detoxify various chemicals as noted below.
Role of Pharmacokinetics in Toxicity
Pharmacokinetics describes the time relationships of chemical absorption, distribution, metabolism (biochemical alterations in the body) and elimination or excretion from the body. Relative to mechanisms of toxicity, these pharmacokinetic variables can be very important, and in some instances they determine whether toxicity will or will not occur. For instance, if a material is not absorbed in a sufficient amount, systemic toxicity (inside the body) will not occur. Conversely, a highly reactive chemical that is detoxified quickly (in seconds or minutes) by digestive or liver enzymes may not have the time to cause toxicity. Some polycyclic halogenated substances and mixtures, as well as certain metals like lead, would not cause significant toxicity if excretion were rapid; but because their excretion is slow (sometimes measured in years), they can accumulate to levels high enough to be toxic. Fortunately, most chemicals do not have such long retention in the body. Accumulation of an innocuous material still would not induce toxicity. The rate of elimination and detoxication is frequently expressed as the half-life of the chemical: the time for 50% of the chemical to be excreted or altered to a non-toxic form.
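As a worked illustration, consider the common case of a chemical eliminated by first-order kinetics: the body burden falls exponentially, C(t) = C0·e^(-kt), where k is the elimination rate constant, so the half-life is t1/2 = ln 2/k, or about 0.693/k. A chemical cleared with k = 0.1 per hour thus has a half-life of roughly seven hours, and about 97% of it is gone after five half-lives. (This simple relation assumes first-order behaviour, which many, but not all, chemicals follow.)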
However, if a chemical accumulates in a particular cell or organ, that may signal a reason to further examine its potential toxicity in that organ. More recently, mathematical models have been developed to extrapolate pharmacokinetic variables from animals to humans. These pharmacokinetic models are extremely useful in generating hypotheses and testing whether the experimental animal may be a good representation for humans. Numerous chapters and texts have been written on this subject (Gehring et al. 1976; Reitz et al. 1987; Nolan et al. 1995). A simplified example of a physiological model is depicted in figure 1.
Figure 1. A simplified pharmacokinetic model
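A minimal numerical sketch of such a model is given below, assuming a single well-mixed compartment with first-order absorption (rate constant ka) and first-order elimination (rate constant ke); all names and parameter values are our own illustrative choices. Physiological models of the kind cited above extend this idea with organ-specific compartments, blood flows and metabolism.

    import math

    # Minimal one-compartment pharmacokinetic sketch: first-order uptake
    # from the exposure site (rate ka) and first-order elimination (rate
    # ke). All parameter values are invented for illustration.
    ka, ke = 1.0, 0.1   # absorption and elimination rate constants (per hour)
    dose = 100.0        # absorbed dose (arbitrary units)

    def body_burden(t: float) -> float:
        """Amount in the compartment at time t (the Bateman equation)."""
        return dose * ka / (ka - ke) * (math.exp(-ke * t) - math.exp(-ka * t))

    print(math.log(2) / ke)                 # elimination half-life: ~6.9 hours
    for t in (1, 4, 8, 24, 48):
        print(t, round(body_burden(t), 1))  # burden rises, peaks, then decays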
Different Levels and Systems Can Be Adversely Affected
Toxicity can be described at different biological levels. Injury can be evaluated in the whole person (or animal), the organ system, the cell or the molecule. Organ systems include the immune, respiratory, cardiovascular, renal, endocrine, digestive, musculoskeletal, blood, reproductive and central nervous systems. Some key organs include the liver, kidney, lung, brain, skin, eyes, heart, testes or ovaries, and other major organs. At the cellular/biochemical level, adverse effects include interference with normal protein function, endocrine receptor function, metabolic energy inhibition, or xenobiotic (foreign substance) enzyme inhibition or induction. Adverse effects at the molecular level include alteration of the normal function of DNA-RNA transcription, of specific cytoplasmic and nuclear receptor binding, and of genes or gene products. Ultimately, dysfunction in a major organ system is likely caused by a molecular alteration in a particular target cell within that organ. However, it is not always possible to trace a mechanism back to a molecular origin of causation, nor is it necessary. Intervention and therapy can be designed without a complete understanding of the molecular target. However, knowledge about the specific mechanism of toxicity increases the predictive value and accuracy of extrapolation to other chemicals. Figure 2 is a diagrammatic representation of the various levels where interference of normal physiological processes can be detected. The arrows indicate that the consequences to an individual can be determined from top down (exposure, pharmacokinetics to system/organ toxicity) or from bottom up (molecular change, cellular/biochemical effect to system/organ toxicity).
Figure 2. Representation of mechanisms of toxicity
Examples of Mechanisms of Toxicity
Mechanisms of toxicity can be straightforward or very complex. Frequently, the type of toxicity, the mechanism of toxicity and the level of effect differ according to whether the adverse effects are due to a single, acute high dose (as in an accidental poisoning) or to lower-dose repeated exposure (as in occupational or environmental exposure). Classically, for testing purposes, an acute, single high dose is given by direct intubation into the stomach of a rodent or by exposure to an atmosphere of a gas or vapour for two to four hours, whichever best resembles the human exposure. The animals are observed over a two-week period following exposure, and then the major external and internal organs are examined for injury. Repeated-dose testing ranges from months to years. For rodent species, two years is considered a chronic (lifetime) study sufficient to evaluate toxicity and carcinogenicity, whereas for non-human primates, two years would be considered a subchronic (less than lifetime) study to evaluate repeated-dose toxicity. Following exposure, a complete examination of all tissues, organs and fluids is conducted to determine any adverse effects.
Acute Toxicity Mechanisms
The following examples are specific to high-dose, acute effects which can lead to death or severe incapacitation. However, in some cases, intervention will result in transient and fully reversible effects. The dose or severity of exposure will determine the result.
Simple asphyxiants. The mechanism of toxicity for inert gases and some other non-reactive substances is lack of oxygen (anoxia). These chemicals, which cause deprivation of oxygen to the central nervous system (CNS), are termed simple asphyxiants. If a person enters a closed space that contains nitrogen without sufficient oxygen, immediate oxygen depletion occurs in the brain and leads to unconsciousness and eventual death if the person is not rapidly removed. In extreme cases (near zero oxygen) unconsciousness can occur in a few seconds. Rescue depends on rapid removal to an oxygenated environment. Survival with irreversible brain damage can occur from delayed rescue, due to the death of neurons, which cannot regenerate.
Chemical asphyxiants. Carbon monoxide (CO) competes with oxygen for binding to haemoglobin (in red blood cells) and therefore deprives tissues of oxygen for energy metabolism; cellular death can result. Intervention includes removal from the source of CO and treatment with oxygen. The direct use of oxygen is based on the toxic action of CO. Another potent chemical asphyxiant is cyanide. The cyanide ion interferes with cellular metabolism and utilization of oxygen for energy. Treatment with sodium nitrite causes a change in haemoglobin in red blood cells to methaemoglobin. Methaemoglobin has a greater binding affinity to the cyanide ion than does the cellular target of cyanide. Consequently, the methaemoglobin binds the cyanide and keeps the cyanide away from the target cells. This forms the basis for antidotal therapy.
Central nervous system (CNS) depressants. Acute toxicity is characterized by sedation or unconsciousness for a number of materials like solvents which are not reactive or which are transformed to reactive intermediates. It is hypothesized that sedation/anaesthesia is due to an interaction of the solvent with the membranes of cells in the CNS, which impairs their ability to transmit electrical and chemical signals. While sedation may seem a mild form of toxicity and was the basis for development of the early anaesthetics, “the dose still makes the poison”. If sufficient dose is administered by ingestion or inhalation the animal can die due to respiratory arrest. If anaesthetic death does not occur, this type of toxicity is usually readily reversible when the subject is removed from the environment or the chemical is redistributed or eliminated from the body.
Skin effects. Adverse effects to the skin can range from irritation to corrosion, depending on the substance encountered. Strong acids and alkaline solutions are incompatible with living tissue and are corrosive, causing chemical burns and possible scarring. Scarring is due to death of the dermal, deep skin cells responsible for regeneration. Lower concentrations may just cause irritation of the first layer of skin.
Another specific toxic mechanism of skin is that of chemical sensitization. As an example, sensitization occurs when 2,4-dinitrochlorobenzene binds with natural proteins in the skin and the immune system recognizes the altered protein-bound complex as a foreign material. In responding to this foreign material, the immune system activates special cells to eliminate the foreign substance by release of mediators (cytokines) which cause a rash or dermatitis (see “Immunotoxicology”). This is the same reaction of the immune system when exposure to poison ivy occurs. Immune sensitization is very specific to the particular chemical and takes at least two exposures before a response is elicited. The first exposure sensitizes (sets up the cells to recognize the chemical), and subsequent exposures trigger the immune system response. Removal from contact and symptomatic therapy with steroid-containing anti-inflammatory creams are usually effective in treating sensitized individuals. In serious or refractory cases, a systemically acting immunosuppressant like prednisone is used in conjunction with topical treatment.
Lung sensitization. An immune sensitization response is elicited by toluene diisocyanate (TDI), but the target site is the lungs. TDI over-exposure in susceptible individuals causes lung oedema (fluid build-up), bronchial constriction and impaired breathing. This is a serious condition and requires removing the individual from potential subsequent exposures. Treatment is primarily symptomatic. Skin and lung sensitization follow a dose response. Exceeding the level set for occupational exposure can cause adverse effects.
Eye effects. Injury to the eye ranges from reddening of the outer layer (swimming-pool redness), to opacification of the cornea, to damage to the iris (the coloured part of the eye). Eye irritation tests are conducted when it is believed serious injury will not occur. Many of the mechanisms causing skin corrosion can also cause injury to the eyes. Materials corrosive to the skin, like strong acids (pH less than 2) and alkali (pH greater than 11.5), are not tested in the eyes of animals because most will cause corrosion and blindness due to a mechanism similar to that which causes skin corrosion. In addition, surface active agents like detergents and surfactants can cause eye injury ranging from irritation to corrosion. A group of materials that requires caution is the positively charged (cationic) surfactants, which can cause burns, permanent opacity of the cornea and vascularization (formation of blood vessels). Another chemical, dinitrophenol, has a specific effect of cataract formation. This appears to be related to the concentration of this chemical in the eye, which is an example of pharmacokinetic distributional specificity.
While the listing above is far from exhaustive, it is designed to give the reader an appreciation for various acute toxicity mechanisms.
Subchronic and Chronic Toxicity Mechanisms
When given as a single high dose, some chemicals do not have the same mechanism of toxicity as when given repeatedly as a lower but still toxic dose. When a single high dose is given, there is always the possibility of exceeding the person’s ability to detoxify or excrete the chemical, and this can lead to a different toxic response than when lower repetitive doses are given. Alcohol is a good example. High doses of alcohol lead to primary central nervous system effects, while lower repetitive doses result in liver injury.
Cholinesterase inhibition. Most organophosphate pesticides, for example, have little mammalian toxicity until they are metabolically activated, primarily in the liver. The primary mechanism of action of organophosphates is the inhibition of acetylcholinesterase (AChE) in the brain and peripheral nervous system. AChE is the enzyme that normally terminates the stimulation produced by the neurotransmitter acetylcholine. Slight inhibition of AChE over an extended period has not been associated with adverse effects. At high levels of exposure, inability to terminate this neuronal stimulation results in overstimulation of the cholinergic nervous system. Cholinergic overstimulation ultimately results in a host of symptoms, including respiratory arrest, followed by death if not treated. The primary treatment is the administration of atropine, which blocks the effects of acetylcholine, and the administration of pralidoxime chloride, which reactivates the inhibited AChE. Therefore, both the cause and the treatment of organophosphate toxicity are addressed by understanding the biochemical basis of toxicity.
Metabolic activation. Many chemicals, including carbon tetrachloride, chloroform, acetylaminofluorene, nitrosamines, and paraquat are metabolically activated to free radicals or other reactive intermediates which inhibit and interfere with normal cellular function. At high levels of exposure this results in cell death (see “Cellular injury and cellular death”). While the specific interactions and cellular targets remain unknown, the organ systems which have the capability to activate these chemicals, like the liver, kidney and lung, are all potential targets for injury. Specifically, particular cells within an organ have a greater or lesser capacity to activate or detoxify these intermediates, and this capacity determines the intracellular susceptibility within an organ. Metabolism is one reason why an understanding of pharmacokinetics, which describes these types of transformations and the distribution and elimination of these intermediates, is important in recognizing the mechanism of action of these chemicals.
Cancer mechanisms. Cancer is a multiplicity of diseases, and while the understanding of certain types of cancer is increasing rapidly owing to the many molecular biological techniques developed since 1980, there is still much to learn. However, it is clear that cancer development is a multi-stage process, and critical genes are key to different types of cancer. Alterations in DNA (somatic mutations) in a number of these critical genes can cause increased susceptibility or cancerous lesions (see “Genetic toxicology”). Exposures to natural chemicals (in cooked foods like beef and fish), to synthetic chemicals (like benzidine, used as a dye) or to physical agents (ultraviolet light from the sun, radon from soil, gamma radiation from medical procedures or industrial activity) all contribute to somatic gene mutations. However, there are natural and synthetic substances (such as anti-oxidants) and DNA repair processes which are protective and maintain homeostasis. It is clear that genetics is an important factor in cancer, since genetic disease syndromes such as xeroderma pigmentosum, in which normal DNA repair is lacking, dramatically increase susceptibility to skin cancer from exposure to ultraviolet light from the sun.
Reproductive mechanisms. As with cancer, many mechanisms of reproductive and/or developmental toxicity are known, but much remains to be learned. It is known that certain viruses (such as rubella), bacterial infections and drugs (such as thalidomide and vitamin A) adversely affect development. Recently, work by Khera (1991), reviewed by Carney (1994), shows good evidence that the abnormal developmental effects seen in animal tests with ethylene glycol are attributable to acidic metabolites formed in the mother. This occurs when ethylene glycol is metabolized to acid metabolites, including glycolic and oxalic acid. The subsequent effects on the placenta and foetus appear to be due to this metabolic toxication process.
Conclusion
The intent of this article is to give a perspective on several known mechanisms of toxicity and the need for future study. It is important to understand that mechanistic knowledge is not absolutely necessary to protect human or environmental health. This knowledge will enhance the professional’s ability to better predict and manage toxicity. The actual techniques used in elucidating any particular mechanism depend upon the collective knowledge of the scientists and the thinking of those who make decisions regarding human health.
Virtually all of medicine is devoted to either preventing cell death, in diseases such as myocardial infarction, stroke, trauma and shock, or causing it, as in the case of infectious diseases and cancer. It is, therefore, essential to understand the nature and mechanisms involved. Cell death has been classified as “accidental”, that is, caused by toxic agents, ischaemia and so on, or “programmed”, as occurs during embryological development, including formation of digits, and resorption of the tadpole tail.
Cell injury and cell death are, therefore, important both in physiology and in pathophysiology. Physiological cell death is extremely important during embryogenesis and embryonic development. The study of cell death during development has led to important and new information on the molecular genetics involved, especially through the study of development in invertebrate animals. In these animals, the precise location and the significance of cells that are destined to undergo cell death have been carefully studied and, with the use of classic mutagenesis techniques, several involved genes have now been identified. In adult organs, the balance between cell death and cell proliferation controls organ size. In some organs, such as the skin and the intestine, there is a continual turnover of cells. In the skin, for example, cells differentiate as they reach the surface, and finally undergo terminal differentiation and cell death as keratinization proceeds with the formation of crosslinked envelopes.
Many classes of toxic chemicals are capable of inducing acute cell injury followed by death. These include anoxia and ischaemia and their chemical analogues, such as potassium cyanide; chemical carcinogens, which form electrophiles that covalently bind to proteins and nucleic acids; oxidant chemicals, resulting in free radical formation and oxidant injury; activators of complement; and a variety of calcium ionophores. Cell death is also an important component of chemical carcinogenesis; many complete chemical carcinogens, at carcinogenic doses, produce acute necrosis and inflammation followed by regeneration and preneoplasia.
Definitions
Cell injury
Cell injury is defined as an event or stimulus, such as a toxic chemical, that perturbs the normal homeostasis of the cell, thus causing a number of events to occur (figure 1). The principal targets of lethal injury illustrated are inhibition of ATP synthesis, disruption of plasma membrane integrity or withdrawal of essential growth factors.
Lethal injuries result in the death of a cell after a variable period of time, depending on temperature, cell type and the stimulus; or they can be sublethal or chronic, that is, the injury results in an altered homeostatic state which, though abnormal, does not result in cell death (Trump and Arstila 1971; Trump and Berezesky 1992; Trump and Berezesky 1995; Trump, Berezesky and Osornio-Vargas 1981). In the case of a lethal injury, there is a phase prior to the time of cell death during which, if the injury is removed, the cell will recover. After a particular point in time (the “point of no return” or point of cell death), however, removal of the injury does not result in recovery; instead, the cell undergoes degradation and hydrolysis, ultimately reaching physical-chemical equilibrium with the environment. This is the phase known as necrosis. During the prelethal phase, several principal types of change occur, depending on the cell and the type of injury. These are known as apoptosis and oncosis.
Apoptosis
Apoptosis is derived from the Greek words apo, meaning away from, and ptosis, meaning to fall. The term reflects the fact that, during this type of prelethal change, the cells shrink and undergo marked blebbing at the periphery; the blebs then detach and float away. Apoptosis occurs in a variety of cell types following various types of toxic injury (Wyllie, Kerr and Currie 1980). It is especially prominent in lymphocytes, where it is the predominant mechanism for turnover of lymphocyte clones. The resulting fragments give rise to the basophilic bodies seen within macrophages in lymph nodes. In other organs, apoptosis typically occurs in single cells, which are rapidly cleared away before and following death by phagocytosis of the fragments by adjacent parenchymal cells or by macrophages. Apoptosis occurring in single cells with subsequent phagocytosis typically does not result in inflammation. Prior to death, apoptotic cells show a very dense cytosol with normal or condensed mitochondria. The endoplasmic reticulum (ER) is normal or only slightly dilated. The nuclear chromatin is markedly clumped along the nuclear envelope and around the nucleolus. The nuclear contour is also irregular, and nuclear fragmentation occurs. The chromatin condensation is associated with DNA fragmentation which, in many instances, occurs between nucleosomes, giving a characteristic ladder appearance on electrophoresis.
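The ladder arises directly from chromatin geometry: if cleavage is confined to the linker DNA between nucleosomes, every fragment length is a whole multiple of the nucleosomal repeat. A toy sketch in Python (the repeat length of 185 base pairs is an assumed, typical value; the true repeat varies with tissue and species):

    # Why internucleosomal cleavage yields a "ladder" on electrophoresis:
    # cutting only in linker DNA gives fragment lengths that are whole
    # multiples of the nucleosomal repeat. 185 bp is an assumed value.
    NUCLEOSOME_REPEAT_BP = 185

    ladder = [n * NUCLEOSOME_REPEAT_BP for n in range(1, 7)]
    print(ladder)  # [185, 370, 555, 740, 925, 1110]: evenly spaced rungs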
In apoptosis, increased [Ca2+]i may stimulate K+ efflux, resulting in cell shrinkage, a process which probably requires ATP. Injuries that totally inhibit ATP synthesis are therefore more likely to result in oncosis. A sustained increase of [Ca2+]i has a number of deleterious effects, including activation of proteases, endonucleases and phospholipases. Endonuclease activation results in single and double DNA strand breaks which, in turn, stimulate increased levels of p53 and increased poly-ADP ribosylation of the nuclear proteins that are essential in DNA repair. Activation of proteases modifies a number of substrates, including actin and related proteins, leading to bleb formation. Another important substrate is poly(ADP-ribose) polymerase (PARP), the cleavage of which impairs DNA repair. Increased [Ca2+]i is also associated with activation of a number of protein kinases, such as MAP kinase, calmodulin kinase and others. Such kinases are involved in the activation of transcription factors which initiate transcription of immediate-early genes, for example c-fos, c-jun and c-myc, and in the activation of phospholipase A2, which results in permeabilization of the plasma membrane and of intracellular membranes such as the inner membrane of the mitochondria.
Oncosis
Oncosis, derived from the Greek word onkos, meaning swelling, is so named because in this type of prelethal change the cell begins to swell almost immediately following the injury (Majno and Joris 1995). The reason for the swelling is an increase in cations in the water within the cell. The principal cation responsible is sodium, which is normally regulated to maintain cell volume. In the absence of ATP, however, or if the Na-ATPase of the plasmalemma is inhibited, volume control is lost; because of the osmotic effect of intracellular protein, sodium and water within the cell then continue to increase. Among the early events in oncosis are, therefore, increased [Na+]i, which leads to cellular swelling, and increased [Ca2+]i, resulting either from influx from the extracellular space or from release from intracellular stores. This results in swelling of the cytosol, swelling of the endoplasmic reticulum and Golgi apparatus, and the formation of watery blebs around the cell surface. The mitochondria initially undergo condensation, but later they too show high-amplitude swelling because of damage to the inner mitochondrial membrane. In this type of prelethal change, the chromatin undergoes condensation and ultimately degradation; however, the characteristic ladder pattern of apoptosis is not seen.
Necrosis
Necrosis refers to the series of changes that occur following cell death, when the cell is converted to debris which is typically removed by the inflammatory response. Two types can be distinguished: oncotic necrosis and apoptotic necrosis. Oncotic necrosis typically occurs in large zones, for example, in a myocardial infarct or regionally in an organ after chemical toxicity, such as the renal proximal tubule following administration of HgCl2. Broad zones of an organ are involved, and the necrotic cells rapidly incite an inflammatory reaction, first acute and then chronic. In the event that the organism survives, in many organs necrosis is followed by clearing away of the dead cells and by regeneration, for example, in the liver or kidney following chemical toxicity. In contrast, apoptotic necrosis typically occurs on a single-cell basis, and the necrotic debris is formed within the phagosomes of macrophages or adjacent parenchymal cells. The earliest characteristics of necrotic cells include interruptions in plasma membrane continuity and the appearance of flocculent densities, representing denatured proteins, within the mitochondrial matrix. In some forms of injury that do not initially interfere with mitochondrial calcium accumulation, calcium phosphate deposits can be seen within the mitochondria. Other membrane systems, such as the ER, the lysosomes and the Golgi apparatus, are similarly fragmented. Ultimately, the nuclear chromatin undergoes lysis, resulting from attack by lysosomal hydrolases. Following cell death, lysosomal hydrolases, such as the cathepsins, nucleases and lipases, play an important part in clearing away debris, since they have an acid pH optimum and can survive the low pH of necrotic cells while other cellular enzymes are denatured and inactivated.
Mechanisms
Initial stimulus
In the case of lethal injuries, the most common initial interactions resulting in injury leading to cell death are interference with energy metabolism, whether by anoxia, ischaemia or inhibitors of respiration and glycolysis (such as potassium cyanide, carbon monoxide and iodoacetate). As mentioned above, high doses of compounds that inhibit energy metabolism typically result in oncosis. The other common type of initial injury resulting in acute cell death is modification of the function of the plasma membrane (Trump and Arstila 1971; Trump, Berezesky and Osornio-Vargas 1981). This can take the form of direct damage and permeabilization, as in the case of trauma or activation of the C5b-C9 complex of complement; mechanical damage to the cell membrane; or inhibition of the sodium-potassium (Na+-K+) pump with glycosides such as ouabain. Calcium ionophores such as ionomycin or A23187, which rapidly carry [Ca2+] down the gradient into the cell, also cause acute lethal injury. In some cases, the pattern in the prelethal change is apoptosis; in others, it is oncosis.
Signalling pathways
With many types of injury, mitochondrial respiration and oxidative phosphorylation are rapidly affected. In some cells, this stimulates anaerobic glycolysis, which is capable of maintaining ATP, but with many injuries this is inhibited. The lack of ATP results in failure to energize a number of important homeostatic processes, in particular, control of intracellular ion homeostasis (Trump and Berezesky 1992; Trump, Berezesky and Osornio-Vargas 1981). This results in rapid increases of [Ca2+]i, and increased [Na+] and [Cl-] results in cell swelling. Increases in [Ca2+]i result in the activation of a number of other signalling mechanisms discussed below, including a series of kinases, which can result in increased immediate early gene transcription. Increased [Ca2+]i also modifies cytoskeletal function, in part resulting in bleb formation, and activates endonucleases, proteases and phospholipases. These seem to trigger many of the important effects discussed above, such as membrane damage through protease and lipase activation, direct degradation of DNA through endonuclease activation, and activation of kinases such as MAP kinase and calmodulin kinase, which in turn act on transcription factors.
Through extensive work on development in the invertebrates C. elegans and Drosophila, as well as in human and animal cells, a series of pro-death genes has been identified. Some of these invertebrate genes have been found to have mammalian counterparts. For example, the ced-3 gene, which is essential for programmed cell death in C. elegans, has protease activity and a strong homology with the mammalian interleukin-1β converting enzyme (ICE). A closely related gene called apopain or prICE has recently been identified with even closer homology (Nicholson et al. 1995). In Drosophila, the reaper gene seems to be involved in a signal that leads to programmed cell death. Other pro-death genes include the Fas membrane protein and the important tumour-suppressor gene p53, which is widely conserved. p53 is induced at the protein level following DNA damage and, when phosphorylated, acts as a transcription factor for other genes such as gadd45 and waf-1, which are involved in cell death signalling. Other immediate early genes, such as c-fos, c-jun and c-myc, also seem to be involved in some systems.
At the same time, there are anti-death genes which appear to counteract the pro-death genes. The first of these to be identified was ced-9 from C. elegans, which is homologous to bcl-2 in humans. These genes act in an as yet unknown way to prevent cell killing by either genetic or chemical toxins. Some recent evidence indicates that bcl-2 may act as an antioxidant. Currently, there is much effort underway to develop an understanding of the genes involved and to develop ways to activate or inhibit these genes, depending on the situation.
Genetic toxicology, by definition, is the study of how chemical or physical agents affect the intricate process of heredity. Genotoxic chemicals are defined as compounds that are capable of modifying the hereditary material of living cells. The probability that a particular chemical will cause genetic damage inevitably depends on several variables, including the organism’s level of exposure to the chemical, the distribution and retention of the chemical once it enters the body, the efficiency of metabolic activation and/or detoxification systems in target tissues, and the reactivity of the chemical or its metabolites with critical macromolecules within cells. The probability that genetic damage will cause disease ultimately depends on the nature of the damage, the cell’s ability to repair or amplify genetic damage, the opportunity for expressing whatever alteration has been induced, and the ability of the body to recognize and suppress the multiplication of aberrant cells.
In higher organisms, hereditary information is organized in chromosomes. Chromosomes consist of tightly condensed strands of protein-associated DNA. Within a single chromosome, each DNA molecule exists as a pair of long, unbranched chains of nucleotide subunits linked together by phosphodiester bonds that join the 5′ carbon of one deoxyribose moiety to the 3′ carbon of the next (figure 1). In addition, one of four different nucleotide bases (adenine, cytosine, guanine or thymine) is attached to each deoxyribose subunit like beads on a string. Three-dimensionally, each pair of DNA strands forms a double helix with all of the bases oriented toward the inside of the spiral. Within the helix, each base is associated with its complementary base on the opposite DNA strand; hydrogen bonding dictates strong, noncovalent pairing of adenine with thymine and guanine with cytosine (figure 1). Since the sequence of nucleotide bases is complementary throughout the entire length of the duplex DNA molecule, both strands carry essentially the same genetic information. In fact, during DNA replication each strand serves as a template for the production of a new partner strand.
Figure 1. The (a) primary, (b) secondary and (c) tertiary organization of human hereditary information
Using RNA and an array of different proteins, the cell ultimately deciphers the information encoded by the linear sequence of bases within specific regions of DNA (genes) and produces proteins that are essential for basic cell survival as well as normal growth and differentiation. In essence, the nucleotides function like a biological alphabet which is used to code for amino acids, the building blocks of proteins.
When incorrect nucleotides are inserted or nucleotides are lost, or when unnecessary nucleotides are added during DNA synthesis, the mistake is called a mutation. It has been estimated that less than one mutation occurs for every 10⁹ nucleotides incorporated during the normal replication of cells. Although mutations are not necessarily harmful, alterations causing inactivation or overexpression of important genes can result in a variety of disorders, including cancer, hereditary disease, developmental abnormalities, infertility and embryonic or perinatal death. Very rarely, a mutation can lead to enhanced survival; such occurrences are the basis of natural selection.
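To put this fidelity into perspective, the following minimal calculation illustrates the scale involved; the genome size used is an approximate figure for the human diploid genome, introduced here only for illustration and not taken from the text above:

```python
# Rough illustration of replication fidelity; figures are assumptions for scale.
error_rate = 1e-9          # upper bound: fewer than 1 mutation per 10^9 nucleotides
diploid_genome_nt = 6e9    # approximate size of the human diploid genome

expected_mutations = error_rate * diploid_genome_nt
print(f"At most about {expected_mutations:.0f} new mutations per cell division")
# -> At most about 6 new mutations per cell division
```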
Although some chemicals react directly with DNA, most require metabolic activation. In the latter case, electrophilic intermediates such as epoxides or carbonium ions are ultimately responsible for inducing lesions at a variety of nucleophilic sites within the genetic material (figure 2). In other instances, genotoxicity is mediated by by-products of compound interaction with intracellular lipids, proteins, or oxygen.
Figure 2. Bioactivation of: a) benzo(a)pyrene; and b) N-nitrosodimethylamine
Because of their relative abundance in cells, proteins are the most frequent target of toxicant interaction. However, modification of DNA is of greater concern due to the central role of this molecule in regulating growth and differentiation through multiple generations of cells.
At the molecular level, electrophilic compounds tend to attack oxygen and nitrogen in DNA. The sites that are most prone to modification are illustrated in figure 3. Although oxygens within phosphate groups in the DNA backbone are also targets for chemical modification, damage to bases is thought to be biologically more relevant since these groups are considered to be the primary informational elements in the DNA molecule.
Figure 3. Primary sites of chemically-induced DNA damage
Compounds that contain one electrophilic moiety typically exert genotoxicity by producing mono-adducts in DNA. Similarly, compounds that contain two or more reactive moieties can react with two different nucleophilic centres and thereby produce intra- or inter-molecular crosslinks in genetic material (figure 4). Interstrand DNA-DNA and DNA-protein crosslinks can be particularly cytotoxic since they can form complete blocks to DNA replication. For obvious reasons, the death of a cell eliminates the possibility that it will be mutated or neoplastically transformed. Genotoxic agents can also act by inducing breaks in the phosphodiester backbone, or between bases and sugars (producing abasic sites) in DNA. Such breaks may be a direct result of chemical reactivity at the damage site, or may occur during the repair of one of the aforementioned types of DNA lesion.
Figure 4. Various types of damage to the protein-DNA complex
Over the past thirty to forty years, a variety of techniques have been developed to monitor the type of genetic damage induced by various chemicals. Such assays are described in detail elsewhere in this chapter and Encyclopaedia.
Misreplication of “microlesions” such as mono-adducts, abasic sites or single-strand breaks may ultimately result in nucleotide base-pair substitutions, or the insertion or deletion of short polynucleotide fragments in chromosomal DNA. In contrast, “macrolesions,” such as bulky adducts, crosslinks, or double-strand breaks may trigger the gain, loss or rearrangement of relatively large pieces of chromosomes. In any case, the consequences can be devastating to the organism since any one of these events can lead to cell death, loss of function or malignant transformation of cells. Exactly how DNA damage causes cancer is largely unknown. It is currently believed the process may involve inappropriate activation of proto-oncogenes such as myc and ras, and/or inactivation of recently identified tumour suppressor genes such as p53. Abnormal expression of either type of gene abrogates normal cellular mechanisms for controlling cell proliferation and/or differentiation.
The preponderance of experimental evidence indicates that the development of cancer following exposure to electrophilic compounds is a relatively rare event. This can be explained, in part, by the cell’s intrinsic ability to recognize and repair damaged DNA, or by the failure of cells with damaged DNA to survive. During repair, the damaged base, nucleotide or short stretch of nucleotides surrounding the damage site is removed and (using the opposite strand as a template) a new piece of DNA is synthesized and spliced into place. To be effective, DNA repair must occur with great accuracy prior to cell division, before the damage has an opportunity to be propagated as mutation.
Clinical studies have shown that people with inherited defects in the ability to repair damaged DNA frequently develop cancer and/or developmental abnormalities at an early age (table 1). Such examples provide strong evidence linking accumulation of DNA damage to human disease. Similarly, agents that promote cell proliferation (such as tetradecanoylphorbol acetate) often enhance carcinogenesis. For these compounds, the increased likelihood of neoplastic transformation may be a direct consequence of a decrease in the time available for the cell to carry out adequate DNA repair.
Table 1. Hereditary, cancer-prone disorders that appear to involve defects in DNA repair
Syndrome | Symptoms | Cellular phenotype |
Ataxia telangiectasia | Neurological deterioration; immunodeficiency; high incidence of lymphoma | Hypersensitivity to ionizing radiation and certain alkylating agents; dysregulated replication of damaged DNA (may indicate shortened time for DNA repair) |
Bloom’s syndrome | Developmental abnormalities; lesions on exposed skin; high incidence of tumours of the immune system and gastrointestinal tract | High frequency of chromosomal aberrations; defective ligation of breaks associated with DNA repair |
Fanconi’s anaemia | Growth retardation; high incidence of leukaemia | Hypersensitivity to crosslinking agents; high frequency of chromosomal aberrations; defective repair of crosslinks in DNA |
Hereditary nonpolyposis colon cancer | High incidence of colon cancer | Defect in DNA mismatch repair (when insertion of wrong nucleotide occurs during replication) |
Xeroderma pigmentosum | High incidence of epithelioma on exposed areas of skin; neurological impairment (in many cases) | Hypersensitivity to UV light and many chemical carcinogens; defects in excision repair and/or replication of damaged DNA |
The earliest theories on how chemicals interact with DNA can be traced back to studies conducted during the development of mustard gas for use in warfare. Further understanding grew out of efforts to identify anticancer agents that would selectively arrest the replication of rapidly dividing tumour cells. Increased public concern over hazards in our environment has prompted additional research into the mechanisms and consequences of chemical interaction with the genetic material. Examples of various types of chemicals which exert genotoxicity are presented in table 2.
Table 2. Examples of chemicals that exhibit genotoxicity in human cells
Class of chemical | Example | Source of exposure | Probable genotoxic lesion |
Aflatoxins | Aflatoxin B1 | Contaminated food | Bulky DNA adducts |
Aromatic amines | 2-Acetylaminofluorene | Environmental | Bulky DNA adducts |
Aziridine quinones | Mitomycin C | Cancer chemotherapy | Mono-adducts, interstrand crosslinks and single-strand breaks in DNA. |
Chlorinated hydrocarbons | Vinyl chloride | Environmental | Mono-adducts in DNA |
Metals and metal compounds | Cisplatin | Cancer chemotherapy | Both intra- and inter-strand crosslinks in DNA |
Metals and metal compounds | Nickel compounds | Environmental | Mono-adducts and single-strand breaks in DNA |
Nitrogen mustards | Cyclophosphamide | Cancer chemotherapy | Mono-adducts and interstrand crosslinks in DNA |
Nitrosamines | N-Nitrosodimethylamine | Contaminated food | Mono-adducts in DNA |
Polycyclic aromatic hydrocarbons | Benzo(a)pyrene | Environmental | Bulky DNA adducts |
The functions of the immune system are to protect the body from invading infectious agents and to provide immune surveillance against arising tumour cells. It has a first line of defence that is non-specific and that can initiate effector reactions itself, and an acquired specific branch, in which lymphocytes and antibodies carry the specificity of recognition and subsequent reactivity towards the antigen.
Immunotoxicology has been defined as “the discipline concerned with the study of the events that can lead to undesired effects as a result of interaction of xenobiotics with the immune system. These undesired events may result as a consequence of (1) a direct and/or indirect effect of the xenobiotic (and/or its biotransformation product) on the immune system, or (2) an immunologically based host response to the compound and/or its metabolite(s), or host antigens modified by the compound or its metabolites” (Berlin et al. 1987).
When the immune system acts as a passive target of chemical insults, the result can be decreased resistance to infection and to certain forms of neoplasia, or immune dysregulation/stimulation that can exacerbate allergy or autoimmunity. When the immune system responds to the antigenic specificity of the xenobiotic or of a host antigen modified by the compound, toxicity can become manifest as allergies or autoimmune diseases.
Animal models to investigate chemical-induced immune suppression have been developed, and a number of these methods are validated (Burleson, Munson, and Dean 1995; IPCS 1996). For testing purposes, a tiered approach is followed to make an adequate selection from the overwhelming number of assays available. Generally, the objective of the first tier is to identify potential immunotoxicants. If potential immunotoxicity is identified, a second tier of testing is performed to confirm and characterize further the changes observed. Third-tier investigations include special studies on the mechanism of action of the compound. Several xenobiotics have been identified as immunotoxicants causing immunosuppression in such studies with laboratory animals.
The database on immune function disturbances in humans by environmental chemicals is limited (Descotes 1986; NRC Subcommittee on Immunotoxicology 1992). The use of markers of immunotoxicity has received little attention in clinical and epidemiological studies to investigate the effect of these chemicals on human health. Such studies have not been performed frequently, and their interpretation often does not permit unequivocal conclusions to be drawn, due for instance to the uncontrolled nature of exposure. Therefore, at present, immunotoxicity assessment in rodents, with subsequent extrapolation to man, forms the basis of decisions regarding hazard and risk.
Hypersensitivity reactions, notably allergic asthma and contact dermatitis, are important occupational health problems in industrialized countries (Vos, Younes and Smith 1995). The phenomenon of contact sensitization was investigated first in the guinea pig (Andersen and Maibach 1985). Until recently this has been the species of choice for predictive testing. Many guinea pig test methods are available, the most frequently employed being the guinea pig maximization test and the occluded patch test of Buehler. Guinea pig tests and newer approaches developed in mice, such as ear swelling tests and the local lymph node assay, provide the toxicologist with the tools to assess skin sensitization hazard. The situation with respect to sensitization of the respiratory tract is very different. There are, as yet, no well-validated or widely accepted methods available for the identification of chemical respiratory allergens although progress in the development of animal models for the investigation of chemical respiratory allergy has been achieved in the guinea pig and mouse.
Human data show that chemical agents, in particular drugs, can cause autoimmune diseases (Kammüller, Bloksma and Seinen 1989). There are a number of experimental animal models of human autoimmune diseases. These include both spontaneous pathology (for example, systemic lupus erythematosus in New Zealand Black mice) and autoimmune phenomena induced by experimental immunization with a cross-reactive autoantigen (for example, H37Ra adjuvant-induced arthritis in Lewis strain rats). These models are applied in the preclinical evaluation of immunosuppressive drugs. Very few studies have addressed the potential of these models for assessing whether a xenobiotic exacerbates induced or congenital autoimmunity. Animal models that are suitable for investigating the ability of chemicals to induce autoimmune diseases are virtually lacking. One model that is used to a limited extent is the popliteal lymph node assay in mice. As in humans, genetic factors play a crucial role in the development of autoimmune disease (AD) in laboratory animals, which will limit the predictive value of such tests.
The Immune System
The major function of the immune system is defence against bacteria, viruses, parasites, fungi and neoplastic cells. This is achieved by the actions of various cell types and their soluble mediators in a finely tuned concert. The host defence can be roughly divided into non-specific or innate resistance and specific or acquired immunity mediated by lymphocytes (Roitt, Brostoff and Male 1989).
Components of the immune system are present throughout the body (Jones et al. 1990). The lymphocyte compartment is found within lymphoid organs (figure 1). The bone marrow and thymus are classified as primary or central lymphoid organs; the secondary or peripheral lymphoid organs include lymph nodes, spleen and lymphoid tissue along secretory surfaces such as the gastrointestinal and respiratory tracts, the so-called mucosa-associated lymphoid tissue (MALT). About half of the body’s lymphocytes are located at any one time in MALT. In addition the skin is an important organ for the induction of immune responses to antigens present on the skin. Important in this process are epidermal Langerhans cells that have an antigen-presenting function.
Figure 1. Primary and secondary lymphoid organs and tissues
Phagocytic cells of the monocyte/macrophage lineage, called the mononuclear phagocyte system (MPS), occur in lymphoid organs and also at extranodal sites; the extranodal phagocytes include Kupffer cells in the liver, alveolar macrophages in the lung, mesangial macrophages in the kidney and glial cells in the brain. Polymorphonuclear leukocytes (PMNs) are present mainly in blood and bone marrow, but accumulate at sites of inflammation.
Non-specific defence
A first line of defence to micro-organisms is executed by a physical and chemical barrier, such as at the skin, the respiratory tract and the alimentary tract. This barrier is helped by non-specific protective mechanisms including phagocytic cells, such as macrophages and polymorphonuclear leukocytes, which are able to kill pathogens, and natural killer cells, which can lyse tumour cells and virus-infected cells. The complement system and certain microbial inhibitors (e.g., lysozyme) also take part in the non-specific response.
Specific immunity
After initial contact of the host with the pathogen, specific immune responses are induced. The hallmark of this second line of defence is specific recognition of determinants, so-called antigens or epitopes, of the pathogens by receptors on the cell surface of B- and T-lymphocytes. Following interaction with the specific antigen, the receptor-bearing cell is stimulated to undergo proliferation and differentiation, producing a clone of progeny cells that are specific for the eliciting antigen. The specific immune responses help the non-specific defence presented to the pathogens by stimulating the efficacy of the non-specific responses. A fundamental characteristic of specific immunity is that memory develops. Secondary contact with the same antigen provokes a faster and more vigorous but well-regulated response.
The genome does not have the capacity to carry the codes of an array of antigen receptors sufficient to recognize the number of antigens that can be encountered. The repertoire of specificity develops by a process of gene rearrangements. This is a random process, during which various specificities are brought about. This includes specificities for self components, which are undesirable. A selection process that takes place in the thymus (T cells), or bone marrow (B cells) operates to delete these undesirable specificities.
Normal immune effector function and homeostatic regulation of the immune response is dependent upon a variety of soluble products, known collectively as cytokines, which are synthesized and secreted by lymphocytes and by other cell types. Cytokines have pleiotropic effects on immune and inflammatory responses. Cooperation between different cell populations is required for the immune response—the regulation of antibody responses, the accumulation of immune cells and molecules at inflammatory sites, the initiation of acute phase responses, the control of macrophage cytotoxic function and many other processes central to host resistance. These are influenced by, and in many cases are dependent upon, cytokines acting individually or in concert.
Two arms of specific immunity are recognized—humoral immunity and cell-mediated or cellular immunity:
Humoral immunity. In the humoral arm B-lymphocytes are stimulated following recognition of antigen by cell-surface receptors. Antigen receptors on B-lymphocytes are immunoglobulins (Ig). Mature B cells (plasma cells) start the production of antigen-specific immunoglobulins that act as antibodies in serum or along mucosal surfaces. There are five major classes of immunoglobulins: (1) IgM, pentameric Ig with optimal agglutinating capacity, which is first produced after antigenic stimulation; (2) IgG, the main Ig in circulation, which can pass the placenta; (3) IgA, secretory Ig for the protection of mucosal surfaces; (4) IgE, Ig fixing to mast cells or basophilic granulocytes involved in immediate hypersensitivity reactions and (5) IgD, whose major function is as a receptor on B-lymphocytes.
Cell-mediated immunity. The cellular arm of the specific immune system is mediated by T-lymphocytes. These cells also have antigen receptors on their membranes. They recognize antigen if presented by antigen presenting cells in the context of histocompatibility antigens. Hence, these cells have a restriction in addition to the antigen specificity. T cells function as helper cells for various (including humoral) immune responses, mediate recruitment of inflammatory cells, and can, as cytotoxic T cells, kill target cells after antigen-specific recognition.
Mechanisms of Immunotoxicity
Immunosuppression
Effective host resistance is dependent upon the functional integrity of the immune system, which in turn requires that the component cells and molecules which orchestrate immune responses are available in sufficient numbers and in an operational form. Congenital immunodeficiencies in humans are often characterized by defects in certain stem cell lines, resulting in impaired or absent production of immune cells. By analogy with congenital and acquired human immunodeficiency diseases, chemical-induced immunosuppression may result simply from a reduced number of functional cells (IPCS 1996). The absence, or reduced numbers, of lymphocytes may have more or less profound effects on immune status. Some immunodeficiency states and severe immunosuppression, as can occur in transplantation or cytostatic therapy, have been associated in particular with increased incidences of opportunistic infections and of certain neoplastic diseases. The infections can be bacterial, viral, fungal or protozoan, and the predominant type of infection depends on the associated immunodeficiency. Exposure to immunosuppressive environmental chemicals may be expected to result in more subtle forms of immunosuppression, which may be difficult to detect. These may lead, for example, to an increased incidence of infections such as influenza or the common cold.
In view of the complexity of the immune system, with the wide variety of cells, mediators and functions that form a complicated and interactive network, immunotoxic compounds have numerous opportunities to exert an effect. Although the nature of the initial lesions induced by many immunotoxic chemicals has not yet been elucidated, there is increasing information available, mostly derived from studies in laboratory animals, regarding the immunobiological changes which result in depression of immune function (Dean et al. 1994). Toxic effects might occur at any of a number of such critical functions of the immune system.
Allergy
Allergy may be defined as the adverse health effects which result from the induction and elicitation of specific immune responses. When hypersensitivity reactions occur without involvement of the immune system, the term pseudo-allergy is used. In the context of immunotoxicology, it is allergy resulting from a specific immune response to chemicals and drugs that is of interest. The ability of a chemical to sensitize individuals is generally related to its ability to bind covalently to body proteins. Allergic reactions may take a variety of forms, and these differ with respect to both the underlying immunological mechanisms and the speed of the reaction. Four major types of allergic reactions have been recognized: Type I hypersensitivity reactions, which are effected by IgE antibody and in which symptoms are manifest within minutes of exposure of the sensitized individual; Type II hypersensitivity reactions, which result from the damage or destruction of host cells by antibody, with symptoms becoming apparent within hours; Type III hypersensitivity, or Arthus, reactions, which are also antibody mediated, but against soluble antigen, and which result from the local or systemic action of immune complexes; and Type IV, or delayed-type, hypersensitivity reactions, which are effected by T-lymphocytes and in which symptoms normally develop 24 to 48 hours following exposure of the sensitized individual.
The two types of chemical allergy of greatest relevance to occupational health are contact sensitivity or skin allergy and allergy of the respiratory tract.
Contact hypersensitivity. A large number of chemicals are able to cause skin sensitization. Following topical exposure of a susceptible individual to a chemical allergen, a T-lymphocyte response is induced in the draining lymph nodes. In the skin the allergen interacts directly or indirectly with epidermal Langerhans cells, which transport the chemical to the lymph nodes and present it in an immunogenic form to responsive T-lymphocytes. Allergen-activated T-lymphocytes proliferate, resulting in clonal expansion. The individual is now sensitized and will respond to a second dermal exposure to the same chemical with a more aggressive immune response, resulting in allergic contact dermatitis. The cutaneous inflammatory reaction which characterizes allergic contact dermatitis is secondary to the recognition of the allergen in the skin by specific T-lymphocytes. These lymphocytes become activated, release cytokines and cause the local accumulation of other mononuclear leukocytes. Symptoms develop some 24 to 48 hours following exposure of the sensitized individual, and allergic contact dermatitis therefore represents a form of delayed-type hypersensitivity. Common causes of allergic contact dermatitis include organic chemicals (such as 2,4-dinitrochlorobenzene), metals (such as nickel and chromium) and plant products (such as urushiol from poison ivy).
Respiratory hypersensitivity. Respiratory hypersensitivity is usually considered to be a Type I hypersensitivity reaction. However, late-phase reactions and the more chronic symptoms associated with asthma may involve cell-mediated (Type IV) immune processes. The acute symptoms associated with respiratory allergy are effected by IgE antibody, the production of which is provoked following exposure of the susceptible individual to the inducing chemical allergen. The IgE antibody distributes systemically and binds, via membrane receptors, to mast cells which are found in vascularized tissues, including the respiratory tract. Following inhalation of the same chemical, a respiratory hypersensitivity reaction will be elicited. Allergen associates with protein and binds to, and cross-links, IgE antibody bound to mast cells. This in turn causes the degranulation of mast cells and the release of inflammatory mediators such as histamine and leukotrienes. Such mediators cause bronchoconstriction and vasodilation, resulting in the symptoms of respiratory allergy: asthma and/or rhinitis. Chemicals known to cause respiratory hypersensitivity in man include acid anhydrides (such as trimellitic anhydride), some diisocyanates (such as toluene diisocyanate), platinum salts and some reactive dyes. Also, chronic exposure to beryllium is known to cause hypersensitivity lung disease.
Autoimmunity
Autoimmunity can be defined as the stimulation of specific immune responses directed against endogenous “self” antigens. Induced autoimmunity can result either from alterations in the balance of regulatory T-lymphocytes or from the association of a xenobiotic with normal tissue components such as to render them immunogenic (“altered self”). Drugs and chemicals known to incidentally induce or exacerbate effects like those of autoimmune disease (AD) in susceptible individuals are low-molecular-weight compounds (molecular weight 100 to 500) that are generally considered not to be immunogenic themselves. The mechanism by which chemical exposure induces AD is mostly unknown. Disease can be produced directly by means of circulating antibody, indirectly through the formation of immune complexes, or as a consequence of cell-mediated immunity, but most likely occurs through a combination of mechanisms. The pathogenesis is best known in drug-induced immune haemolytic disorders.
A variety of chemicals and drugs, in particular the latter, have been found to induce autoimmune-like responses (Kammüller, Bloksma and Seinen 1989). Occupational exposure to chemicals may incidentally lead to AD-like syndromes. Exposure to monomeric vinyl chloride, trichloroethylene, perchloroethylene, epoxy resins and silica dust may induce scleroderma-like syndromes. A syndrome similar to systemic lupus erythematosus (SLE) has been described after exposure to hydrazine. Exposure to toluene diisocyanate has been associated with the induction of thrombocytopenic purpura. Heavy metals such as mercury have been implicated in some cases of immune complex glomerulonephritis.
Human Risk Assessment
The assessment of human immune status is performed mainly using peripheral blood for analysis of humoral substances like immunoglobulins and complement, and of blood leukocytes for subset composition and functionality of subpopulations. These methods are usually the same as those used to investigate humoral and cell-mediated immunity as well as nonspecific resistance of patients with suspected congenital immunodeficiency disease. For epidemiological studies (e.g., of occupationally exposed populations) parameters should be selected on the basis of their predictive value in human populations, validated animal models, and the underlying biology of the markers (see table 1). The strategy in screening for immunotoxic effects after (accidental) exposure to environmental pollutants or other toxicants is much dependent on circumstances, such as type of immunodeficiency to be expected, time between exposure and immune status assessment, degree of exposure and number of exposed individuals. The process of assessing the immunotoxic risk of a particular xenobiotic in humans is extremely difficult and often impossible, due largely to the presence of various confounding factors of endogenous or exogenous origin that influence the response of individuals to toxic damage. This is particularly true for studies which investigate the role of chemical exposure in autoimmune diseases, where genetic factors play a crucial role.
Table 1. Classification of tests for immune markers
Test category | Characteristics | Specific tests |
Basic-general (should be included with general panels) | Indicators of general health and organ system status | Blood urea nitrogen, blood glucose, etc. |
Basic-immune (should be included with general panels) | General indicators of immune status; relatively low cost; assay methods are standardized among laboratories; results outside reference ranges are clinically interpretable | Complete blood counts; serum IgG, IgA, IgM levels; surface marker phenotypes for major lymphocyte subsets |
Focused/reflex (should be included when indicated by clinical findings, suspected exposures or prior test results) | Indicators of specific immune functions/events; cost varies; assay methods are standardized among laboratories; results outside reference ranges are clinically interpretable | Histocompatibility genotype; antibodies to infectious agents; total serum IgE; allergen-specific IgE; autoantibodies; skin tests for hypersensitivity; granulocyte oxidative burst; histopathology (tissue biopsy) |
Research (should be included only with control populations and careful study design) | Indicators of general or specific immune functions/events; cost varies, often expensive; assay methods are usually not standardized among laboratories; results outside reference ranges are often not clinically interpretable | In vitro stimulation assays; cell activation surface markers; cytokine serum concentrations; clonality assays (antibody, cellular, genetic); cytotoxicity tests |
As adequate human data are seldom available, the assessment of risk for chemical-induced immunosuppression in humans is in the majority of cases based upon animal studies. The identification of potential immunotoxic xenobiotics is undertaken primarily in controlled studies in rodents. In vivo exposure studies present, in this regard, the optimal approach to estimating the immunotoxic potential of a compound, because of the multifactorial and complex nature of the immune system and of immune responses. In vitro studies are of increasing value in the elucidation of mechanisms of immunotoxicity. In addition, by investigating the effects of the compound using cells of animal and human origin, data can be generated for species comparison, which can be used in the “parallelogram” approach to improve the risk assessment process. If data are available for three cornerstones of the parallelogram (in vivo animal, and in vitro animal and human), it may be easier to predict the outcome at the remaining cornerstone, that is, the risk in humans.
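In its simplest form, the parallelogram prediction amounts to a proportionality across the four cornerstones. The following sketch illustrates the arithmetic under the simplifying, hypothetical assumption that the human/animal ratio observed in vitro also holds in vivo; all numerical values are invented for illustration:

```python
# Minimal sketch of the "parallelogram" extrapolation (hypothetical numbers).
# Three cornerstones are known; the fourth (human in vivo) is estimated by
# assuming the in vitro human/animal ratio also applies in vivo.
animal_in_vivo = 40.0    # e.g., % suppression of an immune response in rodents
animal_in_vitro = 25.0   # same endpoint measured in rodent cells
human_in_vitro = 10.0    # same endpoint measured in human cells

human_in_vivo_estimate = animal_in_vivo * (human_in_vitro / animal_in_vitro)
print(f"Predicted human in vivo effect: {human_in_vivo_estimate:.0f}%")  # -> 16%
```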
When assessment of risk for chemical-induced immunosuppression has to rely solely upon data from animal studies, an approach can be followed in the extrapolation to man by the application of uncertainty factors to the no observed adverse effect level (NOAEL). This level can be based on parameters determined in relevant models, such as host resistance assays and in vivo assessment of hypersensitivity reactions and antibody production. Ideally, the relevance of this approach to risk assessment requires confirmation by studies in humans. Such studies should combine the identification and measurement of the toxicant, epidemiological data and immune status assessments.
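A minimal sketch of this extrapolation is given below. The NOAEL and the uncertainty factors are illustrative assumptions, not values from any specific assessment; a factor of 10 each for interspecies and interindividual variability is a common convention, but the appropriate factors depend on the case:

```python
# Deriving a tolerable human exposure level from an animal NOAEL (illustrative).
noael_mg_per_kg_day = 5.0   # hypothetical NOAEL from a rodent immunotoxicity study

uncertainty_factors = {
    "interspecies (animal to human)": 10,
    "interindividual (human variability)": 10,
}

total_uf = 1
for factor in uncertainty_factors.values():
    total_uf *= factor

tolerable_intake = noael_mg_per_kg_day / total_uf
print(f"Tolerable intake: {tolerable_intake} mg/kg/day")  # -> 0.05 mg/kg/day
```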
To predict contact hypersensitivity, guinea pig models are available and have been used in risk assessment since the 1970s. Although sensitive and reproducible, these tests have limitations as they depend on subjective evaluation; this can be overcome by newer and more quantitative methods developed in the mouse. Regarding chemical-induced hypersensitivity induced by inhalation or ingestion of allergens, tests should be developed and evaluated in terms of their predictive value in man. When it comes to setting safe occupational exposure levels of potential allergens, consideration has to be given to the biphasic nature of allergy: the sensitization phase and the elicitation phase. The concentration required to elicit an allergic reaction in a previously sensitized individual is considerably lower than the concentration necessary to induce sensitization in the immunologically naïve but susceptible individual.
As animal models to predict chemical-induced autoimmunity are virtually lacking, emphasis should be given to the development of such models. For the development of such models, our knowledge of chemical-induced autoimmunity in humans should be advanced, including the study of genetic and immune system markers to identify susceptible individuals. Humans that are exposed to drugs that induce autoimmunity offer such an opportunity.
The study and characterization of chemicals and other agents for toxic properties is often undertaken on the basis of specific organs and organ systems. In this chapter, two targets have been selected for in-depth discussion: the immune system and the gene. These examples were chosen to represent a complex target organ system and a molecular target within cells. For more comprehensive discussion of the toxicology of target organs, the reader is referred to standard toxicology texts such as Casarett and Doull, and Hayes. The International Programme on Chemical Safety (IPCS) has also published several criteria documents on target organ toxicology, by organ system.
Target organ toxicology studies are usually undertaken on the basis of information indicating the potential for specific toxic effects of a substance, either from epidemiological data or from general acute or chronic toxicity studies, or on the basis of special concerns to protect certain organ functions, such as reproduction or foetal development. In some cases, specific target organ toxicity tests are expressly mandated by statutory authorities, such as neurotoxicity testing under the US pesticides law (see “The United States approach to risk assessment of reproductive toxicants and neurotoxic agents”) and mutagenicity testing under the Japanese Chemical Substance Control Law (see “Principles of hazard identification: The Japanese approach”).
As discussed in “Target organ and critical effects,” the identification of a critical organ is based upon the detection of the organ or organ system which first responds adversely, or which responds at the lowest doses or exposures. This information is then used to design specific toxicology investigations, or more defined toxicity tests designed to elicit more sensitive indications of intoxication in the target organ. Target organ toxicology studies may also be used to determine mechanisms of action, which are of use in risk assessment (see “The United States approach to risk assessment of reproductive toxicants and neurotoxic agents”).
Methods of Target Organ Toxicity Studies
Target organs may be studied by exposure of intact organisms and detailed analysis of function and histopathology in the target organ, or by in vitro exposure of cells, tissue slices, or whole organs maintained for short or long term periods in culture (see “Mechanisms of toxicology: Introduction and concepts”). In some cases, tissues from human subjects may also be available for target organ toxicity studies, and these may provide opportunities to validate assumptions of cross-species extrapolation. However, it must be kept in mind that such studies do not provide information on relative toxicokinetics.
In general, target organ toxicity studies share the following common characteristics: detailed histopathological examination of the target organ, including post mortem examination, tissue weight, and examination of fixed tissues; biochemical studies of critical pathways in the target organ, such as important enzyme systems; functional studies of the ability of the organ and cellular constituents to perform expected metabolic and other functions; and analysis of biomarkers of exposure and early effects in target organ cells.
Detailed knowledge of target organ physiology, biochemistry and molecular biology may be incorporated in target organ studies. For instance, because the synthesis and secretion of small-molecular-weight proteins is an important aspect of renal function, nephrotoxicity studies often include special attention to these parameters (IPCS 1991). Because cell-to-cell communication is a fundamental process of nervous system function, target organ studies in neurotoxicity may include detailed neurochemical and biophysical measurements of neurotransmitter synthesis, uptake, storage, release and receptor binding, as well as electrophysiological measurement of changes in membrane potential associated with these events.
A high degree of emphasis is being placed upon the development of in vitro methods for target organ toxicity, to replace or reduce the use of whole animals. Substantial advances in these methods have been achieved for reproductive toxicants (Heindel and Chapin 1993).
In summary, target organ toxicity studies are generally undertaken as a higher order test for determining toxicity. The selection of specific target organs for further evaluation depends upon the results of screening level tests, such as the acute or subchronic tests used by OECD and the European Union; some target organs and organ systems may be a priori candidates for special investigation because of concerns to prevent certain types of adverse health effects.
The word biomarker is short for biological marker, a term that refers to a measurable event occurring in a biological system, such as the human body. This event is then interpreted as a reflection, or marker, of a more general state of the organism or of life expectancy. In occupational health, a biomarker is generally used as an indicator of health status or disease risk.
Biomarkers are used for in vitro as well as in vivo studies that may include humans. Usually, three specific types of biological markers are identified. Although a few biomarkers may be difficult to classify, usually they are separated into biomarkers of exposure, biomarkers of effect or biomarkers of susceptibility (see table 1).
Table 1. Examples of biomarkers of exposure or biomarkers of effect that are used in toxicological studies in occupational health
Sample | Measurement | Purpose |
Exposure biomarkers | ||
Adipose tissue | Dioxin | Dioxin exposure |
Blood | Lead | Lead exposure |
Bone | Aluminium | Aluminium exposure |
Exhaled breath | Toluene | Toluene exposure |
Hair | Mercury | Methylmercury exposure |
Serum | Benzene | Benzene exposure |
Urine | Phenol | Benzene exposure |
Effect biomarkers | ||
Blood | Carboxyhaemoglobin | Carbon monoxide exposure |
Red blood cells | Zinc-protoporphyrin | Lead exposure |
Serum | Cholinesterase | Organophosphate exposure |
Urine | Microglobulins | Nephrotoxic exposure |
White blood cells | DNA adducts | Mutagen exposure |
Given an acceptable degree of validity, biomarkers may be employed for several purposes. On an individual basis, a biomarker may be used to support or refute a diagnosis of a particular type of poisoning or other chemically-induced adverse effect. In a healthy subject, a biomarker may also reflect individual hypersusceptibility to specific chemical exposures and may therefore serve as a basis for risk prediction and counselling. In groups of exposed workers, some exposure biomarkers can be applied to assess the extent of compliance with pollution abatement regulations or the effectiveness of preventive efforts in general.
Biomarkers of Exposure
An exposure biomarker may be an exogenous compound (or a metabolite) within the body, an interactive product between the compound (or metabolite) and an endogenous component, or another event related to the exposure. Most commonly, biomarkers of exposures to stable compounds, such as metals, comprise measurements of the metal concentrations in appropriate samples, such as blood, serum or urine. With volatile chemicals, their concentration in exhaled breath (after inhalation of contamination-free air) may be assessed. If the compound is metabolized in the body, one or more metabolites may be chosen as a biomarker of the exposure; metabolites are often determined in urine samples.
Modern methods of analysis may allow separation of isomers or congeners of organic compounds, and determination of the speciation of metal compounds or isotopic ratios of certain elements. Sophisticated analyses allow determination of changes in the structure of DNA or other macromolecules caused by binding with reactive chemicals. Such advanced techniques will no doubt gain considerably in importance for applications in biomarker studies, and lower detection limits and better analytical validity are likely to make these biomarkers even more useful.
Particularly promising developments have occurred with biomarkers of exposure to mutagenic chemicals. These compounds are reactive and may form adducts with macromolecules, such as proteins or DNA. DNA adducts may be detected in white blood cells or tissue biopsies, and specific DNA fragments may be excreted in the urine. For example, exposure to ethylene oxide results in reactions with DNA bases, and, after excision of the damaged base, N-7-(2-hydroxyethyl)guanine will be eliminated in the urine. Some adducts may not refer directly to a particular exposure. For example, 8-hydroxy-2´-deoxyguanosine reflects oxidative damage to DNA, and this reaction may be triggered by several chemical compounds, most of which also induce lipid peroxidation.
Other macromolecules may also be changed by adduct formation or oxidation. Of special interest, such reactive compounds may generate haemoglobin adducts that can be determined as biomarkers of exposure to the compounds. The advantage is that ample amounts of haemoglobin can be obtained from a blood sample, and, given the four-month lifetime of red blood cells, the adducts formed with the amino acids of the protein will indicate the total exposure during this period.
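The arithmetic behind this integration can be sketched as follows. Assuming a constant daily rate of adduct formation, a chemically stable adduct and a uniform age distribution of erythrocytes with a lifespan of about 120 days, the steady-state adduct level approximates the daily increment multiplied by half the cell lifespan (the average age of a circulating cell); the daily increment used below is a hypothetical value:

```python
# Steady-state level of a stable haemoglobin adduct under constant exposure.
# Assumes erythrocytes live ~120 days with a uniform age distribution, so the
# average circulating cell is ~60 days old. The daily increment is invented.
daily_adduct_increment = 0.2      # pmol adduct per g haemoglobin formed per day
erythrocyte_lifespan_days = 120

steady_state_level = daily_adduct_increment * erythrocyte_lifespan_days / 2
print(f"Steady-state adduct level: {steady_state_level} pmol/g Hb")  # -> 12.0
```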
Adducts may be determined by sensitive techniques such as high-performance liquid chromatography, and some immunological methods are also available. In general, the analytical methods are new, expensive and in need of further development and validation. Better sensitivity can be obtained by the ³²P post-labelling assay, which is a nonspecific indication that DNA damage has taken place. All of these techniques are potentially useful for biological monitoring and have been applied in a growing number of studies. However, simpler and more sensitive analytical methods are needed. Given the limited specificity of some methods at low-level exposures, tobacco smoking or other factors may impact significantly on the measurement results, thus causing difficulties in interpretation.
Exposure to mutagenic compounds, or to compounds which are metabolized into mutagens, may also be determined by assessing the mutagenicity of the urine from an exposed individual. The urine sample is incubated with a strain of bacteria in which a specific point mutation is expressed in a way that can be easily measured. If mutagenic chemicals are present in the urine sample, then an increased rate of mutations will occur in the bacteria.
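Such results are commonly summarized as the fold increase in mutant colonies over the spontaneous background. The following sketch illustrates that comparison with invented plate counts; the two-fold decision threshold shown is a frequently used convention rather than a universal criterion:

```python
# Comparing bacterial mutation counts for an exposed urine sample vs. control.
# Colony counts are invented for illustration; a fold increase of >= 2 over the
# spontaneous background is a common (but assay-dependent) flag for mutagenicity.
control_revertants = [18, 22, 20]    # spontaneous mutant colonies per plate
exposed_revertants = [55, 61, 58]    # colonies with the exposed urine sample

mean_control = sum(control_revertants) / len(control_revertants)
mean_exposed = sum(exposed_revertants) / len(exposed_revertants)
fold_increase = mean_exposed / mean_control
print(f"Fold increase over background: {fold_increase:.1f}")  # -> 2.9
```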
Exposure biomarkers must be evaluated with regard to temporal variation in exposure and the relation to different compartments. Thus, the time frame(s) represented by the biomarker, that is, the extent to which the biomarker measurement reflects past exposure(s) and/or accumulated body burden, must be determined from toxicokinetic data in order to interpret the result. In particular, the degree to which the biomarker indicates retention in specific target organs should be considered. Although blood samples are often used for biomarker studies, peripheral blood is generally not regarded as a compartment as such, although it acts as a transport medium between compartments. The degree to which the concentration in the blood reflects levels in different organs varies widely between different chemicals, and usually also depends upon the length of the exposure as well as time since exposure.
Sometimes this type of evidence is used to classify a biomarker as an indicator of (total) absorbed dose or an indicator of effective dose (i.e., the amount that has reached the target tissue). For example, exposure to a particular solvent may be evaluated from data on the actual concentration of the solvent in the blood at a particular time following the exposure. This measurement will reflect the amount of the solvent that has been absorbed into the body. Some of the absorbed amount will be exhaled due to the vapour pressure of the solvent. While circulating in the blood, the solvent will interact with various components of the body, and it will eventually become subject to breakdown by enzymes. The outcome of the metabolic processes can be assessed by determining specific mercapturic acids produced by conjugation with glutathione. The cumulative excretion of mercapturic acids may better reflect the effective dose than will the blood concentration.
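Under the common simplifying assumption of first-order (one-compartment) elimination, a measured blood concentration can be related back to the concentration at the end of exposure. The half-life and concentrations below are invented for illustration:

```python
import math

# First-order decline of a solvent blood concentration (illustrative values).
half_life_h = 4.0                  # assumed elimination half-life of the solvent
k = math.log(2) / half_life_h      # first-order elimination rate constant

c_measured = 0.8                   # mg/l, measured 6 h after exposure ended
t_since_exposure_h = 6.0

# Back-extrapolated concentration at the end of exposure: C0 = C(t) * e^(k*t)
c_end_of_exposure = c_measured * math.exp(k * t_since_exposure_h)
print(f"Estimated end-of-exposure concentration: {c_end_of_exposure:.2f} mg/l")
```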
Life events, such as reproduction and senescence, may affect the distribution of a chemical. The distribution of chemicals within the body is significantly affected by pregnancy, and many chemicals may pass the placental barrier, thus causing exposure of the foetus. Lactation may result in excretion of lipid-soluble chemicals, thus leading to a decreased retention in the mother along with an increased uptake by the infant. During weight loss or development of osteoporosis, stored chemicals may be released, which can then result in a renewed and protracted “endogenous” exposure of target organs. Other factors may affect individual absorption, metabolism, retention and distribution of chemical compounds, and some biomarkers of susceptibility are available (see below).
Biomarkers of Effect
A marker of effect may be an endogenous component, or a measure of the functional capacity, or some other indicator of the state or balance of the body or organ system, as affected by the exposure. Such effect markers are generally preclinical indicators of abnormalities.
These biomarkers may be specific or non-specific. The specific biomarkers are useful because they indicate a biological effect of a particular exposure, thus providing evidence that can potentially be used for preventive purposes. The non-specific biomarkers do not point to an individual cause of the effect, but they may reflect the total, integrated effect due to a mixed exposure. Both types of biomarkers may therefore be of considerable use in occupational health.
There is not a clear distinction between exposure biomarkers and effect biomarkers. For example, adduct formation could be said to reflect an effect rather than the exposure. However, effect biomarkers usually indicate changes in the functions of cells, tissues or the total body. Some researchers include gross changes, such as an increase in liver weight of exposed laboratory animals or decreased growth in children, as biomarkers of effect. For the purpose of occupational health, effect biomarkers should be restricted to those that indicate subclinical or reversible biochemical changes, such as inhibition of enzymes. The most frequently used effect biomarker is probably inhibition of cholinesterase caused by certain insecticides, that is, organophosphates and carbamates. In most cases, this effect is entirely reversible, and the enzyme inhibition reflects the total exposure to this particular group of insecticides.
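In practice, cholinesterase results are usually expressed as the percentage depression from the worker’s own pre-exposure baseline, as sketched below; the activity values and the 30% action level are illustrative assumptions rather than a regulatory citation:

```python
# Percent cholinesterase depression relative to an individual baseline.
# Activity values and the 30% action level are illustrative assumptions.
baseline_activity = 3.2    # e.g., U/ml, measured before the spraying season
current_activity = 2.0     # measured during organophosphate use

depression_pct = 100 * (baseline_activity - current_activity) / baseline_activity
print(f"Cholinesterase depression: {depression_pct:.0f}%")  # -> 38%

if depression_pct >= 30:   # assumed action level
    print("Exceeds action level: review exposure and work practices")
```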
Some exposures do not result in enzyme inhibition but rather in increased activity of an enzyme. This is the case with several enzymes that belong to the P450 family (see “Genetic determinants of toxic response”). They may be induced by exposures to certain solvents and polyaromatic hydrocarbons (PAHs). Since these enzymes are mainly expressed in tissues from which a biopsy may be difficult to obtain, the enzyme activity is determined indirectly in vivo by administering a compound that is metabolized by that particular enzyme, and then the breakdown product is measured in urine or plasma.
Other exposures may induce the synthesis of a protective protein in the body. The best example is probably metallothionein, which binds cadmium and promotes the excretion of this metal; cadmium exposure is one of the factors that result in increased expression of the metallothionein gene. Similar protective proteins may exist but have not yet been explored sufficiently to become accepted as biomarkers. Among the candidates for possible use as biomarkers are the so-called stress proteins, originally referred to as heat shock proteins. These proteins are generated by a range of different organisms in response to a variety of adverse exposures.
Oxidative damage may be assessed by determining the concentration of malondialdehyde in serum or the exhalation of ethane. Similarly, the urinary excretion of proteins with a small molecular weight, such as albumin, may be used as a biomarker of early kidney damage. Several parameters routinely used in clinical practice (for example, serum hormone or enzyme levels) may also be useful as biomarkers. However, many of these parameters may not be sufficiently sensitive to detect early impairment.
Another group of effect parameters relate to genotoxic effects (changes in the structure of chromosomes). Such effects may be detected by microscopy of white blood cells that undergo cell division. Serious damage to the chromosomes—chromosomal aberrations or formation of micronuclei—can be seen in a microscope. Damage may also be revealed by adding a dye to the cells during cell division. Exposure to a genotoxic agent can then be visualized as an increased exchange of the dye between the two chromatids of each chromosome (sister chromatid exchange). Chromosomal aberrations are related to an increased risk of developing cancer, but the significance of an increased rate of sister chromatid exchange is less clear.
More sophisticated assessment of genotoxicity is based on particular point mutations in somatic cells, that is, white blood cells or epithelial cells obtained from the oral mucosa. A mutation at a specific locus may make the cells capable of growing in a culture that contains a chemical that is otherwise toxic (such as 6-thioguanine). Alternatively, a specific gene product can be assessed (e.g., serum or tissue concentrations of oncoproteins encoded by particular oncogenes). Obviously, these mutations reflect the total genotoxic damage incurred and do not necessarily indicate anything about the causative exposure. These methods are not yet ready for practical use in occupational health, but rapid progress in this line of research would suggest that such methods will become available within a few years.
Biomarkers of Susceptibility
A marker of susceptibility, whether inherited or induced, is an indicator that the individual is particularly sensitive to the effect of a xenobiotic or to the effects of a group of such compounds. Most attention has been focused on genetic susceptibility, although other factors may be at least as important. Hypersusceptibility may be due to an inherited trait, the constitution of the individual, or environmental factors.
The ability to metabolize certain chemicals is variable and is genetically determined (see “Genetic determinants of toxic response”). Several relevant enzymes appear to be controlled by a single gene. For example, oxidation of foreign chemicals is mainly carried out by a family of enzymes belonging to the P450 family. Other enzymes make the metabolites more water soluble by conjugation (e.g., N-acetyltransferase and the μ-class glutathione S-transferase). The activity of these enzymes is genetically controlled and varies considerably. As mentioned above, the activity can be determined by administering a small dose of a drug and then determining the amount of the metabolite in the urine. Some of the genes have now been characterized, and techniques are available to determine the genotype. Important studies suggest that the risk of developing certain forms of cancer is related to the capacity to metabolize foreign compounds. Many questions still remain unanswered, thus at this time limiting the use of these potential susceptibility biomarkers in occupational health.
Other inherited traits, such as alpha1-antitrypsin deficiency or glucose-6-phosphate dehydrogenase deficiency, also result in deficient defence mechanisms in the body, thereby causing hypersusceptibility to certain exposures.
Most research related to susceptibility has dealt with genetic predisposition. Other factors play a role as well and have been partly neglected. For example, individuals with a chronic disease may be more sensitive to an occupational exposure. Also, if a disease process or previous exposure to toxic chemicals has caused some subclinical organ damage, then the capacity to withstand a new toxic exposure is likely to be reduced. Biochemical indicators of organ function may in this case be used as susceptibility biomarkers. Perhaps the best example of hypersusceptibility relates to allergic responses. If an individual has become sensitized to a particular exposure, then specific antibodies can be detected in serum. Even if the individual has not become sensitized, other current or past exposures may add to the risk of developing an adverse effect related to an occupational exposure.
A major problem is to determine the joint effect of mixed exposures at work. In addition, personal habits and drug use may result in an increased susceptibility. For example, tobacco smoke usually contains a considerable amount of cadmium. Thus, with occupational exposure to cadmium, a heavy smoker who has accumulated substantial amounts of this metal in the body will be at increased risk of developing cadmium-related kidney disease.
Application in Occupational Health
Biomarkers are extremely useful in toxicological research, and many may be applicable in biological monitoring. Nonetheless, the limitations must also be recognized. Many biomarkers have so far been studied only in laboratory animals. Toxicokinetic patterns in other species may not necessarily reflect the situation in human beings, and extrapolation may require confirmatory studies in human volunteers. Also, account must be taken of individual variations due to genetic or constitutional factors.
In some cases, exposure biomarkers may not be feasible at all (e.g., for chemicals which are short-lived in vivo). Other chemicals may be stored in, or may affect, organs which cannot be accessed by routine procedures, such as the nervous system. The route of exposure may also affect the distribution pattern and therefore also the biomarker measurement and its interpretation. For example, direct exposure of the brain via the olfactory nerve is likely to escape detection by measurement of exposure biomarkers. As to effect biomarkers, many are not specific at all, and the change can be due to a variety of causes, including lifestyle factors. With susceptibility biomarkers in particular, interpretation must at present be very cautious, as many uncertainties remain about the overall health significance of individual genotypes.
In occupational health, the ideal biomarker should satisfy several requirements. First of all, sample collection and analysis must be simple and reliable. For optimal analytical quality, standardization is needed, but the specific requirements vary considerably. Major areas of concern include: preparation of the individual, sampling procedure and sample handling, and the measurement procedure; the latter encompasses technical factors, such as calibration and quality assurance procedures, and individual-related factors, such as the education and training of operators.
For documentation of analytical validity and traceability, reference materials should be based on relevant matrices and contain the toxic substances or relevant metabolites at appropriate concentrations. For biomarkers to be used for biological monitoring or for diagnostic purposes, the responsible laboratories must have well-documented analytical procedures with defined performance characteristics, and accessible records to allow verification of the results. At the same time, the economics of characterizing and using reference materials to supplement quality assurance procedures in general must be considered. Thus, the achievable quality of results, and the uses to which they are put, have to be balanced against the added costs of quality assurance, including reference materials, manpower and instrumentation.
Another requirement is that the biomarker should be specific, at least under the circumstances of the study, for a particular type of exposure, with a clear-cut relationship to the degree of exposure. Otherwise, the result of the biomarker measurement may be too difficult to interpret. For proper interpretation of the measurement result of an exposure biomarker, the diagnostic validity must be known (i.e., the translation of the biomarker value into the magnitude of possible health risks). In this area, metals serve as a paradigm for biomarker research. Recent research has demonstrated the complexity and subtlety of dose-response relationships, with considerable difficulty in identifying no-effect levels and therefore also in defining tolerable exposures. However, this kind of research has also illustrated the types of investigation and the refinement that are necessary to uncover the relevant information. For most organic compounds, quantitative associations between exposures and the corresponding adverse health effects are not yet available; in many cases, even the primary target organs are not known for sure. In addition, evaluation of toxicity data and biomarker concentrations is often complicated by exposure to mixtures of substances, rather than exposure to a single compound at a time.
Before a biomarker is applied for occupational health purposes, some additional considerations are necessary. First, the biomarker must reflect a subclinical and reversible change only. Second, if the biomarker results can be interpreted with regard to health risks, then preventive efforts should be available and should be considered realistic in case the biomarker data suggest a need to reduce the exposure. Third, the practical use of the biomarker must be generally regarded as ethically acceptable.
Industrial hygiene measurements may be compared with applicable exposure limits. Likewise, results on exposure biomarkers or effect biomarkers may be compared to biological action limits, sometimes referred to as biological exposure indices. Such limits should be based on the best advice of clinicians and scientists from appropriate disciplines, and responsible administrators as “risk managers” should then take into account relevant ethical, social, cultural and economic factors. The scientific basis should, if possible, include dose-response relationships supplemented by information on variations in susceptibility within the population at risk. In some countries, workers and members of the general public are involved in the standard-setting process and provide important input, particularly when scientific uncertainty is considerable. One of the major uncertainties is how to define an adverse health effect that should be prevented—for example, whether adduct formation as an exposure biomarker by itself represents an adverse effect (i.e., effect biomarker) that should be prevented. Difficult questions are likely to arise when deciding whether it is ethically defensible, for the same compound, to have different limits for adventitious exposure, on the one hand, and occupational exposure, on the other.
The information generated by the use of biomarkers should generally be conveyed to the individuals examined within the physician-patient relationship. Ethical concerns must in particular be considered in connection with highly experimental biomarker analyses that cannot currently be interpreted in detail in terms of actual health risks. For the general population, for example, limited guidance exists at present with regard to interpretation of exposure biomarkers other than the blood-lead concentration. Also of importance is the confidence in the data generated (i.e., whether appropriate sampling has been done, and whether sound quality assurance procedures have been utilized in the laboratory involved). An additional area of special worry relates to individual hypersusceptibility. These issues must be taken into account when providing the feedback from the study.
All sectors of society affected by, or concerned with carrying out, a biomarker study need to be involved in the decision-making process on how to handle the information generated by the study. Specific procedures to prevent or overcome inevitable ethical conflicts should be developed within the legal and social frameworks of the region or country. However, each situation represents a different set of questions and pitfalls, and no single procedure for public involvement can be developed to cover all applications of exposure biomarkers.
Genetic toxicity assessment is the evaluation of agents for their ability to induce any of three general types of changes (mutations) in the genetic material (DNA): gene, chromosomal and genomic. In organisms such as humans, the genes are composed of DNA, which consists of individual units called nucleotide bases. The genes are arranged in discrete physical structures called chromosomes. Genotoxicity can result in significant and irreversible effects upon human health. Genotoxic damage is a critical step in the induction of cancer and it can also be involved in the induction of birth defects and foetal death. The three classes of mutations mentioned above can occur within either of the two types of tissues possessed by organisms such as humans: sperm or eggs (germ cells) and the remaining tissue (somatic cells).
Assays that measure gene mutation are those that detect the substitution, addition or deletion of nucleotides within a gene. Assays that measure chromosomal mutation are those that detect breaks or chromosomal rearrangements involving one or more chromosomes. Assays that measure genomic mutation are those that detect changes in the number of chromosomes, a condition called aneuploidy. Genetic toxicity assessment has changed considerably since the development by Herman Muller in 1927 of the first assay to detect genotoxic (mutagenic) agents. Since then, more than 200 assays have been developed that measure mutations in DNA; however, fewer than ten assays are used commonly today for genetic toxicity assessment. This article reviews these assays, describes what they measure, and explores the role of these assays in toxicity assessment.
Identification of Cancer Hazards Prior to the Development of the Field of Genetic Toxicology
Genetic toxicology has become an integral part of the overall risk assessment process and has gained in stature in recent times as a reliable predictor of carcinogenic activity. However, before the development of genetic toxicology (prior to 1970), other methods were used, and are still used, to identify potential cancer hazards to humans. There are six major categories of methods currently used for identifying human cancer risks: epidemiological studies, long-term in vivo bioassays, mid-term in vivo bioassays, short-term in vivo and in vitro bioassays, artificial intelligence (structure-activity), and mechanism-based inference.
Table 1 gives advantages and disadvantages for these methods.
Table 1. Advantages and disadvantages of current methods for identifying human cancer risks
| Method | Advantages | Disadvantages |
| --- | --- | --- |
| Epidemiological studies | (1) humans are the ultimate indicators of disease; (2) evaluate sensitive or susceptible populations; (3) occupational exposure cohorts; (4) environmental sentinel alerts | (1) generally retrospective (death certificates, recall biases, etc.); (2) insensitive, costly, lengthy; (3) reliable exposure data sometimes unavailable or difficult to obtain; (4) combined, multiple and complex exposures; lack of appropriate control cohorts; (5) experiments on humans not done; (6) cancer detection, not prevention |
| Long-term in vivo bioassays | (1) prospective and retrospective (validation) evaluations; (2) excellent correlation with identified human carcinogens; (3) exposure levels and conditions known; (4) identifies chemical toxicity and carcinogenicity effects; (5) results obtained relatively quickly; (6) qualitative comparisons among chemical classes; (7) integrative and interactive biologic systems related closely to humans | (1) rarely replicated, resource intensive; (2) limited facilities suitable for such experiments; (3) species extrapolation debate; (4) exposures used are often at levels far in excess of those experienced by humans; (5) single-chemical exposure does not mimic human exposures, which are generally to multiple chemicals simultaneously |
| Mid- and short-term in vivo and in vitro bioassays | (1) more rapid and less expensive than other assays; (2) large samples that are easily replicated; (3) biologically meaningful end points are measured (mutation, etc.); (4) can be used as screening assays to select chemicals for long-term bioassays | (1) in vitro not fully predictive of in vivo; (2) usually organism or organ specific; (3) potencies not comparable to whole animals or humans |
| Chemical structure–biological activity associations | (1) relatively easy, rapid and inexpensive; (2) reliable for certain chemical classes (e.g., nitrosamines and benzidine dyes); (3) developed from biological data but not dependent on additional biological experimentation | (1) not “biological”; (2) many exceptions to formulated rules; (3) retrospective and rarely (but becoming) prospective |
| Mechanism-based inferences | (1) reasonably accurate for certain classes of chemicals; (2) permits refinement of hypotheses; (3) can orient risk assessments to sensitive populations | (1) mechanisms of chemical carcinogenesis undefined, multiple, and likely chemical or class specific; (2) may fail to highlight exceptions to general mechanisms |
Rationale and Conceptual Basis for Genetic Toxicology Assays
Although the exact types and numbers of assays used for genetic toxicity assessment are constantly evolving and vary from country to country, the most common ones include assays for (1) gene mutation in bacteria and/or cultured mammalian cells and (2) chromosomal mutation in cultured mammalian cells and/or bone marrow within living mice. Some of the assays within this second category can also detect aneuploidy. Although these assays do not detect mutations in germ cells, they are preferred primarily because germ-cell assays are more costly and complex to perform. Nonetheless, germ-cell assays in mice are used when information about germ-cell effects is desired.
Systematic studies over a 25-year period (1970-1995), especially at the US National Toxicology Program in North Carolina, have resulted in the use of a discrete number of assays for detecting the mutagenic activity of agents. The rationale for evaluating the usefulness of the assays was based on their ability to detect agents that cause cancer in rodents and that are suspected of causing cancer in humans (i.e., carcinogens). This is because studies during the past several decades have indicated that cancer cells contain mutations in certain genes and that many carcinogens are also mutagens. Thus, cancer cells are viewed as containing somatic-cell mutations, and carcinogenesis is viewed as a type of somatic-cell mutagenesis.
The genetic toxicity assays used most commonly today have been selected not only because of their large database, relatively low cost, and ease of performance, but because they have been shown to detect many rodent and, presumptively, human carcinogens. Consequently, genetic toxicity assays are used to predict the potential carcinogenicity of agents.
An important conceptual and practical development in the field of genetic toxicology was the recognition that many carcinogens are modified by enzymes within the body, creating altered forms (metabolites) that are frequently the ultimate carcinogenic and mutagenic form of the parent chemical. To duplicate this metabolism in a petri dish, Heinrich Malling showed that the inclusion of a preparation from rodent liver contained many of the enzymes necessary to perform this metabolic conversion or activation. Thus, many genetic toxicity assays performed in dishes or tubes (in vitro) employ the addition of similar enzyme preparations. Crude preparations are called S9 mix, and purified preparations are called microsomes. Some bacterial and mammalian cells have now been genetically engineered to contain some of the genes from rodents or humans that produce these enzymes, reducing the need to add S9 mix or microsomes.
Genetic Toxicology Assays and Techniques
The primary bacterial systems used for genetic toxicity screening are the Salmonella (Ames) mutagenicity assay and, to a much lesser extent, strain WP2 of Escherichia coli. Studies in the mid-1980s indicated that the use of only two strains of the Salmonella system (TA98 and TA100) was sufficient to detect approximately 90% of the known Salmonella mutagens. Thus, these two strains are used for most screening purposes; however, various other strains are available for more extensive testing.
These assays are performed in a variety of ways, but two general procedures are the plate-incorporation and liquid-incubation assays. In the plate-incorporation assay, the cells, the test chemical and (when desired) the S9 are added together into liquefied agar and poured onto the surface of an agar petri plate. The top agar hardens within a few minutes, and the plates are incubated for two to three days, after which time mutant cells have grown to form visually detectable clusters of cells called colonies, which are then counted. The agar medium contains selective agents or is composed of ingredients such that only the newly mutated cells will grow. The liquid-incubation assay is similar, except that the cells, test agent and S9 are incubated together in liquid that does not contain liquefied agar; the cells are then washed free of the test agent and S9 and seeded onto the agar.
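The following minimal Python sketch shows how plate counts from such an assay are commonly summarized; the colony counts are hypothetical, and the twofold-over-control positivity criterion used here is one widely applied convention rather than a universal rule.

```python
# A minimal sketch of summarizing plate-incorporation data.
# Colony counts are hypothetical; a dose-related increase to at least
# twice the solvent-control count ("twofold rule") is one common,
# but not universal, positivity criterion.

from statistics import mean

solvent_control = [22, 25, 21]          # revertant colonies per plate
treated = {                              # dose (ug/plate) -> plate counts
    10: [28, 31, 26],
    50: [55, 60, 58],
    250: [140, 133, 151],
}

control_mean = mean(solvent_control)
for dose, counts in sorted(treated.items()):
    fold = mean(counts) / control_mean
    flag = "positive" if fold >= 2.0 else "negative"
    print(f"{dose:>4} ug/plate: mean {mean(counts):6.1f}, "
          f"{fold:4.1f}-fold over control -> {flag}")
```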
Mutations in cultured mammalian cells are detected primarily in one of two genes: hprt and tk. Similar to the bacterial assays, mammalian cell lines (developed from rodent or human cells) are exposed to the test agent in plastic culture dishes or tubes and then are seeded into culture dishes that contain medium with a selective agent that permits only mutant cells to grow. The assays used for this purpose include the CHO/HPRT, the TK6, and the mouse lymphoma L5178Y/TK+/- assays. Other cell lines containing various DNA repair mutations as well as containing some human genes involved in metabolism are also used. These systems permit the recovery of mutations within the gene (gene mutation) as well as mutations involving regions of the chromosome flanking the gene (chromosomal mutation). However, this latter type of mutation is recovered to a much greater extent by the tk gene systems than by the hprt gene systems due to the location of the tk gene.
Similar to the liquid-incubation assay for bacterial mutagenicity, mammalian cell mutagenicity assays generally involve the exposure of the cells in culture dishes or tubes in the presence of the test agent and S9 for several hours. The cells are then washed, cultured for several more days to allow the normal (wild-type) gene products to be degraded and the newly mutant gene products to be expressed and accumulate, and then they are seeded into medium containing a selective agent that permits only the mutant cells to grow. Like the bacterial assays, the mutant cells grow into visually detectable colonies that are then counted.
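The standard calculation applied to such colony counts is the mutant frequency, sketched below with hypothetical numbers; the cloning efficiency (the fraction of cells able to form colonies in non-selective medium) corrects the denominator for cell viability.

```python
# Sketch of the standard mutant-frequency calculation for mammalian
# cell assays; all numbers are hypothetical.

def mutant_frequency(mutant_colonies: int,
                     cells_seeded: float,
                     cloning_efficiency: float) -> float:
    """Mutants per viable cell seeded into selective medium."""
    viable_cells = cells_seeded * cloning_efficiency
    return mutant_colonies / viable_cells

mf = mutant_frequency(mutant_colonies=42,
                      cells_seeded=2e6,
                      cloning_efficiency=0.8)
print(f"mutant frequency: {mf:.2e} per viable cell")   # ~2.6e-05
```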
Chromosomal mutation is identified primarily by cytogenetic assays, which involve exposing rodents and/or rodent or human cells in culture dishes to a test chemical, allowing one or more cell divisions to occur, staining the chromosomes, and then visually examining the chromosomes through a microscope to detect alterations in the structure or number of chromosomes. Although a variety of endpoints can be examined, the two that are currently accepted by regulatory agencies as being the most meaningful are chromosomal aberrations and a subcategory called micronuclei.
Considerable training and expertise are required to score cells for the presence of chromosomal aberrations, making this a costly procedure in terms of time and money. In contrast, scoring micronuclei requires little training, and their detection can be automated. Micronuclei appear as small dots within the cell that are distinct from the nucleus, which contains the chromosomes. Micronuclei result either from chromosome breakage or from aneuploidy. Because of the ease of scoring micronuclei compared to chromosomal aberrations, and because recent studies indicate that agents that induce chromosomal aberrations in the bone marrow of living mice generally induce micronuclei in this tissue, micronuclei are now commonly measured as an indication of the ability of an agent to induce chromosomal mutation.
Although germ-cell assays are used far less frequently than the other assays described above, they are indispensable in determining whether an agent poses a risk to the germ cells, mutations in which can lead to health effects in succeeding generations. The most commonly used germ-cell assays are in mice, and involve systems that detect (1) heritable translocations (exchanges) among chromosomes (heritable translocation assay), (2) gene or chromosomal mutations involving specific genes (visible or biochemical specific-locus assays), and (3) mutations that affect viability (dominant lethal assay). As with the somatic-cell assays, the working assumption with the germ-cell assays is that agents positive in these assays are presumed to be potential human germ-cell mutagens.
Current Status and Future Prospects
Recent studies have indicated that only three pieces of information were necessary to detect approximately 90% of a set of 41 rodent carcinogens (i.e., presumptive human carcinogens and somatic-cell mutagens). These included (1) knowledge of the chemical structure of the agent, especially whether it contains electrophilic moieties (see the section on structure-activity relationships); (2) Salmonella mutagenicity data; and (3) data from a 90-day chronic toxicity assay in rodents (mice and rats). Indeed, essentially all of the IARC-declared human carcinogens are detectable as mutagens using just the Salmonella assay and the mouse bone-marrow micronucleus assay. The use of these mutagenicity assays for detecting potential human carcinogens is supported further by the finding that most human carcinogens are carcinogenic in both rats and mice (trans-species carcinogens) and that most trans-species carcinogens are mutagenic in Salmonella and/or induce micronuclei in mouse bone marrow.
With advances in DNA technology, the human genome project, and an improved understanding of the role of mutation in cancer, new genotoxicity assays are being developed that will likely be incorporated into standard screening procedures. Among these are the use of transgenic cells and rodents. Transgenic systems are those in which a gene from another species has been introduced into a cell or organism. For example, transgenic mice are now in experimental use that permit the detection of mutation in any organ or tissue of the animal, based on the introduction of a bacterial gene into the mouse. Bacterial cells, such as Salmonella, and mammalian cells (including human cell lines) are now available that contain genes involved in the metabolism of carcinogenic/mutagenic agents, such as the P450 genes. Molecular analysis of the actual mutations induced in the transgene of transgenic rodents, in native genes such as hprt, or in the target genes of Salmonella can now be performed, so that the exact nature of the mutations induced by a chemical can be determined, providing insights into its mechanism of action and allowing comparisons with mutations in humans presumptively exposed to the agent.
Molecular advances in cytogenetics now permit more detailed evaluation of chromosomal mutations. These include the use of probes (small pieces of DNA) that attach (hybridize) to specific genes. Rearrangements of genes on the chromosome can then be revealed by the altered location of the probes, which are fluorescent and easily visualized as colored sectors on the chromosomes. The single-cell gel electrophoresis assay for DNA breakage (commonly called the “comet” assay) permits the detection of DNA breaks within single cells and may become an extremely useful tool in combination with cytogenetic techniques for detecting chromosomal damage.
After many years of use and the generation of a large and systematically developed database, genetic toxicity assessment can now be done with just a few assays for relatively small cost in a short period of time (a few weeks). The data produced can be used to predict the ability of an agent to be a rodent and, presumptively, human carcinogen/somatic-cell mutagen. Such an ability makes it possible to limit the introduction into the environment of mutagenic and carcinogenic agents and to develop alternative, nonmutagenic agents. Future studies should lead to even better methods with greater predictivity than the current assays.
The emergence of sophisticated technologies in molecular and cellular biology has spurred a relatively rapid evolution in the life sciences, including toxicology. In effect, the focus of toxicology is shifting from whole animals and populations of whole animals to the cells and molecules of individual animals and humans. Since the mid-1980s, toxicologists have begun to employ these new methodologies in assessing the effects of chemicals on living systems. As a logical progression, such methods are being adapted for the purposes of toxicity testing. These scientific advances have worked together with social and economic factors to effect change in the evaluation of product safety and potential risk.
Economic factors are specifically related to the volume of materials that must be tested. A plethora of new cosmetics, pharmaceuticals, pesticides, chemicals and household products is introduced into the market every year. All of these products must be evaluated for their potential toxicity. In addition, there is a backlog of chemicals already in use that have not been adequately tested. The enormous task of obtaining detailed safety information on all of these chemicals using traditional whole animal testing methods would be costly in terms of both money and time, if it could even be accomplished.
There are also societal issues that relate to public health and safety, as well as increasing public concern about the use of animals for product safety testing. With regard to human safety, public interest and environmental advocacy groups have placed significant pressure on government agencies to apply more stringent regulations on chemicals. A recent example of this has been a movement by some environmental groups to ban chlorine and chlorine-containing compounds in the United States. One of the motivations for such an extreme action lies in the fact that most of these compounds have never been adequately tested. From a toxicological perspective, the concept of banning a whole class of diverse chemicals based simply on the presence of chlorine is both scientifically unsound and irresponsible. Yet, it is understandable that from the public’s perspective, there must be some assurance that chemicals released into the environment do not pose a significant health risk. Such a situation underscores the need for more efficient and rapid methods to assess toxicity.
The other societal concern that has affected the area of toxicity testing is animal welfare. A growing number of animal protection groups throughout the world have voiced considerable opposition to the use of whole animals for product safety testing. Active campaigns have been waged against manufacturers of cosmetics, household and personal care products and pharmaceuticals in attempts to stop animal testing. Such efforts in Europe have resulted in the passage of the Sixth Amendment to Directive 76/768/EEC (the Cosmetics Directive). The consequence of this Directive is that cosmetic products or cosmetic ingredients that have been tested in animals after January 1, 1998 cannot be marketed in the European Union, unless alternative methods remain insufficiently validated. While this Directive has no jurisdiction over the sale of such products in the United States or other countries, it will significantly affect those companies with international markets that include Europe.
The concept of alternatives, which forms the basis for the development of tests other than those on whole animals, is defined by the three Rs: reduction in the numbers of animals used; refinement of protocols so that animals experience less stress or discomfort; and replacement of current animal tests with in vitro tests (i.e., tests done outside of the living animal), computer models or tests on lower vertebrate or invertebrate species. The three Rs were introduced in a book published in 1959 by two British scientists, W.M.S. Russell and Rex Burch, The Principles of Humane Experimental Technique. Russell and Burch maintained that the only way in which valid scientific results could be obtained was through the humane treatment of animals, and believed that methods should be developed to reduce animal use and ultimately replace it. Interestingly, the principles outlined by Russell and Burch received little attention until the resurgence of the animal welfare movement in the mid-1970s. Today the concept of the three Rs is very much in the forefront with regard to research, testing and education.
In summary, the development of in vitro test methodologies has been influenced by a variety of factors that have converged over the last ten to 20 years. It is difficult to ascertain if any of these factors alone would have had such a profound effect on toxicity testing strategies.
Concept of In Vitro Toxicity Tests
This section will focus solely on in vitro methods for evaluating toxicity, as one of the alternatives to whole-animal testing. Additional non-animal alternatives such as computer modelling and quantitative structure-activity relationships are discussed in other articles of this chapter.
In vitro studies are generally conducted in animal or human cells or tissues outside of the body. In vitro literally means “in glass”, and refers to procedures carried out on living material or components of living material cultured in petri dishes or in test tubes under defined conditions. These may be contrasted with in vivo studies, or those carried out “in the living animal”. While it is difficult, if not impossible, to project the effects of a chemical on a complex organism when the observations are confined to a single type of cells in a dish, in vitro studies do provide a significant amount of information about intrinsic toxicity as well as cellular and molecular mechanisms of toxicity. In addition, they offer many advantages over in vivo studies in that they are generally less expensive and they may be conducted under more controlled conditions. Furthermore, despite the fact that small numbers of animals are still needed to obtain cells for in vitro cultures, these methods may be considered reduction alternatives (since many fewer animals are used compared to in vivo studies) and refinement alternatives (because they eliminate the need to subject the animals to the adverse toxic consequences imposed by in vivo experiments).
In order to interpret the results of in vitro toxicity tests, determine their potential usefulness in assessing toxicity and relate them to the overall toxicological process in vivo, it is necessary to understand which part of the toxicological process is being examined. The entire toxicological process consists of events that begin with the organism’s exposure to a physical or chemical agent, progress through cellular and molecular interactions and ultimately manifest themselves in the response of the whole organism. In vitro tests are generally limited to the part of the toxicological process that takes place at the cellular and molecular level. The types of information that may be obtained from in vitro studies include pathways of metabolism, interaction of active metabolites with cellular and molecular targets and potentially measurable toxic endpoints that can serve as molecular biomarkers for exposure. In an ideal situation, the mechanism of toxicity of each chemical from exposure to organismal manifestation would be known, such that the information obtained from in vitro tests could be fully interpreted and related to the response of the whole organism. However, this is virtually impossible, since relatively few complete toxicological mechanisms have been elucidated. Thus, toxicologists are faced with a situation in which the results of an in vitro test cannot be used as an entirely accurate prediction of in vivo toxicity because the mechanism is unknown. However, frequently during the process of developing an in vitro test, components of the cellular and molecular mechanism(s) of toxicity are elucidated.
One of the key unresolved issues surrounding the development and implementation of in vitro tests is related to the following consideration: should they be mechanistically based, or is it sufficient for them to be descriptive? It is inarguably better from a scientific perspective to utilize only mechanistically based tests as replacements for in vivo tests. However, in the absence of complete mechanistic knowledge, the prospect of developing in vitro tests that completely replace whole-animal tests in the near future is almost nil. This does not, however, rule out the use of more descriptive types of assays as early screening tools, which is the case presently. These screens have resulted in a significant reduction in animal use. Therefore, until more mechanistic information is generated, it may be necessary to employ, to a more limited extent, tests whose results simply correlate well with those obtained in vivo.
In Vitro Tests for Cytotoxicity
In this section, several in vitro tests that have been developed to assess a chemical’s cytotoxic potential will be described. For the most part, these tests are easy to perform and analysis can be automated. One commonly used in vitro test for cytotoxicity is the neutral red assay. This assay is done on cells in culture, and for most applications, the cells can be maintained in culture dishes that contain 96 small wells, each 6.4 mm in diameter. Since each well can be used for a single determination, this arrangement can accommodate multiple concentrations of the test chemical as well as positive and negative controls with a sufficient number of replicates for each. Following treatment of the cells with various concentrations of the test chemical ranging over at least two orders of magnitude (e.g., from 0.01 mM to 1 mM), as well as positive and negative control chemicals, the cells are rinsed and treated with neutral red, a dye that can be taken up and retained only by live cells. The dye may be added upon removal of the test chemical to determine immediate effects, or it may be added at various times after the test chemical is removed to determine cumulative or delayed effects. The intensity of the colour in each well corresponds to the number of live cells in that well. The colour intensity is measured by a spectrophotometer which may be equipped with a plate reader. The plate reader is programmed to provide individual measurements for each of the 96 wells of the culture dish. This automated methodology permits the investigator to rapidly perform a concentration-response experiment and to obtain statistically useful data.
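As an illustration only, the following Python sketch normalizes absorbance readings from such a plate to percent viability relative to untreated control wells; the well layout, blank correction and all readings are hypothetical assumptions for demonstration.

```python
# Minimal sketch of normalizing raw neutral red absorbance readings
# from a 96-well plate; all values are hypothetical. Viability is
# expressed relative to the mean of untreated (negative control) wells.

from statistics import mean

negative_controls = [0.82, 0.79, 0.85, 0.81]   # absorbance, untreated wells
blanks = [0.05, 0.04, 0.06]                     # wells without cells

test_wells = {0.01: [0.80, 0.78],               # concentration (mM) -> readings
              0.1:  [0.61, 0.65],
              1.0:  [0.22, 0.19]}

blank = mean(blanks)
control_signal = mean(negative_controls) - blank

for conc, readings in sorted(test_wells.items()):
    viability = 100 * (mean(readings) - blank) / control_signal
    print(f"{conc:5.2f} mM: {viability:5.1f}% of control")
```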
Another relatively simple assay for cytotoxicity is the MTT test. MTT (3[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) is a tetrazolium dye that is reduced by mitochondrial enzymes to a blue colour. Only cells with viable mitochondria will retain the ability to carry out this reaction; therefore the colour intensity is directly related to the degree of mitochondrial integrity. This is a useful test to detect general cytotoxic compounds as well as those agents that specifically target mitochondria.
The measurement of lactate dehydrogenase (LDH) activity is also used as a broad-based assay for cytotoxicity. This enzyme is normally present in the cytoplasm of living cells and is released into the cell culture medium through leaky cell membranes of dead or dying cells that have been adversely affected by a toxic agent. Small amounts of culture medium may be removed at various times after chemical treatment of the cells to measure the amount of LDH released and determine a time course of toxicity. While the LDH release assay is a very general assessment of cytotoxicity, it is useful because it is easy to perform and it may be done in real time.
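The sketch below illustrates the usual way LDH release is expressed, as a percentage of the total releasable enzyme obtained by deliberately lysing control cells; all activity values and timepoints are hypothetical.

```python
# Sketch of the usual LDH-release calculation; activities are hypothetical.
# Release is expressed as a percentage of the total releasable LDH,
# obtained by completely lysing untreated cells (the "maximum" control).

spontaneous = 12.0     # LDH activity in medium of untreated cells
maximum = 210.0        # LDH activity after complete lysis of control cells

timepoints_h = [1, 4, 8, 24]
treated = [15.0, 48.0, 95.0, 160.0]   # medium samples after treatment

for t, activity in zip(timepoints_h, treated):
    pct = 100 * (activity - spontaneous) / (maximum - spontaneous)
    print(f"{t:>2} h: {pct:5.1f}% LDH release")
```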
There are many new methods being developed to detect cellular damage. More sophisticated methods employ fluorescent probes to measure a variety of intracellular parameters, such as calcium release and changes in pH and membrane potential. In general, these probes are very sensitive and may detect more subtle cellular changes, thus reducing the need to use cell death as an endpoint. In addition, many of these fluorescent assays may be automated by the use of 96-well plates and fluorescent plate readers.
Once data have been collected on a series of chemicals using one of these tests, their relative toxicities may be determined. The relative toxicity of a chemical, as determined in an in vitro test, may be expressed as the concentration that produces a 50% change in the endpoint response relative to untreated cells. This determination is referred to as the EC50 (the effective concentration producing a 50% effect) and may be used to compare the toxicities of different chemicals in vitro. (A similar term used in evaluating relative toxicity is the IC50, the concentration of a chemical that causes a 50% inhibition of a cellular process, e.g., the ability to take up neutral red.) It is not easy to assess whether the relative in vitro toxicity of chemicals is comparable to their relative in vivo toxicities, since there are so many confounding factors in the in vivo system, such as toxicokinetics, metabolism, and repair and defence mechanisms. In addition, since most of these assays measure general cytotoxicity endpoints, they are not mechanistically based. Therefore, agreement between in vitro and in vivo relative toxicities is simply correlative. Despite the numerous complexities and difficulties in extrapolating from in vitro to in vivo, these in vitro tests are proving to be very valuable because they are simple and inexpensive to perform and may be used as screens to flag highly toxic drugs or chemicals at early stages of development.
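To make the EC50 idea concrete, the following sketch estimates an EC50 from hypothetical concentration-response data by log-linear interpolation between the two concentrations bracketing the 50% response; in practice a fitted model (e.g., a four-parameter logistic curve) would normally be preferred.

```python
# Minimal sketch of estimating an EC50 from concentration-response
# data by log-linear interpolation; the data are hypothetical.

import math

# (concentration in mM, response as % of untreated control)
data = [(0.01, 98.0), (0.03, 90.0), (0.1, 71.0), (0.3, 38.0), (1.0, 9.0)]

def ec50(points):
    for (c_lo, r_lo), (c_hi, r_hi) in zip(points, points[1:]):
        if r_lo >= 50.0 >= r_hi:                 # bracketing interval
            frac = (r_lo - 50.0) / (r_lo - r_hi)
            log_c = (math.log10(c_lo)
                     + frac * (math.log10(c_hi) - math.log10(c_lo)))
            return 10 ** log_c
    raise ValueError("response never crosses 50% of control")

print(f"estimated EC50: {ec50(data):.2f} mM")   # ~0.20 mM for these data
```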
Target Organ Toxicity
In vitro tests can also be used to assess specific target organ toxicity. There are a number of difficulties associated with designing such tests, the most notable being the inability of in vitro systems to maintain many of the features of the organ in vivo. Frequently, when cells are taken from animals and placed into culture, they tend to degenerate quickly and/or to dedifferentiate, that is, to lose their organ-like functions and become more generic. This presents a problem in that, within a short period of time, usually a few days, the cultures are no longer useful for assessing organ-specific effects of a toxin.
Many of these problems are being overcome because of recent advances in molecular and cellular biology. Information that is obtained about the cellular environment in vivo may be utilized in modulating culture conditions in vitro. Since the mid-1980s, new growth factors and cytokines have been discovered, and many of these are now available commercially. Addition of these factors to cells in culture helps to preserve their integrity and may also help to retain more differentiated functions for longer periods of time. Other basic studies have increased the knowledge of the nutritional and hormonal requirements of cells in culture, so that new media may be formulated. Recent advances have also been made in identifying both naturally occurring and artificial extracellular matrices on which cells may be cultured. Culture of cells on these different matrices can have profound effects on both their structure and function. A major advantage derived from this knowledge is the ability to intricately control the environment of cells in culture and individually examine the effects of these factors on basic cell processes and on their responses to different chemical agents. In short, these systems can provide great insight into organ-specific mechanisms of toxicity.
Many target organ toxicity studies are conducted in primary cells, which by definition are freshly isolated from an organ and usually exhibit a finite lifetime in culture. There are many advantages to having primary cultures of a single cell type from an organ for toxicity assessment. From a mechanistic perspective, such cultures are useful for studying specific cellular targets of a chemical. In some instances, two or more cell types from an organ may be cultured together, and this provides the added advantage of allowing cell-cell interactions to be examined in response to a toxin. Some co-culture systems for skin have been engineered so that they form a three-dimensional structure resembling skin in vivo. It is also possible to co-culture cells from different organs—for example, liver and kidney. This type of culture would be useful in assessing the kidney-specific effects of a chemical that must be bioactivated in the liver.
Molecular biological tools have also played an important role in the development of continuous cell lines that can be useful for target organ toxicity testing. These cell lines are generated by transfecting DNA into primary cells. In the transfection procedure, the cells and the DNA are treated such that the DNA can be taken up by the cells. The DNA is usually from a virus and contains a gene or genes that, when expressed, allow the cells to become immortalized (i.e., able to live and grow for extended periods of time in culture). The DNA can also be engineered so that the immortalizing gene is controlled by an inducible promoter. The advantage of this type of construct is that the cells will divide only when they receive the appropriate chemical stimulus to allow expression of the immortalizing gene. An example of such a construct is the large T antigen gene from Simian Virus 40 (SV40) (the immortalizing gene), preceded by the promoter region of the metallothionein (MT) gene, which is induced by the presence of a metal in the culture medium. Thus, after the gene is transfected into the cells, the cells may be treated with low concentrations of zinc to stimulate the MT promoter and turn on expression of the T antigen gene. Under these conditions, the cells proliferate. When zinc is removed from the medium, the cells stop dividing and under ideal conditions return to a state where they express their tissue-specific functions.
The ability to generate immortalized cells, combined with advances in cell culture technology, has greatly contributed to the creation of cell lines from many different organs, including brain, kidney and liver. However, before these cell lines may be used as a surrogate for the bona fide cell types, they must be carefully characterized to determine how “normal” they really are.
Other in vitro systems for studying target organ toxicity involve increasing complexity. As in vitro systems progress in complexity from single cell to whole organ culture, they become more comparable to the in vivo milieu, but at the same time they become much more difficult to control given the increased number of variables. Therefore, what may be gained in moving to a higher level of organization can be lost in the inability of the researcher to control the experimental environment. Table 1 compares some of the characteristics of various in vitro systems that have been used to study hepatotoxicity.
Table 1. Comparison of in vitro systems for hepatotoxicity studies
| System | Complexity (level of interaction) | Ability to retain liver-specific functions | Potential duration of culture | Ability to control environment |
| --- | --- | --- | --- | --- |
| Immortalized cell lines | some cell-to-cell (varies with cell line) | poor to good (varies with cell line) | indefinite | excellent |
| Primary hepatocyte cultures | cell-to-cell | fair to excellent (varies with culture conditions) | days to weeks | excellent |
| Liver cell co-cultures | cell-to-cell (between the same and different cell types) | good to excellent | weeks | excellent |
| Liver slices | cell-to-cell (among all cell types) | good to excellent | hours to days | good |
| Isolated, perfused liver | cell-to-cell (among all cell types) and intra-organ | excellent | hours | fair |
Precision-cut tissue slices are being used more extensively for toxicological studies. There are new instruments available that enable the researcher to cut uniform tissue slices in a sterile environment. Tissue slices offer some advantage over cell culture systems in that all of the cell types of the organ are present and they maintain their in vivo architecture and intercellular communication. Thus, in vitro studies may be conducted to determine the target cell type within an organ as well as to investigate specific target organ toxicity. A disadvantage of the slices is that they degenerate rapidly after the first 24 hours of culture, mainly due to poor diffusion of oxygen to the cells on the interior of the slices. However, recent studies have indicated that more efficient aeration may be achieved by gentle rotation. This, together with the use of a more complex medium, allows the slices to survive for up to 96 hours.
Tissue explants are similar in concept to tissue slices and may also be used to determine the toxicity of chemicals in specific target organs. Tissue explants are established by removing a small piece of tissue (for teratogenicity studies, an intact embryo) and placing it into culture for further study. Explant cultures have been useful for short-term toxicity studies including irritation and corrosivity in skin, asbestos studies in trachea and neurotoxicity studies in brain tissue.
Isolated perfused organs may also be used to assess target organ toxicity. These systems offer an advantage similar to that of tissue slices and explants in that all cell types are present, but without the stress to the tissue introduced by the manipulations involved in preparing slices. In addition, they allow for the maintenance of intra-organ interactions. A major disadvantage is their short-term viability, which limits their use for in vitro toxicity testing. In terms of serving as an alternative, these cultures may be considered a refinement since the animals do not experience the adverse consequences of in vivo treatment with toxicants. However, their use does not significantly decrease the numbers of animals required.
In summary, there are several types of in vitro systems available for assessing target organ toxicity. It is possible to acquire much information about mechanisms of toxicity using one or more of these techniques. The difficulty remains in knowing how to extrapolate from an in vitro system, which represents a relatively small part of the toxicological process, to the whole process occurring in vivo.
In Vitro Tests for Ocular Irritation
Perhaps the most contentious whole-animal toxicity test from an animal welfare perspective is the Draize test for eye irritation, which is conducted in rabbits. In this test, a small fixed dose of a chemical is placed in one of the rabbit’s eyes while the other eye is used as a control. The degree of irritation and inflammation is scored at various times after exposure. A major effort is being made to develop methodologies to replace this test, which has been criticized not only for humane reasons, but also because of the subjectivity of the observations and variability of the results. It is interesting to note that despite the harsh criticism the Draize test has received, it has proven to be remarkably successful in predicting human eye irritants, particularly slightly to moderately irritating substances that are difficult to identify by other methods. Thus, the demands on in vitro alternatives are great.
The quest for alternatives to the Draize test is a complicated one, albeit one that is predicted to be successful. Numerous in vitro and other alternatives have been developed, and in some cases they have been implemented. Refinement alternatives to the Draize test, which, by definition, are less painful or distressful to the animals, include the Low Volume Eye Test, in which smaller amounts of test materials are placed in the rabbits’ eyes, not only for humane reasons, but to more closely mimic the amounts to which people may actually be accidentally exposed. Another refinement is that substances with a pH less than 2 or greater than 11.5 are no longer tested in animals, since they are known to be severely irritating to the eye.
Between 1980 and 1989, there was an estimated 87% decline in the number of rabbits used for eye irritation testing of cosmetics. In vitro tests have been incorporated as part of a tier-testing approach to bring about this vast reduction in whole-animal tests. This approach is a multi-step process that begins with a thorough examination of the historical eye irritation data and a physical and chemical analysis of the chemical to be evaluated. If these two processes do not yield enough information, then a battery of in vitro tests is performed. The additional data obtained from the in vitro tests might then be sufficient to assess the safety of the substance. If not, then the final step would be to perform limited in vivo tests. It is easy to see how this approach can eliminate, or at least drastically reduce, the number of animals needed to predict the safety of a test substance.
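The tier-testing logic just described can be caricatured in a few lines of Python; the decision steps follow the description above, but the function name, inputs and thresholds are illustrative assumptions, not a published scheme.

```python
# A highly simplified sketch of the tier-testing logic described above.
# Field names and thresholds are illustrative assumptions; real decision
# schemes are industry- and substance-specific.

def assess_eye_irritation(historical_data_adequate: bool,
                          ph: float,
                          in_vitro_battery_conclusive: bool) -> str:
    # Step 1: existing data may settle the question without new testing.
    if historical_data_adequate:
        return "classify from historical data"
    # Step 2: extreme pH values are presumed severe irritants (no animal test).
    if ph < 2.0 or ph > 11.5:
        return "presumed severe irritant; no in vivo test"
    # Step 3: run the in vitro battery; only if inconclusive proceed in vivo.
    if in_vitro_battery_conclusive:
        return "classify from in vitro battery"
    return "limited in vivo test as the final step"

print(assess_eye_irritation(False, ph=7.0, in_vitro_battery_conclusive=True))
```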
The battery of in vitro tests that is used as part of this tier-testing strategy depends upon the needs of the particular industry. Eye irritation testing is done by a wide variety of industries from cosmetics to pharmaceuticals to industrial chemicals. The type of information required by each industry varies and therefore it is not possible to define a single battery of in vitro tests. A test battery is generally designed to assess five parameters: cytotoxicity, changes in tissue physiology and biochemistry, quantitative structure-activity relationships, inflammation mediators, and recovery and repair. An example of a test for cytotoxicity, which is one possible cause for irritation, is the neutral red assay using cultured cells (see above). Changes in cellular physiology and biochemistry resulting from exposure to a chemical may be assayed in cultures of human corneal epithelial cells. Alternatively, investigators have also used intact or dissected bovine or chicken eyeballs obtained from slaughterhouses. Many of the endpoints measured in these whole organ cultures are the same as those measured in vivo, such as corneal opacity and corneal swelling.
Inflammation is frequently a component of chemical-induced eye injury, and there are a number of assays available to examine this parameter. Various biochemical assays detect the presence of mediators released during the inflammatory process such as arachidonic acid and cytokines. The chorioallantoic membrane (CAM) of the hen’s egg may also be used as an indicator of inflammation. In the CAM assay, a small piece of the shell of a ten-to-14-day chick embryo is removed to expose the CAM. The chemical is then applied to the CAM and signs of inflammation, such as vascular hemorrhaging, are scored at various times thereafter.
One of the most difficult in vivo processes to assess in vitro is recovery and repair of ocular injury. A newly developed instrument, the silicon microphysiometer, measures small changes in extracellular pH and can be used to monitor cultured cells in real time. This analysis has been shown to correlate fairly well with in vivo recovery and has been used as an in vitro test for this process. This has been a brief overview of the types of tests being employed as alternatives to the Draize test for ocular irritation. It is likely that within the next several years a complete series of in vitro test batteries will be defined and each will be validated for its specific purpose.
Validation
The key to regulatory acceptance and implementation of in vitro test methodologies is validation, the process by which the credibility of a candidate test is established for a specific purpose. Efforts to define and coordinate the validation process have been made both in the United States and in Europe. The European Union established the European Centre for the Validation of Alternative Methods (ECVAM) in 1993 to coordinate efforts there and to interact with American organizations such as the Johns Hopkins Center for Alternatives to Animal Testing (CAAT), an academic centre in the United States, and the Interagency Coordinating Committee for the Validation of Alternative Methods (ICCVAM), composed of representatives from the National Institutes of Health, the US Environmental Protection Agency, the US Food and Drug Administration and the Consumer Product Safety Commission.
Validation of in vitro tests requires substantial organization and planning. There must be consensus among government regulators and industrial and academic scientists on acceptable procedures, and sufficient oversight by a scientific advisory board to ensure that the protocols meet set standards. The validation studies should be performed in a series of reference laboratories using calibrated sets of chemicals from a chemical bank and cells or tissues from a single source. Both intralaboratory repeatability and interlaboratory reproducibility of a candidate test must be demonstrated and the results subjected to appropriate statistical analysis. Once the results from the different components of the validation studies have been compiled, the scientific advisory board can make recommendations on the validity of the candidate test(s) for a specific purpose. In addition, results of the studies should be published in peer-reviewed journals and placed in a database.
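As a small illustration of the statistics involved, the sketch below computes intralaboratory and interlaboratory coefficients of variation for hypothetical results (e.g., EC50 values) from three reference laboratories; real validation studies use more formal statistical analyses.

```python
# Sketch of summary statistics of the kind used to document repeatability
# and reproducibility in a validation study. The data are hypothetical
# results for one chemical tested in triplicate in three laboratories.

from statistics import mean, stdev

labs = {
    "lab_A": [0.21, 0.19, 0.23],
    "lab_B": [0.25, 0.27, 0.24],
    "lab_C": [0.18, 0.20, 0.22],
}

def cv(values):
    """Coefficient of variation, as a percentage."""
    return 100 * stdev(values) / mean(values)

# Intralaboratory repeatability: spread of replicates within each laboratory.
for lab, values in labs.items():
    print(f"{lab}: mean {mean(values):.3f}, CV {cv(values):.1f}%")

# Interlaboratory reproducibility: spread of the laboratory means.
lab_means = [mean(v) for v in labs.values()]
print(f"between-lab CV: {cv(lab_means):.1f}%")
```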
The definition of the validation process is currently a work in progress. Each new validation study will provide information useful to the design of the next study. International communication and cooperation are essential for the expeditious development of a widely acceptable series of protocols, particularly given the increased urgency imposed by the passage of the EC Cosmetics Directive. This legislation may indeed provide the needed impetus for a serious validation effort to be undertaken. It is only through completion of this process that the acceptance of in vitro methods by the various regulatory communities can commence.
Conclusion
This article has provided a broad overview of the current status of in vitro toxicity testing. The science of in vitro toxicology is relatively young, but it is growing exponentially. The challenge for the years ahead is to incorporate the mechanistic knowledge generated by cellular and molecular studies into the vast inventory of in vivo data to provide a more complete description of toxicological mechanisms as well as to establish a paradigm by which in vitro data may be used to predict toxicity in vivo. It will only be through the concerted efforts of toxicologists and government representatives that the inherent value of these in vitro methods can be realized.
Structure Activity Relationships
Structure-activity relationship (SAR) analysis is the use of information on the molecular structure of chemicals to predict important characteristics related to persistence, distribution, uptake and absorption, and toxicity. SAR is an alternative method of identifying potentially hazardous chemicals, which holds promise of assisting industries and governments in prioritizing substances for further evaluation or in early-stage decision making for new chemicals.
Toxicology is an increasingly expensive and resource-intensive undertaking. Increased concerns over the potential for chemicals to cause adverse effects in exposed human populations have prompted regulatory and health agencies to expand the range and sensitivity of tests to detect toxicological hazards. At the same time, the real and perceived burdens of regulation upon industry have provoked concerns about the practicality of toxicity testing methods and data analysis. At present, the determination of chemical carcinogenicity depends upon lifetime testing of at least two species, of both sexes, at several doses, with careful histopathological analysis of multiple organs, as well as detection of preneoplastic changes in cells and target organs. In the United States, the cancer bioassay is estimated to cost in excess of $3 million (1995 dollars).
Even with unlimited financial resources, the burden of testing the approximately 70,000 existing chemicals produced in the world today would exceed the available supply of trained toxicologists. Centuries would be required to complete even a first-tier evaluation of these chemicals (NRC 1984). In many countries, ethical concerns over the use of animals in toxicity testing have increased, bringing additional pressure upon the use of standard methods of toxicity testing. SAR has been widely used in the pharmaceutical industry to identify molecules with potential for beneficial use in treatment (Hansch and Zhang 1993). In environmental and occupational health policy, SAR is used to predict the dispersion of compounds in the physical-chemical environment and to screen new chemicals for further evaluation of potential toxicity. Under the US Toxic Substances Control Act (TSCA), the EPA has used an SAR approach since 1979 as a “first screen” of new chemicals in the premanufacture notification (PMN) process; Australia uses a similar approach as part of its new chemicals notification (NICNAS) procedure. In the United States, SAR analysis is an important basis for determining that there is “a reasonable basis to conclude that the manufacture, processing, distribution, use or disposal of the substance will present an unreasonable risk of injury to human health or the environment”, as required by Section 5(f) of TSCA. On the basis of such a finding, EPA can then require actual testing of the substance under Section 6 of TSCA.
Rationale for SAR
The scientific rationale for SAR is based upon the assumption that the molecular structure of a chemical will predict important aspects of its behaviour in physical-chemical and biological systems (Hansch and Leo 1979).
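One widely cited quantitative expression of this assumption is the Hansch equation, which relates biological potency to physical-chemical descriptors of the molecule. The general form below is offered as an illustration and is not drawn from this article:

\[
\log\!\left(\frac{1}{C}\right) = a\,\log P + b\,\sigma + c\,E_s + d
\]

Here C is the molar concentration producing a defined biological effect, log P is the logarithm of the octanol/water partition coefficient (a measure of hydrophobicity), σ is the Hammett electronic substituent constant, E_s is the Taft steric parameter, and a, b, c and d are coefficients fitted by regression to measured data for a series of related compounds.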
SAR Process
The SAR review process includes identification of the chemical structure, including the empirical formula as well as the pure compound; identification of structurally analogous substances; searching of databases and the literature for information on structural analogs; and analysis of toxicity and other data on those analogs. In some rare cases, information on the structure of the compound alone can be sufficient to support an SAR analysis, based upon well-understood mechanisms of toxicity. Several databases on SAR have been compiled, as have computer-based methods for molecular structure prediction.
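As an illustration of the analog-identification step, the Python sketch below (not part of the original article) uses the open-source RDKit cheminformatics toolkit to rank a small chemical bank by structural similarity to a query compound. The bank contents and the query are invented for the example.

from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit import DataStructs

# Hypothetical chemical bank as SMILES strings (illustrative only)
bank = {
    "aniline":        "Nc1ccccc1",
    "4-nitroaniline": "Nc1ccc(cc1)[N+](=O)[O-]",
    "phenol":         "Oc1ccccc1",
    "cyclohexane":    "C1CCCCC1",
}

# Query structure: 4-chloroaniline
query = Chem.MolFromSmiles("Nc1ccc(Cl)cc1")
query_fp = AllChem.GetMorganFingerprintAsBitVect(query, radius=2, nBits=2048)

# Rank bank members by Tanimoto similarity of their fingerprints to the query
hits = []
for name, smiles in bank.items():
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)
    hits.append((DataStructs.TanimotoSimilarity(query_fp, fp), name))

for sim, name in sorted(hits, reverse=True):
    print(f"{name:16s} Tanimoto similarity = {sim:.2f}")

In practice the bank would be a curated database of tested substances, and high-similarity hits would trigger retrieval of their toxicity records for the analysis step.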
With this information, the following endpoints can be estimated with SAR: physical-chemical properties such as boiling point, vapour pressure, water solubility and octanol/water partition coefficient; biodegradation; acute toxicity to fish and Daphnia; acute mammalian toxicity; skin and eye irritation; skin sensitization; subchronic toxicity; and mutagenicity (see table 1).
It should be noted that SAR methods do not exist for such important health endpoints as carcinogenicity, developmental toxicity, reproductive toxicity, neurotoxicity, immunotoxicity or other target organ effects. This is due to three factors: the lack of a large database upon which to test SAR hypotheses, the lack of knowledge of structural determinants of toxic action, and the multiplicity of target cells and mechanisms involved in these endpoints (see “The United States approach to risk assessment of reproductive toxicants and neurotoxic agents”). Some limited attempts have been made to utilize SAR for predicting pharmacokinetics, using information on partition coefficients and solubility (Johanson and Naslund 1988). More extensive quantitative SAR has been done to predict P450-dependent metabolism of a range of compounds and the binding of dioxin- and PCB-like molecules to the cytosolic “dioxin” receptor (Hansch and Zhang 1993).
SAR has been shown to have varying predictability for some of the endpoints listed above, as shown in table 1. This table presents data from two comparisons of predicted activity with actual results obtained by empirical measurement or toxicity testing (a minimal concordance calculation is sketched after the table). SAR as conducted by US EPA experts performed more poorly for predicting physical-chemical properties than for predicting biological activity, including biodegradation. For toxicity endpoints, SAR performed best at predicting mutagenicity. In a more extended study of NTP chemicals, Ashby and Tennant (1991) also found good predictability of short-term genotoxicity. These findings are not surprising, given current understanding of the molecular mechanisms of genotoxicity (see “Genetic toxicology”) and the role of electrophilicity in DNA binding. In contrast, SAR tended to underpredict systemic and subchronic toxicity in mammals and to overpredict acute toxicity to aquatic organisms.
Table 1. Comparison of SAR and test data: OECD/NTP analyses
Endpoint | Agreement (%) | Disagreement (%) | Number |
Boiling point | 50 | 50 | 30 |
Vapour pressure | 63 | 37 | 113 |
Water solubility | 68 | 32 | 133 |
Partition coefficient | 61 | 39 | 82 |
Biodegradation | 93 | 7 | 107 |
Fish toxicity | 77 | 22 | 130 |
Daphnia toxicity | 67 | 33 | 127 |
Acute mammalian toxicity (LD50) | 80 | 20¹ | 142 |
Skin irritation | 82 | 18 | 144 |
Eye irritation | 78 | 22 | 144 |
Skin sensitization | 84 | 16 | 144 |
Subchronic toxicity | 57 | 32 | 143 |
Mutagenicity² | 88 | 12 | 139 |
Mutagenicity³ | 82–94⁴ | 1–10 | 301 |
Carcinogenicity³: two-year bioassay | 72–95⁴ | — | 301 |
Source: Data from OECD, personal communication, C. Auer, US EPA. Only those endpoints for which comparable SAR predictions and actual test data were available were used in this analysis. NTP data are from Ashby and Tennant 1991.
¹ Of concern was the failure of SAR to predict acute toxicity in 12% of the chemicals tested.
² OECD data, based on Ames test concordance with SAR.
³ NTP data, based on genetox assays compared with SAR predictions for several classes of “structurally alerting chemicals”.
⁴ Concordance varies with class; highest concordance was with aromatic amino/nitro compounds; lowest with “miscellaneous” structures.
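As a minimal illustration of how the agreement figures in table 1 are derived, the Python sketch below computes concordance as the percentage of chemicals for which the SAR prediction and the empirical test call coincide. The prediction and test calls are invented for the example; they are not the OECD or NTP data.

# Hypothetical SAR predictions vs. empirical test calls (+/-) for eight chemicals
predicted = ["+", "+", "-", "-", "+", "-", "+", "-"]
observed  = ["+", "-", "-", "-", "+", "-", "+", "+"]

# Concordance: fraction of chemicals where prediction and observation agree
agree = sum(p == o for p, o in zip(predicted, observed))
concordance = 100 * agree / len(predicted)
print(f"Concordance: {concordance:.0f}% ({agree} of {len(predicted)} chemicals)")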
For other toxic endpoints, as noted above, SAR has less demonstrable utility. Mammalian toxicity predictions are complicated by the lack of SAR for toxicokinetics of complex molecules. Nevertheless, some attempts have been made to propose SAR principles for complex mammalian toxicity endpoints (for instance, see Bernstein (1984) for an SAR analysis of potential male reproductive toxicants). In most cases, the database is too small to permit rigorous testing of structure-based predictions.
At this point it may be concluded that SAR is useful mainly for prioritizing the investment of toxicity testing resources or for raising early concerns about potential hazard. Only in the case of mutagenicity is it likely that SAR analysis by itself can be relied upon to inform other decisions. For no endpoint is it likely that SAR can provide the type of quantitative information required for risk assessment purposes as discussed elsewhere in this chapter and Encyclopaedia.
" DISCLAIMER: The ILO does not take responsibility for content presented on this web portal that is presented in any language other than English, which is the language used for the initial production and peer-review of original content. Certain statistics have not been updated since the production of the 4th edition of the Encyclopaedia (1998)."