
Actors

Acting involves immersing oneself in a world of fantasy and bringing forth a character for performance. Actors work in many areas of arts and entertainment, including theatre, film, television, amusement and theme parks and so on. Hazards faced by actors include stress, physical hazards and chemical hazards. Stage fright (performance anxiety) is considered in a separate article.

Stress

Causes of stress include the fierce competition for scarce jobs, the pressure of performing shows daily or even more frequently (e.g., theme parks and matinee days), working at night, touring shows, filming deadlines, frequent retakes (especially while filming television commercials) and so on. There are also psychological pressures involved in adopting and maintaining a character role, including the pressure to express certain emotions upon demand, and the tactics often used by directors to obtain a given reaction from an actor. As a result, actors have higher rates of alcoholism and suicide. The solution to many of these causes of stress involves improved working and living conditions, especially when touring and on location. In addition, personal measures such as therapy and relaxation techniques can also help.

Costumes

Many costumes are a fire hazard near open flames or other ignition sources. Special effects costumes and masks can create problems of heat stress and excess weight.

The costumes of all actors working near open flames must be treated with an approved fire retardant. Actors wearing heavy costumes or costumes not suitable to the climate should be given adequate work breaks. With heavy metal or wood framework costumes, supplying cool air inside the costume might be necessary. Provision should also be made for easy escape from such costumes in case of emergency.

Theatrical Makeup

Theatrical makeup can cause allergic skin and eye reactions and irritation in some people. The widespread practice of sharing makeup, or of applying it to many people from the same container, creates a risk of transmitting bacterial infections. According to medical experts, transmission of HIV and other viruses through shared makeup is unlikely. The use of hair sprays and other spray products in unventilated dressing rooms is also a problem. Special effects makeup can involve the use of more hazardous materials, such as polyurethane and silicone rubber resins and a variety of solvents.

Basic precautions when applying makeup include washing hands before and after; not using old makeup; no smoking, eating or drinking during application; using potable water and not saliva for moistening brushes; avoiding creation of airborne dust; and using pump sprays instead of aerosol sprays. Each performer should have his or her own makeup kit when practical. When applying makeup to several individuals, disposable sponges, brushes and individual applicators, individual lipsticks (or sliced and labelled lipsticks) and so on should be used. The least toxic materials possible should be used for special effects makeup. The dressing room should have a mirror, good lighting and comfortable chairs.

Stunts

A stunt can be defined as any action sequence that involves a greater than normal risk of injury to performers or others on the set. In many such situations, actors are doubled by stunt performers who have extensive experience and training in carrying out such action sequences. Examples of potentially hazardous stunts include falls, fights, helicopter scenes, car chases, fires and explosions. Careful preplanning and written safety procedures are necessary. See the article “Motion picture and television production” for detailed information on stunts.

Other Hazards

Other hazards to actors, especially on location, include environmental conditions (heat, cold, polluted water, etc.), water scenes with possible risk of hypothermia and special effects (fogs and smoke, pyrotechnics, etc.). Special consideration must be given to these factors before filming starts. In theatres, scenes with dirt, gravel, artificial snow and so on can create eye and respiratory irritation problems when hazardous materials are used, or when materials are swept up and reused, resulting in possible biological contamination. An additional hazard is the growing phenomenon of stalking of well-known actors, actresses and other celebrities, with resulting threats or actual violence.

Child Actors

The use of children in theatre and motion picture production can lead to exploitation unless careful procedures are enforced to ensure that children do not work long hours, are not placed in hazardous situations and receive adequate education. Concern has also been expressed about the psychological effects on children participating in theatre or motion picture scenes involving simulated violence. Child labour laws in many countries do not adequately protect child actors.

 


Performance Anxiety

Performance anxiety is, like fear, joy or grief, an emotion which includes physical and psychological components. Motor responses, autonomic reactions, memories, ideas and thoughts continuously interact. Performance anxiety is no longer thought of as an isolated symptom but rather as a syndrome comprising attitudes, traits and unconscious conflicts that become activated in particular circumstances.

Nearly every person must deal with performance anxiety in one form or another at one time or another. By the nature of their profession, however, performing artists, or those for whom public performance is an important part of their profession, have to deal with performance anxiety more frequently and often more intensely than do others. Even those with years of experience may still have a performance anxiety problem.

Performance anxiety is mainly characterized by an irrational situational anxiety accompanied by unwanted physical symptoms which can lead to dysfunction and/or uncontrolled behaviour. It occurs especially in situations in which a task must be performed that could expose the performer to criticism from others. Examples include public speaking, giving a concert, writing exams, sexual performance and so on. Performance anxiety can cause a broad range of physical symptoms of distress, such as trembling hands, trembling lips, diarrhoea, sweating hands and heart palpitations. These symptoms can not only affect the quality of a performance but may also negatively influence the sufferer’s future and career.

Some experts believe that the causes of performance anxiety include improper practice and preparation habits, insufficient performance experience, an unsuitable repertoire and so on. Other theories view performance anxiety as mainly caused by negative thoughts and poor self-esteem. Still others are of the opinion that the stress and fear of performance anxiety are closely related to so-called career stress, which includes feelings of inadequacy, anticipation of punishment or criticism and loss of status. Although there is no agreement as to the cause of performance anxiety, and the explanation cannot be simple, it is clear that the problem is widespread and that even world-famous artists such as Yehudi Menuhin and Pablo Casals are known to have suffered from performance anxiety and fear all their lives.

Personal traits are undoubtedly related to performance anxiety. A challenge for one person can be a catastrophe for another. The experience of performance anxiety depends to a great extent on the personal perception of a fearful situation. Some introverted individuals may, for example, be more prone to stressful events and thus more likely to suffer performance anxiety than others. For some people, success itself can cause fear and performance anxiety. This in turn undermines the performer’s communicative and creative capacities.

To achieve an optimum performance, a degree of fear, stress and nervousness may be unavoidable. The margin between the degree of (still) acceptable performance anxiety and the need for therapeutic intervention, however, can be set only by the performer.

Performance anxiety is a complex phenomenon; its various components lead to variable and changing reactions depending on the situation. Individual aspects, work situations, social factors, personal development and so on play a considerable role, making it difficult to give general rules.

Methods for diminishing performance anxiety include developing personal coping strategies and learning relaxation techniques such as biofeedback. Such approaches are directed towards transforming task-irrelevant negative thoughts and worrisome anticipations into task-relevant demands and a positive, task-orientated frame of mind. Medical interventions, such as beta-blockers and tranquillizers, are also commonly used (Nubé 1995). The taking of drugs, however, remains controversial and should occur only under medical supervision because of possible side effects and contra-indications.

 


Radiation Biology and Biological Effects

After its discovery by Roentgen in 1895, the x ray was introduced so rapidly into the diagnosis and treatment of disease that injuries from excessive radiation exposure began to be encountered almost immediately in pioneer radiation workers, who had yet to become aware of the dangers (Brown 1933). The first such injuries were predominantly skin reactions on the hands of those working with the early radiation equipment, but within a decade many other types of injury also had been reported, including the first cancers attributed to radiation (Stone 1959).

Throughout the century since these early findings, study of the biological effects of ionizing radiation has received continuing impetus from the growing uses of radiation in medicine, science and industry, as well as from the peaceful and military applications of atomic energy. As a result, the biological effects of radiation have been investigated more thoroughly than those of virtually any other environmental agent. The evolving knowledge of radiation effects has been influential in shaping measures for the protection of human health against many other environmental hazards as well as radiation.

Nature and Mechanisms of the Biological Effects of Radiation

Energy deposition. In contrast to other forms of radiation, ionizing radiation is capable of depositing enough localized energy to dislodge electrons from the atoms with which it interacts. Thus, as radiation collides randomly with atoms and molecules in passing through living cells, it gives rise to ions and free radicals which break chemical bonds and cause other molecular changes that injure the affected cells. The spatial distribution of the ionizing events depends on the radiation weighting factor, wR, of the radiation (see table 1 and figure 1).

Table 1. Radiation weighting factors wR

Type and energy range | wR1
Photons, all energies | 1
Electrons and muons, all energies2 | 1
Neutrons, energy <10 keV | 5
Neutrons, 10 keV to 100 keV | 10
Neutrons, >100 keV to 2 MeV | 20
Neutrons, >2 MeV to 20 MeV | 10
Neutrons, >20 MeV | 5
Protons, other than recoil protons, energy >2 MeV | 5
Alpha particles, fission fragments, heavy nuclei | 20

1 All values relate to the radiation incident on the body or, for internal sources, emitted from the source.

2 Excluding Auger electrons emitted from nuclei bound to DNA.

Figure 1. Differences among various types of ionizing radiation in penetrating power in tissue

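As an illustration of how the weighting factors in table 1 enter dose calculations, the short sketch below forms an equivalent dose (in Sv) as the wR-weighted sum of absorbed doses (in Gy) from the radiation types present, following the usual ICRP convention; the absorbed-dose values in the example are arbitrary.

# Equivalent dose as the weighted sum of absorbed doses, using wR values from table 1.
# The absorbed doses below are arbitrary illustrative numbers, not measured data.

RADIATION_WEIGHTING_FACTOR = {
    "photons": 1,
    "electrons": 1,
    "neutrons_10keV_to_100keV": 10,
    "alpha_particles": 20,
}

def equivalent_dose_sv(absorbed_dose_gy_by_type):
    """Sum of wR * absorbed dose (Gy) over the radiation types present, giving Sv."""
    return sum(RADIATION_WEIGHTING_FACTOR[radiation] * dose_gy
               for radiation, dose_gy in absorbed_dose_gy_by_type.items())

# Example: 10 mGy of photons plus 1 mGy of alpha particles
# contributes 0.010 Sv + 20 * 0.001 Sv = 0.030 Sv.
print(equivalent_dose_sv({"photons": 0.010, "alpha_particles": 0.001}))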

Effects on DNA. Any molecule in the cell may be altered by radiation, but DNA is the most critical biological target because of the limited redundancy of the genetic information it contains. An absorbed dose of radiation large enough to kill the average dividing cell—2 gray (Gy)—suffices to cause hundreds of lesions in its DNA molecules (Ward 1988). Most such lesions are reparable, but those produced by a densely ionizing radiation (for example, a proton or an alpha particle) are generally less reparable than those produced by a sparsely ionizing radiation (for example, an x ray or a gamma ray) (Goodhead 1988). Densely ionizing (high LET) radiations, therefore, typically have a higher relative biological effectiveness (RBE) than sparsely ionizing (low LET) radiations for most forms of injury (ICRP 1991).

Effects on genes. Damage to DNA that remains unrepaired or is misrepaired may be expressed in the form of mutations, the frequency of which appears to increase as a linear, non-threshold function of the dose, approximately 10⁻⁵ to 10⁻⁶ per locus per Gy (NAS 1990). The fact that the mutation rate appears to be proportional to the dose is interpreted to signify that traversal of the DNA by a single ionizing particle may, in principle, suffice to cause a mutation (NAS 1990). In Chernobyl accident victims, the dose-response relationship for glycophorin mutations in bone marrow cells closely resembles that observed in atomic bomb survivors (Jensen, Langlois and Bigbee 1995).
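Taken at face value, the quoted per-locus mutation frequency and the linear, non-threshold assumption reduce to a simple proportional calculation, sketched below with an arbitrary example dose.

# Linear, non-threshold scaling of the quoted mutation frequency
# (roughly 1e-6 to 1e-5 per locus per Gy) with absorbed dose.
MUTATION_RATE_PER_LOCUS_PER_GY = (1e-6, 1e-5)  # lower and upper ends of the quoted range

def expected_mutations_per_locus(dose_gy):
    """Return the (low, high) expected mutation frequency per locus for a given dose."""
    low, high = MUTATION_RATE_PER_LOCUS_PER_GY
    return low * dose_gy, high * dose_gy

# For a 2 Gy dose (the cell-killing dose mentioned above), the expected frequency
# is roughly 2e-6 to 2e-5 mutations per locus.
print(expected_mutations_per_locus(2.0))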

Effects on chromosomes. Radiation damage to the genetic apparatus may also cause changes in chromosome number and structure, the frequency of which has been observed to increase with the dose in radiation workers, atomic bomb survivors, and others exposed to ionizing radiation. The dose-response relationship for chromosome aberrations in human blood lymphocytes (figure 2) has been characterized well enough so that the frequency of aberrations in such cells can serve as a useful biological dosimeter (IAEA 1986).

Figure 2. Frequency of dicentric chromosome aberrations in human lymphocytes in relation to dose, dose rate, and quality of irradiation in vitro


 

Effects on cell survival. Among the earliest reactions to irradiation is the inhibition of cell division, which appears promptly after exposure, varying both in degree and duration with the dose (figure 3). Although the inhibition of mitosis is characteristically transitory, radiation damage to genes and chromosomes may be lethal to dividing cells, which are highly radiosensitive as a class (ICRP 1984). Measured in terms of proliferative capacity, the survival of dividing cells tends to decrease exponentially with increasing dose, 1 to 2 Gy generally sufficing to reduce the surviving population by about 50% (figure 4).
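The exponential fall in survival described above can be written as S(D) = exp(-D/D0). The sketch below assumes, purely for illustration, a D0 chosen so that about 1.5 Gy halves the surviving fraction, consistent with the 1 to 2 Gy range quoted; it ignores the low-dose shoulder seen in typical mammalian survival curves (figure 4).

import math

# Exponential cell-survival sketch: S(D) = exp(-D / D0).
# D0 is an illustrative assumption chosen so that ~1.5 Gy halves survival
# (D0 = D_half / ln 2); it is not a fitted radiobiological parameter.
D_HALF_GY = 1.5
D0_GY = D_HALF_GY / math.log(2.0)

def surviving_fraction(dose_gy):
    """Fraction of dividing cells retaining proliferative capacity after the given dose."""
    return math.exp(-dose_gy / D0_GY)

for dose_gy in (0.5, 1.5, 3.0, 6.0):
    print(dose_gy, round(surviving_fraction(dose_gy), 3))
# Approximate output: 0.5 Gy -> 0.794, 1.5 Gy -> 0.5, 3.0 Gy -> 0.25, 6.0 Gy -> 0.063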

Figure 3. Mitotic inhibition induced by x rays in rat corneal epithelial cells


 

Figure 4. Typical dose-survival curves for mammalian cells exposed to x rays and fast neutrons

 


Effects on tissues. Mature, non-dividing cells are relatively radioresistant, but the dividing cells in a tissue are radiosensitive and may be killed in sufficient numbers by intensive irradiation to cause the tissue to become atrophic (figure 5). The rapidity of such atrophy depends on cell population dynamics within the affected tissue; that is, in organs characterized by slow cell turnover, such as the liver and vascular endothelium, the process is typically much slower than in organs characterized by rapid cell turnover, such as the bone marrow, epidermis and intestinal mucosa (ICRP 1984). It is noteworthy, moreover, that if the volume of tissue irradiated is sufficiently small, or if the dose is accumulated gradually enough, the severity of injury may be greatly reduced by the compensatory proliferation of surviving cells.

Figure 5. Characteristic sequence of events in the pathogenesis of nonstochastic effects  of ionizing radiation


Clinical Manifestations of Injury

Types of effects. Radiation effects encompass a wide variety of reactions, varying markedly in their dose-response relationships, clinical manifestations, timing and prognosis (Mettler and Upton 1995). The effects are often subdivided, for convenience, into two broad categories: (1) heritable effects, which are expressed in the descendants of exposed individuals, and (2) somatic effects, which are expressed in exposed individuals themselves. The latter include acute effects, which occur relatively soon after irradiation, as well as late (or chronic) effects, such as cancer, which may not appear until months, years or decades later.

Acute effects. The acute effects of radiation result predominantly from the depletion of progenitor cells in affected tissues (figure 5) and can be elicited only by doses that are large enough to kill many such cells (for example, table 2). For this reason, such effects are viewed as nonstochastic, or deterministic, in nature (ICRP 1984 and 1991), in contradistinction to the mutagenic and carcinogenic effects of radiation, which are viewed as stochastic phenomena resulting from random molecular alterations in individual cells that increase as linear-nonthreshold functions of the dose (NAS 1990; ICRP 1991).

Table 2. Approximate threshold doses of conventionally fractionated therapeutic x-radiation for clinically detrimental nonstochastic effects in various tissues

Organ | Injury at 5 years | Threshold dose (Gy)* | Irradiation field (area)
Skin | Ulcer, severe fibrosis | 55 | 100 cm²
Oral mucosa | Ulcer, severe fibrosis | 60 | 50 cm²
Oesophagus | Ulcer, stricture | 60 | 75 cm²
Stomach | Ulcer, perforation | 45 | 100 cm²
Small intestine | Ulcer, stricture | 45 | 100 cm²
Colon | Ulcer, stricture | 45 | 100 cm²
Rectum | Ulcer, stricture | 55 | 100 cm²
Salivary glands | Xerostomia | 50 | 50 cm²
Liver | Liver failure, ascites | 35 | whole
Kidney | Nephrosclerosis | 23 | whole
Urinary bladder | Ulcer, contracture | 60 | whole
Testes | Permanent sterility | 5-15 | whole
Ovary | Permanent sterility | 2-3 | whole
Uterus | Necrosis, perforation | >100 | whole
Vagina | Ulcer, fistula | 90 | 5 cm²
Breast, child | Hypoplasia | 10 | 5 cm²
Breast, adult | Atrophy, necrosis | >50 | whole
Lung | Pneumonitis, fibrosis | 40 | lobe
Capillaries | Telangiectasis, fibrosis | 50-60 | s
Heart | Pericarditis, pancarditis | 40 | whole
Bone, child | Arrested growth | 20 | 10 cm²
Bone, adult | Necrosis, fracture | 60 | 10 cm²
Cartilage, child | Arrested growth | 10 | whole
Cartilage, adult | Necrosis | 60 | whole
Central nervous system (brain) | Necrosis | 50 | whole
Spinal cord | Necrosis, transection | 50 | 5 cm²
Eye | Panophthalmitis, haemorrhage | 55 | whole
Cornea | Keratitis | 50 | whole
Lens | Cataract | 5 | whole
Ear (inner) | Deafness | >60 | whole
Thyroid | Hypothyroidism | 45 | whole
Adrenal | Hypoadrenalism | >60 | whole
Pituitary | Hypopituitarism | 45 | whole
Muscle, child | Hypoplasia | 20-30 | whole
Muscle, adult | Atrophy | >100 | whole
Bone marrow | Hypoplasia | 2 | whole
Bone marrow | Hypoplasia, fibrosis | 20 | localized
Lymph nodes | Atrophy | 33-45 | s
Lymphatics | Sclerosis | 50 | s
Foetus | Death | 2 | whole

* Dose causing effect in 1-5 per cent of exposed persons.

Source: Rubin and Casarett 1972.

Acute injuries of the types that were prevalent in pioneer radiation workers and early radiotherapy patients have been largely eliminated by improvements in safety precautions and treatment methods. Nevertheless, most patients treated with radiation today still experience some injury of the normal tissue that is irradiated. In addition, serious radiation accidents continue to occur. For example, some 285 nuclear reactor accidents (excluding the Chernobyl accident) were reported in various countries between 1945 and 1987, irradiating more than 1,350 persons, 33 of them fatally (Lushbaugh, Fry and Ricks 1987). The Chernobyl accident alone released enough radioactive material to require the evacuation of tens of thousands of people and farm animals from the surrounding area, and it caused radiation sickness and burns in more than 200 emergency personnel and fire-fighters, injuring 31 fatally (UNSCEAR 1988). The long-term health effects of the radioactive material released cannot be predicted with certainty, but estimates of the resulting risks of carcinogenic effects, based on nonthreshold dose-incidence models (discussed below), imply that up to 30,000 additional cancer deaths may occur in the population of the northern hemisphere during the next 70 years as a result of the accident, although the additional cancers in any given country are likely to be too few to be detectable epidemiologically (USDOE 1987).

Less catastrophic, but far more numerous, than reactor accidents have been accidents involving medical and industrial gamma ray sources, which also have caused injuries and loss of life. For example, the improper disposal of a caesium-137 radiotherapy source in Goiânia, Brazil, in 1987, resulted in the irradiation of dozens of unsuspecting victims, four of them fatally (UNSCEAR 1993).

A comprehensive discussion of radiation injuries is beyond the scope of this review, but acute reactions of the more radiosensitive tissues are of widespread interest and are, therefore, described briefly in the following sections.

Skin. Cells in the germinal layer of the epidermis are highly radiosensitive. As a result, rapid exposure of the skin to a dose of 6 Sv or more causes erythema (reddening) in the exposed area, which appears within a day or so, typically lasts a few hours, and is followed two to four weeks later by one or more waves of deeper and more prolonged erythema, as well as by epilation (hair loss). If the dose exceeds 10 to 20 Sv, blistering, necrosis and ulceration may ensue within two to four weeks, followed by fibrosis of the underlying dermis and vasculature, which may lead to atrophy and a second wave of ulceration months or years later (ICRP 1984).

Bone marrow and lymphoid tissue. Lymphocytes also are highly radiosensitive; a dose of 2 to 3 Sv delivered rapidly to the whole body can kill enough of them to depress the peripheral lymphocyte count and impair the immune response within hours (UNSCEAR 1988). Haemopoietic cells in the bone marrow are similarly radiosensitive and are depleted sufficiently by a comparable dose for granulocytopenia and thrombocytopenia to ensue within three to five weeks. Such reductions in granulocyte and platelet counts may be severe enough after a larger dose to result in haemorrhage or fatal infection (table 3).

Table 3. Major forms and features of the acute radiation syndrome

First day
  Cerebral form (>50 Gy): nausea, vomiting, diarrhea, headache, disorientation, ataxia, coma, convulsions, death
  Gastro-intestinal form (10-20 Gy): nausea, vomiting, diarrhea
  Hemopoietic form (2-10 Gy): nausea, vomiting, diarrhea
  Pulmonary form (>6 Gy to lungs): nausea, vomiting

Second week
  Gastro-intestinal form: nausea, vomiting, diarrhea, fever, erythema, prostration, death

Third to sixth weeks
  Hemopoietic form: weakness, fatigue, anorexia, fever, haemorrhage, epilation, recovery (?), death (?)

Second to eighth months
  Pulmonary form: cough, dyspnoea, fever, chest pain, respiratory failure (?)

Source: UNSCEAR 1988.

Intestine. Stem cells in the epithelium lining the small bowel also are extremely radiosensitive, acute exposure to 10 Sv depleting their numbers sufficiently to cause the overlying intestinal villi to become denuded within days (ICRP 1984; UNSCEAR 1988). Denudation of a large area of the mucosa can result in a fulminating, rapidly fatal dysentery-like syndrome (table 3).

Gonads. Mature spermatozoa can survive large doses (100 Sv), but spermatogonia are so radiosensitive that as little as 0.15 Sv delivered rapidly to both testes suffices to cause oligospermia, and a dose of 2 to 4 Sv can cause permanent sterility. Oocytes, likewise, are radiosensitive, a dose of 1.5 to 2.0 Sv delivered rapidly to both ovaries causing temporary sterility, and a larger dose, permanent sterility, depending on the age of the woman at the time of exposure (ICRP 1984).

Respiratory tract. The lung is not highly radiosensitive, but rapid exposure to a dose of 6 to 10 Sv can cause acute pneumonitis to develop in the exposed area within one to three months. If a large volume of lung tissue is affected, the process may result in respiratory failure within weeks, or may lead to pulmonary fibrosis and cor pulmonale months or years later (ICRP 1984; UNSCEAR 1988).

Lens of the eye. Cells of the anterior epithelium of the lens, which continue to divide throughout life, are relatively radiosensitive. As a result, rapid exposure of the lens to a dose exceeding 1 Sv may lead within months to the formation of a microscopic posterior polar opacity; and 2 to 3 Sv received in a single brief exposure—or 5.5 to 14 Sv accumulated over a period of months—may produce a vision-impairing cataract (ICRP 1984).

Other tissues. In comparison with the tissues mentioned above, other tissues of the body are generally appreciably less radiosensitive (for example, table 2); however, the embryo constitutes a notable exception, as discussed below. Noteworthy also is the fact that the radiosensitivity of every tissue is increased when it is in a rapidly growing state (ICRP 1984).

Whole-body radiation injury. Rapid exposure of a major part of the body to a dose in excess of 1 Gy can cause the acute radiation syndrome. This syndrome includes: (1) an initial prodromal stage, characterized by malaise, anorexia, nausea and vomiting, (2) an ensuing latent period, (3) a second (main) phase of illness and (4) ultimately, either recovery or death (table 3). The main phase of the illness typically takes one of the following forms, depending on the predominant locus of radiation injury: (1) haematological, (2) gastro-intestinal, (3) cerebral or (4) pulmonary (table 3).

Localized radiation injury. Unlike the clinical manifestations of acute whole-body radiation injury, which typically are dramatic and prompt, the reaction to sharply localized irradiation, whether from an external radiation source or from an internally deposited radionuclide, tends to evolve slowly and to produce few symptoms or signs unless the volume of tissue irradiated and/or the dose are relatively large (for example, table 3).

Effects of radionuclides. Some radionuclides - for example, tritium (³H), carbon-14 (¹⁴C) and cesium-137 (¹³⁷Cs) - tend to be distributed systemically and to irradiate the body as a whole, whereas other radionuclides are characteristically taken up and concentrated in specific organs, producing injuries that are correspondingly localized. Radium (Ra) and strontium-90 (⁹⁰Sr), for example, are deposited predominantly in bone and thus injure skeletal tissues primarily, whereas radioactive iodine concentrates in the thyroid gland, the primary site of any resulting injury (Stannard 1988; Mettler and Upton 1995).

Carcinogenic Effects

General features. The carcinogenicity of ionizing radiation, first manifested early in this century by the occurrence of skin cancers and leukaemias in pioneer radiation workers (Upton 1986), has since been documented extensively by dose-dependent excesses of many types of neoplasms in radium-dial painters, underground hardrock miners, atomic bomb survivors, radiotherapy patients and experimentally irradiated laboratory animals (Upton 1986; NAS 1990).

The benign and malignant growths induced by irradiation characteristically take years or decades to appear and exhibit no known features by which they can be distinguished from those produced by other causes. With few exceptions, moreover, their induction has been detectable only after relatively large dose equivalents (0.5 Sv), and it has varied with the type of neoplasm as well as the age and sex of those exposed (NAS 1990).

Mechanisms. The molecular mechanisms of radiation carcinogenesis remain to be elucidated in detail, but in laboratory animals and cultured cells the carcinogenic effects of radiation have been observed to include initiating effects, promoting effects, and effects on the progression of neoplasia, depending on the experimental conditions in question (NAS 1990). The effects also appear to involve the activation of oncogenes and/or the inactivation or loss of tumor-suppressor genes in many, if not all, instances. In addition, the carcinogenic effects of radiation resemble those of chemical carcinogens in being similarly modifiable by hormones, nutritional variables and other modifying factors (NAS 1990). It is noteworthy, moreover, that the effects of radiation may be additive, synergistic or mutually antagonistic with those of chemical carcinogens, depending on the specific chemicals and exposure conditions in question (UNSCEAR 1982 and 1986).

Dose-effect relationship. Existing data do not suffice to describe the dose-incidence relationship unambiguously for any type of neoplasm or to define how long after irradiation the risk of the growth may remain elevated in an exposed population. Any risks attributable to low-level irradiation can, therefore, be estimated only by extrapolation, based on models incorporating assumptions about such parameters (NAS 1990). Of various dose-effect models that have been used to estimate the risks of low-level irradiation, the one that has been judged to provide the best fit to the available data is of the form:

R(D) = R0 [1 + f(D) g(b)]

where R0 denotes the age-specific background risk of death from a specific type of cancer, D the radiation dose, f(D) a function of dose that is linear-quadratic for leukaemia and linear for some other types of cancer, and g(b) a risk function dependent on other parameters, such as sex, age at exposure and time after exposure (NAS 1990).
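A minimal numerical sketch of a relative-risk model of this general form is given below. The shapes of f(D) and g(b) and all coefficient values are illustrative placeholders, not the fitted functions used in NAS (1990).

# Sketch of a relative-risk model of the form R(D) = R0 * (1 + f(D) * g(b)).
# All parameter values below are hypothetical placeholders for illustration only.

def f_dose(dose_sv, alpha=0.5, beta=0.0):
    """f(D): linear-quadratic in dose; with beta = 0 it reduces to the linear case."""
    return alpha * dose_sv + beta * dose_sv ** 2

def g_modifiers(sex="female", age_at_exposure=30, years_since_exposure=20):
    """g(b): stand-in for the modifying function of sex, age at exposure and time since exposure."""
    return 1.0  # placeholder value

def risk(background_risk, dose_sv, **covariates):
    """R(D) = R0 * (1 + f(D) * g(b)) under the illustrative assumptions above."""
    return background_risk * (1.0 + f_dose(dose_sv) * g_modifiers(**covariates))

# Example: a hypothetical background risk of 5% and a rapid dose of 0.1 Sv
# gives 0.05 * (1 + 0.5 * 0.1) = 0.0525.
print(risk(0.05, 0.1))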

Non-threshold models of this type have been applied to epidemiological data from the Japanese atomic-bomb survivors and other irradiated populations to derive estimates of the lifetime risks of different forms of radiation-induced cancer (for example, table 4). Such estimates must be interpreted with caution, however, in attempting to predict the risks of cancer attributable to small doses or doses that are accumulated over weeks, months or years, since experiments with laboratory animals have shown the carcinogenic potency of x rays and gamma rays to be reduced by as much as an order of magnitude when the exposure is greatly prolonged. In fact, as has been emphasized elsewhere (NAS 1990), the available data do not exclude the possibility that there may be a threshold in the millisievert (mSv) dose equivalent range, below which radiation may lack carcinogenicity.

Table 4. Estimated lifetime risks of cancer attributable to 0.1 Sv rapid irradiation

Type or site of cancer | Excess cancer deaths per 100,000 (No.) | (%)*
Stomach | 110 | 18
Lung | 85 | 3
Colon | 85 | 5
Leukaemia (excluding CLL) | 50 | 10
Urinary bladder | 30 | 5
Oesophagus | 30 | 10
Breast | 20 | 1
Liver | 15 | 8
Gonads | 10 | 2
Thyroid | 8 | 8
Osteosarcoma | 5 | 5
Skin | 2 | 2
Remainder | 50 | 1
Total | 500 | 2

* Percentage increase in “background” expectation for a non-irradiated population.

Source: ICRP 1991.

It is also noteworthy that the estimates tabulated are based on population averages and are not necessarily applicable to any given individual; that is, susceptibility to certain types of cancer (for example, cancers of the thyroid and breast) is substantially higher in children than in adults, and susceptibility to certain cancers is also increased in association with some hereditary disorders, such as retinoblastoma and the nevoid basal cell carcinoma syndrome (UNSCEAR 1988, 1994; NAS 1990). Such differences in susceptibility notwithstanding, population-based estimates have been proposed for use in compensation cases as a basis for gauging the probability that a cancer arising in a previously irradiated person may have been caused by the exposure in question (NIH 1985).

Low-dose risk assessment. Epidemiological studies to ascertain whether the risks of cancer from low-level exposure to radiation actually vary with dose in the manner predicted by the above estimates have been inconclusive thus far. Populations residing in areas of elevated natural background radiation levels manifest no definitely attributable increases in cancer rates (NAS 1990; UNSCEAR 1994); conversely, a few studies have even suggested an inverse relationship between background radiation levels and cancer rates, which has been interpreted by some observers as evidence for the existence of beneficial (or hormetic) effects of low-level irradiation, in keeping with the adaptive responses of certain cellular systems (UNSCEAR 1994). The inverse relationship is of questionable significance, however, since it has not persisted after controlling for the effects of confounding variables (NAS 1990). Likewise in today’s radiation workers—except for certain cohorts of underground hardrock miners (NAS 1994; Lubin, Boice and Edling 1994)—the rates of cancers other than leukaemia are no longer detectably increased (UNSCEAR 1994), thanks to advances in radiation protection; furthermore, the rates of leukaemia in such workers are consistent with the estimates tabulated above (IARC 1994). In summary, therefore, the data available at present are consistent with the estimates tabulated above (table 4), which imply that less than 3% of cancers in the general population are attributable to natural background radiation (NAS 1990; IARC 1994), although up to 10% of lung cancers may be attributable to indoor radon (NAS 1990; Lubin, Boice and Edling 1994).

High levels of radioactive fallout from a thermonuclear weapons test at Bikini in 1954 have been observed to cause a dose-dependent increase in the frequency of thyroid cancer in Marshall Islanders who received large doses to the thyroid gland in childhood (Robbins and Adams 1989). Similarly, children living in areas of Belarus and the Ukraine contaminated by radionuclides released from the Chernobyl accident have been reported to show an increased incidence of thyroid cancer (Prisyazhuik, Pjatak and Buzanov 1991; Kasakov, Demidchik and Astakhova 1992), but the findings are at variance with those of the International Chernobyl Project, which found no excess of benign or malignant thyroid nodules in children living in the more heavily contaminated areas around Chernobyl (Mettler, Williamson and Royal 1992). The basis for the discrepancy, and whether the reported excesses may have resulted from heightened surveillance alone, remain to be determined. In this connection, it is noteworthy that children of south-western Utah and Nevada who were exposed to fallout from nuclear weapons tests in Nevada during the 1950s have shown an increased frequency of thyroid cancer (Kerber et al. 1993), and the prevalence of acute leukaemia appears to have been elevated in such children dying between 1952 and 1957, the period of greatest exposure to fallout (Stevens et al. 1990).

The possibility that excesses of leukaemia among children residing in the vicinity of nuclear plants in the United Kingdom may have been caused by radioactivity released from the plants has also been suggested. The releases, however, are estimated to have increased the total radiation dose to such children by less than 2%, from which it is inferred that other explanations are more likely (Doll, Evans and Darby 1994). An infective aetiology for the observed clusters of leukaemia is implied by the existence of comparable excesses of childhood leukaemia at sites in the UK that lack nuclear facilities but otherwise resemble nuclear sites in having similarly experienced large influxes of population in recent times (Kinlen 1988; Doll, Evans and Darby 1994). Another hypothesis—namely, that the leukaemias in question may have been caused by occupational irradiation of the fathers of the affected children—also has been suggested by the results of a case-control study (Gardner et al. 1990), but this hypothesis is generally discounted for reasons that are discussed in the section to follow.

Heritable Effects

Heritable effects of irradiation, although well documented in other organisms, have yet to be observed in humans. For example, intensive study of more than 76,000 children of the Japanese atomic-bomb survivors, carried out over four decades, has failed to disclose any heritable effects of radiation in this population, as measured by untoward pregnancy outcomes, neonatal deaths, malignancies, balanced chromosomal rearrangements, sex-chromosome aneuploidy, alterations of serum or erythrocyte protein phenotypes, changes in sex ratio or disturbances in growth and development (Neel, Schull and Awa 1990). Consequently, estimates of the risks of heritable effects of radiation must rely heavily on extrapolation from findings in the laboratory mouse and other experimental animals (NAS 1990; UNSCEAR 1993).

From the available experimental and epidemiological data, it is inferred that the dose required to double the rate of heritable mutations in human germ cells must be at least 1.0 Sv (NAS 1990; UNSCEAR 1993). On this basis, it is estimated that less than 1% of all genetically determined diseases in the human population can be attributed to natural background irradiation (table 5).

Table 5. Estimated frequencies of heritable disorders attributable to natural background ionizing irradiation

Type of disorder | Natural prevalence (per million live births) | Contribution from natural background radiation1 (per million live births)2: first generation | equilibrium generations3
Autosomal dominant | 180,000 | 20-100 | 300
X-linked | 400 | <1 | <15
Recessive | 2,500 | <1 | very slow increase
Chromosomal | 4,400 | <20 | very slow increase
Congenital defects | 20,000-30,000 | 30 | 30-300
Other disorders of complex aetiology:
Heart disease | 600,000 | not estimated4 | not estimated4
Cancer | 300,000 | not estimated4 | not estimated4
Selected others | 300,000 | not estimated4 | not estimated4

1 Equivalent to ≈ 1 mSv per year, or ≈ 30 mSv per generation (30 years).

2 Values rounded.

3 After hundreds of generations, the addition of unfavorable radiation-induced mutations eventually becomes balanced by their loss from the population, resulting in a genetic "equilibrium".

4 Quantitative risk estimates are lacking because of uncertainty about the mutational component of the disease(s) indicated.

Source: National Research Council 1990.

The hypothesis that the excess of leukaemia and non-Hodgkin’s lymphoma in young people residing in the village of Seascale resulted from heritable oncogenic effects caused by the occupational irradiation of the children’s fathers at the Sellafield nuclear installation has been suggested by the results of a case-control study (Gardner et al. 1990), as noted above. Arguments against this hypothesis, however, are:

  1. the lack of any comparable excess in larger numbers of children born outside Seascale to fathers who had received similar, or even larger, occupational doses at the same nuclear plant (Wakeford et al. 1994a)
  2. the lack of similar excesses in French (Hill and LaPlanche 1990), Canadian (McLaughlin et al. 1993) or Scottish (Kinlen, Clarke and Balkwill 1993) children born to fathers with comparable occupational exposures
  3. the lack of excesses in the children of atomic-bomb survivors (Yoshimoto et al. 1990)
  4. the lack of excesses in US counties containing nuclear plants (Jablon, Hrubec and Boice 1991)
  5. the fact that the frequency of radiation-induced mutations implied by the interpretation is far higher than established rates (Wakeford et al. 1994b).

 

On balance, therefore, the available data fail to support the paternal gonadal irradiation hypothesis (Doll, Evans and Darby 1994; Little, Charles and Wakeford 1995).

Effects of Prenatal Irradiation

Radiosensitivity is relatively high throughout prenatal life, but the effects of a given dose vary markedly, depending on the developmental stage of the embryo or foetus at the time of exposure (UNSCEAR 1986). During the pre-implantation period, the embryo is most susceptible to killing by irradiation, while during critical stages in organogenesis it is susceptible to the induction of malformations and other disturbances of development (table 6). The latter effects are dramatically exemplified by the dose-dependent increase in the frequency of severe mental retardation (figure 6) and the dose-dependent decrease in IQ test scores in atomic-bomb survivors who were exposed between the eighth and fifteenth weeks (and, to a lesser extent, between the sixteenth and twenty-fifth weeks) (UNSCEAR 1986 and 1993).

Table 6. Major developmental abnormalities produced by prenatal irradiation

Brain

Anencephaly

Porencephaly

Microcephaly*

Encephalocoele

Mongolism*

Reduced medulla

Cerebral atrophy

Mental retardation*

Neuroblastoma

Narrow aqueduct

Hydrocephalus*

Dilatation of ventricles*

Spinal cord anomalies*

Cranial nerve anomalies

 

Eyes

Anophthalmia

Microphthalmia*

Microcornia*

Coloboma*

Deformed iris

Absence of lens

Absence of retina

Open eyelids

Strabismus*

Nystagmus*

Retinoblastoma

Hypermetropia

Glaucoma

Cataract*

Blindness

Chorioretinitis*

Partial albinism

Ankyloblepharon

Skeleton

General stunting

Reduced size of skull

Skull deformities*

Head ossification defects*

Vaulted cranium

Narrow head

Cranial blisters

Cleft palate*

Funnel chest

Dislocation of hip

Spina bifida

Deformed tail

Deformed feet

Club foot*

Digital anomalies*

Calcaneo valgus

Odontogenesis imperfecta*

Tibial exostosis

Amelanogenesis*

Scleratomal necrosis

 

Miscellaneous

Situs inversus

Hydronephrosis

Hydroureter

Hydrocoele

Absence of kidney

Gonadal anomalies*

Congenital heart disease

Facial deformities

Pituitary disturbances

Deformities of ears

Motor disturbances

Dermatomal necrosis

Myotomal necrosis

Abnormalities in skin pigmentation

 

* These abnormalities have been observed in humans exposed prenatally to large doses of radiation and have, therefore, been tentatively attributed to irradiation.

Source: Brill and Forgotson 1964.

Susceptibility to the carcinogenic effects of radiation also appears to be relatively high throughout the prenatal period, judging from the association between childhood cancer (including leukaemia) and prenatal exposure to diagnostic x rays reported in case-control studies (NAS 1990). The results of such studies imply that prenatal irradiation may cause a 4,000% per Sv increase in the risk of leukaemia and other childhood cancers (UNSCEAR 1986; NAS 1990), which is a far larger increase than is attributable to postnatal irradiation (UNSCEAR 1988; NAS 1990). Although, paradoxically, no excess of childhood cancer was recorded in A-bomb survivors irradiated prenatally (Yoshimoto et al. 1990), as noted above, there were too few such survivors to exclude an excess of the magnitude in question.

Figure 6. The frequency of severe mental retardation in relation to radiation dose in prenatally irradiated atomic bomb survivors    


Summary and Conclusions

The adverse effects of ionizing radiation on human health are widely diverse, ranging from rapidly fatal injuries to cancers, birth defects, and hereditary disorders that appear months, years or decades later. The nature, frequency and severity of effects depend on the quality of the radiation in question as well as on the dose and conditions of exposure. Most such effects require relatively high levels of exposure and are, therefore, encountered only in accident victims, radiotherapy patients, or other heavily irradiated persons. The genotoxic and carcinogenic effects of ionizing radiation, by contrast, are presumed to increase in frequency as linear non-threshold functions of the dose; hence, although the existence of thresholds for these effects cannot be excluded, their frequency is assumed to increase with any level of exposure. For most effects of radiation, the sensitivity of exposed cells varies with their rate of proliferation and inversely with their degree of differentiation, the embryo and growing child being especially vulnerable to injury.

 


Singers

The term singer applies to any person whose career, avocation or livelihood relies heavily on the use of his or her voice in a musical context rather than ordinary speech. Unlike percussionists, pianists or violinists, the singer is the instrument. Hence, the well-being of a singer depends not only on the health of his or her larynx (where the sound originates) or vocal tract (where the sound is modified), but also on proper functioning and maximal coordination of most mind and body systems.

Of the many styles of singing documented throughout the world, some reflect a unique liturgical, cultural, linguistic, ethnic or geo-political heritage, while others are more universal in nature. Among the common styles of singing in the United States and Western world are: traditional classical (including oratorio, opera, art songs and so on), barbershop, jazz, musical theatre (Broadway), choral, gospel, folk, country (and western), popular, rhythm and blues, rock ’n’ roll (including heavy metal, alternative rock and so on) and others. Each style of delivery has its typical settings, patterns, habits and associated risk factors.

Vocal Problems

Unlike non-singers, who may not be significantly hindered by vocal problems, the classical singer can find the effect of even subtle vocal impairment devastating. Even within that category of trained singers, vocal impairment is much more debilitating for the higher voice classifications (sopranos and tenors) than for the lower classifications (mezzo-sopranos, altos, baritones and basses). On the other hand, some vocal performers (pop, gospel or rock, for example) go to great lengths to achieve a unique trademark and enhance their marketability by inducing vocal pathologies which often yield a breathy, husky, muffled, diplophonic (simultaneous multiple pitches) quality. Owing, in part, to their impairment, they tend to sing with great effort, struggling particularly to produce the high notes. To many listeners, this struggle adds a dramatic effect, as if the singer were sacrificing himself or herself in the artistic process.

The prevalence of occupation-related injuries in general, and voice disorders in particular, among singers is not well documented in the literature. This author estimates that on the average, between 10 and 20% of singers in the United States sustain some form of chronic voice disorder. However, the incidence of vocal injury varies significantly with many factors. Because many singers must adhere to specific artistic/aesthetic criteria, performance practices, popular (consumer) demands, financial constraints and social pressures, they often stretch their vocal capabilities and endurance to the limits. Furthermore, singers generally tend to deny, trivialize or ignore warning signs and even diagnoses of vocal injury (Bastian, Keidar and Verdolini-Marston 1990).

The most common problems among singers are benign mucosal disorders. The mucosa is the outer layer, or cover, of the vocal folds (commonly called vocal cords) (Zeitels 1995). Acute problems can include laryngitis and transient vocal fold swelling (oedema). Chronic mucosal lesions include vocal fold swellings, nodules (“calluses”), polyps, cysts, sub-mucosal haemorrhage (bleeding), capillary ectasia (widening), chronic laryngitis, leukoplakia (white spots or patches), mucosal tears and glottic sulci (deep furrows in the tissue). Although these disorders can be exacerbated by smoking and excessive alcohol consumption, it is important to note that these benign mucosal lesions are typically related to the amount and manner of voice use, and are the product of vibratory trauma (Bastian 1993).

Causes of Vocal Problems

In looking at the causes of vocal problems in singers, one should distinguish between intrinsic and extrinsic factors. Intrinsic factors are those related to personality, vocal behaviour (including speaking) on and off stage, vocal technique and intake habits (primarily where substance abuse, improper medication, malnutrition and/or dehydration are involved). Extrinsic factors are related to environmental pollutants, allergies and so on. Based on clinical experience, intrinsic factors tend to be the most important.

Vocal injury is usually a cumulative process of misuse and/or overuse during the singer’s productive (performance-related) and/or non-productive (domestic, social) activities. It is difficult to ascertain how much of the damage is attributable directly to the former versus the latter. Performance risk factors can include unreasonably long dress rehearsals requiring full-voice singing, performing with an upper-respiratory infection in the absence of a replacement and excessive singing. Most vocalists are advised not to sing for more than about 1.5 hours (net) per day. Unfortunately, many singers do not respect the limitations of their apparatus. Some tend to get caught up in the exploratory excitement of new technical skills, new means of artistic expression, new repertoire and so on, and practice 4, 5 or 6 hours daily. Even worse is the beating of the voice into shape when distress signals of injury (such as loss of high notes, inability to sing softly, breathy delay in sound initiation, unstable vibrato and increased phonatory effort) are manifested. The culpability of vocal overtaxing is shared with other taskmasters such as the booking agent who squeezes multiple performances into an impossible time frame, and the recording agent who leases the studio for 12 consecutive hours during which the singer is expected to record a complete CD sound track from start to finish.

Although every singer may encounter acute episodes of voice problems at some point in his or her career, it is generally believed that those singers who are musically literate and can adjust the musical score to their voice limitations, and those who have had proper voice training, are less likely to encounter severe problems of a chronic nature than their untrained peers, who often learn their repertoire by rote, repeatedly imitating or singing along with demo tapes or recordings of other performers. In doing so, they frequently sing in a key, range or style unsuitable for their voices. Singers who lend themselves to periodic tutelage and maintenance by proficient voice experts are less likely to resort to faulty compensatory vocal manoeuvres if confronted by physical impairment, and are more inclined to establish a reasonable balance between artistic demands and vocal longevity. A good teacher is aware of the normal (expected) capabilities of each instrument, can usually distinguish between technical and physical limitations, and often is the first to detect warning signs of vocal impairment.

Sound amplification can also create problems for singers. Many rock groups, for example, amplify not only the singer, but the entire band. When the noise level interferes with auditory feedback, the singer is often unaware that he or she is singing too loudly and using faulty technique. This may contribute significantly to the development and exacerbation of vocal pathology.

Non-performance factors can also be important. Singers must realize that they do not have separate laryngeal mechanisms for singing and speaking. Although most professional singers spend much more time talking than singing, speaking technique is commonly discarded or rejected, which can adversely affect their singing.

Many of today’s singers must travel regularly from one performance venue to another, on trains, tour buses or airplanes. Ongoing touring requires not only psychological adaptation, but also physical adjustments on many levels. In order for singers to function optimally, they must receive adequate quality and quantity of sleep. Radical, rapid changes in time zones cause jet lag, which forces singers to remain awake and alert when their internal clock is cueing various body systems to shut down for sleep, and conversely, to sleep when their brain systems are aroused to plan and execute normal daytime activities. Such disruption may result in a host of debilitating symptoms, including chronic insomnia, headaches, sluggishness, dizziness, irritability and forgetfulness (Monk 1994). Aberrant sleep patterns are also a common problem among singers who perform late at night. These abnormal sleep patterns are all too often mismanaged with alcohol or recreational, prescription or over-the-counter (OTC) drugs (most of which adversely affect the voice). Frequent and/or prolonged confinement to the closed cabin of a motor vehicle, train or aircraft may create additional problems. Inhalation of poorly filtered (often recycled), contaminated, dehumidified (dry) air (Feder 1984), according to many singers, can cause respiratory discomfort, tracheitis, bronchitis or laryngitis that may linger for hours or even days after a trip.

Owing to environmental instability and hectic scheduling, many singers develop erratic, unhealthful eating habits. In addition to reliance on restaurant food and unpredictable changes in meal times, many singers eat the main meal of the day after their performance, usually late at night. Particularly for the overweight singer, and especially if spicy, greasy or acidic foods, alcohol or coffee were consumed, lying down soon after having filled the stomach is likely to result in gastroesophageal reflux. Reflux is the retrograde flow of acids from the stomach up the oesophagus and into the throat and larynx. The resulting symptoms can be devastating to the singer. Eating disorders are quite common among singers. In the operatic and classical realm, overeating and obesity are quite common. In the musical theatre and pop domain, particularly among young females, reportedly one-fifth of all singers have encountered some form of eating disorder, such as anorexia or bulimia. The latter involves various purging methods, of which vomiting is thought to be particularly hazardous to the voice.

Exposure to pollutants, such as formaldehyde, solvents, paints and dusts, and to allergens, such as tree, grass or weed pollens, dust, mould spores, animal danders and perfumes, is detrimental to voice production (Sataloff 1996). Such exposure may occur on and off stage. In their work milieu, singers can be exposed to these and other pollutants associated with vocal symptoms, including cigarette smoke and theatrical smoke and fog effects. Singers use a greater percentage of their vital capacity than ordinary speakers. Furthermore, during intense aerobic activity (such as dancing), the number of breathing cycles per minute increases, and mouth breathing prevails. This results in the inhalation of larger amounts of cigarette smoke and fog during performances.

Treatment of Vocal Problems

Two major issues in the treatment of vocal problems of singers are self-medication and improper treatment by physicians who are not knowledgeable about the voice and its problems. Sataloff (1991, 1995) surveyed the potential side effects associated with medications commonly used by singers. Whether recreational, prescription, over-the-counter or food supplements, most drugs are likely to have some effect on phonatory function. In an attempt to control “allergies”, “phlegm” or “sinus congestion”, the self-medicating singer will ultimately ingest something that damages the vocal system. Likewise, the physician who keeps prescribing steroids to reduce chronic inflammation caused by abusive vocal habits and ignores the underlying causes will eventually hurt the singer. Vocal dysfunction resulting from poorly indicated or ill-performed phonosurgery has been documented (Bastian 1996). To avoid injuries secondary to treatment, singers are advised to know their instruments and to consult only with health care professionals who understand the vocal problems of singers, have experience and expertise in managing them, and possess the patience to educate and empower singers.

 


Introduction

Ionizing radiation is everywhere. It arrives from outer space as cosmic rays. It is in the air as emissions from radioactive radon and its progeny. Naturally occurring radioactive isotopes enter and remain in all living things. It is inescapable. Indeed, all species on this planet evolved in the presence of ionizing radiation. While humans exposed to small doses of radiation may not immediately show any apparent biological effects, there is no doubt that ionizing radiation, when given in sufficient amounts, can cause harm. These effects are well known both in kind and in degree.

While ionizing radiation can cause harm, it also has many beneficial uses. Radioactive uranium generates electricity in nuclear power plants in many countries. In medicine, x rays produce radiographs for diagnosis of internal injuries and diseases. Nuclear medicine physicians use radioactive material as tracers to form detailed images of internal structures and to study metabolism. Therapeutic radiopharmaceuticals are available to treat disorders such as hyperthyroidism and cancer. Radiotherapy physicians use gamma rays, pion beams, electron beams, neutrons and other types of radiation to treat cancer. Engineers use radioactive material in oil well logging operations and in soil moisture density gauges. Industrial radiographers use x rays in quality control to look at internal structures of manufactured devices. Exit signs in buildings and aircraft contain radioactive tritium to make them glow in the dark in the event of a power failure. Many smoke detectors in homes and commercial buildings contain radioactive americium.

These many uses of ionizing radiation and radioactive materials enhance the quality of life and help society in many ways. The benefits of each use must always be compared with the risks. The risks may be to workers directly involved in applying the radiation or radioactive material, to the public, to future generations and to the environment or to any combination of these. Beyond political and economic considerations, benefits must always outweigh risks when ionizing radiation is involved.

Ionizing Radiation

Ionizing radiation consists of particles, including photons, that cause the separation of electrons from atoms and molecules. However, some types of radiation of relatively low energy, such as ultraviolet light, can also cause ionization under certain circumstances. To distinguish these types of radiation from radiation that always causes ionization, an arbitrary lower energy limit for ionizing radiation is usually set at about 10 kiloelectron volts (keV).

Directly ionizing radiation consists of charged particles. Such particles include energetic electrons (sometimes called negatrons), positrons, protons, alpha particles, charged mesons, muons and heavy ions (ionized atoms). This type of ionizing radiation interacts with matter primarily through the Coulomb force, repelling or attracting electrons from atoms and molecules by virtue of their charges.

Indirectly ionizing radiation consists of uncharged particles. The most common kinds of indirectly ionizing radiation are photons above 10 keV (x rays and gamma rays) and all neutrons.

X-ray and gamma-ray photons interact with matter and cause ionization in at least three different ways:

    1. Lower-energy photons interact mostly via the photoelectric effect, in which the photon gives all of its energy to an electron, which then leaves the atom or molecule. The photon disappears.
    2. Intermediate-energy photons mostly interact through the Compton effect, in which the photon and an electron essentially collide as particles. The photon continues in a new direction with reduced energy while the released electron goes off with the remainder of the incoming energy (less the electron’s binding energy to the atom or molecule).
    3. Pair production is possible only for photons with energy in excess of 1.02 MeV. (However, near 1.02 MeV, the Compton effect still dominates. Pair production dominates at higher energies.) The photon disappears and an electron-positron pair appears in its place (this occurs only in the vicinity of a nucleus because of conservation of momentum and energy considerations). The total kinetic energy of the electron-positron pair is equal to the energy of the photon less the sum of the rest-mass energies of the electron and positron (1.02 MeV). These energetic electrons and positrons then proceed as directly ionizing radiation. As it loses kinetic energy, a positron will eventually encounter an electron, and the particles will annihilate each other. Two (usually) 0.511 MeV photons are then emitted from the annihilation site at 180 degrees from each other.
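As a brief worked illustration of this energy balance (the photon energy is chosen for the example, not taken from the text): a 2.00 MeV photon undergoing pair production yields an electron-positron pair with total kinetic energy

2.00 MeV - 1.02 MeV = 0.98 MeV

and when the positron has slowed and annihilates with an electron, two 0.511 MeV photons are emitted in opposite directions, as described above.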

         

For a given photon, any of these interactions can occur, except that pair production is possible only for photons with energy greater than 1.022 MeV. The energy of the photon and the material with which it interacts determine which interaction is the most likely to occur.

        Figure 1 shows the regions in which each type of photon interaction dominates as a function of photon energy and atomic number of absorber.

        Figure 1. Relative importance of the three principal interactions of photons in matter


        The most common neutron interactions with matter are inelastic collisions, neutron capture (or activation) and fission. All of these are interactions with nuclei. A nucleus colliding inelastically with a neutron is left at a higher energy level. It can release this energy in the form of a gamma ray or by emitting a beta particle, or both. In neutron capture, an affected nucleus may absorb the neutron and eject energy as gamma or x rays or beta particles, or both. The secondary particles then cause ionization as discussed above. In fission, a heavy nucleus absorbs the neutron and splits into two lighter nuclei that are almost always radioactive.

        Quantities, Units and Related Definitions

        The International Commission on Radiation Units and Measurements (ICRU) develops internationally accepted formal definitions of quantities and units of radiation and radioactivity. The International Commission on Radiological Protection (ICRP) also sets standards for definition and use of various quantities and units used in radiation safety. A description of some quantities, units and definitions commonly used in radiation safety follows.

Absorbed dose. This is the fundamental dosimetric quantity for ionizing radiation. Basically, it is the energy that ionizing radiation imparts to matter per unit mass. Formally,

D = dε/dm

where D is the absorbed dose and dε is the mean energy imparted to matter of mass dm. Absorbed dose has units of joules per kilogram (J kg–1). The special name for the unit of absorbed dose is the gray (Gy).

Activity. This quantity represents the number of nuclear transformations from a given nuclear energy state per unit time. Formally,

A = dN/dt

where A is the activity and dN is the expectation value of the number of spontaneous nuclear transitions from the given energy state in the time interval dt. It is related to the number of radioactive nuclei N by:

A = λN

where λ is the decay constant. Activity has units of inverse seconds (s–1). The special name for the unit of activity is the becquerel (Bq).

Decay constant (λ). This quantity represents the probability per unit time that a nuclear transformation will occur for a given radionuclide. The decay constant has units of inverse seconds (s–1). It is related to the half-life t½ of a radionuclide by:

λ = ln 2 / t½

The decay constant λ is related to the mean lifetime, τ, of a radionuclide by:

λ = 1/τ

The time dependence of activity A(t) and of the number of radioactive nuclei N(t) can be expressed by A(t) = A(0)exp(-λt) and N(t) = N(0)exp(-λt), respectively.
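A minimal Python sketch of these decay relations (purely illustrative; the half-life and initial activity are hypothetical values, not taken from the text):

import math

half_life_h = 6.0            # t1/2 of a hypothetical radionuclide, hours
initial_activity_bq = 3.7e8  # A(0), becquerels (hypothetical)

# Decay constant: lambda = ln 2 / t1/2
decay_constant = math.log(2) / half_life_h   # per hour

# Mean lifetime: tau = 1 / lambda
mean_lifetime_h = 1.0 / decay_constant

# Activity after time t: A(t) = A(0) exp(-lambda t)
def activity(t_hours):
    return initial_activity_bq * math.exp(-decay_constant * t_hours)

print(f"decay constant: {decay_constant:.4f} per hour")
print(f"mean lifetime:  {mean_lifetime_h:.2f} hours")
print(f"activity after 24 h: {activity(24.0):.3e} Bq")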

Deterministic biological effect. This is a biological effect caused by ionizing radiation whose probability of occurrence is zero at small absorbed doses but increases steeply to unity (100%) above some level of absorbed dose (the threshold). Cataract induction is an example of a deterministic biological effect.

Effective dose. The effective dose E is the sum of the weighted equivalent doses in all the tissues and organs of the body. It is a radiation safety quantity, so its use is not appropriate for large absorbed doses delivered in a relatively short period of time. It is given by:

E = Σ wT·HT (summed over all tissues and organs T)

where wT is the tissue weighting factor and HT is the equivalent dose for tissue T. Effective dose has units of J kg–1. The special name for the unit of effective dose is the sievert (Sv).

Equivalent dose. The equivalent dose HT is the absorbed dose averaged over a tissue or organ (rather than at a point) and weighted for the radiation quality of interest. It is a radiation safety quantity, so its use is not appropriate for large absorbed doses delivered in a relatively short period of time. The equivalent dose is given by:

HT = Σ wR·DT,R (summed over all radiation types R)

where DT,R is the absorbed dose averaged over the tissue or organ T due to radiation R and wR is the radiation weighting factor. Equivalent dose has units of J kg–1. The special name for the unit of equivalent dose is the sievert (Sv).

Half-life. This quantity is the amount of time required for the activity of a radionuclide sample to reduce by a factor of one-half. Equivalently, it is the amount of time required for a given number of nuclei in a given radioactive state to reduce by a factor of one-half. It has fundamental units of seconds (s), but is also commonly expressed in hours, days and years. For a given radionuclide, the half-life t½ is related to the decay constant λ by:

t½ = ln 2 / λ

Linear energy transfer. This quantity is the energy a charged particle imparts to matter per unit length as it traverses the matter. Formally,

L = dε/dl

where L is the linear energy transfer (also called linear collision stopping power) and dε is the mean energy lost by the particle in traversing a distance dl. Linear energy transfer (LET) has units of J m–1.

Mean lifetime. This quantity is the average time a nuclear state will survive before it undergoes a transformation to a lower energy state by emitting ionizing radiation. It has fundamental units of seconds (s), but may also be expressed in hours, days or years. It is related to the decay constant by:

τ = 1/λ

where τ is the mean lifetime and λ is the decay constant for a given nuclide in a given energy state.

Radiation weighting factor. This is a number wR that, for a given type and energy of radiation R, is representative of the relative biological effectiveness of that radiation in inducing stochastic effects at low doses. The values of wR are related to linear energy transfer (LET) and are given in table 1. Figure 2 shows the relationship between wR and LET for neutrons.

Table 1. Radiation weighting factors wR

Type and energy range: wR (note 1)
Photons, all energies: 1
Electrons and muons, all energies (note 2): 1
Neutrons, energy <10 keV: 5
Neutrons, energy 10 keV to 100 keV: 10
Neutrons, energy >100 keV to 2 MeV: 20
Neutrons, energy >2 MeV to 20 MeV: 10
Neutrons, energy >20 MeV: 5
Protons, other than recoil protons, energy >2 MeV: 5
Alpha particles, fission fragments, heavy nuclei: 20

Note 1: All values relate to the radiation incident on the body or, for internal sources, emitted from the source.

Note 2: Excluding Auger electrons emitted from nuclei bound to DNA.

        Relative biological effectiveness (RBE). The RBE of one type of radiation compared with another is the inverse ratio of the absorbed doses producing the same degree of a defined biological end point.

        Figure 2. Radiation weighting factors for neutrons (the smooth curve is to be treated  as an approximation)


        Stochastic biological effect. This is a biological effect caused by ionizing radiation whose probability of occurrence increases with increasing absorbed dose, probably with no threshold, but whose severity is independent of absorbed dose. Cancer is an example of a stochastic biological effect.

Tissue weighting factor wT. This represents the contribution of tissue or organ T to the total detriment due to all of the stochastic effects resulting from uniform irradiation of the whole body. It is used because the probability of stochastic effects due to an equivalent dose depends on the tissue or organ irradiated. A uniform equivalent dose over the whole body should give an effective dose numerically equal to that uniform equivalent dose; therefore, the sum of all tissue weighting factors is normalized to unity. Table 2 gives values for tissue weighting factors.

Table 2. Tissue weighting factors wT

Tissue or organ: wT (note 1)
Gonads: 0.20
Bone marrow (red): 0.12
Colon: 0.12
Lung: 0.12
Stomach: 0.12
Bladder: 0.05
Breast: 0.05
Liver: 0.05
Oesophagus: 0.05
Thyroid: 0.05
Skin: 0.01
Bone surface: 0.01
Remainder: 0.05 (notes 2, 3)

Note 1: The values have been developed from a reference population of equal numbers of both sexes and a wide range of ages. In the definition of effective dose they apply to workers, to the whole population, and to either sex.

Note 2: For purposes of calculation, the remainder is composed of the following additional tissues and organs: adrenals, brain, upper large intestine, small intestine, kidneys, muscle, pancreas, spleen, thymus and uterus. The list includes organs that are likely to be selectively irradiated. Some organs in the list are known to be susceptible to cancer induction.

Note 3: In those exceptional cases in which a single one of the remainder tissues or organs receives an equivalent dose in excess of the highest dose in any of the twelve organs for which a weighting factor is specified, a weighting factor of 0.025 should be applied to that tissue or organ and a weighting factor of 0.025 to the average dose in the rest of the remainder as defined above.
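To make the use of these weighting factors concrete, the following Python sketch applies the definitions of equivalent dose and effective dose with the factors from tables 1 and 2. The organ absorbed doses are hypothetical, and only a few tissues are included, so the result illustrates the arithmetic only and is not a complete effective dose assessment:

# Hypothetical absorbed doses (gray) received by a few organs, listed per radiation type.
absorbed_dose_gy = {
    "lung":    {"photons": 0.010, "alpha": 0.001},
    "thyroid": {"photons": 0.020},
    "skin":    {"photons": 0.005},
}

# Radiation weighting factors wR (table 1) and tissue weighting factors wT (table 2).
w_r = {"photons": 1, "alpha": 20}
w_t = {"lung": 0.12, "thyroid": 0.05, "skin": 0.01}

# Equivalent dose per tissue: HT = sum over R of wR * DT,R (sieverts)
equivalent_dose_sv = {
    tissue: sum(w_r[radiation] * dose for radiation, dose in doses.items())
    for tissue, doses in absorbed_dose_gy.items()
}

# Effective dose: E = sum over T of wT * HT (sieverts)
effective_dose_sv = sum(w_t[tissue] * h for tissue, h in equivalent_dose_sv.items())

for tissue, h in equivalent_dose_sv.items():
    print(f"equivalent dose, {tissue}: {h * 1000:.2f} mSv")
print(f"effective dose (these tissues only): {effective_dose_sv * 1000:.2f} mSv")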

         


        Thursday, 24 March 2011 18:34

        Passive Fire Protection Measures

        Confining Fires by Compartmentation

        Building and site planning

Fire safety engineering work should begin early in the design phase, because the fire safety requirements considerably influence the layout and design of the building. In this way, the designer can incorporate fire safety features into the building much better and more economically. The overall approach includes consideration of both interior building functions and layout, as well as exterior site planning. Prescriptive code requirements are increasingly being replaced by functionally based requirements, which means there is an increased demand for experts in this field. From the beginning of the construction project, the building designer should therefore contact fire experts in order to:

        • to describe the fire problem specific to the building
        • to describe different alternatives to obtain the required fire safety level
        • to analyse system choice regarding technical solutions and economy
• to establish the preconditions for technically optimized system choices.

         

        The architect must utilize a given site in designing the building and adapt the functional and engineering considerations to the particular site conditions that are present. In a similar manner, the architect should consider site features in arriving at decisions on fire protection. A particular set of site characteristics may significantly influence the type of active and passive protection suggested by the fire consultant. Design features should consider the local fire-fighting resources that are available and the time to reach the building. The fire service cannot and should not be expected to provide complete protection for building occupants and property; it must be assisted by both active and passive building fire defences, to provide reasonable safety from the effects of fire. Briefly, the operations may be broadly grouped as rescue, fire control and property conservation. The first priority of any fire-fighting operation is to ensure that all occupants are out of the building before critical conditions occur.

        Structural design based on classification or calculation

A well-established means of codifying fire protection and fire safety requirements for buildings is to classify them by types of construction, based upon the materials used for the structural elements and the degree of fire resistance afforded by each element. Classification can be based on furnace tests in accordance with ISO 834 (fire exposure is characterized by the standard temperature-time curve), on a combination of test and calculation, or on calculation alone. These procedures identify the standard fire resistance (the ability to fulfil the required functions for 30, 60, 90 minutes, etc.) of a structural load-bearing and/or separating member. Classification (especially when based on tests) is a simplified and conservative method and is increasingly being replaced by functionally based calculation methods that take into account the effect of fully developed natural fires. Fire tests will always be required, but they can be designed in a more optimal way and be combined with computer simulations. In that procedure, the number of tests can be reduced considerably. Usually, in fire test procedures, load-bearing structural elements are loaded to 100% of the design load, but in real life the load utilization factor is most often less than that. Acceptance criteria are specific to the construction or element tested. Standard fire resistance is the measured time the member can withstand the fire without failure.

        Optimum fire engineering design, balanced against anticipated fire severity, is the objective of structural and fire protection requirements in modern performance-based codes. These have opened the way for fire engineering design by calculation with prediction of the temperature and structural effect due to a complete fire process (heating and subsequent cooling is considered) in a compartment. Calculations based on natural fires mean that the structural elements (important for the stability of the building) and the whole structure are not allowed to collapse during the entire fire process, including cool down.

        Comprehensive research has been performed during the past 30 years. Various computer models have been developed. These models utilize basic research on mechanical and thermal properties of materials at elevated temperatures. Some computer models are validated against a vast number of experimental data, and a good prediction of structural behaviour in fire is obtained.

        Compartmentation

A fire compartment is a space within a building, extending over one or several floors, that is enclosed by separating members such that fire spread beyond the compartment is prevented during the relevant fire exposure. Compartmentation is important in preventing the fire from spreading into excessively large spaces or into the whole building. People and property outside the fire compartment can be protected either because the fire is extinguished or burns out by itself, or because the separating members delay the spread of fire and smoke until the occupants are rescued to a place of safety.

The fire resistance required of a compartment depends upon its intended purpose and on the expected fire. The separating members enclosing the compartment must either resist the maximum expected fire or contain the fire until the occupants have been evacuated. The load-bearing elements in the compartment must always resist the complete fire process, or be classified to a resistance, measured in periods of time, that is equal to or longer than the requirement for the separating members.

        Structural integrity during a fire

The requirement for maintaining structural integrity during a fire is the avoidance of structural collapse and the ability of the separating members to prevent ignition and flame spread into adjacent spaces. There are different approaches to designing for fire resistance: classification based on the standard fire-resistance test of ISO 834, a combination of test and calculation, calculation alone, or performance-based computer prediction based on real fire exposure.

        Interior finish

Interior finish is the material that forms the exposed interior surface of walls, ceilings and floors. There are many types of interior finish materials, such as plaster, gypsum, wood and plastics. They serve several functions, including acoustic and insulating functions as well as protection against wear and abrasion.

        Interior finish is related to fire in four different ways. It can affect the rate of fire build-up to flashover conditions, contribute to fire extension by flame spread, increase the heat release by adding fuel and produce smoke and toxic gases. Materials that exhibit high rates of flame spread, contribute fuel to a fire or produce hazardous quantities of smoke and toxic gases would be undesirable.

        Smoke movement

        In building fires, smoke often moves to locations remote from the fire space. Stairwells and elevator shafts can become smoke-logged, thereby blocking evacuation and inhibiting fire-fighting. Today, smoke is recognized as the major killer in fire situations (see figure 1).

        Figure 1. The production of smoke from a fire.


        The driving forces of smoke movement include naturally occurring stack effect, buoyancy of combustion gases, the wind effect, fan-powered ventilation systems and the elevator piston effect.

When it is cold outside, there is an upward movement of air within building shafts. Air in the building has a buoyant force because it is warmer and therefore less dense than the outside air. The buoyant force causes air to rise within building shafts. This phenomenon is known as the stack effect. The pressure difference from the shaft to the outside, which causes smoke movement, can be expressed (in the form commonly used for smoke-movement analysis) as:

Δp = (g·patm/R)·(1/TO - 1/TS)·z

where

Δp = the pressure difference from the shaft to the outside

g = acceleration of gravity

patm = absolute atmospheric pressure

R = gas constant of air

TO = absolute temperature of outside air

TS = absolute temperature of air inside the shaft

z = elevation
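A short Python sketch (with assumed temperatures and an assumed height above the neutral plane) showing the order of magnitude of the stack-effect pressure difference given by the relation above:

# Stack-effect pressure difference: delta_p = (g * p_atm / R) * (1/T_out - 1/T_shaft) * z
g = 9.81           # acceleration of gravity, m/s2
p_atm = 101325.0   # absolute atmospheric pressure, Pa
R = 287.0          # gas constant of air, J/(kg K)
T_out = 263.0      # absolute temperature of outside air, K (-10 degrees C)
T_shaft = 293.0    # absolute temperature of air inside the shaft, K (20 degrees C)
z = 30.0           # elevation, m (assumed; measured from the neutral plane in the usual formulation)

delta_p = (g * p_atm / R) * (1.0 / T_out - 1.0 / T_shaft) * z
print(f"shaft-to-outside pressure difference: {delta_p:.1f} Pa")  # roughly 40 Pa for these values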

        High-temperature smoke from a fire has a buoyancy force due to its reduced density. The equation for buoyancy of combustion gases is similar to the equation for the stack effect.

In addition to buoyancy, the energy released by a fire can cause smoke movement due to expansion. Air will flow into the fire compartment, and hot smoke will be distributed within the compartment. Neglecting the added mass of the fuel, the ratio of volumetric flows can be expressed simply as the ratio of the absolute temperatures.
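As a rough worked figure (the temperatures are assumed for the illustration, not taken from the text): for smoke at about 600 °C leaving a compartment supplied with air at 20 °C, the ratio of the absolute temperatures is

873 K / 293 K ≈ 3

so the hot gases occupy roughly three times the volume of the incoming air.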

        Wind has a pronounced effect on smoke movement. The elevator piston effect should not be neglected. When an elevator car moves in a shaft, transient pressures are produced.

        Heating, ventilating and air conditioning (HVAC) systems transport smoke during building fires. When a fire starts in an unoccupied portion of a building, the HVAC system can transport smoke to another occupied space. The HVAC system should be designed so that either the fans are shut down or the system transfers into a special smoke control mode operation.

        Smoke movement can be managed by use of one or more of the following mechanisms: compartmentation, dilution, air flow, pressurization or buoyancy.

        Evacuation of Occupants

        Egress design

        Egress design should be based upon an evaluation of a building’s total fire protection system (see figure 2).

        Figure 2. Principles of exit safety.


        People evacuating from a burning building are influenced by a number of impressions during their escape. The occupants have to make several decisions during the escape in order to make the right choices in each situation. These reactions can differ widely, depending upon the physical and mental capabilities and conditions of building occupants.

The building will also influence the decisions made by the occupants through its escape routes, guidance signs and other installed safety systems. The spread of fire and smoke will have the strongest impact on how the occupants make their decisions. The smoke limits visibility in the building and creates a non-tenable environment for the evacuating persons. Radiation from fire and flames can make large spaces unusable for evacuation, which increases the risk.

        In designing means of egress one first needs a familiarity with the reaction of people in fire emergencies. Patterns of movement of people must be understood.

The three stages of evacuation time are notification time, reaction time and time to evacuate. The notification time depends on whether there is a fire alarm system in the building, on whether the occupant is able to understand the situation and on how the building is divided into compartments. The reaction time depends on the occupant’s ability to make decisions, the properties of the fire (such as the amount of heat and smoke) and how the building’s egress system is planned. Finally, the time to evacuate depends on where in the building crowds form and how people move in various situations.
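A minimal Python sketch (the durations are hypothetical, chosen only to illustrate how the stages combine) of a total evacuation time estimate:

# Total evacuation time = notification time + reaction time + time to evacuate.
# All figures below are hypothetical; in practice they would come from detection
# calculations, behavioural data and movement (flow) models.
notification_time_s = 60.0   # detection plus alarm delay
reaction_time_s = 120.0      # decision-making before movement starts
movement_time_s = 180.0      # walking and queueing time to a place of safety

total_evacuation_time_s = notification_time_s + reaction_time_s + movement_time_s
print(f"estimated evacuation time: {total_evacuation_time_s / 60:.1f} minutes")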

In specific buildings with mobile occupants, for example, studies have shown certain reproducible flow characteristics of persons exiting the buildings. These predictable flow characteristics have fostered computer simulations and modelling to aid the egress design process.

        The evacuation travel distances are related to the fire hazard of the contents. The higher the hazard, the shorter the travel distance to an exit.

A safe exit from a building requires a safe path of escape from the fire environment. Hence, there must be a number of properly designed means of egress of adequate capacity. There should be at least one alternative means of egress, considering that fire, smoke, the characteristics of occupants and so on may prevent use of one means of egress. The means of egress must be protected against fire, heat and smoke during the egress time. Thus, it is necessary to have building codes that consider passive protection for evacuation and, of course, for fire protection. A building must manage the critical situations, which are given in the codes concerning evacuation. For example, in the Swedish Building Codes, the smoke layer must not descend below 1.6 + 0.1H metres above the floor (where H is the total compartment height), the radiation must not exceed a maximum of 10 kW/m2 of short duration, and the temperature in the breathing air must not exceed 80 °C.
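As a simple worked example (the compartment height is assumed for illustration): for a compartment with H = 3.0 m, the smoke layer must not descend below

1.6 m + 0.1 × 3.0 m = 1.9 m

above the floor, in addition to the radiation and temperature limits quoted above.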

An effective evacuation can take place if a fire is discovered early and the occupants are alerted promptly by a detection and alarm system. Proper marking of the means of egress facilitates evacuation. There is also a need for organization and drilling of evacuation procedures.

        Human behaviour during fires

        How one reacts during a fire is related to the role assumed, previous experience, education and personality; the perceived threat of the fire situation; the physical characteristics and means of egress available within the structure; and the actions of others who are sharing the experience. Detailed interviews and studies over 30 years have established that instances of non-adaptive, or panic, behaviour are rare events that occur under specific conditions. Most behaviour in fires is determined by information analysis, resulting in cooperative and altruistic actions.

        Human behaviour is found to pass through a number of identified stages, with the possibility of various routes from one stage to the next. In summary, the fire is seen as having three general stages:

        1. The individual receives initial cues and investigates or misinterprets these initial cues.
        2. Once the fire is apparent, the individual will try to obtain further information, contact others or leave.
        3. The individual will thereafter deal with the fire, interact with others or escape.

         

        Pre-fire activity is an important factor. If a person is engaged in a well-known activity, for example eating a meal in a restaurant, the implications for subsequent behaviour are considerable.

Cue reception may be a function of pre-fire activity. There is a tendency for gender differences, with females more likely to be the recipients of noises and odours, though the effect is only slight. There are role differences in initial responses to the cue. In domestic fires, if the female receives the cue and investigates, the male, when told, is likely to “have a look” and delay further actions. In larger establishments, the cue may be an alarm warning. Information may also come from others, and this has been found to be inadequate for effective behaviour.

        Individuals may or may not have realized that there is a fire. An understanding of their behaviour must take account of whether they have defined their situation correctly.

        When the fire has been defined, the “prepare” stage occurs. The particular type of occupancy is likely to have a great influence on exactly how this stage develops. The “prepare” stage includes in chronological order “instruct”, “explore” and “withdraw”.

        The “act” stage, which is the final stage, depends upon role, occupancy, and earlier behaviour and experience. It may be possible for early evacuation or effective fire-fighting to occur.

        Building transportation systems

        Building transportation systems must be considered during the design stage and should be integrated with the whole building’s fire protection system. The hazards associated with these systems must be included in any pre-fire planning and fire protection survey.

        Building transportation systems, such as elevators and escalators, make high-rise buildings feasible. Elevator shafts can contribute to the spread of smoke and fire. On the other hand, an elevator is a necessary tool for fire-fighting operations in high-rise buildings.

Transportation systems may contribute to dangerous and complicated fire safety problems, because an enclosed elevator shaft acts as a chimney or flue owing to the stack effect of hot smoke and gases from the fire. This generally results in the movement of smoke and combustion products from lower to upper levels of the building.

        High-rise buildings present new and different problems to fire-suppression forces, including the use of elevators during emergencies. Elevators are unsafe in a fire for several reasons:

        1. Persons may push a corridor button and have to wait for an elevator that may never respond, losing valuable escape time.
        2. Elevators do not prioritize car and corridor calls, and one of the calls may be at the fire floor.
        3. Elevators cannot start until the lift and shaft doors are closed, and panic could lead to overcrowding of an elevator and the blockage of the doors, which would thus prevent closing.
        4. The power can fail during a fire at any time, thus leading to entrapment. (See figure 3)

         

        Figure 3. An example of a pictographic warning message for elevator use.


        Fire drills and occupant training

Proper marking of the means of egress facilitates evacuation, but it does not ensure life safety during a fire. Exit drills are necessary to make an orderly escape. They are especially required in schools, board and care facilities and industries with high hazard. Employee drills are required, for example, in hotels and large business occupancies. Exit drills should be conducted to avoid confusion and to ensure the evacuation of all occupants.

Employees should be assigned duties: to check that exits are available, to count occupants when they are outside the fire area, to search for stragglers and to control re-entry. They should also recognize the evacuation signal and know the exit route they are to follow. Primary and alternative routes should be established, and all employees should be trained to use either route. After each exit drill, a meeting of responsible managers should be held to evaluate the success of the drill and to solve any problems that may have occurred.

         


        Thursday, 24 March 2011 18:29

        Fire Prevention Measures

        History tells us that fires were useful for heating and cooking but caused major damage in many cities. Many houses, major buildings and sometimes whole cities were destroyed by fire.

        One of the first fire prevention measures was a requirement to extinguish all fires before nightfall. For example, in 872 in Oxford, England, authorities ordered a curfew bell to be rung at sunset to remind citizens to extinguish all indoor fires for the night (Bugbee 1978). Indeed, the word curfew is derived from the French couvre feu which literally means “cover fire”.

        The cause of fires is often a result of human action bringing fuel and an ignition source together (e.g., waste paper stored next to heating equipment or volatile flammable liquids being used near open flames).

        Fires require fuel, an ignition source and some mechanism to bring the fuel and ignition source together in the presence of air or some other oxidizer. If strategies can be developed to reduce fuel loads, eliminate ignition sources or prevent the fuel/ignition interaction, then fire loss and human death and injury can be reduced.

        In recent years, there has been increasing emphasis on fire prevention as one of the most cost-effective measures in dealing with the fire problem. It is often easier (and cheaper) to prevent fires starting than to control or extinguish them once they have started.

        This is illustrated in the Fire Safety Concepts Tree (NFPA 1991; 1995a) developed by the NFPA in the United States. This systematic approach to fire safety problems shows that objectives, such as reducing fire deaths in the workplace, can be achieved by preventing fire ignition or managing the impact of fire.

        Fire prevention inevitably means changing human behaviour. This requires fire safety education, supported by management, using the latest training manuals, standards and other educational materials. In many countries such strategies are reinforced by law, requiring companies to meet legislated fire prevention objectives as part of their occupational health and safety commitment to their workers.

        Fire safety education will be discussed in the next section. However, there is now clear evidence in commerce and industry of the important role of fire prevention. Great use is being made internationally of the following sources: Lees, Loss Prevention in the Process Industries, Volumes 1 and 2 (1980); NFPA 1—Fire Prevention Code (1992); The Management of Health and Safety at Work Regulations (ECD 1992); and Fire Protection Handbook of the NFPA (Cote 1991). These are supplemented by many regulations, standards and training materials developed by national governments, businesses and insurance companies to minimize losses of life and property.

        Fire Safety Education and Practices

For a fire safety education programme to be effective, there must be a major corporate policy commitment to safety and the development of an effective plan with the following steps: (a) a planning phase, in which goals and objectives are established; (b) a design and implementation phase; and (c) a programme evaluation phase, in which effectiveness is monitored.

        Goals and objectives

Gratton (1991), in an important article on fire safety education, defined the differences between goals, objectives and implementation practices or strategies. Goals are general statements of intent, which in the workplace might be expressed as “to reduce the number of fires and thus reduce death and injury among workers, and the financial impact on companies”.

        The people and financial parts of the overall goal are not incompatible. Modern risk management practice has demonstrated that improvements in safety for workers through effective loss control practices can be financially rewarding to the company and have a community benefit.

        These goals need to be translated into specific fire safety objectives for particular companies and their workforce. These objectives, which must be measurable, usually include statements such as:

        • reduce industrial accidents and resulting fires
        • reduce fire deaths and injuries
        • reduce company property damage.

         

        For many companies, there may be additional objectives such as reduction in business interruption costs or minimization of legal liability exposure.

        The tendency among some companies is to assume that compliance with local building codes and standards is sufficient to ensure that their fire safety objectives are met. However, such codes tend to concentrate on life safety, assuming fires will occur.

        Modern fire safety management understands that absolute safety is not a realistic goal but sets measurable performance objectives to:

        • minimize fire incidents through effective fire prevention
        • provide effective means of limiting the size and consequence of fire incidents through effective emergency equipment and procedures
        • use insurance to safeguard against large, unforeseen fires, particularly those arising from natural hazards such as earthquakes and bushfires.

         

        Design and implementation

        The design and implementation of fire safety education programmes for fire prevention are critically dependent upon development of well-planned strategies and effective management and motivation of people. There must be strong and absolute corporate support for full implementation of a fire safety programme for it to be successful.

A range of strategies has been identified by Koffel (1993) and in NFPA’s Industrial Fire Hazards Handbook (Linville 1990). They include:

        • promoting the company policy and strategies on fire safety to all company employees
        • identifying all potential fire scenarios and implementing appropriate risk reduction actions
        • monitoring all local codes and standards that define the standard of care in a particular industry
        • operating a loss administration programme to measure all losses for comparison with performance objectives
• training of all employees in proper fire prevention and emergency response techniques.

Some international examples of implementation strategies include:
        • courses operated by the Fire Protection Association (FPA) in the United Kingdom that lead to the European Diploma in Fire Prevention (Welch 1993)
        • the creation of SweRisk, a subsidiary company of the Swedish Fire Protection Association, to assist companies in undertaking risk assessments and in developing fire prevention programmes (Jernberg 1993)
        • massive citizen and worker involvement in fire prevention in Japan to standards developed by the Japan Fire Defence Agency (Hunter 1991)
        • fire safety training in the United States through the use of the Firesafety Educator’s Handbook (NFPA 1983) and the Public Fire Education Manual (Osterhoust 1990).

         

        It is critically important to measure the effectiveness of fire safety education programmes. This measurement provides the motivation for further programme financing, development and adjustment where necessary.

The best example of monitoring and success of fire safety education is probably in the United States. The Learn Not to Burn® programme, aimed at educating young people in America on the dangers of fire, has been coordinated by the Public Education Division of the NFPA. Monitoring and analysis in 1990 identified a total of 194 lives saved as a result of proper life safety actions learned in fire safety education programmes. Some 30% of these lives saved can be directly attributed to the Learn Not to Burn® programme.

        The introduction of residential smoke detectors and fire safety education programmes in the United States have also been suggested as the primary reasons for the reduction in home fire deaths in that country, from 6,015 in 1978 to 4,050 in 1990 (NFPA 1991).

        Industrial housekeeping practices

        In the industrial field, Lees (1980) is an international authority. He indicated that in many industries today, the potential for very large loss of life, serious injuries or property damage is far greater than in the past. Large fires, explosions and toxic releases can result, particularly in the petrochemical and nuclear industries.

        Fire prevention is therefore the key to minimizing fire ignition. Modern industrial plants can achieve good fire safety records through well-managed programmes of:

        • housekeeping and safety inspections
        • employee fire prevention training
        • equipment maintenance and repair
        • security and arson prevention (Blye and Bacon 1991).

         

A useful guide on the importance of housekeeping for fire prevention in commercial and industrial premises is given by Higgins (1991) in the NFPA’s Fire Protection Handbook.

        The value of good housekeeping in minimizing combustible loads and in preventing exposure of ignition sources is recognized in modern computer tools used for assessing fire risks in industrial premises. The FREM (Fire Risk Evaluation Method) software in Australia identifies housekeeping as a key fire safety factor (Keith 1994).

        Heat Utilization Equipment

        Heat utilization equipment in commerce and industry includes ovens, furnaces, kilns, dehydrators, dryers and quench tanks.

        In the NFPA’s Industrial Fire Hazards Handbook, Simmons (1990) identified the fire problems with heating equipment to be:

        1. the possibility of igniting combustible materials stored nearby
        2. fuel hazards resulting from unburned fuel or incomplete combustion
        3. overheating leading to equipment failure
        4. ignition of combustible solvents, solid materials or other products being processed.

         

        These fire problems can be overcome through a combination of good housekeeping, proper controls and interlocks, operator training and testing, and cleaning and maintenance in an effective fire prevention programme.

Detailed recommendations for the various categories of heat utilization equipment are set out in the NFPA’s Fire Protection Handbook (Cote 1991). These are summarized below.

        Ovens and furnaces

        Fires and explosions in ovens and furnaces typically result from the fuel used, from volatile substances provided by the material in the oven or by a combination of both. Many of these ovens or furnaces operate at 500 to 1,000 °C, which is well above the ignition temperature of most materials.

        Ovens and furnaces require a range of controls and interlocks to ensure that unburned fuel gases or products of incomplete combustion cannot accumulate and be ignited. Typically, these hazards develop while firing up or during shut-down operations. Therefore, special training is required to ensure that operators always follow safety procedures.

        Non-combustible building construction, separation of other equipment and combustible materials and some form of automatic fire suppression are usually essential elements of a fire safety system to prevent spread should a fire start.

        Kilns

        Kilns are used to dry timber (Lataille 1990) and to process or “fire” clay products (Hrbacek 1984).

        Again, this high-temperature equipment represents a hazard to its surroundings. Proper separation design and good housekeeping are essential to prevent fire.

        Lumber kilns used for drying timber are additionally hazardous because the timber itself is a high fire load and is often heated close to its ignition temperature. It is essential that kilns be cleaned regularly to prevent a build-up of small pieces of wood and sawdust so that this does not come in contact with the heating equipment. Kilns made of fire-resistive construction material, fitted with automatic sprinklers and provided with high-quality ventilation/air circulation systems are preferred.

        Dehydrators and dryers

This equipment is used to reduce the moisture content of agricultural products such as milk, eggs, grains, seeds and hay. The dryers may be direct-fired, in which case the products of combustion contact the material being dried, or they may be indirect-fired. In each case, controls are required to shut off the heat supply in the event of excessive temperature or fire in the dryer, exhaust system or conveyor system, or of failure of the air circulation fans. Again, adequate cleaning to prevent the build-up of products that could ignite is required.

        Quench tanks

        The general principles of fire safety of quench tanks are identified by Ostrowski (1991) and Watts (1990).

        The process of quenching, or controlled cooling, occurs when a heated metal item is immersed in a tank of quenching oil. The process is undertaken to harden or temper the material through metallurgical change.

        Most quenching oils are mineral oils which are combustible. They must be chosen carefully for each application to ensure that the ignition temperature of the oil is above the operating temperature of the tank as the hot metal pieces are immersed.

        It is critical that the oil does not overflow the sides of the tank. Therefore, liquid level controls and appropriate drains are essential.

        Partial immersion of hot items is the most common cause of quench tank fires. This can be prevented by appropriate material transfer or conveyor arrangements.

Likewise, appropriate controls must be provided to avoid excessive oil temperatures and the entry of water into the tank, which can result in boil-over and a major fire in and around the tank.

        Specific automatic fire extinguishing systems such as carbon dioxide or dry chemical are often used to protect the tank surface. Overhead, automatic sprinkler protection of the building is desirable. In some cases, special protection of operators who need to work close to the tank is also required. Often, water spray systems are provided for exposure protection for workers.

        Above all, proper training of workers in emergency response, including use of portable fire extinguishers, is essential.

        Chemical Process Equipment

        Operations to chemically change the nature of materials have often been the source of major catastrophes, causing severe plant damage and death and injury to workers and surrounding communities. Risks to life and property from incidents in chemical process plants may come from fires, explosions or toxic chemical releases. The energy of destruction often comes from uncontrolled chemical reaction of process materials, combustion of fuels leading to pressure waves or high levels of radiation and flying missiles that can cause damage at large distances.

        Plant operations and equipment

        The first stage of design is to understand the chemical processes involved and their potential for energy release. Lees (1980) in his Loss Prevention in the Process Industries sets out in detail the steps required to be undertaken, which include:

        • proper process design
        • study of failure mechanisms and reliability
        • hazard identification and safety audits
• hazard assessment—cause/consequences.

The assessment of the degrees of hazard must examine:
        • potential emission and dispersal of chemicals, particularly toxic and contaminating substances
        • effects of fire radiation and dispersal of combustion products
        • results of explosions, particularly pressure shock waves that can destroy other plants and buildings.

         

        More details of process hazards and their control are given in Plant guidelines for technical management of chemical process safety (AIChE 1993); Sax’s Dangerous Properties of Industrial Materials (Lewis 1979); and the NFPA’s Industrial Fire Hazards Handbook (Linville 1990).

        Siting and exposure protection

        Once the hazards and consequences of fire, explosion and toxic releases have been identified, siting of chemical process plants can be undertaken.

        Again, Lees (1980) and Bradford (1991) provided guidelines on plant siting. Plants must be separated from surrounding communities sufficiently to ensure that those communities cannot be affected by an industrial accident. The technique of quantitative risk assessment (QRA) to determine separation distances is widely used and legislated for in the design of chemical process plants.

        The disaster in Bhopal, India, in 1984 demonstrated the consequences of locating a chemical plant too close to a community: over 1,000 people were killed by toxic chemicals in an industrial accident.

        Provision of separating space around chemical plants also allows ready access for fire-fighting from all sides, regardless of wind direction.

        Chemical plants must provide exposure protection in the form of explosion-resistant control rooms, worker refuges and fire-fighting equipment to ensure that workers are protected and that effective fire-fighting can be undertaken after an incident.

        Spill control

        Spills of flammable or hazardous materials should be kept small by appropriate process design, fail-safe valves and appropriate detection/control equipment. However, if large spills occur, they should be confined to areas surrounded by walls, sometimes of earth, where they can burn harmlessly if ignited.

        Fires in drainage systems are common, and special attention must be paid to drains and sewerage systems.

        Heat transfer hazards

        Equipment that transfers heat from a hot fluid to a cooler one can be a source of fire in chemical plants. Excessive localized temperatures can cause decomposition and burn out of many materials. This may sometimes cause rupture of the heat-transfer equipment and transfer of one fluid into another, causing an unwanted violent reaction.

A high level of inspection and maintenance, including cleaning of heat-transfer equipment, is essential to safe operation.

        Reactors

        Reactors are the vessels in which the desired chemical processes are undertaken. They can be of a continuous or batch type but require special design attention. Vessels must be designed to withstand pressures that might result from explosions or uncontrolled reactions or alternatively must be provided with appropriate pressure-relief devices and sometimes emergency venting.

        Safety measures for chemical reactors include:

        • appropriate instrumentation and controls to detect potential incidents, including redundant circuitry
        • high quality cleaning, inspection and maintenance of the equipment and the safety controls
        • adequate training of operators in control and emergency response
        • appropriate fire suppression equipment and fire-fighting personnel.

         

        Welding and Cutting

        The Factory Mutual Engineering Corporation’s (FM) Loss Prevention Data Sheet (1977) shows that nearly 10% of losses in industrial properties are due to incidents involving cutting and welding of materials, generally metals. It is clear that the high temperatures required to melt the metals during these operations can start fires, as can the sparks generated in many of these processes.

        The FM Data Sheet (1977) indicates that the materials most frequently involved in fires due to welding and cutting are flammable liquids, oily deposits, combustible dusts and wood. The types of industrial areas where accidents are most likely are storage areas, building construction sites, facilities undergoing repair or alteration and waste disposal systems.

        Sparks from cutting and welding can often travel up to 10 m and lodge in combustible materials where smouldering and later flaming fires can occur.

        Electrical processes

        Arc welding and arc cutting are examples of processes involving electricity to provide the arc that is the heat source for melting and joining metals. Flashes of sparks are common, and protection of workers from electrocution, spark flashes and intense arc radiation is required.

        Oxy-fuel gas processes

        This process uses the heat of combustion of the fuel gas and oxygen to generate flames of high temperature that melt the metals being joined or cut. Manz (1991) indicated that acetylene is the most widely used fuel gas because of its high flame temperature of about 3,000 °C.

The presence of a fuel gas and oxygen at high pressure increases the hazard, as does leakage of these gases from their storage cylinders. It is important to remember that many materials that do not burn, or burn only slowly in air, burn violently in pure oxygen.

        Safeguards and precautions

        Good safety practices are identified by Manz (1991) in the NFPA Fire Protection Handbook.

        These safeguards and precautions include:

        • proper design, installation and maintenance of welding and cutting equipment, particularly storage and leak testing of fuel and oxygen cylinders
        • proper preparation of work areas to remove all chance of accidental ignition of surrounding combustibles
        • strict management control over all welding and cutting processes
        • training of all operators in safe practices
        • proper fire-resistant clothing and eye protection for operators and nearby workers
        • adequate ventilation to prevent exposure of operators or nearby workers to noxious gases and fumes.

         

        Special precautions are required when welding or cutting tanks or other vessels that have held flammable materials. A useful guide is the American Welding Society’s Recommended Safe Practices for the Preparation for Welding and Cutting of Containers that have held Hazardous Substances (1988).

        For building works and alterations, a UK publication, the Loss Prevention Council’s Fire Prevention on Construction Sites (1992) is useful. It contains a sample hot-work permit to control cutting and welding operations. This would be useful for management in any plant or industrial site. A similar sample permit is provided in the FM Data Sheet on cutting and welding (1977).

        Lightning Protection

        Lightning is a frequent cause of fires and deaths of people in many countries in the world. For example, each year some 240 US citizens die as a result of lightning.

        Lightning is a form of electrical discharge between charged clouds and the earth. The FM Data Sheet (1984) on lightning indicates that lightning strikes may range from 2,000 to 200,000 A as a result of a potential difference of 5 to 50 million V between clouds and the earth.

        The frequency of lightning varies between countries and areas depending on the number of thunderstorm-days per year for the locality. The damage that lightning can cause depends very much on the ground condition, with more damage occurring in areas of high earth resistivity.

        Protective measures—buildings

        The NFPA 780 Standard for the Installation of Lightning Protection Systems (1995b) sets out the design requirements for protection of buildings. While the exact theory of lightning discharges is still being investigated, the basic principle of protection is to provide a means by which a lightning discharge may enter or leave the earth without damaging the building being protected.

        Lightning systems, therefore, have two functions:

• to intercept the lightning discharge before it strikes the building
• to provide a harmless discharge path to earth.

This requires buildings to be fitted with:

• lightning rods or masts
• down conductors
• good ground connections, typically 10 ohms or less.

         

More details on the design of lightning protection for buildings are provided by Davis (1991) in the NFPA Fire Protection Handbook (Cote 1991) and in the British Standards Institution’s Code of Practice (1992).

        Overhead transmission lines, transformers, outdoor substations and other electrical installations can be damaged by direct lightning strikes. Electrical transmission equipment can also pick up induced voltage and current surges that can enter buildings. Fires, damage to equipment and serious interruption to operations may result. Surge arresters are required to divert these voltage peaks to ground through effective earthing.

The increased use of sensitive computer equipment in commerce and industry has made operations more sensitive to transient over-voltages induced in power and communication cables in many buildings. Appropriate transient protection is required, and special guidance is provided in the British Standards Institution’s BS 6651:1992, The Protection of Structures Against Lightning.

        Maintenance

        Proper maintenance of lightning systems is essential for effective protection. Special attention has to be paid to ground connections. If they are not effective, lightning protection systems will be ineffective.

         


        Thursday, 24 March 2011 18:22

        Sources of Fire Hazards

        Fire and combustion have been defined in various ways. For our purposes, the most important statements in connection with combustion, as a phenomenon, are as follows:

        • Combustion represents a self-sustaining run of reactions consisting of physical and chemical transformations.
        • The materials involved react with the oxidizing agent in their surroundings, which in most cases is the oxygen in the air.
        • Ignition requires favourable starting conditions, generally sufficient heating of the system to cover the initial energy demand of the chain reaction of burning.
        • The reactions are often exothermic, which means that heat is released during burning, and this phenomenon is often accompanied by visibly observable flaming.

         

        Ignition may be considered the first step of the self-sustaining process of combustion. It may occur as piloted ignition (or forced ignition) if the phenomenon is caused by any outer ignition source, or it may occur as auto ignition (or self ignition) if the phenomenon is the result of reactions taking place in the combustible material itself and coupled with heat release.

        The inclination to ignition is characterized by an empirical parameter, the ignition temperature (i.e., the lowest temperature, determined by test, to which the material has to be heated for ignition). Depending upon whether or not this parameter is determined, with special test methods, by the use of an ignition source, we distinguish between the piloted ignition temperature and the auto ignition temperature.

        In the case of piloted ignition, the energy required for the activation of the materials involved in the burning reaction is supplied by ignition sources. However, there is no direct relationship between the heat quantity needed for ignition and the ignition temperature: although the chemical composition of the components in the combustible system is an essential determinant of the ignition temperature, the measured value is considerably influenced by the sizes and shapes of the materials, the pressure of the environment, the conditions of air flow, the parameters of the ignition source, the geometrical features of the testing device and so on. This is why the data published in the literature for autoignition temperature and piloted ignition temperature can differ significantly.

        The ignition mechanism of materials in different states may be illustrated simply by considering materials as solids, liquids or gases.

        Most solid materials take up energy from any outer ignition source either by conduction, convection or radiation (mostly by their combination), or are heated up as a result of the heat-producing processes taking place internally that start decomposition on their surfaces.

        For liquids to ignite, a vapour space capable of burning must form above their surface. The vapours released and the gaseous decomposition products mix with the air above the surface of the liquid or solid material.

        The turbulent flows that arise in the mixture and/or the diffusion help the oxygen to reach the molecules, atoms and free radicals on and above the surface, which are already suitable for reaction. The particles induced enter into interaction, resulting in the release of heat. The process steadily accelerates, and as the chain reaction starts, the material comes to ignition and burns.

        The combustion in the layer under the surface of solid combustible materials is called smouldering, and the burning reaction taking place on the interface of solid materials and gas is called glowing. Burning with flames (or flaming) is the process in the course of which the exothermic reaction of burning runs in the gas phase. This is typical for the combustion of both liquid and solid materials.

        Combustible gases burn naturally in the gas phase. It is an important empirical statement that the mixtures of gases and air are capable of ignition in a certain range of concentration only. This is valid also for the vapours of liquids. The lower and upper flammable limits of gases and vapours depend on the temperature and pressure of the mixture, the ignition source and the concentration of the inert gases in the mixture.

        Ignition Sources

        The phenomena supplying heat energy may be grouped into four fundamental categories as to their origin (Sax 1979):

        1.      heat energy generated during chemical reactions (heat of oxidation, heat of combustion, heat of solution, spontaneous heating, heat of decomposition, etc.)

        2.      electrical heat energy (resistance heating, induction heating, heat from arcing, electric sparks, electrostatic discharges, heat generated by lightning strokes, etc.)

        3.      mechanical heat energy (frictional heat, friction sparks)

        4.      heat generated by nuclear decomposition.

        The following discussion addresses the most frequently encountered sources of ignition.

        Open flames

        Open flames may be the simplest and most frequently encountered ignition source. A large number of tools in general use and various types of technological equipment operate with open flames or enable the formation of open flames. Burners, matches, furnaces, heating equipment, the flames of welding torches, broken gas and oil pipes, etc. may practically be considered potential ignition sources. Because an open flame is itself an existing self-sustaining combustion, ignition by an open flame means, in essence, the spreading of burning to another system. Provided that the open-flame ignition source possesses sufficient energy to initiate ignition, burning will start.

        Spontaneous ignition

        The chemical reactions generating heat spontaneously imply the risk of ignition and burning as “internal ignition sources”. The materials inclined to spontaneous heating and spontaneous ignition may, however, become secondary ignition sources and give rise to ignition of the combustible materials in the surroundings.

        Although some gases (e.g., hydrogen phosphide, boron hydride, silicon hydride) and liquids (e.g., metal carbonyls, organometallic compounds) are inclined to spontaneous ignition, most spontaneous ignitions occur as surface reactions of solid materials. Spontaneous ignition, like all ignitions, depends on the chemical structure of the material, but its occurrence is determined by the degree of dispersion. A large specific surface area enables the local accumulation of reaction heat and contributes to the increase of the temperature of the material above its spontaneous ignition temperature.

        Spontaneous ignition of liquids is also promoted if they come into contact with air on solid materials of large specific surface area. Fats, and especially unsaturated oils containing double bonds, when absorbed by fibrous materials and their products, and when impregnated into textiles of plant or animal origin, are inclined to spontaneous ignition under normal atmospheric conditions. Spontaneous ignition of glass-wool and mineral-wool products, produced from non-combustible fibres or inorganic materials covering large specific surfaces and contaminated by oil, has caused very severe fire accidents.

        Spontaneous ignition has been observed mainly with dusts of solid materials. For metals with good heat conductivity, the local heat accumulation needed for ignition requires very fine crushing of the metal. As the particle size decreases, the likelihood of spontaneous ignition increases, and with some metal dusts (for example, iron) pyrophoricity results. When storing and handling coal dust, finely divided soot, dusts of lacquers and synthetic resins, as well as during the technological operations carried out with them, special attention should be given to preventive measures against fire to reduce the hazard of spontaneous ignition.

        Materials inclined to spontaneous decomposition show a special ability to ignite spontaneously. Hydrazine, when brought onto any material with a large surface area, bursts into flames immediately. The peroxides, which are widely used by the plastics industry, easily decompose spontaneously, and as a consequence of decomposition they become dangerous ignition sources, occasionally initiating explosive burning.

        The violent exothermic reaction that occurs when certain chemicals come into contact with each other may be considered a special case of spontaneous ignition. Examples include contact of concentrated sulphuric acid with organic combustible materials, chlorates with sulphur or with ammonium salts or acids, and organic halogen compounds with alkali metals. The fact that these materials “cannot tolerate each other” (incompatible materials) requires special attention, particularly when they are stored together and when fire-fighting regulations are drawn up.

        It is worth mentioning that such hazardously high spontaneous heating may, in some cases, be due to faulty process conditions (insufficient ventilation, low cooling capacity, inadequate maintenance and cleaning, overheating of the reaction, etc.), or may be promoted by them.

        Certain agricultural products, such as fibrous feedstuffs, oily seeds, germinating cereals and final products of the processing industry (dried beetroot slices, fertilizers, etc.), show an inclination to spontaneous ignition. The spontaneous heating of these materials has a special feature: the dangerous temperature conditions of the systems are exacerbated by exothermic biological processes that cannot be controlled easily.

        Electric ignition sources

        Power machines, instruments and heating devices operated by electric energy, as well as the equipment for power transformation and lighting, typically do not present any fire hazard to their surroundings, provided that they have been installed in compliance with the relevant regulations of safety and requirements of standards and that the associated technological instructions have been observed during their operation. Regular maintenance and periodic supervision considerably diminish the probability of fires and explosions. The most frequent causes of fires in electric devices and wiring are overloading, short circuits, electric sparks and high contact resistances.

        Overloading exists when wiring and electrical appliances are exposed to higher current than that for which they are designed. The overcurrent passing through the wiring, devices and equipment may lead to such overheating that components of the electrical system become damaged or broken, age or carbonize; cord and cable coatings melt down, metal parts glow, combustible structural units ignite and, depending on the conditions, the fire may spread to the environment. The most frequent cause of overloading is that the number of consumers connected is higher than permitted or their capacity exceeds the stipulated value.

        The working safety of electrical systems is most frequently endangered by short circuits. They are always the consequence of some damage and occur when parts of the electrical wiring or equipment at the same or different potential levels, insulated from each other and from earth, come into contact with each other or with the earth. This contact may arise directly as metal-to-metal contact or indirectly, through an electric arc. In a short circuit, when units of the electrical system come into contact with each other, the resistance drops considerably, and as a consequence the current becomes extremely high, perhaps several orders of magnitude higher than normal. The heat energy released by such large short-circuit overcurrents might result in a fire in the device affected by the short circuit, with the materials and equipment in the surrounding area igniting and with the fire spreading to the building.
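        The scale of this effect can be illustrated with a minimal numerical sketch, assuming a purely resistive circuit and hypothetical values for the supply voltage and resistances; it simply applies Ohm’s law (I = V/R) and the Joule heating relation (P = I^2·R) to compare a normal load with a short-circuit fault.

        # Illustrative sketch only: Ohm's law and Joule heating for a normal load
        # versus a short-circuit fault. All numerical values are hypothetical.

        def current_a(voltage_v: float, resistance_ohm: float) -> float:
            """Current drawn through a given resistance (Ohm's law, I = V/R)."""
            return voltage_v / resistance_ohm

        def heat_w(current: float, resistance_ohm: float) -> float:
            """Power converted to heat in the resistance (P = I^2 * R), in watts."""
            return current ** 2 * resistance_ohm

        supply_v = 230.0          # hypothetical mains voltage
        normal_load_ohm = 50.0    # hypothetical appliance resistance
        fault_ohm = 0.05          # hypothetical short-circuit resistance

        for label, r in (("normal load", normal_load_ohm), ("short circuit", fault_ohm)):
            i = current_a(supply_v, r)
            print(f"{label}: {i:,.0f} A, {heat_w(i, r):,.0f} W dissipated as heat")

        Run as written, the sketch shows the short-circuit current several orders of magnitude above the normal load current, which is the point made in the paragraph above.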

        Electric sparks are heat energy sources of small magnitude, but, as experience shows, they frequently act as ignition sources. Under normal working conditions most electrical appliances do not release sparks, but the operation of certain devices is normally accompanied by sparking.

        Sparking introduces a hazard foremost at places where explosive concentrations of gas, vapour or dust might arise in the zone where the sparks are generated. Consequently, equipment that normally releases sparks during operation may be set up only at places where the sparks cannot give rise to fire. On its own, the energy content of a spark is generally too low to ignite bulk materials in the environment; the danger lies in explosive atmospheres, which may be ignited even by this small amount of energy.

        If an electrical system does not have perfect metallic contact between the structural units through which the current flows, high contact resistance will occur at that spot. This phenomenon is in most cases due to faulty construction of joints or to unworkmanlike installation. Loosening of joints during operation and natural wear may also cause high contact resistance. A large portion of the current flowing through places of increased resistance is transformed to heat energy. If this energy cannot be dissipated sufficiently (and the cause cannot be eliminated), the very large rise in temperature might lead to a fire that endangers the surroundings.

        In devices that work on the induction principle (motors, dynamos, transformers, relays, etc.) and are not properly designed, eddy currents may arise during operation. Due to the eddy currents, the structural units (coils and their iron cores) warm up, which might lead to the ignition of insulating materials and the burning of the equipment. Eddy currents may arise, with these harmful consequences, also in the metal structural units around high-voltage equipment.

        Electrostatic sparks

        Electrostatic charging is a process in the course of which any material, originally with electric neutrality (and independent of any electric circuit) becomes charged positively or negatively. This may occur in one of three ways:

        1.      charging by separation, such that charges of opposite polarity accumulate on two bodies simultaneously

        2.      charging by transfer, such that charges passing away leave charges of opposite polarity behind

        3.      charging by taking up, such that the body receives charges from outside.

        These three ways of charging may arise from various physical processes, including separation after contact, splitting, cutting, pulverizing, moving, rubbing, the flow of powders and fluids in pipes, impact, change of pressure, change of state, photoionization, heat ionization, electrostatic distribution or high-voltage discharge.

        Electrostatic charging may occur both on conducting bodies and insulating bodies as a result of any of the processes mentioned above, but in most cases the mechanical processes are responsible for the accumulation of the unwanted charges.

        From the large number of the harmful effects and risks due to electrostatic charging and the spark discharge resulting from it, two risks can be mentioned in particular: endangering of electronic equipment (for example, computer for process control) and the hazard of fire and explosion.

        Electronic equipment is endangered first of all if the discharge energy from the charging is sufficiently high to cause destruction of the input of any semi-conductive part. The development of electronic units in the last decade has been followed by the rapid increase of this risk.

        The development of fire or explosion risk necessitates the coincidence in space and time of two conditions: the presence of any combustible medium and the discharge with ability for ignition. This hazard occurs mainly in the chemical industry. It may be estimated on the basis of the so-called spark sensitivity of hazardous materials (minimum ignition energy) and depends on the extent of charging.
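        The comparison between a discharge and the minimum ignition energy of a combustible atmosphere can be illustrated with a short calculation. The sketch below, a simplified illustration only, treats a charged conductor (here, a person) as a capacitor and compares the stored energy, E = ½CV^2, with an assumed minimum ignition energy; the capacitance, voltage and ignition-energy figures are hypothetical values chosen for the example, not data from the text.

        # Minimal sketch: energy stored on a charged conductor versus an assumed
        # minimum ignition energy (MIE). All numerical values are hypothetical.

        def discharge_energy_mj(capacitance_pf: float, voltage_kv: float) -> float:
            """Stored electrostatic energy E = 0.5 * C * V^2, returned in millijoules."""
            c_farad = capacitance_pf * 1e-12
            v_volt = voltage_kv * 1e3
            return 0.5 * c_farad * v_volt ** 2 * 1e3

        body_capacitance_pf = 200.0   # assumed capacitance of a charged person
        charging_voltage_kv = 10.0    # assumed charging voltage
        assumed_mie_mj = 0.3          # assumed MIE of the surrounding vapour/air mixture

        energy = discharge_energy_mj(body_capacitance_pf, charging_voltage_kv)
        print(f"stored energy: {energy:.1f} mJ")
        print("discharge could ignite the mixture" if energy >= assumed_mie_mj
              else "discharge below the assumed MIE")

        With these assumed figures the stored energy comes to roughly 10 mJ, well above the assumed spark sensitivity, which is why bleeding off accumulated charge is treated as a basic protective measure.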

        It is an essential task to reduce these risks, since the consequences range from technological troubles to catastrophes with fatal accidents. There are two means of protection against the consequences of electrostatic charging:

        1.      preventing the initiation of the charging process (it is evident, but usually very difficult to realize)

        2.      restricting the accumulation of charges to prevent the occurrence of dangerous discharges (or any other risk).

        Lightning is an atmospheric electrical phenomenon and may be considered an ignition source. The static charge produced in the clouds is equalized towards the earth (lightning stroke) and is accompanied by a high-energy discharge. Combustible materials at the place of the lightning stroke and in its surroundings may ignite and burn. In some lightning strokes, very strong impulses are generated and the energy is equalized in several steps. In other cases, long-lasting currents start to flow, sometimes reaching the order of magnitude of 10 A.

        Mechanical heat energy

        Friction is a constant companion of technical practice. During mechanical operation frictional heat is developed, and if heat loss is restricted to such an extent that heat accumulates in the system, the temperature may increase to a value that is dangerous for the environment, and fire may occur.

        Friction sparks normally occur during metalworking operations involving heavy friction (grinding, chipping, cutting, hitting), when metal objects or tools drop or fall onto a hard floor, or during grinding operations because of metal contamination within the material being ground. The temperature of the spark generated is normally higher than the ignition temperature of conventional combustible materials (for example, about 1,400-1,500 °C for sparks from steel and 300-400 °C for sparks from copper-nickel alloys); however, the ability to ignite depends on the whole heat content of the spark and on the lowest ignition energy of the material or substance to be ignited. It has been proven in practice that friction sparks present a real fire risk in air spaces where combustible gases, vapours and dusts are present in dangerous concentrations. Thus, under these circumstances the use of materials that easily produce sparks, as well as processes involving mechanical sparking, should be avoided. In these cases, safety is provided by tools that do not spark, i.e., tools made from wood, leather or plastic materials, or by tools of copper and bronze alloys that produce sparks of low energy.

        Hot surfaces

        In practice, the surfaces of equipment and devices may warm up to a dangerous extent either in normal operation or due to malfunction. Ovens, furnaces, drying devices, waste-gas outlets, vapour pipes, etc. often cause fires in explosive air spaces. Furthermore, their hot surfaces may ignite combustible materials that come close to them or into contact with them. For prevention, safe distances should be observed, and regular supervision and maintenance will reduce the probability of dangerous overheating.

        Fire Hazards of Materials and Products

        The presence of combustible material in combustible systems represents an obvious condition of burning. Burning phenomena and the phases of the burning process fundamentally depend on the physical and chemical properties of the material involved. Therefore, it seems reasonable to make a survey of the flammability of the various materials and products with respect to their character and properties. For this section, the ordering principle for the grouping of materials is governed by technical aspects rather than by theoretical conceptions (NFPA 1991).

        Wood and wood-based products

        Wood is one of the most common materials in the human environment. Houses, building structures, furniture and consumer goods are made of wood, and it is also widely used for products such as paper as well as in the chemical industry.

        Wood and wood products are combustible, and when in contact with high-temperature surfaces and exposed to heat radiation, open flames or any other ignition source, they will carbonize, glow, ignite or burn, depending upon the conditions of combustion. To widen the field of their application, the improvement of their combustion properties is required. In order to make structural units produced from wood less combustible, they are typically treated with fire-retardant agents (e.g., by saturation, impregnation or surface coating).

        The most essential characteristic of combustibility of the various kinds of wood is the ignition temperature. Its value strongly depends on some of the properties of wood and the test conditions of determination, namely, the wood sample’s density, humidity, size and shape, as well as the ignition source, time of exposure, intensity of exposure and the atmosphere during testing. It is interesting to note that the ignition temperature as determined by various test methods differs. Experience has shown that the inclination of clean and dry wood products to ignition is extremely low, but several fire cases caused by spontaneous ignition have been known to occur from storing dusty and oily waste wood in rooms with imperfect ventilation. It has been proven empirically that higher moisture content increases the ignition temperature and reduces the burning speed of wood. The thermal decomposition of wood is a complicated process, but its phases may clearly be observed as follows:

        • Thermal decomposition with mass loss starts already in the range 120-200 °C; moisture is released and non-combustible degradation products appear in the combustion space.
        • At 200-280 °C, mainly endothermic reactions occur while heat energy from the ignition source is taken up.
        • At 280-500 °C, the exothermic reactions of the decomposition products steadily accelerate as the primary process, while carbonization may be observed. In this temperature range self-sustaining combustion has already developed. After ignition, burning is not steady in time because of the good heat-insulating ability of the carbonized layers; consequently, the warming up of the deeper layers is limited and time consuming. As the release of combustible decomposition products from the surface accelerates, burning becomes complete.
        • At temperatures exceeding 500 °C, the wood forms char residue. During its further glowing, ash containing solid, inorganic materials is produced, and the process comes to an end.

         

        Fibres and textiles

        Most of the textiles produced from fibrous materials found in people's immediate surroundings are combustible. Clothing, furniture and the built environment consist partly or wholly of textiles. The hazard they present exists during their production, processing and storage, as well as during their use.

        The basic materials of textiles are both natural and artificial; synthetic fibres are used either alone or mixed with natural fibres. The chemical composition of the natural fibres of plant origin (cotton, hemp, jute, flax) is cellulose, which is combustible, and these fibres have a relatively high ignition temperature (about 400 °C). It is an advantageous feature of their burning that when brought to high temperature they carbonize but do not melt. This is especially advantageous for the medical treatment of burn casualties.

        The fire hazard properties of the protein-based fibres of animal origin (wool, silk, hair) are even more favourable than those of fibres of plant origin, because a higher temperature is required for their ignition (500-600 °C), and under the same conditions their burning is less intensive.

        The plastics industry, utilizing several extremely good mechanical properties of polymer products, has also gained prominence in the textile industry. Among the properties of acrylic, polyester and the thermoplastic synthetic fibres (nylon, polypropylene, polyethylene), those associated with burning are the least advantageous. Most of them, in spite of their relatively high ignition temperatures (about 400-600 °C), melt when exposed to heat, ignite easily, burn intensively, drip or melt while burning and release considerably high quantities of smoke and toxic gases. These burning properties may be improved by the addition of natural fibres, producing so-called mixed-fibre textiles. Further treatment is accomplished with flame-retardant agents. For the manufacture of textiles for industrial purposes and heat-protective clothing, inorganic, non-combustible fibre products (including glass and metal fibres) are already used in large quantities.

        The most important fire hazard characteristics of textiles are the properties connected with ignitability, flame spread, heat generation and toxic combustion products. Special testing methods have been developed for their determination. The test results obtained influence the fields of application for these products (tents and flats, furniture, vehicle upholstery, clothes, carpets, curtains, special protective clothing against heat and weather), as well as the stipulations to restrict the risks in their use. An essential task of industrial researchers is to develop textiles that withstand high temperature when treated with fire-retardant agents (i.e., that are difficult to ignite, have a long ignition time, a low flame spread rate and a low rate of heat release) and that produce small amounts of toxic combustion products, as well as to mitigate the adverse effects of fire accidents caused by the burning of such materials.

        Combustible and flammable liquids

        In the presence of ignition sources, combustible and flammable liquids are potential sources of risk. First, the closed or open vapour space above such liquids provides a fire and explosion hazard. Combustion, and more frequently explosion, might occur if the material is present in the vapour-air mixture in suitable concentration. From this it follows that burning and explosion in the zone of combustible and flammable liquids may be prevented if:

        • the ignition sources, air, and oxygen are excluded; or
        • instead of oxygen, inert gas is present in the surrounding; or
        • the liquid is stored in a closed vessel or system (see Figure 1); or
        • by proper ventilation, the development of the dangerous vapour concentration is prevented.

         

        Figure 1. Common types of tanks for storage of flammable and combustible liquids.

        FIR020F1

        In practice, a large number of material characteristics are known in connection with the dangerous nature of combustible and flammable liquids. These are closed-cup and open-cup flash points, boiling point, ignition temperature, rate of evaporation, upper and lower limits of the concentration for combustibility (flammable or explosive limits), the relative density of vapours compared to air and energy required for the ignition of vapours. These factors provide full information about the sensitivity for ignition of various liquids.

        Nearly all over the world the flash point, a parameter determined by standard tests under atmospheric conditions, is used as the basis for grouping liquids (and materials that behave as liquids at relatively low temperatures) into categories of risk. The safety requirements for the storage of liquids, their handling, the technological processes and the electrical equipment to be set up in their zone should be elaborated for each category of flammability and combustibility. The zones of risk around the technological equipment should also be identified for each category. Experience has shown that fire and explosion might occur, depending on the temperature and pressure of the system, within the range of concentration between the two flammable limits.
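        As a rough illustration of grouping by flash point, the sketch below assigns liquids to categories using illustrative threshold temperatures. The thresholds and category names are assumptions for the example only, since the actual classes and limits are fixed by the applicable national standard or code; the example flash points are typical literature values for the liquids named.

        # Illustrative sketch only: grouping liquids into risk categories by
        # closed-cup flash point. Thresholds and labels are assumed, not normative.

        def risk_category(flash_point_c: float) -> str:
            if flash_point_c < 21.0:       # assumed boundary for the example
                return "highly flammable"
            if flash_point_c < 55.0:       # assumed boundary for the example
                return "flammable"
            return "combustible"

        liquids = {"gasoline": -38.0, "n-decane": 46.0, "n-dodecane": 74.0}
        for name, fp in liquids.items():
            print(f"{name}: flash point {fp} °C -> {risk_category(fp)}")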

        Gases

        Although all materials—under a specific temperature and pressure—may become gases, the materials considered gaseous in practice are those that are in a gas state at normal temperature (~20 °C) and normal atmospheric pressure (~100 kPa).

        With respect to fire and explosion hazards, gases may be ranked in two main groups: combustible and non-combustible gases. According to the definition accepted in practice, combustible gases are those that burn in air at normal oxygen concentration, provided that the conditions required for burning exist. Ignition occurs only above the necessary ignition temperature and within a given range of concentration.

        Non-combustible gases are those that burn neither in oxygen nor in air at any concentration. Some of these gases support combustion (e.g., oxygen), while others inhibit burning. Non-combustible gases that do not support burning are called inert gases (nitrogen, noble gases, carbon dioxide, etc.).

        In order to achieve economic efficiency, gases stored and transported in containers or transport vessels are typically in a compressed, liquefied or cooled-condensed (cryogenic) state. Basically, there are two hazardous situations in connection with gases: when they are in containers and when they are released from their containers.

        For compressed gases in storage containers, external heat might considerably increase the pressure within the container, and the extreme overpressure might lead to explosion. Storage containers for liquefied gases typically include both a vapour phase and a liquid phase. Because of changes in pressure and temperature, the expansion of the liquid phase gives rise to further compression of the vapour space, while the vapour pressure of the liquid increases in proportion with the increase of temperature. As a result of these processes, a critically dangerous pressure may be produced. Storage containers are generally required to be fitted with overpressure relief devices, which are capable of mitigating a hazardous situation due to higher temperatures.
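        The effect of external heating on a container of compressed (non-liquefied) gas can be estimated with the ideal gas law at constant volume, P2 = P1·T2/T1 with temperatures in kelvin. The sketch below is a simplified estimate under that assumption only and does not describe the more complex behaviour of liquefied gases discussed above; the filling pressure and temperatures are hypothetical.

        # Simplified estimate only: pressure rise of a compressed gas in a rigid,
        # closed container heated in a fire, assuming ideal-gas behaviour at
        # constant volume. All values are hypothetical.

        def heated_pressure_kpa(p1_kpa: float, t1_c: float, t2_c: float) -> float:
            """P2 = P1 * T2 / T1, with temperatures converted to kelvin."""
            return p1_kpa * (t2_c + 273.15) / (t1_c + 273.15)

        initial_pressure_kpa = 15000.0   # hypothetical filling pressure at 20 °C
        print(f"{heated_pressure_kpa(initial_pressure_kpa, 20.0, 300.0):.0f} kPa at 300 °C")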

        If the storage vessels are insufficiently sealed or damaged, the gas will flow out into the free air space, mix with air and, depending on its quantity and the manner of its release, may form a large explosive air space. The air around a leaking storage vessel may be unsuitable for breathing and may be dangerous for people nearby, partly due to the toxic effect of some gases and partly due to dilution of the oxygen concentration.

        Bearing in mind the potential fire hazard due to gases and the need for safe operation, one must get detailed knowledge of the following features of gases either stored or used, especially for industrial consumers: the chemical and physical properties of gases, ignition temperature, the lower and upper limits of concentration for flammability, the hazardous parameters of the gas in the container, the risk factors of the hazardous situation caused by the gases released into the open air, the extent of the necessary safety zones and the special measures to be taken in case of a possible emergency situation connected with fire-fighting.

        Chemicals

        Knowledge of the hazardous parameters of chemicals is one of the basic conditions of safe working. The preventive measures and requirements for protection against fire may be elaborated only if the physical and chemical properties connected with fire hazard are taken into consideration. Of these properties, the most important ones are the following: combustibility; ignitability; ability to react with other materials, water or air; inclination to corrosion; toxicity; and radioactivity.

        Information on the properties of chemicals can be obtained from the technical data sheets issued by manufacturers and from the manuals and handbooks containing the data of hazardous chemicals. These provide users with information not only about the general technical features of materials, but also about the actual values of hazard parameters (decomposition temperature, ignition temperature, limit concentrations of combustion, etc.), their special behaviour, requirements for storage and fire-fighting, as well as recommendations for first aid and medical therapy.

        The toxicity of chemicals, as a potential fire hazard, may act in two ways. First, the high toxicity of certain chemicals themselves may be hazardous in a fire. Second, their presence within the fire zone may effectively restrict fire-fighting operations.

        The oxidizing agents (nitrates, chlorates, inorganic peroxides, permanganates, etc.), even if they themselves are non-combustible, largely contribute to the ignition of combustible materials and to their intensive, occasionally explosive burning.

        The group of unstable materials includes the chemicals (acetaldehyde, ethylene oxide, organic peroxides, hydrogen cyanide, vinyl chloride) which polymerize or decompose in violent exothermic reactions spontaneously or very easily.

        Materials sensitive to water and air are extremely dangerous. These materials (oxides, hydroxides, hydrides, anhydrides, alkali metals, phosphorus, etc.) interact with the water and air that are always present in the normal atmosphere and start reactions accompanied by very high heat generation. If they are themselves combustible, they will come to spontaneous ignition. The burning they initiate may also become explosive and spread to the combustible materials in the surrounding area.

        The majority of corrosive materials (inorganic acids such as sulphuric acid, nitric acid and perchloric acid, and halogens such as fluorine, chlorine, bromine and iodine) are strong oxidizing agents, but at the same time they have very strong destructive effects on living tissues, and therefore special measures have to be taken for fire-fighting.

        The dangerous character of radioactive elements and compounds is increased by the fact that the radiation they emit may be harmful in several ways, in addition to the fact that such materials may be fire hazards themselves. If, in a fire, the structural containment of the radioactive objects involved becomes damaged, γ-radiating materials might be released. They can have a very strong ionizing effect and are capable of the fatal destruction of living organisms. Nuclear accidents can be accompanied by fires, the decomposition products of which bind radioactive (α- and β-radiating) contaminants by adsorption. These may cause permanent injuries to persons taking part in rescue operations if they penetrate into their bodies. Such materials are extremely dangerous because the persons affected cannot perceive the radiation with their sense organs, and their general state of health does not at first seem to be any worse. It is obvious that if radioactive materials burn, the radioactivity of the site, the decomposition products and the water used for fire-fighting should be kept under constant observation by means of radiation-monitoring devices. Knowledge of these factors has to be taken into account in the strategy of intervention and all additional operations. Buildings for handling and storing radioactive materials, as well as for their technological use, need to be built of non-combustible materials of high fire resistance. At the same time, high-quality automatic equipment for detecting, signalling and extinguishing a fire should be provided.

        Explosives and blasting agents

        Explosive materials are used for many military and industrial purposes. These are chemicals and mixtures which, when subjected to strong mechanical force (impact, shock, friction) or to an ignition source, suddenly transform into gases of large volume through an extremely rapid oxidizing reaction (propagating at, e.g., 1,000-10,000 m/s). The volume of these gases is many times the volume of the explosive material from which they formed, and they exert very high pressure on the surroundings. During an explosion, high temperatures can arise (2,500-4,000 °C) that promote the ignition of combustible materials in the zone of the explosion.

        Manufacture, transport and storage of the various explosive materials are governed by rigorous requirements. An example is NFPA 495, Explosive Materials Code.

        Besides the explosive materials used for military and industrial purposes, initiating blasting materials and pyrotechnical products are also treated as hazards. In general, mixtures of explosive materials are often used (picric acid, nitroglycerin, hexogene, etc.), but mixtures of materials capable of explosion are also in use (black powder, dynamite, ammonium nitrate, etc.). In acts of terrorism, plastic explosives have become well known; these are, in essence, mixtures of brisant explosives and plasticizing materials (various waxes, Vaseline, etc.).

        For explosive materials, the most effective method of protection against fire is the exclusion of ignition sources from the surroundings. Several explosive materials are sensitive to water or to various organic materials that can be oxidized. For these materials, the requirements for the conditions of storage, and the rules for storing them together with other materials, should be carefully considered.

        Metals

        It is known from practice that nearly all metals, under certain conditions, are capable of burning in atmospheric air. Steel and aluminium in large structural thickness are, on the basis of their behaviour in fire, clearly evaluated as non-combustible. However, dusts of aluminium and of finely divided iron, and metal wools made of thin metal fibres, can easily be ignited and burn intensively. The alkali metals (lithium, sodium, potassium), the alkaline-earth metals (calcium, magnesium), zinc, zirconium, hafnium, titanium, etc. ignite extremely easily in the form of powder, filings or thin bands. Some metals are so sensitive that they are stored isolated from air, in inert gas atmospheres or under a liquid that is neutral towards the metal.

        The combustible metals, and those that burn under particular conditions, produce extremely violent burning reactions: high-speed oxidation processes that release considerably higher quantities of heat than the burning of combustible and flammable liquids. The burning of settled metal dust, following a preliminary phase of glowing ignition, might grow into rapid burning, and if the dust is stirred up into clouds, severe explosions can occur. The burning activity and affinity for oxygen of some metals (such as magnesium) are so high that, once ignited, they continue to burn in certain media (e.g., nitrogen, carbon dioxide, steam) that are used for extinguishing fires of combustible solid materials and liquids.

        Extinguishing metal fires presents a special task for fire-fighters. The choice of the proper extinguishing agent and the process in which it is applied are of great importance.

        Fires of metals may be controlled with very early detection, the rapid and appropriate action of fire-fighters using the most effective method and, if possible, removal of metals and any other combustible materials from the zone of burning or at least a reduction of their quantities.

        Special attention should be given to protection against radiation when radioactive metals (plutonium, uranium) burn. Preventive measures have to be taken to avoid the penetration of toxic decomposition products into living organisms. For example, alkali metals, because of their ability to react violently with water, may be extinguished with dry fire-extinguishing powders only. Burning magnesium cannot be extinguished successfully with water, carbon dioxide, halons or nitrogen; more importantly, if these agents are used in fire-fighting, the hazardous situation becomes even more severe. The only agents that can be applied successfully are the noble gases or, in some cases, boron trifluoride.

        Plastics and rubber

        Plastics are macromolecular organic compounds produced synthetically or by modification of natural materials. The structure and shape of these macromolecular materials, produced by polymerization, polyaddition or polycondensation reactions, strongly influence their properties. The chain molecules of thermoplastics (polyamides, polycarbonates, polyesters, polystyrene, polyvinyl chloride, polymethyl methacrylate, etc.) are linear or branched, the elastomers (neoprene, polysulphides, isoprene, etc.) are lightly cross-linked, while thermosetting plastics (duroplastics: polyalkyds, epoxy resins, polyurethanes, etc.) are densely cross-linked.

        Natural caoutchouc is used as a raw material by the rubber industry and, after vulcanization, rubber is produced. The artificial caoutchoucs, whose structure is similar to that of natural caoutchouc, are polymers and co-polymers of butadiene.

        The range of products from plastics and rubber used in nearly all fields of everyday life is steadily widening. Use of the large variety and excellent technical properties of this group of materials results in items such as various building structures, furniture, clothes, commodities, parts for vehicles and machines.

        Typically, as organic materials, plastics and rubber are also considered to be combustible materials. For the description of their fire behaviour, a number of parameters are used that can be tested by special methods. With knowledge of these parameters, their fields of application can be allocated and the fire safety provisions can be elaborated. These parameters are combustibility, ignitability, ability to develop smoke, inclination to produce toxic gases, and burning dripping.

        In many cases the ignition temperature of plastics is higher than that of wood or other materials, but in most cases they ignite more easily, and their burning takes place more rapidly and with higher intensity. Fires of plastics are often accompanied by the unpleasant phenomenon of large quantities of dense smoke being released, which can strongly restrict visibility, and by the development of various toxic gases (hydrochloric acid, phosgene, carbon monoxide, hydrogen cyanide, nitrous gases, etc.). Thermoplastic materials melt during burning and then flow; depending on their location (e.g., if mounted in or on a ceiling), they produce drips that remain burning and might ignite the combustible materials underneath.

        The improvement of burning properties represents a complex problem and a “key issue” of plastics chemistry. Fire-retardant agents inhibit combustibility: ignition becomes slower, the rate of combustion falls and flame propagation slows down. At the same time, the quantity and optical density of smoke will be higher and the gas mixture produced will be more toxic.

        Dusts

        With regard to physical state, dusts belong to the solid materials, but their physical and chemical properties differ from those of the same materials in compact form. It is known that industrial accidents and catastrophes are caused by dust explosions. Materials that are non-combustible in their usual form, such as metals, may initiate an explosion in the form of dust mixed with air when affected by any ignition source, even one of low energy. The hazard of an explosion also exists with dusts of combustible materials.

        Dust can be an explosion hazard not only when floating in the air, but also when settled. In layers of dust, heat may accumulate, and slow burning may develop in the inside as a result of the increased ability of particles to react and their lower thermal conductivity. Then the dust may be stirred up by flashes, and the possibility of dust explosion will grow.

        Floating particles in fine distribution present a more severe hazard. Similar to the explosion properties of combustible gases and vapours, dusts also have a special range of air-dust concentration in which an explosion may occur. The lower and upper limit values of explosion concentration and the width of concentration range depend on the size and distribution of particles. If the dust concentration exceeds the highest concentration leading to an explosion, a portion of the dust is not destroyed by fire and absorbs heat, and as a consequence the explosion pressure developed remains below the maximum. The moisture content of air also influences the occurrence of an explosion. At higher humidity, the ignition temperature of the cloud of dust will increase in proportion with the heat quantity necessary for the evaporation of humidity. If an inert foreign dust is mixed in a cloud of dust, the explosivity of the dust-air mixture will be reduced. The effect will be the same if inert gases are mixed in the mixture of air and dust, because the oxygen concentration necessary for burning will be lower.

        Experience has shown that all ignition sources, even those of minimum ignition energy, are capable of igniting dust clouds (open flames, electric arcs, mechanical or electrostatic sparks, hot surfaces, etc.). According to test results obtained in the laboratory, the energy demand for ignition of dust clouds is 20 to 40 times higher than in the case of mixtures of combustible vapour and air.

        The factors that influence the explosion hazard for settled dusts are the physical and thermal engineering properties of the dust layer, the glowing temperature of the dust and the ignition properties of the decomposition products released by the dust layer.

         


        Thursday, 24 March 2011 18:15

        Basic Concepts

        The Chemistry and Physics of Fire

        Fire is a manifestation of uncontrolled combustion. It involves combustible materials which are found around us in the buildings in which we live, work and play, as well as a wide range of gases, liquids and solids which are encountered in industry and commerce. They are commonly carbon-based, and may be referred to collectively as fuels in the context of this discussion. Despite the wide variety of these fuels in both their chemical and physical states, in fire they share features that are common to them all. Differences are encountered in the ease with which fire can be initiated (ignition), the rate with which fire can develop (flame spread), and the power that can be generated (rate of heat release), but as our understanding of the science of fire improves, we become better able to quantify and predict fire behaviour and apply our knowledge to fire safety in general. The purpose of this section is to review some of the underlying principles and provide guidance to an understanding of fire processes.

        Basic Concepts

        Combustible materials are all around us. Given the appropriate circumstances, they can be made to burn by subjecting them to an ignition source which is capable of initiating a self-sustaining reaction. In this process, the “fuel” reacts with oxygen from the air to release energy (heat), while being converted to products of combustion, some of which may be harmful. The mechanisms of ignition and burning need to be clearly understood.

        Most everyday fires involve solid materials (e.g., wood, wood products and synthetic polymers), although gaseous and liquid fuels are not uncommon. A brief review of the combustion of gases and liquids is desirable before some of the basic concepts are discussed.

        Diffusion and premixed flames

        A flammable gas (e.g., propane, C3H8) can be burned in two ways: a stream or jet of gas from a pipe (cf. the simple Bunsen burner with the air inlet closed) can be ignited and will burn as a diffusion flame in which burning occurs in those regions where gaseous fuel and air mix by diffusive processes. Such a flame has a characteristic yellow luminosity, indicating the presence of minute soot particles formed as a result of incomplete combustion. Some of these will burn in the flame, but others will emerge from the flame tip to form smoke.

        If the gas and air are intimately mixed before ignition, then premixed combustion will occur, provided that the gas/air mixture lies within a range of concentrations bounded by the lower and upper flammability limits (see table 1). Outside these limits, the mixture is non-flammable. (Note that a premixed flame is stabilized at the mouth of a Bunsen burner when the air inlet is open.) If a mixture is flammable, then it can be ignited by a small ignition source, such as an electrical spark. The stoichiometric mixture is the most readily ignited, in which the amount of oxygen present is in the correct proportion to burn all the fuel to carbon dioxide and water (see accompanying equation, below, in which nitrogen can be seen to be present in the same proportion as in air but does not take part in the reaction). Propane (C3H8) is the combustible material in this reaction:

        C3H8 + 5O2 + 18.8N2 = 3CO2 + 4H2O + 18.8N2

        An electrical discharge as small as 0.3 mJ is sufficient to ignite a stoichiometric propane/air mixture in the reaction illustrated. This represents a barely perceptible static spark, as experienced by someone who has walked across a synthetic carpet and touched a grounded object. Even smaller amounts of energy are required for certain reactive gases such as hydrogen, ethylene and ethyne. In pure oxygen (as in the reaction above, but with no nitrogen present as a diluent), even lower energies are sufficient.

        Table 1. Lower and upper flammability limits in air

         

        Gas or vapour        Lower flammability limit    Upper flammability limit
                             (% by volume)               (% by volume)

        Carbon monoxide      12.5                        74
        Methane              5.0                         15
        Propane              2.1                         9.5
        n-Hexane             1.2                         7.4
        n-Decane             0.75                        5.6
        Methanol             6.7                         36
        Ethanol              3.3                         19
        Acetone              2.6                         13
        Benzene              1.3                         7.9
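        The stoichiometric propane/air equation above and the limits in Table 1 can be tied together in a short calculation. The sketch below computes the stoichiometric propane concentration (about 4% by volume) from the mole proportions in the equation and checks it against the tabulated flammable range; it is an illustration only, assuming complete combustion in dry air.

        # Sketch: stoichiometric fuel concentration from C3H8 + 5O2 + 18.8N2 and a
        # check against the flammability limits listed in Table 1 (% by volume).

        FLAMMABILITY_LIMITS = {
            "propane": (2.1, 9.5),
            "methane": (5.0, 15.0),
        }

        def stoichiometric_percent(moles_fuel: float, moles_o2: float, moles_n2: float) -> float:
            """Fuel fraction of the fuel + air mixture, expressed as % by volume."""
            return 100.0 * moles_fuel / (moles_fuel + moles_o2 + moles_n2)

        def within_flammable_range(fuel: str, percent: float) -> bool:
            lower, upper = FLAMMABILITY_LIMITS[fuel]
            return lower <= percent <= upper

        propane_stoich = stoichiometric_percent(1.0, 5.0, 18.8)
        print(f"stoichiometric propane concentration: {propane_stoich:.1f}% by volume")
        print("within flammable range:", within_flammable_range("propane", propane_stoich))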

         

        The diffusion flame associated with a flow of gaseous fuel exemplifies the mode of burning that is observed when a liquid or solid fuel is undergoing flaming combustion. However, in this case, the flame is fed by fuel vapours generated at the surface of the condensed phase. The rate of supply of these vapours is coupled to their rate of burning in the diffusion flame. Energy is transferred from the flame to the surface, thus providing the energy necessary to produce the vapours. This is a simple evaporative process for liquid fuels, but for solids, enough energy must be provided to cause chemical decomposition of the fuel, breaking large polymeric molecules into smaller fragments which can vaporize and escape from the surface. This thermal feedback is essential to maintain the flow of vapours, and hence support the diffusion flame (figure 1). Flames can be extinguished by interfering with this process in a number of ways (see below).

        Figure 1. Schematic representation of a burning surface showing the heat and mass transfer processes.

        FIR010F1

        Heat transfer

        An understanding of heat (or energy) transfer is the key to an understanding of fire behaviour and fire processes. The subject deserves careful study. There are many excellent texts to which one may turn (Welty, Wilson and Wicks 1976; DiNenno 1988), but for the present purposes it is necessary only to draw attention to the three mechanisms: conduction, convection and radiation. The basic equations for steady-state heat transfer, expressed as a heat flux q″ (kW/m2), are:

        Conduction:   q″ = k(T1 − T2)/l

        Convection:   q″ = h(T1 − T2)

        Radiation:    q″ = εσT^4

        Conduction is relevant to heat transfer through solids; k is a material property known as the thermal conductivity (kW/mK) and l is the distance (m) over which the temperature falls from T1 to T2 (in kelvin). Convection in this context refers to the transfer of heat from a fluid (in this case, air, flames or fire products) to a surface (solid or liquid); h is the convective heat transfer coefficient (kW/m2K) and depends on the configuration of the surface and the nature of the flow of fluid past that surface. Radiation is similar to visible light (but with a longer wavelength) and requires no intervening medium (it can traverse a vacuum); ε is the emissivity (the efficiency with which a surface can radiate), σ is the Stefan-Boltzmann constant (5.67 x 10^-11 kW/m2K^4) and T is the absolute temperature (K) of the radiating surface. Thermal radiation travels at the speed of light (3 x 10^8 m/s), and an intervening solid object will cast a shadow.
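        The three expressions above can be turned into a small numerical sketch. The functions below simply evaluate the conduction, convection and radiation fluxes as written; the flame temperature and emissivity used in the example are assumed values for illustration only.

        # Minimal sketch of the three steady-state heat transfer expressions.
        # The example input values are assumptions, not data from the text.

        STEFAN_BOLTZMANN = 5.67e-11   # kW/m2K^4

        def conduction_flux(k: float, t1: float, t2: float, l: float) -> float:
            """Conductive flux through a solid, q" = k(T1 - T2)/l, in kW/m2."""
            return k * (t1 - t2) / l

        def convection_flux(h: float, t1: float, t2: float) -> float:
            """Convective flux from a fluid to a surface, q" = h(T1 - T2), in kW/m2."""
            return h * (t1 - t2)

        def radiation_flux(emissivity: float, t_kelvin: float) -> float:
            """Radiative flux emitted by a surface at temperature T, q" = e*sigma*T^4."""
            return emissivity * STEFAN_BOLTZMANN * t_kelvin ** 4

        # Assumed example: a flame at 1,100 K with an emissivity of 0.7.
        print(f"radiative flux: {radiation_flux(0.7, 1100.0):.0f} kW/m2")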

        Rate of burning and rate of heat release

        Heat transfer from flames to the surface of condensed fuels (liquids and solids) involves a mixture of convection and radiation, although the latter dominates when the effective diameter of the fire exceeds 1 m. The rate of burning (ṁ, in g/s) can be expressed by the formula:

        ṁ = (q″F − q″L) · Afuel / Lv

        where q″F is the heat flux from the flame to the surface (kW/m2); q″L is the heat loss from the surface (e.g., by radiation, and by conduction through the solid) expressed as a flux (kW/m2); Afuel is the surface area of the fuel (m2); and Lv is the heat of gasification (equivalent to the latent heat of evaporation for a liquid) (kJ/g). If a fire develops in a confined space, the hot smoky gases rising from the fire (driven by buoyancy) are deflected beneath the ceiling, heating the upper surfaces. The resulting smoke layer and the hot surfaces radiate down to the lower part of the enclosure, in particular to the fuel surface, thus increasing the rate of burning:

        ṁ = (q″F + q″E − q″L) · Afuel / Lv

        where q″E is the extra heat supplied by radiation from the upper part of the enclosure (kW/m2). This additional feedback leads to greatly enhanced rates of burning and to the phenomenon of flashover in enclosed spaces where there is an adequate supply of air and sufficient fuel to sustain the fire (Drysdale 1985).

        The rate of burning is moderated by the magnitude of the value of Lv, the heat of gasification. This tends to be low for liquids and relatively high for solids. Consequently, solids tend to burn much more slowly than liquids.

        It has been argued that the most important single parameter which determines the fire behaviour of a material (or assembly of materials) is the rate of heat release (RHR), which is coupled to the rate of burning through the equation:

        RHR = ṁ · ΔHc

        where ΔHc is the effective heat of combustion of the fuel (kJ/g). New techniques are now available for measuring the RHR at different heat fluxes (e.g., the Cone Calorimeter), and it is now possible to measure the RHR of large items, such as upholstered furniture and wall linings, in large-scale calorimeters which use oxygen consumption measurements to determine the rate of heat release (Babrauskas and Grayson 1992).
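        As a worked illustration of the two relations above, the sketch below evaluates the burning rate and the resulting RHR for a hypothetical pool fire; the heat fluxes, fuel area, heat of gasification and effective heat of combustion are assumed values, not data from the text.

        # Sketch of the burning-rate and rate-of-heat-release relations given above.
        # All input values are hypothetical.

        def burning_rate_g_s(q_flame: float, q_loss: float, area_m2: float,
                             l_v_kj_g: float, q_enclosure: float = 0.0) -> float:
            """Mass burning rate, m = (q"F + q"E - q"L) * Afuel / Lv, in g/s."""
            return (q_flame + q_enclosure - q_loss) * area_m2 / l_v_kj_g

        def rate_of_heat_release_kw(m_dot_g_s: float, delta_h_c_kj_g: float) -> float:
            """RHR = burning rate (g/s) x effective heat of combustion (kJ/g), in kW."""
            return m_dot_g_s * delta_h_c_kj_g

        # Hypothetical pool fire in the open (no enclosure feedback):
        m_dot = burning_rate_g_s(q_flame=25.0, q_loss=3.0, area_m2=0.5, l_v_kj_g=1.6)
        print(f"burning rate: {m_dot:.1f} g/s, RHR: {rate_of_heat_release_kw(m_dot, 40.0):.0f} kW")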

        It should be noted that as a fire grows in size, not only does the rate of heat release increase, but the rate of production of “fire products” also increases. These contain toxic and noxious species as well as particulate smoke, the yields of which will increase when a fire developing in a building enclosure becomes underventilated.

        Ignition

        Ignition of a liquid or solid involves raising the surface temperature until vapours are being evolved at a rate sufficient to support a flame after the vapours have been ignited. Liquid fuels can be classified according to their flashpoints, the lowest temperature at which there is a flammable vapour/air mixture at the surface (i.e., the vapour pressure corresponds to the lower flammability limit). These can be measured using a standard apparatus, and typical examples are given in table 2. A slightly higher temperature is required to produce a sufficient flow of vapours to support a diffusion flame. This is known as the firepoint. For combustible solids, the same concepts are valid, but higher temperatures are required as chemical decomposition is involved. The firepoint is typically in excess of 300 °C, depending on the fuel. In general, flame-retarded materials have significantly higher firepoints (see Table 2).

        Table 2. Flashpoints and firepoints of liquid and solid fuels

         

        Fuel                               Closed cup flashpoint1 (°C)    Firepoint2 (°C)

        Gasoline (100 Octane) (l)          –38                            –
        n-Decane (l)                       46                             61.5
        n-Dodecane (l)                     74                             103
        Polymethylmethacrylate (s)         –                              310
        FR polymethylmethacrylate (s)      –                              377
        Polypropylene (s)                  –                              330
        FR polypropylene (s)               –                              397
        Polystyrene (s)                    –                              367
        FR polystyrene (s)                 –                              445

        l = liquid; s = solid.
        1 By Pensky-Martens closed cup apparatus.
        2 Liquids: by Cleveland open cup apparatus. Solids: Drysdale and Thomson (1994).
        (Note that the results for the flame-retarded species refer to a heat flux of 37 kW/m2).

         

Ease of ignition of a solid material is therefore dependent on the ease with which its surface temperature can be raised to the firepoint, e.g., by exposure to radiant heat or to a flow of hot gases. This is less dependent on the chemistry of the decomposition process than on the thickness and physical properties of the solid, namely, its thermal conductivity (k), density (ρ) and heat capacity (c). Thin solids, such as wood shavings (and all thin sections), can be ignited very easily because they have a low thermal mass, that is, relatively little heat is required to raise the temperature to the firepoint. However, when heat is transferred to the surface of a thick solid, some will be conducted from the surface into the body of the solid, thus moderating the temperature rise of the surface. It can be shown theoretically that the rate of rise of the surface temperature is determined by the thermal inertia of the material, that is, the product kρc. This is borne out in practice, since thick materials with a high thermal inertia (e.g., oak, solid polyurethane) will take a long time to ignite under a given heat flux, whereas under identical conditions thick materials with a low thermal inertia (e.g., fibre insulating board, polyurethane foam) will ignite quickly (Drysdale 1985).
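A standard approximation for thermally thick solids under a constant net heat flux (not derived in the text above, but consistent with the thermal-inertia argument) estimates the time to ignition as proportional to kρc[(Tig - T0)/q]2. The sketch below applies this approximation with illustrative property values, which are assumptions rather than measurements from this article, to show why a low-inertia foam ignites far more quickly than oak under the same flux.

```python
import math

def time_to_ignition(k, rho, c, T_ig, T_0, q_flux):
    """Approximate time to ignition (s) for a thermally thick solid
    under a constant net heat flux, using t ~ (pi/4)*k*rho*c*((T_ig - T_0)/q)**2.
    k in W/m.K, rho in kg/m3, c in J/kg.K, temperatures in degrees C, q_flux in W/m2.
    """
    return (math.pi / 4.0) * k * rho * c * ((T_ig - T_0) / q_flux) ** 2

# Illustrative property values (assumptions, not data from the article).
materials = {
    "oak (high thermal inertia)":      dict(k=0.17, rho=700.0, c=2400.0),
    "polyurethane foam (low inertia)": dict(k=0.03, rho=30.0,  c=1400.0),
}

for name, p in materials.items():
    t = time_to_ignition(p["k"], p["rho"], p["c"], T_ig=350.0, T_0=20.0, q_flux=30_000.0)
    print(f"{name}: about {t:.1f} s to ignition at 30 kW/m2")
```

With these figures the oak takes tens of seconds to reach its firepoint, while the foam does so in a fraction of a second, which mirrors the practical observation quoted above.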

        Ignition sources

Ignition is illustrated schematically in Figure 2 (piloted ignition). For successful ignition, an ignition source must be capable not only of raising the surface temperature to the firepoint or above, but also of igniting the vapours. An impinging flame will act in both capacities, but an imposed radiative flux from a remote source may lead to the evolution of vapours at a temperature above the firepoint without the vapours igniting. However, if the evolved vapours are hot enough (which requires the surface temperature to be much higher than the firepoint), they may ignite spontaneously as they mix with air. This process is known as spontaneous ignition.

        Figure 2. The scenario for piloted ignition.


        A large number of ignition sources can be identified, but they have one thing in common, which is that they are the result of some form of carelessness or inaction. A typical list would include naked flames, “smokers’ materials”, frictional heating, electrical devices (heaters, irons, cookers, etc.) and so on. An excellent survey may be found in Cote (1991). Some of these are summarized in table 3.

         


Table 3. Ignition sources

Ignition source                  Examples

Electrically powered equipment   Electric heaters, hair dryers, electric blankets, etc.
Open flame source                Match, cigarette lighter, blow torch, etc.
Gas-fuelled equipment            Gas fire, space heater, cooker, etc.
Other fuelled equipment          Wood stove, etc.
Lighted tobacco product          Cigar, pipe, etc.
Hot object                       Hot pipes, mechanical sparks, etc.
Exposure to heating              Adjacent fire, etc.
Spontaneous heating              Linseed oil-soaked rags, coal piles, etc.
Chemical reaction                Rare; e.g., potassium permanganate with glycerol

        It should be noted that smouldering cigarettes cannot initiate flaming combustion directly (even in common gaseous fuels), but can cause smouldering in materials which have the propensity to undergo this type of combustion. This is observed only with materials which char on heating. Smouldering involves the surface oxidation of the char, which generates enough heat locally to produce fresh char from adjacent unburnt fuel. It is a very slow process, but may eventually undergo a transition to flaming. Thereafter, the fire will develop very rapidly.

        Materials which have the propensity to smoulder can also exhibit the phenomenon of self-heating (Bowes 1984). This arises when such a material is stored in large quantities and in such a way that heat generated by slow surface oxidation cannot escape, leading to a rise in temperature within the mass. If the conditions are right, this can lead to a runaway process ultimately developing into a smouldering reaction at depth within the material.

        Flame spread

        A major component in the growth of any fire is the rate at which flame will spread over adjacent combustible surfaces. Flame spread can be modelled as an advancing ignition front in which the leading edge of the flame acts as an ignition source for the fuel that is not yet burning. The rate of spread is determined partly by the same material properties that control the ease of ignition and partly by the interaction between the existing flame and the surface ahead of the front. Upward, vertical spread is the most rapid as buoyancy ensures that the flames flow upwards, exposing the surface above the burning area to direct heat transfer from the flames. This should be contrasted with spread over a horizontal surface when the flames from the burning area rise vertically, away from the surface. Indeed, it is common experience that vertical spread is the most hazardous (e.g., flame spread on curtains and drapes and on loose clothing such as dresses and nightgowns).

        The rate of spread is also affected by an imposed radiant heat flux. In the development of a fire in a room, the area of the fire will grow more rapidly under the increasing level of radiation that builds up as the fire progresses. This will contribute to the acceleration of fire growth that is characteristic of flashover.

        Theory of Fire Extinguishment

        Fire extinction and suppression can be examined in terms of the above outline of the theory of fire. The gas phase combustion processes (i.e., the flame reactions) are very sensitive to chemical inhibitors. Some of the flame retardants used to improve the “fire properties” of materials rely on the fact that small amounts of inhibitor released with the fuel vapours will suppress the establishment of flame. The presence of a flame retardant cannot render a combustible material non-combustible, but it can make ignition more difficult—perhaps preventing ignition altogether provided that the ignition source is small. However, if a flame-retarded material becomes involved in an existing fire, it will burn as the high heat fluxes overwhelm the effect of the retardant.

        Extinction of a fire may be achieved in a number of ways:

        1.      stopping the supply of fuel vapours

        2.      quenching the flame by chemical extinguishers (inhibiting)

        3.      removing the supply of air (oxygen) to the fire (smothering)

        4.      “blow-out”.

        Controlling the flow of fuel vapours

        The first method, stopping the supply of fuel vapours, is clearly applicable to a gas-jet fire in which the supply of the fuel can simply be turned off. However, it is also the most common and safest method of extinguishing a fire involving condensed fuels. In the case of a fire involving a solid, this requires the fuel surface to be cooled below the firepoint, when the flow of vapours becomes too small to support a flame. This is achieved most effectively by the application of water, either manually or by means of an automatic system (sprinklers, water spray, etc.). In general, liquid fires cannot be dealt with in this manner: liquid fuels with low firepoints simply cannot be cooled sufficiently, while in the case of a high-firepoint fuel, vigorous vaporization of water when it comes into contact with the hot liquid at the surface can lead to burning fuel being ejected from the container. This can have very serious consequences for those fighting the fire. (There are some special cases in which an automatic high-pressure water-spray system may be designed to deal with the latter type of fire, but this is not common.)

Liquid fires are commonly extinguished by the use of fire-fighting foams (Cote 1991). The foam is produced by aspirating a foam concentrate into a stream of water which is then directed at the fire through a special nozzle that permits air to be entrained into the flow. The resulting foam floats on top of the liquid, reducing the rate of supply of fuel vapours by a blockage effect and by shielding the surface from heat transfer from the flames. The foam has to be applied carefully to form a “raft” which gradually increases in size to cover the liquid surface. The flames will decrease in size as the raft grows, and at the same time the foam will gradually break down, releasing water which aids the cooling of the surface. The mechanism is in fact complex, although the net result is to control the flow of vapours.

        There are a number of foam concentrates available, and it is important to choose one that is compatible with the liquids that are to be protected. The original “protein foams” were developed for hydrocarbon liquid fires, but break down rapidly if brought into contact with liquid fuels that are water soluble. A range of “synthetic foams” have been developed to tackle the entire range of liquid fires that may be encountered. One of these, aqueous film-forming foam (AFFF), is an all-purpose foam which also produces a film of water on the surface of the liquid fuel, thus increasing its effectiveness.

        Quenching the flame

This method makes use of chemical suppressants to extinguish the flame. The reactions which occur in the flame involve free radicals, highly reactive species which have only a fleeting existence but are continuously regenerated by a branched chain process that maintains concentrations high enough to allow the overall reaction (e.g., an R1-type reaction) to proceed at a fast rate. Chemical suppressants applied in sufficient quantity cause a dramatic fall in the concentration of these radicals, effectively quenching the flame. The most common agents that operate in this way are the halons and dry powders.

        Halons react in the flame to generate other intermediate species with which the flame radicals react preferentially. Relatively small amounts of the halons are required to extinguish a fire, and for this reason they were traditionally considered highly desirable; extinguishing concentrations are “breathable” (although the products generated while passing through the flame are noxious). Dry powders act in a similar fashion, but under certain circumstances are much more effective. Fine particles are dispersed into the flame and cause termination of the radical chains. It is important that the particles are small and numerous. This is achieved by the manufacturers of many proprietary brands of dry powders by selecting a powder that “decrepitates”, that is, the particles fragment into smaller particles when they are exposed to the high temperatures of the flame.

For a person whose clothing has caught fire, a dry powder extinguisher is recognized as the best means of controlling the flames and protecting the individual. Rapid intervention gives rapid “knockdown”, thus minimizing injury. However, the flame must be completely extinguished, because the particles quickly fall to the ground and any residual flaming will quickly regain hold. Similarly, halons remain effective only if the local concentrations are maintained. If applied out of doors, the halon vapour rapidly disperses, and once again the fire will rapidly re-establish itself if there is any residual flame. More significantly, the loss of the suppressant will be followed by re-ignition of the fuel if the surface temperatures are high enough. Neither halons nor dry powders have any significant cooling effect on the fuel surface.

        Removing the supply of air

The phrase “removing the supply of air” is an oversimplification. While removing the air will certainly cause a fire to be extinguished, it is only necessary to reduce the oxygen concentration below a critical level. The well-known “oxygen index test” classifies combustible materials according to the minimum oxygen concentration in an oxygen/nitrogen mixture that will just support flaming. Many common materials will burn at oxygen concentrations down to approximately 14% at ambient temperatures (ca. 20 °C) and in the absence of any imposed heat transfer. The critical concentration is temperature dependent, decreasing as the temperature increases. Thus, a fire that has been burning for some time will be capable of supporting flames at concentrations perhaps as low as 7%. A fire in a room may be held in check, and may even self-extinguish, if the supply of oxygen is limited by keeping doors and windows closed. Flaming may cease, but smouldering will continue at very much lower oxygen concentrations. Admission of air by opening a door or breaking a window before the room has cooled sufficiently can lead to a vigorous eruption of the fire, known as backdraught or backdraft.

        “Removal of air” is difficult to achieve. However, an atmosphere may be rendered “inert” by total flooding by means of a gas which will not support combustion, such as nitrogen, carbon dioxide or gases from a combustion process (e.g., a ship’s engines) which are low in oxygen and high in carbon dioxide. This technique can only be used in enclosed spaces as it is necessary to maintain the required concentration of the “inert gas” until either the fire has extinguished completely or fire-fighting operations can begin. Total flooding has special applications, such as for ships’ holds and rare book collections in libraries. The required minimum concentrations of the inert gases are shown in Table 4. These are based on the assumption that the fire is detected at an early stage and that the flooding is carried out before too much heat has accumulated in the space.

Table 4. Comparison of concentrations of different gases required for inerting

Agent             Minimum concentration (% volume)

Halon 1301        8.0
Halon 1211        8.1
Nitrogen
Carbon dioxide
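The inerting requirement can be related to the critical oxygen concentrations discussed above. The following sketch (in Python) is only an order-of-magnitude check, assuming perfect mixing and ignoring any chemical or cooling effect of the agent; it is not a substitute for the design concentrations specified for particular agents.

```python
def inert_gas_fraction_needed(critical_o2_pct, ambient_o2_pct=20.9):
    """Volume fraction of the final atmosphere that must consist of added
    inert gas to dilute oxygen below a critical concentration, assuming
    perfect mixing and no chemical effect of the agent:
        O2_final = ambient_o2_pct * (1 - f)  ->  f = 1 - critical/ambient
    """
    return 1.0 - critical_o2_pct / ambient_o2_pct

# Using the ~14% figure quoted above for many common materials at ~20 °C:
print(f"{inert_gas_fraction_needed(14.0):.0%}")  # roughly one-third of the atmosphere
# A well-established fire can support flames down to ~7% oxygen:
print(f"{inert_gas_fraction_needed(7.0):.0%}")
```

This illustrates why dilution-only agents such as nitrogen and carbon dioxide are needed in far larger volume fractions than the chemically acting halons listed in Table 4.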

         

        “Removal of air” can be effected in the immediate vicinity of a small fire by local application of a suppressant from an extinguisher. Carbon dioxide is the only gas that is used in this way. However, as this gas quickly disperses, it is essential to extinguish all flaming during the attack on the fire; otherwise, flaming will re-establish itself. Re-ignition is also possible because carbon dioxide has little if any cooling effect. It is worth noting that a fine water spray entrained into a flame can cause extinction as the combined result of evaporation of the droplets (which cools the burning zone) and reduction of the oxygen concentration by dilution by water vapour (which acts in the same way as carbon dioxide). Fine water sprays and mists are being considered as possible replacements for halons.

        It is appropriate to mention here that it is inadvisable to extinguish a gas flame unless the gas flow can be stopped immediately thereafter. Otherwise, a substantial volume of flammable gas may build up and subsequently ignite, with potentially serious consequences.

        Blow-out

        This method is included here for completeness. A match flame can easily be blown out by increasing the air velocity above a critical value in the vicinity of the flame. The mechanism operates by destabilizing the flame in the vicinity of the fuel. In principle, larger fires can be controlled in the same way, but explosive charges are normally required to generate sufficient velocities. Oil well fires can be extinguished in this manner.

        Finally, a common feature that needs to be emphasized is that the ease with which a fire can be extinguished decreases rapidly as the fire increases in size. Early detection permits extinction with minimal quantities of suppressant, with reduced losses. In choosing a suppressant system, one should take into account the potential rate of fire development and what type of detection system is available.

        Explosions

An explosion is characterized by the sudden release of energy, producing a shock wave, or blast wave, that may be capable of causing remote damage. There are two distinct types of source, namely, the high explosive and the pressure burst. The high explosive is typified by compounds such as trinitrotoluene (TNT) and cyclotrimethylenetrinitramine (RDX). These compounds are highly exothermic species, decomposing to release substantial quantities of energy. Although most are thermally stable (some are less so and require desensitization to make them safe to handle), they can be induced to detonate, with decomposition propagating through the solid at the velocity of sound. If the amount of energy released is high enough, a blast wave will propagate from the source with the potential to do significant damage at a distance.

        By assessing remote damage, one can estimate the size of the explosion in terms of “TNT equivalent” (normally in metric tons). This technique relies on the large amount of data that has been gathered on the damage potential of TNT (much of it during wartime), and uses empirical scaling laws which have been developed from studies of the damage caused by known quantities of TNT.
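One widely used form of such empirical scaling, often called cube-root (Hopkinson-Cranz) scaling, is not spelled out in the text above but illustrates the idea: blast effects are compared through a scaled distance Z = R/W^(1/3), where R is the distance from the charge in metres and W the TNT-equivalent mass in kilograms, and equal scaled distances are taken to correspond to broadly similar overpressures. A minimal sketch, with hypothetical figures:

```python
def scaled_distance(distance_m, tnt_equiv_kg):
    """Cube-root (Hopkinson-Cranz) scaled distance Z = R / W**(1/3), in m/kg**(1/3)."""
    return distance_m / tnt_equiv_kg ** (1.0 / 3.0)

# Illustrative comparison (hypothetical figures): the same scaled distance,
# and hence broadly similar blast damage, is reached about twice as far away
# when the TNT-equivalent mass is eight times larger.
print(scaled_distance(50.0, 1000.0))   # 50 m from a 1-tonne TNT equivalent
print(scaled_distance(100.0, 8000.0))  # 100 m from an 8-tonne TNT equivalent
```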

In peacetime, high explosives are used in a variety of activities, including mining, quarrying and major civil engineering works. Their presence on a site represents a particular hazard that requires specific management. However, the other source of “explosions” can be equally devastating, particularly if the hazard has not been recognized. Overpressures leading to pressure bursts can result from chemical processes within plants or from purely physical effects, as will occur if a vessel is heated externally, leading to overpressurization. The term BLEVE (boiling liquid expanding vapour explosion) has its origins here, referring originally to the failure of steam boilers. It is now also commonly used to describe the event in which a pressure vessel containing a liquefied gas such as LPG (liquefied petroleum gas) fails in a fire, releasing the flammable contents, which then ignite to produce a “fireball”.

On the other hand, the overpressure may be caused internally by a chemical process. In the process industries, self-heating can lead to a runaway reaction, generating high temperatures and pressures capable of causing a pressure burst. However, the most common type of explosion is caused by the ignition of a flammable gas/air mixture which is confined within an item of plant or indeed within any confining structure or enclosure. The prerequisite is the formation of a flammable mixture, an occurrence which should be avoided by good design and management. In the event of an accidental release, a flammable atmosphere will exist wherever the concentration of the gas (or vapour) lies between the lower and upper flammability limits (Table 1). If an ignition source is introduced to one of these regions, a premixed flame will propagate rapidly from the source, converting the fuel/air mixture into combustion products at an elevated temperature. This can be as high as 2,100 K, indicating that in a completely closed system initially at 300 K, an overpressure as high as 7 bars is possible. Only specially designed pressure vessels are capable of containing such overpressures. Ordinary buildings will fail unless protected by pressure relief panels or bursting discs or by an explosion suppression system. Should a flammable mixture form within a building, the subsequent explosion can cause significant structural damage—perhaps total destruction—unless the explosion can vent to the outside through openings (e.g., the failure of windows) created during the early stages of the explosion.
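The 7-bar figure follows from constant-volume heating of an (approximately) ideal gas, in which the absolute pressure scales with the absolute temperature. A one-line check, assuming an initial pressure of 1 bar and ignoring the change in the number of moles of gas:

```python
# Constant-volume heating of an ideal gas: P2 = P1 * T2/T1.
# Assumed initial pressure of 1 bar (not stated explicitly in the text).
P1, T1, T2 = 1.0, 300.0, 2100.0   # bar, K, K
print(P1 * T2 / T1)               # roughly a sevenfold rise in pressure
```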

        Explosions of this type are also associated with the ignition of dust suspensions in air (Palmer 1973). These are encountered when there is a substantial accumulation of “explosible” dust which is dislodged from shelves, rafters and ledges within a building to form a cloud, which is then exposed to an ignition source (e.g., in flour mills, grain elevators, etc.). The dust must (obviously) be combustible, but not all combustible dusts are explosible at ambient temperatures. Standard tests have been designed to determine whether a dust is explosible. These can also be used to illustrate that explosible dusts exhibit “explosibility limits”, similar in concept to the “flammability limits” of gases and vapours. In general, a dust explosion has the potential to do a great deal of damage because the initial event may cause more dust to be dislodged, forming an even greater dust cloud which will inevitably ignite, to produce an even greater explosion.

Explosion venting, or explosion relief, will only operate successfully if the rate of development of the explosion is relatively slow, such as that associated with the propagation of a premixed flame through a stationary flammable mixture or an explosible dust cloud. Explosion venting is of no use if detonation is involved. The reason for this is that the pressure relief openings have to be created at an early stage of the event, when the pressure is still relatively low. If a detonation occurs, the pressure rises too rapidly for relief to be effective, and the enclosing vessel or item of plant experiences very high internal pressures which will lead to massive destruction. Detonation of a flammable gas mixture can occur if the mixture is contained within a long pipe or duct. Under certain conditions, propagation of the premixed flame will push the unburnt gas ahead of the flame front at a rate that will increase turbulence, which in turn will increase the rate of propagation. This provides a feedback loop which will cause the flame to accelerate until a shock wave is formed. This, combined with the combustion process, is a detonation wave which can propagate at velocities well in excess of 1,000 m/s. This may be compared with the fundamental burning velocity of a stoichiometric propane/air mixture of 0.45 m/s (the rate at which a flame propagates through a quiescent, i.e., non-turbulent, propane/air mixture).

The importance of turbulence in the development of this type of explosion cannot be overstated. The successful operation of an explosion protection system relies on early venting or early suppression. If the rate of development of the explosion is too fast, the protection system will not be effective, and unacceptable overpressures can be produced.

        An alternative to explosion relief is explosion suppression. This type of protection requires that the explosion is detected at a very early stage, as close to ignition as possible. The detector is used to initiate the rapid release of a suppressant into the path of the propagating flame, effectively arresting the explosion before the pressure has increased to an extent at which the integrity of the enclosing boundaries is threatened. The halons have been commonly used for this purpose, but as these are being phased out, attention is now being paid to the use of high-pressure water-spray systems. This type of protection is very expensive and has limited application as it can only be used in relatively small volumes within which the suppressant can be distributed quickly and uniformly (e.g., ducts carrying flammable vapour or explosible dusts).

        Information Analysis for Fire Protection

In general terms, fire science has only recently developed to the stage at which it can provide the knowledge base on which rational decisions regarding engineering design, including safety issues, can be based. Traditionally, fire safety has developed on an ad hoc basis, effectively responding to incidents by imposing regulations or other restrictions to ensure that there will be no recurrence. Many examples could be quoted. For example, the Great Fire of London in 1666 led in due course to the establishment of the first building regulations (or codes) and the development of fire insurance. More recent incidents, such as the high-rise office block fires in São Paulo, Brazil, in 1972 and 1974, initiated changes to the building codes, framed in such a way as to prevent similar multiple-fatality fires in the future. Other problems have been addressed in a similar fashion. In California in the United States, the hazard associated with certain types of modern upholstered furniture (particularly those containing standard polyurethane foam) was recognized, and eventually strict regulations were introduced to control its availability.

        These are simple cases in which observations of the consequences of fire have led to the imposition of a set of rules intended to improve the safety of the individual and the community in the event of fire. The decision for action on any issue has to be justified on the basis of an analysis of our knowledge of fire incidents. It is necessary to show that the problem is real. In some cases—such as the São Paulo fires—this exercise is academic, but in others, such as “proving” that modern furnishings are a problem, it is necessary to ensure that the associated costs are wisely spent. This requires a reliable database on fire incidents which over a number of years is capable of showing trends in the number of fires, the number of fatalities, the incidence of a particular type of ignition, etc. Statistical techniques can then be used to examine whether a trend, or a change, is significant, and appropriate measures taken.

In a number of countries, the fire brigade is required to submit a report on each fire attended. In the United Kingdom and the United States, the officer in charge completes a report form which is then submitted to a central organization (the Home Office in the United Kingdom, the National Fire Protection Association, NFPA, in the United States), which codes and processes the data in a prescribed fashion. The data are then available for inspection by government bodies and other interested parties. These databases are invaluable in highlighting (for example) the principal sources of ignition and the items first ignited. An examination of the incidence of fatalities and their relationship to sources of ignition has shown that the number of people who die in fires started by smokers’ materials is significantly out of proportion to the number of fires which originate in this way.

        The reliability of these databases depends on the skill with which the fire officers carry out the fire investigation. Fire investigation is not an easy task, and requires considerable ability and knowledge—in particular, a knowledge of fire science. The Fire Service in the United Kingdom has a statutory duty to submit a fire report form for every fire attended, which places a considerable responsibility on the officer in charge. The construction of the form is crucial, as it must elicit the required information in sufficient detail. The “Basic Incident Report Form” recommended by the NFPA is shown in the Fire Protection Handbook (Cote 1991).

        The data can be used in two ways, either to identify a fire problem or to provide the rational argument necessary to justify a particular course of action that may require public or private expenditure. A long-established database can be used to show the effects of actions taken. The following ten points have been gleaned from NFPA statistics over the period 1980 to 1989 (Cote 1991):

1.      Home smoke detectors are widely used and very effective (but significant gaps in the detector strategy remain).

2.      Automatic sprinklers produce large reductions in loss of life and property.

3.      Increased use of portable and area heating equipment sharply increased home fires involving heating equipment.

4.      Incendiary and suspicious fires continued to decline from the 1970s peak, but associated property damage stopped declining.

5.      A large share of fire-fighter fatalities are attributed to heart attacks and to activities away from the fireground.

6.      Rural areas have the highest fire death rates.

7.      Smoking materials igniting upholstered furniture, mattresses or bedding produce the most deadly residential fire scenarios.

8.      US and Canadian fire death rates are amongst the highest of all the developed countries.

9.      The states of the Old South in the United States have the highest fire death rates.

10.     Older adults are at particularly high risk of death in fire.

         

        Such conclusions are, of course, country-specific, although there are some common trends. Careful use of such data can provide the means of formulating sound policies regarding fire safety in the community. However, it must be remembered that these are inevitably “reactive”, rather than “proactive”. Proactive measures can only be introduced following a detailed fire hazard assessment. Such a course of action has been introduced progressively, starting in the nuclear industry and moving into the chemical, petrochemical and offshore industries where the risks are much more easily defined than in other industries. Their application to hotels and public buildings generally is much more difficult and requires the application of fire modelling techniques to predict the course of a fire and how the fire products will spread through the building to affect the occupants. Major advances have been made in this type of modelling, although it must be said that there is a long way to go before these techniques can be used with confidence. Fire safety engineering is still in need of much basic research in fire safety science before reliable fire hazard assessment tools can be made widely available.

         


        Thursday, 24 March 2011 18:09

        Standards and Regulations

        Terms

        In the field of occupational noise, the terms regulation, standard, and legislation are often used interchangeably, even though technically they may have slightly different meanings. A standard is a codified set of rules or guidelines, much like a regulation, but it can be developed under the auspices of a consensus group, such as the International Organization for Standardization (ISO). Legislation consists of laws prescribed by legislating authorities or by local governing bodies.

Many national standards are referred to as legislation, and some official bodies use the terms standards and regulations as well. The Council of the European Communities (CEC) issues Directives. All members of the European Community were required to “harmonize” their noise standards (regulations or legislation) with the 1986 EEC Directive on occupational noise exposure by the year 1990 (CEC 1986). This means that the noise standards and regulations of the member countries had to be at least as protective as the EEC Directive. In the United States, a regulation is a rule or order prescribed by a government authority and is usually more formal in nature than a standard.

        Some nations have a code of practice, which is somewhat less formal. For example, the Australian national standard for occupational exposure to noise consists of two short paragraphs setting forth mandatory rules, followed by a 35-page code of practice which provides practical guidance on how the standard should be implemented. Codes of practice usually do not have the legal force of regulations or legislation.

        Another term that is used occasionally is recommendation, which is more like a guideline than a mandatory rule and is not enforceable. In this article, the term standard will be used generically to represent noise standards of all degrees of formality.

        Consensus Standards

One of the most widely used noise standards is ISO 1999, Acoustics: Determination of Occupational Noise Exposure and Estimate of Noise-Induced Hearing Impairment (ISO 1990). This international consensus standard, a revision of an earlier, less detailed version, can be used to predict the amount of hearing loss expected to occur in various centiles of the exposed population at various audiometric frequencies as a function of exposure level and duration, age and sex.
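The standard’s detailed tables and formulas are beyond the scope of this article, but its general approach can be illustrated with a minimal sketch (in Python). The combination rule shown is the one commonly quoted from ISO 1999; the input values are placeholders, not figures taken from the standard’s tables.

```python
def combined_threshold(age_related_db, nipts_db):
    """Combine an age-associated hearing threshold level with a
    noise-induced permanent threshold shift (both in dB), using the
    compression rule commonly cited from ISO 1999:
        H' = H + N - H*N/120
    In the standard itself, both inputs are supplied as functions of
    age, sex, exposure level, exposure duration and population centile.
    """
    return age_related_db + nipts_db - (age_related_db * nipts_db) / 120.0

# Illustrative placeholder values only.
print(combined_threshold(age_related_db=15.0, nipts_db=20.0))  # -> 32.5 dB
```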

        The ISO is currently very active in the area of noise standardization. Its technical committee TC43, “Acoustics”, is working on a standard to evaluate the effectiveness of hearing conservation programmes. According to von Gierke (1993), TC43’s Subcommittee 1 (SC1) has 21 working groups, some of which are considering more than three standards each. TC43/SC1 has issued 58 noise-related standards and 63 additional standards are in a state of revision or preparation (von Gierke 1993).

        Damage-Risk Criteria

        The term damage-risk criteria refers to the risk of hearing impairment from various levels of noise. Many factors enter into the development of these criteria and standards in addition to the data describing the amount of hearing loss resulting from a certain amount of noise exposure. There are both technical and policy considerations.

        The following questions are good examples of policy considerations: What proportion of the noise-exposed population should be protected, and how much hearing loss constitutes an acceptable risk? Should we protect even the most sensitive members of the exposed population against any loss of hearing? Or should we protect only against a compensable hearing handicap? It amounts to a question of which hearing loss formula to use, and different governmental bodies have varied widely in their selections.

        In earlier years, regulatory decisions were made that allowed substantial amounts of hearing loss as an acceptable risk. The most common definition used to be an average hearing threshold level (or “low fence”) of 25 dB or greater at the audiometric frequencies 500, 1,000, and 2,000 Hz. Since that time, the definitions of “hearing impairment” or “hearing handicap” have become more restrictive, with different nations or consensus groups advocating different definitions. For example, certain US government agencies now use 25 dB at 1,000, 2,000, and 3,000 Hz. Other definitions may incorporate a low fence of 20 or 25 dB at 1,000, 2,000, and 4,000 Hz, and may include a broader range of frequencies.

        In general, as definitions include higher frequencies and lower “fences” or hearing threshold levels, the acceptable risk becomes more stringent and a higher percentage of the exposed population will appear to be at risk from given levels of noise. If there is to be no risk of any hearing loss from noise exposure, even in the more sensitive members of the exposed population, the permissible exposure limit would have to be as low as 75 dBA. In fact, the EEC Directive has established an equivalent level (Leq) of 75 dBA as the level at which the risk is negligible, and this level has also been put forward as a goal for Swedish production facilities (Kihlman 1992).

        Overall, the prevailing thought on this subject is that it is acceptable for a noise-exposed workforce to lose some hearing, but not too much. As for how much is too much, there is no consensus at this time. In all probability, most nations draft standards and regulations in an attempt to keep the risk at a minimum level while taking technical and economic feasibility into account, but without coming to consensus on such matters as the frequencies, fence, or percentage of the population to be protected.

        Presenting the Damage-Risk Criteria

        Criteria for noise-induced hearing loss may be presented in either of two ways: noise-induced permanent threshold shift (NIPTS) or percentage risk. NIPTS is the amount of permanent threshold shift remaining in a population after subtracting the threshold shift that would occur “normally” from causes other than occupational noise. The percentage risk is the percentage of a population with a certain amount of noise-induced hearing impairment after subtracting the percentage of a similar population not exposed to occupational noise. This concept is sometimes called excess risk. Unfortunately, neither method is without problems.

        The trouble with using NIPTS alone is that it is difficult to summarize the effects of noise on hearing. The data are usually set out in a large table showing noise-induced threshold shift for each audiometric frequency as a function of noise level, years of exposure and population centile. The concept of percentage risk is more attractive because it uses single numbers and appears easy to understand. But the trouble with percentage risk is that it can vary enormously depending on a number of factors, particularly the height of the hearing threshold level fence and the frequencies used to define hearing impairment (or handicap).

        With both methods, the user needs to be sure that the exposed and non-exposed populations are carefully matched for such factors as age and non-occupational noise exposure.

        National Noise Standards

        Table 1 gives some of the main features of the noise exposure standards of several nations. Most of the information is current as of this publication, but some standards may have been recently revised. Readers are advised to consult the newest versions of the national standards.

        Table 1. Permissible exposure limits (PEL), exchange rates, and other requirements for noise exposure according to nation

Nation, date          | PEL Lav., 8-hour, dBAa | Exchange rate, dBAb | Lmax rms | Lpeak SPL   | Level dBA, engineering controlc | Level dBA, audiometric testc
Argentina             | 90                     | 3                   | 110 dBA  |             |                                 |
Australia,1 1993      | 85                     | 3                   |          | 140 dB peak | 85                              | 85
Brazil, 1992          | 85                     | 5                   | 115 dBA  | 140 dB peak | 85                              |
Canada,2 1990         | 87                     | 3                   |          |             | 87                              | 84
CEC,3, 4 1986         | 85                     | 3                   |          | 140 dB peak | 90                              | 85
Chile                 | 85                     | 5                   | 115 dBA  | 140 dB      |                                 |
China,5 1985          | 70-90                  | 3                   | 115 dBA  |             |                                 |
Finland, 1982         | 85                     | 3                   |          |             | 85                              |
France, 1990          | 85                     | 3                   |          | 135 dB peak |                                 | 85
Germany,3, 6 1990     | 85; 55, 70             | 3                   |          | 140 dB peak | 90                              | 85
Hungary               | 85                     | 3                   | 125 dBA  | 140 dB peak | 90                              |
India,7 1989          | 90                     |                     | 115 dBA  | 140 dBA     |                                 |
Israel, 1984          | 85                     | 5                   | 115 dBA  | 140 dB peak |                                 |
Italy, 1990           | 85                     | 3                   |          | 140 dB peak | 90                              | 85
Netherlands,8 1987    | 80                     | 3                   |          | 140 dB peak | 85                              |
New Zealand,9 1981    | 85                     | 3                   | 115 dBA  | 140 dB peak |                                 |
Norway,10 1982        | 85; 55, 70             | 3                   | 110 dBA  |             |                                 | 80
Spain, 1989           | 85                     | 3                   |          | 140 dB peak | 90                              | 80
Sweden, 1992          | 85                     | 3                   | 115 dBA  | 140 dB C    | 85                              | 85
United Kingdom, 1989  | 85                     | 3                   |          | 140 dB peak | 90                              | 85
United States,11 1983 | 90                     | 5                   | 115 dBA  | 140 dB peak | 90                              | 85
Uruguay               | 90                     | 3                   | 110 dBA  |             |                                 |

        a PEL = Permissible exposure limit.

        b Exchange rate. Sometimes called the doubling rate or time/intensity trading ratio, this is the amount of change in noise level (in dB) allowed for each halving or doubling of exposure duration.

c Like the PEL, the levels that trigger the requirements for engineering controls and audiometric testing are presumably also average levels.

        Sources: Arenas 1995; Gunn; Embleton 1994; ILO 1994. Published standards of various nations have been further consulted.


Notes to Table 1.

        1 Levels for engineering controls, hearing tests, and other elements of the hearing conservation programme are defined in a code of practice.

        2 There is some variation among the individual Canadian provinces: Ontario, Quebec and New Brunswick use 90 dBA with a 5-dB exchange rate; Alberta, Nova Scotia and Newfoundland use 85 dBA with a 5-dB exchange rate; and British Columbia uses 90 dBA with a 3-dB exchange rate. All require engineering controls to the level of the PEL. Manitoba requires certain hearing conservation practices above 80 dBA, hearing protectors and training on request above 85 dBA, and engineering controls above 90 dBA.

3 The Council of the European Communities (86/188/EEC) and Germany (UVV Lärm-1990) state that it is not possible to give a precise limit for the elimination of hearing hazards and the risk of other health impairments from noise. Therefore the employer is obliged to reduce the noise level as far as possible, taking technical progress and the availability of control measures into account. Other EC nations may have adopted this approach as well.

        4 Those countries comprised by the European Community were required to have standards that at least conformed to the EEC Directive by January 1, 1990.

        5 China requires different levels for different activities: e.g., 70 dBA for precision assembly lines, processing workshops and computer rooms; 75 dBA for duty, observation and rest rooms; 85 dBA for new workshops; and 90 dBA for existing workshops.

        6 Germany also has noise standards of 55 dBA for mentally stressful tasks and 70 dBA for mechanized office work.

        7 Recommendation.

        8 The Netherlands’ noise legislation requires engineering noise control at 85 dBA “unless this cannot be reasonably demanded”. Hearing protection must be provided above 80 dBA and workers are required to wear it at levels above 90 dBA.

        9 New Zealand requires a maximum of 82 dBA for a 16-hour exposure. Ear muffs must be worn in noise levels exceeding 115 dBA.

10 Norway requires a PEL of 55 dBA for work requiring a large amount of mental concentration, 70 dBA for work requiring verbal communication or great accuracy and attention, and 85 dBA for other noisy work settings (see the PEL column in Table 1). Recommended limits are 10 dB lower. Workers exposed to noise levels greater than 85 dBA should wear hearing protectors.

        11 These levels apply to the OSHA noise standard, covering workers in general industry and the maritime trades. The US military services require standards that are somewhat more stringent. The US Air Force and the US Army both use an 85-dBA PEL and a 3-dB exchange rate.


Table 1 clearly shows the trend of most nations towards a permissible exposure limit (PEL) of 85 dBA, whereas about half of the standards still use 90 dBA for compliance with requirements for engineering controls, as allowed by the EEC Directive. Most of the nations listed have adopted the 3-dB exchange rate; the exceptions are Israel, Brazil and Chile, which use the 5-dB rule with an 85-dBA criterion level, and the United States (in the civilian sector), although both the US Army and the US Air Force have adopted the 3-dB rule.
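The practical effect of the choice of exchange rate can be seen by computing permitted exposure durations. The sketch below (in Python) uses the commonly applied relation T = 8 / 2^((L − PEL)/ER) hours, with the PEL and exchange rate treated as parameters; the pairings shown are illustrative and are not a statement of any particular nation’s legal requirement.

```python
def permitted_hours(level_dba, pel_dba=85.0, exchange_rate_db=3.0):
    """Permitted daily exposure duration (hours) for a steady noise level,
    using the commonly applied relation T = 8 / 2**((L - PEL) / ER).
    PEL and exchange rate are parameters, not any specific legal limit.
    """
    return 8.0 / 2.0 ** ((level_dba - pel_dba) / exchange_rate_db)

for level in (85, 88, 91, 94, 100):
    t3 = permitted_hours(level, pel_dba=85, exchange_rate_db=3)
    t5 = permitted_hours(level, pel_dba=90, exchange_rate_db=5)
    print(f"{level} dBA: {t3:.2f} h under an 85-dBA/3-dB rule, "
          f"{t5:.2f} h under a 90-dBA/5-dB rule")
```

At high levels the two conventions diverge sharply: at 100 dBA the 85-dBA/3-dB rule permits 15 minutes, while the 90-dBA/5-dB rule permits 2 hours.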

        In addition to their requirements to protect workers against hearing loss, several nations include provisions for preventing other adverse effects of noise. Some nations state the need to protect against the extra-auditory effects of noise in their regulations. Both the EEC Directive and the German standard acknowledge that workplace noise involves a risk for the health and safety of workers beyond hearing loss, but that current scientific knowledge of the extra-auditory effects does not enable precise safe levels to be set.

        The Norwegian standard includes a requirement that noise levels must not exceed 70 dBA in work settings where speech communication is necessary. The German standard advocates noise reduction for the prevention of accident risks, and both Norway and Germany require a maximum noise level of 55 dBA to enhance concentration and prevent stress during mental tasks.

Some countries have special noise standards for different kinds of workplaces. For example, Finland and the United States have noise standards for motor vehicle cabs, while Germany and Japan specify noise levels for offices. Others include noise as one of many regulated hazards in a particular process. Still other standards apply to specific types of equipment or machines, such as air compressors, chain saws and construction equipment.

In addition, some nations have promulgated separate standards for hearing protection devices (such as the EEC Directive, the Netherlands and Norway) and for hearing conservation programmes (such as France, Norway, Spain, Sweden and the United States).

        Some nations use innovative approaches to attack the occupational noise problem. For example, the Netherlands has a separate standard for newly constructed workplaces, and Australia and Norway give information to employers for instructing manufacturers in the provision of quieter equipment.

        There is little information about the degree to which these standards and regulations are enforced. Some specify that employers “should” take certain actions (as in codes of practice or guidelines), while most specify that employers “shall”. Standards that use “shall” are more apt to be mandatory, but individual nations vary widely in their ability and inclination to secure enforcement. Even within the same nation, enforcement of occupational noise standards may vary considerably with the government in power.

         
