
Risk Management Framework: Step 3 - University of Waterloo (irr.uwaterloo.ca/pdf_files/primer/estimation.pdf)


Risk Management Framework: Step 3

RISK ESTIMATION

Risk Estimation is the step in the risk management framework that attempts to predict the degree of health risk arising from exposure to a specified level of an environmental contaminant. The risk estimation step consists of three types of activities:

• dose-response assessment

• exposure assessment

• risk characterization

The dose-response assessment and exposure assessment are usually carried out as separate activities, since the former is largely determined by laboratory experiments, while the latter relies on observational studies such as environmental surveys and sampling in the community. The risk characterization combines the most critical findings obtained from dose-response assessment and exposure assessment as the final summation of the risk estimation step.

Dose-response Assessment

Both intuition and experience suggest that ingesting small quantities of a drug or poison will likely produce a lesser biological response than would the consumption of larger quantities. Likewise, we might suspect that minute exposures to certain types of toxic substances would produce little or no harmful effect, because it is well known that the body can sometimes tolerate very small amounts of chemicals through its natural capacity for detoxification or physiological adaptation. To paraphrase a traditional toxicological maxim first coined by the Swiss physician Paracelsus in the 16th century: "The dose makes the poison." These ideas can be made more informative for risk managers by quantifying the frequency of adverse health effects in a group of test animals or in a population of humans exposed to measured levels of a toxic substance under well-defined conditions.

Evidence of hazard and estimation of risk constitute two distinct and separate dimensions of the risk assessment puzzle. Evidence of hazard is usually assessed in the risk identification step before proceeding to the risk estimation step. However, strong evidence of a potential environmental hazard should not necessarily be equated with a high degree of health risk, since the toxicologic potency of a contaminant may be weak or its environmental exposure levels may be very low. Conversely, weak or inconclusive evidence of hazard in risk identification findings should not be seen as grounds to automatically defer the risk estimation step, if the magnitude of possible harmful effects is great. For example, many environmental contaminants with relatively weak evidence of human carcinogenicity, such as dioxins and PCBs, exhibit very strong carcinogenic effects in animal dose-response experiments. We cannot be certain that trace amounts of these dioxins and PCBs are a significant hazard to humans, but we do know that their toxic potency in birds and wildlife is often very great, and humans may be similarly vulnerable.

[Figure: Risk management framework cycle - Initiation, Preliminary Analysis, Risk Estimation, Risk Evaluation, Risk Control, Action/Monitoring, with Risk Communication connecting all steps]

Dose-response relationships

Such a quantitative dose-response relationship can be defined through experimental studies on rats or mice, by administering increasing doses of a purified substance (or a defined mixture of several substances) to several groups of animals, and subsequently measuring the percentage of animals exhibiting pathological changes or disease symptoms at each dose level. The results of such studies define the dose-response relationship for each substance across a wide range of dose levels, from high doses where pathological changes are frequent occurrences to low doses where changes are infrequent or absent.

Epidemiological studies in human populations, which are observational rather than experimental, attempt to measure the frequency of disease incidence arising from the unintentional exposure of groups of people to varying amounts of environmental contaminants. These studies must contend with the uncontrolled nature of everyday human existence, which may affect disease outcomes independent of the toxic exposures themselves. As well, the exact level of exposure to an environmental contaminant over time is difficult to measure in large numbers of persons, and often it is necessary to resort to indirect measures or approximations to estimate the dose incurred by each person. Under such circumstances, epidemiologists will often resort to stratification of exposure levels into several groups, such as high-, medium-, and low-exposed individuals. The relative disease frequency in each stratified exposure group can then be ascertained to determine the human exposure-response relationship for a given contaminant. Where adequate data on heavily exposed groups of individuals are available, human studies may play a decisive primary role in risk estimation. Otherwise, they remain of considerable usefulness as supporting or confirmatory evidence for risk estimates obtained from animal studies.

In rare instances, there may be large numbers of humans exposed to sufficiently high levels of toxic contaminants to permit the direct estimation of an exposure-response relationship from epidemiological studies. However, it is more common for environmental epidemiology studies to encounter situations where there are too few subjects, relatively low exposure levels, and insufficient measures of individual exposure to support the development of risk control measures based exclusively on human data. Despite their important role in risk identification, epidemiological studies have therefore played a relatively smaller role in characterizing the dose-response relationship necessary for risk estimation.
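As a rough illustration of the stratified approach described above, the sketch below computes disease frequency and relative risk for each exposure group. All of the counts are invented for illustration and are not drawn from any real survey.

```python
# Hypothetical sketch: deriving an exposure-response relationship from
# stratified epidemiological data. Counts below are invented assumptions.

strata = {
    "low":    {"cases": 4,  "population": 10_000},
    "medium": {"cases": 12, "population": 10_000},
    "high":   {"cases": 35, "population": 10_000},
}

# Disease frequency (incidence proportion) in each exposure stratum.
incidence = {name: s["cases"] / s["population"] for name, s in strata.items()}

# Relative risk of each stratum compared with the low-exposure group,
# a common way of expressing the exposure-response relationship.
baseline = incidence["low"]
relative_risk = {name: rate / baseline for name, rate in incidence.items()}

for name in ("low", "medium", "high"):
    print(f"{name:>6}: incidence={incidence[name]:.4f}  RR={relative_risk[name]:.2f}")
```

A rising relative risk across strata, as in this toy table, is the pattern an epidemiologist would read as evidence of an exposure-response relationship.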

The range of potencies exhibited by different toxic substances, most notably carcinogens, is extremely broad, varying by a factor of a million or more between a "weak" carcinogen such as the common solvent trichloroethylene, and a "strong" carcinogen such as the rare food contaminant aflatoxin-B1. Without risk estimation, it would be impossible to resolve the question of whether a common, but weak, carcinogen such as trichloroethylene presents greater population health risk than a rare, but potent, carcinogen such as aflatoxin.

As unique biological organisms, humans are best represented for exposure-response measurements by members of their own species, rather than rats or mice. Despite frequent inaccuracies and imprecision in exposure measures, human exposure-response studies are valuable in several respects. The range of contaminant exposures in human groups under study is usually relevant to the exposures experienced by other portions of the general population. As well, the everyday living conditions that humans experience will likely be well represented in the groups under study.


It is more common to rely exclusively, or to a large extent, on dose-response measures obtained from experimental animal studies. In long-term toxicity studies of several months' or years' duration, the lower end of the dose range is considered of greatest importance for risk estimation, because human populations are most likely to be exposed to low concentrations of environmental contaminants over prolonged periods of time. Unfortunately, the lower end of the dose-response curve is generally the most difficult to observe and quantify in animal studies, because only a limited number of animals are available in each dose group for a given dose-response study. Due to the cost and complexity of maintaining test animals for prolonged periods under well-defined laboratory conditions, there are only a limited number of dose groups which can be tested, usually three to five groups in a single study, with about 50 to 100 animals assigned to each dose group.

While an environmental contaminant may produce a high rate of toxic abnormalities in the more heavily-dosed groups of animals, those animals in the low-dose groups will experience a much lower frequency of pathological responses, with the lowest dose groups commonly exhibiting very few or no abnormalities throughout the course of the experiment. Whenever the observed response frequency for a particular dose group is small, say, less than 15% of a dose group comprising 50 animals, the statistical reliability of the response rate diminishes to the point where its accuracy is insufficient to allow direct dose-response measurements. To complicate things more, a certain proportion of test animals at any dose level may develop "spontaneous" disease conditions from factors unrelated to the contaminant under study, which further obscures the underlying dose-related effects.

Therefore, for practical reasons that cannot be overcome in conventional dose-response studies, the opportunity to measure the rate of pathological responses for groups of animals in the low dose range is seriously limited, and often impossible.
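The statistical limitation just described can be sketched numerically with the binomial standard error of an observed response rate. The group size and response counts below are illustrative assumptions, not data from any actual study.

```python
# Sketch of why low response rates in small dose groups are statistically
# unreliable: the binomial standard error grows large relative to the
# observed rate itself. Group sizes and counts are invented assumptions.
import math

def binomial_se(p, n):
    """Standard error of an observed response proportion p in a group of n animals."""
    return math.sqrt(p * (1 - p) / n)

n = 50  # a typical dose group of 50 animals

for responders in (25, 7, 2):
    p = responders / n
    se = binomial_se(p, n)
    # Relative uncertainty: how large the standard error is compared to the rate.
    print(f"{responders}/{n} responders: rate={p:.2f}, SE={se:.3f}, SE/rate={se / p:.0%}")
```

As the response rate falls toward the low-dose region, the relative uncertainty of the measured rate climbs steeply, which is exactly why direct low-dose measurement fails.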

Dose-response modeling

It is usually necessary to rely on animal dose-response measurements taken within the range of observed responses, which corresponds to the high-dose region of the curve. The low-dose region of the curve corresponds to the range of unobserved responses, where accurate measures of response rates remain unobtainable due to the statistical unreliability of the infrequent responses. In order to bridge the gap in the low-dose region of the dose-response curve, risk assessors must resort to dose-response modeling, choosing one of two alternative approaches commonly employed to characterize dose-response behaviour: (1) low-dose extrapolation; (2) uncertainty factors.

An almost routine reliance on the non-threshold linearity assumption and the upper-bound risk estimate demonstrates the strong bias towards conservatism that is typically built into the risk assessment process by the inclusion of multiple pessimistic assumptions. For potentially deadly contaminants, such as carcinogens and mutagens, such conservative assumptions are probably defensible, but only if they are understood to represent precaution, and not necessarily reality. It is the responsibility of risk assessors to make explicit all such underlying assumptions in reporting their conclusions about risk, and the responsibility of risk managers and concerned stakeholders to realize the limitations and biases that such risk estimation procedures entail. It needs to be clearly stated whether the risk estimate used for supporting a regulatory decision is based on the best fit or on the upper confidence limit of the dose-response curve. For example, when assessing the risk of environmental carcinogens the U.S. EPA normally employs the upper confidence limit for deriving a carcinogenic slope factor, while Health Canada generally employs the best fit of the dose-response curve to derive a measure of carcinogenic risk called the Tumourigenic Dose-05 (TD05).

The advantages of animal dose-response studies include the ability to precisely specify the amount and frequency of the administered doses, the wide range of dose levels that can be applied, the homogeneity of living conditions and genetic characteristics of the test animals, and the availability of test animals for comprehensive pathologic evaluation of dose-related effects.


1. low-dose extrapolation

The low-dose extrapolation approach for dose-response assessment relies on the fundamental concept of proportionality in dose-response relationships. Proportionality refers to the idea that higher doses will likely produce a correspondingly greater frequency of pathological responses, while lower doses will likely produce a proportionately lower response rate for a given toxic contaminant. The question to be addressed by the risk assessor is the quantitative nature of the proportionality relationship, and the range of doses over which the relationship holds true. If the relationship between dose and response follows the same proportionality across a wide range of doses, it should be possible to predict with confidence the probable incidence of toxic responses for any given dose, including very low doses, based on the calculated dose-response relationship for the more easily measured high doses.

For many, but not all, dose-response relationships, we may assume that the observed frequency of toxic responses is directly proportional to the size of the administered doses, so that any dose above zero can produce a small but real probability of some response in the exposed population. The simplest of these proportional relationships is referred to by risk assessors as a linear non-threshold relationship, where a doubling of the original dose would be expected to result in a doubling of the response frequency, a halving of the original dose would produce a halving of the response frequency, and so on down the dose ladder all the way to zero dose. In many instances, however, the dose-response data may not follow a simple straight-line relationship across all of the measured dose levels, a situation which indicates the presence of a more complex non-linear dose-response relationship. There are several possible families of non-linear dose-response curves:

• sublinear

the slope of the dose-response curve becomes steeper with increasing dose, so that the estimated risk at low doses is less than would be expected from the observed dose-response relationship at higher doses

• supralinear

the slope of the dose-response curve becomes less steep with increasing dose, so that the estimated risk at low doses is greater than would be expected from the observed dose-response relationship at higher doses

• threshold

the dose-response curve intercepts the horizontal (dose) axis at a dose level greater than zero, so that doses smaller than the threshold level would be expected to produce no toxic responses; whenever it appears that a threshold does not exist or cannot be reliably estimated, the substance is considered to follow a non-threshold dose-response relationship.

Extrapolation in Everyday Life

We are all familiar with the idea of simple linear relationships for everyday quantities in our lives. For example, paying our monthly credit card interest charges represents a linear "deterministic" process. We have to pay the interest owing to the bank; the question is how much interest repayment will be required for a specified amount of outstanding monthly balance on our card. In other words, we need to know the interest rate, so that we can predict by how much our interest charges will diminish, in a linear fashion, as the outstanding balance is paid off. Chance plays no role in this outcome. For those of us who are more adventurous, we also know that our chances of winning the grand prize in the lottery are related in a linear fashion to the number of lottery tickets we buy. The more tickets we buy, the greater our chances of winning. Since winning the grand prize is naturally a rare event, this linear relationship is based on a "stochastic" process, where we are concerned about estimating our chances of winning, as well as the size of the prize.

Both of the above examples of a linear relationship will hold true across a very wide range of possible scenarios. Even a small amount of outstanding principal will require small but real interest payments to the bank, and even a single lottery ticket has a small but real chance of winning the grand prize. In other instances, however, we know that the correspondence between cause and effect can become more complicated, so that the proportions change as the numbers involved get smaller, behaving according to a more complex non-linear relationship. For instance, the income tax rates in Canada vary depending on the amount of net taxable income we declare on our tax return, so that three different taxation rates must be calculated for portions of our income above certain cutoff levels. In addition, income below a certain minimum value, or threshold, is not taxed at all, so the tax liability is zero in the range of very low incomes.

In practice, by relying solely on a limited number of observed data points in animal experiments, it is difficult to determine exactly which of the different types of possible dose-response curves best describes the dose-response relationship for a given substance. Moreover, since the data points are measured exclusively in the high-dose region of the curve, it is impossible to observe the shape of the curve in the low-dose region. Because the low-dose region is the area of the dose-response curve that must be used to estimate the amount of health risk at low exposure levels, the curve must be plotted using mathematical extrapolation methods.

Extrapolation refers to the use of a mathematical formula to predict the numerical relationship between an independent variable (dose) and a dependent variable (response) outside of the range of observable values for the dependent variable. It is possible for statisticians to devise an array of different mathematical models to characterize the dose-response relationship. Having fitted the dose-response curve to the observed data points using a particular mathematical model, the expected dose-response relationship for the low-dose region can then be plotted by low-dose extrapolation, using the mathematical model to compute the predicted shape and slope of the resulting curve. Based on the curve predicted by a specific extrapolation model, it is possible to estimate the amount of risk corresponding to a specified dose level across the entire range of unobserved responses.
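As a minimal sketch of this procedure, the fragment below fits a simple one-hit model, P(d) = 1 - exp(-q*d), to a single invented high-dose observation and then extrapolates into the low-dose region. Real assessments fit several dose groups simultaneously; the dose and response values here are assumptions for illustration only.

```python
# Hedged sketch of low-dose extrapolation with the one-hit model
# P(d) = 1 - exp(-q*d). The observed point below is invented.
import math

# Assumed observation: 30% response at an administered dose of 100 mg/kg/day.
observed_dose = 100.0
observed_response = 0.30

# Closed-form fit of the one-hit parameter q to this single point.
q = -math.log(1 - observed_response) / observed_dose

def predicted_risk(dose):
    """Extrapolated response probability at any dose under the one-hit model."""
    return 1 - math.exp(-q * dose)

# Extrapolate into the range of unobserved responses (low doses).
for dose in (10.0, 1.0, 0.01):
    print(f"dose {dose:6.2f} mg/kg/day -> predicted risk {predicted_risk(dose):.6f}")
```

At very low doses the fitted curve is effectively the straight line risk = q * dose, which is the linear non-threshold behaviour described in the text: any non-zero dose yields a small but non-zero predicted risk.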

[Figure: Low-dose Extrapolation - the dose-response curve is visible in the "range of observed effects", but needs extrapolation within the "range of unobserved effects"]

While the intuitive simplicity of the linear non-threshold assumption in dose-response assessment is appealing, it is by no means the only possible extrapolation model that could be used by risk assessors. In fact, the linear (1-hit) extrapolation model is infrequently employed in dose-response assessment, because it provides a very poor fit for any dose-response points that fall outside of an ordinary straight-line pattern. The dose-response points in many toxicological experiments typically follow a curvilinear pattern, so it is common for statisticians to consider different mathematical models for non-linear curves.
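The consequences of model choice can be illustrated with a toy calculation: two models constructed to agree exactly at a single invented high-dose observation nonetheless diverge by orders of magnitude once extrapolated to low doses. The data point and both model forms are assumptions for illustration.

```python
# Sketch of how model selection drives low-dose risk estimates: two models
# that agree exactly at the observed high dose can disagree by roughly a
# hundred-fold after extrapolation. The data point is invented.
import math

dose_obs, resp_obs = 100.0, 0.30   # single assumed dose-response observation

# Model 1: one-hit (linear at low dose), P = 1 - exp(-a*d)
a = -math.log(1 - resp_obs) / dose_obs

def one_hit(d):
    return 1 - math.exp(-a * d)

# Model 2: a quadratic (sublinear) alternative, P = 1 - exp(-b*d**2)
b = -math.log(1 - resp_obs) / dose_obs**2

def quadratic(d):
    return 1 - math.exp(-b * d**2)

# Identical "goodness of fit" at the observed dose...
print(f"at {dose_obs}: one-hit {one_hit(dose_obs):.3f}, quadratic {quadratic(dose_obs):.3f}")

# ...but widely divergent predictions in the low-dose region.
for d in (1.0, 0.1):
    ratio = one_hit(d) / quadratic(d)
    print(f"dose {d}: one-hit {one_hit(d):.2e}, quadratic {quadratic(d):.2e}, ratio {ratio:.0f}x")
```

Both curves pass through the same observed point, so goodness-of-fit alone cannot choose between them; yet their low-dose risk predictions differ roughly a hundred-fold at one hundredth of the observed dose.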

Unfortunately, it is not possible to identify the most "correct" mathematical extrapolation model solely on the basis of the goodness-of-fit between the calculated dose-response curve and the observed data points. Inevitably, many different extrapolation models will produce roughly equivalent goodness-of-fit results for the observed data points, which means that no single model provides a unique solution to the low-dose extrapolation problem. The situation is compounded by the fact that different low-dose extrapolation models will often predict widely varying estimates of risk for a given dose level. Thus, risk estimates for many environmental contaminants are strongly dependent on model selection. Since model selection for low-dose extrapolation purposes cannot be adequately resolved by goodness-of-fit criteria, most risk assessors now attempt to avoid prolonged arguments about extrapolation models by relying, by mutual consensus, almost exclusively on one or two general-purpose models. These models are sufficiently flexible to accommodate the diverse shapes encountered in most dose-response curves, while also allowing for the inclusion, where necessary, of a linear relationship at very low doses. Such a consensus-based approach to model selection enables the prediction of health risks on a more-or-less consistent basis, assuring that the relative level of health risks for various contaminants can be reasonably compared for risk management purposes.

[Figure: Risk Estimation using Low-dose Extrapolation - model selection has a major influence on the level of the predicted risk at low doses]

However, considering that science does not operate on unfounded consensus, the absolute level of health risks predicted by such extrapolation models is probably not very accurate, because so many underlying assumptions remain unproved. For example, a risk assessor will often assume that carcinogenic contaminants must follow a linear non-threshold dose-response relationship at low dose levels. This type of assumption asserts, without hard evidence, that any non-zero dose entails some degree of health risk, however small. Likewise, risk assessors may elect to employ the upper-bound uncertainty on the dose-response curve, which is very broad at low doses, instead of the best-estimate line for the same dose-response curve. The use of upper-bound uncertainty will typically inflate the estimated risk by several hundred-fold or more, compared to the risk estimate derived from the best-estimate curve. This approach ensures that risk assessors are unlikely to underestimate potential risks, but the procedure also means that health risks are very likely to be seriously overestimated in many cases.
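A toy calculation, using invented slope and uncertainty values, shows how the upper confidence bound can dwarf the best estimate when the fitted low-dose slope is small relative to its statistical uncertainty:

```python
# Illustrative sketch (all numbers invented) of best-estimate versus
# upper-bound risk: when the fitted slope is small compared with its
# standard error, the upper bound can exceed it by hundreds-fold.

best_slope = 1.0e-6       # assumed best-estimate risk per mg/kg/day
slope_se = 2.0e-4         # assumed standard error of the fitted slope
z95 = 1.645               # one-sided 95% normal quantile

upper_slope = best_slope + z95 * slope_se   # upper-bound slope

dose = 0.5  # mg/kg/day, an assumed low environmental exposure
best_risk = best_slope * dose
upper_risk = upper_slope * dose

print(f"best-estimate risk: {best_risk:.2e}")
print(f"upper-bound risk:   {upper_risk:.2e}")
print(f"inflation factor:   {upper_risk / best_risk:.0f}x")
```

With these assumed values the upper-bound estimate is several hundred times the best estimate, mirroring the hundred-fold inflation the text describes.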

[Figure: Uncertainty Bounds in Low-dose Extrapolation - risk estimates may be based either on the best estimate or the upper-bound risk; the latter is more conservative and is most often selected for carcinogens]


2. uncertainty factors

The uncertainty factor approach assumes that many environmental contaminants are unlikely to produce substantive pathological effects at very low dose levels, so that any dose exceeding zero but less than a specified threshold level would be virtually harmless. The concept of threshold dose relies on the proposition that small amounts of many toxic contaminants could be neutralized by the body's natural detoxification mechanisms, or that adaptive physiological changes might overcome minor perturbations in body functions. The problem is thus defined as first determining a reasonable numerical value for the hypothesized threshold dose, and then converting it to a dose corresponding to a tolerable human exposure, expressed as the Tolerable Daily Intake (TDI). This approach depends on the use of animal dose-response data to determine two quantities: the no-observed-effect level (NOEL), corresponding to the highest administered dose that produces no detectable response in test animals, and the lowest-observed-effect level (LOEL), corresponding to the lowest administered dose capable of producing a measurable increase in the frequency of pathological changes. The NOEL and the LOEL are treated as the lower and upper numerical limits for a range of possible values corresponding to the unobserved threshold dose.
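Identifying the NOEL and LOEL from a table of dose-group results can be sketched as follows; the dose-response table below is hypothetical, chosen only to show the bracketing logic.

```python
# Sketch: identifying the NOEL and LOEL from dose-group results.
# The dose-response table is a hypothetical illustration.

# (dose in mg/kg/day, fraction of animals with pathological response)
results = [(0, 0.00), (5, 0.00), (10, 0.00), (50, 0.12), (250, 0.45)]

background = results[0][1]  # response rate in the unexposed control group

# NOEL: highest administered dose with no detectable increase over background.
noel = max(d for d, resp in results if resp <= background)

# LOEL: lowest administered dose producing a measurable increase over background.
loel = min(d for d, resp in results if resp > background)

print(f"NOEL = {noel} mg/kg/day, LOEL = {loel} mg/kg/day")
# The unobserved threshold dose lies somewhere between the NOEL and the LOEL.
```

The NOEL and LOEL bracket the threshold from below and above; the uncertainty factors discussed next convert that bracket into a tolerable human intake.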

The NOEL/uncertainty factor method, while commonly accepted as the current basis for determining exposure limits for noncarcinogens, has many experimental and statistical limitations. Better methods for calculating the TDI (or RfD) value for threshold toxicants are now on the horizon, based on more sophisticated statistical methods, such as the benchmark dose (BMD) method for characterizing the dose-response curve across the entire range of observed data points.

[Figure: Estimation of a Toxicological Threshold - for threshold-acting toxicants, the threshold dose cannot be exactly measured, but can be estimated by employing the NOEL and the LOEL in the dose-response curve]


The threshold dose for a substance represents the fuzzy boundary between dose levels where no effects are seen and those where obvious effects are seen. Consequently, the threshold dose is a notional value that can never be precisely measured, and it will naturally vary between different species, and across the human population from person to person according to their individual susceptibility. For these reasons, a variety of uncertainty factors must be introduced to ensure that humans have a reasonable margin of protection from potentially toxic exposure levels. Two standard uncertainty factors are generally used to account for: (1) the inter-species variability (UFA) between test animals and humans, and (2) the inter-individual variability (UFH) among people in the human population. For many environmental contaminants, the simplest method for converting the NOEL to the Tolerable Daily Intake (TDI) is to assume that the interspecies and inter-individual uncertainty factors are each assigned a 10-fold numerical value. The TDI can then be calculated by the formula:

TDI = NOEL / (UFA × UFH)

Alternatively, if the LOEL is employed in place of the NOEL, an additional 10-fold uncertainty factor (UFL) is introduced, to account for uncertainty in the distance between the LOEL and the threshold dose along the dose axis:

TDI = LOEL / (UFA × UFH × UFL)

Similarly, other types of uncertainty factors may also be included in the calculation of the TDI on a case-by-case basis. Although a 10-fold value is normally assigned to each uncertainty factor, other values such as 5 or 3 can also be specified in the presence of convincing scientific evidence. Since the number, type, and magnitude of the uncertainty factors is derived, by scientists or risk managers (or both), according to various qualitative criteria, the approach is frequently criticized for being excessively subjective, inconsistent, arbitrary, or politically motivated. While there is some truth in these assertions, the approach possesses the virtues of conceptual simplicity and scientific transparency.
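The two TDI formulas above can be expressed as a short worked example, using the conventional 10-fold factors. The NOEL and LOEL values passed in are assumptions for illustration, not measurements for any particular substance.

```python
# Worked sketch of the TDI formulas, with conventional 10-fold uncertainty
# factors as defaults. Input NOEL/LOEL values are assumed for illustration.

def tdi_from_noel(noel, uf_a=10, uf_h=10):
    """TDI = NOEL / (UFA x UFH): interspecies and inter-individual factors."""
    return noel / (uf_a * uf_h)

def tdi_from_loel(loel, uf_a=10, uf_h=10, uf_l=10):
    """TDI = LOEL / (UFA x UFH x UFL): extra factor for the LOEL-to-threshold gap."""
    return loel / (uf_a * uf_h * uf_l)

# An assumed NOEL of 10 mg/kg/day gives a TDI of 0.1 mg/kg/day.
print(tdi_from_noel(10.0))

# An assumed LOEL of 50 mg/kg/day gives a TDI of 0.05 mg/kg/day.
print(tdi_from_loel(50.0))

# Smaller factors (e.g. 3 or 5) can be substituted case by case,
# given convincing scientific evidence, as the text notes.
print(tdi_from_noel(10.0, uf_a=3, uf_h=10))
```

Keeping the factors as named parameters makes the derivation transparent: every divisor in the TDI can be reported and defended individually.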

Mechanistic considerations

Some contaminants are more likely to possess a definable dose-response threshold while others probably have no discernible threshold. One must therefore adopt different quantitative methods for conducting dose-response assessment, depending on whether the risk assessors have reason to believe that the agent possesses a mechanism of toxicity that follows either threshold or non-threshold behaviour. Hence, a careful mechanistic interpretation of the dose-response data is as important as the statistical techniques used to estimate both toxic threshold and toxic potency.

Whenever someone proposes that uncertainty factors have been used inappropriately, it is usually possible to have recourse to the scientific and technical evidence that supports the derivation of a particular set of uncertainty factors. It is essential that risk assessors explain the rationale supporting the estimation of a particular value for the NOEL or LOEL in a dose-response assessment, as well as the selection and magnitude of all the uncertainty factors included in their derivation of a TDI. While this does not guarantee impartiality, it does assure reasonable grounds for further discussion between various stakeholders, especially those with conflicting expectations about the risk in question.

In older terminology, the TDI is referred to as the ADI (Acceptable Daily Intake), and the uncertainty factor is referred to as the safety factor (SF). These terms are now considered obsolete, as they carry the incorrect connotation that safety can be totally assured and that risk is acceptable. The 1988 U.S. EPA guideline "Reference Dose (RfD): Description and Use in Health Risk Assessments" redefined the ADI in terms of the Reference Dose (RfD), to avoid any implication of a "tolerable exposure" for toxic chemicals, or any notion of absolute safety. The RfD is calculated in a manner analogous to the TDI.

1. threshold processes

Threshold toxicants are considered to act through a deterministic process, which means that the dose level determines the onset and severity of the pathological effect, with greater doses producing progressively greater harm. Toxicological studies have demonstrated that the body can usually tolerate small doses of a toxic chemical because it can employ sophisticated cellular defense systems for metabolic detoxification, physiological adaptation, and tissue repair. Below a certain dose level, the toxicological threshold, these defense mechanisms are normally capable of neutralizing the effects of a toxic agent, thereby rendering it harmless. At some critical higher dose, significant pathological changes will begin to occur as the body's defense systems become saturated or depleted by the excessive amounts of the toxic agent. For most substances other than carcinogens and mutagens, regulatory exposure limits are usually based on the threshold concept, assuming that harmful health effects will occur only when exposure exceeds the body's capacity to resist low-level toxic effects. These substances are normally assessed using the NOEL/uncertainty factor approach.

2. non-threshold processes

Most carcinogenic and mutagenic contaminants are thought to produce their harmful effects by a "hit-or-miss" or stochastic mechanism of action. Here, health risk is expressed as the likelihood of disease occurrence, reflecting the accumulation of genetic damage in some critical cellular target. Most commonly, the genetic damage occurs in the DNA located in the nucleus of a cell. DNA is the biological material that contains the genetic information needed to regulate the structure and function of all cells and tissues. Substances that produce genetic damage are said to reflect a genotoxic mechanism of damage to the cell. The DNA damage produced by genotoxic agents often results in the creation of genetic mutations which may interfere with proper cell control of tissue growth and development. Usually the probability of disease occurrence will diminish as the level of exposure to genotoxic agents is reduced, but risk to health cannot be altogether eliminated even at very small doses. It might conceivably require only one critical gene mutation in one vulnerable cell to induce a cancer or birth defect. Therefore risk assessors usually treat genotoxic contaminants as non-threshold contaminants, unless there is compelling

Stochastic processes are often characterized by scientists and statisticians as progressing through a series of "hits" within individual cells, and are thus modeled statistically by various types of "hit" models: the one-hit model, the multi-hit model, the two-stage model, the multistage model, and so on. To a limited extent, these models have a reasonable biological foundation regarding the development and progression of diseases such as cancer, but the models are clearly an oversimplification of the complex biological processes that occur within damaged cells.

The NOEL can be replaced by a related measure called the NOAEL (No-Observed-Adverse-Effect-Level), which disregards changes that reflect the body's physiological adaptation to chemical exposure and otherwise appear harmless. For example, small chemical exposures may induce increased levels of detoxifying factors in the liver, but these changes are often deemed adaptive rather than pathological. The observed NOAEL often occurs at a higher dose level in the dose-response curve than does the NOEL. If the dose level corresponding to the NOAEL were found to be twice as high as that for the NOEL, then the NOAEL would yield a calculated exposure limit (TDI or RfD) that was twice as high as that derived from the NOEL. Because the NOAEL may produce higher (less restrictive) regulatory exposure limits than the NOEL, the use of the NOAEL and its relationship to adaptive and pathological changes remains contentious.


evidence to suggest that they possess a threshold-like dose-response behaviour. Substances thought to have a stochastic mechanism of action are normally assessed using the low-dose extrapolation approach. The development of a cancer or a heritable birth defect produces an all-or-nothing outcome for the persons affected: there will be no obvious gradations in the occurrence of cancer or heritable birth defects; either the disease is present or it is absent. Given these severe and irreversible consequences, it is normally assumed for risk assessment purposes that the only "risk-free" dose for a genotoxic agent is zero dose. However, very low contaminant exposures would likely be tolerated by most people if the anticipated health risks were sufficiently small.

Interspecies Extrapolation Animal toxicology studies possess a high sensitivity for identifying and quantifying toxicological hazards, but have limited relevance for assessing environmental risks in humans. Human epidemiology studies have lower sensitivity but much greater relevance. Thus it is clear that risk assessment is best served by using information from both types of studies to complement the respective strengths and weaknesses of each method. This goal can be accomplished using the parallelogram concept.

Since chemical toxicity is more readily revealed in experiments where animals have been exposed to high doses, some type of conversion must be made to predict the anticipated toxic effects that might be expected to occur in human populations. The interspecies extrapolation process involves the conversion of the estimated animal toxic risk at low doses to a corresponding prediction of human health risk at low exposure levels.

Considerable scientific evidence suggests that the body's toxicological defenses can reduce risk, or perhaps in some cases eliminate the risk, from very low exposures to carcinogens and mutagens. Some animal studies even suggest that low doses of toxic chemicals or radiation might confer a partial protective effect on cells, a phenomenon termed "hormesis", possibly by stimulating adaptive cell defenses to resist small amounts of DNA damage. However, it is very difficult to verify these ideas, or to determine how efficient these natural body defenses might be.

Parallelogram Concept for Estimating Human Toxicity at Low Dose: animal toxicity results are converted using both interspecies and dose extrapolation, whereas human toxicity is estimated by dose extrapolation only.


Interspecies extrapolations are fraught with potential difficulties and uncertainties. The simplest methods rely on allometric conversions, which try to account for differences in body size between rodents and humans by incorporating conversion factors based on body weight. One routine allometric conversion simply adjusts the external dose level based on the amount of the substance ingested per unit body weight per day. A slightly more complicated conversion involves the calculation of body surface area, because this measure can account for differences in metabolic rates in the transformation or degradation of contaminants in the body. To the extent that allometric conversions can account for some of the interspecies differences in dose-response, the interspecies uncertainty factor can sometimes be reduced (e.g. from 10-fold to 3-fold) when adequate allometric conversions have been included in the dose-response assessment.

Another advantage of the parallelogram concept is that predictions of human health risk at low dose can be derived from two separate sources of evidence. An epidemiological risk estimate relies solely on direct observation at high exposures, or on dose extrapolation from high exposure to low exposure in humans; the toxicological prediction relies on dose extrapolation within the animal studies, followed by interspecies extrapolation between the low-dose animal risk and the low-dose human risk. The concordance, a measure of the agreement between the epidemiological and toxicological estimates of human risk at low doses, can serve to confirm or refute predictions of health risk in the human population. For the roughly 200 environmental chemicals that have undergone a formal risk assessment, the observed concordance between epidemiological and toxicological risk predictions is not particularly impressive. Nonetheless, where concordance for a given agent is good, risk assessors can be reasonably confident that their risk estimates are on the right track.
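The allometric conversions described above can be sketched as follows; the surface-area exponent and the body weights are illustrative assumptions, not prescribed regulatory values:

```python
# Illustrative allometric conversions (assumed conventions): simple
# body-weight scaling keeps a mg/kg/day dose unchanged, while
# surface-area scaling multiplies by (BW_animal / BW_human)**(1/3),
# since surface area scales roughly as body weight**(2/3).

def human_equivalent_dose(animal_dose_mg_kg_day, bw_animal_kg,
                          bw_human_kg, method="surface_area"):
    """Convert an animal dose (mg/kg/day) to a human-equivalent dose."""
    if method == "body_weight":
        return animal_dose_mg_kg_day  # already expressed per kg body weight
    if method == "surface_area":
        return animal_dose_mg_kg_day * (bw_animal_kg / bw_human_kg) ** (1 / 3)
    raise ValueError(f"unknown method: {method}")

# Hypothetical rat-to-human conversion: 10 mg/kg/day in a 0.35 kg rat
hed = human_equivalent_dose(10.0, 0.35, 70.0)
print(round(hed, 2))  # 1.71 mg/kg/day, well below the rat dose
```

Note how surface-area scaling yields a more conservative (lower) human-equivalent dose than simple body-weight scaling, consistent with the reduced interspecies uncertainty factor mentioned above.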

Exposure Assessment

Exposure assessment is the process of measuring or estimating the intensity, frequency, and duration of human exposures to a contaminant present in the environment, or estimating hypothetical exposures that might arise from the release of new chemicals into the environment. Unlike dose-response assessment, exposure assessment cannot rely on a few laboratory or epidemiological studies; differential human exposures cannot be readily generalized from one or two observations. Instead, exposure assessment requires extensive community-based research to measure the actual amounts of contaminants in various environmental media: air, drinking water, soil, and various types of foods.

Much more sophisticated interspecies conversions, which do not rely on allometric methods, involve the use of physiologically-based pharmacokinetic (PB-PK) models, which attempt to account for the intake, disposition, and excretion of a substance using computer simulation techniques. These methods, while difficult and time-consuming, promise to provide much more accurate conversions between the external and internal doses of toxic contaminants in test animals and humans.
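To hint at the idea behind kinetic modeling, the toy one-compartment model below tracks a body concentration over time under constant intake and first-order elimination. Real PB-PK models use many physiological compartments and chemical-specific parameters; every value here is hypothetical:

```python
import math

# Toy one-compartment kinetic sketch (hypothetical parameters).
# dC/dt = intake / V - k * C, so steady state C_ss = intake / (k * V).

def simulate_concentration(intake_mg_per_day, volume_l, k_per_day,
                           days, dt=0.01):
    """Forward-Euler integration of the one-compartment model."""
    c = 0.0
    for _ in range(int(days / dt)):
        c += (intake_mg_per_day / volume_l - k_per_day * c) * dt
    return c

k = math.log(2) / 7        # assumed elimination half-life of ~7 days
c_30d = simulate_concentration(1.0, 42.0, k, 30)  # 1 mg/day, 42 L volume
c_ss = 1.0 / (k * 42.0)    # analytical steady-state concentration
```

After 30 days (about four half-lives) the simulated concentration has climbed to roughly 95% of its steady-state value, illustrating why prolonged exposure matters for persistent contaminants.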


Highly exposed subgroups Clearly the greatest health concerns will focus on the most highly exposed individuals or subgroups: those who incur the highest, most frequent, or most prolonged exposure to a contaminant will be considered as having the greatest risk for harmful health effects. The number of individuals in such high-exposure subgroups may not be large in comparison to the overall population, but considering that health risk is a personal concern of great consequence, the small number of individuals at risk should not be used as an excuse for undue delay or inaction by risk managers. For example, aboriginal peoples frequently experience high exposures to heavy metals, such as mercury and cadmium, and persistent organic pollutants, such as dioxins and PCBs, due to their consumption of large quantities of fish, game, and other wildlife. To the extent that these "country food" sources may be heavily contaminated with toxic substances, a subgroup exposure assessment would need to be conducted in these individuals, to determine the patterns of food consumption and the corresponding intake of identified toxic contaminants.

Highly susceptible subgroups In addition, human populations in Canada and elsewhere are obviously not homogeneous, but instead consist of different groups of people who vary in their individual susceptibility to a given contaminant according to their age, sex, racial or ethnic group, existing medical conditions, and various genetic, nutritional, and lifestyle factors. Those with unusually high susceptibility to the toxic effects of a contaminant will likewise be viewed as a group with potentially higher-than-normal health risks: for example, infants and children, women of child-bearing age, the elderly, the sick and infirm, certain racial or ethnic groups, and so on. Information on differential exposure must then be combined with information about susceptibility to identify the most vulnerable high-risk individuals. In its most complete form, the exposure assessment will attempt to describe the magnitude, duration, schedule, and route of exposure; the size, nature, and classes of human populations exposed; and the uncertainties in all estimates.

General population studies In addition to highly exposed subgroups, the average exposure experienced by the general population is another exposure measure that must be considered. Although the average population-wide exposure level will typically be much lower than that encountered in highly exposed subgroups, the overall health impact of a contaminant may be greater in the general population because a much larger number of people are at risk. The population of Canada is widely distributed among many different regions and locales: some urban, some suburban, and some rural. Determining the intake of contaminants from the air, water, and soil therefore requires the operation of an extensive network of


monitoring and sampling stations to measure the concentrations of pollutants on a regional or locality basis. Such networks are complicated and expensive to operate, so in many instances there are serious gaps in the exposure assessment of common contaminants. To fill these gaps, risk assessors often must rely on indirect estimates of human exposure based on various exposure modeling techniques. For example, if the concentration of air pollutants in a given locality cannot be measured by systematic air quality monitoring, the local concentration of pollutants may be estimated by observing sources of pollution and airborne pollutant concentrations at other monitored localities downwind, and extrapolating these measured quantities using computerized air dispersion models. While such modeling techniques are relatively imprecise for any given locality, they often can provide enough information about estimated exposures to allow reasonable risk estimates to be derived.

Much of our daily intake of environmental contaminants is ingested in the foods we eat. Persistent environmental pollutants can accumulate in the vegetation and feed grains that farm animals consume, and we in turn consume these contaminants in animal products such as meat, fish, dairy products, and eggs. Assuming that the general population obtains most of its food from grocery stores and markets, the intake patterns of foodstuffs can be estimated from the volume of sales of various products through retail distributors.

Exposure assessment is an essential component of the risk estimation process. Without measures or estimates of human exposure levels for environmental contaminants, the dose-response assessment would have little meaning with regard to potential human health effects. Substances with minuscule human environmental exposures, however potent their effects, are unlikely to present a significant health risk. Conversely, contaminants with relatively weak dose-response characteristics may nonetheless pose a significant health risk if exposure levels are sufficiently high or prolonged to exceed a tolerable intake level. Health risks for a contaminant can only be meaningfully assessed by examining its observed or calculated environmental exposure levels in conjunction with the dose levels associated with harmful effects in dose-response studies.

Risk Characterization

As the final summation in the risk estimation step, risk characterization combines the most critical findings obtained from dose-response assessment and exposure assessment, providing a description of the nature of the hazard and an explicit accounting of the estimated degree of health risk. The risk characterization process must both summarize and explain the methods used to compute estimates of health risk in the

Another approach for estimating food intake patterns is the "market basket" survey, a questionnaire or sampling survey conducted by social scientists and nutritionists. The average amounts of contaminants in various foodstuffs are measured by systematic sampling and analysis in government health laboratories on a continual basis. By combining the information about contaminant concentrations with information about food consumption patterns, it is possible to estimate the average amount of contaminant intake for the general population or particular subgroups, such as children.
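The market-basket arithmetic can be sketched as a simple sum over food categories; all of the concentrations and consumption rates below are hypothetical:

```python
# Illustrative market-basket estimate (hypothetical values):
# daily intake = sum over foods of
#   (contaminant concentration in food) x (daily consumption of food)

# concentrations in mg per kg of food; consumption in kg per day
foods = {
    "fish":  {"conc_mg_per_kg": 0.20, "intake_kg_per_day": 0.030},
    "dairy": {"conc_mg_per_kg": 0.01, "intake_kg_per_day": 0.250},
    "meat":  {"conc_mg_per_kg": 0.05, "intake_kg_per_day": 0.150},
}

daily_intake_mg = sum(f["conc_mg_per_kg"] * f["intake_kg_per_day"]
                      for f in foods.values())
print(round(daily_intake_mg, 4))  # 0.016 mg/day
```

In practice, the same arithmetic would be repeated with subgroup-specific consumption patterns (for example, the higher fish intake of northern communities) to capture differential exposure.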


risk estimation step. A risk characterization report consists of two components:

• numerical risk estimates, and their attendant uncertainties

• a narrative summary to judge the practical significance of risk

Because the risk characterization report marks a final summation of the sources of evidence, the methods of analysis, and the results obtained in the risk estimation step, it is considered a critical interface for the exchange of information between the scientific community of risk assessors and the non-scientific community of risk managers and stakeholders. In the past, communication between these groups was typically unidirectional and formalistic: risk assessors would tell risk managers "what the risk numbers say," with little guidance regarding the limitations, uncertainties, and assumptions contained in the quantitative risk assessment report. More recently, a concerted effort has been made to improve the information value and comprehensibility of the risk characterization summary for decision-makers: by including more qualitative descriptions of the risk assessment findings, by explaining in narrative fashion the main line of reasoning leading to critical conclusions about risk, and by expressing the interpretation of findings in everyday language rather than technical jargon. In several regulatory jurisdictions, including Health Canada/Environment Canada (through CEPA) and the U.S. EPA (through the Clean Air Act Amendments), risk characterization summaries are now phrased in everyday language, to the extent possible, in order to enhance risk communication not only with decision-makers, but also with stakeholders and the general public. In this sense, risk characterization is increasingly viewed as an essential risk communication tool, as well as a risk decision-making tool.

Margin of Safety approach For virtually all anthropogenic (man-made) contaminants, zero exposure in the environment is not achievable unless a chemical is completely banned from production and distribution, and this approach is not always practicable or desirable for many economically important chemical products. For the many naturally occurring environmental contaminants (including a number of potent carcinogens) produced by plants, fungi, bacteria, and other organisms, a zero exposure philosophy is altogether impossible. Therefore, it is essential to adopt a concept of safety which has a more realistic operational definition than zero exposure and zero risk.

The implications of risk characterization procedures will vary depending on the expressions chosen to quantify the estimated risk. No single numerical expression of risk can adequately summarize the meaning of the risk estimate in all cases, nor can a simple numerical expression communicate the importance of qualitative considerations in the determination of risk, such as the weight-of-evidence criteria for inferring the toxic hazard presented by the substance in question.

Usually the dose-response relationship used for risk characterization will be based on data collected from the most sensitive available species of test animal, an example of conservatism in risk decision-making.


For threshold-acting substances, a risk characterization method that is widely applied is the Margin of Safety (MOS) approach, or more correctly the Margin of Exposure (MOE) approach. The MOE can be calculated as the ratio of the observed or expected environmental exposure level to an exposure limit, usually derived from the NOEL or some similar determinant of the Tolerable Daily Intake (TDI). In order to provide a reasonable assurance of safety, the ratio between observed or predicted environmental exposures and the specified exposure limit is required to be less than one, preferably by an ample margin of safety. Using a variant of the MOS approach, Environment Canada calculates a Hazard Quotient (HQ) for threshold-acting contaminants according to the formula:

HQ = (estimated exposure level) / (exposure limit, e.g. the TDI)

When the ratio of the predicted exposure to the exposure limit approaches one, it can be inferred that the exposure has a large probability of producing toxic effects, since the threshold level for potential toxic effects is almost exceeded. When the HQ ratio is somewhat less than one, harmful effects might still be expected in some portion of the population, since the toxicity threshold may have been exceeded in sensitive subgroups. If the observed exposure level is well below the exposure limit, the HQ ratio becomes much less than one, and harmful effects are unlikely to occur in most of the exposed population, although some especially susceptible individuals might nonetheless experience adverse effects. Ideally, the HQ ratio should be two or more orders of magnitude less than one (i.e., an MOE of 100-fold to 1000-fold) to provide reasonable assurance of no harm. This is not always possible when environmental contamination is persistent or where natural sources of contamination are present.
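The HQ calculation and its relationship to the margin of exposure can be sketched as follows; the exposure and TDI values are hypothetical:

```python
# Hazard Quotient sketch for a threshold-acting contaminant, following
# the ratio described above (hypothetical values).

def hazard_quotient(estimated_exposure, exposure_limit):
    """HQ = estimated exposure level / exposure limit (e.g. the TDI)."""
    return estimated_exposure / exposure_limit

# Hypothetical: exposure of 0.002 mg/kg/day against a TDI of 0.5 mg/kg/day
hq = hazard_quotient(0.002, 0.5)
print(hq)      # 0.004, i.e. well below one
print(1 / hq)  # a margin of exposure of 250-fold
```

Here the HQ falls between the ideal 100-fold and 1000-fold margins of exposure, which under the reasoning above would suggest a reasonable, though not maximal, assurance of no harm.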

Slope Factor approach Mathematical models quantify the dose-response relationship of chemical carcinogens at relatively high doses, using low-dose extrapolation to estimate carcinogenic risk at lower doses. The carcinogenic potency of a substance at much lower doses can then be expressed as a slope factor (often denoted q1*, pronounced "cue-one-star"). The slope factor predicts the amount of cancer risk over a lifetime that would be expected from exposure to a specified dose of the substance, usually according to the Linearized Multistage model. The carcinogenic risk can then be calculated in a straightforward manner by multiplying the carcinogenic potency (slope factor, q1*) by

In Canada, an additional expression of risk characterization may also be employed. The risk characterization of carcinogenic and other non-threshold chemicals classified as CEPA Priority Substances by Health Canada is conducted according to an Exposure/Potency Index (EPI), which is derived from the Tumourigenic Dose (TD05), measured in test animals as the dose inducing a 5% increase in the incidence of tumours or other disease endpoints. The TD05 is not based on a statistical confidence limit, but is computed directly from the dose-response curve within or close to the experimental range. Health Canada considers this appropriate in view of the stability of the data in the experimental range, and to avoid unnecessarily conservative assumptions. EPI = (exposure level) / TD05. The calculated EPI values for CEPA carcinogens are derived in basically the same manner as the hazard quotient (HQ) values for non-carcinogens, although in this instance the carcinogenic or mutagenic substances usually will be considered to have non-threshold properties. If the ratio between exposure and the TD05 is less than 2 × 10⁻⁶, there is little need for further action. If the ratio is 2 × 10⁻⁴ or greater, there is a high priority for further action. Values in between are considered moderate risks.
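The EPI screening logic quoted above can be sketched as follows; the exposure and TD05 values are hypothetical:

```python
# Illustrative classification using the EPI thresholds described above:
# EPI = exposure level / TD05; below 2e-6 is low priority for action,
# 2e-4 or greater is high priority, and values in between are moderate.

def epi_priority(exposure_level, td05):
    """Return the EPI ratio and its priority band for further action."""
    epi = exposure_level / td05
    if epi < 2e-6:
        return epi, "low priority for further action"
    if epi >= 2e-4:
        return epi, "high priority for further action"
    return epi, "moderate priority"

# Hypothetical: exposure of 1e-5 mg/kg/day against a TD05 of 1.0 mg/kg/day
epi, priority = epi_priority(1e-5, 1.0)
print(epi, priority)  # falls in the moderate band
```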


SAFETY IS NOT EQUIVALENT TO ZERO RISK FOR CARCINOGENS Complete elimination of exposures to carcinogens in the environment, synthetic or natural, is not technically feasible. If cancer can potentially occur at any level of exposure (i.e. the linear no-threshold hypothesis), then complete elimination of potential risk is not possible. Consequently, it is important to have an operational concept of safety which is more practical than that of zero risk. Such an approach uses the concept of acceptable or essentially negligible risk to determine the exposure levels at which carcinogens are regulated. Risk assessments for ionizing radiation and genotoxic chemicals assist in establishing a basis on which to recommend permissible limits. The degree to which environmental carcinogens should be controlled requires consideration of the risks of health effects and their severity, the benefits of the associated practice or industry, the costs of mitigation, and societal priorities, all in accordance with legal statutes underlying carcinogen regulation. An operational definition of safety which is consistent with the linear no-threshold assumption of cancer risk is that of very low, but non-zero, levels of exposure which reduce risk to the greatest extent feasible, with consideration given to economic and social factors. This introduces options for risk management other than complete elimination of exposure. Hrudey and Krewski, 1995

the expected level of environmental exposure, to derive an estimate of the lifetime cancer risk for each exposed individual.

lifetime cancer risk = carcinogenic potency (slope factor, q1*) × exposure level (dose)
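The slope-factor calculation described above can be sketched numerically; the potency and exposure values below are hypothetical, not values for any real substance:

```python
# Sketch of the slope-factor calculation: lifetime cancer risk is the
# product of the carcinogenic potency (q1*) and the dose, under the
# linear low-dose assumption. All values are hypothetical.

def lifetime_cancer_risk(slope_factor_per_mg_kg_day, dose_mg_kg_day):
    """Lifetime risk ~= q1* x dose (linear low-dose extrapolation)."""
    return slope_factor_per_mg_kg_day * dose_mg_kg_day

# Hypothetical: q1* of 3e-3 per mg/kg/day, exposure of 1e-3 mg/kg/day
risk = lifetime_cancer_risk(3e-3, 1e-3)   # roughly 3 in a million
```

A lifetime risk on the order of 10⁻⁶, as in this hypothetical case, would fall within the range often treated as essentially negligible or de minimis, as discussed below.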

Alternatively, a predetermined small amount of risk, typically set at a lifetime probability of cancer in the range 10⁻⁵ to 10⁻⁶ (one in a hundred thousand, to one in a million), can be defined for regulatory purposes as an essentially negligible risk (Health Canada) or a de minimis risk level (U.S. EPA). The latter term reflects the traditional legal maxim de minimis non curat lex: the law does not concern itself with trifles. Current risk characterization practice recommends that the value for the slope factor be rounded to only one significant figure (e.g. 3 × 10⁻³ instead of 3.26 × 10⁻³), to emphasize the imprecise nature of the calculated risk. The slope factor also needs to be expressed in two ways: both as the best-fit estimate of risk (most likely to be true) and as the statistical upper confidence limit of risk (the upper bound that is unlikely to be exceeded under most circumstances). In cases where an alternative low-dose extrapolation model (e.g. robust linear extrapolation) has been employed in addition to the consensus model (e.g. the Linearized Multistage model), risk estimates derived from the alternative model should be presented so that the magnitude of the risk estimates from the two models can be compared. This provides a range of possible risk estimates based on model assumptions, and thereby demonstrates, on a case-by-case basis, the sensitivity of the risk estimates to the model assumptions in the low-dose extrapolation procedures. Alternative interspecies extrapolation models may also be introduced and compared in an analogous manner. In addition, the expected amount of risk reduction that might be gained from various levels of decreased exposure should also be considered, according to a variety of proposed exposure reduction scenarios.

The practice of setting benchmarks for risk levels is generally avoided in Canada. For example, Health Canada does not require the specification of a "de minimis" risk level or a negligible risk under the proposed exposure limits for CEPA non-threshold substances (i.e., carcinogens).

Obviating the establishment of a single de minimis risk level enables the assessment of "toxic" for "non-threshold toxicants" to be based to the extent possible on scientific considerations. This approach is also consistent with the objective that exposure to "non-threshold toxicants" should be reduced to the extent possible.

Health Canada, 1994


UNDERLYING ASSUMPTIONS IN THE LINEARIZED MULTISTAGE (LMS) MODEL FOR RISK ESTIMATION OF CARCINOGENIC SUBSTANCES

Considering that the development of cancer cells progresses through several intermediate steps, from normal cells to pre-malignant cells and finally to cancer cells, statisticians often employ a mathematical model called the linearized multistage (LMS) model, which better represents the multiple stages involved in carcinogenesis. Under the LMS model, the dose-response curve at higher dose levels will often show curvature due to complex events occurring during the development of cancer cells. The LMS model can adequately define this curvature in mathematical terms, and can also account for the expected linearity at lower doses. The LMS model has sufficient flexibility to characterize dose-response curves with many different amounts of curvature, so it represents a convenient "one size fits all" solution for modeling dose-response data. However, there are several serious objections to the uncritical use of the LMS model in risk estimation:

• it is based on assumptions about cancer biology which are not firmly established; for example, it takes no account of cell repair systems or other defenses that may retard or prevent the development of cancer.

• the LMS model makes a number of unproved statistical assumptions; if some of these are untrue, then the model will often produce inaccurate risk estimates.

• the model produces an estimate of risk that is surrounded by a very large envelope of statistical uncertainty; hence the risk estimate is presented in a manner that is both confusing and hard to interpret.

• the LMS model is very conservative: it produces risk estimates that are likely to be excessively pessimistic, and thus often leads to regulatory exposure limits that are considered too restrictive by many scientists and risk assessors.
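For readers who want to see the functional form, a minimal sketch of the multistage dose-response function follows. The coefficients are hypothetical, and this is not a model-fitting procedure:

```python
import math

# Minimal sketch of the multistage dose-response form (assumed standard
# parameterization): P(d) = 1 - exp(-(q0 + q1*d + q2*d**2 + ...)).
# Extra lifetime risk at dose d is P(d) - P(0); at low doses this is
# approximately q1 * d, which is the "linearized" part of the LMS model.

def multistage_prob(dose, q):
    """q = [q0, q1, q2, ...]; probability of a response at the given dose."""
    poly = sum(qi * dose ** i for i, qi in enumerate(q))
    return 1.0 - math.exp(-poly)

q = [0.01, 2e-3, 5e-4]           # hypothetical fitted coefficients
extra = multistage_prob(0.01, q) - multistage_prob(0.0, q)
linear = q[1] * 0.01             # linearized low-dose approximation
```

At this low dose the full multistage calculation and the linear approximation agree closely; at high doses the quadratic and higher terms produce the curvature described above.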

These risk estimation assumptions, while well-known to experienced risk assessors, often remain undisclosed within the risk characterization summary that is forwarded to decision-makers and stakeholders for use in the subsequent stages of the risk management process. If the inherent uncertainties and sources of conservatism in the LMS model (or other risk estimation models) are not expressed clearly in the risk characterization report, it is unlikely that effective risk management decisions will be achieved.

Finally, when serious disease risks are being considered, the evidence of toxicity determined in the risk identification step needs to be summarized by presenting the weight-of-evidence classification for the critical endpoint deemed most relevant. A combined overall estimate of risk, incorporating both a numerical estimate of lifetime risk and the weight-of-evidence classification, is normally presented. A detailed narrative summary should accompany the risk estimates and weight-of-evidence classification, including a review of the assumptions, scientific judgments, and uncertainties that underlie the risk assessment. This feature is intended to ensure that risk managers and stakeholders appreciate the actual relevance of the risk estimation results to human health.

Risk Management Framework: Step 3 - University of Waterloo (irr.uwaterloo.ca/pdf_files/primer/estimation.pdf)

RISK ESTIMATION 5–19

SUGGESTED READINGS

Butterworth, B.E., Conolly, R.B., Morgan, K.T. (1995). A strategy for establishing mode of action of chemical carcinogens as a guide for approaches to risk assessments. Cancer Lett 93:129-46.

Butterworth, B.E., Bogdanffy, M.S. (1999). A comprehensive approach for integration of toxicity and cancer risk assessments. Regul Toxicol Pharmacol 29:23-36.

Calabrese, E. (Ed.) (1994). Biological Effects of Low Level Exposures: Dose-Response Relationships. Boca Raton, FL: CRC Press.

Canadian Standards Association (1997). CAN/CSA-Q850-97. Risk Management: Guideline for Decision-Makers. Etobicoke (Toronto), Canada: Canadian Standards Association.

Dourson, M.L., Felter, S.P., Robinson, D. (1996). Evolution of science-based uncertainty factors in noncancer risk assessment. Regul Toxicol Pharmacol 24:108-20.

Gart, J.J., Krewski, D., Lee, P.N., Tarone, R.E., Wahrendorf, J. (1986). Statistical methods in cancer research. Volume III: The design and analysis of long-term animal experiments. IARC Sci Publ:1-219.

Goddard, M.J., Krewski, D. (1995). The future of mechanistic research in risk assessment: Where are we going and can we get there from here? Toxicology 102:53-70.

Gold, L.S., Slone, T.H., Ames, B.N. (1998). What do animal cancer tests tell us about human cancer risk? Overview of analyses of the carcinogenic potency database. Drug Metab Rev 30:359-404.

Health Canada (1994). Human health risk assessment for priority substances. Ottawa, Canada: Environmental Health Directorate.

Health Canada (1995). Investigating human exposure to contaminants in the environment: A handbook for exposure calculations. Ottawa.

Health Canada (1996). Health-based tolerable daily intakes/concentrations and tumourigenic doses/concentrations for priority substances. Ottawa: Environmental Health Directorate.

Hrudey, S.E., Krewski, D. (1995). Is there a safe level of exposure to a carcinogen? Environ Sci Technol 29:370A-375.

Illing, H.P. (1999). Are societal judgments being incorporated into the uncertainty factors used in toxicological risk assessment? Regul Toxicol Pharmacol 29:300-8.


Joint Working Group - Health Canada, Atomic Energy Control Board, Ontario Ministry of the Environment (1998). Assessment and management of cancer risks from radiological and chemical hazards. Ottawa, ON.

Jones, H.B., Eldridge, S.R., Butterworth, B.E., Foster, J.R. (1996). Measures of cell replication in risk/safety assessment of xenobiotic-induced, nongenotoxic carcinogenesis. Regul Toxicol Pharmacol 23:117-27.

Kamrin, M.A. (1989). Reducing uncertainty in risk assessment. Regul Toxicol Pharmacol 10:82-91.

Krewski, D., Brown, C., Murdoch, D. (1984). Determining "safe" levels of exposure: safety factors or mathematical models? Fundam Appl Toxicol 4:S383-94.

Krewski, D., Goddard, M.J., Murdoch, D. (1989). Statistical considerations in the interpretation of negative carcinogenicity data. Regul Toxicol Pharmacol 9:5-22.

Krewski, D., Wigle, D., Clayson, D.B., Howe, G.R. (1990). Role of epidemiology in health risk assessment. Recent Results Cancer Res 120:1-24.

Krewski, D., Gaylor, D.W., Soms, A.P., Szyszkowicz, M. (1993). An overview of the report: correlation between carcinogenic potency and the maximum tolerated dose: Implications for risk assessment. Risk Anal 13:383-98.

Krewski, D., Withey, J.R., Ku, L.F., Andersen, M.E. (1994). Applications of physiologic pharmacokinetic modeling in carcinogenic risk assessment. Environ Health Perspect 102:37-50.

Leech, J.A., Wilby, K., McMullen, E., Laporte, K. (1996). The Canadian Human Activity Pattern Survey: Report of methods and population surveyed. Chronic Dis Can 17:118-23.

McClellan, R.O. (1994). A commentary on the NRC report "Science and judgment in risk assessment." Regul Toxicol Pharmacol 20:S142-68.

McColl, R.S. (1989). Biological safety factors in toxicological risk assessment (90-EHD-154). Ottawa: Environmental Health Directorate, Health Canada.

Meek, B. (1996). Assessments of priority substances under the Canadian Environmental Protection Act: variations in exposure and response. Environ Toxicol Pharmacol 2:111-113.

Moolgavkar, S., Krewski, D., Zeise, L., Cardis, E., Moller, H. (1999). Quantitative Estimation and Prediction of Human Cancer Risks. IARC Scientific Publication No. 131. Lyon: International Agency for Research on Cancer.

National Research Council (1983). Risk Assessment in the Federal Government: Managing the Process. Washington, DC: National Academy Press.


National Research Council (1994). Science and Judgment in Risk Assessment. Washington, DC: National Academy Press.

National Research Council (1996). Understanding Risk: Informing Decisions in a Democratic Society. Washington, DC: National Academy Press.

Neumann, D.A., Kimmel, C.A. (1998). Human Variability in Response to Chemical Exposures: Measures, Modeling, and Risk. CRC Press.

Ohanian, E.V., Moore, J.A. (1997). Risk characterization: A bridge to informed decision making. Fundam Appl Toxicol 39:81-88.

Presidential/Congressional Commission on Risk Assessment and Risk Management (1997). Framework for Environmental Health Risk Management, Final Report, Volume 1. Environmental Protection Agency.

Samet, J.M., Schnatter, R., Gibb, H. (1998). Epidemiology and risk assessment. Am J Epidemiol 148:929-36.

Teeguarden, J.G., Dragan, Y.P., Pitot, H.C. (1998). Implications of hormesis on the bioassay and hazard assessment of chemical carcinogens. Hum Exp Toxicol 17:254-8.

Trosko, J.E., Chang, C.C. (1989). Stem cell theory of carcinogenesis. Toxicol Lett 49:283-95.

Trosko, J.E., Chang, C.C., Upham, B., Wilson, M. (1998). Epigenetic toxicology as toxicant-induced changes in intracellular signalling leading to altered gap junctional intercellular communication. Toxicol Lett 102-103:71-8.

Walker, V. (1996). Risk characterization and the weight of evidence: adapting gatekeeping concepts from the courts. Risk Anal 16:793-799.

Wilson, R., Crouch, E.A. (1987). Risk assessment and comparisons: an introduction. Science 236:267-70.

Wolf, D.C., Butterworth, B.E. (1997). Risk assessment of inhaled chloroform based on its mode of action. Toxicol Pathol 25:49-52.

Zielinski, J.M., Krewski, D., Goddard, M.J., Wang, Y. (1996). Empirical and biologically based models for cancer risk assessment. Drug Metab Rev 28:53-75.