No Animals Harmed: Toward a Paradigm Shift in Toxicity Testing

The original Food and Drugs Act, passed by the United States Congress in 1906, did not require any premarket testing of products.[1] In the ensuing years, however, several events led to passage of a stricter law. In one case, a permanent mascara called Lash Lure caused blisters and ulcers in several women who used the product, and one woman died of a resulting infection. In another case, a drug preparation called Elixir Sulfanilamide was created by mixing an antibacterial sulfa drug with a sweet-tasting liquid to make it more palatable for children. The sweet liquid was actually diethylene glycol, a component of antifreeze that is toxic to the kidneys. The drug claimed 107 lives, most of them children.[2] These tragedies led to the passage of the Federal Food, Drug, and Cosmetic Act (FDCA) in 1938, which extended the power of the federal government to oversee the marketing of cosmetics and medical devices and mandated that drugs be tested for safety before marketing.

Over the course of the twentieth century, the creation and marketing of tens of thousands of new chemicals led to ever-growing concern about toxicity. The Federal Insecticide, Fungicide, and Rodenticide Act, passed in 1947, required that pesticides be registered before they could be marketed in interstate or foreign commerce.[3] The Miller Pesticide Amendment to the FDCA in 1954 identified methods for setting safety limits for pesticide residues in food. In 1976, the Toxic Substances Control Act addressed the control of new and existing industrial chemicals not regulated by other laws. As of 2005, there were 82,000 chemicals in commerce, with approximately seven hundred new chemicals introduced each year.[4] Little safety data on most of these chemicals is publicly available, yet many of them are produced in quantities of a million pounds or more per year.

As this brief history demonstrates, the lack of safety information about the chemicals to which humans and animals—both domestic and wild—are exposed is a serious public health problem. But the prospect of generating the data for these numerous compounds presents other practical, scientific, and ethical challenges. Animal models have traditionally been used to test for toxicity, but animal testing cannot generate all the toxicity data we now need. To continue using animals for this purpose would lead to the killing of many millions of them. Moreover, animal models are not perfect substitutes for humans. It is true that much valuable information has been gleaned from animal research, and that the information has contributed to our knowledge of the mechanisms of many human diseases. On the other hand, it is not clear how often animal models have led research down a wrong path simply because results from animal studies were not applicable to humans.

Fortunately, advances in science have led to a new vision for toxicity testing based on human cell systems that will be more predictive, have higher throughput, cost less, and be more comparable to real-life exposures in humans, all while using many fewer animals. This vision, embraced by leading scientific and regulatory groups, is a paradigm shift from animal-based to human-based testing that signals a major change in focus and promotes the development of new approaches to understanding the toxicity of chemicals in humans. Information gained from these new approaches will likely affect other areas of research as well, leading to less reliance on animals in the future.

A Call for Change

Since its inception, the mainstay of toxicity testing has been administering high doses of test compounds to animals and looking for adverse effects. While the methods of analysis have become more sophisticated, the premise has remained the same. There are numerous disadvantages to this approach. First, human exposures to environmental chemicals typically occur at low concentrations. If testing strategies were based on these low concentrations, however, far more animals, time, and money would be needed to detect adverse health effects. Therefore, to maximize the detection of toxicities, animals are treated with very high doses of chemicals. Even so, the studies take years to complete, only small numbers of compounds can be tested in a given period, and large numbers of animals are still required. In addition, to relate the conditions of these high-dose animal studies to realistic human exposures, scientists must extrapolate the results to lower doses in order to determine safe levels of exposure for humans. Such extrapolations are fraught with difficulty because high-dose exposures may cause adverse effects through processes that do not occur at low doses.
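To make the extrapolation step concrete, the sketch below shows the arithmetic conventionally used to derive a human reference dose from an animal study: the highest dose producing no observed adverse effect (the NOAEL) is divided by uncertainty factors for animal-to-human differences and for variability among humans. The NOAEL and factor values here are hypothetical placeholders, not figures from any particular study.

```python
# Minimal sketch of conventional high-to-low-dose extrapolation.
# The NOAEL and uncertainty factors are hypothetical placeholders.

noael_mg_per_kg_day = 50.0    # no-observed-adverse-effect level from a high-dose animal study
uf_interspecies = 10          # uncertainty factor for animal-to-human differences
uf_intraspecies = 10          # uncertainty factor for variability among humans

reference_dose = noael_mg_per_kg_day / (uf_interspecies * uf_intraspecies)
print(f"Estimated reference dose: {reference_dose} mg/kg/day")  # 0.5 mg/kg/day
```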

Second, inbred strains of animals are routinely used for testing chemicals. Again, this strategy is employed to improve the detection of adverse effects. Inbred strains exhibit less genetic variability, which can affect an animal’s response to a chemical agent. However, humans are not inbred—we are quite heterogeneous genetically and thus potentially exhibit considerable variability in susceptibility to adverse effects from a chemical. Yet the toxicity test data reflect only the effects of a chemical in a genetically restricted population.

The third major problem with the current animal-testing strategies is that the results are obtained primarily from rats and mice, and though rats and mice exhibit many of the same responses to chemicals as humans, there are also many differences. Toxicity tests of pharmaceuticals in rodents predict human toxicity only 43 percent of the time.[5] Interestingly, the results in the rat predict the results in the mouse only 57 percent of the time. If a chemical is shown to cause adverse effects in an animal species by a process that is known to be irrelevant to humans, the data from those studies are not used for risk assessment. But since the differences among species are not all known, an uncertainty factor must be applied even to the animal data that are used.
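As a rough illustration of what such a concordance figure means operationally, concordance is simply the fraction of compounds for which two test systems give the same answer. The hit calls below are invented, not data from the cited study.

```python
# Hypothetical toxicity calls (True = adverse effect observed) for seven compounds.
# These values are illustrative only; they are not data from the cited study.
rodent_result = [True, True, False, True, False, False, True]
human_result  = [True, False, False, False, False, True, True]

matches = sum(r == h for r, h in zip(rodent_result, human_result))
print(f"Concordance: {matches / len(rodent_result):.0%}")  # species agree on 4 of 7 compounds
```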

Thus, there are several major scientific issues with the current toxicity testing approaches: the necessity for high-to-low dose extrapolation, the limitations of genetically homogeneous animal strains, and the uncertainty of interspecies comparisons. The practical problems are the cost, the time required, and the inability to test large numbers of chemicals. The ethical concerns are the number of animals that would be required to undertake a full testing regimen for all chemicals and the fact that the testing protocols typically cause the animals some pain, distress, or both, making the whole enterprise a major animal welfare issue.

Recognizing these shortfalls, the Environmental Protection Agency in 2003 requested that the National Academy of Sciences review existing strategies and develop a vision for the future of toxicity testing. After four years of toil, a committee of twenty-two experts in toxicology, epidemiology, environmental health, risk assessment, and animal welfare, representing academia, industry, and nongovernmental organizations, produced two reports: Toxicity Testing for Assessment of Environmental Agents: Interim Report[6] and Toxicity Testing in the 21st Century: A Vision and a Strategy (Tox21c).[7] The second report proposed a major change in toxicity testing—from charting the effects on animals to mapping toxic pathways in human cells with minimal use of animals.

A pathway of toxicity is a cellular mechanism that, when sufficiently perturbed, is expected to result in an adverse effect at the cellular level and may lead to an adverse health effect for the organism. So rather than the “black box” approach of dosing an animal with a chemical and looking at the end result (death, cancer, or organ failure, for example), the new approach would involve measuring changes in the molecules of the cell in response to a chemical. At low concentrations of a chemical, these changes might be reversible and the cells might recover through adaptive responses; at higher concentrations, the changes might be irreversible, setting off a cascade of cellular events that eventually leads to impaired function or death of the cell. As more information is derived from the study of these pathways, researchers will be able to identify the early cellular changes that, years later, culminate in catastrophic effects.


While the vision seems straightforward, in reality it describes an extremely ambitious task likely to take years, if not decades. Mapping the pathways of toxicity requires a concerted effort among many scientists and agencies. Recognizing the effort required to implement the vision, the committee outlined a plan to guide the development of the scientific basis for the new paradigm. In the first phase, scientists will work on elucidating the pathways of toxicity. Since this research will produce a large amount of data, massive data storage and management systems will be needed. Guidelines for assay performance and reporting of results will have to be developed as well. The first phase will also include creating a strategy for collecting data from human populations that have been exposed to chemicals already found in the environment. Population-based studies can provide information on toxicity pathways and health risks not revealed by traditional toxicity testing. Information gathered from population-based studies will also contribute to knowledge about susceptible populations including children, the elderly, and immune-compromised individuals.

The second phase of the plan calls for developing a suite of representative human cell lines that can be used for assessing toxicity. At the same time, emphasis will be placed on developing high- and medium-throughput assays. The goal of these assays is to screen large numbers of chemicals in human cells and then follow up with limited, targeted testing in whole animals for chemicals that need further characterization.
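A minimal sketch of the triage logic such a screening tier implies appears below: chemicals whose activity across a panel of cell-based assays crosses a threshold are flagged for limited, targeted follow-up. The chemical names, activity scores, and threshold are all hypothetical.

```python
# Sketch of screening triage: flag chemicals for targeted in vivo follow-up.
# Names, activity scores, and the threshold are hypothetical.

assay_activity = {                # fraction of cell-based assays showing a response
    "chemical_A": 0.02,
    "chemical_B": 0.45,
    "chemical_C": 0.10,
    "chemical_D": 0.78,
}
FOLLOW_UP_THRESHOLD = 0.30        # above this, further characterization is warranted

flagged = [name for name, score in assay_activity.items() if score >= FOLLOW_UP_THRESHOLD]
print("Chemicals needing targeted follow-up:", flagged)  # ['chemical_B', 'chemical_D']
```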

In the third phase of the plan, scientists will assess the relevance and validity of the new assays. First, they will compare the results from the new tests with historical information obtained from traditional animal tests. In particular, information on chemicals that have large datasets from traditional tests will be valuable in determining the benefit of the new assays. It should be pointed out, however, that comparing the results of new assays with those of old tests may not be an “acid test,” since the animal tests are themselves less than perfect in predicting human toxicity. New methods for validation will also be needed. The new tests will screen many chemicals that have never been tested, generating a large bank of valuable data. Surveillance of human populations will also be part of this phase and will provide data to support the validation of the new assays. Data from human populations will be valuable in postmarket assessment of chemical toxicities as well, because they will reveal adverse effects as they occur in the population. In the final phase, the regulatory agencies will propose a suite of validated tests for use in place of selected traditional methods.
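One way such comparisons can be quantified is to compute the sensitivity and specificity of a new assay, treating the legacy animal result as the reference, while keeping in mind the caveat above that the reference itself is imperfect. The hit calls below are entirely hypothetical.

```python
# Hypothetical comparison of a new in vitro assay against legacy animal-test calls.
# All hit calls are invented; the animal "reference" is itself an imperfect standard.

legacy_animal = [True, True, False, False, True, False, True, False]
new_assay     = [True, False, False, False, True, True, True, False]

tp = sum(a and n for a, n in zip(legacy_animal, new_assay))              # both positive
tn = sum(not a and not n for a, n in zip(legacy_animal, new_assay))      # both negative
fp = sum(not a and n for a, n in zip(legacy_animal, new_assay))          # new assay only
fn = sum(a and not n for a, n in zip(legacy_animal, new_assay))          # animal test only

print(f"Sensitivity: {tp / (tp + fn):.0%}")  # animal positives also flagged by the new assay
print(f"Specificity: {tn / (tn + fp):.0%}")  # animal negatives also negative in the new assay
```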

Putting the Plan into Place

The vision and implementation plan are grand, and yet to achieve these goals the committee identified many other elements that must fall into place. There must be institutional changes in attitudes and expectations. The scientific community must foster and accept the ideas put forth in the vision and collaborate to move the science forward. It is also imperative that scientists in academia, industry, and government regulatory agencies work together to identify goals and milestones. These collaborations should also inform policy changes that recognize the value of the new tests and facilitate incorporating them into regulation.

In the years since the publication of the Tox21c report, progress has been made toward implementing this vision. The National Toxicology Program (NTP) set out its vision in a document called “NTP Roadmap—Toxicology in the 21st Century: The Role of the National Toxicology Program.” Soon after the Tox21c report was published, a Memorandum of Understanding was drawn up among the EPA’s National Center for Computational Toxicology, the NTP, the National Institutes of Health Chemical Genomics Center, and the Food and Drug Administration’s Center for Drug Evaluation and Research. This collaboration, called Tox21, commits these agencies to pursue together the goals put forth in the report.

Commitment to the goals set in the vision has expanded. In 2008, Francis Collins, now the director of the NIH, proposed “a shift from primarily in vivo animal studies to in vitro assays, in vivo assays with lower organisms, and computational modeling for toxicity assessments.” In 2009, the EPA published its “Strategic Plan for Evaluating the Toxicity of Chemicals,”[8] which serves as the agency’s blueprint for taking a leadership role in implementing the Tox21c vision. And at the 2011 annual meeting marking the fiftieth anniversary of the Society of Toxicology in Washington, D.C., Margaret Hamburg, commissioner of the FDA, embraced the vision, declaring, “With an advanced field of regulatory science, new tools, including functional genomics, proteomics, metabolomics, high-throughput screening, and systems biology, we can replace current toxicology assays with tests that incorporate the mechanistic underpinnings of disease and of underlying toxic side effects.”

Work to implement the vision of toxicology testing without animals is progressing. The EPA’s ToxCast Program is employing 650 state-of-the-art rapid tests, based mostly on human cells, to screen over two thousand environmental chemicals for potential toxicity.[9] In phase I of the program, ToxCast screened 309 chemicals (primarily pesticides) whose toxicity had been profiled over the last thirty years using animal tests; this phase was meant to provide proof of concept for the approach. Currently, in phase II, the investigators are screening one thousand chemicals, including industrial and consumer products as well as food additives and drugs that failed in clinical trials, in order to validate, expand, and apply predictive models of toxicity. In addition, the Tox21 collaboration has begun testing ten thousand additional chemicals. Data obtained from these testing efforts will be shared publicly to encourage reproduction and validation of the results.

Thomas Hartung, the director of the Johns Hopkins Center for Alternatives to Animal Testing (CAAT), has spearheaded a related joint effort to begin mapping the full set of human pathways of toxicity (the “human toxome”). Through a grant from the NIH director’s Transformative Research Award program, Hartung is collaborating with investigators at the Johns Hopkins Bloomberg School of Public Health, Brown University, Georgetown University, The Hamner Institutes for Health Sciences, Agilent Technologies, and the EPA ToxCast program to use integrated testing strategies in combination with computational models to study the pathways of toxicity for endocrine-disrupting chemicals. CAAT has also become the focal point and secretariat for the Evidence-Based Toxicology Collaboration, a group of individuals from government, industry, academia, and nongovernmental organizations formed to develop guidelines for validating new tests.

These new approaches to assessing the hazards of environmental chemicals essentially forecast the eventual end of the need for animals in toxicity testing. While the vision put forth in the Tox21c report still includes limited use of animals, the numbers will be dramatically reduced from those currently used. The advances in technology that can be applied to toxicity testing also serve as a bellwether for the future of animal use in biomedical research. The information gathered over the next several decades to inform regulatory decisions about environmental chemicals will contribute greatly to the body of scientific knowledge in other fields. Since much of biomedical research is focused on human health and disease, data obtained from studying the pathways of toxicity in human systems can be used to fill gaps in our knowledge of human biology and to shed light on the differences between humans and the animals used to model them.

There are potential drawbacks to any model system used to study disease. Certainly at this point, cell cultures cannot answer every scientific question. But it is likely that animals will eventually become obsolete for research to benefit humans. The paradigm shift in toxicology testing is the most significant force to date leading to the ultimate elimination of animal use for biomedical research and testing.

Acknowledgments

I would like to thank Alan Goldberg and James Yager for their critical review of this manuscript.

 

Joanne Zurlo is a senior scientist in the Department of Environmental Health Sciences and director of science strategy for the Center for Alternatives to Animal Testing at the Johns Hopkins Bloomberg School of Public Health. Previously, she was the director of the Institute for Laboratory Animal Research at the U.S. National Academy of Sciences, where she oversaw the publication of numerous reports related to the humane use of laboratory animals. Her interests lie in laboratory animal welfare and the application of new technologies to toxicity testing.

Footnotes

  1. U.S. Food and Drug Administration, “Milestones in Food and Drug Law History,” http://www.fda.gov/AboutFDA/WhatWeDo/History/Milestones/ucm081229.htm.
  2. National Research Council, Science, Medicine, and Animals: A Circle of Discovery (Washington, D.C.: National Academies Press, 2004).
  3. J. Zurlo, D. Rudacille, and A.M. Goldberg, Animals and Alternatives in Testing: History, Science and Ethics (New York: Mary Ann Liebert, 1994).
  4. National Research Council, Toxicity Testing in the 21st Century: A Vision and a Strategy (Washington, D.C.: National Academies Press, 2007).
  5. H. Olson et al., “Concordance of the Toxicity of Pharmaceuticals in Humans and in Animals,” Regulatory Toxicology and Pharmacology 32, no. 1 (2000): 56-67.
  6. National Research Council, Toxicity Testing for Assessment of Environmental Agents: Interim Report (Washington, D.C.: National Academies Press, 2006).
  7. National Research Council, Toxicity Testing in the 21st Century.
  8. U.S. Environmental Protection Agency Science Policy Council, “The U.S. Environmental Protection Agency’s Strategic Plan for Evaluating the Toxicity of Chemicals,” EPA 100/K-09/001, March 2009, http://www.epa.gov/spc/toxicitytesting/docs/toxtest_strategy_032309.pdf.
  9. U.S. Environmental Protection Agency, “ToxCast: Screening Chemicals to Predict Toxicity Faster and Better,” http://www.epa.gov/ncct/toxcast, accessed October 2, 2012.
