Lawrence Livermore National Laboratory

Technologies for Improving Health Care
Volume II

A collection of articles
UCRL-LR-124761 VOL 2

Center for Health Care Technologies
Lawrence Livermore National Laboratory
P.O. Box 808, L-452
Livermore, California 94551





Lawrence Livermore National Laboratory
Center for Healthcare Technologies

Lawrence Livermore is a national resource in science and engineering with over 40 years of basic and applied research experience. The University of California manages LLNL under contract to the Department of Energy. Of the approximately 7,500 UC career employees at the Lab, roughly 50% have scientific or technical degrees and about 1,000 have doctorates. Located on a square-mile site in Livermore, California, the main site includes a physical complex valued at approximately $5 billion. The annual budget for LLNL is roughly $1 billion.

Contents

2   Technologies for Improving Health Care
4   The Role of Cooked Food in Genetic Changes
24  The Genetic Contribution of Sperm: Healthy Baby or Not?
38  Ergonomics Research: Impact on Injuries
41  Modeling Human Joints and Prosthetic Implants
44  PEREGRINE: Improving Radiation Treatment for Cancer
52  On the Offensive Against Brain Attack
60  Optical Networks: The Wave of the Future
63  The Microtechnology Center: When Smaller is Better

Printed in the United States of America

Available from: National Technical Information Service, U.S. Department of Commerce, 5285 Port Royal Road, Springfield, Virginia 22161

This document was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor the University of California nor any of their employees makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or the University of California and shall not be used for advertising or product endorsement purposes.

U.S. Government Printing Office: 1997/583-059-60028


Prepared by LLNL under contract No. W-7405-Eng-48


Technologies for Improving Health Care

Introduction and Overview

J. Patrick Fitch, Ph.D.
Director, Center for Healthcare Technologies
(510) 422-3276

Livermore National Laboratory is fortunate to have significant technologies and capabilities that can be applied to improving health care. This special collection of articles, previously published in Science & Technology Review, focuses on health-care projects from our Center for Healthcare Technologies, and on technologies with significant potential for health-care applications. While this collection cannot cover all the Lab's capabilities, it will give the reader an overview of some of our accomplishments in health-care technology.

As a preface to the collection, we offer this brief update of our health-care technology activities. It describes the background, core capabilities, several new projects, and the philosophy for our activities. The reprints themselves include articles on specific health-care R&D activities in the Center for Healthcare Technologies (carcinogenic effects of cooking on food, the effects of radiation on reproduction, computer models for human joints and implants, ergonomics, radiation treatment planning, and stroke treatment and diagnosis). Also included is a subset of the many S&TR articles describing Lawrence Livermore's technologies and competencies that have significant potential for application in health care (high-bandwidth optical networks, and microtechnologies). The first "Technologies for Improving Health Care" was printed about a year ago and included health-care articles (osteoporosis, surgical tools, artificial hip joints, digital mammography and computer-assisted diagnosis) and technical-competencies articles (biology and biotechnology, lasers, and micropower radar). The first collection concluded with an overview of health-care technology activities from early in our history that serves as a benchmark of progress toward our original goals for the Center for Healthcare Technologies. Please contact us if you would like a copy of the first volume.

Background

Lawrence Livermore is operated in the public interest to focus science and technology on assuring our national security. We also apply that expertise to solve other important national problems in bioscience, energy, and the environment.

Within the biosciences, LLNL draws upon existing competencies and emphasizes three research areas: genomics, disease susceptibility and prevention, and health-care and medical biotechnology. The genomics research is a multidisciplinary approach to characterize the genetic material of human, animal, and microbial species. The disease susceptibility and prevention research seeks an understanding of the relation between genes, the proteins they produce, and biological function. In addition, we are researching the role of the environment in health, as well as new methods of assessing health risks from adverse environments.

The Center for Healthcare Technologies

Part of our mission in biosciences is to improve the quality of life through the development of next-generation, cost-effective health-care technologies. The Center for Healthcare Technologies coordinates and serves as a catalyst for this activity, with the goal of applying technology to significant areas of unmet clinical, social, and economic need. The Center, conceived as a partnership among the private sector, universities, and government, was established in 1993 after an internal working group identified the significant potential for National Laboratory involvement in health-care technologies. It is a coordinating and marketing center for LLNL activities in cost-effective health-care technologies.

The Center is a focal point for leveraging Lawrence Livermore's core technologies. Through the rapid formation of centrally managed, matrixed teams, the Center provides customer–partners with access to expertise and facilities in sensors, laser systems, microfabrication, modeling and simulation, information systems, biology and biotechnology, chemistry and material sciences, and engineering and system integration. The continued involvement of medical instrument manufacturers, pharmaceutical companies, medical institutions, and health-care providers is fundamental to its success.

Managing Milestones for Success

The core philosophy and approach of the Center begins with milestone management, where the initial milestones are designed to significantly improve understanding of the problem being addressed. With information from these initial milestones, a solution can be developed that avoids the financial and schedule penalties that result when perceived solutions drive a development process prior to understanding key issues. Our focused experimentation leverages sophisticated facilities and well-developed sensor and measurement capabilities to rapidly reach milestones. The Center also employs the world-class engineering capabilities at LLNL for developing prototypes rapidly. The Center's development programs often use computers to model and virtually test concepts and designs as a cost-effective alternative to building prototypes and as an adjunct to experimental studies. The computer modeling often allows a broader design space to be explored without prototyping. In short, our approach to project management enables discovery, reduces overall costs, and accelerates development schedules.

Funding for the Center is a combination of private-sector, government, and the Lab's discretionary sources. Our customer–partners include government (federal and state), health-care organizations, and corporations. Our corporate customer–partners are usually market- and technology-driven companies that desire fresh technology perspectives, quick access to advanced technology and competencies that complement their internal skills, and access to multidisciplinary programs and R&D without having to develop them. Our project teams address significant, unmet clinical and market needs that have the potential to reduce the human and economic costs of health care.

Our current development programs in health-care technologies include:
• Acoustic tomography and digital x-ray mammography for breast cancer screening
• Catheter technologies for heart disease and stroke
• Computer models of energy interactions with tissue, including failure mechanisms
• Dental applications
• Enhanced radiation-treatment planning software
• Improved understanding of angioplasty
• Information systems for public health and home health applications
• Lasers for surgical applications
• Medical photonics, including tissue characterization
• MRI-compatible surgical tools
• Musculoskeletal applications of finite element analysis
• Osteoporosis studies
• Repetitive strain injury treatment and prevention
• Sensors and systems for diabetes, dialysis, and stroke screening and diagnosis
• Sensor feedback for tissue welding.

For additional information or updates, please visit the LLNL World Wide Web pages, including our Center for Healthcare Technologies homepage (http://www-bio.llnl.gov/bbrp/healthcare/healthcare.html) and the Science & Technology Review on-line (http://www.llnl.gov/str), or send us e-mail at [email protected]. So we can best direct your inquiry, please be specific and include information about your organization, technical and application interests, and envisioned interactions with LLNL.

This page can be reached through the Center for Healthcare Technologies homepage (http://www-bio.llnl.gov/bbrp/healthcare/healthcare.html). From there, click on "About the Center for Healthcare Technologies" and then on "How the Center works."


Science & Technology Review July 1995

FOOD MUTAGENS: The Role of Cooked Food in Genetic Changes

When food derived from muscle is heated, potent mutagens are produced. For nearly two decades, LLNL researchers have studied the formation of toxic mutagenic compounds in red meats and other foods containing protein. This report, the first of two installments, focuses on the identification of food mutagens and measurement of their abundance in cooked foods as a function of cooking temperature and time.

According to a common, rather simplistic notion, we are what we eat. On a far more empirical level, epidemiological studies reveal a connection between diet and adverse health consequences. Many observed differences in cancer rates worldwide, including incidences of colon and breast cancer, are linked to variations in human diets.

Strong evidence suggests that mutations are the initiating events in the cancer process. In other words, the complex sequence of cellular changes ultimately leading to malignant tumors is thought to begin with structural changes (mutations) within the molecular units that make up the genes.

For 17 years, LLNL researchers have been investigating certain biologically active compounds in foods that can trigger tumors in animals, at least after exposure to high concentrations, by producing cellular mutations.

At first glance, identifying the mutagens that might put us at risk and understanding how they affect the body appear to be simple matters. In fact, the opposite is true. Consider just a few of the questions that must be addressed to understand the entire picture of diet-induced mutations and possible links to cancer. Exactly what compounds in foods are dangerous, how are the compounds formed during cooking, in what amounts are they present after cooking, and how toxic or cancer-causing are they? What chemical changes take place metabolically at the molecular level after the mutagenic substances are consumed? For example, what role do metabolic enzymes play, how is DNA affected, and how might tumors be triggered in the body's somatic cells? What chemical, tissue, animal, and human models might be useful to estimate risk to the human population? Are all people affected similarly, or are some resistant to cancer-causing effects? If people vary in cancer incidence, what accounts for the differences in susceptibility?


environmental sciences, and forensics (Figure 1). Our research requires tools such as accelerator mass spectrometry and nuclear magnetic resonance spectrometry, to name a few. The Laboratory is one of the few places that brings together the broad expertise and state-of-the-art analytic tools required to fully understand each important aspect of the problem of mutagens and carcinogens in the human diet. The way we became involved in this field of research has much to do with our role as a national laboratory with interdisciplinary research programs.

Mutagens are the damaging agents that can structurally change the molecular units that make up the genes (that is, the genetic material, DNA) or the relation of one chromosome to another. For many years, LLNL investigators have been studying some of the ways that x rays, ultraviolet light, and some chemicals in the environment can act as mutagens. Carcinogens are agents that incite the development of a cancerous tumor or other malignancy. Some 80 to 90% of mutagenic substances are also carcinogenic. More than 50 years ago, scientists painted the skin of mice with extracts from heated animal muscle and found that the extracts were carcinogenic, but the research went no further.

By the early 1970s, Bruce Ames at the University of California, Berkeley, had developed a biological test to measure the mutagenic potency (mutagenicity) of substances.1* In the late 1970s, T. Sugimura, who directed research at the National Cancer Center in Tokyo, applied the Ames method and published the fact that smoke condensate from cooking and the charred surface of broiled fish and beef were mutagenic.2 One year later, Barry Commoner, working at Washington University, St. Louis, used the Ames method to show that cooking temperature and time affect the formation of mutagens in food.3

The news that cooking amino acids (the building blocks of proteins) and muscle-containing foods could be dangerous triggered considerable scientific interest around the world. In 1978, biomedical researchers at LLNL were working on the problem of mutagenic chemicals produced by oil shale retorting and coal gasification. Because of our combined expertise in chemical analysis (including different types of chromatography and spectrometry), biological analysis (including the Ames mutation assay), and our emerging program in genetics and toxicology, we received a multiyear contract from the National Institute of Environmental Health Science (NIEHS) to take a detailed look into the problem of food mutagens. As it turns out, what happens when oil shale and coal are heated is not so different from some of


Clearly then, isolating, identifying, and assessing the biological activity of mutagenic compounds in food is a difficult problem requiring extensive effort. Table 1 is an overview of some of the research issues addressed and analytic methods used in this field of investigation. This series of articles focuses on the first five questions under "Issues" listed in Table 1. A second installment in Science and Technology Review will address the remaining issues.

A simple analogy can help put a key feature of our work into perspective. The compounds we have been investigating for nearly two decades, the aromatic heterocyclic amines, are present in cooked foods at very low levels, in the range of about 0.1 to 50 parts per billion. Isolating material at the part-per-billion level is equivalent to pouring a jigger of Scotch into the hold of a full supertanker and then trying to retrieve it again. Although the compounds we study are present in very small amounts, they are also the most mutagenic compounds ever found, and they produce tumors in mice, rats, and monkeys. Such knowledge, combined with the fact that these compounds are present in many foods characteristic of the Western diet and that certain diets are known to influence tumors at several body sites, gives our research an extra sense of urgency.
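The part-per-billion figure in the analogy above can be checked with a little arithmetic. In the sketch below, the jigger and tanker-hold volumes are illustrative assumptions chosen to make the ratio land near one part per billion; they are not figures from the article:

```python
# Rough check of the "jigger of Scotch in a supertanker" analogy for one
# part per billion. Both volumes are illustrative assumptions.
jigger_ml = 44.0                 # one US jigger, roughly 44 mL
supertanker_hold_ml = 44e9       # assume a ~44,000 cubic-meter hold, in mL

ratio = jigger_ml / supertanker_hold_ml
print(f"{ratio:.1e}")            # about one part per billion

# At the 0.1-50 ppb levels quoted above, the mutagen mass in a
# hypothetical 100-gram cooked-meat sample is tiny:
sample_g = 100.0
for ppb in (0.1, 50):
    micrograms = sample_g * ppb * 1e-9 * 1e6   # grams -> micrograms
    print(f"{ppb} ppb -> {micrograms:.3g} µg in a {sample_g:.0f} g sample")
```

At even the highest quoted level, the analyte amounts to only a few micrograms per serving, which is why the extraction and purification steps described later demand such sensitivity.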

LLNL’s Approach

The single aspect that best characterizes our research on food mutagens and carcinogens, and sets our work apart from almost all other efforts around the world, is its multidisciplinary nature. Biomedical scientists at LLNL routinely collaborate with investigators working in analytical chemistry, synthetic chemistry, quantum chemistry, physics, the


Table 1. Some of the required interdisciplinary research, analytic methods, and tools needed to understand the possible connection of mutagens in cooked food to cancer.

Issues: What cooked foods contain mutagens? What are the mutagenic compounds? What amounts are produced?
Research required: Chemical extraction and purification; identification and quantification; proof of structure; synthesis of isomers.
Analytic methods and tools: Gas chromatography (GC); liquid chromatography (LC); mass spectrometry (MS); high-resolution mass spectrometry (HRMS); nuclear magnetic resonance (NMR) spectrometry; Ames/Salmonella test; monoclonal antibodies.

Issues: By what mechanism are mutagens formed during cooking?
Research required: Study precursors and reaction conditions in chemical models; aqueous vs dry heating; vary cooking temperature; heavy isotope incorporation.
Analytic methods and tools: Modeling mutagens from creatine, creatinine, amino acids, and sugars.

Issues: How potent (mutagenic) are the compounds?
Research required: Mutagenicity research (e.g., use chemical to induce mutations, and count frequency of mutant cells or chromosomal changes).
Analytic methods and tools: High-performance liquid chromatography (HPLC); Ames/Salmonella test; animal mutation studies (mice, Chinese hamster ovary (CHO) cell cultures).

Issues: How are mutagens activated metabolically?
Research required: Study chemical intermediates (bioactivation pathways); modulate metabolism in cell models.
Analytic methods and tools: Cell models (mammalian cell systems, bacterial cell cultures); enzyme inhibitors; radioactive labeling.

Issues: How is DNA affected?
Research required: DNA damage and repair; DNA binding analysis; DNA adduct analysis; models (whole animals in vivo, animal cells in culture in vitro, bacterial assays).
Analytic methods and tools: Computational chemistry analysis; 32P-postlabeling of DNA adducts; accelerator mass spectrometry (AMS).

Issues: How are tumors induced?
Research required: Carcinogenicity research (e.g., assess tumor induction in various tissues in laboratory animals).
Analytic methods and tools: Animal models (monkeys, rats, mice).

Issues: What are the health risks from exposure?
Research required: Dose-response assessment in humans.
Analytic methods and tools: 32P-postlabeling of DNA adducts; accelerator mass spectrometry.

Issues: What people are affected? Who is most at risk?
Research required: Adduct formation as an indicator of exposure; risk assessment; extrapolation from animal studies.
Analytic methods and tools: Epidemiology.

*All references are at the conclusion of the third part of this installment on p. 25.

Figure 1. Cyndy Salmon, one of the researchers in the LLNL food mutagen research group, pours a cooked food sample into an extraction tube to prepare it for subsequent analysis.


In the foods that make up the Western diet, the most common mutagens belong to a class collectively called the amino-imidazoazaarenes (AIAs). Not all the known food mutagens are AIAs, but the commonly found ones are. As shown in Figure 2, AIA compounds have one or two aromatic ring structures fused to the imidazole ring. They also have an amino group (NH2) on the number-2 position of the imidazole ring and can have methyl groups (CH3) of varying number and location.

Of the list of toxic substances known to be produced during cooking, the most important may well be the AIAs. Also referred to as heterocyclic amines, these compounds are potent mutagens produced at normal cooking temperatures in beef, chicken, pork, and fish when fried, broiled, or grilled over an open flame. The pan residues that remain after frying also have high mutagenic activity, indicating that meat gravies can be a source of exposure. Our research suggests that smoke from cooking muscle meats is mutagenic as well, but any such air exposure is likely to be far less than that from eating the cooked food. Other foods, such as cheese, tofu, and meats derived from organs other than animal muscle, have very low or undetectable levels of AIA mutagens after they are cooked.

Extraction

Analyzing cooked foods for mutagens requires many different methods (Figure 3). The toxic compounds in food must first be


the chemical processes that occur whena hamburger is cooked.

Our work on food mutagens also parallels our interest in the mechanisms by which pesticides and many other toxic chemicals can elicit adverse biological responses. For example, benzo[a]pyrene is a widely studied pollutant found in combustion products, and it has been isolated from burned fat and cigarette smoke. However, this compound becomes carcinogenic only after it interacts with DNA following oxidation by metabolic enzymes. The production of such enzymes and their roles in changing the chemical reactivity of compounds are part of the body's normal biological response to certain foreign substances. We are learning that similar "metabolic activation" takes place before food mutagens become harmful.

Today, our research is funded primarily by the National Cancer Institute, with additional support from the Laboratory Directed Research and Development program and other sources. There are approximately 50 other prominent research teams worldwide studying the heterocyclic amines. However, except for one other program in Japan, ours is the only team that brings a truly multidisciplinary approach to the problem of understanding mutagens and carcinogens associated with cooked food and their consequences at the cellular, genetic, and molecular levels.

A Problem of Strategy

Strictly speaking, it is inaccurate to say that cooked foods contain mutagens. More precisely, certain cooked foods contain premutagenic substances (promutagens) that are metabolized by enzymes naturally present in body tissues, leading to the formation of one or more reactive mutagenic substances. Conventionally, however, "promutagen" and "mutagen" are used synonymously, and we have followed that practice here unless the point being made about the research demands a precise distinction.

At the outset of our research, we were faced with problems of strategy. Studying substances that are present at very low concentrations imposes many research constraints. If we focused on only a few foods, as seemed wise, then our results and their implications for public health might be misinterpreted. Instead, we decided to examine the foods that are the principal sources of cooked protein: meats (any muscle-containing food, including fish), eggs, beans, cheese, and tofu. Whereas we initially focused on meats, especially fried beef, we have now expanded the range of foods to include cooked breads and grain products, heated flour from many different plant sources, and meat substitutes.

Over the years, our research has also evolved from relatively simple concepts and approaches to more sophisticated ones. Initially, we had to identify the mutagenic compounds in heated foods because many were not known (that is, neither synthesized nor analyzed). Thus, we focused our efforts on identifying the chemical composition and structure of mutagens, assessing how different cooking procedures affect the formation of mutagens, and determining the amount (abundance) of the mutagenic products. Even though chemical identification and quantification are still important activities, our work has expanded to include many other aspects of the problem.

For example, we developed techniques to help us learn how mutagens are metabolized in the body. We use animals as models to understand complex metabolic pathways and are developing cell-culture methods that model human metabolic systems. One particularly important issue is how metabolites (the intermediate products formed by enzymes) interact with the genetic material. We need to know exactly what takes place at the molecular level, including covalent binding with and structural changes to specific components of DNA. This work taps the skills and facilities in several related research programs, including the Human Genome and DNA repair projects. (See the April/May 1992 and April 1993 issues of Energy and Technology Review for more background on these two programs.)

In assessing the effects of low-level exposure to food mutagens, we make use of Laboratory expertise in accelerator mass spectrometry (AMS). Yet another part of the story is the differences among humans in susceptibility to cancer, which has become our newest effort. In essence, our success in recent years is derived not so much from simply applying standard analytical methods by themselves as from combining biological analysis with state-of-the-art analytical tools available at LLNL to study all aspects of the health risks, ranging from dietary exposure to effects in model systems and humans.


FOOD MUTAGENS: The Challenge of Identification

Figure 2. Structure of the amino-imidazoazaarenes (AIAs), also called heterocyclic amines. (a) The imidazole ring is common to all AIAs. Numbers show the position of atoms on this ring. (b) In the AIAs, an amino group and one or more methyl groups are attached. (c) IQ, one of the potent AIAs found in cooked meats, has two aromatic rings attached to the imidazole ring. The mutagenic activity of the different heterocyclic amines varies by several orders of magnitude and can be increased when one or more additional methyl groups are present.


Figure 3. Some of the steps required to extract, separate, purify, and confirm the potency and chemical structure of mutagens in cooked food. These steps show a typical sequence of events during research on a given mutagen. However, the sequence shown here can vary depending on whether our objective is to study a known mutagen or to assess the properties of a new candidate. Each of the steps is described in more detail in the text.

Step 1. Extract mutagens from cooked food: solid-phase extraction.
Step 2. Separate and purify the many different compounds in the complex mixture: high-performance liquid chromatography (HPLC), with pump, separation column, detector, and collection of the purified fraction.
Step 3. Detect mutagenic activity: Ames/Salmonella test. Combine food extract, bacteria, and enzymes, then count revertant colonies; in parallel, combine food extract and bacteria alone and count revertant colonies as a baseline measurement.
Step 4. Subsequent characterization: mass spectrometry to determine molecular weight (MW) and chemical composition (for example, IQ at 198 MW and PhIP at 224 MW); nuclear magnetic resonance (NMR) spectrometry to determine the definitive structure; spectrum review against matching reference spectra to confirm mutagen identity; an ultraviolet chromatogram (absorbance vs retention time) to detect peak absorbance at a particular retention time; and a fluorescence chromatogram (fluorescence vs retention time) to increase sensitivity for fluorescence only.


needed for the next step: testing for mutagenic potency.

Detection of Mutagenicity

The most widely used detection method for mutagenic potency is the Ames/Salmonella mutation test,1 which is described in more detail in the box on p. 16. This test for mutagenic activity is exquisitely sensitive and relatively inexpensive. It is also convenient because each analysis requires only 48 hours, and many samples can be analyzed in parallel (Figure 5).

The essential point to remember is that the Ames test (step 3 in Figure 3) gives us a number by which we can express the mutagenic activity of a given compound or food extract. This number by itself for a single mutagen would have little meaning. However, we now have numbers for most of the known mutagens in cooked foods and for over a hundred additional mutagens from other sources, so we can compare the mutagenic activity of many different structural types. When the Ames test is used during initial screening for new mutagens and carcinogens, it serves as a guide to the chemical purification of biologically active molecules. It can also be used to test and compare the potency of newly synthesized chemicals.
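The single "number" from an Ames/Salmonella test is commonly expressed as revertant colonies induced per microgram of test material, taken from the slope of a dose-response series; the zero-dose plates supply the spontaneous (baseline) revertant count. The sketch below illustrates that calculation with hypothetical doses and colony counts; it is not the Lab's actual analysis procedure:

```python
# Hedged sketch: turning Ames/Salmonella plate counts into one
# mutagenic-activity number (revertants per microgram). The doses and
# colony counts below are hypothetical, not data from the article.
doses_ug = [0.0, 0.5, 1.0, 2.0]       # extract mass per plate, micrograms
revertants = [25, 275, 525, 1025]     # colonies counted after 48 hours

# Least-squares slope of revertant count vs. dose. The zero-dose plate
# (25 colonies here) is the spontaneous background that the slope
# automatically discounts.
n = len(doses_ug)
mean_x = sum(doses_ug) / n
mean_y = sum(revertants) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(doses_ug, revertants)) \
        / sum((x - mean_x) ** 2 for x in doses_ug)

print(f"activity ≈ {slope:.0f} revertants per µg of extract")
```

A slope like this lets extracts of very different potency be ranked on one scale, which is how the comparisons across "many different structural types" mentioned above become possible.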

Characterization

Once a mutagen has been detected, we can characterize it further through a variety of analytical methods (step 4 in Figure 3). The type and sequence of tests depend on our objective for a given mutagen (Figure 6). For example, we can routinely determine the molecular weight through mass spectrometry and study the detailed chemical composition (the number of hydrogen, carbon, and nitrogen atoms) by high-resolution mass spectrometry (HRMS). In mass spectrometry, complex compounds are broken up into ionized fragments, which are accelerated through a magnetic field until they strike a detector. Because the path of an ionized fragment through the field is determined by its inertia, we can determine the mass of the various ions by their spatial distribution on the detector. Ultraviolet absorbance spectrometry and fluorescence spectrometry are other identification methods that are often combined with chromatography.
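The deflection principle described above follows from textbook physics: an ion of charge q accelerated through potential V reaches speed v = sqrt(2qV/m), and a magnetic field B bends its path into an arc of radius r = mv/(qB), so heavier ions strike the detector farther out. The sketch below uses the molecular weights of IQ (198) and PhIP (224) from Figure 3, but the accelerating voltage and field strength are illustrative assumptions, not parameters of any LLNL instrument:

```python
# Hedged sketch of magnetic-sector mass spectrometry physics: ions of
# different mass follow arcs of different radius, separating spatially.
# Voltage and field values below are illustrative assumptions.
import math

E_CHARGE = 1.602e-19   # elementary charge, coulombs
AMU = 1.661e-27        # atomic mass unit, kilograms

def arc_radius(mass_amu, accel_volts=2000.0, b_tesla=0.5, charge=1):
    """Radius of an ion's circular path: r = m*v / (q*B)."""
    m = mass_amu * AMU
    q = charge * E_CHARGE
    v = math.sqrt(2 * q * accel_volts / m)   # from q*V = (1/2)*m*v**2
    return m * v / (q * b_tesla)

# IQ (MW 198) and PhIP (MW 224) land at measurably different radii:
for name, mw in [("IQ", 198), ("PhIP", 224)]:
    print(f"{name}: r = {arc_radius(mw) * 100:.1f} cm")
```

Because r grows with the square root of the mass, even the 26-unit difference between IQ and PhIP shifts the arc by roughly a centimeter at these assumed settings, enough for spatially separated detection.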

Substantially more effort is required if we want to identify a mutagen for the first time. For an unknown compound, we first need information on the atomic composition and the position of atoms in the molecule. This work requires HRMS and nuclear magnetic resonance (NMR) spectra (step 4 in Figure 3)


chemically extracted before purification. Over the years, we and other researchers have dramatically improved on the original extraction techniques that required various acids or mixed organic solvents in multistep schemes.

We now use solid-phase extraction, which is based on a method first described by G. A. Gross in 1990.4 After homogenizing cooked food in a blender to obtain a uniform sample, we can extract a sample quickly and efficiently by passing it through a series of small tapered tubes containing chemically activated particles (see step 1 in Figure 3 and Figure 4). The small amounts of organic solvents that are needed during this solid-phase extraction generate a minimum of hazardous waste.

Separation and Purification

We use high-performance liquid chromatography (HPLC) for final separation and purification of the extracted compounds in a food sample (see step 2 in Figure 3). Liquid chromatography is a standard technique in chemistry labs. In HPLC, a liquid mixture is pumped under high pressure through a long, narrow tube filled with fine silica particles. This material differentially retards the passage of different molecular components so that each one exits after a characteristic delay, or retention time. Our recent solid-phase extraction method together with HPLC allows excellent quantification from small samples (about a tenth of a hamburger patty, or one bite) and a 1- to 2-day turnaround time for results.

For unknown mutagens, a separation is carried out in several stages. We obtain about 100 fractions at the final stage, where a “fraction” is one portion of the sample material that is captured in a separate vial after exiting the HPLC detector. One fraction at the final stage of separation contains as little as a billionth of the starting material. However, because the extracts from meat and other food products cooked at elevated temperatures are tremendously potent, only a very small sample is


Figure 4. Researcher Cyndy Salmon uses solid-phase extraction to extract a sample by passing it through a series of small cylinders containing small amounts of organic particles.

Figure 5. Julie Avila, one of the researchers in the LLNL food mutagen research group, tests mutagens in cooked beef using the Ames/Salmonella test. (a) The food sample is added to a mixture containing bacteria, nutrients, and enzymes needed for metabolism, and then (b) poured onto a petri plate. (c) Close-up of growing bacterial colonies (called revertants) after 48 hours. Counting the colonies gives us a number that represents the sample’s mutagenic activity.


Figure 6. Kathleen Dewhirst combines methods, such as gas chromatography and mass spectrometry or liquid chromatography and mass spectrometry, to characterize the food mutagens in cooked meat. Mass spectrometry allows us to determine the molecular weight of a mutagen.



together with synthesis of all possible isomers. Isomers are two molecules with the same number of atoms and molecular weight but different structures. NMR spectrometry requires the highest quantity and sample purity of all the analytical methods, but it gives us the most definitive information on chemical structure. The exact chemical structure of a given mutagen can be proven by comparing it with a known standard that is synthesized in the laboratory.

After the physical and chemical properties of a mutagen are known, we can use the information to determine whether that mutagen is present in other types of food. This approach gives us a way to determine the dose of a given compound in our diet and to assess the human risk associated with ingesting that compound.

The Major Food Mutagens

Table 2 is a summary of the 14 major mutagens that have been identified in at least one type of heated food to date.5 Notice that some of the compounds have the same molecular weights. For example, 4-MeIQx and 8-MeIQx are an isomer pair, and so are Trp-P-2 and Me-AαC. The ultraviolet absorbance spectra of two different compounds may be identical when they are isomer pairs and differ only, for example, in the position of a methyl group on one of the rings. The similar properties of isomers make them difficult to separate using chromatography. Likewise, other analytic tools do not always differentiate between isomers. Additional mutagenic isomers have been synthesized for most of the food mutagens in Table 2. The presence of isomers means that we need to apply several different criteria for identification purposes because no single property, such as an absorbance spectrum, can uniquely identify all of the mutagens.

The compounds listed in Table 2 are not the only mutagens or carcinogens in food. Researchers at LLNL and elsewhere have identified other biologically active compounds, including additional aromatic amines, nitrosamines, and hydrazines. However, the heterocyclic amines we have been investigating are among the most abundant and potent substances detected to date. Because of their presence in cooked meats that are common in Western diets and their association with certain types of cancer in laboratory animals, they warrant detailed investigation.


Table 2. Major mutagens that have been identified in at least one type of heated food, such as fried beef or fish. The names of mutagens first identified at LLNL are in color.

Short name     Chemical name                                        Molecular weight

Phe-P-1        2-amino-5-phenylpyridine                             170
TMIP           2-amino-n,n,n-trimethylimidazo[4,5-f]pyridine        176
AαC            2-amino-9H-pyrido[2,3-b]indole                       183
Glu-P-2        2-aminodipyrido[1,2-a:3´,2´-d]imidazole              184
Trp-P-2        3-amino-1-methyl-5H-pyrido[4,3-b]indole              197
Me-AαC         2-amino-3-methyl-9H-pyrido[2,3-b]indole              197
IQ             2-amino-3-methylimidazo[4,5-f]quinoline              198
IQx            2-amino-3-methylimidazo[4,5-f]quinoxaline            199
Trp-P-1        3-amino-1,4-dimethyl-5H-pyrido[4,3-b]indole          211
4-MeIQ         2-amino-3,4-dimethylimidazo[4,5-f]quinoline          212
8-MeIQx        2-amino-3,8-dimethylimidazo[4,5-f]quinoxaline        213
4-MeIQx        2-amino-3,4-dimethylimidazo[4,5-f]quinoxaline        213
PhIP           2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine      224
4,8-DiMeIQx    2-amino-3,4,8-trimethylimidazo[4,5-f]quinoxaline     227

The Ames/Salmonella Test: A Key to Our Research

Our success in detecting and identifying mutagens in cooked foods is made possible by the interplay of many different types of chemical analyses, including chromatography and mass spectrometry (Figure 3), and biological methods. The Ames test is an exquisitely sensitive biological method for measuring the mutagenic potency of chemical substances. The Ames test by itself does not demonstrate cancer risk; however, mutagenic potency in this test does correlate with the carcinogenic potency for many chemicals in rodents.

The test was developed in 1975 by Bruce Ames and his colleagues at the University of California at Berkeley. The Ames method is based on inducing growth in genetically altered strains of the bacterium Salmonella typhimurium. To grow, the special strains need the amino acid histidine. However, when the chemical agent (mutagen) that is being studied is given to bacteria, some of the altered Salmonella undergo mutations. Following a particular type of mutation, the bacteria can grow like the original “wild” (unaltered) strains without histidine. Because the mutant bacteria revert to their original character with regard to the nutrient histidine, they are called “revertants.”

The Ames test yields a number—specifically, the number of growing bacterial colonies—which is a measure of the mutagenic activity (potency) of a treatment chemical. This value is often expressed as the number of revertants per microgram of a pure chemical (mutagen) or per gram of food containing that mutagen. Some pure mutagens result in hundreds of revertants per microgram, but many of the substances we have tested from cooked meat produce hundreds of thousands of revertants per microgram. For example, in one strain of bacterium, the PhIP mutagen results in about 2000 revertants per microgram, whereas another cooked food mutagen, IQ, results in 200,000. The illustration at the right shows how a food extract is tested for its mutagenic activity.

In brief, a test begins by placing about 100 million Salmonella bacteria in a petri dish containing a nutrient agar lacking histidine. A few bacteria will spontaneously revert in the absence of mutagens. Counting these revertant colonies gives us a baseline against which to check the validity of our complex laboratory procedures. In a separate but essentially identical histidine-deficient petri dish, another batch of

Salmonella bacteria are given a mutagen plus mammalian enzymes required for metabolism. (Adding such enzymes gives us a more realistic measure of the mutagenicity of a substance for mammals. The enzymes are typically supplied from liver cell extracts of rats given substances to increase levels of metabolizing enzymes.) Revertant bacteria grow into visible colonies. We simply count the colonies (equal to the number of revertants) after a standard time (48 hours) under standard growing conditions (37°C).

Different strains of altered Salmonella bacteria are available for the Ames test. The strains vary in sensitivity to specific mutagens. We used two strains, known as TA98 and TA100, for most of our recent work on fried beef and cooked grains. These strains were generously supplied by Bruce Ames.
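The arithmetic that turns a plate count into a mutagenic-activity number is simple. Here is a minimal sketch (our own illustration; the plate counts below are hypothetical, not measured values): subtract the spontaneous-revertant baseline from the colony count, then normalize by the amount of food represented on the plate.

```python
def mutagenic_activity(colonies, spontaneous, grams_of_food):
    """Background-corrected revertants per gram of food extract."""
    if grams_of_food <= 0:
        raise ValueError("plate must carry a positive amount of food extract")
    # Spontaneous revertants appear even without a mutagen, so they are
    # subtracted before normalizing.
    return (colonies - spontaneous) / grams_of_food

# Hypothetical plate: 460 colonies counted, 20 spontaneous revertants on the
# control plate, extract equivalent to 2 grams of cooked food.
activity = mutagenic_activity(460, 20, 2.0)
print(activity)   # 220.0 revertants per gram
```

The same formula, with micrograms of a pure chemical in the denominator, gives the revertants-per-microgram potency figures quoted above for PhIP and IQ.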

[Illustration: Control group: pour bacteria and food extract on nutrient agar; incubate for 48 hours at 37°C; count spontaneous revertants for a baseline measurement. Mutagen group: pour bacteria, food extract, and enzymes on nutrient agar; incubate for 48 hours at 37°C; count colonies to obtain the number of revertants per gram of food.]


of 244. However, the presence of this new mutagen in food has not been verified.

Variations in Cooking

During the actual cooking of meat patties, water and precursors move to the hot, drying contact surfaces of the meat, where reactions occur. Such migration, with water serving as the transport vehicle, may account for the concentration of precursors near the meat surface, which we have observed in several investigations. However, different cooking practices can lead to very different results. For example, some mutagens are produced at all frying temperatures, whereas others may require higher temperatures. Furthermore, when hamburger patties are grilled at high temperature over an open flame, we can account for less than 30% of the mutagens in the meat. When cooking over an open flame, polycyclic aromatic hydrocarbons (different from AIA food mutagens) arise from fat that drips from the meat—an entirely different mechanism from those that produce heterocyclic amines from the heated muscle tissue itself. Thus, the formation of mutagens is complex and highly dependent on the details of cooking.

Preparation Principles

Given this complicated picture, what statements about food preparation can we make with any certainty? Here is a summary of some of the important things we have learned about the cooking process:
• Food mutagens can be produced both with and without water present. Early reports suggested that water is essential to produce food mutagens. In later studies, dry heating actually gives a greater percentage of certain types of mutagens compared with aqueous heating. We know, for example, that the mutagen PhIP is formed relatively efficiently in dry heating reactions. We have also found that water tends to inhibit the formation of IQ-type mutagens.
• Microwave pretreatment of meat can reduce the formation of heterocyclic amine mutagens. When meat is microwaved for a few minutes, a clear liquid is released, which contains many of the precursors of mutagens. When the resulting liquid is drained off before frying, our studies show that mutagenic activity, as measured by the Ames test, and the amount of


Cooking practices can cause large variations in the total mutagenic activity and in the amount of specific mutagens present in muscle-containing foods. For example, the amount of mutagens in a cooked hamburger from a restaurant varies considerably from one vendor to another and is often several-fold lower than that in a hamburger prepared in our laboratory (and presumably at home). The variation has much to do with the details of food preparation, such as cooking temperature and cooking time. It is becoming increasingly clear that there can be many different routes and rates of formation for the different mutagens we are investigating. Thus, a major concern is to identify the precursors and specific reaction conditions that lead to the formation of mutagens during cooking. With this information, it may be possible to devise strategies to reduce or prevent the formation of mutagens.

Precursors

The reactions that produce mutagens in cooked food are not merely the random coalescence of small fragments. We now know that the heterocyclic amines can be formed from single amino acids (the building blocks of proteins) or proteins when these precursors are heated alone. However, the temperatures required to produce mutagens from amino acids or proteins by themselves are higher than those normally used in cooking.

Muscle meats contain creatine and creatinine. At more typical cooking temperatures (greater than 150°C), one or both of these two precursors react with the free amino acids and, in some cases, sugars to form a series of heterocyclic amines more easily.

Modeling the Formation

We have modeled the formation of the important mutagen, PhIP (pronounced “fip”), starting with the amino acid phenylalanine mixed with either creatine or creatinine, both of which are found naturally in animal muscle. When phenylalanine and creatine are mixed in the proportion normally found in raw beef and dry heated at 200°C, PhIP is produced in amounts comparable to those found after cooking beef. Figure 7 shows the structures of phenylalanine and creatine and of the PhIP molecule that is produced.

We have modeled the formation of several other food mutagens in additional laboratory experiments. For example, the mutagen IQ can be formed with creatine, creatinine, and any of four different amino acids, again suggesting many different possible routes of formation.

Model reactions can help us identify new mutagens as well. In one case, dry heating three precursors known to be present in meat led us to identify a mutagen with two amino and two methyl groups and a molecular weight



Food Mutagens: The Cooking Makes a Difference

Figure 7. Modeling the formation of the potent mutagen PhIP. We combined two precursors, phenylalanine and creatine, in amounts naturally found in raw beef. After simple dry heating, PhIP was produced in yields comparable to those we obtain in beef after the cooking process. We have also labeled the two precursors with heavy isotopes to track the incorporation of specific atoms within each precursor molecule into the PhIP molecule. Such work shows unequivocally the source of atoms that make up the mutagenic product.


important single source of heterocyclic amines in the typical American diet. However, several other popular cooked meats, including fish, chicken, and pork, have been shown to produce a potent response in the Ames test.

Of the several different heterocyclic amine mutagens now identified, we wanted to know which ones are most important (that is, most abundant by mass) in cooked muscle meats. To help answer this question, we compared the results of many studies from LLNL and elsewhere. Specifically, we compared the mass percentages of different mutagens in cooked muscle meats, including fried beef, broiled fish, and commercially prepared beef extract. We found that the results were generally consistent among different laboratories even when different analytical methods were used.

First, we did not detect significant levels of three mutagens, Trp-P-1, Trp-P-2, and Glu-P-1, in any of our meat samples. Second, we found that four compounds, IQ, 8-MeIQx, 4,8-DiMeIQx, and PhIP, contribute about 80% of the mutagenic activity in the cooked muscle foods that were studied. Third, we found that PhIP alone can account for a startling 83 to 93% of the mass of these four mutagenic compounds. Clearly, the analysis of PhIP is important because it appears to be, by far, the most abundant heterocyclic amine by mass in commonly eaten cooked meats. Because PhIP is as carcinogenic as the other mutagens, its analysis becomes even more essential.

We examined the production of PhIP and other mutagens in beef at different cooking temperatures and times. The box at the right gives the details on how we prepare our fried beef. Figure 8 shows the mutagenic activity, as measured by the Ames test, of all the mutagens combined in a gram of beef


heterocyclic amine are 90 to 95% lower than they are in meat samples that are not pretreated by microwave cooking.6 The box below discusses this and other methods that have been tested to reduce the formation of mutagens.
• Different cooking methods produce quite different results. In general, frying, broiling, and flame grilling muscle meats produce more heterocyclic amines and mutagenic activity than other methods. Stewing, steaming, and poaching produce little or no mutagenic activity. Roasting and baking have variable responses.
• Heating temperature is extremely important, as is the time of cooking at a given temperature. Our extensive findings on this important topic are best discussed according to the type of food product.

Cooked Muscle Meats

Fried beef patties appear to be the most commonly eaten cooked meat with the highest mutagenic activity. Because of the high intake of fried beef (based on surveys from the U.S. Department of Agriculture and the Department of Health, Education, and Welfare), this food may be the most


Can the Mutagens in Cooked Beef Be Reduced?

Since mutagens were first observed in cooked meats, researchers in several different laboratories have explored various ways to reduce the amounts produced during food preparation. They have found that mutagenic activity can be lowered by adding antioxidants, soy or cottonseed flour, tryptophan, and various other food additives or sugars, either alone or with starch. However, none of these additives is widely used commercially or at home. Consumer acceptance and possible changes in the taste, texture, and nutritional content of the cooked food need to be explored further.

Surveys have shown that more than 90% of American homes have a microwave oven. As a practical way to reduce the mutagen and fat content of beef, we studied microwave pretreatment of hamburger for various times before conventional frying at either 200 or 250°C for 6 minutes per side. Our tests used a standard commercial microwave oven set at 80% power for 0 to 3 minutes. The results were dramatic.

We found that the mutagen precursors in hamburger (creatine, creatinine, amino acids, and glucose), water, and fat were reduced up to 30% in the microwaved patties. The graph shows the amount of creatine remaining in the meat as a function of microwave pretreatment time. The fairly rapid loss of water-soluble mutagen precursors and fat takes place in the clear liquid that is released after microwaving. When this liquid is discarded before frying, mutagens in the cooked meat are reduced up to approximately 90% following frying, as shown in the table.

How is it possible that 90% of the mutagens disappear when the precursors are reduced by only 30%? The difference can be explained by second-order reaction kinetics. For example, if two reactants are needed, and each is reduced by 30%, then the product would be reduced by about 50%. If three reactants are required and all are reduced by 30%, the product would be reduced by 70 to 80%. It is also possible that some threshold level of precursor is necessary to produce a mutagenic response or that some inhibitor is formed after microwave pretreatment. As with other techniques to reduce mutagen content, the palatability of food may ultimately govern consumer acceptance of microwave pretreatment.
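The kinetics argument above can be checked with a few lines of arithmetic (a sketch under the simplest mass-action assumption; thresholds, inhibitors, and the real frying chemistry can push the numbers further): if forming a mutagen requires n precursors and each is cut to 70% of its original level, the product scales as 0.7 raised to the nth power.

```python
def product_remaining(fraction_left, n_reactants):
    """Fraction of product still formed when every reactant is scaled down.

    Simple mass-action model: the rate is proportional to the product of
    the reactant concentrations, so scaling each by the same fraction
    scales the product by that fraction to the nth power.
    """
    return fraction_left ** n_reactants

for n in (1, 2, 3):
    left = product_remaining(0.70, n)
    print(f"{n} reactant(s): product down {100 * (1 - left):.0f}%")
```

For two reactants this gives roughly the 50% reduction cited above; for three, the bare model gives about two-thirds, with additional effects needed to reach the higher observed reductions.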

[Graph: precursor compound creatine, mg/100 g of ground beef, versus microwave pretreatment time, min.]

Measured mutagenic activity, from the Ames test, in beef patties pretreated in a microwave oven and then fried for 6 minutes at 200 or 250°C.

Microwave time,    Mutagenic activity from the Ames test, revertants per gram
min                200°C        250°C

0                  450          1400
1.0                220          369
1.5                130          216
2.0                47           67
3.0                16           41
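The size of the effect is easy to quantify from the pretreatment data above (our own arithmetic on the reported values, not an analysis from the article): comparing the untreated patties against those microwaved for the full 3 minutes shows the activity falling by well over 90% at both frying temperatures.

```python
# Ames-test activity (revertants per gram of cooked beef) from the
# microwave-pretreatment experiment: no pretreatment versus 3 minutes of
# pretreatment before frying for 6 minutes per side.
baseline = {"200°C": 450, "250°C": 1400}
after_3_min = {"200°C": 16, "250°C": 41}

for temp in baseline:
    reduction = 1 - after_3_min[temp] / baseline[temp]
    print(f"{temp}: {100 * reduction:.0f}% lower mutagenic activity")
```

The same arithmetic applied to the intermediate rows shows the reduction growing steadily with pretreatment time rather than appearing all at once.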

[Graph: mutagenic activity, revertants per gram of fried beef, versus frying time, min, for grill temperatures of 150, 190, and 230°C.]

Figure 8. A graph of the mutagenic activity in beef patties fried at three different temperatures. The essential point is that mutagenic activity increases with both frying temperature and time.

One major difficulty in our dose- and exposure-assessment work is that the content of mutagens can vary widely even in the same kind of food product when it is obtained from different suppliers or prepared by different restaurants. Although the relative amounts of the heterocyclic amines are generally consistent among different studies and laboratories, the precise amount of mutagen per gram in a given cooked food can span a tenfold range.

Hamburgers from fast-food restaurants generally have considerably lower levels of mutagens than those prepared at home. This result is probably due to the fact that many fast-food restaurants cook their meat at moderate temperatures on a grill or over open flames for a short time. Because the meat patties are thin, the products are not generally overcooked.

Because food-preparation practices vary, over the years we have attempted to approximate a range of cooking practices that are common in American households. In various experiments, foods were pan fried, oven broiled, baked, boiled, stewed, grilled over coals, or left raw. However, for the studies on red meat reported in this article (see Table 3), we purchased ground beef, sold as containing 15% fat, from a local market. We formed the meat into patties weighing 100 grams (a little less than a quarter of a pound) and fried them on a commercial, electric, stainless-steel griddle for 2 to 10 minutes per side and at surface temperatures of 150, 190, or 230°C. We monitored the griddle surface with a digital probe thermometer. After the meat was cooked, it was homogenized in a blender to produce a uniform sample. Samples were frozen at –4°C until extraction for subsequent testing and analysis.

How We Fried the Burgers We Studied


Table 3. Content of four different mutagens in fried beef patties (expressed as nanograms of mutagen per gram of beef) cooked at different times and temperatures.

                Cooking time       Cooking temperature of grill, °C
Mutagen         per side, min      150        190        230

IQ              2                  none       0.1        none
                4                  none       0.1        0.15
                6                  0.1        0.45       0.6
                10                 0.1        0.85       0.7

8-MeIQx         2                  none       0.1        0.7
                4                  none       0.25       0.4
                6                  0.2        1.3        5.6
                10                 0.6        1.3        7.3

4,8-DiMeIQx     2                  none       none       1.6
                4                  none       0.1        0.15
                6                  0.2        0.55       1.2
                10                 0.4        1.1        1.0

PhIP            2                  none       none       1.3
                4                  none       0.15       1.3
                6                  0.25       1.9        7.8
                10                 1.8        9.8        32.0

In fact, the mutagenic activity of breadsticks cooked for double the regular heating time is 20% that of a hamburger fried 6 minutes per side at 210°C. In all cases, overcooking grain foods led to much higher mutagenic activity. Cooked garbanzo bean flour and the grain beverage powder, which we tested as purchased, had relatively high mutagenic activity. Cooked rice and rye flour (containing no gluten), on the other hand, showed no detectable activity, and rice cereal showed very little. Fried tofu (soy bean curd) was not mutagenic, and the measured level of activity in meat-substitute patties (which are made from vegetable proteins) after frying was about 10% or less than that of a beef patty cooked under the same conditions.

Table 4 summarizes the mutagenic activity, as measured by the Ames/Salmonella test, for a variety of cooked-grain food products. The results are expressed as mutagenic activity from the Ames test, so they cannot be directly compared with those in Table 3. (Recall that the numbers in Table 3 represent a different measure, namely the content by weight of individual mutagens expressed as nanograms of mutagen per gram of beef.) Because we do not yet know the identity of the mutagens present in cooked grain products, we cannot provide their content by weight. However, to allow for some comparison between cooked grains and meat, we have included the values of mutagenic activity for hamburger cooked at three different temperatures at the end of Table 4.
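The breadstick comparison made earlier can be cross-checked directly against two Table 4 entries (our own arithmetic on the reported values):

```python
# Ames-test activity, revertants per gram, from Table 4.
breadsticks_overcooked = 40   # breadsticks cooked double the directed time
hamburger_210C = 220          # hamburger fried 6 minutes per side at 210°C

ratio = breadsticks_overcooked / hamburger_210C
print(f"{100 * ratio:.0f}%")  # about 18%, in line with the "20%" figure quoted earlier
```

The ratio comes out near one-fifth, which is the comparison the text summarizes as 20%.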

Overall, the level of mutagenic activity measured in heated nonmeat foods is lower than that in cooked meats. It is important to recognize that the cooked grains we studied lack the creatine and creatinine levels that explain the formation of mutagens in muscle meats during cooking. We are currently investigating the question of why foods high in gluten are quite mutagenic in the absence of creatine and creatinine. We suspect that the amino acid arginine can substitute for the creatine and creatinine precursors found in meat, but it may be a less


patty fried at 150°, 190°, and 230°C. We found no detectable heterocyclic amines after frying at 150°C for 2 to 4 minutes. In general, increasing either the temperature or time of cooking (specifically, frying on a solid metal restaurant-type grill) causes a dramatic increase in both the mutagenic activity and the total amount of mutagens produced, especially PhIP and 8-MeIQx. For the most part, as shown in Table 3, the amount of individual mutagens in fried beef increases proportionately with the cooking temperature. A clear exception to this trend is the compound PhIP, which is produced at much greater concentrations at higher temperatures relative to the other mutagens we have studied. When the cooking temperature and time are increased, the PhIP content of fried beef patties increases nearly exponentially.

Mutagens from Grain?

We also recently used the Ames test to assess the mutagenic activity in many heated foods derived from grain products. Our studies include cooked breads (white, pumpernickel, crescent rolls, and pizza crust), breadsticks, heated flour from many different grain sources, breakfast cereals, graham crackers, and meat-substitute patties after frying. These foods were either tested as purchased without additional cooking (for example, graham crackers and a grain beverage powder) or were cooked according to package instructions. In some studies, we deliberately overcooked the grain products for twice the cooking time at the specified temperature setting to see if the mutagen content would increase with continued cooking, as it does in muscle meats.

Our studies generally demonstrate increased mutagenic activity in grain foods with cooking time, but the exact composition of the food is important. For example, when wheat gluten (the protein in wheat seed) is heated alone at 210°C in a beaker, it shows a potent, time-dependent mutagenic response (Figure 9). Because breadsticks are high in wheat gluten, they also show some activity when heated normally and much higher activity when overcooked.


[Graph: mutagenic activity, revertants per gram of wheat gluten, versus heating time, min, at 210°C.]

Figure 9. The mutagenic activity of wheat gluten increases dramatically when heated at 210°C for up to 2 hours. This potent response tells us that one or more highly mutagenic chemicals, still unidentified, are formed with continued cooking at high temperature.

Table 4. Mutagenic activity of nonmeat food products (expressed as the number of revertants [mutants] per gram from the Ames/Salmonella test using the TA98 strain of bacteria). Results for hamburger are given for comparison.

Food sample                                                Mutagenic activity,
                                                           revertants per gram

Flour from plant sources heated to 210°C for 60 minutes
  Chemical-grade gluten                                    1330
  Food-grade wheat gluten                                  970
  Cornmeal                                                 180
  Garbanzo flour                                           1890
  Teff flour                                               420
  Rice flour                                               none detected
  Rye flour                                                none detected
  Wheat flour for bread                                    320

Cooked food samples tested as purchased or cooked as directed
  White bread                                              2
  Pumpernickel                                             6
  Breadsticks                                              6
  Crescent rolls                                           1
  Pizza crust                                              3
  Graham crackers                                          4
  Grain beverage                                           320

Food samples cooked double the time directed
  White bread                                              5
  Pumpernickel                                             28
  Breadsticks                                              40
  Crescent rolls                                           4
  Pizza crust                                              8

Toasted breakfast cereals tested as purchased
  Rice-based                                               2.2
  Corn-based                                               4.4
  Wheat-based (various samples)                            0 to 8.8

Commercial meat substitutes fried at 210°C for 6 minutes per side
  Gluten-based patties (various samples)                   6 to 9.4
  Tofu                                                     none detected
  Falafel                                                  2.3
  Tempeh burger                                            23
  Tofu burger                                              none detected
  Soy-based patties                                        6.6
  Gluten, wheat, teff-based patties (230°C)                30

Hamburger fried at 210°C for 6 minutes per side            220
Hamburger fried at 230°C for 6 minutes per side            800
Hamburger fried at 250°C for 6 minutes per side            1400


amine mutagens, even if the meat is cooked well-done.
• Most nonmeat foods, including cooked grain products, contain lower levels of mutagens than cooked meats.

At least in rodents, we know that food mutagens trigger cancer in several different target tissues, such as the liver, colon, breast, and pancreas. In a follow-up installment in Science and Technology Review, we will address the health risks to humans that may arise from exposure to heterocyclic amines. For this intriguing part of the story, we will show how these highly toxic compounds can react with the most critical macromolecule of all, DNA. With a connection established between food mutagens, DNA damage, and the potential for cancer, we will then try to make sense of what all the numbers on mutagenic activity and mutagen content in food mean for the average person.

Key Words: Ames/Salmonella assay; amino-imidazoazaarenes (AIAs); carcinogen; DNA adducts; heterocyclic amines; high-performance liquid chromatography (HPLC); mutagens—airborne, in cooked foods, in fried beef; mutagenicity; 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine (PhIP); 2-amino-3-methylimidazo[4,5-f]quinoline (IQ); 2-amino-3,8-dimethylimidazo[4,5-f]quinoxaline (MeIQx).

References
1. B. N. Ames, J. McCann, and E. Yamasaki, “Method for Detecting Carcinogens and Mutagens with the Salmonella/Mammalian-Microsomal Mutagenicity Test,” Mutation Research 31, 347–364 (1975).
2. T. Sugimura et al., “Mutagen-Carcinogens in Foods with Special Reference to Highly Mutagenic Pyrolytic Products in Broiled Foods,” in Origins of Human Cancer, H. H. Hiatt, J. D. Watson, and J. A. Winsten, Eds. (Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, 1977), pp. 1561–1577.
3. B. Commoner et al., “Formation of Mutagens in Beef and Beef Extract During Cooking,” Science 201, 913–916 (1978).
4. G. A. Gross, “Simple Methods for Quantifying Mutagenic Heterocyclic Aromatic Amines in Food Products,” Carcinogenesis 11, 1597 (1990).
5. For a general review of research on food mutagens, see J. S. Felton and M. G. Knize, “Heterocyclic-Amine Mutagens/Carcinogens in Foods,” Handbook of Experimental Pharmacology, Vol. 94/I, C. S. Cooper and P. L. Grover, Eds. (Springer-Verlag, Berlin, Germany, 1990), pp. 471–502.
6. J. S. Felton et al., “Effect of Microwave Pretreatment on Heterocyclic Aromatic Amine Mutagens/Carcinogens in Fried Beef Patties,” Food and Chemical Toxicology 32 (10), 897–903 (1994). (UCRL-JC-116450)


Science & Technology Review July 1995

Food Mutagens

efficient mutagen precursor in cooked grain products.

Before we can evaluate the risk associated with cooked grains, we need to determine the mass of mutagens in each food and to identify the specific mutagenic compounds that are present. Except for very low levels of PhIP in wheat gluten (accounting for only 4% of its mutagenic activity), our analysis did not reveal any of the other mutagens found in cooked meat or listed in Table 2. Because the mutagens in cooked grain appear to be as potent as the heterocyclic amines—and such potency is unusual—we suspect that the mutagenic compounds may be new heterocyclic amines similar to those we have found in cooked meats. However, more work needs to be done before we understand the entire picture.

What About Fumes?

Some studies have suggested the possibility of an increased risk of respiratory tract cancer among cooks and bakers. When foods rich in protein are heated, the fumes that are generated sometimes contain many different known carcinogens, including polycyclic aromatic hydrocarbons and heterocyclic amines. Working with colleagues at the University of California at Davis, we recently studied the mutagenicity of fumes generated when beef is fried at high temperatures.

We collected smoke from cooking by using a special sampling system consisting of a condenser, Teflon filters, and absorbent tubes containing polyurethane foam and a resin. We found that the main volatile compounds generated during frying were alcohols, alkanes, aldehydes, ketones, phenols, and acids. Their presence—we measured 34 different components—may account for much of the toxicity of fume samples in bacterial tests. Two mutagens, PhIP and AαC, were the most abundant of the heterocyclic amines we measured in smoke, with AαC accounting for 57% of the total weight of mutagens in the recovered samples. However, even though AαC seems to be the most volatile of our quantified heterocyclic amines in smoke, its actual contribution to the mutagenicity of fumes is negligible because its mutagenic potency is lower than that of some other heterocyclic amines in smoke. We also detected significant levels of MeIQx and DiMeIQx.

In a modified Ames test, one that is much more sensitive than our standard assay and uses two different strains of bacteria, the fried meat extracts had 30,700 revertants per gram (see box, p. 16, for a definition of "revertants"), whereas the fumes produced by frying had 10,400 revertants per gram of fried meat. Thus, the fumes generated during the cooking of meat represent about one-third of the mutagenic activity measured in the fried meat itself. It is important to recognize that the amount of mutagens inhaled is very low compared to consuming solid, cooked meat. Nevertheless, the presence of toxic compounds in meat fumes, even at relatively low levels, could pose some risk to food preparers who are exposed to them for long periods over many years.
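The one-third figure is simple arithmetic on the two assay numbers quoted above; a quick check (the variable names are ours, for illustration only):

```python
# Mutagenic activity from the modified Ames test, both values
# expressed as revertants per gram of fried meat.
meat_revertants_per_g = 30_700   # fried meat extract
fume_revertants_per_g = 10_400   # fumes collected during frying

fraction = fume_revertants_per_g / meat_revertants_per_g
print(f"Fumes carry {fraction:.0%} of the meat's mutagenic activity")
```

The ratio works out to roughly 34%, in line with the "about one-third" stated in the text.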

Cook to Manage Mutagens

Our research on food mutagens is not specifically designed to generate practical advice for diet- and health-conscious individuals. Many questions remain unanswered in this highly complex field of investigation. Although food mutagens are extremely potent, our preliminary estimates of risk are not alarming, primarily because of their low concentrations in food. Nevertheless, the amount of mutagens ingested can be reduced by choice of diet and by modifying cooking practices.

Cooking Tips Summary
• Fried beef has very high mutagenic activity. Its popularity suggests that this food may be the most important source of heterocyclic amines in the typical Western diet.
• Most, but not all, of the mutagenic activity in fried beef can be accounted for by known heterocyclic amines. The single mutagen PhIP accounts for most of the combined mass of mutagens in fried beef cooked well-done.
• The fumes generated during the cooking of beef have about one-third the mutagenic activity measured in the fried meat itself. Occupational exposure over long periods could pose some risk, but probably much less than that from consuming the meat.
• The fat content and thickness of meat have little effect on mutagen content, whereas the method and extent of cooking have major effects. Frying, broiling, and barbecuing muscle meats produce more heterocyclic amines and mutagenic activity, whereas stewing, steaming, and poaching produce little or no mutagenic activity. Roasting and baking show variable responses.
• Both cooking temperature and time can be manipulated to minimize the formation of mutagens. Increasing the frying temperature of ground beef from 200 to 250°C increases mutagenic activity about six- to sevenfold. Reducing cooking temperature and time can significantly lower the amounts of mutagens generated and subsequently consumed in the diet.
• Microwave pretreatment of meat, followed by pouring off the clear liquid before further cooking, can substantially reduce the formation of heterocyclic amines.


JAMES FELTON joined the Biomedical Sciences Division of Lawrence Livermore National Laboratory as a Senior Biomedical Scientist in 1976. He is currently the Group Leader of the Molecular Toxicology Group of the Biology and Biotechnology Program at the Laboratory. He received his A.B. in Zoology from the University of California, Berkeley, in 1967 and his Ph.D. in Molecular Biology from the State University of New York at Buffalo in 1973. From 1969 until 1976, he was a Fellow of the National Institutes of Health, initially in New York and later in Maryland.

In more than 147 professional publications, James Felton has explored the role of diet in carcinogenesis and mutagenesis. He has been a part of the Laboratory's research on food mutagens since it began 17 years ago and has led it for the past 8 years.

About the Scientist

For further information contact James S. Felton (510) 422-5656 ([email protected]) or Mark G. Knize (510) 422-8260 ([email protected]).

To view this article with interactive links to these references, visit our Internet home page at http://www.llnl.gov/str/str.html. After August 1, click on references in color for immediate access to additional specific information.



Science & Technology Review November/December 1995

Genetic Sperm

offspring, and most abnormalities in the numbers of the sex chromosomes come from the father's sperm. Nevertheless, we have only had a limited understanding of the details underlying the father's contribution to reproductive problems and failures.

Biomedical scientists at the Laboratory are now conducting research on chromosomal defects in sperm and their effects on the developing embryo. Until recently, little was known about such defects because no practical method was available for detecting abnormal chromosomes in sperm and early embryos.

Three Kinds of Evidence

Three primary lines of evidence form the basis for our research.

Declining Sperm Count
First, the sperm count of men has been declining over the last five decades, and we still do not know exactly what accounts for the decline. In 1983, we conducted studies for the U.S. Environmental Protection Agency (EPA) on the effects of nearly one hundred different types of exposures on sperm production in human males.1 About half of the agents we studied, including alcoholic beverages, cigarette smoke, and lead, lowered the production of sperm or affected sperm motility or morphology.

Occupational Exposure
A second important line of evidence suggests that certain jobs and workplace and environmental exposures of the father are linked to spontaneous abortion and problems in their offspring, including birth defects and cancer. Some occupations seem to be repeatedly associated with abnormal reproductive outcomes; however, findings are variable, and actual exposures are often poorly defined. We still do not have conclusive links between specific exposures, mechanisms of transmission, and an increased frequency of birth defects or childhood cancers.

At least two different models can account for many of the epidemiological findings. It is possible that some fathers might bring home potentially damaging agents on equipment, skin, or clothing, thus affecting the wife, offspring, or both. On the other hand, the route of exposure could be more direct, via the father's sperm. LLNL researchers have been developing and applying improved sperm assays to help distinguish between these two models.

Genetic Mutations in Infants
A third line of evidence for male-mediated reproductive effects comes from studies of babies when some of the newest molecular methods are applied. Such studies show that when entirely new gene mutations occur in an offspring—ones never seen before in either the mother's or the father's family—they are almost always associated with the father's genes. Several defects also predominantly occur in the father's chromosomes. For example, about 80% of the chromosomal aberrations seen in the chromosomes of babies—defects such as chromosome breaks—come from the father.

Confusing Evidence

For decades, we have known that male animals—especially laboratory mice and rats—that are exposed to certain damaging agents can suffer adverse reproductive effects and other


The global population explosion would seem to suggest that human reproduction functions quite well. However, reproductive failures, abnormalities during pregnancy, and birth defects are more common than many people realize. Every year in the U.S., more than 2 million couples who want to have children are infertile, and over 2 million conceptions are lost before the twentieth week of gestation. In addition, about 7% of newborns have low birth weight, and up to 7% of babies, or about 210,000 children per year in the U.S., are born with some birth defect. Half of these birth defects are major, affecting the health and viability of the individual.

The social and medical costs of reproductive abnormalities are formidable, yet their causes are not well understood. Abnormal reproductive outcomes include a wide variety of problems listed in Figure 1. Different molecular mechanisms, diagnoses, and treatments are typically involved in the different conditions. The cause of almost any reproductive abnormality can be the result of genetic and physiological events that occurred in any one (or in some combination) of three people—the mother, father, and child. Because of such complexities, pinpointing the cause of a specific reproductive abnormality may be even more difficult than determining the cause of cancer.

Certain abnormal reproductive outcomes can be caused by events that occurred in the germ cells (sperm or egg) of one of the parents before fertilization. Some abnormal reproductive outcomes, such as Down syndrome, have been traced to abnormalities in the eggs of the mother. Historically, the picture has been much less clear for the father. Now, we have compelling evidence that the male parent can be the source of detrimental effects on the genetic makeup and health of the embryo and child.

Geneticists estimate that about 40% of the cases of human infertility are due to male factors. About 80% of chromosomal aberrations (structural defects in chromosomes seen at birth) originate from the father. Furthermore, almost all new gene mutations seen in

The Genetic Contribution of Sperm: Healthy Baby or Not?

We are developing powerful molecular methods to visualize individual chromosomes in sperm and to detect genetic defects in embryos. Our research methods, combined with animal models, have broad implications for screening males with chromosomal abnormalities and genetic diseases, for studying the effects of exposure to mutagenic agents, and for assessing genetic risks to the embryo and offspring.

[Figure 1 diagram: Factors affecting development of sperm, egg, and embryo. Risk factors—diet, age, physiology; occupation, environment, toxic drugs; random errors, inherited factors, genetic susceptibilities—act on the father's sperm, the mother's egg, and the developing embryo. Fertilization and embryo development lead either to a normal, healthy baby or to abnormal reproductive outcomes: infertility, pregnancy loss, abnormal pregnancy, birth defect, or childhood cancer.]

Figure 1. Many risk factors (top of diagram) acting on the mother, father, and offspring may lead to abnormal reproductive outcomes (bottom, right). Some risks can date to shortly after the time of conception of either parent. Livermore researchers are focusing on defects in sperm that lead to abnormal outcomes.

Members of the sperm and embryo cytogenetics team at LLNL. Back row, left to right: Francesco Marchetti, Jiri Rubes, Andy Wyrobek (principal investigator), Joyce deStoppelaar, and Paul VanHummelen. Front row, left to right: Armand Tcheong, Emily Panico, Xiu Lowe, Nancy Oschbach, and Mike Cassell. Not shown: Christina Sanders, Thomas Ahlborn, Adi Baumgartner, Luoann Uelese, Wendie Robbins, Elizabeth Notley, and Janet Baulch.


sex chromosomes, a normal female carries two X chromosomes, and a normal male carries one X and one Y chromosome.

In contrast to somatic cells, each sperm and egg contains 23 chromosomes (the haploid number in humans). Each normal sperm and egg carries one copy of chromosome 1, one copy of chromosome 2, and so forth. Figure 2 shows the haploid number of chromosomes from human sperm that were specially prepared by a technique developed at the Laboratory.

If either of the germ cells carries an abnormal number of chromosomes or some other genetic defect, major hazards may arise for the offspring. A fetus resulting, for example, from fertilization with a genetically defective sperm would carry a mutation not only in the germ tissues but also in all somatic cells. An embryo's survival and quality of life through birth and beyond depend on the specific chromosomal defect it may carry. An embryo carrying major chromosomal defects will die during development. Thus, a validated method for detecting chromosome abnormalities in sperm has broad implications for maintaining or improving human health.

About Aneuploidy

The measure (or biological marker of male reproductive risk) we have chosen to study in depth is sperm aneuploidy. Aneuploidies in general are an important category of chromosomal damage that can be transmitted to an offspring from either the father or mother. The word "aneuploidy" refers to cells carrying the wrong (thus the prefix "an") number of chromosomes ("euploid"). Aneuploidy is one of the most common and serious chromosomal abnormalities recognized in humans. It is responsible for a large portion of infertility, pregnancy loss, infant death, malformations, mental retardation, and behavioral abnormalities.

Human embryos with an abnormal number of sex chromosomes or with an extra chromosome 13, 18, or 21 can survive to birth and beyond. An extra chromosome 21 causes Down syndrome and is a familiar example of aneuploidy involving one of the nonsex chromosomes. However, the most common aneuploidies in humans at birth involve an abnormal number of X or Y chromosomes. This condition, sex-chromosome aneuploidy, can be diagnosed prenatally through amniocentesis, and the incidence is about 1 in 250.

Table 1 shows different types of sex-chromosome aneuploidies together with other abnormalities involving the autosomes. A male child who inherits, say, an extra Y chromosome from the father would have a total of 47 chromosomes and a sex-chromosome aneuploidy (XYY). We know that human fathers are responsible for 100% of 47,XYY cases because the mother carries no Y chromosome. Another aneuploidy involving the sex chromosomes is Turner syndrome (45,XO), in which a paternal chromosome is lacking about 80% of the time. Other conditions are Klinefelter syndrome (47,XXY) and a triplet of X chromosomes (47,XXX).

Slightly more than half of the sex-chromosome aneuploidies at birth


health problems. The effects can include reduced sperm production, diminished quality of sperm, and reduced libido.

Researchers can systematically study rodents to gain a better understanding of the links between exposure and reproductive effects. However, for humans, we must rely on the few sources of evidence that are available to us, including exposed individuals and their offspring. Studies since the 1950s have consistently shown that exposures of human males to environmental, occupational, or therapeutic agents can have detrimental effects on sperm count, motion, or shape. In contrast, although many environmental agents clearly have mutagenic potential in animals, experts have disagreed on whether environmental exposure of human males contributes very much to genetic disease or to adverse effects in their offspring.

For example, the research literature is consistent in confirming the adverse genetic effects of ionizing radiation in the male mouse and its offspring. However, the human offspring of atom-bomb survivors have no measurable increase in induced mutations. Similarly, exposures to various agents used in chemotherapy and to radiation therapy do not yield clear-cut results for genetic effects in the offspring of treated male patients.

Some of the puzzling inconsistencies between humans and mice may be due to individual variation and species differences. Other explanations involve the role of DNA repair processes or the possibility that some chemical or physical agents (mutagens) may have limited or short-term effects on sperm.

Human doses are often small compared to those used in research on mice, and the number of human offspring that have been studied for induced genetic effects remains relatively small. Finally, it is possible that the types of genetic damage (called "endpoints" by geneticists) assessed in studies of exposed humans are not sensitive enough to always reveal a significant effect.

A Review of the Basics

The body (or somatic) cells of humans and other mammals contain pairs of chromosomes. Except for the sperm or egg cells and red blood cells, human somatic cells carry 46 chromosomes (the diploid number). Normal human somatic cells have 22 pairs of autosomes (nonsex chromosomes) and one pair of sex chromosomes, either XX or XY. Of the


Figure 2. Normal human somatic cells carry 46 chromosomes, but each sperm and egg carries only 23 chromosomes (the haploid number). The photomicrographs show the normal complement of 23 chromosomes (numbered 1 through 22, plus X or Y) from two human sperm that were specially prepared to make the chromosomes visible under a microscope (ref. 4). (a) One sperm carries 22 autosomes and a Y chromosome and would produce a male; (b) the other carries 22 autosomes and an X chromosome and would produce a female. (Photographs courtesy of L. Gordon and B. Brandriff of LLNL.)

Table 1. Examples of aneuploidy in the sperm of humans. Abnormal chromosomal conditions arise in the embryo when a sperm contributes an abnormal number of chromosomes to the embryo. The normal complement of sex chromosomes is shown in the shaded area at the top for comparison.

Father’s Mother’scontribution contribution Embryo orin sperm in egg offspring Syndrome

X X 46,XX Normal female

Y X 46,XY Normal male

XY X 47,XXY Klinefelter syndrome• Hypogonadism• Sterile

No sex chromosome X 45,XO Turner syndrome• Characteristic physical features• Hypogonadism• Sterile

YY X 47,XYY XYY male

XX X 47,XXX XXX female

21,21 21 47,+21* Down syndrome• Mental retardation• Characteristic physical features

18,18 18 47,+18* Edward syndrome• Mental deficiency• Anomalous hands, face• Often fatal

13,13 13 47,+13* Trisomy 13• Severe anomalies• Often fatal

*The extra autosome is often contributed by an aneuploid egg, but sperm are also known to be responsiblefor the genetic defect.
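The bookkeeping behind Table 1—the sperm's sex-chromosome contribution joins the egg's single X, on top of 22 autosome pairs—can be sketched as a toy function (ours, for illustration; not part of the original work):

```python
def embryo_karyotype(sperm_sex: str, egg_sex: str = "X") -> str:
    """Combine the sex-chromosome contributions of sperm and egg.

    A normal egg contributes one X; a normal sperm contributes X or Y.
    An aneuploid sperm may contribute XX, YY, XY, or nothing ("").
    Returns the "total,sex-chromosomes" notation used in Table 1.
    """
    sex = "".join(sorted(sperm_sex + egg_sex))
    total = 44 + len(sex)                          # 22 autosome pairs + sex chromosomes
    label = sex + "O" if len(sex) == 1 else sex    # "45,XO" convention for Turner syndrome
    return f"{total},{label}"

# Examples matching rows of the table:
# embryo_karyotype("Y")  -> "46,XY"  (normal male)
# embryo_karyotype("XY") -> "47,XXY" (Klinefelter syndrome)
# embryo_karyotype("")   -> "45,XO"  (Turner syndrome)
```

The sketch covers only the sex-chromosome rows; the trisomy rows (47,+13, +18, +21) involve an extra autosome instead.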

Page 16: Lawrence Livermore National Laboratory P.O. Box 808, L-452 .../67531/metadc627670/m2/1/high_res_d/16340.pdfof the relation between genes, the proteins they produce, and biological

the hamster technique to reliably work for a variety of applications, a considerable challenge because biologists elsewhere in the world were having major difficulties in obtaining usable results. (The March 1984 issue of Energy and Technology Review provides a more complete description of this highly useful tool.)4 We showed that the hamster technique gives valuable baseline information on the normal burden of damage in healthy men. Using the method, we found that a small proportion of sperm in otherwise healthy males carries aneuploidies or other types of structural aberrations.

The hamster technique has become a highly reliable tool, and we consider it to be the "gold standard" against which we evaluate any new methods. However, the technique is difficult to apply and is both labor-intensive and inefficient, so it is costly to perform.

Fluorescence In Situ Hybridization

By about 1990, the Laboratory had developed a new biological procedure that could detect aneuploid sperm more efficiently than the hamster technique. Fluorescence in situ hybridization (FISH) has been previously described in Energy and Technology Review (see the April/May 1992 issue) as a gene-mapping tool.5 The method is illustrated in the context of our sperm research in the box on page 12. In essence, we prepare chemically labeled DNA probes and bind (hybridize) them to target chromosomal DNA within the sperm head.

Assessing Damage

The DNA complex forming the chromosomes (chromatin) in sperm is typically rigid and so dense that it occupies nearly the minimum possible volume. The high degree of condensation makes it nearly impossible to visualize and identify individual chromosomes by standard light or electron microscopies. LLNL researchers have been involved in the development of several techniques that allow sperm chromosomes to be visualized and assessed for anomalies.

The Hamster Technique
In 1978, Rudak and colleagues working in Hawaii pioneered a way to analyze the chromosomes in human sperm after fusion with hamster eggs.3 Until recently, this "hamster technique" was the only method available for characterizing chromosomal defects in human sperm.

During the 1980s, LLNL researchers were the first to get

are of paternal origin. The effects of such aneuploidy depend on which combination of X and Y chromosomes is involved. The health effects of XYY, for example, are minor; however, the effects of Turner and Klinefelter syndromes include physical, behavioral, and intellectual impairment as well as sterility.

Among human babies, the frequency of known chromosomal abnormalities, including aneuploidies and structural aberrations, is about 0.6%. In addition, about 1% of newborns carry a mutation for a genetic disease.2 When an inherited error occurs, we need some way to ascertain when the condition is due to the father, what factors can cause the condition, and what prevention strategies might be effective.


One of the challenges we met in applying FISH to human sperm was finding the right chemical treatments needed for introducing probes with fluorescent tags into the dense sperm head to penetrate the DNA. We learned how to control the amount of swelling (see Figure 3) that occurs during the process while maintaining the integrity of the nuclear material in sperm. FISH is especially useful because it provides vivid fluorescent signals, it allows us to distinguish between several probes with different colors, and it is reliable.

First Use of FISH in Sperm

In our first demonstration of FISH in human sperm, we applied a fluorescently labeled DNA probe to the Y chromosomes of sperm from human volunteers. As shown in Figure 4a, the Y chromosomes were tagged with a fluorescein label, a green-fluorescing dye that can look yellow, for example, on a red background. The Y chromosomes are easily recognized as bright yellow spots, called "domains." We counterstained the sperm nuclei with the red-fluorescing dye propidium iodide, which produces the bright red background color.

After examining and scoring 11,500 sperm nuclei, we found that 50% of sperm showed fluorescent domains consistent with the presence of a Y chromosome. The proportion is what we would expect, because about half of all sperm carry a Y chromosome, and half carry an X chromosome. This finding is also consistent with the proportion of sperm containing Y chromosomes as determined by the hamster technique. As anticipated, FISH proved to be a direct and reproducible method for monitoring the chromosome constitution of sperm, and it allows us to visually analyze thousands of cells rapidly. In subsequent studies, we expanded the number of DNA probes we can apply to sperm nuclei, allowing us to tag two or three different chromosomes simultaneously, and we extended the method for use in laboratory animals.
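A 50% Y-bearing fraction in 11,500 scored nuclei is just what a simple binomial model predicts, with very little expected spread (the model here is our illustration, not part of the original analysis):

```python
import math

n = 11_500   # sperm nuclei scored
p = 0.5      # expected fraction carrying a Y chromosome

# Standard error of the observed fraction under a binomial model:
# results within about one percentage point of 50% are consistent.
se = math.sqrt(p * (1 - p) / n)
print(f"expected fraction: {p:.0%} +/- {se:.2%}")
```

With this sample size the standard error is under half a percentage point, so the observed 50% is a tight match to expectation.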

Extending the Human Assay

Two-Probe Assays
Because aneuploidy at birth frequently involves the two sex chromosomes, we initially extended the FISH assay to include the second sex chromosome (chromosome X). Figure 4b shows normal sperm carrying either a single X chromosome, which fluoresces blue-green due to the dye FITC, or a single Y chromosome, which fluoresces red due to the dye Texas Red. In the photograph, the red domains are larger than the blue-green ones because the DNA regions we targeted on the Y chromosome had longer repetitive sequences. In this and subsequent photos, the precise color of a fluorescent dye can vary as a function of counterstains used to highlight the sperm nucleus.

Beyond studies of normal sperm, the two-probe assay gives us a method for detecting sperm carrying an abnormal number of chromosomes X and Y. This type of assay can be applied to study sperm that give rise to Turner syndrome, Klinefelter syndrome, and other inherited sex-chromosome conditions. When such aneuploid sperm are produced, our two-probe assay can differentiate among sperm containing two red domains (YY), two green domains (XX), or both colors (XY).

Three-Probe Assays
Next, we added a fluorescently labeled DNA probe for one of the autosomes in sperm. Whereas any autosome would suit the purpose, we selected a probe for chromosome 8, which was our best DNA probe available at the time. Adding one autosome to FISH is a major advantage because it allows us to distinguish among three possibilities: duplication of a sex chromosome only (sex-chromosome aneuploidy), duplication of a single autosome only (autosomal aneuploidy), and duplication of the


Figure 3. The high degree of condensation of the DNA–protein complex (called chromatin) in sperm makes it nearly impossible to identify individual chromosomes by standard microscopy. The two photos show how our chemical treatments (a–c) swell the sperm head, which is shown before (d) and after (e) treatment. The scale is the same in (d) and (e).

(d) Untreated sperm

(a) Obtain semen sample

(b) Dry the sperm onto the microscope slide

(c) Chemically treat sperm to induce swelling

(e) Decondensed sperm head after treatment

Figure 4. (a) The fluorescent dye fluorescein is applied to the Y chromosomes in human sperm. When they are present (50% of the time), the Y chromosomes are easily recognized as bright yellow areas. The sperm nuclei are counterstained with propidium iodide, giving a red background color. (b) This example of our two-probe FISH procedure shows how we can differentiate among human sperm carrying a single Y chromosome (red fluorescence) or a single X chromosome (green fluorescence).

(b) Two-probe FISH

(a) One-probe FISH


entire genome (diploidy). In other words, if we detected the condition XX88 or YY88, then it is highly likely that all the other chromosomes are duplicated as well (the diploid condition).

Figure 5 illustrates our three-probe assay for chromosomes 8 (yellow), X (green), and Y (red). Figure 5a shows normal human sperm, which carry either X8 or Y8. All such normal sperm fluoresce in only two colors and show two domains (two discrete fluorescent areas). Abnormal human sperm in Figures 5b through 5j show more than two domains (for example, XX88 has four domains) or more than two colors (for example, XY8 has three colors and three domains).
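The scoring rule just described—two colors and two domains means normal; extra domains or colors flag an anomaly—can be written down as a small classifier (a toy sketch of ours, not the Laboratory's scoring software):

```python
from collections import Counter

def classify_sperm(domains: str) -> str:
    """Classify a sperm from its fluorescent domains in the three-probe
    (X, Y, 8) FISH assay; `domains` has one letter per domain,
    e.g. "Y8" for a normal sperm or "XX88" for a diploid candidate."""
    counts = Counter(domains)
    sex = counts["X"] + counts["Y"]
    auto = counts["8"]
    if sex == 1 and auto == 1:
        return "normal"                      # X8 or Y8
    if sex == 2 and auto == 2:
        return "diploid (or somatic cell)"   # XX88, YY88, XY88
    if sex == 2 and auto == 1:
        return "sex-chromosome aneuploidy"   # XX8, YY8, XY8
    if sex == 1 and auto == 2:
        return "autosomal aneuploidy"        # X88, Y88
    return "other abnormality"
```

Note that an XY88 pattern alone cannot distinguish a diploid sperm from a somatic cell; as the text explains, the phase-contrast check for a tail resolves that ambiguity.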

As more FISH probes become available, we will add them to the assay. As soon as an excellent probe for chromosome 21 is developed, we will include it in the FISH assay so that we can look for this important marker for sperm that may lead to Down syndrome. Similarly, we will soon add DNA probes for chromosomes 13 and 18 because these trisomies, like trisomy 21, survive to birth and beyond in humans.

Recently, we developed another useful tool that has important implications. By adding the technique of phase-contrast imaging to fluorescence microscopy, we can now detect the tails of sperm and distinguish them from somatic cells that are normally present in semen (Figure 6). Differentiation is critical when we detect sperm carrying XY88, for example. Such an arrangement could represent either a diploid sperm or a normal somatic cell (which always carries two sex chromosomes and two copies of each autosome). Phase contrast allows us to differentiate clearly between the two possibilities because somatic cells have no tail.

Validating the Method

To demonstrate the utility of the FISH method for assessing sperm chromosomes in humans, we needed to address the issue of validation. How would we know whether the values we obtain—for example, the baseline frequency of aneuploid sperm in healthy males—are actually correct? Fortunately, several lines of evidence from independent sources can be used to validate our assay.

Researchers at Livermore Laboratory and in Canada and Japan have used the hamster technique to collect baseline information establishing the normal


Figure 5. Our three-probe FISH procedure applies a mixture of probes specific for chromosomes X, Y, and 8, each tagged with a different fluorescent dye. Here, human chromosomes X fluoresce green, Y fluoresce red, and 8 fluoresce yellow. (a) Normal sperm carry either X8 or Y8 and are marked by only two different colors and two domains. (b–f) Abnormal sperm, such as XX8, YY8, XY8, X88, or Y88, have three domains but of varying colors. (g–j) Abnormal sperm, such as XX88 or YY88, have two colors but four domains. (h) and (j) show the sperm tail using phase-contrast imaging.

Abnormal human sperm can contain two X chromosomes, two Y chromosomes, neither X nor Y, both X and Y, or an abnormal number of autosomes (either more or fewer than 22). How can we tell the difference between normal and defective sperm when chromosomes are packaged so tightly in the sperm head?

Our technique, called fluorescence in situ hybridization (FISH), uses two starting DNA materials: target sperm chromosomes and probe DNA. Sperm carrying the target chromosomes are placed on glass slides. The sperm chromatin is chemically treated, or in technical terms, decondensed, so that our probe DNA can penetrate the chromatin to reach the target chromosomes. The probe consists of DNA fragments (hundreds of copies of a specific region of a particular chromosome) prepared by attaching a fluorescent dye and heating to yield single-stranded DNA. At Livermore, we have DNA probes for most of the human chromosomes, including X and Y, and for many rodent chromosomes. We use several different dyes to differentiate among different chromosome types. When the labeled probes hybridize (bind) with the complementary single strand of target sperm chromosomal DNA, the dyes vividly "light up" the specific region of the chromosome under investigation. We then count and record (score) the fluorescent spots, called domains, which appear as vivid signals through a light microscope.

Our methods are equally successful in studies of human and rodent sperm, and they are far more efficient and less costly than any other assay developed to date, including the hamster technique. Ten thousand cells can be scored in less than two days.

How FISH Is Used to Detect Aneuploid Sperm

1. Prepare target chromosomal DNA in sperm nuclei (see Figure 3); chemical treatment decondenses the target chromosomal DNA.
2. Prepare probe DNA by attaching a fluorescent dye.
3. Denature (by heating) probe and target DNA to yield single-stranded DNA.
4. Hybridize probe to complementary DNA on target.
5. Visualize fluorescent domains in sperm nuclei under the microscope.

(In the accompanying diagram, probe and target strands pair through the DNA bases: adenine with thymine, cytosine with guanine.)
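As a rough illustration of the scoring step, the rules from Figure 5 (two domains of the expected colors mean a normal sperm; three or four domains mean an aneuploid one) can be written as a few lines of code. This is only a sketch under our own assumptions: the Livermore assay scores cells visually, and the representation of a scored cell as a list of chromosome labels is invented here for illustration.

```python
from collections import Counter

# Toy classifier for three-probe FISH scoring (probes for X, Y, and 8).
# A scored cell is represented as the list of chromosomes whose probes
# produced fluorescent domains; this representation is hypothetical.
def classify(domains):
    """Return 'normal (...)' or 'aneuploid (...)' with the genotype string."""
    counts = Counter(domains)
    # Order the genotype label as X, then Y, then 8 (e.g., 'XY88').
    genotype = "".join(sorted(domains, key=("X", "Y", "8").index))
    sex_domains = counts["X"] + counts["Y"]
    if sex_domains == 1 and counts["8"] == 1:
        return "normal (" + genotype + ")"
    return "aneuploid (" + genotype + ")"

print(classify(["X", "8"]))            # normal (X8)
print(classify(["X", "X", "8"]))       # aneuploid (XX8)
print(classify(["X", "Y", "8", "8"]))  # aneuploid (XY88)
```

A real scoring pipeline would also need to separate true sperm from the tailless somatic cells normally present in semen and to reject overlapping nuclei, which is part of why automation and image analysis are research topics in their own right.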



Science & Technology Review November/December 1995

Genetic Sperm

chromosomes X and 8 in more than 80,000 sperm from healthy, young adult mice. About 3 sperm per 10,000 cells evaluated showed XX or 88 aneuploidies (Figure 7; ref. 7). The frequencies we found for these particular numerical errors in two strains of mice were indistinguishable from those for sperm from healthy men using similar procedures and scoring methods. This work serves to demonstrate what we call “bridging biomarkers” between humans and animals for detecting sperm aneuploidy.
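The scale of this measurement is worth pausing on: at a baseline of about 3 per 10,000, even 80,000 scored sperm contain only a couple of dozen aneuploid cells, which is why the assay's high scoring throughput matters. The arithmetic, using only numbers quoted above:

```python
# Expected number of XX or 88 aneuploid sperm among the scored mouse
# cells, at the baseline frequency quoted in the text.
baseline_per_10k = 3        # about 3 aneuploid sperm per 10,000 cells
cells_scored = 80_000       # more than 80,000 sperm were evaluated
expected_aneuploid = baseline_per_10k * cells_scored / 10_000
print(expected_aneuploid)   # 24.0
```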

Bridging biomarkers, in essence, allow us to use the same type of measurable variation as we assess similarities and differences among species. With this type of information, we can compare the mean error rate of specific chromosomal defects in human versus rodent sperm, especially the sperm of mice. Our data show that the mean error rate is about the same in otherwise healthy male mice and humans. The bridging biomarkers of sperm aneuploidy also allow us to compare human and laboratory species for effects of physiological changes (e.g., diet, age), effects of exposure to toxicants, and effects of genetic differences. The studies of age effects are summarized below to illustrate the utility of bridging biomarkers.

Age Effects in Mice and Men

We divided the 14 men in our three-probe study into two groups with average ages of 47 versus 29 years. Compared to the younger men, older men had higher fractions of abnormal sperm with either two copies of the X chromosome (XX8) or two copies of the Y chromosome (YY8). However, older men did not have higher frequencies of the other possible aneuploid conditions, such as XY8, X88, XY88, and so forth (ref. 6).

Our findings on age effects are preliminary and should be interpreted with caution. A more detailed study using a larger population of men is needed. We have recently received funding from the National Institutes of Health to carry out such a study. The new collaboration will be one of the first human tests potentially linking chromosomally abnormal sperm to age and other life-style factors of the father.

Our preliminary findings of age effects in men are strikingly similar to our recent results on aneuploid sperm in aged mice. Aged mice (mice normally live for about two and a half years) had higher levels of sex-chromosome aneuploidy in sperm than did young mice. Mice of advanced age (older than about two years) had about twice as many aneuploidies of the types XX8, YY8, 88X, and 88Y as did younger mice (slightly older than two months). As with human males, we found the largest age-related increases in the XX8 and YY8 aneuploidies (Figure 8).

If our findings on aging continue to hold up with further research, they may point to an intriguing possibility. According to several lines of evidence on the production of sperm and egg cells with genetic errors, age effects in human females are predominant in the first stage of meiosis (the first of a two-stage process in forming an egg or sperm). Our preliminary data on both mice and human males suggest that age effects in males are predominant at the second stage of meiosis. Thus, males and females may differ in terms of the exact stage at which some genetic errors, such as aneuploidy, arise.

Effects of Smoking

Cigarette smoking is one of the most pervasive examples of the self-administration of toxic compounds. Research over several decades at Livermore and elsewhere has shown that cigarette smoking can cause defects in sperm quality. However, no information has been available on its mutagenic potential in sperm.




Figure 7. Three-probe FISH applied to the sperm of healthy, young adult mice. Chromosomes X fluoresce yellow, Y fluoresce green, and 8 fluoresce red. Notice that the sperm of mice have a characteristic hook shape. (a) Normal sperm with single copies of chromosome X or Y and one copy of chromosome 8. (b–f) Abnormal (aneuploid) mouse sperm, such as XX8, YY8, XY8, X88, and Y88, have three domains.

burden of chromosome damage in human sperm. These studies suggest that the baseline frequency of aneuploid sperm in young, healthy males is 3 per 10,000 chromosomes. This is the reference value we used in assessing the new FISH assay. We also have hamster data on the frequency of abnormal chromosomes after administering doses of some mutagenic drugs. Finally, hospitals publish the results of population-based surveys that provide additional statistics on the frequency of XYY babies, XXY babies, and other genetic anomalies at birth.

Baseline FISH Research

Using the one-, two-, and three-probe FISH assays in human sperm, we have assessed chromosomes X, Y, 1, and 8 for evidence of aneuploidy in hundreds of thousands of cells from healthy men. The frequencies of aneuploid sperm can vary among the different chromosome types and among individual male donors. Furthermore, most healthy men give consistent results over time (up to four years).

Our assays show that human sperm contain the abnormal chromosome pairs 8-8, XX, YY, and XY at frequencies of roughly one per 2,000 sperm analyzed, averaging across donors. This frequency is quite similar to the value of 0.6 per 2,000 sperm obtained by the hamster technique. We found that the abnormal chromosome pair 1-1 had the highest frequency of all, about 3 per 2,000 sperm. No sperm of the many thousands we tested from different healthy donors contains more than two of the same chromosome type (for example, we do not find the triplets 888, 111, or YYY). The most common sex chromosomal abnormality we found using the three-probe assay was XY8, with an average frequency of 9.5 per 10,000 human sperm scored (ref. 6). Table 2 summarizes the frequencies of abnormal sperm types we found in various studies of young, healthy, human males using FISH.
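Because the text quotes some frequencies per 2,000 sperm and others per 10,000, it helps to put them on one denominator. The conversion below uses only values from the paragraph above; the helper function is ours, purely for illustration.

```python
# Convert aneuploidy frequencies quoted per various sample sizes to a
# common "per 10,000 sperm" basis (input values are from the text).
def per_10k(count, per_n_sperm):
    return count * 10_000 / per_n_sperm

fish_pairs = per_10k(1, 2_000)     # 8-8, XX, YY, XY by FISH: 1 per 2,000
hamster    = per_10k(0.6, 2_000)   # same classes by the hamster technique
pair_1_1   = per_10k(3, 2_000)     # chromosome pair 1-1, the highest

print(fish_pairs, hamster, pair_1_1)   # 5.0 3.0 15.0
```

On this common footing, "about 3 per 2,000" for the 1-1 pair (15 per 10,000) sits close to the 14.0 per 10,000 entry for the one-probe assay in Table 2.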

Bridging Biomarkers

We recently developed corollary methods for detecting aneuploidy in the sperm of mice and rats using two- and three-probe FISH. We used a multicolor FISH procedure to evaluate

Figure 6. Phase-contrast microscopy, together with fluorescence microscopy, allows us to differentiate between (a and b) XY88 sperm, which have tails, and (c and d) XY88 somatic cells normally present in semen, which do not have tails.

Table 2. Frequencies of human sperm, per 10,000 cells analyzed, with abnormal numbers of chromosomes determined by the FISH method. This summary table represents data from 14 donors and more than 220,000 scored cells. (Remember: normal sperm carry a single X or a single Y and a single copy of each numbered chromosome. Any other combination constitutes a chromosomal abnormality.)

Type of        Type of chromosomal      Frequency (pooled data
FISH assay     abnormality in sperm     for donors studied)

One probe      1-1                      14.0
Two probe      YY                        5.7
Two probe      XX                        3.9
Two probe      XY                        6.2
Three probe    XX8                       3.1
Three probe    YY8                       3.1
Three probe    XY8                       9.5
Three probe    88X                       3.0
Three probe    88Y                       3.6
Three probe    XX88                      2.2
Three probe    YY88                      1.7
Three probe    XY88                     10.6*

*This frequency may be elevated because it may include a small number of somatic cells of the type XY88 normally found in semen.





We recently studied 15 smokers and 15 nonsmokers from the Czech Republic and found that smokers produce approximately twice the number of aneuploid sperm as nonsmokers. Cigarette smoking is part of a life-style that often includes alcohol consumption and possible stress factors. Thus, further research is needed to determine whether the effects we found are indeed due to tobacco products or to other aspects of a smoker’s life-style.

Effects of Chemotherapy

We have applied the three-probe FISH method to sperm cells of cancer patients before, during, and after treatment with the combination chemotherapy NOVP. Our basic question was whether the aneuploidies induced in sperm might persist following treatment, raising the possibility that genetic damage could be passed on to future offspring. We elected to study NOVP treatment because it contains drugs known to produce aneuploidy in model systems.

Figure 9 shows our results for young male patients with Hodgkin’s disease (a kind of lymphatic cancer). When compared with healthy controls, these patients had elevated frequencies of aneuploid chromosomes X, Y, and 8 even before treatment (twofold to sixfold increases over normal levels). Just after NOVP treatment, the frequencies of numerical abnormalities increased twofold to fivefold compared to pretreatment levels. Following chemotherapy, aneuploid sperm returned to pretreatment levels within two to six months—clear evidence that at least one type of chemotherapy has transient effects on aneuploidy in human sperm. Further studies are needed to determine whether other drugs induce aneuploidy in human sperm and whether the effects are also transient.

New Research on Embryos

In very recent work, we have begun to assess damage in embryos, using the mouse as the model species. Figure 10 summarizes the various developmental stages of the mouse embryo at which we can now apply the FISH method to assess numerical and other errors in chromosomes. For example, we can study the embryos immediately after fertilization (just before first cleavage of the embryo when it divides into two cells), at the two-cell stage, or about four days later just before the embryo implants into the uterine wall and when it consists of 30 to 50 cells. This work has required the development of special FISH probes and techniques that are suitable for embryos.

One of the advantages of studying one-cell mouse embryos (called “zygotes”) is that the chromosomes from the paternal and maternal mice

Figure 8. In the sperm of aged mice, as in older humans, we have found the largest age-related increases in two aneuploid conditions, XX8 and YY8. These curves show that the frequencies of aneuploid chromosomes in the sperm of mice increase with increasing age, especially after two years. This figure includes only those instances in which cells have gained a chromosome (ref. 7).

[Figure 9 plot: frequency of aneuploid sperm per 10,000 cells for chromosomes 8, X, and Y, plotted against posttreatment days (0 to 450), with the pretreatment period and the duration of NOVP treatment marked.]

Figure 9. Frequency of aneuploid sperm in young men with Hodgkin’s disease undergoing NOVP chemotherapy. The baseline levels of abnormal sperm increase twofold to fivefold following treatment with NOVP and then return to pretreatment levels approximately three months after chemotherapy. Each color represents samples obtained from a separate donor (ref. 6).

[Figure 8 plot: frequency of aneuploid sperm per 10,000 cells versus age of individual mice (2, 22, 25, and 29 months), with curves for all aneuploidies combined, sex-chromosome aneuploidies, and chromosome 8 aneuploidies.]

[Figure 10 drawing: (a) mature aneuploid mouse sperm containing X and Y, shown with a mouse egg at relative size; fertilization; (b) one-day-old mouse embryo with chromosomes from the egg and from the sperm still separate; (c) one-and-one-half-day-old embryo (two-cell stage), all cells containing XXY; (d) four-day-old embryo just before implanting in the uterus, each cell containing XXY. Color key: X chromosome green, Y chromosome red, autosomes gray.]

Figure 10. We can now apply the FISH method to assess aneuploid chromosomes at several critical stages of mouse embryonic development. This drawing illustrates how an aneuploid sperm carrying both an X and a Y chromosome produces an offspring with the XXY genotype (comparable to Klinefelter syndrome in humans). We are assessing chromosomes in (a) mature mouse sperm, (b) one-day-old mouse embryos (zygotes), (c) two-cell mouse embryos, and (d) four-day-old mouse embryos.





remain separated until the first cellular division (Figures 11a through 11c). This means that we can determine the contribution of each parent independently. Another advantage is that we can easily validate our results on mice because investigators have published data on the frequencies of aneuploid mouse chromosomes at the time of first cleavage.

Using FISH probes for the Y chromosome, our early results from more than 200 mouse zygotes show that about 9% of the embryos have numerical errors, and two of those cases were of paternal origin (see Figure 11b). We will investigate the effects on the embryo of exposing mouse sperm to chemical agents known to cause genetic defects.

Our new studies on four-day mouse embryos (Figures 11d and e) are a collaboration with the University of California at Berkeley. Very little work has been done on such embryos, so our research will be among the first. We are applying FISH and other biological imaging methods to understand how the development and survival of implanted embryos are affected by mutagen




Figure 11. Chromosomes of (a) normal and (b) aneuploid one-day-old mouse embryos. FISH probes are applied to chromosomes 1, 2, 3, and X (bright green) and chromosome Y (orange). The male chromosomes are clustered on the left, and the female chromosomes on the right, in (a), (b), and (c). Note the presence of one extra chromosome in (b) from the male parent. (c) Abnormal one-day-old mouse embryo with a chromosome break and a translocation. (d) Normal four-day-old mouse embryo shown under phase-contrast microscopy and (e) after labeling with FISH probes. In (e), our FISH probes highlight all 40 mouse chromosomes in the nucleus of each cell of the embryo.

About the Scientist

ANDY WYROBEK joined the Biomedical Sciences Division of the Laboratory in 1975. He is currently the principal investigator of the sperm and embryo research team within the Biology and Biotechnology Research Program at the Laboratory. He received his B.S. in physics from the University of Notre Dame in 1970 and his Ph.D. in medical biophysics from the Ontario Cancer Institute at the University of Toronto in 1975. In more than 80 publications, Andy Wyrobek has explored male-mediated developmental toxicology, human male reproductive hazards, and mammalian testing systems for detecting the genetic effects of environmental, occupational, and therapeutic agents in sperm and embryos. His special interests are understanding the mechanisms leading to birth defects and identifying the environmental and genetic risk factors for abnormal pregnancies.

exposure of the father’s sperm before mating.

Acrylamide is the model mutagen for our four-day embryo project because it is known to induce heritable defects. The damage we are seeing in embryos includes aneuploidy, mosaics (a combination of some normal cells and some chromosomally altered cells), chromosome breakage, and polyploidy (the occurrence of chromosomes that are three or more times the haploid number). We expect that the methods will also be useful for future studies on the outcomes of human in vitro fertilization.

New Research on Klinefelter Syndrome

We are beginning to look at blood samples from humans who carry the genetic abnormality associated with Klinefelter syndrome (47,XXY). Such individuals tend to be slower than normal in physical and behavioral development, they eventually grow taller on average, and they are all sterile. The work is a collaboration involving LLNL and five other institutions.

About half of 47,XXY cases receive the extra X chromosome from the father (such aneuploid sperm would carry both the X and Y chromosomes). Male children with this syndrome and their fathers provide us with a unique opportunity to learn about the relation between sperm aneuploidy and aneuploidy at birth. We will study 40 families with children whose diagnosis of Klinefelter syndrome has been genetically confirmed. We want to know if fathers who are responsible for the syndrome in their child produce inherently elevated levels of aneuploid sperm, especially XY sperm. To increase the speed of scoring defective chromosomes and the use of objective criteria in the FISH assay, we are also developing new automation and image-analysis techniques.

Looking Ahead

For decades, genetics researchers, concerned physicians, and many parents have struggled to come to terms with the causes and conditions that may underlie abnormal reproductive outcomes. With highly efficient FISH probes, we are beginning to understand some of the ways chromosomal abnormalities can arise in sperm and how those defects may lead to defects in the embryo.

Our expectation is that the new procedures will lead to a far greater understanding of the relations among certain genetic defects in human sperm; the effects of age, environmental exposures, and life-style factors; and the probability of fathering a chromosomally defective child. On the horizon are improved FISH assays for more chromosomes, new assays that can detect chromosome breakage in sperm, and automation and objective image processing. Such advances will help to make our methods more accessible to the rest of the research community.

Key Words: aneuploidy; chromosomal abnormality; DNA probes; fluorescence in situ hybridization (FISH); Klinefelter syndrome; sperm—human, rodent; sex chromosomes.

References
1. A. J. Wyrobek et al., “An Evaluation of Human Sperm as Indicators of Chemically Induced Alterations of Spermatogenic Function. A Report for the U.S. Environmental Protection Agency Gene-Tox Program,” Mutation Research 115, 73 (1983).
2. U.S. Congress, Office of Technology Assessment, Technologies for Detecting Heritable Mutations in Human Beings, OTA-H-298, U.S. Government Printing Office, Washington, D.C. (1986).
3. E. Rudak et al., “Direct Analysis of the Chromosome Constitution of Human Spermatozoa,” Nature 274, 911 (1978).
4. B. Brandriff, “Visualizing Chromosomes in Human Sperm,” Energy and Technology Review (March 1984), UCRL-52000-84-3, pp. 1–12.
5. A. V. Carrano, “The Human Genome Project,” Energy and Technology Review (April/May 1992), UCRL-52000-92-4/5, pp. 29–62.
6. W. A. Robbins et al., “Three-Probe Fluorescence In Situ Hybridization to Assess Chromosome X, Y, and 8 Aneuploidy in Sperm of 14 Men from Two Healthy Groups: Evidence for a Paternal Age Effect on Sperm Aneuploidy,” Reproduction, Fertility, and Development (in press).
7. X. Lowe et al., “Aneuploidies and Micronuclei in the Germ Cells of Male Mice of Advanced Age,” Mutation Research (1995) (in press).

For further information contact Andrew J. Wyrobek (510) 422-6296 ([email protected]).

[Figure 11 panel labels: one-day-old mouse embryos: (a) normal, with painted chromosomes 1, 2, 3, and X from the female and 1, 2, 3, and Y from the male; (b) aneuploid, with one extra painted chromosome from the male; (c) chromosome break in the Y chromosome from the male, plus a translocation; four-day-old mouse embryos: (d) normal, phase-contrast microscopy; (e) normal, FISH and fluorescence microscopy.]



Science & Technology Review March 1997

Ergonomics

biomechanical engineering, sensors, industrial hygiene, and occupational medicine.

These strengths make it appropriate for Lawrence Livermore to tackle a pressing national problem such as WRMSDs, says Burastero. The LLNL work is funded by Livermore’s Laboratory Directed Research and Development program, the Department of Energy, and the computer industry. The research projects have attracted collaborators from the University of California’s San Francisco School of Medicine, UC Berkeley, the National Institute of Occupational Health in Sweden, and the University of Michigan.

The research is also closely aligned with the Laboratory’s Center for Healthcare Technologies and an internal ergonomics program. The latter is an employee-oriented research program that aims to reduce the severity of ergonomic injuries and illnesses at the Laboratory and to reduce the lost and restricted time attributed to these injuries.

Unique Resources at LLNL

Livermore’s ergonomics research program draws upon a combination of four resources, which together exist nowhere else in the national laboratory family. The first is an ergonomics laboratory (see photo below) that is outfitted with state-of-the-art three-dimensional motion-analysis equipment used to study dynamic wrist motion, a sensor-based hand-tracking system, an image processing lab, and a variety of ergonomic assessment equipment. Much of the equipment can be transported to an employee’s worksite for “real-world” analysis of how people interact with their computers.

The ergonomics lab works closely with computational modeling experts in LLNL’s Institute for Scientific Computing Research. This modeling capability, the program’s second significant resource, is being applied to the study of human–machine interactions. For example, ergonomics laboratory data help validate the work of LLNL bioengineer Karin Hollerbach, who is developing a dynamic computational model of the bones and joints that are often associated with these injuries. (See September 1996 S&TR, pp. 19–21.)

Biomedical engineer Robert Van Vorhis, who coordinates the ergonomics lab, also heads the technical aspects of a project to enhance a physician’s visualization during endoscopic surgery for carpal tunnel syndrome. Improvements in endoscopic surgery could improve the cure rate of this minimally invasive surgical procedure such that, when it is successful, employees return to work in two weeks instead of six. This project is also leveraging Livermore’s capabilities in optics and digital imaging, with spinoff applications in advanced manufacturing.

The ergonomics research program’s third major resource is Health Services’ occupational health clinic (see photo on

Laboratory tests are conducted in setups like this one with hand-tracking devices (above) and digital cameras (left). Providing valuable data to the research program, graphic designer Kitty Tinsley tries out various keyboards to find a keyboard design that matches her needs.



Research Highlights

“The technology for measuring musculoskeletal risk is very crude in comparison. At Lawrence Livermore, we rarely see people inhaling toxic materials, but, as at worksites everywhere, we see musculoskeletal injuries among computer users.”

Experts say preventing WRMSDs begins with ergonomics, a relatively new field concerned with studying the interaction between individuals and their working environment to ensure that tasks are performed safely and efficiently. Burastero contends, however, that all too often, products labeled ergonomic are not backed by data gained from rigorous research or extensive field trials. As a result, there’s a surprising lack of knowledge about how these injuries can be prevented.

In response to the lack of scientific data, Lawrence Livermore’s Interdisciplinary Ergonomics Research Program is comprehensively addressing the problem of WRMSDs plaguing U.S. industry. The program uses a multidisciplinary research team that taps LLNL’s strengths in human factors design and engineering, computational modeling,

Ergonomics Research: Impact on Injuries

No tool has characterized the modern workplace like the personal computer. An estimated 60 million PCs adorn desks in virtually every work environment today, achieving remarkable increases in productivity while virtually transforming entire industries. At the same time, however, an increasing number of employees are heavy computer users who suffer painful and sometimes debilitating (and occasionally career-ending) injuries called work-related musculoskeletal disorders (WRMSDs) involving their hands and arms.

According to Dr. Steve Burastero, director of Lawrence Livermore’s Interdisciplinary Ergonomics Research Program and a physician in the Health Services Department, the mounting numbers of injuries should not come as a surprise. After all, someone typing 60 words per minute for 6 hours a day will keystroke a half-million keys every week, often in an awkward position or under stress.
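Burastero’s half-million figure checks out with simple arithmetic, if we assume roughly five keystrokes per word and a five-day work week (those two assumptions are ours, not stated in the article):

```python
# Back-of-the-envelope check of the keystroke estimate quoted in the text.
words_per_minute = 60
keystrokes_per_word = 5     # assumption: ~5 keystrokes per word
hours_per_day = 6
days_per_week = 5           # assumption: 5 working days per week

keystrokes_per_week = (words_per_minute * keystrokes_per_word * 60
                       * hours_per_day * days_per_week)
print(keystrokes_per_week)  # 540000, i.e., roughly half a million
```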

Burastero says the number of cases of WRMSDs has increased dramatically to near-epidemic proportions in the U.S. workforce, from about 20% of occupational illnesses nationwide in 1981 to more than 60% of all occupational illnesses today, according to U.S. Bureau of Labor Statistics data. Within computer-intensive occupations, the incidence of injury has doubled every year for the past four years.

These disorders cost the nation over $40 billion per year in medical costs alone. When productivity losses and disability and retraining costs are included, the total bill may top $80 billion per year. A common injury is tendonitis—inflammation of the tendons, which connect muscle to bone. Another well-publicized injury, carpal tunnel syndrome, involves damage to the median nerve, which travels through a tight space in the wrist called the carpal tunnel.

Burastero notes that safety at most work sites, including Lawrence Livermore, has traditionally focused on avoiding accidental injuries caused by hazardous materials or industrial equipment. As a result, procedures and instruments were developed that can detect, for example, toxic solvents at extremely low levels.

“We have technology that is very good for detecting the most minute amounts of hazardous materials,” says Burastero.


Clinical tests are done with Livermore subjects. Ergonomist Pat Tittiranonda shows Nancy Johnson how a nerve conduction test would be performed to assess the symptoms of carpal-tunnel syndrome. Probes on the finger and wrist measure the nerve conduction velocity across the wrist joint.


Repetitive motion injuries are one of the fastest growing causes of lost time to business and industry, not to mention their impact on worker health and morale. The orthopedic surgery these injuries sometimes necessitate is costly and painful. The therapy and orthopedic implants associated with degenerative bone and muscle diseases and acute injuries are also costly, and in the case of implants, the initial cost may not be the final cost if and when the implant needs to be replaced.

Computational models of joint anatomy and function can help doctors and physical therapists understand trauma from repetitive stress, degenerative diseases such as osteoarthritis, and acute injuries. Models of prosthetic joint implants can provide surgeons and biomechanical engineers with the analytical tools to improve the life-span of implants and increase patient comfort (ref. 1).

With such purposes in mind, the Laboratory embarked about three years ago on a mission to model the whole human hand at high resolution. The challenge is that most biological structures are dauntingly complex, and the hand is no exception. The human wrist alone has eight bones, and the rest of the hand has 19 more, to say nothing of soft tissues—ligaments, tendons, muscles, and nerves—and the interactions among them.

More recently, the Laboratory’s Computational Biomechanics Group within the Institute for Scientific Computing Research (ISCR) narrowed the mission to a computational model focusing on the dynamics of specific bones and joints that are often associated with injury or damage. The group also undertook a closely related endeavor: creating a computational model of prosthetic joint implants, initially for the thumb.

In light of the complexity of these models and the need for very high accuracy, it is appropriate that a facility like LLNL—which offers powerful computational resources, an understanding of complex engineering systems, and multidisciplinary expertise—take on these tasks. It is also significant that the work is being done collaboratively through the ISCR and draws on experts from the Laboratory (particularly the Mechanical Engineering Department), academia, medicine, and industry (see the box on p. 20). The NIKE3D modeling code, for example, which was developed at the Laboratory to address engineering problems involving dynamic deformations, such as the response of bridges to large earthquakes (ref. 2), is now being used as part of our collaborative joint modeling work.

Each person’s bones differ in shape and size. Our models are based on the detailed anatomy of individual people. We start with high-resolution data obtained from computed tomography or magnetic resonance imaging, as shown in the illustrations on pp. 20–21. Images from a single hand scan involve several gigabytes of raw data, and the models developed from them are highly complex—thus the need for powerful computers.

Focusing on the Hand and Knee

We focused our initial attention on a few joints in the hand. One joint of considerable clinical interest is the thumb carpometacarpal (CMC) joint, which connects the long bone at the base of the thumb with the wrist. During routine grasping activities, CMC joint surfaces are subjected to total forces greater than 200 kilograms (440 pounds), so it is not surprising that injuries are common. The thumb is also often involved in repetitive motion injuries, and the CMC joint is the structure most affected in osteoarthritis, which strikes 8% of the U.S. population. Other joints of considerable interest are the knee and the proximal interphalangeal and metacarpophalangeal joints in the index finger, which have some of the strongest ligaments in the hand.



previous page), which has expertise in the diagnosis, treatment, and rehabilitation of WRMSDs. Livermore physicians and nurse practitioners diagnose and manage problems while physical therapists administer on-site treatment.

Finally, LLNL’s large, stable, and innovative workforce provides excellent subjects for testing new products in the workplace and for providing valuable feedback. “We are a mini-town, with every profession represented, including editors, administrators, accountants, computer scientists, physicists, technicians, and engineers,” says Burastero. All use computers differently, and most, he adds, are not shy about voicing their ideas and problems concerning computer usage.

With these four resources (laboratory, computer modeling, clinic, and employees), Livermore is providing a better picture of how these injuries are initiated and prevented and the role that computer accessories play in prevention and cure. Burastero says combining data from laboratory studies, workplace observation, long-term subject feedback, and expert medical monitoring is much preferable to other methods that consist solely of testing people in a controlled setting for a few hours or lending them a product to try.

Much of the research program’s focus has been studying recent alternatives to standard computer keyboards. Conventional keyboards have been suspected of causing or exacerbating WRMSDs because their design can encourage use with wrists bent into awkward postures. Burastero notes that keyboards really have not changed much during the past 30 years: witness the host of “vestigial keys” such as “scroll lock” that date from the PC’s earliest days.

Pat Tittiranonda, an ergonomist with the program, recently completed the most comprehensive computer-keyboard study ever performed. The three-year study involved 80 participants representing a broad range of occupations at Livermore. All had suffered WRMSDs such as carpal-tunnel syndrome or tendonitis. The volunteers were given one of four alternate keyboards to use for six months. The keyboards had been developed and marketed by different companies with the goals of increasing user comfort and reducing the risks of awkward wrist postures.

Over a period of six months, the research team closely analyzed how the subjects used their keyboards, including how much force they exerted on the keys with their fingers. The team also did video and three-dimensional motion analysis of the volunteers working at their computers. The results showed for the first time that keyboards specifically designed to lessen pressure on wrists can relieve the symptoms of WRMSDs and promote recovery. What’s more, many of the alternative keyboards did not impair productivity and were relatively easy to learn.

The test subjects also completed a questionnaire on supervisor and coworker support and conflict. The data affirmed preliminary findings that the effectiveness of ergonomic interventions can be reduced in a stressful working situation.

The study, says Burastero, should have a “significant impact” by changing the way keyboards are designed and providing safety and health professionals with a greater understanding of the role of keyboards in WRMSDs. “There had been a lot of anecdotal evidence, but until now no one had systematically looked at how people actually work with keyboards or at the long-term effects,” he says.

Livermore ergonomics experts caution that keyboards are only one factor associated with WRMSDs. For example, chairs, desks, terminals, and lighting conditions also play roles. So do pointing devices such as mice and trackballs, which the research team is planning to investigate in depth. At the very least, says Burastero, Lawrence Livermore can provide a knowledgeable perspective to manufacturers, clinicians, and workers everywhere.

—Arnie Heller

Key Words: carpal tunnel syndrome, ergonomics, industrial health, work-related musculoskeletal disorders (WRMSDs).

For further information contact Stephen Burastero (510) 424-4506 ([email protected]).


Science & Technology Review September 1996


Modeling Human Joints and Prosthetic Implants

Research Highlight

A new prosthetic implant design for the thumb joint. We model the behavior of each design with physiological loads applied. Our technique uniquely reveals regions of high stress (shown in red) versus relatively low stress (blue).




Collaborators in Biomechanics Modeling

ISCR biomechanics research is collaborative in the broadest sense. At Livermore, we work with experts in computer vision, mechanical and electrical engineering, nondestructive evaluation, health care technology, health services, and with visiting scholars and students. Partners outside the Laboratory include:

Academic institutions
University of California, Berkeley
University of California, San Francisco
University of California, Davis
University of California, Santa Cruz
University of New Mexico
Institute for Math and Computer Science, Hamburg, Germany

Medical facilities
Kaiser Permanente
G. W. Long Hansen’s Disease Center
Louisiana State University Medical Center
Massachusetts General Hospital
Children’s Hospital, San Diego

Industry
ArthroMotion/Avanta Orthopedics, Inc.
ExacTech
Orthopedic Biomechanics Institute
National Highway Traffic Safety Administration
Wright Medical, Inc.
XYZ Scientific Applications, Inc.
Zimmer, Inc.

5 to 15 years. The initial implant can cost about $20,000; revision implants are more expensive. Our methods can eventually be applied to any human joint for which prosthetic implants have been designed. Our models are leading to better designs for prosthetic implants, resulting in longer life spans and fewer costly followup operations. Finally, our work can help the automobile industry to develop safety features that will protect against injury to the head, chest, and lower extremities.

Key Words: biomechanical modeling, finite-element modeling, Institute for Scientific Computing Research (ISCR), NIKE3D, prosthetic joint implants.

References
1. Modeling the Biomechanics of Human Joints and Prosthetic Implants, UCRL-TB-118601 Rev. 1, Lawrence Livermore National Laboratory, Livermore, CA (1995).
2. Energy & Technology Review, UCRL-52000-95-9/10 (September/October 1995) is devoted to a series of articles on computational mechanical modeling, including NIKE3D.
3. For more information on finite-element modeling using massively parallel processors, see “Frontiers of Research in Advanced Computations,” Science & Technology Review, UCRL-52000-96-7 (July 1996), pp. 4–11.

For further information contact Karin Hollerbach (510) 422-9111 ([email protected]).

Previous analyses of joint function (for example, rigid-body kinematic analyses) have typically provided less information than is possible through finite-element methods. On the other hand, most finite-element analyses of biological systems have been linear and two dimensional. We are applying three-dimensional, nonlinear, finite-element codes that assign material properties to bone and the soft-tissue structures associated with joints.

For example, using the NIKE3D modeling code to look at biological problems for the first time, we can model bones as materials that are more rigid than tendons, but less rigid than a metal implant. Soft tissues are inhomogeneous, undergo deformation, and some show elastic behavior. Our methods simulate tissue behavior under various loads, and joint movement with and without prosthetic implants, and they can assess injury following trauma, such as that caused by a car crash. Because we can assign different material properties to the tissues and examine a range of loads that are experienced in real life, our methods allow us to see interactions between tissue types for the first time, and we can identify regions of high stress. Our high-quality visualizations, such as the one shown in the illustration on p. 19, display complex results in easy-to-understand form.
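The effect of assigning different stiffnesses to tendon, bone, and implant can be illustrated with a toy one-dimensional finite-element sketch. The moduli, element geometry, and load below are invented for illustration; this is not NIKE3D or data from the Livermore models.

```python
import numpy as np

# Hypothetical Young's moduli in pascals: tendon << bone << implant.
E_TENDON, E_BONE, E_METAL = 1.2e9, 17e9, 110e9

def bar_stiffness(moduli, area=1e-4, length=0.01):
    """Assemble the global stiffness matrix for a 1D bar whose chained
    elements have the given moduli (standard linear FE assembly)."""
    n = len(moduli) + 1                     # nodes = elements + 1
    K = np.zeros((n, n))
    for e, E in enumerate(moduli):
        k = E * area / length               # element stiffness
        K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

def tip_displacement(moduli, force=100.0):
    """Fix node 0, pull on the last node, and solve K u = f."""
    K = bar_stiffness(moduli)
    f = np.zeros(K.shape[0])
    f[-1] = force
    u = np.linalg.solve(K[1:, 1:], f[1:])   # drop the fixed node
    return u[-1]

# A tendon-bone chain stretches far more than a bone-implant chain
# under the same load, which is the behavior the text describes.
soft = tip_displacement([E_TENDON, E_BONE])
stiff = tip_displacement([E_BONE, E_METAL])
```

The real models do the same kind of assembly and solve in three dimensions, with nonlinear material laws, over meshes of millions of elements.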

Two main issues are the computer time needed to run a finite-element code and the labor needed to develop a model from each new scanned data set. We are addressing the first problem by working on powerful supercomputers and by preliminary research in partitioning biological finite-element models to run on massively parallel machines.3 To address the second problem, we are automating as much of the modeling process as possible.

Other 3D Visualization Tools

We are also designing three-dimensional computational tools to interactively move tissues so we can simulate surgical procedures. In the future, we plan to extend our modeling to include additional joints and perhaps internal organs, such as the heart, lungs, and liver.

How can our computational tool benefit the clinical community and ordinary individuals? Our models provide data on internal joint stresses and strains that are not otherwise obtainable. They can be used by surgeons to help plan treatment and to assess outcome following a traumatic or repetitive motion injury. They can help a surgeon predict results, such as strength, range of motion, and other indicators of function after an operation.

Orthopedic implants are a multibillion-dollar U.S. and worldwide industry; however, today’s prosthetic joint implants have high failure rates. They often loosen, wear, and fail before the end of a recipient’s life, necessitating painful and costly replacement. Orthopedic implants last on average

Starting with (a) an individual human hand, we obtain (b) raw magnetic resonance imaging or computed tomography scans. The scan shown here is a cross section of bones and surrounding tissues proximal to the knuckles. From many cross sections, we (c) process the images and then generate (d) a high-quality, three-dimensional mesh that is suitable for finite-element analysis.



Science & Technology Review May 1997

PEREGRINE

treatment process. It uses Monte Carlo calculations, in which statistical sampling techniques are used to obtain a probabilistic approximation of a problem’s solution. This enables PEREGRINE to model how trillions of radiation particles interact with the complex tissues and structures in the human body and where they deposit their energy. In the past, Monte Carlo calculations, known to be the best way to model these interactions, would have required days or weeks of supercomputer resources, impractical for radiation treatment planning. The PEREGRINE team has designed and built the Monte Carlo system to plan accurate radiation treatments at a cost and speed practical for widespread medical use. PEREGRINE uses advanced algorithms integrated with off-the-shelf computer hardware configured in sophisticated architectures to bring Monte Carlo-based treatment planning to the desktop.

Better Treatment Strategies

Experts have looked at the diagnosis and treatment planning process to try to explain why current cancer treatments are not more effective.

When a physician suspects malignancy, a computerized tomography (CT) scan is made of the suspected area to determine the exact position and extent of the tumor. If the cancer has not yet metastasized and is susceptible to radiation, the next step is to develop a plan for radiation treatment (Figure 1). Although CT scans provide radiation planners with a three-dimensional (3D) electron-density map of the body, current dose calculation methods model the body as a virtually homogeneous “bucket of water.” Inhomogeneities, such as bone and airways, are ignored or highly oversimplified.

Furthermore, interpolated data from dose measurements made in water are used to calculate radiation treatments. These calculations are also based on a variety of simplifications in the way radiation is produced by the source, how radiation travels through the body, and how its energy is deposited.

Some tumors are particularly difficult to treat with radiation because of their proximity to vital organs, the abundance of different tissue types in the area, and the differences in their susceptibility to radiation. Cancers of the head and neck, lungs, and reproductive organs are examples. Radiation planners know that too small a dose to the tumor can result in recurrence of the cancer, while too large a dose to healthy tissue can cause complications or even death. Because of the inaccurate dose provided by today’s calculations, doctors trying to avoid damage to healthy tissue sometimes undertreat cancerous tissue (Figure 2).

PEREGRINE is a tool that meets these clinical challenges. It is the only dose calculation system that can be used for all types of radiation therapy, can exactly model the radiation beam delivery system being used for each treatment, and uses each patient’s CT scan as a basis for the dose calculations. “Most importantly, the PEREGRINE 3D Monte Carlo algorithms, used with Livermore’s atomic and nuclear databases, enable the most accurate dose calculations,” Moses says. “These breakthroughs could profoundly impact cancer treatment and the lives of patients who might otherwise die.”

When PEREGRINE becomes available for commercial distribution, it will deliver dose calculations economically in today’s competitive health-care industry. Because it also can


Every year, about 1.25 million people in the United States are diagnosed with life-threatening forms of cancer. About 60% of these patients are treated with radiation; half of them are considered curable because their tumors are localized and susceptible to radiation. Yet, despite the use of the best radiation therapy methods available, about one-third of these “curable” patients, nearly 120,000 people each year, die with primary tumors still active at the original site.

Why does this occur? Experts in the field have looked at the reasons for these failures and have concluded that radiation therapy planning is often inadequate, providing either too little radiation to the tumor for a cure or too much radiation to nearby healthy tissue, which results in complications and sometimes death.

What can be done to improve this prognosis? Since 1993, Lawrence Livermore National Laboratory has combined its renowned expertise and decades of experience in nuclear science, radiation transport, computer science, and engineering to adapt nuclear weapons Monte Carlo techniques to a better system for radiation dose calculations. Mentored in particle interactions and nuclear data by Lawrence Livermore physicists Bill Chandler and Roger White, and armed with seed money from Livermore’s Laboratory Directed Research and Development, medical physicist Christine Siantar began directing a small project to develop PEREGRINE, with the mission of providing better cancer treatment.

What resulted is a radically new dose calculation system that for the first time can model the varying materials and densities in the body as well as the radiation beam delivery system. According to program manager Edward Moses, the PEREGRINE team is moving these unique radiation therapy planning capabilities into the hands of the medical community so that doctors will have the most accurate tool available to plan radiation treatments that cure cancer.

PEREGRINE Breakthrough

PEREGRINE breaks the barriers to accurate dose calculation with the first full-physics model of the radiation

PEREGRINE: Improving Radiation Treatment for Cancer

PEREGRINE “. . . is a unique system of enormous value to society in terms of improving local control and reducing complications in radiation treatment of cancer.”

—Noted medical physicist Dr. Radhe Mohan


Figure 1. Livermore is using its PEREGRINE radiation therapy planning process to improve major parts of the cancer treatment process, which includes (a) diagnosis using high-resolution CT (computerized tomography) scans, (b) treatment planning, and (c) actual treatment.




PEREGRINE in Action

The PEREGRINE dose calculation includes two main steps: defining the treatment by describing the radiation source and the patient, and calculating the dose.

Treatment Definition

The treatment-definition generator prepares the data for the specific treatment. As input, the generator requires three data sets for each treatment: the patient transport mesh, the treatment radiation source, and the particle interaction database.

Patient Transport Mesh. The patient’s CT scan is composed of a stack of image “slices” (Figure 3a). From these, PEREGRINE creates a 3D map, the patient transport mesh, of the density and composition of all the matter in the vicinity of the radiation beam (Figure 3b). The system creates this map from the CT scan by assigning a material and density value to each volume element (voxel) in the scan. The transport mesh enables PEREGRINE to model details of the patient’s body, including irregularities on the body surface, air cavities, and differences in tissue composition, with unprecedented, submillimeter accuracy.
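The voxel-assignment step can be sketched as follows; the Hounsfield-unit thresholds, material names, and densities here are illustrative placeholders, not PEREGRINE’s actual calibration tables.

```python
# Map CT numbers (Hounsfield units) to a material label and a mass
# density for each voxel -- illustrative thresholds and values only.
MATERIALS = [
    (-1100, -200, "air",    0.0012),   # (HU low, HU high, name, g/cm^3)
    (-200,   100, "tissue", 1.00),
    (100,   3100, "bone",   1.85),
]

def classify_voxel(hu):
    """Return (material, density) for one CT voxel value."""
    for lo, hi, name, rho in MATERIALS:
        if lo <= hu < hi:
            return name, rho
    return "unknown", 1.0

def build_transport_mesh(ct_slice):
    """Turn a 2D grid of HU values into a material/density map."""
    return [[classify_voxel(hu) for hu in row] for row in ct_slice]

mesh = build_transport_mesh([[-950, 40], [400, -30]])
```

A full transport mesh applies the same assignment to every voxel of every slice in the CT stack.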

The Radiation Source. Accurate dose calculations also depend on reliable information about the characteristics of the radiation beam delivery system. PEREGRINE is the first dose calculation system to use a complete model for the radiation source of each type of accelerator. The system models the source by dividing the beam delivery system into two parts (Figure 3c). The upper portion of the delivery system is the accelerator itself, with components that do not change from treatment to treatment. The lower portion of the model has components such as collimators, apertures, blocks, and wedges, which are used to customize the beam for each patient’s treatment to ensure coverage of the tumor.

PEREGRINE draws from built-in source libraries to model the upper portion of the system and combines that information with the treatment-specific configuration of the lower portion to produce a model of the radiation being delivered. The source model is the first to provide an accurate, comprehensive description of photon, electron, neutron, or proton treatment beams as they enter the body.
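The two-part source idea (a fixed accelerator spectrum from a library, filtered by the patient-specific beam modifiers) can be sketched like this; the machine name, spectrum, and wedge transmission values are all made up for illustration.

```python
# Toy two-part source model. The library holds a fixed spectrum per
# accelerator; modifiers represent the treatment-specific hardware.
SOURCE_LIBRARY = {
    "example_6mv": [(1.0, 0.3), (2.0, 0.5), (6.0, 0.2)],  # (MeV, weight)
}

def customized_beam(machine, modifiers):
    """Apply each modifier's energy-dependent transmission factor to
    the library spectrum for this accelerator."""
    beam = []
    for energy, weight in SOURCE_LIBRARY[machine]:
        for transmit in modifiers:
            weight *= transmit(energy)      # wedge, block, aperture...
        beam.append((energy, weight))
    return beam

def wedge(energy):
    """Hypothetical wedge: attenuates low energies more than high."""
    return 0.7 if energy < 2.0 else 0.9

beam = customized_beam("example_6mv", [wedge])
```

The design keeps the expensive accelerator characterization reusable while the cheap per-treatment configuration changes with each patient.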

Particle Interaction Data. In support of its national security missions, Livermore has developed and maintains the world’s most extensive set of atomic and nuclear interaction databases. These huge databases contain information about particle interactions that enable the all-particle tracker to calculate how every


be integrated into existing radiation treatment planning systems through standard network connections, PEREGRINE can be installed in every cancer clinic. With PEREGRINE in the hands of radiation oncologists, patients will have access to an unprecedented level of accuracy in treatment planning. Armed with improved knowledge of predicted dose, physicians will be able to develop aggressive new treatment strategies that minimize the risk to patients. (See the box below for more information on radiation therapy.)

Lawrence Livermore has applied for two patents for key software and hardware elements and intends to submit its first application to the U.S. Food and Drug Administration (FDA) later this year. “Our goal is to make PEREGRINE available for use in cancer treatment centers by late 1998,” Siantar says.

Radiation Therapy

Radiation has been used to treat cancer for almost 100 years. However, in most cases, a radiation dose sufficient to kill a tumor may also injure or damage nearby vital tissues or organs. Successful therapy thus depends on choosing the right type of radiation and applying the right amounts to the right places.

Today, tumors usually are treated by beams of particles from a particle accelerator, a process known as teletherapy, which is performed with any of four types of radiation. Photon or electron beams are the most frequently used, while therapies using neutrons or heavy charged particles such as protons are largely experimental. Occasionally, treatment may derive from a radioactive source that is planted inside the body, a treatment known as brachytherapy.

Photon beam energies are high enough for them to be considered x rays. They have moderate to long ranges (tens of centimeters), so they can be used for internal tumors. Photon therapy accounts for about 90% of all radiation treatments in this country.

About 10% of cancer patients receive electron therapy. Electron beams are useful for shallow cancers because electrons have limited penetrating power. Electron treatment spares deeper-lying tissues but is not effective for internal tumors.

High-energy proton beams can be designed to deposit most of their energy at one predictable depth or range. By controlling the beam energy, oncologists can control their range. A planner can tailor a proton beam to deliver most of its radiation dose into the tumor while avoiding healthy surrounding tissue. Unfortunately, proton therapy is very expensive and is available at only two centers in the U.S.

Neutron radiation has the advantage of being more effective than photons for treating some types of radiation-resistant tumors. But neutron treatment is also very damaging to healthy tissue. Experimental treatment is available at just 20 centers worldwide.

New radiation therapies are being developed to treat highly invasive tumors and cancer that has already metastasized. In experimental boron neutron-capture therapy, neutron-absorbing boron is injected into the body where it is absorbed by cancerous cells. When the body is irradiated by neutrons, the neutrons are preferentially absorbed by the boron in the tumors and the tumors are destroyed. Radio-immunotherapy, another approach under study by the research community, uses the chemistry of the body’s immune system to target radioactive compounds at metastasized cancerous tumors.

PEREGRINE is now being used in research on photon, electron, proton, and neutron treatment at several leading hospitals across the country. Plans for research with PEREGRINE in the next year include collaborations on boron neutron-capture therapy, brachytherapy, and other advanced methods.

Figure 2. (a) Axial view of a tumor in the left lung. (b) The most accurate dose calculation available in clinics today indicates complete coverage of the tumor with a curative dose. (c) PEREGRINE’s Monte Carlo calculation of the intended dose in (b) reveals significant underdosing would occur near the boundaries of the tumor, with increased likelihood of recurrence.

[Figure 2 panels (a), (b), and (c); overlays mark the target volume, the curative dose level, and the tumor.]

Figure 3. PEREGRINE models both the patient and the radiation source to ensure accurate treatment planning. (a) The CT scan is a stack of image slices less than 1 centimeter (cm) apart. From these, PEREGRINE creates (b) a 3D transport mesh of the patient to model how radiation will interact with the materials in the body. (c) PEREGRINE also models the radiation beam source in two parts: the accelerator itself (above the beam definition plane) and the components that customize the beam for each treatment.

[Figure 3 panels: (a) CT-scan stack of patient; (b) patient transport mesh; (c) radiation source model, labeling the beam definition plane, flattening filter, jaws/apertures, wedges, and blocks.]



continue and as microprocessor chips improve, the system will be even faster.

Clinical Verification

Almost since work began on PEREGRINE, the Livermore project team has worked closely with an advisory board of internationally respected medical physicists and physicians in radiation oncology. (See the list of organizations on p. 10.) Dubbed the MEDPAC (Medical Physics Advisory Committee), the group ensures that PEREGRINE incorporates the best physics and that this new technology is relevant in a clinical setting.


isotope of every element on the periodic table will interact with any radiation particle. To describe the interaction of radiation particles with the muscle, fat, air, bone, lung, and other tissue in our bodies, PEREGRINE uses interaction data for combinations of these elements (Figure 4). Fatty tissue, for instance, is 12% hydrogen, 64% carbon, 1% nitrogen, and 23% oxygen by weight. Knowing how various radiation particles will behave with these elements enables the system to predict how they will behave in our bodies.
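The way per-element interaction data combine into a tissue value can be illustrated with a mass-fraction-weighted mixture rule. The fat composition comes from the text; the per-element coefficients below are placeholder numbers, not values from the Livermore databases.

```python
# Fat composition by weight, as given in the article.
FAT = {"H": 0.12, "C": 0.64, "N": 0.01, "O": 0.23}

# Placeholder mass interaction coefficients (cm^2/g) at one energy --
# NOT real database values.
MU = {"H": 0.37, "C": 0.21, "N": 0.21, "O": 0.21}

def mixture_coefficient(fractions, element_mu):
    """Mixture rule: mu_mix = sum over elements of w_i * mu_i."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9  # fractions sum to 1
    return sum(w * element_mu[el] for el, w in fractions.items())

mu_fat = mixture_coefficient(FAT, MU)
```

The same weighted combination, repeated per particle type and energy, is how element-level data become tissue-level data.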

Monte Carlo All-Particle Tracker

The tracker selects a particle from the radiation source and tracks it through the patient transport mesh until it undergoes a collision. PEREGRINE then consults the interaction database and retrieves information on the incident particle and all secondary (daughter) particles resulting from the collision. All the daughter products are tracked as they travel through the transport mesh until they are absorbed in the body or leave the patient. During the simulation, PEREGRINE records the energy deposited at each interaction, building up a map of absorbed dose in the patient transport mesh. By repeating the process for millions of the trillions of particles a patient receives in a treatment, the Monte Carlo algorithm produces a statistically realistic picture of an entire irradiation.
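The tracking loop (sample a distance to the next collision, deposit energy, repeat until the particle is absorbed or escapes) can be sketched with deliberately toy physics. The mean free path, the halve-the-energy collision rule, and the one-dimensional geometry are illustrative simplifications, not PEREGRINE’s physics.

```python
import random

def track_particle(energy, mfp=3.0, depth_bins=10, bin_cm=1.0, rng=random):
    """Follow one particle through a 1D 'patient': sample the distance
    to the next collision from an exponential with mean free path mfp,
    deposit half the remaining energy there, repeat (toy physics)."""
    dose = [0.0] * depth_bins
    x = 0.0
    while energy > 0.01:
        x += rng.expovariate(1.0 / mfp)     # distance to next collision
        b = int(x / bin_cm)
        if b >= depth_bins:                 # particle exits the patient
            break
        deposit = 0.5 * energy
        dose[b] += deposit                  # tally absorbed energy
        energy -= deposit
    return dose

def dose_map(n_particles, seed=1):
    """Accumulate many histories into a statistical dose estimate."""
    rng = random.Random(seed)
    total = [0.0] * 10
    for _ in range(n_particles):
        for b, d in enumerate(track_particle(1.0, rng=rng)):
            total[b] += d
    return total
```

Tracking more histories smooths the statistical noise in the tally, which is why the real system repeats the process for millions of particles.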

Figure 5 illustrates the dose buildup in a radiation treatment. The more particles that are tracked, the more accurate is the simulation of the treatment and the better is the information for the doctor. But tracking particles takes time, and therein lies the challenge: attaining both accuracy and speed.

Supercomputer to Desktop

The historical problem with Monte Carlo has been that the radiation treatment planning community cannot afford a turnaround time of more than an hour to meet its caseload. Previously, even on a $20-million Cray-1 supercomputer, a single dose calculation took weeks to complete. So Monte Carlo calculations remained in the weapons, reactor, and high-energy-physics research communities where the turnaround time for calculations could stretch over months.

Livermore computer experts have combined state-of-the-art computation techniques and advanced computer architecture to bring Monte Carlo treatment planning to the hospital desktop and office network environment. Taking advantage of recent strides in microcomputer technology, the PEREGRINE dose calculation engine is constructed from economical, off-the-shelf computer components originally developed for file- and Internet-server applications. PEREGRINE can be integrated into any treatment planning system via conventional network connections. Adding it to an existing system will be as easy as adding a file server.
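PEREGRINE’s approach of distributing one dose calculation across many processors and merging the partial tallies can be sketched as follows. The thread pool here stands in for PEREGRINE’s dedicated multiprocessor hardware, and `simulate_batch` is a hypothetical stand-in for a worker’s share of the physics.

```python
from concurrent.futures import ThreadPoolExecutor
import random

def simulate_batch(n_histories, seed):
    """Hypothetical stand-in for one worker's share of particle
    histories: returns that worker's summed dose tally."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n_histories))

def parallel_dose(total_histories, workers=4):
    """Split the histories evenly across workers, run the batches
    concurrently, and merge the partial tallies into one result."""
    share = total_histories // workers
    sizes = [share] * workers
    sizes[-1] += total_histories - share * workers   # leftover histories
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each worker gets its own batch size and random seed.
        return sum(pool.map(simulate_batch, sizes, range(workers)))
```

Because Monte Carlo histories are independent, the work divides almost perfectly: a clinic needing faster turnaround simply adds more processors, which matches the scalable system design described above.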

The system design uses multiple processors interconnected by an internal high-speed network. The physics software distributes the calculations for a problem so that the dose is calculated by many microprocessors in parallel. The number of microprocessors can be determined by the user. For example, a big-city clinic that plans many radiation treatments each day would require a larger number of microprocessors to enable the fastest possible turnaround time. A suburban or rural clinic that does fewer radiation treatments might order a smaller, less expensive system. The system design supports hardware upgrades to increase calculational capability and to adapt to future technological changes. Now, a PEREGRINE calculation takes about 30 minutes. As code refinements

[Figure 4b plot: particle interaction probability (barns) versus particle energy (million electron volts), showing an LLNL evaluation.]

Figure 4. (a) When a particle interacts with the atoms in the body, it creates a shower of secondary (daughter) particles, which in turn move through the body. Lawrence Livermore’s all-particle Monte Carlo codes predict these interactions based on the information in the Laboratory’s comprehensive atomic and nuclear reaction databases. These databases are derived from the best available measurements and theoretical calculations. (b) The probability of particle interaction depends on the type of particle and the particle’s energy.


Figure 5. These 25-cm-wide visualizations of a PEREGRINE dose calculation show the sequence of the predicted dose buildup for the treatment of a brain tumor. PEREGRINE offers the radiation oncologist unprecedented high resolution of absorbed dose in the patient.

The project team is particularly interested in validating the accuracy of PEREGRINE dose calculations against clinical measurements for photon beam (or x-ray) therapy, the most frequently used form of radiation therapy (Figure 6). The University of California, San Francisco (UCSF), has provided over 650 dose distribution measurements in a variety of materials and geometries that simulate conditions in the patient. UCSF and the Medical College of Virginia have provided retrospective cases for dose calculation comparisons for patients with tumors in the head and neck, spine, lung, and larynx. Livermore has also begun a collaboration with the Radiation Therapy Oncology Group, a team sponsored by the National Cancer Institute, with the goal of using PEREGRINE in their new 3D lung cancer treatment protocol to calculate doses on all their patients.

Through these efforts, Livermore is working to improve the reliability of radiation source characterizations, validate the PEREGRINE dose calculations against clinical measurements, and evaluate the

[Figure 5 panels: 1.1 x 10^4, 6.6 x 10^4, 1.7 x 10^5, 8.2 x 10^5, 1.2 x 10^6, 3.2 x 10^6, 3.8 x 10^7, and 6.8 x 10^7 particles; scale bars 6 cm and 3 cm.]


“PEREGRINE may soon help to save thousands of lives,” Siantar says. “Its high accuracy, speed, and affordability add up to the likelihood of widespread use at research hospitals and small clinics, which will bring superior radiation dose calculations and better treatment to more patients.”

—Katie Walter

Key Words: cancer treatment, Monte Carlo physics, nuclear databases, radiation dose calculations, radiation therapy, tumors.

For further information contact Edward Moses (510) 423-9624 ([email protected]), Christine Siantar (510) 422-4619 ([email protected]), or the PEREGRINE website (http://www-phys.llnl.gov/peregrine/).

EDWARD MOSES, PEREGRINE program leader since 1994, received a Ph.D. (1977) in electrical engineering from Cornell University, where he specialized in quantum electronics. His earlier Lawrence Livermore experience includes program manager of the AVLIS (Atomic Vapor Laser Isotope Separation) Program from 1986 to 1990.

CHRISTINE SIANTAR, principal investigator of the PEREGRINE program, received her Ph.D. (1991) in medical physics from the University of Wisconsin. Prior to joining Lawrence Livermore in 1994, she gained experience in cancer treatment planning at the Medical College of Wisconsin. Her present duties also include validation and verification of the Monte Carlo calculations for PEREGRINE.


impact of accurate dose calculations on the patient’s outcome. Ultimately, the system should facilitate more accurate standardized clinical trials and more reliable implementation of those results throughout the medical community.

A Look Ahead

Livermore is now working with the radiation treatment-planning community to assure that PEREGRINE integrates easily into existing treatment-planning systems as a simple upgrade. Lawrence Livermore plans to license PEREGRINE to treatment-planning vendors so that it will be widely available.

In the near future, the project team plans to make PEREGRINE fully applicable to electron-beam therapy, stereotactic radiosurgery, and brachytherapy. (See box on p. 6.) Later, the team will study applications for heavy-particle (neutron and proton) therapies, which are used for cancers in and around the salivary glands, spinal cord, and eye.

The team is also pursuing collaborations in the application of the system to cancers of the breast and prostate. PEREGRINE is the only dose calculation algorithm today that can fully model the dose buildup near the surface of the breast, which is very important because of the area's sensitivity to radiation burns. Accurate dose calculations for prostate cancer are critically important because of the sensitivity of nearby structures to radiation injury.

As PEREGRINE becomes more sophisticated, it may change the way radiation treatments are planned. Today, a radiation oncologist studies a patient's CT scans to determine where radiation should and should not be delivered and what the absorbed dose must be to destroy the tumor. The medical physicist and dosimetrist then recommend a plan to deliver the radiation (numbers of beams, angles, etc.) prescribed by the oncologist. The PEREGRINE project team plans to bring "inverse planning" to this process, whereby treatment goals are established by the oncologist and input into PEREGRINE, which then determines how best to deliver the radiation to the patient.

Figure 6. Comparison of PEREGRINE calculations and clinical measurements of radiation dose in water from the University of California, San Francisco, shows PEREGRINE's unparalleled accuracy in predicting the delivered dose of radiation. (Plots compare PEREGRINE X and Y profiles with UCSF data: relative dose versus depth along the beam, in centimeters, and versus distance from the beam, in centimeters, at depths of 1.5, 8.7, and 15.9 cm.)

MEDPAC Advisory Committee

Livermore has depended on the medical community for input in developing PEREGRINE. In addition to Lawrence Livermore, hospitals and organizations represented on the MEDPAC include:
• University of Wisconsin Medical School
• Massachusetts General Hospital, Harvard Medical School
• M.D. Anderson Cancer Center
• Gershenson Radiation Oncology Center, Harper Hospital, Detroit
• Loma Linda University Medical Center
• Memorial Sloan Kettering Cancer Center
• Washington University
• University of California, San Francisco
• Université Catholique de Louvain, Brussels, Belgium
• Los Alamos National Laboratory


Abstract

PEREGRINE: Improving Radiation Treatment for Cancer

The Lawrence Livermore National Laboratory has developed a radiation dose calculation system that will provide the most accurate and highest resolution treatment planning capability available. PEREGRINE is designed to be fast and affordable and will run on low-cost computer hardware in a hospital network environment. The availability of such accurate dose calculations will improve the effectiveness of radiation therapy by providing quality radiation treatment planning for patients in every clinical environment and facilitating accurate clinical trials. PEREGRINE will provide accurate estimates of required doses for tumor control and normal tissue tolerance and will advance the field of radiation oncology. It can be used for all methods of radiation therapy and could help save thousands of lives each year.
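The dose engine described above is built on Monte Carlo particle transport: many individual particle histories are sampled at random, and the energy each deposits is tallied on a grid. The following is a deliberately simplified, hypothetical sketch of that idea, not PEREGRINE's actual physics or code; it tracks photons only to their first interaction, with a made-up attenuation coefficient, and tallies a depth-dose curve.

```python
import random

def monte_carlo_depth_dose(n_photons=100_000, mu=0.05, depth_cm=30.0,
                           bins=30, seed=1):
    """Toy Monte Carlo depth-dose tally. Each photon travels a free path
    sampled from an exponential distribution (attenuation coefficient mu
    per cm) and deposits all of its energy at the first interaction site.
    Returns the dose per depth bin, normalized to the maximum bin."""
    rng = random.Random(seed)
    tally = [0.0] * bins
    width = depth_cm / bins
    for _ in range(n_photons):
        # Free path length sampled from the exponential distribution
        x = rng.expovariate(mu)
        if x < depth_cm:
            tally[int(x / width)] += 1.0
    peak = max(tally)
    return [t / peak for t in tally]

dose = monte_carlo_depth_dose()
# In this simplified model the dose peaks near the surface and falls
# off roughly exponentially with depth.
assert dose.index(1.0) < 5 and dose[-1] < max(dose)
```

A real treatment-planning code tracks secondary electrons, scattering, and heterogeneous tissue from CT data; the statistical tallying strategy, however, is the same.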


On the Offensive against Brain Attack

Science & Technology Review June 1997

smaller. Recognizing that Lawrence Livermore has capabilities in microfabrication and other technologies that could be used to reduce the size of medical devices, the Center for Healthcare Technologies established a program to create a new standard of stroke care.

Critical to defining this standard was the "Workshop on New Technology for the Treatment of Stroke," which the Center sponsored in March 1995. The workshop was attended by internationally recognized stroke clinicians and researchers, cardiologists with experience in medical devices used to treat heart attack, and scientists and engineers from Lawrence Livermore and Los Alamos national laboratories. Instead of the typical conference agenda of success stories, clinicians described significant areas of unmet need for diagnosing and treating stroke victims, particularly the lack of medical devices that might satisfy those needs.

Out of the workshop grew a vision of the future of stroke care and a framework for the priorities of a multidisciplinary team of Laboratory researchers who, with the help of Laboratory Directed Research and Development funding, are developing much-needed tools to diagnose and treat stroke. The Lawrence Livermore stroke initiative team's vision of the future of stroke care is summarized in Figure 1. It focuses on the greatest unmet clinical needs—restoring blood flow, preventing hemorrhage, improving treatment decisions with sensors, and identifying the at-risk population with new screening technologies. (For a primer on the kinds, causes, and treatment of stroke, see the box on p. 19.)

The Livermore team consists of specialists in biomedical engineering, biology and bioscience, laser medicine and surgery, micro-engineering, microsensors, and computer simulation. It also has key collaborators from academic medical centers and private companies. These partnerships are the basis for rapidly moving the medical device concepts from the research laboratory through development, clinical trials, regulatory approval, and manufacture so that the resulting new tools can have a timely impact on the lives of the thousands of people who have strokes each year.

Since the workshop, the stroke-initiative team's research has produced several proof-of-principle prototypes. The work falls into four categories: microsensors for brain and clot characterization, optical therapies for breaking up clots in the blood vessels of the brain, laser–tissue interaction modeling, and microtools for treating aneurysms (a leading cause of hemorrhagic stroke).

Sensors to Diagnose Clots

The Laboratory's stroke initiative has made substantial progress in developing microsensors that improve understanding of the biochemistry of stroke as well as offer the potential to improve stroke diagnosis, monitoring, and treatment. The sensors have the potential to identify the types of clots that cause stroke, to monitor patients during therapies that dissolve clots or protect brain cells with drugs, and to determine the health of brain or blood vessel tissue at a stroke site prior to treatment.

Development and use of these sensors, like much of the team's work, are predicated on the availability of microcatheters. These tiny, hollow tubes, which are available from a number of manufacturers, can contain optical fibers to which microsensors and other diagnostic, treatment, and monitoring tools being developed at the Laboratory are attached (Figure 2). Inserted in the femoral artery, the microcatheters are guided by microwires through the circulatory system to the clot


In the fall of 1994, a group in Lawrence Livermore National Laboratory's Center for Healthcare Technologies began asking a pointed question whose answer was to profoundly affect the focus of a major part of the Center's research: Given that both heart attack and stroke result from disruption of blood flow, why are cardiovascular conditions treated with aggressive medical intervention while cerebrovascular conditions usually receive passive intervention with emphasis on rehabilitation? Why is stroke not treated as "brain attack"? The answer, they found, was not that there is something fundamentally different about the two potentially deadly maladies. Instead, what they found was that doctors frequently did not have the proper tools to treat stroke as quickly and aggressively as they treat heart attack.

Heart attacks and strokes usually result from decreased blood flow interrupting the supply of oxygen and nutrients to tissue. Most frequently, the flow is decreased because of a blockage, but flow can also be disrupted by malformations or rupture of the vessels. An important difference between a heart attack and a stroke is that the blood vessels involved in a stroke are significantly

Under the leadership of the Laboratory's Center for Healthcare Technologies, a multidisciplinary team is developing a variety of much-needed tools to provide stroke victims with early, aggressive diagnosis and treatment.





into a blood phantom (water colored with red food coloring). They have identified two distinct regimes of dynamic response, one due to strong laser light absorption, the other to moderate absorption.

In both cases, the confined stresses imparted to the liquid are substantial and determine most of the important dynamics. In the strong absorption case, energy is deposited in a thin zone near the fiber tip (Figure 5a). A thermally generated vapor bubble develops around the tip, and the initial stress wave, which can generate high peak pressure within 400 nanoseconds (billionths of a second), quickly propagates away from the tip. In the moderate case, energy is deposited in an extended zone beyond the fiber tip (Figure 5b). The initial stress evolves from the heated region within 500 micrometers (millionths of a meter) of the end of the tip, and the largest stress gradients, which develop within 20 nanoseconds, are directed radially and are situated in the immediate vicinity of the tip. Eventually (within 100 nanoseconds), a cloud of tiny bubbles develops in response to the stress caused by laser heating. In both cases, this expansion and collapse of bubbles exert pressure and shear forces on a clot, which lead ultimately to its breakup.

The Laboratory recently entered into a Cooperative Research and Development Agreement (CRADA) with EndoVasix Inc. of Belmont, California, which will eventually market the laser "clot-busting"

Stroke (or "brain attack") results from vascular disease affecting the arteries supplying blood to the brain and occurs when one of these vessels bursts or is clogged. Part of the brain is deprived of the oxygen and nutrients it needs to function, the nerve cells die within minutes, and the part of the body controlled by these cells cannot function. Sometimes the devastating effects of stroke are permanent because the dead brain cells are not replaced.

Stroke is the leading cause of permanent disability in the U.S. and the third leading cause of death. Each year, 550,000 Americans have strokes. One-third of them die. Many of the survivors, who currently total over 3 million, have decreased vocational function (71%); of these, 16% remain institutionalized, and 31% need assisted care. The personal cost is incalculable; the annual cost for treatment, post-stroke care, rehabilitation, and lost income to victims (but not their family caregivers) is $30 billion.

Types of Stroke

There are two main types of strokes, ischemic and hemorrhagic. Clots—cerebral thrombuses or cerebral embolisms—cause ischemic strokes. Cerebral hemorrhage or subarachnoid hemorrhage causes hemorrhagic strokes. Ischemic strokes are the most common, hemorrhagic strokes the most deadly.

Cerebral thrombosis occurs when a blood clot (a thrombus) forms in an artery in or leading to the brain, blocking the blood flow. It is the most common cause of ischemic stroke. Cerebral embolism occurs when a wandering clot (an embolus) or some other particle forms in a blood vessel away from the brain, usually in the heart. The clot is carried by the bloodstream until it lodges in an artery leading to or in the brain.

A cerebral hemorrhage occurs when an artery in the brain bursts, flooding the surrounding tissue with blood. Bleeding from an artery in the brain can be caused by a head injury or a burst aneurysm, a blood-filled pouch that balloons out from a weak spot in the artery wall. A subarachnoid hemorrhage occurs when a blood vessel on the surface of the brain ruptures and bleeds into the space between the brain and the skull (but not into the brain itself).

Hemorrhagic strokes cause loss of brain function both from loss of blood supply and from pressure of accumulated blood on surrounding brain tissue. The amount of bleeding determines the severity. If hemorrhagic stroke victims survive (which they do in 50% of the cases), their prognosis is better than that of ischemic stroke victims. With ischemic stroke, part of the brain dies and does not regenerate. With hemorrhagic stroke, pressure from the blood compresses part of the brain, but the pressure diminishes gradually and the brain may return to its former state.

About 10% of all strokes are preceded by "little strokes" called transient ischemic attacks (TIAs). They are more useful for predicting if, rather than when, a stroke will happen. They occur when a blood clot temporarily clogs an artery and part of the brain does not get the blood it needs. The symptoms, which are the same as stroke symptoms, occur rapidly and last a relatively short time, usually between 1 and 5 minutes. TIAs can last up to, but not more than, 24 hours. Unlike stroke, when a TIA is over, people return to normal because the nerve cells were not deprived of oxygen long enough to die.

Diagnosis and Treatment

Determining that a stroke has occurred and diagnosing its type and severity takes time—time that stroke victims may not have. Diagnostic tools are tests that image the brain, such as computerized axial tomographic (CAT) scans, magnetic resonance imaging (MRI) scanning, and radionuclide angiography or nuclear brain scan. Tests that show the electrical activity of the brain are also used. The two basic tests of this type, an electroencephalogram (EEG) and the evoked response test, measure how the brain handles different sensory stimuli such as flashes of light, bursts of sound, or electrical stimulation of nerves in an arm or leg.

Tests that show blood flow to and in the brain are also used for diagnosis. One of these is the Doppler ultrasound test, which can detect blockages in the carotid artery. Another is carotid phonoangiography, wherein a stethoscope or sensitive microphone is put on the neck over the carotid artery to detect abnormal sounds (bruits) that may indicate a partially blocked artery. Yet another is digital subtraction angiography, in which dye is injected into a vein in the arm and an x-ray machine quickly takes a series of pictures of the head and neck. From these x rays, doctors can determine the location of any blockages, how severe they are, and what can be done about them.

Surgery to remove plaque from artery walls, drugs that prevent clots from forming or getting bigger, acute hospital care, and rehabilitation are all accepted ways to treat stroke. Sometimes treating a stroke means treating the heart, because various forms of heart disease can contribute to the risk of stroke, particularly those caused by clots that form in a damaged heart and travel to the brain. But compared to the diagnosis and treatment tools that have been developed for heart attack, those for brain attack seem extremely limited and have not advanced greatly in recent years.

* Heart and Stroke Facts (The American Heart Association, Dallas, Texas, 1994), pp. 21–27. This booklet is available from the American Heart Association's National Center, 7272 Greenville Avenue, Dallas, Texas 75231-4596 (telephone: 1-800-242-8721).

Figure 4. Livermore scientists use a tunable optic parametric oscillator (OPO) laser to create a series of laser back-lighted images (see Figure 5 below) of the pressure distribution of laser energy within a blood-like fluid. (The setup delivers 1 to 5 millijoules of laser energy through a quartz microfiber into a quartz tube containing the blood phantom; a probe laser beam, beamsplitters, mirrors, a compensation tube, and a charge-coupled detector record the images.)


Figure 5. In laboratory experiments to develop a laser system to break up clots, Livermore researchers have identified two regimes of dynamic response. (a) In the strong absorption case, heat from the laser energy generates a vapor bubble about the fiber-optic tip. (b) In the moderate case, initial stress evolves from the heated region very near the fiber tip. Within 100 nanoseconds, a cloud of tiny bubbles develops in response to the stress caused by laser heating. The expansion and collapse of these bubbles exert pressure and shear forces on the clot, ultimately leading to its breakup. (Image labels note peak pressures of about 250 bar after 20 nanoseconds and about 400 bar after 440 nanoseconds.)

Brain Attack Facts*


New Vision of Stroke Cure

The stroke initiative at Lawrence Livermore hopes to remedy the paucity of tools for diagnosing and treating strokes. Its vision of stroke care includes medical devices for screening people without symptoms for stroke risk. It places special emphasis on the development of tools to provide earlier rather than later diagnosis of stroke type and assessment of brain cell damage so that appropriate treatment can be initiated rapidly. It has guided Livermore researchers in the development of technology to break up stroke-causing clots with laser energy as well as microsensors and microtools to assist in the diagnosis and treatment of various kinds of brain attack. And it looks forward to providing the means for more instances of full recovery, fewer stroke-related disabilities, and less need for chronic care.

—Dean Wheatcraft

Key Words: brain attack, laser "clot-busting," laser–tissue interaction modeling, LATIS code, medical photonics, microsensors, neuroprotectant drugs, shape-memory microgripper, stroke.

References
1. Sheila A. Grant and Robert S. Glass, Sol-Gel-Based Fiber-Optic Sensor for Blood pH Measurement, Lawrence Livermore National Laboratory, Livermore, California, UCRL-JC-125297 (January 1997). (Submitted to Analytica Chimica Acta.)
2. P. Celliers et al., "Dynamics of Laser-Induced Transients Produced by Nanosecond Duration Pulses," Proceedings of the Society of Photo-Optical Instrumentation Engineers, 2671A (1996).
3. R. A. London et al., Laser–Tissue Interaction Modeling with LATIS, Lawrence Livermore National Laboratory, Livermore, California, UCRL-JC-125974 (February 1997). (To be published in Applied Optics.)
4. M. Strauss et al., "Computational Modeling of Laser Thrombolysis for Stroke Treatment," Proceedings of the Society of Photo-Optical Instrumentation Engineers, 2671 (1996).


technology. EndoVasix is investing in the development of a prototype system for clinical demonstrations, beginning with animal stroke models. Some of the preliminary animal tests have already taken place.

Laser–Tissue Interaction

Central to the design of clot-busting tools is the refinement and use of computer codes for modeling laser–tissue interaction. Based on the laser–matter interaction codes developed for inertial confinement fusion at Livermore, the Laboratory's LATIS (laser–tissue) code provides a basis for predicting how short-pulse, low-energy medical lasers affect tissue.3 It thus promotes the rational design of clot-busting devices by modeling laser–tissue interaction during the process. By taking into account a raft of variables—size and composition of the clot, strength of blood-vessel tissue, and buildup and transport of heat during laser clot busting—this code can numerically simulate the hydrodynamics of the laser-created energy needed to break up clots and predict the amount of energy needed to do so without damaging other tissue.

The modeling team has made significant progress in simulating the laser clot-busting process with a focus on short-pulse (1- to 10-nanosecond) interactions of laser light with water and blood clots.4 These advances in LATIS are being used to improve the laser clot-busting technology discussed earlier. They will be refined and expanded to better determine the parameters of the hydrodynamics at the heart of a safe, effective laser clot-busting system.

The medical applications have also had positive "spin-back" to the core programs at the Laboratory. For instance, because the medical applications required simulation of how laser beams interact with highly scattering materials, a bug in one of the Monte Carlo x-ray transport subroutines used in national security applications was discovered and corrected.

Microtools

Miniaturization expertise from Lawrence Livermore's Microtechnology Center, Precision Engineering Group, and Plastic Shop has produced a variety of silicon, metal, and plastic microsensors and actuators for the stroke initiative. One of these with the potential to prevent strokes caused by hemorrhage rather than clots or other blockages is the "shape memory" microgripper (Figure 6). These tiny devices (less than a cubic millimeter) have a variety of applications, but the initial one is for treating aneurysms. Very fine metal thread is placed into the microgripper, which is connected to a guidewire cable and maneuvered to the site of the aneurysm. Closed, it slips into the aneurysm through the narrow neck connecting the aneurysm with the vessel wall. Once inside, the microgripper's heater is activated by power sent through the guidewire tether, and the gripper opens, releasing the metal thread into the aneurysm. The gripper then cools, "remembers" its closed shape, and can be withdrawn through the neck, leaving behind the metal thread. The thread embolizes (acts as a clot in) the aneurysm, reducing blood flow and pressure in the aneurysm. Without the pressure, the aneurysm eventually fills with scar tissue and is significantly less likely to rupture and cause a hemorrhagic stroke.

Figure 6. Microtools such as this silicon microgripper with "shape memory" will be used to treat the cerebral aneurysms that lead to hemorrhagic stroke. The microgripper is less than 1 cubic millimeter, that is, about the size of the head of a straight pin.

The stroke-initiative team at the Laboratory is a multidisciplinary group of scientists and engineers from several directorates—Biology and Biotechnology, Engineering, Laser Programs, Physics and Space Technology, Defense and Nuclear Technologies, and Chemistry and Materials Science. The team's members have combined their expertise in biomedical engineering, biology and bioscience, laser medicine and surgery, micro-engineering, microsensors, and computer simulation to create a variety of tools to respond quickly and urgently to brain attack and thereby improve the chances of a stroke victim's survival and recovery. They are collaborating with academic medical centers and private companies to move these proof-of-principle prototypes as quickly as possible from the research stage to development, clinical trials, regulatory approval, and manufacture so that they can benefit as soon as possible the lives of people who have strokes. Pictured left to right are: ABRAHAM LEE, ROBERT GLASS, WILLIAM BENETT, LUIZ DA SILVA, PATRICK FITCH, RICHARD LONDON, SHEILA GRANT, and STEVEN VISURI. (Not shown are PETER CELLIERS, PETER KRULEVITCH, and DENNIS MATTHEWS.)

For further information contact J. Patrick Fitch at the Lawrence Livermore Center for Healthcare Technologies, (510) 422-3276 ([email protected]).



Science & Technology Review April 1997

global competition, change is costly and investors are fiscally conservative. Investors must be totally assured of good returns on their money."

Making More of Optical-Fiber Bandwidth

The all-optical network used in this demonstration resides in the San Francisco Bay Area and at present consists of four backbone nodes—at Pacific Bell in San Ramon, Sprint in Burlingame, the University of California at Berkeley, and Lawrence Livermore. The nodes, connected by approximately 600 kilometers of fiber, offer access to the network and route the streams of data that pass through. Tributary fibers will link them to other user sites, where currently some 30 advanced applications are being developed and tested (Figure 2).

The high speed and great capacity of the network are based on the inherently large bandwidth of optical fibers. Bandwidth is the expression of a medium's communications capacity. Optical bandwidth offers, of course, the speed of light. But it also offers the whole rainbow of light frequencies. Having this capacity range can be likened to having a musical keyboard of many octaves, which can be used to play far more complex melodies than a keyboard of one octave.

NTON enlarges optical bandwidth capacity even more through a technique called wavelength division multiplexing (WDM), wherein each optical fiber is used to carry more than one wavelength. The various wavelengths do not interfere with each other, so each can be used as a different communication channel. (In the keyboard analogy, this characteristic would be tantamount to simultaneously playing a different song with each available octave.) The use of wavelength division multiplexing increases fiber capacity without the need to install more fiber cable.
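The idea of independent streams sharing one fiber on separate wavelengths can be sketched in a few lines of code. This is a toy model only, with made-up wavelengths and payloads, not a description of NTON's actual channel plan or equipment:

```python
# Toy model of wavelength division multiplexing (WDM): several independent
# data streams share one "fiber" by riding on different wavelengths.

def multiplex(channels):
    """Combine per-wavelength streams into one fiber signal: a list of
    (wavelength_nm, symbol) pulses interleaved in time."""
    fiber = []
    for t in range(max(len(stream) for stream in channels.values())):
        for wavelength, stream in channels.items():
            if t < len(stream):
                fiber.append((wavelength, stream[t]))
    return fiber

def demultiplex(fiber, wavelength):
    """Recover one wavelength's stream, as a tunable filter would."""
    return [symbol for w, symbol in fiber if w == wavelength]

# Three hypothetical channels on one fiber; wavelengths in nanometers.
channels = {1550: list("DATA"), 1551: list("VOICE"), 1552: list("VIDEO")}
signal = multiplex(channels)
assert demultiplex(signal, 1551) == list("VOICE")
```

Because the streams never mix, adding capacity means adding wavelengths, not fiber, which is exactly the appeal of WDM described above.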

The NTON fiber carries four wavelengths at present, but plans are to expand ultimately to eight. The capacity expansion that occurs with WDM requires new devices for regulating the resulting voluminous traffic. One of the new devices used in the network is being developed into a new product by Uniphase Telecommunications Products, a consortium member. It is an acousto-optic tunable filter (AOTF), whose function is to route the multiple wavelengths through the different regions of the network. Made of lithium niobate glass, the four-port filter selectively and simultaneously switches many wavelengths on their way to different destinations. Some other wavelengths are isolated by routing them to network-access equipment that "maps" their signals to a different wavelength. Because those signals are isolated by this blocking, their former wavelengths can be used elsewhere in the network. This wavelength "reuse" makes the system scalable, that is, able to indefinitely increase the volume of information being switched through (Figure 3).

A Flexible, Transparent Network

NTON is intended to be an open network; it must therefore be easily accessible to heterogeneous systems and formats (including future ones such as high-definition television), and users should work at their desktops without any awareness of its operations. In short, the network must be flexible and transparent.

These characteristics are achieved through the use of standards, the rules that enable systems to "talk" to each other. When different systems use different local formats, standards provide them with a common interchange language. Various standards are used in different layers of

Figure 2. Currently, the National Transparent Optical Network consists of four backbone nodes connected by 600 kilometers of fiber that offer access to the network and route the streams of data passing through them. Tributary fibers will link them to other user sites where some 30 advanced applications are being developed and tested.

(Map labels: San Francisco; UC Berkeley; Pacific Bell, San Ramon; Lawrence Livermore National Laboratory; Sprint, Burlingame; Silicon Valley; San Jose. Legend distinguishes Pacific Bell fiber, Sprint fiber, tributaries, nodes, and example application locations.)


the Internet cruiser's ability to download large graphics files. They are envisioning users such as physicians of the future, who will be able to retrieve a host of complex medical records from various remote locations, perform remote telesurgery, or practice space telemedicine.

The Context for Livermore's Work

Lawrence Livermore leads the work on the prototype network, which is to integrate new, developing technologies into a logical and efficient working system. It is a fitting role not only because of the Laboratory's broad expertise in optics and large-scale computing but also because of its neutral perspective on work that ultimately must be commercialized.

Integrating the new technologies into a high-service-quality, high-speed network on which new high-capacity applications can be tested will promote the advanced applications and demonstrate the commercial feasibility of the new technologies. One important goal of NTON is to convince private-sector investors that the new optical components are worthy of commercialization and that the fiber infrastructure should be upgraded. But as Bill Lennon, Lawrence Livermore's project leader from the Advanced Telecommunications Program, points out, "While these innovations are necessary for technological advancement and

Optical Networks: The Wave of the Future

When our ancestors built fires on distant hilltops to signal to one another, they were using an early form of optical communication. This idea of using light to send information began to be developed scientifically in the 1800s, when British physicist John Tyndall demonstrated that he could (although just barely) direct light down a stream of water. He found that light could be guided by transparent materials if those materials were denser than air, and his insight, when followed by other scientific inquiry, culminated in fiber-optic technology.

Fiber-optic technology takes electric signals from our phones, computers, and televisions and transmits them more efficiently than other methods, making it possible to deal with the volume and variety of communications that constitute modern life. The information-carrying capacity of fiber optics is so great that it is far from fully exploited. It is being counted on to help solve problems such as the traffic bottleneck on the Internet.

The members of the National Transparent Optical Network (NTON) consortium are among those counting on fiber optics. The consortium (Figure 1) has received matching funds from the Defense Advanced Research Projects Agency to test and demonstrate advanced optical components in a high-speed, all-optical communication network. The network is based on existing Sprint and Pacific Bell fiber-optic lines and has been operational since February 1996. Currently, it is being tested by users of large, emerging applications, and the consortium is actively soliciting the interest of other such test users.

The project's two components—next-generation optical technologies and the emerging applications used to test these technologies—are bound into one ambitious objective: to provide a transmission capability for a multitude of complex, advanced uses, at speeds of billions of bits per second, with complete security and reliability. NTON members are thinking beyond the needs of


Figure 1. The National Transparent Optical Network is a consortium joining Lawrence Livermore with private-sector firms and institutions of higher education.

Technical Contributors: Nortel (lead), Hughes Research Laboratories, Uniphase Telecommunications Products, Rockwell Science Center, Case Western Reserve University, Columbia University

Prototype Network Development: Lawrence Livermore National Laboratory (lead), Pacific Bell, Sprint

Consortium Lead: Nortel



the network architecture to provide a hierarchy for signal transmission. The hierarchical process may be compared to having sheets of paper packaged into envelopes and delivered to an envelope handler who repacks them into boxes of envelopes, which are delivered to a box handler to turn into boxes of envelopes inside trucks, and so on through the delivery sequence until the packages arrive at their destination, where the reverse process yields the sheets of paper to the addressee.

NTON uses two standards developed specifically for advanced networks. First, signals from various user formats such as video, data, and voice are fed into the network and converted into a standardized common format by means of the Asynchronous Transfer Mode (ATM) standard. ATM not only makes the signals insensitive to transmission format, it also assigns transmission space and priority according to the needs of the terminals, thereby making best use of network capacity and efficiency. After ATM, the signals must undergo another conversion to package them for optical-fiber transmission. This packaging is the function of the Synchronous Optical Network (SONET) standard.
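As a rough illustration of the ATM step, a byte stream of any user format is segmented into fixed-size cells; an ATM cell carries a 48-byte payload behind a 5-byte header. The sketch below models only that fixed 53-byte cell size, with a placeholder in place of the real ATM header fields:

```python
# Segment an arbitrary byte stream into fixed-size, ATM-style cells.
# Real ATM headers carry addressing and priority fields; here the
# header is a zero-filled stand-in, so the 53-byte cell size is the
# only ATM detail actually modeled.
CELL_PAYLOAD = 48        # bytes of user data per ATM cell
HEADER = b"\x00" * 5     # stand-in for the 5-byte ATM header

def to_cells(stream: bytes):
    cells = []
    for i in range(0, len(stream), CELL_PAYLOAD):
        payload = stream[i:i + CELL_PAYLOAD]
        payload += b"\x00" * (CELL_PAYLOAD - len(payload))  # pad last cell
        cells.append(HEADER + payload)
    return cells

cells = to_cells(b"x" * 100)              # 100 bytes -> 3 cells
assert all(len(c) == 53 for c in cells)
```

Because every cell is the same size regardless of whether the source was video, data, or voice, the network can schedule and switch them uniformly, which is the format insensitivity the paragraph describes.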

SONET is particularly efficient. It keeps a signal and its management information together, and it synchronizes signals to a common clock to simplify handoff between networks worldwide. These features make the signals easily and quickly extractable for distribution or routing. The SONET signals are the ones that are transmitted over one of the switchable wavelengths of the optical layer.

Demonstration Applications

The applications being tested on the network run the gamut from accessing digital libraries to accessing offshore geophysical data via satellite, from on-line collaborations on manufacturing design to remote processing or visualization of radiological records, angiogram analyses, motion rehabilitation therapies, and tomography images.

Recently, SRI's TerraVision, a three-dimensional terrain visualization program that runs on a high-performance graphics workstation, used the network to access multiple remote data servers; obtain real-time, high-quality terrain and battlefield data from these various locations; and transmit them as a computer visualization to another remote site. The visualization was a helicopter pilot's roving-eye view of terrain in a military installation.

Another demonstration of the network involved an advanced simulation of magnetic fusion plasma turbulence, which was run in real time on a Cray supercomputer at Livermore and displayed on a high-performance graphics terminal at a conference booth in San Jose. The test illustrated that with high bandwidth, remote visualization of supercomputing simulations was possible.

More of these futuristic applications are on the way, and the work of NTON aims at making them happen sooner rather than later.

—Gloria Wilt

Key Words: acousto-optic tunable filter (AOTF), Asynchronous Transfer Mode (ATM), fiber optics, National Transparent Optical Network (NTON), remote visualization, standards, Synchronous Optical Network (SONET), wavelength division multiplexing (WDM).

For further information contact William Lennon (510) 422-1091 ([email protected]).

Figure 3. Uniphase Telecommunications, an NTON consortium participant, is developing an acousto-optic tunable filter (at right) whose function is illustrated above. Its ability to switch or block wavelengths enables their reuse, and thus the volume of information being switched through the system can increase indefinitely.


A dime-sized amplifier makes fiber-optic communications faster and clearer. A portable DNA analyzer helps detect and identify organisms in the field, including human remains and biological warfare agents. A tiny gripper inserted in a blood vessel treats aneurysms in the brain to ward off potential strokes. What do these technologies have in common? Each one is smaller than any comparable product, opening up a host of new applications. And each originated in Lawrence Livermore National Laboratory's Microtechnology Center.

In the late 1960s, Livermore scientists and engineers began making miniature devices for high-speed diagnostic equipment required for nuclear tests. For many years, before the development of Silicon Valley and the ready availability of microchips for a broad array of uses, Laboratory engineers fabricated chips to their own specifications for high-speed switches, high-speed integrated circuits, and radiation detectors. By the early 1980s, Livermore was fabricating thin-film membranes for use as x-ray windows in low-energy x-ray experiments, as x-ray filters, as debris shields for the Extreme Ultraviolet Lithography Program, and as targets for high-energy electron experiments in which x rays are generated.

These passive microstructures have been applied to dozens of projects. They have served as diagnostic devices for Livermore's Nova laser experiments and will do the same for experiments at the National Ignition Facility (NIF). Another microstructure, a novel thin-film window developed by Glenn Meyer and Dino Ciarlo, plays a critical role in a new, more efficient electron-beam system for


Science & Technology Review July/August 1997

From thin-film windows to microactuators to photonic devices—the Center contributes to stockpile stewardship, bioscience, and nonproliferation projects at Livermore.

The Microtechnology Center
When Smaller Is Better


microfabrication techniques including molecular beam epitaxy to grow device wafers and chemically assisted ion-beam etching to make the device structures. The team won a 1996 R&D 100 Award for their new device, which can be used not only for computer interconnections but also in wide-area networks and for transmitting visual images.

They then replaced the standard gain medium inside the waveguide of the amplifier with a tiny vertical-cavity surface-emitting laser and took advantage of some basic laser properties to reduce crosstalk by a factor of 10,000. The photons' stimulated emission in the gain medium when lasing occurs acts as

are an ideal medium provided that the optical signals are sufficiently amplified so that there are enough photons to go around for the many receiver nodes. Existing amplifiers had problems: erbium-doped fiber amplifiers are bulky and expensive, and conventional semiconductor optical amplifiers produce too much crosstalk at transmission rates above 1 gigabit per second.

To solve this problem, Sol DiJaili, Frank Patterson, Jeff Walker, Robert Deri, William Goward, and Holly Peterson developed a miniature, low-cost semiconductor optical amplifier (Figure 3). They applied state-of-the-art

Test Site in 1989. This system made available very high-resolution data that conventional measuring techniques could not deliver. In 1991, Livermore was awarded the DOE Weapons Excellence Award for this work.

A photonic network is typically made up of optical fibers, waveguides, amplifiers, receivers, wavelength selection elements, and modulators, sometimes all on a single chip. One of the Laboratory's contributions to the photonics field has been its novel application of silicon micromachining capabilities, which have been critical to packaging photonic components in a cost-effective manner. In 1994, Mike Pocha, Dan Nelson, and Ted Strand won an R&D 100 Award for the development of a silicon "microbench" that reduces the time needed to align and connect the optical fiber in photonic components. Because of the submicrometer alignment tolerances, the standard manual process was extremely time consuming and therefore expensive. The team's technique (Figure 2) provides just enough heat to melt the microdrops of solder needed to make the connection, allowing a rapid manual alignment and connection of the fiber to a laser diode or a lithium niobate modulator in less than five minutes and reducing the cost for this work by 90%.

Photonic devices are finding their way into two different parts of the Laboratory's science-based stockpile stewardship program. One is ultrascale computing, which soon will be used for simulations of nuclear tests; the other is diagnostics for NIF.

Computing on a large scale requires the cooperative action of thousands of microprocessors sharing tremendous volumes of data. This data sharing demands a communication "fabric" of very high bandwidth and low latency (short time delay) to enable the microprocessors to function without waiting for data. Optical fibers

Figure 2. With the silicon microbench, two polysilicon heating elements and gold solder attachment bases provide the means for attaching the optical fiber. While the fiber is held in position, current is passed through the heater to reflow the solder, which wicks around the metalized fiber without disturbing the alignment. This new method avoids thermal shifts and simplifies the alignment process.

processing inks, adhesives, and coatings. Laboratory scientists, led by Booth Myers and Hao-Lin Chen, teamed with American International Technologies Inc. of Torrance, California, on this project and won an R&D 100 Award in 1995. Conventional electron-beam processing systems are inefficient, delivering about 5% of the beam's energy to the polymer being cured. With this new window, efficiencies greater than 75% were achieved. The team also recently won a 1997 Federal Laboratory Consortium Award for Excellence in Technology Transfer.

In the mid-1980s, Livermore began combining micro-optical devices with microelectronics for extremely high-speed, fiber-optic data transmission. Photonic devices have since found their way into many microtechnologies that incorporate optical fibers for transmission of laser light.

Livermore stopped fabricating silicon-based electronic circuits when commercial microchips became available in almost every configuration imaginable. But invention by no means stopped. Today, the Microtechnology Center, now headed by physical chemist and engineer Ray Mariella, invents and applies microfabricated components,

including photonic devices, microstructures, and microinstruments, to directly support Laboratory projects in science-based stockpile stewardship, nonproliferation, and biomedical research. At any given moment, the Center has about 25 projects in the works. The Center's major recent and ongoing projects are highlighted here.

The Center's state-of-the-art fabrication facilities are centered in a building whose location was selected because the area had the smallest vibrations within the Laboratory site, permitting the high-resolution microlithography that the Center performs. Microdevices can be fabricated there in any of three material systems:
• Silicon and silicon compounds for microstructures and microelectromechanical systems applications.
• Gallium arsenide for photonics applications.
• Lithium niobate for electro-optic applications, such as phase and amplitude modulators.

The Center has the equipment and infrastructure needed for lithography, etching, diffusion, wafer bonding, and thin-film deposition and vacuum techniques. Its dry laboratories are used

for surface inspections, packaging, and electrical and optical device testing. Groundbreaking recently took place for an addition that will increase clean-room space by 65%. The backbone of the Center is an interdisciplinary group of about 50 electronics, mechanical, chemical, and biomedical engineers, physicists, and technical support personnel (Figure 1). According to Mariella, "Ideas, technologies, and capabilities are shared at frequent brainstorming sessions, so staff can find solutions to programmatic problems quickly."

Putting Light to Work

Photonics work at Livermore got its start from the need to obtain remote, highly accurate measurements at nuclear weapons tests. Photonic systems—which manipulate and exploit light for control, communication, sensing, and information display—were the ideal solution because signals can travel on them for long distances at the speed of light with very little power loss. After several years of development, Livermore successfully fielded its first photonic system for measurement of ionizing radiation from a nuclear weapon at the Nevada

Figure 1. Most members of the Microtechnology Center staff.

Figure 3. This dime-sized semiconductor optical amplifier reduces crosstalk and noise in fiber-optic communications.


material, forms a seal between the silicon chip and the block. The front edge of the silicon chip remains exposed to allow fluid to freely exit through an edge port into the optical detection system.

A Laboratory team headed by M. Allen Northrup has developed a portable DNA analyzer that is small enough to fit in a briefcase. It is also fast (Figure 6). This unit, the world's only battery-powered DNA analyzer, moves analysis out of the laboratory for the first time. Folta and Benett

developed a disposable polypropylene liner for the analyzer's tiny heated chamber where the polymerase chain reaction occurs. The liner facilitates rapid reuse of the chamber and eliminates tedious cleaning and possible contamination. The entire chamber is a tiny chip of silicon. As the reaction progresses, the team uses a fluorescent signal to analyze the DNA to determine whether it matches that of a particular subject.
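The polymerase chain reaction at the heart of the analyzer roughly doubles the amount of target DNA each thermal cycle, which is why a fluorescent signal crossing a detection threshold can indicate a match. A minimal sketch of that exponential growth, with an illustrative threshold (real PCR efficiency is below a perfect doubling):

```python
# PCR roughly doubles the target DNA each thermal cycle (idealized:
# real per-cycle efficiency is below 100%). Threshold is illustrative.
def cycles_to_threshold(start_copies: int, threshold: float) -> int:
    """Number of doubling cycles until the copy count crosses threshold."""
    copies, cycles = start_copies, 0
    while copies < threshold:
        copies *= 2
        cycles += 1
    return cycles

# A sample with more starting DNA crosses the threshold in fewer cycles,
# which is the basis for fluorescence-threshold detection.
assert cycles_to_threshold(1000, 1e9) < cycles_to_threshold(10, 1e9)
```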

LLNL has delivered one of theseunits to the Department of Defense

Figure 5. (a) Schematic and (b) demonstration model of the new flow cytometer. As a cell passes through the laser beam on the left, the cell simultaneously scatters (reflects) the laser light into the forward-light-scatter and the right-angle-scatter detectors. The scatter pattern reveals the cell's size and internal structure.

[Figure 5 labels: diode laser, microlens, silicon support structure, liquid flow with cells, forward-light-scatter detector, tapered fiber optic, scattered-light signal guided by liquid fiber optic, liquid collector, scattered-light signal out, liquid out.]

a clamp on the signal gain, eliminating the fluctuation. Signal channels at multiple optical wavelengths can pass through the waveguide with virtually no crosstalk among these channels. The lasing action also speeds recovery time of signals through the waveguide, from a billionth of a second to about 20 trillionths of a second. Thus, the amplifier can successfully track the amplification of a serial bit stream at very high bit rates.
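The recovery-time figures quoted above imply roughly how much faster a bit stream the lasing amplifier can track. A back-of-the-envelope check, treating the attainable bit rate as simply the reciprocal of the recovery time (a deliberate simplification):

```python
# Back-of-the-envelope comparison of amplifier recovery times.
# Treating attainable bit rate as ~1/recovery_time is a simplification.
conventional = 1e-9   # one billionth of a second (1 ns)
lasing = 20e-12       # about 20 trillionths of a second (20 ps)

speedup = conventional / lasing          # 50x faster recovery
implied_gbit = 1 / lasing / 1e9          # ~50 Gbit/s implied ceiling

print(f"recovery is ~{speedup:.0f}x faster")
print(f"implied serial rate ceiling: ~{implied_gbit:.0f} Gbit/s")
```

By the same reciprocal rule, the 1-ns conventional recovery corresponds to the roughly 1-gigabit-per-second crosstalk limit mentioned earlier in the article.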

The Microtechnology Center is also applying photonics technology at NIF. Because NIF's 192 laser beams will be aimed at such small targets (about the size of mustard seeds), NIF will need much faster radiation diagnostics than those used at the Nevada Test Site. Mike Pocha and Howard Lee are developing photonic radiation sensors that will modulate an optical beam in response to an ionizing radiation input and then record it using single-shot optical samplers having a response time of 100 femtoseconds (quadrillionths of a second). Pocha and Lee are investigating the use of waveguides made of nonlinear optical material to perform the extremely high-speed signal gating required to sample at these

high data rates. (A nonlinear optical material is one whose index of refraction can be changed by the introduction of another light beam.) These nonlinear gates will be capable of switching sequential slices of the radiation-modulated optical signal into an array of relatively slow-speed optical detectors. A material that may have the right nonlinear properties is fullerene (C70), whose discoverers recently won the Nobel Prize in Chemistry. (Fullerene is a van der Waals crystal with molecules shaped like Buckminster Fuller's geodesic domes, hence its name.) The world's first fullerene waveguide array, which is still undergoing development, is shown in Figure 4. Work continues on this project so that a fully functional system will be on line in time for the testing of NIF, which is scheduled for 2002.
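The gating scheme described above amounts to a fast demultiplexer: successive time slices of one ultrafast signal are steered round-robin into an array of detectors, so each detector only needs to respond at the much lower per-detector rate. A schematic sketch with made-up slice and detector counts:

```python
# Schematic of time-slice demultiplexing: a fast serial signal is
# dealt round-robin to an array of slower detectors. The slice and
# detector counts are illustrative, not from the article.
def demultiplex(samples, n_detectors):
    """Distribute sequential samples across n_detectors detectors."""
    detectors = [[] for _ in range(n_detectors)]
    for i, sample in enumerate(samples):
        detectors[i % n_detectors].append(sample)
    return detectors

signal = list(range(12))           # 12 sequential 100-fs slices
banks = demultiplex(signal, 4)     # each detector sees every 4th slice
assert banks[0] == [0, 4, 8]
```

With 4 detectors, each bank handles one quarter of the serial rate, which is the whole point of fanning the gated slices out to "relatively slow-speed" detectors.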

Analyses in Miniature

Analyzing DNA, testing for HIV, and identifying pathogens and poisons used in biological and chemical warfare all require sampling a range of products. Supporting the Laboratory's bioscience research program and its nonproliferation efforts, the Microtechnology Center has developed several cutting-edge microdevices that facilitate biological and chemical sampling and analysis in the field, allowing real-time detection. Because some samples cannot survive transport from the field to a remote laboratory, field analysis is often the only solution.

Figure 4. This fullerene waveguide array will be part of a photonic radiation sensor system at the Laboratory's National Ignition Facility. (The waveguide was photographed next to a push pin to indicate scale; the waveguide on p. 11 is between two X-acto knife blades.)

Biological and chemical sampling with micro-instruments offers other advantages as well. Smaller instruments have lower power requirements. Highly integrated and automated sample handling systems usually result in improved productivity and less sample contamination. Also, because analytical diagnostic procedures sometimes produce hazardous wastes, smaller systems mean less waste. However, extremely small-volume chambers require greater sensitivity in order to identify the extremely small trace samples.

A team led by Ray Mariella has patented a new system that eases alignment and increases the accuracy of flow cytometry. Flow cytometry is a powerful diagnostic tool used to characterize and categorize biological cells and/or their contents, such as DNA. It is used by laboratories throughout the world for blood typing and for testing for a wide variety of diseases and viruses, including HIV. The cells flow in single file in solution while the experimenter directs one or more beams of laser light at them and observes the scattered light, which is caused by variations in the cells or DNA. Instead of using a microscope lens or an externally positioned optical fiber as a detector, Mariella's system uses the flow stream itself as a waveguide for the laser light, capturing the light and transmitting it to an optical detector. Alignment simply requires lining up the light source onto the flow stream and placing the detector into the same stream (Figure 5). With this system, measurements are up to three times as accurate as those taken with conventional systems.

Figure 6. The Microtechnology Center's DNA analyzer and computer system fit in a briefcase. The polymerase chain reaction chamber and related analysis equipment are on the right side of the case.

At international joint field trials last fall at Dugway, Utah, the new flow cytometer performed extremely well, detecting simulated biological warfare agents. Participants from the U.S., the U.K., Canada, and France used a variety of instruments to detect four simulants. The Livermore flow cytometer detected 87% of all the unknowns with a false positive rate of just 0.4%.
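To see what those two rates mean in practice, here is a quick calculation on a hypothetical trial of the same shape; the actual Dugway sample counts are not given in the article, so both totals below are assumptions:

```python
# Hypothetical trial sizes -- the article gives only the rates (87%
# detection, 0.4% false positives), not the underlying counts.
agent_releases = 200    # assumed number of true simulant releases
clean_samples = 1000    # assumed number of agent-free samples

detection_rate = 0.87
false_positive_rate = 0.004

detected = detection_rate * agent_releases          # 174 detections
false_alarms = false_positive_rate * clean_samples  # 4 false alarms

print(f"detected {detected:.0f} of {agent_releases} releases")
print(f"{false_alarms:.0f} false alarms in {clean_samples} clean samples")
```

The asymmetry is the point: at these rates, missed detections would outnumber false alarms many times over, so the instrument rarely cries wolf.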

Dino Ciarlo, Jim Folta, and William Benett developed a miniature sample injector that can be used in Mariella's flow cytometer. Flow channels are formed by etching three silicon wafers and bonding them into a single chip. Fluidic connections are made to this injector chip via a plastic block. A thin gasket, laser-cut from an elastomeric


RAY MARIELLA, JR., is head of the Microtechnology Center. For the last five years, he has been a team leader for bioinstrumentation and the thrust area leader for microtechnology. Mariella joined Lawrence Livermore in 1987 as a project engineer in the Electronics Engineering Department's Engineering Research Division to establish a capability in molecular beam epitaxy. He received his B.S. in mathematics, chemistry, and chemical engineering from Rice University in 1969 and his M.A. and Ph.D. in physical chemistry from Harvard University in 1970 and 1973. Before coming to Livermore, he worked at the Allied Signal research facility in Morristown, New Jersey. He has published more than 30 articles and holds 5 patents.


find that commercial products do not meet their needs. They turn to the experts at the Microtechnology Center, whose creations are often what enable a Laboratory experiment or diagnostic tool to function successfully. Integrated microdevices are thus finding their way into increasing numbers of Laboratory projects.

— Katie Walter

Key Words: bioanalysis, DNA analysis, DNA sequencing, flow cytometry, gas chromatography, microactuators, microdevices, microstructures, photonics, polymerase chain reaction, semiconductors, shape-memory alloys, thin films.

environments, such as high or low pressures and hazardous fluids.

A key to the microgripper's effectiveness is a thin-film microactuator that is fabricated from a shape-memory alloy (SMA). At low temperatures, SMAs are easily deformed, but when heated, they recover their original shape. This reversible transformation forms the basis for shape-memory actuators, in which a biasing force, produced by a spring, for example, deforms the SMA element at low temperature, and the SMA element overcomes the bias when heated. For the microgripper, the team developed a sputter-deposited shape-memory actuator of nickel–titanium–copper, with a transformation temperature just above body temperature. The microgripper is inserted into a blood vessel in the closed (deformed) position. Through a thin wire connected to the microgripper, an electrical current of 0.1 milliamp activates the actuator, deflecting each arm up to 55 micrometers and returning the gripper to its undeformed (open) position. As it cools, the gripper will close again. (A patent was recently issued for another microgripper made of plastic with a balloon actuating system. It is briefly described on p. 23.)
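A shape-memory actuator of this kind behaves, to first order, like a temperature threshold: below the transformation temperature the bias force holds the deformed (closed) shape, and above it the alloy recovers its trained (open) shape. A toy model of that switching; the 38 C transformation temperature is an assumption standing in for "just above body temperature," and the deflection value is the one quoted above:

```python
# Toy model of the SMA microgripper's thermal switching. The 38 C
# transformation temperature is an assumed stand-in for "just above
# body temperature"; the 55-um deflection is from the article.
T_TRANSFORM_C = 38.0       # assumed transformation temperature
MAX_DEFLECTION_UM = 55.0   # per-arm deflection when fully open

def gripper_state(temp_c: float) -> str:
    """Below the transformation temperature the bias spring wins
    (gripper closed); above it the SMA recovers its open shape."""
    return "open" if temp_c >= T_TRANSFORM_C else "closed"

assert gripper_state(37.0) == "closed"  # at body temperature: closed
assert gripper_state(40.0) == "open"    # heated by the drive current
```

Real SMA transformations have hysteresis (the opening and closing temperatures differ slightly), which this step-function model deliberately ignores.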

Lee, Julie Hamilton, and Jimmy Trevino have also built a low-leakage, high-efficiency microvalve (Figure 9). Effective microvalves are an important link in creating miniature total analysis systems that can be used for drug delivery and bioanalytical instrumentation. Nonmedical applications for the microvalve include fluid injection analysis, chemical processing and analysis, and atmospheric and temperature control equipment. In this design, an electrode is sandwiched between two polyimide films with different coefficients of thermal expansion. (Polyimide is a

flexible plastic material.) Delivery of less than 1 milliwatt of power causes the "cantilever" to clamp down, sealing an etched hole beneath it.

And the Work Goes On

Two relatively new areas of expertise for the Microtechnology Center are treaty verification and counterproliferation, which require low-cost, efficient, autonomous processing of large numbers of chemical and biological samples. Integrated microdevices are critical to the success of these new fields.

Microtechnologies and microdevices have never been an end in themselves at Livermore. Rather, they are problem solvers. As Lawrence Livermore researchers search for solutions to mission-specific challenges, they often

Figure 8. The microgripper, the size of two grains of salt, is the first in a series of new surgical microtools being developed by the Laboratory's Center for Healthcare Technologies.

Figure 9. The 200-micrometer-long microvalve is shown in the open position, revealing an etched hole.

Armed Forces Institute of Pathology, which is using it to quickly identify human remains in the field, test food and water for contamination in remote locations, and identify pathogenic bacteria on the battlefield.

The Microtechnology Center is also supporting the Laboratory's DNA sequencing work for the Human Genome Project. As described in the research highlight beginning on p. 18 of this issue, the Center has developed etched and bonded microchannel glass plates to speed up the sequencing process. A patent is pending on the new bonding process.

Conrad Yu of the Center is participating in work on a miniature, portable, low-power gas chromatograph to support the Laboratory's program in nonproliferation to counter the spread of chemical weapons. Gas chromatography is a proven method for identifying liquid or gas species, with detection sensitivities as high as parts per billion. Conventional gas chromatographs, however, are several

cubic feet in size and typically take about 20 minutes to analyze a gas sample. A mini unit works faster, often requiring just one minute to complete an analysis, and would be very useful to carry into an area where chemical weapons or other poisonous gases are suspected to have been used. Someday this unit could also be used at home for sniffing out radon gas.

Yu has developed a micromachined, silicon sample injector about the size of a little fingernail. He has also reduced the size of the chromatograph's column, where the various elements in the sample are separated before being directed to the detector where they are identified. The column has been reduced from 1.6 liters (100 cubic inches) for a laboratory-sized unit to a coil etched on a silicon wafer. A circular column 100 micrometers wide and several meters long is etched on two silicon wafers that are bonded together. The entire instrument occupies about 0.16 liter (10 cubic inches) (Figure 7).
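A rough check on how little volume the etched column itself occupies. The channel depth and exact length are not given in the article, so a square cross section (depth equal to the 100-micrometer width) and a 5-meter length are assumed here:

```python
# Rough volume of the etched chromatograph column. The 100-um width is
# from the article; the square cross section (depth = width) and the
# 5-m length ("several meters") are assumptions for this estimate.
width_m = 100e-6    # channel width: 100 micrometers
depth_m = 100e-6    # assumed depth, equal to the width
length_m = 5.0      # assumed length

volume_m3 = width_m * depth_m * length_m
volume_ul = volume_m3 * 1e9   # 1 cubic meter = 1e9 microliters

print(f"etched column volume ~ {volume_ul:.0f} microliters")
```

Under these assumptions the column holds only about 50 microliters, which underlines that the 0.16-liter figure for the whole mini instrument is dominated by the injector, detector, and packaging rather than the column itself.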

Microtools for Better Health

A new surgical tool for treating aneurysms, the silicon microgripper is about the size of two salt grains—1 by 0.2 by 0.4 millimeters. With guidance from researchers at the University of California, San Francisco, Abraham Lee, M. Allen Northrup, and Peter Krulevitch developed this microelectromechanical device. As shown in Figure 8, the microgripper is like a tiny hand that surgeons can use to place clot-inducing agents to fill an aneurysm. A surgeon may also use it to perform minimally invasive in vivo biopsy or catheter-based endovascular therapy. Nonmedical uses include assembling small parts for manufacturing and remote handling of small particles in extreme

Figure 7. The column for the miniature gas chromatograph has been reduced to two silicon wafers bonded together. Here, one wafer is shown with its coiled groove 100 micrometers wide and several meters long.

About the Scientist

For additional information contact Ray Mariella (510) 423-3610 ([email protected]).
