
Oakland Domain Awareness Center HP NFZO Violation Exhibits



The Purchase, Installation and Use of Hewlett-Packard Computer Hardware within the City of Oakland and Port of Oakland Joint Domain Awareness Center Violates “Oakland Nuclear Free Zone Act”

January 21, 2014

by Joshua Smith, [email protected]


TABLE OF CONTENTS (EXHIBITS)

EXHIBIT A (01-03) [City of ] “Oakland Nuclear Free Zone Act” (1992)

EXHIBIT B (01-02) Hewlett-Packard / Atomic Weapons Establishment (UK)

EXHIBIT C (01-04) Hewlett-Packard / Los Alamos National Laboratory (US)

EXHIBIT D (01-02) County of Marin, CA Peace Conversion Commission: Nuclear Weapons Contractors

EXHIBIT E (01-18) DAC Specific HP Equipment Examples & Invoices

On November 19, 2013 the City Council of Oakland, California broke ties with U.S. Defense and Intelligence contractor Science Applications International Corporation (SAIC), which was contracted as the City of Oakland and Port of Oakland Joint Domain Awareness Center (DAC) Phase 1 Vendor. On this date the City Council publicly and openly recognized that SAIC was indeed a corporation involved in systems and components directly related to Nuclear Weapons. The City instructed that the DAC Phase 2 contract be reopened and supplemental RFPs be sent to a recent and existing pool of interested vendors.

The grounds leading to dismissal of SAIC are founded in adherence to the 1992 [City of] “Oakland Nuclear Free Zone Act” (aka NFZO). The City has now established adherence to this municipal ordinance and established legal precedent.

DAC Phase 1 vendor SAIC was not only in violation of the NFZO as a service vendor; SAIC also coordinated and facilitated the installation of computer hardware from Hewlett-Packard (HP), another corporation involved in systems and components directly related to Nuclear Weapons. The purchase, installation and use of HP computer hardware [now existing within the DAC] is in violation of the NFZO and must be removed immediately. This report displays the elements supporting the grounds upon which HP is proven to be involved in Nuclear Weapons and provides a detailed record of the specific HP hardware.


EXHIBIT A (01) – 1992 "Oakland Nuclear Free Zone Act"

http://www2.oaklandnet.com/oakca1/groups/contracting/documents/webcontent/oak042285.pdf

Page 01 of 10


Page 02 of 10

EXHIBIT A (02) – 1992 "Oakland Nuclear Free Zone Act"

http://www2.oaklandnet.com/oakca1/groups/contracting/documents/webcontent/oak042285.pdf


EXHIBIT A (03) – 1992 "Oakland Nuclear Free Zone Act"

WIKIPEDIA / NUCLEAR-FREE ZONE

On November 8, 1988 the city of Oakland, California passed "Measure T" with 57% of the vote, making that city a nuclear free zone. Under Ordinance No. 11062 CMS, then passed on December 6, 1988, the city is restricted from doing business with "any entity knowingly engaged in nuclear weapons work and any of its agents, subsidiaries or affiliates which are engaged in nuclear weapons work." The measure was invalidated in federal court, on the grounds that it interfered with the Federal Government's constitutional authority over national defense and atomic energy. The issue being that Oakland is a major port and, like Berkeley and Davis, has major freeway and train arteries running through it. In 1992, the Oakland City Council unanimously reinstated modified elements of the older ordinance, reportedly bringing the total number of Nuclear Free Zones in the United States at that time to 188, with a total population of over 17 million in 27 states.

http://en.wikipedia.org/wiki/Nuclear-free_zone

Pertinent follow-up issues:

Page 09 of 10 (partial)

http://www2.oaklandnet.com/oakca1/groups/contracting/documents/webcontent/oak042285.pdf


EXHIBIT B (01) – Hewlett-Packard (HP) Nuclear Weapons

AWE Selects HP to Help Transform its Technology Infrastructure
UK defence contractor signs $66 million infrastructure services agreement

LONDON, UK, June 10, 2010 – HP Enterprise Services today announced that UK-based AWE plc has signed a 10-year, $66 million services agreement that will enable the Atomic Weapons Establishment (AWE) to enhance user productivity and service levels.

AWE makes and maintains warheads for the UK’s nuclear deterrent. It has done so for more than 50 years, serving the country safely and securely.

"It is important for our staff to have access to the best technology and services to complete their jobs with speed and efficiency," said David Maitland, chief information officer for AWE. "The HP team consistently delivers quality services with a collaborative and innovative approach, which will help us achieve best value for money."

With this agreement, HP will provide workplace services to manage and support all AWE end users across its sites. Workplace services include asset management, license management and procurement of computing devices. These services, together with service desk and site support services, will be delivered to over 7,000 end users including employees, integrated personnel and task-based contractors. Due to the stringent security measures, the services are delivered by an HP team based on site. In addition, the contract includes an ongoing PC refresh programme.

The agreement renews and extends an existing workplace services contract by deploying many of HP’s software innovations to deliver consistently against AWE’s demanding service level requirements. During the next phase of the contract many of HP’s Business Technology Optimization software products will be installed as part of a comprehensive end-to-end service management programme. This is designed to reduce outage times and provide proactive service management, and therefore improve overall service availability and the overall IT experience for the end users.

"It is critical that AWE has an environment that enables fast and secure access to its systems and data and the ability for employees to collaborate at all levels," said Craig Wilson, vice president, HP Enterprise Services, UK & Ireland. "We will continue to apply proven processes using our deep industry and technology knowledge to deliver services that will allow AWE to meet its cost and customer service goals."


EXHIBIT B (02) – Hewlett-Packard (HP) Nuclear Weapons

About AWE
AWE has played a crucial role in the defence of the United Kingdom for over 50 years. It provides warheads for Trident, a submarine-launched ballistic missile system and the country’s nuclear deterrent. AWE’s work covers the entire lifecycle of nuclear warheads – from initial research and design, through to development, manufacture and maintenance, and finally decommissioning and disposal. Visit www.awe.co.uk for more information.

About HP
HP creates new possibilities for technology to have a meaningful impact on people, businesses, governments and society. The world’s largest technology company, HP brings together a portfolio that spans printing, personal computing, software, services and IT infrastructure to solve customer problems. More information about HP (NYSE: HPQ) is available at http://www.hp.com/.

http://www8.hp.com/uk/en/hp-news/press-release.html?id=535795

Atomic Weapons Establishment Website (http://www.awe.co.uk)
AWE has been central to the defence of the United Kingdom for more than 60 years. We provide and maintain the warheads for the country’s nuclear deterrent, Trident.

http://www.awe.co.uk/aboutus/what_we_do_27815.html

Atomic Weapons Establishment Outsources Tech Transformation
The Atomic Weapons Establishment (AWE) is outsourcing technology services to HP Enterprise Services for another 10 years.

The defence contractor will spend $66m with HP over the next decade to improve its IT infrastructure and related services. It already had an agreement with HP, and the renewal will see the manufacturer and maintainer of nuclear warheads receive hardware, software and services from HP.

HP will provide workplace services to manage and support all of AWE's 7,000 users across its sites. These services will include asset management, licence management and procurement of computing devices.

The contract, which also includes a PC refresh programme, will see HP software rolled out.

http://www.computerweekly.com/news/1280092989/Atomic-Weapons-Establishment-outsources-technology-transformation


EXHIBIT C (01) – Hewlett-Packard (HP) Nuclear Weapons

Los Alamos National Laboratory (or LANL; previously known at various times as Project Y, Los Alamos Laboratory, and Los Alamos Scientific Laboratory) is one of two laboratories in the United States where classified work towards the design of nuclear weapons is undertaken. The other, since 1952, is Lawrence Livermore National Laboratory. LANL is a United States Department of Energy (DOE) national laboratory, managed and operated by Los Alamos National Security (LANS), located in Los Alamos, New Mexico. The laboratory is one of the largest science and technology institutions in the world. It conducts multidisciplinary research in fields such as national security, space exploration, renewable energy, medicine, nanotechnology, and supercomputing.

http://en.wikipedia.org/wiki/Los_Alamos_National_Laboratory

Hewlett-Packard / Los Alamos National Laboratory


Every year for the past 17 years, the director of Los Alamos National Laboratory has had a legally required task: write a letter—a personal assessment of Los Alamos–designed warheads and bombs in the U.S. nuclear stockpile. This letter is sent to the secretaries of Energy and Defense and to the Nuclear Weapons Council. Through them the letter goes to the president of the United States.

The technical basis for the director’s assessment comes from the Laboratory’s ongoing execution of the nation’s Stockpile Stewardship Program; Los Alamos’ mission is to study its portion of the aging stockpile, find any problems, and address them. And for the past 17 years, the director’s letter has said, in effect, that any problems that have arisen in Los Alamos weapons are being addressed and resolved without the need for full-scale underground nuclear testing.

When it comes to the Laboratory’s work on the annual assessment, the director’s letter is just the tip of the iceberg. The director composes the letter with the expert advice of the Laboratory’s nuclear weapons experts, who, in turn, depend on the results from another year’s worth of intense scientific investigation and analysis done across the 36 square miles of Laboratory property.

One key component of all that work, the one that the director and the Laboratory’s experts depend on to an ever-increasing degree, is the Laboratory’s supercomputers. In the absence of real-world testing, supercomputers provide the only viable alternative for assessing the safety, reliability, and performance of the stockpile: virtual-world simulations.

I, Iceberg
Hollywood movies such as the Matrix series or I, Robot typically portray supercomputers as massive, room-filling machines that churn out answers to the most complex questions—all by themselves. In fact, like the director’s Annual Assessment Letter, supercomputers are themselves the tip of an iceberg.


Although these rows of huge machines are the most visible component of supercomputing, they are but one leg of today’s supercomputing environment, which has three main components. The first leg is the supercomputers, which are the processors that run the simulations. The triad also includes a huge, separate system for storing simulation data (and other data). This leg is composed of racks of shelves containing thousands of data-storage disks sealed inside temperature- and humidity-controlled automated libraries. Remotely controlled robotic “librarians” are sent to retrieve the desired disks or return them to the shelves after they are played on the libraries’ disk readers. The third leg consists of the many non-supercomputers at the national security laboratories. The users of these computers request their data, over specially designed networks, from the robotic librarians so they can visualize and analyze the simulations from afar.

The most important assets in the Laboratory’s supercomputing environment are the people—designing, building, programming, and maintaining the computers that have become such a critical part of national security science. (Photo: Los Alamos)

The Los Alamos supercomputers are supported by a grand infrastructure of equipment used to cool the supercomputers and to feed them the enormous amounts of electrical power they need. They also need vast amounts of experimental data as input for the simulation codes they run, along with the simulation codes themselves (also called programs, or applications), tailored to run efficiently on the supercomputers. In addition, system software is necessary to execute the codes, manage the flow of work, and store and analyze data.

People are the most vital component of any supercomputer’s supporting infrastructure. It takes hundreds of computer scientists, engineers, and support staff to design, build, maintain, and operate a supercomputer and all the system software and codes it takes to do valuable science. Without such people, a supercomputer would be no more than a humble jumble of wires, bits, and boxes.


Supercomputers That Fill a Stadium
At Los Alamos, supercomputers, and the immense amount of machinery that backs them up, are in the Nicholas C. Metropolis Center for Modeling and Simulation, known pragmatically as the Strategic Computing Center (SCC).

Roadrunner, the world’s first petaflop computer, joined other supercomputers in the SCC’s computer room in 2008. It is a big machine, containing 57 miles of fiber-optic cables and weighing a half-million pounds. It covers over 6,000 square feet of floor space, 1,200 square feet more than a football field’s end zone. But that represents only a portion of the computer room’s vast floor space, which is 43,500 square feet, essentially an acre—90 percent of a football field (minus the end zones). (Roadrunner has finished its work for the Laboratory and is currently being shut down.)

What is really amazing, however, lies beneath the supercomputer room floor. A trip straight down reveals more vast spaces crowded with machinery that users never see.

The computer room is the SCC’s second floor, but that one floor is actually two, separated by almost four feet. That 4-foot space hosts the miles of bundled network cables, electrical power lines inside large-diameter conduit, and other subfloor equipment the supercomputers rely on. The double floor provides enough room for engineers and maintenance staff, decked out like spelunkers in hardhats and headlamps, to build and manage these subfloor systems.

Below this double floor, on the building’s first floor, is another acre-size room, a half-acre of which holds row upon row of cabin-size air-conditioning units. These cool the air and then blow it upwards into the computing room, where it draws the heat off the hard-working computers. The now-warmed air then rises to the third floor (basically an acre of empty space), whereupon it is drawn back down, at the rate of 2.5 million cubic feet per minute, to the first floor by the air coolers so the cooling cycle can begin again.

An additional half-acre of floor space stretches beyond the cooling room and holds the SCC’s electric power infrastructure, the machines that collectively keep the supercomputers running. There are rows of towering power distribution units (PDUs), containing transformers and circuit breakers, and for backup power, rotary uninterruptible power supply (RUPS) generators. Each RUPS uses motor generator technology. Electricity fed into the RUPS is used to build kinetic energy in a 9-foot-diameter flywheel that, in turn, generates electricity.


That bit of extra electricity evens out the flow of power to the supercomputers in the case of a power surge from, for example, a lightning strike, a common occurrence in summertime Los Alamos. In the case of a power outage, there is enough kinetic energy built up in the flywheel to provide 8–12 seconds of electricity to the supercomputers. Those few seconds are long enough for data about the current state of a running calculation to be written to memory, reducing the loss of valuable data.

The PDUs transform the incoming high-voltage electric power feed into lower voltage and distribute it to the supercomputers according to each machine’s particular voltage needs, for example, 220 volts for Roadrunner and 480 volts for the supercomputer named Cielo.

The Metropolis Center, also called the Strategic Computing Center, is the home of Los Alamos’ supercomputers and the vast infrastructure that supports them. (Photo: Los Alamos)

The Guardians
Because the Laboratory’s supercomputers work on national security problems 24 hours a day, 365 days a year, they require dedicated overseers who stay onsite and collectively keep the same exhausting schedule. The members of the SCC’s operations staff, a team of 22, are the experts who keep things running and make sure anything that goes wrong gets fixed, right away.

Divided into three shifts, the members of the operations staff tend monitoring equipment and keep watch from inside the Operations Center, a high, windowed nerve center that overlooks the computer room. The staff’s tasks are many and varied, as they are charged not only with overseeing the computer hardware and software but also, for example, with keeping tabs on the cooling system. The computer room’s environment must stay cool enough to prevent damage to the valuable computers; too much heat is a major threat.

These dedicated guardians are expected to be able to fix both hardware and software problems in about an hour. For software problems requiring additional support, a team of 30 software administrators, also stationed onsite, backs them up. If a software problem occurs outside regular business hours, the administrators can be called in and must report to the SCC within two hours.

The Laboratory’s Luna supercomputer can be accessed by all three national security labs (Los Alamos, Livermore, and Sandia), making it a “trilab” machine. Roadrunner and Cielo are also trilab machines. (Photo: Los Alamos)

Evolution to Revolution
Looking for all the world like row upon row of large gym lockers, a supercomputer is visibly very different from a personal computer (PC). But the real difference is in the work supercomputers do and the way they do it.

Today’s supercomputers are collections of tens of thousands of processors housed in “racks,” cabinets holding the processors and supporting equipment. The large number of processors is needed because supercomputers run immense calculations that no PC could do. The calculations are divided into smaller portions that the processors work on concurrently. This is parallel computing or actually, for a supercomputer, massively parallel computing.

A new supercomputer for Los Alamos can take years to create. The process begins with an intense collaboration between commercial computer companies, like IBM, Cray, Hewlett-Packard, etc., and Los Alamos’ computer experts, who have extensive experience both operating and designing supercomputers. Los Alamos computer personnel involve themselves in the creation of each new Laboratory supercomputer from the generation of the first ideas to the machine’s delivery . . . and after its delivery. Once it is built and delivered, before it is put to work, a supercomputer is disassembled, inspected, and reassembled to ensure that it can handle classified data securely and can be fixed and maintained by Laboratory staff.
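The "massively parallel" idea described above, one immense calculation divided into portions that many processors work on concurrently, can be sketched in miniature. This is an illustrative toy in Python, not Laboratory code; the function names are invented for the example.

```python
# Toy illustration of massively parallel computing: split one large
# calculation into chunks, compute the chunks concurrently in worker
# processes, then combine the partial results.
from multiprocessing import Pool

def partial_sum(bounds):
    """Compute one chunk of the overall calculation."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    # Divide the problem into one chunk per worker process.
    step = n // workers
    chunks = [(k * step, n if k == workers - 1 else (k + 1) * step)
              for k in range(workers)]
    with Pool(workers) as pool:
        # Each worker computes its chunk concurrently; sum combines them.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(10_000))
```

A real weapons simulation divides a physical problem, not an arithmetic series, and runs across tens of thousands of processors, but the pattern of decompose, compute concurrently, and recombine is the same.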

As a practical and economic necessity, each new Los Alamos supercomputer takes advantage of commercial technological advances. And in the 21st century, beginning with Roadrunner, technology from the private sector is being evolved in innovative ways that are, in effect, a reinvention of how a supercomputer is built. Roadrunner, for example, used video game technology originally conceived for the Sony PlayStation 3, and with that technology, it became the world’s first hybrid supercomputer, with an architecture that linked two different types of processors to share computational functions. This particular evolutionary step in supercomputer architecture let Roadrunner surge onto the global stage as the world’s first petaflop computer.

Architectures are still evolving, so the next generation of machines will be radically new, even revolutionary, as will Trinity, Los Alamos’ next supercomputer, projected to arrive in 2015–2016. On Trinity, Laboratory designers and their industry partners will be trying out numerous innovations that will directly affect supercomputing’s future. So Trinity will be unlike any other computer Los Alamos researchers have used. And by the way, it will be 40 to 50 times faster than Roadrunner.

The exact form Trinity will take is still being decided, as design discussions are still underway, but whatever the final design is, it will be a means to an end. The form each new supercomputer takes is dictated by what the Laboratory needs the machine to do. In general that always means it must answer more questions, answer new kinds of questions about new and bigger problems, compute more data, and compute more data faster.

Los Alamos’ specific need, however, is focused on the stockpiled nuclear weapons and the continuous analysis of them. Laboratory supercomputers are already simulating the detonation of nuclear weapons, but Trinity and the computers that will succeed it at the Laboratory will need to simulate more and more of the entire weapon (button-to-boom) and in the finest-possible detail. Design efforts for Trinity will be aimed at that goal, and a great deal of effort will go into creating the many new and complex subsystems that the computer will need.

Saving Checkpoints Is the Name of the Game
At the system level, some design requirements remain the same from supercomputer to supercomputer, even when the next one is as fundamentally different as Trinity will be. For example, while a PC serves one user at a time, Laboratory supercomputers must serve many users simultaneously—users from the Laboratory’s various divisions and from the other national security labs far beyond Los Alamos. The computer they use must be designed not only to accommodate that multitude of users but also to provide ultra-secure access for the protection of classified data.

Every Los Alamos supercomputer must also be designed to enable an operator to quickly and easily identify and locate which component within the computer’s 6,000 square feet (or more) of equipment needs repair. And repairs will always be needed because of the ever-increasing size and speed of supercomputers. As these machines get larger and faster, they naturally become more and more subject to breakdown.

Think about this: If a PC crashes once a year and a supercomputer is equal to at least 10,000 PCs, one might expect to see more than one failure per hour on a supercomputer. Consider what such a failure rate could mean for an extensive computation. At Los Alamos, a nuclear weapon simulation can take weeks or even months to be completed, and those weeks and months are already costly in terms of computer time filled and electrical power used. In addition, successful simulations require a large collaborative effort between, for example, the weapons scientists, computer designers, computer code developers, and members of the supercomputer operations team. A breakdown equals time and money lost.
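The failure-rate estimate above is simple arithmetic, and a quick back-of-the-envelope check (a sketch, using only the figures stated in the text) shows where it comes from:

```python
# Back-of-the-envelope check of the failure-rate claim: if each of
# 10,000 PC-equivalents crashes roughly once per year, how often does
# the machine as a whole see a failure?
HOURS_PER_YEAR = 365 * 24            # 8,760 hours
nodes = 10_000                       # "at least 10,000 PCs"
crashes_per_node_per_year = 1        # "a PC crashes once a year"

failures_per_hour = nodes * crashes_per_node_per_year / HOURS_PER_YEAR
print(f"{failures_per_hour:.2f} failures per hour")  # prints "1.14 failures per hour"
```

At that rate, a failure lands somewhere in the machine faster than once an hour, which is why a weeks-long simulation cannot simply be run start to finish and hoped for.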

With downtime being a supercomputing inevitability, it is commonplace to mitigate the loss by “checkpointing,” which is like hitting “Save.” At predetermined times—say, every four hours—the calculation is paused and the results of the computation up to that point (the “checkpoint”) are downloaded to memory. Returning the simulation to the closest checkpoint allows a simulation (or other type of calculation) to be restarted after a crash with the least amount of data loss.
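The checkpoint-and-restart pattern described above can be sketched in a few lines. This is a minimal illustration only; the file name, on-disk format, and function names are invented for the example and are not the Laboratory's actual system.

```python
import os
import pickle

# Minimal checkpoint/restart sketch: periodically save the calculation's
# state so that, after a crash, the run resumes from the last checkpoint
# instead of from step zero.
CKPT_FILE = "sim.ckpt"  # illustrative path, not a real system file

def run_simulation(total_steps, ckpt_every=4):
    step, state = 0, 0
    # Resume from the most recent checkpoint, if one survived a crash.
    if os.path.exists(CKPT_FILE):
        with open(CKPT_FILE, "rb") as f:
            step, state = pickle.load(f)
    while step < total_steps:
        state += step            # stand-in for one timestep of real physics
        step += 1
        if step % ckpt_every == 0:
            # Like hitting "Save" every few hours of compute.
            with open(CKPT_FILE, "wb") as f:
                pickle.dump((step, state), f)
    return state
```

After a simulated crash, calling `run_simulation` again re-reads the checkpoint and repeats at most `ckpt_every - 1` steps; shrinking that repeated work (and the cost of writing the checkpoints themselves) is exactly the trade-off the Trinity designers are working on.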

Unfortunately, the compute time lost even to checkpointing is becoming dearer as supercomputers grow larger and therefore more prone to periodic crashes, so Trinity’s designers are working on new checkpointing methods and systems that will maintain a higher level of computational productivity. Los Alamos is working closely with industry to develop this kind of defensive capability.

An Itch That Needs Scratching
PCs are all fundamentally the same, similarly designed to do the same tasks. Users can just go out and buy the software they need for their brand of PC. But supercomputers are different. Designed and built to fill a specific need, each one scratches a hard-to-reach itch. At Los Alamos, the special need is scientific computing and simulation, and a supercomputer’s users need specially written codes for each project.

Who develops the advanced codes used on Los Alamos supercomputers—the codes for weapon simulation or for general science research? Those highly specialized programs are created in-house, and for many years, the Laboratory’s successive supercomputers have had enough in common that existing codes adapted well to them. Trinity’s architecture and performance characteristics, however, will presage a complete upheaval. The codes will need to be overhauled, not just adapted: more of a “build it from scratch” compared with just an updating.

The developers are already busy making codes “Trinity friendly” and doing so without having anywhere near the variety and amount of resources the giant commercial computer companies have available. For this work, developers depend on partnering with a range of Laboratory scientists, who provide the unique algorithms for solving the basic physics equations governing how the dynamics of a complex system play out over time. This is true whether the system being studied is the climate or a nuclear explosion. The nature of the scientists’ algorithms and the new data generated as a system changes with time determine how the code developers design and build a code to make efficient use of the supercomputer and its data storage and networking connections. In this age of “big data,” building programs that efficiently generate unbelievably massive datasets on a supercomputer and make them useful has become a grand challenge. (See the article “Big Data, Fast Data—Prepping for Exascale” in this issue.)

A Titanic Achievement
Designing, building, operating, and maintaining a supercomputer are completely different experiences from working with Word or Excel on a PC at home or at the office. That is true today and will be true, in spades, tomorrow. Computer architectures continue to evolve, leading to the upcoming Trinity and eventually to machines still unimagined.

The Laboratory’s supercomputers cannot exist without massively complex and expensive infrastructures, which are often unacknowledged and unappreciated, and without the effort and creative thinking of hundreds of scientists, engineers, and technicians. Working together, they meet the challenge of providing the most-advanced supercomputing environments in the world and then use them to perform the national security science that makes the director’s Annual Assessment Letter possible.

It is hard work, and it is certainly worth it.

~ Clay Dillingham

Using supercomputers, scientists can interact with simulations of everything from nuclear detonations to protein synthesis or the birth of galaxies. These simulations can boggle the mind—and at the same time provide clarity. Scientists wear special glasses to view simulations in 3D at extremely high resolution. They can even manipulate the simulations, as the viewer shown here is doing. (Photo: Los Alamos)

Floor Space
The SCC is a 300,000-square-foot building. The vast floor of the supercomputing room is 43,500 square feet, almost an acre in size.

The Guardians
The Strategic Computing Center (SCC) operations staff oversees the Laboratory’s supercomputers 24 hours a day, 7 days a week, 365 days a year, in 8-hour shifts, from the Operations Center. These experts keep supercomputers, like Cielo (shown outside the windows), running at their best.

Electric Power
The amount and cost of electric power required to run a supercomputer are staggering.

Today, a megawatt (MW) of power costs $1 million per year. Roadrunner uses 2 MW. Cielo, the Laboratory’s newest supercomputer, is a 3-MW machine. Trinity will be a 12-MW machine.

The combined supercomputing facilities at Los Alamos use $17 million per year of electricity.
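The sidebar's power-cost arithmetic can be stated as code. This sketch uses only the figures given in the text (the $1 million per megawatt-year rate and the per-machine wattages); the variable names are invented for the example.

```python
# The sidebar's arithmetic: at $1 million per megawatt-year, annual
# power cost scales directly with a machine's power draw.
COST_PER_MW_YEAR = 1_000_000   # dollars per MW per year, per the sidebar
machines_mw = {"Roadrunner": 2, "Cielo": 3, "Trinity (projected)": 12}

annual_cost = {name: mw * COST_PER_MW_YEAR for name, mw in machines_mw.items()}
for name, cost in annual_cost.items():
    print(f"{name}: ${cost:,} per year")
```

At that rate Trinity alone would cost $12 million per year to power, which helps explain the $17 million combined electricity bill the sidebar reports for the facility today.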

Using all that electric power means that supercomputers generate lots of heat. If not kept cool, a supercomputer will overheat, causing processors to fail and the machine to need costly, time-consuming repairs.

Managers at the SCC inspect the double floor beneath the acre-size supercomputing room. Several of the giant air-cooling units are visible in the foreground and behind the managers.

An electrician, wearing personal protective gear, works on a 480-volt breaker inside a power distribution unit.

Los Alamos National Laboratory

To capture the dust and dirt that might otherwise blow into the supercomputers, the 84 giant air-coolers use 1,848 air filters. It takes two staff members an entire month to change the filters.

Air-Cooling
Beneath the acre-size supercomputing room in the SCC is a 1.5-acre floor that houses 84 giant 40-ton air-cooling units. Together, these units can move 2.5 million cubic feet of chilled air per minute through the supercomputing room above.

The air-cooling units use water, cooled by evaporation, to chill the air before it is blown upward to circulate around the supercomputers.

The air, now heated by cooling the supercomputers, is drawn back down to the lower floor and back into the air-cooling units. This process transfers the heat from the air to the water, which is then recooled by evaporation.

The high winds blowing beneath the supercomputer room are generated by the massive air-cooling units.
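Two per-unit figures follow from the numbers above; both are derived here for illustration, not stated in the source:

```python
# Derived cooling figures: 84 units share 1,848 filters and together move
# 2.5 million cubic feet of chilled air per minute.
units = 84
filters = 1848
total_cfm = 2_500_000

print(filters // units)          # 22 filters per cooling unit
print(round(total_cfm / units))  # ~29,762 cubic feet per minute per unit
```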


Water for Cooling
The amount of water required to cool the air that, in turn, cools a supercomputer is also staggering. The SCC uses 45,200,000 gallons of water per year to cool its supercomputers. This amount of water costs approximately $120,000 per year.

By the end of the decade, as supercomputers become more powerful and require more cooling, the SCC is predicted to double its water use to 100,000,000 gallons.

The SCC has five evaporative cooling towers. These towers evaporate water to dissipate the heat absorbed by the water in the air-cooling units.

There is room to add an additional cooling tower as the supercomputing needs of the Stockpile Stewardship Program increase.


THE TIP OF THE ICEBERG
A new supercomputer for Los Alamos can take years to create. The process begins with an intense collaboration between commercial computer companies, like IBM, Cray, Hewlett-Packard, etc., and Los Alamos’ computer experts, who have extensive experience both operating and designing supercomputers. Los Alamos computer personnel involve themselves in the creation of each new Laboratory supercomputer from the generation of the first ideas to the machine’s delivery . . . and after its delivery. Once it is built and delivered, before it is put to work, a supercomputer is disassembled, inspected, and reassembled to ensure that it can handle classified data securely and can be fixed and maintained by Laboratory staff.

Los Alamos’ specific need, however, is focused on the stockpiled nuclear weapons and the continuous analysis of them. Laboratory supercomputers are already simulating the detonation of nuclear weapons, but Trinity and the computers that will succeed it at the Laboratory will need to simulate more and more of the entire weapon (button-to-boom) and in the finest-possible detail. Design efforts for Trinity will be aimed at that goal, and a great deal of effort will go into creating the many new and complex subsystems that the computer will need.

Every year for the past 17 years, the director of Los Alamos National Laboratory has had a legally required task: write a letter—a personal assessment of Los Alamos–designed warheads and bombs in the U.S. nuclear stockpile. This letter is sent to the secretaries of Energy and Defense and to the Nuclear Weapons Council. Through them the letter goes to the president of the United States.

http://www.lanl.gov/newsroom/publications/national-security-science/2013april/_assets/docs/under-supercomputer.pdf

Page 10: Oakland Domain Awareness Center HP NFZO Violation Exhibits

EXHIBIT C (03) – Hewlett-Packard (HP) Nuclear Weapons

TRANSFORMING DATA INTO KNOWLEDGE Super Computing 2002

The ASCI Q System: 30 TeraOPS Capability at Los Alamos National Laboratory
The Q supercomputing system at Los Alamos National Laboratory (LANL) is the most recent component of the Advanced Simulation and Computing (ASCI) program, a collaboration between the U.S. Department of Energy's National Nuclear Security Administration and the Sandia, Lawrence Livermore, and Los Alamos national laboratories. ASCI's mission is to create and use leading-edge capabilities in simulation and computational modeling. In an era without nuclear testing, these computational goals are vital for maintaining the safety and reliability of the nation's aging nuclear stockpile.

ASCI Q Hardware

The Q system, when complete, will include 3 segments, each providing 10 TeraOPS capability. The three segments will be able to operate independently or as a single system. One-third of the final system has been available to users for classified ASCI codes since August 2002 (with a smaller initial system available since April). This portion of the system, known as QA, comprises 1024 AlphaServer ES45 SMPs from Hewlett-Packard (HP), each with 4 Alpha 21264 EV-68 processors. Each of these 4,096 CPUs has 1.25-GHz capability, creating an aggregate 10 TeraOPS.
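The aggregate figure checks out if each EV-68 processor retires two floating-point operations per cycle (one add plus one multiply), the usual peak assumption for the Alpha 21264 family; that assumption is mine, not stated in the flyer:

```python
# Theoretical peak of one Q segment:
# 1024 SMPs x 4 CPUs x 1.25 GHz x 2 flops/cycle.
smps = 1024
cpus_per_smp = 4
clock_ghz = 1.25
flops_per_cycle = 2  # assumed: one FP add + one FP multiply per cycle

cpus = smps * cpus_per_smp
peak_teraops = cpus * clock_ghz * flops_per_cycle / 1000  # GFLOPS -> TeraOPS

print(cpus)          # 4096
print(peak_teraops)  # 10.24
```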

An identical segment, QB, is now being tested with unclassified scientific runs, but will soon be available for secure computing. Los Alamos has an option to purchase the third 10 TeraOPS system from HP.

The final Q system will provide 30 TeraOPS capability:

• 3072 AlphaServer ES45s from Hewlett-Packard (formerly Compaq)

• 12,288 EV-68 1.25-GHz CPUs with 16-MB cache
• 33 Terabytes (TB) memory
• Gigabit fiber-channel disk drives providing 664 TB of global storage
• Dual-controller-accessible 72-GB drives arranged in 1536 5+1 RAID5 storage arrays, interconnected through fiber-channel switches to 384 file server nodes.

Figure 1: The first sections of the ASCI Q 30-TeraOPS supercomputer being installed at Los Alamos National Laboratory are now up and running.

On QA, the Linpack benchmark ran at 7.727 TeraOPS. This is 75.48% of the 10.24-TeraOPS theoretical peak of the system.
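As a check, the ratio of the two quoted numbers works out to about 75.46%, marginally below the flyer's 75.48%, presumably because the reported Linpack figure is itself rounded:

```python
# Linpack efficiency on QA: measured TeraOPS over theoretical peak.
linpack_teraops = 7.727
peak_teraops = 10.24

print(f"{linpack_teraops / peak_teraops:.2%}")  # 75.46%
```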

Page 1 of 2: http://www.sandia.gov/supercomp/sc2002/flyers/ASCI_Q_rev.pdf

Page 11: Oakland Domain Awareness Center HP NFZO Violation Exhibits

EXHIBIT C (04) – Hewlett-Packard (HP) Nuclear Weapons

The Network - Tying together 3072 SMPs

Very integral to Q is the Quadrics (QSW) dual-rail switch interconnect, which uses a fat-tree configuration. The final switch system will include 6144 QSW PCI adapters and six 1024-way QSW federated switches, providing high bandwidth (250 Mbytes/s/rail) and low latency (~5 µs). The Quadrics network enables high-performance file serving within the segments. A 6th-level Quadrics network will connect the 3 segments.

Performance on QA

Even at one-third of the final capability, performance on ASCI Q is impressive. Several ASCI codes have scaled to the full 4096 processors, and many applications have experienced significant performance increases (5-8 times faster) over previous ASCI systems. LANL will run its December 2002 ASCI Milepost calculation on the QA segment.

Supporting Q - Facilities

Q is housed in the new 303,000 sq ft Nicholas C. Metropolis Center for Modeling and Simulation. The Metropolis Center includes a 43,500 sq ft unobstructed computer room and office space for about 300 staff. In addition, it has the facilities to support air or water cooling of computers and 7.1 MW of power, expandable to 30 MW. The final 30T Q system will occupy 20,000 sq ft and use about 3 MW of power. The final system will comprise about 900 cabinets for the 3072 AlphaServer ES45 SMPs and related peripherals. Cable trays 1.8 miles in length will hold about 204 miles of cable under the floor.
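Two more figures can be derived from the facility numbers above (derived here for illustration, not stated in the flyer): the full system's energy efficiency and the average cable density in the trays:

```python
# Derived facility figures for the final 30-TeraOPS, ~3-MW Q system.
peak_ops = 30e12   # 30 TeraOPS
power_watts = 3e6  # ~3 MW
cable_miles = 204
tray_miles = 1.8

print(peak_ops / power_watts / 1e6)     # 10.0 MFLOPS per watt
print(round(cable_miles / tray_miles))  # ~113 cables per tray cross-section
```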

Supporting Q - Staff

A team of about 50 Los Alamos and HP employees supports the Q system. The work of this team involves extensive systems integration, tying together system management, networking, security, distributed resource management, data storage, applications support, development of parallel tools, user consultation, documentation, problem tracking, usage monitoring, operations, and facilities management. In addition to the Q system segments described here, this team manages several other Q-like clusters, providing additional resources to users.

For more information about ASCI Q, contact: John Morrison, CCN Division Leader ([email protected] or 505-667-6164), James Peery, LANL Deputy Associate Director for ASCI ([email protected] or 505-667-0940), Manuel Vigil, Q Project Leader ([email protected] or 505-665-1960), Ray Miller, Q Project team member ([email protected] or 505-665-3222), or Cheryl Wampler, CCN-7 Group Leader ([email protected] or 505-667-0147)

ASCI Q is a DOE/NNSA/ASCI/HP (formerly Compaq) Partnership, operated by the Computing, Communications and Networking (CCN) Division at Los Alamos National Laboratory. http://www.lanl.gov/asci

LALP-02-0230

The new Nicholas C. Metropolis Center for Modeling and Simulation houses Q, the ASCI 30-TeraOPS supercomputer at Los Alamos National Laboratory.

Page 2 of 2: http://www.sandia.gov/supercomp/sc2002/flyers/ASCI_Q_rev.pdf

Page 12: Oakland Domain Awareness Center HP NFZO Violation Exhibits

EXHIBIT D (01) – County of Marin Peace Conversion Commission: Nuclear Weapons Contractors

http://www.marincounty.org/depts/bs/boards-and-commissions/commissions/peaceconversion

Page 13: Oakland Domain Awareness Center HP NFZO Violation Exhibits

EXHIBIT D (02) – County of Marin Peace Conversion Commission: Nuclear Weapons Contractors

Marin County Peace Conversion Commission c/o The Marin County Board of Supervisors

Telephone number for Commission Chair, Jon Oldfather: (415) 377-3931 Email for Commission Chair: jo

Please Email a copy to PCC Secretary: Ann Gregory @ [email protected]

Date _____________ From: (Name of person and name of department)

To: Marin County Peace Conversion Commission

Subject: Request for Override

We request an override for the following product or service: (Description of product or service):

_______________________________________________________________________________

______________________________________________________________________________________

______________________________________________________________________________________

Name of Entity which provides the needed product or service, but which is on the Marin County List of Nuclear Weapons Contractors: ______________________________________________________________________________________

The reason that the entity named is the only practical source for the product or service is as follows:

_______________________________________________________________________________

______________________________________________________________________________________

______________________________________________________________________________________

______________________________________________________________________________________

_________________

Thank you for your attention.

Sincerely,

http://www.marincounty.org/depts/bs/boards-and-commissions/commissions/~/media/Files/Departments/BS/Boards%20and%20Commission/C

ommissions/OverrideRequestLetter.ashx

Page 14: Oakland Domain Awareness Center HP NFZO Violation Exhibits

EXHIBIT E (01) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES

Big possibilities. Compact form factor.

With its innovative design, the HP Z620 Workstation gives you a near silent computing solution in

The performance you demand.

for a single Intel® Xeon®

Bring your ideas to life faster.

Modify your machine.

Data sheet

HP Z620 Workstation

HP recommends Windows.

http://h10010.www1.hp.com/wwpc/pscmisc/vac/us/product_pdfs/Z620_datasheet-highres.pdf (PDF Page 01)

Page 15: Oakland Domain Awareness Center HP NFZO Violation Exhibits

DAC – HP STOREEASY (DATA STORAGE) EXAMPLE ITEMS

EXHIBIT E (02) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES

December 2012

Family data sheet

HP StoreEasy Storage

HP StoreEasy Storage Family
StoreEasy 1630 – 42TB

StoreEasy 12LFF Disk Enclosure
StoreEasy 18TB LFF Drive Bundle

Page 16: Oakland Domain Awareness Center HP NFZO Violation Exhibits

EXHIBIT E (03) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES

http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf (PDF Page 14)

Page 17: Oakland Domain Awareness Center HP NFZO Violation Exhibits

EXHIBIT E (04) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES

http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf (PDF Page 16)

Page 18: Oakland Domain Awareness Center HP NFZO Violation Exhibits

EXHIBIT E (05) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES

http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf (PDF Page 41)

Page 19: Oakland Domain Awareness Center HP NFZO Violation Exhibits

EXHIBIT E (06) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES

http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf (PDF Page 42)

Page 20: Oakland Domain Awareness Center HP NFZO Violation Exhibits

EXHIBIT E (07) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES

http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf (PDF Page 43)

Page 21: Oakland Domain Awareness Center HP NFZO Violation Exhibits

EXHIBIT E (08) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES

http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf (PDF Page 44)

Page 22: Oakland Domain Awareness Center HP NFZO Violation Exhibits

EXHIBIT E (09) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES

http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf (PDF Page 45)

Page 23: Oakland Domain Awareness Center HP NFZO Violation Exhibits

EXHIBIT E (10) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES

http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf (PDF Page 46)

Page 24: Oakland Domain Awareness Center HP NFZO Violation Exhibits

EXHIBIT E (11) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES

http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf (PDF Page 47)

Page 25: Oakland Domain Awareness Center HP NFZO Violation Exhibits

EXHIBIT E (12) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES

http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf (PDF Page 48)

Page 26: Oakland Domain Awareness Center HP NFZO Violation Exhibits

EXHIBIT E (13) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES

http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf (PDF Page 49)

Page 27: Oakland Domain Awareness Center HP NFZO Violation Exhibits

EXHIBIT E (14) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES

http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf (PDF Page 50)

Page 28: Oakland Domain Awareness Center HP NFZO Violation Exhibits

EXHIBIT E (15) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES

http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf (PDF Page 51)

Page 29: Oakland Domain Awareness Center HP NFZO Violation Exhibits

EXHIBIT E (16) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES

http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf (PDF Page 52)

Page 30: Oakland Domain Awareness Center HP NFZO Violation Exhibits

EXHIBIT E (17) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES

http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf (PDF Page 53)

Page 31: Oakland Domain Awareness Center HP NFZO Violation Exhibits

EXHIBIT E (18) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES

http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf (PDF Page 54)