40th Cray User Group Conference
Final Program
June 15–19, 1998
Stuttgart, Germany
CALCULATING ENGINES
STUTTGART
CUG 1998
Dear CUG members and friends,
It is with great pleasure that we welcome you to the 40th Cray User Group Conference. CUG 98 will provide you with the opportunity to hear the latest information about what's "hot" in High Performance Computing and Communication on SGI and Cray systems. debis Systemhaus, the Computer Center of the University of Stuttgart (RUS), and SGI Germany are pleased to jointly host this conference. The conference site, Kultur- und Kongresszentrum Liederhalle (KKL), is nicely located in downtown Stuttgart within walking distance of many attractive places.
Our theme, "Calculating Engines," spans from the days of Wilhelm Schickard, who in 1623 designed in nearby Tübingen the first mechanical calculating engine (on display at the conference), through the famous Cray-2 at RUS, the first outside the US, to today's High-Performance Computing Center, jointly operated by RUS and Daimler-Benz InterServices (debis) AG, the services company of the Daimler-Benz Group.
During CUG 98 you will meet many people with interests similar to yours. Visit beautiful Baden-Württemberg in the South of Germany in the early summertime. Stroll through the city of Stuttgart, the capital of Baden-Württemberg, surrounded by more hills than the seven of Rome, where you will find interesting and beautiful museums, theaters, and estates to visit, and have the chance to do some shopping in a modern city with exciting shops. Take a tour to the famous old university towns of Heidelberg and Tübingen not far away. See Lake Constance and the Black Forest. Drive the Baroque Route of Upper Swabia. Visit the birthplaces of Schiller, Hegel, Mörike, Daimler, or Bosch, to name only a few famous persons from our region. Throughout your stay and travels, you will certainly enjoy our regional cuisine, in addition to that inspired by our French neighbors. You will find our famous wines to be a pleasure. Come see and feel the old culture of Europe.
To make the most of your stay with us, we have prepared this booklet, which provides information about:

• Conference program,
• Conference registration,
• Conference hotels, and
• Social events.
Grüß Gott in Stuttgart, the site of "Calculating Engines"!
Prof. Dr.-Ing. Roland Rühle, Director
Computer Center of the University of Stuttgart (RUS)

Karl-Heinz Streibich, Member of the Board
debis Systemhaus GmbH

Dipl.-Ing. Walter Wehinger, RUS
Local Arrangements Chair
Welcome
To the members of CUG:
Welcome to the 40th Cray User Group meeting! This is your best chance in 1998 to exchange professional information and enjoy personal interactions with your fellow SGI/Cray Research supercomputing users.
While visiting the fabled capital of Baden-Württemberg, nestled in the south of Germany, you will discover how CUG, your SGI/Cray user forum for high performance communications and computing, can give you up-to-the-minute information and insight for that competitive edge you need.
Please take a look at the technical program in the following pages. Here, you will find a wealth of great opportunities on the CUG theme of "Calculating Engines," from the SGI/Cray kind during the day to the Mercedes-Benz kind at our traditional conference "CUG Night Out" event.
The CUG Program Committee has assembled for you a diverse array of detail-packed presentations in Tutorials, Keynote and General Sessions, Special Interest Technical presentations, and spontaneous Birds of a Feather (BOF) discussions.
• Tutorial Sessions on Monday morning, available to you at no additional charge, are a great opportunity for you to update your technical skills with the help of selected technical experts from SGI/Cray Research and CUG sites.
• The Welcome and Keynote Session, Monday afternoon, will be the Initial Program Load of "Calculating Engines" for CUG in Stuttgart, hosted by the Computer Center of the University of Stuttgart (RUS) and by Daimler-Benz InterServices (debis) AG.
• Parallel Technical Sessions each day will give you the opportunity to focus on the specific knowledge domains of the Special Interest Groups (SIGs). Presentations in these sessions have been reviewed and selected by the SIG chairpersons.
• Birds of a Feather (BOF) Sessions, the most dynamic aspect of a CUG conference, are scheduled as needed, and notices of BOF meetings will be posted near the Message Board. You are welcome to organize a BOF session at the conference.
• Formal receptions and informal luncheons are among the countless occasions you will have to exchange information with your colleagues from other CUG sites and to collaborate with representatives from SGI/Cray Research.
You can see how the Stuttgart CUG offers something for everyone during an exciting, educational, and entertaining week of "Calculating Engines."
Sam Milosevich, CUG Vice-President and Program Chair
Eli Lilly and Company
Message from Sam Milosevich, CUG Vice-President, and from the CUG Program Committee
Page 3
Program Notes
Special Interest Forums

The Special Interest Groups (SIGs) hold meetings that are open to all interested CUG attendees. Depending on the SIG, these meetings take place on Monday or Thursday. They provide a forum to discuss the future direction of the SIG, important issues, and talks of special interest. You are encouraged to attend any of these open meetings.
Program Committee Meeting

The Program Committee needs your ideas! Come to the Open Program Committee meeting on Thursday to hear about the next CUG conference and to share your suggestions on how to make each CUG the best one to date. All conference attendees are invited and encouraged to attend this open meeting. The CUG Conference Program is the net result of the combined efforts of all interested CUG sites. Come to the Program Committee meeting and make your unique contribution to the next CUG conference!
Video Theater

Supercomputers generate data that, via scientific visualization, can be transformed into pictures that provide a way for scientists to understand the information more clearly and quickly. Although the primary purpose of visualization is to facilitate the discovery process, scientists are increasingly coming to rely on it also to present their research conclusions. This Friday morning session showcases the latest in scientific/engineering visualization.
Conference Program Updates

We anticipate that there may be continuing changes to this Program schedule. You should plan to check the Bulletin Boards on site at CUG each day for schedule changes. Thank you for your patience and cooperation.
Contents

Welcome Message . . . . . . . . . . . . . . . .1
Message from CUG VP . . . . . . . . . . . .2
Program Notes . . . . . . . . . . . . . . . . . . .3
Schedule . . . . . . . . . . . . . . . . . . . . . . . .4
Monday . . . . . . . . . . . . . . . . . . . . . . .4
Tuesday . . . . . . . . . . . . . . . . . . . . . . .6
Wednesday . . . . . . . . . . . . . . . . . . . .8
Thursday . . . . . . . . . . . . . . . . . . . . .10
Friday . . . . . . . . . . . . . . . . . . . . . . .12
Abstracts . . . . . . . . . . . . . . . . . . . . . . .13
Local Arrangements . . . . . . . . . . . . . .33
How to Contact Us . . . . . . . . . . . .33
Conference Information . . . . . . . . .33
On Site Facilities . . . . . . . . . . . . . . .34
Hotel Information . . . . . . . . . . . . .34
Travel Information . . . . . . . . . . . . .35
Social Events . . . . . . . . . . . . . . . . . . . .36
Contacts . . . . . . . . . . . . . . . . . . . . . . .37
Hotel Floor Plan . . . . . . . . . . . . . . . . .42
Call for Papers . . . . . . . . . . . . . . . . . .43
Rooms: Hohenzollern (4.4.22), Rhein (4.3.11), Monrepos (4.4.25), Schiller-Saal
Page 4
Monday, June 15, 1998 AM

Tutorial I
8:30 UNICOS/IRIX Differences in DMF
Paul Ernst, SGI/Cray

Tutorial II
8:30 Scheduling and Configuration for Origin 2000
Jim Harrell, SGI/Cray

Tutorial III
8:30 Scheduling and Configuration Tuning for the T3E
Jim Grindle, SGI/Cray

Tutorial IV
8:30 Visualization: from Theory to Practice
Cal Kirchoff, SGI/Cray

10:30–11:00 Break

Tutorial V
11:00 Contemporary Issues in Network Security
Jay McCauley, SGI/Cray

Tutorial VI
11:00 Performance Optimization for the Origin 2000
Jeff Brooks, SGI/Cray

Tutorial VII
11:00 Mathematical Libraries
Bill Harrod, SGI/Cray

12:30–2:00 Newcomers Luncheon
Page 5
Monday, June 15, 1998 PM

1 General Session
Schiller-Saal
Chair: Walter Wehinger, RUS
2:00 Welcome, University of Stuttgart
Prof. Dr. rer. nat. E. Messerschmid, Prorektor Research and Technology
2:15 Parallel Numerical Simulations of Environmental Phenomena
Prof. Dr. rer. nat. Gabriel Wittum, Director, Institute for Computer Applications, University of Stuttgart
2:55 Challenges in the Digital World of Automotive R&D
Prof. Dr. Bharat Balasubramanian
3:25 CUG Report
Gary Jensen, CUG President, UIUCNCSA
Sam Milosevich, CUG Vice-President, ELILILLY

3:45–4:15 Break

4:15–5:00 Future of CUG
Schiller-Saal

5:00–6:00 Open Meetings
Please plan to attend the Open Meetings to discuss topics of interest to each Special Interest Group. Information on location and topics will be posted on the bulletin board on Monday, June 15.
2A Open Meetings: Hohenzollern (4.4.22)
2B Open Meetings: Rhein (4.3.11)
2C Open Meetings: Monrepos (4.4.25)
2D Open Meetings: Solitude (4.4.20)

SGI/Cray Research Reception (Alte Reithalle), 7:00 p.m.–10:00 p.m.
Page 6
Tuesday, June 16, 1998 AM

3A The Engine's Core: Operating Systems
Hohenzollern (4.4.22)
Chair: Ingeborg Weidl, MPG
9:00 PVP UNICOS Status and Update
Patti Langer, SGI/Cray
9:30 T3E and UNICOS/mk Status and Update
Jim Grindle, SGI/Cray
10:00 Advancing with UNICOS Toward IRIX
Barry Sharp, BCS

3B Performance and Evaluation
Rhein (4.3.11)
Chair: Jeffery A. Kuehn, NCAR
9:00 Parallel Job Performance in a Time-Sharing Environment
Dave McWilliams, UIUCNCSA
9:30 Industrial Applications on MPP Architectures
Jörg Stadler, debis Systemhaus GmbH
10:00 Performance Co-Pilot and Large Systems Performance
Ken McDonell, SGI/Cray

3C Operating Heterogeneous Sites
Monrepos (4.4.25)
Chair: Mike Brown, EPCC
9:00 SGI/Cray Monitoring Tools
Randy Lambertus, SGI/Cray
9:30 Industry Directions in Storage
Mike Anderson, SGI/Cray
10:00 (cont.)

10:30–11:00 Break

4 General Session
Schiller-Saal
Chair: Eric Greenwade, INEEL
11:00 The New Silicon Graphics
Rick Belluzzo, CEO, SGI
12:00 Questions and Answers
Beau Vrolyk, SGI

12:30–2:00 Lunch
Page 7
Tuesday, June 16, 1998 PM

5A IRIX: Origin Software Road Maps
Hohenzollern (4.4.22)
Chair: Nicholas Cardo, SS-SSD
2:00 IRIX on High End Origins: Plans & Status
Jim Harrell, SGI/Cray
2:30 MPT and Cluster Software Update
Karl Feind, SGI/Cray
3:00 Update of System Management Software for Large Origin Systems
Daryl Coulthart, SGI/Cray

5B Exploiting Parallelism
Rhein (4.3.11)
Chair: Larry Eversole, JPL
2:00 OpenMP Programming Model
Ramesh Menon, SGI/Cray
2:30 OpenMP: a Multitasking and Autotasking Perspective
Neal Gaarder, SGI/Cray
3:00 Expressing Fine-Grained Parallelism Using Fortran Bindings to Posix Threads
Henry Gabb, ARMY/WES

5C User Services
Schiller-Saal
Chair: Denise Brown, USARL
2:00 Recycling Instructor-Led Training on the WWW
Leslie Southern, OSC
2:30 Design Your Own Webzine: A Practical Guide
Lynda Lester, NCAR
3:00 (cont.)

3:30–4:00 Break

6A GigaRings, Fill-ups & Tune-ups
Hohenzollern (4.4.22)
Chair: Hartmut Fichtel, DKRZ
4:00 GigaRing Disk Update
Tom Hotle, SGI/Cray
4:30 Performance Tips for GigaRing Disk IO
Kent Koeninger, SGI/Cray
5:00 Cray Data Migration Facility & Tape Management Facility Update
Neil Bannister, SGI/Cray

6B Programming Environments and Models
Rhein (4.3.11)
Chair: Hans-Hermann Frese, ZIB
4:00 Programming Environment Status and Update
Sylvia Crain, SGI/Cray
4:30 IRIX Programming Tools Update
Bill Cullen, SGI/Cray
5:00 Cray Product Installation and Configuration
Scott Grabow, SGI/Cray

6C SGI/Cray Operations Tools and Storage Technologies
Monrepos (4.4.25)
Chair: Mike Brown, EPCC
4:00 DCE/DFS: An Environment for Secure & Uniform Access to Data and Resources
Dieter Mack, RUS
4:30 Monitoring at NCSA
Tom Roney, UIUCNCSA
5:00 A Common Operating Environment
Bill Middlecamp, SGI/Cray
Page 8
Wednesday, June 17, 1998 AM

7A Networking the Engines
Hohenzollern (4.4.22)
Chair: Hans Mandt, BCS-ASL
9:00 Scalability and Performance of Distributed I/O on Massively Parallel Processors
Peter W. Haas, RUS
9:30 The UNICORE Project: Uniform Access to Supercomputing over the Web
Jim Almond, ECMWF; Mathilde Romberg, KFA
10:00 Cray Networking Update
Michael Langer, SGI/Cray

7B Parallel and Distributed Computing
Rhein (4.3.11)
Chair: Hans-Hermann Frese, ZIB
9:00 MPI Regression Testing and Migration to Origin 2000
Terry Nelson, SS-SSD
9:30 Distributed Supercomputing Services in a Heterogeneous Environment
Peter Morreale, NCAR
10:00 High Performance Fortran for the T3E and Origin
Douglas Miles, Portland Group

7C Visualization
Schiller-Saal
Chair: L. Eric Greenwade, INEEL
9:00 Visualization of Astrophysical Data with AVS/Express
Jean Favre, ETHZ
9:30 Interactive Direct Volume Rendering on the O2K
John Clyne, NCAR
10:00 VRML for Visualization
Jim Johnson, SGI/Cray

10:30–11:00 Break

8 General Session
Schiller-Saal
Chair: Barbara Horner-Miller, ARSC
11:00 CUG Elections
Jean Shuler, LLNL
11:40 Keynote Address: 25 Years of Computer Aided Engineering at Daimler-Benz in Stuttgart, Germany
Michael Heib, Manager HWW
12:25 CUG Election Results

12:30–2:00 Lunch
Page 9
Wednesday, June 17, 1998 PM

9A High Speed Data!
Hohenzollern (4.4.22)
Chair: Hartmut Fichtel, DKRZ
2:00 High Performance Network Backup & Restore with HYPERtape
Uwe Olias, UNIKIEL
2:30 Functional Comparison of DMF and HPSS
Gerhard Rentschler, RUS
3:00 The Rebirth of DMF on IRIX
Alan Powers, NAS

9B Powering Science
Rhein (4.3.11)
Chair: Larry Ebersole, JPL
2:00 Material Science on Parallel Computers (T3E/900)
Andrew Canning, NERSC
2:30 Clustering T3Es for Metacomputing Applications
Michael Resch, RUS
3:00 Synchronization Using Cray T3E Virtual Shared Memory

9C SGI/Cray Service Plans and Q & A Panel
Schiller-Saal
Chair: Dan Drobnis, SDSC
2:00 SGI/Cray Service Report
Bob Brooks, SGI/Cray
2:30 SGI/Cray Q&A Panel
Moderator: Charlie Clark, SGI/Cray
3:00 (cont.)

3:30–4:00 Break

10A IRIX: The Long Road Ahead
Hohenzollern (4.4.22)
Chair: Nicholas Cardo, SS-SSD
4:00 Cellular IRIX: Plans & Status
Gabriel Broner, SGI/Cray
4:30 Year 2000 Compliance for UNICOS & UNICOS/mk
Dennis Arason and Bruce Schneider, SGI/Cray
5:00 (cont.)

10B Performance and Evaluation
Rhein (4.3.11)
Chair: Michael Resch, RUS
4:00 High-Performance I/O on Cray T3E
Ulrich Detert, KFA
4:30 Cray T90 versus Tera MTA
Jay Boisseau, SDSC
5:00 (cont.)

10C Joint Session: Software Tools and User Services
Monrepos (4.4.25)
Chair: Hans-Hermann Frese
4:00 Supercomputing: Strategies and Applications at the High-Performance Computing Center, Stuttgart
Alfred Geiger, RUS
4:30 Customer Scenarios for Large Multiprocessor Environments
Fritz Ferstl, Genias Software GmbH
5:00 How CUG Members Support Their Users
Barbara Horner-Miller, ARSC

CUG Night Out, 7:00 p.m.
Busses depart from in front of the Conference Center at 6:30.
(Remember to bring your invitation letter from debis!)
Page 10
Thursday, June 18, 1998 AM

11A The Power of the Engine: Users and the O/S
Hohenzollern (4.4.22)
Chair: Ingeborg Weidl, MPG
9:00 The Age-Old Question of How to Balance Batch and Interactive
Barry Sharp, BCS
9:30 MISER: User Level Job Scheduler
Kostadis Roussos, SGI/Cray
10:00 Serving a Demanding Client While Short on Resources
Manfred Stolle, ZIB

11B Applications and Algorithms
Rhein (4.3.11)
Chair: Larry Ebersole, JPL
9:00 Application Roadmap
Jef Dawson, SGI/Cray
9:30 Mathematical Methods for Mining in Massive Data Sets
Helene E. Kulsrud, CCR-P/IDA
10:00 (cont.)

11C GigaRing Operations
Monrepos (4.4.25)
Chair: Mike Brown, EPCC
9:00 GigaRing Configuring
Michael Langer, SGI/Cray
9:30 GigaRing Systems Monitoring
Birgit Naun, Thomas Plaga, KFA; Ralph Krotz, SGI/Cray
10:00 Open

10:30–11:00 Break

12 General Session
Schiller-Saal
Chair: Bonnie Hall, LANL
11:00 SGI/Cray Service Report
Ken Coleman, SGI/Cray
11:30 SGI/Cray Joint Software Report
Mike Booth, SGI; Denice Gibson, SGI/Cray

12:30–2:00 Lunch
Page 11
Thursday, June 18, 1998 PM

13A Hauling Big Data
Hohenzollern (4.4.22)
Chair: Hartmut Fichtel, DKRZ
2:00 Mass Storage at the NCSA: DMF and Convex UniTree
Michelle Butler, NCSA
2:30 Towards Petabytes of High Performance Storage at Los Alamos
Gary Lee, LANL
3:00 Storage and Data Management: Big Data Solutions
Ken Hibbard, SGI/Cray

13B Secure & Controlled Computing Engines
Rhein (4.3.11)
Chair: Bonnie Hall, LANL
2:00 Securing the User's Work Environment
Nick Cardo, SS-SSD
2:30 The State of Security for UNICOS & IRIX
Jay McCauley, SGI/Cray
3:00 IRIX Accounting Limits and UDB Functionality
Jay McCauley, SGI/Cray

13C User Services
Monrepos (4.4.25)
Chair: Leslie Southern, OSC
2:00 NPACI User Services
Jay Boisseau, SDSC
2:30 Cray Training Update
Bill Mannel, SGI/Cray
3:00 Pushing/Dragging Users Towards Better Utilization
Guy Robinson, ARSC

3:30–4:00 Break

14A Open Meetings: Hohenzollern (4.4.22)
14B Open Meetings: Rhein (4.3.11)
14C Open Meetings: Monrepos (4.4.25)

4:00–5:30 Please plan to attend the Open Meetings to discuss topics of interest to each Special Interest Group.
6:00 Also plan to attend the Program Steering Committee Meeting and help us prepare the program for our next conference.
The locations for these meetings will be posted on the bulletin board.
Page 12
Friday, June 19, 1998 AM

15A IRIX: From Plans to Reality
Hohenzollern (4.4.22)
Chair: Nick Cardo, SS-SSD
9:00 Getting It All Together
Daryl Grunau, LANL
9:30 Integrating an Origin2000 into a Cray Data Center
Chuck Keagle, BCS

15B Applications and Algorithms
Rhein (4.3.11)
Chair: Richard Shaginaw, BMSPRI
9:00 XVM: Extended Volume Management
Colin Ngam, SGI/Cray
9:30 The Good, the Bad, and the Ugly Aspects of Installing New OS Releases
Barry Sharp, BCS
10:00 Origin Craylink Partitioning
Steve Whitney, SGI/Cray

15C Visualization
Schiller-Saal
Chair: L. Eric Greenwade, INEEL
9:00 CUG Video Theater
9:30 (cont.)
10:00 Visualization of 3-Dimensional Material Science Applications
L. Eric Greenwade, INEEL

10:30–11:00 Break

16 General Session
Schiller-Saal
Chair: CUG Vice-President
11:00 CUG SIG Reports
CUG Vice-President
11:10 Parallel and Distributed Development and Simulation of Atmospheric Models
V. Mastrangelo and I. Mehilli, CNAM-Université; F. Schmidt, M. Weigele, J. Kaltenbach, A. Grohmann, and R. Kopetzky, University of Stuttgart
11:40 SGI/Cray Hardware Report and Hardware Futures
Steve Oberlin, SGI/Cray
12:40 CUG Next Steps: CUG 99 in Minneapolis, MN
John Sell, MSC
12:50 Closing Remarks
CUG President
1:00 End
Page 13
Abstracts
Monday, June 15, 1998
Tutorial I, 8:30
UNICOS/IRIX Differences in DMF
Paul Ernst, SGI/Cray
This tutorial presents the differences between the UNICOS and IRIX implementations of the Data Migration Facility. Topics covered will include feature differences, installation and configuration changes, DMF tape interface information (TMF and OpenVault), and conversion assistance for sites considering changing platforms from UNICOS to IRIX.
Tutorial II, 8:30
Scheduling and Configuration for Origin 2000
Jim Harrell, SGI/Cray
This talk will provide an understanding of large (64 processor and above) Origin configuration and tuning. The basics of Origin configuration and tuning will be explained. SGI/Cray's understanding and experience running large Origins will be used to provide guidance for customers on these large systems.
Tutorial III, 8:30
Scheduling and Configuration Tuning for the T3E
Jim Grindle, SGI/Cray
The T3E, while similar to the T3D for application programming, offers a host of new possibilities for system configuration. This tutorial will address these possibilities and present knowledge obtained from recent experience. The configuration includes such issues as the distribution of the OS servers, the placement of OS servers on the OS PEs, and the organization of pcache. A key question, the use of political scheduling, will be described. A walk-through of how to make key configuration choices and a discussion of their effects will be presented. Recommendations for optimal settings will be made. Since our knowledge is incomplete, input and feedback from sites about their experiences will be welcomed as part of the tutorial.
Tutorial IV, 8:30
Visualization: from Theory to Practice
L. Eric Greenwade, INEEL
The aim of this tutorial is to provide a greater understanding of the process and products of visualization. After a brief overview of the what, why, and when of visualization, a detailed example will be presented. Starting from the problem statement, gaining new insight into a complex data set, each step of the process will be detailed, with the pitfalls and intermediate stages described. The end result will then be analyzed as to the effectiveness of the output.
Tutorial V, 11:00
Contemporary Issues in Network Security
Jay McCauley, SGI/Cray
This tutorial reviews the current state of the art in security for networked computer systems. The material covered will include an introduction to firewalls and proxies, internet authentication technologies, encryption-based solutions including SSL and ssh, and security screening and monitoring tools. Some of the new security features found in the IRIX™ 6.5 Operating System and related products will be discussed, including capabilities, access control lists, and the new System Security Scanner found in the WebForce™ product. While no formal text is required, Bellovin and Cheswick's "Firewalls and Internet Security: Repelling the Wily Hacker" (new edition soon to be published) is an excellent overview of some of the topics covered.
Tutorial VI, 11:00
Performance Optimization for the Origin 2000
Jeff Brooks, SGI/Cray
This tutorial will describe the Origin 2000 architecture and the resulting programming concepts. Topics include features of the Origin 2000 hardware, the MIPS f90 compiler and its command line options, performance tools, single processor tuning, and multi-processor tuning. The tutorial assumes "Cray-centric" programming expertise.
Tutorial VII, 11:00
Mathematical Libraries
Bill Harrod, SGI/Cray
This tutorial will discuss a variety of scientific library routines for solving linear algebra and signal processing problems. The LAPACK package and SGI/Cray FFT routines will be highlighted. LAPACK provides a choice of algorithms, mainly for dense matrix problems, that are efficient and portable on a variety of high performance computers. Examples will be provided for converting LINPACK or EISPACK subroutine usage to the appropriate LAPACK subroutine. We will also discuss other migration issues, such as porting LIBSCI-based application codes to the new Origin 2000 scientific library called SCSL. The tutorial will also address the challenge facing designers of mathematical software in view of the development of highly parallel computer systems. We shall discuss ScaLAPACK, a project to develop and provide high performance scaleable algorithms suitable for highly parallel computers.
Open Meetings, 4:00, 4:30, and 5:00
Please plan to attend the Open Meetings to discuss topics of interest to each Special Interest Group. Check the conference bulletin board for details.
Page 14
Tuesday, June 16, 1998
3A The Engine's Core: Operating Systems
Chair: Ingeborg Weidl, MPG
9:00
PVP UNICOS Status and Update
Patti Langer, SGI/Cray
This talk will cover current release and support plans, reliability/stability efforts, and future plans, as well as T90P and J90++ updates.
9:30
T3E and UNICOS/mk Status and Update
Jim Grindle, SGI/Cray
This talk will cover the current status of system issues and their progress. We will specifically cover scheduling, GRM, and resiliency issues in UNICOS/mk on the T3E, along with a project update including release and support plans.
10:00
Advancing with UNICOS Toward IRIX
Barry Sharp, BCS
During the past seven years of running UNICOS at Boeing on X-MP, Y-MP, and T90 systems, many enhancements have been made to the OS, system administration, and user-level facilities. This paper will outline the rationale for these and provide brief descriptions of their implementation. The intent is to share the inventory with other UNICOS member sites and discuss those that might be applicable or of interest to the Origin 2000 IRIX system.
3B Performance and Evaluation
Chair: Jeffery A. Kuehn, NCAR
9:00
Performance of Thread-Parallel and Message-Passing Jobs in a Time-Sharing Environment on Origin 2000 Systems
Dave McWilliams, UIUCNCSA
At NCSA, we discovered that thread-parallel, gang-scheduled Origin 2000 jobs can use as much as 5 times as much CPU time when the load average on a system is high (e.g., 96 on a 64-processor system). We have been working with SGI as they make changes in the IRIX process scheduler to fix the problem. Although the work is still in progress, we have already seen improvements in job performance. We are planning to explore the performance of MPI jobs on a heavily loaded system in the near future. We will also discuss efforts to limit the load average on IRIX systems.
9:30
Cray T90 versus Tera MTA: The Old Champ Faces a New Challenger
Jay Boisseau, SDSC
The T90 represents the latest in CRI's line of parallel-vector supercomputers and still reigns as the champion in terms of performance on many production codes (for which parallel versions have not and perhaps will not be developed). Tera has recently delivered their first Multi-Threaded Architecture (MTA) to SDSC and hopes to provide a truly scalable shared-memory parallel platform that will compete favorably in performance and ease of programming against the T90. We present the results of running a few efficient T90 user codes on the MTA, without and with optimization, and compare to their T90 performance, with an initial analysis of the reasons for performance differences.
10:00
Performance Co-Pilot and Large Systems Performance
Ken McDonell, SGI/Cray
A brief overview of PCP features, the scope of the performance issues it is trying to address, canned demos, and availability. Addressing large system performance includes:
Page 15
Tuesday, June 16, 1998
• getting performance data out of the application and into PCP
• libpcp_trace
• PMDA construction and libpcp_pmda
• customization (leveraging the PCP building blocks to enhance the tools for performance management of your application)
• pmchart, pmview, mpgadgets
• pmlogger
• pmie
• scaleable performance visualizations
• extensibility
• correlating resource demands across the hardware, operating system, service, and application layers
• measuring quality of service
• a case study: building an MPI Performance Monitoring Toolkit from PCP parts
3C Operating Heterogeneous Sites
Chair: Mike Brown, EPCC
9:00
J90, T90, C90, T3, GigaRing, Origin2000 Monitoring Tools
Randy Lambertus, SGI/Cray
The presentation includes product description, application implementation, system monitoring, problem notification, service response actions, and real-life examples, and concentrates on the present products Watchstream, Watchlog, and Availmon and the related interface for each application sub-function.

Ideas for future development efforts, such as site system metric gathering and expanded notification options, will be opened for general discussion with the audience.
9:30
A Look at Disk and Tape Futures from the SGI/Cray Perspective
Mike Anderson, SGI/Cray
The disk drive industry continues to undergo changes at an ever increasing pace. Competition and technology advances continue to change the products produced and shorten the product life cycles. The tape drive industry is also undergoing big changes. An overview of these changes and their impact on SGI/Cray disk and tape products will be presented.
5A IRIX: Origin Software Road Maps
Chair: Nicholas Cardo, SS-SSD
2:00
IRIX on High End Origins: Plans & Status
Jim Harrell, SGI/Cray
This is an update of the plans and status for IRIX OS support of the high end systems. The primary focus will be the status of current system software releases and the plans for future releases and upgrades. The status and scalability of IRIX on more than 64 processors will be discussed.
2:30
MPT and Cluster Software Update for IRIX, UNICOS/mk, and UNICOS
Karl Feind, SGI/Cray
IRIX cluster software continues evolving to add both high availability and scalability functions. This talk briefly summarizes recent or planned changes in the FailSafe, NQE, IRISconsole, Array Services, and MPT products. The majority of the talk centers on MPI and SHMEM in MPT 1.2. In addition to a review of new features, the performance of MPI on Origin and of SHMEM on Origin and T3E will be overviewed.
Page 16
3:00
Update of System Management Software for Large Origin Systems
Dan Higgins, SGI/Cray
Support for large Origin systems has dramatically improved in the last year. This paper describes the use of IRIX systems software to improve management of large Origin systems. The software covered includes IRIX 6.4, MPI, Array Services, checkpoint, NQE job limits, and MISER and AXMAN for interactive limits. There will be additional improvements in 1998: UNICOS job limits, the UDB, the job notion, and extended accounting will be added to IRIX.
5B Exploiting Parallelism
Chair: Richard Shaginaw, BMSPRI
2:00
OpenMP Programming Model
Ramesh Menon, SGI/Cray
OpenMP is an application program interface (API) for shared-memory parallel programming. Pioneered by SGI, it is fast becoming a de facto industry standard, as evidenced by the large number of hardware and software vendors endorsing the standard. The functionality is designed to enable programmers to write coarse grain, scaleable, shared-memory parallel programs. This talk will present the why, what, and how of OpenMP.
2:30
OpenMP: a Multitasking and Autotasking Perspective
Neal Gaarder, SGI/Cray
The new OpenMP standard is a collection of standard compiler directives, library routines, and environment variables for shared memory parallelism. This paper discusses OpenMP and compares it with current PVP autotasking and multitasking capabilities.
3:00
Expressing Fine-Grained Parallelism Using Fortran Bindings to Posix Threads
Henry Gabb, ARMY/WES
In dynamics simulations, the through-space interactions between particles, which must be calculated every time step, consume the bulk of computational time. These calculations typically occur in a single, large loop containing many data dependencies. However, the iterations are often independent, so fine-grained parallelism should confer a significant performance gain. Pthreads have significant advantages over compiler directives, which often create separate UNIX processes. Multiple threads exist in a single process and require less system overhead. Also, threads are not linked to physical processors, as is often the case for compiler directives. Multiple threads residing on a single processor give better resource utilization (e.g., separate threads doing computation and I/O operations). Performance and programming issues that arise when expressing fine-grained parallelism on SGI SMP and Cray T3E architectures will be discussed.
5C User Services
Chair: Denise Brown, USARL
2:00
Recycling Instructor-Led Training on the WWW
Leslie Southern, OSC
Web-based formats have great potential for broadening the reach of traditional instructor-based training courses. To enhance training materials on the WWW, the Ohio Supercomputer Center is experimenting with audio and slide combinations on the WWW. The objective is to re-use these courses to reach a larger audience at any time and any place. In addition to case studies, descriptions of experiences, hardware, software, and tools will be included. Evaluation methods and available user feedback will also be presented.
Page 17
2:30
Design Your Own Webzine: A Practical Guide
Lynda Lester, NCAR
This paper gives concrete examples of what to do and what not to do in creating a newsletter (Webzine) for users on the WWW. We discuss navigation aids; methods of information retrieval (search engine, index, contents); principles of typography and page design; use of graphics and animations; cross-platform "gotchas"; writing for the Web; review cycles; URLs and file structure; and user notification procedures. What does "interactive" mean? Do readers still want hardcopy? How far will people click? We will cover all these points and more in an entertaining and pragmatic talk on the basic concepts of Web design.
6A GigaRings, Fill-ups & Tune-ups
Chair: Hartmut Fichtel, DKRZ
4:00
GigaRing Mass Storage Update
Michael Langer, SGI/Cray
This talk will focus on GigaRing systems mass storage plans, including release plans, support plans, stability, and performance.
4:30
Performance Tips for GigaRing Disk IO
Kent Koeninger, SGI/Cray
This talk will present techniques on CRAY T90, CRAY J90, and CRAY T3E systems to maximize GigaRing disk performance, including FCN RAID disks, pcache on T3Es, SSD caching on T90s, memory caching on J90 systems, user-memory caching (FFIO), disk configuration tips, and other UNICOS features for fast IO. The recent FCN RAID performance improvements enabled measurements and characterization of these important I/O techniques.
5:00
Cray Data Migration Facility & Tape Management Facility Update
Neil Bannister, SGI/Cray
This paper will review status and plans for both Cray and SGI platforms. Since the last CUG meeting, DMF was released. This has presented the DMF team with new challenges, which will be covered in this paper. The paper will also report progress and release plans for the TMF and CRL products.
6B Programming Environments and Models
Chair: Hans-Hermann Frese, ZIB
4:00
Programming Environment Status and Update
Sylvia Crain, SGI/Cray
This talk will cover current release and support plans for the Programming Environment for the MPP and PVP platforms. It will also discuss future plans in this area, including migration efforts.
4:30
IRIX Programming Tools Update
Bill Cullen, SGI/Cray
In November 1997, the IRIX programming tools changed direction. This talk will cover the current status and road map for Workshop, Speedshop, MPF, RapidApp, and CosmoCode.
5:00
Cray Product Installation and Configuration
Scott Grabow, SGI/Cray
The intent of this talk is to provide insight into how the Common Installation Tool (CIT) is being used to make software installation follow a consistent process, and to address common problems that have been seen in the migration to CIT and to CD-ROMs for media distribution. In addition, examples will be shown of how these process changes affected various manuals, whose goal is to follow a task-oriented approach and to separate installation and system configuration issues into different manuals.
6C Cray/SGI Operations Tools and Storage Technologies
Chair: Mike Brown, EPCC
4:00
Integration of Origin 2000s, T90s & J90s at USARL
Albert G. Edwards IV, USARL
The ARL MSRC has an environment where the Origin 2000, T90, and J90 have been integrated into a common computing environment. This experience has not been without its challenges. The ARL MSRC has had to migrate many users and codes from an 8-node Power Challenge Array and a Cray-2 to the Origin 2000, T916, J932, and J916. The J916 is functioning as the fileserver for the Origins and T90s. This talk will focus on the experiences and lessons learned in the installation and integration of these computing platforms.
4:30
Toward an Integrated Monitoring Scheme
Tom Roney, UIUC/NCSA
The National Center for Supercomputing Applications (NCSA) employs a virtual operator (Voper) on UNIX operating systems. Voper detects potential system and application problems, and displays warning messages for administrative attention. Voper is being further developed for interactive use, to provide automated implementation of countermeasures against reported problems. Voper and all other tools used to monitor NCSA systems are being collected to run under a single software package, Computer Associates' Unicenter. NCSA effectively pilots a production environment, and is progressing toward an integrated monitoring scheme.
5:00
SGI/Cray Plans for Operations Commonality across MPP, Vector, and Scaleable Node Environments
Bill Middlecamp, SGI/Cray
Cray plans to offer a Common Operating Environment across traditional Cray MPP, parallel vector, and scaleable node product families. The Common Operating Environment, along with the Common Supercomputing API, is intended to provide interoperability and ease customer transition among the product families. A Common Operating Environment API specification will be available to customers at this CUG.
Wednesday, June 17, 1998
7A The Network & Computing Engines
Chair: Hans Mandt, BCS-ASL
9:00
Scalability and Performance of Distributed I/O on Massively Parallel Processors
Peter W. Haas, RUS
Supercomputer network configurations, both internal and external, have developed over several orders of magnitude in performance in recent years. At the same time, the scope of networks has been broadened to accept not only the traditional vector supercomputer but also massively parallel systems of various types, file servers, workstation clusters, visualization laboratories, and multimedia technology.
Networking and file system I/O on MPPs have been confined to a limited number of system-level processors in the past, leading to well-known bottlenecks, especially with the execution of network protocols. There are elegant ways, however, to co-locate server processes on user nodes, which will enable truly distributed I/O.
The High Performance Storage System (HPSS) testbed at RUS is taken as an example to illustrate the basic principles. This testbed covers all major computing platforms.
9:30
The UNICORE Project: Uniform Access to Supercomputing over the Web
Jim Almond, ECMWF; Mathilde Romberg, KFA
Supercomputers are becoming more powerful, but also more centralized in fewer centers. To fully utilize the potential of such facilities, more uniform, secure, and user-friendly access via the Internet is needed. In addition to these generalities, this talk will describe the UNICORE project, a large collaboration dedicated to the implementation of a prototype addressing the above goals.
10:00
Cray Networking Update
Michael Langer, SGI/Cray
This talk will cover future Cray/SGI networking plans, including release plans, support plans, stability, and performance.
7B Parallel and Distributed Computing
Chair: Hans-Hermann Frese, ZIB
9:00
MPI Regression Testing and Migration to Origin 2000
Terry Nelson, SS-SSD
The computing industry, for economic and technical reasons, is moving inexorably towards an increasingly parallel environment. One of the major paradigms to accomplish this is message passing, and one of the major tools in this area is MPI. This paper will describe a set of Fortran and C regression tests which test MPI functionality and performance. A series of runs on a C90, J90, and Origin 2000, as well as tests with production-sized jobs, will be described. Particular attention will be given to the migration of programs from the C90 and J90 environments to the Origins.
9:30
Distributed Supercomputing Services in a Heterogeneous Environment
Peter Morreale, NCAR
The NCAR Distributed Computing Services (DCS) project is a five-year effort aimed at providing NCAR supercomputing services on local users' desktops. The initial effort has focused on providing NCAR Mass Storage System (MSS) services to major compute servers including Cray, SGI, IBM, and Sun systems. The DCS system is designed around OSF's DCE software. DCS makes liberal use of the DCE Remote Procedure Call (RPC) mechanism, as well as the Cell Directory Service (CDS). This paper discusses the design of the NCAR DCS system as currently implemented, as well as future directions.
10:00
High Performance Fortran for the T3E and Origin
Douglas Miles, Portland Group
No abstract available
7C Visualization
Chair: L. Eric Greenwade, INEEL
9:00
Visualization of Astrophysical Data with AVS/Express
Jean Favre, ETHZ
In the context of astrophysical applications, Euler equations including source terms are solved numerically in two and three dimensions. The data are computed on a J90 cluster by an adaptive mesh code with a high degree of vectorization and parallelization. The output data are stored in a multi-level hierarchy with solutions at different levels of spatial resolution.
These time-dependent simulations impose very high visualization constraints. We use AVS/Express to integrate the custom developments required for the memory, CPU, graphics resources, and remote data access imposed by the application. We present the implementation of the visualization environment on SGI workstations with our 512-Tbyte storage facility.
9:30
Interactive Direct Volume Rendering of Time-Varying Data on the Origin 2000
John Clyne, NCAR
Direct Volume Rendering (DVR) is a powerful volume visualization technique for exploring complex three- and four-dimensional scalar data sets. Unlike traditional surface-fitting approaches to volume visualization, which map volume data into geometric primitives and can benefit greatly from widely-available commercial graphics hardware, computationally-expensive DVR is performed, with rare exception, exclusively on the CPU. Fortunately, DVR algorithms tend to parallelize readily. Much prior work has been done to produce parallel volume renderers capable of visualizing static data sets in real time. Our contribution to the field is the development of parallel software that takes advantage of high-bandwidth networking and storage to deliver volume rendering of time-varying data sets at interactive rates. We discuss our experiences with the software on the Origin 2000 class of supercomputers.
10:00
VRML for Visualization
Jim Johnson, SGI/Cray
VRML, the Virtual Reality Modeling Language, is heading for a browser near you. VRML promises a write-once, use-everywhere capability for visualizing the results of engineering and scientific calculations. The same results may be visualized on a multitude of platforms, locally or over the web, using familiar web browsers. The format and functionality are unchanged; only the performance and capacity vary. Simulation results can be packaged in a VRML-format file to be loaded by a browser. Alternatively, a Java applet can read an existing data format and inject the data into a running VRML-capable browser.
9A High Speed Data!
Chair: Hartmut Fichtel, DKRZ
2:00
High Performance Network Backup & Restore with HYPERtape
Aindrias Wall, MultiStream Systems Inc.
HYPERtape offers the fastest backup and restore for heterogeneous platforms, including Cray UNICOS and SGI IRIX as server platforms, and takes full advantage of their outstanding I/O performance. Overall, HYPERtape covers a wide variety of client platforms and is highly scaleable.
2:30
Functional Comparison of DMF and HPSS
Gerhard Rentschler, RUS
RUS is looking for a new data management solution for its supercomputing area. Different solutions are being evaluated, among them HPSS and DMF for IRIX. This talk will compare both products and point out their strengths and weaknesses. Experiences in a real-life environment are also presented.
3:00
The Rebirth of DMF on IRIX
Alan Powers, NAS
The Numerical Aerodynamic Simulation Facility (NAS) at NASA Ames Research Center is in the process of testing DMF 2.6 on a Power Challenge and an Origin 2000. These systems are connected to a STK 4400 silo using SCSI STK 9490 tape drives. A list of new features and differences will be covered. A simple benchmark will be used to compare the performance of IRIX DMF and UNICOS DMF.
9B Powering Science
Chair: Richard Shaginaw, BMSPRI
2:00
Material Science on Parallel Computers (T3E/900)
Andrew Canning, NERSC
Three of the most heavily used quantum methods in material science (Plane Wave, Density Functional Theory, and the Tight Binding approach) will be discussed, along with their implementation on a variety of different parallel machines, including the 544-processor Cray T3E at NERSC. The new parallel algorithms developed to run these codes efficiently on parallel machines will be presented. Applications to large problems in material science will also be presented.
2:30
Clustering T3Es for Metacomputing Applications
Michael Resch, HLRS
Having developed a library that allows running MPI applications on a cluster of MPPs, we have done some experiments with several applications. So far we have good results for a CFD application; but even more promising are the results for Monte Carlo simulations.
3:00
Synchronization Using Cray T3E Virtual Shared Memory
Miltos Grammatikakis, KFA
We consider mutual exclusion on the Cray T3E shared memory using various atomic operations and algorithms. Our current implementations, when compared to the Cray shmem_lock functions, indicate time improvements of at least two orders of magnitude for a 64-processor T3E/900. Our comparisons are based on both synthetic and actual code for concurrent priority queues. Software barrier performance is also briefly examined.
9C SGI/Cray Service Plans and Q & A Panel
Chair: Dan Drobnis, SDSC
2:00
SGI/Cray World-Wide Customer Service Plans
Bob Brooks, SGI/Cray
World-Wide Service at SGI has undergone many changes since the merger with Cray. This talk will describe those changes, provide a description of current service delivery strategies, and define plans and initiatives that will affect the delivery of service in the future.
2:30
SGI/Cray Q&A Panel
Moderator: Charlie Clark, SGI/Cray
Representatives of SGI/Cray Service, Development, and Support organizations will discuss issues and questions raised by the CUG Survey and at the OpsSIG Business Meeting, and field questions and comments from the floor. This is a traditional wrap-up to OpsSIG sessions, and often a lively dialog.
10A IRIX: The Long Road Ahead
Chair: Nicholas Cardo, SS-SSD
4:00
Cellular IRIX: Plans & Status
Gabriel Broner, SGI/Cray
Future operating systems developed by Cray and SGI for the supercomputer and server spaces will be based on Cellular IRIX technology. Cellular IRIX is an evolution of IRIX which incorporates supercomputer features existing on UNICOS and UNICOS/mk. Key features of Cellular IRIX are its increased fault containment and its scalability to thousands of processors. This talk will cover the architecture of Cellular IRIX and the contents and status of the first operating system releases using this technology.
4:30
Year 2000 Compliance for UNICOS & UNICOS/mk
Dennis Arason, SGI/Cray
This talk will provide information on the testing done to ensure Year 2000 compliance for UNICOS and UNICOS/mk systems. It will cover the platforms tested and what testing was done. It will also cover the products tested and open issues.
10B Performance and Evaluation
Chair: Michael Resch, RUS
4:00
Experiences with High-Performance I/O on Cray T3E
Ulrich Detert, KFA
No abstract available
4:30
Comparing T90 and T3E Performance on Optimized Fortran 90 and HPF Codes
Jay Boisseau, SDSC
The National Partnership for Advanced Computational Infrastructure (NPACI) provides access to a CRAY T90 and two CRAY T3Es (and other parallel computing platforms). As the reasons (and pressures) to migrate codes to parallel platforms increase, some users are interested in converting production Fortran codes to HPF to minimize development time. We compare single- and multi-CPU performance results for several optimized T90 codes to their HPF versions optimized for the T3E to illustrate the performance one might expect in migrating to HPF. We also provide information on tuning HPF codes for maximum performance on the CRAY T3E.
5:00
Industrial Applications on MPP Architectures
Jörg Stadler, debis Systemhaus GmbH
HWW (Höchstleistungsrechenzentrum für Wissenschaft und Wirtschaft GmbH) is a large German supercomputing facility that offers its computer resources to scientific as well as industrial users. This talk reports on the applications that industrial users run on the HWW equipment. An increasing demand for computer resources from commercial customers eventually led to the installation of an additional machine in March 1998. All of the demand is still exclusively met by vector machines, although HWW offers two large MPPs (Cray T3E and IBM SP2). These MPPs are still being evaluated by the industrial users. On the one hand, the users are interested in the MPPs because they know that MPPs offer advantages in terms of price/performance ratio, raw computer power, or main memory size. In addition, the main applications are available on MPPs, so testing and evaluation is straightforward. On the other hand, the MPP technology created its own breed of problems. Consider for example crash simulation; although this is a well-known industrial supercomputer application, it is still the area with the largest demand for computer resources. Naturally, several efforts have been made to run crash simulations on MPPs. These efforts have shown that just porting an application to a new architecture is sometimes not enough. More specifically, the numerical sensitivity of crash models led to several new difficulties. For example, dynamic load balancing, normally considered good programming practice on MPPs, gives rise to results that vary significantly from run to run. Furthermore, the results depend on the number of processors used. CFD users are also evaluating the use of MPPs. We have run some successful benchmarks and demonstrations (including a model with an 8 GB dataset) in that area. In summary, industrial users are having a close and concentrated look at MPPs right now, but they are not running their production calculations on these machines at this time.
10C Joint Session—Software Tools and User Services
Chair: Hans-Hermann Frese, ZIB
4:00
Supercomputing—Strategies and Applications at the High-Performance Computing Center, Stuttgart
Alfred Geiger, HLRS
High Performance Computing in Germany is undergoing a major transition from tens of middle-class centers to mainly four high-end centers. The center in Stuttgart, HLRS, is oriented towards engineering applications. It was therefore a natural step to enable synergetic effects in acquisition, operation, and usage by outsourcing the supercomputers into a joint venture with industrial partners.
The main application fields of the center in Stuttgart are CFD, reacting flows, structural mechanics, and electromagnetics. In most of these applications it could be proven that large-scale engineering simulations can scale very well to hundreds of nodes on scalar as well as on vector-based parallel systems. Examples of such simulations are:
• Reentry of space vehicles
• Combustion simulation in coal-fired power plants
• Crashworthiness simulations
• Radar cross-section
• Internal combustion engines
For industrial users, an optimal embedding of HPC into the development process is necessary. Most companies have development groups all over the world, making tools for cooperative working in HPC environments unavoidable if time to market is a critical issue. Examples of EC-funded projects in this area are presented.
There are problems that cannot be simulated on even the largest supercomputers in the world. Metacomputing scenarios, though not very efficient, are the only way in such cases. The results and techniques of such projects will be explained.
4:30
Customer Scenarios for Large Multiprocessor Environments
Fritz Ferstl, Genias Software GmbH
Genias Global Resource Director (GRD) was developed to meet the demands of large multiprocessor sites with several hundreds of CPUs and users. At the Army Research Laboratory (ARL), GRD is installed on several types of Cray and SGI machines.
The talk will describe how GRD
• integrated these machines into one environment, and enabled administrators to define global management policies. Policy decisions for individual machines are no longer necessary.
• ended the war between users. Users agreed that machines partly controlled by GRD are going to be GRD-only machines. Previously, resources were degraded for everyone when a few users "overused" the system.
• gave system administrators the benefit of having resource allocation done automatically, as well as being able to monitor resources at both the site and individual job level.
These goals could be reached because of GRD's policy enforcement capabilities, which are ground-breaking for a job management system: GRD maintains control over running jobs and controls their resource share by means of the Cray and SGI operating system ("dynamic scheduling"). Thus, computing shares can be defined in advance and fully enforced. The fair share mechanism cannot be compared to existing Cray features, because it is based on a far more complex paradigm.
GRD's policy capabilities include automated reactions to high-priority requirements, on-time termination for deadline jobs, weighting of individual machines with respect to different factors, a well-defined interface for manual override without policy changes, and fair share.
5:00
How CUG Members Support Their Users
Barbara Horner-Miller, ARSC
In April, a survey was sent to all CUG member sites concerning how each site performs the User Services task. The results of this survey will be presented.
Thursday, June 18, 1998
11A The Power of the Engine: Users and the O/S
Chair: Terry Jones, GRUMMAN
9:00
The Age-Old Question of How to Balance Batch and Interactive
Barry Sharp, BCS
The UNICOS system is designed with both batch and interactive workload requirements in mind. However, in practice, the vanilla UNICOS system memory scheduler struggles to adequately balance these two distinct workloads, even with its wealth of scheduler tuning parameters. This paper presents a simple modification to the kernel-level memory scheduler that works harmoniously with the vanilla system to simplify the process.
9:30
MISER: User Level Job Scheduler
Nawaf Bitar, SGI/Cray
Miser is a dynamic batch process scheduler that provides deterministic batch scheduling and increased system throughput without static partitioning of a system. Static partitioning allows resources to be targeted at specific classes of processes, thus allowing deterministic run times, but can lead to waste. Miser gives submitted batch jobs processing priority over the required resources needed to complete on schedule, but reduces waste by allowing idle resources to be used. The batch scheduler consists of a user process which schedules job resources, and kernel support for delivering resources to jobs on schedule. The combination allows flexibility while improving responsiveness and performance.
10:00
Serving a Demanding Client While Short on Resources
Manfred Stolle, ZIB
DMF on UNICOS, UNICOS/mk, and IRIX is well suited to handling a local file system, but it becomes rather poor in a distributed environment with files larger than the DMF-controlled file system. With a small DMF-controlled file system, a "file system full" error, which is not supposed to occur, can result from poor migration performance or from external files that are too large. A DMF-controlled file system with a long history shows problems concerning the fast and safe restoration of data located on damaged tapes. Old-style file attributes are disturbing in these situations. Solutions to these problems are shown.
11B Applications and Algorithms
Chair: Richard Shaginaw, BMSPRI
9:00
Application Roadmap
Greg Clifford, SGI/Cray
The SGI HPC computing environment will be advancing very rapidly in the next few years. This presentation will focus on how the SGI application environment will evolve and transition. The emphasis will be on solution areas (e.g., crash, external CFD) rather than on specific applications (e.g., MSC/NASTRAN, FLUENT).
9:30
Mathematical Methods for Mining in Massive Data Sets
Helene E. Kulsrud, CCR-P/IDA
With the advent of higher bandwidth and faster computers, distributed data sets in the petabyte range are being collected. The problem of obtaining information quickly from such data bases requires new and improved mathematical methods. Parallel computation and scaling issues are important areas of research. Techniques such as decision trees, vector-space methods, Bayesian and neural nets have been applied. A short description of some successful methods and the problems to which they have been applied will be presented. Methods which work effectively on PVP machines will be emphasized.
11C GigaRing Operations
Chair: Mike Brown, EPCC
9:00
GigaRing Configuring and Dumping
Michael Langer, SGI/Cray
A technical interchange regarding system configuration with multiple mainframes on the same GigaRing. The discussion will include SWS configuration options and administration issues, single-SWS support, and dumping of GigaRing-based systems.
9:30
Monitoring and Automatic Reboot of Cray GigaRing Systems
Birgit Naun, Thomas Plaga, KFA; Ralph Krotz, SGI/Cray
In the operatorless environment of a modern computer center, the automatic reboot of production systems in trouble is an urgent requirement.
This paper describes the distributed monitoring mechanism on the Cray mainframes, the corresponding system workstations (SWS), and the computer center's problem management database server. If this monitoring detects a malfunction (which could be any hardware or software problem), a set of utilities implemented on the SWS gathers as much information as possible for later problem analysis, reboots the failing component or the complete system if required, sends out email messages, and communicates status changes to the central database.
The automatic rebooting mechanism is designed to run on top of the Cray-supplied basic system operation commands, supports all GigaRing-based Cray systems (including T3E, T90, and J90se), and can be adapted easily to local information handling requirements.
13A Hauling Big Data
Chair: Hartmut Fichtel, DKRZ
2:00
Mass Storage at the NCSA: DMF and Convex UniTree
Michelle Butler, NCSA
The NCSA has been using UniTree since 1993 as the primary mass storage system for users' files and enterprise-wide backups. The rapid evolution and growth of the NCSA production supercomputing environment, currently 512 Origin 2000 processors, a Power Challenge Array, and an HP/Convex Exemplar, is putting increasingly critical requirements on the mass storage system. To meet this challenge, we are creating an environment which consists of a high-performance mass storage system using DMF running on an Origin 2000 and an enterprise-wide backup UniTree system on a recently installed HP/Convex Exemplar SPP-2000. This requires a transition from UniTree to DMF for our users and their data, approximately 15 TB. We will describe our experiences with this process using SGI/Cray tools for the migration, as well as providing some comparison of the capabilities of the two systems.
2:30
Towards Petabytes of High Performance Storage at Los Alamos
Gary Lee, LANL
The High Performance Storage System (HPSS) is currently deployed in the open and secure networks at Los Alamos National Laboratory. Users of the Accelerated Strategic Computing Initiative (ASCI) Blue Mountain MPP system and the Advanced Computing Laboratory's MPP system, both from SGI/Cray, access HPSS for their storage. We discuss our current HPSS configurations, how our MPP users access HPSS, and the performance between HPSS and the MPP systems. We will also discuss our projected storage and storage performance requirements for the next five to seven years.
3:00
Storage and Data Management—Big Data Solutions
Ken Hibbard, SGI/Cray
This talk summarizes the storage road map for the Origin series of systems. In addition to describing existing and new hardware and software capabilities, we will highlight Fibre Channel solutions, file system support for new topologies, and data management solutions that are necessary for managing terabyte and petabyte stores. We will conclude with a summary of recent performance achievements.
13B Secure & Controlled Computing Engines
Chair: Bonnie Hall, LANL
2:00
Securing the User's Work Environment
Nick Cardo, SS-SSD
High-performance computing at the Numerical Aerospace Simulation Facility at NASA Ames Research Center includes C90s, J90s, and Origin 2000s. Not only is it necessary to protect these systems from outside attacks, but also to provide a safe working environment on the systems. With the right tools, security anomalies in the user's work environment can be detected and corrected. Validating proper ownership of files against users' permissions will reduce the risk of inadvertent data compromise. The detection of extraneous directories and files hidden among user home directories is important for identifying potential compromises. The first runs of these utilities detected over 350,000 files with problems. With periodic scans, automated correction of problems takes only minutes. Tools for detecting these types of problems, as well as their development techniques, will be discussed with emphasis on consistency, portability, and efficiency for both UNICOS and IRIX.
2:30
The State of Security for UNICOS & IRIX
Jay McCauley, SGI/Cray
I will review the general security administration available for the UNICOS and IRIX operating systems. The latest methods of breaking into systems and networks will be identified and discussed. A detailed analysis of password file security and password protection methods will be covered. Resources available both in printed form and "on the net" will be listed. Examples will be provided. Recommendations for increasing site security will be discussed in a short question-and-answer session after the presentation of the paper.
3:00
IRIX Accounting, Limits, and UDB Functionality
Jay McCauley, SGI/Cray
This talk will discuss how IRIX is being extended in the areas of accounting and limits to enable it to perform effectively in a data center environment.
13C User Services
Chair: Leslie Southern, OSC
2:00
Coordinating User Support Across a Partnership: NPACI User Services
Jay Boisseau, SDSC
The National Partnership for Advanced Computational Infrastructure (NPACI) includes five resource partners that provide computing resources (including CRAY PVPs and T3Es) for over 4000 users. NPACI is striving to provide the highest caliber of user services by combining and leveraging the staff expertise of the resource partners. Coordinating the efforts of different sites to develop synergy and not suffer from increased overhead demands careful planning, flexible procedures, and effective tools. We present the plans, procedures, and tools developed by the NPACI Resources Working Group to offer consulting, training, documentation, and other user services to enable computational scientists to excel.
2:30
Cray Training Update
Bill Mannel, SGI/Cray
On May 2, 1998, Silicon Graphics Customer Education closed the Eagan, Minnesota training center. Future offerings for both IRIX- and UNICOS-based curricula will be available in other worldwide training centers or at customer sites. This talk will discuss past and future changes in Cray training.
3:00
Pushing/Dragging Users Towards Better Utilization
Guy Robinson, ARSC
This talk will describe ARSC experiences in the operation of MPP Cray systems. In particular, issues of user/center interaction will be considered with regard to:
• education in and use of the latest software tools and hardware features
• dealing with parallel experts and novices
• system configuration issues
• how to ensure the best use is made of the available resources by all users
• what to change in the system configuration and what/how to change users' practices
• ensuring users make full use of advanced computational methods, both in terms of system hardware/software developments and by exploiting suitable algorithms.
It is hoped that the talk will promote discussion of the need for centers to become more involved with users and how this can be achieved.
OPEN MEETINGS
4:00, 5:00, and 5:30
Please plan to attend the Open Meetings to discuss topics of interest to each Special Interest Group. The location of these meetings will be posted on the bulletin board.
Friday, June 19, 1998
15A IRIX: From Plans to Reality
Chair: Nick Cardo, SS-SSD
9:00
Getting It All Together
Daryl Grunau, LANL
At Los Alamos National Laboratory we have been operating two 256-processor clusters of Origin 2000s for over a year, recently expanding to 1280 processors. The primary programming model is SGI/MPI, with job sizes ranging up to 512 processors and routinely spanning machines. DFS is used for primary local storage, with HPSS targeted for archival storage. Free access to the nodes is restricted, and use of the resources is managed by Load Sharing Facility (Platform Computing). We have been active in pushing ahead the latest releases of the operating system, and currently have early experiences with IRIX 6.5 to report. The purpose of this paper will be to share the challenges, trials, and successes in getting it all to work together.
9:30
Integrating an Origin 2000 into a Cray Data Center
Chuck Keagle, BCS
This paper will present an overview of the Data Center configuration and discuss the application mix for an Origin 2000/Cray T916-256 Data Center. Major topics will include:
• Hardware Configuration—Origin 2000, Cray T916-256, Connectivity
• Configuration Control—OS Versions, Patches, Local Code
• User Maintenance—UID Management, User Account Management
• Operations—Remote console operation using IRIS console
• Performance Tuning—Interactive vs. Batch limits, Performance Co-Pilot
• File System Structure—SCSI, Fiber RAID, DMF, mkfs/mount options
• Application Mix—Scalar codes, Vector codes, CPU availability
10:00
Getting the Best Mileage out of Your Origin System
Jim Harrell, SGI/Cray
How to turn various kernel knobs, allocate node-local data properly, and use system profiling tools to get the best performance out of your Origin system.
15B Applications and Algorithms
Chair: Richard Shaginaw, BMSPRI
9:00
XVM–Extended Volume Management
Colin Ngam, SGI/Cray
Xtended Volume Management (XVM) is the next-generation successor to XLV. XVM provides extended functionality in the IRIX and Cellular IRIX operating system environments. XVM manages the following logical objects: Volumes, Subvolumes, Concats, Mirrors, Stripes, and Physical Slices. XVM provides transparent distributed access to all XVM objects in both the Enterprise (cluster) and HPC (Cellular) environments. This paper discusses the differences between XVM and XLV, between XVM and the UNICOS/UNICOSmk Logical Device Drivers, and the full set of XVM functionalities.
9:30
The Good, the Bad, and the Ugly Aspects of Installing New OS Releases
Barry Sharp, BCS
Many SGI/Cray sites do not have the luxury of having multiple like systems, one of which can serve the needs of validating a new OS release prior to installing it as their production system. The difficulty of doing this on a single system without endangering the running production system is explored, along with how Boeing has developed its processes over time to safeguard production while still being able to adequately QA the new OS release.
10:00
Origin Craylink Partitioning
Steve Whitney, SGI/Cray
Partitioning a machine involves dividing a single large machine into multiple machines, each with its own IP address, root disk, etc., with a very fast interconnect between them. A single partition can be brought down (S/W and H/W failures, controlled shutdown, etc.) without affecting the rest of the machines, thus providing higher availability for a partition than for a single large machine.
15C Visualization
Chair: L. Eric Greenwade, INEEL
9:00
CUG Video Theater
Supercomputers generate data that, via scientific visualization, can be transformed into pictures that provide a way for scientists to understand the information more fully and quickly. Although the primary purpose of visualization is to facilitate the discovery process, scientists are increasingly coming to rely on it to also present their research conclusions. This session showcases the latest in scientific/engineering visualization. It promises to be both educational and entertaining.
10:00
Visualization of 3-Dimensional Material Science Applications
L. Eric Greenwade, INEEL
The INEEL has a number of programs investigating the properties of materials under demanding environmental conditions. These are related to a number of problems in environmental, military, and industrial processing and reliability areas. This presentation will illustrate the results of some of these investigations and the new approaches used to convey the computational results to the researchers.
16 GENERAL SESSION
Parallel and Distributed Development and Simulation of Atmospheric Models
V. Mastrangelo and I. Mehilli, CNAM-Université; F. Schmidt, M. Weigele, J. Kaltenbach, A. Grohmann, and R. Kopetzky, University of Stuttgart
Parallel and distributed computing is a way to meet the increasing demand for engineering and computing power of scientific and technical simulation applications. To utilize these new paradigms, the authors of this paper are developing a service-based simulation environment which operates on various computers at different European locations. In this paper we intend to discuss possibilities and difficulties of this approach by referring to the example of the dispersion of air-carried particles. This example consists of different parts, where the air stream simulation (wind simulation) and the simulation of air-carried particle transport (transport simulation) form the computationally expensive core of the application. This is run on a CRAY-T3E near Paris. To provide distributed calculation modules as a service, various actions have to be taken. They include:
• Modularisation of the overall problem,
• Developing a strategy for distribution,
• Parallelisation and optimization of the computationally expensive parts,
• Encapsulation of modules as independent processes,
• Developing a strategy for the integration of competing services into a simulation system,
• Performing a set of parallel simulations by providing the services of the system with consistent data and managing the distribution of the results.
Solutions to most of the problems will be proposed. A hierarchy of parallelisation steps will be discussed in connection with a corresponding strategy for distribution. Communication of parts with high granularity takes place on the basis of CORBA mechanisms. A special Service Agent Layer (SAL) provides methods to convert modules or objects (with functionalities) into services available in a multi-user environment. Experiences with this architecture and the implementation of parts of it will be reported. Special emphasis is put on experiences including Cray computers in such a scenario.
Friday, June 19, 1998
An aerial view of Stuttgart.
Local Arrangements
Conference Information
Conference Site
KKL Stuttgart
Kultur- und Kongresszentrum Liederhalle
Berliner Platz 1-3
D-70174 Stuttgart
Phone: +49-(0)711-2027.800 and -2027.801
Fax: +49-(0)711-2027.860
Email: [email protected]
In cooperation with:
Hotel Maritim
Seidenstr. 34
D-70174 Stuttgart
Phone: +49-(0)711-942.0
Fax: +49-(0)711-942.1000
All sessions take place in the KKL. Lunches will be served in Saal Maritim on the ground floor in the adjacent Hotel Maritim.
Who May Attend?
CUG bylaws specify that employees of a CUG member (usually a facility or company using a Cray computer, identified by its CUG site code) and users of computing services provided by a CUG member may attend a CUG meeting. Visitors are also eligible to attend a CUG meeting, provided they have formal approval from a member of the CUG Board of Directors. The members of the Board of Directors are listed later in this booklet.
Conference Registration Location/Hours
Location: First floor “Ebene 2” in KKL.
Office Hours:
Sun, June 14: 3:00–5:00 p.m.
Mon–Thu, June 15–18: 8:00 a.m.–6:00 p.m.
Fri, June 19: 8:00 a.m.–1:00 p.m.
Badges and registration material may be obtained at these scheduled times. Note that ALL attendees must wear a conference badge when attending Stuttgart CUG Conference activities.
Call for Papers
There is a Call for Papers at the end of this brochure. To submit a paper for the Spring 1999 Cray User Group Conference in Minneapolis, Minnesota, USA, please complete and return the form as directed. All questions regarding the program should be directed to the Program Chair.
How to Contact Us
From June 13–19, 1998
Stuttgart CUG Conference
Grethe Knapp Christiansen
KKL
Berliner Platz 1-3
D-70174 Stuttgart
Germany
Phone: +49-(0)711-2027.800 or -801
Fax: +49-(0)711-2027.860
Email: [email protected]
After the Conference
Grethe Knapp Christiansen
Institut für Computeranwendungen II
Universität Stuttgart
Pfaffenwaldring 27
D-70569 Stuttgart
Germany
Phone: +49-(0)711-685.3759
Fax: +49-(0)711-685.3758
Email: [email protected]
Yet another beautiful Platz!!
On Site Facilities
Throughout the conference, our staff at the CUG Conference Office will be glad to assist you with any special requirements.
Messages
During the conference (June 15–19), incoming telephone calls will be received at the CUG Conference Office. Telephone: +49-(0)711-2027.800 and -2027.801; Fax: +49-(0)711-2027.860; Email: [email protected]. Messages will be posted near the registration desk. Private telephone calls may be made in the downstairs entrance hall of the KKL using coins or telephone cards.
Faxes
A fax machine will be available at the CUG Conference Office for conference-related business.
Fax machines are available at the participating hotels for non-conference business. Please check with your hotel for prices for national and international fax transmissions.
Email and Personal Computers
The Email room is located in Donau (4.3.14). For last-minute editing of papers and transparencies, a Macintosh and a PC are available.
Photocopying
A copy machine will be available for making a limited number of copies. If you plan to distribute copies of your presentation materials, please bring sufficient copies with you. For nearby copy services (to be paid in cash only), please see the KKL Conference Office.
Dining Services
The conference fee includes refreshments during breaks and luncheons Tuesday through Thursday at the Hotel Maritim. Breaks will also be provided on Friday. Breakfast is included in the hotel price. Lunch will be served in the Hotel Maritim in Saal Maritim on the ground floor. On Monday, newcomers are invited to a Newcomers' Lunch in the Hotel Maritim. Please sign up for this on the Conference Registration Form. On Monday evening, SGI/Cray Research will host a reception for all CUG attendees. On Wednesday evening, conference attendees are cordially invited by debis Systemhaus GmbH to a magnificent evening in the wonderful Mercedes-Benz Museum in Stuttgart.
Special accommodations and/or dietary requirements should be noted on the Conference Registration Form and/or on the Hotel Reservation Form under "Special requirements."
Hotel Information
Conference Hotel
Maritim Hotel Stuttgart
Seidenstr. 34
70174 Stuttgart
Phone: +49-711-942.0
Fax: +49-711-942.1000
Hotel Maritim is situated adjacent to the conference center, KKL.
Other Hotel Options
A limited number of rooms have been reserved at these two hotels at a special weekend rate for the Saturday prior to the conference and the Friday after the conference, in case you should wish to extend your visit in Stuttgart. Rooms are available on a first-come, first-served basis. Special arrangements for families and students are available directly from Stuttgart-Marketing GmbH.
Best Western Hotel Ketterer
Marienstr. 3
70178 Stuttgart
Phone: +49-711-2039.0
Fax: +49-711-2039.600

Hotel Royal
Sophienstr. 35
70178 Stuttgart
Phone: +49-711-62505.0
Fax: +49-711-628809
Travel Information
Passports
A valid passport is required for travel to Europe and between countries once here. Please check with your travel agent or your local German consulate to determine if you need a visa and for information on how to obtain one. For E.C. residents a national ID is sufficient.
Transportation
Stuttgart is well equipped with official transport services. In the city, these are underground city railways, trams, and buses. Your conference registration packet includes a three-day ticket for the Stuttgarter Straßenbahn and buses (VVS). This ticket is valid for one person for three consecutive days, including the first day of use. You may buy additional tickets in your hotel. Additional information about the VVS is included in your conference registration packet.
We suggest taking a taxi from your hotel to the railway station (Hauptbahnhof) and then taking the suburban railways (S2 and S3) to the Stuttgart airport. Trains leave approximately every 15 minutes and the trip takes approximately 27 minutes. You buy your ticket for the trip to the airport at an automatic ticket machine in the railway station; it costs DM 4,60. The taxi fare from downtown Stuttgart to the airport is approximately DM 50,00.
Currency
German currency is the Deutsche Mark (DM). Currency can be changed and traveler's checks can be cashed in banks, which are generally open Monday to Friday from 9:00 a.m. to 12:00 p.m. and 2:00 p.m. to 4:00 p.m., and on Thursday until 5:30 p.m. Exchange offices are open every day at all major railway stations, at airports, and at border crossing points.
Voltage
The standard voltage in Germany is 230 volts, 50 cycles. Bring an adapter plug for use in electrical outlets.
Smoking
Smoking is not allowed in conference locations (except outside).
Special Assistance
Session rooms for the Stuttgart CUG Conference are located on several floors in the KKL and are accessible by stairs or elevators. If you need special assistance, please come to the Conference Office, where our staff will be pleased to assist you.
Climate
Stuttgart is situated in the south of Germany and the climate is semicontinental, making it hard to tell what the weather will be like. Although the month of June is usually a nice summer month with sunny weather and comfortable temperatures of 18 to 24°C (65–75°F), it may also be rainy and "chilly." To be safe, we suggest that you bring an umbrella.
Monday, June 15
Newcomers' Luncheon
First-time CUG participants are invited to the Newcomers' Luncheon. At this event, newcomers can meet the CUG Board of Directors, the Special Interest Group Chairs, and executives from SGI/Cray Research. This will also be a chance to hear a very brief presentation of "What Is CUG?". Please be sure to sign up for this luncheon at the Conference Office.
SGI/Cray Research Reception
All conference participants and their guests are invited by SGI/Cray Research to attend a reception on Monday, June 15, from 7:00–10:00 p.m. in the Old Riding Room of the Maritim Hotel.
Wednesday, June 17
CUG Night Out
The traditional CUG night out will be held Wednesday, June 17, 1998. Our conference partner, debis Systemhaus GmbH, has invited all participants and their guests to the Mercedes-Benz Museum in Untertürkheim, the traditional home of the Mercedes-Benz factory. Here we shall dine and stroll around the Mercedes-Benz Oldtimers in admiration of the "good old days." We are very happy for this invitation, which will be a highlight of the Stuttgart CUG Conference. Transportation will be provided to and from the museum by debis.
Additional guests must be noted on your Conference Registration Form for this event.
Site Tours
Tours of the Cray computer facilities at RUS can be arranged on short notice during the conference. Please come to the Conference Office to make arrangements.
Accompanying Persons Program
Within the registration area at KKL, a counter with tourist information will be arranged where you may collect information about Stuttgart and Baden-Württemberg.
Why not stroll in the city just to get acquainted and see Königstrasse, the Palace Gardens, the State's Theaters, Schlossplatz, and the Marketplace; shop at Calwer Passage; or just relax and try some Maultaschen and a glass of wine.
• The State Gallery in the center of Stuttgart, with its impressive Picasso collection, is closed on Monday, so you may want to plan this for some other day during the week.
• The Wilhelma, Germany's only zoological and botanical garden, is situated near the river Neckar, also not far from the city.
• The city of Ludwigsburg, with its castle known as the "Swabian Versailles" and the famous "Blooming Baroque," the castle's park (20 min. by public transportation).
• The city of Esslingen, with its well-preserved medieval city center and its old town hall with an impressive Renaissance facade and delightful carillon (20 min. by public transportation).
Social Events
The State Gallery
CUG Board of Directors
President
Gary Jensen
National Center for Supercomputing Applications
152 Computing Applications Building
605 East Springfield Avenue
Champaign, IL 61820 USA
Phone: (1-217) 333-1189
Fax: (1-217)
[email protected]

Vice-President
Sam Milosevich
Eli Lilly and Company
Computational Science Apps, MC499
Lilly Corporate Center
Indianapolis, IN 46285 USA
Phone: (1-317) 276-9118
Fax: (1-317) 277-3565
[email protected]

Secretary
Gunter Georgi
Northrop Grumman
Data Systems and Services Division
73 Irma Avenue
Port Washington, NY 11050 USA
Phone: (1-516) 883-2336
Fax: (1-516)
[email protected]

Treasurer
Barbara Horner-Miller
Arctic Region Supercomputing Center
910 Yukon Drive, Suite 108, P.O. Box 756020
Fairbanks, AK 99775-6020 USA
Phone: (1-907) 474-5409
Fax: (1-907)
[email protected]

Director—Asia/Pacific
Shigeki Miyaji
Chiba University
Dept. of Physics
1-33, Yayoi-cho, Inage-ku
263 Chiba, Japan
Phone: (81-43) 290-3719
Fax: (81-43)
[email protected]

Director—Europe
Walter Wehinger
Rechenzentrum Universitaet Stuttgart
Allmandring 30
D-70550 Stuttgart
Germany
Phone: (49-711) 685-2513
Fax: (49-711)
[email protected]

Director—Americas
Barry Sharp
Boeing Shared Services Group
Engineering Operating Systems
MC 7J-04
P.O. Box 3707
Seattle, WA 98124-3707 USA
Phone: (1-425) 865-6411
Fax: (1-425)
[email protected]

Past President
Claude Lecoeuvre
Commissariat a l'Energie Atomique / Ile de France
DRIF/DP2I/EC
Centre de Bruyeres-le-Chatel, BP n 12
F-91680 Bruyeres-le-Chatel, France
Phone: (33-1) 45 95 61 85
Fax: (33-1) 69 26 70
[email protected]
Contacts
Special Interest Groups
Applications and Algorithms
Chair: Richard Shaginaw
Bristol-Myers Squibb Pharmaceutical Research Institute
P.O. Box 4000, Mail Stop J22-02
Princeton, NJ 08543-4000 USA
Phone: (1-609) 252-5184
Fax: (1-609)
[email protected]
Deputy: Larry C. Eversole
Jet Propulsion Laboratory
California Institute of Technology
4800 Oak Grove Drive, MS 126-147
Pasadena, CA 91109-8099 USA
Phone: (1-818) 354-2786
Fax: (1-818) 393-1187
[email protected]
Graphics
Chair: Eric Greenwade
INEEL-Lockheed Martin Idaho Technologies Co.
P.O. Box 1625, MS-3605
Idaho Falls, ID 83415 USA
Phone: (1-208) 526-1276
Fax: (1-208) 526-4017
[email protected]
SGI/Cray Contact: Cal Kirchhof
Visualization Program Manager
655F Lone Oak Drive
Eagan, MN 55121 USA
Phone: (1-612) 683-5245
Fax: (1-612)
[email protected]
IRIX
Chair: Nicholas Cardo
Sterling Software, Inc.
NASA Ames Research Center
MS 258-6
Moffett Field, CA 94035-1000 USA
Phone: (1-415) 604-4754
Fax: (1-415)
[email protected]
J90
Chair: Jean Shuler
Lawrence Livermore National Laboratory
Livermore Computing
P.O. Box 808, L-67
Livermore, CA 94552 USA
Phone: (1-510) 423-1909
Fax: (1-510) 422-0592
[email protected]
SGI/Cray Contact: Michelle Webster
SGI/Cray Research
Customer Service
655F Lone Oak Drive
Eagan, MN 55121 USA
Phone: (1-612) 683-5301
Fax: (1-612)
[email protected]
Mass Storage Systems
Chair: Robert J. Silvia
North Carolina Supercomputing Center
3021 Cornwallis Road
P.O. Box 12889
Research Triangle Park, NC 27709 USA
Phone: (1-919) 248-1132
Fax: (1-919)
[email protected]
Deputy: Helene E. Kulsrud
Institute for Defense Analyses
CCR-P/IDA
Thanet Road
Princeton, NJ 08540 USA
Phone: (1-609) 279-6243
Fax: (1-609)
[email protected]
European Deputy: Hartmut Fichtel
Deutsches Klimarechenzentrum GmbH
Systems Software
Bundesstrasse 55
D-20146 Hamburg 13, Germany
Phone: (49-40) 41-173-220
Fax: (49-40)
[email protected]
Networking
Chair: Hans Mandt
Boeing Advanced Systems Laboratory
P.O. Box 24346, MS 7L-48
Seattle, WA 98124-2207 USA
Phone: (1-206) 865-3252
Fax: (1-206) 865-2965
[email protected]
European Deputy: Dieter Raith
Rechenzentrum der Universitaet Stuttgart
Systems LAN
Allmandring 30
D-70550 Stuttgart, Germany
Phone: (49-711) 685-4516
Fax: (49-711)
[email protected]
SGI/Cray Contact: Michael Langer
Manager, I/O Technical Computing
SGI/Cray Research
655F Lone Oak Drive
Eagan, MN 55121 USA
Phone: (1-612)
[email protected]
Operating Systems
Chair: Terry Jones
Northrop Grumman Data Systems and Services
Bldg 1001, Room 101
Stennis Space Center, MS 39522 USA
Phone: (1-601) 688-5289
Fax: (1-601) 689-0400
[email protected]
Deputy: Virginia Sotler
United States Army Corps of Engineers
Waterways Experiment Station
3909 Halls Ferry Road, MS CEWES-IM-H
Vicksburg, MS 39180-6199 USA
Phone: (1-601) 634-4418
Fax: (1-601) 634-2301
[email protected]
Operations
Chair: Daniel Drobnis
San Diego Supercomputing Center
P.O. Box 85608
SDSC-103
San Diego, CA 92186-5608 USA
Phone: (1-619) 534-5000
Fax: (1-619) 534-5152
[email protected]
Deputy: Virginia Bedford
Arctic Region Supercomputing Center
University of Alaska
P.O. Box 756020
Fairbanks, AK 99775-6020 USA
Phone: (1-907) 474-5426
Fax: (1-907) 474-5494
[email protected]
European Deputy: Michael Brown
Edinburgh Parallel Computing Centre
University of Edinburgh, JCMB
King's Buildings, Mayfield Road
Edinburgh, Scotland EH9 3JZ UK
Phone: (44-131) 650 5031
Fax: (44-131) 650 6555
[email protected]
Performance and Evaluation
Chair: Jeff Kuehn
National Center for Atmospheric Research
Scientific Computing Division
P.O. Box 3000
Boulder, CO 80307-3000 USA
Phone: (1-303) 497-1311
Fax: (1-303) 497-1804
[email protected]
Deputy (Americas): Bruce Kelly
Lawrence Livermore National Laboratory
L-73
P.O. Box 808
Livermore, CA 94551 USA
Phone: (1-510)
[email protected]
Deputy (Europe): Michael Resch
High Performance Computing Center Stuttgart
Allmandring 30
D-70550 Stuttgart, Germany
Phone: (49-711) 6855834
Fax: (49-711)
[email protected]
SGI/Cray Contact: Jeff Brooks
SGI/Cray Research
Software Development
655F Lone Oak Drive
Eagan, MN 55121 USA
Phone: (1-612)
[email protected]
Security
Chair: Bonnie Hall
Los Alamos National Laboratory
Group CIC-3, MS B265
Los Alamos, NM 87545 USA
Phone: (1-505) 667-0215
Fax: (1-505)
[email protected]
SGI/Cray Contact: Jim Grindle
SGI/Cray Research
Software Development
655F Lone Oak Drive
Eagan, MN 55121 USA
Phone: (1-612) 683-5599
Fax: (1-612)
[email protected]
Software Tools
Chair: Hans-Hermann Frese
Konrad-Zuse-Zentrum für Informationstechnik Berlin
Supercomputing Department
Takustrasse 7
D-14195 Berlin-Dahlem, Germany
Phone: (49-30) 84185-145
Fax: (49-30)
[email protected]
Deputy: Victor Hazlewood
San Diego Supercomputer Center
P.O. Box 85608
SDSC-0505
San Diego, CA 92186-9784 USA
Phone: (1-619) 534-5115
Fax: (1-619)
[email protected]
SGI/Cray Contact: Bill Harrod
SGI/Cray Research
Software Development
655F Lone Oak Drive
Eagan, MN 55121 USA
Phone: (1-612) 683-5249
Fax: (1-612)
[email protected]
User Services
Chair: Barbara Horner-Miller
Arctic Region Supercomputing Center
910 Yukon Drive, Suite 108
P.O. Box 756020
Fairbanks, AK 99775-6020 USA
Phone: (1-907) 474-5409
Fax: (1-907)
[email protected]
Deputy: Leslie Southern
Ohio Supercomputer Center
User Services
1224 Kinnear Road
Columbus, OH 43212-1163 USA
Phone: (1-614) 292-9248
Fax: (1-614)
[email protected]
SGI/Cray Contact: Mike Sand
SGI/Cray Research
Application Services
890 Industrial Blvd.
Chippewa Falls, WI 54729 USA
Phone: (1-715) 726-5109
Fax: (1-715)
[email protected]
Program Steering Committee
Chair: Sam Milosevich
Eli Lilly and Company
Computational Science Apps, MC499
Lilly Corporate Center
Indianapolis, IN 46285 USA
Phone: (1-317) 276-9118
Fax: (1-317) 277-3565
[email protected]
Mary Kay Bunde
SGI/Cray Research
Software Development
655F Lone Oak Drive
Eagan, MN 55121 USA
Phone: (1-612) 683-5655
Fax: (1-612) 683-5098
[email protected]
Eric Greenwade
INEEL-Lockheed Martin Idaho Technologies Co.
P.O. Box 1625, MS-3605
Idaho Falls, ID 83415 USA
Phone: (1-208) 526-1276
Fax: (1-208) 526-4017
[email protected]
Bonnie Hall
Los Alamos National Laboratory
Group CIC-3, MS B265
Los Alamos, NM 87545 USA
Phone: (1-505) 667-0215
Fax: (1-505)
[email protected]
Barbara Horner-Miller
Arctic Region Supercomputing Center
910 Yukon Drive, Suite 108
P.O. Box 756020
Fairbanks, AK 99775-6020 USA
Phone: (1-907) 474-5409
Fax: (1-907)
[email protected]
Helene Kulsrud
Institute for Defense Analyses
CCR-P/IDA
Thanet Road
Princeton, NJ 08540 USA
Phone: (1-609) 279-6243
Fax: (1-609)
[email protected]
Conference Chair
Walter Wehinger
Rechenzentrum der Universitaet Stuttgart
Allmandring 30
D-70550 Stuttgart, Germany
Phone: (49-711) 685-2513
Fax: (49-711)
[email protected]
CUG Office
Bob Winget
2911 Knoll Road
Shepherdstown, WV 25443 USA
Phone: (1-304) 263-1756
Fax: (1-304)
[email protected]
We would like to make you aware of the rich historic background of the State of Baden-Württemberg that is captured by our conference room names.
Hohenzollern is a crown atop a forested hill, its many turrets reminiscent of knights in shining armour, fair princesses, and kings upon their thrones. If you like the fairy-tale image of a castle, Hohenzollern should be on your itinerary.
Monrepos: An enormous Baroque palace built at the beginning of the 18th century by the Duke of Württemberg, dreaming of another Versailles, was the start of Ludwigsburg, a town whose artificial character is emphasized by its orthogonal grid of dead-end streets. To the northwest of Ludwigsburg, nature and culture meet in perfect harmony at Schloß Monrepos. Built in 1767 on a site bordering a small lake, this gracious Rococo palace encircles a beautiful rotunda.
Solitude is located on a wooded elevation west of the city. Despite empty state coffers, Duke Karl Eugen gave the green light to build Schloß Solitude. He wanted brilliance, not quality. Therefore, the estate had to be restored joint by joint from 1975 to 1983 so that it could continue to display its breathtaking effect.
Donau, or Fluvius Danuvius by its Roman name (German Donau, Slovak Dunaj, Serbo-Croatian and Bulgarian Dunav, Russian Dunay), is Europe's second longest and most powerful river. Rising in the Black Forest mountains of Germany, it flows for approximately 2850 km through the southeastern portion of the continent to the Black Sea.
The Neckar river unites the landscapes of Württemberg and Baden on its way from the Black Forest to Mannheim, where it joins the Rhine river. Downstream from Bad Wimpfen, the Neckar, cutting once more through the sandstone massif of the Odenwald, runs through an area of wooded hills, many of them crowned by castles ... performing all of these efforts only to meet a true love—old Heidelberg.
Rhein: The Rhine river, 1320 km long, flows through four different countries and as such has often been a source of controversy between neighbors. At the same time it has always been a unique highway for the exchange of commercial, intellectual, artistic, and religious ideas.
Schiller: Marbach am Neckar, a small town, cultivates the memory of the poet Friedrich Schiller (1759–1805), its most illustrious son. The huge Schiller-Nationalmuseum, rising like a castle above the valley, displays portraits, casts, engravings, and manuscripts. A visit to the writer's birthplace completes the pilgrimage.
Notes on the Conference Room Names
Call for Papers
Spring 1999 CUG
May 1999, Minneapolis, MN USA
Deadline: November 15, 1998

Title of Paper
Abstract (two or three sentences)
Name
Organization
CUG Site Code (Mandatory)
Address
City
State/Province
Mail/Zip Code
Country
Daytime Phone: (Country code-city/area code) number
Fax: (Country code-city/area code) number
Electronic Mail Address
Person submitting paper's Email address

Audio/Visual Requirements
❑ Video 1/2″
❑ Video Projector to attach to your laptop
❑ Overhead projector
❑ 35-mm slide projector
❑ Video 3/4″
❑ Other? _____________________________

Suggested Session
❑ General Session
❑ Mass Storage Systems
❑ Security
❑ Applications and Algorithms
❑ Networking
❑ Software Tools
❑ Graphics
❑ Operating Systems
❑ User Services
❑ IRIX
❑ Operations
❑ Tutorial
❑ J90
❑ Performance and Evaluation
❑ Joint Session (list suggested sessions) _____________________________
❑ Other _____________________________

Please list any co-authors' names and Email addresses.

Please check the CUG home page for guidelines for preparing and submitting your paper: www.cug.org

You will be contacted by your Session Organizer to notify you of acceptance or rejection of your paper. If you do not hear from your Session Organizer by January 15, 1999, then please contact Sam Milosevich, Program Steering Committee Chair, at the address below.

Return Address:
Sam Milosevich
Eli Lilly and Company
Computational Science Apps, MC499
Lilly Corporate Center, Drop 0715
Indianapolis, IN 46285-1513 USA
Phone: (1-317) 276-9118
Fax: (1-317) 277-3565
[email protected]