
Interactive HPC-Enabled LVC S&T Experimentation Environments

Jason Santiago, Rodney Smith, Jian Li, and Damon Priest
C4ISR On-The-Move, US Army Communications-Electronics Research, Development and Engineering Center (CERDEC), Fort Monmouth, NJ
[email protected]

Abstract

As the Army's operational paradigm shifts toward Network-Centric Warfare (NCW), the use of high performance computing assets to aid experimentation in Live, Virtual, and Constructive (LVC) environments has become increasingly valuable. Live test and evaluation of large, complex Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) Systems-of-Systems (SoS) is becoming impracticable due to cost and component availability. An HPC-enabled LVC science and technology (S&T) environment that helps stimulate, scale, and analyze these technologies is a solution that C4ISR On-The-Move (OTM) leveraged heavily during experiments conducted for Event 2009 (E09).

1. Introduction

The mission of the Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) On-The-Move (OTM) Modeling and Simulation (M&S) Team is to provide the US Army a cost-effective, relevant, and scalable Live, Virtual, and Constructive (LVC) environment that can be employed to aid the assessment of emerging C4ISR Systems-of-Systems (SoS) technologies. It does so by using simulations of varying fidelity to stimulate live experimentation involving integrated C4ISR SoS in a Network-Centric Warfare (NCW) context.

The scales at which these complex SoS must be examined pose challenges for experiment designers. Availability of emerging C4ISR SoS components, especially hardware, is generally low, and the few developmental items in existence are often in high demand. Further, most of these items are not "ruggedized" and cannot stand up to the rigors of field experimentation. Even if these components were available in large quantities, the cost of conducting live experimentation at the necessary scale would quickly become prohibitive. High performance computing (HPC)-based virtual and constructive simulations are tools that may help overcome these challenges.

Through the use of the High Performance Computing Army Laboratory for Live/Virtual/Constructive Experimentation (HALLE) Dedicated HPC Project Investment (DHPI), awarded to the Communications-Electronics Research, Development and Engineering Center (CERDEC) by the Department of Defense (DoD) High Performance Computing Modernization Program (HPCMP), C4ISR OTM has been able to explore the non-traditional use of HPC in stimulating the live experimentation environment.

2. HALLE Description

HALLE (Figure 1) is an IBM P5-575 cluster-based supercomputer with 128 PowerPC processor cores spread over 8 nodes (16 cores per node), yielding a theoretical peak performance of 970 GFLOPS. Each node contains 64GB of RAM, for a total of 512GB. All systems are connected via IBM-proprietary high-speed Fibre Channel switches (6GB/sec) and a Cisco gigabit switch. Figure 2 shows the hardware wiring and layout of the HALLE DHPI. HALLE is capable of running either IBM's AIX or Linux (Red Hat or SUSE) operating systems. In the configuration employed during E09, four nodes ran Red Hat Enterprise Linux 5 for the One Semi-Automated Forces (OneSAF) constructive simulation application, while the other four ran AIX to apply the cluster's computing power to computationally intensive data reduction and analysis applications, such as the System-of-Systems Echelon Modeler (SoSEM), discussed in Section 7.
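As a quick consistency check on the quoted peak (assuming the 1.9GHz variant of the P5-575 and the POWER5 core's four floating-point operations per cycle from its two fused multiply-add units, neither of which is stated above):

\[ 128\ \text{cores} \times 1.9\ \text{GHz} \times 4\ \tfrac{\text{FLOP}}{\text{cycle}} \approx 973\ \text{GFLOPS}, \]

in line with the stated 970 GFLOPS theoretical peak.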


Figure 1. HALLE DHPI

Figure 2. HALLE hardware wiring

Based on lessons learned and experience gained from C4ISR OTM Event 2008 (E08), several hardware changes were implemented to improve performance for the applications running on HALLE. The number of nodes dedicated to the OneSAF application was increased from two to four, providing the additional computational power needed to maximize OneSAF performance. Hardware changes also included installing two additional 4-port network interface cards in each node running OneSAF. This addition was made because, during E08, the simulation could not provide the required level of interaction due to bottlenecks associated with the limited number of network interfaces available to the OneSAF application. The E08 configuration used multiple instances of OneSAF on each node, interconnected by virtual switches and sharing two physical network interfaces for all network traffic. The upgrade increased the number of network interfaces on each of the four OneSAF nodes from four to twelve, allowing each OneSAF instance a dedicated physical interface for heavy, delay-sensitive internal OneSAF interactions. The upgrade allowed OneSAF to function in a distributed manner to support E09's large-scale simulation scenarios.

3. HPC-based Use of OneSAF v2.1

OneSAF v2.1 Description and Porting. OneSAF is a government-off-the-shelf (GOTS), Computer-Generated Forces (CGF) simulation suite. OneSAF is a "composable" simulation toolkit that allows multiple Army domains—Advanced Concepts and Requirements (ACR), Research, Development and Acquisition (RDA), and Training, Exercises and Military Operations (TEMO)—to modify, create, and mold the simulation to meet their specific requirements. Simulations of this type are often referred to as constructive simulations. A constructive simulation toolkit such as OneSAF allows users to represent and control large numbers of individual entities within a given force structure from a user-friendly graphical user interface (GUI). The ability to represent large numbers of Army units accurately and realistically is critical when training Army battle staffs and testing new battle-command systems. During E09, OneSAF, running on HALLE, represented the positions and behaviors of approximately 3,300 moving entities (roughly a brigade combat team (BCT)) that stimulated the live battle command applications under experimentation, including BCS, MCS, AFATDS, DCGS-A, JCR, and FBCB2. OneSAF also provided the positions and movements of two virtual platoons operating in concert with a live platoon. The execution of fire missions in the HALLE-based BCT scenario, in support of the live battle command systems, was made possible through the OneSAF C2 Adapter, a Joint Variable Message Format (JVMF) message gateway, and the translation service portion of OneSAF.

Porting OneSAF to the IBM P5-575 PowerPC architecture was a complex and challenging undertaking. Significant differences exist between the native Intel x86 architecture for which OneSAF was written and HALLE's IBM PowerPC architecture. Through prior partnerships and support from the HPCMP Productivity Enhancement and Technology Transfer and Training (PETTT) Program, C4ISR OTM enlisted the aid of SAIC to port the OneSAF v2.1 software onto the HALLE DHPI. SAIC had ported OneSAF v1.5.1 with great success for E08 and was familiar with the difficulties and challenges the OneSAF v2.1 porting process would present. Building on E08 lessons learned, SAIC ported OneSAF v2.1 in less time and with less difficulty than in the prior year.

OneSAF Terrain Challenges. Mobility of constructive entities plays a critical part in realistically executing tactical missions. The time required to move an individual entity, platoon, company, battalion, or brigade can have a dramatic impact on how accurately a simulated mission executes. The mobility models included with OneSAF can accurately represent the time it takes a given entity or unit to move from point to point over the terrain. For interactions across the LVC environments to occur as realistically as possible, the terrain across all platforms and software systems must be correlated; terrain correlation refers to the correct alignment of all terrain features. Correlating the many different systems was a difficult challenge. OneSAF ships with four default terrain files, none of which covers the Fort Dix, NJ area required for the experiment. Further, the OneSAF simulation software uses its own terrain format, the OneSAF Objective Terrain Format (OTF), which is difficult to produce; very few vendors are capable of creating it. For E09, terrain development required partnering with one of the leaders in the terrain industry to produce what was required for successful integration and coupling of the LVC battle space. Correlated OneSAF OTF, CTDB, and OpenFlight terrain were successfully generated and used in the various simulation applications to construct a seamless LVC environment.

Although the OneSAF software baseline was ported with greater success this year than the prior year, a new challenge was converting the OTF from little-endian to big-endian byte ordering. Endianness refers to the order in which a processor reads the bytes of a multi-byte value from storage. The conversion was necessary because of the difference in processor architectures: the OneSAF software and terrain format were written for little-endian processing, so for the OTF to work on HALLE it had to be converted to big-endian. This process was, by far, the most time-consuming and difficult to execute. The changes from OneSAF v1.5.1 to OneSAF v2.1 were large enough to prevent reuse of the previous year's conversion process. After many attempts, SAIC produced the final big-endian OTF terrain used for E09.
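To illustrate the nature of the conversion, the sketch below re-encodes a single hypothetical terrain record from little-endian to big-endian using Python's struct module. The record layout (a 4-byte integer ID followed by three 8-byte double coordinates) is invented for illustration; the actual OTF structures are far more complex.

```python
import struct

def swap_terrain_record(buf: bytes) -> bytes:
    """Re-encode one hypothetical terrain record from little- to big-endian.

    Assumed layout (for illustration only): a 4-byte integer record ID
    followed by three 8-byte double-precision coordinates.
    """
    rec_id, x, y, z = struct.unpack("<i3d", buf)   # '<' = read little-endian
    return struct.pack(">i3d", rec_id, x, y, z)    # '>' = write big-endian

# A record as an x86 (little-endian) host would have written it:
little = struct.pack("<i3d", 42, 1.0, 2.0, 3.0)
big = swap_terrain_record(little)
assert struct.unpack(">i3d", big) == (42, 1.0, 2.0, 3.0)
```

The difficulty described above lies not in the byte swap itself but in knowing where every multi-byte field sits in the file, which is why the format changes between OneSAF versions broke the prior year's conversion process.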

4. OneSAF v2.1 and HALLE

For OneSAF to function appropriately on HALLE, several modifications had to be made to the OneSAF source code. The key changes affecting the use of OneSAF on HALLE were made to the Distributed Interactive Simulation (DIS) Protocol Data Unit (PDU) code. These included modifying the Time-to-Live (TTL) handling for DIS PDUs, allowing control of how far through the network the DIS traffic would travel. Modification of the Entity State PDU (ESPDU) marking field was also required, to identify the entities within the scenario by Unit Reference Number (URN). This modification was required by the filtering rule defined in the C4ISR OTM-developed C4ISR Information Management Service (CIMS) bridge, which served as one of the messaging gateways between the simulation and live tactical domains.

OneSAF was designed to function in a distributed computing environment; therefore, for OneSAF to work optimally on HALLE, multiple instances of OneSAF were required to create a distributed computing architecture capable of utilizing HPC parallel-processing power. To run multiple instances concurrently on each HALLE node, source code changes were made to the OneSAF network interface card (NIC) binding code. OneSAF uses the MAC address of the NIC to generate a Unique Identification (UID) for objects created at run-time, and the modifications made it possible to run the required number of instances on each node.
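The OneSAF source changes themselves are not reproduced here, but both modifications are easy to sketch in spirit. In DIS, the ESPDU marking field is 12 bytes (a 1-byte character-set identifier followed by 11 characters), and multicast scope is commonly bounded with the IP time-to-live hop count on the sending socket. A minimal Python sketch of both ideas (the URN value and hop count are hypothetical):

```python
import socket
import struct

def encode_marking(urn: int) -> bytes:
    """Pack a URN into the 12-byte DIS ESPDU marking field:
    a 1-byte character-set ID (1 = ASCII) plus 11 ASCII characters."""
    text = str(urn).encode("ascii")[:11].ljust(11, b"\x00")
    return struct.pack("B11s", 1, text)

# Bound how far multicast DIS traffic propagates by limiting the
# IP hop count on the sending socket (4 hops here, chosen arbitrarily).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 4)

marking = encode_marking(94210)  # hypothetical URN
assert len(marking) == 12
```

Tagging each entity's marking field with its URN is what lets a downstream filter such as the CIMS bridge decide, per entity, what crosses into the live tactical domain.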

Figure 3. HALLE OneSAF configuration

HALLE was operated with 32 instances of OneSAF instantiated over four nodes: 30 Simulation Cores, one Management Control Tool (MCT), and one Interoperability Control Tool (ICT), as shown in Figure 3. Each HALLE node contained 64GB of RAM and 16 PowerPC processor cores. The OneSAF scenario consisted of approximately 3,300 entities, roughly a BCT-sized element. HALLE is capable of supporting larger entity counts if needed, but for E09 it was constrained to the numbers required to meet experimentation objectives. Throughout E09, the computing power of HALLE was used to stimulate the live environment by sustaining a high level of interaction among the 3,300 simulated entities for long periods of time. This prolonged interaction would have been difficult to achieve using only non-HPC assets.
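The deployment arithmetic is straightforward: 32 instances across four 16-core nodes works out to eight instances, and thus two of a node's 16 cores per instance, on each node. A toy sketch of such an assignment (the node names and the round-robin placement are assumptions for illustration, not the event's actual launch configuration):

```python
from itertools import cycle

NODES = [f"halle{n}" for n in range(1, 5)]   # hypothetical node names
ROLES = ["MCT", "ICT"] + [f"SimCore-{i:02d}" for i in range(1, 31)]

# Deal the 32 OneSAF instances round-robin over the four Linux nodes.
layout: dict[str, list[str]] = {node: [] for node in NODES}
for node, role in zip(cycle(NODES), ROLES):
    layout[node].append(role)

for node, roles in layout.items():
    # 8 instances per 16-core node -> 2 cores per instance
    print(f"{node}: {len(roles)} instances -> {roles}")
```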

5. Cross-Coupling: Creating the Interactive LVC Environment

Cross-coupling refers to integrating the live, virtual, and constructive domains so that bidirectional communication among them occurs seamlessly. By cross-coupling the LVC environments, C4ISR OTM was able to augment and scale its live assets to a full brigade's worth of soldiers, vehicles, air assets, and unmanned systems that could be dynamically tasked to execute tactical missions and test activities. Considering the scarcity and cost of some of these resources, the ability to employ Unmanned Aerial Systems (UAS) such as Predator, Shadow, and Global Hawk would not be possible without simulation. The training and experimentation benefits of a cross-coupled LVC tactical battle space are evident when considering the magnitude of virtual and constructive resources this immersive environment provides in support of the activities at a C4ISR OTM experimentation event. Without these simulated assets, a realistic and scalable brigade-sized test environment would not be economically feasible.

Beyond OneSAF v2.1, several other tools were employed to create an environment in which simulated entities became indistinguishable from their live counterparts. Chief among these were components of the Night Vision Toolset (v9.1), developed by CERDEC's Night Vision and Electronic Sensors Directorate at Fort Belvoir, VA. The NV Toolset is a GOTS product that provides physics-based visual simulation of Electro-Optic (EO) and Infrared (IR) sensors, offering dynamic, real-time human-in-the-loop (HITL) sensor simulation for driving, piloting, and conducting reconnaissance, surveillance, and target acquisition (RSTA) missions and operations. The toolset consists of several components that together provide a robust application suite; those utilized by the C4ISR OTM M&S team included the Night Vision Image Generator (NVIG), the Three-Dimensional Visualization Tool (3DViz), the Comprehensive Munitions and Sensor Server (CMS2), and the Universal Controller (UC). Each component provided a unique capability that augmented the live battle space. A more complete description of these tools can be found in the C4ISR OTM Event 09 Final Report (see References).

To cross-stimulate the LVC environment, eight live opposing force (OPFOR) dismounted soldiers and their vehicles were instrumented with Soldier Radio Waveform (SRW) and NovaRoam radios, as well as Global Positioning System (GPS) receivers. Each SRW radio beaconed its position information to a designated multicast group (MCG) via a dedicated SRW network. The live OPFOR position information was received by the CIMS bridge, which translated it into DIS ESPDUs and forwarded it to the simulation domain. Representing the live OPFOR in the simulated world enabled dynamic interaction between live and simulated entities, allowing simulated sensor detection and image capture of live OPFOR, live and simulated sensor cross-cueing, and simulated fire missions against live OPFOR targets; in short, a truly immersive battle space.
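One concrete piece of the translation a bridge like CIMS must perform: GPS receivers report geodetic latitude, longitude, and altitude, while DIS ESPDUs carry entity location as earth-centered, earth-fixed (ECEF) coordinates in meters. The standard WGS-84 conversion is sketched below (this is textbook geodesy, not the actual CIMS implementation):

```python
import math

# WGS-84 ellipsoid constants
A = 6378137.0                 # semi-major axis, meters
F = 1.0 / 298.257223563       # flattening
E2 = F * (2.0 - F)            # first eccentricity squared

def geodetic_to_ecef(lat_deg: float, lon_deg: float, alt_m: float):
    """Convert a GPS fix to the ECEF (X, Y, Z) meters used in DIS ESPDUs."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + alt_m) * math.sin(lat)
    return x, y, z

# Example: a point in the Fort Dix, NJ vicinity (coordinates approximate)
print(geodetic_to_ecef(40.01, -74.59, 45.0))
```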

6. Cross-Stimulation Mission Thread Execution

Live-to-Simulated Sensor Cross-Cueing and Battle Handover. Cross-cueing is the ability of two or more sensors to act on a target in unison, transitioning the target from one sensor to another as it enters and leaves each sensor's detection area. Figure 4 depicts screenshots of a Joint Capabilities Release (JCR) battle command application during a sensor cross-cueing mission, taken from a live vehicle conducting reconnaissance missions against live OPFOR, aided by live and simulated UAS. During this mission, the OPFOR started in the south and maneuvered through all three platoon sectors (one live and two virtual platoons). The virtual platoons provided tracking, imagery, and sensor detections of the OPFOR as it moved, and based on the intelligence they gathered, the Company Commander was able to position his live assets in preparation for the arrival of the live OPFOR. Once the live assets were in position, a battle handover occurred from the virtual UAS to the live UAS. The sensor image on the left in Figure 4 is a live image of the OPFOR at the location provided by the live platoon's UAS; the screenshot on the right is the simulated image of the same location and OPFOR generated by the virtual platoon's UAS. Sensor cross-cueing and battle handover were executed several times throughout E09 with great success, providing the tactical realism necessary to make simulated assets indistinguishable from live assets to the live BLUFOR.


Figure 4. Live-to-Simulated Sensor Cross-Cueing

UAS Observed Fire Missions and Battle Damage Assessment. The ability of a live leader to employ simulated indirect fires against an instrumented live OPFOR and obtain lethal effects in simulation was a new M&S capability featured in E09. The full fire mission thread is depicted in Figure 5. The fire mission and Battle Damage Assessment (BDA) thread was added to the mission execution capability set using the same design concepts and operating principles as the sensor cross-cueing, battle handover, instrumented OPFOR, and image generation threads discussed previously.

Figure 5. UAS Observed Fire Mission Thread

The mission began with a live operator locating a target using the ISR assets at his disposal. Once a target was located, a simulated UAS would confirm the location and provide "eyes on" the target. The live operator, through the battle command system (in this case, FBCB2), would then generate a call for fire (CFF). The CFF was transmitted to the Advanced Field Artillery Tactical Data System (AFATDS), an Army Battle Command System (ABCS) component, which processed the weapon pairing and fire-unit selection. The fire mission was then transmitted to the simulated fire units in the HALLE OneSAF scenario through the C2 Adapter interface. The HALLE OneSAF brigade scenario would execute the fire mission in simulation and provide all the required message traffic back to the live ABCS operators. The simulated munitions detonations would display in the DIS domain and could be seen by the simulated UAS. The ESPDUs were then changed to reflect the damaged state of the live OPFOR based on the munitions detonations. The simulated UAS could then provide a BDA of the target and confirm the damage with imagery, as shown in Figure 6. The live operator would then determine whether another CFF was required or the mission was complete. This capability enhanced realism for battle command system operators and decision makers by providing a more complete information set for the Soldier's situational awareness and understanding. The use of live indirect-fire systems in a force-on-force experimentation environment would be unthinkable, but through an interactive LVC environment, test threads and mission objectives that would normally be hazardous can be executed safely and realistically without sacrificing technical merit.
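The "damaged state" change rides in the ESPDU's 32-bit Entity Appearance field; for platform entities, IEEE 1278.1 reserves bits 3 and 4 for damage (0 = no damage through 3 = destroyed). A minimal sketch of the bit manipulation involved (illustrative only, not the gateway code used at the event):

```python
DAMAGE_SHIFT = 3
DAMAGE_MASK = 0b11 << DAMAGE_SHIFT   # bits 3-4 of the appearance field

NO_DAMAGE, SLIGHT_DAMAGE, MODERATE_DAMAGE, DESTROYED = range(4)

def set_damage(appearance: int, level: int) -> int:
    """Return the appearance word with its damage bits replaced by `level`."""
    return (appearance & ~DAMAGE_MASK) | ((level & 0b11) << DAMAGE_SHIFT)

appearance = 0                                   # undamaged entity
appearance = set_damage(appearance, DESTROYED)   # after the simulated fires
assert (appearance & DAMAGE_MASK) >> DAMAGE_SHIFT == DESTROYED
```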

Figure 6. JCR with Simulated BDA Image of Live OPFOR

7. Network Stimulation with Operational Traffic

Another M&S goal for C4ISR OTM E09 was to provide meaningful and realistic background-traffic loads on the integrated tactical network. Typically, background traffic consists of simple scripts running on traffic generators to introduce some load on the network. The M&S team took this process a step further by generating a series of network traffic loads based on actual operational, deployed current-force architectures and C4ISR applications. Essentially, this involved simulating the traffic of a current-force battalion and using the results of the simulation as background traffic competing for BCT-level network resources. In addition, only the traffic generated by ABCS systems not represented in the live experimental network architecture was included, allowing refinement of the initial data set and the production of more meaningful background traffic.


The process of converting operational traffic data into a network-generator script is depicted in Figure 7. It began with classified operational network traffic data collected from a unit command post in an operational theater. The classified data was processed into a custom unclassified traffic profile, which was loaded onto HALLE and further processed through the System-of-Systems Echelon Modeler (SoSEM), a traffic-profile developer and network analysis tool. SoSEM assisted in refining the profile to include only data not represented by live systems. The resulting profile was then statistically simulated and converted into a series of traffic-generation scripts that were executed by the Multi-Generator (MGEN) utility and introduced into the experimental network. By multi-threading the entire process, the M&S team was able to take full advantage of HALLE's computing power and reduce many days' worth of processing to hours.
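The end of that pipeline is a plain-text MGEN event script. The sketch below shows the final conversion step for a simplified flow record; the record fields are assumptions standing in for whatever SoSEM actually emits, but the generated lines follow real MGEN event syntax:

```python
from dataclasses import dataclass

@dataclass
class Flow:
    flow_id: int
    start: float   # seconds into the run
    stop: float
    dst: str       # "address/port"
    rate: float    # messages per second
    size: int      # bytes per message

def to_mgen(flows: list[Flow]) -> str:
    """Emit MGEN ON/OFF events, e.g.
    '0.0 ON 1 UDP DST 10.1.2.3/5000 PERIODIC [2.0 512]'."""
    events = []
    for f in flows:
        events.append((f.start, f"{f.start:.1f} ON {f.flow_id} UDP DST {f.dst} "
                                f"PERIODIC [{f.rate} {f.size}]"))
        events.append((f.stop, f"{f.stop:.1f} OFF {f.flow_id}"))
    return "\n".join(line for _, line in sorted(events))

print(to_mgen([Flow(flow_id=1, start=0.0, stop=300.0,
                    dst="10.1.2.3/5000", rate=2.0, size=512)]))
```

Because each flow can be simulated and converted independently, this stage parallelizes naturally, which is presumably what the multi-threaded run on HALLE exploited.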

Figure 7. Network Traffic Stimulation Concept

8. Mobile Network Modeling Institute (MNMI)

The Mobile Network Modeling Institute (MNMI) is a DoD HPCMP-awarded collaborative effort, led by the Army Research Laboratory's (ARL) Computational and Information Sciences Directorate, with the mission of developing and employing HPC software for the design and analysis of large mobile ad hoc networks (MANETs) in complex environments. During E09, C4ISR OTM provided MNMI with data collected during experimentation involving the 2013 Modular BCT (MBCT) network architecture, as well as data collected during the future-force portion of the Soldier-centric Unified Battle Command (UBC) activity. MNMI plans to use these data sets to continue exploring the scientific foundations of MANETs and to design tools for accurately modeling the behavior of secure mobile communications, sensor, and command-and-control (C2) networks. The data will further the development of accurate network representations and will assist in the cyclic validation of models and simulations under development for use in an HPC environment.

9. C4ISR OTM Event 10

C4ISR OTM plans to capitalize on the lessons learned and successes achieved during E09 to provide a robust, HPC-based simulation environment in support of Event 10 (E10). The current set of tools, including OneSAF v2.1, will continue to be employed on HALLE in much the same manner as in E09, having demonstrated their utility and stability in an HPC environment. However, HALLE will reach end-of-life immediately following E10 and will be decommissioned. At the same time, C4ISR OTM will begin moving its M&S capability to Aberdeen Proving Ground (APG), MD, a move to be completed prior to C4ISR OTM Event 11 (E11). Looking ahead to E11, C4ISR OTM is conducting a proof-of-concept experiment to examine whether the techniques used to create the LVC environment on a DHPI can be employed on a shared HPC located at APG, remotely connected to the live experimentation venue at Fort Dix, NJ. C4ISR OTM will examine the particular technical and business-practice challenges associated with building and operating a constructive, interactive simulation environment on a shared HPC resource, with an eye toward determining its viability as a standard method of operation for E11 and beyond. Also, to more fully support the MNMI, C4ISR OTM is establishing a permanent, secure network connection between the Fort Dix facility and the MNMI facilities at APG, to speed delivery of collected experimental data sets to the institute's simulation and emulation development teams and to enhance future interoperability among all the institute's components.

10. Significance to DoD

As the Armed Forces of the United States continue their transformation into more agile, responsive, and interoperable forces, the ability to conduct SoS experimentation, testing, and evaluation at large scale and high fidelity becomes ever more critical. The cost in time and other resources to produce and integrate emerging C4ISR SoS components in the quantities required for live experimentation is prohibitive; increasingly, major acquisition decisions will have to be made based on results from smaller-scale live evaluations augmented by large-scale simulations and emulations, which are, in turn, based on high-fidelity engineering models.


C4ISR OTM has taken the first steps toward achieving this end by developing and employing an immersive, integrated LVC environment, based on the non-traditional use of HPC, with the ultimate goal of reducing risk in developing, fielding, and deploying new C4ISR technologies in support of Warfighters.

References

Santiago, J., R. Smith, J. Li, and D. Priest, "Chapter 5: Modeling and Simulation," C4ISR OTM E09 Final Report, pp. 588-615, 2010.
