
Decision Analysis and Test Plan for Multi-Operational Wireless Ranging and Low Power LIDAR Exploitation of Subterranean Structures (MOWLES)

Rian Kistner, Kathleen McLane, Joseph Navarro
George Mason University

May 11, 2018


1. Executive Summary
2. Introduction
2.1 Background
2.1.1 Platform
2.1.2 Sensors
2.1.3 Software Package/SLAM Module
2.1.3.1 SLAM Module
2.1.3.2 Categories of SLAM Modules
2.1.4 User
2.2 Need
3. Problem Statement
4. Objectives and Scope
4.1 Decision Analysis Objectives
4.2 Test Plan Objectives
5. Technical Approach
5.1 Technical Approach for the DSS
DSS Evaluation Methods
5.2 Assumptions and Limitations for the DSS
Assumptions
Limitations
5.3 Technical Approach for the Test Plan
5.4 Assumptions and Limitations for the Test Plan
Assumptions
6. Model / Architecture for Decision Analysis
7. Decision Support System
8. Evaluation
8.1 Procedure
8.2 Sensitivity Analysis
9. Test Plan Document
9.1 Tested and Non-Tested Features
9.2 Approach
9.3 Pass/Fail Criteria and Suspension Approaches
9.4 Test Deliverables
9.5 Environmental Needs
9.6 Other Considerations
10. Recommendations
11. Future Work
12. References
13. Appendices
Appendix A - Project Management
Roles and Responsibilities
Rian Kistner - Team Lead and Customer Liaison
Kathleen McLane - System Architect and System Engineer
Joseph Navarro - Data Analyst and Test Plan Developer
Work Breakdown Structure
Earned Value Management Charts
Appendix B - Materials
Rating of the Objectives for the Test Plan
Appendix C - Model and Analysis Details
Decision Analysis Breakdown
Swing Weight Method
Utility Values
Construction of Swing Weights
Determining Rankings
Calculating Swing Weight Multi-Attribute Utility Score
Rationale Behind Categories
Appendix D - Evaluation and Sensitivity Analysis Details
SMARTER Method
Utility Values
Determining SMARTER Weights
Calculating SMARTER Multi-Attribute Utility Score
Modified Analytical Hierarchical Process
Appendix E - Test Plan Calculations
1.1 Calculate Benchmark Metric for Any GVR-bot Configuration
1.2 Light Effect on Visual Odometry
1.3 GVR-bot's Battery Capacity Test
1.3.1 Step 1: Back of the Envelope
1.3.2 Step 2: Cycle Life Considerations
1.3.3 Step 3: Rate of Discharge Considerations

1. Executive Summary

The U.S. Army Engineer Research and Development Center (ERDC) Geospatial Research Laboratory (GRL) Topographic Engineering Center (TEC) has a team researching methods to improve situational awareness in underground tunnels and building interiors. From this work, the Multi-Operational Wireless Ranging and Low-Power LIDAR Exploitation of Subterranean Structures (MOWLES) project was born. MOWLES relies on a GVR-bot and a combination of a sensor and a Simultaneous Localization and Mapping (SLAM) algorithm to produce three-dimensional (3D) point cloud maps. This is the second semester that the TEC team has come to GMU for help with the project. This semester they asked the GMU team for a recommendation on the best sensor and SLAM combination for the GVR-bot, as well as a test plan for evaluating the GVR-bot and sensor.

Using the specifications provided by the TEC team and the previous GMU team's work on the project (1), the GMU team defined five alternatives for the sensor and SLAM module pairing. These five options used either an Orbbec Astra sensor or a ZED photogrammetry sensor, each paired with a SLAM module compatible with its specifications. The GMU team then determined three objectives while developing a value model: 1) achieve the highest accuracy in point clouds, to decrease the error present in the point cloud map; 2) achieve the longest range possible from a sensor and SLAM pairing; and 3) achieve the fastest processing speed, to decrease the time it takes to process the image. The rankings for these three objectives were determined through communication with the sponsor.

Using the Swing Weights Method, a Decision Support System (DSS) was constructed based on the objectives defined. The five alternatives’ values were converted to five separate utility functions, and the calculations were performed in Microsoft Excel. From these calculations it was seen that the ZED photogrammetry Sensor using the ZEDfu algorithm was the recommended pick for a sensor and SLAM module. Sensitivity analysis using the Simple Multi-Attribute Rating Technique Exploiting Ranks (SMARTER) Method and the Analytical Hierarchy Process (AHP) helped confirm the GMU Team’s recommendation.

In addition, a test plan was developed that focuses on collecting point cloud datasets for comparative analysis, determining the effects of light level on visual odometry, and determining the effects of the GPU on the GVR-bot's battery capacity when using different sensor and SLAM combinations. The test plan follows Institute of Electrical and Electronics Engineers (IEEE) procedures and lists approaches and methodologies for performing the tests, as well as how to document the results.

The deliverables developed by the GMU team support the MOWLES’ objectives and provide a substantive analysis for the TEC team to utilize to make important decisions related to the next steps for the project. From email correspondence with the TEC team, the GMU team understands these items will be used when developing the baseline configurations for the GVR-bot as well as aid in the formal testing and evaluation to be performed by the TEC team. Future academic teams could add considerations for environmental factors to the DSS. They could also develop a baseline configuration that could be used for comparisons when assumptions need to be made. Another GMU team could also support the TEC team in investigating the application of Visual SLAM (vSLAM), which uses cameras for the algorithm, instead of the traditional SLAM modules.

2. Introduction

The Multi-Operational Wireless Ranging and Low-Power LIDAR Exploitation of Subterranean Structures (MOWLES) is a system currently being developed by the U.S. Army Engineer Research and Development Center (ERDC) Geospatial Research Laboratory (GRL) Topographic Engineering Center (TEC). The TEC team is currently researching methods to improve situational awareness in underground tunnels and building interiors.

2.1 Background

This is the second of five years allotted to the TEC team to analyze the best means of providing this underground surveillance capability. So far the team has identified the basic components that will provide the necessary capability: 1) the platform, 2) the sensor, and 3) the software package. In August of 2017, the TEC team approached GMU for help determining the basic set-up of the GVR-bot. Using the basic components identified by the TEC team, the previous GMU team created a Decision Support System (DSS) that provided a baseline configuration based on generic assumptions across three different environments (1). The project presented by the TEC team this semester deals with the same basic components.

2.1.1 Platform

The platform, the GVR-bot, is a tactical mobile robot developed and manufactured by iRobot and customized to meet the U.S. Army's needs. An example can be seen in Figure 1.


Figure 1: GVR-Bot

2.1.2 Sensors

The TEC team has identified various potential sensors for the MOWLES system. The function of the sensor is to collect and transmit raw data to the software package. The problem of mapping an environment when a robot has no prebuilt map and no access to external global localization sensors, such as GPS, while also keeping track of its own location within the area being mapped, is called Simultaneous Localization and Mapping (SLAM). Several SLAM algorithms are available to solve this problem, including Real-Time Appearance-Based Mapping (RTAB-Map), ElasticFusion, ORB-SLAM2, and Direct Sparse Odometry (DSO). The sensors under consideration are as follows:

1. Orbbec Astra

The Astra, shown in Figure 2, is a standalone 3D camera developed to be compatible with existing Open Natural Interaction (OpenNI) applications. A Natural Interaction (NI) device is capable of capturing body movements and sounds. The sensing range of the Astra is 0.4 - 8 meters. The Astra has a Zotac mainboard with both a CPU and a GPU. This sensor produces highly-dense point cloud maps using Elastic Fusion.

The TEC team indicated this sensor as their preferred choice. The team’s concern with this sensor is that the memory module, the hard drive, is not recognized by the Linux distribution, Ubuntu. The team indicated that they are exploring the option of getting a traditional hard drive to connect to the CPU via a Serial AT Attachment (SATA) port. There is also a possibility of leveraging a high-performance laptop to serve the same purpose as the traditional hard drive.


Figure 2: Orbbec Astra Sensor

2. ZED Photogrammetry Sensor

The ZED photogrammetry sensor, shown in Figure 3, is a 3D camera for depth-sensing and motion tracking. Photogrammetry is the use of photography in surveying and mapping to measure distances between objects. The ZED sensor has a Zotac mainboard with a CPU, but no GPU. This sensor produces dense point cloud maps using RTABMAP or ZEDfu. ZEDfu is a 3D scanning application for capturing 3D models in real-time.

The TEC team considers this sensor a potential option, though its use is contingent on acquiring a high-performance laptop, since the sensor's manufacturer recommends an NVIDIA GTX 1060 or higher for the GPU. The team is also looking at using the ZED sensor with the NVIDIA Jetson TX2 module, but the required connection boards have not yet arrived.

Figure 3: ZED Photogrammetry Sensor

3. Orbbec Astra with ORBSLAM2 and DSO

This implementation of the Astra sensor would have a Zotac mainboard with a CPU but no GPU. The Astra would use one of two sparse SLAM modules, ORBSLAM2 or DSO, to model the environment as a sparse point cloud. The TEC team indicated that they are currently working with saved datasets, since live data feeds require the SLAM modules to be incorporated into the Robot Operating System (ROS). The use of DSO may require the ROS visualization tool (rviz).

2.1.3 Software Package/SLAM Module

The software package processes the raw data and provides the information to the user in a manner that allows them to identify possible threats and their respective locations.


Raw sensor data is modified by the software package so that the sensor output can be understood and accessed by the user. This may also include image processing so that objects of interest can be identified. The specific software packages that were investigated and researched in this project are Simultaneous Localization and Mapping (SLAM) modules.

2.1.3.1 SLAM Module

SLAM modules work by simultaneously building a map of the world while keeping the orientation and direction of the camera on record. The process runs as a loop: the algorithm finds a point, determines its location to assign it a coordinate, then returns to the point to verify that the location and coordinate match. Once a location and coordinate agree, the algorithm moves on to the next point, and the process continues until the entire area is mapped. The data is then cleaned up in post-processing, using various mathematical filters to reduce noise and irrelevant points; the result is a point cloud, or 3D map.

The performance of a SLAM module can be evaluated in post-processing by calculating the localization error for each point, i.e., how far each point is from the actual location it represents, and averaging these errors. The accuracy of a point cloud is primarily determined by the quality of the algorithm used to produce it: a high-quality algorithm will normally produce a high-quality point cloud. Because of the algorithm's importance, the SLAM module must be taken into account when determining baseline configurations.
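The averaging step described above can be sketched in a few lines of Python. The point coordinates below are made-up values for illustration only, not MOWLES data:

```python
import math

def mean_localization_error(estimated, ground_truth):
    """Mean Euclidean distance between estimated and surveyed 3D points.

    `estimated` and `ground_truth` are parallel lists of (x, y, z) tuples.
    """
    if len(estimated) != len(ground_truth):
        raise ValueError("point lists must be the same length")
    total = sum(math.dist(e, t) for e, t in zip(estimated, ground_truth))
    return total / len(estimated)

# Hypothetical example: three mapped points vs. their surveyed positions.
est = [(1.0, 2.0, 0.5), (4.1, 0.0, 1.0), (2.0, 3.2, 0.0)]
gt = [(1.1, 2.0, 0.5), (4.0, 0.0, 1.2), (2.0, 3.0, 0.0)]
print(round(mean_localization_error(est, gt), 3))  # → 0.175
```

A root-mean-square variant is equally common; the choice only changes how heavily outlying points are penalized.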

2.1.3.2 Categories of SLAM Modules

For this project, three types of point cloud categories were researched. These categories were: sparse point clouds, dense point clouds, and highly-dense point clouds.

1. Sparse Point Clouds: This type of algorithm distributes points unevenly over the surface being mapped, at a rough average of 0.5 to 1 point per square meter. It is suited to scenes with little texture, since fine details are easily lost with so few points. While it provides the least detail, it is a fast and simple algorithm to use. The sparse point cloud algorithms considered were ORBSLAM2 and DSO.

2. Dense Point Clouds: This type of algorithm evenly distributes points across the surface of an image, at an average of 5-10 points per square meter, allowing a more detailed map than a sparse point cloud. The increase in detail, however, also increases the noise associated with the image. The SLAM algorithms in this category were ZEDfu and RTABMAP.

3. Highly-Dense Point Clouds: Similar to dense point clouds, this type of algorithm works well on intricate, textured backgrounds. It evenly distributes points over the surface at a rate of 10+ points per square meter to deliver the most detailed map. The increased detail comes at the cost of processing speed, and the map can contain substantial noise if the algorithm is not executed correctly. The highly-dense point cloud algorithm considered was ElasticFusion.
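As an illustration, the three density bands above can be encoded as a simple classifier. The exact cut-offs between bands are our own assumption, since the report quotes only approximate averages:

```python
def point_cloud_category(num_points, area_m2):
    """Classify a point cloud by density (points per square meter).

    Bands follow the rough averages quoted in this report: sparse at
    0.5-1 pt/m^2, dense at 5-10 pt/m^2, highly-dense at 10+ pt/m^2.
    """
    density = num_points / area_m2
    if density >= 10:
        return "highly-dense"
    if density >= 5:
        return "dense"
    return "sparse"

print(point_cloud_category(800, 100))  # 8 pt/m^2 → dense
```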

2.1.4 User

The user is any person interfacing with the MOWLES system in order to operate the GVR-bot.

2.2 Need

Since soldiers may be operating in denied areas, ERDC is seeking to provide a means for acquiring, processing, and delivering imagery of underground infrastructure without risk to human lives. The increased likelihood that combat operations will take place in populated urban areas makes this functionality necessary for identifying threats and preventing harm.

3. Problem Statement

There are many potential stakeholders and objectives associated with the MOWLES system, and their interests may conflict with one another. As a result, the TEC team needs to apply systems engineering principles to identify and analyze the available decisions for improving MOWLES system performance. Various sensors could be implemented as part of the MOWLES system; each has required hardware and specific SLAM algorithms, and each produces a different type of point cloud: sparse, dense, or highly-dense. The TEC team does not have the resources to evaluate all of these options thoroughly and requires assistance performing a quantitative analysis of the available alternatives. The TEC team also requires a repeatable set of test procedures, with defined metrics, for comparing results across different sensor and SLAM algorithm tests.


4. Objectives and Scope

The TEC team is looking to determine the best sensor and SLAM algorithm pairing for their GVR-bot. The GVR-bot will need to undergo testing and evaluation before it can go into the field, and the TEC team is currently determining the best way to accomplish these tests. The GMU team identified the following two main objectives for the semester-long project.

● To create a decision analysis (DA) tool to help the TEC team select a sensor and SLAM module for the GVR-bot.

● To create a test plan to be used by the TEC team for evaluating the GVR-bot's SLAM module and sensor and comparing battery capacity.

These two objectives were then used to break the project down into more specific objectives for each deliverable.

4.1 Decision Analysis Objectives

To determine the DA tool objectives, the TEC team was contacted to establish their main priorities for the sensor and SLAM module combination. Based on their input and the current technical environment for sensors and SLAM modules, the GMU team developed three objectives to guide the construction of the decision support system (DSS).

1. Achieve high accuracy in the point cloud maps, which should decrease the error associated with the maps.

2. Achieve the longest range available from the sensor and SLAM module pair.

3. Achieve the fastest processing speed from the sensor and SLAM module pair to produce a complete point cloud. This goal should decrease the overall processing time for the construction of a point cloud map.

4.2 Test Plan Objectives

To construct the test plan for the TEC team's in-house data collection on the GVR-bot configurations, certain objectives needed to be developed. The testing will be conducted at CERDEC's Night Vision Lab at Fort Belvoir, VA. While testing is still in the early stages, the GMU team identified three key objectives, along with metrics and methodologies for evaluating the GVR-bot against them.


1. Collect point cloud datasets for comparative analysis

2. Determine effects of light level on visual odometry

3. Determine effects of GPU on the GVR-Bot’s battery capacity

5. Technical Approach

5.1 Technical Approach for the DSS

The MOWLES DSS applies the swing weight approach to perform a multi-attribute utility analysis. The first step in this approach was to establish a value model for the decision. The objectives that compose the value model were decided on by the GMU team through communications with the TEC team about what they are looking for in a sensor and SLAM combination for the GVR-bot.

In an effort to make the DSS more robust, the GMU team decided to apply additional evaluation techniques, SMARTER and a modified version of AHP, and conduct a sensitivity analysis to compare these results.

DSS Evaluation Methods

1. Swing Weights

2. Simple Multi-Attribute Rating Technique Exploiting Ranks (SMARTER) Method

3. Modified Analytic Hierarchy Process (AHP)

5.2 Assumptions and Limitations for the DSS

Assumptions

It is assumed that the sensors and SLAM modules listed in this report are the only ones under consideration by the TEC team. The ratings for the objectives can be seen below:

1. Processing Time:

a. Best: Sparse-point cloud

b. Good: Dense-point cloud

c. Fair: Highly-Dense point cloud


2. Error Value Categories: From outside research; more information about the rationale can be seen in Appendix C:

a. Excellent - Orbbec Astra w/GPU and Elastic Fusion algorithm

b. Better - ZED photogrammetry Sensor with ZEDfu algorithm

c. Good - Orbbec Astra w/o GPU and using ORBSLAM2 algorithm

d. Fair - ZED Photogrammetry Sensor with RTAB mapping

e. Poor - Orbbec Astra w/o GPU sensor and using DSO algorithm

3. Range: Values came from the manufacturers' websites:

a. Orbbec Astra Sensor: 8 meters

b. ZED photogrammetry Sensor: 40 meters

Limitations

1. As this is a developing project, these recommendations address the TEC team's interests as of April 2018.

2. Most of these pairings rely on the TEC team acquiring the appropriate software and hardware components.

5.3 Technical Approach for the Test Plan

The test plan document follows IEEE Standard 829 and takes into account the Army's standards for testing, which were identified through another test plan dealing with baseline testing of batteries in robots. These documents were used as a guide to put together the test plan for the MOWLES project. Approaches for completing the necessary testing came from outside research, particularly from work by Durrant-Whyte and Bailey (5). More information on this approach can be seen in Appendix E.
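The "back of the envelope" battery estimate referenced in Appendix E can be sketched as below. The capacity, load, and 0.8 derating factor are assumed placeholder values, not GVR-bot specifications:

```python
def runtime_hours(capacity_wh, avg_load_w, derate=0.8):
    """First-order battery runtime estimate: usable energy over load.

    capacity_wh -- rated battery capacity in watt-hours
    avg_load_w  -- average draw of platform + sensor + compute, in watts
    derate      -- usable fraction of rated capacity (a rule of thumb;
                   cycle-life and discharge-rate effects refine this)
    """
    return capacity_wh * derate / avg_load_w

print(round(runtime_hours(360.0, 120.0), 2))  # → 2.4 hours
```

Cycle-life and rate-of-discharge considerations (steps 2 and 3 of the appendix) would adjust the derating factor rather than the basic formula.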

5.4 Assumptions and Limitations for the Test Plan

Since MOWLES is still in the early phases of development, certain assumptions and limitations were made in creating the test plan. These are as follows:


Assumptions

● Testing and evaluation will be limited to the sensors and SLAM modules and will only consider their effect on battery capacity.

● Only the sensors and SLAM modules listed above will be tested.

● It is assumed that only one sensor will be put on each bot during testing (i.e., one GVR-bot cannot have two sensors at the same time).

● The data values presented for error are subject to change in accordance with the Defense Advanced Research Projects Agency (DARPA).

● It was assumed that the TEC team would be able to acquire all necessary hardware and software for testing by the designated timelines.

Limitations

● Following the TEC team’s guidance, it is understood that the Orbbec Astra sensor would rely on saved datasets for SLAM module evaluation. It would not be a live feed.

● As this is a developing project, stakeholder interests changed over the course of the semester. The test plan presented reflects the TEC team's interests as of April 2018.

6. Model / Architecture for Decision Analysis

The decision support system for the sensor and SLAM module pairings is based on the Swing Weight Approach (Gold Standard Method). This method was chosen to prevent inconsistencies in the final decision. The context diagram for the DSS can be seen in Figure 4.


Figure 4: Context Diagram

The decision analysis tool takes in the outside data available for the sensors (error values, range estimates, and estimated processing times for each sensor and SLAM algorithm) and, using the objectives and the sensor and SLAM module list provided by the sponsor, produces the pairing with the highest score. Microsoft Excel was used as the software for the DSS. With this established, the context diagram was broken down to a second level, as shown in Figure 5.

Figure 5: IDEF0 Model


Because the sponsor had already fixed the set of sensors and SLAM algorithms under consideration, the team did not need to narrow the options for the DSS. The task was to determine which SLAM modules paired with which sensor, and the best use for each pairing; this involved determining what category of point cloud each pair would produce. While one of the GMU team's assumptions was that the TEC team would possess all hardware needed to use the sensor and SLAM pairs, a decision tree was created to show the appropriate hardware for each pair (Figure 6). This decision tree does not correspond to the utility analysis. An additional decision tree was created to help the TEC team understand what type of environment suits each SLAM module, as well as any critical hardware it needs (Figure 7). Both Figures 6 and 7 were made to help the TEC team better understand the sensor and SLAM pairings.

Figure 6: Sensor and SLAM Module Pairing Alternatives


Figure 7: Design and Hardware needed for SLAM Modules

7. Decision Support System

The Decision Support System follows the swing weight approach. In keeping with the procedures for this approach, the first step was to define the problem being addressed and the objectives that follow from it. The problem and objectives decided on by the GMU team come from communication with the TEC team about what they are looking for in a sensor and SLAM pairing for the GVR-bot. The problem statement and abbreviated objectives can be seen below.

Problem: ERDC GRL TEC needs a sensor and SLAM algorithm pair for the GVR-bot with the highest accuracy, the best range possible, and fast processing, to allow timely decisions to be made in the field.

Objectives

1. Achieve high accuracy in the point cloud maps, which should decrease the error associated with the maps.

2. Achieve the longest range available from the sensor and SLAM module pair.


3. Achieve the fastest processing speed from the sensor and SLAM module pair to produce a complete point cloud. This goal should decrease the overall processing time for the construction of a point cloud map.

Objectives 1 and 2 are fairly straightforward. Objective 1 deals with the accuracy of the point cloud and used error data compiled from outside research to produce a weighting score. Objective 2 used the range values from the sensor manufacturers' websites. Objective 3 is more complex: the GMU team interpreted it as the time it takes to process the data into a complete map, which could be very useful to the TEC team if the GVR-bot is mapping an area under time pressure.

Using the objectives, the Measures of Performance (MOPs) and Measures of Effectiveness (MOEs) were created, as well as the categories or values that will be used to determine the rating of the alternatives. The rationale for the ratings can be seen in Appendix C. Once the MOPs and MOEs were established, the alternatives were listed with their rating for each objective.

Measures of Performance/Measures of Effectiveness

1. Error Value

a. Categories: Poor, Fair, Good, Better, Excellent

2. Range

a. Numerical Values

3. Processing Time

a. Categories: Good, Better, Excellent

Alternatives

1. Orbbec Astra w/o GPU Sensor using ORBSLAM2 algorithm

a. Error Value: Good (0.19 m)

b. Range: 8 m

c. Time values: Excellent - sparse point clouds (less data, faster time)


2. Orbbec Astra w/o GPU Sensor using DSO algorithm

a. Error Value: Poor

b. Range: 8 m

c. Time values: Excellent - sparse point clouds (less data, faster time)

3. Orbbec Astra w/ GPU Sensor using Elastic Fusion algorithm

a. Error Value: Excellent

b. Range: 8 m

c. Time values: Good - Slowest time/most data, highly dense point cloud

4. ZED photogrammetry Sensor using ZEDfu algorithm

a. Error Value: Better (0.14 m)

b. Range: 40 m

c. Time values: Better - dense point cloud, so more data/more time to process

5. ZED photogrammetry Sensor using RTABmap algorithm

a. Error Value: Fair (0.42 m)

b. Range: 40 m

c. Time values: Better - dense point cloud, so more data/more time to process

Using the alternatives, the utility values were constructed so that the swing weights could be created. The TEC team provided ratings for the objectives on a scale of 0 to 100. The team's ordering, from most to least important, was: achieve the highest accuracy (lowest error), achieve the longest range, and achieve the fastest processing time. Accuracy was given a rating of 1, with range judged 0.3 times as important as error and time 0.2 times as important. These translated to ratings of 100, 30, and 20, respectively. The ratings were then normalized, and each alternative was scored using multi-attribute utility analysis. A full breakdown of the process can be seen in Appendix C.
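The normalization and scoring step can be sketched as follows. The 100/30/20 ratings come from the TEC team's input above; the single-attribute utilities are placeholder values for illustration, not the numbers from the team's Excel model:

```python
# Raw swing-weight ratings elicited from the TEC team.
ratings = {"error": 100, "range": 30, "time": 20}
total = sum(ratings.values())
weights = {k: v / total for k, v in ratings.items()}  # sums to 1.0

# Placeholder single-attribute utilities on [0, 1] for one alternative.
utilities = {"error": 0.75, "range": 1.0, "time": 0.5}

# Multi-attribute utility score: weighted sum of the utilities.
score = sum(weights[k] * utilities[k] for k in ratings)
print(round(score, 3))  # → 0.767
```

In the actual DSS this calculation is repeated for all five alternatives, and the alternative with the highest weighted sum is the recommendation.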


The sensor/SLAM algorithm pair that had the highest score was the ZED Photogrammetry Sensor using the ZEDfu algorithm. A screenshot of the DSS with the resulting Swing Weight recommendation can be seen below in Figure 8. The SMARTER and AHP recommendation are discussed in the Evaluation Section.

Figure 8: MOWLES DSS Interface in Excel

8. Evaluation

8.1 Procedure

The DSS results were validated by running the same decision scenario through different weight-elicitation methods. The DSS presented to the TEC team primarily uses the Swing Weight Method, which provides more accurate results because it assesses weights by explicitly defining the importance of, and variation between, the attributes. For comparison, the GMU team conducted additional analyses using the SMARTER Method and a modified version of AHP.

SMARTER follows a similar approach to the Swing Weight Method, except the weights are preset based on the number of criteria in the decision problem. This simplifies the process compared to swing weights because decision-makers only need to rank the criteria in order of importance. There are several methods for turning ranked objectives into approximate weights; the GMU team used the Rank Sum calculation to determine the SMARTER weights. For three criteria, the weights are ½, ⅓, and ⅙. The rankings applied to the objectives in the swing weight analysis were used to determine


the weights for the SMARTER method. The rest of the calculations remain the same as the Swing Weight Method, so the same utility values were combined with the new SMARTER weights to calculate a second utility score for each of the five alternatives. The full set of equations and analysis for the SMARTER technique can be found in Appendix C. This approach produced similar scores for each alternative and, in the end, also suggested the ZED Photogrammetry Sensor using the ZEDfu algorithm, which helped validate the Swing Weight calculations of the DSS.
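The Rank Sum calculation can be expressed as a short sketch (the function name is hypothetical): with n ranked criteria, the k-th ranked criterion (1 = most important) receives weight (n − k + 1) divided by the sum of ranks, n(n + 1)/2.

```python
# Rank Sum weights for SMARTER-style analysis.
def rank_sum_weights(n):
    denom = n * (n + 1) / 2          # sum of ranks 1..n
    return [(n - k + 1) / denom for k in range(1, n + 1)]

# Three criteria ranked Error > Range > Processing Time:
w_error, w_range, w_time = rank_sum_weights(3)   # 1/2, 1/3, 1/6
```

For the three MOWLES criteria this reproduces the ½, ⅓, ⅙ weights used in the analysis, and the weights always sum to 1 for any n.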

Standard AHP differs from the Swing Weight Method and the SMARTER approach by deriving the weights from the user's answers to a series of pairwise comparison questions. Instead, the group reused the rating system from the Swing Weight and SMARTER methods to develop the weights; the calculations can be seen in Appendix D. The modified AHP suggested the Orbbec Astra with GPU sensor using the Elastic Fusion algorithm, which differed from both the Swing Weight Method and the SMARTER Method. The GMU group attributes this to AHP being a weaker, less accurate approach for this problem, and to the intricate calculations required to make AHP operate in a similar fashion to the other two methods, which may have shifted the suggested result. The value for the ZED/ZEDfu pair was only 0.0552 away from being the recommended option, so the result was very close.

The scores received for both the modified AHP and the SMARTER approach can be seen in Figure 8 in Section 7.

8.2 Sensitivity Analysis

To build confidence in the recommendation, each method was tested by slightly varying the input parameters. This checked how sound the logic and reasoning in the methodology were and lent additional support to the suggestion made to the TEC team. The sensitivity analysis was performed in Microsoft Excel using the “What-If” function to create a set of tables showing all possible outputs from the DSS for the Swing Weight Method, the SMARTER Method, and the modified AHP Method. Tables were created for the cases where the Accuracy/Error Rating objective, the Range Rating objective, or the Processing Speed objective was most important. The output table for the Accuracy/Error Rating objective for each method can be seen below; all other tables can be seen in Appendix C.
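The what-if enumeration can be mimicked in code. The sketch below assumes nine candidate ratings (10 through 90) per objective, giving 9³ = 729 combinations to match the count reported in the analysis, and tallies how often each alternative scores highest; the exact rating grid used in the Excel tables is an assumption.

```python
from itertools import product

# Single-attribute utilities (error, range, time) from Appendix C.
utilities = {
    "A": (0.50, 0.0, 1.0),
    "B": (0.00, 0.0, 1.0),
    "C": (1.00, 0.0, 0.0),
    "D": (0.75, 1.0, 0.5),
    "E": (0.25, 1.0, 0.5),
}

wins = {a: 0 for a in utilities}
grid = range(10, 100, 10)                     # 9 candidate ratings per objective
for r in product(grid, repeat=3):             # 9**3 = 729 weight combinations
    total = sum(r)
    w = [ri / total for ri in r]              # normalize ratings into weights
    best = max(utilities,
               key=lambda a: sum(wi * ui for wi, ui in zip(w, utilities[a])))
    wins[best] += 1

# Dominated alternatives B and E never win under strictly positive weights.
```

Counting wins across the grid is exactly the "how often does the recommendation hold" question the sensitivity analysis answers.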


Figure 9: Strategy Region Graph for Range and Processing Speed; Approach: Swing Weight

Figure 10: Strategy Region Graph for Range and Processing Speed; Approach: SMARTER


Figure 11: Strategy Region Graph for Range and Processing Speed; Approach: AHP

The results from all nine tables showed that Alternative D was selected in 624 of 729 cases, about 85.6% of the time, even as the objective ratings were varied. This high selection rate increased the GMU group's confidence in recommending Alternative D. Alternatives A and C did appear in the results, but far less often: Alternative A was picked about 9.8% of the time and Alternative C about 4.6% of the time. For the specific ratings that the TEC team required, these would not have been the best options.

Alternatives B and E did not appear at all on any of the sensitivity analysis charts. When this result came back, the GMU team checked whether dominance had occurred for these options. From the utility functions it was clear that, for the objective value rankings the TEC team provided, Alternative B would never be picked, as there were always better options on the error and range objectives. It ties with Alternative A on processing time, but Alternative A wins out because Error carries a higher weight, so it was apparent that Alternative B could never be the recommended option. A similar situation occurred between Alternatives D and E: Alternative E was never selected because Alternative D is always better, so Alternative D will always win out over Alternative E. This brought the GMU team to the conclusion that dominance did occur for Alternatives B and E, and if the test


were to be run again, these options could immediately be eliminated from consideration as the recommended alternative.
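The dominance argument above can be checked mechanically. A minimal sketch using the utility values from Appendix C (the `dominates` helper is hypothetical): alternative y is dominated by x if x is at least as good on every objective and strictly better on at least one.

```python
def dominates(x, y):
    """True if x is >= y on every objective and > y on at least one."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

# (error, range, processing time) utilities from Appendix C.
utilities = {
    "A": (0.50, 0.0, 1.0),
    "B": (0.00, 0.0, 1.0),
    "C": (1.00, 0.0, 0.0),
    "D": (0.75, 1.0, 0.5),
    "E": (0.25, 1.0, 0.5),
}

dominated = {y for y in utilities
             for x in utilities
             if x != y and dominates(utilities[x], utilities[y])}
# B is dominated by A, and E by D, so neither can ever be recommended
# under any strictly positive set of weights.
```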

9. Test Plan Document

Following correspondence with the TEC team, the Test Plan document used an IEEE format to provide a consistent approach to the testing and evaluation of the sensors. Testing will take place at CERDEC's Night Vision and Electronic Sensors Directorate, which researches and develops sensor technology for air and ground applications. The center has a tunnel with installed lights and a controller that can change the light cycle. Following the stated objectives for the Test Plan, the plan outlines areas of interest including features that will be tested, features that will not be tested, the approach, and the pass/fail criteria for each test.

9.1 Tested and Non-Tested Features

The features to be tested are the sensor configuration, evaluation of the dense point cloud algorithms, and evaluation of the sparse point cloud algorithms.

1. Sensor Configuration

The GVR-bot is to be configured with both an Orbbec Astra and ZED photogrammetry sensor for data collection. Independent trials will be run using each configuration. Datasets from these trials will be processed by their respective SLAM modules for further analysis.

2. Evaluation of dense point cloud SLAM algorithms

Datasets collected via both the Orbbec Astra and ZED photogrammetry sensors are compatible with dense point cloud SLAM modules, though not with every module. Three SLAM modules are currently in consideration: RTABMAP, ElasticFusion, and ZEDfu. A test protocol described later will produce data for benchmarking each SLAM module based on its predictive performance against a known environment.

3. Evaluation of sparse point cloud SLAM algorithms

At this point in the MOWLES research project, sparse point cloud SLAM processes have yet to be researched fully. The aforementioned test protocol is foundational to a comparative analysis between the two sparse point cloud SLAM algorithms in consideration, ORBSLAM2 and DSO. A


benchmark metric will again be calculated for each image reconstruction method for preliminary consideration.

The features that will not be tested are wireless data collection, data fusion with existing point clouds, geo-referencing point clouds, and environmental conditions.

1. Wireless Data collection

At this time data collection will be controlled manually and saved to a hard drive/SD card. The TEC team is not yet prepared to test advanced wireless capabilities.

2. Data fusion with existing point clouds

While databases exist with point cloud data already, the TEC team is not ready to combine the data collected in real time with existing point clouds.

3. Geo-referencing point clouds

At some point in testing, the point clouds collected will be combined with Geographic Information System (GIS) data to produce more insightful information. At this stage of testing, geo-referencing is not yet implemented.

4. Environmental Conditions

The Night Vision tunnel lacks climate controls, so equipment performance under climate variables such as temperature or humidity will not be tested.

9.2 Approach

The approach for testing is enumerated in the test plan so that each step can be followed exactly and easily recorded. Each objective to be tested has its own approach, marked under the section headings in the test plan. More information on these calculations can be seen in the Test Plan deliverable or in Appendix E.

9.3 Pass/Fail Criteria and Suspension Approaches

Following guidelines provided by DARPA's SubT challenge, the following pass/fail criteria were created for the testing procedures. As with the approach, each pass/fail criterion is unique to its objective.

1. Test Objective 1


If resolution for the point clouds is less than 10 centimeters, the trial is considered passing. If resolution is above 10 centimeters the trial is an automatic fail.

2. Test Objective 2

If drift stays under 1 meter for every 1 kilometer traveled then the trial is considered passing. If the drift exceeds that, the trial is an automatic fail.

3. Test Objective 3

If the battery life lasts for at least 4 hours, then the trial passes. If the GVR-bot does not last four hours the test will be considered an automatic fail.
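As an illustration, the three pass/fail criteria above could be combined into a single check. The function name and argument units are hypothetical; the thresholds are the placeholder values stated in the plan and may change as the project matures.

```python
def trial_passes(resolution_cm, drift_m_per_km, battery_hours):
    """Return True only if all three test objectives pass."""
    return (resolution_cm < 10.0        # Objective 1: resolution under 10 cm
            and drift_m_per_km < 1.0    # Objective 2: drift under 1 m per km
            and battery_hours >= 4.0)   # Objective 3: battery lasts >= 4 hours

trial_passes(8.0, 0.5, 4.5)   # all criteria met -> True
trial_passes(12.0, 0.5, 4.5)  # resolution fails -> False
```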

As this project is still in its initial phases of development, it was understood that these values could change at any time if better recommendations are brought forward. Some values were still unknown, so placeholders were created where the TEC team could modify the test plan to suit their specific needs. It was also understood that suspension criteria were needed for situations where a test must be stopped until improvements have been made to the current setup.

1. Test Objective 1

If comparative-analysis testing of the point cloud datasets reaches a certain number of errors, the test will be suspended. The threshold has yet to be determined and will be at the discretion of the TEC team.

2. Test Objective 2

If testing of the light level's effect on the sensor's visual odometry reaches a certain number of errors, the test will be suspended. The threshold has yet to be determined and will be at the discretion of the TEC team.

3. Test Objective 3

If testing of the GVR-bot's battery capacity reaches a certain number of errors, the test will be suspended. The threshold has yet to be determined and will be at the discretion of the TEC team.

For all three of the objectives the test will resume when:

● The error has been recorded in the error log.

● A solution has been presented and put in place to solve the issue.

● A problem report has been written for the incident.


● The new details for the test are listed in the trial log, and the environmental lab conditions are correct for the trial.

9.4 Test Deliverables

The GMU team laid out a set of testing documents, also based on IEEE standards, that the TEC team should have in their possession once testing has occurred. The following items are expected to be delivered from testing:

● Test Plan Document - This document will be available during and after testing, with any corrections deemed necessary recorded.

● Test Cases - List of trial runs that are to take place, with a detailed description of the setup of the GVR-bot including, but not limited to, the sensor, SLAM module, and the type of computer on board. This will be listed for testing on all three objectives.

● Test Design specifications - A description of the setup of the testing facility and what will be tested during the trial.

● Conditions of the testing facility - The facility settings in place at the time of each test.

● Error logs - Spreadsheet with each error reported. This will include the time the error took place, the specific part that had the error, and the test trial that was being conducted.

● Trial logs - A spreadsheet with each trial listed that will document the time the trial started, the time the trial ended, and the specifications for the GVR-bot used during that trial. This will include the sensor, SLAM module, and computer on board.

● Problem reports - A document stating the problem discovered, the pieces of hardware/software responsible for the issue, and one or more ways to reconcile the issue.

9.5 Environmental Needs

The environmental needs for testing were outlined following outside research and the requirements laid out by the TEC team. From this, the GMU team compiled a checklist of what the TEC team needs to bring into testing.

1. Mapping Needs


Map and quantify a known testing route for the GVR-bot to maneuver. The number of travel nodes can vary, but 6-10 is suggested for pragmatic purposes and to maximize statistical significance.

2. Lighting Needs

Ensure lighting controls are properly calibrated and operable. Light will be measured using a spectrometer that is calibrated with a NIST certified light source.

3. Hardware Needs

A GVR-bot base, a high-performance computer, an NVIDIA Jetson TX2 (CPU), an Orbbec Astra with a GPU sensor, an Orbbec Astra without a GPU sensor, and a ZED photogrammetry sensor will need to be in the TEC team's possession before testing can begin.

4. Software Needs

The ORBSLAM2 algorithm, DSO algorithm, ZEDfu algorithm, and RTABMAP algorithm will need to be installed and ready for use with the corresponding sensors before testing can begin.

5. Power Requirements

The GVR-bot can hold up to four BB-2590 Li-ion batteries, which will need to be purchased before testing can begin. Each battery must have a capacity of 9.9 amp-hours.

9.6 Other Considerations

In keeping with IEEE standards, the test plan contains sections outlining the responsibilities of the TEC team, a schedule, and the risks and contingencies that go along with the testing. These risks include the hardware requirements, hardware availability, and the software requirements. Finally, the last section of the test plan lists who on the TEC team needs to authorize and sign off on the testing for it to be approved. The organizational structure of ERDC GRL was taken into account when composing the list of approvals, and the list is subject to change. More information about these sections can be seen in the Test Plan deliverable.


10. Recommendations

The original problem statement dealt with determining the best sensor option for the GVR-bot. As the TEC team's work progressed, it became apparent that the GMU team needed to find the best sensor and mapping algorithm pair for the bot. Using a Decision Support System following the Swing Weight Method, the GMU team recommends that the TEC team use the ZED Photogrammetry Sensor with the ZEDfu algorithm. The Test Plan deliverable shows the recommended course of action to be completed for testing.

11. Future Work

After completing the deliverables requested by the TEC team, the GMU team noticed areas that could be improved upon further, especially since MOWLES is a 5-year project still in its early stages. The suggested future work could be done by the TEC team or another GMU masters project team.

Regarding the data values used in the DA tool, the GMU team suggests establishing baseline values for the error, range, and processing time/speed of the sensors under consideration. These values could come from the testing to be completed by the TEC team. Consistent data values that will not change over the course of the MOWLES project, and that come from internal project testing, would make the DA tool more accurate and benefit GVR-bot testing. The values used by the GMU team in this project came from many different research papers and are not necessarily tailored to the TEC team's environmental needs. With baseline values, the TEC team could easily analyze testing data and determine whether the sensor and SLAM module pairs were working at maximum capacity or whether testing factors were degrading their performance.

Once the testing data is acquired, the DSS could be updated to provide a more accurate recommendation. The accuracy objective could easily be updated by changing the accuracy categories to numerical values. A fourth objective could be added to account for how the sensor and SLAM pair affects the GVR-bot's battery life. The setup for this objective would depend on the data collected, but it should follow a similar approach to the updated accuracy objective. If a fourth objective is added, the multi-attribute utility analysis equation would gain a fourth term.


For the testing of the bot, the GMU team suggests that the TEC team explore vSLAM algorithms that use visual odometry. These algorithms have progressed considerably in recent years, and their accuracy keeps improving; they could offer an alternative to the SLAM modules examined in this project. Additionally, the GMU team suggests that the TEC team assess the capability requirements for specific operational needs. If these are better defined and understood, the testing phase of the project can be more beneficial for the TEC team. This recommendation may be fulfilled at a later date, when the TEC team has more information about the project and its needs. If the capability requirements are better understood, then both the DA tool and the test plan could be redesigned to assist the TEC team more accurately.

The last recommendation from the GMU team is to incorporate environmental factors into the DA tool. When the GMU team received the project, some environmental factors had been examined for the sensors, but there was not enough data available to fully incorporate this information into the model. Adding data for the sensors regarding different terrain, climate, and visibility conditions could be very useful in determining the best sensor for the bot. Visibility conditions to consider include fog, smog, and smoke, as these could greatly hinder the accuracy and range of the sensor.



13. Appendices

Appendix A Project Management

Figure A-1: GANTT Chart

Roles and Responsibilities

Rian Kistner – Team Lead and Customer Liaison

Rian organized the team to ensure that milestones were met and that the scope and direction of the project aligned with the customer’s expectations. Other duties included validating deliverables against customer requirements, and facilitating internal reviews and status meetings. Rian implemented the decision analysis performed by Kathleen and developed the DSS interface for combining all evaluation techniques into one tool. Rian also contributed to analyzing the DSS results and performing a sensitivity analysis.

Kathleen McLane – System Architect and System Engineer

Kathleen constructed the system architecture models to show the proposed direction of the project. She performed the decision analysis modeling that Rian then implemented in the team's final deliverable. Other duties extended to assisting in documenting the project and to project management aspects including the schedule, EVM charts, and the work breakdown structure.


Joseph Navarro - Data Analyst and Test Plan Developer

Joseph focused on developing standardized testing methodologies and metrics for continued evaluation of GVR-bot configurations. He identified key components of laboratory conditions and GVR-bot specifications used to outline procedures for the test plan objectives. His duties also included acting as a key liaison to the sponsor and periodically gathering information from ERDC's MOWLES research team.


Work Breakdown Structure

Figure A-2: MOWLES GMU Team Work Breakdown Structure


Earned Value Management Charts

Figure A-3: Earned Value Chart

Figure A-4: Cost Performance Index and Schedule Performance Index


Appendix B Materials

Rating of the Objectives for the Test Plan

Figure B-1: Response from Sponsor on Inputs to Multi-Attribute Utility Analysis

Appendix C Model and analysis details

Decision Analysis Breakdown


(Figure C-1 depicts the value hierarchy: the overall goal, a sensor/SLAM algorithm pair with the highest accuracy, highest range, and quickest processing time, decomposes into the Accuracy, Range, and Time attributes, with Error Values under Accuracy and Processing, Transfer, and Upload Time under Time.)


Figure C-1: MOWLES Value Model of Attributes

Swing Weight Method

Alternative   Error       Range   Processing Time
A             Good        8       Excellent
B             Poor        8       Excellent
C             Excellent   8       Good
D             Better      20      Better
E             Fair        20      Better

Utility Values

Alternative   Error   Range   Processing Time
A             0.5     0       1
B             0       0       1
C             1       0       0
D             0.75    1       0.5
E             0.25    1       0.5

Construction of Swing Weights

Alternative                     Error       Range   Processing Time
Worst Alternative (Imaginary)   Poor        8       Fair
Error                           Excellent   8       Fair
Range                           Poor        20      Fair
Processing Time                 Poor        8       Excellent


Determining Rankings

Alternative                     Error       Range   Processing Time   Rank
Worst Alternative (Imaginary)   Fair        8       Fair              4
Error                           Excellent   8       Fair              1
Range                           Fair        20      Fair              2
Processing Time                 Fair        8       Excellent         3

Alternative                     Error       Range   Processing Time   Rank   Rating
Worst Alternative (Imaginary)   Fair        8       Fair              4      0
Error                           Excellent   8       Fair              1      100
Range                           Fair        20      Fair              2      30
Processing Time                 Fair        8       Excellent         3      20

Alternative                     Error       Range   Processing Time   Rank   Rating   Weights
Worst Alternative (Imaginary)   Fair        8       Fair              4      0        0
Error                           Excellent   8       Fair              1      100      100/(100+30+20) = 0.66667
Range                           Fair        20      Fair              2      30       30/(100+30+20) = 0.2
Processing Time                 Fair        8       Excellent         3      20       20/(100+30+20) = 0.13333


Calculating Swing Weight Multi-Attribute Utility Score

Alternative   Multi-Attribute Utility Analysis Score
A             0.66667*0.5 + 0.2*0 + 0.13333*1 = 0.46667
B             0.66667*0 + 0.2*0 + 0.13333*1 = 0.13333
C             0.66667*1 + 0.2*0 + 0.13333*0 = 0.66667
D             0.66667*0.75 + 0.2*1 + 0.13333*0.5 = 0.76667
E             0.66667*0.25 + 0.2*1 + 0.13333*0.5 = 0.43333

Rationale Behind Categories

1. Processing Time:

a. Best: sparse point cloud

b. Good: dense point cloud

c. Fair: highly-dense point cloud

2. Error Value categories

a. Excellent - Orbbec Astra w/GPU and Elastic Fusion algorithm: from outside research it appeared that this option would have the least error as it was a highly-dense point cloud. This would lead to the smallest amount of error in mapping.

b. Better - ZED photogrammetry Sensor with ZEDfu algorithm: from outside research it appeared the error estimates were about 0.14 m.

c. Good - Orbbec Astra w/o GPU and using ORBSLAM algorithm: from outside research it appeared the error estimates were about 0.19 meters.

d. Fair - ZED Photogrammetry Sensor with RTAB mapping: from outside research it was seen the error estimates were about 0.42 m.

e. Poor - Orbbec Astra w/o GPU sensor using DSO algorithm: outside research showed this was one of the newest algorithms; while it was quick to process, its sparse point clouds ignored detail, producing more errors.

3. Range (values came from manufacturer website for each sensor)


a. Orbbec Astra w/o GPU Sensor using ORBSLAM algorithm - 8m

b. Orbbec Astra w/o GPU Sensor using DSO algorithm - 8m

c. Orbbec Astra w/ GPU Sensor using Elastic Fusion algorithm - 8m

d. ZED photogrammetry Sensor using ZEDfu algorithm - 40m

e. ZED photogrammetry Sensor using RTABmap algorithm - 40m


Appendix D - Evaluation and Sensitivity Analysis Details

SMARTER Method

Alternative   Error       Range   Processing Time
A             Good        8       Excellent
B             Poor        8       Excellent
C             Excellent   8       Good
D             Better      20      Better
E             Fair        20      Better

Utility Values

Alternative   Error   Range   Processing Time
A             0.5     0       1
B             0       0       1
C             1       0       0
D             0.75    1       0.5
E             0.25    1       0.5

Determining SMARTER Weights

1. Order of criteria

a. Example: c_1 > c_3 > c_2

2. Using Rank Sum Weights method

a. Error weight = 0.5

b. Range weight = 0.3333

c. Processing Time weight = 0.1667


Calculating SMARTER Multi-Attribute Utility Score

Alternative    Multi-Attribute Utility Score
A              0.5*0.5 + 0.3333*0 + 0.1667*1 = 0.4167
B              0.5*0 + 0.3333*0 + 0.1667*1 = 0.1667
C              0.5*1 + 0.3333*0 + 0.1667*0 = 0.5
D              0.5*0.75 + 0.3333*1 + 0.1667*0.5 = 0.79165
E              0.5*0.25 + 0.3333*1 + 0.1667*0.5 = 0.54165
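The score calculation can also be done programmatically. The variable names below are illustrative; the weights come from the Rank Sum method and the utility values from the tables in this appendix:

```python
# Weights from the Rank Sum method; utility values from the table above.
weights = {"Error": 0.5, "Range": 0.3333, "Processing Time": 0.1667}
utilities = {
    "A": {"Error": 0.50, "Range": 0, "Processing Time": 1.0},
    "B": {"Error": 0.00, "Range": 0, "Processing Time": 1.0},
    "C": {"Error": 1.00, "Range": 0, "Processing Time": 0.0},
    "D": {"Error": 0.75, "Range": 1, "Processing Time": 0.5},
    "E": {"Error": 0.25, "Range": 1, "Processing Time": 0.5},
}

# Weighted sum per alternative (the SMARTER multi-attribute utility score).
scores = {alt: sum(w * utilities[alt][c] for c, w in weights.items())
          for alt in utilities}
best = max(scores, key=scores.get)  # alternative D scores highest (0.79165)
```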

Modified Analytic Hierarchy Process

1. To get an integer value between 1 and 9:

a. The difference between two ratings was subtracted from 100, divided by 100, and then multiplied by 10. For example, if the rating difference was 30, then (100 - 30)/100 × 10 = 7 quantifies the difference.

2. For the opposite comparison (i.e., when the difference between the two ratings is negative), the reciprocal of the value was used.

3. The resulting matrix is shown below in Figure D-1.

Figure D-1: AHP Matrix
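The conversion rule can be sketched as a small helper. The function name is hypothetical; the rule is implemented exactly as described above, including the reciprocal for negative differences:

```python
def ahp_entry(diff):
    """Pairwise comparison value from a rating difference, per the rule
    above: ((100 - difference) / 100) * 10, with the reciprocal taken
    when the difference is negative. Differences that are multiples of
    10 (from 10 through 90) land on the integer 1-9 scale."""
    value = (100 - abs(diff)) / 100 * 10
    return value if diff >= 0 else 1 / value

strong = ahp_entry(30)    # a 30-point advantage maps to 7
inverse = ahp_entry(-30)  # the mirrored comparison is its reciprocal, 1/7
```

Filling the matrix this way keeps it reciprocal (entry(j, i) = 1 / entry(i, j)), which is the consistency property AHP expects of a pairwise comparison matrix.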


Appendix E - Test Plan Calculations

1.1. Calculate a benchmark metric for any GVR-Bot configuration

This benchmark is essentially an error-rate metric that can be calculated in any controlled laboratory setting, for any bot configuration, to generate an apples-to-apples comparison of SLAM module accuracy. A general procedure for producing the metric:

δ_{i,j} = x_j ⊖ x_i : the relative pose of the robot between time steps i and j, as estimated by the SLAM algorithm.

δ*_{i,j} = x*_j ⊖ x*_i : the corresponding reference relative pose, ideally derived from the true locations.
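A common way to combine these relative-pose differences into a single benchmark number is the mean squared translational error of δ_{i,j} against δ*_{i,j} over a set of (i, j) pairs. The sketch below assumes 2D poses given as (x, y, theta) tuples and implements the ⊖ operator for that case; function names are illustrative:

```python
import math

def ominus(xb, xa):
    """Relative 2D pose xb ⊖ xa: pose xb expressed in the frame of xa.
    Poses are (x, y, theta) tuples."""
    dx, dy = xb[0] - xa[0], xb[1] - xa[1]
    c, s = math.cos(xa[2]), math.sin(xa[2])
    return (c * dx + s * dy, -s * dx + c * dy, xb[2] - xa[2])

def translational_rpe(estimated, reference, pairs):
    """Mean squared translational error of the relative poses delta_{i,j}
    against the reference delta*_{i,j}, over the chosen (i, j) pairs."""
    total = 0.0
    for i, j in pairs:
        delta = ominus(estimated[j], estimated[i])
        delta_star = ominus(reference[j], reference[i])
        diff = ominus(delta, delta_star)
        total += diff[0] ** 2 + diff[1] ** 2
    return total / len(pairs)

# A rigid offset of the whole trajectory leaves the metric at zero,
# which is exactly why relative (not absolute) poses are compared.
reference = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 1.0, 0.5)]
estimated = [(x + 1.0, y + 2.0, t) for (x, y, t) in reference]
error = translational_rpe(estimated, reference, [(0, 1), (1, 2)])
```

The choice of pairs is up to the test designer: consecutive time steps measure local drift, while widely separated pairs expose accumulated error.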

1.2. Light Effect on Visual Odometry

The same procedure as above can be executed under the desired light settings. The interval in which accuracy deteriorates fastest can then be targeted to find the maximum rate of accuracy deterioration, i.e., the greatest increase in error rate.

1.3. GVR-Bot Battery Capacity Test

The BB-2590 lithium-ion battery is a rechargeable battery used throughout the Army to power many electronic devices. Current GVR-Bot designs are configured to hold 4 BB-2590 batteries at a capacity of 9.9 amp-hours. The goal of this test is to record and analyze GVR-Bot energy consumption and performance under a controlled exercise.

1.3.1. Step 1: Back of the envelope

If the current drawn is x amps and the run time is T hours, then the capacity C in amp-hours is C = xT. For example, if your pump draws 120 mA and you want it to run for 24 hours: C = 0.12 amps * 24 hours = 2.88 amp-hours.

1.3.2. Step 2: Cycle life considerations

It isn't good to run a battery all the way down to zero during each charge cycle. For example, if you want a lead acid battery to last for many cycles, you shouldn't discharge it past 80% of its capacity, leaving 20% in the battery. This not only extends the


number of cycles you get, but also lets the battery degrade by 20% before you start getting less run time than the design calls for: C' = C / 0.8. For the example above, C' = 2.88 AH / 0.8 = 3.6 AH.

1.3.3. Step 3: Rate of discharge considerations

Some battery chemistries deliver far fewer amp-hours if you discharge them quickly. This is called the Peukert effect. It is a large effect in alkaline, carbon-zinc, zinc-air, and lead acid batteries: for example, drawing at 1C from a lead acid battery yields only about half the capacity you would get drawing at 0.05C. It is a small effect in NiCad, lithium-ion, lithium-polymer, and NiMH batteries.

For lead acid batteries, the rated capacity (i.e., the number of AH stamped on the side of the battery) is typically given for a 20-hour discharge rate. If you discharge at a slow rate, you will get the rated number of amp-hours out of them. At high discharge rates, however, the capacity falls steeply. A rule of thumb is that at a 1-hour discharge rate (i.e., drawing 10 amps from a 10 amp-hour battery, or 1C) you will only get half of the rated capacity (5 amp-hours from a 10 amp-hour battery). Charts that detail this effect for different discharge rates can be used for greater accuracy, for example the data sheets listed in /BB.htm.

For example, if your portable guitar amplifier draws a steady 20 amps and you want it to last 1 hour, start with Step 1: C = 20 amps * 1 hour = 20 AH. Then proceed to Step 2: C' = 20 AH / 0.8 = 25 AH. Then take the high discharge rate into account: C'' = 25 / 0.5 = 50 AH. Thus you would need a 50 amp-hour sealed lead acid battery to run the amplifier for 1 hour at a 20-amp average draw.
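The three sizing steps can be chained into one helper. Names and default values are illustrative: peukert_derate stands for the fraction of rated capacity actually delivered at the planned discharge rate (e.g. 0.5 for a 1C draw on lead acid, close to 1.0 for lithium-ion):

```python
def required_capacity_ah(current_a, hours, max_dod=0.8, peukert_derate=1.0):
    """Chain the three sizing steps:
    Step 1: C = I * t (amp-hours actually consumed)
    Step 2: divide by the allowed depth of discharge (80% here)
    Step 3: divide by the fraction of rated capacity delivered at the
            planned discharge rate (e.g. 0.5 for a 1C lead acid draw)."""
    c = current_a * hours      # Step 1: back of the envelope
    c /= max_dod               # Step 2: cycle-life headroom
    c /= peukert_derate        # Step 3: rate-of-discharge derating
    return c

amp_battery = required_capacity_ah(20, 1, peukert_derate=0.5)  # guitar amp: 50 AH
pump_battery = required_capacity_ah(0.12, 24)                  # slow pump draw: 3.6 AH
```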


To figure out the average power drawn, consider a repetitive cycle where each cycle is 1 hour: 20 amps for 1 second, followed by 0.1 amps for the rest of the hour. The average current is calculated as 20 * 1/3600 + 0.1 * 3599/3600 ≈ 0.1055 amps (3600 is the number of seconds in an hour).
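This duty-cycle average generalizes to any repeating current profile. The sketch below (names illustrative) simply time-weights each segment of the cycle:

```python
def average_current(segments):
    """Time-weighted average current over one repeating cycle.
    segments: iterable of (current_amps, duration_seconds) pairs."""
    total_time = sum(t for _, t in segments)
    return sum(i * t for i, t in segments) / total_time

# 20 A for 1 s, then 0.1 A for the remaining 3599 s of the hour:
avg = average_current([(20, 1), (0.1, 3599)])  # ≈ 0.1055 A
```

The resulting average current can then be fed into the capacity steps above as the steady draw x.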

